IbPRIA 2025: 12th Iberian Conference on Pattern Recognition and Image Analysis
Coimbra, Portugal. June 30 - July 3, 2025
IbPRIA 2025 Accepted Papers
Oral Session 2 - Biomedical Applications 1

A Machine Learning Method for Authentication of Human Ancient Mitochondrial DNA
Denis Yamunaque, Armando J. Pinho, Antti Sajantila, Diogo Pratas
Abstract:
Reliable authentication of ancient DNA (aDNA) is essential for genetic studies in archaeology and evolutionary biology. Traditional methods like radiocarbon dating are expensive and susceptible to contamination, while computational tools such as molecular clocking, phylogenetic analysis, and damage analysis depend on complex, resource-intensive read-level data and are also prone to contamination. In this paper, we introduce a machine learning-based approach for authenticating human ancient mitochondrial aDNA (amtDNA) using exclusively FASTA sequences, eliminating the need for read-level data and allowing flexible ancient thresholds. By leveraging sequence features such as CG-content, relative size, N-content, and age estimation via normalized relative compression, our method distinguishes ancient from modern samples with accuracy and F1-scores exceeding 90%. This demonstrates the robustness and efficiency of our model, offering a scalable, less invasive alternative to traditional methods.
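Illustrative sketch (not the authors' implementation): a minimal version of the feature pipeline described above, assuming a general-purpose compressor (lzma) as a stand-in for a dedicated DNA compressor, the rCRS mitogenome length as the size reference, and a scikit-learn classifier; all of these choices are assumptions.

```python
# Illustrative sketch only: sequence features plus a normalized relative
# compression (NRC) proxy, fed to a standard classifier.
import lzma
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def sequence_features(seq: str, reference_length: int = 16569) -> list:
    """GC content, N content and length relative to an assumed reference mitogenome length."""
    seq = seq.upper()
    n = len(seq)
    gc = (seq.count("G") + seq.count("C")) / n
    n_frac = seq.count("N") / n
    rel_size = n / reference_length
    return [gc, n_frac, rel_size]

def nrc(target: str, reference: str) -> float:
    """NRC proxy: bits needed for the target given the reference,
    normalized by 2 bits per base (4-symbol alphabet)."""
    c_ref = len(lzma.compress(reference.encode()))
    c_both = len(lzma.compress((reference + target).encode()))
    conditional_bits = max(c_both - c_ref, 1) * 8
    return conditional_bits / (2 * len(target))

def build_matrix(seqs, reference):
    """Stack per-sequence features into a design matrix."""
    return np.array([sequence_features(s) + [nrc(s, reference)] for s in seqs])

# Hypothetical usage with FASTA-derived sequences and ancient/modern labels:
# clf = RandomForestClassifier(n_estimators=200).fit(build_matrix(train_seqs, ref_seq), labels)
```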

A Novel Deep Learning Framework for Predicting Antimicrobial Peptide Activity Using ProtBert and Neural Networks
Maryam Abbasi, Verónica Vasconcelos, Edgar M. C. O. S. Vicente, Ana L. M. Santos, Joel P. Arrais
Abstract:
In this study, we propose an innovative framework for predicting the inhibitory capacity of antimicrobial peptides against bacteria, specifically targeting the minimal inhibitory concentration (MIC). The work aims to address the critical challenge posed by antibiotic-resistant bacteria by leveraging advanced deep learning techniques. Unlike previous efforts that focused on traditional machine learning or convolutional neural networks, we introduce a novel integration of the ProtBert-BFD Transformer with a fully connected neural network. The ProtBert-BFD model is pretrained on a large corpus of protein sequences using a self-supervised approach, which allows it to capture rich, long-range dependencies in peptide sequences. By combining ProtBert-generated embeddings with a predictive neural network, this framework achieves improved accuracy in MIC prediction compared to existing models. We utilize a dataset of MIC and pMIC values against Escherichia coli to validate the approach. The results demonstrate that the model significantly enhances the prediction of peptide efficacy, providing a potential tool for personalized medicine and the fight against antibiotic resistance. The findings highlight the effectiveness of transformer-based architectures in AMP characterization and provide a foundation for future research in antimicrobial peptide discovery.
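Illustrative sketch (not the authors' code): mean-pooled ProtBert-BFD embeddings feeding a small fully connected head for (p)MIC regression, assuming the Rostlab/prot_bert_bfd checkpoint from Hugging Face and PyTorch; the pooling strategy and layer sizes are assumptions.

```python
# Illustrative sketch only: ProtBert-BFD embeddings plus a small regression head.
import re
import torch
from torch import nn
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("Rostlab/prot_bert_bfd", do_lower_case=False)
encoder = BertModel.from_pretrained("Rostlab/prot_bert_bfd").eval()

def embed(peptide: str) -> torch.Tensor:
    """ProtBert expects space-separated residues, with rare amino acids mapped to X."""
    spaced = " ".join(re.sub(r"[UZOB]", "X", peptide.upper()))
    tokens = tokenizer(spaced, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**tokens).last_hidden_state   # (1, seq_len, 1024)
    return hidden.mean(dim=1).squeeze(0)               # mean pooling -> (1024,)

# Fully connected head predicting a single (p)MIC value from the embedding.
mic_head = nn.Sequential(
    nn.Linear(1024, 256), nn.ReLU(),
    nn.Dropout(0.2),
    nn.Linear(256, 1),
)

# Hypothetical usage:
# prediction = mic_head(embed("GIGKFLHSAKKFGKAFVGEIMNS"))
```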

AI-based system for assistance in minimally invasive renal procedures using Mixed Reality. First steps.
Emilio Delgado, Daniel Caballero, Lucía Salazar-Carrasco, Ignacio Sánchez-Varo, Jesús León-Regalado, Juan A. Sánchez-Margallo, Roberto Rodríguez-Echeverria, Francisco M. Sánchez-Margallo
Abstract:
The main objective of this study is the implementation and configuration of an assistance system for minimally invasive renal surgeries, incorporating an automatic segmentation module for the renal anatomy based on Artificial Intelligence (AI) and using computed tomography (CT) image studies, and its integration into an immersive and interactive mixed reality (MR) system. The goal is to enrich surgical planning, ensuring greater accuracy and safety to improve patient outcomes. The imaging dataset for this study was obtained from the KiTS23 challenge, with 20 CT imaging studies randomly selected from the 489 available with ground truth annotations. The interactive interface using MR was developed using Unity in conjunction with the Microsoft HoloLens v2 device. For medical image segmentation, the Vista3D AI model was employed due to its versatility and high performance. All studies were successfully segmented, demonstrating a Dice score distribution with a high concentration of values above 0.8 for renal anatomy segmentation, indicating robust and consistent performance. However, for cyst segmentation, the Dice score distribution revealed a significant proportion of lower values, reflecting the complexity of these anatomical structures. In addition, an application for MR visualization of 3D renal anatomical models was developed to facilitate surgical planning. This application allows clinicians to better identify the renal anatomy in order to enhance traditional surgical planning methods. The development of this assistive system lays the foundation for increased accuracy, reduced errors and improved surgical outcomes, contributing to safer and more efficient procedures.
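Illustrative sketch (not from the paper): how a per-structure Dice score of the kind reported above can be computed from a predicted label volume and the ground truth; the label convention shown is a KiTS23-style assumption.

```python
# Illustrative sketch only: per-structure Dice score between a predicted label
# volume and the ground truth label volume.
import numpy as np

def dice_score(pred: np.ndarray, gt: np.ndarray, label: int) -> float:
    """Dice = 2|P ∩ G| / (|P| + |G|) for one integer label in two label volumes."""
    p = pred == label
    g = gt == label
    denom = p.sum() + g.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(p, g).sum() / denom

# Hypothetical KiTS23-style labels: 1 = kidney, 2 = tumor, 3 = cyst.
# scores = {name: dice_score(pred_vol, gt_vol, lbl)
#           for name, lbl in {"kidney": 1, "tumor": 2, "cyst": 3}.items()}
```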

Exploiting Generative Models for Downstream Classification Tasks on Latent Spaces using 3D Brain MRI Scans: a Down Syndrome Case Study
Jordi Malé, Juan Fortea, Martínez-Abadías, Mateus Rozalem-Aranha, Xavier Sevillano
Abstract:
Generative models have emerged as powerful tools in medical imaging, enabling tasks such as segmentation, anomaly detection, and high-quality synthetic data generation. These models typically rely on learning meaningful latent representations, which are particularly valuable given the high-dimensional nature of 3D medical images like brain magnetic resonance imaging (MRI) scans. Despite their potential, latent representations remain underexplored in terms of their structure, information content, and applicability to downstream clinical tasks. Investigating these representations is crucial for advancing the use of generative models in neuroimaging research and clinical decision-making. In this work, we develop a variational autoencoder (VAE) to encode 3D brain MRI scans into a compact latent space for generative and predictive applications. We systematically evaluate the effectiveness of the learned representations through three key analyses: (i) a qualitative assessment of MRI reconstruction quality, (ii) a visualization of the latent space structure using Principal Component Analysis, and (iii) different downstream classification tasks on a proprietary dataset of brain MRI scans from euploid individuals and individuals with Down syndrome. Our results demonstrate that the VAE successfully captures essential brain features while maintaining high reconstruction fidelity. The latent space exhibits clear clustering patterns, particularly in distinguishing euploid subjects from persons with Down syndrome. Furthermore, classification experiments on this latent space reveal the potential of generative models for encoding biologically relevant brain anatomical features, facilitating research on disorders with associated neuroanatomical alterations.
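Illustrative sketch (not the authors' model): a minimal 3D convolutional VAE encoder producing a compact latent code, with an off-the-shelf linear classifier trained on the latent means for the downstream euploid versus Down syndrome task; the 64^3 input resolution, layer sizes, and latent dimension are assumptions.

```python
# Illustrative sketch only: 3D conv VAE encoder plus a downstream linear classifier
# on the latent means.
import torch
from torch import nn
from sklearn.linear_model import LogisticRegression

class Encoder3D(nn.Module):
    def __init__(self, latent_dim: int = 128):
        super().__init__()
        self.conv = nn.Sequential(                          # 1x64x64x64 -> 64x8x8x8
            nn.Conv3d(1, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.mu = nn.Linear(64 * 8 * 8 * 8, latent_dim)
        self.logvar = nn.Linear(64 * 8 * 8 * 8, latent_dim)

    def forward(self, x):
        h = self.conv(x).flatten(1)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return z, mu, logvar

# Hypothetical downstream use: encode each scan (shape 1x64x64x64), then classify
# from the latent means with an off-the-shelf linear model.
# encoder = Encoder3D().eval()
# with torch.no_grad():
#     latents = torch.stack([encoder(scan[None])[1].squeeze(0) for scan in scans])
# clf = LogisticRegression(max_iter=1000).fit(latents.numpy(), labels)
```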

Multitask Learning Approach for Foveal Avascular Zone Segmentation in OCTA Images
Tânia Melo, Ângela Carneiro, Aurélio Campilho, Ana Maria Mendonça
Abstract:
The segmentation of the foveal avascular zone (FAZ) in optical coherence tomography angiography (OCTA) images plays a crucial role in diagnosing and monitoring ocular diseases such as diabetic retinopathy (DR) and age-related macular degeneration (AMD). However, accurate FAZ segmentation remains challenging due to image quality and variability. This paper provides a comprehensive review of FAZ segmentation techniques, including traditional image processing methods and recent deep learning-based approaches. We propose two novel deep learning methodologies: a multitask learning framework that integrates vessel and FAZ segmentation, and a conditionally trained network that employs vessel-aware loss functions. The performance of the proposed methods was evaluated on the OCTA-500 dataset using the Dice coefficient, Jaccard index, 95% Hausdorff distance, and average symmetric surface distance. Experimental results demonstrate that the multitask segmentation framework outperforms existing state-of-the-art methods, achieving superior FAZ boundary delineation and segmentation accuracy. The conditionally trained network also improves upon standard U-Net-based approaches but exhibits limitations in refining the FAZ contours.
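Illustrative sketch (not from the paper): a multitask objective that jointly supervises the FAZ and vessel probability maps produced by a shared network; the soft Dice formulation and the fixed weighting are assumptions standing in for the vessel-aware losses described above.

```python
# Illustrative sketch only: joint FAZ + vessel segmentation loss.
import torch

def soft_dice_loss(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Soft Dice loss on probability maps of shape (batch, 1, H, W)."""
    inter = (pred * target).sum(dim=(1, 2, 3))
    denom = pred.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    return (1 - (2 * inter + eps) / (denom + eps)).mean()

def multitask_loss(faz_pred, faz_gt, vessel_pred, vessel_gt, alpha: float = 0.5):
    """Weighted sum of the FAZ and vessel segmentation losses (alpha is hypothetical)."""
    return alpha * soft_dice_loss(faz_pred, faz_gt) + (1 - alpha) * soft_dice_loss(vessel_pred, vessel_gt)
```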

VesselView: A CNN for Segmentation of Vessels in High-Resolution Retinal Fundus Images
Roi Santos-Mateos, Alexander Velev-Santos, Xosé M. Pardo
Abstract:
Retinal fundus imaging offers a noninvasive window into the eye’s microvasculature, critical for early detection of both ocular and systemic diseases. In this work, we introduce VesselView, a U-Net–inspired convolutional neural network designed for precise segmentation of retinal vessels in high-resolution fundus images. VesselView features double-convolution residual blocks with large kernels, a deepened bottleneck and skip connections. We conduct a fair comparative evaluation on the FIVES dataset at its full resolution (2048×2048), benchmarking VesselView against state-of-the-art models. The quantitative results demonstrate that VesselView achieves superior overall performance, as measured by the area under the ROC curve, with particularly strong results in glaucomatous and normal images. The qualitative comparison provides complementary evidence to the quantitative findings by showing that VesselView balances fewer missed vessels with more false positives than competing models, and an ablation study validates the critical role of the chosen skip connections. These findings underscore the potential of specialized deep learning architectures for high-resolution retinal vessel segmentation.
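Illustrative sketch (not the authors' architecture): a residual double-convolution block with large kernels of the kind described above, written in PyTorch; the 7x7 kernel size and batch normalization are assumptions.

```python
# Illustrative sketch only: residual double-convolution block with large kernels.
import torch
from torch import nn

class ResidualDoubleConv(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, kernel_size: int = 7):
        super().__init__()
        pad = kernel_size // 2
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size, padding=pad, bias=False),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size, padding=pad, bias=False),
            nn.BatchNorm2d(out_ch),
        )
        # 1x1 projection so the skip path matches the output channel count.
        self.skip = nn.Identity() if in_ch == out_ch else nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.body(x) + self.skip(x))

# Hypothetical usage on a full-resolution fundus image:
# y = ResidualDoubleConv(3, 32)(torch.randn(1, 3, 2048, 2048))
```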

