IbPRIA 2025: 12th Iberian Conference on Pattern Recognition and Image Analysis
Coimbra, Portugal. June 30 - July 3, 2025
IbPRIA 2025 Accepted Papers
Oral Session 3 - Machine and Deep Learning 1

A New Subgraph Extraction Algorithm through a Kinship Approach for Link Prediction in Knowledge Graphs
Carla Piñol, Manuel Curado, Jose F. Vicent, Antonio J. Banegas-Luna
Abstract:
Inductive reasoning on knowledge graphs (KGs) increasingly recasts link prediction as a graph classification problem. This conversion relies on subgraph extraction techniques combined with graph neural networks (GNNs), which suffer a degradation of expressiveness due to oversmoothing. Recent work has reformulated the subgraph extraction step as a local clustering procedure based on personalized PageRank. However, despite obtaining better results, this approach is sensitive to network density, limiting its effectiveness to networks of medium to low density. In this paper, we propose a new subgraph extraction method that accounts for network density through a kinship-based approach. In an evaluation on real KGs, the proposed algorithm considerably improves the link prediction model on dense networks.
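The personalized-PageRank baseline that this paper builds on can be illustrated with a minimal sketch: score nodes by PPR mass restarted at the endpoints of the candidate link, then keep the highest-scoring nodes as the extracted subgraph. The graph, `alpha`, iteration count, and top-k value below are illustrative assumptions, not the paper's settings.

```python
# Sketch of PPR-based local subgraph extraction around a candidate link.
# Assumes an undirected graph given as an adjacency-list dict; no dangling nodes.

def personalized_pagerank(adj, seeds, alpha=0.15, iters=50):
    """Power iteration for PageRank with restarts at the seed nodes."""
    n = len(adj)
    restart = [1.0 / len(seeds) if v in seeds else 0.0 for v in range(n)]
    p = restart[:]
    for _ in range(iters):
        nxt = [alpha * restart[v] for v in range(n)]
        for u in range(n):
            share = (1 - alpha) * p[u] / len(adj[u])
            for v in adj[u]:
                nxt[v] += share
        p = nxt
    return p

def extract_subgraph(adj, u, v, k=4):
    """Keep the k nodes with the highest PPR mass w.r.t. the link (u, v)."""
    scores = personalized_pagerank(adj, {u, v})
    return sorted(range(len(adj)), key=lambda node: -scores[node])[:k]

# Toy graph: two triangles (0-1-2 and 3-4-5) joined by the bridge edge 2-3.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
print(extract_subgraph(adj, 0, 1))
```

The extracted node set stays concentrated around the seed link, which is the locality property the paper's density-aware kinship approach revisits.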

Large Language Models for Interactive Machine Translation
Sergio Gómez González, Miguel Domingo, Ángel Navarro, Francisco Casacuberta
Abstract:
Machine translation is an ever-evolving field in continuous improvement. The translations it produces are obtained very quickly but are far from perfect, so in practical applications they must be revised by a human translator. Translations performed entirely by humans, on the other hand, have high quality but take much more time. Interactive Machine Translation (IMT) has been one of the most promising approaches to improving translation quality while minimizing user effort and time. Recent advances in natural language processing have involved Large Language Models (LLMs) with great success. In this work, we integrate LLMs into two IMT interaction protocols: prefix-based and segment-based. We performed a comparative study with four different multilingual LLMs for the IMT task on a well-known dataset, under both IMT protocols. The systems we propose effectively reduce the post-editing effort for the prefix-based approach.
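The prefix-based protocol mentioned above can be sketched as a loop: the user validates the correct prefix of the hypothesis (plus one corrected word), and the system must regenerate a translation that keeps that prefix. The `complete` function below is a canned stand-in for the model and entirely hypothetical; the paper's systems prompt multilingual LLMs instead.

```python
# Sketch of the prefix-based IMT loop with a simulated user and a
# placeholder "model" (a canned lookup keyed on the validated prefix).

def complete(source, prefix):
    """Hypothetical model call: return a full translation extending `prefix`."""
    canned = {
        "": "the cat eats fish",
        "the cat drinks ": "the cat drinks milk",
    }
    return canned.get(prefix, prefix.rstrip())

def prefix_based_imt(source, reference):
    """Simulate a user who, each round, validates the correct prefix and
    types the first wrong word's correction; count interaction rounds."""
    prefix, rounds = "", 0
    hypothesis = complete(source, prefix)
    while hypothesis != reference:
        ref_words, hyp_words = reference.split(), hypothesis.split()
        i = 0
        while i < min(len(ref_words), len(hyp_words)) and ref_words[i] == hyp_words[i]:
            i += 1
        prefix = " ".join(ref_words[: i + 1]) + " "
        hypothesis = complete(source, prefix)
        rounds += 1
    return hypothesis, rounds

print(prefix_based_imt("el gato bebe leche", "the cat drinks milk"))
```

The number of rounds is a proxy for the user effort that the paper's prefix-based systems aim to reduce.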

Multi-Hop Pooling: Leveraging Transition Matrices for Hierarchical Graph Representation Learning
Ahmed Begga, Francisco Escolano, Miguel Angel Lozano
Abstract:
This paper introduces Multi-Hop Pooling, a novel graph neural network pooling method that leverages transition matrices to capture multi-scale structural information. Unlike existing approaches that focus on either local or global graph properties, our method employs transition matrix powers to identify significant patterns across multiple hop distances. By explicitly modeling information propagation at different scales, we capture complex structural relationships that combine both local and global perspectives. Extensive experiments on seven benchmark datasets demonstrate that our method consistently outperforms state-of-the-art pooling approaches, with particularly significant improvements on molecular graphs where long-range interactions are crucial. Ablation studies confirm that incorporating multi-hop transition information substantially enhances pooling effectiveness. Our work bridges graph theory and neural architectures to develop more expressive hierarchical graph representations for improved graph classification.
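The core ingredient, powers of the random-walk transition matrix, can be sketched in a few lines. The scoring rule below (summed return probabilities over hops 1..K, then top-k node selection) is an illustrative assumption, not the paper's exact pooling operator.

```python
# Sketch: score nodes with transition-matrix powers, then pool the top fraction.
# Assumes a connected, undirected adjacency-list graph with no isolated nodes.

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][t] * b[t][j] for t in range(n)) for j in range(n)]
            for i in range(n)]

def transition_matrix(adj):
    """Row-stochastic T = D^-1 A for an adjacency-list graph."""
    n = len(adj)
    return [[1.0 / len(adj[i]) if j in adj[i] else 0.0 for j in range(n)]
            for i in range(n)]

def multi_hop_scores(adj, hops=3):
    """Sum each node's return probability (diagonal of T^k) over k = 1..hops."""
    T = transition_matrix(adj)
    P, scores = T, [0.0] * len(adj)
    for _ in range(hops):
        for v in range(len(adj)):
            scores[v] += P[v][v]
        P = matmul(P, T)
    return scores

def pool(adj, ratio=0.5, hops=3):
    """Keep the ratio * n nodes with the highest multi-hop scores."""
    s = multi_hop_scores(adj, hops)
    k = max(1, int(len(adj) * ratio))
    return sorted(range(len(adj)), key=lambda v: -s[v])[:k]

# Toy graph: a triangle (0-1-2) with a pendant node 3 attached to node 2.
print(pool({0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}))
```

Because the diagonal of T^k aggregates walk structure at hop distance k, the score mixes local and more global connectivity, which is the multi-scale idea the abstract describes.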

Node Representation Diversity via Entropy Maximization in Graph Neural Networks
Ahmed Begga, Francisco Escolano, Miguel Angel Lozano
Abstract:
Graph Neural Networks (GNNs) have demonstrated remarkable performance across various domains, but face significant limitations in deeper architectures due to the oversmoothing problem—where node representations become increasingly indistinguishable through successive graph convolution operations. This paper proposes a novel information-theoretic approach that primarily addresses oversmoothing through Rényi entropy optimization. Our method quantifies and maximizes the diversity of node representations across network layers using kernel density estimation with Gaussian kernels. By formulating a graph-structured entropy regularization term that respects the underlying topology, we encourage networks to maintain discriminative features while preserving essential structural information. This approach integrates seamlessly with existing GNN architectures, requiring minimal modifications to the training procedure. Extensive experiments on benchmark datasets demonstrate that our Rényi entropy regularization consistently improves performance across multiple GNN variants, with particularly significant gains in deeper architectures where oversmoothing is most problematic. The results show that maintaining representation diversity through entropy maximization effectively counters the homogenization tendency of deep GNNs, establishing a principled foundation for developing more robust and expressive graph neural networks.
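The diversity measure at the heart of this approach can be illustrated with the order-2 (quadratic) Rényi entropy under a Gaussian kernel, which has the convenient closed form H2 = -log of the mean pairwise kernel. The bandwidth, embedding dimension, and choice of order 2 below are assumptions for the sketch, not the paper's configuration.

```python
import math

# Sketch: quadratic Rényi entropy of node embeddings via a Gaussian kernel,
# as a diversity score that an oversmoothing regularizer could maximize.

def gaussian_kernel(x, y, sigma=1.0):
    d2 = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-d2 / (2 * sigma ** 2))

def renyi2_entropy(embeddings, sigma=1.0):
    """H2 = -log(mean pairwise kernel), the negative log information potential."""
    n = len(embeddings)
    potential = sum(gaussian_kernel(x, y, sigma)
                    for x in embeddings for y in embeddings) / (n * n)
    return -math.log(potential)

# Oversmoothed (near-identical) embeddings score lower than diverse ones.
collapsed = [[0.0, 0.0], [0.01, 0.0], [0.0, 0.01]]
diverse = [[0.0, 0.0], [3.0, 0.0], [0.0, 3.0]]
print(renyi2_entropy(collapsed) < renyi2_entropy(diverse))  # True
```

Maximizing such a term during training pushes representations apart, which is the intuition behind countering the homogenization that the abstract attributes to deep GNNs.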


Publisher

Endorsed by

IAPR

Technical Sponsors

AERFAI
APRP