IbPRIA 2023: 11th Iberian Conference on Pattern Recognition and Image Analysis
Alicante, Spain. June 27-30, 2023
News
June 25, 2023
Proceedings available

June 21, 2023
Program handbook and Doctoral Consortium book of abstracts published

June 9, 2023
Program published

May 16, 2023
Preliminary Program published

Mar 25, 2023
Registration is open

Feb 13, 2023
Submission deadline extended to March 5, 2023 (hard deadline)

Dec 23, 2022
Authors of the awarded papers will be invited to prepare extended versions for Pattern Recognition Letters

Dec 22, 2022
Authors of a shortlist of presented papers will be invited to submit extended versions for possible publication in Pattern Analysis and Applications

October 21, 2022
IAPR endorsement approved

Tutorials (June 27)
A brief history of unsupervised machine translation: from a crazy idea to the future of MT?

Machine Translation (MT) has traditionally relied on millions of examples of existing translations. In 2011, Ravi and Knight attempted the impossible—training MT systems without parallel data—but their statistical decipherment approach was only shown to work in very limited settings. Barely a decade later, we have seen the first serious claims of state-of-the-art MT results without using any explicit parallel data. Interestingly, this progress has come from increasingly simpler ideas combined with scale, an illustrative example of the broader trend in AI. In this talk, I will present the journey that has led to this progress, and reflect on what it means to be a researcher in the era of large language models.

Mikel Artetxe

Mikel Artetxe is a co-founder of Reka. Prior to that, he was a Research Scientist at FAIR (Meta AI). He did his PhD at the University of the Basque Country, advised by Eneko Agirre and Gorka Labaka, and interned at DeepMind, FAIR, and Google. Mikel's general research area is Natural Language Processing and Machine Learning. His background is mostly in multilinguality, focusing on low-resource scenarios and, in particular, unsupervised machine translation and cross-lingual representation learning. More recently, he has also been working on natural language generation, few-shot learning, and large-scale language models.

Machine Learning for Computational Photography

In this tutorial, we will explore the use of deep learning techniques in the field of computational photography. In recent years, we have seen how deep learning has enabled smartphones to capture high-quality pictures and videos that look like they were taken with a professional DSLR camera. In particular, deep learning techniques have shown great potential to improve image quality and to perform post-capture image edits, e.g., creative retouching, improving low-light photography, and creating a depth-of-field effect. Throughout the talk, we will present multiple research works and show how deep learning techniques can be applied to real-world computational photography applications: rendering a natural camera bokeh effect, relighting human portraits, realistic background replacement, etc. We will also discuss the challenges and limitations of using deep learning in this field, as well as future directions for research and development.
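The bokeh application above is a good example of what these learned systems replace. As a rough illustration (not material from the tutorial), the sketch below implements the classic hand-crafted baseline: given a per-pixel depth map, e.g., one predicted by a monocular depth network, blur each pixel in proportion to its distance from the focal plane. All names and parameters are illustrative assumptions.

```python
# Minimal sketch of synthetic bokeh via depth-dependent blur (NumPy + OpenCV).
# This is the classic baseline that learning-based methods improve upon,
# not the tutorial's method; names and defaults are illustrative.
import numpy as np
import cv2

def synthetic_bokeh(image: np.ndarray, depth: np.ndarray,
                    focus_depth: float, max_sigma: float = 8.0) -> np.ndarray:
    """image: HxWx3 float32 in [0, 1]; depth: HxW float32 in [0, 1]."""
    # Desired per-pixel blur: proportional to distance from the focal plane.
    target = np.abs(depth - focus_depth) * max_sigma
    # Pre-blur the image at a few discrete levels, then composite them,
    # weighting each level by how close it is to the pixel's desired blur.
    sigmas = np.linspace(0.0, max_sigma, num=6)
    step = sigmas[1] - sigmas[0]
    out = np.zeros_like(image)
    weight = np.zeros(depth.shape, dtype=np.float32)
    for sigma in sigmas:
        blurred = image if sigma == 0 else cv2.GaussianBlur(image, (0, 0), sigma)
        # Triangular weight: 1 at the matching blur level, falling off linearly.
        w = np.clip(1.0 - np.abs(target - sigma) / step, 0.0, 1.0)
        out += blurred * w[..., None]
        weight += w
    return out / np.maximum(weight[..., None], 1e-6)
```

Compositing a handful of uniformly blurred layers keeps the sketch simple and fast; it is precisely the artifacts of this approach (haloing at occlusion boundaries, unrealistic lens shapes) that learned pipelines of the kind discussed in the tutorial address.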

Sergio Orts-Escolano

Sergio Orts-Escolano is a Staff Research Scientist at Google. His research interests include human-centric 3D computer vision and machine learning, with a special focus on depth sensing, segmentation and matting, image relighting, neural rendering, generative models, volumetric reconstruction, and immersive 3D telepresence. Before joining Google, he was an Assistant Professor in the Department of Computer Science and Artificial Intelligence at the University of Alicante, Spain. Previously, he was a Senior Scientist at PerceptiveIO and a researcher at Microsoft Research, where he was one of the leading members of the Holoportation project (real-time 3D virtual human teleportation). He has authored more than 50 publications in top journals and conferences such as CVPR, ECCV, SIGGRAPH, 3DV, BMVC, IROS, and TPAMI.

Continual Visual Learning: Where are we?

Several methods are being developed to tackle the problem of incremental learning in deep learning-based models, i.e., adapting a model originally trained on a set of classes to additionally handle new classes, in the absence of training data for the original classes. These methods aim to mitigate "catastrophic forgetting": an abrupt degradation of performance on the original set of classes when the training objective is adapted to the new classes. In this tutorial, we plan to provide a comprehensive description of the main categories of incremental learning methods, e.g., those based on distillation losses, growing the capacity of the network, introducing regularization constraints, or using autoencoders to capture knowledge from the initial training set, and to analyze the state of affairs. We will then study the new challenges of learning incrementally in frameworks that are not fully supervised, such as semi- or self-supervised learning.
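As a concrete illustration of the distillation-loss category mentioned above, the sketch below shows the core of a Learning-without-Forgetting-style objective: standard cross-entropy on the new classes, plus a distillation term that keeps the updated model's predictions on the old classes close to those of the frozen, pre-update model. Function names and hyper-parameters are illustrative assumptions, not material from the tutorial.

```python
# Minimal sketch of a distillation loss for class-incremental learning
# (in the spirit of Learning without Forgetting); illustrative only.
import torch
import torch.nn.functional as F

def incremental_loss(new_logits: torch.Tensor,
                     old_logits: torch.Tensor,
                     labels: torch.Tensor,
                     num_old_classes: int,
                     temperature: float = 2.0,
                     alpha: float = 1.0) -> torch.Tensor:
    """new_logits: updated model's outputs over old + new classes.
    old_logits: frozen pre-update model's outputs over the old classes,
    computed on the same batch. labels: ground-truth class indices."""
    # Supervised loss on the current (new-class) training data.
    ce = F.cross_entropy(new_logits, labels)
    # Distillation: match the frozen model's softened old-class predictions.
    T = temperature
    soft_targets = F.softmax(old_logits / T, dim=1)
    log_probs = F.log_softmax(new_logits[:, :num_old_classes] / T, dim=1)
    distill = F.kl_div(log_probs, soft_targets, reduction="batchmean") * (T * T)
    # alpha trades off plasticity (learning new classes) against
    # stability (retaining performance on old ones).
    return ce + alpha * distill
```

The other categories mentioned above replace or complement this term: capacity-growing methods add parameters for new classes, regularization methods penalize drift in weights deemed important for old tasks, and autoencoder-based methods compress knowledge of the initial training set.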

Karteek Alahari

Karteek Alahari is a senior researcher (chargé de recherche in France, equivalent to a tenured associate professor) at Inria, based in the Thoth research team at the Inria Grenoble - Rhône-Alpes center. He was previously a postdoctoral fellow in the Inria WILLOW team at the Department of Computer Science at ENS (École Normale Supérieure), after completing his PhD in the UK in 2010. His current research focuses on the visual understanding problem in the context of large-scale datasets. In particular, he works on learning robust and effective visual representations when only partially supervised data is available, in frameworks such as incremental learning, weakly supervised learning, and adversarial training. Dr. Alahari's research has been funded by a Google research award, the French national research agency, and industrial grants from Facebook, NaverLabs Europe, and Valeo.


Publisher

Springer

Endorsed by

IAPR


Technical Sponsors

AERFAI
APRP