IbPRIA 2015: 7th Iberian Conference on Pattern Recognition and Image Analysis
Santiago de Compostela, Spain. June 17-19
Plenary Talks
Towards Affordable Self-driving Cars

Raquel Urtasun

Raquel Urtasun is an Assistant Professor of Computer Science at the University of Toronto. From September 2009 to January 2014 she was an Assistant Professor at TTI-Chicago. She was also a visiting professor at ETH Zurich during the spring semester of 2010, working with Prof. Marc Pollefeys. Previously she was a postdoctoral research scientist at UC Berkeley EECS and ICSI. Before that, she was a postdoctoral associate at the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT, working with Prof. Trevor Darrell. She completed her PhD at the Computer Vision Laboratory at EPFL, Switzerland, in June 2006, under the supervision of Prof. Pascal Fua and Prof. David Fleet (University of Toronto). She previously worked as a research assistant at the École Nationale Supérieure des Télécommunications (ENST) in Paris, in the Image Processing Department. She graduated as an electrical engineer from the Universidad Pública de Navarra, Pamplona, Spain. Her major interests are statistical learning and computer vision, with a particular focus on non-parametric Bayesian statistics, latent variable models, structured prediction, and their application to 3D scene understanding and human pose estimation.

Abstract

Developing autonomous systems that are able to assist humans in everyday tasks is one of the grand challenges in modern computer science. A notable example is autonomous driving systems, which can help reduce fatalities caused by traffic accidents. In order to perform tasks such as navigation, recognition and interaction, these systems should be able to efficiently extract knowledge of their environment. In this talk, I'll show how graphical models provide a great mathematical formalism to extract this knowledge. In particular, I'll discuss the role of Big Data, models, and learning and inference techniques in this quest.
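As an illustrative sketch (not taken from the talk itself), a common graphical-model formulation for scene understanding is a conditional random field that scores a joint labeling y of scene elements via unary potentials (local evidence) and pairwise potentials (compatibility between related elements):

E(y) = \sum_{i} \phi_i(y_i) + \sum_{(i,j) \in \mathcal{E}} \psi_{ij}(y_i, y_j)

Inference then amounts to finding the labeling of minimal energy (equivalently, maximal probability p(y) \propto \exp(-E(y))), and learning fits the potentials to data.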

Visual illusions: sometimes it is better not to see the truth!

Jesús Malo

Jesús Malo (1970) received the M.Sc. degree in Physics in 1995 and the Ph.D. degree in Physics in 1999, both from the Universitat de València. He was the recipient of the Vistakon European Research Award in 1994 for his work in physiological optics. In 2000 and 2001 he worked as a Fulbright postdoc at the Vision Group of the NASA Ames Research Center (with A.B. Watson) and at the Lab of Computational Vision of the Center for Neural Science, New York University (with E.P. Simoncelli). He returned to NYU for a semester in 2013. He served as Associate Editor of IEEE Transactions on Image Processing from 2009 to 2014 and is now an Academic Editor of PLoS ONE until 2017. He is with the Image and Signal Processing Group at the Universitat de València and is a member of the Asociación de Mujeres Investigadoras y Tecnólogas (AMIT). He is interested in models of low-level human vision, their relations with information theory and machine learning, and their applications to image processing and vision science experimentation.

Abstract

People are used to believing that the world is as we see it. In other words, we feel that what we see is true: seeing is believing, they say. However, visual illusions prove that *sometimes* we cannot trust our eyes. Low-level examples include motion, texture and color aftereffects. A naive, superficial interpretation of visual illusions would see these phenomena as failures of our information processing system. In this talk I argue that this is not the case. A careful analysis of the statistics of texture, motion and color signals, using recently proposed techniques, shows that if one applies information-maximization or error-minimization strategies to design sensors for optimal information processing, one ends up with systems that display the same sort of perceptual illusions as we do, particularly under changes in the statistics of the environment. Therefore, optimally designed systems subject to physical constraints sometimes induce wrong interpretations of the world. But that is not a failure; it is just a by-product of trying to keep optimal performance in a changing environment. Visual illusions do not mean that your brain is cheating you; it is only trying to do its best. To be optimal, sometimes it is better not to see the truth!
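To make the information-maximization argument concrete, here is a textbook single-channel example (my illustration, not part of the abstract): for a scalar sensor with a bounded, noiseless, monotone response, output entropy is maximized when the response function equals the cumulative distribution of the stimulus,

r(s) = \int_{-\infty}^{s} p(s')\,ds'

so the optimal nonlinearity is tied to the statistics of the environment. If p(s) changes and the sensor re-adapts, the same physical stimulus is mapped to a different response, which is precisely the kind of aftereffect the talk describes.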

Direct and Dense 3D Reconstruction from Autonomous Quadrotors

Daniel Cremers

Daniel Cremers is a professor of Computer Science and Mathematics at the Technical University of Munich. He received Bachelor degrees in Mathematics (1994) and Physics (1994), and a Master's degree in Theoretical Physics (1997) from the University of Heidelberg. In 2002 he obtained a PhD in Computer Science from the University of Mannheim, Germany. Subsequently he spent two years as a postdoc at the University of California at Los Angeles and one year as a permanent researcher at Siemens Corporate Research (Princeton). From 2005 until 2009 he was an associate professor at the University of Bonn, Germany. Since 2009 he has held the Chair for Computer Vision and Pattern Recognition at the Technical University of Munich. Daniel is interested in computer vision and optimization, with a particular focus on image-based 3D reconstruction, 3D shape analysis and convex variational methods. His publications have received several awards, including the Best Paper of the Year 2003 from the International Pattern Recognition Society, the Olympus Award 2004 from the German Pattern Recognition Society, and the 2005 UCLA Chancellor's Award for Postdoctoral Research. He is the recipient of an ERC Starting Grant (2009) and an ERC Proof of Concept Grant (2014). In December 2010 the magazine Capital listed Prof. Cremers among "Germany's Top 40 Researchers Below 40".

Abstract

The reconstruction of the 3D world from images is among the central challenges in computer vision. Starting in the 2000s, researchers pioneered algorithms that can reconstruct camera motion and sparse feature points in real time. In my talk, I will show that one can autonomously fly quadrotors and reconstruct their environment using onboard color or RGB-D cameras. In particular, I will introduce spatially dense methods for camera tracking and reconstruction that do not require feature-point estimation, exploit all available input data, and recover dense geometry rather than sparse point clouds. This is joint work with Jakob Engel, Jan Stuehmer, Martin R. Oswald, Frank Steinbruecker, Christian Kerl, Erik Bylow and Juergen Sturm.
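As a sketch of what "direct and dense" means here (my rendering of the standard direct image-alignment objective, not a transcription of the talk): instead of matching feature points, the camera motion \xi is estimated by minimizing the photometric error accumulated over pixels p with (inverse) depth d_p,

E(\xi) = \sum_{p \in \Omega} \big( I_{\mathrm{ref}}(p) - I(\omega(p, d_p, \xi)) \big)^2

where \omega warps p into the current image under the rigid-body motion \xi. The sum runs over the whole image domain \Omega, which is what makes the method dense, and no feature extraction is needed, which is what makes it direct.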