Most researchers in this group have evolved from Computer Vision towards Autonomous Robotics. They experiment with robots that observe and act, and also with situated devices (cameras) mounted on humans to build applications for the visually impaired. In these settings, state-of-the-art paradigms (energy minimization, probabilistic methods, information theory, spectral theory) inspire efficient algorithms for solving visual tasks in near real time.
Other researchers in the group have traversed the inverse path. Despite the utility of 2D and 3D range scanners, the advent of cheaper and more powerful sensors such as cameras (stereo, omnidirectional, simple webcams) brings our roboticists to a new arena. In this context, solutions designed for range sensors must be reformulated, moving from fusing those sensors with cameras towards exclusively vision-based technology.
We all believe that the cross-fertilization of both areas offers an interesting scientific landscape. A good example is the so-called Structure from Motion (SFM) problem in Computer Vision and its dual, the Simultaneous Localization and Mapping (SLAM) problem in Robotics. We are making significant progress in this direction and also on other related problems such as place recognition, homing, robot guidance, and robot coordination (teams of AIBOs).
Recent and newly started research projects provide an interesting context in which to advance through this exciting area. We therefore invite interested people to join us and collaborate around a pool of closely related research interests (classification, feature extraction, grouping, matching, egomotion, object recognition, SLAM, tracking, visual navigation, robot coordination).