Research

Computer Vision and Pattern Recognition

Feature Extraction: Detection of interesting points for subsequent tasks. Our current approach filters out non-interesting points for the Kadir detector. We exploit the continuity of entropy over space and scale, and perform Bayesian learning to determine the thresholds.
Contact: P. Suau.
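The entropy-over-space idea above can be illustrated with a minimal sketch: score each pixel by the Shannon entropy of the intensity histogram in its local patch, and keep points whose entropy exceeds a cutoff. The fixed `threshold` here is only a stand-in for the Bayesian-learned thresholds mentioned in the text, and the square patch, bin count and radius are illustrative choices, not the Kadir detector's actual parameters.

```python
import numpy as np

def patch_entropy(img, cx, cy, radius, bins=16):
    """Shannon entropy (bits) of the intensity histogram in a square patch."""
    patch = img[cy - radius:cy + radius + 1, cx - radius:cx + radius + 1]
    hist, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]                          # drop empty bins (0 * log 0 := 0)
    return -np.sum(p * np.log2(p))

def salient_points(img, radius=4, threshold=2.5):
    """Keep points whose local entropy exceeds a cutoff.
    `threshold` stands in for the learned threshold described in the text."""
    h, w = img.shape
    pts = []
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            if patch_entropy(img, x, y, radius) > threshold:
                pts.append((x, y))
    return pts
```

A flat patch scores zero entropy and is filtered out, while a textured patch scores high; the learned threshold separates the two regimes across scales.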

Clustering and Segmentation: For the clustering task we follow a Gaussian-mixture learning approach, but starting with a single kernel (class) and splitting it only when necessary. To decide when to split we exploit an entropy-based criterion, which requires entropy estimation, either with Parzen windows or with entropic graphs. We are also testing spectral and graph-cut methods.
Contact: A. Peñalver.
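As a sketch of the Parzen-window option mentioned above, a leave-one-out kernel density estimate gives the entropy as the average negative log-density at the samples. This is a generic 1-D illustration, not the group's estimator; the Gaussian kernel and the bandwidth `sigma` are assumptions.

```python
import numpy as np

def parzen_entropy(x, sigma=0.5):
    """Leave-one-out Parzen-window estimate of differential entropy (1-D):
    H ~ -(1/N) * sum_i log p_hat(x_i), with p_hat a Gaussian KDE
    built from all samples except x_i."""
    n = len(x)
    d = x[:, None] - x[None, :]                              # pairwise differences
    k = np.exp(-d**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
    np.fill_diagonal(k, 0.0)                                 # leave-one-out
    p_hat = k.sum(axis=1) / (n - 1)
    return -np.mean(np.log(p_hat))
```

In a split-until-necessary scheme, an estimate like this would be compared against the entropy of a Gaussian with the same variance (the maximum-entropy case): a component whose estimated entropy falls well below that bound is a candidate for splitting.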

Graph Matching & Clustering: Structural recognition with graphs by means of kernel-based methods that improve the robustness of Softassign-like algorithms. Graph clustering is faced with adaptive EM algorithms that estimate both the prototypes and the membership variables. Experiments are performed with random graphs and object images; the approach has also been tested on matching and clustering protein surfaces.
Contact: M.A. Lozano.
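The core of Softassign-like matching is turning a node-compatibility matrix into a soft correspondence matrix via alternating row/column (Sinkhorn) normalisation. The sketch below shows only that inner step for a square similarity matrix, under an assumed inverse temperature `beta`; the full algorithm anneals `beta` and handles outlier rows/columns, which is omitted here.

```python
import numpy as np

def sinkhorn(sim, beta=5.0, iters=200):
    """Softassign core: exponentiate a similarity matrix, then alternate
    row and column normalisation until (near) doubly stochastic."""
    m = np.exp(beta * sim)
    for _ in range(iters):
        m /= m.sum(axis=1, keepdims=True)   # row normalisation
        m /= m.sum(axis=0, keepdims=True)   # column normalisation
    return m
```

The resulting matrix assigns each node a distribution over candidate matches; as `beta` grows, it approaches a hard permutation.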

Egomotion Estimation: Estimating the rotation and translation of the observer. In the past we exploited 3D information, but we are now addressing the monocular case. We are testing RANSAC-fitted thin-plate splines in combination with graph-based filters and epipolar geometry.
Contact: M. Alvarado & Francisco Escolano.
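The epipolar-geometry ingredient above can be sketched as RANSAC over the constraint x2' F x1 = 0: sample 8 correspondences, estimate a fundamental matrix with the linear eight-point method, and keep the hypothesis with the most inliers. This is the textbook procedure, not the group's combined spline/graph-filter pipeline; the trial count and tolerance are illustrative.

```python
import numpy as np

def fundamental_8pt(p1, p2):
    """Linear eight-point estimate of F from >= 8 correspondences (Nx2 arrays)."""
    a = np.column_stack([
        p2[:, 0] * p1[:, 0], p2[:, 0] * p1[:, 1], p2[:, 0],
        p2[:, 1] * p1[:, 0], p2[:, 1] * p1[:, 1], p2[:, 1],
        p1[:, 0], p1[:, 1], np.ones(len(p1))])
    _, _, vt = np.linalg.svd(a)
    f = vt[-1].reshape(3, 3)                  # null vector of the design matrix
    u, s, vt = np.linalg.svd(f)
    return u @ np.diag([s[0], s[1], 0.0]) @ vt  # enforce rank 2

def ransac_f(p1, p2, trials=200, tol=1e-2, rng=np.random.default_rng(0)):
    """Keep the F whose epipolar residuals |x2' F x1| label most pairs inliers."""
    h1 = np.column_stack([p1, np.ones(len(p1))])
    h2 = np.column_stack([p2, np.ones(len(p2))])
    best, best_in = None, -1
    for _ in range(trials):
        idx = rng.choice(len(p1), 8, replace=False)
        f = fundamental_8pt(p1[idx], p2[idx])
        res = np.abs(np.sum((h2 @ f) * h1, axis=1))
        inl = int((res < tol).sum())
        if inl > best_in:
            best, best_in = f, inl
    return best, best_in
```

From the inlier set, rotation and translation follow by the usual essential-matrix decomposition when the camera calibration is known.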

Real-time Object Recognition: Building on the two preceding research lines, we apply transformational graph filters (which discard structural outliers and recompute the K-NN graph) to robust image features in order to detect the appearance of object models in image streams. Our current algorithm is cubic in the number of features, but we are improving it.
Contact: W. Aguilar & F. Escolano
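The filter-then-recompute loop mentioned in parentheses can be sketched as: build a K-NN graph over matched feature locations, discard points whose neighbourhood geometry is anomalous, and rebuild the graph on the survivors. The outlier test used here (mean K-NN distance far above the median) is a simple stand-in for the group's structural criterion.

```python
import numpy as np

def knn_graph(pts, k):
    """Index array of the k nearest neighbours of each 2-D point."""
    d = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)
    np.fill_diagonal(d, np.inf)               # a point is not its own neighbour
    return np.argsort(d, axis=1)[:, :k]

def filter_outliers(pts, k=3, factor=2.0):
    """Drop points whose mean k-NN distance is far above the median,
    then recompute the graph on the survivors (one filtering pass)."""
    nn = knn_graph(pts, k)
    mean_d = np.linalg.norm(pts[:, None] - pts[nn], axis=2).mean(axis=1)
    keep = mean_d < factor * np.median(mean_d)
    return pts[keep], knn_graph(pts[keep], k)
```

In a stream, this pass would run per frame on the candidate matches before verifying the object model against the cleaned graph.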


Autonomous Robots and Wearable Devices

SLAM & Devices for the Blind: Our SLAM algorithm, "Entropy Minimization SLAM", has proved robust and efficient (quasi-linear complexity) at building maps of the environment. We are improving the algorithm and incorporating visual localization.
Contact: J.M. Sáez.
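A toy 1-D illustration of the entropy-minimization idea, not the actual algorithm: score a candidate alignment by the histogram entropy of the merged point set, since a well-registered scan reuses the map's occupied bins instead of spreading mass into new ones. The fixed support, bin count and exhaustive shift search are all simplifications.

```python
import numpy as np

def cloud_entropy(pts, bins=60, span=(-10.0, 10.0)):
    """Histogram entropy of a merged 1-D point set over a fixed support:
    a crude map-compactness score (lower = more overlap)."""
    hist, _ = np.histogram(pts, bins=bins, range=span)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def align_by_entropy(ref, obs, shifts):
    """Pick the candidate shift whose merged cloud has minimum entropy."""
    return min(shifts, key=lambda s: cloud_entropy(np.concatenate([ref, obs + s])))
```

In the real 3-D setting the search is over rigid poses and the score is minimized globally over the whole trajectory, which is where the efficiency claims come in.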

3D SLAM for Underwater Vehicles: Entropy Minimization SLAM has another interesting field of application. We are collaborating with the AQUA project to map underwater structures when the robot is equipped with a stereo camera. Initial results are very good, even without explicit assumptions about the environment.
Contact: J.M. Sáez.

Primitive Extraction & Plane-based SLAM: Combining stereo cameras with projected light patterns yields very dense 3D point clouds from which we can extract planes and other primitives. However, we have also developed methods for extracting these primitives from the original stereo images, which requires a sensor model and grouping algorithms.
Contact: D. Viejo.
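A minimal sketch of plane extraction from a dense cloud, using generic RANSAC rather than the group's sensor-model approach: sample three points, hypothesize a plane, count points within a distance tolerance, and refit on the best consensus set. Trial count and tolerance are illustrative.

```python
import numpy as np

def fit_plane(pts):
    """Least-squares plane through 3-D points: (unit normal, centroid)."""
    c = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - c)
    return vt[-1], c                          # smallest singular vector = normal

def ransac_plane(pts, trials=100, tol=0.02, rng=np.random.default_rng(0)):
    """Largest planar support found by sampling 3-point candidate planes."""
    best_inliers = None
    for _ in range(trials):
        n, c = fit_plane(pts[rng.choice(len(pts), 3, replace=False)])
        inl = np.abs((pts - c) @ n) < tol     # point-to-plane distances
        if best_inliers is None or inl.sum() > best_inliers.sum():
            best_inliers = inl
    return fit_plane(pts[best_inliers])       # refit on the consensus set
```

Repeating the procedure on the residual points extracts further planes, which can then serve as landmarks for plane-based SLAM.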

Four-legged Robot Control & Coordination: We are experimenting with learning algorithms that find the optimal spatio-temporal configuration of the articulations for a given task (e.g. adaptation to a given surface). We are also developing methods to make AIBOs cooperate, and in the near future we will endow them with visual localization (SLAM) capabilities.
Contact: D. Gallardo, I. Alfonso & A. Botía.

Omnidirectional Vision for Localization and Mapping: The advent of this type of sensor offers new perspectives for localization and mapping. However, on-line learning and feature selection are key to exploiting its capabilities optimally in real time. We are developing efficient algorithms for robot navigation.
Contact: B. Bonev & M. Cazorla.
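One common way to frame the feature-selection step above is as an information-theoretic filter: rank (discretized) features by their mutual information with the place label and keep the most informative ones. This simple ranking variant is an assumption for illustration; practical selectors also account for redundancy between features.

```python
import numpy as np

def mutual_information(x, y):
    """MI (nats) between two discrete variables via their joint histogram."""
    joint, _, _ = np.histogram2d(x, y, bins=(len(set(x)), len(set(y))))
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0                              # skip empty joint cells
    return np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz]))

def rank_features_by_mi(features, labels, n_select):
    """Keep the n_select discrete feature columns most informative
    about the class label."""
    scores = [mutual_information(f, labels) for f in features.T]
    return np.argsort(scores)[::-1][:n_select]
```

Scoring features on-line this way lets the robot drop uninformative image measurements and keep localization tractable in real time.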