Autonomous Robot navigation guided by visual targets

Initial/final date: 
01 January 2001 to 31 December 2003
Project researchers: 
Main researcher: 
Project type: 
-
Funding Entity: 
DPI 2000-11352-C02
Description: 
The goal of this project is to enable the guidance of a mobile robot through a visual interface. The robot will be equipped with two cameras to acquire images of its environment, which will be shown on a control screen. To guide the robot, the operator will only need to indicate, on the observed image, the place where the robot should go. All the navigation aspects, such as velocity control, steering, obstacle avoidance, and trajectory planning and monitoring, will be left to the robot until the desired location is reached or a new goal is indicated by the operator. The navigation algorithm will be based on a multiagent approach, and coordination among the different agents will be achieved by means of a bidding mechanism. The work will be carried out on real mobile robots, in environments of increasing complexity, about which no a priori knowledge is assumed. Wheeled robots will be used in office-like and smooth outdoor environments, and a legged robot will be used on more difficult terrain. In all cases, the navigation task will be carried out autonomously, based only on the visual information provided by the cameras. One of the difficulties of this project is to give an autonomous robot some sense of orientation, so that it does not get lost when the goal location is temporarily out of sight.
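The bidding-based coordination described above can be sketched in a few lines. This is a minimal illustration, not the project's actual implementation: the agent names, actions, and bid values below are hypothetical, and the only assumption taken from the description is that each agent submits a bid and the most urgent bid wins control of the robot.

```python
# Hypothetical sketch of bidding-based coordination among navigation agents.
# Each agent proposes an action with an urgency value; the highest bid wins.
from dataclasses import dataclass


@dataclass
class Bid:
    agent: str    # name of the bidding agent (hypothetical examples below)
    action: str   # proposed motor command
    value: float  # urgency of the proposal, in [0, 1]


def select_action(bids):
    """Return (agent, action) of the highest bid; that agent takes control."""
    winner = max(bids, key=lambda b: b.value)
    return winner.agent, winner.action


bids = [
    Bid("goal_tracker", "steer_towards_target", 0.6),
    Bid("obstacle_avoider", "turn_left", 0.9),  # obstacle detected nearby
    Bid("pilot", "keep_velocity", 0.3),
]
agent, action = select_action(bids)
# The obstacle avoider wins here because its bid is the most urgent,
# so obstacle avoidance temporarily overrides goal tracking.
```

In such a scheme no agent needs a global view: each one evaluates the situation from its own perspective, and the bidding mechanism arbitrates.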
Funding Amount (€): 
0.00
Research line: 
Autonomous Robots
Acronym: 
ARGOS-QUALNAV

Qualitative Navigation of Autonomous Robots with Learning by Experience capabilities

Initial/final date: 
01 December 2003 to 30 November 2006
Project researchers: 
Main researcher: 
Project type: 
-
Funding Entity: 
DPI-2003-05193-C02-02
Description: 
The goal of this project is to extend the multiagent system for robot navigation developed in project DPI2000-1352-C02-02 by means of a CBR (Case-Based Reasoning) agent. This agent will provide learning-by-experience capabilities to the navigation system. This new functionality will be based on the detection (via vision and laser sensors) of environment situations that may affect the behaviour of the robot, in such a way that, when confronted with a situation similar to previously experienced ones, the CBR agent should be able to foresee the results of navigation actions based on what happened in past similar situations. We will focus our attention on those situations that led to failures in reaching the navigation goal, in order to avoid new failures. However, we will also take into account those situation-action pairs that led to successfully reaching the goal. To develop such a CBR agent, we must deal with what can be called continuous CBR, that is, study how to represent in the case base symbolic abstractions (high-level knowledge) of the low-level continuous information coming from the sensor readings. Another relevant aspect concerns the notion of similarity between situations and, in particular, how to measure these similarities. How to reuse a past successful solution is also a relevant aspect to be developed. It is worth noting that there are very few existing efforts in CBR research dealing with continuous CBR. We therefore face an important and interesting research problem.
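The retrieval step of such a CBR agent can be sketched as follows. This is an illustrative example only, assuming a situation is abstracted into a numeric feature vector and that similarity is the inverse of Euclidean distance; the feature values, actions, and outcomes below are hypothetical, not taken from the project.

```python
# Hypothetical sketch of case retrieval in a continuous-CBR navigation agent.
# A "situation" is a feature vector abstracted from vision/laser readings;
# each stored case pairs a situation with the action taken and its outcome.
import math


def similarity(a, b):
    """Similarity in (0, 1]: inverse of Euclidean distance between situations."""
    dist = math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return 1.0 / (1.0 + dist)


def retrieve(case_base, situation):
    """Return the stored case whose situation is most similar to the current one."""
    return max(case_base, key=lambda case: similarity(case["situation"], situation))


case_base = [
    {"situation": (0.2, 0.9), "action": "go_through_gap", "outcome": "failure"},
    {"situation": (0.8, 0.1), "action": "follow_wall", "outcome": "success"},
]
best = retrieve(case_base, (0.25, 0.85))
# The retrieved case is the past failure: a very similar situation led to
# failure before, so the navigation system can avoid repeating that action.
```

The open research questions named in the description live precisely in the two helpers above: how to abstract continuous sensor streams into the `situation` representation, and what similarity measure makes two such situations comparable.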
Funding Amount (€): 
0.00
Research line: 
Autonomous Robots
Acronym: 
QUALNAVEX