Mobile Robots


This project aims to create a robust navigation and localisation system using a panoramic visual sensor. Using visual landmarks, topological maps can be built which combine efficient global localisation with accurate local positioning. Hierarchical mapping and probabilistic localisation are also being investigated.

Panoramic Vision Sensor

Panoramic vision sensors allow a robot to capture images with a 360 degree view of the environment, making more information available to the robot for navigation tasks. We use a normal video camera pointing at a hyperboloidal mirror to extend the field of view captured by the camera to 360 degrees. This sensor is then mounted on the top of the robot with the camera facing upwards. Raw video images contain a circular image of the environment and have to be unwarped in software to a more familiar view. Example images (warped and unwarped) are shown below.
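The unwarping step maps each column of the output panorama to a bearing around the mirror axis and each row to a radius in the circular raw image. A minimal sketch of this polar-to-rectangular mapping (function name, output size, and nearest-neighbour sampling are illustrative assumptions, not the system's actual implementation):

```python
import numpy as np

def unwarp_panorama(raw, centre, r_min, r_max, out_w=360, out_h=64):
    """Unwarp a circular panoramic image into a rectangular panorama.

    Each output column corresponds to a bearing around the mirror axis;
    each row corresponds to a radius between r_min (inner edge of the
    mirror image) and r_max (outer edge). Nearest-neighbour sampling
    keeps the sketch simple; a real system would interpolate.
    """
    cx, cy = centre
    theta = 2.0 * np.pi * np.arange(out_w) / out_w              # bearing per column
    radius = r_min + (r_max - r_min) * np.arange(out_h) / (out_h - 1)  # radius per row
    # Sample the raw image at (cx + r*cos(theta), cy + r*sin(theta)).
    xs = np.rint(cx + radius[:, None] * np.cos(theta)[None, :]).astype(int)
    ys = np.rint(cy + radius[:, None] * np.sin(theta)[None, :]).astype(int)
    xs = np.clip(xs, 0, raw.shape[1] - 1)
    ys = np.clip(ys, 0, raw.shape[0] - 1)
    return raw[ys, xs]
```

Each row of the result is a ring of the raw image flattened into a line, so vertical features in the world appear roughly vertical in the panorama.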

Learning Places with Visual Landmarks

Sets of unique visual landmarks are used to represent particular places. By learning a series of places a robot can build a map of its environment. Landmarks are automatically selected from static panoramic images using a type of interest operator. They are then evaluated dynamically using a short series of movements inspired by the "Turn Back and Look" (TBL) behaviour of wasps, in order to choose landmarks which are robust to perturbations in perspective and illumination. The best performing landmarks throughout the image are chosen to represent the place in the landmark set. During this TBL movement the depth of landmarks can also be estimated using a bearing-only form of Simultaneous Localisation and Mapping. Here is an example of selected landmarks and their depths and uncertainty regions:
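The geometric core of bearing-only depth estimation is triangulation: two bearing observations of the same landmark, taken from different robot positions along the TBL movement, constrain the landmark to the intersection of two rays. A minimal sketch of that intersection (function name and the use of a direct 2x2 linear solve are illustrative assumptions; the actual system maintains a full SLAM state with uncertainty):

```python
import numpy as np

def triangulate_landmark(p1, b1, p2, b2):
    """Estimate a landmark position from two bearing-only observations.

    p1, p2 : robot positions (x, y) at the two observation points.
    b1, b2 : absolute bearings (radians) to the landmark from each position.

    Solves the ray-intersection equation p1 + t1*d1 = p2 + t2*d2
    as a 2x2 linear system in the ray parameters t1, t2.
    """
    d1 = np.array([np.cos(b1), np.sin(b1)])   # unit ray from first pose
    d2 = np.array([np.cos(b2), np.sin(b2)])   # unit ray from second pose
    A = np.column_stack([d1, -d2])
    t = np.linalg.solve(A, np.asarray(p2, float) - np.asarray(p1, float))
    return np.asarray(p1, float) + t[0] * d1
```

Depth follows as the distance from the robot to the triangulated point; the nearer the two observation points, the more ill-conditioned the solve, which is why the uncertainty regions grow for distant landmarks.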

Detecting Local Space

The extent of local space surrounding the mobile robot contains information valuable to the localisation process. While it cannot extract depth information in the way range sensors can, the panoramic sensor can provide a rough estimate of the extent of local space surrounding the robot. This information can then be used to constrain the localisation search. Occupancy grids of local space can be formed by combining the results of carpet detection techniques over time. This system uses carpet colour matching based on a colour space model of carpet, and gradient-based detection of carpet region boundaries.
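Combining per-frame carpet detections over time can be done with a standard log-odds occupancy grid update: cells classified as carpet (drivable floor) are pushed towards "free", all other cells towards "occupied". A minimal sketch of that fusion step (class name, update weights, and clamping are illustrative assumptions, not the system's actual parameters):

```python
import numpy as np

class LocalOccupancyGrid:
    """Log-odds occupancy grid fused from per-frame carpet detections.

    Each update takes a boolean mask over the grid: True where the
    current frame classified the cell as carpet. Repeated agreeing
    observations drive cells towards confident free/occupied values;
    clamping keeps the grid responsive to later contradicting evidence.
    """

    def __init__(self, shape, l_free=-0.4, l_occ=0.85, clamp=4.0):
        self.log_odds = np.zeros(shape)
        self.l_free, self.l_occ, self.clamp = l_free, l_occ, clamp

    def update(self, carpet_mask):
        # Carpet lowers the occupancy log-odds; non-carpet raises it.
        self.log_odds += np.where(carpet_mask, self.l_free, self.l_occ)
        np.clip(self.log_odds, -self.clamp, self.clamp, out=self.log_odds)

    def occupancy(self):
        # Convert log-odds back to occupancy probability in [0, 1].
        return 1.0 / (1.0 + np.exp(-self.log_odds))
```

Thresholding the resulting probabilities gives the rough free-space boundary used to constrain the localisation search.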

Localisation Experiments


Feedback & Queries: Simon Thompson
Date Last Modified: Tuesday, 17th April 2001