The VIPER Front Page

Visual Position Estimation:
Estimating Position from Outdoor Images

Fabio Cozman
Carlos Ernesto Guestrin


The Basic Idea

VIPER is a "smart" teleoperation interface that analyzes images sent by a mobile robot on space missions and helps the human teleoperator on Earth. Teleoperation of mobile robots is a difficult and stressful task; it is well known that remote drivers get lost easily, even with maps and visible landmarks at hand. Our goal is to reduce the cognitive load on teleoperators by providing cues that help keep them from getting lost and disoriented.

The basic idea, illustrated by the figure below, follows these steps:

  1. the system receives images from the rover;
  2. features extracted from the images and from a digital elevation map of the region are matched to estimate the rover's position (see the sketch after this list);
  3. visual cues are overlaid on the images and on the map to aid the user in teleoperation tasks.

We call the system VIPER, for VIsual Position EstimatoR.
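
To give a feel for the matching step, here is a toy sketch. It is not VIPER's actual estimator, which is described in our papers; it simply assumes that a skyline (horizon profile) extracted from the rover image is compared against skylines predicted from the elevation map at candidate positions, and the best-scoring candidate is returned. The helper names (predicted_skyline, estimate_position) and the use of Python/NumPy are purely illustrative.

    import numpy as np

    def predicted_skyline(dem, x, y, cam_height=1.5, n_bearings=360, cell_size=10.0):
        """For a candidate cell (x, y) on the elevation map, compute the
        elevation angle of the horizon in each compass direction by a
        simple ray march outward from the camera position."""
        eye = dem[y, x] + cam_height
        angles = np.full(n_bearings, -np.pi / 2)
        for b in range(n_bearings):
            theta = 2 * np.pi * b / n_bearings
            dx, dy = np.cos(theta), np.sin(theta)
            for step in range(1, max(dem.shape)):
                px = int(round(x + dx * step))
                py = int(round(y + dy * step))
                if not (0 <= px < dem.shape[1] and 0 <= py < dem.shape[0]):
                    break
                elev_angle = np.arctan2(dem[py, px] - eye, step * cell_size)
                if elev_angle > angles[b]:
                    angles[b] = elev_angle
        return angles

    def estimate_position(dem, observed_skyline, candidates):
        """Score each candidate cell by how well its predicted skyline
        matches the skyline extracted from the rover image (sum of
        squared differences); return the best candidate and its score."""
        best, best_score = None, np.inf
        for (x, y) in candidates:
            pred = predicted_skyline(dem, x, y, n_bearings=len(observed_skyline))
            score = np.sum((pred - observed_skyline) ** 2)
            if score < best_score:
                best, best_score = (x, y), score
        return best, best_score

A real estimator must also handle uncertainty in the extracted skyline and in the map, and search the candidate positions far more efficiently than this exhaustive loop does.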



More information about VIPER



Related Information

We have not been able to find much information online about outdoor position estimation, but if you find something, please let us know. There are many excellent printed papers on outdoor localization, positioning, and navigation for space rovers; a (very) brief sample is given in the references in our earlier system description.


This work has been conducted at the Robotics Institute, School of Computer Science, Carnegie Mellon University. It has been partially funded by NASA; Fabio Cozman is supported by a scholarship from CNPq (Brazil). We thank these organizations for all their support.