The Basic Idea
This page describes an application of computer vision to space robotics:
a "smart" teleoperation interface that analyzes images sent back by a
mobile robot during space missions and assists the human teleoperator on
Earth.
Teleoperation of mobile robots is a difficult and stressful task;
it is well-known that remote drivers get lost easily, despite having maps
and visible landmarks.
Our goal is to reduce the cognitive load
on teleoperators by providing cues that help keep them
from getting lost and disoriented.
The figure below illustrates the basic idea:
the system receives the images from the rover and uses visual cues
and a map of the rover's environment to produce position
estimates that help the operator.
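The matching step can be sketched in a few lines. The toy code below is only an illustration of the general idea, not the actual VIPER pipeline: it assumes a feature signature (here, a made-up horizon-elevation profile) is extracted from the rover's camera image, compares it against signatures predicted from the map at a set of candidate positions, and returns the candidate with the smallest mismatch. All function names, candidate positions, and profile values are hypothetical.

```python
def profile_distance(a, b):
    """Sum of squared differences between two equal-length feature profiles."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def estimate_position(observed_profile, map_candidates):
    """Return the candidate position whose map-predicted profile
    best matches the profile observed in the rover's image.

    map_candidates: dict mapping (x, y) -> predicted profile.
    """
    return min(map_candidates,
               key=lambda pos: profile_distance(observed_profile,
                                                map_candidates[pos]))

# Toy usage: three candidate positions with synthetic horizon profiles.
candidates = {
    (0, 0):  [1.0, 2.0, 3.0, 2.0],
    (50, 0): [2.0, 2.5, 2.0, 1.0],
    (0, 50): [0.5, 1.0, 0.5, 0.2],
}
observation = [2.1, 2.4, 1.9, 1.1]
print(estimate_position(observation, candidates))  # → (50, 0)
```

A real system would of course use a dense grid of candidates and a more discriminative image signature, but the structure of the estimate is the same: search the map for the position that best explains what the camera sees.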
We call the system VIPER, for VIsual Position EstimatoR.
We have run VIPER with data obtained in Pittsburgh,
using sequences of images as illustrated above.
VIPER estimates position with errors of less than 100 meters;
similar accuracy has also been observed with data from Dromedary Peak, Utah.
Related Information
I have not been able to find much information online about outdoor position
estimation, but if you find something,
please let me know.
There are many excellent published papers on outdoor localization,
positioning and navigation for space rovers, and related topics. A (very)
brief sample is given in the references
in our system description.