We have presented a quantitative, feature-based approach for pose estimation from outdoor visual information. Our objective is to construct an interface for rover teleoperation that can intelligently process rover imagery and assist human operators. We presented results on real data that demonstrate an improvement over the state of the art in outdoor position estimation.
The advantage of our implementation over previous ones stems from two factors. First, we allow several mountain features, large and small, nearby and far away, to be used by the estimator. To manage the complexity introduced by this rich set of measurements, we impose quantitative structure through the posterior distributions we compute. Second, we have a fast, efficient implementation of the pre-compilation stage, in which all possible visibility relationships are calculated and stored. Both factors contribute to the accuracy and runtime performance of our method.
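To make this combination of factors concrete, the following is a minimal sketch, under our own assumptions rather than the paper's actual implementation, of how per-feature measurements could be folded into a posterior over candidate positions while consulting a precomputed visibility table. The names (position_posterior, likelihood, visibility) are hypothetical placeholders for illustration only.

```python
import numpy as np

def position_posterior(candidates, features, measurements,
                       visibility, prior, likelihood):
    """Return a normalized posterior over candidate rover positions.

    candidates   : (N, 2) array of candidate (x, y) positions
    features     : list of terrain feature descriptors (e.g., mountain peaks)
    measurements : list of observations, one per feature
    visibility   : (N, len(features)) boolean array, precomputed offline
    prior        : (N,) array of prior probabilities over candidates
    likelihood   : callable(candidate, feature, measurement) -> float
    """
    post = prior.copy()
    for i, cand in enumerate(candidates):
        for j, (feat, meas) in enumerate(zip(features, measurements)):
            if not visibility[i, j]:
                continue  # skip features the table says cannot be seen from here
            post[i] *= likelihood(cand, feat, meas)
    return post / post.sum()
```

In a scheme like this, the expensive geometric reasoning about which features can be seen from where is done offline, so each online update reduces to table lookups and likelihood evaluations.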
Several aspects of this work call for further development. A stream of images must be presented to the user and processed by the system, and the results of position estimation must be overlaid on the images so they can be readily assimilated by the operator. Such capabilities will make it easier to remotely drive a rover in a wide variety of environments, with a particular impact on lunar missions.