A Real-time Method for Depth Enhanced Monocular Odometry

Download: PDF.

“A Real-time Method for Depth Enhanced Monocular Odometry” by J. Zhang, M. Kaess, and S. Singh. Autonomous Robots (AURO), vol. 41, no. 1, Jan. 2017, pp. 31-43.

Abstract

Visual odometry can be augmented by depth information, such as that provided by RGB-D cameras or by lidars associated with cameras. However, such depth information can be limited by the sensors, leaving large areas in the visual images where depth is unavailable. Here, we propose a method to utilize depth, even if only sparsely available, in recovering camera motion. In addition, the method obtains depth through structure from motion, using the previously estimated motion, and exploits salient visual features for which depth is unavailable. The method can therefore extend RGB-D visual odometry to large-scale, open environments where depth often cannot be sufficiently acquired. The core of our method is a bundle adjustment step that refines the motion estimates in parallel, processing a sequence of images in a batch optimization. We have evaluated our method in three sensor setups: one using an RGB-D camera, and two using combinations of a camera and a 3D lidar. Our method is rated #4 on the KITTI odometry benchmark irrespective of sensing modality, including against stereo visual odometry methods that retrieve depth by triangulation. The resulting average position error is 1.14% of the distance traveled.
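To make the central idea concrete, below is a minimal Python sketch of the depth association step the abstract alludes to: sparse depth points are projected into the image, and each tracked feature picks up a depth only if a projected point lands close enough; the remaining features are kept without depth. The function name, the brute-force nearest-neighbor search, and the pixel-radius threshold are illustrative assumptions for this sketch, not the paper's implementation.

    import numpy as np

    def associate_depth(features_uv, depth_points_cam, K, radius_px=3.0):
        """Attach depth to tracked image features from a sparse point cloud.

        features_uv:      (N, 2) pixel coordinates of tracked features
        depth_points_cam: (M, 3) sparse 3D points in the camera frame
                          (e.g., from a lidar or an RGB-D sensor)
        K:                (3, 3) camera intrinsic matrix
        Returns an (N,) array of depths, NaN where no depth point projects
        within radius_px of the feature (those features stay depth-unknown).
        """
        depths = np.full(len(features_uv), np.nan)
        # Keep only points in front of the camera.
        pts = depth_points_cam[depth_points_cam[:, 2] > 0]
        if len(pts) == 0:
            return depths
        # Project the sparse 3D points into the image plane.
        proj = (K @ pts.T).T
        proj_uv = proj[:, :2] / proj[:, 2:3]
        for i, uv in enumerate(features_uv):
            # Brute-force nearest projected depth point; a real-time system
            # would use a 2D spatial index (e.g., a KD-tree) here instead.
            d2 = np.sum((proj_uv - uv) ** 2, axis=1)
            j = np.argmin(d2)
            if d2[j] <= radius_px ** 2:
                depths[i] = pts[j, 2]
        return depths

    # Toy usage with synthetic data: 5 tracked features, 200 depth points.
    rng = np.random.default_rng(0)
    K = np.array([[500.0, 0.0, 320.0],
                  [0.0, 500.0, 240.0],
                  [0.0, 0.0, 1.0]])
    points = rng.uniform([-5.0, -5.0, 2.0], [5.0, 5.0, 30.0], size=(200, 3))
    features = rng.uniform([0.0, 0.0], [640.0, 480.0], size=(5, 2))
    print(associate_depth(features, points, K))

Features that come back with a valid depth can constrain motion with full 3D information, while the NaN ones still contribute 2D constraints, which is how a method of this kind degrades gracefully when depth coverage is sparse.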

BibTeX entry:

@article{Zhang17auro,
   author = {J. Zhang and M. Kaess and S. Singh},
   title = {A Real-time Method for Depth Enhanced Monocular Odometry},
   journal = {Autonomous Robots (AURO)},
   volume = {41},
   number = {1},
   pages = {31--43},
   month = jan,
   year = {2017}
}
Last updated: November 10, 2024