IMU-KLT: IMU-Aided KLT Feature Tracking

   by Myung Hwangbo, Jun-Sik Kim and Takeo Kanade



Main features

  • Robust feature tracking for rapid camera-ego rotations by virtue of IMU fusion
  • Affine photometric model for template image alignment (8 parameters)
  • GPU implementation in a CUDA framework (CPU version is also available.)


   [Figures: the camera-IMU system and the Nvidia GPU]

  Introduction

Feature tracking is a front-end stage in many vision applications, from optical flow to object tracking to 3D reconstruction. Robust tracking performance is mandatory for improved results in higher-level algorithms such as visual odometry in autonomous vehicle navigation. We implemented the KLT (Kanade-Lucas-Tomasi) method to track a set of feature points in an image sequence. Our goal is to enhance KLT by increasing the number of feature points and their tracking length under a real-time constraint.

Robustness can be increased by addressing two limitations of KLT: its bounded search region and its low-order tracking motion model. The first is addressed by fusing IMU measurements with KLT, so that the search region, re-centered by the estimated camera ego-motion, is more likely to contain the true global minimum. The second is resolved by a higher-order motion model that handles severe appearance changes of the template caused by camera rolling and outdoor illumination.
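As an illustration of the higher-order model above, the 8-parameter affine photometric warp (6 geometric parameters plus a gain and a bias) can be sketched as follows; the struct layout and member names are our own assumptions for illustration, not the tracker's actual interface.

```cpp
#include <array>
#include <cassert>
#include <cmath>

// Sketch of an 8-parameter affine photometric model:
//   geometric part    W(x; p) = A x + t            (a11, a12, a21, a22, tx, ty)
//   photometric part  I_pred  = (1 + alpha) T + beta  (gain alpha, bias beta)
// Names and layout are illustrative assumptions, not the tracker's API.
struct AffinePhotometric {
    double a11 = 1, a12 = 0, a21 = 0, a22 = 1;  // affine matrix A (identity)
    double tx = 0, ty = 0;                      // translation t
    double alpha = 0, beta = 0;                 // photometric gain and bias

    // Map a template coordinate (x, y) into the current image.
    std::array<double, 2> warp(double x, double y) const {
        return {a11 * x + a12 * y + tx,
                a21 * x + a22 * y + ty};
    }

    // Intensity predicted for template value T under the gain/bias model.
    double shade(double T) const { return (1.0 + alpha) * T + beta; }
};
```

The alignment then minimizes the sum of squared differences between the predicted intensity shade(T(x)) and the image sampled at warp(x, y), over all eight parameters at once.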

The additional computational load caused by the extra parameters of this more complex motion model is alleviated by restricting the Hessian computation and by GPU parallel programming. The enhanced KLT, in cooperation with the IMU, achieves video-rate tracking of up to 1000 features simultaneously, even under rapid camera rotations.
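The IMU-aided prediction can be illustrated under a pure-rotation assumption: a camera rotation R, integrated from gyro rates over one frame interval, moves pixels through the infinite homography H = K R K^-1, which can seed each feature's initial warp before the KLT iterations start. This is a minimal sketch; the intrinsic values and function names below are illustrative assumptions, not the released code.

```cpp
#include <array>
#include <cassert>
#include <cmath>

using Mat3 = std::array<std::array<double, 3>, 3>;

// 3x3 matrix product.
Mat3 mul(const Mat3& A, const Mat3& B) {
    Mat3 C{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            for (int k = 0; k < 3; ++k)
                C[i][j] += A[i][k] * B[k][j];
    return C;
}

// Build H = K * R * K^{-1} for intrinsics K = [f 0 cx; 0 f cy; 0 0 1]
// and a rotation R integrated from gyro rates over one frame interval.
Mat3 infinite_homography(double f, double cx, double cy, const Mat3& R) {
    Mat3 K  = {{{f, 0, cx}, {0, f, cy}, {0, 0, 1}}};
    Mat3 Ki = {{{1 / f, 0, -cx / f}, {0, 1 / f, -cy / f}, {0, 0, 1}}};
    return mul(mul(K, R), Ki);
}

// Predicted pixel position of feature (u, v) under the homography H.
std::array<double, 2> predict(const Mat3& H, double u, double v) {
    double x = H[0][0] * u + H[0][1] * v + H[0][2];
    double y = H[1][0] * u + H[1][1] * v + H[1][2];
    double w = H[2][0] * u + H[2][1] * v + H[2][2];
    return {x / w, y / w};
}
```

Re-centering each feature's search window at predict(H, u, v) is what lets the tracker survive rapid rotations that would otherwise push the true match outside a fixed-size window.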


  Source Code in C++/CUDA (Ver 1.0)

Both CPU and GPU implementations are available and are built together into a single program, so our KLT tracker can be run even on a computer without a GPU. The GPU-based tracker is much faster when a large number of features is tracked. A command-line option selects either the CPU or the GPU at run time.

The source code is written in C++/CUDA and can be built automatically with CMake. We have tested it on Windows XP and Ubuntu 10.04.

The following libraries and build system are required.

Installation:
  1. Download the required libraries and compile them if needed.
  2. Download the source (100KB) and decompress it to any folder ($HOME).
  3. Move to $HOME and type "cmake ." (don't miss the trailing period).
  4. Download the dataset examples (aerial or desk) and unzip them under $HOME/data.
  5. Go to $HOME/bin and try one of the following commands:
    • ./klt_tracker -f data_aerial_uav.cfg
    • ./klt_tracker -f data_desk_scene.cfg
  6. Does it work?
Read the GPU implementation note for more details.


  Data Set (Video and IMU data)

  • Desk scene: 640 x 480 @ 30Hz, available for download (43.1MB)
  • UAV aerial video: 320 x 240 @ 30Hz, available for download (7.7MB)


  Publications

  • Jun-Sik Kim, Myung Hwangbo, and Takeo Kanade, "Realtime Affine-photometric KLT Feature Tracker on GPU in CUDA Framework", The Fifth IEEE Workshop on Embedded Computer Vision in ICCV 2009, Sept 2009, pp. 1306-1311. [pdf]
  • Myung Hwangbo, Jun-Sik Kim, and Takeo Kanade, "Inertial-aided KLT Feature Tracking for a Moving Camera", IEEE/RSJ Int'l Conf. on Intelligent Robots and Systems (IROS'09), Oct 2009, pp. 1909-1916. [pdf]


  Contact


Send email to Myung Hwangbo (myung@cs.cmu.edu) or Jun-Sik Kim (kimjs@cs.cmu.edu) if you have any questions.

Last updated on Dec. 4th, 2009