Vision Based Tactical Driving
Todd M Jochem
The Robotics Institute
Carnegie Mellon University
Pittsburgh, PA 15213
January 11, 1996
Abstract
Much progress has been made toward solving the autonomous lane
keeping problem using vision based methods. Systems have been
demonstrated which can drive robot vehicles at high speeds for long
distances. The current challenge for vision based on-road navigation
researchers is to create systems that maintain the performance of the
existing lane keeping systems, while adding the ability to execute
tactical level driving tasks like lane transition and intersection
detection and navigation.
There are many ways to add tactical functionality to a driving
system. Solutions range from developing task specific software
modules to grafting additional functionality onto a basic lane
keeping system. Solutions like these are problematic because they
either make reuse of acquired knowledge difficult or impossible, or
preclude the use of alternative lane keeping systems.
A more desirable solution is to develop a robust, lane keeper
independent control scheme that provides the functionality to execute
tactical actions. Based on this hypothesis, techniques that are used
to execute tactical level driving tasks should:
- Be based on a single framework that is applicable to a variety of tactical level actions,
- Be extensible to other vision based lane keeping systems, and
- Require little or no modification of the lane keeping system with which they are used.
This thesis examines a framework, called Virtual Active Vision, which
provides this functionality through intelligent control of the visual
information presented to the lane keeping system. Novel solutions
based on this framework for two classes of tactical driving tasks,
lane transition and intersection detection and traversal, are
presented in detail. Specifically, algorithms which allow the ALVINN
lane keeping system to robustly execute lane transition maneuvers
like lane changing, entrance and exit ramp detection and traversal,
and obstacle avoidance are presented. Additionally, with the aid of
active camera control, the ALVINN system enhanced with Virtual Active
Vision tools can successfully detect and navigate basic road
intersections.
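To make the virtual camera idea concrete, the sketch below (Python, illustrative only; the function name, window parameters, and geometry are assumptions, not code from the thesis) resamples a subwindow of the physical camera's image into the fixed low-resolution input that a neural lane keeper like ALVINN expects. Redirecting the lane keeper's attention then amounts to choosing where the subwindow sits, with no change to the lane keeping system itself.

import numpy as np

def virtual_camera(image, top_left, window_size, output_size=(30, 32)):
    """Resample a subwindow of `image` into a fixed-size virtual image.

    image:       H x W (grayscale) array from the physical camera.
    top_left:    (row, col) of the subwindow's upper-left corner.
    window_size: (height, width) of the subwindow in physical pixels.
    output_size: resolution the lane keeper expects (ALVINN used a
                 low-resolution input retina on the order of 30 x 32).
    """
    r0, c0 = top_left
    wh, ww = window_size
    oh, ow = output_size
    # Map each output pixel back to a source pixel (nearest neighbor).
    rows = r0 + (np.arange(oh) * wh) // oh
    cols = c0 + (np.arange(ow) * ww) // ow
    return image[np.ix_(rows, cols)]

# Example: aim one virtual camera at the current lane and another at the
# adjacent lane. Both feed the same, unmodified lane keeper.
frame = np.random.randint(0, 256, size=(480, 640), dtype=np.uint8)
current_lane = virtual_camera(frame, top_left=(200, 160), window_size=(240, 320))
adjacent_lane = virtual_camera(frame, top_left=(200, 320), window_size=(240, 320))

Under this sketch, a lane change can be thought of as sliding the subwindow from the current lane toward the adjacent one over successive frames, so the lane keeper is smoothly handed the new lane without ever being retrained or rewritten.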
My complete thesis is online. It contains many images, most of which are in
color. Unfortunately, this makes the file quite large: 12.6 MB compressed and
35 MB uncompressed. Individual chapters are available below.
Chapter 1 introduces the thesis and describes Virtual Active Vision and
virtual cameras. (664 KB)
Chapter 2 describes how virtual cameras can be used to increase the
performance of the ALVINN lane keeping system by focusing the system's
attention on only important parts of the scene. (209 KB)
Chapter 3 describes how virtual cameras were used to enable ALVINN to
execute tasks like lane changing, exit and entrance ramp detection, and
obstacle avoidance maneuvers. (4.4 MB)
Chapter 4 describes how virtual cameras were used, along with active
camera control, to enable ALVINN to detect and navigate through simple
intersections. (6.4 MB)
Chapter 5 describes how this work relates to other similar systems like
RALPH and ROBIN. (227 KB)
Chapter 6 describes the testbed vehicle, the Navlab 5, that was used for
most of the experiments in this dissertation. (1.1 MB)
Chapter 7 describes the contributions of this work to the fields of mobile
robots and computer vision, and presents fertile areas of future work. (41 KB)