The RANGER Navigator


Investigators: Alonzo Kelly, Anthony Stentz

Carnegie Mellon researchers are developing RANGER (Real-time Autonomous Navigator with a Geometric Engine), a software control system for cross-country autonomous vehicles. The goal of the project is to increase the speed and enhance the reliability of robotic vehicles in rugged outdoor settings.

RANGER has autonomously navigated distances of 15 kilometers while moving continuously, at times reaching speeds of 15 km/hr. The system has been used successfully on the NAVLAB II, a converted U.S. Army jeep, and on a specialized Lunar Rover vehicle that may one day explore the moon.

The key to the success of the system is its adaptability. It explicitly computes the vehicle reaction time and required sensory throughput and adapts its perception and planning systems to meet the demands of the moment.
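As a rough illustration of the reaction-time computation, the required planning lookahead can be treated as a stopping-distance budget that grows with speed. The sketch below shows the idea in C; the latency and braking figures are assumptions for illustration, not RANGER's actual parameters or code.

    /* Illustrative sketch only: how a lookahead requirement might be
     * budgeted from speed, sensing/actuation latency, and braking
     * ability.  All parameter values here are assumptions. */
    #include <stdio.h>

    double required_lookahead(double speed,    /* m/s                       */
                              double latency,  /* sensing+actuation delay, s */
                              double decel)    /* braking deceleration, m/s^2 */
    {
        double reaction_dist = speed * latency;               /* travelled before reacting */
        double braking_dist  = speed * speed / (2.0 * decel); /* kinematic stopping distance */
        return reaction_dist + braking_dist;
    }

    int main(void)
    {
        double v = 15.0 * 1000.0 / 3600.0;   /* 15 km/hr expressed in m/s */
        printf("lookahead at %.1f m/s: %.1f m\n",
               v, required_lookahead(v, 0.5, 2.0));
        return 0;
    }

The faster the vehicle moves, the farther ahead the perception and planning systems must look, which is why both are adapted to the demands of the moment.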

Operational Modes

The system can autonomously seek a predefined goal or it can be configured to supervise remote or in-situ human drivers and keep them out of trouble.

Goal-Seeking

The system can follow a predefined path while avoiding any dangerous hazards along the way or it can seek a sequence of positions or a particular compass heading. In survival mode, seeking no particular goal, it will follow the natural contours of the surrounding terrain.

World Model

RANGER maintains a computerized terrain map data structure that models the geometry of the environment. It is an array of elevations that represents the world as a 2-1/2 D surface whose vertical direction is aligned with the gravity vector. This representation, combined with a model of vehicle geometry, permits a robust assessment of vehicle safety.
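For concreteness, a minimal sketch of such a 2-1/2 D elevation grid is shown below. The field names and the use of NAN to mark unknown cells are assumptions, not RANGER's actual data layout.

    /* Minimal 2-1/2 D terrain map sketch: a regular grid of elevations
     * indexed by world (x, y).  Layout and conventions are assumptions. */
    #include <math.h>
    #include <stdlib.h>

    typedef struct {
        int     rows, cols;          /* grid dimensions                  */
        double  cell_size;           /* metres per cell                  */
        double  origin_x, origin_y;  /* world coordinates of cell (0,0)  */
        double *elev;                /* rows*cols elevations; NAN = unknown */
    } TerrainMap;

    TerrainMap *map_create(int rows, int cols, double cell_size)
    {
        TerrainMap *m = malloc(sizeof *m);
        m->rows = rows; m->cols = cols; m->cell_size = cell_size;
        m->origin_x = m->origin_y = 0.0;
        m->elev = malloc(sizeof(double) * rows * cols);
        for (int i = 0; i < rows * cols; i++)
            m->elev[i] = NAN;        /* unmapped terrain */
        return m;
    }

    /* Look up the elevation under a world point; NAN means unknown. */
    double map_elevation(const TerrainMap *m, double x, double y)
    {
        int c = (int)((x - m->origin_x) / m->cell_size);
        int r = (int)((y - m->origin_y) / m->cell_size);
        if (r < 0 || r >= m->rows || c < 0 || c >= m->cols) return NAN;
        return m->elev[r * m->cols + c];
    }

Cells marked unknown are themselves treated as a hazard, as described under Hazard Assessment below.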

Vehicle Model

RANGER addresses the difficulties of autonomous land vehicle control with a tightly coupled, adaptive feedforward control loop. The system incorporates measurements of both the state of the vehicle and the state of the environment and maintains high-fidelity models of both, updated at very high rates.

At sufficiently high speeds, it becomes necessary to explicitly account for the difference between the ideal response of the vehicle to its commands and its actual response. RANGER models the vehicle as a dynamic system in the sense of modern control theory. The linear system model is expressed in the following generic block diagram.

FIFO queues and time tags are used to model the delays associated with physical I/O and to register events that occur at the same time. The command vector u includes the steering, brake, and throttle commands. The disturbances ud model the terrain contact constraint. The state vector x includes the 3D position and three-axis orientation of the vehicle body as well as its linear and angular velocities. The system dynamics matrix A propagates the state of the vehicle forward in time. The output vector y is a time-continuous expression of predicted hazards, where each element of the vector is a different hazard.
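In the usual state-space notation this corresponds roughly to dx/dt = A x + B (u + ud), y = C x. The sketch below illustrates the two mechanisms named above, a command FIFO that models actuation delay and a forward propagation of the state, using a much-reduced kinematic model; the state and command layouts are illustrative assumptions, not RANGER's actual vehicle model.

    /* Feedforward sketch: commands pass through a FIFO that models
     * actuation delay, then a toy kinematic model propagates the state.
     * Structures and constants here are assumptions for illustration. */
    #include <math.h>

    #define DELAY_STEPS 5    /* assumed actuator delay, in time steps */

    typedef struct { double steer, throttle, brake; } Command;
    typedef struct { double x, y, yaw, speed;       } State;  /* reduced state */

    typedef struct {
        Command q[DELAY_STEPS];
        int     head;
    } CommandQueue;

    /* Push the newest command, return the one issued DELAY_STEPS ago. */
    Command queue_step(CommandQueue *cq, Command u)
    {
        Command delayed = cq->q[cq->head];
        cq->q[cq->head] = u;
        cq->head = (cq->head + 1) % DELAY_STEPS;
        return delayed;
    }

    /* One Euler step of the toy model (stands in for dx/dt = Ax + Bu). */
    State propagate(State s, Command u, double dt, double wheelbase)
    {
        State n = s;
        n.x     += s.speed * cos(s.yaw) * dt;
        n.y     += s.speed * sin(s.yaw) * dt;
        n.yaw   += s.speed * tan(u.steer) / wheelbase * dt;
        n.speed += (u.throttle - u.brake) * dt;
        return n;
    }

The delayed command, not the most recent one, drives the propagation, which is how the difference between the ideal and actual response of the vehicle is accounted for.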

Hazard Assessment

Hazards include regions of unknown terrain, hills that would cause a tip-over, holes and cliffs that would cause a fall, and small obstacles that would collide with the vehicle wheels or body.

The process of predicting hazardous conditions involves the numerical solution of the equations of motion while enforcing the constraint that the vehicle remain in contact with the terrain. This is a feedforward process in which the current vehicle state furnishes the initial conditions for numerical integration. The feedforward approach to hazard assessment imparts high-speed stability to both goal-seeking and hazard avoidance behaviors.
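A minimal sketch of this feedforward idea follows: the vehicle model is rolled forward along a candidate steering direction, forced onto the terrain surface, and the predicted attitude is checked at each step. The flat-plane fit to the wheel contact points and the toy terrain function are assumptions, not RANGER's actual formulation.

    /* Feedforward hazard sketch: integrate forward under a terrain
     * contact constraint and record the worst predicted roll.
     * Geometry and terrain are simplified assumptions. */
    #include <math.h>

    typedef struct { double x, y, yaw, speed; } State;

    /* Stand-in for a lookup into the terrain map. */
    double terrain_z(double x, double y)
    {
        return 0.2 * sin(0.1 * x);               /* gentle rolling terrain */
    }

    /* Estimate body roll and pitch from elevations under the wheels. */
    void contact_attitude(State s, double track, double wheelbase,
                          double *roll, double *pitch)
    {
        double c = cos(s.yaw), sn = sin(s.yaw);
        double zl = terrain_z(s.x - 0.5 * track * sn, s.y + 0.5 * track * c);
        double zr = terrain_z(s.x + 0.5 * track * sn, s.y - 0.5 * track * c);
        double zf = terrain_z(s.x + 0.5 * wheelbase * c, s.y + 0.5 * wheelbase * sn);
        double zb = terrain_z(s.x - 0.5 * wheelbase * c, s.y - 0.5 * wheelbase * sn);
        *roll  = atan2(zl - zr, track);
        *pitch = atan2(zf - zb, wheelbase);
    }

    /* Worst roll magnitude seen along one candidate steering direction. */
    double worst_roll(State s, double steer, double dt, int steps,
                      double track, double wheelbase)
    {
        double worst = 0.0;
        for (int i = 0; i < steps; i++) {
            double roll, pitch;
            s.x   += s.speed * cos(s.yaw) * dt;
            s.y   += s.speed * sin(s.yaw) * dt;
            s.yaw += s.speed * tan(steer) / wheelbase * dt;
            contact_attitude(s, track, wheelbase, &roll, &pitch);
            if (fabs(roll) > worst) worst = fabs(roll);
            (void)pitch;   /* pitch, body, and wheel hazards handled similarly */
        }
        return worst;
    }
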

System components above the state space model in the software hierarchy translate the hazard signals y(t) into a vote vector. This is accomplished by integrating out the time dimension to generate a vote for each steering direction based on a normalization of the worst case of all of the considered hazards.
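The sketch below shows one way the time dimension can be integrated out: for each candidate steering direction, take the worst case over all hazards and all look-ahead times, then normalize so that higher values indicate safer trajectories. The array sizes and the particular normalization are assumptions.

    /* Turn time-continuous hazard signals into one vote per candidate
     * steering direction.  Dimensions and normalization are assumptions. */
    #define N_CANDIDATES 11   /* candidate steering directions            */
    #define N_HAZARDS     4   /* roll, pitch, body, and wheel hazards     */
    #define N_STEPS      20   /* samples along the prediction horizon     */

    /* hazard[c][h][t]: severity in [0, 1] of hazard h at time step t
     * along candidate trajectory c (1 = certainly unsafe). */
    void compute_votes(const double hazard[N_CANDIDATES][N_HAZARDS][N_STEPS],
                       double vote[N_CANDIDATES])
    {
        for (int c = 0; c < N_CANDIDATES; c++) {
            double worst = 0.0;
            for (int h = 0; h < N_HAZARDS; h++)
                for (int t = 0; t < N_STEPS; t++)
                    if (hazard[c][h][t] > worst)
                        worst = hazard[c][h][t];
            vote[c] = 1.0 - worst;   /* higher vote = safer trajectory */
        }
    }
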

In the figure, the system issues a left turn command to avoid the hill to its right. The histograms represent the votes for each candidate trajectory, for each hazard. Higher values indicate safer trajectories. The hazards are excessive roll, excessive pitch, collision with the undercarriage, and collision with the wheels. The tactical vote is the overall vote of hazard avoidance. It wants to turn left. The strategic vote is the goal-seeking vote. Here it votes for straight ahead.

Arbitration

At times, goal-seeking may cause collision with obstacles because, for example, the goal may be behind an obstacle. The system incorporates an arbiter which permits obstacle avoidance and goal-seeking to coexist and to simultaneously influence the behavior of the host vehicle. The arbiter can also integrate the commands of a human driver with the autonomous system.
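A minimal arbitration sketch is given below: the hazard-avoidance (tactical) and goal-seeking (strategic) votes are combined per candidate and the best candidate wins. The multiplicative combination and the veto threshold are assumptions; the point is only that both behaviors, and potentially a human driver's commands, influence the chosen steering direction.

    /* Arbitration sketch: combine tactical and strategic votes and pick
     * the winning steering candidate.  Combination rule is an assumption. */
    #define N_CANDIDATES 11

    int arbitrate(const double tactical[N_CANDIDATES],
                  const double strategic[N_CANDIDATES])
    {
        int    best       = 0;
        double best_score = -1.0;
        for (int c = 0; c < N_CANDIDATES; c++) {
            /* A near-zero tactical vote vetoes the candidate outright. */
            double score = (tactical[c] < 0.05) ? 0.0
                         : tactical[c] * strategic[c];
            if (score > best_score) { best_score = score; best = c; }
        }
        return best;   /* index of the winning steering candidate */
    }
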

Sensors

RANGER accommodates both laser rangefinder and stereo perception systems and it incorporates its own integrated stereo correlation algorithm. In either case, the design achieves significant increases in vehicle speeds without sacrificing either safety or robustness.

Adaptive Perception

Perception has long been acknowledged as the bottleneck in autonomous vehicle research. Yet, a moving vehicle generates images which contain much redundant information. Removal of this redundancy is the key to fast moving robot vehicles.

A new range image perception algorithm has been developed for RANGER. It selectively extracts a very small portion of each range image in order to reduce the perceptual throughput to a bare minimum. In this way, vehicle speed is limited less by computer speed.

The algorithm searches each image for a band of geometry that lies between two range extremes, called the range window. The figure below shows a range image of a hill in front of the vehicle; only the data between the white lines is processed by RANGER. The algorithm also accounts for vehicle speed by moving the range window farther out as speed increases.
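The sketch below illustrates the idea of the range window in C: keep only the pixels whose measured range falls inside a window, and push the window out with speed. The particular window sizing rule is an assumption for illustration.

    /* Adaptive perception sketch: keep only range pixels inside a
     * speed-dependent window.  Window sizing is an assumption. */
    #include <stddef.h>

    /* Returns the number of pixels kept for mapping. */
    size_t apply_range_window(const float *range, size_t n_pixels,
                              float speed, float latency, float depth,
                              float *kept)
    {
        /* Push the window out as speed grows: the vehicle must see at
         * least as far ahead as it travels during its reaction time. */
        float r_min = speed * latency;
        float r_max = r_min + depth;

        size_t n = 0;
        for (size_t i = 0; i < n_pixels; i++)
            if (range[i] >= r_min && range[i] <= r_max)
                kept[n++] = range[i];
        return n;
    }
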

The approach also stabilizes the sensor in software because the search for the data of interest adapts automatically to both the shape of the terrain and the attitude of the vehicle. It is up to 6000 times faster than traditional approaches and it achieves the throughput necessary for 20 m.p.h. motion on an ordinary computer workstation.

Position Estimation

RANGER incorporates a sophisticated Kalman Filter algorithm that merges the indications of all of the navigation sensors into a single consistent estimate of the vehicle position, attitude, and velocity. Any number of sensors in any combination can be accommodated, including wheel or transmission encoders, compasses, gyroscopes, accelerometers, doppler radar, inclinometers, terrain aids such as landmarks and beacons, and inertial and satellite navigation systems.
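For readers unfamiliar with the technique, the sketch below shows the shape of the predict/update cycle for a single scalar state fused from one noisy sensor. RANGER's filter estimates the full 3D position, attitude, and velocity from many sensors, so this is only the form of the computation, not the actual filter.

    /* Minimal scalar Kalman filter sketch (predict and update steps).
     * The single-state formulation is an illustrative simplification. */
    typedef struct {
        double x;   /* state estimate         */
        double p;   /* estimate variance      */
        double q;   /* process noise variance */
    } Kf1;

    /* Prediction: propagate the estimate and inflate its uncertainty. */
    void kf_predict(Kf1 *f, double u, double dt)
    {
        f->x += u * dt;        /* e.g. integrate a sensed rate */
        f->p += f->q * dt;
    }

    /* Update: blend in a measurement z with variance r. */
    void kf_update(Kf1 *f, double z, double r)
    {
        double k = f->p / (f->p + r);    /* Kalman gain */
        f->x += k * (z - f->x);
        f->p *= (1.0 - k);
    }

Each navigation sensor contributes its own update step, weighted by its measurement uncertainty, which is how arbitrary combinations of sensors can be accommodated.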

Implementation

While the real-time core of the system can be expressed in about 1000 lines of C, RANGER includes a complete simulation and development environment incorporating a data logger and simulators for natural terrain, vehicles, sensors, and pan/tilt mechanisms. Real-time animated graphics provide feedback to the human supervisor. A custom C language interpreter is used to configure and control the system at run-time.

Architecture

Implemented in C, the system is composed at the highest level of four objects. The Map Manager integrates all environmental sensor images into a single consistent world model. The Controller implements hazard detection and avoidance and goal-seeking, and arbitrates between them. The Vehicle encapsulates the state of the vehicle and provides dynamics simulation and feedforward. The Kalman Filter implements the position estimation system.

