16-735 Homework 1
Michael Dille
9/12/2007

 

1. Implement the bug algorithm of your choice, explain why you chose it, and describe the implementation.

I decided to first implement the Bug1 algorithm. I chose it because it seemed a logical place to start, after which I would attempt to implement the other bug algorithms. (As of this writing, those are not yet complete.) I was also particularly interested in figuring out how to implement wall-following for raster obstacles (which lack a mathematical boundary description), and Bug1 offered this opportunity.

I reused most of the graphical display from the previous assignment and developed a variation of the map description file I had also designed. The primary change was the addition of a new "r" field specifying an image file containing a rasterized obstacle map. The format now appears as:

[width] [height]
i [initx] [inity]
g [goalx] [goaly]
r [filename of raster map image file]
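As a concrete illustration, the format above can be read with a loader along these lines. This is a minimal Python sketch, not the actual loader from my implementation; the dictionary field names are my own.

```python
def parse_map(text):
    """Parse a map description in the format above into a dict.
    (Field names here are illustrative; the original loader is not shown.)"""
    m = {}
    lines = [ln.split() for ln in text.splitlines() if ln.strip()]
    # First line: [width] [height]
    m["width"], m["height"] = int(lines[0][0]), int(lines[0][1])
    for parts in lines[1:]:
        if parts[0] == "i":    # initial position
            m["init"] = (float(parts[1]), float(parts[2]))
        elif parts[0] == "g":  # goal position
            m["goal"] = (float(parts[1]), float(parts[2]))
        elif parts[0] == "r":  # rasterized obstacle map image
            m["raster"] = parts[1]
    return m
```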

Shown below are several examples of the algorithm's execution. The start and goal positions are given by stationary green and red circles, respectively. The robot is represented by a moving red circle, and its path is traced out in blue.

Successful run, robot radius = 3

Successful run, robot radius = 8

Failure (no path exists), robot radius = 3

Source code: Available here
Videos: Available here

This appears to compile and run fine on linux.andrew.cmu.edu, and should do the same on most other Linux systems. No attempt was made to target other OSes.

Discussion

I implemented the Bug1 algorithm fairly literally, but there are several subtleties that need to be dealt with, especially when using raster obstacle maps.

My implementation used a state machine to switch between the following phases:

  1. Start move to goal
  2. Move to goal
  3. Circumnavigate
  4. Circumnavigate to closest

The algorithm starts in phase 1. This phase determines the vector to the goal and computes a per-step displacement that skips none of the cells the line passes through (to prevent "flying over" thin obstacles) and that is guaranteed to reach the goal if followed long enough. This is done by clamping the larger of the horizontal and vertical displacement components to one pixel.
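The clamping described above can be sketched as follows. This is an illustrative Python version, not the original source; the function name is my own.

```python
def step_toward(pos, goal):
    """Compute the per-step displacement used in phase 1: the vector to
    the goal, rescaled so its larger component has magnitude one pixel.
    Stepping by this amount never jumps more than one pixel per axis,
    so no cell along the line is skipped."""
    dx, dy = goal[0] - pos[0], goal[1] - pos[1]
    m = max(abs(dx), abs(dy))
    if m == 0:
        return (0.0, 0.0)  # already at the goal
    return (dx / m, dy / m)
```

For example, a goal 10 pixels right and 5 pixels up yields a step of (1.0, 0.5).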

After one timestep in phase 1, the algorithm iterates phase 2, which adds this displacement to the current position until either the goal is reached or the current position is occupied by an obstacle. In the former case, it terminates successfully; in the latter, it estimates a vector representing the right-handed tangent direction along the obstacle.
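The phase-2 loop can be sketched as below, assuming an `occupied(x, y)` predicate over the (inflated) raster map; both the helper name and the tolerance are my own, and tangent estimation is omitted.

```python
import math

def move_to_goal(pos, goal, occupied, eps=1.0):
    """Phase-2 sketch: march toward the goal in clamped unit steps until
    either the goal is within eps or the next cell is an obstacle.
    Returns ("goal", pos) on success or ("hit", hit_point) on collision."""
    x, y = pos
    dx, dy = goal[0] - x, goal[1] - y
    m = max(abs(dx), abs(dy))
    if m == 0:
        return ("goal", pos)
    sx, sy = dx / m, dy / m  # clamped per-step displacement (phase 1)
    while True:
        if math.hypot(goal[0] - x, goal[1] - y) <= eps:
            return ("goal", (x, y))
        nx, ny = x + sx, y + sy
        if occupied(int(round(nx)), int(round(ny))):
            return ("hit", (x, y))  # last free position becomes the hit point
        x, y = nx, ny
```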

After phase 2, phase 3 is iterated to follow the edge of the obstacle until the original point at which the obstacle was struck (the "hit point") is reached again. It does this by using the tangent vector estimate to determine an angle at which to start a search for unvisited obstacle edge pixels. This search is performed radially, starting at the estimated tangent angle, until such a pixel is found. The current position is then updated to this location, the tangent vector is re-estimated from this most recent displacement, and the process repeats. At each point, the distance to the goal is also measured, and the point with the lowest distance is maintained.
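The radial search for the next edge pixel might look like the following. This is a hedged sketch rather than my actual routine: the angular step, search radius, and set-based representation of edge and visited pixels are all assumptions for illustration.

```python
import math

def next_edge_pixel(pos, tangent_angle, edges, visited, radius=1.0):
    """Phase-3 sketch: sweep angles outward from the estimated tangent
    angle (alternating sides) and return the first nearby unvisited edge
    pixel. `edges` and `visited` are assumed sets of (x, y) tuples."""
    for dtheta_deg in range(0, 360, 5):          # widen the sweep gradually
        for sign in (1, -1):                     # try both sides of the tangent
            theta = tangent_angle + sign * math.radians(dtheta_deg)
            px = int(round(pos[0] + radius * math.cos(theta)))
            py = int(round(pos[1] + radius * math.sin(theta)))
            if (px, py) in edges and (px, py) not in visited:
                return (px, py)
    return None  # no unvisited edge pixel nearby
```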

Phase 4 is entered when the obstacle following has brought the robot back to the original hit point. This phase performs obstacle edge following until the point along the obstacle nearest the goal (the "leave point") is reached. (If the hit point was the nearest, failure is returned, as no progress can be made.) The algorithm then re-transitions into phase 1; if, upon re-entering phase 2, an immediate re-collision with the obstacle is detected, failure is returned (no path exists). Otherwise, phase 2 moves closer to the goal, and the whole process repeats.

A few additional details pertaining to the map remain. First, to simulate a robot with finite radius and to more accurately express the desire to trace but not scrape against obstacles, obstacles in the map are first inflated by a configurable robot radius (for now, I am assuming the robot is always circular). To generate obstacle edge pixels as required by the wall-following implementation just described, one pass of a simple edge detection technique is performed: any obstacle pixel not completely surrounded (assuming 4-point connectivity) is treated as an edge. Though simple, this technique proved very effective and apparently quite robust.
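The two map-preparation passes described above (inflation by the robot radius, then 4-connectivity edge detection) can be sketched as below. This is an illustrative Python version operating on a list-of-rows grid of 0/1 values; the original implementation's data structures are not shown here.

```python
def inflate_and_edges(grid, r):
    """Inflate obstacles by integer robot radius r (circular robot
    assumed), then mark as edges all obstacle cells that are not
    completely surrounded under 4-point connectivity.
    `grid` is a list of rows of 0 (free) / 1 (obstacle)."""
    h, w = len(grid), len(grid[0])
    inflated = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if grid[y][x]:
                # Stamp a disc of radius r around each obstacle pixel.
                for dy in range(-r, r + 1):
                    for dx in range(-r, r + 1):
                        if dx * dx + dy * dy <= r * r:
                            ny, nx = y + dy, x + dx
                            if 0 <= ny < h and 0 <= nx < w:
                                inflated[ny][nx] = 1
    edges = set()
    for y in range(h):
        for x in range(w):
            if inflated[y][x]:
                # An obstacle cell with any free (or out-of-bounds)
                # 4-neighbor is an edge cell.
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if not (0 <= ny < h and 0 <= nx < w) or not inflated[ny][nx]:
                        edges.add((x, y))
                        break
    return inflated, edges
```

A single obstacle pixel with r = 1 inflates to a plus-shaped blob of five cells, of which the four arms (but not the center) become edge pixels.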