15-494/694 Cognitive Robotics: Lab 5
Learning Goal: This lab will introduce you to the internal
workings of the particle filter. You will then extend the particle
filter to handle landmarks more intelligently. For the latter
portions of the lab you will need a private copy of Tekkotsu.
Part 1: Setting Up the ParticleTest1 Demo
- Save the ParticleTest1 world
in your Tekkotsu/project/worlds directory. Here is the mirage file.
- Build the world by typing "WorldBuilder ParticleTest1.ian" in
your worlds directory.
- Start Mirage and examine the world. You'll see it matches the
particle filter demo world from the particle filter lecture notes.
- Save the ParticleTest1 demo
in your Tekkotsu/project directory and compile it.
- Read the source code for ParticleTest1 so you understand how it works.
Basically, it sets up a world map and then enters a loop. Every time the user
types "msg go" it performs a localization operation. The user can also
move the robot by typing PilotDemo commands.
Part 2: Running the Demo
- Run Tekkotsu and start the ParticleTest1 behavior.
- Look in the world map to see the six ellipses and a large number
of randomly distributed particles. The robot starts out at the origin
facing north, but it should have low confidence in its position since
the particles have huge variance. (The current Pilot code displays
the standard deviation of the particles along with the agent's
location after each localization step, but not initially.)
- Look at the SegCam display to see what the robot is looking at.
- Type "msg go" to trigger a localization operation, and examine
the effect on the particles.
- Repeat the localization two more times.
- Notice that the robot's location and heading, shown by the blue
triangle on the world map, are taken as the average of the particle
cloud. In the current situation that means the robot ends up far from
any particle. (A short standalone illustration of why appears after
this list of steps.)
- Notice there are four clusters of particles in the world map. At
either of the two southern locations, where the robot would be facing
north, it should see a green landmark on the left and a red landmark
on the right. For the two northern locations, where the robot is
facing south, the view is reversed: the red landmark should be on the
left and the green landmark on the right. The actual view in the
SegCam window shows the green landmark is on the left. So why aren't
the two northern clusters ruled out? We'll answer this question in
Part 3.
- Type the following sequence of commands and observe the effects
on the particle cloud. During the "fwd 3000" step, refresh the world
map several times and watch what the particle clusters do.
- msg R
- msg fwd 3000
- msg L
- msg go
- msg go
- Why are there two particle clouds instead of one?
- Repeat the R/fwd 3000/L/go/go command sequence one more time.
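To see concretely why averaging a multimodal particle cloud can place
the robot far from any particle, here is a short standalone C++
illustration (this is not Tekkotsu code; the Particle struct and the
numbers are made up). Two tight clusters on opposite sides of the map
average out to a pose in the empty space between them:

    #include <cstdio>
    #include <vector>

    // Minimal stand-in for a localization particle: just an (x,y) hypothesis.
    struct Particle { double x, y; };

    int main() {
      std::vector<Particle> particles;
      // Two tight clusters, one near y = +1000 mm and one near y = -1000 mm.
      for (int i = 0; i < 50; ++i) {
        particles.push_back(Particle{ 10.0*i - 250.0,  1000.0 });
        particles.push_back(Particle{ 10.0*i - 250.0, -1000.0 });
      }
      // The reported pose is the mean over all particles...
      double mx = 0, my = 0;
      for (const Particle &p : particles) { mx += p.x; my += p.y; }
      mx /= particles.size();
      my /= particles.size();
      // ...which comes out near (0,0), far from every particle in either cluster.
      std::printf("estimated pose: (%.1f, %.1f)\n", mx, my);
      return 0;
    }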
Part 3: Recognizing Landmark Configurations
In the classic particle filter scenario, the robot receives a
continuous stream of sensor readings (typically sonar or laser
rangefinder values) and updates the particles continually. Each
particle encodes a hypothesis about the robot's current position on
the map. Each particle is scored based on how well it predicts the
sensor data the robot is receiving. Particles whose predictions match
the data receive higher scores and are thus more likely to spawn new
particles when a resampling occurs. At the same time, particles'
hypotheses (but not their scores) are updated by odometry as the robot
moves. So there are two kinds of updates: sensor updates, and
odometry updates.
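To make the two kinds of updates concrete, here is a minimal,
self-contained 1-D sketch in plain C++ (this is not the Tekkotsu
implementation; all names, noise levels, and sigmas are invented for
illustration). A sensor update rescores each particle by how well it
predicts a range reading, resampling favors high-scoring particles,
and an odometry update shifts every hypothesis by the commanded motion
plus noise without touching the scores:

    #include <cmath>
    #include <random>
    #include <vector>

    struct Particle { double x; double weight; };   // 1-D pose hypothesis and its score

    // Sensor update: score each particle by how well it predicts the measured
    // range to a wall at a known position, using a gaussian likelihood.
    void sensorUpdate(std::vector<Particle> &ps, double measuredRange,
                      double wallX, double sigma) {
      for (Particle &p : ps) {
        double err = measuredRange - (wallX - p.x);
        p.weight *= std::exp(-(err*err) / (2*sigma*sigma));
      }
    }

    // Resampling: draw a new particle set, favoring high-weight particles.
    void resample(std::vector<Particle> &ps, std::mt19937 &rng) {
      std::vector<double> w;
      for (const Particle &p : ps) w.push_back(p.weight);
      std::discrete_distribution<size_t> pick(w.begin(), w.end());
      std::vector<Particle> next;
      for (size_t i = 0; i < ps.size(); ++i) {
        Particle p = ps[pick(rng)];
        p.weight = 1.0;                     // weights reset after resampling
        next.push_back(p);
      }
      ps.swap(next);
    }

    // Odometry update: shift every hypothesis by the commanded motion plus
    // noise; the weights are left alone.
    void odometryUpdate(std::vector<Particle> &ps, double dx, std::mt19937 &rng) {
      std::normal_distribution<double> noise(0.0, 5.0);
      for (Particle &p : ps) p.x += dx + noise(rng);
    }

    int main() {
      std::mt19937 rng(42);
      std::vector<Particle> ps(200, Particle{0.0, 1.0});
      std::uniform_real_distribution<double> spread(-2000.0, 2000.0);
      for (Particle &p : ps) p.x = spread(rng);        // initial uncertainty
      sensorUpdate(ps, /*measuredRange*/ 1500.0, /*wallX*/ 2000.0, /*sigma*/ 100.0);
      resample(ps, rng);
      odometryUpdate(ps, /*dx*/ 300.0, rng);
      return 0;
    }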
Tekkotsu works a little differently because it relies on complex
visual landmarks. These take time to extract from the image, and may
require the robot to hold still to avoid motion blur. Thus, while the
Tekkotsu particle filter makes frequent odometry updates, it makes
sensor updates only occasionally, when requested by the Pilot (which
makes sure the robot isn't moving).
To update a particle's score, we look in the local map for a
description of what the robot currently sees. We then compare this
with what the world map says the robot should be seeing if its
position and heading were what the particle indicates. The better the
match, the better the particle's score.
Currently, the matching of local map shapes to world map shapes is
based on individual landmark type (e.g., ellipse, AprilTag, etc.),
landmark color, and landmark distance and bearing. Type and color
must match exactly; distance and bearing use fuzzy matching with
gaussian tuning curves. Since landmarks are matched independently,
there is no way to express pairwise relationships such as "the green
landmark is to the left of the red landmark".
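As a rough picture of what "fuzzy matching with gaussian tuning
curves" means, here is a small standalone sketch (not the actual
ShapeSensorModel code; the sigmas and function names are
hypothetical). Type and color are hard tests; distance and bearing
each contribute a factor that is 1 for a perfect match and falls off
smoothly with the discrepancy, and each landmark is scored on its own
with no reference to any other landmark:

    #include <cmath>

    // Hypothetical gaussian tuning curve: 1.0 for a perfect match,
    // falling off smoothly as the discrepancy grows.
    double tuningCurve(double discrepancy, double sigma) {
      return std::exp(-(discrepancy*discrepancy) / (2*sigma*sigma));
    }

    // Hypothetical per-landmark match score: type and color must match
    // exactly; distance and bearing are matched fuzzily and independently.
    double landmarkMatchScore(int localType, int worldType,
                              int localColor, int worldColor,
                              double localDist, double predictedDist,
                              double localBearing, double predictedBearing) {
      if (localType != worldType || localColor != worldColor)
        return 0.0;                                     // exact match required
      double distScore    = tuningCurve(localDist - predictedDist, 200.0 /*mm*/);
      double bearingScore = tuningCurve(localBearing - predictedBearing, 0.5 /*rad*/);
      return distScore * bearingScore;
    }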
Here is how you will fix this problem:
- Examine the file Tekkotsu/Localization/ShapeSensorModel.cc. The
LocalShapeEvaluator class handles sensor updates to the particles.
Most of the work is done in the
LocalShapeEvaluator::evaluateWorkhorse() method. What is the
mathematical calculation used for matching local map ellipse landmarks
to world map ellipse landmarks?
- Although bearing information is taken into account when matching
ellipses, in our ParticleTest1 world the red and green landmarks are
fairly close together, and the bearing gaussian is broadly tuned, so
the bearing discrepancies in the reversed views don't significantly
affect the particle score. This is why we see four particle clusters
instead of two.
- Devise a modification to the evaluator that looks at pairs of
local landmarks to help adjust the particle's score. It should do
this after each local landmark has been bound to a particular world
landmark. You can come up with your own algorithm, but here's a
suggestion to get you started (a rough sketch also follows this
list). Consider every pair of local landmarks (but if you examine
A+B, don't also look at B+A). Compute the relative bearing of each
landmark (use atan2) and look at their difference; this will be an
AngSignPi quantity since the order of the landmarks matters. Compare
the bearing difference of the local landmarks with the predicted
bearing difference of the corresponding world landmarks. If the
magnitude of the bearing difference is between 10 and 170 degrees
(converted to radians), and the local and predicted differences have
opposite signs, give the particle a low score; otherwise adjust the
score based on a reasonably broad gaussian tuning curve.
- Test your algorithm in the ParticleTest1 world.
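Here is one rough shape such a pairwise check could take, written as a
standalone C++ sketch rather than a patch to evaluateWorkhorse() (the
thresholds, penalty, and sigma are placeholders you would tune, and
the local/world vectors stand in for landmark pairs that have already
been bound to each other):

    #include <cmath>
    #include <vector>

    const double PI = 3.14159265358979323846;

    // Landmark position relative to the robot (for local shapes) or relative
    // to the particle's hypothesized pose (for the predicted world shapes).
    struct Landmark { double x, y; };

    // Signed bearing of a landmark, in (-pi, pi].
    double bearingTo(const Landmark &lm) { return std::atan2(lm.y, lm.x); }

    // Wrap an angle into (-pi, pi], mimicking an AngSignPi quantity.
    double wrapSignPi(double a) {
      while (a >   PI) a -= 2*PI;
      while (a <= -PI) a += 2*PI;
      return a;
    }

    // Factor to multiply into the particle's score based on pairwise
    // bearing consistency.  local[i] and world[i] are assumed to be
    // already-matched landmark pairs.
    double pairwiseFactor(const std::vector<Landmark> &local,
                          const std::vector<Landmark> &world) {
      const double lo = 10.0 * PI/180.0, hi = 170.0 * PI/180.0;
      const double sigma = 0.5;                 // placeholder tuning width (radians)
      double factor = 1.0;
      for (size_t i = 0; i < local.size(); ++i)
        for (size_t j = i+1; j < local.size(); ++j) {     // examine A+B but not B+A
          double dLocal = wrapSignPi(bearingTo(local[i]) - bearingTo(local[j]));
          double dWorld = wrapSignPi(bearingTo(world[i]) - bearingTo(world[j]));
          bool reversed = std::fabs(dLocal) > lo && std::fabs(dLocal) < hi &&
                          dLocal * dWorld < 0;            // opposite signs
          if (reversed)
            factor *= 1e-3;                               // heavy penalty
          else {
            double err = wrapSignPi(dLocal - dWorld);
            factor *= std::exp(-(err*err) / (2*sigma*sigma));
          }
        }
      return factor;
    }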
Part 4: Localization with Dominoes
The current LocalShapeEvaluator does not include support for domino
landmarks, which were only recently added to Tekkotsu. Dominoes are
interesting because they have orientations, unlike ellipses and
cylinders which are circularly symmetric. Thus, a single domino
landmark can provide both position and heading information.
Extend the LocalShapeEvaluator to support domino landmarks.
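As a hint about why orientation matters, here is a standalone sketch
(not the extension itself; the function and parameter names are
invented): if the world map says a domino's long axis points in
direction thetaWorld, and the robot observes that axis at thetaLocal
in its own egocentric frame, then a correct particle's heading should
make thetaLocal + heading line up with thetaWorld, and the residual
can be scored with the same kind of gaussian tuning curve used for
distance and bearing:

    #include <cmath>

    const double PI = 3.14159265358979323846;

    // Wrap an angle into (-pi, pi].
    double wrapSignPi(double a) {
      while (a >   PI) a -= 2*PI;
      while (a <= -PI) a += 2*PI;
      return a;
    }

    // Hypothetical heading-consistency score for one oriented landmark.
    //   thetaWorld:      landmark orientation in world (allocentric) coordinates
    //   thetaLocal:      landmark orientation observed in the robot's local frame
    //   particleHeading: the heading this particle hypothesizes for the robot
    double orientationScore(double thetaWorld, double thetaLocal,
                            double particleHeading, double sigma) {
      double err = wrapSignPi(thetaWorld - (thetaLocal + particleHeading));
      return std::exp(-(err*err) / (2*sigma*sigma));
    }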
What to Hand In
Supply the following in a zip file:
- A brief writeup describing the evaluation function you
developed in part 3, and what you think its strengths and weaknesses are.
- Your version of ShapeSensorModel.cc and any other files you
modified to complete the assignment.
- Representative screenshots showing how your particle filter performs
in the ParticleTest1 world and in a domino world that you construct.
- The source file for your domino world and domino test program.
This is a more complex assignment than previous labs, so you have two
weeks to complete it, but don't wait until the last minute to start on
it!
Due Friday, February 27, 2015.