15-494/694 Cognitive Robotics: Lab 5
Learning Goal: This lab will introduce you to the Tekkotsu
Pilot and use of a particle filter for localization.
Part 1: Odometry on the Real Robot
- This part should be done as a group for efficiency's sake. Move
one of the robots to the floor. You'll want to hook up an Ethernet
cable because WiFi doesn't work well in the lab. Fold up the arm so
it doesn't bump into the walls.
- Run the PilotDemo behavior in Root Control > Framework Demos
> Navigation Demos > PilotDemo.
- Open the world SketchGUI and observe the robot (a blue triangle)
and the initial particle cloud.
- Use the PilotDemo commands to drive the robot around. Have it
move forward by a meter. What is the robot's position estimate?
- Use a tape measure to measure the actual distance the robot
traveled. Do this several times. How accurate is the robot's
translational odometry?
- Have the robot turn 90 degrees left and then 90 degrees right.
How close does it end up to its original heading?
- How does the particle cloud move when the robot moves?
- When you're done, put the robot back on the table and hook it up
to the charger.
Part 2: TagCourse Demo
- Read the wiki page on the TagCourse Maze.
- Start Mirage using TagCourse.mirage and explore the world.
Notice the green dot at the origin, which is the robot's starting
point, and the red dot that marks the intended stopping point.
- Run Tekkotsu in Mirage.
- Park the simulated robot's arm by going to Root Control > File
Access > Load Posture > calliope2sp and loading park1.pos, then
park2.pos.
- Run Root Control > Framework Demos > Navigation Demos >
TagCourse. Display the world map and watch how the robot adds
AprilTags to the map as it goes. Also notice how the particle cloud
slowly disperses as the robot moves around the world.
- Study the source code at
/usr/local/Tekkotsu/Behaviors/Demos/Navigation/TagCourse.cc.fsm
to understand how the behavior is implemented.
- TagCourse is built on top of PilotDemo, so you can use
PilotDemo commands to control the robot. Now that the robot has found
all the AprilTags, let's use the tags as landmarks to demonstrate
localization. Take screenshots as you go.
- Type "msg rand" to randomize the particles, and refresh the world map.
- Drive the robot to a new position somewhere within the course by
using PilotDemo commands such as "F" to go forward and "L" to turn
left. Refresh the world map as the robot moves. Note that both the
robot's position and the particles' positions continue to update via
odometry.
- Use the head controller to move the camera so that no AprilTags
are visible.
- Type "msg loc" to tell the Pilot to localize. The resulting
particle cloud may still be somewhat diffuse. Type "msg loc" again
and see what happens.
- The Pilot needs to see at least two landmarks to localize, but it
can get better results if it sees more landmarks. Use the head
controller to point the camera at a different pair of AprilTags and
try "msg loc" again. What is the effect on the particle cloud?
Part 3: Improvements to TagCourse
- If the robot gets slightly off course, either due to inaccurate
motion or a minor collision, it might see more than two AprilTags the
next time it examines its camera image. The tags it's "supposed" to
see will be near the center of the image, but there may be other tags
visible at the edges as a result of the heading being off. Currently,
the TagCourse behavior refuses to move if there aren't exactly two
tags visible.
- Verify this behavior in Mirage. Start with the robot at the
origin (press "r" in the Mirage window to reload). Run Root Control
> Framework Demos > Navigation Demos > PilotDemo and give the
command "msg l" to turn 10 degrees to the left. Then run the
TagCourse demo. Examine the representations in both the world and
camera shape spaces.
- Modify TagCourse to make it more robust. It should not be thrown
off by tags at the edges of the camera image if it can see two good
tags near the center of the image; one possible filtering approach is
sketched below. Note: to test your modified code you must change the
name of the behavior and edit the REGISTER_BEHAVIOR call so that you
don't clash with the built-in TagCourse demo.
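For reference, here is a minimal sketch of one way to select only the
well-centered tags. It assumes camera-space centroids are in image
pixel coordinates (verify this in the camera SketchGUI); the function
name, the xres parameter, and the margin fraction are made up for
illustration, not part of Tekkotsu:

    #include <cmath>
    #include <vector>
    #include "DualCoding/DualCoding.h"
    using namespace DualCoding;

    // Hypothetical helper: keep only the AprilTags whose centroid lies
    // within a horizontal band around the image center.
    std::vector<Shape<AprilTagData> >
    centeredTags(float xres, float marginFraction = 0.25f) {
      std::vector<Shape<AprilTagData> > tags =
        select_type<AprilTagData>(VRmixin::camShS);
      std::vector<Shape<AprilTagData> > keep;
      for ( size_t i = 0; i < tags.size(); i++ ) {
        float x = tags[i]->getCentroid().coordX();
        if ( std::fabs(x - xres/2) < marginFraction * xres )
          keep.push_back(tags[i]);    // near the center: a "good" tag
      }
      return keep;  // proceed with the move only if keep.size() == 2
    }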
Part 4: Particle Filter Bingo
In this exercise you're going to simulate how a particle filter uses
sensor readings to update the particle weights. You may want to refer
to the following source files:
- /usr/local/Tekkotsu/Shared/ParticleFilter.h
- /usr/local/Tekkotsu/Localization/ShapeBasedParticleFilter.cc
- /usr/local/Tekkotsu/Localization/ShapeSensorModel.cc
Consider a world where we have several red and green cylinders
scattered around. These are our landmarks. Assume the landmarks are
visually indistinguishable except by color. Let's have two of each
color. Assign each landmark a unique name of the form LM1, LM2, etc.
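As a concrete starting point, the landmark setup might look like the
sketch below, run inside a DualCoding behavior. The positions, height,
and radius are made-up illustrative values, and the CylinderData
constructor arguments should be checked against CylinderData.h. Note
that the NEW_SHAPE macro names each shape after its variable, so LM1
through LM4 get their names automatically.

    // Sketch: two red and two green landmark cylinders in world shape
    // space.  Coordinates are allocentric millimeters.
    NEW_SHAPE(LM1, CylinderData,
              new CylinderData(VRmixin::worldShS, Point( 500,  500, 0, allocentric), 150, 50));
    NEW_SHAPE(LM2, CylinderData,
              new CylinderData(VRmixin::worldShS, Point(1500,  400, 0, allocentric), 150, 50));
    NEW_SHAPE(LM3, CylinderData,
              new CylinderData(VRmixin::worldShS, Point( 400, 1500, 0, allocentric), 150, 50));
    NEW_SHAPE(LM4, CylinderData,
              new CylinderData(VRmixin::worldShS, Point(1600, 1600, 0, allocentric), 150, 50));
    LM1->setColor(rgb(255,0,0));  LM2->setColor(rgb(255,0,0));  // the red pair
    LM3->setColor(rgb(0,255,0));  LM4->setColor(rgb(0,255,0));  // the green pair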
If the robot can start out at any location with any heading, we should
have localization particles uniformly distributed throughout the
world. You can achieve this using Tekkotsu's built-in particle filter
by writing particleFilter->resetFilter(). Note that if
you have defined a polygon named worldBounds, the
particles will be restricted to lie within it. See the source code
for the Emaze demo for an example of how to do this. Set up a square
world two meters on a side.
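A sketch of that setup, assuming a closed-polygon constructor like the
one used in the Emaze demo (check PolygonData.h for the exact
signature):

    // Sketch: a 2 m x 2 m square worldBounds polygon with one corner at
    // the origin, then a uniform reset of the localization particles.
    std::vector<Point> corners;
    corners.push_back(Point(   0,    0, 0, allocentric));
    corners.push_back(Point(2000,    0, 0, allocentric));
    corners.push_back(Point(2000, 2000, 0, allocentric));
    corners.push_back(Point(   0, 2000, 0, allocentric));
    NEW_SHAPE(worldBounds, PolygonData,
              new PolygonData(VRmixin::worldShS, corners, true));  // true = closed
    particleFilter->resetFilter();  // particles now uniform inside worldBounds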
At the beginning of the simulation, your program should pick a random
location and orientation for the robot somewhere within the world
bounds. You can set the robot's pose in the world by calling
pilot->setAgent(...). Before you do this, you'll need to disable
the Pilot's auto-updating of the agent position. Do it this way:
pilot->getChild("RunMotionModel")->stop()
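Putting those two calls together, the setup step might look like the
sketch below; frand is a hypothetical helper, and the setAgent
argument list shown is an assumption to be checked against Pilot.h:

    #include <cstdlib>
    #include <cmath>

    // Hypothetical helper: uniform random float in [lo, hi).
    static float frand(float lo, float hi) {
      return lo + (hi - lo) * (rand() / (RAND_MAX + 1.0f));
    }

    // Stop the Pilot's odometry-based updating of the agent, then
    // teleport the robot to a random pose inside the 2 m square world.
    pilot->getChild("RunMotionModel")->stop();
    float x = frand(0, 2000), y = frand(0, 2000);  // millimeters
    AngTwoPi heading(frand(0, 2*M_PI));            // radians
    pilot->setAgent(Point(x, y, 0, allocentric), heading);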
At every step of the bingo game we're going to take one sensor reading
and update the particle weights. The user specifies a landmark name
such as LM3 via a text message. Your code should clear the local
shape space, compute the bearing and distance of that landmark from
the robot, add a small amount of random noise, and use the result to
create an appropriate cylinder in local shape space at those
coordinates, representing the sensor reading.
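One simulated reading might look like the sketch below, assuming the
robot's true pose is kept in variables robotX, robotY, and
robotHeading, that lm is the Shape<CylinderData> the user named, and
reusing the frand helper from the previous sketch; the noise
magnitudes are illustrative guesses:

    // Sketch: clear local shape space, compute a noisy range and bearing
    // to landmark lm from the robot's true pose, then create a matching
    // cylinder in local (egocentric) coordinates as the "sensor reading".
    VRmixin::localShS.clear();
    float dx = lm->getCentroid().coordX() - robotX;
    float dy = lm->getCentroid().coordY() - robotY;
    float range   = std::sqrt(dx*dx + dy*dy) + frand(-20, 20);  // mm
    float bearing = std::atan2(dy, dx) - robotHeading
                    + frand(-0.05f, 0.05f);                     // radians
    Point readingLoc(range*std::cos(bearing), range*std::sin(bearing),
                     0, egocentric);
    NEW_SHAPE(reading, CylinderData,
              new CylinderData(VRmixin::localShS, readingLoc, 150, 50));
    reading->setColor(lm->getColor());  // match the landmark's color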
Once you've created the cylinder in local space you can call the
particle filter's updateSensors method to update the particle weights
based on sensor readings, by doing
particleFilter->updateSensors(particleFilter->getSensorModel(),false,false)
Then you can update the particle display in the world map by doing
particleFilter->displayParticles(...). Try displaying
500 particles instead of the usual 50 or 100.
Write your own method colorParticles() that sets the colors of all
the localization particle graphic elements in the world shape space,
using the jet color map, so that particles with the lowest weights are
dark blue, and those with the highest weights are bright red. To
learn how to generate a jet color map, see this file:
- /usr/local/Tekkotsu/tools/mon/org/tekkotsu/mon/TCPVisionListener.java
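A piecewise-linear approximation of the jet map is easy to compute
directly, as in the sketch below. The weight normalization and the
particles accessor are assumptions (Tekkotsu stores log weights; see
ParticleFilter.h), and the actual recoloring is left as a comment
because the graphic-element interface should be checked in
GraphicsData.h:

    #include <algorithm>
    #include <cmath>

    // One channel of the jet map: a trapezoid clamped to [0,1].
    static float jetChannel(float t) {
      return std::min(1.0f, std::max(0.0f, 1.5f - std::fabs(t)));
    }

    // Map v in [0,1] to a jet color: dark blue (low) through red (high).
    static rgb jetColor(float v) {
      return rgb((unsigned char)(255 * jetChannel(4*v - 3)),
                 (unsigned char)(255 * jetChannel(4*v - 2)),
                 (unsigned char)(255 * jetChannel(4*v - 1)));
    }

    void colorParticles() {
      const std::vector<LocalizationParticle> &parts =
        particleFilter->particles;   // assumed accessor; see ParticleFilter.h
      if ( parts.empty() ) return;
      float wmin = parts[0].weight, wmax = parts[0].weight;
      for ( size_t i = 1; i < parts.size(); i++ ) {
        wmin = std::min(wmin, (float)parts[i].weight);
        wmax = std::max(wmax, (float)parts[i].weight);
      }
      for ( size_t i = 0; i < parts.size(); i++ ) {
        float v = (wmax > wmin)
                  ? ((float)parts[i].weight - wmin) / (wmax - wmin) : 0.0f;
        rgb color = jetColor(v);
        // ... apply color to the i-th particle graphic element in the
        // world shape space GraphicsData (interface in GraphicsData.h).
      }
    }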
When the bingo game begins, all the particles are black. Suppose your
first sensor reading says there's a red cylinder at a certain bearing
and distance. You can't know which red cylinder you're looking at,
and you don't know your heading, so particles around both red
cylinders at approximately the correct distance should get high
weights. As additional sensor readings come in, only some particles
will be compatible with both readings, so the number of highly
weighted particles should decrease.
After the third sensor reading, try calling
particleFilter->resample(). Then redisplay the particles
and see what happens.
What to Hand In
Finish the lab for homework.
- For Part 1, hand in your measurements.
- For Part 2, hand in your screen shots and a description of what you observed.
- For Part 3, hand in your modified TagCourse source code and a brief description
of what changes you made.
- For Part 4, hand in your source code and some screenshots showing
the initial world SketchGUI state and the result after each sensor reading.
Due Friday, February 26, 2016.