15-494/694 Cognitive Robotics: Lab 5
I. Software Update and Initial Setup
- If you are running on your personal laptop you will need to
update your copy of the cozmo-tools package. (The REL workstation
copies are kept updated by course staff.) To apply the update,
assuming you put the cozmo-tools package in /opt, do this:
$ cd /opt/cozmo-tools
$ sudo git pull
- For this lab you will need a robot, a charger, a Kindle, and some
light cubes.
- Log in to the workstation.
- Make a lab5 directory.
- Put the robot on the desktop facing sideways (away from the edge).
- Put the cubes behind the robot so he can't see them.
- Connect the Kindle to the robot and start simple_cli.
- Make sure the robot is in radio contact with the cubes even though he can't
see them. Type "cube1" in simple_cli and it should show up.
- Type "show viewer" in simple_cli so you can see what the robot sees. He should
not be seeing any cubes.
II. Examining Cozmo's Path Planner
You can do this portion of the lab with a partner. Cozmo includes a
built-in path planner that is used with the go_to_pose
action. In this portion of the lab we will investigate the path
planner's behavior by constructing a maze from FixedCustomObjects.
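For reference, this is roughly how such obstacles are created through the
SDK's create_custom_fixed_object call. The dimensions and pose below are
made up for illustration; Lab5.fsm defines its own maze:

    import cozmo
    from cozmo.util import Pose, degrees

    async def make_wall(robot: cozmo.robot.Robot):
        # Illustrative only: one thin wall (10 x 300 x 100 mm) placed
        # 200 mm out from the origin and rotated 90 degrees. The path
        # planner will route around it.
        return await robot.world.create_custom_fixed_object(
            Pose(200, 0, 0, angle_z=degrees(90)),
            x_size_mm=10, y_size_mm=300, z_size_mm=100)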
- Download the file Lab5.fsm and read it.
- Compile and run Lab5 and examine the obstacles in the world
viewer.
- Take some tape and mark out the obstacles on the tabletop.
Also mark the destination location.
- Run Lab5 and type "tm" to fire the =TM=> transition and send the
robot on its way.
- Q1: Does Cozmo reach the Lab5 destination without crashing
through any walls?
- Q2: If you make the doorway to the right narrower, will Cozmo still
go through it? How narrow can it get?
- Extend the maze with some additional walls so that Cozmo has to
go through three separate doorways to reach the goal. Q3: Can
he get there?
- Q4: What happens if Cozmo is surrounded by obstacles and
cannot reach the goal?
- Q5: Can cubes be used as obstacles to alter Cozmo's path?
How far will he go to avoid a cube?
III. Using Cubes as Landmarks
You can do this portion of the lab with a partner.
The particle filter demo has been rewritten to use a Pose object to
describe landmark poses, and to accept either cubes or ArUco tags as
landmarks. There is a working demo in
/opt/cozmo-tools/cozmo_fsm/examples/PF_Aruco.py using the
new landmark format.
Cubes have an orientation given by cube.pose.rotation.angle_z,
and this allows the cube to act like a compass, providing additional
location information.
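With cubes as landmarks, the landmark list takes a form something like
the sketch below. The coordinates are illustrative, and PF_Cube.py
defines the real one; this assumes a connected robot object:

    from cozmo.util import Pose, degrees
    from cozmo.objects import LightCube1Id, LightCube2Id

    # Illustrative sketch: the cube objects themselves serve as landmark
    # ids, each mapped to that cube's pose on our map (coordinates made up).
    cube1 = robot.world.light_cubes[LightCube1Id]
    cube2 = robot.world.light_cubes[LightCube2Id]
    landmarks = {
        cube1 : Pose( 55, 160, 0, angle_z=degrees(90)),
        cube2 : Pose(160,  55, 0, angle_z=degrees(0)),
    }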
Clarification:
To understand the difference between bearing and orientation, imagine
the robot looking at a cube straight ahead. If the robot rotates in
place, the cube's bearing changes, but its orientation does not. Now
imagine the robot standing still and the cube rotating in place: the
cube's bearing remains "straight ahead" but its orientation is
changing.
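In code, the distinction looks roughly like this (a sketch; the helper
names are ours, the poses are the SDK's):

    from math import atan2

    def cube_bearing(robot, cube):
        # Bearing: direction to the cube relative to the robot's heading.
        # Changes when the robot moves or turns, not when the cube spins.
        dx = cube.pose.position.x - robot.pose.position.x
        dy = cube.pose.position.y - robot.pose.position.y
        return atan2(dy, dx) - robot.pose.rotation.angle_z.radians

    def cube_orientation(cube):
        # Orientation: the cube's own rotation about z. Changes when the
        # cube spins, not when the robot moves.
        return cube.pose.rotation.angle_z.radians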
When looking at a cube head-on, the marker image is a perfect square.
If the cube is rotated a bit, the marker becomes a rhombus. This is
how cube orientation is determined. But we're getting the cube's
orientation from the Cozmo SDK, which reports orientation relative to
its world map, not the robot's camera. So no matter what the robot
does or how badly the particle filter messes up, as long as no one
bumps the cube, its orientation as reported by the SDK will never change.
This isn't a good simulation of the orientation information the
robot's sensor is actually providing.
To fix this, our orientation "sensor" should be reporting orientation
relative to the robot rather than relative to the SDK's internal world
map. If we take cube.pose.rotation.angle_z and subtract
the robot's heading, we end up with the cube's orientation relative to the
robot. With relative orientation, rotating the cube will change its
orientation but not its bearing; rotating the robot will change both
the cube orientation and the bearing. Translating the robot in any
direction other than directly toward/away from the cube will also
change the cube bearing. Thus, different locations on the map are
usually associated with different bearings, while different headings
are associated with different orientations (and also different
bearings).
Since each particle can make a different prediction about the cube's
relative orientation, it can be scored based on how well its
prediction matches the orientation sensor value.
So we need to subtract robot heading from the SDK's reported cube
orientation to recover the "sensed" orientation. But where do we get
the robot's heading from? We should use
robot.pose.rotation.angle_z because that's in the SDK's
coordinate system. It may not match the particle filter's heading
estimate or the orientation of our own world map, but it's how the SDK
is turning relative orientation into absolute orientation, which is
what we want to undo to derive our sensor value.
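In other words, the sensor value can be recovered with something like
this sketch (the angle-wrapping helper is ours):

    from math import atan2, sin, cos

    def wrap_angle(a):
        # Wrap to [-pi, pi) so angle differences compare sensibly.
        return atan2(sin(a), cos(a))

    def sensed_cube_orientation(robot, cube):
        # Subtract the SDK's robot heading from the SDK's cube
        # orientation to recover orientation relative to the robot:
        # this is our orientation "sensor" value.
        return wrap_angle(cube.pose.rotation.angle_z.radians
                          - robot.pose.rotation.angle_z.radians)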
Note that you should only use a cube's information in the sensor model
if the cube's is_visible attribute is currently true.
- Download the file PF_Cube.py, which
replaces the four ArUco markers with the three cubes. For cubes,
instead of the "id" in the landmark list being a marker number, the
cube object itself serves as the id. This program doesn't work
because we don't have a cube sensor model.
- Write CubeBrgOriSensorModel to use both bearing and orientation
cues; a hypothetical skeleton appears after this list. If the robot is
looking at a single cube, the particles should arrange themselves on a
line of constant bearing, with the correct heading.
- Test your sensor model with PF_Cube.
- Write CubeCombinedSensorModel to use bearing, orientation, and
distance cues, and test your sensor model with PF_Cube.
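To get you started, here is a hypothetical skeleton for
CubeBrgOriSensorModel. The interface (an evaluate method that adjusts
particle log weights, plus self.robot and self.landmarks attributes) is
an assumption patterned on the ArUco sensor models in
cozmo_fsm/particle.py; check that file for the actual signatures, and
treat the variances as placeholders:

    from math import atan2, sin, cos
    from cozmo_fsm.particle import SensorModel

    def wrap_angle(a):
        # Wrap to [-pi, pi) so angle differences compare sensibly.
        return atan2(sin(a), cos(a))

    class CubeBrgOriSensorModel(SensorModel):
        # Hypothetical skeleton: attribute and method names are guesses
        # modeled on the ArUco sensor models; verify against particle.py.
        def evaluate(self, particles, force=False):
            robot_hdg = self.robot.pose.rotation.angle_z.radians
            for cube, lm_pose in self.landmarks.items():
                if not cube.is_visible:   # only use cubes we can see now
                    continue
                # Sensed bearing and relative orientation from SDK poses.
                dx = cube.pose.position.x - self.robot.pose.position.x
                dy = cube.pose.position.y - self.robot.pose.position.y
                sensed_brg = wrap_angle(atan2(dy, dx) - robot_hdg)
                sensed_ori = wrap_angle(cube.pose.rotation.angle_z.radians
                                        - robot_hdg)
                for p in particles:
                    # Each particle predicts the same quantities from its
                    # own (x, y, theta) and the landmark pose on our map.
                    pred_brg = wrap_angle(atan2(lm_pose.position.y - p.y,
                                                lm_pose.position.x - p.x)
                                          - p.theta)
                    pred_ori = wrap_angle(lm_pose.rotation.angle_z.radians
                                          - p.theta)
                    # Penalize squared prediction error (placeholder variances).
                    p.log_weight -= (wrap_angle(sensed_brg - pred_brg)**2 / 0.1 +
                                     wrap_angle(sensed_ori - pred_ori)**2 / 0.1)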
IV. Not-SLAM
Do this part on your own, not with a partner.
SLAM is Simultaneous Localization and Mapping. We will experiment
with a particle SLAM algorithm in a later lab. For now, let's explore
a simpler approach.
We can construct a map by adding cubes to it the first time each
cube is seen. Simply take the cube's current pose and insert it into
the landmark list.
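Assuming the landmark list is a dict keyed by cube objects, as in
PF_Cube.py, the map-building step can be as simple as:

    # Add a cube to the map the first time it is seen, using its current
    # SDK pose; after this the entry is never updated again.
    if cube.is_visible and cube not in landmarks:
        landmarks[cube] = cube.pose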
- Write a program NotSLAM.py that uses CubeSLAMSensorModel, which is like
CubeCombinedSensorModel except that if it sees a cube that's not yet in the
landmark list, it adds it to the list.
- Q6: How does your particle filter perform with a single
cube serving as a landmark?
- Q7: Suppose you have two cubes available as landmarks.
What difference does this make?
- The reason this approach is not considered "SLAM" is that
map-building is not ongoing. Once a cube is placed on the map, it
never moves again. Q8: What happens if you displace one of the
two landmark cubes by 50 millimeters? What effect does this have on the
particle filter's behavior?
V. 15-694 Problems
Effects of Cube Rotation. Suppose a cube is rotated in place
rather than translated. Q9: What effect does cube rotation
have on the particle filter if the robot can only see a single cube?
What happens if it is using two cubes as landmarks, and only one is
rotated?
Detecting Landmark Changes. Write code for NotSLAM to detect
when a cube has been substantially moved and update the landmark map
accordingly. There are several ways you might do this. One way is to
look for a large, sudden increase in the prediction error for the
cube's location or orientation.
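One possible shape for that check, with an illustrative threshold and
bookkeeping of our own invention rather than a prescribed design:

    ERROR_JUMP_MM = 50.0   # illustrative threshold; tune experimentally

    def maybe_remap(landmarks, last_error, cube, error_mm):
        # If this cube's location prediction error jumps suddenly, assume
        # the cube was moved and refresh its entry on the map.
        prev = last_error.get(cube, error_mm)
        if error_mm - prev > ERROR_JUMP_MM:
            landmarks[cube] = cube.pose
        last_error[cube] = error_mm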
Hand In
Collect all your code and your answers to questions Q1 through Q8
(through Q9 for 15-694 students) in a zip file.
Hand in your work through AutoLab by Friday Feb. 24.