15-494/694 Cognitive Robotics Lab 6: RRT Demo and Learning Color Classes
I. Software Update, SDK Update, and Initial Setup
Note: You can do this lab/homework assignment either
individually or in teams of two.
At the beginning of every lab you should update your copy of the
cozmo-tools package. Do this:
$ cd ~/cozmo-tools
$ git pull
II. Experiments with the RRT Path Planner
- Download the file Lab6.py and run it in
simple_cli by typing
runfsm('Lab6') . The robot's outline
is shown at the starting location.
- Run the demo several more times and observe the variation in the
solutions.
- Read the Lab6 source code to see how the demo works.
III. Harder Planning Problem
You can construct a harder planning problem by adding more obstacles
between the start and the goal, forcing the robot to head down corridors
and turn corners. But make sure that the corridors are wide enough
for the robot to fit through.
Make an environment that shows off what the path planner can do. Include
at least one convex polygon obstacle. See cozmo_fsm/rrt_shapes.py for
obstacle definitions.
This step actually requires a substantial amount of coding, but none of it is
difficult. You will need to do the following:
- Study the Rectangle class in rrt_shapes.py to understand how it
represents vertices as points in homogeneous coordinates. The
Rectangle constructor takes a center point as input, plus height,
width, and orientation, and generates the vertices from there.
Notice that the vertices are expressed relative to the center
point, so we can translate a Rectangle by simply changing its
center, and rotate it by changing just its orient value.
- Redo the (outdated and incomplete) implementation of the
Polygon class in rrt_shapes.py to follow the conventions of
Rectangle for representing vertices. Since we are not restricting
ourselves to regular polygons, the user will have to input a list
of vertices. You should calculate the center, then re-express the
vertices as offsets from the center in case the user didn't do
this themselves. You can assume that the polygon is convex; you
don't need to check for this. Treat the orient field the same way
that Rectangle does.
- Implement the collides_poly method for Polygon using the
Separating Axis Theorem. Study how Rectangle does
rectangle-rectangle collision detection using the Separating Axis
Theorem. (Review the slides from the World Map lecture on how the
SAT works.) Rectangles are a special case because all their edges
are aligned with either the x or y axis, so taking the projection
of a vertex onto an axis is trivial. If the rectangle is rotated,
you have to un-rotate the vertices first, and un-rotate the other
rectangle by the same amount, before using this projection trick.
But for arbitrary polygons this shortcut doesn't apply because the
edges aren't axis-aligned, so you'll need to consider each edge
orientation individually.
- Implement the collides_circle method for Polygon. This doesn't
use the Separating Axis Theorem, but there are other approaches
that are straightforward to implement. Google "circle polygon
collision detection" for advice, or see this page.
- Add code to path_viewer.py to display polygons.
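The vertex convention described above can be sketched as follows. This is a minimal stand-in, not the actual rrt_shapes.py base-class interface: the class name PolygonSketch and the world_vertices helper are hypothetical, and the center is taken as the mean of the vertices (one reasonable choice; the lab leaves the exact definition to you).

```python
import numpy as np

class PolygonSketch:
    """Sketch of the Polygon vertex convention: store a center and an
    orient angle, and keep vertices as homogeneous-coordinate offsets
    from the center, as Rectangle does."""

    def __init__(self, vertices, orient=0.0):
        # vertices: list of (x, y) pairs in world coordinates
        pts = np.array(vertices, dtype=float)
        cx, cy = pts.mean(axis=0)        # mean of vertices as the center
        self.center = np.array([cx, cy, 1.0])
        self.orient = orient
        # Re-express each vertex as a homogeneous offset from the center,
        # so translating the polygon only changes self.center and
        # rotating it only changes self.orient.
        offsets = pts - np.array([cx, cy])
        self.vertices = [np.array([x, y, 1.0]) for (x, y) in offsets]

    def world_vertices(self):
        # Homogeneous transform: rotate offsets by orient, then
        # translate them out to the center.
        c, s = np.cos(self.orient), np.sin(self.orient)
        T = np.array([[c, -s, self.center[0]],
                      [s,  c, self.center[1]],
                      [0., 0., 1.]])
        return [T @ v for v in self.vertices]

# Example: a right triangle; world_vertices() recovers the inputs.
tri = PolygonSketch([(0, 0), (2, 0), (0, 2)])
```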
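The general Separating Axis Theorem test for collides_poly can be sketched as a standalone function on plain (x, y) vertex lists; the real method will instead work with the shapes' transformed vertices, so treat this only as the core logic. For each edge of either polygon, project both vertex sets onto the edge normal; a gap between the projection intervals proves the polygons are disjoint.

```python
import numpy as np

def sat_collide(poly_a, poly_b):
    """SAT for convex polygons given as lists of (x, y) world vertices.
    Two convex shapes are disjoint iff some edge normal of either
    shape separates their projections."""
    for poly in (poly_a, poly_b):
        n = len(poly)
        for i in range(n):
            # Edge from vertex i to vertex i+1; its normal is the
            # candidate separating axis (no need to normalize it,
            # since we only compare projections on the same axis).
            x1, y1 = poly[i]
            x2, y2 = poly[(i + 1) % n]
            axis = np.array([y1 - y2, x2 - x1])
            proj_a = [np.dot(axis, v) for v in poly_a]
            proj_b = [np.dot(axis, v) for v in poly_b]
            # A gap between the projection intervals means no collision.
            if max(proj_a) < min(proj_b) or max(proj_b) < min(proj_a):
                return False
    return True   # no separating axis found: the polygons overlap
```

Note that unlike the axis-aligned Rectangle shortcut, this loops over every edge of both polygons, which is exactly the extra work the arbitrary-orientation case requires.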
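One straightforward approach for collides_circle, sketched below under the assumption that the polygon's vertices are listed counterclockwise: the circle and polygon intersect exactly when the circle's center lies inside the polygon, or some edge segment passes within one radius of the center. Again, the function name and plain-tuple interface are illustrative, not the rrt_shapes API.

```python
import numpy as np

def circle_poly_collide(center, radius, poly):
    """Circle vs. convex polygon with CCW (x, y) vertices: intersecting
    iff the center is inside the polygon or within `radius` of an edge."""
    cx, cy = center
    inside = True
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        # Closest point on segment (x1,y1)-(x2,y2) to the center:
        # project the center onto the edge, clamped to the segment.
        dx, dy = x2 - x1, y2 - y1
        t = ((cx - x1) * dx + (cy - y1) * dy) / (dx * dx + dy * dy)
        t = max(0.0, min(1.0, t))
        px, py = x1 + t * dx, y1 + t * dy
        if (px - cx) ** 2 + (py - cy) ** 2 <= radius ** 2:
            return True
        # For a convex CCW polygon, the center is inside iff it lies to
        # the left of every edge (cross product non-negative).
        if (x2 - x1) * (cy - y1) - (y2 - y1) * (cx - x1) < 0:
            inside = False
    return inside
```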
IV. Supervised Learning: Support Vector Machines
Support Vector Machines (SVMs) learn decision boundaries between
classes by selecting from among the set of training points those
points (vectors) closest to the decision boundary. They therefore
avoid having to store all the training data. See the illustration
here.
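The "store only the points near the boundary" idea shows up directly in scikit-learn (which color_svm.py is assumed to use): after fitting, the model exposes just its support vectors. A toy example with two made-up clusters:

```python
import numpy as np
from sklearn import svm

# Two well-separated 2-D clusters (made-up data for illustration).
X = np.array([[0., 0.], [0., 1.], [1., 0.],     # class 0
              [3., 3.], [3., 4.], [4., 3.]])    # class 1
y = np.array([0, 0, 0, 1, 1, 1])

clf = svm.SVC(kernel='linear').fit(X, y)

# Only the training points nearest the decision boundary are kept.
print(len(clf.support_vectors_), "support vectors out of",
      len(X), "training points")
```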
- Make a lab6 directory and download the
files color_svm.py
and sample_image.jpg into it.
- Run the demo by typing "python3 -i color_svm.py". The "-i" switch
is necessary to keep Python in "interactive" mode so it doesn't quit
when the main program finishes.
- To learn a "medium green" color class corresponding to the bottle
cap on the right side of the image, left click on some points on
the bottle cap, and right click on some points of other
colors.
- Try to maintain roughly equal numbers of positive and negative
examples of "medium green". The order in which you pick points
doesn't matter because the classifier is retrained from scratch
every time a point is added.
- If your training set gets out of balance, the SVM may set the
decision boundary to something crazy, and all the pixels may be
selected or deselected. Just add some more training points to
bring things into balance, and the model will recover.
- The SVM can also set bad decision boundaries if your data are
not cleanly separable, and the fraction of misclassified points
exceeds some threshold. Again, adding more training points will
cure the problem.
- Modify the demo to include options for saving and reloading the
trained classifier using pickle.
Read this
page to learn how to do that.
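The save/reload step can be sketched like this; the filename and the throwaway training data are arbitrary placeholders, and your modified demo will pickle whatever classifier object color_svm.py actually builds.

```python
import pickle
import numpy as np
from sklearn import svm

# Throwaway RGB training data standing in for your clicked pixels.
X = np.array([[0., 120., 0.], [10., 130., 5.],      # "medium green"
              [200., 20., 20.], [210., 30., 10.]])  # other colors
y = np.array([1, 1, 0, 0])
clf = svm.SVC(kernel='linear').fit(X, y)

# Save the trained classifier to disk...
with open('color_svm.pickle', 'wb') as f:
    pickle.dump(clf, f)

# ...and reload it later; the restored object predicts identically.
with open('color_svm.pickle', 'rb') as f:
    clf2 = pickle.load(f)
```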
V. Train Cozmo to Recognize An Object
Unfortunately the color_svm demo cannot run inside cozmo-tools due to
problems with the tkinter GUI interface, which matplotlib relies on. So
you will have to train your classifier offline. But you can still collect
images from Cozmo using the new SaveImage node in nodes.py.
- Pick a uniformly colored object you want Cozmo to track.
- Get a good picture of the object through Cozmo's camera using
the SaveImage node.
- Using your modified color_svm program, train the classifier.
- Write a state machine program to load the trained classifier,
classify pixels in the latest camera image, and display that
result with matplotlib. You won't be able to make this real-time
interactive because of the tkinter problem; you will have to use
plt.show(), which blocks further execution until you type
"q" in the plot window. So you can only process one image at a
time.
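The display step above might look like the following sketch. The mask here is fabricated for illustration; in your program it would come from running the loaded classifier over the camera image, and you would call plt.show() (which blocks, as noted above) rather than the headless savefig used here.

```python
import numpy as np
import matplotlib
matplotlib.use('Agg')          # headless backend for this sketch; in the
                               # lab the default tkinter backend is used
import matplotlib.pyplot as plt

# Hypothetical classification mask for a 320x240 camera image:
# 1 where a pixel was assigned to the learned color class.
mask = np.zeros((240, 320))
mask[100:140, 150:210] = 1.0   # pretend the object was found here

plt.imshow(mask, cmap='gray')
plt.title('pixels in the learned color class')
plt.savefig('mask.png')        # stand-in for plt.show() in this sketch
```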
VI. Make Cozmo Find and Track Your Object
Write code so Cozmo looks for your object if it's not in view, and
drives up to it, maintaining a modest distance. If you gently move
the object, Cozmo should continue to move so as to maintain the
desired distance.
You can test for the presence of the object by counting the number of
pixels that are in your desired color class.
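The pixel-counting test can be sketched as below. The tiny image, the throwaway classifier, and the threshold value are all placeholders: in your program the image is Cozmo's latest camera frame, the classifier is the one you trained and pickled, and the threshold is something you tune.

```python
import numpy as np
from sklearn import svm

# Stand-in 4x4 RGB image; the real program uses Cozmo's camera image.
image = np.zeros((4, 4, 3), dtype=float)
image[:2, :, 1] = 150.0        # top half greenish

# Throwaway classifier standing in for your trained color SVM.
X = np.array([[0., 140., 0.], [10., 160., 5.],    # color class
              [0., 0., 0.], [20., 10., 10.]])     # background
y = np.array([1, 1, 0, 0])
clf = svm.SVC(kernel='linear').fit(X, y)

# Classify every pixel, then count how many fall in the color class.
pixels = image.reshape(-1, 3)
mask = clf.predict(pixels).reshape(image.shape[:2])
count = int(mask.sum())
present = count > 50           # hypothetical "object in view" threshold
```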
Hand In
Hand in all the code you wrote above, plus relevant screen shots.