15-494/694 Cognitive Robotics: Lab 3
Learning Goal: This lab will introduce you to the Tekkotsu
vision system, the SketchGUI tool, and the MapBuilder.
Part I: Setting Up
Remember at the start of every lab to do a "make" on your workstation
and then do "sendtekkotsu" so your robot is running the latest version
of the Tekkotsu framework, which is updated frequently. It is
essential that libtekkotsu.so on the robot matches the version of
libtekkotsu.so on your workstation.
Start up Tekkotsu on the robot and launch the Head Control and the
RawCam and SegCam viewers in the ControllerGUI.
Part II: Shape Extraction
- We will supply you with colored Easter egg halves and rolls of
colored tape. Using the ControllerGUI's SegCam viewer, determine which
colors the robot sees well, given the default RGBK color map.
- Compose a scene of several Easter egg halves for the robot to
look at. Write a behavior that uses a MapBuilderNode to look at the
scene and extract ellipses into the world shape space.
- Examine the camera shape space by clicking on the "C" button in
the ControllerGUI. Click on rawY to superimpose the camera image on
top of the extracted ellipse shapes.
- Examine the world shape space. How does the distortion of the
ellipse shapes vary with distance?
- Add another node to your behavior to examine the results in
camera shape space and report how many ellipses the robot sees.
- What happens if two Easter eggs touch? Does the robot still see
them as two separate objects, or does it see them as one large
ellipse? Experiment and see.
- Modify your behavior so that for every ellipse it finds in the
camera image, it constructs another ellipse, centered at the same
spot but with axes that are 50% larger than the original ellipse. The
new ellipse should be the same color as the extracted ellipse. When
you look in camera space after your behavior has run and select the
rawY image plus all shapes, you should see a collection of ellipse
pairs. Take a screenshot to hand in. (A sketch of one way to
structure this behavior appears at the end of this section.)
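The sketch below shows one way such a behavior might be organized,
written in the state machine shorthand used by the demo .cc.fsm
files. The class name, the color ("pink"), and the scaling constant
are placeholders, and the macro names and constructor/accessor
signatures (EllipseData, getSemimajor, etc.) should be checked
against the DualCoding reference pages; treat this as a starting
point, not a finished solution.

    #include "Behaviors/StateMachine.h"
    #include "DualCoding/DualCoding.h"
    using namespace DualCoding;

    $nodeclass Lab3Ellipses : VisualRoutinesStateNode {

      // Ask the MapBuilder to find ellipses of the chosen color and
      // project them into the world shape space.
      $nodeclass Looker : MapBuilderNode(MapBuilderRequest::worldMap) : doStart {
        mapreq.addObjectColor(ellipseDataType, "pink");   // placeholder color
      }

      // Count the ellipses left in camera space, and surround each one
      // with a new ellipse of the same color whose axes are 50% larger.
      $nodeclass Reporter : VisualRoutinesStateNode : doStart {
        std::vector<Shape<EllipseData> > ellipses =
          select_type<EllipseData>(camShS);
        std::cout << "I see " << ellipses.size() << " ellipses." << std::endl;
        for ( unsigned int i = 0; i < ellipses.size(); i++ ) {
          NEW_SHAPE(bigger, EllipseData,
                    new EllipseData(camShS, ellipses[i]->getCentroid(),
                                    1.5 * ellipses[i]->getSemimajor(),
                                    1.5 * ellipses[i]->getSemiminor(),
                                    ellipses[i]->getOrientation()));
          bigger->setColor(ellipses[i]->getColor());
        }
      }

      $setupmachine{
        Looker =C=> Reporter
      }
    }

    REGISTER_BEHAVIOR(Lab3Ellipses);

If the shorthand ($nodeclass, $setupmachine, =C=>) looks unfamiliar,
compare it with the demo sources under /usr/local/Tekkotsu/Behaviors/Demos.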
Part III: Working with Lines
- Use a strip of colored tape to make a roughly vertical
line. Arrange Easter egg halves on either side of the line. Verify
that you can use the MapBuilder to detect both the line and the Easter
eggs (as ellipses).
- Using the online reference pages, look up the
pointIsLeftOf() method of the LineData class. Remember to first
select the DualCoding namespace from the main Reference page before
trying a search.
- Also in the online reference pages, look up the
getCentroid() method of EllipseData. What type of object
does this method return?
- Modify your behavior to report how many ellipses appear on each
side of the line. If there is no line visible, the behavior should
report that instead. If multiple lines are detected, just use the
first line. Use the setInfinite() method to convert the line shape
from a line segment to an infinite line, and notice how this affects
the rendering of the line in the SketchGUI. (A sketch of one possible
counting node appears at the end of this section.)
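One possible form of the counting node is sketched below. It assumes
your MapBuilder request asked for both lines and ellipses, and it
plugs into the same kind of state machine as the Part II sketch. The
method names (pointIsLeftOf, setInfinite, getCentroid) come from the
reference pages mentioned above, but verify the exact signatures
yourself.

    // Counting node: assumes the preceding MapBuilderNode requested both
    // lineDataType and ellipseDataType for your chosen colors.
    $nodeclass LineReporter : VisualRoutinesStateNode : doStart {
      std::vector<Shape<LineData> > lines = select_type<LineData>(camShS);
      if ( lines.empty() )
        std::cout << "No line visible." << std::endl;
      else {
        Shape<LineData> line = lines[0];   // if several lines were found, use the first
        line->setInfinite();               // segment -> infinite line; watch the SketchGUI
        int leftCount = 0, rightCount = 0;
        std::vector<Shape<EllipseData> > ellipses = select_type<EllipseData>(camShS);
        for ( unsigned int i = 0; i < ellipses.size(); i++ ) {
          if ( line->pointIsLeftOf(ellipses[i]->getCentroid()) )
            ++leftCount;
          else
            ++rightCount;
        }
        std::cout << leftCount << " ellipses to the left of the line and "
                  << rightCount << " to the right." << std::endl;
      }
    }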
Part IV: April Tags and Polygons
- Point the robot's camera at some AprilTags and run the AprilTest
demo, which can be found under Root Control > Framework Demos
> Vision Demos. Look in the camera shape space to see the detected
AprilTags.
- Read the source code for AprilTest, which you can find in
/usr/local/Tekkotsu/Behaviors/Demos/Vision/AprilTest.cc.fsm.
- Read the documentation for the PolygonData class, focusing on the
constructor and the
isInside() method.
- Write a behavior that looks for three ellipses of a given color
(your choice) and forms a closed polygon in camera space joining their
centroids. You should be able to see this polygon in the camera
SketchGUI.
- Extend your behavior to also look for AprilTags. Your behavior
should report the tagID of the AprilTag that appears inside the
polygon formed by the three Easter eggs. Use the SketchGUI to compose
a display showing the ellipses (Easter eggs), the polygon, and the
AprilTags, and take a screenshot. (A sketch of one way to do this
appears at the end of this section.)
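A possible reporting node is sketched below. It assumes the
MapBuilder request asked for ellipses of your chosen color plus
AprilTags, that the PolygonData constructor accepts a vector of
Points and a closed flag, and that AprilTagData provides getTagID();
confirm all of these against the reference pages.

    // Polygon node: joins three ellipse centroids into a closed polygon and
    // reports which AprilTag (if any) lies inside it.
    $nodeclass PolyReporter : VisualRoutinesStateNode : doStart {
      std::vector<Shape<EllipseData> > ellipses = select_type<EllipseData>(camShS);
      if ( ellipses.size() < 3 )
        std::cout << "Need three ellipses but only saw " << ellipses.size() << std::endl;
      else {
        std::vector<Point> corners;
        for ( unsigned int i = 0; i < 3; i++ )
          corners.push_back(ellipses[i]->getCentroid());
        // The 'true' argument is assumed to request a closed polygon; check the docs.
        NEW_SHAPE(triangle, PolygonData, new PolygonData(camShS, corners, true));
        std::vector<Shape<AprilTagData> > tags = select_type<AprilTagData>(camShS);
        for ( unsigned int i = 0; i < tags.size(); i++ )
          if ( triangle->isInside(tags[i]->getCentroid()) )
            std::cout << "AprilTag " << tags[i]->getTagID()
                      << " is inside the polygon." << std::endl;
      }
    }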
Part V: Virtual Reality
- Point the camera at some ellipses and run the DrawShapes
Demo, which you can find at Root Control > Framework Demos >
Vision Demos > DrawShapes.
- Look in the RawCam viewer and you will see the ellipse shapes
superimposed on the raw camera image. Note: this only applies to
RawCam, not SegCam.
- Now use the Head Control to move the camera, and notice that the
shapes stay registered with the ellipses as the camera image changes.
Tekkotsu is translating from world coordinates back to camera
coordinates in order to draw the ellipses correctly in the current
camera image. Because the shapes are in world space, you can also use
the Walk Control to move the robot's body, and the shapes will
continue to display correctly, modulo any odometry error.
- Look at the source code for the DrawShapes demo to see how it
works. Essentially, you simply push a shape onto the
VRmixin::drawShapes vector and it will automatically be drawn in the
camera image.
- Write your own behavior that looks for a line, then constructs
two small ellipses (in worldShS) centered on the endpoints of
the line, and causes these ellipses to be drawn in the raw camera
image. Include a screenshot of the result. (A sketch of one possible
approach appears at the end of this section.)
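The sketch below shows one way the endpoint-marking node might
look. The ellipse sizes, the color, and the end1Pt()/end2Pt()
accessors are assumptions to check against the LineData reference
page and the DrawShapes demo source.

    // Endpoint node: build two small world-space ellipses at the line's
    // endpoints and push them onto VRmixin::drawShapes so they are rendered
    // in the raw camera image.
    $nodeclass EndpointMarker : VisualRoutinesStateNode : doStart {
      std::vector<Shape<LineData> > lines = select_type<LineData>(worldShS);
      if ( lines.empty() )
        std::cout << "No line found." << std::endl;
      else {
        Shape<LineData> line = lines[0];
        NEW_SHAPE(marker1, EllipseData,
                  new EllipseData(worldShS, line->end1Pt(), 20, 20));  // ~20 mm axes
        NEW_SHAPE(marker2, EllipseData,
                  new EllipseData(worldShS, line->end2Pt(), 20, 20));
        marker1->setColor("blue");   // placeholder color
        marker2->setColor("blue");
        VRmixin::drawShapes.push_back(marker1);
        VRmixin::drawShapes.push_back(marker2);
      }
    }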
Part VI: Dominoes
- Compile and run the FindDominoes
demo (source code here).
You will need to use this domino.plist
file rather than the one mentioned in the instructions, because our
robots are running Ubuntu 14.04 instead of 12.04. You need the
domino.plist file to get the right camera settings on the robot; it's
not necessary with Mirage.
- Note that dominoes are only built in local and world spaces, not
in camera space. This is because the domino extraction algorithm
relies on the project-to-ground operation to eliminate perspective
effects before examining the relationships among the lines and
ellipses that potentially indicate a domino.
- Write code to describe the dominoes the robot sees, by speaking a
sentence such as "I see a domino with 3 and 5 dots." See the source
code for the SeeShapes demo in
/usr/local/Tekkotsu/Behaviors/Demos/Vision/SeeShapes.cc.fsm for an
example of how to generate this type of speech. You can test your code in Mirage
if you wish.
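Below is a rough sketch of the reporting node. The DominoData
accessor names (getLowValue, getHighValue) and the use of
sndman->speak() are assumptions; SeeShapes.cc.fsm and the DominoData
reference page show the calls that actually exist.

    // Domino node: assumes a MapBuilder request for dominoes has already
    // run, so the domino shapes are available in the local shape space.
    // Requires #include <sstream> at the top of your .cc.fsm file.
    $nodeclass DominoReporter : VisualRoutinesStateNode : doStart {
      std::vector<Shape<DominoData> > dominoes = select_type<DominoData>(localShS);
      if ( dominoes.empty() )
        sndman->speak("I do not see any dominoes.");
      for ( unsigned int i = 0; i < dominoes.size(); i++ ) {
        std::ostringstream sentence;
        sentence << "I see a domino with " << dominoes[i]->getLowValue()
                 << " and " << dominoes[i]->getHighValue() << " dots.";
        sndman->speak(sentence.str());
      }
    }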
What to Hand In
Finish the lab for homework. For each exercise above, hand in your
source code and a screenshot showing that your behavior worked.
Due Friday, February 6, 2015.