15-494/694 Cognitive Robotics: Lab 4
I. Software Update and Initial Setup
At the beginning of every lab you should update your copy of
vex-aim-tools. To do this on Linux or Windows:
$ cd vex-aim-tools
$ git pull
II. Measuring the Camera Tilt
Translating camera coordinates to groundplane coordinates requires
knowing the camera pose. The camera is located 43.47 mm above the
groundplane. It is supposed to be tilted 18° down, but
there is considerable variation from one robot to the next. Thus, we
need to measure the camera tilt for each robot in order to calibrate
its vision system. Follow these steps.
- Run simple_cli and type "show crosshairs" (or press "c" in the
camera viewer) to display a crosshairs in the camera viewer.
- Position a ruler so that the 0 mark directly abuts the
front of the robot. The edge of the ruler that is calibrated
in millimeters should lie along the vertical crosshairs line, like
this:

The other edge of the ruler will not be parallel to the
crosshairs line due to parallax. That's okay.
- Take the tip of a pen and position it at the center of the
crosshairs in the camera image.
- Read the distance value d at the pen tip on the ruler.
You should see a value between 90 and 120 mm. Note that the camera is recessed slightly and looks out
through a clear plastic window; add 3 mm to your d value to compensate for this.
- Calculate the camera tilt in degrees as atan2(43.47, d)
× 180/pi. You should get a value between 17 and 25 degrees. (A
quick way to check this in Python appears after this list.)
- Report this value along with your robot number when you hand in your homework.
- In aim_fsm/aim_kin.py, edit the value of camera_angle to match
your measured value.
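For reference, here is a quick way to check the arithmetic in Python
(the 105 mm ruler reading below is just an example value; substitute
your own measurement):

import math

d = 105 + 3                # example ruler reading in mm, plus 3 mm for the window
camera_height = 43.47      # camera height above the groundplane, in mm
tilt = math.atan2(camera_height, d) * 180 / math.pi
print(round(tilt, 1))      # about 21.9 here; expect something between 17 and 25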
III. Effect of Calibrating Camera Tilt
Set up an Aruco marker at a known distance directly in front of the
robot. (Measure distance relative to the base frame, which is at the
center of the robot.) If you do robot.set_pose(0,0,0,0)
before letting the robot see the marker, then you can directly read
the calculated distance by doing "show objects" in simple_cli. Do
this once with the camera angle in aim_kin.py set at 18 degrees, and
again with the camera angle set to the value you measured for this
robot. Report both values as part of your homework handin.
IV. Examining Data Association
- Reset the robot's coordinate system by doing set_pose(0,0,0,0).
- Show the robot two orange barrels side-by-side, straight ahead
at a distance of around 150 mm. You can figure out which one is
OrangeBarrel.a and which is OrangeBarrel.b by doing "show objects"
and looking at the y-coordinate. Objects further to the left have
more positive y-coordinates because in the base reference frame, the
x axis points straight ahead and the y axis points to the left. You
can also hold your hand in front of one of the barrels and do "show
objects" to see which barrel is still visible.
- Remove OrangeBarrel.b and place it behind the robot. Now the
unseen barrel shows up as dark orange in the world map viewer, and
"show objects" indicates that it is not visible. Since the
barrel's location falls within the camera field of view, we can
confirm that the barrel is not there, so the barrel is treated as
"missing".
- Do Turn(180).now() to turn the robot around so it
sees the barrel you moved. Note that it is assigned the identity
OrangeBarrel.b again, i.e., we assume that this is the barrel that
went missing. The old OrangeBarrel.b disappears from the world
map.
- Quit simple_cli. Position the two orange barrels so they are
side-by-side at a distance of 150 mm ahead of the robot, but far
to the left in its field of view:
- Start simple_cli and do "show objects". OrangeBarrel.a should
have a more positive y coordinate than OrangeBarrel.b, indicating
that OrangeBarrel.a is to the left of OrangeBarrel.b.
- Use one hand to block the camera, and slide both barrels to
near the right edge of the camera's field of view:
- Unblock the camera. Because both barrels are far from their
initial positions, neither matches a world map object, so the
greedy data association algorithm tries to make assignments, and
it may choose poorly. In other words, OrangeBarrel.a, which was
originally the barrel on the left (more positive y-coordinate),
might end up as the barrel on the right (less positive
y-coordinate). Document this with screenshots showing the barrel
configurations, the worldmap views, and the output of "show
objects".
V. Kinematics Calculations
- In simple_cli, type "show kine" to see the robot's kinematic
tree. Then type "show kine camera" to see the parameters for the
camera reference frame.
- Read through aim_fsm/aim_kin.py to see the detailed kinematic
structure of the robot.
- Read through aim_fsm/kine.py to understand how the forward
kinematics solver works.
- The camera reference frame has its origin at the center of the
camera image plane, with the x-axis pointing to the right, the
y-axis pointing down, and the z-axis pointing straight out. Use
the joint_to_base() function from kine.py and the
translation_part() function from geometry.py to calculate the
position of the origin of the camera reference frame in base frame
coordinates. (A sketch of this pattern appears after the next item.)
- A fruit fly enters the AI Maker Space and hovers directly in
front of the robot's camera, at a distance of 20 mm from the
center of the camera image plane. Write down the fruit fly's
coordinates in the camera reference frame, using homogeneous
coordinates. Then write a Python expression using functions from
kine.py to calculate the location of the fruit fly in base frame
coordinates. Show the expression and your result.
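The sketch below only shows the general shape of these two
calculations; the function names beyond joint_to_base() and
translation_part(), the import path, and the robot.kine attribute are
assumptions based on how this handout describes kine.py and
geometry.py, so check the actual files and adjust.

from aim_fsm.geometry import point, translation_part  # adjust import path as needed

# 4x4 transform from the camera frame to the base frame (assuming the
# kinematics solver is reachable as robot.kine; see kine.py).
cam_to_base = robot.kine.joint_to_base('camera')

# Origin of the camera reference frame in base frame coordinates:
camera_origin = translation_part(cam_to_base)

# A point given in camera coordinates as a homogeneous column vector
# (here, d_mm straight out along the camera z axis), mapped into base
# frame coordinates:
d_mm = 20
p_cam = point(0, 0, d_mm)   # assumes point() builds a homogeneous vector
p_base = cam_to_base @ p_cam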
Homework Problems (Solve Individually)
- Write a Flash node that allows you to program complex patterns
on the LEDs. The constructor should look like
Flash(pattern, cycles=None, duration=None)
The pattern argument should be a list of the form
((pattern_1, duration_1), (pattern_2, duration_2), ...)
where each pattern_i is either a color or a list of six colors
(one per LED), and each duration_i is a value in seconds. A color
is either an (r,g,b) triple or a member of vex.Colors.
This convention will allow you to construct arbitrarily complex
light patterns. For example, the code below implements
alternating red and blue flashes like some police vehicles use:
blue = vex.Color.BLUE
red = vex.Color.RED
Flash([ ((blue, red, blue, red, blue, red), 2),
        ((red, blue, red, blue, red, blue), 2) ])
To implement Flash you will need to use the poll() mechanism and
set_polling_interval() that StateNode inherits from
EventListener.
If the cycles argument is an integer, the node will run through
that many cycles of the whole pattern and then post a completion
event and stop. If the duration argument is a number, the node
will run through the pattern repeatedly for the specified duration
(in seconds) and then post a completion event and stop. If both
arguments are None, the pattern will repeat indefinitely, but
the LEDs should be cleared when the node's stop() method is
called, as might happen if an outgoing transition fires.
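If it helps to picture where poll() and set_polling_interval() fit,
here is a bare structural sketch. Everything beyond those two calls
(the import path, start(), stop(), self.running, and how completion
is posted) is an assumption about the usual StateNode conventions in
aim_fsm; check the framework source rather than copying this verbatim.

from aim_fsm import StateNode   # adjust the import to match your .fsm files

class Flash(StateNode):
    def __init__(self, pattern, cycles=None, duration=None):
        super().__init__()
        self.pattern = pattern
        self.cycles = cycles
        self.duration = duration

    def start(self, event=None):
        if self.running: return
        super().start(event)
        self.index = 0      # which (colors, duration) entry is currently lit
        self.poll()         # light the first entry right away

    def poll(self):
        colors, secs = self.pattern[self.index]
        # ... set the six LEDs from colors here ...
        self.set_polling_interval(secs)   # come back when this entry expires
        self.index = (self.index + 1) % len(self.pattern)
        # ... track cycles / elapsed duration; post completion and stop when done ...

    def stop(self):
        # ... clear the LEDs ...
        super().stop()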
- Display reference frame: read aim_fsm/aim_kin.py and
modify the file to add a reference frame for the color display at
the top of the robot. We want the origin to be at the center of
the display, with the z axis pointing up. Following the right
hand rule, if we want the top left corner of the display to have
negative coordinates and the bottom right corner to have positive
coordinates, how should the x and y axes be oriented?
- Hungarian algorithm: in worldmap.py the function
associate_objects_of_type matches newly-seen objects with worldmap
objects. This is the "data association" problem discussed in
lecture. The current code uses a greedy algorithm, which can
produce suboptimal results. Rewrite this code to use the Hungarian
algorithm. Note: you can write this code yourself, or you can ask
ChatGPT or Copilot or some other LLM to write it for you. Either
is acceptable, just document what you did.
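Whether or not you write the algorithm yourself, it may help to know
that scipy's linear_sum_assignment solves exactly this
optimal-assignment problem. A tiny standalone example with made-up
costs (not wired into worldmap.py):

import numpy as np
from scipy.optimize import linear_sum_assignment

# cost[i, j] = distance from worldmap object i to new sighting j (mm)
cost = np.array([[70.0, 120.0],
                 [20.0,  60.0]])
rows, cols = linear_sum_assignment(cost)
print(list(zip(rows, cols)), cost[rows, cols].sum())
# Optimal pairing is (0,0) and (1,1) with total cost 130; a greedy matcher
# that grabs the single smallest entry (20) first would end up paying 140.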
- Fruit chameleon: review the latest version of
GPT_test.fsm and you'll see that the CheckResponse node has been
modified to support intermixing #hashtag commands with spoken text
in the same response. The only requirement is that each #hashtag
command must appear on a line by itself. Now watch this video demo
of "the fruit chameleon". I had trouble getting Celeste
to perform two separate actions in response to a camera image.
See if you can do a better job of instructing Celeste on how to
play the game. This can involve changing the preamble in GPT_test
and/or changing the explanation you give of the fruit chameleon
task. You'll need to incorporate your implementation of #glow
from the previous lab. Also, since we don't have fruit lying
around the lab, you can use these fruit
images instead. We'll leave printouts of them in the robot
cabinet, first cubbyhole. Just fold the sheet like you did the
Aruco tag printouts.
What to Hand In
Hand in a zip file containing the following:
- Part II: your robot number and measured camera angle.
- Part III: Your Aruco distance measurements with standard 18
degree camera angle and with the actual camera angle you measured
in lab.
- Part IV: images and observations from your data association
experiment.
- Part V: the Python expressions and results from your two kinematics
calculations (camera origin, and fruit fly location).
- Your code for the Flash problem.
- Your modified version of aim_kin.py with the color display
reference frame, and your answer to the question about orientation of
the axes.
- Your modified version of associate_objects_of_type using the
Hungarian algorithm. If you used an LLM to help you code this,
explain how you did it.
- Your modifications to the GPT_test preamble and/or the game
instructions to make Celeste a better-behaved fruit chameleon.