15-494/694 Cognitive Robotics:
2018 Final Project Ideas

Cozmo's Magic Dream House (Noelle Toong and Keya Varia)

  • Navigate between rooms and between floors (a waypoint-navigation sketch follows this list).
  • Operate the elevator by pressing a button.
  • Move objects around within the house.
  • Do something cool.
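
  A minimal sketch of the room-navigation piece, assuming the plain Cozmo SDK
  and made-up waypoint coordinates (the room names and poses below are
  placeholders, not measurements of an actual house):

      import cozmo
      from cozmo.util import Pose, degrees

      # Hypothetical room waypoints in the robot's world frame (millimeters).
      ROOMS = {
          "kitchen": Pose(300, 0, 0, angle_z=degrees(0)),
          "bedroom": Pose(300, 400, 0, angle_z=degrees(90)),
      }

      def go_to_room(robot: cozmo.robot.Robot, room: str):
          # Drive to the named room's waypoint in the world frame.
          robot.go_to_pose(ROOMS[room], relative_to_robot=False).wait_for_completed()

      def cozmo_program(robot: cozmo.robot.Robot):
          go_to_room(robot, "kitchen")
          robot.say_text("I'm in the kitchen").wait_for_completed()

      cozmo.run_program(cozmo_program)

  Changing floors (via the elevator) would be layered on top of this, since
  go_to_pose only drives within the current floor's plane.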

Neural Net Line Follower (Kristin Yin and Jason Ma)

  • Train a neural network to look at color-segmented camera images and output a steering command (see the sketch after this list).
  • This lets the robot follow a line made from colored tape.
  • Another neural network could be trained to detect forks or steep turns.
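
  A minimal sketch of the steering network, assuming PyTorch, a one-hot
  color-class input downsampled to 60x80 pixels, and a made-up architecture;
  the real network and input size would be tuned on collected training data:

      import torch
      import torch.nn as nn

      class SteeringNet(nn.Module):
          """Maps a one-hot color-segmented image to a steering value in [-1, 1]."""
          def __init__(self, num_classes=4):
              super().__init__()
              self.features = nn.Sequential(
                  nn.Conv2d(num_classes, 8, kernel_size=5, stride=2), nn.ReLU(),
                  nn.Conv2d(8, 16, kernel_size=5, stride=2), nn.ReLU(),
              )
              self.head = nn.Sequential(
                  nn.Flatten(),
                  nn.Linear(16 * 12 * 17, 32), nn.ReLU(),
                  nn.Linear(32, 1), nn.Tanh(),
              )

          def forward(self, x):
              return self.head(self.features(x))

      net = SteeringNet()
      dummy = torch.zeros(1, 4, 60, 80)   # batch of one segmented image
      steering = net(dummy)               # scalar in [-1, 1]

  The steering output could then be turned into a left/right wheel speed
  difference for robot.drive_wheels().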

Adaptive Color Segmentation (Nathalie Domingo and Lizzie Thrasher)

  • Create a test panel with known color swatches.
  • Use machine learning to dynamically learn color classes based on current lighting conditions (see the sketch after this list).
  • Use the color-segmented image to detect objects (balls, tape lines, chips, etc.).
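
  A minimal sketch of the learning step, assuming OpenCV, numpy, HSV color
  space, and a nearest-class-mean classifier (per-class Gaussians or k-NN are
  natural upgrades); pixels sampled from the known swatches under the current
  lighting define the class means:

      import cv2
      import numpy as np

      def learn_color_means(swatch_pixels):
          # swatch_pixels: dict mapping class name -> Nx3 array of BGR samples.
          means = {}
          for name, bgr in swatch_pixels.items():
              hsv = cv2.cvtColor(bgr.reshape(-1, 1, 3).astype(np.uint8),
                                 cv2.COLOR_BGR2HSV).reshape(-1, 3)
              means[name] = hsv.mean(axis=0)
          return means

      def segment(image_bgr, means):
          # Label every pixel with the index of the nearest class mean in HSV.
          # Note: hue wraparound is ignored in this simple distance.
          hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
          names = list(means)
          dists = np.stack([np.linalg.norm(hsv - means[n], axis=2) for n in names])
          return names, np.argmin(dists, axis=0)   # per-pixel class indices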

Neural Net Drawing Recognizer (Dhruv Khurana)

  • Train by showing Cozmo a grayscale image.
  • Cozmo should then be able to recognize that image in the environment (a feature-matching baseline is sketched after this list).
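
  As a point of comparison, a non-neural baseline is easy to sketch with
  OpenCV's ORB features and brute-force matching (the match threshold below is
  a guess); the project's neural-net recognizer would replace this matcher:

      import cv2

      orb = cv2.ORB_create()
      bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

      def learn_drawing(gray_template):
          # Extract ORB keypoints and descriptors from the training image.
          return orb.detectAndCompute(gray_template, None)

      def sees_drawing(gray_frame, template_desc, min_matches=20):
          # True if enough template features match the current camera frame.
          _, frame_desc = orb.detectAndCompute(gray_frame, None)
          if frame_desc is None or template_desc is None:
              return False
          return len(bf.match(template_desc, frame_desc)) >= min_matches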

Cozmo Soccer (Oshadha Gunasekara)

  • Use a ping-pong ball as the soccer ball. Develop a visual ball detector (see the sketch after this list).
  • Develop a ball handling attachment. (Initially just cardboard; fancy 3D printed version later.)
  • Write code to capture the ball.
  • Write code to detect a goal box and shoot the ball into the goal.
  • Stretch goal: pass the ball to another robot.
  • Stretch goal: have the second robot accept the pass and kick the ball into the goal.
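
  A minimal ball-detector sketch, assuming OpenCV's Hough circle transform on
  a blurred grayscale frame (the parameters below are guesses to be tuned for
  the ping-pong ball's apparent size):

      import cv2

      def find_ball(frame_gray):
          # Return (x, y, radius) of the most prominent circle, or None.
          blurred = cv2.medianBlur(frame_gray, 5)
          circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1.5,
                                     minDist=50, param1=100, param2=30,
                                     minRadius=5, maxRadius=60)
          if circles is None:
              return None
          x, y, r = circles[0][0]
          return int(x), int(y), int(r)

  A color-based detector (segmenting the ball's color) would be a reasonable
  alternative if the Hough transform proves too noisy.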

Multi-Robot Cooperation Demos (Maverick Chen and Tatyana Mustakos)

  • Use a shared world map to allow the robots to assemble into a formation, e.g., a chorus line (do a can-can dance); see the pose computation after this list.
  • Program a "give" operation in which two robots rendezvous and one gives a cube to the other.
  • Allow two robots to cooperatively push a large object such as an empty tissue box.
  • Come up with your own multi-robot demo idea.
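
  A minimal sketch of the formation step, assuming target slots are computed
  in the shared map frame and each robot is then sent to its own slot (the
  spacing and heading below are arbitrary):

      import math

      def chorus_line_slots(n_robots, x0=0.0, y0=0.0, spacing_mm=120.0,
                            heading_rad=math.pi / 2):
          # Evenly spaced (x, y, heading) slots along a line, all facing the
          # same way so the robots can kick in unison.
          return [(x0 + i * spacing_mm, y0, heading_rad) for i in range(n_robots)]

      slots = chorus_line_slots(3)
      # slots[1] is the middle robot's target: (120.0, 0.0, 1.5707...)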

Robot-Independent Shared Map Service (Cem Ersoz)

  • The current map server is integrated with the Cozmo SDK and requires a robot connection.
  • It would be better to run the map server on a machine with no robot attached and more processing power (a minimal server sketch follows this list).
  • Multiple web cameras could attach directly to this server.
  • Could add hooks for computation-intensive services like neural net object detection.
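
  A minimal sketch of such a service, assuming a simple JSON-over-TCP protocol
  with one message per line (this is not the course's actual map server
  interface, just an illustration of the architecture):

      import json
      import socketserver

      world_map = {}   # landmark id -> latest reported pose

      class MapHandler(socketserver.StreamRequestHandler):
          def handle(self):
              for line in self.rfile:                      # one JSON message per line
                  msg = json.loads(line)
                  if msg["op"] == "update":
                      world_map[msg["id"]] = msg["pose"]   # merge a client's report
                  elif msg["op"] == "query":
                      self.wfile.write((json.dumps(world_map) + "\n").encode())

      if __name__ == "__main__":
          with socketserver.ThreadingTCPServer(("", 5555), MapHandler) as server:
              server.serve_forever()

  Camera clients and robot clients would both speak this protocol; a real
  server would also lock world_map, and compute-heavy services (e.g., neural
  net object detection) could post their results as map updates.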

Rock, Paper, Scissors (Elora Strom)

  • Recognize hand gestures by looking at the hand outline in the camera image (see the contour sketch after this list).
  • Represent rock with a fist and paper with splayed fingers (a high-curvature outline); scissors is just two fingers extended.
  • Robot signals its move with a combination of face LED display, head/lift gestures, and speech. (Speech has high latency but serves to confirm the other output modes.)
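
  A minimal gesture-classifier sketch, assuming the hand has already been
  isolated as a binary mask (e.g., by color segmentation), OpenCV 4's
  findContours return convention, and guessed thresholds:

      import cv2
      import numpy as np

      def classify_hand(mask):
          # Return 'rock', 'paper', or 'scissors' from a binary hand mask.
          contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                         cv2.CHAIN_APPROX_SIMPLE)
          if not contours:
              return None
          hand = max(contours, key=cv2.contourArea)
          hull = cv2.convexHull(hand, returnPoints=False)
          defects = cv2.convexityDefects(hand, hull)
          # Defect depth is fixed point (pixels * 256); count the deep gaps
          # between extended fingers.
          deep = 0 if defects is None else int(np.sum(defects[:, 0, 3] > 8000))
          if deep == 0:
              return "rock"          # fist: no gaps
          if deep == 1:
              return "scissors"      # one gap between the two extended fingers
          return "paper"             # several gaps: splayed fingers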

Shell Game (Bonny Chen)

  • Use color segmentation to detect cups (these could be Easter egg halves).
  • Detect where the object starts.
  • Track the moving cups (see the tracking sketch after this list).
  • Detect when motion stops, and signal cup prediction.
  • Detect win or loss and react appropriately.
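
  A minimal tracking sketch, assuming the cups appear as same-colored blobs in
  a binary mask per frame (from the color segmentation above) and that cup
  identities are carried across frames by nearest-centroid matching:

      import cv2
      import numpy as np

      def cup_centroids(mask, min_area=200):
          # Centroids (x, y) of blobs large enough to be cups.
          contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                         cv2.CHAIN_APPROX_SIMPLE)
          cents = []
          for c in contours:
              if cv2.contourArea(c) >= min_area:
                  m = cv2.moments(c)
                  cents.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
          return cents

      def match_cups(prev, curr):
          # Greedily pair each previous centroid with the nearest unclaimed
          # current centroid, preserving cup identities during motion.
          pairing, taken = {}, set()
          for i, p in enumerate(prev):
              dists = [(np.hypot(p[0] - c[0], p[1] - c[1]), j)
                       for j, c in enumerate(curr) if j not in taken]
              if dists:
                  _, j = min(dists)
                  pairing[i] = j
                  taken.add(j)
          return pairing

  The cup whose track started over the hidden object is the one to point at
  when the motion stops.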

Detection of Partially Visible Cubes (unchosen)

  • Cozmo's built-in cube detection requires the entire cube to be visible in the camera frame.
  • Could train a convolutional neural net to detect partial cubes by looking for corners and edge lines.
  • Partial cube detection could guide Cozmo to turn in that direction to obtain a better view (see the angle computation after this list).
  • Could do the same thing for ArUco markers.
  • This effectively extends the horizontal field of view of the camera, which is only 58°.
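
  A small worked example, assuming a 320-pixel-wide camera image and a simple
  pinhole model, of converting a detection's horizontal image position into
  the turn angle mentioned above, using the 58-degree field of view:

      import math

      HFOV_DEG = 58.0
      IMAGE_WIDTH = 320

      def turn_angle_deg(detection_x):
          # Angle to turn (positive = left) to center the detection.
          f = (IMAGE_WIDTH / 2) / math.tan(math.radians(HFOV_DEG / 2))
          offset = detection_x - IMAGE_WIDTH / 2
          return -math.degrees(math.atan2(offset, f))

      # A corner at the right edge (x = 319) is about 29 degrees to the right,
      # so turn_angle_deg(319) is roughly -29.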

Fun With Quboids (unchosen)

  • Quboids are cardboard cubes with custom markers on the faces and magnets inside.
  • Design lift attachment for capturing and dragging quboids.
  • Assemble quboid structures using the magnets to snap them together.
  • See last year's projects for an initial take on this idea by David Kyle; there is room for refinement and extension.

Dave Touretzky
Last modified: Mon Jun 5 03:15:58 EDT 2017