The purpose of the project is to give you a chance to get involved in some research, and to work on something in more depth. Feel free to design your own project, and talk to Chris about it. You can use any type of computer/OS/language you want. You can work in groups or alone. Final report: generate a web page describing what you did, and email me the URL.
Biped robot [Chris Atkeson]: We are building a biped robot. I need help developing controllers for it.
Speech Recognition [Chris Atkeson]: I am recording audio from lectures. Can you get commercial speech recognition to work well?
Process video from lectures [Chris Atkeson]: I have a large collection of captured lectures and will be capturing some from this class. Can you track the speaker? Figure out where the head and arms are? Figure out what is being looked at and pointed to? Can you combine information from multiple cameras (stereo)?
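As a starting point for the speaker-tracking idea, here is a minimal sketch of motion detection by frame differencing: pixels that change between consecutive frames are treated as motion, and the tracker reports their centroid. This is an illustrative toy (frames are small grayscale grids, and the threshold is invented); real lecture video would call for a vision library such as OpenCV.

```python
# Toy speaker tracker: difference two grayscale frames and report the
# centroid of the pixels that changed by more than a threshold.

def motion_centroid(prev, curr, threshold=20):
    """Return the (row, col) centroid of pixels whose intensity changed
    by more than `threshold` between frames, or None if nothing moved."""
    moved = [(r, c)
             for r, row in enumerate(curr)
             for c, val in enumerate(row)
             if abs(val - prev[r][c]) > threshold]
    if not moved:
        return None
    rows = sum(r for r, _ in moved) / len(moved)
    cols = sum(c for _, c in moved) / len(moved)
    return rows, cols

# Example: a bright "speaker" blob moves one column to the right,
# so the changed pixels straddle the old and new positions.
frame1 = [[0] * 5 for _ in range(5)]
frame2 = [[0] * 5 for _ in range(5)]
frame1[2][1] = 255   # speaker at column 1
frame2[2][2] = 255   # speaker at column 2
print(motion_centroid(frame1, frame2))  # → (2.0, 1.5)
```

A real system would smooth the centroid over time and separate head from arms, but the difference-then-localize loop is the core of many simple trackers.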
Smart Shoes [Chris Atkeson]: I would like to develop shoes that can measure how they move and the forces being applied.
People Recognizer [Chris Atkeson]: Get a computer to recognize people, potentially with a user's help. Then make it portable/wearable.
Behavior Modification Devices [Chris Atkeson]: Design a computer system that detects certain behavior conditions (such as violating a diet) and gives the user feedback to alter behavior.
Performance Art [Chris Atkeson]: Design some cool performance art or body ornamentation using computers.
Prototype a robot head on a cart [Chris Atkeson]: We want to build humanoid robots that can see, hear, and speak. Prototype a vision and auditory system that we can move around on a cart. Issues include deciding where to look, interpreting what you see and hear, and localizing stuff using vision and sound.
Interaction input devices [Rachel Gockley]: Speech recognition is very challenging in a noisy environment, especially when the input is unconstrained (as with Valerie). However, typing to a robot which then speaks to you is rather unnatural. Design and prototype a different input mechanism for Valerie.
Inferring intent [Rachel Gockley]: Currently, Valerie greets everyone who walks by her booth, even if they're clearly rushing by with no intent to stop. Augment her capabilities to acknowledge only those people who are, say, slowing down and looking at her as they approach the booth.
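One simple way to frame the intent problem: from the range-finder's track of a person, estimate their speed toward the booth and flag them as interested only if they are both approaching and decelerating. The sketch below assumes hypothetical input (distance-to-booth samples at a fixed interval); the slow-down factor is an invented tuning parameter.

```python
# Flag a passer-by as "interested" if they are approaching the booth
# and their speed has dropped well below their initial speed.

def is_interested(distances, dt=0.5, slow_factor=0.5):
    """distances: distance to the booth (meters) at successive timesteps,
    sampled every dt seconds. Returns True if the person is still
    approaching and their latest speed is below slow_factor times
    their initial speed (i.e. they are slowing down near the booth)."""
    if len(distances) < 3:
        return False
    speeds = [(distances[i] - distances[i + 1]) / dt
              for i in range(len(distances) - 1)]
    approaching = speeds[-1] > 0                   # still getting closer
    slowing = speeds[-1] < slow_factor * speeds[0] # markedly slower than at first
    return approaching and slowing

# Someone rushing past at constant speed vs. someone slowing to a stop.
rushing = [5.0, 4.0, 3.0, 2.0, 1.0]     # constant 2 m/s
stopping = [5.0, 4.0, 3.4, 3.1, 3.0]    # decelerating near the booth
print(is_interested(rushing))   # → False
print(is_interested(stopping))  # → True
```

Adding a gaze cue (is the head turned toward Valerie?) on top of this trajectory test would make the greeting decision much more reliable.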
Useful information displays [Rachel Gockley]: Valerie's booth has a screen embedded off to the side, which currently only displays the project's webpage, or videos of projects in RI. However, this display could be used to draw maps when Valerie's giving directions, or to show other information as relevant. Plus, Valerie could explicitly look at this display to direct a visitor's attention.
Give Valerie more domain-specific knowledge [Rachel Gockley]: Make Valerie more useful by making her able to chat about a new set of information. For example, give her access to the times and locations of courses, talks, and seminars. Could she also tell people about restaurants, bars, financial information, movie times...?
Help Valerie find people [Rachel Gockley]: Valerie uses a laser range-finder to find people at the entrance to Newell-Simon. Use audio or video to augment her sensory capabilities.
Emotion recognition from speech [Rachel Gockley]: The human voice conveys emotion such that even pre-verbal infants understand it. Build a system that can classify speech samples by emotion.
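To make the emotion-recognition task concrete, here is a minimal sketch using two classic prosodic features, mean energy and zero-crossing rate: loud, fast-varying speech is labelled "excited", quiet flat speech "calm". The thresholds and labels are invented for illustration; real systems use richer features (pitch contours, MFCCs) and learned classifiers trained on labelled speech.

```python
import math

def features(samples):
    """Mean energy and zero-crossing rate of a waveform (list of floats)."""
    energy = sum(s * s for s in samples) / len(samples)
    zcr = sum(1 for a, b in zip(samples, samples[1:]) if a * b < 0) / len(samples)
    return energy, zcr

def classify(samples, energy_thresh=0.25, zcr_thresh=0.1):
    """Toy rule: high energy AND rapid oscillation -> 'excited', else 'calm'."""
    energy, zcr = features(samples)
    return "excited" if energy > energy_thresh and zcr > zcr_thresh else "calm"

# Synthetic stand-ins for speech: a loud fast sine vs. a quiet slow one.
excited = [math.sin(2 * math.pi * 8 * t / 100) for t in range(100)]
calm = [0.1 * math.sin(2 * math.pi * 2 * t / 100) for t in range(100)]
print(classify(excited))  # → excited
print(classify(calm))     # → calm
```

Replacing the hand-set thresholds with a classifier trained on emotion-labelled recordings is the obvious next step, and is where the project's real work lies.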