The Honda ASIMO Humanoid Robot

Carnegie Mellon

I work with humanoid robots, focusing on the intersection of computer vision and planning. My aim is to equip humanoids with some of the perception skills needed to autonomously navigate, manipulate, and interact in everyday human environments: biped walking, obstacle avoidance, object localization & grasping, tracking and interacting with people, etc. The resulting algorithms and methods should be generally applicable in vision and robotics, but deliver particularly cool results on a humanoid. At least, that's the idea...

At Carnegie Mellon, I have the good fortune to work with a Honda ASIMO humanoid robot. Joel Chestnutt and I have been focusing on navigation autonomy for ASIMO, using vision and an efficient planner that operates at the level of footsteps to enable the robot to walk safely around a room containing unpredictably moving obstacles. Check out a video of ASIMO in action [AVI, 7MB].
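To give a flavor of what footstep-level planning means, here is a minimal sketch of A* search over a discrete set of footstep actions. All names, the action set, and the collision interface are hypothetical illustrations, not the actual planner used on ASIMO:

    import heapq
    import math

    # Hypothetical action set: relative (dx, dy, dtheta) placements for the
    # next footstep, expressed in the current stance frame.
    ACTIONS = [(0.20, 0.00, 0.0), (0.10, 0.05, 0.0), (0.10, -0.05, 0.0),
               (0.15, 0.00, 0.3), (0.15, 0.00, -0.3)]

    def plan_footsteps(start, goal, collides, max_expansions=5000):
        """A* over footstep placements. `start`/`goal` are (x, y, theta);
        `collides(x, y)` checks a footprint against the obstacle map that
        vision rebuilds each planning cycle."""
        def h(s):  # straight-line distance to the goal
            return math.hypot(goal[0] - s[0], goal[1] - s[1])

        frontier = [(h(start), 0.0, start, [start])]
        visited = set()
        while frontier and max_expansions > 0:
            max_expansions -= 1
            _, cost, (x, y, th), path = heapq.heappop(frontier)
            key = (round(x, 2), round(y, 2), round(th, 1))
            if key in visited:
                continue
            visited.add(key)
            if h((x, y, th)) < 0.15:      # within one step of the goal
                return path
            for dx, dy, dth in ACTIONS:
                nx = x + dx * math.cos(th) - dy * math.sin(th)
                ny = y + dx * math.sin(th) + dy * math.cos(th)
                if collides(nx, ny):
                    continue
                ns = (nx, ny, th + dth)
                g = cost + math.hypot(dx, dy)
                heapq.heappush(frontier, (g + h(ns), g, ns, path + [ns]))
        return None  # no plan within the expansion budget

Planning at the level of footsteps is attractive for bipeds because small obstacles become things to step over rather than blocked regions. A real footstep planner also alternates left and right feet, mirrors the action set accordingly, and replans continuously as the moving obstacles shift; this sketch omits all of that for brevity.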

Digital Human Research Center, Japan

The HRP-2 Humanoid Robot

I spent the summer of 2005 as a JSPS Fellow at the AIST Digital Human Research Center in Odaiba, Tokyo, supervised by Dr. Satoshi Kagami. I researched ways of using vision and 3D range data to incrementally build environment maps for humanoid navigation. I got to play with stereo rigs, range finders, motion capture and one cool humanoid, the Kawada HRP-2! Here is a poster [PDF, 26MB] that summarizes my research at DHRC during the summer. This video [WMV, 58MB] demonstrates the many ways we used motion capture to develop sensing, planning and navigation algorithms for the HRP-2. I continued collaborating with the DHRC throughout my studies. See the publications page for more details.
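As a rough illustration of incremental map building, here is a minimal sketch of fusing world-frame range points into a 2.5D grid height map. It is a hypothetical simplification, and it assumes the points have already been transformed into world coordinates using the robot's pose estimate (for instance, from motion capture):

    import numpy as np

    class HeightMap:
        def __init__(self, size_m=10.0, cell_m=0.05):
            self.cell = cell_m
            n = int(size_m / cell_m)
            self.height = np.full((n, n), np.nan)   # NaN = unobserved cell
            self.origin = -size_m / 2.0             # map centered on the origin

        def integrate(self, points):
            """points: (N, 3) array of world-frame (x, y, z) range readings.
            Each cell keeps the maximum height observed so far, so obstacles
            are not erased by later readings of the floor nearby."""
            ix = ((points[:, 0] - self.origin) / self.cell).astype(int)
            iy = ((points[:, 1] - self.origin) / self.cell).astype(int)
            ok = (ix >= 0) & (ix < self.height.shape[0]) & \
                 (iy >= 0) & (iy < self.height.shape[1])
            for i, j, z in zip(ix[ok], iy[ok], points[ok, 2]):
                h = self.height[i, j]
                self.height[i, j] = z if np.isnan(h) else max(h, z)

        def traversable(self, i, j, max_step=0.05):
            """A cell is steppable if it and its neighbours have been observed
            and are roughly level (height spread below a step threshold)."""
            patch = self.height[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2]
            return not np.isnan(patch).any() and np.ptp(patch) < max_step

A map of this kind pairs naturally with a footstep planner: traversability can be tested per cell, instead of treating everything above the floor plane as an obstacle.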

Yale

Before coming to CMU, I spent a year as a postgraduate fellow in the Social Robotics Lab at Yale University, advised by Brian Scassellati. I worked with Nico, a humanoid robot created to study models of social development and learning in children and to help diagnose disorders like autism. My work focused on Nico's active foveated vision system, which I used to implement the robotic equivalent of the human vestibulo-ocular reflex and to enable Nico to perform visual self-recognition. I also collaborated with the Yale Child Study Center on using eye tracking as a diagnostic and potentially therapeutic tool for children with autism.
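The control idea behind the VOR is simple enough to sketch: command the eyes with the negative of the measured head rotation, so the gaze direction stays fixed in the world while the head moves. The interface names below are hypothetical, not Nico's actual API:

    def vor_eye_velocity(head_angular_velocity, gain=1.0):
        """Return (pan, tilt) eye velocity commands in rad/s that compensate
        the measured head rotation. An ideal VOR has gain 1: the eyes rotate
        equal and opposite to the head."""
        wx, wy, wz = head_angular_velocity   # e.g. from a head-mounted gyro
        pan_velocity = -gain * wz            # counter yaw
        tilt_velocity = -gain * wy           # counter pitch
        return pan_velocity, tilt_velocity

    # Hypothetical control-loop usage:
    #   pan_v, tilt_v = vor_eye_velocity(gyro.read())
    #   eyes.set_velocity(pan_v, tilt_v)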

Cambridge

Back in my final year as an undergrad at Cambridge University's Computer Laboratory, I wrote a thesis on using support vector machines and vision to perform automatic facial expression recognition in real time. Although I now do robotics, I still retain a personal interest in research related to facial expressions.
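The basic pipeline is easy to sketch: extract a feature vector per video frame, train an SVM on labeled examples, then classify each incoming frame. Here is a minimal illustration using scikit-learn as a modern stand-in (not the thesis's original tooling; the feature representation is a placeholder):

    from sklearn.svm import SVC

    def train_expression_classifier(features, labels):
        """features: (N, D) array of per-frame facial features (e.g. landmark
        displacements or filter-bank responses); labels: one expression name
        per sample."""
        clf = SVC(kernel="rbf", C=1.0, gamma="scale")
        clf.fit(features, labels)
        return clf

    def classify_frame(clf, feature_vector):
        # SVM evaluation is cheap once features are extracted, which is what
        # makes per-frame, real-time classification feasible.
        return clf.predict([feature_vector])[0]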