Research interests:
Robot task programming from human demonstration, human hand motion analysis, computer vision
Thesis:
Automatic Robot Instruction from Human Demonstration
Summary of work:
Conventional methods for programming a robot are either inflexible or require significant expertise. While automatic programming from high-level goal specifications addresses these issues, the complexity of planning manipulator grasps and paths remains a formidable obstacle to practical implementation. Instead, we adopt the approach of teaching the robot by human demonstration. Our system observes a human performing the task, recognizes the human grasp, and maps it onto the manipulator. Using human actions to guide robot execution greatly reduces the planning complexity.
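
In outline, the system's dataflow is simple. The following is a minimal Python sketch of that pipeline; all function names and the placeholder logic inside each stage are hypothetical illustrations, not the actual system's interfaces.

# Minimal sketch of the demonstration-to-execution dataflow described
# above. Every name and every stage body here is a hypothetical
# placeholder, not the real system.

def segment_task(frames):
    # Placeholder: split the frames evenly into the three temporal phases.
    n = len(frames)
    return {"pregrasp": frames[: n // 3],
            "grasp": frames[n // 3 : 2 * n // 3],
            "manipulation": frames[2 * n // 3 :]}

def recognize_grasp(grasp_frames):
    # Placeholder: a real system classifies the grasp via the taxonomy.
    return "precision_grasp"

def map_grasp_to_manipulator(human_grasp):
    # Placeholder: choose a manipulator grasp guided by the human grasp.
    return {"hand": "Utah/MIT", "grasp": human_grasp}

def program_from_demonstration(frames):
    """Observe a human demonstration and produce robot commands."""
    phases = segment_task(frames)                # temporal segmentation
    grasp = recognize_grasp(phases["grasp"])     # grasp recognition
    plan = map_grasp_to_manipulator(grasp)       # grasp mapping
    # The manipulator trajectory approximately follows the human hand's.
    plan["trajectory"] = phases["manipulation"]
    return plan

print(program_from_demonstration(list(range(9))))
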
In analyzing the task sequence, the system first divides the observed sensory data into meaningful temporal segments, namely the pregrasp, grasp, and manipulation phases. It does so by analyzing the human hand motion profiles. After task segmentation, a grasp taxonomy is used to recognize the human hand grasp. The grasp taxonomy is based on the contact web, a 3D graphical structure of the contact points between the hand and the grasped object.
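
The segmentation step can be illustrated with a toy heuristic: the grasp phase begins when the hand aperture stops closing and the hand comes nearly to rest, and the manipulation phase begins when the hand speeds up again while holding the object. The signals (hand speed, aperture) and thresholds below are illustrative assumptions, not the published motion-profile analysis.

import numpy as np

def segment_phases(hand_speed, aperture, speed_thresh=0.05, aperture_eps=0.01):
    """Split a demonstration into pregrasp / grasp / manipulation frames."""
    n = len(hand_speed)
    aperture_rate = np.gradient(aperture)
    # Grasp onset: the aperture has stopped closing and the hand is nearly still.
    grasp_start = next((i for i in range(n)
                        if abs(aperture_rate[i]) < aperture_eps
                        and hand_speed[i] < speed_thresh), 0)
    # Manipulation onset: the hand speeds up again while holding the object.
    manip_start = next((i for i in range(grasp_start, n)
                        if hand_speed[i] >= speed_thresh), n - 1)
    return {"pregrasp": (0, grasp_start),
            "grasp": (grasp_start, manip_start),
            "manipulation": (manip_start, n)}

# Example: a slowing reach, a still grasp, then renewed motion.
speed = np.array([0.40, 0.30, 0.20, 0.04, 0.02, 0.03, 0.20, 0.30, 0.25])
aperture = np.array([0.90, 0.70, 0.50, 0.40, 0.40, 0.40, 0.40, 0.40, 0.40])
print(segment_phases(speed, aperture))
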
The recognized grasp guides the grasp planning of the manipulator, and the manipulator's trajectory approximately follows that of the human hand during task execution. Control signals are then generated for the robot system to replicate the task. Example tasks involving different types of grasps have been used to demonstrate this approach to task programming.
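
The sketch below suggests how a contact web might drive grasp recognition and the choice of a manipulator grasp. The classification rule (palm contact implies a power grasp; fingertip-only contact, a precision grasp) is a crude simplification of the taxonomy, and the manipulator preshape presets are hypothetical.

from dataclasses import dataclass

@dataclass
class Contact:
    part: str        # e.g. "thumb_tip", "index_tip", "palm"
    position: tuple  # 3D contact point on the object (x, y, z)

def classify_grasp(contact_web):
    """Toy classifier over a contact web (a list of hand-object contacts).

    Simplification of the taxonomy: palm involvement suggests a power
    (volar) grasp; fingertip-only contact suggests a precision grasp.
    """
    parts = {c.part for c in contact_web}
    if "palm" in parts:
        return "power_grasp"
    return "precision_grasp" if len(parts) >= 2 else "unknown"

# Hypothetical preshape presets for the manipulator, keyed by grasp type.
MANIPULATOR_PRESHAPES = {
    "power_grasp": {"fingers": "wrap", "thumb": "oppose_palm"},
    "precision_grasp": {"fingers": "pinch", "thumb": "oppose_fingertips"},
}

web = [Contact("thumb_tip", (0.01, 0.00, 0.02)),
       Contact("index_tip", (0.01, 0.02, 0.02))]
grasp = classify_grasp(web)
print(grasp, MANIPULATOR_PRESHAPES.get(grasp))
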

Movies of task simulation:

A PUMA arm and Utah/MIT hand executing an insertion task after human demonstration (MPEG file - 321 kBytes) - made possible with Richard Voyles' help


sbk@cs.cmu.edu