Abstract

To alleviate the overwhelming complexity of grasp synthesis and path planning in robot task planning, we adopt the approach of teaching the robot by demonstrating the task in front of it. The system has four components: the data acquisition system, the grasping task recognition module, the task translator, and the robot system. The data acquisition system records the perceptual data stream produced during execution of the task. This data stream is then interpreted by the grasping task recognition module, which produces higher-level abstractions describing both the motion and the actions taken in the task. The output of the grasping task recognition module is then passed to the task translator, which in turn generates commands for the robot system to replicate the observed task. In this paper, we describe how these components work, with particular emphasis on the task recognition module. The robot system we use to perform the grasping tasks comprises a PUMA 560 arm and the Utah/MIT hand.