Paper
S.B. Kang and K. Ikeuchi, ``Toward automatic robot instruction from
perception -- Temporal segmentation of tasks from human hand motion,''
IEEE Trans. on Robotics and Automation, vol. 11, no. 5, Oct. 1995.
Abstract
Our approach to programming a robot is direct human demonstration of the grasping task in front of the system. The system analyzes the stream of perceptual data measured during the human execution of the task and then produces commands for the robot system to replicate the observed task. To analyze this stream of perceptual data, it is easier to first segment it into separate, meaningful units for individual analysis.
This paper describes work on the temporal segmentation of grasping task sequences based on human hand motion. The segmentation process results in the identification of motion breakpoints separating the different constituent phases of the grasping task. A grasping task is composed of three basic phases: the pregrasp phase, the static grasp phase, and the manipulation phase.
We show that by analyzing the fingertip polygon area (which is an indication of the hand preshape) and the speed of hand movement (which is an indication of the hand transportation), we can divide a task into meaningful action segments such as approach object (which corresponds to the pregrasp phase), grasp object, manipulate object, place object, and depart (a special case of the pregrasp phase which signals the termination of the task). We introduce a measure called the {\em volume sweep rate}, which is the product of the fingertip polygon area and the hand speed. The profile of this measure is also used in the determination of the task breakpoints.
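The volume sweep rate can be computed directly from tracked hand data. Below is a minimal sketch, assuming fingertip positions (e.g., from a data glove) projected onto a plane and a per-frame hand position; the function names, data layout, and sample period are our own illustration, not the paper's implementation:

```python
import numpy as np

def polygon_area(points):
    """Shoelace formula: area of the polygon formed by the fingertip
    positions (N, 2) projected onto a plane. Indicates hand preshape."""
    x, y = points[:, 0], points[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

def volume_sweep_rate(fingertips, hand_pos, dt):
    """Illustrative computation of the paper's measure.

    fingertips: (T, 5, 2) fingertip positions per frame (assumed layout)
    hand_pos:   (T, 3) hand position per frame
    dt:         sample period in seconds
    Returns per-frame fingertip polygon area, hand speed, and their
    product, the volume sweep rate.
    """
    area = np.array([polygon_area(f) for f in fingertips])
    speed = np.linalg.norm(np.diff(hand_pos, axis=0), axis=1) / dt
    speed = np.concatenate([[0.0], speed])  # pad first frame to match length
    return area, speed, area * speed
```

Candidate task breakpoints would then correspond to extrema (e.g., local minima of hand speed between transport phases) in these profiles; any thresholds for picking them are a design choice left open here.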
The temporal task segmentation process is important as it serves as a preprocessing step for the characterization of the task phases. Once the breakpoints have been identified, steps to recognize the grasp and extract the object motion can then be carried out.