Warning: This page is provided for historical and archival purposes only. While the seminar dates are correct, we offer no guarantee of informational accuracy or link validity. Contact information for the speakers, hosts, and seminar committee is certainly out of date.
Principal Research Scientist, Computer Science Department and the Robotics Institute, Carnegie Mellon University
We have been developing a novel method to program a robot from observation, namely Assembly Plan from Observation, or APO. In this approach, a human programs a robot by performing assembly operations in front of the APO observation system, which includes a camera. The APO system recognizes the observed assembly operations and generates an assembly plan for the robot to replicate the assembly task.
In this talk, I will first give a brief overview of a previous system that recognizes classes of assembly operations, such as put-on or insert-into, from observation. This system was designed around abstract task models that link relation transitions with assembly operations. During observation, the system takes two static images, one before and one after each assembly operation. It then extracts face contact relations from recovered object poses. By consulting the abstract task models, the system associates an extracted relation transition with the operation necessary to achieve that transition.
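The lookup described above can be sketched as follows. This is a minimal, hypothetical illustration (all names and relation labels are invented for exposition, not taken from the actual APO implementation): an abstract task model is represented as a table that maps an observed (before, after) pair of face-contact relations to the assembly operation class that achieves the transition.

```python
# Hypothetical sketch of an abstract task model: each observed transition
# between face-contact relations is associated with the assembly operation
# class needed to achieve it.

# Simplified face-contact relation labels (invented for illustration).
NO_CONTACT = "no-contact"
FACE_ON_FACE = "face-on-face"
PEG_IN_HOLE = "peg-in-hole"

# The abstract task model as a transition table.
ABSTRACT_TASK_MODEL = {
    (NO_CONTACT, FACE_ON_FACE): "put-on",
    (NO_CONTACT, PEG_IN_HOLE): "insert-into",
    (FACE_ON_FACE, NO_CONTACT): "remove",
}

def recognize_operation(before, after):
    """Return the operation class that explains an observed relation
    transition, or 'unknown' if the transition is not modeled."""
    return ABSTRACT_TASK_MODEL.get((before, after), "unknown")
```

With two static images yielding relations before and after an operation, `recognize_operation(NO_CONTACT, PEG_IN_HOLE)` would classify the observed step as an insert-into operation.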
In the second part of my talk, I will describe our latest version of the APO system. While the previous system analyzes only two static images (obtained before and after each assembly operation), the present system analyzes the entire human task execution. This enables it to infer the human execution strategy and use that strategy as a cue to plan manipulator execution. The latest APO system includes a task description module that analyzes a task sequence to recover human hand actions. Subsequently, it maps human hand motion to robot motion to effect the same object transfer; in addition, it plans the robot grasping strategy based on the observed human grasp.
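One way to picture the task description module is as a segmenter over the observed motion sequence, followed by a mapping from human hand actions to robot primitives. The sketch below is purely illustrative (the frame representation, threshold, and primitive names are all assumptions, not the actual APO design): it labels each frame by thresholding hand-to-object distance, then looks up a robot motion for each recovered action.

```python
def segment_actions(frames, contact_thresh=0.01):
    """Label each observed frame with a hand action by tracking when the
    hand makes and breaks contact with the object (hypothetical scheme)."""
    actions = []
    holding = False
    for f in frames:
        if not holding and f["dist"] <= contact_thresh:
            holding = True
            actions.append("grasp")        # contact just established
        elif holding and f["dist"] > contact_thresh:
            holding = False
            actions.append("release")      # contact just broken
        elif holding:
            actions.append("transfer")     # moving the grasped object
        else:
            actions.append("approach")     # free motion toward the object
    return actions

# Mapping from recovered human hand actions to robot motion primitives
# (primitive names invented for illustration).
HUMAN_TO_ROBOT = {
    "approach": "move_to_pregrasp",
    "grasp": "close_gripper",
    "transfer": "move_with_object",
    "release": "open_gripper",
}

def plan_robot_motions(frames):
    """Map an observed hand trajectory to a sequence of robot primitives."""
    return [HUMAN_TO_ROBOT[a] for a in segment_actions(frames)]
```

For a trajectory in which the hand approaches, grasps, carries, and releases an object, this yields the corresponding robot primitive sequence, illustrating how the observed execution strategy cues manipulator planning.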
There are three worlds in our APO paradigm: the human world to be observed, the APO internal world whose descriptions are generated by the APO system, and the robot world in which a robot mimics human operations. The existing versions of the APO system cannot maintain exact consistency between the APO internal world and the robot world. Inconsistencies between these worlds are caused by both uncertainties in the robot world and modelling imperfections in the internal world. As a result, fine manipulation performed in the robot world without sensory feedback generally fails. This problem can be resolved by using what we call skill libraries, associated with each manipulation. Each skill library encapsulates the robot motion and sensory strategies necessary to robustly execute an operation, such as driving a screw into a threaded hole or inserting a peg into a hole. It is essential that the robot system be equipped with skill libraries for the APO system to be practically viable. I will conclude this talk with a discussion of skill library design issues.
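The idea of a skill-library entry bundling a motion strategy with a sensory strategy can be caricatured in a few lines. This is a deliberately simplified sketch under invented assumptions (the class, the one-dimensional insertion state, and the step and done callbacks are all hypothetical), meant only to show how feedback closes the gap between the internal world and the robot world:

```python
class Skill:
    """Hypothetical skill-library entry: a nominal motion step plus a
    sensor-based termination check, retried until the sensor confirms
    success or a step budget is exhausted."""

    def __init__(self, name, step, done, max_steps=100):
        self.name = name
        self.step = step          # motion strategy: state -> new state
        self.done = done          # sensory strategy: state -> bool
        self.max_steps = max_steps

    def execute(self, state):
        """Run the motion under sensory feedback; True on confirmed success."""
        for _ in range(self.max_steps):
            if self.done(state):
                return True
            state = self.step(state)
        return False

# A one-dimensional caricature of peg-in-hole insertion: the "sensor" reads
# the insertion depth, and each step complies and pushes a little deeper.
peg_in_hole = Skill(
    "insert-peg",
    step=lambda depth: depth + 0.2,   # push increment (invented value)
    done=lambda depth: depth >= 1.0,  # sensor reports the peg is seated
)
```

Because execution terminates on a sensor reading rather than on an open-loop position target, the same skill succeeds even when the internal-world model mispredicts the exact pose, which is the point of equipping the robot with such libraries.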
Host: Yangsheng Xu (xu+@cs.cmu.edu) Appointment: Ava Cruse (avac@cs.cmu.edu)