Interactive Multi-Modal Robot Programming
Video sequence: 1. gesture recognition, 2. programming, 3. execution
- Overview:
- As robots enter the human environment and come into contact
with inexperienced users, they need to be able to interact
with those users in a multi-modal fashion - a keyboard and mouse
are no longer acceptable as the only input modalities.
-
- We are currently investigating a multi-modal interaction
scenario in which robots are controlled through two-handed
gestures and speech. Multi-modality is particularly useful when
the user needs to teach a new skill and compose programs out of
basic skills known as primitives. We are using HTK (the Hidden
Markov Model Toolkit), an off-the-shelf software component, for
gesture and speech recognition. The mobile vacuum-cleaning robot
Cye is used as a test-bed, but the framework is not restricted
to any particular platform. A minimal sketch of how speech and
gesture commands might be fused appears after the publication
below.
-
- Representative Publication:
- S. Iba, C. J. J. Paredis, and P. K. Khosla,
"Interactive Multi-Modal Robot Programming,"
International Conference on Robotics and Automation (ICRA) 2002, Washington, D.C.,
pp. 161-168, May 11 - 15, 2002.
(Finalist for the Anton Philips Best Student Paper Award)
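The following is a minimal, hypothetical sketch of the kind of multi-modal fusion described above: a recognized spoken verb and a recognized gesture are combined to select one of the robot's primitives. The class names, primitive set, and confidence-gating rule are illustrative assumptions, not the actual HTK-based system running on Cye.

    from dataclasses import dataclass
    from typing import Callable, Dict, Optional, Tuple

    # Hypothetical recognizer outputs; in the actual system these would
    # come from HTK-based speech and gesture recognizers.
    @dataclass
    class SpeechEvent:
        verb: str                      # e.g. "go", "vacuum", "stop"
        confidence: float

    @dataclass
    class GestureEvent:
        label: str                     # e.g. "point", "circle", "halt"
        target_xy: Optional[Tuple[float, float]]
        confidence: float

    # Illustrative primitive library standing in for the robot's basic skills.
    def go_to(xy): print(f"driving to {xy}")
    def vacuum_area(xy): print(f"vacuuming around {xy}")
    def stop(_=None): print("stopping")

    PRIMITIVES: Dict[Tuple[str, str], Callable] = {
        ("go", "point"): go_to,
        ("vacuum", "circle"): vacuum_area,
        ("stop", "halt"): stop,
    }

    def fuse(speech: SpeechEvent, gesture: GestureEvent, threshold: float = 0.6):
        """Dispatch a primitive only when both modalities agree confidently."""
        if min(speech.confidence, gesture.confidence) < threshold:
            return None                # low confidence: ask the user to repeat
        action = PRIMITIVES.get((speech.verb, gesture.label))
        if action is not None:
            action(gesture.target_xy)
        return action

    fuse(SpeechEvent("go", 0.9), GestureEvent("point", (1.2, 0.4), 0.8))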
Interactive Multi-Modal Programming for Robotic Arc
Welding / Learning by Observation
Video [1,2,3]
Video [1]
- Overview:
- The aim of this experiment is to use multi-modal commands (hand gestures
and voice) to perform an arc welding task and to compare this against
conventional programming with a teach pendant. We investigated three
different scenarios:
- Reduced sensor mode: only the acceleration and velocity of the hand are
available (no absolute hand position).
- Full sensor mode: the absolute position of the hand is available.
- Learning by observation: the teach pendant is used to store points, but the
system infers new positions from previously observed, similar patterns
(a sketch of this prediction idea follows the publications below).
-
- Representative Publications:
- K.R. Dixon, M. Strand, and P.K. Khosla, "Predictive
Robot Programming," In Proceedings of the IEEE/RSJ
International Conference on Intelligent Robots and Systems (IROS 2002),
pp. 876-881, October, 2002.
- K.R. Dixon and P.K. Khosla, "Unsupervised
Model-Based Prediction of User Actions," Technical report,
Carnegie Mellon University, Institute for Complex Engineered Systems,
2002.
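Purely as an illustration of the learning-by-observation idea (not the unsupervised model-based prediction method of the papers cited above), the sketch below assumes the operator teaches points in a repeating spatial pattern and predicts the next point by reapplying the displacement seen one period earlier.

    import numpy as np

    def predict_next_point(taught_points, period=2):
        """Guess the operator's next teach point by assuming the displacement
        pattern repeats with the given period (e.g. a zig-zag weld seam).
        This is a hypothetical stand-in for the model-based prediction used
        in the cited work."""
        pts = np.asarray(taught_points, dtype=float)
        if len(pts) < period + 1:
            return None                      # not enough history yet
        repeated_step = pts[-period] - pts[-period - 1]
        return pts[-1] + repeated_step

    # Corners of a zig-zag seam taught so far; the predicted next corner can
    # be offered to the operator for confirmation instead of being jogged to.
    seam = [(0, 0), (10, 5), (20, 0), (30, 5)]
    print(predict_next_point(seam))          # -> [40.  0.]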
Gesture Based Control of a Mobile Robot
Video [1,2,3]
- Overview:
- Compared with a mouse and keyboard, hand gestures can convey geometric
and temporal information with a high degree of redundancy; they offer a rich
vocabulary while remaining intuitive to users. These features make hand
gestures an attractive tool for interacting with robots. For this particular
system, we experimented with single-handed gestures to control a single
mobile robot, and we developed a gesture spotting and recognition algorithm
based on a Hidden Markov Model (HMM); a minimal sketch of the spotting step
follows the publication below.
-
- Representative Publication:
- S. Iba, J. M. Vande Weghe, C. Paredis, and P. Khosla,
"An Architecture for Gesture Based Control of Mobile Robots,"
Proceedings of the IEEE/RSJ International Conference on Intelligent
Robots and Systems (IROS'99), pp. 851-857, October, 1999. (Slides)
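A minimal sketch of the spotting step is shown below. It assumes a set of already-trained HMMs, one per gesture, each exposing a score() method that returns a log-likelihood (as, for example, hmmlearn's GaussianHMM does); the threshold value and feature layout are assumptions, and the published system differs in detail.

    import numpy as np

    def spot_gesture(window, gesture_models, reject_threshold=-50.0):
        """Sliding-window gesture spotting sketch.

        `window` is a (T, d) array of hand-motion features (e.g. velocities)
        covering the last T frames; `gesture_models` maps a gesture name to
        a trained HMM with a score(obs) method returning its log-likelihood.
        A gesture is reported only if the best model beats the reject
        threshold, which plays the role of a garbage/filler model."""
        scores = {name: model.score(np.asarray(window))
                  for name, model in gesture_models.items()}
        best = max(scores, key=scores.get)
        return best if scores[best] > reject_threshold else None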
Interactive Two-Handed Gesture Based Control of a Manipulator
Video [1]
- Overview:
- Another experiment was conducted to show the usefulness of hand gestures
as an interaction modality for a robotic system. A manipulator equipped with
pre-defined geometric primitives performs a drawing task, while the user
adjusts the primitives' parameters through two-handed gestures; a sketch of
one such parameterized primitive appears below. The system will be extended
into a full interactive programming system by adding further modalities
(such as speech and tactile feedback) for composing and modifying primitives.
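The sketch below shows what one such parameterized primitive could look like: a hypothetical circle primitive whose centre and radius are set by the positions of the two hands. The mapping from hand positions to parameters is an assumption for illustration, not the system's actual interface.

    import math
    from dataclasses import dataclass

    @dataclass
    class CirclePrimitive:
        """Hypothetical pre-defined drawing primitive: a circle whose centre
        and radius the user tunes interactively with two-handed gestures."""
        center: tuple = (0.0, 0.0)
        radius: float = 0.1

        def update_from_hands(self, left_xy, right_xy):
            # Midpoint between the hands places the circle; the hand
            # separation sets its diameter (an illustrative mapping only).
            self.center = ((left_xy[0] + right_xy[0]) / 2,
                           (left_xy[1] + right_xy[1]) / 2)
            self.radius = math.dist(left_xy, right_xy) / 2

        def waypoints(self, n=36):
            # Discretize the circle into Cartesian waypoints for the arm.
            return [(self.center[0] + self.radius * math.cos(2 * math.pi * k / n),
                     self.center[1] + self.radius * math.sin(2 * math.pi * k / n))
                    for k in range(n)]

    c = CirclePrimitive()
    c.update_from_hands((0.2, 0.5), (0.6, 0.5))   # hands 0.4 m apart
    print(c.center, c.radius)                     # centre midway, radius half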
Gesture Based Programming
- Overview:
- Gesture-Based Programming is a new form of programming
by human demonstration that views the demonstration as a series of
inexact gestures conveying the intention of the task strategy, not
the details of the strategy itself. This is analogous to the type of
programming that occurs between a human teacher and a student, and it is
more intuitive for both. However, it requires a shared ontology between
teacher and student -- in the form of a common skill database -- to
abstract the observed gestures into meaningful intentions that can be
mapped onto previous experiences and previously-acquired skills. A minimal
sketch of such a skill-database mapping appears after the publications below.
-
- Link:
- Image Sequence of the demonstration (Dr. Voyles,
UMN)
-
- Representative Publications:
- R.M. Voyles, J.D. Morrow, and P.K. Khosla,
"Towards Gesture-Based Programming: Shape from Motion Primordial Learning of
Sensorimotor Primitives," Journal of Robotics
and Autonomous Systems, v. 22, n. 3-4, pp. 361-375, Dec. 1997.
- R. Voyles, Toward Gesture-Based Programming: Agent-Based Haptic Skill
Acquisition and Interpretation, doctoral dissertation, tech. report
CMU-RI-TR-97-36, Robotics Institute, Carnegie Mellon University, August,
1997.
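As a loose illustration of the shared-ontology idea, the sketch below models the skill database as a dictionary from abstracted gesture intentions to previously acquired skills; the skill names and data structures are hypothetical and stand in for the agent-based skills of the cited work.

    from dataclasses import dataclass
    from typing import Callable, Dict, List

    @dataclass
    class Skill:
        name: str
        execute: Callable[[], None]

    # Hypothetical skill database: the "shared ontology" modelled as a plain
    # mapping from an abstracted gesture intention to an acquired skill.
    SKILL_DB: Dict[str, Skill] = {
        "guarded-move": Skill("guarded-move", lambda: print("moving until contact")),
        "insert":       Skill("insert",       lambda: print("peg-in-hole insertion")),
        "grasp":        Skill("grasp",        lambda: print("closing gripper")),
    }

    def interpret_demonstration(gesture_intentions: List[str]) -> List[Skill]:
        """Map a sequence of inexact, abstracted gestures onto known skills,
        skipping gestures with no counterpart in the shared skill database."""
        return [SKILL_DB[g] for g in gesture_intentions if g in SKILL_DB]

    program = interpret_demonstration(["grasp", "guarded-move", "insert"])
    for skill in program:
        skill.execute()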
Vibrotactile Feedback Device for Human-Computer Interfaces
Video [1]
- Overview:
- The goal of this work is to convey finger touch and
force information (i.e., haptic feedback) to a human operator so that he or
she can "feel" what the remote or virtual hand is grabbing. Why is
haptic feedback so important? Numerous human-factors studies have shown
that our ability to manipulate objects relies heavily on the contact
(touch and force) information we gather. Consequently, we are in the
process of demonstrating that haptic feedback, even in crude forms, can
help a person manipulate remote or virtual objects better than with visual
feedback alone.
Building a user-transparent tactile feedback system is a
difficult research problem, since current and near-term actuator
technologies do not provide the fidelity needed to produce realistic
sensations. Moreover, these technologies are not sufficiently small and
lightweight for a person to wear in a glove. We have therefore opted for
vibrotactile feedback (using vibration to convey information) so that the
system can be wearable. The vibrotactile glove we have developed (shown in
the accompanying picture) uses miniature voice coils (essentially small
audio speakers) to produce vibrations on the wearer's fingertips and palm;
a minimal sketch of a force-to-vibration mapping appears after the
publication below.
-
- Representative Publication:
- K. Shimoga, A. Murray, and P. Khosla,
"A Touch Reflection System for Interaction with Remote and Virtual
Environments," IEEE/RSJ International Conference on Intelligent
Robots and Systems, August, 1995.
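The sketch below illustrates one possible mapping from a sensed fingertip force to a voice-coil drive signal. The roughly 250 Hz carrier reflects the frequency range where fingertip vibration sensitivity peaks; the logarithmic amplitude law and the numeric limits are assumptions, not the published system's parameters.

    import math

    def force_to_vibration(force_n, f_min=0.1, f_max=5.0,
                           carrier_hz=250.0, max_amplitude=1.0):
        """Illustrative mapping from a sensed fingertip force (newtons) to a
        drive signal for a miniature voice coil. A ~250 Hz carrier is chosen
        because fingertip vibration sensitivity peaks around that frequency;
        the logarithmic amplitude law is a hypothetical choice.
        Returns (amplitude in 0..1, carrier frequency in Hz)."""
        force = min(max(force_n, f_min), f_max)
        amplitude = max_amplitude * math.log(force / f_min) / math.log(f_max / f_min)
        return amplitude, carrier_hz

    print(force_to_vibration(0.5))   # light touch -> weak vibration
    print(force_to_vibration(4.0))   # firm grasp  -> strong vibration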
Haptic Interface (sensing and actuation) based on Electrorheological Gel
- Overview:
- The system consists of three interchangeable parts: an intrinsic
tactile sensor for measuring net force/torque, an extrinsic tactile
sensor for measuring contact distributions, and a tactile actuator for
displaying tactile distributions. The novel components are the extrinsic
sensor and the tactile actuator, which are "inside-out symmetric"
to each other and employ an electrorheological gel for actuation; a minimal
sketch of how a sensed contact pattern might be mirrored onto the actuator
appears after the publication below.
- Representative Publication:
- R. Voyles, G. Fedder, and P. Khosla,
"A Modular Tactile Sensor and Actuator Based on an Electrorheological Gel,"
Proceedings of the 1996 IEEE International Conference on Robotics and
Automation, April, 1996.
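Purely as an illustration of the "inside-out symmetric" sensor/actuator pairing, the sketch below mirrors a sensed contact-pressure distribution, cell by cell, onto drive levels for the actuator array; the linear mapping and the voltage ceiling are assumptions, not the device's actual drive electronics.

    import numpy as np

    def mirror_contact_pattern(sensed_pressure, v_max=300.0):
        """Map the contact pressure distribution measured by the extrinsic
        tactile sensor onto per-cell drive levels for the electrorheological
        (ER) gel actuator, so the same spatial pattern is displayed to the
        fingertip. Linear scaling and the 300 V ceiling are illustrative
        assumptions for this sketch."""
        p = np.asarray(sensed_pressure, dtype=float)
        p_norm = p / p.max() if p.max() > 0 else p
        return p_norm * v_max            # per-cell actuation voltages

    # Example: a 4x4 taxel array with contact concentrated in one corner.
    pattern = np.zeros((4, 4))
    pattern[0, 0] = 2.0
    pattern[0, 1] = 1.0
    print(mirror_contact_pattern(pattern))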