The RavenClaw/Olympus framework provides a robust platform for research
on various dialog management and spoken language interface issues. Some
of the current research projects supported by the RavenClaw/Olympus
framework are briefly outlined below:
Error handling
One of the main goals behind the development of the RavenClaw
dialog management framework was to provide a solid test-bed for
exploring error handling and grounding issues in spoken language
interfaces. Currently, Dan Bohus's dissertation
focuses on these
aspects. Some of the questions under scrutiny
are: how does a system “know that it doesn’t know”? How do we
develop systems that can monitor and accurately update their
beliefs? What set of strategies can be used to set a conversation
back on track, and what are the typical user behaviors in
response to these strategies? What techniques can be used to
learn the optimal system behavior on-line, from detected error segments,
and how do we make these systems adapt and improve their
performance over time? More details are available here
and on Dan's web page.
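
As a rough, hypothetical sketch of the belief-updating idea (this is
not RavenClaw's actual belief model; the function name, update rule,
and numbers below are invented for illustration), a system might
combine its prior confidence in a concept value with the user's answer
to an explicit confirmation:

    # Hypothetical sketch of confidence-based belief updating for one concept.
    # The update rule and the numbers are illustrative only.

    def update_belief(prior, confirmed, confirm_confidence):
        """Update the belief that a concept value is correct, given the
        user's yes/no answer to an explicit confirmation and the
        confidence attached to that answer."""
        # Treat the confirmation as noisy evidence: a "yes" raises the
        # belief, a "no" lowers it, weighted by how reliable the answer is.
        likelihood_yes = confirm_confidence if confirmed else 1.0 - confirm_confidence
        likelihood_no = 1.0 - likelihood_yes
        return (prior * likelihood_yes) / (
            prior * likelihood_yes + (1.0 - prior) * likelihood_no)

    # e.g. initial recognition confidence 0.6, user answers "yes" with confidence 0.9
    belief = update_belief(0.6, confirmed=True, confirm_confidence=0.9)
    print(round(belief, 2))   # 0.93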

Timing and turn-taking
In his dissertation project,
Antoine Raux is currently exploring issues of timing and turn-taking.
Most spoken language interfaces assume a rigid (you speak, I speak)
turn-taking behavior. This assumption can lead to turn-taking conflicts,
slow down the dialog, and sometimes cause complete
communication breakdowns. Antoine is currently extending the
RavenClaw/Olympus architecture to enable more flexible turn-taking
behaviors. More details are available here.

Multi-participant dialog
In the TeamTalk project,
Thomas Harris
investigates some of the challenges related to multi-participant dialog.
More details about this project are available here.

Dynamic dialog task construction
We have explored issues of dynamic dialog task construction in the
context of several spoken dialog systems: LARRI,
IPA and
Madeleine. In these domains,
the dialog task structure is not fixed in advance, as it is in most
information-access systems. Rather, the dialog task is constructed
on the fly, based on information selected from a backend. For instance,
the LARRI system helps aircraft
maintenance personnel throughout the execution of maintenance tasks.
The structure of the dialog depends on the structure of the maintenance
task and is constructed dynamically, based on an XML
specification returned from the maintenance task library.
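
As a loose illustration of the idea (the element names, attribute
names, and code below are invented for this sketch; they are not the
actual LARRI task specification or the RavenClaw API), an XML
procedure returned by the backend could be turned into an ordered
list of dialog steps that the dialog manager then walks through:

    # Hypothetical sketch: build a dialog task on the fly from an XML
    # procedure specification. Element and attribute names are invented.
    import xml.etree.ElementTree as ET

    SPEC = """
    <procedure name="replace-filter">
      <step id="1" prompt="Open the access panel."/>
      <step id="2" prompt="Remove the old filter."/>
      <step id="3" prompt="Install the new filter and close the panel."/>
    </procedure>
    """

    def build_task(spec_xml):
        """Turn an XML procedure into an ordered list of dialog steps."""
        root = ET.fromstring(spec_xml)
        return [(step.get("id"), step.get("prompt")) for step in root.findall("step")]

    # The dialog manager would then execute these steps one by one,
    # prompting the technician and waiting for an acknowledgment.
    for step_id, prompt in build_task(SPEC):
        print(f"Step {step_id}: {prompt}")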

Automatic knowledge extraction
Building a spoken language interface requires a number of
language resources, such as a dictionary, a language model, a grammar,
and language generation templates. The development of these resources
requires significant amounts of expert knowledge and time. For some
domains, this knowledge exists in a different form, not suitable
for direct use in a spoken language interface. For instance, for
the LARRI system, vast
amounts of technical documentation and maintenance procedures are
available in text, PDF, or paper format. Can we automatically (or
semi-automatically) acquire the necessary language resources
from these documents? Can we automatically
create a spoken language interface from a technical manual?
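
As a toy sketch of the semi-automatic route (the function and the
sample text below are invented for illustration and are not the
actual extraction pipeline), one could harvest a vocabulary and
bigram counts from the manual text as raw material for a dictionary
and a language model:

    # Hypothetical sketch: harvest a vocabulary and bigram counts from a
    # plain-text manual as raw material for a dictionary / language model.
    import re
    from collections import Counter

    def harvest(text):
        words = re.findall(r"[a-z']+", text.lower())
        vocabulary = sorted(set(words))
        bigrams = Counter(zip(words, words[1:]))
        return vocabulary, bigrams

    manual = "Remove the access panel. Remove the filter. Replace the filter."
    vocab, bigrams = harvest(manual)
    print(vocab)                       # ['access', 'filter', 'panel', ...]
    print(bigrams[("remove", "the")])  # 2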

Taskable agents
In the context of the Vera
system, we have explored issues related to taskable agents. Vera
can not only receive calls but also initiate calls in an effort to
locate a person and deliver messages.
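
A hypothetical sketch of that outbound behavior (the data structure,
function names, and retry policy below are invented for illustration
and are not Vera's actual implementation):

    # Hypothetical sketch of a taskable agent trying to deliver a message:
    # call each known number for the person until someone answers.
    def deliver_message(person, message, place_call):
        """place_call(number) is assumed to return True if the call is
        answered and the message is delivered."""
        for number in person["numbers"]:
            if place_call(number):
                return f"delivered {message!r} to {person['name']} at {number}"
        return "delivery failed; will retry later"

    person = {"name": "Alice", "numbers": ["x1234", "x5678"]}
    print(deliver_message(person, "Meeting moved to 3pm",
                          place_call=lambda n: n == "x5678"))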