LARRI
[developed by Dan Bohus, Yitao Sun, Kayur Patel, Ananlada Chotimongkol]
LARRI (LAnguage-Based Retrieval of Repair Information) is a multi-modal system that supports maintenance and repair activities for aircraft mechanics. The system implements a Level 4/5 IETM (Interactive Electronic Technical Manual), that is, semantically annotated documentation, and explores the potential integration between language-based interfaces and such materials. A short video and a paper describing the system are available.

LARRI integrates a graphical user interface for easy visualization of dense technical information (instructions, video streams, animations, annotations) with a spoken dialog system that facilitates information access and offers task guidance and mentoring in this environment. The graphical interface is accessible via a head-worn display connected to a wearable client computer, and a rotary mouse (dial) provides direct access to the GUI elements. The spoken-dialog component enhances the GUI functionality (which provides only simple selection) with support for natural language commands and direct information access. The preponderance of hands- and eyes-busy situations in this domain makes natural language dialog a very attractive interaction modality. The user can interact with the system both on a simple command level (manipulating the GUI or stepping through a procedure) and on a goal-oriented level (where the user describes desired end-states and the system translates them into the corresponding command sequences).

A number of interesting research issues have been raised throughout the development of the LARRI system:

[1] How can we automatically extract from the technical manual the language resources required for spoken language interface development (e.g. language models, grammars, generation templates)? A minimal sketch of this idea appears after this list.

[2] Unlike Communicator, LARRI is not a slot-filling spoken dialog system. The system has to act like a multi-modal personal assistant (it also makes use of a head-mounted display) and guide, advise, and monitor the user through the completion of a specific task. This leads to an asynchronous and more complex interaction. Understanding errors are expected to have a higher cost in this domain (there is no easy slot-overwrite option), so effective confirmation and clarification behaviors are required.

[3] We explored the taskable-agent problem in the context of this system: LARRI has to be able to understand and summarize spoken descriptions of problems, and locate appropriate experts for them.
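To make the first issue concrete, here is a minimal sketch of mining language resources from IETM-style task descriptions. The XML fragment, element names, and helper functions are illustrative assumptions, not LARRI's actual data format or extraction pipeline; the point is only that step text in a semantically annotated manual can seed a vocabulary and rough command grammar.

    # Hypothetical sketch: derive spoken-interface language resources
    # from semantically annotated task steps (illustrative only).
    import re
    import xml.etree.ElementTree as ET
    from collections import Counter

    # Assumed IETM-like fragment; the real manual format will differ.
    SAMPLE_IETM = """
    <task name="fuel-pump-inspection">
      <step id="1">Remove the access panel.</step>
      <step id="2">Disconnect the fuel line from the pump.</step>
      <step id="3">Inspect the pump housing for cracks.</step>
    </task>
    """

    def extract_steps(xml_text):
        """Return the instruction text of every <step> in the fragment."""
        root = ET.fromstring(xml_text)
        return [step.text.strip() for step in root.findall("step")]

    def tokenize(text):
        return re.findall(r"[a-z]+", text.lower())

    def build_vocabulary(steps):
        """Unigram counts over the manual text, usable to seed a language model."""
        counts = Counter()
        for step in steps:
            counts.update(tokenize(step))
        return counts

    def build_command_templates(steps):
        """Very rough grammar templates: leading verb plus object phrase per step."""
        templates = []
        for step in steps:
            tokens = tokenize(step)
            if tokens:
                templates.append((tokens[0], " ".join(tokens[1:])))
        return templates

    if __name__ == "__main__":
        steps = extract_steps(SAMPLE_IETM)
        print("Vocabulary:", build_vocabulary(steps).most_common(5))
        print("Templates:", build_command_templates(steps))

In a real system the same pass would also emit generation templates and class-based grammar entries, but even this toy version shows how the manual itself can bootstrap the spoken interface's coverage of domain terms.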
Sample dialog
Here is a segment from a sample interaction with LARRI: