In addition, automatic recording of actions and histories of interaction will make briefing and after-action evaluation more efficient. This rich multimodal record will provide an organizational memory of command post activity that can be used, for example, to provide rapid drill-down access in decision briefings. These histories will also be used to create "macros" or "intelligent agents" that automate routine tasks, and information about participants' attention will supply non-verbal cues for indexing the history for later review.

Note taking will be supported through public joint notes and action items entered by voice, handwriting, or typing; private notes and side conversations will be supported through local written or spoken messages.

The joint activity will be recorded, tracked, and processed to provide indexing, rapid browsing, and summaries so that late participants can catch up quickly. The histories will be automatically summarized and presented in an interactive "meeting browser" tool that facilitates rapid understanding and evaluation of the activities. Similarly, the comments and views of earlier participants can be replayed at relevant points in the discussion for a later group of participants. The history can also be linked directly with commercial presentation tools, tying briefing material to its supporting information assets: when information in the command post information record is updated, briefings can, where appropriate, be updated automatically.
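To make the last point concrete, the sketch below shows one way automatic briefing updates could work: briefing material subscribes to assets in the information record, and an update event refreshes every linked slide. The class and method names (InformationRecord, BriefingSlide, and so on) are our own illustration, not the actual CSPACE interface.

    from collections import defaultdict
    from typing import Any, Callable

    class InformationRecord:
        """Hypothetical store of command post information assets.
        Subscribers (e.g., briefing slides) are notified on every update."""

        def __init__(self) -> None:
            self._assets: dict[str, Any] = {}
            self._subscribers: defaultdict[str, list[Callable[[Any], None]]] = defaultdict(list)

        def subscribe(self, asset_id: str, callback: Callable[[Any], None]) -> None:
            # Register a callback to run whenever this asset changes.
            self._subscribers[asset_id].append(callback)

        def update(self, asset_id: str, value: Any) -> None:
            # Store the new value, then fire an update event to all subscribers.
            self._assets[asset_id] = value
            for callback in self._subscribers[asset_id]:
                callback(value)

    class BriefingSlide:
        """Hypothetical briefing slide bound to a supporting information asset."""

        def __init__(self, title: str, record: InformationRecord, asset_id: str) -> None:
            self.title = title
            self.body: Any = None
            record.subscribe(asset_id, self._refresh)

        def _refresh(self, value: Any) -> None:
            self.body = value
            print(f"[auto-update] slide '{self.title}' refreshed: {value}")

    # Usage: updating the record automatically refreshes the linked slide.
    record = InformationRecord()
    slide = BriefingSlide("Unit Status", record, "unit_42_position")
    record.update("unit_42_position", "grid NK 3472 5891")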
All of this will be supported by flexible, open tools that enable developers to rapidly create new systems and adapt solutions to emerging situations. Because high-level interactive editors and tools will handle much of the configuration, significantly fewer people will be needed to set up the systems and connect them to new databases and external data sources. Many capabilities will be customizable by end users directly, and others will be easily changed by developers using the high-level toolkits for multimodal, 3D, and collaborative interaction.
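To illustrate the kind of high-level configuration intended here, the sketch below connects a hypothetical external data source through a declarative field mapping rather than hand-written import code; the spec format and all names (UNIT_FEED, load_source) are invented for this example.

    import csv

    # Hypothetical declarative description of an external data source.
    # A developer (or an end user, via an interactive editor) writes only
    # this mapping; a generic loader handles the rest.
    UNIT_FEED = {
        "source": "units.csv",
        "fields": {                 # external column -> internal attribute
            "UNIT_ID": "unit_id",
            "LAT": "latitude",
            "LON": "longitude",
        },
    }

    def load_source(spec: dict) -> list[dict]:
        """Generic loader driven entirely by the declarative spec."""
        with open(spec["source"], newline="") as f:
            rows = csv.DictReader(f)
            return [
                {internal: row[external] for external, internal in spec["fields"].items()}
                for row in rows
            ]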
All of this will be possible by building on our substantial existing technology and knowledge base. Our JANUS speech recognition system, NPen++ handwriting recognition system, and person and gaze tracking are among the world's most accurate. Our Multimodal Toolkit will be used to control these recognizers. Our Ariadne map tool will make map-based visualizations easier to build. Our Pebbles work on multi-user interaction with shared displays and personal digital assistants will be integrated to facilitate shared multi-user interaction. The Alice toolkit has demonstrated how easily 3D environments and visualizations can be created. The CSPACE system (developed in the ITO IC&V program) will be used to interlink commercial presentation tools with the meeting record and to support evolving documents through versioning and update events. Each of these has already been demonstrated on individual tasks, some of them map-based. The proposed work will provide revolutionary productivity gains by integrating these technologies and further enhancing them to be more practical, more accurate, and more effective for real users.
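As a concrete illustration of the style of multimodal integration these components support (not the actual Multimodal Toolkit interface), the sketch below fuses a spoken command with a temporally aligned pen gesture on a map; the event structure, the deictic words handled, and the fusion window are all assumptions of this example.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Event:
        modality: str   # "speech" or "pen"
        payload: str    # recognized text, or a map location for pen events
        t: float        # timestamp in seconds

    FUSION_WINDOW = 2.0  # pen and speech must co-occur within this many seconds

    def fuse(speech: Event, pen: Event) -> Optional[str]:
        """Resolve a deictic spoken reference ("here", "that") against a
        temporally aligned pen gesture on the map. Purely illustrative."""
        if abs(speech.t - pen.t) > FUSION_WINDOW:
            return None  # too far apart in time to be one command
        for deictic in ("here", "that"):
            if deictic in speech.payload:
                return speech.payload.replace(deictic, pen.payload)
        return speech.payload

    # Usage: "move the convoy here" plus a pen tap at a grid reference.
    cmd = fuse(Event("speech", "move the convoy here", t=10.4),
               Event("pen", "grid NK 3472 5891", t=10.9))
    print(cmd)  # -> "move the convoy grid NK 3472 5891"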