In this section we describe how ROGUE mediates the interaction between the planner and the robot. We show how symbolic action descriptions are turned into robot commands, as well as how robot sensor data is incorporated into the planner's knowledge base so that the planner can compensate for changes in the environment or unexpected failures of its actions.
This communication model rests on a pre-defined language and model translation between PRODIGY4.0 and Xavier. The procedures that perform this translation are manually generated, but they follow a systematic format and may be extended at any time to augment the actions or sensing capabilities of the system. Automating the generation of these procedures remains an open problem: it is challenging not only to select which features of the world may be relevant for replanning, but also to determine how to detect those features with existing sensors.
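The systematic format of these translation procedures can be illustrated with a minimal sketch. The operator names, command strings, and predicate names below are hypothetical stand-ins, not the actual ROGUE/Xavier interface; the point is the table-driven structure, in which extending the system amounts to adding an entry per new action or sensing capability:

```python
# Hypothetical sketch of table-driven translation between a symbolic
# planner and a robot. All names (GOTO-ROOM, NAVIGATE-TO, robot-in-room,
# ...) are illustrative assumptions, not the real ROGUE vocabulary.

def translate_goto(args):
    """Turn a symbolic (GOTO-ROOM <room>) action into a robot command."""
    (room,) = args
    return f"NAVIGATE-TO {room}"

def translate_pickup(args):
    """Turn a symbolic (PICKUP-ITEM <item> <room>) action into a command."""
    (item, room) = args
    return f"ACQUIRE {item}"

# One entry per planner operator; augmenting the action repertoire
# means adding a row here plus a translation procedure above.
ACTION_TABLE = {
    "GOTO-ROOM": translate_goto,
    "PICKUP-ITEM": translate_pickup,
}

def symbolic_to_command(action):
    """Map a planner action, e.g. ("GOTO-ROOM", ["r5301"]), to a command."""
    name, args = action
    return ACTION_TABLE[name](args)

def sensor_to_assertions(report):
    """Map a robot status report back into planner predicates, so the
    planner can notice execution failures and replan."""
    if report["arrived"]:
        return [("robot-in-room", report["room"])]
    # Navigation failed: assert a failure predicate instead of a
    # location, prompting the planner to compensate.
    return [("navigation-failed", report["room"])]
```

A usage example under these assumptions: `symbolic_to_command(("GOTO-ROOM", ["r5301"]))` yields the command string for the robot, and when the robot reports back, `sensor_to_assertions` produces the predicates that update the planner's knowledge base.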