Overview
Autonomous systems, especially those in safety-critical applications,
must be extremely reliable. Unfortunately, the inherent complexity of
autonomy software, and its interaction with rich, uncertain
environments, makes it very difficult to verify such systems using
traditional testing techniques. Formal verification, particularly
model checking, has been used successfully to verify complex
hardware and software systems. While powerful, formal verification
has to date had little impact on the development of autonomous
systems, primarily because the autonomy software typically must be
translated by hand into a model-checking language, a process that is
both tedious and error-prone.
This project, a collaborative effort between the
School of Computer Science
at Carnegie Mellon University and the
Automated Software Engineering Group
at NASA Ames Research Center,
is developing tools and techniques to support formal
verification of autonomous systems. The goal is to make formal
verification, and model checking in particular, easy enough to use,
and powerful enough, so that it can be employed as a regular part of
the software development cycle. The hope is that by verifying early
in the development cycle, subsequent testing and debugging can be
significantly reduced, leading to autonomous systems that are more
reliable and easier to develop. We also anticipate that the
approaches we are pursuing will prove applicable to other software
domains.
 
Approach
Many autonomous systems are developed using specialized languages
(both procedural and model-based) that are tuned to particular roles,
such as planning, plan execution, fault diagnosis, and real-time
control. Our approach is to handle the computational complexity of
formal verification by focusing on such specialized languages (or
germane subsets of them). We are also focusing on the internal
correctness of programs written using these languages, rather than
the correctness of the compilers/interpreters of the languages, or of
the interactions of the autonomous system with its environment (both
of which are, of course, also critically important).
In model checking, one specifies the system in a formal language, such
as
SMV or
PROMELA,
along with formal specifications that indicate
desirable properties of the system that one wants verified. In SMV,
properties are specified in a temporal logic called CTL. The model
checker then determines whether the properties hold under all possible
execution traces. Essentially, model checking exhaustively (but
intelligently) considers the complete execution tree. Counter-examples
are provided for any specification that does not hold.
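To make this concrete, the sketch below (in Python, over a made-up
three-state model of a valve) checks an invariant property, which in CTL
terms is a formula of the form AG p, the way an explicit-state checker
would: it enumerates every reachable state and returns a trace to any
state that violates the property. SMV itself checks full CTL
symbolically, using binary decision diagrams, rather than enumerating
states one by one, so this is only an illustration of the underlying
idea.

    # Minimal explicit-state sketch: breadth-first search of the reachable
    # states, returning a counter-example trace if the invariant is violated.
    # The three-state "valve" model at the bottom is made up for illustration.
    from collections import deque

    def check_invariant(initial_states, successors, invariant):
        parent = {s: None for s in initial_states}
        queue = deque(initial_states)
        while queue:
            state = queue.popleft()
            if not invariant(state):
                trace = []                    # rebuild the path back to an initial state
                while state is not None:
                    trace.append(state)
                    state = parent[state]
                return list(reversed(trace))  # the counter-example
            for nxt in successors(state):
                if nxt not in parent:
                    parent[nxt] = state
                    queue.append(nxt)
        return None                           # the invariant holds in every reachable state

    def successors(state):
        return {"closed": ["closed", "open"],
                "open":   ["open", "closed", "stuck"],
                "stuck":  ["stuck"]}[state]

    # The property "the valve is never stuck" (AG !(state = stuck) in CTL) fails:
    print(check_invariant(["closed"], successors, lambda s: s != "stuck"))
    # -> ['closed', 'open', 'stuck']
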
Our general approach is to automate the
translation of autonomous system software and specifications into the
SMV model-checking language, perform model checking using standard
algorithms, translate counter-examples back into terms that are
meaningful to the software developer, and then provide tools for
visualizing and explaining the counter-examples. This also involves
defining common classes of properties to be verified for each
specialized language, and extending the languages to support
specification of those properties directly (rather than having to
write CTL formulae).
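As one illustration of what specifying properties directly could look
like, the hypothetical sketch below expands a few common property
classes into CTL formulae. The class names and the helper function are
illustrative, not part of any of the actual tools, but the generated
formulae are the kind of CTL specifications that would otherwise have
to be written by hand.

    # Hypothetical property templates: the template names and the helper are
    # assumptions made for illustration only.
    CTL_TEMPLATES = {
        "never":       "AG !({cond})",                # the condition never occurs
        "always":      "AG ({cond})",                 # the condition holds in every state
        "reachable":   "EF ({cond})",                 # some execution reaches the condition
        "recoverable": "AG ({cond} -> EF ({goal}))",  # the goal remains reachable from the condition
    }

    def to_ctl(kind, **args):
        return "SPEC " + CTL_TEMPLATES[kind].format(**args)

    print(to_ctl("never", cond="valve.mode = stuck"))
    # -> SPEC AG !(valve.mode = stuck)
    print(to_ctl("recoverable", cond="valve.mode = stuck", goal="valve.mode = open"))
    # -> SPEC AG (valve.mode = stuck -> EF (valve.mode = open))
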
 
Results
Livingstone
is a model-based fault diagnosis and recovery system developed at NASA
Ames. We have developed a translator that automatically converts MPL
(the language used by Livingstone) into SMV, which can then be input
directly into the SMV model checker. The translator has been tested
on dozens of MPL models, including models describing the
Deep Space One spacecraft,
Xavier,
Nomad, and an
in-situ processing plant.
We have also begun to produce explanations for counter-examples of
such models. Click here for a simpler example.
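The sketch below is a hypothetical, much-simplified illustration of
what such a translation involves: a component with modes, nominal
transitions, and a per-mode constraint is rendered as an SMV module
with a mode variable, a transition relation, and invariant
declarations. It is not the actual MPL translator, and real
Livingstone models are considerably richer; it only indicates the
general shape of the SMV output.

    # Hypothetical, much-simplified component description rendered as SMV text.
    # This only shows the shape of the generated module (a mode variable, a
    # transition relation, and one invariant expression per mode).  The
    # variables used in the constraints (flow_in, flow_out) would also have to
    # be declared in a complete model.
    def component_to_smv(name, modes, transitions, constraints):
        lines = [f"MODULE {name}",
                 "VAR",
                 f"  mode : {{{', '.join(modes)}}};",
                 "ASSIGN",
                 "  next(mode) := case"]
        for mode, targets in transitions.items():
            lines.append(f"    mode = {mode} : {{{', '.join(targets)}}};")
        lines.append("    TRUE : mode;")
        lines.append("  esac;")
        for mode, constraint in constraints.items():
            lines.append(f"INVAR mode = {mode} -> ({constraint})")
        return "\n".join(lines)

    print(component_to_smv(
        name="valve",
        modes=["open", "closed", "stuck"],
        transitions={"open": ["open", "closed", "stuck"], "closed": ["closed", "open"]},
        constraints={"open": "flow_out = flow_in", "closed": "flow_out = 0"}))
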
Other work includes developing advanced
model-checking techniques
that are especially
efficient for the types of models that the MPL translator produces
(those with large numbers of invariant expressions). We have
developed a translator for
TDL (Task Description Language),
a language for expressing task-level control constructs, such as task
decomposition, task synchronization, execution monitoring and
exception handling. The current translator focuses on the
synchronization aspects of TDL programs. We are beginning to
investigate the use of temporal queries to help in understanding
counter-examples (temporal queries are CTL formulae that contain a
variable).
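The brute-force sketch below illustrates the idea behind a temporal
query such as AG (trigger -> ?): over a made-up state space, it simply
tests which candidate propositions can fill the hole so that the
resulting formula holds. Actual query-checking algorithms are far more
sophisticated; the sketch only conveys what a query asks.

    # Brute-force sketch of answering the temporal query AG (trigger -> ?):
    # which candidate propositions hold in every reachable state that
    # satisfies the trigger?  The state space and the candidates are made up.
    def reachable(initial_states, successors):
        seen, frontier = set(initial_states), list(initial_states)
        while frontier:
            for nxt in successors(frontier.pop()):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
        return seen

    def solve_query(initial_states, successors, trigger, candidates):
        states = reachable(initial_states, successors)
        triggered = [s for s in states if trigger(s)]
        return [name for name, prop in candidates.items()
                if all(prop(s) for s in triggered)]

    # Toy state space: states are (mode, flow) pairs; "stuck" is absorbing.
    def successors(state):
        mode, flow = state
        if mode == "closed":
            return [("closed", 0), ("open", 1)]
        if mode == "open":
            return [("open", 1), ("stuck", 1), ("closed", 0)]
        return [("stuck", flow)]

    candidates = {"flow = 0": lambda s: s[1] == 0,
                  "flow = 1": lambda s: s[1] == 1}
    print(solve_query([("closed", 0)], successors, lambda s: s[0] == "stuck", candidates))
    # -> ['flow = 1'] : every reachable stuck state has flow = 1
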
Future work includes explaining more complex types of properties,
extending the TDL translator, extending the TDL language (to enable
properties to be specified directly), and developing techniques for
visualizing counter-examples.