The multiple graph technique has the advantage that its heuristics
can aggregate the costs of multiple worlds, but the disadvantage of
computing redundant information in the different graphs (cf.
Figure 6) and of consulting every graph to compute
heuristics. Our next approach addresses these limitations by
condensing the multiple planning graphs to a single planning graph,
called a labelled uncertainty graph (LUG). The idea is to
implicitly represent multiple planning graphs by collapsing the
graph connectivity into one planning graph, but to use annotations,
called labels, to retain information about the multiple worlds.
While we could construct the LUG by generating each of the
multiple graphs and taking their union, we instead define a direct
construction procedure. We start in a manner similar to the unioned
single planning graph by constructing an initial layer of
all literals in our source belief state. The difference is that we
prevent loss of information about the multiple
worlds by keeping a label for each literal that records which of the
worlds are relevant. As we will discuss, we use a few simple
techniques to propagate the labels through actions and effects and
label subsequent literal layers. Label propagation relies on
expressing labels as propositional formulas and using standard
propositional logic operations. The end product is a single
planning graph with labels on all of its elements; each label
indicates which of the explicit multiple graphs (had we built them)
would contain the corresponding element.
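To make the propagation idea concrete, the following is a minimal sketch, not the paper's implementation: labels are represented as sets of possible-world indices (equivalent to a DNF formula over worlds) rather than as BDDs, so conjunction becomes set intersection and disjunction becomes set union. The function names and the action encoding (precondition list, effect list) are hypothetical.

```python
def initial_literal_labels(belief_state):
    """Label each literal of the initial layer with the set of worlds
    (indices into the belief state) in which it holds."""
    labels = {}
    for w, world in enumerate(belief_state):
        for lit in world:
            labels.setdefault(lit, set()).add(w)
    return labels

def propagate(literal_labels, actions, num_worlds):
    """Build the next literal layer's labels from the current one.

    actions: list of (preconditions, effects) pairs over literals.
    """
    # Persistence ("no-op") actions: every literal carries its label forward.
    next_labels = {lit: set(lbl) for lit, lbl in literal_labels.items()}
    for preconds, effects in actions:
        # An action's label is the conjunction (intersection) of its
        # precondition labels: the worlds in which it is applicable.
        label = set(range(num_worlds))
        for p in preconds:
            label &= literal_labels.get(p, set())
        if not label:
            continue  # applicable in no projected world; contributes nothing
        for e in effects:
            # A literal's new label is the disjunction (union) of the
            # labels of everything that supports it.
            next_labels.setdefault(e, set()).update(label)
    return next_labels

# Two possible worlds: one where p holds, one where q holds.
labels0 = initial_literal_labels([{'p'}, {'q'}])
labels1 = propagate(labels0, [(['p'], ['r']), ([], ['s'])], num_worlds=2)
# r is reachable only in world 0; s (no preconditions) in both worlds.
```

A literal is reachable in every projected world exactly when its label covers all world indices, which is the condition a heuristic would test instead of consulting each graph separately.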
We are trading planning graph structure space for label storage
space. Our choice of BDDs to represent labels helps lower their
storage requirements. The worst-case complexity of the LUG is
equivalent to that of the multiple graph representation. The LUG's
complexity savings are not realized when the projected possible
worlds and the relevant actions for each are completely disjoint;
however, this situation rarely arises in practice. The space savings
come in two ways: (1) the redundant representation of actions and
literals is avoided, and (2) labels that facilitate non-redundant
representation are stored as BDDs. A nice feature of the BDD
package [7] we use is that it efficiently represents many
individual BDDs in a shared BDD that leverages common substructure.
Hence, in practice the LUG
contains the same information as the multiple graphs,
but with much lower construction and usage costs.
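The shared-substructure idea behind the BDD package can be illustrated with a toy reduced-BDD manager; this is an illustrative sketch only, not the interface of the actual package cited as [7], and all names here are hypothetical. The key mechanism is hash-consing: a "unique table" guarantees that structurally identical subgraphs are stored once, so many labels share nodes.

```python
import operator

class BDDManager:
    """Minimal reduced, ordered BDD manager with a shared unique table."""

    def __init__(self, num_vars):
        self.num_vars = num_vars
        self._unique = {}   # (var, lo, hi) -> node id (hash-consing table)
        self._table = {}    # node id -> (var, lo, hi); ids 0/1 are terminals
        self._next = 2

    def _var_of(self, u):
        # Terminals sort after all variables in the ordering.
        return self._table[u][0] if u > 1 else self.num_vars

    def mk(self, var, lo, hi):
        if lo == hi:
            return lo       # both branches agree: node is redundant
        key = (var, lo, hi)
        if key not in self._unique:
            self._unique[key] = self._next
            self._table[self._next] = key
            self._next += 1
        return self._unique[key]

    def var(self, i):
        return self.mk(i, 0, 1)   # BDD for "world variable i is true"

    def apply(self, op, u, v, memo=None):
        """Combine two labels with a boolean operation (Shannon expansion)."""
        memo = {} if memo is None else memo
        key = (op, u, v)
        if key in memo:
            return memo[key]
        if u in (0, 1) and v in (0, 1):
            r = int(op(bool(u), bool(v)))
        else:
            i = min(self._var_of(u), self._var_of(v))
            u_lo, u_hi = self._table[u][1:] if self._var_of(u) == i else (u, u)
            v_lo, v_hi = self._table[v][1:] if self._var_of(v) == i else (v, v)
            r = self.mk(i, self.apply(op, u_lo, v_lo, memo),
                           self.apply(op, u_hi, v_hi, memo))
        memo[key] = r
        return r

    def holds(self, u, assignment):
        """Evaluate a label under an assignment to the world variables."""
        while u > 1:
            var, lo, hi = self._table[u]
            u = hi if assignment[var] else lo
        return bool(u)

    def size(self):
        return len(self._table)   # interior nodes shared by all labels

# Three labels -- world 0, world 1, and their disjunction -- occupy only
# three interior nodes in total, because the disjunction reuses the
# node for world variable 1.
m = BDDManager(2)
a, b = m.var(0), m.var(1)
both = m.apply(operator.or_, a, b)
```

The same sharing applies across all the labels of a LUG: labels for different literals and layers overlap heavily, so storing them in one shared structure costs far less than storing each formula independently.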
In this section we present the construction of the LUG without
mutexes, then describe how to introduce mutexes, and finally discuss
how to extract relaxed plans.