As stated earlier, DERSNLP+EBL is based on a complete and
correct domain-independent planning strategy.
Like PRIAR [22]
and SPA [13], it is implemented
on a partial-order planner. In this respect it differs
from state-space systems such as
PRODIGY/ANALOGY [42,40] and PARIS
[3]. Like PRODIGY/ANALOGY, it employs
the case adaptation strategy of derivational replay, which
stores planning experience in the form of successful plan
derivations. Decisions made in earlier planning episodes then
become instructions that guide the search process in solving
the new problem.
Derivational replay includes all of the following elements,
as illustrated in Figure 1
[40,42]: a facility within the
underlying planner to generate a trace of the derivation of a plan,
the indexing and storage of the derivation trace in a library
of previous cases, the retrieval of multiple cases in preparation for
solving a new problem, and finally, a replay mechanism by which the planner
employs the retrieved plan derivations as a sequence of instructions
to guide the new search process.
[Figure 1: The elements of derivational replay.]
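To make these elements concrete, the following sketch models case storage and multiple-case retrieval in Python. The names (Case, CaseLibrary) and the goal-subset retrieval test are our illustrative assumptions, not DERSNLP+EBL's actual data structures.

```python
from dataclasses import dataclass

@dataclass
class Case:
    """A stored planning episode: a derivation trace together with the
    goals it achieved and the initial conditions it relied on."""
    goals: frozenset     # problem goals covered by this derivation
    initial: frozenset   # initial-state conditions the trace depends on
    trace: tuple         # the recorded planning decisions, in order

class CaseLibrary:
    """Indexes derivation traces by the goals they achieve."""
    def __init__(self):
        self.cases = []

    def store(self, case):
        self.cases.append(case)

    def retrieve(self, goals, initial):
        """Multiple-case retrieval: every stored case whose goals and
        initial conditions are matched by the new problem."""
        return [c for c in self.cases
                if c.goals <= goals and c.initial <= initial]
```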
DERSNLP+EBL's methodology depends on aspects that it shares with MOLGEN [10] and PRIAR [22]. It requires an eager case adaptation strategy: a skeletal plan is constructed that contains all of the constraints added on the advice of the retrieved cases, and only those constraints. This separates any failure resulting from the previous guidance from subsequent planning effort. In eager case adaptation, the planning decisions encapsulated in the retrieved cases are greedily adopted before the plan is extended to solve any extra goals they do not cover. Multiple retrieved plan derivations are replayed in sequence to produce the skeletal plan, which then contains all of the recommended plan constraints. The planner returns to from-scratch planning only after all of the previous decisions in the retrieved cases have been visited. The skeletal plan is then further refined to achieve any goals left open. Previous work has demonstrated the effectiveness of this approach to plan-space replay, as well as its advantage over state-space replay [15,16].
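As a rough illustration of the eager strategy, the sketch below replays retrieved traces in sequence, adopting every recorded decision before any from-scratch work begins. It reuses the illustrative Case objects from the previous sketch, and simplifies plans to plain sets of constraint strings; both choices are ours, for exposition only.

```python
def eager_replay(retrieved_cases, problem_goals):
    """Replay retrieved derivations in sequence, greedily adopting every
    recorded decision. The result is the skeletal plan: all of the
    constraints the cases recommend, and only those. Goals the cases do
    not cover are returned for subsequent from-scratch refinement."""
    skeletal_plan = set()
    covered = set()
    for case in retrieved_cases:
        for decision in case.trace:        # visit every previous decision
            skeletal_plan.add(decision)
        covered |= case.goals
    open_goals = problem_goals - covered   # left to the base-level planner
    return skeletal_plan, open_goals
```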
Eager case adaptation can also be described as extension-first. The skeletal plan is first extended in the search for a solution, and only after this extension fails is the plan backtracked over, discarding the plan constraints that were added on the advice of previous episodes. The general approach to case adaptation therefore involves three distinct phases: case replay, case extension, and, if extension fails, recovery. During the search process employed in extending the skeletal plan, the planner constructs an explanation for the plan's failure, which becomes a reason for the case retrieval failure. Explanations are formed for the analytical failures that occur at the leaf nodes directly under the skeletal plan (see Figure 2). An analytical failure is explained by a set of inconsistent plan constraints. These failure explanations are regressed up the search paths as they are encountered, and the regressed explanations are collected at the root of the tree to form a reason for the retrieval failure. DERSNLP+EBL detects that a retrieval failure has occurred when all ways of refining the skeletal plan have been tried and the planner is forced to backtrack over this plan; at this point the failure reason is fully constructed. Performing skeletal plan extension as a separate process prior to recovery allows the planner to identify the retrieval error in terms of the failure of the skeletal plan, and to construct a reason for that failure. This reason is then communicated to the Storer, which uses it to augment the library with a new repairing case.
[Figure 2: Failure explanations formed at the leaf nodes beneath the skeletal plan.]
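The following toy model, entirely our own simplification, illustrates extension-first search with failure regression: a plan is a set of literals, an analytical failure is a literal occurring together with its negation, and "regression" simply strips out the constraints introduced by the decision being backtracked over, so that what survives to the root blames the skeletal plan itself.

```python
def analytical_failure(plan):
    """An inconsistent constraint set: a literal and its negation."""
    for c in plan:
        if "not " + c in plan:
            return {c, "not " + c}
    return None

def regress(explanation, added):
    """Regress an explanation over a decision: constraints the decision
    itself introduced are discharged; the rest blame ancestor nodes."""
    return explanation - added

def extend(plan, choices, depth=0):
    """Depth-first extension of the skeletal plan. If every branch
    fails, the regressed explanations collected here propagate upward;
    at the root they form the reason for the retrieval failure."""
    if depth == len(choices):
        return "solution", plan
    reasons = set()
    for added in choices[depth]:
        child = plan | added
        bad = analytical_failure(child)
        if bad is None:
            status, result = extend(child, choices, depth + 1)
            if status == "solution":
                return status, result
            bad = result                 # failure reason from below
        reasons |= regress(bad, added)   # regress past this decision
    return "failure", reasons

skeletal = {"not on-route(OB2)"}         # extra package is off the route
choices = [[{"on-route(OB2)"}]]          # the replayed route requires this
print(extend(skeletal, choices))
# -> ('failure', {'not on-route(OB2)'}): the surviving reason blames an
#    initial-state condition rather than the extension effort itself
```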
Consider the simple example illustrated in Figure 3, taken from the Logistics Transportation domain shown in Figure 4. The goal is to have package OB1 located at the destination location ld. The package is initially at location l1, and there is a plane located at lp which can be used to transport it. Figure 3a illustrates a previous plan containing steps that determine the plane's route to the destination airport, as well as steps that accomplish the loading of the package at the right place along this route. Eagerly replaying these earlier step-addition decisions for a new problem in which there is an extra package to transport produces a skeletal plan that may readily be extended to include the loading and unloading of the extra package, as long as this package lies along the same route. However, if the new package is off the old route, the planner may not be able to solve the extra goal without backtracking over some of its previous step-addition decisions (see Figure 3b).
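To ground the example, here is a minimal encoding of the Figure 3 scenario. The location names follow the running example; the on-route test is our simplification of when extension can succeed without backtracking.

```python
plane_route = ["lp", "l1", "ld"]   # fly to l1, load OB1, fly on to ld

def extendable_without_backtracking(extra_package_location):
    """The skeletal plan absorbs an extra delivery goal only when the
    new package already lies on the route fixed by the replayed steps."""
    return extra_package_location in plane_route

print(extendable_without_backtracking("l1"))  # True: on the old route
print(extendable_without_backtracking("l9"))  # False: off the route, so the
                                              # planner must backtrack over
                                              # replayed step decisions
```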
A case failure reason is shown in Figure 5.
It gives the conditions under which
a future replay of the case will again result in failure.
These conditions refer to the presence in the new problem of a
set of negatively interacting goals, together with certain
initial state conditions. A summary of the information content of the failure reason is:
There is an extra package to transport to the same destination location,
and that package is not at the destination location, is not inside the plane,
and is not located on the plane's route.
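Read operationally, the failure reason is a test on new problems. A hedged sketch, with predicate encodings of our own choosing:

```python
def failure_reason_holds(extra_goals, initial_state, route):
    """True when replaying the stored case would fail again: some extra
    package bound for the same destination is not already there, is not
    inside the plane, and is not at any location on the plane's route."""
    for pkg, dest in extra_goals:                      # goal: at(pkg, dest)
        already_there = ("at", pkg, dest) in initial_state
        in_plane = ("inside", pkg, "plane") in initial_state
        on_route = any(("at", pkg, loc) in initial_state for loc in route)
        if not (already_there or in_plane or on_route):
            return True                                # replay would fail
    return False

initial = {("at", "OB1", "l1"), ("at", "OB2", "l9")}
print(failure_reason_holds([("OB2", "ld")], initial, ["lp", "l1", "ld"]))
# -> True: OB2 is off the old route, so the old case should be censored
```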
After backtracking over the skeletal plan, the planner continues its search, and will go on to find a solution to the full problem if one exists. This new solution achieves all of the negatively interacting goals identified in the failure reason. Moreover, since these goals represent a subset of the problem goals, the new derivation may be used to construct a case covering these goals alone. DERSNLP+EBL stores the new case directly beneath the failing case so as to censor its retrieval. This ensures that whenever the failure reason holds (for example, whenever there is an extra package which is off the plane's route), the retriever is directed away from the failing case and toward the case that repairs the failure.
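One way to realize this censoring, again an illustrative structure rather than the system's actual implementation, is a library node whose repair links are guarded by failure-reason tests:

```python
class CaseNode:
    """A library entry with repairing cases stored directly beneath it."""
    def __init__(self, case):
        self.case = case
        self.repairs = []   # (failure_reason_test, child CaseNode) pairs

    def add_repair(self, reason_test, repairing_case):
        self.repairs.append((reason_test, CaseNode(repairing_case)))

    def select(self, new_problem):
        """Whenever a stored failure reason holds for the new problem,
        retrieval is steered away from this (failing) case and toward
        the case that repairs the failure."""
        for reason_test, child in self.repairs:
            if reason_test(new_problem):
                return child.select(new_problem)
        return self.case
```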
We are now in a position to describe in more detail DERSNLP+EBL's eager derivation replay strategy, as well as how it learns the reasons underlying a case failure.