At some level, the appearance of the additional literals in (6) shouldn't contribute to the effective length of the
learned clause, because all of these literals are currently false.
Length-bounded learning cannot make this distinction.
We see from this example that it is useful to retain clauses of arbitrary length provided that they are relevant to the current search context. If the context subsequently changes, we can remove some clauses to make room for new ones that are more suitable to the new context. This is what relevance-bounded learning does.
In relevance-bounded learning, the effective length of a clause is
defined in terms of the current partial assignment: the irrelevance of a clause is one less than the number of
unvalued or satisfied literals it contains. For example, clause
(6) under the current partial assignment has an irrelevance of 1.
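This definition is easy to operationalize. The sketch below is a hypothetical encoding (literals as signed integers, a partial assignment as a dict from variables to truth values; none of these names come from an actual solver) that computes the irrelevance of a clause:

```python
def irrelevance(clause, assignment):
    """One less than the number of unvalued or satisfied literals.

    A literal is a nonzero int: +v for variable v, -v for its negation.
    `assignment` maps assigned variables to True/False; unassigned
    variables are simply absent from the dict.
    """
    unvalued_or_satisfied = sum(
        1 for lit in clause
        if abs(lit) not in assignment             # unvalued literal
        or assignment[abs(lit)] == (lit > 0))     # satisfied literal
    return unvalued_or_satisfied - 1

# A three-literal clause with one false literal and two unvalued ones
# has irrelevance 1; falsify one more literal and it drops to 0, at
# which point the clause can drive unit propagation.
print(irrelevance([1, -2, 3], {1: False}))            # -> 1
print(irrelevance([1, -2, 3], {1: False, 2: True}))   # -> 0
```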
The idea of relevance-bounded learning originated in dynamic
backtracking [Ginsberg 1993], in which clauses were deleted once
their irrelevance exceeded a fixed bound. This idea was generalized by Bayardo
and Miranker [1996], who defined a general
relevance bound and then deleted all clauses whose irrelevance
exceeded that bound. This generalization is implemented in the RELSAT satisfiability engine [Bayardo and Schrag 1997].
Like length-bounded learning, relevance-bounded learning retains all
clauses of length less than the relevance bound, since the
irrelevance of a clause can never exceed its length. But
relevance-bounded learning also allows the retention of
longer clauses if they are applicable to the current portion of the
search space. Such clauses are removed only when they are no longer
relevant.
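As a sketch of this retention policy (hypothetical names; the clause and assignment encodings are assumptions, not taken from any particular solver), a clause database can be pruned by discarding exactly those clauses whose irrelevance exceeds the bound:

```python
def irrelevance(clause, assignment):
    """One less than the number of unvalued or satisfied literals."""
    return sum(1 for lit in clause
               if abs(lit) not in assignment
               or assignment[abs(lit)] == (lit > 0)) - 1

def prune_learned(clauses, assignment, relevance_bound):
    """Keep only clauses still relevant to the current partial assignment."""
    return [c for c in clauses
            if irrelevance(c, assignment) <= relevance_bound]

learned = [[1, 2],          # short clause: irrelevance can never exceed 1
           [3, 4, 5, 6]]    # long clause: kept only while relevant
print(prune_learned(learned, {1: False}, relevance_bound=1))  # -> [[1, 2]]
```

Note that the two-literal clause survives any pruning with bound 1, mirroring the observation that short clauses are always retained.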
Returning to our example, suppose we backtrack from the original partial
assignment and find ourselves exploring a new portion of the search
space. The short nogood (5) remains 0-irrelevant
(since we can still use it to unit propagate), while the long
one (6) becomes 4-irrelevant. Using a relevance bound of 3 or
less, the longer nogood would be discarded.
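To make the arithmetic concrete (with invented literals standing in for those of (5) and (6), which are not reproduced here): a two-literal nogood with one false literal is 0-irrelevant, while a six-literal nogood with one false literal is 4-irrelevant and would be dropped under a bound of 3:

```python
def irrelevance(clause, assignment):
    """One less than the number of unvalued or satisfied literals."""
    return sum(1 for lit in clause
               if abs(lit) not in assignment
               or assignment[abs(lit)] == (lit > 0)) - 1

after_backtrack = {1: True}                 # only x1 remains assigned
short_nogood = [-1, 2]                      # stand-in for (5)
long_nogood = [-1, -3, -4, -5, -6, -7]      # stand-in for (6)

print(irrelevance(short_nogood, after_backtrack))  # -> 0: unit propagates x2
print(irrelevance(long_nogood, after_backtrack))   # -> 4: dropped if bound <= 3
```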
The condition that nogoods are retained only until their irrelevance
exceeds a bound suffices to ensure that only a polynomial
number of nogoods are retained at any point. Experimental results [Bayardo and Miranker 1996]
show that relevance-bounded learning is more effective than its
length-bounded counterpart and that, even with a relevance bound of 1, the
results are comparable to those of learning without
restriction.