An important concept in CSPs is that of local consistency. Local consistencies are properties that can be enforced in a CSP, typically by polynomial algorithms, to remove inconsistent values either prior to or during search. Arc consistency is the local consistency most commonly used in existing constraint programming engines. We now give a definition of arc consistency. A value a of a variable x is arc consistent with respect to a binary constraint involving x iff there exists a value for the other variable in the constraint such that the two values together satisfy the constraint. A binary CSP is arc consistent (AC) iff every value of every variable is arc consistent with respect to every constraint. The extension of this property to non-binary constraints is called generalized arc consistency (GAC).
The usefulness of AC processing was recognized early, and as a
result, various AC algorithms for binary constraints have been
proposed in the literature (e.g. AC-3 in mac77, AC-4
in mh86, AC-5 in vhdt92, AC-7 in
bfr95, AC-2001 in br01, AC-3.1 in
yap01). Some of them have been extended to the
non-binary case (e.g. GAC-4 in mm88, GAC-Schema in
br97, GAC-2001 in br01). AC can be
enforced on a binary CSP with optimal $O(ed^2)$ worst-case time
complexity, where $e$ is the number of constraints and $d$ is the
maximum domain size. The worst-case complexity of enforcing GAC on a
non-binary CSP is $O(ekd^k)$, where $k$ is the maximum arity of the
constraints (Bessière & Régin, 1996a).
In this paper we use algorithms AC-2001 and GAC-2001 for theoretical and empirical comparisons with specialized algorithms for the encodings. This is not restrictive, in the sense that any generic AC (and GAC) algorithm can be used instead.
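For concreteness, the following sketch shows one way an AC-2001-style propagation loop for binary constraints can be organized. The data structures and names (ac2001, revise, the last dictionary, the domains and constraints arguments) are illustrative assumptions and not the implementation used in this paper; the point is only that each value remembers the last support found on each arc and resumes the search for a new support after it, which is the idea behind AC-2001's optimal $O(ed^2)$ bound.

    from collections import deque

    def ac2001(domains, constraints):
        # domains:     {variable: sorted list of values}
        # constraints: {(x, y): check} where check(a, b) is True iff the
        #              assignment (x = a, y = b) is allowed; both directed
        #              arcs (x, y) and (y, x) of every binary constraint
        #              are assumed to be present.
        last = {}                    # last[(x, a, y)]: latest support of (x, a) in D(y)
        queue = deque(constraints)   # all directed arcs

        def revise(x, y):
            check = constraints[(x, y)]
            removed = False
            for a in list(domains[x]):
                s = last.get((x, a, y))
                if s is not None and s in domains[y]:
                    continue         # the stored support is still valid
                # Resume the search after the old support: smaller values
                # of y were already tried and rejected earlier.
                support = next((b for b in domains[y]
                                if (s is None or b > s) and check(a, b)), None)
                if support is None:
                    domains[x].remove(a)   # (x, a) has no support on this arc
                    removed = True
                else:
                    last[(x, a, y)] = support
            return removed

        while queue:
            x, y = queue.popleft()
            if revise(x, y):
                if not domains[x]:
                    return False     # domain wipe-out: the problem is inconsistent
                # Deletions in D(x) may remove supports of other variables,
                # so all arcs pointing to x (except (y, x)) are rechecked.
                queue.extend((z, w) for (z, w) in constraints if w == x and z != y)
        return True

For instance, on two variables x and y with domains {1, 2, 3} and the constraint x < y posted as both directed arcs, the call prunes the value 3 from D(x) and the value 1 from D(y).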
Following Debruyne & Bessière db97, we call a local consistency property A stronger than a local consistency property B iff for any problem enforcing A deletes at least the same values as B, and strictly stronger iff it is stronger and there is at least one problem where A deletes more values than B. We call A equivalent to B iff they delete the same values for all problems. Similarly, we call a search algorithm A stronger than a search algorithm B iff for every problem A visits at most the same search tree nodes as B, and strictly stronger iff it is stronger and there is at least one problem where A visits fewer nodes than B. A is equivalent to B iff they visit the same nodes for all problems.
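In symbols, writing del_A(P) for the set of values that enforcing a local consistency property A removes from a problem P (a notation introduced here only for illustration), the above definitions read:

\[
\begin{aligned}
A \text{ is stronger than } B &\iff \forall P:\ \mathrm{del}_B(P) \subseteq \mathrm{del}_A(P),\\
A \text{ is strictly stronger than } B &\iff A \text{ is stronger than } B \text{ and } \exists P:\ \mathrm{del}_B(P) \subsetneq \mathrm{del}_A(P),\\
A \text{ is equivalent to } B &\iff \forall P:\ \mathrm{del}_A(P) = \mathrm{del}_B(P).
\end{aligned}
\]

The relations for search algorithms are analogous, with the sets of deleted values replaced by the sets of visited nodes and the inclusions reversed, since a stronger algorithm visits a subset of the nodes.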
Following Bacchus et al. bcvbw02, the asymptotic cost (or just cost hereafter) of a search algorithm is determined by the worst-case number of nodes that the algorithm has to visit to solve the CSP, and the worst-case time complexity of the algorithm at each node. As in the paper by Bacchus et al. bcvbw02, we use this measure to set asymptotic bounds on the relative performance of various algorithms. For example, if two algorithms A and B always visit the same nodes and A enforces a property at each node with exponentially higher complexity than the property enforced by B, then we say that algorithm A can have an exponentially greater cost than algorithm B.
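As a simple illustration of how this measure is applied (the per-node complexities below are hypothetical): suppose A and B visit the same $O(d^n)$ nodes in the worst case, A enforces a property with worst-case complexity $O(d^k)$ per node, and B one with $O(d^2)$ per node. Then

\[
\mathrm{cost}(A) = O\left(d^{\,n} \cdot d^{\,k}\right)
\qquad \text{and} \qquad
\mathrm{cost}(B) = O\left(d^{\,n} \cdot d^{\,2}\right),
\]

so A can have an exponentially (in k) greater cost than B.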