Problem solutions can exist anywhere in the search space. IDA* expands children in a depth-first manner from left to right, bounded by the cost threshold. If the solution lies on the right side of the tree, far more nodes must be expanded than if it lies on the left side. If information gathered during one iteration can be used to re-order the operators for the next, the performance of IDA* can be greatly improved.
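As a rough illustration (not any particular author's implementation), one cost-bounded, left-to-right IDA* iteration can be sketched as below. The names `successors`, `h`, and `operators` are assumed, problem-specific hooks; the point is that the operator list fixes the left-to-right order in which subtrees are explored.

```python
def ida_star_iteration(state, g, threshold, operators, successors, h, path):
    """One cost-bounded, depth-first IDA* iteration (sketch).

    Operators are tried strictly in the order given by `operators`, so that
    ordering determines which subtrees are explored first. Returns a solution
    path if one is found within the threshold, together with the smallest f
    value that exceeded the threshold (the next iteration's threshold).
    """
    f = g + h(state)
    if f > threshold:
        return None, f                      # cut off: node joins the cut-off set
    if h(state) == 0:
        return path, f                      # goal reached
    next_threshold = float('inf')
    for op in operators:                    # left-to-right order fixed by operator list
        for child, cost in successors(state, op):
            solution, t = ida_star_iteration(
                child, g + cost, threshold, operators, successors, h, path + [op])
            if solution is not None:
                return solution, t
            next_threshold = min(next_threshold, t)
    return None, next_threshold
```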
Powley and Korf suggest two methods of ordering the search space [25]. First, the children of each node can be ordered and expanded by increasing heuristic distance to a goal node. Alternatively, the search algorithm can expand the tree a few levels and sort the frontier set (the set of nodes at that level of the tree) by increasing h value; each iteration then begins from this frontier set, which is updated after every iteration. In both cases, although nodes may share the same f value, nodes with smaller h values have more of their f value determined by the known path cost g, so their estimates are generally more accurate and those nodes are preferred.
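A minimal sketch of the two orderings follows, assuming a hypothetical `children(node)` function that returns successor nodes and an `h` function giving the heuristic estimate for a node:

```python
def order_children_by_h(node, children, h):
    """First method (sketch): expand the children of a node in order of
    increasing heuristic estimate h."""
    return sorted(children(node), key=h)


def build_frontier(root, children, h, depth):
    """Second method (sketch): expand the tree a few levels, then keep the
    frontier set sorted by increasing h; each iteration restarts search from
    this frontier, and the frontier is rebuilt as the search deepens."""
    frontier = [root]
    for _ in range(depth):
        frontier = [c for n in frontier for c in children(n)]
    return sorted(frontier, key=h)
```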
Instead of ordering individual nodes, Cook et al. [10] order the set of operators to guide the next IDA* iteration to the most promising node. The most promising node is the node from the cut-off set (a child node not expanded in the previous iteration) with the smallest h value. As an example, Figure 4 shows a search tree expanded using one iteration of IDA* with operator ordering 0, 1, 2, 3. The path to the most promising leaf node (indicated with a star) is 1 3 2 3 0. The new operator ordering is computed using the order of operators as they appear in this path after removing duplicates. Operators not appearing in the path are added to the end of the operator list, retaining their original relative ordering. Thus the ordering of operators for the example in Figure 4 changes from 0, 1, 2, 3 (try operator 0 first, operator 1 next, operator 2 next, and operator 3 last for every node in the tree) to 1, 3, 2, 0.
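The reordering step itself is straightforward; the sketch below follows the scheme described above, assuming the operators and the path to the most promising node are given as plain lists:

```python
def reorder_operators(operators, promising_path):
    """Compute the next iteration's operator ordering (sketch): operators
    appear in the order they occur along the path to the most promising
    cut-off node, duplicates removed, followed by any operators not on the
    path in their original relative order."""
    new_order = []
    for op in promising_path:
        if op not in new_order:
            new_order.append(op)
    new_order += [op for op in operators if op not in new_order]
    return new_order


# Example from Figure 4: the path 1 3 2 3 0 reorders 0, 1, 2, 3 into 1, 3, 2, 0.
assert reorder_operators([0, 1, 2, 3], [1, 3, 2, 3, 0]) == [1, 3, 2, 0]
```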
This section has described a number of alternative approaches to parallel search. Our theoretical and empirical analyses in the following sections demonstrate that many of the reported approaches offer some benefit under certain conditions, but no single approach is always the most effective at scaling AI algorithms.