
Background

The use of heuristic evaluation functions to guide search through large state spaces is fundamental to the field of Artificial Intelligence. Which state should be visited next in the search for a better, nearer, cheaper goal state? The evaluation function maps features of each state to a real value which assesses the state's promise. For example, in the domain of chess, a classic evaluation function is obtained by summing material advantage weighted by 1 for pawns, 3 for bishops and knights, 5 for rooks, and 9 for queens. The function values provide a basis for decision-making in search. The choice of evaluation function ``critically determines search results'' [Nilsson 1980, p. 74] in popular algorithms for planning and control (A*), game-playing (alpha-beta), and combinatorial optimization (hillclimbing, simulated annealing).
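To make the chess example concrete, here is a minimal sketch of the material-count evaluation described above. The position representation (a flat list of piece letters, uppercase for White and lowercase for Black) is an illustrative assumption, not a detail from the text:

```python
# Piece weights for the classic material evaluation: pawn 1, knight/bishop 3,
# rook 5, queen 9 (kings are not counted).
WEIGHTS = {'P': 1, 'N': 3, 'B': 3, 'R': 5, 'Q': 9}

def material_advantage(pieces):
    """Sum weighted material, counting White (uppercase) positive, Black negative."""
    score = 0
    for piece in pieces:
        value = WEIGHTS.get(piece.upper())
        if value is not None:
            score += value if piece.isupper() else -value
    return score

# White: rook + queen + pawn (15); Black: queen + knight + pawn (13).
print(material_advantage(['R', 'Q', 'P', 'q', 'n', 'p']))  # -> 2
```

In search, such a function ranks successor positions so the most promising one can be expanded next.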

Evaluation functions have generally been designed by human domain experts. The weights {1,3,5,9} in the chess evaluation function given above summarize the judgment of generations of chess players. In combinatorial optimization domains, such as the Traveling Salesperson Problem [Lin and Kernighan 1973], the state space consists of legal candidate solutions, and the domain's objective function (the function which evaluates the quality of a final solution) can itself serve as a natural evaluation function to guide search. However, to get good optimization results, engineers often spend considerable effort tweaking the coefficients of penalty terms and other additions to their objective function. This excerpt, from a book on VLSI layout by simulated annealing [Wong et al. 1988], is typical:
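The TSP case can be sketched as follows: total tour length, the objective function itself, serves directly as the evaluation function for hillclimbing. The random instance and the 2-opt neighborhood move below are illustrative assumptions, not details from the text:

```python
import random

def tour_length(tour, dist):
    """Objective function: total length of the closed tour."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt_hillclimb(dist, steps=2000, seed=0):
    """Hillclimb with random 2-opt moves, guided by the objective itself."""
    rng = random.Random(seed)
    n = len(dist)
    tour = list(range(n))
    best = tour_length(tour, dist)
    for _ in range(steps):
        i, j = sorted(rng.sample(range(n), 2))
        candidate = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]  # reverse a segment
        length = tour_length(candidate, dist)
        if length < best:  # the evaluation function decides which state to visit next
            tour, best = candidate, length
    return tour, best

# A random symmetric Euclidean instance with 20 cities:
rng = random.Random(42)
pts = [(rng.random(), rng.random()) for _ in range(20)]
dist = [[((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5 for (bx, by) in pts]
        for (ax, ay) in pts]
tour, length = two_opt_hillclimb(dist)
print(round(length, 3))
```

Greedy acceptance is the simplest case; simulated annealing differs only in occasionally accepting worsening moves.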

Clearly, the objective function to be minimized is the channel width w. However, w is too crude a measure of the quality of intermediate solutions. Instead, for any valid partition π, the following cost function is used:

  c(π) = w(π)^2 + λ_p · p(π)^2 + λ_U · U(π)

where ... λ_p and λ_U are constants, ...

In this application, the authors hand-tuned the penalty coefficients λ_p and λ_U. (I will later show a machine-learning procedure which achieved much better performance on this task by assigning, counterintuitively, a negative value to λ_U.) The simulated annealing literature is full of similar examples of evaluation functions being engineered for good performance (e.g. [Cohn 1992, Szykman and Cagan 1995]).
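The flavor of such penalty-term engineering can be sketched on a toy problem. The objective, constraint, penalty form, and coefficient value below are all hypothetical (not the channel-routing cost of Wong et al.); the point is only that the hand-chosen coefficient shapes which moves the annealer prefers:

```python
import math
import random

LAMBDA = 10.0  # hand-tuned penalty coefficient (hypothetical value)

def cost(x):
    """Objective x^2 plus a weighted penalty for violating the constraint x >= 1."""
    violation = max(0.0, 1.0 - x)
    return x * x + LAMBDA * violation ** 2

def anneal(steps=5000, seed=0):
    """Minimize cost() by simulated annealing with a linear cooling schedule."""
    rng = random.Random(seed)
    x, c = 5.0, cost(5.0)
    for t in range(steps):
        temp = (1.0 - t / steps) + 1e-6                      # cooling schedule
        cand = x + rng.gauss(0.0, 0.1)                       # random local move
        dc = cost(cand) - c
        if dc <= 0 or rng.random() < math.exp(-dc / temp):   # Metropolis acceptance
            x, c = cand, cost(cand)
    return x, c

x, c = anneal()
print(round(x, 2))  # settles near the penalized optimum, x = 10/11 ≈ 0.91
```

Changing LAMBDA moves the penalized optimum, which is exactly why such coefficients end up being tuned by hand — or, as proposed later, learned.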



Justin A. Boyan
Sat Jun 22 20:49:48 EDT 1996