Points
Definitions
Best-first search
Hill climbing
Problems with hill climbing
An example: local and global heuristic functions
Simulated annealing
The A* procedure
Means-ends analysis
Definitions
Heuristics (Greek heuriskein = find, discover): "the study
of the methods and rules of discovery and invention".
We use our knowledge of the problem to consider some
(not all) successors of the current state (preferably just
one, as with an oracle). This means pruning the state
space, gaining speed, but perhaps missing the solution!
In chess: consider one (apparently best) move, maybe a few -- but
not all possible legal moves.
In the travelling salesman problem: always move to the nearest unvisited
city and give up complete search (the greedy technique). This gives us, in
polynomial time, an approximate solution of an inherently exponential
problem; it can be proven that the approximation error is bounded.
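As a small illustration of the greedy technique, here is a minimal sketch of the nearest-city tour; the distance matrix and the function name greedy_tour are made up for this example.

```python
def greedy_tour(dist, start=0):
    """Visit the nearest unvisited city at every step; O(n^2) time."""
    n = len(dist)
    tour, visited = [start], {start}
    current = start
    while len(tour) < n:
        # pick the nearest city that has not been visited yet
        nxt = min((c for c in range(n) if c not in visited),
                  key=lambda c: dist[current][c])
        tour.append(nxt)
        visited.add(nxt)
        current = nxt
    return tour

if __name__ == "__main__":
    # a made-up 4-city distance matrix, purely for illustration
    dist = [[0, 2, 9, 10],
            [2, 0, 6, 4],
            [9, 6, 0, 3],
            [10, 4, 3, 0]]
    print(greedy_tour(dist))   # [0, 1, 3, 2]
```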
Definitions (2)
For heuristic search to work, we must be able to rank
the children of a node. A heuristic function takes a state
and returns a numeric value -- a composite assessment
of this state. We then choose a child with the best score
(this could be a maximum or minimum).
A heuristic function can help gain or lose a lot, but
finding the right function is not always easy.
The 8-puzzle: how many misplaced tiles? how many
slots away from the correct place? and so on (see the sketch below).
Water jugs: ???
Chess: no simple counting of pieces is adequate.
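For the 8-puzzle, the two scores mentioned above can be written as short heuristic functions. The state representation (a tuple of nine values, 0 standing for the blank) is only an assumption made for this sketch.

```python
GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)

def misplaced_tiles(state, goal=GOAL):
    """How many tiles (ignoring the blank) are not where they should be."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def manhattan(state, goal=GOAL):
    """Sum, over all tiles, of how many slots each tile is away from its place."""
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        j = goal.index(tile)
        total += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return total
```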
Definitions (3)
The principal gain -- often spectacular -- is the
reduction of the state space. For example, the
full tree for Tic-Tac-Toe has 9! leaves. If we
consider symmetries, the tree becomes six
times smaller, but it is still quite large.
With a fairly simple heuristic function we can
get the tree down to 40 states. (More on this
when we discuss games.)
Heuristics can also help speed up exhaustive,
blind search, such as depth-first and breadth-first search.
Best-first search
The algorithm
select a heuristic function (e.g., distance to the goal);
put the initial node(s) on the open list;
repeat
select N, the best node on the open list;
succeed if N is a goal node; otherwise put N on the
closed list and add N's children to the open list;
until
we succeed or the open list becomes empty (we fail);
A closed node reached on a different path is made open.
NOTE: "the best" only means "currently appearing the best"...
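A straightforward way to code the algorithm above is with a priority queue for the open list. This is a minimal sketch: the successors function and the heuristic h are placeholders supplied by the caller, and, unlike the note above, it does not reopen closed nodes.

```python
import heapq
from itertools import count

def best_first_search(start, is_goal, successors, h):
    """Best-first search: always expand the open node whose heuristic
    value h(node) currently appears the best (smallest)."""
    tie = count()                                # breaks ties between equal h values
    open_list = [(h(start), next(tie), start)]   # priority queue ordered by h
    closed = set()
    while open_list:
        _, _, node = heapq.heappop(open_list)    # currently best open node
        if is_goal(node):
            return node                          # success
        if node in closed:
            continue
        closed.add(node)
        for child in successors(node):
            if child not in closed:
                heapq.heappush(open_list, (h(child), next(tie), child))
    return None                                  # open list empty: failure
```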
Hill climbing
This is a greedy algorithm: go as high up as possible
as fast as possible, without looking around too much.
The algorithm
select a heuristic function;
set C, the current node, to the highest-valued initial
node;
loop
select N, the highest-valued child of C;
return C if its value is better than the value of N;
otherwise set C to N;
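The same loop as a short Python sketch; value plays the role of the heuristic function and children generates the successors of the current node, both assumed to be supplied by the caller.

```python
def hill_climbing(start, children, value):
    """Greedy local search: keep moving to the highest-valued child and
    stop as soon as no child improves on the current node."""
    current = start
    while True:
        succ = children(current)
        if not succ:
            return current                    # nowhere to go
        best = max(succ, key=value)
        if value(best) <= value(current):     # no improvement: a (possibly local) maximum
            return current
        current = best
```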
[Figure: a sequence of moves (2a, 2b, 3) leading from the initial state towards the goal state.]
Simulated annealing
The intuition for this search method, an
improvement on hill-climbing, is the
process of gradually cooling a liquid.
There is a schedule of "temperatures" that
changes over time; reaching the "freezing
point" (temperature 0) stops the process.
Instead of a random restart, we use a
more systematic method.
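One common way to realise this idea, sketched here under the usual formulation rather than taken from the slides, is to accept a worse neighbour with a probability that shrinks as the temperature drops. The cooling parameters below are illustrative assumptions.

```python
import math
import random

def simulated_annealing(start, random_neighbour, value,
                        t0=1.0, cooling=0.995, t_min=1e-3):
    """Hill climbing with a temperature schedule: downhill moves are
    sometimes accepted, less and less often as the temperature falls."""
    current, t = start, t0
    while t > t_min:                          # "freezing point" stops the process
        candidate = random_neighbour(current)
        delta = value(candidate) - value(current)
        if delta > 0 or random.random() < math.exp(delta / t):
            current = candidate               # accept uphill always, downhill by chance
        t *= cooling                          # the schedule of temperatures
    return current
```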
The A* procedure
Hill-climbing (and its clever versions) may miss an optimal solution.
Here is a search method that ensures optimality of the solution.
The algorithm
keep a list of partial paths (initially root to root, length 0);
repeat
succeed if the first path P reaches the goal node;
otherwise remove path P from the list;
extend P in all possible ways, add new paths to the list;
sort the list by the sum of two values: the real cost of the path so far,
and an estimate of the remaining distance to the goal;
prune the list by leaving only the shortest path for each node
reached so far;
until
success or the list of paths becomes empty;
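The list of partial paths and the two-part score can be coded directly. In this sketch g is the real cost so far and h the estimate of the remaining distance, both supplied by the caller; successors(node) is assumed to yield (child, step_cost) pairs. The result is optimal provided h never overestimates the remaining distance.

```python
import heapq
from itertools import count

def a_star(start, is_goal, successors, h):
    """A*: always extend the partial path with the smallest f = g + h."""
    tie = count()
    paths = [(h(start), 0, next(tie), start, [start])]   # (f, g, tie, node, path)
    best_g = {start: 0}               # cheapest real cost found so far per node
    while paths:
        f, g, _, node, path = heapq.heappop(paths)
        if is_goal(node):
            return path, g            # success
        if g > best_g.get(node, float("inf")):
            continue                  # a cheaper path to this node is already known
        for child, cost in successors(node):
            g2 = g + cost
            if g2 < best_g.get(child, float("inf")):   # keep only the shortest path per node
                best_g[child] = g2
                heapq.heappush(paths,
                               (g2 + h(child), g2, next(tie), child, path + [child]))
    return None, float("inf")         # list of paths empty: failure
```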
[Figure: a search graph with start node S and nodes A, B, D, E, labelled with edge costs and heuristic estimates. Hill-climbing happens to succeed here.]
[Figure: A*, steps 1-6, showing the list of partial paths from S being extended and re-sorted; each path is scored by its real cost so far plus the heuristic estimate, e.g. 3 + 10.1 = 13.1.]
[Figure: a search tree for the 8-puzzle. A, ..., N: states; m(x) = number of misplaced tiles; d(x) = depth; f(x) = m(x) + d(x).]
Means-ends analysis
The idea is due to Newell and Simon (1957): work by
reducing the difference between states, and so
approaching the goal state. There are procedures,
indexed by differences, that change states.
The General Problem Solver algorithm
set C, the current node, to any initial node;
loop
succeed if the goal state has been reached, otherwise
find the difference between C and the goal state;
choose a procedure that reduces this difference, apply
it to C to produce the new current state;
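A toy sketch of this loop; the difference measure and the table of procedures indexed by differences are illustrative stand-ins, not Newell and Simon's original operators.

```python
def general_problem_solver(start, goal, difference, procedures, max_steps=100):
    """Means-ends analysis: repeatedly pick a procedure indexed by the
    current difference and apply it to reduce that difference.
    `procedures` maps a difference to a state-transforming function."""
    current = start
    for _ in range(max_steps):
        if current == goal:
            return current                    # goal state reached
        diff = difference(current, goal)
        if diff not in procedures:
            return None                       # no procedure reduces this difference
        current = procedures[diff](current)   # apply it to produce the new state
    return None
```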
[Table: procedures indexed by the distance difference in miles (> 1000, 100-1000, 10-100, < 1): Plane, Train, Car, Taxi, Walk; each has a prerequisite such as being at the plane, at the train, at the car, or at the taxi.]