1 Heuristic Search
Topics: Generate and Test; Hill Climbing (Simple Hill Climbing, Steepest-Ascent Hill Climbing, Simulated Annealing); Best First Search (A*, IDA*, Beam Search, AO*)

2 Hill Climbing
Problems: the local maxima problem, the plateau problem, and ridges.
[Figure: search landscapes illustrating a ridge and a plateau relative to the goal.]

3 Simulated Annealing
T = initial temperature
x = initial guess
v = Energy(x)
Repeat while T > final temperature
    Repeat n times
        x' ← Move(x)
        v' = Energy(x')
        If v' < v then accept the new point [x ← x', v ← v']
        Else accept the new point with probability exp(-(v' - v)/T)
    T = 0.95 T  /* for example */
At high temperature, most moves are accepted.
At low temperature, only moves that improve the energy are accepted.
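The scheme above can be sketched in Python. This is a minimal illustration, not from the slides: the toy energy function, the move generator, and the temperature parameters are placeholder choices for the example.

```python
import math
import random

def simulated_annealing(energy, move, x0, t_initial=10.0, t_final=0.01,
                        n_inner=20, cooling=0.95, seed=0):
    """Minimize `energy` following the slide's scheme: an inner loop of n
    random moves, then geometric cooling T = 0.95 T (for example)."""
    rng = random.Random(seed)
    x = x0
    v = energy(x)
    best_x, best_v = x, v
    t = t_initial
    while t > t_final:
        for _ in range(n_inner):
            x_new = move(x, rng)
            v_new = energy(x_new)
            # Accept improving moves always; accept worsening moves with
            # probability exp(-(v' - v)/T), so almost any move is accepted
            # at high T and only improvements at low T.
            if v_new < v or rng.random() < math.exp(-(v_new - v) / t):
                x, v = x_new, v_new
                if v < best_v:
                    best_x, best_v = x, v
        t *= cooling  # geometric cooling schedule, as on the slide
    return best_x, best_v

# Toy usage: minimize (x - 3)^2 starting from 0 with random +/-1 steps.
x, v = simulated_annealing(lambda x: (x - 3) ** 2,
                           lambda x, rng: x + rng.uniform(-1, 1),
                           x0=0.0)
```

Tracking the best point seen is a small addition to the slide's loop: because worsening moves are sometimes accepted, the final x may be worse than the best x visited.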

4 Best First Search
BestFirstSearch(state space Σ = <S,P,I,G,W>, f)
    Open ← {<f(I), I>}
    Closed ← ∅
    while Open ≠ ∅ do
        <wx, x> ← ExtractMin(Open)
        if Goal(x, Σ) then return x
        Insert(x, Closed)
        for y ∈ Child(x, Σ) do
            if y ∉ Closed then Insert(<f(y), y>, Open)
    return fail
Open is implemented as a priority queue.

5 Best First Search Example: The Eight-puzzle
The task is to transform the initial puzzle
    2 8 3
    1 6 4
    7 _ 5
into the goal state
    1 2 3
    8 _ 4
    7 6 5
by shifting numbers up, down, left or right (the underscore marks the blank).

6 Best-First (Heuristic) Search
[Search tree for the eight-puzzle with h = number of misplaced tiles: the start state has h = 4; best-first search repeatedly expands the open state with the smallest h, moving toward the goal but also spending effort on further unnecessary expansions.]

7 Best First Search - A*
<S,P,I,G,W> - state space
h*(x) - the minimum path weight from x to a goal state
h(x) - a heuristic estimate of h*(x)
g*(x) - the minimum path weight from I to x
g(x) - an estimate of g*(x) (i.e., the minimal weight found so far)
f*(x) = g*(x) + h*(x)
f(x) = g(x) + h(x)

8 Search strategies - A*
A*Search(state space Σ = <S,P,I,G,W>, h)
    Open ← {<h(I), 0, I>}
    Closed ← ∅
    while Open ≠ ∅ do
        <fx, gx, x> ← ExtractMin(Open)   [minimum over fx]
        if Goal(x, Σ) then return x
        Insert(<fx, gx, x>, Closed)
        for y ∈ Child(x, Σ) do
            gy = gx + W(x,y)
            fy = gy + h(y)
            if there is no <f, g, y> ∈ Closed with f ≤ fy
                then Insert(<fy, gy, y>, Open)   [replace an existing <f, g, y>, if present]
    return fail

9 Search strategies - A*

10 Best-First Search (A*)
[A* search tree for the eight-puzzle with f = g + h (h = number of misplaced tiles): the start state has f = 0+4; expansion proceeds through states with f-values such as 1+3, 2+3 and 3+2 until the goal 1 2 3 / 8 _ 4 / 7 6 5 is reached with f = 5+0.]

11 Admissible search
Definition: An algorithm is admissible if it is guaranteed to return an optimal solution (one with the minimal possible path weight from the start state to a goal state) whenever a solution exists.

12 Admissibility
What must be true about h for A* to find the optimal path?
[Example graph: from the start, an edge of weight 2 leads to X and an edge of weight 73 leads to Y; h(X) = 100, h(Y) = 1; X and Y each reach a goal node (h = 0) via an edge of weight 1.]

13 Admissibility
What must be true about h for A* to find the optimal path?
A* finds the optimal path if h is admissible; h is admissible when it never overestimates the true remaining cost.
[Same example graph: edges of weight 2 to X and 73 to Y; h(X) = 100, h(Y) = 1; unit edges from X and Y to goal nodes with h = 0.]

14 Admissibility
What must be true about h for A* to find the optimal path?
A* finds the optimal path if h is admissible; h is admissible when it never overestimates.
In this example, h is not admissible: h(X) = 100 overestimates the true remaining cost of 1.
g(X) + h(X) = 2 + 100 = 102
g(Y) + h(Y) = 73 + 1 = 74
A* expands the path through Y first, so the optimal path (through X, total cost 3) is not found!
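Running A* on this graph reproduces the failure. The implementation below is a minimal sketch; a single goal node G stands in for the figure's two goal cells, and h(S) = 0 is an assumption (the slide does not give it, and it does not affect the result).

```python
import heapq

def a_star(start, goal, graph, h):
    """Minimal A* used to replay the slide's example."""
    open_heap = [(h[start], 0, start, [start])]   # (f, g, state, path)
    closed = set()
    while open_heap:
        f, g, x, path = heapq.heappop(open_heap)
        if x == goal:
            return path, g
        if x in closed:
            continue
        closed.add(x)
        for y, w in graph[x]:
            if y not in closed:
                heapq.heappush(open_heap, (g + w + h[y], g + w, y, path + [y]))
    return None, None

# The slide's graph: S->X (weight 2), S->Y (weight 73), unit edges to G.
graph = {'S': [('X', 2), ('Y', 73)], 'X': [('G', 1)], 'Y': [('G', 1)], 'G': []}
h = {'S': 0, 'X': 100, 'Y': 1, 'G': 0}   # h(X) = 100 overestimates: not admissible
path, cost = a_star('S', 'G', graph, h)
# A* commits to Y (f = 73 + 1 = 74 beats f = 2 + 100 = 102 for X) and
# returns the suboptimal path of cost 74 instead of S-X-G with cost 3.
```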

15 Admissibility of a Heuristic Function
Definition: A heuristic function h is said to be admissible if 0 ≤ h(n) ≤ h*(n) for all n ∈ S.

16 Proof of Optimality of A*
Let G be an optimal goal state with path cost f*, and let G2 be a suboptimal goal state with path cost g(G2) > f*. Suppose A* has selected G2 for expansion, and let n be a frontier (leaf) node on the optimal path to G.
Because h is admissible, f* ≥ f(n).
Because n has not been chosen for expansion, f(n) ≥ f(G2).
Therefore f* ≥ f(G2).
Because G2 is a goal state, h(G2) = 0, and thus f(G2) = g(G2).
Therefore f* ≥ g(G2), which contradicts our assumption that g(G2) > f*.

17 Extreme Cases
If we set h = 0, A* becomes uniform-cost search (breadth-first search when all step costs are equal); h = 0 is an admissible heuristic when all costs are non-negative.
If g = 0 for all nodes, A* is guided by h alone (greedy best-first search), similar to steepest-ascent hill climbing.
If both g and h are 0, the search can behave as DFS or BFS, depending on the structure of Open.

18 How to choose a heuristic?
The original problem P is defined by a set of constraints; removing one or more constraints yields a relaxed problem P'.
P is complex; P' is simpler, and the solution space of P is contained in the solution space of P'.
Use the cost of a best solution path from n in P' as h(n) for P.
Admissibility: h* ≥ h, because the cost of a best solution in P ≥ the cost of a best solution in P'.

19 How to choose a heuristic - 8-puzzle
Constraints on moving a tile from cell A to cell B:
cond1: there is a tile on A
cond2: cell B is empty
cond3: A and B are adjacent (horizontally or vertically)
Removing cond2 gives h2 (the sum of Manhattan distances of all misplaced tiles).
Removing cond2 and cond3 gives h1 (the number of misplaced tiles).
Here h2 ≥ h1.
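Both heuristics are easy to compute. The sketch below evaluates them on the slide-5 instance, with 0 encoding the blank; on that state h1 = 4, h2 = 5, consistent with h2 ≥ h1.

```python
def h1(state, goal):
    """Number of misplaced tiles (the blank, encoded as 0, is not counted)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def h2(state, goal):
    """Sum of Manhattan distances of all misplaced tiles (blank excluded)."""
    total = 0
    for i, tile in enumerate(state):
        if tile != 0:
            j = goal.index(tile)   # where this tile belongs
            total += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return total

# The slide-5 instance, read row by row (0 encodes the blank):
start = (2, 8, 3, 1, 6, 4, 7, 0, 5)
goal  = (1, 2, 3, 8, 0, 4, 7, 6, 5)
# h1(start, goal) == 4 and h2(start, goal) == 5
```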

20 How to choose a heuristic - 8-puzzle
h1(start) = 7 h2(start) = 18

21 Performance comparison
[Table: search cost (nodes generated) and effective branching factor for IDS, A*(h1) and A*(h2) at solution depths d = 2, 4, ..., 24; e.g., at depth d = 10, IDS generates 47127 nodes versus 227 for A*(h1).]
Compared: iterative deepening search (IDS) and A* with the two heuristics; 100 random problem instances per depth were used.
Results: A* is consistently and significantly better than IDS; h2 is at least as good as h1, and usually better; the effective branching factor remains fairly constant.
Data are averaged over 100 instances of the 8-puzzle, for various solution lengths.
Note: there are still better heuristics for the 8-puzzle.

22 Comparing heuristic functions
Bad estimates of the remaining distance can cause extra work!
Given two algorithms A1 and A2 with admissible heuristics h1 and h2 (both ≤ h*(n)), if h1(n) < h2(n) for all non-goal nodes n, then A1 expands at least as many nodes as A2.
h2 is then said to be more informed than h1; equivalently, A2 dominates A1.

23 Relaxing optimality requirements
Aε* algorithm. Theorem: If h is admissible, then the Aε* algorithm is ε-admissible, i.e., it finds a path with a cost of at most (1+ε)C*.
Note: hF does not need to be admissible.

24 Relaxing optimality requirements
Weighted evaluation function: fw(n) = (1-w) g(n) + w h(n)
w = 0 - Breadth-First search
w = 1/2 - A*
w = 1 - Best-First search
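A small sketch of the weighted evaluation function; the (g, h) pairs are made-up illustrative values. It shows that w = 1/2 ranks nodes exactly as A*'s f = g + h, since f_(1/2)(n) = (g(n) + h(n))/2.

```python
def f_w(g, h, w):
    """The slide's weighted evaluation function f_w(n) = (1-w) g(n) + w h(n)."""
    return (1 - w) * g + w * h

# Hypothetical (g, h) pairs for three nodes: at w = 1/2, f_w halves g + h,
# so the ordering is identical to A*'s; w = 0 orders by g alone, w = 1 by h alone.
nodes = [(3, 7), (5, 2), (1, 9)]
order_a_star = sorted(nodes, key=lambda n: n[0] + n[1])
order_half   = sorted(nodes, key=lambda n: f_w(n[0], n[1], 0.5))
# order_a_star == order_half
```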

25 IDA*: Iterative deepening A*
To reduce the memory requirements at the expense of some additional computation time, combine uninformed iterative deepening search with A* Use an f-cost limit instead of a depth limit

26 Iterative Deepening A*
In the first iteration, we determine an "f-cost limit" f'(n0) = g'(n0) + h'(n0) = h'(n0), where n0 is the start node. We expand nodes depth-first and backtrack whenever f'(n) of an expanded node n exceeds the cut-off value. If this search does not succeed, we determine the lowest f'-value among the nodes that were visited but not expanded, use it as the new limit, and run another depth-first search. This procedure is repeated until a goal node is found.

27 IDA* Algorithm - Top level
function IDA*(problem) returns a solution
    root := Make-Node(Initial-State[problem])
    f-limit := f-Cost(root)
    loop do
        solution, f-limit := DFS-Contour(root, f-limit)
        if solution is not null then return solution
        if f-limit = infinity then return failure
    end

28 IDA* contour expansion
function DFS-Contour(node, f-limit) returns a solution sequence and a new f-limit
    if f-Cost[node] > f-limit then return (null, f-Cost[node])
    if Goal-Test[problem](State[node]) then return (node, f-limit)
    next-f := infinity
    for each node s in Successors(node) do
        solution, new-f := DFS-Contour(s, f-limit)
        if solution is not null then return (solution, f-limit)
        next-f := Min(next-f, new-f)
    end
    return (null, next-f)
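Putting the two routines together, here is a Python sketch of IDA* applied to the eight-puzzle instance of slide 5, with the misplaced-tiles heuristic and unit step costs (assumptions consistent with the earlier slides).

```python
def ida_star(start, goal, h):
    """IDA* sketch for the eight-puzzle (states are 9-tuples, 0 = blank):
    depth-first contours bounded by an f-cost limit that is raised to the
    smallest f-value that exceeded it in the previous contour."""
    def successors(state):
        i = state.index(0)                 # position of the blank
        r, c = divmod(i, 3)
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < 3 and 0 <= nc < 3:
                j = nr * 3 + nc
                s = list(state)
                s[i], s[j] = s[j], s[i]    # slide the adjacent tile
                yield tuple(s)

    def contour(path, g, f_limit):
        node = path[-1]
        f = g + h(node, goal)
        if f > f_limit:
            return None, f                 # cut off; report the exceeded f
        if node == goal:
            return list(path), f_limit
        next_f = float("inf")              # lowest f beyond the current limit
        for s in successors(node):
            if s not in path:              # avoid cycles on the current path
                solution, new_f = contour(path + [s], g + 1, f_limit)
                if solution is not None:
                    return solution, f_limit
                next_f = min(next_f, new_f)
        return None, next_f

    f_limit = h(start, goal)
    while True:
        solution, f_limit = contour([start], 0, f_limit)
        if solution is not None:
            return solution
        if f_limit == float("inf"):
            return None                    # failure

def misplaced(state, goal):
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

# The slide-5 instance (0 = blank): the first contour (limit 4) fails,
# the second (limit 5) reaches the goal after 5 moves.
start = (2, 8, 3, 1, 6, 4, 7, 0, 5)
goal  = (1, 2, 3, 8, 0, 4, 7, 6, 5)
solution = ida_star(start, goal, misplaced)
```

Because the misplaced-tiles heuristic is admissible, the returned solution is optimal: a path of 6 states (the start plus 5 moves), matching the 5+0 goal node on the slides.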

29 IDA* on a graph example
[Search tree: root S with f = 11.9 and successors A (f = 13.4) and D (f = 12.4); deeper levels expand B, C, E and F with f-values between 12.9 and 25.4 and reach goal nodes G with f-values such as 13.0.]

30 IDA* trace
Iteration 1 (limit 11.9): IDA(S, 11.9); IDA(A, 11.9) → 13.4; IDA(D, 11.9) → 12.4. New limit: 12.4.
Iteration 2 (limit 12.4): IDA(S, 12.4); IDA(A, 12.4) → 13.4; IDA(D, 12.4) → 12.9. New limit: 12.9.
Iteration 3 (limit 12.9): IDA(S, 12.9); IDA(A, 12.9) → 19.4; IDA(D, 12.9); IDA(E, 12.9) → 12.9; IDA(F, 12.9) → 13.0.

31 IDA*: Iteration 1, Limit = 4
[Depth-first contour on the eight-puzzle: the start state (f = 0+4) and its successors with f = 1+5, 1+3 and 1+5 are generated; below the f = 1+3 state, the successors have f = 2+3, 2+3 and 2+4, all exceeding the limit 4, so the contour ends.]

32 IDA*: Iteration 2, Limit = 5
[The contour is repeated with limit 5: the search proceeds from the start (f = 0+4) through states with f = 1+3, 2+3, 3+2 and 4+1 and reaches the goal 1 2 3 / 8 _ 4 / 7 6 5 with f = 5+0.]

