
1 HEURISTIC SEARCH

2 Portion of the state space for tic-tac-toe. Luger: Artificial Intelligence, 5th edition. © Pearson Education Limited, 2005

3 Heuristically reduced state space for tic-tac-toe. Luger: Artificial Intelligence, 5th edition. © Pearson Education Limited, 2005

4 First three levels of the tic-tac-toe state space reduced by symmetry. Luger: Artificial Intelligence, 5th edition. © Pearson Education Limited, 2005

5 The “most wins” heuristic applied to the first children in tic-tac-toe. Luger: Artificial Intelligence, 5th edition. © Pearson Education Limited, 2005

6 Is it possible to completely cover with non-overlapping dominos an 8x8 grid having two diagonally opposite corners removed?

7

8 Issues: 1) Representation (organization) 2) Algorithm (process)

9 The local maximum problem for hill-climbing with 3-level look-ahead. Luger: Artificial Intelligence, 5th edition. © Pearson Education Limited, 2005
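
As a reference point for the figure, here is a minimal sketch of steepest-ascent hill climbing with a fixed look-ahead horizon (the helper names `successors` and `evaluate` and the horizon handling are assumptions, not Luger's code). Because the climb keeps no record of alternatives, it halts at the first state whose look-ahead shows no improvement, i.e., a local maximum.

```python
def hill_climb(start, successors, evaluate, lookahead=3):
    """Steepest-ascent hill climbing with a fixed look-ahead horizon (sketch)."""
    def best_within(state, depth):
        # Best heuristic value reachable from `state` within `depth` further moves.
        kids = successors(state)
        if depth == 0 or not kids:
            return evaluate(state)
        return max(best_within(k, depth - 1) for k in kids)

    current = start
    while True:
        kids = successors(current)
        if not kids:
            return current
        best_child = max(kids, key=lambda k: best_within(k, lookahead - 1))
        if best_within(best_child, lookahead - 1) <= evaluate(current):
            return current      # stuck: nothing better visible within the horizon
        current = best_child
```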

10

11 Priority queue (see the sketch below):
– not necessary if states are added to open in sorted order
– not necessary if using a monotone heuristic
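
Slide 11 treats the open list as a priority queue ordered by the evaluation function. Below is a minimal sketch of best-first search along those lines, assuming hypothetical `goal_test`, `successors`, and `f` hooks; it is not Luger's exact best_first_search procedure, and path reconstruction is omitted.

```python
import heapq
import itertools

def best_first_search(start, goal_test, successors, f):
    """Best-first search with OPEN as a priority queue ordered by f(state)."""
    counter = itertools.count()                      # tie-breaker so states are never compared
    open_heap = [(f(start), next(counter), start)]   # OPEN: frontier states, best f first
    closed = set()                                   # CLOSED: states already expanded
    while open_heap:
        _, _, state = heapq.heappop(open_heap)
        if goal_test(state):
            return state
        if state in closed:
            continue
        closed.add(state)
        for child in successors(state):
            if child not in closed:
                heapq.heappush(open_heap, (f(child), next(counter), child))
    return None
```

The heap keeps the lowest-f state at the front, so lower scores are preferred, matching the assumption on slide 12.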

12 Heuristic search of a hypothetical state space. Luger: Artificial Intelligence, 5th edition. © Pearson Education Limited, 2005 (assume P3 is the goal state and lower scores are preferred)

13 A trace of the execution of best_first_search for the previous figure. Luger: Artificial Intelligence, 5th edition. © Pearson Education Limited, 2005

14 Heuristic search of a hypothetical state space with open and closed states highlighted. Luger: Artificial Intelligence, 5th edition. © Pearson Education Limited, 2005

15 Eight Puzzle…

16 Consider this screen shot of a brute-force breadth-first search in progress for the 15-puzzle (30 shuffle moves), where the optimal solution is known (by another method) to be located at depth 26. How long will it take to find the solution?
Compare:
– best-first with depth factor: 26 moves to solution (7725 in CLOSED; 7498 left in OPEN)
– best-first without depth factor: 48 moves to solution (272 in CLOSED; 332 in OPEN)

17 The start state, first moves, and goal state for an example 8-puzzle. Luger: Artificial Intelligence, 5th edition. © Pearson Education Limited, 2005

18 Three heuristics applied to states in the 8-puzzle. Luger: Artificial Intelligence, 5th edition. © Pearson Education Limited, 2005
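
A small sketch of two of the heuristics the figure compares: tiles out of place and summed tile distance (Manhattan distance). The exact goal layout used in the figure is an assumption here, and 0 marks the blank.

```python
# Assumed goal layout (blank shown as 0).
GOAL = ((1, 2, 3),
        (8, 0, 4),
        (7, 6, 5))

def tiles_out_of_place(state, goal=GOAL):
    """Heuristic 1: count of tiles not on their goal square (blank excluded)."""
    return sum(1 for r in range(3) for c in range(3)
               if state[r][c] != 0 and state[r][c] != goal[r][c])

def manhattan_distance(state, goal=GOAL):
    """Heuristic 2: sum over tiles of horizontal + vertical distance to the goal square."""
    goal_pos = {goal[r][c]: (r, c) for r in range(3) for c in range(3)}
    total = 0
    for r in range(3):
        for c in range(3):
            tile = state[r][c]
            if tile != 0:
                gr, gc = goal_pos[tile]
                total += abs(r - gr) + abs(c - gc)
    return total
```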

19

20 The heuristic f applied to states in the 8-puzzle. Luger: Artificial Intelligence, 5th edition. © Pearson Education Limited, 2005 (but should use a better heuristic…)

21 State space generated in heuristic search of the 8-puzzle graph. Luger: Artificial Intelligence, 5th edition. © Pearson Education Limited, 2005 Notice that Luger is using the “Tiles out of place” heuristic.

22 State space generated in heuristic search of the 8-puzzle graph. The successive stages of open and closed that generate this graph are: Luger: Artificial Intelligence, 5th edition. © Pearson Education Limited, 2005

23 Open and closed as they appear after the 3rd iteration of heuristic search. Luger: Artificial Intelligence, 5th edition. © Pearson Education Limited, 2005

24 Heuristic search with inclusion of a depth factor (heuristic here is # of tiles out of place) Nilsson
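
The depth factor is the g(n) term of f(n) = g(n) + h(n). A tiny illustrative sketch (hypothetical names); to use it with the best-first sketch from slide 11, each OPEN entry would also need to carry its depth so that g(n) is available when f is computed.

```python
def f(state, depth, h):
    """Evaluation with a depth factor: f(n) = g(n) + h(n), where g(n) is the
    state's depth (moves from the start) and h is a heuristic such as
    tiles out of place."""
    return depth + h(state)
```

Dropping the depth term gives the "without depth factor" variant compared on slide 16: it may find a solution after expanding fewer states, but the solution it finds can be far from optimal.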

25 i.e., as long as h(n) does not overestimate the actual cost to the goal (an overestimate could prevent the search from finding the optimal path). Luger: Artificial Intelligence, 5th edition. © Pearson Education Limited, 2005

26 (i.e., locally admissible) Note: if the graph search algorithm for best-first search is used with a monotonic heuristic, any new path found to an already-visited state cannot be shorter than the one already found.

27 Note: the best h(n) can be is the actual cost of the optimal path from node n to the goal. If h1(n) < h2(n), then h1(n) is actually underestimating the cost more than h2(n) and is, therefore, less informed. Luger: Artificial Intelligence, 5th edition. © Pearson Education Limited, 2005
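
Slides 25–27 can be summarized with the standard definitions, restated here for reference (h* denotes the cost of the optimal path from a node to the goal):

```latex
\begin{align*}
\textbf{Admissibility:}\quad & h(n) \le h^{*}(n) \ \text{for every node } n. \\[4pt]
\textbf{Monotonicity (consistency):}\quad & h(n_i) \le \operatorname{cost}(n_i, n_j) + h(n_j)
  \ \text{for every successor } n_j \text{ of } n_i, \ \text{and } h(\mathrm{goal}) = 0. \\[4pt]
\textbf{Informedness:}\quad & h_2 \text{ is more informed than } h_1 \text{ if both are admissible and } \\
  & h_1(n) \le h_2(n) \ \text{for all } n \ \text{(with strict inequality for some } n\text{)}.
\end{align*}
```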

28 Comparison of state space searched using heuristic search with space searched by breadth-first search. The proportion of the graph searched heuristically is shaded. The optimal search selection is in bold. Heuristic used is f(n) = g(n) + h(n) where h(n) is tiles out of place. Luger: Artificial Intelligence, 5th edition. © Pearson Education Limited, 2005

29 Number of nodes generated as a function of branching factor, B, for various lengths, L, of solution paths. The relating equation is T = (B^(L+1) – 1)/(B – 1), adapted from Nilsson (1980)*. * Note: the equation in the text is incorrect, but the one shown here is correct for a root node at depth 0. Luger: Artificial Intelligence, 5th edition. © Pearson Education Limited, 2005
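
A quick way to get a feel for these numbers is to evaluate the formula directly; a small sketch (the branching factors below are only illustrative, chosen with the depth-26 15-puzzle question from slide 16 in mind):

```python
def nodes_generated(B, L):
    """Total nodes in a uniform tree with branching factor B and solution length L
    (root at depth 0): T = (B**(L + 1) - 1) / (B - 1)."""
    return (B ** (L + 1) - 1) // (B - 1)

print(nodes_generated(2, 26))   # 134,217,727
print(nodes_generated(3, 26))   # 3,812,798,742,493 -- roughly 3.8e12
```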

30 Informal plot of cost of searching and cost of computing heuristic evaluation against informedness of heuristic, adapted from Nilsson (1980). Luger: Artificial Intelligence, 5th edition. © Pearson Education Limited, 2005 (i.e., cost to run the heuristic) (i.e., cost to traverse the search graph)

31 Two-“Person” Games

32 State space for a variant of Nim. Each state partitions the seven matches into one or more piles; a legal move splits one pile into two piles of different sizes. The first player who cannot make a legal move wins. Luger: Artificial Intelligence, 5th edition. © Pearson Education Limited, 2005

33 Exhaustive minimax for the game of Nim. Bold lines indicate forced win for MIN. Each node is marked with its derived value (0 or 1) under minimax. Luger: Artificial Intelligence, 5th edition. © Pearson Education Limited, 2005 Note: Figure is slightly different from text...
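
For a game this small the minimax values can be computed exhaustively. The sketch below assumes the reading of the rules used here: a move splits one pile into two non-empty piles of different sizes, and the player who cannot move wins. Which of MAX or MIN moves first in the figure is also an assumption, so both values are printed.

```python
from functools import lru_cache

def split_moves(state):
    """All positions reachable by splitting one pile into two unequal, non-empty piles."""
    result = set()
    for i, pile in enumerate(state):
        for small in range(1, pile // 2 + 1):
            large = pile - small
            if small != large:
                rest = state[:i] + state[i + 1:]
                result.add(tuple(sorted(rest + (small, large))))
    return result

@lru_cache(maxsize=None)
def nim_value(state, max_to_move):
    """Exhaustive minimax: 1 if MAX can force a win from `state`, else 0."""
    children = split_moves(state)
    if not children:                       # player to move has no legal move -> that player wins
        return 1 if max_to_move else 0
    child_values = [nim_value(s, not max_to_move) for s in children]
    return max(child_values) if max_to_move else min(child_values)

# Value of the seven-match start position, for either player moving first.
print(nim_value((7,), True), nim_value((7,), False))
```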

34 Getting ready to apply minimax to a hypothetical state space. Leaf states show heuristic values. Luger: Artificial Intelligence, 5th edition. © Pearson Education Limited, 2005

35 Applying minimax to a hypothetical state space. Leaf states show heuristic values; internal states show backed-up values. Luger: Artificial Intelligence, 5th edition. © Pearson Education Limited, 2005
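
The backed-up values in slide 35 come from the usual recursive definition: leaf (frontier) states get heuristic values, MAX nodes take the maximum of their children, and MIN nodes take the minimum. A minimal sketch, where `children` and `evaluate` are hypothetical hooks for the move generator and heuristic:

```python
def minimax(node, depth, maximizing, children, evaluate):
    """Back up heuristic values from the frontier of an n-ply look-ahead (sketch)."""
    kids = children(node)
    if depth == 0 or not kids:
        return evaluate(node)              # leaf of the look-ahead: heuristic value
    values = [minimax(k, depth - 1, not maximizing, children, evaluate) for k in kids]
    return max(values) if maximizing else min(values)   # backed-up value
```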

36 Heuristic measuring conflict applied to states of tic-tac-toe. Luger: Artificial Intelligence, 5th edition. © Pearson Education Limited, 2005

37 Two-ply minimax applied to the opening move of tic-tac-toe, from Nilsson (1971). Luger: Artificial Intelligence, 5th edition. © Pearson Education Limited, 2005

38 Two-ply minimax, and one of two possible MAX second moves, from Nilsson (1971). Luger: Artificial Intelligence, 5th edition. © Pearson Education Limited, 2005

39 Two-ply minimax applied to X’s move near the end of the game, from Nilsson (1971). Luger: Artificial Intelligence, 5th edition. © Pearson Education Limited, 2005

40 Alpha-Beta Pruning. Let’s reconsider the application of minimax to the hypothetical state space shown a few slides back... Luger: Artificial Intelligence, 5th edition. © Pearson Education Limited, 2005

41 Luger: Artificial Intelligence, 5th edition. © Pearson Education Limited, 2005

42–52 (figure-only slides; no transcript text)

53 Alpha-beta pruning applied to the previous minimax state space search. States without numbers are not evaluated. Luger: Artificial Intelligence, 5th edition. © Pearson Education Limited, 2005

54 Alpha-Beta Search
As successors of a node are given backed-up values, the bounds on the backed-up values can change, but:
– Alpha values of MAX nodes can never decrease
– Beta values of MIN nodes can never increase
Rules for discontinuing search:
– Discontinue search below any MIN node having a beta value ≤ the alpha value of any of its MAX node ancestors. The final backed-up value of this MIN node can be set to its beta value.
– Discontinue search below any MAX node having an alpha value ≥ the beta value of any of its MIN node ancestors. The final backed-up value of this MAX node can be set to its alpha value.
Rules for computing alpha and beta values:
– The alpha value of a MAX node is set equal to the current largest final backed-up value of its successors.
– The beta value of a MIN node is set equal to the current smallest final backed-up value of its successors.
(A code sketch of these rules follows.)
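
The rules above translate directly into the usual depth-first implementation. A minimal sketch; `children` and `evaluate` are hypothetical hooks, and the cutoff tests use the ≤ / ≥ comparisons from the rules above.

```python
def alpha_beta(node, depth, alpha, beta, maximizing, children, evaluate):
    """Minimax with alpha-beta cutoffs (sketch).
    alpha: best value a MAX ancestor can already guarantee.
    beta:  best value a MIN ancestor can already guarantee."""
    kids = children(node)
    if depth == 0 or not kids:
        return evaluate(node)
    if maximizing:
        value = float("-inf")
        for k in kids:
            value = max(value, alpha_beta(k, depth - 1, alpha, beta, False, children, evaluate))
            alpha = max(alpha, value)      # alpha of a MAX node never decreases
            if alpha >= beta:              # cutoff: the MIN ancestor will never allow this line
                break
        return value
    else:
        value = float("inf")
        for k in kids:
            value = min(value, alpha_beta(k, depth - 1, alpha, beta, True, children, evaluate))
            beta = min(beta, value)        # beta of a MIN node never increases
            if beta <= alpha:              # cutoff: the MAX ancestor will never allow this line
                break
        return value

# Initial call from the root, MAX to move:
# alpha_beta(root, depth, float("-inf"), float("inf"), True, children, evaluate)
```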

55 Mini-Max

56

57 Mini-Max with Alpha-Beta Cutoffs

58

59 Processing sequence for α-β assignment and visiting leaf nodes:
Node      α-β values
A (max)   α = −∞, β = +∞
B (min)   α = −∞, β = +∞
E (max)   α = −∞, β = +∞
L (leaf visited)
E (max)   α = 2, β = +∞
M (leaf visited)
E (max)   α = 10, β = +∞
B (min)   α = −∞, β = 10
F (max)   α = −∞, β = 10
N (leaf visited)
F (max)   α = 8, β = 10
O (leaf visited)
F (max)   α = 8, β = 10
B (min)   α = −∞, β = 8
A (max)   α = 8, β = +∞
C (min)   α = 8, β = +∞
G (max)   α = 8, β = +∞
P (leaf visited)
G (max)   α = 8, β = +∞
Q (leaf visited)
G (max)   α = 8, β = +∞
C (min)   α = 8, β = 8 (search terminated below C because β at C ≤ α at A)
A (max)   α = 8, β = +∞
D (min)   α = 8, β = +∞
J (max)   α = 8, β = +∞
V (leaf visited)
J (max)   α = 8, β = +∞
W (leaf visited)
J (max)   α = 8, β = +∞
D (min)   α = 8, β = 5 (search terminated below D because β at D ≤ α at A)

60 Another Example of Mini-Max with Alpha-Beta Cutoffs

61 There is no need to search below a MAX node whose α value is greater than the parent node’s β value (because the MIN node above wouldn’t accept anything greater than its β value), nor to search below a MIN node (e.g., node C) whose β value is less than the parent’s α value (because the MAX node above wouldn’t accept anything less than its α value).

62 Processing Sequence for Previous Search

