
1 Introduction to Artificial Intelligence Heuristic Search Ruth Bergman Fall 2002

2 Search Strategies
Uninformed search (= blind search)
– has no information about the number of steps or the path cost from the current state to the goal
Informed search (= heuristic search)
– has some domain-specific information
– we can use this information to speed up the search
– e.g. Bucharest is southeast of Arad
– e.g. the number of tiles that are out of place in an 8-puzzle position
– e.g. for the missionaries and cannibals problem, select moves that move people across the river quickly

3 Heuristic Search
Suppose that we have one piece of information: a heuristic function
– h(n) = 0 if n is a goal node
– h(n) > 0 if n is not a goal node
We can think of h(n) as a “guess” of how far n is from the goal.

Best-First-Search(state, h)
  nodes <- MakePriorityQueue(state, h(state))
  while (nodes != empty)
    node = pop(nodes)
    if (GoalTest(node) succeeds) return node
    for each child in succ(node)
      nodes <- push(child, h(child))
  return failure
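The pseudocode above translates almost directly into Python. Below is a minimal sketch of greedy best-first search, assuming caller-supplied succ(state) and goal_test(state) functions (hypothetical names, not part of the slides); like the slide's version, it does no repeated-state checking.

import heapq
import itertools

def best_first_search(start, h, succ, goal_test):
    """Greedy best-first search: always expand the node with the lowest h value."""
    counter = itertools.count()  # tie-breaker so the heap never compares states
    nodes = [(h(start), next(counter), start)]
    while nodes:
        _, _, node = heapq.heappop(nodes)
        if goal_test(node):
            return node
        for child in succ(node):
            heapq.heappush(nodes, (h(child), next(counter), child))
    return None  # failure: the queue is exhausted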

4 Heuristics: Example
Travel: h(n) = distance(n, goal)
[Map of Romania with cities Oradea, Zerind, Arad, Sibiu, Fagaras, Rimnicu Vilcea, Pitesti, Timisoara, Lugoj, Mehadia, Dobreta, Craiova, Neamt, Iasi, Vaslui, Urziceni, Bucharest, Giurgiu, Hirsova, Eforie]

5 Heuristics: Example
8-puzzle: h(n) = tiles out of place

h(n) = 3 for the position (_ marks the blank):
1 2 3
8 _ 6
7 5 4

6 Example - cont
[Search tree: the h(n) = 3 position above is expanded; its successors are shown with values h(n) = 2 and h(n) = 4]

7 [Search tree, continued: the h(n) = 2 node is expanded next; its successors are labeled h(n) = 1 and h(n) = 3]

8 [Search tree, continued: the h(n) = 1 node is expanded; among its successors are nodes labeled h(n) = 2, h(n) = 3, and h(n) = 0, the goal]

9 Best-First-Search Performance
Completeness
– Complete if either the depth is finite or there is a minimum drop in h value for each operator
Time complexity
– Depends on how good the heuristic function is
– A “perfect” heuristic function would lead the search directly to the goal
– We rarely have a “perfect” heuristic function
Space complexity
– Maintains the fringe of the search in memory
– High storage requirement
Optimality
– Non-optimal solutions: suppose the heuristic drops to one everywhere except along the path on which the solution lies
[Figure: a path with h values 3, 2, 1 leading to the goal x, beside off-path nodes that all have h = 1]

10 Iterative Improvement Algorithms
Start with a complete configuration and make modifications to improve its quality
Consider the states laid out on the surface of a landscape
Keep track of only the current state => a simplification of Best-First-Search
Do not look ahead beyond the immediate neighbors of that state
– Ex: an amnesiac climbing to a summit in thick fog

11 Iterative Improvement Basic Principle

12 Hill-Climbing
A simple loop that continually moves in the direction of increasing value; it does not maintain a search tree, so the node data structure need only record the state and its evaluation
Always try to make changes that improve the current state
Steepest ascent: pick the highest-valued next state
“Like climbing Everest in thick fog with amnesia”

Hill-Climbing(state, h)
  current = state
  do forever
    next = maximum valued successor of current
    if (value(next) < value(current)) return current
    current = next
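A minimal Python sketch of steepest-ascent hill climbing, assuming caller-supplied value and succ functions (hypothetical names). One caution: the slide's strict < test keeps moving on a plateau; the sketch below uses <= so it is guaranteed to halt.

def hill_climbing(state, value, succ):
    """Steepest-ascent hill climbing: follow the best successor while it improves."""
    current = state
    while True:
        neighbors = list(succ(current))
        if not neighbors:
            return current
        best = max(neighbors, key=value)   # steepest ascent
        if value(best) <= value(current):  # no uphill move left: local maximum
            return current
        current = best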

13 Drawbacks
Local maxima: the search halts at a local maximum
Plateaux: the search wanders in a random walk
Ridges: the search oscillates from side to side, limiting progress
Random-Restart Hill-Climbing
– conducts a series of hill-climbing searches from randomly generated initial states (a sketch follows)
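Random-restart hill climbing is a thin wrapper around the previous sketch. Here random_state() is an assumed caller-supplied generator of initial states, and the restart count is arbitrary.

def random_restart_hill_climbing(hill_climb, random_state, value, restarts=25):
    """Run hill climbing from several random initial states and keep the best result."""
    results = (hill_climb(random_state()) for _ in range(restarts))
    return max(results, key=value)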

14 Hill-Climbing Performance
Completeness
– Not complete; does not use a systematic search method
Time complexity
– Depends on the heuristic function
Space complexity
– Very low storage requirement
Optimality
– Non-optimal solutions
– Often results in a locally optimal solution

15 Simulated-Annealing
Takes some uphill steps to escape local minima
Instead of picking the best move, it picks a random move
If the move improves the situation, it is executed; otherwise, it is made with some probability less than 1
Physical analogy with the annealing process:
– allowing a liquid to cool gradually until it freezes
The heuristic value is the energy, E
A temperature parameter, T, controls the speed of convergence

16 Simulated-Annealing Algorithm

Simulated-Annealing(state, schedule)
  current = state
  for t = 1, 2, …
    T = schedule(t)
    if T = 0 return current
    next = a randomly selected successor of current
    ΔE = value(next) - value(current)
    if (ΔE > 0) current = next
    else current = next with probability e^(ΔE/T)
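The same algorithm in Python, as a sketch under the usual assumptions (value, succ, and schedule are hypothetical caller-supplied functions, and every state is assumed to have at least one successor). The acceptance probability e^(ΔE/T) follows the pseudocode above.

import itertools
import math
import random

def simulated_annealing(state, value, succ, schedule):
    """Simulated annealing: accept every uphill move, and a downhill move
    with probability exp(dE / T), where T falls according to the schedule."""
    current = state
    for t in itertools.count(1):
        T = schedule(t)
        if T == 0:
            return current
        nxt = random.choice(list(succ(current)))
        dE = value(nxt) - value(current)
        if dE > 0 or random.random() < math.exp(dE / T):
            current = nxt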

17 Simulated Annealing
The schedule determines the rate at which the temperature is lowered
If the schedule lowers T slowly enough, the algorithm will find a global optimum
A high temperature T is characterized by a large proportion of accepted uphill moves, whereas at a low temperature only downhill moves are accepted
=> If a suitable annealing schedule is chosen, simulated annealing has been found capable of finding a good solution, though this is not guaranteed to be the absolute optimum
[Plot: solution quality over time under the linear schedule T = 100 - t*5, annotated “Probability > 0.9”]
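The slide's plot uses a linear schedule. As a sketch, it can be written as a function that plugs into the simulated_annealing code above; the clamp at zero makes the loop terminate.

def linear_schedule(t, T0=100, rate=5):
    """Linear cooling, matching the slide's T = 100 - t*5; clamped at zero."""
    return max(0, T0 - rate * t)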

18 Beam Search
Overcomes the storage complexity of Best-First-Search
Maintains the k best nodes in the fringe of the search tree (sorted by the heuristic function)
When k = 1, beam search is equivalent to Hill-Climbing
When k is infinite, beam search is equivalent to Best-First-Search
If you add a check to avoid repeated states, the memory requirement remains high
Incomplete: the search may delete the path to the solution

19 Beam Search Algorithm

Beam-Search(state, h, k)
  nodes <- MakePriorityQueue(state, h(state))
  while (nodes != empty)
    node = pop(nodes)
    if (GoalTest(node) succeeds) return node
    for each child in succ(node)
      nodes <- push(child, h(child))
    if size(nodes) > k, delete all but the first k items in nodes
  return failure
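A Python sketch of the algorithm, reusing the assumptions of the earlier best-first sketch (succ and goal_test are hypothetical caller-supplied functions). The only change is the pruning step after the children are pushed.

import heapq
import itertools

def beam_search(start, h, succ, goal_test, k):
    """Best-first search whose fringe is pruned to the k lowest-h nodes."""
    counter = itertools.count()  # tie-breaker so the heap never compares states
    nodes = [(h(start), next(counter), start)]
    while nodes:
        _, _, node = heapq.heappop(nodes)
        if goal_test(node):
            return node
        for child in succ(node):
            heapq.heappush(nodes, (h(child), next(counter), child))
        nodes = heapq.nsmallest(k, nodes)  # keep the k best; a sorted list is a valid heap
    return None  # failure: the solution path may have been pruned away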

20 Search Performance
Heuristic 1: tiles out of place
Heuristic 2: Manhattan distance*
*Manhattan distance = total number of horizontal and vertical moves required to move all tiles to their position in the goal state from their current position

8-Square (_ marks the blank):
State:      Goal:
3 2 5       _ 1 2
7 1 _       3 4 5
4 6 8       6 7 8

h1 = 7
h2 = 2+1+1+2+1+1+1+0 = 9

=> The choice of heuristic is critical to the performance of a heuristic search algorithm.
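Both heuristics are a few lines of Python. The sketch below represents a board as a flat 9-tuple read row by row, with 0 for the blank (an encoding chosen here for illustration); it reproduces the slide's h1 = 7 and h2 = 9.

def misplaced_tiles(state, goal):
    """h1: count tiles (not the blank) that are out of place."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def manhattan_distance(state, goal):
    """h2: sum of the horizontal + vertical moves each tile needs."""
    total = 0
    for tile in range(1, 9):  # skip the blank
        i, j = state.index(tile), goal.index(tile)
        total += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return total

state = (3, 2, 5, 7, 1, 0, 4, 6, 8)  # the slide's 8-square position
goal = (0, 1, 2, 3, 4, 5, 6, 7, 8)
print(misplaced_tiles(state, goal))     # 7
print(manhattan_distance(state, goal))  # 9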

