
1 Informed search algorithms Chapter 4

2 Outline –Best-first search –Greedy best-first search –A* search –Heuristics –Local search algorithms –Hill-climbing search

3 Best-first search Idea: use an evaluation function f(n) for each node – an estimate of "desirability". Expand the most desirable unexpanded node. Implementation: order the nodes in the fringe by decreasing desirability. Special cases: –greedy best-first search –A* search
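The implementation line above can be sketched with a priority queue. This is a minimal, generic sketch (not the textbook's exact pseudocode); it assumes states are hashable and encodes "desirability" as a lower-is-better score f:

```python
import heapq

def best_first_search(start, successors, is_goal, f):
    """Generic best-first search: always expand the fringe node with the
    best (here: lowest) f value. `successors(state)` yields next states."""
    fringe = [(f(start), start)]          # priority queue ordered by f
    visited = set()
    while fringe:
        _, state = heapq.heappop(fringe)  # most desirable unexpanded node
        if is_goal(state):
            return state
        if state in visited:
            continue
        visited.add(state)
        for s in successors(state):
            if s not in visited:
                heapq.heappush(fringe, (f(s), s))
    return None
```

Plugging in f = h gives greedy best-first search; f = g + h gives A*.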

4 Romania with step costs in km

5 Greedy best-first search Evaluation function f(n) = h(n) (heuristic) = estimate of the cost from n to the goal, e.g., h_SLD(n) = straight-line distance from n to Bucharest. Greedy best-first search expands the node that appears to be closest to the goal.
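On the Romania map this behavior is easy to reproduce. A sketch, assuming the textbook's straight-line-distance table to Bucharest and a fragment of the road map (city list trimmed for brevity):

```python
import heapq

# Fragment of the Romania road map (neighbors only; costs not needed here).
roads = {
    'Arad': ['Sibiu', 'Timisoara', 'Zerind'],
    'Sibiu': ['Arad', 'Fagaras', 'Rimnicu Vilcea', 'Oradea'],
    'Fagaras': ['Sibiu', 'Bucharest'],
    'Rimnicu Vilcea': ['Sibiu', 'Pitesti'],
    'Pitesti': ['Rimnicu Vilcea', 'Bucharest'],
    'Timisoara': ['Arad'], 'Zerind': ['Arad'], 'Oradea': ['Sibiu'],
    'Bucharest': [],
}
# Straight-line distances to Bucharest (h_SLD), from the textbook table.
h_sld = {'Arad': 366, 'Sibiu': 253, 'Fagaras': 176, 'Rimnicu Vilcea': 193,
         'Pitesti': 100, 'Timisoara': 329, 'Zerind': 374, 'Oradea': 380,
         'Bucharest': 0}

def greedy_best_first(start, goal):
    """Expand the node that *appears* closest to the goal: f(n) = h(n)."""
    fringe = [(h_sld[start], [start])]
    while fringe:
        _, path = heapq.heappop(fringe)
        city = path[-1]
        if city == goal:
            return path
        for nxt in roads[city]:
            if nxt not in path:        # avoid loops along the current path
                heapq.heappush(fringe, (h_sld[nxt], path + [nxt]))
```

From Arad this returns Arad–Sibiu–Fagaras–Bucharest, which looks closest at each step but is not the cheapest route.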

6 Greedy best-first search example


10 Properties of greedy best-first search Complete? No – can get stuck in loops, e.g., Iasi → Neamt → Iasi → Neamt Time? O(b^m), but a good heuristic can give dramatic improvement Space? O(b^m) – keeps all nodes in memory Optimal? No

11 A* search Idea: avoid expanding paths that are already expensive. Evaluation function f(n) = g(n) + h(n), where g(n) = cost so far to reach n, h(n) = estimated cost from n to the goal, so f(n) = estimated total cost of the path through n to the goal.

12 A * search example


18 Admissible heuristics A heuristic h(n) is admissible if for every node n, h(n) ≤ h*(n), where h*(n) is the true cost to reach the goal state from n. An admissible heuristic never overestimates the cost to reach the goal, i.e., it is optimistic. Example: h_SLD(n) (never overestimates the actual road distance). Theorem: if h(n) is admissible, A* using TREE-SEARCH is optimal.

19 Optimality of A* (proof) Suppose some suboptimal goal G2 has been generated and is in the fringe. Let n be an unexpanded node in the fringe such that n is on a shortest path to an optimal goal G. f(G2) = g(G2) since h(G2) = 0 (G2 is a goal) > g(G) since G2 is suboptimal = g(n) + d(n,G) since there is only one path from the root to G (tree) ≥ g(n) + h(n) since h is admissible = f(n) by definition of f. Hence f(G2) > f(n), and A* will never select G2 for expansion. Recall that we test whether a node is a goal only when it is picked for expansion.

20 Properties of A* Complete? Yes Optimal? Yes Let C* be the cost of the optimal solution. A* expands all nodes with f(n) < C*; A* expands some nodes with f(n) = C*; A* expands no nodes with f(n) > C*.

21 A* using Graph-Search A* using Graph-Search can produce suboptimal solutions, even if the heuristic function is admissible. Suboptimal solutions can be returned because Graph-Search can discard the optimal path to a repeated state if it is not the first one generated.

22 Consistent heuristics A heuristic is consistent if for every node n and every successor n' of n generated by any action a, h(n) ≤ c(n,a,n') + h(n'). If h is consistent, we have f(n') = g(n') + h(n') = g(n) + c(n,a,n') + h(n') ≥ g(n) + h(n) = f(n), i.e., f(n) is non-decreasing along any path. Theorem: if h(n) is consistent, A* using GRAPH-SEARCH is optimal.

23 Optimality under consistency Recall: if h is consistent, then A* expands nodes in order of increasing f value, so the first goal node selected for expansion must be optimal. –Recall that a node is tested for the goal condition only when it is selected for expansion. Why? If a goal node G2 is selected for expansion later than G1, this means that: –f(G1) ≤ f(G2) –g(G1) + h(G1) ≤ g(G2) + h(G2); recall h(G1) = h(G2) = 0 since G1 and G2 are goals, hence –g(G1) ≤ g(G2), which means that G2 is a suboptimal solution.

24 Straight-Line Distance Heuristic is Consistent Why? The general triangle inequality is satisfied when each side is measured by the straight-line distance, and c(n,a,n') is greater than or equal to the straight-line distance between n and n'. –d(n.state, goalstate) ≤ d(n'.state, goalstate) + d(n.state, n'.state), by the Euclidean triangle inequality. Also recall that –d(n.state, goalstate) = h(n), –d(n'.state, goalstate) = h(n'), and –d(n.state, n'.state) ≤ c(n,a,n') for any action a. Hence –h(n) ≤ h(n') + d(n.state, n'.state) ≤ h(n') + c(n,a,n'), i.e., h is consistent.

25 Admissible heuristics E.g., for the 8-puzzle: h1(n) = number of misplaced tiles; h2(n) = total Manhattan distance |x1−x2| + |y1−y2| (i.e., the number of squares each tile is from its desired location). h1(S) = ? h2(S) = ?

26 Admissible heuristics E.g., for the 8-puzzle: h1(n) = number of misplaced tiles; h2(n) = total Manhattan distance |x1−x2| + |y1−y2| (i.e., the number of squares each tile is from its desired location). h1(S) = 8, h2(S) = 18.
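The slide's puzzle figure is missing; this sketch assumes the standard textbook example state S = (7 2 4 / 5 _ 6 / 8 3 1) with goal (_ 1 2 / 3 4 5 / 6 7 8), which reproduces the answers above:

```python
# Boards as 9-tuples in row-major order; 0 marks the blank.
start = (7, 2, 4, 5, 0, 6, 8, 3, 1)
goal  = (0, 1, 2, 3, 4, 5, 6, 7, 8)

def h1(state):
    """Number of misplaced tiles (the blank is not counted)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def h2(state):
    """Total Manhattan distance of each tile from its goal square."""
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        j = goal.index(tile)
        total += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return total
```

For this state h1 counts all 8 tiles as misplaced, and the per-tile Manhattan distances sum to 18.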

27 Why are they admissible? Misplaced tiles: no move can get more than one misplaced tile into place, so this measure is a guaranteed underestimate and hence admissible. Manhattan: each move can at best decrease by one the rectilinear distance of one tile from its goal.

28 Are they consistent? c(n,a,n') = 1 for any action a. Claim: h1(n) ≤ h1(n') + c(n,a,n') = h1(n') + 1. –No move (action) can get more than one misplaced tile into place. –Also, no move can create more than one new misplaced tile. –Hence the claim follows, i.e., h1 is consistent. Similar reasoning applies to h2.
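The claim that a single move changes h1 by at most 1 can be checked empirically. A small sketch, assuming the same textbook example state and goal layout as before (hypothetical tuple encoding):

```python
goal = (0, 1, 2, 3, 4, 5, 6, 7, 8)   # 0 marks the blank

def h1(state):
    """Number of misplaced tiles (blank not counted)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def moves(state):
    """Yield every state reachable by sliding one tile into the blank."""
    i = state.index(0)
    r, c = divmod(i, 3)
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if 0 <= r + dr < 3 and 0 <= c + dc < 3:
            j = (r + dr) * 3 + (c + dc)
            s = list(state)
            s[i], s[j] = s[j], s[i]     # swap the blank with a neighbor
            yield tuple(s)

state = (7, 2, 4, 5, 0, 6, 8, 3, 1)
```

Checking |h1(n) − h1(n')| ≤ 1 over all successors of a state confirms h1(n) ≤ h1(n') + 1 there.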

29 A* Graph-Search (without a consistent heuristic) The problem is that suboptimal solutions can be returned because Graph-Search can discard the optimal path to a repeated state if it is not the first one generated. When the heuristic function h is consistent, we proved that the optimal path to any repeated state is the first one to be followed. When h is not consistent, we can still use Graph-Search if we discard the longer path. We store (statekey, pathcost) pairs in the hashtable:

Graph-Search(problem, fringe)
  node = MakeNode(problem.initialstate); Insert(fringe, node);
  do
    if ( Empty(fringe) ) return null; // failure
    node = Remove(fringe);
    if ( problem.GoalTest(node.state) ) return node;
    key_cost_pair = Find( closed, node.state.getKey() );
    if ( key_cost_pair == null || key_cost_pair.pathcost > node.pathcost )
      Insert( closed, (node.state.getKey(), node.pathcost) );
      InsertAll( fringe, Expand(node, problem) );
  while (true);

30 A* using Graph-Search Let's try it here.

31 Dominance If h2(n) ≥ h1(n) for all n (both admissible), then h2 dominates h1 and h2 is better for search. –Let C* be the cost of the optimal solution. –A* expands all nodes with f(n) < C*. –A* expands some nodes with f(n) = C*. –A* expands no nodes with f(n) > C*. –Hence, we want f(n) = g(n) + h(n) to be as big as possible; since we can't do anything about g(n), we are interested in having h(n) as big as possible. Typical search costs (average number of nodes expanded): d=12: IDS = 3,644,035 nodes, A*(h1) = 227 nodes, A*(h2) = 73 nodes; d=24: IDS = too many nodes, A*(h1) = 39,135 nodes, A*(h2) = 1,641 nodes.

32 The max of heuristics If a collection of admissible heuristics h1, …, hm is available for a problem, and none of them dominates any of the others, we create a compound heuristic: –h(n) = max{h1(n), …, hm(n)} Is it admissible? Yes, because each component is admissible, so h won't overestimate the distance to the goal. If h1, …, hm are consistent, is h consistent as well? Yes. For any action (move) a: c(n,a,n') + h(n') = c(n,a,n') + max{h1(n'), …, hm(n')} = max{c(n,a,n')+h1(n'), …, c(n,a,n')+hm(n')} ≥ max{h1(n), …, hm(n)} = h(n) (from the consistency of each hi).
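A toy illustration of the compound heuristic, on a made-up number-line problem (all names and the two component heuristics are hypothetical) where the true cost-to-goal from n is |10 − n|:

```python
# Two admissible heuristics, neither of which dominates the other:
h1 = lambda n: abs(10 - n) // 2           # never overestimates |10 - n|
h2 = lambda n: max(abs(10 - n) - 3, 0)    # also never overestimates

def h(n):
    """The max of admissible heuristics: admissible, and dominates both."""
    return max(h1(n), h2(n))
```

At n = 0 the second component is larger (h2 = 7 > h1 = 5); at n = 8 the first is (h1 = 1 > h2 = 0); h takes the better of the two everywhere while staying below the true cost.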

33 Relaxed problems A problem with fewer restrictions on the actions is called a relaxed problem. The cost of an optimal solution to a relaxed problem is an admissible heuristic for the original problem. If the rules of the 8-puzzle are relaxed so that a tile can move anywhere, then h1(n) gives the shortest solution. If the rules are relaxed so that a tile can move to any adjacent square, then h2(n) gives the shortest solution.

34 Local search algorithms In many optimization problems, the path to the goal is irrelevant; the goal state itself is the solution State space = set of "complete" configurations Find configuration satisfying constraints, e.g., n-queens In such cases, we can use local search algorithms, which keep a single "current" state, and try to improve it.

35 Example: n-queens Put n queens on an n × n board with no two queens on the same row, column, or diagonal. How many successors does each state have? n(n−1)
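The successor count follows from the complete-state formulation: one queen per column, and a successor moves a single queen within its column. A one-line sketch (hypothetical helper name):

```python
def num_successors(n):
    """One queen per column: each of the n queens can move to any of the
    n - 1 other squares in its own column, giving n * (n - 1) successors."""
    return n * (n - 1)
```

For 8-queens this gives 56 successors per state.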

36 8-queens problem h = number of pairs of queens that are attacking each other, either directly or indirectly. h = 17 for the state shown above.
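The attacking-pairs heuristic can be computed directly. A sketch, assuming a board encoding where `queens[c]` is the row of the queen in column c (the slide's h = 17 board is in the missing figure, so the tests below use other states):

```python
from itertools import combinations

def attacking_pairs(queens):
    """h = number of queen pairs attacking each other, directly or not.
    Two queens attack iff they share a row or a diagonal (columns differ
    by construction)."""
    h = 0
    for (c1, r1), (c2, r2) in combinations(enumerate(queens), 2):
        if r1 == r2 or abs(r1 - r2) == abs(c1 - c2):
            h += 1
    return h
```

All eight queens in one row gives C(8,2) = 28 attacking pairs; a solution state gives 0.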

37 Hill-climbing search "Like climbing Everest in thick fog with amnesia." Hill climbing chooses randomly among the set of best successors if there is more than one.
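A minimal sketch of steepest-descent hill climbing for n-queens, with random tie-breaking among equally good successors as described above (board encoding and helper names are assumptions, not the textbook's pseudocode; here lower h is better):

```python
import random
from itertools import combinations

def attacking_pairs(queens):
    """h = number of pairs of queens attacking each other."""
    return sum(1 for (c1, r1), (c2, r2) in combinations(enumerate(queens), 2)
               if r1 == r2 or abs(r1 - r2) == abs(c1 - c2))

def hill_climb(queens):
    """Move one queen within its column to the best neighboring state;
    stop at a local minimum of h (which may not be h = 0)."""
    queens = list(queens)
    while True:
        h = attacking_pairs(queens)
        best_h, best_moves = h, []
        for col in range(len(queens)):
            original = queens[col]
            for row in range(len(queens)):
                if row == original:
                    continue
                queens[col] = row
                h2 = attacking_pairs(queens)
                if h2 < best_h:
                    best_h, best_moves = h2, [(col, row)]
                elif h2 == best_h:
                    best_moves.append((col, row))
            queens[col] = original
        if best_h >= h:
            return queens, h           # stuck: no strictly better successor
        col, row = random.choice(best_moves)   # random tie-breaking
        queens[col] = row
```

Because each accepted move strictly decreases h, the loop always terminates, but possibly at a local minimum with h > 0.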

38 Hill-climbing search Problem: depending on the initial state, it can get stuck in local maxima.

39 Hill-climbing search: 8-queens problem A local minimum with h = 1

40 Local Minima Starting from a randomly generated state of the 8-queens, hill climbing gets stuck 86% of the time, solving only 14% of instances. It works quickly, taking just 4 steps on average when it succeeds and 3 when it gets stuck – not bad for a state space with 8^8 ≈ 17 million states. Memory is constant, since we keep only one state. Since it is so attractive, what can we do in order not to get stuck?

41 Random-restart hill climbing Well-known adage: if at first you don't succeed, try, try again. It conducts a series of hill-climbing searches from randomly generated initial states, stopping when a goal is found. If each hill-climbing search has a probability p of success, then the expected number of restarts is 1/p. For 8-queens, p ≈ 0.14, so we need roughly 7 iterations to find a goal, i.e., about 22 steps (3 steps for each failure plus 4 for the success).
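The restart loop itself is a few lines. A self-contained toy sketch (the objective, state space, and helper names are all made up for illustration): a 1-D landscape with a local minimum at x = 7 and the goal at x = 22, so plain hill climbing gets stuck from some starts and random restarts recover:

```python
import random

def h(x):
    """Toy objective: local minimum at x = 7 (h = 1), goal at x = 22 (h = 0)."""
    return abs(x - 22) if x >= 15 else abs(x - 7) + 1

def hill_climb(x):
    """Greedy descent over neighbors x - 1, x + 1 within [0, 29]."""
    while True:
        best = min((n for n in (x - 1, x + 1) if 0 <= n <= 29), key=h)
        if h(best) >= h(x):
            return x, h(x)            # stuck (local or global minimum)
        x = best

def random_restart(max_restarts=1000):
    """Restart hill climbing from random states until a goal (h = 0) is found."""
    for restarts in range(1, max_restarts + 1):
        x, hx = hill_climb(random.randrange(30))
        if hx == 0:
            return x, restarts
    return None, max_restarts
```

Starts below 15 get stuck at x = 7, but a restart from 15 or above reaches the goal, so the loop succeeds after a few tries on average.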

