
1 Solving problems by searching (Chapter 3 in AIMA)

2 Problem Solving Rational agents need to perform sequences of actions in order to achieve goals. Intelligent behavior can be generated by having a look-up table or reactive policy that tells the agent what to do in every circumstance, but: –Such a table or policy is difficult to build –All contingencies must be anticipated A more general approach is for the agent to have knowledge of the world and of how its actions affect it, and to be able to simulate execution of actions in an internal model of the world in order to determine a sequence of actions that will accomplish its goals. This is the general task of problem solving and is typically performed by searching through an internally modeled space of world states.

3 Problem Solving Task Given: –An initial state of the world –A set of possible actions or operators that can be performed –A goal test that can be applied to a single state of the world to determine if it is a goal state Find: –A solution, stated as a path of states and operators, that shows how to transform the initial state into one that satisfies the goal test.

4 Well-defined problems A problem can be defined formally by five components: –The initial state that the agent starts in –A description of the possible actions available to the agent -> Actions(s) –A description of what each action does; the transition model -> Result(s,a) Together, the initial state, actions, and transition model implicitly define the state space of the problem: the set of all states reachable from the initial state by any sequence of actions -> may be infinite –The goal test, which determines whether a given state is a goal state –A path cost function that assigns a numeric cost to each path.
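To make the five components concrete, here is a minimal Python sketch (class and method names are illustrative, not AIMA's reference code); the small Romania fragment uses the road distances from the book's map.

```python
class Problem:
    """A formally defined problem: initial state, Actions(s),
    transition model Result(s, a), goal test, and step costs."""
    def __init__(self, initial):
        self.initial = initial
    def actions(self, s):                 # Actions(s)
        raise NotImplementedError
    def result(self, s, a):               # Result(s, a): the transition model
        raise NotImplementedError
    def goal_test(self, s):
        raise NotImplementedError
    def step_cost(self, s, a, s2):        # default: every action costs 1
        return 1

class RomaniaRoute(Problem):
    """Route finding on a fragment of the Romania map (distances in km)."""
    ROADS = {
        "Arad":           {"Sibiu": 140},
        "Sibiu":          {"Arad": 140, "Fagaras": 99, "Rimnicu Vilcea": 80},
        "Fagaras":        {"Sibiu": 99, "Bucharest": 211},
        "Rimnicu Vilcea": {"Sibiu": 80, "Pitesti": 97},
        "Pitesti":        {"Rimnicu Vilcea": 97, "Bucharest": 101},
        "Bucharest":      {"Fagaras": 211, "Pitesti": 101},
    }
    def actions(self, s):
        return sorted(self.ROADS[s])      # drive to any neighboring city
    def result(self, s, a):
        return a                          # the action names the destination city
    def goal_test(self, s):
        return s == "Bucharest"
    def step_cost(self, s, a, s2):
        return self.ROADS[s][s2]
```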

5 Example: Romania (Route Finding Problem) Initial state: Arad Goal state: Bucharest Path cost: number of intermediate cities, distance traveled, or expected travel time Formulate goal: be in Bucharest Formulate problem: states: various cities; actions: drive between cities Find solution: sequence of cities, e.g., Arad, Sibiu, Fagaras, Bucharest

6 Selecting a state space The real world is absurdly complex → the state space must be abstracted for problem solving –The process of removing detail from a representation is called abstraction. (Abstract) state = set of real states (Abstract) action = complex combination of real actions –e.g., "Arad → Zerind" represents a complex set of possible routes, detours, rest stops, etc. For guaranteed realizability, any real state "in Arad" must get to some real state "in Zerind" (Abstract) solution = set of real paths that are solutions in the real world Each abstract action should be "easier" than the original problem

7 Measuring Performance Path cost: a function that assigns a cost to a path, typically by summing the costs of the individual actions in the path. –May want to find the minimum-cost solution. Search cost: the computational time and space (memory) required to find the solution. Generally there is a trade-off between path cost and search cost, and one must satisfice: find the best solution possible in the time that is available.

8 Problem-solving agents

9 Example: The 8-puzzle states? actions? goal test? path cost?

10 Example: The 8-puzzle states? locations of tiles actions? move blank left, right, up, down goal test? = goal state (given) path cost? 1 per move

11 Route Finding Problem States: a location (e.g., an airport) and the current time. Initial state: user's query Actions: take any flight from the current location, in any seat class, leaving after the current time, leaving enough time for within-airport transfer if needed. Transition model: the state resulting from taking a flight has the flight's destination as the current location and the flight's arrival time as the current time. Goal test: are we at the final destination specified by the user? Path cost: monetary cost, waiting time, flight time, customs and immigration procedures, seat quality, time of day, type of airplane, frequent-flyer mileage awards, and so on.

12 More example problems Touring problems: visit every city at least once, starting and ending at Bucharest Travelling salesperson problem (TSP): each city must be visited exactly once – find the shortest tour VLSI layout design: positioning millions of components and connections on a chip to minimize area, minimize circuit delays, minimize stray capacitances, and maximize manufacturing yield Robot navigation Internet searching Automatic assembly sequencing Protein design

13 Example: robotic assembly states?: real-valued coordinates of robot joint angles and of the parts of the object to be assembled actions?: continuous motions of robot joints goal test?: complete assembly path cost?: time to execute

14 Tree search algorithms The possible action sequences starting at the initial state form a search tree Basic idea: –offline, simulated exploration of the state space by generating successors of already-explored states (a.k.a. expanding states)

15 Example: Romania (Route Finding Problem)

16 Tree search example

17 Tree search example

18 Tree search example

19 Implementation: states vs. nodes A state is (a representation of) a physical configuration A node is a data structure constituting part of a search tree; it includes state, parent node, action, path cost g(x), depth The Expand function creates new nodes, filling in the various fields and using the SuccessorFn of the problem to create the corresponding states.

20 Implementation: general tree search Fringe (frontier): the collection of nodes that have been generated but not yet expanded Each element of the fringe is a leaf node, a node with no successors in the tree Search strategy: a function that selects the next node to be expanded from the fringe We assume that the collection of nodes is implemented as a queue
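As a sketch, the general tree search can be written as a single loop over the fringe, reusing the illustrative Problem interface from the earlier sketch; the `insert` function passed in decides where newly generated successors go, and that choice alone fixes the search strategy.

```python
def tree_search(problem, insert):
    """GENERAL-SEARCH: the fringe holds (state, path) pairs of generated
    but not-yet-expanded nodes; `insert` fixes the search strategy."""
    fringe = [(problem.initial, [problem.initial])]
    while fringe:
        state, path = fringe.pop(0)            # select the next node to expand
        if problem.goal_test(state):
            return path                        # solution: a path of states
        successors = []
        for a in problem.actions(state):       # expand the node
            s2 = problem.result(state, a)
            successors.append((s2, path + [s2]))
        insert(fringe, successors)
    return None                                # fringe exhausted: failure
```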

21 Implementation: general tree search

22 Search strategies A search strategy is defined by picking the order of node expansion Strategies are evaluated along the following dimensions: –completeness: does it always find a solution if one exists? –time complexity: how long does it take to find the solution? –space complexity: maximum number of nodes in memory –optimality: does it always find a least-cost solution? Time and space complexity are measured in terms of –b: maximum branching factor of the search tree –d: depth of the least-cost solution –m: maximum depth of the state space (may be ∞)

23 Uninformed search strategies Uninformed (blind, exhaustive, brute-force) search strategies use only the information available in the problem definition and do not guide the search with any additional information about the problem. –Breadth-first search –Uniform-cost search –Depth-first search –Depth-limited search –Iterative deepening search

24 Breadth-first search (BFS) Expand the shallowest unexpanded node Expands search nodes level by level: all nodes at level d are expanded before expanding nodes at level d+1 Implemented by adding new nodes to the end of the queue (FIFO queue): GENERAL-SEARCH(problem, ENQUEUE-AT-END) Since it eventually visits every node to a given depth, BFS is guaranteed to be complete. It is also optimal provided the path cost is a nondecreasing function of the depth of the node (e.g., all operators of equal cost), since nodes are explored in depth order.
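Breadth-first search is then just the generic loop with ENQUEUE-AT-END, as the slide says; a sketch built on the tree_search above:

```python
def enqueue_at_end(fringe, successors):
    fringe.extend(successors)     # FIFO: the shallowest node leaves the queue first

def breadth_first_search(problem):
    # GENERAL-SEARCH(problem, ENQUEUE-AT-END)
    return tree_search(problem, enqueue_at_end)

# e.g. breadth_first_search(RomaniaRoute("Arad"))
# -> ['Arad', 'Sibiu', 'Fagaras', 'Bucharest'] (fewest steps, not fewest km)
```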

25 Properties of breadth-first search Assume there are an average of b successors to each node, called the branching factor. Complete? Yes (if b is finite) Time? 1 + b + b^2 + b^3 + … + b^d + b(b^d - 1) = O(b^(d+1)) Space? O(b^(d+1)) (keeps every node in memory) Optimal? Yes (if cost = 1 per step) Space is the bigger problem (more than time)

26 Uniform-cost search Expand the least-cost unexpanded node Like breadth-first search, except always expand the node of least cost instead of least depth (i.e., sort the queue by path cost) Equivalent to breadth-first if step costs are all equal The goal is not recognized until it is the least-cost node on the queue and is removed for goal testing Guarantees optimality as long as path cost never decreases as a path grows (non-negative operator costs)

27 Uniform-cost search Implementation: fringe = queue ordered by path cost Complete? Yes, if step cost ≥ ε Time? # of nodes with g ≤ cost of optimal solution, O(b^⌈C*/ε⌉), where C* is the cost of the optimal solution Space? # of nodes with g ≤ cost of optimal solution, O(b^⌈C*/ε⌉) Optimal? Yes – nodes expanded in increasing order of g(n)
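A sketch of uniform-cost search with a binary heap as the cost-ordered queue, again assuming the illustrative Problem interface; note that the goal test is applied when a node is removed from the queue, not when it is generated, exactly as the previous slide requires.

```python
import heapq, itertools

def uniform_cost_search(problem):
    """Expand the least-g unexpanded node (g = path cost so far)."""
    tie = itertools.count()                    # breaks ties between equal costs
    frontier = [(0, next(tie), problem.initial, [problem.initial])]
    while frontier:
        g, _, state, path = heapq.heappop(frontier)
        if problem.goal_test(state):           # goal test on removal, not generation
            return g, path
        for a in problem.actions(state):
            s2 = problem.result(state, a)
            heapq.heappush(frontier, (g + problem.step_cost(state, a, s2),
                                      next(tie), s2, path + [s2]))
    return None

# e.g. uniform_cost_search(RomaniaRoute("Arad"))
# -> (418, ['Arad', 'Sibiu', 'Rimnicu Vilcea', 'Pitesti', 'Bucharest'])
```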

28 Depth-first search (DFS) Expand the deepest unexpanded node Always expand a node at the deepest level of the tree, i.e., one of the most recently generated nodes. When a dead-end is hit, backtrack to the last choice. Implementation: LIFO queue (a stack), i.e., put new nodes at the front of the queue
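Depth-first search is the same generic loop with successors inserted at the front; as the next slide notes, without repeated-state checks this sketch can loop forever on state spaces with cycles.

```python
def enqueue_at_front(fringe, successors):
    fringe[:0] = successors       # LIFO: most recently generated node expanded first

def depth_first_search(problem):
    return tree_search(problem, enqueue_at_front)
```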

29 Properties of depth-first search Complete? No: fails in infinite-depth spaces and spaces with loops –Modify to avoid repeated states along the path → complete in finite spaces Time? O(b^m): terrible if m is much larger than d – but if solutions are dense, may be much faster than breadth-first Space? O(bm), i.e., linear space! Optimal? No: –Not guaranteed optimal, since it can find a deeper solution before shallower ones are explored.

30 Depth-limited search (DLS) = depth-first search with depth limit l, i.e., nodes at depth l have no successors Recursive implementation: Problematic if l < d is chosen (the limit cuts off the shallowest solution)
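One possible recursive implementation, as a sketch on the same illustrative Problem interface: it distinguishes outright failure from a cutoff, a distinction that iterative deepening relies on below.

```python
def depth_limited_search(problem, limit):
    """DFS that treats nodes at depth `limit` as having no successors.
    Returns a path, None (no solution at all), or 'cutoff'."""
    def recurse(state, path, depth):
        if problem.goal_test(state):
            return path
        if depth == limit:
            return "cutoff"                    # ran out of depth, not of states
        cutoff_occurred = False
        for a in problem.actions(state):
            s2 = problem.result(state, a)
            r = recurse(s2, path + [s2], depth + 1)
            if r == "cutoff":
                cutoff_occurred = True
            elif r is not None:
                return r
        return "cutoff" if cutoff_occurred else None
    return recurse(problem.initial, [problem.initial], 0)
```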

31 Iterative deepening search

32 Iterative deepening search l = 0

33 Iterative deepening search l = 1

34 Iterative deepening search l = 2

35 Iterative deepening search l = 3

36 Iterative deepening search Number of nodes generated in a depth-limited search to depth d with branching factor b: N_DLS = b^0 + b^1 + b^2 + … + b^(d-2) + b^(d-1) + b^d Number of nodes generated in an iterative deepening search to depth d with branching factor b: N_IDS = (d+1)b^0 + d·b^1 + (d-1)·b^2 + … + 3b^(d-2) + 2b^(d-1) + 1·b^d For b = 10, d = 5: –N_DLS = 1 + 10 + 100 + 1,000 + 10,000 + 100,000 = 111,111 –N_IDS = 6 + 50 + 400 + 3,000 + 20,000 + 100,000 = 123,456 Overhead = (123,456 - 111,111)/111,111 = 11%

37 Properties of iterative deepening search Complete? Yes Time? (d+1)b^0 + d·b^1 + (d-1)·b^2 + … + b^d = O(b^d) Space? O(bd) Optimal? Yes, if step cost = 1
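Iterative deepening is then a short loop over growing limits; a sketch reusing the depth_limited_search above (the max_depth cap is an assumption of this sketch, not part of the algorithm).

```python
def iterative_deepening_search(problem, max_depth=50):
    # Shallow levels are re-generated on every round, but as the
    # arithmetic on slide 36 shows, the deepest level dominates the cost.
    for limit in range(max_depth + 1):
        result = depth_limited_search(problem, limit)
        if result != "cutoff":
            return result          # a path, or None if the space was exhausted
    return None
```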

38 Summary of algorithms

39 Repeated states Failure to detect repeated states can turn a linear problem into an exponential one!

40 Repeated states Three methods for reducing repeated work, in increasing order of effectiveness and computational overhead: –Do not follow self-loops (remove successors that return to the same state). –Do not create paths with cycles (remove successors already on the path back to the root). O(d) overhead. –Do not generate any state that was already generated. Requires storing all generated states (O(b^d) space) and searching them (usually using a hash table for efficiency).
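The third method is essentially graph search; here is a breadth-first sketch that keeps a hash set of every generated state (again assuming the illustrative Problem interface).

```python
from collections import deque

def breadth_first_graph_search(problem):
    """BFS that never re-generates an already-seen state (method 3 above)."""
    explored = {problem.initial}               # every state generated so far
    fringe = deque([(problem.initial, [problem.initial])])
    while fringe:
        state, path = fringe.popleft()
        if problem.goal_test(state):
            return path
        for a in problem.actions(state):
            s2 = problem.result(state, a)
            if s2 not in explored:             # skip repeated states
                explored.add(s2)
                fringe.append((s2, path + [s2]))
    return None
```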

41 Informed (Heuristic) Search

42 Heuristic Search Heuristic or informed search exploits additional knowledge about the problem that helps direct the search toward more promising paths. A heuristic function, h(n), provides an estimate of the cost of the path from a given node to the closest goal state. It must be zero if the node represents a goal state. –Example: straight-line distance from the current location to the goal location in a road navigation problem. Many search problems are NP-complete, so in the worst case search still has exponential time complexity; however, a good heuristic can: –Find a solution for an average problem efficiently. –Find a reasonably good, but not optimal, solution efficiently.

43 Best-first search Idea: use an evaluation function f(n) for each node –estimate of "desirability" → expand the most desirable unexpanded node Order the nodes in decreasing order of desirability Special cases: –greedy best-first search –A* search

44 Romania with step costs in km

45 Greedy best-first search Evaluation function f(n) = h(n) (heuristic) = estimate of cost from n to the goal e.g., h_SLD(n) = straight-line distance from n to Bucharest Greedy best-first search expands the node that appears to be closest to the goal
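A sketch of greedy best-first search: identical in shape to uniform-cost search, but the queue is ordered by h(n) alone. The straight-line distances below are the standard h_SLD values to Bucharest from the AIMA map, restricted to the cities in the earlier Romania sketch.

```python
import heapq, itertools

H_SLD = {"Arad": 366, "Sibiu": 253, "Fagaras": 176,       # straight-line distance
         "Rimnicu Vilcea": 193, "Pitesti": 100,           # to Bucharest, in km
         "Bucharest": 0}

def greedy_best_first_search(problem, h):
    """Expand the node that *appears* closest to a goal: f(n) = h(n)."""
    tie = itertools.count()
    frontier = [(h(problem.initial), next(tie), problem.initial, [problem.initial])]
    while frontier:
        _, _, state, path = heapq.heappop(frontier)
        if problem.goal_test(state):
            return path
        for a in problem.actions(state):
            s2 = problem.result(state, a)
            heapq.heappush(frontier, (h(s2), next(tie), s2, path + [s2]))
    return None

# e.g. greedy_best_first_search(RomaniaRoute("Arad"), H_SLD.get)
# -> ['Arad', 'Sibiu', 'Fagaras', 'Bucharest'] (450 km, not the 418 km optimum)
```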

46 Greedy best-first search example

47 Greedy best-first search example

48 Greedy best-first search example

49 Greedy best-first search example

50 Greedy best-first search does not find the shortest path to the goal (the one through Rimnicu Vilcea), since it focuses only on the estimated cost remaining rather than the total path cost.

51 Properties of greedy best-first search Complete? No – can get stuck in loops, e.g., Iasi → Neamt → Iasi → Neamt → … Time? O(b^m), but a good heuristic can give dramatic improvement Space? O(b^m) – keeps all nodes in memory (since it must maintain a queue of all unexpanded states) Optimal? No However, a good heuristic will avoid this worst-case behavior for most problems.

52 A* search Idea: avoid expanding paths that are already expensive Evaluation function f(n) = g(n) + h(n) g(n) = cost so far to reach n h(n) = estimated cost from n to the goal f(n) = estimated total cost of the path through n to the goal
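A sketch of A* tree search: the only change from the uniform-cost sketch is that the queue is ordered by f(n) = g(n) + h(n) rather than by g(n) alone.

```python
import heapq, itertools

def astar_search(problem, h):
    """Order the frontier by f(n) = g(n) + h(n)."""
    tie = itertools.count()
    start = problem.initial
    frontier = [(h(start), 0, next(tie), start, [start])]  # (f, g, tie, state, path)
    while frontier:
        f, g, _, state, path = heapq.heappop(frontier)
        if problem.goal_test(state):
            return g, path
        for a in problem.actions(state):
            s2 = problem.result(state, a)
            g2 = g + problem.step_cost(state, a, s2)
            heapq.heappush(frontier, (g2 + h(s2), g2, next(tie), s2, path + [s2]))
    return None

# e.g. astar_search(RomaniaRoute("Arad"), H_SLD.get)
# -> (418, ['Arad', 'Sibiu', 'Rimnicu Vilcea', 'Pitesti', 'Bucharest'])
```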

53 A* search example

54 A* search example

55 A* search example

56 A* search example

57 A* search example

58 A* search example

59 Admissible heuristics A heuristic h(n) is admissible if for every node n, h(n) ≤ h*(n), where h*(n) is the true cost to reach the goal state from n. An admissible heuristic never overestimates the cost to reach the goal, i.e., it is optimistic Example: h_SLD(n) (never overestimates the actual road distance) Theorem: If h(n) is admissible, A* using TREE-SEARCH is optimal

60 Optimality of A* (proof) Suppose some suboptimal goal G2 has been generated and is in the fringe. Let n be an unexpanded node in the fringe such that n is on a shortest path to an optimal goal G. f(G2) = g(G2) since h(G2) = 0 g(G2) > g(G) since G2 is suboptimal f(G) = g(G) since h(G) = 0 f(G2) > f(G) from above

61 Optimality of A* (proof) Suppose some suboptimal goal G2 has been generated and is in the fringe. Let n be an unexpanded node in the fringe such that n is on a shortest path to an optimal goal G. f(G2) > f(G) from above h(n) ≤ h*(n) since h is admissible g(n) + h(n) ≤ g(n) + h*(n) f(n) ≤ f(G) Hence f(G2) > f(n), and A* will never select G2 for expansion

62 Consistent heuristics A heuristic is consistent if for every node n and every successor n' of n generated by any action a, h(n) ≤ c(n,a,n') + h(n') If h is consistent, we have f(n') = g(n') + h(n') = g(n) + c(n,a,n') + h(n') ≥ g(n) + h(n) = f(n) i.e., f(n) is non-decreasing along any path. Theorem: If h(n) is consistent, A* using GRAPH-SEARCH is optimal

63 Properties of A* Complete? Yes (unless there are infinitely many nodes with f ≤ f(G)) –A* is complete as long as: the branching factor is always finite, and every operator adds cost at least ε > 0 Time? Exponential Space? Keeps all nodes in memory Optimal? Yes Time and space complexity are still O(b^m) in the worst case, since the complete queue of unexplored options must be maintained and sorted. However, with a good heuristic, optimal solutions can be found for many problems in reasonable time.

64 Admissible heuristics E.g., for the 8-puzzle: h1(n) = number of misplaced tiles h2(n) = total Manhattan distance (i.e., number of squares from the desired location of each tile) h1(S) = ? h2(S) = ?

65 Admissible heuristics E.g., for the 8-puzzle: h1(n) = number of misplaced tiles h2(n) = total Manhattan distance (i.e., number of squares from the desired location of each tile) h1(S) = ? 8 h2(S) = ? 3+1+2+2+2+3+3+2 = 18
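Both heuristics are a few lines of Python; a sketch assuming a state is a length-9 tuple in row-major order, with 0 marking the blank (that representation is an assumption of this sketch).

```python
def h1(state, goal):
    """Number of misplaced tiles (the blank is not a tile, so not counted)."""
    return sum(1 for s, g in zip(state, goal) if s != g and s != 0)

def h2(state, goal):
    """Total Manhattan distance of tiles 1..8 from their goal squares."""
    total = 0
    for tile in range(1, 9):
        row, col = divmod(state.index(tile), 3)
        grow, gcol = divmod(goal.index(tile), 3)
        total += abs(row - grow) + abs(col - gcol)
    return total
```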

66 Dominance If h2(n) ≥ h1(n) for all n (both admissible), then h2 dominates h1 h2 is better for search: since A* expands all nodes whose f value is less than that of an optimal solution, it is always better to use a heuristic with a higher value, as long as it does not overestimate. Typical search costs (average number of nodes expanded): d = 12: IDS = 3,644,035 nodes; A*(h1) = 227 nodes; A*(h2) = 73 nodes d = 24: IDS = too many nodes; A*(h1) = 39,135 nodes; A*(h2) = 1,641 nodes

67 Experimental Results on 8-puzzle problems A heuristic should also be easy to compute, otherwise the overhead of computing the heuristic could outweigh the time saved by reducing search (e.g., using full breadth-first search to estimate distance wouldn't help).

68 Relaxed problems A problem with fewer restrictions on the actions is called a relaxed problem The cost of an optimal solution to a relaxed problem is an admissible heuristic for the original problem If the rules of the 8-puzzle are relaxed so that a tile can move anywhere, then h1(n) gives the shortest solution If the rules are relaxed so that a tile can move to any adjacent square, then h2(n) gives the shortest solution

69 Inventing Heuristics Many good heuristics can be invented by considering relaxed versions of the problem (abstractions). For the 8-puzzle: a tile can move from square A to B if A is adjacent to B and B is blank –(a) A tile can move from square A to B if A is adjacent to B. –(b) A tile can move from square A to B if B is blank. –(c) A tile can move from square A to B. If there are a number of features that indicate a promising or unpromising state, a weighted sum of these features can be useful. Learning methods can be used to set the weights.

70 Local search algorithms In many optimization problems, the path to the goal is irrelevant; the goal state itself is the solution State space = set of "complete" configurations Find a configuration satisfying constraints, e.g., n-queens In such cases, we can use local search algorithms: keep a single "current" state and try to improve it

71 Example: n-queens Put n queens on an n × n board with no two queens on the same row, column, or diagonal

72 Extra slides

73 Example: vacuum world Single-state, start in #5. Solution?

74 Example: vacuum world Single-state, start in #5. Solution? [Right, Suck] Sensorless, start in {1,2,3,4,5,6,7,8} e.g., Right goes to {2,4,6,8} Solution?

75 Example: vacuum world Sensorless, start in {1,2,3,4,5,6,7,8} e.g., Right goes to {2,4,6,8} Solution? [Right, Suck, Left, Suck] Contingency –Nondeterministic: Suck may dirty a clean carpet –Partially observable: location, dirt at current location –Percept: [L, Clean], i.e., start in #5 or #7 Solution?

76 Example: vacuum world Sensorless, start in {1,2,3,4,5,6,7,8} e.g., Right goes to {2,4,6,8} Solution? [Right, Suck, Left, Suck] Contingency –Nondeterministic: Suck may dirty a clean carpet –Partially observable: location, dirt at current location –Percept: [L, Clean], i.e., start in #5 or #7 Solution? [Right, if dirt then Suck]

77 Vacuum world state space graph states? actions? goal test? path cost?

78 Vacuum world state space graph states? integer dirt and robot location actions? Left, Right, Suck goal test? no dirt at all locations path cost? 1 per action

79 Breadth-first search Expand shallowest unexpanded node Implementation: –fringe is a FIFO queue, i.e., new successors go at end

80 Breadth-first search Expand shallowest unexpanded node Implementation: –fringe is a FIFO queue, i.e., new successors go at end

81 Breadth-first search Expand shallowest unexpanded node Implementation: –fringe is a FIFO queue, i.e., new successors go at end

82 Depth-first search Expand deepest unexpanded node Implementation: –fringe = LIFO queue, i.e., put successors at front

83 Depth-first search Expand deepest unexpanded node Implementation: –fringe = LIFO queue, i.e., put successors at front

84 Depth-first search Expand deepest unexpanded node Implementation: –fringe = LIFO queue, i.e., put successors at front

85 Depth-first search Expand deepest unexpanded node Implementation: –fringe = LIFO queue, i.e., put successors at front

86 Depth-first search Expand deepest unexpanded node Implementation: –fringe = LIFO queue, i.e., put successors at front

87 Depth-first search Expand deepest unexpanded node Implementation: –fringe = LIFO queue, i.e., put successors at front

88 Depth-first search Expand deepest unexpanded node Implementation: –fringe = LIFO queue, i.e., put successors at front

89 Depth-first search Expand deepest unexpanded node Implementation: –fringe = LIFO queue, i.e., put successors at front

90 Depth-first search Expand deepest unexpanded node Implementation: –fringe = LIFO queue, i.e., put successors at front

91 Depth-first search Expand deepest unexpanded node Implementation: –fringe = LIFO queue, i.e., put successors at front

92 Depth-first search Expand deepest unexpanded node Implementation: –fringe = LIFO queue, i.e., put successors at front

93 Graph search

94 Hill-climbing search "Like climbing Everest in thick fog with amnesia"

95 Hill-climbing search Problem: depending on the initial state, can get stuck in local maxima
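A sketch of steepest-ascent hill climbing over an abstract neighborhood; the `neighbors` and `value` callables are assumptions of this sketch, not a fixed API.

```python
def hill_climbing(state, neighbors, value):
    """Move to the best neighbor until none improves on the current state.
    Terminates at a peak, which may be a local rather than global maximum."""
    while True:
        candidates = list(neighbors(state))
        if not candidates:
            return state
        best = max(candidates, key=value)
        if value(best) <= value(state):    # no uphill move left: stop
            return state
        state = best
```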

96 Hill-climbing search: 8-queens problem h = number of pairs of queens that are attacking each other, either directly or indirectly h = 17 for the above state

97 Hill-climbing search: 8-queens problem A local minimum with h = 1

98 Simulated annealing search Idea: escape local maxima by allowing some "bad" moves but gradually decrease their frequency

99 Properties of simulated annealing search One can prove: if T decreases slowly enough, then simulated annealing search will find a global optimum with probability approaching 1 Widely used in VLSI layout, airline scheduling, etc.
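A sketch of simulated annealing: downhill moves are accepted with probability exp(dE/T), and the cooling `schedule` is an assumed parameter of this sketch (e.g., geometric decay).

```python
import math, random

def simulated_annealing(state, neighbors, value, schedule):
    """Accept every uphill move; accept a downhill move with prob exp(dE/T)."""
    t = 0
    while True:
        T = schedule(t)
        if T < 1e-9:                 # "frozen": stop and return current state
            return state
        nxt = random.choice(list(neighbors(state)))
        dE = value(nxt) - value(state)
        if dE > 0 or random.random() < math.exp(dE / T):
            state = nxt
        t += 1

# e.g. schedule = lambda t: 1.0 * (0.995 ** t)   # geometric cooling
```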

100 Local beam search Keep track of k states rather than just one Start with k randomly generated states At each iteration, all the successors of all k states are generated If any one is a goal state, stop; else select the k best successors from the complete list and repeat.

101 Genetic algorithms A successor state is generated by combining two parent states Start with k randomly generated states (the population) A state is represented as a string over a finite alphabet (often a string of 0s and 1s) An evaluation function (fitness function) assigns higher values to better states Produce the next generation of states by selection, crossover, and mutation
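A sketch of one possible genetic algorithm with fitness-proportional selection, single-point crossover, and a small mutation rate; the `mutate` callable and the mutation probability are assumptions of this sketch.

```python
import random

def genetic_algorithm(population, fitness, mutate, generations=100, p_mutate=0.1):
    """States are strings/tuples over a finite alphabet; higher fitness is better."""
    for _ in range(generations):
        weights = [fitness(x) for x in population]
        next_gen = []
        for _ in population:
            x, y = random.choices(population, weights=weights, k=2)  # selection
            cut = random.randrange(1, len(x))                        # crossover point
            child = x[:cut] + y[cut:]
            if random.random() < p_mutate:
                child = mutate(child)                                # mutation
            next_gen.append(child)
        population = next_gen
    return max(population, key=fitness)
```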

102 Genetic algorithms Fitness function: number of non-attacking pairs of queens (min = 0, max = 8 × 7/2 = 28) 24/(24+23+20+11) = 31% 23/(24+23+20+11) = 29% etc.

103 Genetic algorithms

104 Problem types Deterministic, fully observable → single-state problem –Agent knows exactly which state it will be in; solution is a sequence Non-observable → sensorless problem (conformant problem) –Agent may have no idea where it is; solution is a sequence Nondeterministic and/or partially observable → contingency problem –percepts provide new information about the current state –often interleave search and execution Unknown state space → exploration problem

105 Optimality of A* A* expands nodes in order of increasing f value Gradually adds "f-contours" of nodes Contour i has all nodes with f = f_i, where f_i < f_{i+1}

