
1 CS 416 Artificial Intelligence
Lecture 4: Finish Uninformed Searches, Begin Informed Searches

2 Administrivia
There is an ACM meeting after class (OLS 011), so I'll end early.
We are getting you registered for a Visual Studio license.

3 Uniform-Cost Search (review)
Always expand the lowest-path-cost node.
Don't goal-test a node until it is selected for expansion, not when it is first generated as a successor.

4 Uniform-Cost Search (review) — the number after each node is its path cost g
Fringe = [S(0)]
Expand(S) → {A(1), B(5), C(15)}

5 Uniform-Cost Search (review)
Fringe = [A(1), B(5), C(15)]
Expand(A) → {G(11)}

6 Uniform-Cost Search (review)
Fringe = [B(5), G(11), C(15)]
Expand(B) → {G(10)}

7 Uniform-Cost Search (review)
Fringe = [G(10), C(15)]
Expand(G) → Goal
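A minimal Python sketch of uniform-cost search on this example, with edge costs inferred from the fringe values above; the edge out of C is a guess, since the trace finds the goal before C is ever expanded.

```python
import heapq

def uniform_cost_search(graph, start, goal):
    """Expand the lowest-path-cost node first; goal-test at expansion, not at generation."""
    fringe = [(0, start, [start])]              # priority queue ordered by path cost g(n)
    expanded = {}                               # cheapest g at which each state was expanded
    while fringe:
        g, state, path = heapq.heappop(fringe)
        if state in expanded:
            continue                            # already expanded via a cheaper path
        if state == goal:                       # the goal test happens here, at expansion time
            return path, g
        expanded[state] = g
        for succ, step_cost in graph.get(state, []):
            if succ not in expanded:
                heapq.heappush(fringe, (g + step_cost, succ, path + [succ]))
    return None, float('inf')

# Edge costs reconstructed from the fringe values on slides 4-7; C's outgoing edge is assumed.
graph = {
    'S': [('A', 1), ('B', 5), ('C', 15)],
    'A': [('G', 10)],
    'B': [('G', 5)],
    'C': [('G', 5)],
}
print(uniform_cost_search(graph, 'S', 'G'))     # (['S', 'B', 'G'], 10), matching the trace
```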

8 Bidirectional Search
Search from start to goal and from goal to start.
Two O(b^(d/2)) searches instead of one O(b^d) search.

9 Bidirectional Search: Implementation
Each search checks nodes before expansion to see if they are on the fringe of the other search.
Example: bidirectional BFS with d = 6 and b = 10
– Worst case: both search trees must expand all but one element of the third level of the tree
– About 2 * (10 + 100 + 1000 + 10000 - 10) node expansions, versus 1 * (10 + 100 + … + 10000000) expansions

10 Bidirectional Search: Implementation
Checking the fringe of the other tree
– At least one search tree must be kept in memory
– Checking can be done in constant time (hash table)
Searching back from the goal
– Must be able to compute predecessors of node n: Pred(n)
– Easy with the 15-puzzle, but how about chess?
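A sketch of bidirectional BFS along these lines. It assumes a caller-supplied `neighbors` function that can enumerate predecessors as well as successors (the Pred(n) requirement above); the membership test against the other search's bookkeeping is the constant-time hash lookup the slide mentions.

```python
from collections import deque

def bidirectional_bfs(neighbors, start, goal):
    """Search from both ends; stop when the two frontiers meet."""
    if start == goal:
        return [start]
    parents_f, parents_b = {start: None}, {goal: None}   # also serve as visited sets
    frontier_f, frontier_b = deque([start]), deque([goal])

    def expand_layer(frontier, parents, other_parents):
        for _ in range(len(frontier)):                    # expand one BFS layer
            node = frontier.popleft()
            for nbr in neighbors(node):
                if nbr not in parents:
                    parents[nbr] = node
                    if nbr in other_parents:              # constant-time check against the other search
                        return nbr
                    frontier.append(nbr)
        return None

    while frontier_f and frontier_b:
        meet = expand_layer(frontier_f, parents_f, parents_b)
        if meet is None:
            meet = expand_layer(frontier_b, parents_b, parents_f)
        if meet is not None:                              # stitch the two half-paths together
            path, n = [], meet
            while n is not None:
                path.append(n)
                n = parents_f[n]
            path.reverse()
            n = parents_b[meet]
            while n is not None:
                path.append(n)
                n = parents_b[n]
            return path
    return None
```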

11 Avoid Repeated States
"Search algorithms that forget their history are doomed to repeat it." (Russell and Norvig)
So remember where you've been… on a list.
If you come upon a node you've visited before, don't expand it.
Let's call this GRAPH-SEARCH.

12 GRAPH-SEARCH
Faster, with smaller space requirements, when there are many repeated states.
Time and space requirements are a function of the state space, not of the depth/branching of the tree to the goal.
Completeness and optimality of the search techniques change when they are used in GRAPH-SEARCH:
– Uniform-cost search and BFS are only optimal with constant step costs
– DFS requires updating node costs and descendant node costs
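A sketch of the GRAPH-SEARCH idea: TREE-SEARCH plus a closed set of already-expanded states. The FIFO fringe here makes it breadth-first; this illustrates the idea rather than reproducing the textbook pseudocode verbatim.

```python
from collections import deque

def graph_search(successors, start, is_goal):
    """Breadth-first GRAPH-SEARCH: never expand a state twice."""
    fringe = deque([(start, [start])])
    closed = set()                          # remember where you've been
    while fringe:
        state, path = fringe.popleft()
        if is_goal(state):
            return path
        if state in closed:                 # repeated state: don't expand it again
            continue
        closed.add(state)
        for succ in successors(state):
            if succ not in closed:
                fringe.append((succ, path + [succ]))
    return None
```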

13 Interesting problems
Exercise 3.9: three cannibals and three missionaries and a boat that can hold one or two people are on one side of the river. Get everyone across the river (an early AI problem, 1968).
8-puzzle and 15-puzzle, invented by Sam Loyd in the good ol' USA in the 1870s. Think about the search space.
Rubik's cube
Traveling Salesman Problem (TSP)
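For the missionaries-and-cannibals exercise, one possible state encoding and successor function; the (missionaries_left, cannibals_left, boat) encoding is an assumption, not the only reasonable choice. It plugs directly into a search such as the GRAPH-SEARCH sketch above.

```python
def legal(m, c):
    """No bank may have missionaries outnumbered by cannibals (m, c count the left bank)."""
    return (m == 0 or m >= c) and (3 - m == 0 or 3 - m >= 3 - c)

def successors(state):
    """State = (missionaries_left, cannibals_left, boat), boat = 1 when the boat is on the left."""
    m, c, boat = state
    moves = [(1, 0), (2, 0), (0, 1), (0, 2), (1, 1)]   # one or two people ride the boat
    sign = -1 if boat == 1 else 1                      # riders leave the bank the boat is on
    for dm, dc in moves:
        m2, c2 = m + sign * dm, c + sign * dc
        if 0 <= m2 <= 3 and 0 <= c2 <= 3 and legal(m2, c2):
            yield (m2, c2, 1 - boat)

start, goal = (3, 3, 1), (0, 0, 0)                     # everyone starts on the left with the boat
# e.g. graph_search(successors, start, lambda s: s == goal) finds a crossing sequence
```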

14 Chapter 4 – Informed Search
Informed? Uses problem-specific knowledge beyond the definition of the problem itself
– selecting the best lane in traffic
– playing a sport: what's the heuristic (or evaluation function)?
(image credit: www.curling.ca)

15 Best-first Search
BFS/DFS/UCS differ in how they select a node to pull off the fringe.
We want to pull the node that's on the optimal path off the fringe
– But if we knew the "best" node to explore, we wouldn't have to search!
Use an evaluation function to select the node to expand:
f(n) = evaluation function = expected "cost" for a path from root to goal that goes through node n
Select the node n that minimizes f(n).
How do we build f(n)?

16 Evaluation function: f(n)
Combine two costs: f(n) = g(n) + h(n)
– g(n) = cost to get to n from the start
– h(n) = estimated cost to get from n to the goal

17 Heuristics
A function, h(n), that estimates the cost of the cheapest path from node n to the goal.
h(n) = 0 if n == goal node
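Two standard heuristics for the 8-puzzle mentioned earlier, as concrete examples of h(n); the goal layout chosen here is an assumption.

```python
GOAL = (1, 2, 3,
        4, 5, 6,
        7, 8, 0)          # 0 is the blank; one common goal layout (an assumption)

def h_misplaced(state):
    """Number of tiles out of place (the blank is not counted)."""
    return sum(1 for s, g in zip(state, GOAL) if s != 0 and s != g)

def h_manhattan(state):
    """Sum of each tile's horizontal + vertical distance from its goal square.
    Both heuristics never overestimate the true solution cost, and both are 0 at the goal."""
    total = 0
    for idx, tile in enumerate(state):
        if tile == 0:
            continue
        goal_idx = GOAL.index(tile)
        total += abs(idx // 3 - goal_idx // 3) + abs(idx % 3 - goal_idx % 3)
    return total

print(h_misplaced(GOAL), h_manhattan(GOAL))   # 0 0: h(n) = 0 at the goal node
```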

18 Greedy Best-first Search
Trust your heuristic: expand the node that minimizes h(n), i.e. f(n) = h(n).
Example: getting from A to B
– Explore nodes with the shortest straight-line distance to B
Shortcomings of the heuristic?
– Greedy can be bad… climbing K2 accomplishes a goal, but the cost of getting there is prohibitive.
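A compact greedy best-first sketch, i.e. best-first search with f(n) = h(n); `successors`, `h`, and `is_goal` are caller-supplied placeholders.

```python
import heapq
from itertools import count

def greedy_best_first(successors, h, start, is_goal):
    """Always expand the node that *looks* closest to the goal. Often fast, but not optimal."""
    tie = count()                                    # tie-breaker so the heap never compares states
    fringe = [(h(start), next(tie), start, [start])]
    visited = {start}
    while fringe:
        _, _, state, path = heapq.heappop(fringe)
        if is_goal(state):
            return path
        for succ in successors(state):
            if succ not in visited:
                visited.add(succ)
                heapq.heappush(fringe, (h(succ), next(tie), succ, path + [succ]))
    return None
```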

19 A* (A-star) Search
Don't simply minimize the estimated cost to the goal… minimize the estimated cost from start to goal: f(n) = g(n) + h(n)
– g(n) = cost to get to n from the start
– h(n) = estimated cost to get from n to the goal
Select the node from the fringe that minimizes f(n).
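A corresponding A* sketch with f(n) = g(n) + h(n); again `successors`, `h`, and `is_goal` are placeholders supplied by the caller.

```python
import heapq
from itertools import count

def a_star(successors, h, start, is_goal):
    """Best-first search with f(n) = g(n) + h(n). With an admissible h, the first
    goal popped from the fringe lies on an optimal path."""
    tie = count()                                  # tie-breaker so the heap never compares states
    fringe = [(h(start), next(tie), 0, start, [start])]
    best_g = {start: 0}                            # cheapest known path cost to each state
    while fringe:
        f, _, g, state, path = heapq.heappop(fringe)
        if is_goal(state):
            return path, g
        for succ, step_cost in successors(state):
            g2 = g + step_cost
            if g2 < best_g.get(succ, float('inf')):    # keep only the cheapest known path
                best_g[succ] = g2
                heapq.heappush(fringe, (g2 + h(succ), next(tie), g2, succ, path + [succ]))
    return None, float('inf')
```

Note that plugging in h(n) = 0 reduces this to the uniform-cost search shown earlier.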

20 A* is Optimal?
A* can be optimal if h(n) satisfies conditions:
h(n) never overestimates the cost to reach the goal
– it is eternally optimistic
– called an admissible heuristic
– f(n) then never overestimates the cost of a solution through n
Proof of optimality?

21 A* is Optimal
We must prove that A* will not return a suboptimal goal or a suboptimal path to a goal.
Let G be a suboptimal goal node, and let C* be the cost of the optimal solution
– f(G) = g(G) + h(G)
– h(G) = 0 because G is a goal node
– f(G) = g(G) > C* (because G is suboptimal)

22 A* is Optimal (continued)
Let n be an unexpanded fringe node on the optimal path
– because h(n) does not overestimate, f(n) = g(n) + h(n) <= C*
Therefore f(n) <= C* < f(G)
– node n will be selected for expansion before node G, so A* cannot return G.

23 Repeated States and GRAPH-SEARCH
GRAPH-SEARCH always ignores all but the first occurrence of a state during search
– a lower-cost path may be tossed
– So, don't throw away subsequent occurrences
– Or, ensure that the optimal path to any repeated state is always the first one followed
The latter requires an additional constraint on the heuristic: consistency.

24 Consistent heuristic: h(n)
The heuristic function must be monotonic: for every node n and successor n' obtained with action a,
– the estimated cost of reaching the goal from n is no greater than the cost of getting to n' plus the estimated cost of reaching the goal from n'
– h(n) <= c(n, a, n') + h(n')
This implies that the values of f(n) along any path are nondecreasing.

25 Examples of consistent h(n)
h(n) <= c(n, a, n_succ) + h(n_succ)
Recall that h(n) is admissible
– "The quickest you can get there from here is 10 minutes"
– It may take more than 10 minutes, but not fewer
After taking an action and learning its cost
– Consistent: "It took you two minutes to get here and you still have nine minutes to go" (10 <= 2 + 9)
– Not possible with a consistent heuristic: "It took you two minutes to get here and you have seven minutes to go" (10 > 2 + 7)
(figure: an edge of cost 2 with h values 10 and 9, and a goal with h = 0)
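A small helper that checks the consistency condition edge by edge over an explicit, finite set of states; this is only a sanity-check sketch for toy problems, not something the slides prescribe.

```python
def is_consistent(successors, h, states):
    """Return (True, None) if h(n) <= c(n, a, n') + h(n') holds for every edge,
    else (False, the offending edge). `successors(n)` yields (n', step_cost) pairs."""
    for n in states:
        for n2, cost in successors(n):
            if h(n) > cost + h(n2) + 1e-9:        # small tolerance for float-valued heuristics
                return False, (n, n2)
    return True, None

# The travel-time example above: h dropping from 10 to 9 after a 2-minute step is fine
# (10 <= 2 + 9), but dropping from 10 to 7 would violate 10 <= 2 + 7.
```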

26 Example of inconsistent h(n)
As a thought exercise for after class:
– Consider what happens when a heuristic is inconsistent.
– Consider how one could have a consistent but non-admissible heuristic.

27 Proof of monotonicity of f(n)
If h(n) is consistent (monotonic), then f(n) along any path is nondecreasing.
Let n' be a successor of n
– g(n') = g(n) + c(n, a, n') for some action a
– f(n') = g(n') + h(n') = g(n) + c(n, a, n') + h(n') >= g(n) + h(n) = f(n)
(the last step uses monotonicity: h(n) <= c(n, a, n') + h(n'))

28 Contours
Because f(n) is nondecreasing, we can draw contours of equal f-value.
If we know C*, we only need to explore contours with f less than C*.

29 Properties of A*
A* expands all nodes with f(n) < C*.
A* expands some (at least one) of the nodes on the C* contour before finding the goal.
A* expands no nodes with f(n) > C*
– these unexpanded nodes can be pruned.

30 A* is Optimally Efficient
Compared to other algorithms that search from the root and use the same heuristic:
no other optimal algorithm is guaranteed to expand fewer nodes than A* (except perhaps through tie-breaking among nodes with f(n) = C*).

31 Pros and Cons of A*
A* is optimal and optimally efficient.
A* is still slow and bulky (space kills first)
– The number of nodes grows exponentially with the distance to the goal
– This is really a function of the heuristic, but all heuristics have some error
– A* must search all nodes within the goal contour
– Finding suboptimal goals is sometimes the only feasible option
– Sometimes, better heuristics are non-admissible.

32 End (early, for the ACM meeting?)

