
1  Graphs vs trees up front; use grid too; discuss for BFS, DFS, IDS, UCS  Cut back on A* optimality detail; a bit more on importance of heuristics, performance data  Pattern DBs?

2 Announcements  Homework 1 (uninformed search) due tomorrow  Homework 2 (informed and local search, search agents) due 9/15  Part I through edX – online, instant grading, submit as often as you like.  Part II through www.pandagrader.com – submit PDF  Project 1 out on Thursday, due 9/24

3 AI in the news … TechCrunch, 2014/1/25

4 AI in the news …

5

6 General Tree Search  Important ideas:  Frontier  Expansion: for each action, create the child node  Exploration strategy  Main question: which frontier nodes to explore?
function TREE-SEARCH(problem) returns a solution, or failure
    initialize the frontier using the initial state of problem
    loop do
        if the frontier is empty then return failure
        choose a leaf node and remove it from the frontier
        if the node contains a goal state then return the corresponding solution
        expand the chosen node, adding the resulting nodes to the frontier
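A minimal Python sketch of this loop (an illustration, not the course code): the frontier is assumed to support add(node), pop(), and truth-testing, the problem object is assumed to expose initial_state, is_goal, and actions, and Node/child_node are the helpers sketched under the next slide.

def tree_search(problem, frontier):
    # The frontier's pop() order is the exploration strategy; it alone decides
    # whether this loop behaves as DFS, BFS, UCS, and so on.
    frontier.add(Node(problem.initial_state))
    while frontier:
        node = frontier.pop()                     # choose a leaf node
        if problem.is_goal(node.state):
            return node                           # trace parents to recover the plan
        for action in problem.actions(node.state):
            frontier.add(child_node(problem, node, action))
    return None                                   # frontier empty: failure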

7 A note on implementation Nodes have state, parent, action, path-cost. A child of node by action a has: state = result(node.state, a); parent = node; action = a; path-cost = node.path-cost + step-cost(node.state, a, state). Extract the solution by tracing back parent pointers, collecting actions.
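A sketch of that node representation in Python; the attribute names mirror the slide, while the problem methods (result, step_cost) are assumptions about the problem interface rather than any particular codebase.

class Node:
    def __init__(self, state, parent=None, action=None, path_cost=0):
        self.state = state
        self.parent = parent
        self.action = action
        self.path_cost = path_cost

def child_node(problem, node, action):
    # state = result(node.state, a); the path cost accumulates the step cost
    state = problem.result(node.state, action)
    step = problem.step_cost(node.state, action, state)
    return Node(state, parent=node, action=action,
                path_cost=node.path_cost + step)

def solution(node):
    # Extract the plan by tracing back parent pointers, collecting actions.
    actions = []
    while node.parent is not None:
        actions.append(node.action)
        node = node.parent
    return list(reversed(actions))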

8 Depth-First Search

9 Strategy: expand a deepest node first. Implementation: the frontier is a LIFO stack. [Figure: example state graph and the corresponding search tree, expanded depth-first]
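With the generic tree-search loop above, DFS is just a choice of frontier: a plain Python list used as a LIFO stack (a sketch).

class StackFrontier:
    """LIFO frontier: pops the most recently added node, i.e. a deepest one."""
    def __init__(self):
        self._nodes = []
    def add(self, node):
        self._nodes.append(node)
    def pop(self):
        return self._nodes.pop()        # last in, first out
    def __bool__(self):
        return bool(self._nodes)

# e.g. dfs = lambda problem: tree_search(problem, StackFrontier())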

10 Search Algorithm Properties

11  Complete: Guaranteed to find a solution if one exists?  Optimal: Guaranteed to find the least-cost path?  Time complexity?  Space complexity?  Cartoon of search tree:  b is the branching factor  m is the maximum depth  solutions at various depths  Number of nodes in the entire tree: 1 + b + b^2 + … + b^m = O(b^m) [Figure: tree cartoon with 1 node, b nodes, b^2 nodes, …, b^m nodes over m tiers]

12 Depth-First Search (DFS) Properties  What nodes does DFS expand?  Some left prefix of the tree.  Could process the whole tree!  If m is finite, takes time O(b^m)  How much space does the frontier take?  Only has siblings on the path to the root, so O(bm)  Is it complete?  m could be infinite, so only if we prevent cycles (more later)  Is it optimal?  No, it finds the “leftmost” solution, regardless of depth or cost

13 Breadth-First Search

14 Strategy: expand a shallowest node first. Implementation: the frontier is a FIFO queue. [Figure: example state graph and the corresponding search tree, expanded tier by tier]
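Relative to DFS, only the frontier changes: a FIFO queue, sketched here with collections.deque.

from collections import deque

class QueueFrontier:
    """FIFO frontier: pops the oldest node, i.e. a shallowest one."""
    def __init__(self):
        self._nodes = deque()
    def add(self, node):
        self._nodes.append(node)
    def pop(self):
        return self._nodes.popleft()    # first in, first out
    def __bool__(self):
        return bool(self._nodes)

# e.g. bfs = lambda problem: tree_search(problem, QueueFrontier())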

15 Breadth-First Search (BFS) Properties  What nodes does BFS expand?  Processes all nodes above the shallowest solution  Let the depth of the shallowest solution be s  Search takes time O(b^s)  How much space does the frontier take?  Has roughly the last tier, so O(b^s)  Is it complete?  s must be finite if a solution exists, so yes!  Is it optimal?  Only if costs are all 1 (more on costs later) [Figure: tree cartoon with the shallowest solution at depth s; that tier has b^s nodes]

16 Quiz: DFS vs BFS

17  When will BFS outperform DFS?  When will DFS outperform BFS? [Demo: dfs/bfs maze water (L2D6)]

18 Video of Demo Maze Water DFS/BFS (part 1)

19 Video of Demo Maze Water DFS/BFS (part 2)

20 Iterative Deepening  Idea: get DFS’s space advantage with BFS’s time / shallow-solution advantages  Run a DFS with depth limit 1. If no solution…  Run a DFS with depth limit 2. If no solution…  Run a DFS with depth limit 3. …  Isn’t that wastefully redundant?  Generally most of the work happens in the lowest level searched, so it’s not so bad!
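A sketch of iterative deepening built on a depth-limited DFS, reusing the hypothetical Node/child_node helpers from earlier; as on the slide, it simply retries with limits 1, 2, 3, …, so it does not terminate if no solution exists.

from itertools import count

def depth_limited_dfs(problem, limit):
    """DFS that refuses to go deeper than `limit` actions from the start."""
    def recurse(node, depth):
        if problem.is_goal(node.state):
            return node
        if depth == limit:
            return None                            # hit the depth limit
        for action in problem.actions(node.state):
            found = recurse(child_node(problem, node, action), depth + 1)
            if found is not None:
                return found
        return None
    return recurse(Node(problem.initial_state), 0)

def iterative_deepening_search(problem):
    for limit in count(1):                         # depth limits 1, 2, 3, ...
        found = depth_limited_dfs(problem, limit)
        if found is not None:
            return found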

21 Finding a least-cost path BFS finds the shortest path in terms of the number of actions. It does not find the least-cost path. We will now cover a similar algorithm which does find the least-cost path. [Figure: example state graph from START to GOAL with arc costs]

22 Uniform Cost Search

23 Strategy: expand a cheapest node first. Implementation: the frontier is a priority queue (priority: cumulative cost). [Figure: example state graph with arc costs, the corresponding search tree labeled with cumulative costs, and cost contours]
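A sketch of the cumulative-cost priority queue using heapq; a running counter breaks ties so Node objects are never compared directly. With the default priority (the path cost so far), tree_search with this frontier is uniform cost search.

import heapq
from itertools import count

class PriorityQueueFrontier:
    def __init__(self, priority=lambda node: node.path_cost):
        self._heap = []
        self._priority = priority
        self._tiebreak = count()        # avoids comparing Node objects on equal priorities
    def add(self, node):
        heapq.heappush(self._heap, (self._priority(node), next(self._tiebreak), node))
    def pop(self):
        return heapq.heappop(self._heap)[-1]    # lowest priority value first
    def __bool__(self):
        return bool(self._heap)

# e.g. ucs = lambda problem: tree_search(problem, PriorityQueueFrontier())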

24 Uniform Cost Search (UCS) Properties  What nodes does UCS expand?  Processes all nodes with cost less than the cheapest solution!  If that solution costs C* and arcs cost at least ε, then the “effective depth” is roughly C*/ε  Takes time O(b^(C*/ε)) (exponential in effective depth)  How much space does the frontier take?  Has roughly the last tier, so O(b^(C*/ε))  Is it complete?  Assuming the best solution has a finite cost and the minimum arc cost is positive, yes!  Is it optimal?  Yes! (Proof next lecture via A*) [Figure: cost contours c ≤ 1, c ≤ 2, c ≤ 3; roughly C*/ε “tiers”]

25 Uniform Cost Issues  Remember: UCS explores increasing cost contours  The good: UCS is complete and optimal!  The bad:  Explores options in every “direction”  No information about goal location  We’ll fix that soon! [Figure: cost contours c ≤ 1, c ≤ 2, c ≤ 3 spreading from Start in all directions, with the Goal off to one side]

26 Search Gone Wrong?

27 Tree Search vs Graph Search

28 Tree Search: Extra Work!  Failure to detect repeated states can cause exponentially more work. [Figure: a state graph with O(m) states whose search tree has O(2^m) nodes]

29 Tree Search: Extra Work!  Failure to detect repeated states can cause exponentially more work. [Figure: a state graph with O(2m^2) states whose search tree has O(4^m) nodes; for m = 20, that is 800 states but 1,099,511,627,776 tree nodes]

30 General Tree Search
function TREE-SEARCH(problem) returns a solution, or failure
    initialize the frontier using the initial state of problem
    loop do
        if the frontier is empty then return failure
        choose a leaf node and remove it from the frontier
        if the node contains a goal state then return the corresponding solution
        expand the chosen node, adding the resulting nodes to the frontier

31 General Graph Search
function GRAPH-SEARCH(problem) returns a solution, or failure
    initialize the frontier using the initial state of problem
    initialize the explored set to be empty
    loop do
        if the frontier is empty then return failure
        choose a leaf node and remove it from the frontier
        if the node contains a goal state then return the corresponding solution
        add the node to the explored set
        expand the chosen node, adding the resulting nodes to the frontier,
            but only if the node is not already in the frontier or explored set
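A Python sketch of this loop with an explored set, assuming states are hashable; the frontier classes and Node helpers are the hypothetical ones sketched earlier.

def graph_search(problem, frontier):
    frontier.add(Node(problem.initial_state))
    frontier_states = {problem.initial_state}   # states currently on the frontier
    explored = set()                            # states already expanded
    while frontier:
        node = frontier.pop()
        frontier_states.discard(node.state)
        if problem.is_goal(node.state):
            return node
        explored.add(node.state)
        for action in problem.actions(node.state):
            child = child_node(problem, node, action)
            # only add if the state is not already in the frontier or explored set
            if child.state not in explored and child.state not in frontier_states:
                frontier.add(child)
                frontier_states.add(child.state)
    return None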

32 Tree Search vs Graph Search  Graph search  Avoids infinite loops: m is finite in a finite state space  Eliminates exponentially many redundant paths  Requires memory proportional to its runtime!

33 CS 188: Artificial Intelligence Informed Search Instructor: Stuart Russell University of California, Berkeley

34 Search Heuristics  A heuristic is:  A function that estimates how close a state is to a goal  Designed for a particular search problem  Examples: Manhattan distance, Euclidean distance for pathing [Figure: example states with distance estimates 10, 5, and 11.2]

35 Example: distance to Bucharest h(x)

36 Example: Pancake Problem

37 Step Cost: Number of pancakes flipped

38 Example: Pancake Problem State space graph with step costs [Figure: state space graph; arc labels give the number of pancakes flipped, ranging from 2 to 4]

39 Example: Pancake Problem Heuristic: h(x) = the number of the largest pancake that is still out of place [Figure: states annotated with their h values]

40 Effect of heuristics Guide search towards the goal instead of all over the place. [Figure: uninformed search expands in all directions from Start; informed search expands mainly toward the Goal]

41 Greedy Search

42  Expand the node that seems closest… (order the frontier by h)  What can possibly go wrong? [Figure: Romania map fragment with straight-line-distance h values] Greedy follows Sibiu–Fagaras–Bucharest = 99 + 211 = 310, while the cheaper route is Sibiu–Rimnicu Vilcea–Pitesti–Bucharest = 80 + 97 + 101 = 278.

43 Greedy Search  Strategy: expand the node that seems closest to a goal state, according to h  Problem 1: it chooses a node even if it’s at the end of a very long and winding road  Problem 2: it takes h literally even if it’s completely wrong
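With the frontier classes sketched earlier, greedy best-first search is just the priority queue ordered by h alone (a sketch; h is assumed to map a state to an estimated cost to the goal).

def greedy_search(problem, h):
    # Order the frontier purely by the heuristic; the path cost g is ignored,
    # which is exactly why problems 1 and 2 above can bite.
    return tree_search(problem, PriorityQueueFrontier(priority=lambda n: h(n.state)))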

44 Video of Demo Contours Greedy (Empty)

45 Video of Demo Contours Greedy (Pacman Small Maze)

46 A* Search

47 UCS / Greedy / A*

48 Combining UCS and Greedy  Uniform-cost orders by path cost, or backward cost g(n)  Greedy orders by goal proximity, or forward cost h(n)  A* Search orders by the sum: f(n) = g(n) + h(n) [Figure: example graph with arc costs and h values; each search-tree node is annotated with its g and h] (example due to Teg Grenager)
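The same one-line change gives A*: order the frontier by f(n) = g(n) + h(n). This is a sketch reusing the earlier hypothetical helpers; swapping in the graph_search loop gives A* graph search, whose optimality conditions are discussed later in the deck.

def a_star_search(problem, h):
    # f(n) = g(n) + h(n): backward cost paid so far plus estimated forward cost.
    f = lambda node: node.path_cost + h(node.state)
    return tree_search(problem, PriorityQueueFrontier(priority=f))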

49 Is A* Optimal?  What went wrong?  Actual bad goal cost < estimated good goal cost  We need estimates to be less than actual costs! [Figure: small example graph with heuristic values h = 7, 6, and 0, where the overestimate on the good path makes A* return the more expensive goal path]

50 Admissible Heuristics

51 Idea: Admissibility Inadmissible (pessimistic) heuristics break optimality by trapping good plans on the frontier Admissible (optimistic) heuristics slow down bad plans but never outweigh true costs

52 Admissible Heuristics  A heuristic h is admissible (optimistic) if 0 ≤ h(n) ≤ h*(n), where h*(n) is the true cost to a nearest goal  Examples: [Figure: example heuristic values]  Coming up with admissible heuristics is most of what’s involved in using A* in practice.

53 Optimality of A* Tree Search

54 Assume:  A is an optimal goal node  B is a suboptimal goal node  h is admissible Claim:  A will be chosen for expansion before B …

55 Optimality of A* Tree Search: Blocking Proof:  Imagine B is on the frontier  Some ancestor n of A is on the frontier, too (maybe A!)  Claim: n will be expanded before B 1. f(n) is less than or equal to f(A): f(n) = g(n) + h(n) ≤ g(n) + h*(n) ≤ g(A) = f(A) (definition of f-cost; admissibility of h; h = 0 at a goal)

56 Optimality of A* Tree Search: Blocking Proof:  Imagine B is on the frontier  Some ancestor n of A is on the frontier, too (maybe A!)  Claim: n will be expanded before B 1. f(n) is less than or equal to f(A) 2. f(A) is less than f(B): f(A) = g(A) < g(B) = f(B) (B is suboptimal; h = 0 at a goal)

57 Optimality of A* Tree Search: Blocking Proof:  Imagine B is on the frontier  Some ancestor n of A is on the frontier, too (maybe A!)  Claim: n will be expanded before B 1. f(n) is less than or equal to f(A) 2. f(A) is less than f(B) 3. n expands before B  All ancestors of A expand before B  A expands before B  A* search is optimal

58 Properties of A*

59 Uniform-Cost vs A* [Figure: side-by-side search tree cartoons for Uniform-Cost and A*]

60 UCS vs A* Contours  Uniform-cost expands equally in all “directions”  A* expands mainly toward the goal, but does hedge its bets to ensure optimality [Figure: UCS contours centered on Start vs. A* contours stretched toward the Goal]

61 Video of Demo Contours (Empty) -- UCS

62 Video of Demo Contours (Empty) -- Greedy

63 Video of Demo Contours (Empty) – A*

64 Video of Demo Contours (Pacman Small Maze) – A*

65 Comparison: Greedy / Uniform Cost / A*

66 A* Applications  Video games  Pathing / routing problems  Resource planning problems  Robot motion planning  Language analysis  Machine translation  Speech recognition  … [Demo: UCS / A* pacman tiny maze (L3D6,L3D7)] [Demo: guess algorithm Empty Shallow/Deep (L3D8)]

67 Video of Demo Empty Water Shallow/Deep – Guess Algorithm

68 Creating Heuristics

69 Creating Admissible Heuristics  Most of the work in solving hard search problems optimally is in coming up with admissible heuristics  Often, admissible heuristics are solutions to relaxed problems, where new actions are available

70 Example: 8 Puzzle  What are the states?  How many states?  What are the actions?  How many actions from the start state?  What should the step costs be? [Figure: start state, goal state, and available actions]

71 8 Puzzle I  Heuristic: number of tiles misplaced  Why is it admissible?  h(start) = 8  This is a relaxed-problem heuristic
Average nodes expanded when the optimal path has:
            4 steps     8 steps     12 steps
  UCS       112         6,300       3.6 x 10^6
  TILES     13          39          227
Statistics from Andrew Moore

72 8 Puzzle II  What if we had an easier 8-puzzle where any tile could slide in any direction at any time, ignoring other tiles?  Heuristic: total Manhattan distance  Why is it admissible?  h(start) = 3 + 1 + 2 + … = 18
Average nodes expanded when the optimal path has:
            4 steps     8 steps     12 steps
  TILES     13          39          227
  MANHATTAN 12          25          73
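A sketch of the total-Manhattan-distance heuristic; the 9-tuple state encoding (read row by row, 0 for the blank) and the particular goal layout are assumptions for illustration, not taken from the project code.

GOAL = (0, 1, 2,
        3, 4, 5,
        6, 7, 8)          # assumed goal layout; 0 is the blank

def manhattan_heuristic(state, goal=GOAL):
    """Sum over tiles of |row offset| + |column offset| from the goal position."""
    total = 0
    for index, tile in enumerate(state):
        if tile == 0:
            continue                   # the blank is not a tile
        row, col = divmod(index, 3)
        goal_row, goal_col = divmod(goal.index(tile), 3)
        total += abs(row - goal_row) + abs(col - goal_col)
    return total

print(manhattan_heuristic((1, 0, 2, 3, 4, 5, 6, 7, 8)))   # 1: tile 1 is one column away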

73 Combining heuristics  Dominance: h_a dominates h_c if for every node n, h_a(n) ≥ h_c(n)  Roughly speaking, larger is better as long as both are admissible  The zero heuristic is pretty bad (what does A* do with h = 0?)  The exact heuristic is pretty good, but usually too expensive!  What if we have two heuristics, neither of which dominates the other?  Form a new heuristic by taking the max of both: h(n) = max(h_a(n), h_b(n))  The max of admissible heuristics is admissible and dominates both!  Example: number of knight’s moves to get from A to B  h1 = (Manhattan distance)/3 (rounded up to correct parity)  h2 = (Euclidean distance)/√5 (rounded up to correct parity)  h3 = (max x or y shift)/2 (rounded up to correct parity)
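Taking the max of two heuristics is a one-liner (a sketch; h_a and h_b can be any heuristics defined on the same states, e.g. misplaced tiles and Manhattan distance).

def max_heuristic(h_a, h_b):
    # The pointwise max of two admissible heuristics is admissible and dominates both.
    return lambda state: max(h_a(state), h_b(state))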

74 Optimality of A* Graph Search This part is a bit fiddly, sorry about that

75 A* Graph Search Gone Wrong? [Figure: state space graph with arc costs and heuristic values, and the corresponding search tree, in which a cheaper path to C is discovered after C has already been explored] A simple check against the explored set blocks the new C. A fancy check allows the new C if it is cheaper than the old one, but requires recalculating C’s descendants.

76 Consistency of Heuristics  Main idea: estimated heuristic costs ≤ actual costs  Admissibility: heuristic cost ≤ actual cost to the goal, i.e. h(A) ≤ actual cost from A to G  Consistency: heuristic “arc” cost ≤ actual cost for each arc, i.e. h(A) – h(C) ≤ cost(A to C)  Consequences of consistency:  The f value along a path never decreases: h(A) ≤ cost(A to C) + h(C)  A* graph search is optimal [Figure: triangle A–C–G with arc costs and heuristic values]
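A tiny sketch that checks consistency arc by arc on an explicit graph; the dict encoding and the numbers below are illustrative assumptions, loosely inspired by the A–C–G triangle in the figure.

def is_consistent(h, graph, goals):
    """graph: {state: {neighbor: arc_cost}}; h: {state: heuristic value}."""
    if any(h[g] != 0 for g in goals):
        return False                               # h must be 0 at every goal
    return all(h[u] - h[v] <= cost                 # heuristic "arc" cost <= actual arc cost
               for u, neighbors in graph.items()
               for v, cost in neighbors.items())

graph = {"A": {"C": 1, "G": 4}, "C": {"G": 3}, "G": {}}
print(is_consistent({"A": 4, "C": 3, "G": 0}, graph, goals={"G"}))   # True
print(is_consistent({"A": 4, "C": 1, "G": 0}, graph, goals={"G"}))   # False: h(A) - h(C) = 3 > 1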

77 Optimality of A* Graph Search  Sketch: consider what A* does with a consistent heuristic:  Fact 1: In tree search, A* expands nodes in increasing total f value (f-contours)  Fact 2: For every state s, nodes that reach s optimally are expanded before nodes that reach s suboptimally  Result: A* graph search is optimal [Figure: f-contours f ≤ 1, f ≤ 2, f ≤ 3]

78 Optimality  Tree search:  A* is optimal if heuristic is admissible  UCS is a special case (h = 0)  Graph search:  A* optimal if heuristic is consistent  UCS optimal (h = 0 is consistent)  Consistency implies admissibility  In general, most natural admissible heuristics tend to be consistent, especially if from relaxed problems

79 A*: Summary

80  A* uses both backward costs and (estimates of) forward costs  A* is optimal with admissible / consistent heuristics  Heuristic design is key: often use relaxed problems

