CS 343H: Artificial Intelligence


1 CS 343H: Artificial Intelligence
Week 2b: Informed Search

2 Announcements
Are you registered in the course?
Reading responses: good overall.
PS1 posted, due in 2 weeks. This one will take more time.
Next Tuesday: in-class review and PS1 help. Come prepared with questions on PS1!

3 Quiz: review
Which are true about DFS? (b is the branching factor, m is the depth of the search tree.)
- At any given time during the search, the number of nodes on the fringe can be no larger than b·m.
- At any given time during the search, the number of nodes on the fringe can be as large as b^m.
- The number of nodes expanded throughout the entire search can be no larger than b·m.
- The number of nodes expanded throughout the entire search can be as large as b^m.

4 Quiz: review
Which are true about BFS? (b is the branching factor, s is the depth of the shallowest solution.)
- At any given time during the search, the number of nodes on the fringe can be no larger than b·s.
- At any given time during the search, the number of nodes on the fringe can be as large as b^s.
- The number of nodes considered throughout the entire search can be no larger than b·s.
- The number of nodes considered throughout the entire search can be as large as b^s.

5 Quiz: search execution
Run DFS, BFS, and UCS:

6 Today Informed search Heuristics Greedy search A* search Graph search

7 Recap: Search
Search problem:
- States (configurations of the world)
- Actions and costs
- Successor function: a function from states to lists of (state, action, cost) triples (world dynamics)
- Start state and goal test
Search tree:
- Nodes represent plans for reaching states
- Plans have costs (sum of action costs)
Search algorithm:
- Systematically builds a search tree
- Chooses an ordering of the fringe (unexplored nodes)
- Optimal: finds least-cost plans
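To make the recap concrete, here is a minimal sketch of a search-problem interface in Python. The class and method names (SearchProblem, get_start_state, is_goal, get_successors) are illustrative assumptions, not the assignment's actual API.

```python
# A minimal search-problem interface, mirroring the components above.
# Names are illustrative assumptions; they are not the course's real API.

class SearchProblem:
    def get_start_state(self):
        """Return the start state."""
        raise NotImplementedError

    def is_goal(self, state):
        """Goal test: is this state a goal?"""
        raise NotImplementedError

    def get_successors(self, state):
        """World dynamics: return a list of (next_state, action, cost) triples."""
        raise NotImplementedError
```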

8 Example: Pancake Problem
Cost: Number of pancakes flipped

9 Example: Pancake Problem
State space graph with costs as weights. [figure: pancake state space graph with flip costs between 2 and 4 on the edges]

10 General Tree Search Action: flip top two Cost: 2
Action: flip all four Cost: 4 Path to reach goal: Flip four, flip three Total cost: 7

11 The one queue
All these search algorithms are the same except for fringe strategies.
Conceptually, all fringes are priority queues (i.e., collections of nodes with attached priorities).
Practically, for DFS and BFS, you can avoid the log(n) overhead of an actual priority queue by using a stack or a queue.
You can even write one implementation that takes a variable queuing object.

12 Recall: Uniform Cost Search
Strategy: expand the node with the lowest path cost.
The good: UCS is complete and optimal!
The bad: explores options in every "direction"; no information about goal location.
[figure: cost contours c ≤ 1, c ≤ 2, c ≤ 3 spreading from Start toward Goal] [demo: contours UCS]

13 Example: Uniform cost search

14 Another example

15 Informed search: main idea

16 Search heuristic A heuristic is:
A function that estimates how close a state is to a goal, designed for a particular search problem.

17 Example: Heuristic Function
h(x)

18 Example: Heuristic Function
Heuristic: the largest pancake that is still out of place. [figure: example stacks with heuristic values 4, 3, and 2]
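A sketch of this pancake heuristic, assuming a stack is represented bottom-to-top as a tuple of pancake sizes (the representation is an assumption, not from the slides).

```python
def largest_out_of_place(stack):
    """Pancake heuristic: size of the largest pancake not yet in its goal position.
    `stack` lists sizes from bottom to top; the goal has the largest pancake on
    the bottom. Returns 0 when the stack is already sorted."""
    goal = tuple(sorted(stack, reverse=True))   # largest at the bottom
    for placed, wanted in zip(stack, goal):     # scan from the bottom up
        if placed != wanted:
            return wanted                       # largest misplaced pancake
    return 0

# Example: a 4-pancake stack with the largest pancake on top.
print(largest_out_of_place((1, 3, 2, 4)))  # -> 4
```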

19 How to use the heuristic?
What about following the “arrow” of the heuristic?.... Greedy search

20 Example: Heuristic Function
h(x)

21 Best First / Greedy Search
Expand the node that seems closest… What can go wrong?

22 Greedy search
Strategy: expand the node that you think is closest to a goal state.
Heuristic: estimate of distance to the nearest goal for each state.
A common case: best-first takes you straight to the (wrong) goal.
Worst case: like a badly-guided DFS.
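Expressed with the hypothetical generic_search from the earlier sketch, greedy best-first search is just the fringe strategy "order by h alone"; here h is assumed to map a state to its estimated distance to the goal.

```python
# Greedy best-first search: order the fringe by h alone (path cost g is ignored).
# Builds on the hypothetical generic_search / SearchProblem sketches above.
def greedy(problem, h):
    return generic_search(problem, lambda depth, g, state: h(state))
```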

23 Example 1: greedy search

24 Example 2: greedy search

25 Quiz: greedy search Which solution would greedy search find if run on this graph?

26 Enter: A* search

27 Combining UCS and Greedy
Uniform-cost orders by path cost, or backward cost g(n).
Greedy orders by goal proximity, or forward cost h(n).
A* Search orders by the sum: f(n) = g(n) + h(n).
[figure: example graph with edge costs and heuristic values; example from Teg Grenager]
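A minimal A* tree-search sketch, again assuming the hypothetical SearchProblem interface from above. Note that the goal test happens when a node is dequeued, not when it is enqueued, which is the point of the next slide.

```python
import heapq
import itertools

def a_star_tree_search(problem, h):
    """A* tree search: the fringe is ordered by f(n) = g(n) + h(n)."""
    counter = itertools.count()
    start = problem.get_start_state()
    fringe = [(h(start), next(counter), start, [], 0)]   # (f, tie-break, state, plan, g)
    while fringe:
        _, _, state, plan, g = heapq.heappop(fringe)
        if problem.is_goal(state):        # goal test at dequeue time, not enqueue time
            return plan, g
        for nxt, action, step in problem.get_successors(state):
            g2 = g + step
            heapq.heappush(fringe, (g2 + h(nxt), next(counter), nxt, plan + [action], g2))
    return None, float("inf")
```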

28 When should A* terminate?
Should we stop when we enqueue a goal? No: only stop when we dequeue a goal.
[figure: example graph with edge costs and heuristic values h(S)=3, h(A)=2, h(B)=1, h(G)=0]

29

30 Quiz: A* Tree search

31 Quiz: A* Tree search
When running A*, the first node expanded is S (like all search algorithms). After expanding S, two nodes will be on the fringe: S->A and S->D.
Fill in:
g(S->A) =    h(S->A) =    f(S->A) =
g(S->D) =    h(S->D) =    f(S->D) =
Which node will be expanded next?

32 Is A* Optimal? What went wrong?
[figure: S→A (cost 1), A→G (cost 3), S→G (cost 5)]
Actual bad goal cost < estimated good goal cost. We need estimates to be no larger than the actual costs!

33 Idea: admissibility
Inadmissible (pessimistic) heuristics break optimality by trapping good plans on the fringe.
Admissible (optimistic) heuristics slow down bad plans but never outweigh true costs.

34 Admissible Heuristics
A heuristic h is admissible (optimistic) if 0 <= h(n) <= h*(n) for all n, where h*(n) is the true cost to a nearest goal.
Examples: [figure: 8-puzzle and pancake states with example heuristic values]
Coming up with admissible heuristics is most of what's involved in using A* in practice.

35 Optimality of A*
Notation:
- g(n) = cost to node n
- h(n) = estimated cost from n to the nearest goal (heuristic)
- f(n) = g(n) + h(n) = estimated total cost via n
- A: a lowest-cost goal node
- B: another goal node
Claim: A will exit the fringe before B.

36 Optimality of A*
Imagine B is on the fringe. Some ancestor n of A must be on the fringe too (maybe n is A).
Claim: n will be expanded before B.
(1) f(n) <= f(A):
    f(n) = g(n) + h(n)   // by definition
    f(n) <= g(A)         // by admissibility: h(n) <= the remaining cost from n to A along this path
    g(A) = f(A)          // because h = 0 at a goal

37 Optimality of A*
Imagine B is on the fringe. Some ancestor n of A must be on the fringe too (maybe n is A).
Claim: n will be expanded before B.
(1) f(n) <= f(A)
(2) f(A) < f(B):
    g(A) < g(B)   // B is suboptimal
    f(A) < f(B)   // h = 0 at goals

38 Optimality of A*
Imagine B is on the fringe. Some ancestor n of A must be on the fringe too (maybe n is A).
Claim: n will be expanded before B.
(1) f(n) <= f(A)
(2) f(A) < f(B)
(3) n will expand before B:
    f(n) <= f(A) < f(B)   // from above
    f(n) < f(B)

39 Optimality of A*
Imagine B is on the fringe. Some ancestor n of A must be on the fringe too (maybe n is A).
Claim: n will be expanded before B.
(1) f(n) <= f(A)
(2) f(A) < f(B)
(3) n will expand before B
All ancestors of A expand before B, so A expands before B.

40 Properties of A*: Uniform-Cost vs. A* [figure: explored portion of the search tree for each]

41 UCS vs A* Contours Uniform-cost expands equally in all directions
A* expands mainly toward the goal, but does hedge its bets to ensure optimality. [figure: contour plots from Start to Goal for UCS and for A*]

42 Recall: greedy

43 Uniform cost search

44 A*

45 Example: UCS vs. A*

46 Quiz: A* tree search

47 Quiz

48 A* applications
- Pathing / routing problems
- Video games
- Resource planning problems
- Robot motion planning
- Language analysis
- Machine translation
- Speech recognition

49 Creating Admissible Heuristics
Most of the work in solving hard search problems optimally is in coming up with admissible heuristics.
Often, admissible heuristics are solutions to relaxed problems, where new actions are available.
Inadmissible heuristics are often useful too (why?)

50 Example: 8 Puzzle What are the states? How many states?
What are the actions? What states can I reach from the start state? What should the costs be?

51 8 Puzzle I
Heuristic: number of tiles misplaced. Why is it admissible?
h(start) = 8
This is a relaxed-problem heuristic.
Average nodes expanded when the optimal path has length…
           …4 steps    …8 steps    …12 steps
  UCS      112         6,300       3.6 x 10^6
  TILES    13          39          227
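A sketch of the misplaced-tiles heuristic, assuming a state is a length-9 tuple read row by row with 0 for the blank; the goal layout used here is an assumption.

```python
def misplaced_tiles(state, goal=(1, 2, 3, 4, 5, 6, 7, 8, 0)):
    """Number of non-blank tiles that are not where they belong.
    Admissible: every misplaced tile needs at least one move."""
    return sum(1 for tile, target in zip(state, goal)
               if tile != 0 and tile != target)
```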

52 8 Puzzle II
What if we had an easier 8-puzzle where any tile could slide in any direction at any time, ignoring other tiles?
Heuristic: total Manhattan distance. Why admissible?
h(start) = 18
Average nodes expanded when the optimal path has length…
               …4 steps    …8 steps    …12 steps
  TILES        13          39          227
  MANHATTAN    12          25          73
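A sketch of the total Manhattan distance heuristic under the same assumed tuple representation; each tile's distance is computed as if it could slide freely, which is exactly the relaxed problem described above.

```python
def manhattan(state, goal=(1, 2, 3, 4, 5, 6, 7, 8, 0)):
    """Sum over tiles of |row difference| + |column difference| to the goal position."""
    total = 0
    for idx, tile in enumerate(state):
        if tile == 0:
            continue                       # the blank does not count
        goal_idx = goal.index(tile)
        total += abs(idx // 3 - goal_idx // 3) + abs(idx % 3 - goal_idx % 3)
    return total
```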


54 Quiz 1
Consider the problem of Pacman eating all dots in a maze. Every movement action has a cost of 1. Select all heuristics that are admissible, if any:
- the total number of dots left
- the distance to the closest dot
- the distance to the furthest dot
- the distance to the closest dot plus the distance to the furthest dot
- none of the above

55 Quiz 2
Consider the pancake-flipping problem described earlier in the lecture. Recall that we can put the spatula anywhere in the stack and flip the group of pancakes above the spatula. Rather than having the cost equal the number of flipped pancakes, let the cost be 1 for every flip, regardless of how many pancakes are flipped. Select all heuristics that are admissible, if any:
- 1 if at least one pancake is out of place, 0 otherwise
- the number of pancakes that are out of place
- none of the above

56 8 Puzzle III How about using the actual cost as a heuristic?
Would it be admissible? Would we save on nodes expanded? What’s wrong with it? With A*: a trade-off between quality of estimate and work per node!

57 Tree Search: Extra Work!
Failure to detect repeated states can cause exponentially more work. Why? [figure: state graph vs. the corresponding search tree]

58 Graph Search In BFS, for example, we shouldn't bother expanding the circled nodes (why?) [figure: search tree with repeated states circled]

59 Graph Search
Idea: never expand a state twice.
How to implement:
- Tree search + set of expanded states ("closed set")
- Expand the search tree node by node, but…
- Before expanding a node, check that its state is new; if not new, skip it
- Important: store the closed set as a set, not a list
Can graph search wreck completeness? Why/why not? How about optimality?
Warning: the 3rd-edition book has a more complex, but also correct, variant.
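A sketch of A* graph search with a closed set, using the same hypothetical interface as the earlier sketches; the only change from the tree-search version is the closed-set check before expanding a node.

```python
import heapq
import itertools

def a_star_graph_search(problem, h):
    """A* graph search: never expand a state twice (optimal when h is consistent)."""
    counter = itertools.count()
    start = problem.get_start_state()
    fringe = [(h(start), next(counter), start, [], 0)]
    closed = set()                                   # a set, not a list: O(1) membership tests
    while fringe:
        _, _, state, plan, g = heapq.heappop(fringe)
        if problem.is_goal(state):
            return plan, g
        if state in closed:                          # state already expanded: skip it
            continue
        closed.add(state)
        for nxt, action, step in problem.get_successors(state):
            if nxt not in closed:
                heapq.heappush(fringe, (g + step + h(nxt), next(counter),
                                        nxt, plan + [action], g + step))
    return None, float("inf")
```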

60 A* Graph Search Gone Wrong?
State space graph and search tree. [figure: graph over S, A, B, C, G with edge costs and heuristic values (h=2, h=1, h=4, h=0); in the search tree, C is closed via a more expensive path first, so the cheaper path through C is skipped and G is found with cost 6 instead of 5]

61 Consistency of Heuristics
Admissibility: heuristic cost <= actual cost to the goal, e.g. h(A) <= actual cost from A to G.

62 Consistency of Heuristics
Stronger than admissibility.
Definition: heuristic cost <= actual cost per arc, i.e. h(A) - h(C) <= cost(A to C).
Consequences:
- The f value along a path never decreases
- A* graph search is optimal
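A tiny sketch that checks the consistency condition over a list of known arcs; representing arcs as (state, successor, cost) triples is an assumption for illustration.

```python
def is_consistent(h, arcs):
    """Check h(A) - h(C) <= cost(A to C) for every known arc (A, C, cost)."""
    return all(h(a) - h(c) <= cost for a, c, cost in arcs)

# Toy example (made-up numbers): h drops by 3 across an arc of cost 1 -> inconsistent.
toy_h = {"A": 4, "C": 1}.get
print(is_consistent(toy_h, [("A", "C", 1)]))   # False
```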

63 Optimality
Tree search:
- A* is optimal if the heuristic is admissible (and non-negative)
- UCS is a special case (h = 0)
Graph search:
- A* is optimal if the heuristic is consistent
- UCS is optimal (h = 0 is consistent)
Consistency implies admissibility.
In general, most natural admissible heuristics tend to be consistent, especially if they come from relaxed problems.

64 Quiz: consistency Run A* graph search

65 Trivial Heuristics, Dominance
Dominance: ha ≥ hc if, for all n, ha(n) ≥ hc(n).
Heuristics form a semi-lattice: the max of admissible heuristics is admissible.
Trivial heuristics:
- The bottom of the lattice is the zero heuristic (what does this give us?)
- The top of the lattice is the exact heuristic
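A one-line sketch of combining heuristics by taking their pointwise max, which stays admissible and dominates each component heuristic.

```python
def max_heuristic(*hs):
    """Pointwise max of heuristics: admissible if each h is, and dominates each of them."""
    return lambda state: max(h(state) for h in hs)

# e.g. combined = max_heuristic(misplaced_tiles, manhattan)
```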

66 Quiz
True or false: "If h1 is admissible and h2 is admissible, then 0.5*h1 + 0.5*h2 is admissible."
Both the max and the min of admissible heuristics are admissible. Which of the following statements is true?
- The max of multiple admissible heuristics dominates the min of the same heuristics.
- The min of multiple admissible heuristics dominates the max of the same heuristics.

67 Summary: A* A* uses both backward costs and (estimates of) forward costs A* is optimal with admissible / consistent heuristics Heuristic design is key: often use relaxed problems

68

69 Optimality of A* Graph Search
Sketch: Consider what A* does with a consistent heuristic: Fact 1: In tree search, A* expands nodes in increasing total f value (f-contours) Fact 2: For every state s, nodes that reach s optimally are expanded before nodes that reach s suboptimally. Result: A* graph search is optimal.

70 Optimality of A* Graph Search
Proof:
- New possible problem: some n on the path to G* isn't in the queue when we need it, because some worse n' for the same state was dequeued and expanded first (disaster!)
- Take the highest such n in the tree
- Let p be the ancestor of n that was on the queue when n' was popped
- f(p) <= f(n) because of consistency
- f(n) < f(n') because n' is suboptimal
- So p would have been expanded before n'. Contradiction!

71 Graph Search Very simple fix: never expand a state twice
Can this wreck completeness? Optimality?

72 Graph Search, Reconsidered
Idea: never expand a state twice How to implement: Tree search + list of expanded states (closed list) Expand the search tree node-by-node, but… Before expanding a node, check to make sure its state is new Python trick: store the closed list as a set, not a list Can graph search wreck completeness? Why/why not? How about optimality?

73 Optimality of A* Graph Search
Consider what A* does: Expands nodes in increasing total f value (f-contours) Proof idea: optimal goals have lower f value, so get expanded first We’re making a stronger assumption than in the last proof… What?

74 Optimality of A* Graph Search
Consider what A* does: expands nodes in increasing total f value (f-contours).
Reminder: f(n) = g(n) + h(n) = cost to n + heuristic.
Proof idea: the optimal goal(s) have the lowest f value, so they must get expanded first.
There's a problem with this argument. What are we assuming is true?
[figure: f-contours f ≤ 1, f ≤ 2, f ≤ 3]

75 Course Scheduling From the university’s perspective:
From the university's perspective:
- Set of courses {c1, c2, …, cn}
- Set of room/times {r1, r2, …, rn}
- Each pairing (ck, rm) has a cost wkm
- What's the best assignment of courses to rooms?
States: list of pairings
Actions: add a legal pairing
Costs: cost of the new pairing
Admissible heuristics?
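One possible admissible heuristic for this formulation, sketched under assumed data structures (a cost dictionary keyed by (course, room/time) pairs, and a goal of assigning every course): charge every unassigned course its cheapest remaining pairing; the true completion cost can never be lower.

```python
def scheduling_heuristic(unassigned_courses, available_slots, cost):
    """Sum, over unassigned courses, of the cheapest remaining pairing.
    Admissible: each course must eventually pay at least its cheapest option."""
    return sum(min(cost[(c, r)] for r in available_slots)
               for c in unassigned_courses)
```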

76 Consistency Wait, how do we know parents have better f-vales than their successors? Couldn’t we pop some node n, and find its child n’ to have lower f value? YES: What can we require to prevent these inversions? Consistency: Real cost must always exceed reduction in heuristic Like admissibility, but better! B h = 0 h = 8 3 g = 10 G A h = 10

