 Graphs vs trees up front; use grid too; discuss for BFS, DFS, IDS, UCS.  Cut back on A* optimality detail; a bit more on the importance of heuristics and performance data.  Pattern DBs?

Announcements  Homework 1 (uninformed search) due tomorrow  Homework 2 (informed and local search, search agents) due 9/15  Part I through edX – online, instant grading, submit as often as you like.  Part II through www.pandagrader.com – submit PDF  Project 1 out on Thursday, due 9/24

AI in the news … TechCrunch, 2014/1/25

AI in the news …

General Tree Search  Important ideas:  Frontier  Expansion: for each action, create the child node  Exploration strategy  Main question: which frontier nodes to explore? function TREE-SEARCH(problem) returns a solution, or failure initialize the frontier using the initial state of problem loop do if the frontier is empty then return failure choose a leaf node and remove it from the frontier if the node contains a goal state then return the corresponding solution expand the chosen node, adding the resulting nodes to the frontier

A note on implementation  Nodes have state, parent, action, path-cost  A child of node by action a has: state = result(node.state, a), parent = node, action = a, path-cost = node.path-cost + step-cost(node.state, a, state)  Extract solution by tracing back parent pointers, collecting actions
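One possible rendering of this bookkeeping in Python (a sketch; the Problem methods result and step_cost are assumed names matching the slide's notation):

class Node:
    def __init__(self, state, parent=None, action=None, path_cost=0.0):
        self.state = state          # state in the state space
        self.parent = parent        # node that generated this one
        self.action = action        # action applied to the parent
        self.path_cost = path_cost  # cost of the path from the root

    def child(self, problem, action):
        # Child of this node reached by `action`.
        state = problem.result(self.state, action)
        cost = self.path_cost + problem.step_cost(self.state, action, state)
        return Node(state, parent=self, action=action, path_cost=cost)

    def solution(self):
        # Extract the solution by tracing back parent pointers, collecting actions.
        actions, node = [], self
        while node.parent is not None:
            actions.append(node.action)
            node = node.parent
        return list(reversed(actions))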

Depth-First Search

Strategy: expand a deepest node first. Implementation: the frontier is a LIFO stack.
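A stand-alone sketch of this strategy on a small dict-of-lists graph (the graph below is an invented toy example, not the maze from the slides):

def depth_first_search(graph, start, goal):
    stack = [(start, [start])]               # LIFO frontier: (state, path so far)
    while stack:
        state, path = stack.pop()            # expand a deepest node first
        if state == goal:
            return path
        for neighbor in graph.get(state, []):
            if neighbor not in path:         # avoid cycling on the current path
                stack.append((neighbor, path + [neighbor]))
    return None

toy_graph = {'S': ['d', 'e', 'p'], 'd': ['b', 'c'], 'e': ['h', 'r'],
             'p': ['q'], 'r': ['f'], 'f': ['G']}
print(depth_first_search(toy_graph, 'S', 'G'))   # ['S', 'e', 'r', 'f', 'G']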

Search Algorithm Properties

 Complete: Guaranteed to find a solution if one exists?  Optimal: Guaranteed to find the least cost path?  Time complexity?  Space complexity?  Cartoon of search tree:  b is the branching factor  m is the maximum depth  solutions at various depths  Number of nodes in the entire tree?  1 + b + b^2 + … + b^m = O(b^m) (the tree has m tiers: 1 node, b nodes, b^2 nodes, …, b^m nodes)

Depth-First Search (DFS) Properties  What nodes does DFS expand?  Some left prefix of the tree.  Could process the whole tree!  If m is finite, takes time O(b^m)  How much space does the frontier take?  Only has siblings on the path to the root, so O(bm)  Is it complete?  m could be infinite, so only if we prevent cycles (more later)  Is it optimal?  No, it finds the “leftmost” solution, regardless of depth or cost

Breadth-First Search

Strategy: expand a shallowest node first. Implementation: the frontier is a FIFO queue.
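The corresponding sketch with a FIFO frontier; collections.deque gives O(1) pops from the left, and the reached set anticipates the repeated-state check covered under graph search later (graph format as in the earlier toy example):

from collections import deque

def breadth_first_search(graph, start, goal):
    frontier = deque([(start, [start])])     # FIFO frontier
    reached = {start}                        # states already added to the frontier
    while frontier:
        state, path = frontier.popleft()     # expand a shallowest node first
        if state == goal:
            return path                      # fewest actions, not least cost
        for neighbor in graph.get(state, []):
            if neighbor not in reached:
                reached.add(neighbor)
                frontier.append((neighbor, path + [neighbor]))
    return None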

Breadth-First Search (BFS) Properties  What nodes does BFS expand?  Processes all nodes above the shallowest solution  Let the depth of the shallowest solution be s  Search takes time O(b^s)  How much space does the frontier take?  Has roughly the last tier, so O(b^s)  Is it complete?  s must be finite if a solution exists, so yes!  Is it optimal?  Only if costs are all 1 (more on costs later)

Quiz: DFS vs BFS

 When will BFS outperform DFS?  When will DFS outperform BFS? [Demo: dfs/bfs maze water (L2D6)]

Video of Demo Maze Water DFS/BFS (part 1)

Video of Demo Maze Water DFS/BFS (part 2)

Iterative Deepening  Idea: get DFS’s space advantage with BFS’s time / shallow-solution advantages  Run a DFS with depth limit 1. If no solution…  Run a DFS with depth limit 2. If no solution…  Run a DFS with depth limit 3. …  Isn’t that wastefully redundant?  Generally most work happens in the lowest level searched, so not so bad!
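A compact sketch of the idea: a recursive depth-limited DFS wrapped in a loop over increasing limits (same assumed dict-of-lists graph format as the earlier sketches):

def depth_limited_search(graph, state, goal, limit, path):
    if state == goal:
        return path
    if limit == 0:
        return None                               # cut off this branch
    for neighbor in graph.get(state, []):
        if neighbor not in path:                  # avoid cycles on this path
            found = depth_limited_search(graph, neighbor, goal,
                                         limit - 1, path + [neighbor])
            if found is not None:
                return found
    return None

def iterative_deepening_search(graph, start, goal, max_depth=50):
    for limit in range(max_depth + 1):            # DFS with limit 0, 1, 2, ...
        result = depth_limited_search(graph, start, goal, limit, [start])
        if result is not None:
            return result                         # shallowest solution found
    return None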

Finding a least-cost path  BFS finds the shortest path in terms of the number of actions, but it does not find the least-cost path. We will now cover a similar algorithm that does find the least-cost path.

Uniform Cost Search

Strategy: expand a cheapest node first. Implementation: the frontier is a priority queue (priority: cumulative cost).
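A minimal uniform-cost search sketch using heapq as the priority queue, with priority = cumulative path cost g; the weighted-graph format graph[state] = [(neighbor, step_cost), ...] is an assumption for illustration:

import heapq

def uniform_cost_search(graph, start, goal):
    frontier = [(0, start, [start])]                  # (g, state, path)
    best_g = {start: 0}                               # cheapest cost found so far
    while frontier:
        g, state, path = heapq.heappop(frontier)      # expand a cheapest node first
        if state == goal:
            return g, path
        if g > best_g.get(state, float('inf')):
            continue                                  # stale queue entry, skip it
        for neighbor, cost in graph.get(state, []):
            new_g = g + cost
            if new_g < best_g.get(neighbor, float('inf')):
                best_g[neighbor] = new_g
                heapq.heappush(frontier, (new_g, neighbor, path + [neighbor]))
    return None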

Uniform Cost Search (UCS) Properties  What nodes does UCS expand?  Processes all nodes with cost less than the cheapest solution!  If that solution costs C* and arcs cost at least ε, then the “effective depth” is roughly C*/ε  Takes time O(b^(C*/ε)) (exponential in effective depth)  How much space does the frontier take?  Has roughly the last tier, so O(b^(C*/ε))  Is it complete?  Assuming the best solution has a finite cost and the minimum arc cost is positive, yes!  Is it optimal?  Yes! (Proof next lecture via A*)

Uniform Cost Issues  Remember: UCS explores increasing cost contours  The good: UCS is complete and optimal!  The bad:  Explores options in every “direction”  No information about goal location  We’ll fix that soon!

Search Gone Wrong?

Tree Search vs Graph Search

 Failure to detect repeated states can cause exponentially more work.  Tree Search: Extra Work!  The state graph has O(m) states, but the search tree has O(2^m) nodes.

 Failure to detect repeated states can cause exponentially more work.  Tree Search: Extra Work!  The state graph has O(2m^2) states, but the search tree has O(4^m) nodes; for m = 20 that is 800 states vs. 1,099,511,627,776 tree nodes.

General Tree Search

function TREE-SEARCH(problem) returns a solution, or failure
    initialize the frontier using the initial state of problem
    loop do
        if the frontier is empty then return failure
        choose a leaf node and remove it from the frontier
        if the node contains a goal state then return the corresponding solution
        expand the chosen node, adding the resulting nodes to the frontier

General Graph Search

function GRAPH-SEARCH(problem) returns a solution, or failure
    initialize the frontier using the initial state of problem
    initialize the explored set to be empty
    loop do
        if the frontier is empty then return failure
        choose a leaf node and remove it from the frontier
        if the node contains a goal state then return the corresponding solution
        add the node to the explored set
        expand the chosen node, adding the resulting nodes to the frontier
            but only if the node is not already in the frontier or explored set
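A direct Python rendering of GRAPH-SEARCH, as a sketch with the same assumed problem interface as the tree-search example; a real implementation would track frontier membership in a set rather than scanning the list.

def graph_search(problem, frontier):
    frontier.append((problem.initial_state, []))
    explored = set()
    while frontier:
        state, actions = frontier.pop()
        if problem.is_goal(state):
            return actions
        explored.add(state)                            # remember expanded states
        in_frontier = {s for s, _ in frontier}         # linear scan, for clarity only
        for action, child in problem.successors(state):
            if child not in explored and child not in in_frontier:
                frontier.append((child, actions + [action]))
    return None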

Tree Search vs Graph Search  Graph search  Avoids infinite loops: m is finite in a finite state space  Eliminates exponentially many redundant paths  Requires memory proportional to its runtime!

CS 188: Artificial Intelligence Informed Search Instructor: Stuart Russell University of California, Berkeley

Search Heuristics  A heuristic is:  A function that estimates how close a state is to a goal  Designed for a particular search problem  Examples: Manhattan distance, Euclidean distance for pathing
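For grid pathing where a state is an (x, y) cell, the two example heuristics might look like this (names and state encoding are ours, not from the slides):

import math

def manhattan(state, goal):
    (x1, y1), (x2, y2) = state, goal
    return abs(x1 - x2) + abs(y1 - y2)    # exact for 4-way moves of cost 1 with no walls

def euclidean(state, goal):
    (x1, y1), (x2, y2) = state, goal
    return math.hypot(x1 - x2, y1 - y2)   # straight-line distance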

Example: distance to Bucharest h(x)

Example: Pancake Problem

Step Cost: Number of pancakes flipped

Example: Pancake Problem State space graph with step costs

Example: Pancake Problem  Heuristic h(x): the number of the largest pancake that is still out of place
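A sketch of this heuristic, assuming a state is a tuple of pancake sizes listed from top to bottom and the goal is sorted smallest-on-top (this encoding is our choice for illustration):

def pancake_heuristic(state):
    # Largest pancake that is not yet in its goal position.
    out_of_place = [size for i, size in enumerate(state) if size != i + 1]
    return max(out_of_place, default=0)

print(pancake_heuristic((3, 1, 2, 4)))   # 3: pancake 4 is in place, pancake 3 is not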

Effect of heuristics  Guide search towards the goal instead of all over the place (figure: start-to-goal search contours, uninformed vs. informed)

Greedy Search

 Expand the node that seems closest… (order the frontier by h)  What can possibly go wrong?  Sibiu–Fagaras–Bucharest = 99 + 211 = 310  Sibiu–Rimnicu Vilcea–Pitesti–Bucharest = 80 + 97 + 101 = 278

Greedy Search  Strategy: expand a node that seems closest to a goal state, according to h  Problem 1: it chooses a node even if it’s at the end of a very long and winding road  Problem 2: it takes h literally even if it’s completely wrong … b

Video of Demo Contours Greedy (Empty)

Video of Demo Contours Greedy (Pacman Small Maze)

A* Search

UCS vs. Greedy vs. A*

Combining UCS and Greedy  Uniform-cost orders by path cost, or backward cost g(n)  Greedy orders by goal proximity, or forward cost h(n)  A* Search orders by the sum: f(n) = g(n) + h(n) (worked example due to Teg Grenager)
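A sketch of best-first search ordered by f(n) = g(n) + h(n); passing h = lambda s: 0 makes it behave like UCS, and ordering by h alone would give greedy search. The weighted-graph format and heuristic signature are assumptions, as in the earlier sketches.

import heapq

def a_star_search(graph, start, goal, h):
    frontier = [(h(start), 0, start, [start])]         # (f, g, state, path)
    best_g = {start: 0}
    while frontier:
        f, g, state, path = heapq.heappop(frontier)    # lowest f = g + h first
        if state == goal:
            return g, path
        for neighbor, cost in graph.get(state, []):
            new_g = g + cost
            if new_g < best_g.get(neighbor, float('inf')):
                best_g[neighbor] = new_g
                new_f = new_g + h(neighbor)
                heapq.heappush(frontier, (new_f, new_g, neighbor, path + [neighbor]))
    return None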

Is A* Optimal?  What went wrong?  Actual bad goal cost < estimated good goal cost  We need estimates to be less than actual costs!

Admissible Heuristics

Idea: Admissibility  Inadmissible (pessimistic) heuristics break optimality by trapping good plans on the frontier.  Admissible (optimistic) heuristics slow down bad plans but never outweigh true costs.

Admissible Heuristics  A heuristic h is admissible (optimistic) if 0 ≤ h(n) ≤ h*(n), where h*(n) is the true cost to a nearest goal  Examples:  Coming up with admissible heuristics is most of what’s involved in using A* in practice.

Optimality of A* Tree Search

Assume:  A is an optimal goal node  B is a suboptimal goal node  h is admissible  Claim:  A will be chosen for expansion before B

Optimality of A* Tree Search: Blocking Proof:  Imagine B is on the frontier  Some ancestor n of A is on the frontier, too (maybe A!)  Claim: n will be expanded before B  1. f(n) is less or equal to f(A): f(n) = g(n) + h(n) (definition of f-cost), g(n) + h(n) ≤ g(A) (admissibility of h), and g(A) = g(A) + h(A) = f(A) (h = 0 at a goal)

Optimality of A* Tree Search: Blocking Proof:  Imagine B is on the frontier  Some ancestor n of A is on the frontier, too (maybe A!)  Claim: n will be expanded before B  1. f(n) is less or equal to f(A)  2. f(A) is less than f(B): g(A) < g(B) (B is suboptimal), so f(A) = g(A) < g(B) = f(B) (h = 0 at a goal)

Optimality of A* Tree Search: Blocking Proof:  Imagine B is on the frontier  Some ancestor n of A is on the frontier, too (maybe A!)  Claim: n will be expanded before B  1. f(n) is less or equal to f(A)  2. f(A) is less than f(B)  3. n expands before B  All ancestors of A expand before B  A expands before B  A* search is optimal

Properties of A*

Uniform-Cost vs. A*

UCS vs A* Contours  Uniform-cost expands equally in all “directions”  A* expands mainly toward the goal, but does hedge its bets to ensure optimality

Video of Demo Contours (Empty) -- UCS

Video of Demo Contours (Empty) -- Greedy

Video of Demo Contours (Empty) – A*

Video of Demo Contours (Pacman Small Maze) – A*

Comparison: Greedy vs. Uniform Cost vs. A*

A* Applications  Video games  Pathing / routing problems  Resource planning problems  Robot motion planning  Language analysis  Machine translation  Speech recognition  … [Demo: UCS / A* pacman tiny maze (L3D6,L3D7)] [Demo: guess algorithm Empty Shallow/Deep (L3D8)]

Video of Demo Empty Water Shallow/Deep – Guess Algorithm

Creating Heuristics

Creating Admissible Heuristics  Most of the work in solving hard search problems optimally is in coming up with admissible heuristics  Often, admissible heuristics are solutions to relaxed problems, where new actions are available

Example: 8 Puzzle  What are the states?  How many states?  What are the actions?  How many actions from the start state?  What should the step costs be?

8 Puzzle I  Heuristic: Number of tiles misplaced  Why is it admissible?  h(start) = 8  This is a relaxed-problem heuristic

Average nodes expanded when the optimal path has…
         …4 steps    …8 steps    …12 steps
UCS      112         6,300       3.6 x 10^6
TILES    13          39          227

Statistics from Andrew Moore

8 Puzzle II  What if we had an easier 8-puzzle where any tile could slide in any direction at any time, ignoring other tiles?  Total Manhattan distance  Why is it admissible?  h(start) = … = 18

Average nodes expanded when the optimal path has…
              …4 steps    …8 steps    …12 steps
TILES         13          39          227
MANHATTAN     12          25          73
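Both heuristics are easy to state in code; in the sketch below a state is a tuple of 9 entries read row by row with 0 for the blank, and the goal puts the blank first (this encoding is our assumption, not the slides'):

GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)

def misplaced_tiles(state, goal=GOAL):
    # Number of non-blank tiles that are not in their goal position.
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def total_manhattan(state, goal=GOAL):
    # Sum over tiles of row distance + column distance to the goal position.
    where = {tile: divmod(i, 3) for i, tile in enumerate(goal)}
    total = 0
    for i, tile in enumerate(state):
        if tile != 0:
            row, col = divmod(i, 3)
            goal_row, goal_col = where[tile]
            total += abs(row - goal_row) + abs(col - goal_col)
    return total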

Combining heuristics  Dominance: h_a ≥ h_c if ∀n: h_a(n) ≥ h_c(n)  Roughly speaking, larger is better as long as both are admissible  The zero heuristic is pretty bad (what does A* do with h = 0?)  The exact heuristic is pretty good, but usually too expensive!  What if we have two heuristics, neither dominates the other?  Form a new heuristic by taking the max of both: h(n) = max(h_a(n), h_b(n))  Max of admissible heuristics is admissible and dominates both!  Example: number of knight’s moves to get from A to B  h1 = (Manhattan distance)/3 (rounded up to correct parity)  h2 = (Euclidean distance)/√5 (rounded up to correct parity)  h3 = (max x or y shift)/2 (rounded up to correct parity)
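Taking a pointwise max is a one-liner; the sketch below assumes each heuristic maps a state to a number (for example the two 8-puzzle heuristics sketched above):

def max_heuristic(*heuristics):
    # Pointwise max of admissible heuristics: still admissible, dominates each one.
    def h(state):
        return max(h_i(state) for h_i in heuristics)
    return h

# e.g. h = max_heuristic(misplaced_tiles, total_manhattan)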

Optimality of A* Graph Search This part is a bit fiddly, sorry about that

A* Graph Search Gone Wrong?  (Figure: a state space graph and its search tree in which state C is reached along two paths with different costs.)  Simple check against explored set blocks C  Fancy check allows new C if cheaper than old, but requires recalculating C’s descendants

Consistency of Heuristics  Main idea: estimated heuristic costs ≤ actual costs  Admissibility: heuristic cost ≤ actual cost to goal: h(A) ≤ actual cost from A to G  Consistency: heuristic “arc” cost ≤ actual cost for each arc: h(A) – h(C) ≤ cost(A to C)  Consequences of consistency:  The f value along a path never decreases: h(A) ≤ cost(A to C) + h(C)  A* graph search is optimal
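The per-arc condition is easy to test mechanically; a small sketch for a weighted graph in the assumed graph[state] = [(neighbor, step_cost), ...] format:

def is_consistent(graph, h):
    # Check h(a) - h(b) <= cost(a, b) for every arc a -> b.
    for a, edges in graph.items():
        for b, cost in edges:
            if h(a) - h(b) > cost + 1e-9:      # tolerance for float round-off
                return False
    return True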

Optimality of A* Graph Search  Sketch: consider what A* does with a consistent heuristic:  Fact 1: In tree search, A* expands nodes in increasing total f value (f-contours)  Fact 2: For every state s, nodes that reach s optimally are expanded before nodes that reach s suboptimally  Result: A* graph search is optimal

Optimality  Tree search:  A* is optimal if heuristic is admissible  UCS is a special case (h = 0)  Graph search:  A* optimal if heuristic is consistent  UCS optimal (h = 0 is consistent)  Consistency implies admissibility  In general, most natural admissible heuristics tend to be consistent, especially if from relaxed problems

A*: Summary

 A* uses both backward costs and (estimates of) forward costs  A* is optimal with admissible / consistent heuristics  Heuristic design is key: often use relaxed problems