ITCS 3153 Artificial Intelligence, Lecture 5: Informed Searches



We are informed (in some way) about future states and future paths. We use this information to make better decisions about which of many potential paths to pursue.

A* Search
Combine two costs: f(n) = g(n) + h(n)
– g(n) = cost to get to n from the root
– h(n) = estimated cost to get to the goal from n
Admissible heuristic: h(n) is optimistic, so f(n) never overestimates the cost of a solution through n.
Expand the node with minimum f(n).
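The expand-minimum-f(n) loop can be sketched in a few lines of Python (a minimal illustration; the toy graph, heuristic values, and function names are invented for the example, not taken from the lecture):

```python
import heapq

def a_star(start, goal, neighbors, h):
    """Expand the frontier node with minimum f(n) = g(n) + h(n)."""
    frontier = [(h(start), 0, start, [start])]   # (f, g, state, path)
    best_g = {start: 0}                          # cheapest g found per state
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return path, g
        if g > best_g.get(state, float("inf")):
            continue                             # stale entry; state improved since
        for succ, cost in neighbors(state):
            g2 = g + cost
            if g2 < best_g.get(succ, float("inf")):
                best_g[succ] = g2
                heapq.heappush(frontier, (g2 + h(succ), g2, succ, path + [succ]))
    return None, float("inf")

# Toy graph: the optimal route is S -> A -> B -> G with cost 6
graph = {'S': [('A', 1), ('B', 4)], 'A': [('B', 2), ('G', 12)],
         'B': [('G', 3)], 'G': []}
h = {'S': 3, 'A': 2, 'B': 3, 'G': 0}             # admissible estimates
path, cost = a_star('S', 'G', lambda s: graph[s], lambda s: h[s])
```

Note the `best_g` check: it lets the search re-open a state if a cheaper path to it is discovered later, which matters for the repeated-state discussion on the next slide.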

Repeated States and Graph-Search
Graph-Search always ignores all but the first occurrence of a state during search, so a lower-cost path may be tossed.
– So, don't throw away subsequent occurrences
– Or, ensure that the optimal path to any repeated state is always the first one followed
The latter requires an additional constraint on the heuristic: consistency.

Consistent (monotonic) h(n)
The heuristic function must be monotonic:
– For every node n and successor n′ obtained with action a, the estimated cost of reaching the goal from n is no greater than the cost of getting to n′ plus the estimated cost of reaching the goal from n′:
– h(n) ≤ c(n, a, n′) + h(n′)
This implies that f(n) is non-decreasing along any path.
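The inequality can be verified mechanically over every edge of a graph. A toy sketch (the edge list and h-values below are invented for illustration):

```python
def is_consistent(h, edges):
    """Check the triangle inequality h(n) <= c(n, a, n') + h(n')
    for every edge (n, n', cost) in the graph."""
    return all(h[n] <= cost + h[n2] for n, n2, cost in edges)

edges = [('S', 'A', 1), ('S', 'B', 4), ('A', 'B', 2),
         ('A', 'G', 12), ('B', 'G', 3)]
ok = is_consistent({'S': 3, 'A': 2, 'B': 3, 'G': 0}, edges)
# h(S) = 4 violates h(S) <= c(S, A) + h(A) = 1 + 2 = 3
bad = is_consistent({'S': 4, 'A': 2, 'B': 3, 'G': 0}, edges)
```

The second heuristic is still admissible (it never overestimates the true cost to G), which shows that admissibility does not imply consistency.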

Examples of consistent h(n)
h(n) ≤ c(n, a, n′) + h(n′)
Recall h(n) is admissible:
– "The quickest you can get there from here is 10 minutes." It may take more than 10 minutes, but not fewer.
After taking an action and learning its cost:
– We can learn: "It took you two minutes to get here and you still have nine minutes to go."
– We cannot learn: "It took you two minutes to get here and you have seven minutes to go."

Proof of monotonicity of f(n)
If h(n) is consistent (monotonic), then f(n) along any path is non-decreasing.
Suppose n′ is a successor of n:
– g(n′) = g(n) + c(n, a, n′) for some action a
– f(n′) = g(n′) + h(n′) = g(n) + c(n, a, n′) + h(n′) ≥ g(n) + h(n) = f(n)
The inequality step uses monotonicity: h(n) ≤ c(n, a, n′) + h(n′).

Contours
Because f(n) is non-decreasing, we can draw contours of equal f-cost in the state space.
If we know C* (the optimal solution cost), we only need to explore contours with f-cost less than C*.

Properties of A*
– A* expands all nodes n with f(n) < C*
– A* expands some (at least one) of the nodes on the C* contour before finding the goal
– A* expands no nodes with f(n) > C*; these unexpanded nodes can be pruned

A* is Optimally Efficient
Compared to other optimal algorithms that search from the root and use the same heuristic, no other algorithm is guaranteed to expand fewer nodes than A*.

Pros and Cons of A*
A* is optimal and optimally efficient.
A* is still slow and bulky (space kills first):
– The number of nodes grows exponentially with the distance to the goal
– This is actually a function of the heuristic, but all heuristics have errors
– A* must search all nodes within the goal contour
– Finding suboptimal goals is sometimes the only feasible solution
– Sometimes, better heuristics are non-admissible

Memory-bounded Heuristic Search
Try to reduce memory needs while still taking advantage of the heuristic to improve performance:
– Iterative-deepening A* (IDA*)
– Recursive best-first search (RBFS)
– SMA*

Iterative Deepening A*
Remember, as an uninformed search, iterative deepening was a depth-first search where the maximum depth was iteratively increased.
As an informed search, we again perform depth-first search, but expand only nodes with f-cost at most the current bound; the next bound is the smallest f-cost that exceeded the bound on the last iteration.
We don't need to store an ordered queue of best nodes.
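The bounded depth-first loop can be sketched as follows (a minimal sketch reusing the same invented toy graph as before; names are illustrative):

```python
def ida_star(start, goal, neighbors, h):
    """Depth-first search bounded by f-cost; the next iteration's bound is
    the smallest f-cost that overflowed the current one. Memory is linear
    in the path length -- no ordered queue of open nodes is stored."""
    path = [start]

    def dfs(g, bound):
        node = path[-1]
        f = g + h(node)
        if f > bound:
            return f                  # report the overflowing f-cost
        if node == goal:
            return True
        smallest = float("inf")
        for succ, cost in neighbors(node):
            if succ in path:          # avoid cycles along the current path
                continue
            path.append(succ)
            t = dfs(g + cost, bound)
            if t is True:
                return True
            smallest = min(smallest, t)
            path.pop()
        return smallest

    bound = h(start)
    while True:
        t = dfs(0, bound)
        if t is True:
            return list(path)
        if t == float("inf"):
            return None               # goal unreachable
        bound = t                     # raise the f-cost bound

graph = {'S': [('A', 1), ('B', 4)], 'A': [('B', 2), ('G', 12)],
         'B': [('G', 3)], 'G': []}
h = {'S': 3, 'A': 2, 'B': 3, 'G': 0}
route = ida_star('S', 'G', lambda s: graph[s], lambda s: h[s])
```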

Recursive best-first search
Depth-first search combined with the best alternative:
– Keep track of alternative options along the fringe
– As soon as the current depth-first exploration becomes more expensive than the best fringe option, back up to the fringe, updating node costs along the way

Recursive best-first search
– Each box contains the f-value of the best alternative path available from any ancestor
– First, explore the path to Pitesti
– Backtrack to Fagaras and update Fagaras
– Backtrack to Pitesti and update Pitesti
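The back-up-and-update mechanism can be sketched in Python (a sketch in the spirit of the RBFS pseudocode in AIMA; the toy graph and names are invented, and path-based cycle checking is a simplification):

```python
import math

def rbfs_search(start, goal, neighbors, h):
    """Linear-memory best-first search: recurse on the best successor with
    the second-best f-value as the backup limit; on retreat, store the
    revised f-value so the node can be re-expanded later if needed."""

    def rbfs(state, path, g, f, f_limit):
        if state == goal:
            return path, f
        succs = []
        for s, cost in neighbors(state):
            if s in path:                              # skip cycles
                continue
            g2 = g + cost
            succs.append([max(g2 + h(s), f), g2, s])   # inherit parent's backed-up f
        if not succs:
            return None, math.inf
        while True:
            succs.sort()
            best_f, best_g, best = succs[0]
            if best_f > f_limit:
                return None, best_f                    # back up, report revised f
            alternative = succs[1][0] if len(succs) > 1 else math.inf
            result, succs[0][0] = rbfs(best, path + [best], best_g, best_f,
                                       min(f_limit, alternative))
            if result is not None:
                return result, succs[0][0]

    solution, _ = rbfs(start, [start], 0, h(start), math.inf)
    return solution

graph = {'S': [('A', 1), ('B', 4)], 'A': [('B', 2), ('G', 12)],
         'B': [('G', 3)], 'G': []}
h = {'S': 3, 'A': 2, 'B': 3, 'G': 0}
route = rbfs_search('S', 'G', lambda s: graph[s], lambda s: h[s])
```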

Meta-foo
What does "meta" mean in AI? Frequently it means stepping back a level from foo:
– Metareasoning = reasoning about reasoning
These informed search algorithms have pros and cons regarding how they choose to explore new levels:
– A metalevel learning algorithm may learn how to combine techniques and parameterize search

Heuristic Functions
8-puzzle problem:
– Average solution depth = 22
– Branching factor ≈ 3, so an exhaustive tree search would examine on the order of 3^22 states
– Yet there are only about 170,000 distinct reachable states; repeated states abound

Heuristics
The number of misplaced tiles:
– Admissible, because at least n moves are required to solve n misplaced tiles
The distance from each tile to its goal position:
– No diagonals, so use Manhattan distance, as if walking around rectilinear city blocks
– Also admissible
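Both heuristics are a few lines each (a sketch; states are flat tuples with 0 as the blank, an encoding chosen here for illustration; the sample start/goal pair is the standard 8-puzzle example from AIMA):

```python
def misplaced(state, goal):
    """h1: count of tiles not on their goal square (blank excluded).
    Admissible: every misplaced tile needs at least one move."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def manhattan(state, goal, width=3):
    """h2: sum of city-block distances of each tile from its goal square.
    Admissible: tiles slide one rectilinear step at a time."""
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        j = goal.index(tile)
        total += abs(i // width - j // width) + abs(i % width - j % width)
    return total

# Start state:        Goal state:
# 7 2 4               . 1 2
# 5 . 6               3 4 5
# 8 3 1               6 7 8
start = (7, 2, 4, 5, 0, 6, 8, 3, 1)
goal = (0, 1, 2, 3, 4, 5, 6, 7, 8)
h1, h2 = misplaced(start, goal), manhattan(start, goal)
```

For this state h1 = 8 and h2 = 18; h2 dominates h1 (it is never smaller), so A* with h2 expands no more nodes than A* with h1.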

Compare these two heuristics
Effective branching factor, b*:
– If A* explores N nodes to find the goal at depth d, then b* is the branching factor such that a uniform tree of depth d contains N + 1 nodes:
– N + 1 = 1 + b* + (b*)^2 + … + (b*)^d
– b* close to 1 is ideal
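Since the polynomial on the right grows monotonically in b*, the equation can be solved numerically by bisection (a small sketch; function name and tolerance are mine):

```python
def effective_branching_factor(n_nodes, depth, tol=1e-6):
    """Solve N + 1 = 1 + b + b^2 + ... + b^d for b by bisection.
    The total node count is monotonic in b, so bisection converges."""
    def total(b):
        return sum(b ** i for i in range(depth + 1))
    lo, hi = 1.0, float(n_nodes + 1)   # b* lies between 1 and N + 1
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if total(mid) < n_nodes + 1:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# A goal at depth 2 found after exploring 6 nodes: 1 + b + b^2 = 7, so b* = 2
bstar = effective_branching_factor(6, 2)
```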

Compare these two heuristics

Puzzle
You are next to the punch bowl at a party. You happen to have two glasses: one holds five units (cups, cubic centimeters, whatever) and the other holds three. You must get exactly four units of punch (doctor's orders, perhaps) by filling glasses and dumping back into the bowl. How can you do that? Note: our aim is to have four units in the five-unit glass.
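As a search-formulation exercise, this puzzle yields to plain breadth-first search over glass contents (a sketch; the (a, b) state encoding and names are mine):

```python
from collections import deque

def pour_puzzle(cap_a=5, cap_b=3, target=4):
    """Breadth-first search over (a, b) = contents of the two glasses.
    Moves: fill a glass from the bowl, dump it back, or pour one glass
    into the other. BFS guarantees the shortest pouring sequence."""
    start = (0, 0)
    parent = {start: None}
    queue = deque([start])
    while queue:
        a, b = queue.popleft()
        if a == target:
            seq, s = [], (a, b)
            while s is not None:               # walk parents back to the start
                seq.append(s)
                s = parent[s]
            return seq[::-1]
        a_to_b = min(a, cap_b - b)             # how much a can pour into b
        b_to_a = min(b, cap_a - a)
        for nxt in [(cap_a, b), (a, cap_b),            # fill from the bowl
                    (0, b), (a, 0),                    # dump back into the bowl
                    (a - a_to_b, b + a_to_b),          # pour a into b
                    (a + b_to_a, b - b_to_a)]:         # pour b into a
            if nxt not in parent:
                parent[nxt] = (a, b)
                queue.append(nxt)
    return None

steps = pour_puzzle()
```

The shortest plan takes six moves: fill the five, pour it into the three, empty the three, pour the leftover two into the three, refill the five, then top off the three, leaving four units in the five-unit glass.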