Artificial Intelligence 15-381: Heuristic Search Methods. Jaime Carbonell, 30 August 2001.

Artificial Intelligence: Heuristic Search Methods. Jaime Carbonell, 30 August 2001.

Today's Agenda: search complexity analysis; heuristics and evaluation functions; heuristic search methods; admissibility and A* search; B* search (time permitting); macro-operators in search (time permitting).

Complexity of Search: Definitions
Let depth d = length(min(s-Path(S_0, S_G))) - 1
Let branching factor b = Ave(|Succ(S_i)|)
Let backward branching factor B = Ave(|Succ^-1(S_i)|); usually b = B, but not always
Let C(method, b, d) = the max number of states S_i visited; then
C(method, b, d) = worst-case time complexity, and
C(method, b, d) >= worst-case space complexity

Complexity of Search: Breadth-First Search
C(BFS, b, d) = Σ_{i=0..d} b^i = O(b^d)
C(BBFS, b, d) = Σ_{i=0..d} B^i = O(B^d) (backward BFS)
C(BiBFS, b, d) = 2 Σ_{i=0..d/2} b^i = O(b^{d/2}), if b = B (bidirectional BFS)
Suppose we have k evenly-spaced islands on s-Path(S_0, S_G); then:
C(IBFS, b, d) = (k+1) Σ_{i=0..d/(k+1)} b^i = O(b^{d/(k+1)})
C(BiIBFS, b, d) = 2(k+1) Σ_{i=0..d/(2k+2)} b^i = O(b^{d/(2k+2)})
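To make these growth rates concrete, here is a small sketch (not from the slides; the values b = 10, d = 6, k = 1 are illustrative) that evaluates the node-count formulas above:

```python
# Illustration: worst-case node counts for BFS variants, using the
# geometric-sum formulas above with b = B = 10, d = 6, k = 1.

def geometric_sum(b, depth):
    """Sum of b^i for i = 0..depth: nodes visited by BFS to that depth."""
    return sum(b ** i for i in range(depth + 1))

b, d, k = 10, 6, 1
bfs = geometric_sum(b, d)                          # O(b^d)
bi_bfs = 2 * geometric_sum(b, d // 2)              # O(b^(d/2))
island = (k + 1) * geometric_sum(b, d // (k + 1))  # O(b^(d/(k+1)))

print(bfs, bi_bfs, island)  # 1111111 2222 2222
```

With one island (k = 1) the island-driven search matches bidirectional search, as the exponents d/2 and d/(k+1) coincide.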

Heuristics in AI Search: Definition
A heuristic is an operationally effective nugget of information on how to direct search in a problem space. Heuristics are only approximately correct.

Common Types of Heuristics
- "If-then" rules for state-transition selection
- Macro-operator formation [discussed later]
- Problem decomposition [e.g., hypothesizing islands on the search path]
- Estimating the distance between S_curr and S_G (e.g., Manhattan, Euclidean, or topological distance)
- A value function on each Succ(S_curr): cost(path(S_0, S_curr)) + E[cost(path(S_curr, S_G))]
- Utility: value(S) - cost(S)
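As a small illustration of the distance-estimate heuristics mentioned above (the function names and grid setting are ours, not from the slides):

```python
# Two common distance-estimate heuristics for grid search.
import math

def manhattan(a, b):
    """Manhattan distance: admissible on a 4-connected grid."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def euclidean(a, b):
    """Euclidean (straight-line) distance: admissible for any motion."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

print(manhattan((0, 0), (3, 4)))  # 7
print(euclidean((0, 0), (3, 4)))  # 5.0
```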

Heuristic Search: General Form
Value function: E(o-Path(S_0, S_curr), S_curr, S_G). Since S_0 and S_G are constant, we abbreviate this to E(S_curr).
General form:
1. Quit if done (with success or failure); else:
2. s-Queue := F(Succ(S_curr), s-Queue)
3. S_next := Argmax[E(s-Queue)]
4. Go to 1, with S_curr := S_next
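The four-step loop above can be sketched directly in code. This is a minimal sketch, assuming the caller supplies the evaluation function E (higher is better), a successor function, and the queue-merging function F; the names mirror the slide, not any standard library:

```python
# Generic heuristic-search loop: step 2 merges successors into the queue
# via F, step 3 picks the Argmax of E, step 1 checks termination.
def heuristic_search(start, is_goal, succ, E, F):
    queue = [start]
    while queue:
        queue.sort(key=E, reverse=True)   # S_next := Argmax[E(s-Queue)]
        current = queue.pop(0)
        if is_goal(current):
            return current                # quit with success
        queue = F(succ(current), queue)   # s-Queue := F(Succ(S_curr), s-Queue)
    return None                           # quit with failure
```

Different choices of F instantiate the different methods that follow: keeping only the new successors gives hill-climbing, while appending them to the old queue gives best-first search.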

Heuristic Search: Steepest-Ascent Hill-Climbing
F(Succ(S_curr), s-Queue) = Succ(S_curr)
No history is stored in s-Queue, hence space complexity = max(b) [= O(1) if b is bounded]. This is the quintessential greedy search.

Max-Gradient Search
An "informed" depth-first search: S_next := Argmax[E(Succ(S_curr))], but if Succ(S_next) is null, backtrack. Alternative: backtrack whenever E(S_next) < E(S_curr).
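A minimal steepest-ascent hill-climbing sketch, following the slide: keep only the successors of the current state (no history), always move to the best-scoring one, and stop at a local maximum. E and succ are assumed caller-supplied, as above:

```python
def hill_climb(start, succ, E):
    current = start
    while True:
        successors = succ(current)
        if not successors:
            return current
        best = max(successors, key=E)   # steepest ascent
        if E(best) <= E(current):       # no uphill move: local maximum
            return current
        current = best
```

Note the O(1) space claim from the slide: only the current state and its immediate successors are ever held.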

Beyond Greedy Search: Best-First Search
BestFS(S_curr, S_G, s-Queue):
IF S_curr = S_G, return SUCCESS
FOR each s_i in Succ(S_curr), Insertion-sort(s_i, s-Queue) ordered by E(s_i)
IF s-Queue = Null, return FAILURE
ELSE return BestFS(FIRST(s-Queue), S_G, TAIL(s-Queue))

Beyond Greedy Search: Best-First Search (cont.)
F(Succ(S_curr), s-Queue) = Sort(Append(Succ(S_curr), Tail(s-Queue)), E(s_i))
Full-breadth search with "ragged" fringe expansion. Does BestFS guarantee optimality?
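The BestFS recursion above can be sketched iteratively with a priority queue. This is an illustrative sketch, assuming a heuristic h that scores states (lower is better, i.e., the negation of E) and a caller-supplied successor function:

```python
# Best-first search: the whole fringe is kept sorted, and the single
# best node is expanded next ("ragged" fringe expansion).
import heapq

def best_first_search(start, is_goal, succ, h):
    frontier = [(h(start), start)]
    visited = set()
    while frontier:
        _, current = heapq.heappop(frontier)  # best-scoring node first
        if is_goal(current):
            return current
        if current in visited:
            continue
        visited.add(current)
        for s in succ(current):
            heapq.heappush(frontier, (h(s), s))
    return None
```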

Beyond Greedy Search: Beam Search
A best-few variant of BestFS: the fringe is truncated to a fixed beam-width parameter, giving uniform fringe expansion. Does beam search guarantee optimality?
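A minimal beam-search sketch of the idea above: like best-first search, but after each round of expansion the frontier is truncated to the best `beam_width` nodes (E and succ are assumed caller-supplied, as before):

```python
def beam_search(start, is_goal, succ, E, beam_width=2):
    frontier = [start]
    while frontier:
        next_frontier = []
        for state in frontier:
            if is_goal(state):
                return state
            next_frontier.extend(succ(state))
        # keep only the best few nodes (highest E): the "beam"
        frontier = sorted(next_frontier, key=E, reverse=True)[:beam_width]
    return None
```

The truncation is exactly why beam search, like greedy search, does not guarantee optimality: the node on the optimal path may be pruned from the beam.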

A* Search: Cost Function Definitions
Let g(S_curr) = actual cost to reach S_curr from S_0
Let h(S_curr) = estimated cost from S_curr to S_G
Let f(S_curr) = g(S_curr) + h(S_curr)

A* Search: Definitions
Optimality: a solution is optimal if it conforms to the minimal-cost path between S_0 and S_G. If operator costs are uniform, then the optimal solution is the shortest path.
Admissibility: a heuristic is admissible with respect to a search method if it guarantees finding the optimal solution first, even when its value is only an estimate.

A* Search Preliminaries: Admissible Heuristics for BestFS
- "Always expand the node with min(g(S_curr)) first."
- If a solution is found, keep expanding any S_i in s-Queue with g(S_i) < g(S_G).
- Or: find a solution any which way, then BestFS over the intermediate states S_i of that solution:
1. If g(S¹_curr) >= g(S_G) of the previous solution, quit.
2. Else if g(S¹_G) < g(S_G), set Sol := Sol¹ and redo (1).

A*: Better Admissible Heuristics
Observations: admissible heuristics based only on look-back (e.g., on g(S)) can lead to massive inefficiency. Can we do better? Can we also look forward (e.g., beyond g(S_curr))? Yes, we can.

A*: Better Admissible Heuristics
The A* criterion: if h(S_curr) always equals or underestimates the true remaining cost, then f(S_curr) is admissible with respect to best-first search.
A* search = BestFS with an admissible f = g + h under the admissibility constraint above.
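A compact A* sketch of the f = g + h idea, assuming a weighted graph given as a dict mapping each state to (neighbor, step_cost) pairs and an admissible heuristic h (these representations are ours, not the slides'):

```python
import heapq

def a_star(start, goal, graph, h):
    # frontier entries: (f = g + h, g, state, path so far)
    frontier = [(h(start), 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        f, g, state, path = heapq.heappop(frontier)  # lowest f first
        if state == goal:
            return path, g
        for nbr, cost in graph.get(state, []):
            g2 = g + cost
            if g2 < best_g.get(nbr, float("inf")):   # found a cheaper path
                best_g[nbr] = g2
                heapq.heappush(frontier, (g2 + h(nbr), g2, nbr, path + [nbr]))
    return None, float("inf")
```

With h = 0 everywhere (trivially admissible) this degenerates to the min-g strategy from the preliminaries slide, i.e., uniform-cost search.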

A* Optimality Proof: Goal and Path
Let S_G be the optimal goal state, and s-Path(S_0, S_G) the optimal solution. Consider an A* search tree rooted at S_0 with a second goal state S_G² on the fringe. We must prove that f(S_G²) >= f(S_G) and that g(path(S_0, S_G)) is minimal (optimal). The text proves optimality by contradiction.

A* Optimality Proof: A Simpler Argument
Assume s-Queue is sorted by f. Pick any suboptimal goal S_G²: g(S_G²) > g(S_G). Since h(S_G²) = h(S_G) = 0 at goal states, f(S_G²) > f(S_G). Because s-Queue is sorted by f, S_G is selected before S_G².

B* Search: Ideas
Admissible heuristics for mono- and bi-polar search; "eliminates" the horizon problem in game trees [more later].
Definitions:
Let Best(S) = an always-optimistic evaluation function
Let Worst(S) = an always-pessimistic evaluation function
Hence: Worst(S) < True-eval(S) < Best(S)

Basic B* Search
B*(S) is defined as:
If there is an S_i in Succ(S_curr) such that W(S_i) > B(S_j) for all other S_j in Succ(S_curr), then select S_i;
Else ProveBest(Succ(S_curr)) OR DisproveRest(Succ(S_curr)).
Difficulties in B*:
- Guaranteeing eternal pessimism in W(S) (eternal optimism is somewhat easier)
- Deciding when to switch between ProveBest and DisproveRest
- Usually W(S_i) < B(S_j), so the selection condition does not hold immediately
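The B* selection test above can be sketched as follows (W and B stand for the pessimistic and optimistic evaluators from the slide; the function name and the fallback of returning None to the caller are our illustrative choices):

```python
# Select S_i whose pessimistic bound W beats every other successor's
# optimistic bound B; otherwise the caller must ProveBest/DisproveRest.
def b_star_select(successors, W, B):
    for si in successors:
        if all(W(si) > B(sj) for sj in successors if sj is not si):
            return si
    return None  # no successor is provably best yet
```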

Macro-Operators in Search: Linear Macros
A cached sequence of instantiated operators:
If S_0 --op_i--> S_1 --op_j--> S_2, then S_0 --op_i,j--> S_2.
Alternative notation: if op_j(op_i(S_0)) = S_2, then op_i,j(S_0) = S_2.
Macros can have any length, e.g., op_i,j,k,l,m,n.
Key question: do linear macros reduce search?
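The composition notation op_i,j(S) = op_j(op_i(S)) can be illustrated directly (the two example operators on an integer "state" are made up for the sketch):

```python
# A linear macro: compose two operators into one cached operator.
def make_macro(op_i, op_j):
    """op_i,j(S) = op_j(op_i(S)): apply op_i, then op_j, as one step."""
    return lambda state: op_j(op_i(state))

# hypothetical operators on an integer state
double = lambda s: s * 2
inc = lambda s: s + 1

double_then_inc = make_macro(double, inc)
print(double_then_inc(5))  # inc(double(5)) = 11
```

Each macro shortens solution paths (fewer steps to the goal) but enlarges the effective branching factor, which is exactly the trade-off behind the slide's key question.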

Macro-Operators in Search: Disjunctive and Iterative Macros
[Slide diagrams: a disjunctive macro, which branches among alternative operator sequences (op_1 ... op_7), and an iterative macro, which repeats op_i,j while a condition Cond(s-Hist, S_G) holds before continuing with op_k,l,m,n or op_o,p,q.]