Heuristic Search Planners

Slide 2: Planning as heuristic search (USC INFORMATION SCIENCES INSTITUTE)
- Use standard search techniques, e.g. A*, best-first, hill-climbing, etc.
- Attempt to extract a heuristic state evaluator automatically from the STRIPS encoding of the domain.
- Here, generate a relaxed problem by assuming action preconditions are independent.

Slide 3: Recap: A* search
- Best-first search using the node evaluation f(n) = g(n) + h(n), where g(n) is the accumulated cost and h(n) is an estimate of the future cost.
- For A*, h(.) should never overestimate the cost. In that case the solution is optimal, and h is called an admissible heuristic.

Slide 4: Derive a cost estimate from a relaxed planning problem
- Ignore the delete lists of actions.
- BUT: the relaxed problem is still NP-hard, so approximate.
- For individual propositions p:
  d(s, p) = 0 if p is true in s
  d(s, p) = 1 + min over actions a that add p of d(s, pre(a)), otherwise
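The recursion above can be computed as a simple fixpoint over the propositions. The sketch below is illustrative, not HSP's actual code: actions are assumed to be (precondition-set, add-set) pairs (deletes are already ignored by the relaxation), and the additive combination for precondition sets is used.

```python
def prop_costs(state, actions):
    """Fixpoint computation of d(s, p) for every proposition p in the
    delete-free relaxation.  `actions` is a list of (pre, add) pairs of
    proposition sets.  Precondition sets are costed additively."""
    INF = float("inf")
    d = {p: 0 for p in state}              # d(s, p) = 0 if p is true in s
    changed = True
    while changed:
        changed = False
        for pre, add in actions:
            # additive cost of the action's precondition set
            cost_pre = sum(d.get(p, INF) for p in pre)
            for p in add:
                if 1 + cost_pre < d.get(p, INF):
                    d[p] = 1 + cost_pre   # d(s, p) = 1 + min d(s, pre(a))
                    changed = True
    return d
```

Propositions never added by any reachable action simply stay out of the table, i.e. their cost is infinite.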

Slide 5: Cost of a conjunction
- How do we compute d(s, pre(a)) or d(s, G)? Different options:
  - Additive: d(s, P) = sum of d(s, p) over p in P
  - Max: d(s, P) = max of d(s, p) over p in P
- Then h(s) = d(s, G).
- d(., .) can be computed in polynomial time.
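The two combinations can be sketched as below, assuming a table `d` mapping each reachable proposition to its estimated cost d(s, p) (propositions absent from the table are treated as unreachable):

```python
INF = float("inf")

def h_add(d, props):
    """Additive combination: sum of the individual proposition costs."""
    return sum(d.get(p, INF) for p in props)

def h_max(d, props):
    """Max combination: cost of the most expensive single proposition."""
    return max((d.get(p, INF) for p in props), default=0)
```

Note h_max never exceeds h_add on the same goal set, which is exactly the trade-off the next two slides ask about.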

Slide 6: Admissibility and information
- Is h+ (the additive version) admissible?
- How about h-max?

Slide 7: Admissibility and information II
- If h+ is not admissible, why would we use it rather than h-max?

Slide 8: HSP algorithm overview
- Hill-climbing search, with restarts if the search plateaus for too long.
- Some ad hoc choices made for the planning competition.
- Hill-climbing search is not complete.

Slide 9: HSP2 overview
- Best-first search, using h+.
- Based on WA* (weighted A*): f(n) = g(n) + W * h(n).
- If W = 1, this is A* (with an admissible h). If W > 1, the search is somewhat greedy: it generally finds solutions faster, but they are not guaranteed optimal.
- In HSP2, W = 5.
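A generic best-first loop with the weighted evaluation can be sketched as follows. This is a minimal illustration, not HSP2's implementation: the state representation, successor function, and heuristic are placeholders supplied by the caller.

```python
import heapq

def wastar(start, goal_test, successors, h, W=5):
    """Weighted A*: best-first search on f(n) = g(n) + W * h(n).
    `successors(s)` yields (next_state, step_cost) pairs.
    With W = 1 and an admissible h this is plain A*; with W > 1
    (HSP2 uses W = 5) it trades optimality for speed."""
    frontier = [(W * h(start), 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        _, g, s, path = heapq.heappop(frontier)
        if goal_test(s):
            return path
        for t, cost in successors(s):
            g2 = g + cost
            if g2 < best_g.get(t, float("inf")):   # better path to t
                best_g[t] = g2
                heapq.heappush(frontier, (g2 + W * h(t), g2, t, path + [t]))
    return None                                    # no solution
```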

Slide 10: Experiments
- HSP does reasonably well compared with IPP (a Graphplan derivative) and Blackbox.

Slide 11: Regression search: motivation for HSPr
- HSP and HSP2 spend up to 80% of their time computing the evaluation function.
- They are slow to generate nodes compared to other heuristic search systems.
- Idea: search backwards from the goal. Since there is always a single start state s0, the cost estimates from s0 can be computed once and re-used throughout the search.
- Common wisdom: regression planning is good because the branching factor is much lower.

Slide 12: HSPr problem space
- States are sets of atoms (each corresponds to a set of states in the original space).
- The initial state of the regression search is the goal G.
- Goal states are those that are true in s0.
- Still use h+: h+(s) = sum over p in s of g(s0, p), with the atom costs g computed once, forward from s0.

Slide 13: Mutexes in HSPr
- Problem: many of the regressed goal states are 'impossible'. Prune them with mutexes.
- E.g. in blocksworld, (on(c,d), on(a,d), ...) is probably unreachable.

Slide 14: Mutexes in HSPr
- First definition: a set M of pairs R = {p, q} is a mutex set if (1) R is not true in s0, and (2) every op o that adds p deletes q.
- Sound, but too weak.

Slide 15: Mutexes in HSPr, take 2
- Better definition: a set M of pairs R = {p, q} is a mutex set if (1) R is not true in s0, and (2) for every op o that adds p, either o deletes q, or o does not add q and {r, q} is in M for some precondition r of o.
- The recursive definition allows for some interaction between the operators.

Slide 16: Computing mutex sets
- Start with some set of potential mutex pairs.
- Delete any pair that does not satisfy conditions (1) and (2) above.
- Repeat until no more pairs are deleted.
- Initial set? It could be all pairs, but that is usually too expensive.
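The prune-to-fixpoint procedure can be sketched as follows. The encoding is an illustrative assumption: actions are (pre, add, delete) triples of atom sets, and pairs are unordered frozensets, so condition (2) is checked in both directions.

```python
def pair_ok(pair, M, s0, actions):
    """Conditions (1) and (2) of the recursive mutex definition,
    checked for both orderings of the unordered pair {p, q}."""
    p, q = tuple(pair)
    if p in s0 and q in s0:                      # (1): pair true in s0
        return False
    for a, b in ((p, q), (q, p)):                # (2), both directions
        for pre, add, dele in actions:
            if a in add:
                if b in dele:
                    continue                     # o deletes the other atom
                if b not in add and any(frozenset((r, b)) in M for r in pre):
                    continue                     # recursive case via a precond
                return False
    return True

def mutex_fixpoint(candidates, s0, actions):
    """Delete candidates violating (1) or (2) until nothing changes."""
    M = {frozenset(c) for c in candidates}
    changed = True
    while changed:
        changed = False
        for pair in list(M):
            if not pair_ok(pair, M, s0, actions):
                M.discard(pair)
                changed = True
    return M
```

Because pruning one pair can invalidate the recursive justification of another, the outer loop must re-check all surviving pairs until a full pass deletes nothing.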

Slide 17: Initial set of potential mutexes
- Ma = { {p, q} | some action adds p and deletes q }
- Mb = { {r, q} | {p, q} is in Ma, and some action adds p and has r as a precondition }
- Initial set = Ma u Mb.
- The mutex set derived from Ma u Mb is called M*.

Slide 18: HSPr algorithm
- WA* search using h+ computed from s0, together with the mutex set M*.
- W = 5, as before.
- Prune states that contain pairs in M*.

Slide 19: Experiments comparing HSP2 and HSPr
- Sometimes HSPr does better, sometimes HSP2 does better. Why?
- Two reasons (per Bonet & Geffner):
  - HSPr still admits spurious states.
  - Since HSP2 recomputes the estimate in each state, it actually has more information.

Slide 20: Evidence for spurious states
- Re-run HSPr using the mutex set derived from all possible pairs:
  - No difference in most domains.
  - Improvement in the tire-world domain (which has complex interactions).
  - Slowdown in the logistics domain.

Slide 21: Branching factor
- Varies widely from instance to instance (though it always seems greater in forward chaining).
- The performance of HSP2 vs. HSPr does not seem to correlate with branching factor.
  - Other factors dominate, e.g. the informedness of the heuristic.

Slide 22: Derivation of heuristics
- h+ has problems when there are positive or negative interactions between subgoals.
- Can efficient heuristics better capture these interactions?
- h^2: use the cost of the most expensive pair of goals.
  - Still admissible, more informative than h-max, still cheap.
  - Room for domain-dependent options?
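The h^2 evaluation step can be sketched as below, assuming a table `d2` mapping frozensets of at most two atoms to their estimated costs (how such a table is computed is not shown here; absent entries are treated as unreachable):

```python
from itertools import combinations

def h2(d2, goal):
    """h^2: cost of the most expensive singleton or pair of goal atoms,
    given a cost table `d2` over frozensets of size <= 2."""
    goal = list(goal)
    subsets = [frozenset({p}) for p in goal] + \
              [frozenset(c) for c in combinations(goal, 2)]
    return max((d2.get(s, float("inf")) for s in subsets), default=0)
```

Maximising over pairs rather than single atoms is what lets h^2 see some pairwise interactions while remaining admissible.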

Slide 23: Comparing HSPr and Graphplan
- Both search forwards in a relaxed space, then backwards.
- The planning graph encodes an admissible heuristic: hg(s) = j if j is the first level where s appears without a mutex.
- Graphplan encodes IDA* efficiently as solution extraction, but this makes it hard to use other search algorithms.

Slide 24: Overall
- Planning as heuristic search: the HSP family is elegant, quite efficient for domain-independent planning, and built on clear principles of search.
- Simple algorithms and a relatively thorough analysis make it easy to consider many extensions.

Slide 25: Ways to extend
- Improving automatically generated heuristics.
- More flexible action representations.
  - Probably easier to encode in forward than in backward search.
- Principles and a format for encoding domain-dependent heuristics.
  - Both the estimate function and other search control.