Fast Forward Planning
By J. Hoffmann and B. Nebel

Chronology (timeline figure): “Traditional” planners; around 1995, optimal “Graphplan-based” and “Satplan-based” planners; around 2000, sub-optimal but faster “Heuristic-based” planners.

Fast Forward (FF)
Winner of the AIPS 2000 planning competition. A forward-chaining heuristic search planner.
Basic principle: hill-climb through the space of problem states, starting at the initial state.
- Each child state results from applying a single action.
- Always move to the first child state found that is closer to the goal.
- Record the actions applied along the path.
- The actions leading to the goal constitute a plan.
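To make the state-transition step concrete, here is a minimal Python sketch of a STRIPS-style state and action, and of how a child state results from applying a single action. The Action tuple and the function names are illustrative, not FF's actual data structures:
from collections import namedtuple
# An action has a precondition set, an add list, and a delete list.
Action = namedtuple("Action", ["name", "pre", "add", "delete"])
def applicable(state, action):
    # An action is applicable if all of its preconditions hold in the state.
    return action.pre <= state
def apply_action(state, action):
    # Child state: delete effects removed, add effects asserted.
    return (state - action.delete) | action.add
# Example: states are frozensets of ground propositions.
s0 = frozenset({"at(truck,A)", "at(pkg,A)"})
drive = Action("drive(A,B)", frozenset({"at(truck,A)"}),
               frozenset({"at(truck,B)"}), frozenset({"at(truck,A)"}))
s1 = apply_action(s0, drive) if applicable(s0, drive) else s0   # {"at(truck,B)", "at(pkg,A)"}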

FF’s Base System Structure

FF Search Strategy
FF uses a strategy called enforced hill-climbing:
1. Obtain a heuristic estimate of the value of the current state.
2. Find action(s) transitioning to a better state.
3. Move to the better state.
4. Append the actions to the plan head.
5. Never backtrack over any choice.

Search: Enforced Hill-Climbing
Plain hill-climbing:
- Randomly breaks ties and adds to the path.
- Can wander on plateaus before restarting.
Enforced hill-climbing:
- At a state, perform breadth-first (exhaustive) search until a state with a better heuristic value is found.
- Force the search to that better position (if it exists).
- Add the path to the new state to the plan.
(A minimal code sketch of this loop follows below.)
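The sketch below assumes placeholder functions heuristic(state), successors(state) (yielding (action, next_state) pairs), and goal_test(state); it illustrates the idea, not FF's implementation:
from collections import deque
def enforced_hill_climbing(initial_state, goal_test, successors, heuristic):
    # From the current state, breadth-first search for any state with a strictly
    # better heuristic value, commit to it, and never revisit that choice.
    state, plan = initial_state, []
    h = heuristic(state)
    while not goal_test(state):
        frontier = deque([(state, [])])
        seen = {state}
        better = None
        while frontier and better is None:
            s, path = frontier.popleft()
            for action, s2 in successors(s):
                if s2 in seen:
                    continue
                seen.add(s2)
                if heuristic(s2) < h:
                    better = (s2, path + [action])
                    break
                frontier.append((s2, path + [action]))
        if better is None:
            return None              # stuck: FF would fall back to best-first search
        state, escape_path = better
        h = heuristic(state)
        plan += escape_path          # the escape path is committed to permanently
    return plan
The key design choice is that the breadth-first search stops at the first strictly better state, so FF spends little time on each plateau as long as the heuristic is informative.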

Enforced Hill-Climbing (cont.)
- The success of this strategy depends on how informative the heuristic is. FF uses a heuristic found to be informative in a large class of benchmark planning domains.
- The strategy is not complete: never backtracking means that some parts of the search space are lost.
- If FF fails to find a solution using this strategy, it switches to standard best-first search.

Finding a better state: Plateaus

FF’s Heuristic Estimate
- The value of a state is a measure of how close it is to a goal state. This cannot be determined exactly (too hard), but it can be approximated.
- One way of approximating it is to solve a relaxed problem. The relaxation is obtained by ignoring the negative (delete) effects of the actions.
- The relaxed action set A' is defined by: A' = { <pre(a), add(a), ∅> | a ∈ A }
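Reusing the illustrative Action tuple from the earlier sketch, the relaxation is a simple transformation of the action set:
def relax(actions):
    # Relaxed action set A': same preconditions and add effects, empty delete lists.
    return [Action(a.name, a.pre, a.add, frozenset()) for a in actions]
In the relaxed problem a fact, once true, stays true, which is what makes the layer-by-layer reachability computation on the next slide possible.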

Building the Relaxed Plan Graph
- Start at the initial state.
- Repeatedly apply all relaxed actions whose preconditions are satisfied, asserting their (positive) effects in the next layer.
- If the graph reaches a layer where no new facts can be added and the goals are still not all present, then the problem is unsolvable.
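A sketch of the construction, where relaxed_actions are objects with .pre and .add sets (as in the earlier sketch) and goals is a set of propositions; names are illustrative:
def build_relaxed_graph(initial_state, goals, relaxed_actions):
    # Build fact layers F0, F1, ... of the relaxed planning graph.
    # Returns the list of fact layers, or None if the goals are unreachable.
    layers = [frozenset(initial_state)]
    while True:
        facts = layers[-1]
        if goals <= facts:
            return layers                    # all goals reached
        new_facts = set(facts)
        for a in relaxed_actions:
            if a.pre <= facts:
                new_facts |= a.add           # assert positive effects
        if new_facts == facts:
            return None                      # fixpoint without the goals: unsolvable
        layers.append(frozenset(new_facts))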

Extracting a Relaxed Solution
- When a layer containing all of the goals is reached, FF searches backwards through the graph for a relaxed plan.
- The first possible achiever found is always used to achieve each goal.
- The relaxed plan may contain many actions occurring concurrently at a layer.
- The number of actions in the relaxed plan is an estimate of the true cost of achieving the goals.
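A simplified sketch of the backward extraction; FF's real extraction works layer by layer with further refinements, but this version illustrates the "first achiever" rule and yields the action count used as the heuristic value:
def extract_relaxed_plan(layers, goals, relaxed_actions):
    # Walk backwards over the fact layers; for each open (sub)goal, commit to the
    # first achiever whose preconditions are available one layer earlier.
    def first_layer(p):
        return next(i for i, facts in enumerate(layers) if p in facts)
    selected = set()
    open_goals = {g: first_layer(g) for g in goals}
    for level in range(len(layers) - 1, 0, -1):
        for g in [g for g, lvl in open_goals.items() if lvl == level]:
            achiever = next(a for a in relaxed_actions
                            if g in a.add and a.pre <= layers[level - 1])
            selected.add(achiever)
            del open_goals[g]
            for p in achiever.pre:                 # preconditions become subgoals
                if p not in layers[0] and p not in open_goals:
                    open_goals[p] = first_layer(p)
    return selected                                 # len(selected) is the estimate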

Graph-based heuristic

Example: Polish

Distance Estimate Extracted From A Relaxed Plan Graph

How FF Uses the Heuristic
- FF uses the heuristic to estimate how close each state is to a goal state (any state satisfying the goal propositions).
- The actions in the relaxed plan are used as a guide to which actions to explore when extending the plan.
- All actions in the relaxed plan at the first layer that achieve at least one of the (sub)goals required at the second layer are considered helpful.
- FF restricts attention to the helpful actions when searching forward from a state.
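Under the extraction sketch above, the helpful-action filter can be expressed roughly as follows; goals_at_next_layer stands for the subgoals the extraction required at the second fact layer, and the names are illustrative:
def helpful_actions(current_state, relaxed_plan, goals_at_next_layer):
    # Actions from the relaxed plan that are applicable now and achieve
    # at least one subgoal required at the next layer.
    return {a for a in relaxed_plan
            if a.pre <= current_state and a.add & goals_at_next_layer}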

Distance Estimate Extracted From A Relaxed Plan Graph

Properties of the Heuristic
- The relaxed plan that is extracted is not guaranteed to be the optimal relaxed plan, so the heuristic is not admissible and FF can produce non-optimal solutions.
- Focusing only on helpful actions is not completeness preserving.
- Enforced hill-climbing is not completeness preserving.

Getting Out of Dead-Ends
- Because FF never backtracks, it can get stuck in dead-ends. These arise when an action cannot be reversed: having entered a bad state, there is no way to improve.
- When no search progress can be made, FF switches to best-first search from the initial state.
- Detecting a dead-end can be expensive if the plateau is large.
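The fallback is a standard greedy best-first search over the same state space and heuristic; a minimal sketch, not FF's exact implementation:
import heapq
from itertools import count
def best_first_search(initial_state, goal_test, successors, heuristic):
    # Greedy best-first search: always expand the open state with the lowest
    # heuristic value. Complete on finite state spaces.
    tie = count()                                    # tie-breaker for the heap
    frontier = [(heuristic(initial_state), next(tie), initial_state, [])]
    seen = {initial_state}
    while frontier:
        _, _, state, plan = heapq.heappop(frontier)
        if goal_test(state):
            return plan
        for action, s2 in successors(state):
            if s2 not in seen:
                seen.add(s2)
                heapq.heappush(frontier, (heuristic(s2), next(tie), s2, plan + [action]))
    return None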

Runtime Curves on Large Logistics Instances

FF versus HSP
- FF and HSP are both forward-chaining planners that use a hill-climbing strategy based on relaxed distance estimates.
- FF uses a heuristic evaluation based on the number of actions in an explicit relaxed plan.
- HSP uses weight values which approximate (but may overestimate) the length of a relaxed plan.
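For contrast, a rough sketch of HSP-style additive weights computed on the delete-relaxed problem: a proposition already true costs 0, otherwise 1 plus the summed costs of the cheapest achiever's preconditions. Summing precondition costs ignores shared subgoals, which is why the estimate can overestimate the relaxed plan length:
def additive_weights(initial_state, actions):
    # HSP-style additive cost estimate for each proposition, by fixpoint iteration
    # over the delete-relaxed problem (actions as .pre/.add objects).
    INF = float("inf")
    cost = {p: 0 for p in initial_state}
    changed = True
    while changed:
        changed = False
        for a in actions:
            if any(p not in cost for p in a.pre):
                continue                             # some precondition not yet reachable
            c = 1 + sum(cost[p] for p in a.pre)
            for q in a.add:
                if c < cost.get(q, INF):
                    cost[q] = c
                    changed = True
    return cost    # heuristic for a goal set G: sum(cost[g] for g in G)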

FF versus HSP (cont.)
- FF uses a number of pruning heuristics that can be very powerful (especially in the simply structured propositional benchmark domains).
- FF terminates the breadth-first search for a successor state as soon as an improvement is found; HSP selects successors randomly from the set of best states.
- FF defaults to complete best-first search from the initial state if the enforced hill-climbing strategy fails.