Planning (Chapter 10)
Slides by Svetlana Lazebnik, 9/2016, with modifications by Mark Hasegawa-Johnson, 9/2017
http://en.wikipedia.org/wiki/Rube_Goldberg_machine

Planning
Problem: I’m at home and I need milk, bananas, and a drill. How is planning different from regular search?
– States and action sequences typically have complex internal structure
– State space and branching factor are huge
– Multiple subgoals at multiple levels of resolution
Examples of planning applications:
– Scheduling of tasks in space missions
– Logistics planning for the army
– Assembly lines, industrial processes
– Robotics
– Games, storytelling

A representation for planning
STRIPS (Stanford Research Institute Problem Solver): classical planning framework from the 1970s
States are specified as conjunctions of predicates:
– Start state: At(Home) ∧ Sells(SM, Milk) ∧ Sells(SM, Bananas) ∧ Sells(HW, Drill)
– Goal state: At(Home) ∧ Have(Milk) ∧ Have(Bananas) ∧ Have(Drill)
Actions are described in terms of preconditions and effects:
– Go(x, y)
  Precond: At(x)
  Effect: ¬At(x) ∧ At(y)
– Buy(x, store)
  Precond: At(store) ∧ Sells(store, x)
  Effect: Have(x)
Planning is “just” a search problem
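To make the representation concrete, here is a minimal Python sketch (my own illustration, not part of the slides) of one way to encode STRIPS: a state is a set of ground atoms, and an action carries a precondition set, an add list, and a delete list. The action instances below are hand-ground versions of the shopping operators above.

```python
# Minimal STRIPS encoding (illustrative sketch; names follow the slide's example).
from collections import namedtuple

Action = namedtuple("Action", ["name", "precond", "add", "delete"])

def applicable(state, action):
    """An action is applicable when all of its preconditions hold in the state."""
    return action.precond <= state

def apply_action(state, action):
    """Successor state = (state minus delete list) plus add list."""
    return frozenset((state - action.delete) | action.add)

# Hand-ground instances of Go and Buy from the slide.
go_home_sm = Action("Go(Home, SM)",
                    precond=frozenset({"At(Home)"}),
                    add=frozenset({"At(SM)"}),
                    delete=frozenset({"At(Home)"}))
buy_milk = Action("Buy(Milk, SM)",
                  precond=frozenset({"At(SM)", "Sells(SM, Milk)"}),
                  add=frozenset({"Have(Milk)"}),
                  delete=frozenset())

start = frozenset({"At(Home)", "Sells(SM, Milk)", "Sells(SM, Bananas)", "Sells(HW, Drill)"})
state = apply_action(start, go_home_sm)
state = apply_action(state, buy_milk)
assert "Have(Milk)" in state
```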

Algorithms for planning
Forward (progression) state-space search: starting with the start state, find all applicable actions (actions for which preconditions are satisfied), compute the successor state based on the effects, and keep searching until the goals are met
– Can work well with good heuristics
– Example: https://www.youtube.com/watch?v=QLNSkFnBYuM
Backward (regression) relevant-states search: to achieve a goal, ask what must have been true in the previous state
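Continuing the sketch above (again my own illustration, not the slides' code), forward progression search is just graph search over STRIPS states. Plain breadth-first search is shown here; a practical planner would swap in heuristic search such as A*.

```python
from collections import deque

def forward_search(start, goal, actions):
    """Breadth-first progression search: apply every applicable action to each
    reached state until all goal atoms are satisfied. Returns a list of action names."""
    frontier = deque([(frozenset(start), [])])
    visited = {frozenset(start)}
    while frontier:
        state, plan = frontier.popleft()
        if goal <= state:                       # every goal atom holds
            return plan
        for a in actions:
            if applicable(state, a):
                nxt = apply_action(state, a)
                if nxt not in visited:
                    visited.add(nxt)
                    frontier.append((nxt, plan + [a.name]))
    return None                                 # no plan exists

# forward_search(start, frozenset({"Have(Milk)"}), [go_home_sm, buy_milk])
# -> ['Go(Home, SM)', 'Buy(Milk, SM)']
```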

Forward chaining = start with what you already have, and then derive the next step

Forward chaining: each step in the search is one application of modus ponens
In forward chaining, we start from a state P that the search process has already achieved
We then check whether it is possible to apply modus ponens: given the current state P, is there an operator P → Q available? If so, Q becomes a newly reachable state

Backward chaining = start with the goal state, and search backward

Backward chaining: each step in the search is one application of modus ponens
In backward chaining, we start from a state Q that we want to achieve
We then check whether it is possible to apply modus ponens to generate Q from some other state P: is there an operator P → Q? If so, P becomes the new subgoal
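In STRIPS terms this backward step is goal regression; the two helpers below are my own sketch of it, reusing the earlier encoding: an action is relevant if it adds some goal atom without deleting any, and regressing the goal through it swaps the atoms the action adds for the action's preconditions.

```python
def relevant(goal, action):
    """Relevant = achieves at least one goal atom and undoes none of them."""
    return bool(action.add & goal) and not (action.delete & goal)

def regress(goal, action):
    """The subgoal that must hold just before the action is executed."""
    return frozenset((goal - action.add) | action.precond)

# Example: regressing the goal {Have(Milk)} through Buy(Milk, SM)
# yields the subgoal {At(SM), Sells(SM, Milk)}.
assert regress(frozenset({"Have(Milk)"}), buy_milk) == frozenset({"At(SM)", "Sells(SM, Milk)"})
```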

Heuristics for planning: means-ends analysis
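The slide gives only the title, but the idea of means-ends analysis is to focus on the difference between the current state and the goal, and to prefer actions that reduce that difference. A rough sketch under the same STRIPS encoding (the function names are mine):

```python
def difference(state, goal):
    """Goal atoms not yet satisfied -- the 'differences' means-ends analysis attacks."""
    return goal - state

def goal_count_heuristic(state, goal):
    """A simple planning heuristic: the number of unsatisfied goal atoms."""
    return len(difference(state, goal))

def reduces_difference(state, goal, action):
    """Means-ends style action selection: prefer actions that add a missing goal atom."""
    return bool(action.add & difference(state, goal))
```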

Challenges of planning: “Sussman anomaly”
[Figure: blocks-world start state (C on A; A and B on the table) and goal state requiring On(A,B) and On(B,C)]
– Let’s try to achieve On(A,B) first
– Let’s try to achieve On(B,C) first
http://en.wikipedia.org/wiki/Sussman_Anomaly

Challenges of planning: “Sussman anomaly”
Shows the limitations of non-interleaved planners that consider subgoals in sequence and try to satisfy them one at a time
– If you try to satisfy subgoal X and then subgoal Y, X might undo some preconditions for Y, or Y might undo some effects of X
More powerful planning approaches must interleave the steps towards achieving multiple subgoals
http://en.wikipedia.org/wiki/Sussman_Anomaly
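To make the conflict concrete, here is a small trace (my own illustration, reusing the STRIPS sketch from earlier; the Move actions are hand-ground for exactly this scenario). Achieving On(A,B) first destroys Clear(B), which On(B,C) needs; achieving On(B,C) first leaves A buried under C and B, so On(A,B) cannot be reached without undoing On(B,C).

```python
# Hand-ground blocks-world actions for the Sussman anomaly (not a full domain).
move_C_A_Table = Action("Move(C, A, Table)",
                        precond=frozenset({"On(C,A)", "Clear(C)"}),
                        add=frozenset({"On(C,Table)", "Clear(A)"}),
                        delete=frozenset({"On(C,A)"}))
move_A_Table_B = Action("Move(A, Table, B)",
                        precond=frozenset({"On(A,Table)", "Clear(A)", "Clear(B)"}),
                        add=frozenset({"On(A,B)"}),
                        delete=frozenset({"On(A,Table)", "Clear(B)"}))
move_B_Table_C = Action("Move(B, Table, C)",
                        precond=frozenset({"On(B,Table)", "Clear(B)", "Clear(C)"}),
                        add=frozenset({"On(B,C)"}),
                        delete=frozenset({"On(B,Table)", "Clear(C)"}))

sussman_start = frozenset({"On(C,A)", "On(A,Table)", "On(B,Table)", "Clear(C)", "Clear(B)"})

# Order 1: achieve On(A,B) first.
s1 = apply_action(sussman_start, move_C_A_Table)   # clear A by moving C to the table
s1 = apply_action(s1, move_A_Table_B)              # On(A,B) holds, but Clear(B) was deleted...
assert not applicable(s1, move_B_Table_C)          # ...so On(B,C) now requires undoing On(A,B)

# Order 2: achieve On(B,C) first.
s2 = apply_action(sussman_start, move_B_Table_C)   # On(B,C) holds, but A is still buried under C...
assert not applicable(s2, move_A_Table_B)          # ...and clearing A would require undoing On(B,C)
```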

Situation space planning vs. plan space planning
Situation space planners: each node in the search space represents a world state, arcs are actions in the world
– Plans are sequences of actions from start to finish
– Must be totally ordered
Plan space planners: nodes are (incomplete) plans, arcs are transformations between plans
– Actions in the plan may be partially ordered
– Principle of least commitment: avoid ordering plan steps unless absolutely necessary

Partial order planning
Task: put on socks and shoes
[Figure: one partial-order plan (each sock before its shoe, left and right otherwise unordered) alongside the corresponding total-order (linear) plans]
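As a quick illustration (my own code, not from the slides): the four-step plan has only two ordering constraints, and enumerating the total orders consistent with them recovers the six linear plans that a single partial-order plan stands for.

```python
from itertools import permutations

steps = ["LeftSock", "RightSock", "LeftShoe", "RightShoe"]
orderings = [("LeftSock", "LeftShoe"), ("RightSock", "RightShoe")]  # the only required orderings

def consistent(order):
    """A total order is a valid linearization if it respects every ordering constraint."""
    pos = {step: i for i, step in enumerate(order)}
    return all(pos[a] < pos[b] for a, b in orderings)

linearizations = [order for order in permutations(steps) if consistent(order)]
print(len(linearizations))          # 6 total-order plans for one partial-order plan
for order in linearizations:
    print(" -> ".join(order))
```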

Partial Order Planning Example
[Figure: the initial plan has only a Start step, asserting At(Home), Sells(SM, Milk), and Sells(SM, Bananas), and a Finish step, requiring Have(Milk) and Have(Bananas)]
Start: empty plan
Action: find a flaw in the plan and modify the plan to fix the flaw

Partial Order Planning Example (continued)
[Figure: the completed plan adds Buy(x1, Milk) and Buy(x2, Bananas) steps to supply Have(Milk) and Have(Bananas) to Finish, and a Go(x3, SM) step to supply the At(SM) precondition of both Buy steps; Go's own precondition At(x3) is supplied by Start, with the bindings x1 = SM, x2 = SM, x3 = Home]
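A partial-order planner works on a plan object rather than a world state. The dataclass below is my own sketch (not the slides' notation) of what that object tracks for the plan in the figure: steps, ordering constraints, causal links, open preconditions, and variable bindings. Each planning action picks one remaining flaw (an open precondition or a threatened causal link) and fixes it.

```python
from dataclasses import dataclass, field

@dataclass
class PartialOrderPlan:
    steps: set = field(default_factory=set)           # plan steps, e.g. "Buy(x1, Milk)"
    orderings: set = field(default_factory=set)       # pairs (before, after)
    causal_links: set = field(default_factory=set)    # (producer, condition, consumer)
    open_preconds: set = field(default_factory=set)   # (condition, step) not yet supported
    bindings: dict = field(default_factory=dict)      # variable bindings, e.g. {"x1": "SM"}

# The plan from the figure, mid-construction:
plan = PartialOrderPlan(
    steps={"Start", "Go(x3, SM)", "Buy(x1, Milk)", "Buy(x2, Bananas)", "Finish"},
    orderings={("Start", "Go(x3, SM)"),
               ("Go(x3, SM)", "Buy(x1, Milk)"), ("Go(x3, SM)", "Buy(x2, Bananas)"),
               ("Buy(x1, Milk)", "Finish"), ("Buy(x2, Bananas)", "Finish")},
    causal_links={("Go(x3, SM)", "At(SM)", "Buy(x1, Milk)"),
                  ("Buy(x1, Milk)", "Have(Milk)", "Finish"),
                  ("Buy(x2, Bananas)", "Have(Bananas)", "Finish")},
    open_preconds={("At(x3)", "Go(x3, SM)")},
    bindings={"x1": "SM", "x2": "SM"},
)
```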

Application of planning: Automated storytelling https://research.cc.gatech.edu/inc/mark-riedl

Application of planning: Automated storytelling
Applications:
– Personalized experience in games
– Automatically generating training scenarios (e.g., for the army)
– Therapy for kids with autism
– Computational study of creativity
https://research.cc.gatech.edu/inc/mark-riedl

Application of planning: the Gale-Church alignment algorithm

Complexity of planning
Planning is PSPACE-complete
– The length of a plan can be exponential in the number of "objects" in the problem!
– Example: Towers of Hanoi
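A quick illustration of that exponential blowup (my own snippet): the shortest Towers of Hanoi plan for n disks has 2^n − 1 moves, even though the problem description only grows linearly with n.

```python
def hanoi(n, source, target, spare, plan):
    """Append the optimal move sequence for n disks (source -> target) to `plan`."""
    if n == 0:
        return
    hanoi(n - 1, source, spare, target, plan)       # move the n-1 smaller disks out of the way
    plan.append(f"Move(disk{n}, {source}, {target})")
    hanoi(n - 1, spare, target, source, plan)       # move them back on top of disk n

for n in range(1, 11):
    plan = []
    hanoi(n, "A", "C", "B", plan)
    assert len(plan) == 2 ** n - 1                  # plan length doubles with each extra disk
```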

Complexity of planning
Planning is PSPACE-complete
– The length of a plan can be exponential in the number of "objects" in the problem! So is game search
Archetypal PSPACE-complete problem: quantified boolean formula (QBF)
– Example: is this formula true? ∃x1 ∀x2 ∃x3 ∀x4 (x1 ∨ x3 ∨ x4) ∧ (x2 ∨ x3 ∨ x4)
– Compare to SAT: ∃x1 ∃x2 ∃x3 ∃x4 (x1 ∨ x3 ∨ x4) ∧ (x2 ∨ x3 ∨ x4)
– The relationship between SAT and QBF is akin to the relationship between puzzles and games
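As a small sanity check of the puzzles-versus-games intuition (my own sketch, using the formula as reconstructed above): an existential quantifier behaves like a player who gets to pick a value, while a universal quantifier must be survived for both values, which is exactly a game tree.

```python
def eval_qbf(quantifiers, matrix, values=()):
    """Evaluate a prenex QBF: `quantifiers` is a sequence of "exists"/"forall",
    one per variable in order; `matrix` is a callable on the tuple of boolean values."""
    if not quantifiers:
        return matrix(values)
    branches = (eval_qbf(quantifiers[1:], matrix, values + (v,)) for v in (True, False))
    return any(branches) if quantifiers[0] == "exists" else all(branches)

# The matrix from the slide (x[0] is x1, ..., x[3] is x4).
matrix = lambda x: (x[0] or x[2] or x[3]) and (x[1] or x[2] or x[3])

print(eval_qbf(["exists", "forall", "exists", "forall"], matrix))  # QBF: quantifiers alternate like moves
print(eval_qbf(["exists"] * 4, matrix))                            # SAT: all variables existential
```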

Real-world planning
Resource constraints
– Instead of "static," the world is "semidynamic": we can't think forever
Actions at different levels of granularity: hierarchical planning
– In order to make the depth of the search smaller, we might convert the world from "fully observable" to "partially observable"
Contingencies: actions failing
– Instead of being "deterministic," maybe the world is "stochastic"
Incorporating sensing and feedback
– Possibly necessary to address stochastic or multi-agent environments