Knowledge Repn. & Reasoning Lec #26: Filtering with Logic UIUC CS 498: Section EA Professor: Eyal Amir Fall Semester 2004.


Knowledge Repn. & Reasoning Lec #26: Filtering with Logic UIUC CS 498: Section EA Professor: Eyal Amir Fall Semester 2004

Last Time Dynamic Bayes Nets –Forward-backward algorithm –Filtering Approximate inference via factoring and sampling

Filtering Stochastic Processes Dynamic Bayes Nets (DBNs): factored representation [figure: a DBN unrolled over time, with state variables s1…s5 at each step] Exact DBN filtering: O(2^n) space, O(2^(2n)) time
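To make the blow-up concrete, here is a minimal sketch of one exact forward-filtering step with the belief and transition model stored as explicit dictionaries over joint states; all names and numbers are illustrative, not from the slides. With n binary features the belief dictionary has 2^n entries and the nested loop touches up to 2^(2n) state pairs, which is exactly the cost the slide quotes.

```python
# Minimal exact forward filtering over an explicit joint state space.
# belief: {state: prob}; trans: {(s, s2): P(s2|s)}; obs_lik: {s2: P(o|s2)}.

def forward_step(belief, trans, obs_lik):
    succs = {s2 for (_, s2) in trans}          # all possible next states
    new = {}
    for s, p in belief.items():                # O(2^n) belief entries ...
        for s2 in succs:                       # ... times O(2^n) successors
            new[s2] = new.get(s2, 0.0) + p * trans.get((s, s2), 0.0) * obs_lik[s2]
    z = sum(new.values())                      # normalize
    return {s: q / z for s, q in new.items()}

# tiny illustrative system: two joint states, sticky dynamics,
# an observation that favors state 1
belief = {0: 0.5, 1: 0.5}
trans = {(0, 0): 0.9, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.8}
belief = forward_step(belief, trans, {0: 0.3, 1: 0.7})   # -> {0: 0.34375, 1: 0.65625}
```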

Filtering Stochastic Processes Dynamic Bayes Nets (DBNs): factored representation, O(2^n) space, O(2^(2n)) time Kalman Filter: Gaussian belief state and linear transition model, O(n^2) space, O(n^3) time [figure: DBN and Kalman-filter state-transition diagrams]
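For contrast, a one-dimensional Kalman predict/update cycle can be sketched in a few lines; the O(n^2) space and O(n^3) time on the slide arise when the mean is an n-vector and the covariance an n×n matrix, while the scalar version below only shows the structure of the two steps. All numbers are illustrative.

```python
# One predict/update cycle of a 1-D Kalman filter (sketch).

def kalman_step(mean, var, u, q, z, r):
    # predict: linear transition x' = x + u with process-noise variance q
    mean, var = mean + u, var + q
    # update with measurement z of noise variance r
    k = var / (var + r)                    # Kalman gain
    return mean + k * (z - mean), (1 - k) * var

m, v = kalman_step(0.0, 1.0, u=1.0, q=0.5, z=1.2, r=0.5)   # -> (1.15, 0.375)
```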

Complexity Results Filtering for deterministic systems is NP-hard when the initial state is not fully known [Liberatore ’97] [Amir & Russell ’03]: every representation of belief states grows exponentially for some deterministic systems

Today Tracking and filtering logical knowledge Foundations for efficient filtering Compact representation indefinitely Possible projects

Logical Filtering Belief state = logical formula Observations = logical formulae Actions = effect rules –e.g., “fetch(X,Y) causes has(X) if in(X,Y)” Actions may be nondeterministic Partial observations
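One possible (purely illustrative) encoding of such effect rules as a data structure, with ground literals stored as plain strings; the class and field names are my own, not from the lecture:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EffectRule:
    """A rule 'action causes effect if precond', literals as strings."""
    action: str
    effect: frozenset    # literals made true when the rule fires
    precond: frozenset   # literals that must hold for the rule to fire

# the slide's example rule, ground on X=broom, Y=closet
rule = EffectRule("fetch(broom,closet)",
                  effect=frozenset({"has(broom)"}),
                  precond=frozenset({"in(broom,closet)"}))
```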

Example: A Cleaning Robot Initial knowledge: ? Apply action fetch(broom,closet) Resulting knowledge: ¬in(broom,closet) Reason: –If initially ¬in(broom,closet), then still ¬in(broom,closet) –If initially in(broom,closet), then now ¬in(broom,closet)

Filtering with Possible Worlds Problem: n world features ⇒ 2^n states

Filtering Possible Worlds Initially we are in {s1,…,sk} Action a: Filter[a]({s1,…,sk}) = {s′ | R(s1,a,s′) or … or R(sk,a,s′)} Observing o: Filter[o]({s1′,…,su′}) = {s1′,…,su′} ∩ {s | o holds in s}
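The possible-worlds semantics above translates directly into set operations; a minimal sketch, where the explicit transition relation R and the tiny three-state example are made up for illustration:

```python
# Possible-worlds filtering as plain set operations.
# R is an explicit set of (state, action, next_state) triples.

def filter_action(belief, a, R):
    # union of successors of every state in the belief under action a
    return {s2 for (s1, act, s2) in R if act == a and s1 in belief}

def filter_obs(belief, consistent):
    # intersect with the states where the observation holds
    return {s for s in belief if consistent(s)}

R = {("s1", "a", "s2"), ("s2", "a", "s3"), ("s3", "a", "s1")}
b = filter_action({"s1", "s2"}, "a", R)    # -> {"s2", "s3"}
b = filter_obs(b, lambda s: s != "s2")     # -> {"s3"}
```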

Filtering with Logical Formulae Action-Definition(a)_{t,t+1} = ⋀_i (Precond_i(a)_t → Effect_i(a)_{t+1}) ∧ Frame-Axioms(a)

Filtering with Logical Formulae Belief state S represented by formula φ Actions: Filter[a](φ_t) = logical consequences at time t+1 of φ_t ∧ Action-Definition(a)_{t,t+1} –e.g., from A_t ∨ B_t and B_t → C_{t+1} (with frame axioms) we conclude A_{t+1} ∨ (B_{t+1} ∧ C_{t+1}) Observations: Filter[o](φ) = φ ∧ o, so φ_{t+1} = Filter[o](Filter[a](φ_t)) Theorem: formula filtering implements the possible-worlds semantics

Contents Tracking and filtering logical knowledge Foundations for efficient filtering Compact representation indefinitely Possible projects

Distribution Properties Filter[a](φ ∨ ψ) ≡ Filter[a](φ) ∨ Filter[a](ψ) Filter[a](φ ∧ ψ) ⊨ Filter[a](φ) ∧ Filter[a](ψ) ¬Filter[a](φ) ∧ Filter[a](TRUE) ⊨ Filter[a](¬φ) Filtering a DNF belief state by factoring: filter each disjunct separately
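These properties can be checked by brute force on a small nondeterministic system; the successor map SUCC and the formulas P, Q below are invented for illustration. Disjunction distributes exactly, while conjunction gives only a one-directional entailment (here a strict subset):

```python
# Brute-force check of the distribution properties under possible-worlds
# semantics. States are 0..3; SUCC gives the nondeterministic successors
# of one action a.
SUCC = {0: {1, 2}, 1: {2}, 2: {3}, 3: {0, 1}}

def filt(belief):
    # Filter[a]: union of successors of the belief's states
    out = set()
    for s in belief:
        out |= SUCC[s]
    return out

P = {0, 3}   # worlds satisfying some formula phi
Q = {1, 3}   # worlds satisfying some formula psi

union_ok = filt(P | Q) == filt(P) | filt(Q)     # disjunction: equivalence
conj_strict = filt(P & Q) < filt(P) & filt(Q)   # conjunction: entailment only
```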

Distribution for Some Actions For STRIPS actions and 1:1 actions, distribution holds over both connectives: Filter[a](φ ∨ ψ) ≡ Filter[a](φ) ∨ Filter[a](ψ) and Filter[a](φ ∧ ψ) ≡ Filter[a](φ) ∧ Filter[a](ψ) Filter literals in the belief-state formula separately, and combine the results

Actions that map states 1:1 Examples: –flip(light) but not turn-on(light) –increase(speed,+10) but not set(speed,50) –pickUp(X,Y) but not pickUp(X) Most actions are 1:1 when formulated properly

Actions that map states 1:1 Reason for distribution over ∧: for 1:1 actions, Filter[a](φ ∧ ψ) ≡ Filter[a](φ) ∧ Filter[a](ψ); for non-1:1 actions only the entailment Filter[a](φ ∧ ψ) ⊨ Filter[a](φ) ∧ Filter[a](ψ) holds

STRIPS Actions Possibly nondeterministic effects No conditions on effects Example: turn-on(light) Used extensively in planning

Example: Filtering a Literal Initial knowledge: in(broom,closet) Apply fetch(broom,closet) Preconds: in(broom,closet) ∧ ¬locked(closet) Effects: has(broom) ∧ ¬in(broom,closet) Resulting knowledge: has(broom) ∧ ¬in(broom,closet) ∧ ¬locked(closet)

Example: Filtering a Formula Initial knowledge: in(broom,closet) ∧ ¬locked(closet) Apply fetch(broom,closet) Preconds: in(broom,closet) ∧ ¬locked(closet) Effects: has(broom) ∧ ¬in(broom,closet) Resulting knowledge: has(broom) ∧ ¬in(broom,closet) ∧ ¬locked(closet)

Filtering a Single Literal Closed-form solution: Filter[a](literal) = ⋀_{literal ⊨ Pre_1 ∨ … ∨ Pre_u} (Eff_1 ∨ … ∨ Eff_u) ∧ B(a) –a has effect rules (and frame rules) “a causes Eff_i if Pre_i” –Eff_1, …, Eff_u – effects of action a –Pre_1, …, Pre_u – preconditions of action a –Roughly, B(a) ≡ Filter[a](TRUE)

Filtering a Literal Filter[a](literal) = ⋀_{literal ⊨ Pre_1 ∨ … ∨ Pre_u} (Eff_1 ∨ … ∨ Eff_u) ∧ B(a) Belief state (φ): ¬locked(closet) Action (a): fetch(broom,closet) with “fetch(X,Y) causes has(X) ∧ ¬in(X,Y) if ¬locked(Y) ∧ in(X,Y)” Belief state after a: Filter[a](¬locked(closet)) ≡ ¬locked(closet) ∧ ¬in(broom,closet) Reason: ¬locked(closet) ⊨ (¬locked(closet) ∧ in(broom,closet)) ∨ ¬in(broom,closet), so we conjoin the matching effects (has(broom) ∧ ¬in(broom,closet)) ∨ ¬in(broom,closet), which simplify to ¬in(broom,closet)

Algorithm for Permutation Actions Belief state (φ): ¬locked(closet) ∧ (in(broom,closet) ∨ in(broom,shed)) Action (a): fetch(broom,closet) with “fetch(X,Y) causes has(X) ∧ ¬in(X,Y) if ¬locked(Y) ∧ in(X,Y)” Distribute: Filter[a](φ) ≡ Filter[a](¬locked(closet)) ∧ (Filter[a](in(broom,closet)) ∨ Filter[a](in(broom,shed))) Filter[a](¬locked(closet)) ≡ ¬locked(closet) ∧ ¬in(broom,closet) Filter[a](in(broom,closet)) ≡ (¬locked(closet) ∧ ¬in(broom,closet) ∧ has(broom)) ∨ (locked(closet) ∧ in(broom,closet)) Filter[a](in(broom,shed)) ≡ in(broom,shed) Resulting belief state: Filter[a](φ) ≡ ¬locked(closet) ∧ ¬in(broom,closet) ∧ (has(broom) ∨ in(broom,shed))
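The worked example can be verified against the possible-worlds semantics by enumerating all 16 states over the four features and applying fetch(broom,closet) as a transition function; this is a sketch under the assumption that the action behaves exactly as the effect rule states (precondition holds: broom fetched; otherwise nothing changes):

```python
from itertools import product

# a state is a tuple (locked, in_closet, in_shed, has) of booleans
def fetch(s):
    locked, in_closet, in_shed, has = s
    if not locked and in_closet:              # precondition of the rule
        return (locked, False, in_shed, True) # effect: has & not in_closet
    return s                                  # otherwise unchanged

states = list(product([False, True], repeat=4))
# belief: not locked(closet) and (in(broom,closet) or in(broom,shed))
belief = {s for s in states if not s[0] and (s[1] or s[2])}
after = {fetch(s) for s in belief}
# slide's answer: not locked and not in_closet and (has or in_shed)
expected = {s for s in states if not s[0] and not s[1] and (s[3] or s[2])}
```

Running this, `after == expected`, confirming the filtered belief state on the slide.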

Summary: Efficient Update Fast exact update with any observation formulae, if one of the following: –STRIPS action (possibly nondeterministic) –Action is a 1:1 mapping between states –Belief states include all their prime implicates

Talk Outline Tracking and filtering knowledge Tractability results Compact representation over time Discussion & Future work

Tractability and Representation Size Theorem¹: every propositional representation of the belief state grows exponentially for some systems, even when the initial belief state is compactly represented (follows from [Boppana & Sipser ’90]) ¹Rough statement; the complete one is in [A. & Russell ’03].

Example: A Cleaning Robot Initial knowledge (small formula): in(broom,closet) ∨ in(broom,shed) Apply action fetch(broom,closet) Resulting knowledge (big formula): (has(broom) ∧ ¬locked(closet) ∧ ¬in(broom,closet)) ∨ (¬has(broom) ∧ locked(closet) ∧ in(broom,closet)) ∨ (¬has(broom) ∧ in(broom,shed)) Reason for space explosion: uncertainty about the action’s success and preconditions

Compact & Tractable Cases Compact belief state representation: –STRIPS actions with belief state in k-CNF –1:1 actions with belief state in k-CNF –… –Observations in 2-CNF Theorem: filtering with STRIPS actions maps k-CNF to k-CNF, in time ~O(|φ| · 2^#rules(a)) Corollary: filtering with STRIPS actions keeps the belief state in O(n^k) size (k fixed)

STRIPS-Filter: Experiments [plot: average filter time (msec) per filtering step, for domains of ~150 to ~270 features]

STRIPS-Filter: Experiments [plot: average belief-state size (literals) per filtering step, for domains of ~110 to ~210 features]

Intuition for More Results Filtering with a deterministic action a is equivalent to filtering with actions a1 (“a succeeds”) or a2 (“a fails”), where a1, a2 are STRIPS actions with known success/failure: Filter[a](φ) ≡ Filter[a1](φ) ∨ Filter[a2](φ) STRIPS with known success/failure: Filter[a](l_1 ∧ … ∧ l_u) = (l_1 ∧ … ∧ l_u) ∧ B(a), or B(a) alone

Recent Results (unpublished) #1 Compact representation indefinitely for STRIPS, if failure leaves features unchanged and effects are 2-clauses: “a causes (f ∨ g) ∧ (g ∨ ¬h) if x ∧ y” Starting from a belief state with r clauses we get at most max(r,n) clauses indefinitely, if effects are conjunctions of at most two clauses

Recent Results (unpublished) #2 Compact representation indefinitely for STRIPS, if failure has a nondeterministic effect on the affected features: “a causes f ∧ g if x ∧ y” “a causes (f ∨ ¬f) ∧ (g ∨ ¬g) if ¬x ∨ ¬y” A belief state in k-CNF is maintained indefinitely if effects are in k1-CNF and preconditions in k2-DNF, with k = k1 + k2

Related Work Stochastic filtering –[Kalman ’60], [Doucet et al. ’00], [Dean & Kanazawa ’88], [Boyen & Koller ’98], … Action theories and semantics –[Gelfond & Lifschitz ’97], [Baral & Son ’01], [Doherty et al. ’98], … Computation of progression –[Winslett ’90], [del Val ’92], [Lin & Reiter ’97], [Simon & del Val ’01], …

Possible Projects More families of actions/observations –Stochastic conditions on observations –Different data structures (BDDs? Horn?) Compact and efficient stochastic filtering Relational / first-order filtering Dynamic observation models, filtering in expanding worlds Logical Filtering of numerical variables

More Projects Filtering for Kriegspiel (partially observable chess) Autonomous exploration of uncharted domains Smart agents in rich environments

THE END

Example: Explosion of Space Initial knowledge: in(broom,closet) ∨ in(broom,shed) Apply action fetch(broom,closet) Resulting knowledge: (has(broom) ∧ ¬locked(closet) ∧ ¬in(broom,closet)) ∨ (¬has(broom) ∧ locked(closet) ∧ in(broom,closet)) ∨ (¬has(broom) ∧ in(broom,shed))

Tractability Problem Formula filtering is NP-hard in general Actions: Filter[a](φ_t) = Cn(φ_t ∧ (Precond(a)_t → Effect(a)_{t+1}) ∧ Frame-Axioms(a)) Cn(·) = logical consequences of · Specific cases? Approximation?

Filtering Beliefs Filtering: update knowledge of the world after actions and observations Stochastic filtering examples: –Dynamic Bayes Nets (DBNs): factored representation –Kalman Filter: Gaussian belief state and linear transition model [figure: world state s1…s4 evolving over time]

Agents Acting in The World Agents in partially observable domains –Cognitive, medical assistants –Cleaning, gardening robots –Space robots (exploration, repair, assist) –Game-playing/companion agents Knowledgeable agents –Use knowledge to decide on actions –Update knowledge about the world

Example: A Cleaning Robot Decides to clean the current room Knows the broom is in the closet Fetches the broom from the closet Now knows that the broom is in its hand and not in the closet

Permutation Actions Actions that permute the states: –flip(light) but not turn-on(light) –increase(speed,+10) but not set(speed,50) –pickUp(X,Y) but not pickUp(X)

Results: Tractable Cases Filtering a single literal Permutation actions STRIPS actions Prime-implicate representation of belief state

Filtering Logical Formulae: STRIPS-Filter If every executed action was possible to execute (or we observed an error), actions do not have conditional effects (but may have nondeterministic effects), and the belief-state representation is in PI-CNF (prime-implicate CNF), then Filter[a](⋀_i C_i) = ⋀_i Filter[a](C_i)

Summary: Tractable Cases Fast approximate update: propositional belief state represented in NNF, CNF, or DNF Fast exact update (if one of the following): –Action is a 1:1 mapping between states –STRIPS action (unconditional, possibly nondeterministic effects; observations distinguish success from failure of the action) –Belief states include all their prime implicates –Any observation formulae

Sources of Difficulty for Compact Representation For action a with effect rule “a causes Eff if Pre”: –We always know after the action that Eff ∨ ¬Pre –If we know Pre ∨ p, then after the action we know Eff ∨ p

How Is the State Kept Compact? STRIPS (nondeterministic) actions –We always know that the precondition held (or we got a signal that the action failed) –There are no conditional effects Permutation actions –We restrict the preconditions and effects, e.g., all rules of the form “a causes l_1 if l_2”, or one of the preconditions is always satisfied, or … Observations (and obs. model): in 2-CNF

STRIPS-Filter: Experimental Results [A. & Russell ’03]

Tractability and Representation Size Theorem¹: every propositional representation of the belief state grows exponentially for some systems, even when the initial belief state is compactly represented (follows from [Boppana & Sipser ’90]) However, special cases (e.g., belief states in 2-CNF) can be computed efficiently and represented compactly ¹Rough statement; the complete one is in [A. & Russell ’03].

Applications Tractable filtering and tracking of the world in high-dimensional domains with many objects, locations and relationships Learn effects and preconditions of actions in partially-observable domains Autonomous exploration of uncharted domains

Related Work Stochastic filtering –[Kalman ’60], [Blackman & Popoli ’99], [Doucet et al. ’00], … Action theories and semantics –[Gelfond & Lifschitz ’97], [Baral & Son ’01], [Doherty et al. ’98], … Computation of progression –[Winslett ’90], [del Val ’92], [Lin & Reiter ’97], [Simon & del Val ’01], …

THE END

Filtering STRIPS Actions STRIPS: –Action was executed or we observed an error, –No conditional effects, and –Possibly nondeterministic effects

Logical Filtering: Progress Outlook 18 months: relational filtering, learning actions in partially-observable domains 36 months: dynamic observation models, Horn belief states, filtering in expanding worlds, autonomous agents in games 54 months: first-order filtering, factored belief states, continuous time, autonomous exploration of uncharted domains

Today 1. Probabilistic graphical models 2. Treewidth methods: variable elimination; the clique tree algorithm 3. Applications du jour: Sensor Networks

Contents 1. Probabilistic graphical models 2. Exact inference and treewidth: variable elimination; junction trees 3. Applications du jour: Sensor Networks

Application: Planning General-purpose planning problem: –Given: Domain features (fluents) Action descriptions: effects, preconditions Initial state Goal condition –Find: Sequence of actions that is guaranteed to achieve the goal starting from the initial state

Application: Planning with partitions PartPlan Algorithm Start with a tree-structured partition graph Identify the goal partition Direct edges toward the goal In each partition –Generate all plans possible with depth d and width k: “if you give me a block, I can return it to you painted”; “if you give me a block, let me do a few things, and then give me another block, then I can return the two painted and glued together” –Pass messages toward the goal: all preconditions/effects for which there are feasible action sequences

Factored Planning: Analysis Planner is sound and complete Running time for finding plans of width w with m partitions of treewidth k is O(m · w · 2^(2w+2k)) Factoring can be done in polynomial time Goal can be distributed over partitions by adding at most 2 features per partition

Next Time Probabilistic Graphical Models: –Directed models: Bayesian Networks –Undirected models: Markov Fields Requires prior knowledge of: –Treewidth and graph algorithms –Probability theory