
Slide 1: Introduction to Artificial Intelligence, Lecture 12: Planning (Intro to AI, Fall 2002, © L. Joskowicz)
–Motivation
–Search, theorem proving, and planning
–Situation calculus
–Planning: the STRIPS algorithm
–Blocks world example (see attached)

Slide 2: Agent architectures
Problem solving agent
–states, start and goal states, operations
–search over the space of states
Knowledge-based agent
–facts, axioms in first-order logic
–theorem proving, resolution
Planning agent
–states, start and goal states, operations
–search over the space of plans

Slide 3: Relations between agent architectures
[diagram relating the three architectures: Problem solving (search), Planning, and Knowledge-based (deduction); edge labels include "special case" and "emulate"; key issues: representation and efficiency]

Slide 4: The blocks world (1)
[figure: two configurations of blocks a, b, c]
–all blocks are of equal size
–fixed table; only relative block position matters
–only one block on top of another block
–any number of blocks on the table
–blocks only move when the arm picks them
–blocks are picked and dropped by an arm
–the arm can only pick one block at a time

Slide 5: The blocks world (2)
[figure: start configuration (b on a, c on the table) and goal configuration (c on a, b on the table)]
Find a sequence of actions from the start state to the goal state, for example:
  unstack(b,a), putdown(b), pickup(c), stack(c,a)

Slide 6: Formalization of the blocks world
Objects: blocks a, b, c, ... and the table
States: conjunctions of on(X,Y), ontable(X), clear(X), handempty, holding(X)
Actions: pickup(X), putdown(X), stack(X,Y), unstack(X,Y)
Knowledge about states and actions:
–at most one block directly on top of another block
–the hand must be empty and the block clear to pick it up
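
To make the formalization concrete, here is a minimal Python sketch (my own encoding, not part of the lecture) that represents a state as a set of ground predicates and checks a conjunctive goal against it:

# Each predicate is a tuple; a state is a frozenset of ground predicates.
start = frozenset({
    ("on", "b", "a"), ("clear", "b"), ("clear", "c"),
    ("ontable", "a"), ("ontable", "c"), ("handempty",),
})
goal = frozenset({
    ("on", "c", "a"), ("clear", "b"), ("clear", "a"),
    ("ontable", "a"), ("ontable", "b"),
})

def satisfies(state, goals):
    """A conjunction of goal literals holds iff every literal is in the state."""
    return goals <= state

print(satisfies(start, {("on", "b", "a")}))   # True
print(satisfies(start, goal))                 # False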

Slide 7: Blocks world rules
pickup(X):    S1 /\ (ontable(X) /\ handempty /\ clear(X)) => S1 /\ holding(X)
putdown(X):   S1 /\ holding(X) => S1 /\ (ontable(X) /\ handempty /\ clear(X))
stack(X,Y):   S1 /\ (holding(X) /\ clear(Y)) => S1 /\ (on(X,Y) /\ handempty /\ clear(X))
unstack(X,Y): S1 /\ (handempty /\ clear(X) /\ on(X,Y)) => S1 /\ (holding(X) /\ clear(Y))

Slide 8: Blocks world in FOL
Explicit representation of the world state as a conjunction of predicates
–initial state: a logical sentence about the world
  on(b,a) /\ clear(b) /\ clear(c) /\ ontable(a) /\ ontable(c)
–goal state: a query describing the desired situation
  on(c,a) /\ clear(b) /\ clear(a) /\ ontable(a) /\ ontable(b)
–axioms about the world:
  ∀X holding(X) => ~clear(X)
  ∀X,Y ontable(X) => ~on(X,Y)
–what about the rules that change the world state?

Slide 9: Situation calculus (1)
We would like to write rules of the type:
  unstack(X,Y): (on(X,Y) /\ handempty /\ clear(X)) => holding(X) /\ clear(Y)
but we cannot add and delete clauses directly in the KB!
Alternative: add state variables to the predicates, and use a function DO that maps actions and states into new states
–DO(Action, State) maps into NewState
–DO(unstack(X,Y), S) is the new state obtained after applying unstack(X,Y) in state S.

Slide 10: Situation calculus (2)
Predicates: on(X,Y,S), ontable(X,S), clear(X,S), handempty(S), holding(X,S)
To characterize the action unstack(X,Y) we would write:
  [on(X,Y,S) /\ handempty(S) /\ clear(X,S)] => [holding(X, DO(unstack(X,Y),S)) /\ clear(Y, DO(unstack(X,Y),S))]
This creates a new state term, which is the composition of the old state S with the action unstack(X,Y).
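
The nested DO terms can be pictured with a small Python sketch; the tuple encoding of situations below is an illustrative assumption, not part of the lecture:

# Situations are terms: s0, DO(a1, s0), DO(a2, DO(a1, s0)), ...
def DO(action, situation):
    """Build the term denoting the situation reached by doing `action` in `situation`."""
    return ("DO", action, situation)

s0 = "s0"
s1 = DO(("unstack", "a", "b"), s0)
s2 = DO(("putdown", "a"), s1)
print(s2)
# ('DO', ('putdown', 'a'), ('DO', ('unstack', 'a', 'b'), 's0'))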

Slide 11: Situation calculus proofs (1)
We can now deduce that if
  on(a,b,s0) /\ ontable(b,s0) /\ clear(a,s0)
then
  holding(a, DO(unstack(a,b),s0)) /\ clear(b, DO(unstack(a,b),s0))
Deductions can continue. From the rule for putdown(X),
  holding(X,S) => ontable(X, DO(putdown(X),S)) /\ handempty(DO(putdown(X),S)) /\ clear(X, DO(putdown(X),S))
we obtain
  ontable(a, DO(putdown(a), DO(unstack(a,b),s0)))
[figure: the corresponding states s0, s1, s2 with blocks a and b]

Slide 12: Situation calculus proofs (2)
Generate new conclusions that contain the previous actions. When successful, the instantiated state is the list of actions:
  on(a,b,s5) /\ ...   where   s5 = DO(stack(b,c), DO(pickup(b), DO(unstack(..)))
Problem: we must explicitly state what changes and what does not!
  If a is red in state s_n, then it should remain red in state s_n+1!

Slide 13: The frame problem
We must explicitly state all that does NOT change from state to state, for otherwise we get incomplete results:
  on(a,b,S) => on(a,b, DO(unstack(c,d),S))
  color(X,red,S) => color(X,red, DO(unstack(Y,Z),S))
These axioms are called frame axioms.
Analogy: the old way of producing cartoons, where every frame is redrawn.
Writing down all frame axioms is both tedious and inefficient!
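
To see why this is tedious, the following Python sketch (an illustration of my own, not from the slides) enumerates frame-axiom schemas for every fluent/action pair; their number grows multiplicatively:

# Each fluent that an action does not affect needs its own frame axiom, so the
# number of axiom schemas grows with (number of fluents) x (number of actions).
fluents = ["on(a,b,S)", "ontable(c,S)", "clear(b,S)", "handempty(S)", "holding(c,S)"]
actions = ["pickup(c)", "putdown(c)", "stack(c,b)", "unstack(a,b)"]

frame_axioms = []
for p in fluents:
    for a in actions:
        # replace the trailing situation variable S with DO(action, S)
        successor = p[:-2] + f"DO({a},S))"
        frame_axioms.append(f"{p} => {successor}")

print(len(frame_axioms))     # 20, even before filtering out the affected fluents
print(frame_axioms[0])       # on(a,b,S) => on(a,b,DO(pickup(c),S))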

Slide 14: Shortcomings of FOL
–Impractical for problems where the state of the world changes, because we must explicitly state that all other things do not change -- the frame problem.
–No easy way to avoid this: default rules, higher-order logics, etc.
–A classic problem that appears in many situations.

Slide 15: Blocks world as a search problem
States: explicitly represented as before
–each free block is either on(X,Y) or ontable(X), and can also be clear(X)
–up to one held block: holding(X)
–the hand can be free or holding a block
Representing change from state S1 to S2: some predicates get deleted, others added
  pickup(X): (ontable(X) /\ handempty /\ clear(X)) => holding(X)
  remove ontable(X), handempty, clear(X) from S1 and add holding(X) to it

Slide 16: Blocks world rules for searching
pickup(X):    S /\ [ontable(X) /\ handempty /\ clear(X)] => S /\ [holding(X)]
putdown(X):   S /\ [holding(X)] => S /\ [ontable(X) /\ handempty /\ clear(X)]
stack(X,Y):   S /\ [holding(X) /\ clear(Y)] => S /\ [on(X,Y) /\ handempty /\ clear(X)]
unstack(X,Y): S /\ [handempty /\ clear(X) /\ on(X,Y)] => S /\ [holding(X) /\ clear(Y)]
Explicitly state only what changes!
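
A minimal sketch, assuming the set-of-predicates encoding used earlier, of what "delete the matched preconditions, add the effects" looks like when expanding a state during search (the helper name apply_rule is my own):

def apply_rule(state, preconditions, effects):
    """If all preconditions hold, return the successor state obtained by
    deleting the preconditions and adding the effects; else return None."""
    if not preconditions <= state:
        return None
    return (state - preconditions) | effects

s1 = frozenset({("on", "b", "a"), ("clear", "b"), ("clear", "c"),
                ("ontable", "a"), ("ontable", "c"), ("handempty",)})

# pickup(c): delete ontable(c), handempty, clear(c); add holding(c)
s2 = apply_rule(s1,
                preconditions={("ontable", "c"), ("handempty",), ("clear", "c")},
                effects={("holding", "c")})
print(sorted(s2))
# [('clear', 'b'), ('holding', 'c'), ('on', 'b', 'a'), ('ontable', 'a')]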

Slide 17: Heuristic function for the blocks world
Simple heuristic: number of different predicates
  S1: [on(b,a) /\ clear(b) /\ clear(c) /\ ontable(a) /\ ontable(c)]
  G:  [on(c,a) /\ clear(b) /\ clear(a) /\ ontable(a) /\ ontable(b)]
  Difference: 3
–Search tree generated in A* fashion, forward from the start state to the goal state
–Possible loops, repeated states
–Search tries to satisfy all goals simultaneously!
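
One reading of this count (which matches the "Difference: 3" above, though the slide does not spell out the formula) is the number of goal predicates that do not yet hold in the state; a small Python sketch:

s1 = {("on","b","a"), ("clear","b"), ("clear","c"), ("ontable","a"), ("ontable","c")}
g  = {("on","c","a"), ("clear","b"), ("clear","a"), ("ontable","a"), ("ontable","b")}

def h(state, goal):
    # goal literals still missing from the state
    return len(goal - state)

print(h(s1, g))   # 3 -> on(c,a), clear(a), ontable(b) are still missing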

Slide 18: Search tree for the blocks world
[search-tree diagram]
Root: on(b,a) /\ clear(b) /\ clear(c) /\ ontable(a) /\ ontable(c) /\ handempty
–unstack(b,a) leads to: clear(a) /\ clear(c) /\ ontable(a) /\ ontable(c) /\ holding(b)
–pickup(c) leads to: on(b,a) /\ clear(b) /\ holding(c) /\ ontable(a)
Further expansions shown include pickup(c) and stack(c,b).

Slide 19: Search can also be inefficient
Consider solving problems with independent or nearly independent subgoals:
Get a liter of milk, a kilo of apples, and a screwdriver
–initial state: the agent is at home with none of the above
–rules: actions the robot can do:
  from home: go to the office, go to the supermarket, etc.
  in each place: search for tuna, search for bananas, ...
The goal is reached when all the subgoals are satisfied.

Slide 20: Search tree for shopping
[diagram: a search tree from the start state to the goal, branching over action 1 through action n]

Slide 21: Why is the search inefficient?
–Too many actions and states to consider, because of the large branching factor (forward search)
–Difficult to write a good SINGLE heuristic function
–Lack of a global view: no hierarchy between getting out the door and driving to the supermarket
–No direct connection between state and action
–Does not exploit the independence or near-independence of goals.

Slide 22: Planning: key ideas
1. "Open up" the representation of states, goals, and actions. The states of individual objects can now be considered separately:
   on(b,a) /\ clear(b) /\ clear(c) /\ ontable(a) /\ ontable(c)
2. Add actions to the plan when needed, not just incrementally from the initial state
3. Divide-and-conquer for nearly independent goals
[figure: blocks a, b, c, d]

Slide 23: Planning, search, and knowledge-based inferencing
–A special case of search and knowledge-based agents, targeted at dealing with world changes
–A restricted form of rules avoids the frame problem
–Solves problems by trying to recursively satisfy subgoals
–In the worst case (puzzle-like problems) it will do no better than search algorithms.

Slide 24: Planning: representation
–States: conjunctions of literals
–Single goals (one literal) and compound goals
–Actions (rules) consist of three parts:
  action description
  precondition
  effect: add-list and delete-list
–No need for frame axioms: rules only specify what changes
–STRIPS: Stanford Research Institute Problem Solver (1971)
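
A hedged sketch of this three-part representation in Python (the class and field names are my own encoding, not the original STRIPS implementation):

from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    name: str
    precond: frozenset   # literals that must hold before the action
    add: frozenset       # literals added by the action
    delete: frozenset    # literals removed by the action

    def applicable(self, state):
        return self.precond <= state

    def apply(self, state):
        return (state - self.delete) | self.add

pickup_c = Action(
    name="pickup(c)",
    precond=frozenset({("ontable", "c"), ("clear", "c"), ("handempty",)}),
    delete=frozenset({("ontable", "c"), ("clear", "c"), ("handempty",)}),
    add=frozenset({("holding", "c")}),
)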

Slide 25: Blocks world rules for planning   (P: precondition, D: delete-list, A: add-list)
pickup(X):    P&D: ontable(X) /\ handempty /\ clear(X)    A: holding(X)
putdown(X):   P&D: holding(X)                             A: ontable(X) /\ handempty /\ clear(X)
stack(X,Y):   P&D: holding(X) /\ clear(Y)                 A: on(X,Y) /\ handempty /\ clear(X)
unstack(X,Y): P&D: handempty /\ clear(X) /\ on(X,Y)       A: holding(X) /\ clear(Y)

Slide 26: Example of rule applications
DB1: clear(b), on(c,a), clear(c), ontable(a), ontable(b), handempty
  -- unstack(X,Y)  [P&D: handempty, clear(X), on(X,Y);  A: holding(X), clear(Y)]  with X=c, Y=a -->
DB2: clear(a), clear(b), holding(c), ontable(a), ontable(b)
  -- putdown(X)  [P&D: holding(X);  A: ontable(X), clear(X), handempty]  with X=c -->
DB3: clear(a), clear(b), ontable(a), ontable(b), ontable(c), clear(c), handempty
  -- pickup(X)  [P&D: ontable(X), clear(X), handempty;  A: holding(X)]  with X=b -->
DB4: clear(a), ontable(a), ontable(c), clear(c), holding(b)

Slide 27: Example of search space [figure]

Slide 28: Planning: search
–The planner considers subgoals and tries to satisfy each one independently: G: s1 /\ s2 /\ ... /\ sn
–Separate DB and goal-stack representations
–Interaction between goals is handled through the database
–Search can be forward (progression) or backwards (regression)
–The original STRIPS is DFS with a goal stack: first satisfy s1 and its subgoals, then s2, etc.
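
For the backward (regression) direction, a common formulation, not spelled out on the slide, regresses a conjunctive goal through an action as (G - add-list) plus the precondition, provided the action deletes no goal literal and adds at least one; a small Python sketch:

def regress(goal, precond, delete, add):
    """Return the subgoal that must hold before the action, or None if the
    action clobbers part of the goal or is irrelevant to it."""
    if goal & delete or not (goal & add):
        return None
    return (goal - add) | precond

goal = frozenset({("on", "c", "a"), ("ontable", "b")})
# regress through stack(c,a): P and D = {holding(c), clear(a)}, A = {on(c,a), clear(c), handempty}
sub = regress(goal,
              precond=frozenset({("holding", "c"), ("clear", "a")}),
              delete=frozenset({("holding", "c"), ("clear", "a")}),
              add=frozenset({("on", "c", "a"), ("clear", "c"), ("handempty",)}))
print(sorted(sub))
# [('clear', 'a'), ('holding', 'c'), ('ontable', 'b')]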

Slide 29: Stack algorithm
1. Add the conjunction of goals to the stack
2. While the stack is not empty do:
  2.1 if the goal on top matches the current DB, pop it
  2.2 else if it is a compound goal that does NOT match the DB, add each single goal, in some order, to the stack
  2.3 else if it is a single goal, find a rule whose add-list includes the goal, then:
      (a) replace the goal with the instantiated rule
      (b) place the rule's instantiated precondition on top of the stack
  2.4 if it is a rule, record it, remove it from the stack, and update the DB
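
Below is a hedged Python sketch of this goal-stack loop, specialized to the ground blocks world. Several details are my own simplifications, marked in comments: operators are pre-grounded over the blocks (no unification), candidate rules are ordered by how many of their preconditions already hold, a depth bound stands in for full loop handling, and the goal used is the small satisfiable pair on(c,a) /\ ontable(b) chosen for the demo.

BLOCKS = ("a", "b", "c")

def ground_actions():
    acts = []
    for x in BLOCKS:
        acts.append((("pickup", x),
                     frozenset({("ontable", x), ("clear", x), ("handempty",)}),
                     frozenset({("ontable", x), ("clear", x), ("handempty",)}),
                     frozenset({("holding", x)})))
        acts.append((("putdown", x),
                     frozenset({("holding", x)}),
                     frozenset({("holding", x)}),
                     frozenset({("ontable", x), ("clear", x), ("handempty",)})))
        for y in BLOCKS:
            if x != y:
                acts.append((("stack", x, y),
                             frozenset({("holding", x), ("clear", y)}),
                             frozenset({("holding", x), ("clear", y)}),
                             frozenset({("on", x, y), ("clear", x), ("handempty",)})))
                acts.append((("unstack", x, y),
                             frozenset({("handempty",), ("clear", x), ("on", x, y)}),
                             frozenset({("handempty",), ("clear", x), ("on", x, y)}),
                             frozenset({("holding", x), ("clear", y)})))
    return acts

ACTIONS = ground_actions()        # each entry: (name, precondition, delete-list, add-list)

def plan(state, stack, steps, depth=60):
    if depth == 0:                                    # crude loop protection
        return None
    if not stack:
        return steps, state
    top = stack[-1]
    if isinstance(top, frozenset):                    # steps 2.1-2.3: a goal
        if top <= state:                              # 2.1 goal already holds in the DB
            return plan(state, stack[:-1], steps, depth - 1)
        if len(top) > 1:                              # 2.2 compound goal: push its conjuncts
            unsat = [frozenset({g}) for g in sorted(top) if g not in state]
            return plan(state, stack + unsat, steps, depth - 1)
        (g,) = top                                    # 2.3 single goal: pick an achieving rule
        cands = [a for a in ACTIONS if g in a[3]]
        cands.sort(key=lambda a: len(a[1] - state))   # my ordering heuristic, not STRIPS'
        for rule in cands:
            result = plan(state, stack[:-1] + [rule, rule[1]], steps, depth - 1)
            if result is not None:
                return result
        return None
    name, pre, dele, add = top                        # step 2.4: an instantiated rule
    if not pre <= state:                              # precondition clobbered:
        return None                                   # give up on this branch (simplification)
    return plan((state - dele) | add, stack[:-1], steps + [name], depth - 1)

start = frozenset({("on", "b", "a"), ("clear", "b"), ("clear", "c"),
                   ("ontable", "a"), ("ontable", "c"), ("handempty",)})
goal = frozenset({("on", "c", "a"), ("ontable", "b")})

print(plan(start, [goal], [])[0])
# [('unstack', 'b', 'a'), ('putdown', 'b'), ('pickup', 'c'), ('stack', 'c', 'a')]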

Slide 30: See the example in Lecture 12-ex (from slide 1 to slide 23).

Slide 31: STRIPS planning properties
–Can be progressive (forward) or regressive (backwards), by changing the order in which goals and rules are added to the stack
–Not all sequences are optimal
–STRIPS is not complete: not all possible plans are derivable with it
–STRIPS imposes a total ordering on plans -- this can lead to much wasted search
–No hierarchical planning

Slide 32: A problem that STRIPS cannot solve
"Swap two memory registers r1 and r2 whose initial contents are a and b respectively. Assume a third register r3 is available."
start: contents(r1,a), contents(r2,b), contents(r3,0)
goal:  contents(r1,b), contents(r2,a)
operation: assign(X,A,Y,B)
  P: contents(X,A), contents(Y,B)
  D: contents(Y,B)
  A: contents(Y,A)
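
A small illustration (my own, following the slides' argument) of the interference: whichever of the two goals is achieved first, the only directly relevant assign instance destroys the value the remaining goal needs.

def assign(x, va, y, vb):
    """Ground instance of assign(X,A,Y,B): copy value va from register x into register y."""
    pre = {("contents", x, va), ("contents", y, vb)}
    return pre, {("contents", y, vb)}, {("contents", y, va)}      # P, D, A

start = {("contents", "r1", "a"), ("contents", "r2", "b"), ("contents", "r3", "0")}

# Goal order 1: first achieve contents(r1,b) -- only assign(r2,b,r1,a) adds it.
pre, dele, add = assign("r2", "b", "r1", "a")
s = (start - dele) | add
print(("contents", "r1", "a") in s)          # False: the only copy of a is gone,
                                             # so contents(r2,a) can no longer be achieved
# Goal order 2 is symmetric: assign(r1,a,r2,b) destroys the only copy of b.
# The working plan must first save a into r3, which no stated goal demands.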

Slide 33: Fail!

Slide 34: Non-linear planning
–Instead of satisfying goals sequentially, try to satisfy each one separately, and see what needs to be achieved to do so
–If a contradiction occurs, prune that branch
–Need to deal with interactions between goals
–Regression: foresee what must hold for a rule to be applied before applying it
(partially covered; slides 24 to 33 of Lecture 12-ex)

