1 Artificial Intelligence: Planning
Lecture 6
- Problems with state space search
- Planning Operators
- A Simple Planning Algorithm
- (Game Playing)

2 AI Planning
- Planning concerns the problem of finding a sequence of actions to achieve some goal.
- The action sequence will be the system's plan.
- State-space search techniques, discussed in the previous lecture, may be viewed as the simplest form of planning.
  - Based on rules that specify, for each possible action, how the problem state changes.
- But we need to consider further how to represent these state-change rules.

3 Reminder of Robot Planning expressed as search..

    [Diagram: a search tree of world states (Me, Robot, Beer in rooms), with branches labelled "Robot opens door", "Robot picks up beer", "Robot moves to next room", etc.]

4 Problem State
- How do we capture how the different possible actions change the state of the world (the problem state)?
- For the "Jugs" problem, the problem state was just a pair of numbers, so we could specify explicitly how it changed.
- For more complex problems, representing the problem state requires specifying all the (relevant) things that are "true".
- Can be done using statements in predicate logic:
    in(john, room1)
    open_door(room1, room2)
- Note how we give objects in the world unique labels (e.g., room1).

5 Problem State
- So, for the simple robot planning problem we might have an initial state described by:
    in(robot, room1)
    door_closed(room1, room2)
    in(john, room1)
    in(beer, room2)
- And the target state must include:
    in(beer, room1)
- (But there are many different ways of formulating the same problem.)
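Looking ahead to the implementation on later slides, these states can be written as Prolog lists of facts. A minimal sketch (the predicate names initial/1 and target/1 are my own, not from the slides):

    % Initial and target states as lists of facts.
    initial([in(robot, room1), door_closed(room1, room2),
             in(john, room1), in(beer, room2)]).
    target([in(beer, room1)]).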

6 Representing Actions
- We now specify what the possible actions are (with capitals to indicate variables):
    move(R1, R2)
    carry(R1, R2, Object)
    open(R1, R2)   (open door between R1 and R2)
- For each action, we need to specify precisely:
  - When it is allowed.
    - E.g., can only pick something up when in the same room as that object.
  - What the change in the problem state will be.

7 Planning Operators
- To do this we specify, for each action:
  - A list of facts that must be true before the action is possible. (Preconditions)
  - A list of facts made true by the action. (Add list)
  - A list of facts made false by the action. (Delete list)
- E.g., carry(R1, R2, Object)
    pre:    door_open(R1, R2), in(robot, R1), in(Object, R1)
    add:    in(robot, R2), in(Object, R2)
    delete: in(robot, R1), in(Object, R1)
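The slides only spell out carry; for completeness, here are plausible specifications for the other two actions in the same notation (a sketch, not from the slides, so the exact formulation may differ):

    move(R1, R2)
      pre:    door_open(R1, R2), in(robot, R1)
      add:    in(robot, R2)
      delete: in(robot, R1)

    open(R1, R2)
      pre:    door_closed(R1, R2), in(robot, R1)
      add:    door_open(R1, R2)
      delete: door_closed(R1, R2)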

8 Planning Operators
- We can now check when an operator may be applied, and what the new state is.
- Current state:
    in(robot, room1), door_open(room1, room2), in(beer, room1)
- Action:
    carry(room1, room2, beer)
- New state:
    in(robot, room2), door_open(room1, room2), in(beer, room2)

9 Searching for a Solution
- How do we now search for a sequence of actions that gets us from the initial to the target state?
- Can simply use standard search techniques discussed last week.
- We can define a rule that lets us find possible "successor" nodes in our search tree.
  - To find successor NewState of State:
    - Find an operator with preconditions satisfied in State.
    - Add all the facts in its Add list to State.
    - Delete all the facts in its Delete list from State.
- We then use standard depth/breadth first search.

10 Towards an implementation
- Express plan operators as Prolog facts like:
    % op(Action, Preconds, Add, Delete)
    op(carry(R1, R2, O),
       [door_open(R1, R2), in(r, R1), in(O, R1)],
       [in(r, R2), in(O, R2)],
       [in(r, R1), in(O, R1)]).
- Define a successor rule:
    successor(State, New) :-
        op(Action, Pre, Add, Delete),
        satisfied(Pre, State),
        additems(State, Add, Temp),
        delitems(Temp, Delete, New).
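The slides leave satisfied/2, additems/3 and delitems/3 undefined. A minimal sketch of plausible definitions (my assumption, not from the slides):

    % satisfied(Pre, State): every fact in Pre is present in State.
    satisfied([], _).
    satisfied([Fact|Rest], State) :-
        member(Fact, State),
        satisfied(Rest, State).

    % additems(State, Add, New): New is State plus the facts in Add.
    additems(State, [], State).
    additems(State, [Fact|Rest], New) :-
        additems([Fact|State], Rest, New).

    % delitems(State, Delete, New): New is State minus the facts in Delete.
    delitems(State, [], State).
    delitems(State, [Fact|Rest], New) :-
        ( select(Fact, State, Temp) -> true ; Temp = State ),
        delitems(Temp, Rest, New).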

11
- Now use a simple search algorithm.
- The simplest just exploits Prolog's depth-first search:
    search(State, State).
    search(Initial, Target) :-
        successor(Initial, Next),
        search(Next, Target).

    ?- search([in(r, room1), ..], [in(r, room1), in(beer, room1), …]).
- Problems..
  - Order of facts in the target state is significant.
  - Doesn't yet tell us what the plan IS. Just says "yes" if a plan exists.
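Both listed problems have simple fixes. A sketch (my extension, not from the slides): use satisfied/2 in the base case so fact order no longer matters, and thread an action list through the search so the plan is returned.

    % successor/3: like successor/2, but also reports the action applied.
    successor(State, Action, New) :-
        op(Action, Pre, Add, Delete),
        satisfied(Pre, State),
        additems(State, Add, Temp),
        delitems(Temp, Delete, New).

    % search(State, Goals, Plan): Plan is a list of actions taking State
    % to some state in which every goal in Goals holds.
    search(State, Goals, []) :-
        satisfied(Goals, State).
    search(State, Goals, [Action|Plan]) :-
        successor(State, Action, Next),
        search(Next, Goals, Plan).

Note this still inherits Prolog's depth-first behaviour, so it can loop forever by revisiting states; a practical version would also carry a list of visited states.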

12 Forwards versus Backwards..
- Can search for a solution forwards (from the start state) or backwards (from the target).
- Backwards search:
    search(Initial, Target) :-
        successor(Previous, Target),
        search(Initial, Previous).
  - Finds actions that get you to the Target.
  - Works out the state you'd have to be in for that action to apply.
  - Then searches for actions that get you to that intermediate state.
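Note that successor/2 as defined above cannot actually run with its first argument unbound, so backward search is usually implemented by goal regression: computing the earlier goal set directly from the operator. A sketch of such a rule (my assumption, not from the slides; subtract/3 and union/3 are SWI-Prolog list-library predicates):

    % predecessor(Goals, Action, Earlier): to reach a state satisfying
    % Goals via Action, the earlier state must satisfy Earlier.
    predecessor(Goals, Action, Earlier) :-
        op(Action, Pre, Add, Delete),
        member(Goal, Goals), member(Goal, Add),     % Action achieves a goal
        \+ (member(G, Goals), member(G, Delete)),   % and deletes none of them
        subtract(Goals, Add, Rest),                 % goals Action didn't add
        union(Pre, Rest, Earlier).                  % plus its preconditions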

13 Problems with Simple Search
- Search is "blind" - we consider every action that can be done in the current state, even if it is completely irrelevant to the goal.
  - E.g., if the robot could clap, jump, and roll over, we would consider paths in the search tree starting with these actions, as well as opening the door into the other room.
- Backward search helps a little - but it considers ALL actions that end up in the target state, not focusing on those that start in a state more similar to the initial one.

14 Means-ends Analysis (MEA)
- An early planning algorithm that attempted to address these issues.
- Focus the search on actions that reduce the difference between the current state and the target.
- Combine forward and backward reasoning.
  - Consider actions that can't immediately apply in the current state.
  - Getting to a state where a useful action can be applied can be set as a new subproblem to solve.

15 MEA algorithm
- Find a useful action..
- Then set as new subproblems: getting to a state where that action can apply, and getting to the target from the state resulting from that action.

    Initial State --preplan--> Mid State 1 --action--> Mid State 2 --postplan--> Target State

16 MEA Algorithm in detail
- To find a plan from Initial to Target:
  - If all goals in Target are true in Initial, succeed.
  - Otherwise:
    - Select an unsolved goal from the target state.
    - Find an Action whose add list contains that goal.
    - Enable Action by finding a plan (preplan) that achieves Action's preconditions. Let midstate1 be the result of applying that plan to the initial state.
    - Apply Action to midstate1 to give midstate2.
    - Find a plan (postplan) from midstate2 to the target state.
    - Return a plan consisting of preplan, Action and postplan.
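This recursion maps quite directly onto Prolog. A minimal sketch (my own, reusing the op/4 facts and the helper predicates sketched earlier; it does no loop-checking, so it can recurse forever on some problems):

    % mea(State, Goals, Plan, Final): Plan takes State to Final,
    % a state in which every goal in Goals holds.
    mea(State, Goals, [], State) :-
        satisfied(Goals, State).                  % all goals already true
    mea(State, Goals, Plan, Final) :-
        member(Goal, Goals),
        \+ member(Goal, State),                   % select an unsolved goal
        op(Action, Pre, Add, Delete),
        member(Goal, Add),                        % an action that adds it
        mea(State, Pre, PrePlan, Mid1),           % preplan: enable the action
        additems(Mid1, Add, Temp),                % apply the action...
        delitems(Temp, Delete, Mid2),             % ...giving midstate2
        mea(Mid2, Goals, PostPlan, Final),        % postplan: reach the target
        append(PrePlan, [Action|PostPlan], Plan).

For example, with the hypothetical initial/1 fact sketched earlier (and assuming op/4 facts for move and open have been added alongside carry):

    ?- initial(S), mea(S, [in(beer, room1)], Plan, _).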

17 A little on Game Playing
- Search techniques may also be applied to game playing problems (e.g., board games).
- The difference is that we have two players, each with opposing goals.
- We can still express this as a search tree.
- But the way we search has to be a bit different.

18 Search Tree

    [Diagram: a game tree with alternating levels - Player 1's moves, then Player 2's moves, then Player 1's moves, etc.]

19 Game Playing
- The essence of game playing is how to choose a move that will maximise your chances of winning, on the assumption that your opponent will always make the move that is best for them.
- One algorithm for this is "minimax".
- A form of best-first search, scoring game states according to how close they are to a solution, but with the assumption that the opponent will try to minimise your "score".
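The classic Prolog formulation of minimax (after Bratko) is a useful sketch here. The game-specific predicates moves/2 (legal successor positions), staticval/2 (static score of a leaf position), and min_to_move/1 / max_to_move/1 (whose turn it is in a position) are assumed, not defined here:

    % minimax(Pos, BestSucc, Val): BestSucc is the best successor of Pos;
    % Val is its value assuming best play by both sides.
    minimax(Pos, BestSucc, Val) :-
        moves(Pos, PosList), !,          % not a leaf: examine all successors
        best(PosList, BestSucc, Val).
    minimax(Pos, _, Val) :-
        staticval(Pos, Val).             % a leaf: score it statically

    best([Pos], Pos, Val) :-
        minimax(Pos, _, Val), !.
    best([Pos1|PosList], BestPos, BestVal) :-
        minimax(Pos1, _, Val1),
        best(PosList, Pos2, Val2),
        betterof(Pos1, Val1, Pos2, Val2, BestPos, BestVal).

    % betterof: keep whichever position suits the player who chose it.
    betterof(Pos0, Val0, _, Val1, Pos0, Val0) :-
        min_to_move(Pos0), Val0 > Val1, !.   % MAX chose Pos0: prefer higher
    betterof(Pos0, Val0, _, Val1, Pos0, Val0) :-
        max_to_move(Pos0), Val0 < Val1, !.   % MIN chose Pos0: prefer lower
    betterof(_, _, Pos1, Val1, Pos1, Val1).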

20 Summary
- Planning: finding a sequence of actions to achieve a goal.
- Actions specified in terms of preconditions, add list, delete list.
- Can then use standard search techniques, or means-ends analysis, which focuses the search on actions that achieve goals in the target.
- Game playing - have to consider the opponent.

