Artificial Intelligence: Planning (Lecture 6)
Problems with state space search; Planning Operators; A Simple Planning Algorithm; (Game Playing)

AI Planning
- Planning concerns the problem of finding a sequence of actions to achieve some goal.
- That action sequence is the system's plan.
- The state-space search techniques, discussed in lecture 6, may be viewed as the simplest form of planning.
  - They are based on rules that specify, for each possible action, how the problem state changes.
- But we need to consider further how to represent these state-change rules.

Reminder of Robot Planning expressed as search
[Diagram: a search tree of world states, each showing Me, the Robot and the Beer. Arcs are labelled with actions such as "Robot opens door", "Robot picks up", "Robot moves to next room", and so on.]

Problem State
- How do we capture how the different possible actions change the state of the world (the problem state)?
- For the "Jugs" problem the problem state was just a pair of numbers, so we could specify explicitly how it changed.
- For more complex problems, representing the problem state requires specifying all the (relevant) things that are "true".
- This can be done using statements in predicate logic:
  - in(john, room1)
  - open_door(room1, room2)
- Note how we give objects in the world unique labels (e.g., room1).

Problem State
- So, for the simple robot planning problem we might have an initial state described by:
  - in(robot, room1)
  - door_closed(room1, room2)
  - in(john, room1)
  - in(beer, room2)
- And the target state must include:
  - in(beer, room1)
- (But there are many different ways of formulating the same problem.)
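As a concrete illustration, the same states can be written down as Prolog lists of facts, in the style used in the implementation slides later in this lecture (there, r abbreviates the robot). This is only a sketch; the predicate names initial_state/1 and target_state/1 are my own, not part of the slides.

  % Initial and target states as lists of facts (a sketch).
  initial_state([in(r, room1), door_closed(room1, room2), in(john, room1), in(beer, room2)]).

  % The target only lists the facts we care about.
  target_state([in(beer, room1)]).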

Representing Actions
- We now specify what the possible actions are (with capitals to indicate variables):
  - move(R1, R2)
  - carry(R1, R2, Object)
  - open(R1, R2) (open the door between R1 and R2)
- For each action, we need to specify precisely:
  - When it is allowed.
    - E.g., the robot can only pick something up when it is in the same room as that object.
  - What the change in the problem state will be.

Planning Operators
- To do this we specify, for each action:
  - A list of facts that must be true before the action is possible (Preconditions).
  - A list of facts made true by the action (Add list).
  - A list of facts made false by the action (Delete list).
- E.g., carry(R1, R2, Object)
  - pre: door_open(R1, R2), in(robot, R1), in(Object, R1)
  - add: in(robot, R2), in(Object, R2)
  - delete: in(robot, R1), in(Object, R1)
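The slides only spell out carry; as a sketch, the other two actions might be written in the same notation as follows (the exact preconditions here are an assumption, not taken from the slides):
- move(R1, R2)
  - pre: door_open(R1, R2), in(robot, R1)
  - add: in(robot, R2)
  - delete: in(robot, R1)
- open(R1, R2)
  - pre: door_closed(R1, R2), in(robot, R1)
  - add: door_open(R1, R2)
  - delete: door_closed(R1, R2)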

Planning Operators
- We can now check when an operator may be applied, and what the new state is.
- Current state:
  - in(robot, room1), door_open(room1, room2), in(beer, room1)
- Action:
  - carry(room1, room2, beer)
- New state:
  - in(robot, room2), door_open(room1, room2), in(beer, room2)

Searching for a Solution
- How do we now search for a sequence of actions that gets us from the initial state to the target state?
- We can simply use the standard search techniques discussed last week.
- We define a rule that lets us find possible "successor" nodes in our search tree.
  - To find a successor NewState of State:
    - Find an operator whose preconditions are satisfied in State.
    - Add all the facts in its Add list to State.
    - Delete all the facts in its Delete list from State.
- We then use standard depth-first/breadth-first search.

Towards an implementation
- Express plan operators as Prolog facts like:

  % op(Action, Preconditions, AddList, DeleteList)
  op(carry(R1, R2, O),
     [door_open(R1, R2), in(r, R1), in(O, R1)],
     [in(r, R2), in(O, R2)],
     [in(r, R1), in(O, R1)]).

- Define a successor rule:

  successor(State, New) :-
      op(Action, Pre, Add, Delete),
      satisfied(Pre, State),
      additems(State, Add, Temp),
      delitems(Temp, Delete, New).
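The slides use satisfied/2, additems/3 and delitems/3 without defining them. A minimal sketch of plausible definitions (my assumption of their intent, not taken from the slides; member/2 and select/3 are standard library predicates) is:

  % satisfied(Preconds, State): every precondition is a member of the state.
  satisfied([], _State).
  satisfied([P | Ps], State) :-
      member(P, State),
      satisfied(Ps, State).

  % additems(State, Add, New): add each fact, avoiding duplicates.
  additems(State, [], State).
  additems(State, [F | Fs], New) :-
      ( member(F, State) -> Mid = State ; Mid = [F | State] ),
      additems(Mid, Fs, New).

  % delitems(State, Delete, New): remove each fact in the delete list.
  delitems(State, [], State).
  delitems(State, [F | Fs], New) :-
      ( select(F, State, Mid) -> true ; Mid = State ),
      delitems(Mid, Fs, New).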

- Now use a simple search algorithm.
- The simplest just exploits Prolog's depth-first search:

  search(State, State).
  search(Initial, Target) :-
      successor(Initial, Next),
      search(Next, Target).

  ?- search([in(r, room1), ..], [in(r, room1), in(beer, room1), ...]).

- Problems:
  - The order of facts in the target state is significant.
  - It doesn't yet tell us what the plan IS; it just says "yes" if a plan exists.
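One natural way to address both problems (a sketch of my own, not slide content; the predicate name plan_search/3 is hypothetical) is to accumulate the actions as the search proceeds and to test the target goals by membership rather than by exact match, reusing op/4, satisfied/2, additems/3 and delitems/3 from above:

  % plan_search(State, Goals, Plan): Plan is a list of actions leading from
  % State to a state in which every goal in Goals holds.
  plan_search(State, Goals, []) :-
      satisfied(Goals, State).
  plan_search(State, Goals, [Action | Rest]) :-
      op(Action, Pre, Add, Delete),
      satisfied(Pre, State),
      additems(State, Add, Temp),
      delitems(Temp, Delete, Next),
      plan_search(Next, Goals, Rest).

  % e.g. ?- initial_state(S), plan_search(S, [in(beer, room1)], Plan).

Because satisfied/2 checks membership rather than list equality, the order of facts in the target no longer matters, and the plan is returned explicitly. This is still blind depth-first search, so a visited-states check would be needed to guarantee termination.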

Forwards versus Backwards
- We can search for a solution forwards (from the start state) or backwards (from the target).
- Backwards search:

  search(Initial, Target) :-
      successor(Previous, Target),
      search(Initial, Previous).

  - Finds actions that get you to the Target.
  - Works out the state you'd have to be in for that action to apply.
  - Then searches for actions that get you to that intermediate state.

Problems with Simple Search
- Search is "blind": we consider every action that can be done in the current state, even if it is completely irrelevant to the goal.
  - E.g., if the robot could clap, jump, and roll over, we would consider paths in the search tree starting with these actions, as well as opening the door into the other room.
- Backward search helps a little, but it considers ALL actions that end up in the target state, without focusing on those that start in a state more similar to the initial one.

Means-ends Analysis (MEA)
- An early planning algorithm that attempted to address these issues.
- Focus the search on actions that reduce the difference between the current state and the target.
- Combine forward and backward reasoning:
  - Consider actions that can't immediately be applied in the current state.
  - Getting to a state where a useful action can be applied is set as a new subproblem to solve.

MEA algorithm
- Find a useful action.
- Then set as new subproblems: getting to a state where that action can apply, and getting to the target from the state resulting from that action.

  [Diagram: Initial State --preplan--> Mid State 1 --action--> Mid State 2 --postplan--> Target State]

MEA Algorithm in detail
- To find a plan from Initial to Target:
  - If all goals in Target are true in Initial, succeed.
  - Otherwise:
    - Select an unsolved goal from the target state.
    - Find an Action that adds that goal.
    - Enable Action by finding a plan (the preplan) that achieves Action's preconditions. Let midstate1 be the result of applying that plan to the initial state.
    - Apply Action to midstate1 to give midstate2.
    - Find a plan (the postplan) from midstate2 to the target state.
    - Return a plan consisting of preplan, Action and postplan.
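A minimal Prolog sketch of these steps, reusing op/4, satisfied/2, additems/3 and delitems/3 from earlier (the names mea/3 and apply_plan/3 are my own; this naive version does no loop checking and is only meant to mirror the algorithm above):

  % mea(State, Goals, Plan): Plan achieves every goal in Goals from State.
  mea(State, Goals, []) :-
      satisfied(Goals, State).
  mea(State, Goals, Plan) :-
      member(Goal, Goals),
      \+ member(Goal, State),                  % select an unsolved goal
      op(Action, Pre, Add, _Delete),
      member(Goal, Add),                       % find an action that adds it
      mea(State, Pre, PrePlan),                % preplan: achieve its preconditions
      apply_plan(State, PrePlan, MidState1),
      apply_plan(MidState1, [Action], MidState2),
      mea(MidState2, Goals, PostPlan),         % postplan: finish the remaining goals
      append(PrePlan, [Action | PostPlan], Plan).

  % apply_plan(State, Actions, Final): apply a sequence of actions to a state.
  apply_plan(State, [], State).
  apply_plan(State, [Action | Rest], Final) :-
      op(Action, _Pre, Add, Delete),
      additems(State, Add, Temp),
      delitems(Temp, Delete, Next),
      apply_plan(Next, Rest, Final).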

A little on Game Playing
- Search techniques may also be applied to game-playing problems (e.g., board games).
- The difference is that we have two players, each with opposing goals.
- We can still express this as a search tree.
- But the way we search has to be a bit different.

Search Tree
[Diagram: a game tree whose levels alternate between Player 1's moves, Player 2's moves, Player 1's moves, etc.]

Game Playing
- The essence of game playing is how to choose a move that will maximise your chances of winning, on the assumption that your opponent will always make the move that is best for them.
- One algorithm for this is "minimax".
- It is a form of best-first search, scoring game states according to how close they are to a solution, but with the assumption that the opponent will try to minimise your "score".
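For illustration, here is a standard minimax formulation in Prolog, in the style of Bratko's textbook version (a sketch only; moves/2, staticval/2, min_to_move/1 and max_to_move/1 are game-specific predicates that the caller must supply, and are not defined in these slides):

  % minimax(Pos, BestSucc, Val): BestSucc is the best successor of Pos,
  % and Val is its backed-up value.
  minimax(Pos, BestSucc, Val) :-
      moves(Pos, PosList), !,                  % Pos has legal successor positions
      best(PosList, BestSucc, Val).
  minimax(Pos, _, Val) :-
      staticval(Pos, Val).                     % leaf position: evaluate statically

  best([Pos], Pos, Val) :-
      minimax(Pos, _, Val), !.
  best([Pos1 | PosList], BestPos, BestVal) :-
      minimax(Pos1, _, Val1),
      best(PosList, Pos2, Val2),
      betterof(Pos1, Val1, Pos2, Val2, BestPos, BestVal).

  % betterof: pick the position preferred by whoever is choosing the move.
  betterof(Pos0, Val0, _Pos1, Val1, Pos0, Val0) :-
      min_to_move(Pos0), Val0 > Val1, !.       % MAX is choosing: prefer the higher value
  betterof(Pos0, Val0, _Pos1, Val1, Pos0, Val0) :-
      max_to_move(Pos0), Val0 < Val1, !.       % MIN is choosing: prefer the lower value
  betterof(_, _, Pos1, Val1, Pos1, Val1).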

Summary
- Planning: finding a sequence of actions to achieve a goal.
- Actions are specified in terms of preconditions, an add list, and a delete list.
- We can then use standard search techniques, or means-ends analysis, which focuses the search on actions that achieve goals in the target.
- Game playing: we also have to consider the opponent.