An Overview of Goals and Goal Selection
Justin L. Blount
Knowledge Representation Lab, Texas Tech University
August 24, 2007

Outline
– Goals
– Goals in ASP agents
– Goals in Situation Calculus agents
– Goals in BDI agents

Goals
What question do I want to answer?
– What do I do now? (goal selection + planning)
– What do I want? (goal selection)
– How do I get it? (planning)
What is a goal? How do we choose/select a goal?
Goal, n.: the result or achievement toward which effort is directed; aim; end. (dictionary)

Goals in ASP agents (Baral, Gelfond, 2000)
Assume that at each moment t the agent's memory contains the domain description and a partially ordered set G of the agent's goals. A goal is a finite set of fluent literals the agent wants to make true. The partial ordering corresponds to comparative importance.
Agent loop:
1. Observe the world.
2. Select one of the most important goals g in G to be achieved.
3. Find a plan a1, ..., an to achieve g.
4. Execute a1.
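The loop above can be sketched in Python. This is a minimal illustration, not the original formulation: the planner and executor are hypothetical stand-ins, and the partial order over goals is flattened to an integer priority (lower = more important).

```python
# Sketch of the observe-select-plan-execute loop from the slide.
# Goals are dicts with an illustrative integer "priority" field.

def select_goal(goals):
    """Step 2: pick one of the most important goals, i.e. one whose
    priority value is minimal (lower number = more important)."""
    top = min(g["priority"] for g in goals)
    return next(g for g in goals if g["priority"] == top)

def agent_step(goals, plan_for, execute):
    """One pass of the loop; observation is assumed to have already
    updated the agent's memory. `plan_for` and `execute` are
    hypothetical planner/actuator callbacks."""
    goal = select_goal(goals)      # step 2: select a most important goal
    plan = plan_for(goal)          # step 3: find a plan a1, ..., an
    if plan:
        execute(plan[0])           # step 4: execute only the first action
    return goal, plan
```

Note that, as on the slide, only the first action of the plan is executed; the agent then re-observes the world before continuing.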

Goals in ASP agents (Balduccini, 2005)
Agent loop:
1. Observe the world.
2. Select a goal g.
3. Find a plan a1, ..., an to achieve g.
4. Execute a1.
The selection of the current goal takes into account information such as the partial ordering of goals, the history of the domain, the previous goal, and the action description (e.g., to evaluate how hard or time-consuming a goal will be to achieve).

Goals in Situation Calculus agents (Shapiro, Lesperance, 2001)
1. Consistent set of goals: if the agent gets a request for g and it already has the goal ¬g, then it does not adopt the goal; otherwise its goal state would become inconsistent and it would want everything.
2. Paths to a goal are finite: the maintenance goal "X is always true" is not finite, but "X is true for the next 100 time steps" is.

Goals in Situation Calculus agents (Shapiro, Lesperance, 2005)
Expansion: an agent's goals are expanded when it is requested to do something by another agent, unless it currently has a contradicting goal.
Contraction: if an agent's owner issues REQUEST(x) and later changes his mind, the owner uses CANCEL_REQUEST(x). This can only be used if a REQUEST was executed previously.
Persistence: a goal x persists over an action a if a is not CANCEL_REQUEST and the agent knows that if x holds, then a does not change its value.

Goals in Situation Calculus agents (Sardina, Shapiro, 2003)
Prioritized goals: each goal has a priority level. An agent will attempt to achieve as many goals as possible, in priority order, even if it does not know of a plan that is guaranteed to achieve all the goals.
Priorities are strict: a strategy that achieves one high-level goal is preferred to a strategy that achieves many (or all) lower-level goals.

Goals in Situation Calculus agents (Shapiro, Lesperance, 2007)
1. An agent should drop goals that it believes are impossible to achieve.
2. However, if the agent revises its beliefs, it may later come to believe that it was mistaken about the impossibility of achieving the goal. In that case, the agent should readopt the goal.
3. If an agent receives a request to adopt goal X, it will adopt it if X does not conflict with a higher-priority goal.

Goals in Situation Calculus agents (Shapiro, Lesperance, 2007)
1. Goals should be compatible with beliefs: the situations that the agent wants to actualize should be on a path from a situation that the agent considers possible.
2. Instead of checking whether each individual goal is consistent with the beliefs, check whether the set of all goals is consistent with them.
3. It could be the case that each goal is individually compatible with the agent's beliefs but the set of goals is incompatible, so some of them should be dropped.
4. Which ones should be dropped?
   1. Each agent has a preorder over goal formulae that corresponds to a prioritization of goals.
   2. The agent chooses a maximal subset respecting this ordering.
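One natural reading of "choose a maximal subset respecting this ordering" is a greedy sweep from highest to lowest priority, keeping each goal that stays jointly compatible with those already kept. The sketch below assumes that reading; `compatible` is a hypothetical oracle for joint consistency with the agent's beliefs, and priorities are flattened to integers (lower = more important).

```python
# Greedy selection of a priority-respecting maximal consistent subset.
# `compatible(goal_list)` is a hypothetical joint-consistency check.

def select_goals(goals, compatible):
    kept = []
    # Walk the goals from most to least important; a goal is kept only
    # if the set so far remains jointly compatible after adding it.
    for g in sorted(goals, key=lambda g: g["priority"]):
        if compatible(kept + [g]):
            kept.append(g)
    return kept
```

With this scheme, a low-priority goal that clashes with an already-kept higher-priority goal is simply skipped, which matches point 3 above: individually fine goals can still be dropped for joint incompatibility.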

Goals in BDI agents (d'Inverno, Kinny, Luck, Wooldridge, 1998)
Goals correspond to the tasks allocated to the agent. From their agent loop: generate new possible desires (tasks) by finding plans whose trigger event matches an event in the event queue. A plan consists of subgoals or primitive actions.
An agent with the goal "achieve PHI" has the goal of performing some (possibly empty) sequence of actions such that after these actions are performed, PHI will be true.
An agent with the goal "query PHI" has the goal of performing some (possibly empty) sequence of actions such that after it performs these actions, it will know whether or not PHI is true.
Thus an agent can have the goal either of achieving a state of affairs or of determining whether the state of affairs holds.

Goals in BDI agents (Thangarajah, Padgham, Harland, 2002)
Desires may be inconsistent; goals must be consistent.
If it is not possible to immediately form an intention towards a goal, then the goal is simply dropped. It certainly seems more reasonable that the agent have the ability to 'remember' a goal, and to form an intention regarding how to achieve it when the environment is conducive to doing so.
How to choose between two mutually inconsistent goals?
– If a new goal X is more important than an existing goal Y with which it conflicts, then Y should be aborted and X pursued. Otherwise (X is less important than or as important as Y), X is not adopted. (Too naive.)
– If there is no preference ordering between two goals, then we should prefer a goal that is already adopted over one that is not.

Goals in BDI agents (Winikoff, Padgham, Harland, 2001)
Problem: BDI is difficult to explain and teach. Solution: simplify.
– Explicitly represent goals (instead of desires). This is vital in order to enable selection between competing goals, dealing with conflicting goals, and correctly handling goals which cannot be pursued at the time they are created and must be delayed.
– Highlight goal selection as an important issue. By contrast, BDI systems simply assume the existence of a selection function.
– Avoidance goals, or safety constraints (e.g., "never move the table while the robot is drilling").

Goals in BDI agents (Winikoff, Padgham, Harland, Thangarajah, 2002)
Goals have two aspects: declarative and procedural.
– Declarative: to reason about important properties of goals.
– Procedural: to ensure goals can be achieved efficiently in dynamic environments.
Reasoning about multiple goals (simple case: two goals). Plans to achieve both may be independent; it is irrational to try to achieve both X and ¬X simultaneously.
– Necessarily consistent: iff all possible subgoals do not conflict.
– Necessarily inconsistent: iff some necessary subgoals conflict.
– Possibly inconsistent: choose a consistent means of achieving both.
– Necessarily support: both share a common necessary subgoal.
– Possibly support: there exists a common subgoal.
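The consistency categories above can be illustrated by enumerating the possible means (subgoal sets) for each of two goals and checking every pairing. This is an illustrative reconstruction, not the paper's formalism; `conflicts` is a hypothetical pairwise conflict test between subgoals.

```python
from itertools import product

# Classify a pair of goals by checking every combination of means:
# no clashing pair -> necessarily consistent; every pair clashes ->
# necessarily inconsistent; a mix -> possibly inconsistent (a
# consistent choice of means exists and should be preferred).

def classify(means1, means2, conflicts):
    def clash(m1, m2):
        return any(conflicts(a, b) for a, b in product(m1, m2))
    clashes = [clash(m1, m2) for m1, m2 in product(means1, means2)]
    if not any(clashes):
        return "necessarily consistent"
    if all(clashes):
        return "necessarily inconsistent"
    return "possibly inconsistent"
```

For "possibly inconsistent" goals the agent would then restrict itself to one of the non-clashing pairs of means, which is exactly the "choose a consistent means of achieving both" recommendation above.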

Goals in BDI agents (Pokahr, Braubach, Lamersdorf, 2005)
Goal types: perform, achieve, query, maintain.
– Active goals are currently pursued.
– Options are inactive because the agent explicitly wants them to be (e.g., the option conflicts with an active goal).
– Suspended goals must not be pursued because their context is invalid; they remain inactive until their context becomes valid again and they become options.
Deliberation is executed when one of the following occurs:
– Creation condition: defines when a new goal instance is created.
– Context condition: defines when a goal's execution should be suspended.
– Drop condition: defines when a goal instance is removed.
– Inhibition arcs define a negative relationship between two goals; they are used in deliberation to constrain which goals are reconsidered.
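The lifecycle sketched above can be expressed as a small state function. This is a simplified illustration of the slide's states and conditions, not the exact Jadex-style semantics; the condition predicates and the `inhibited_by_active` flag are hypothetical.

```python
# Sketch of per-goal deliberation: drop condition wins, then the
# context condition, then inhibition by an active goal; otherwise
# the goal may be actively pursued.

def deliberate(goal, state):
    if goal["drop"](state):
        return "dropped"      # drop condition: goal instance is removed
    if not goal["context"](state):
        return "suspended"    # context invalid: must not be pursued
    if goal.get("inhibited_by_active"):
        return "option"       # inhibition arc from an active goal
    return "active"
```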

Goals in BDI agents (Duff, Harland, Thangarajah, 2006)
Maintenance goals define states that must remain true, rather than a state that is to be achieved.
– Reactive: goals are only acted upon when the maintenance condition is no longer true.
– Proactive: anticipate failures and act to prevent them (by performing actions that prevent the condition from failing, or by suspending goals that would cause the maintenance condition to fail).
– Future work: prioritize maintenance goals via urgency.
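The reactive/proactive distinction comes down to when the repair is triggered. A minimal sketch, assuming a hypothetical one-step `predict` lookahead for the proactive case:

```python
# Reactive: act only once the maintenance condition has already failed.
def needs_repair_reactive(holds, state):
    return not holds(state)

# Proactive: also act if the predicted next state would violate the
# condition, so the failure is prevented rather than repaired.
def needs_repair_proactive(holds, state, predict):
    return not holds(state) or not holds(predict(state))
```

A proactive agent would respond to `needs_repair_proactive` by refuelling early, or by suspending an achievement goal whose plan would drain the tank below the threshold.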

Goals in BDI agents (Morreale et al., 2006)
A goal g1 is inconsistent with a goal g2 if and only if when g1 succeeds, g2 fails.
1. The agent deliberates and generates g as an option.
2. The agent checks whether g is possible and not inconsistent with the active goals. If both checks are passed, then g becomes an intention.
3. In case of inconsistency between g and some active goals, g becomes an intention only if it is preferred to those inconsistent goals, which are then dropped.
The preference relation is not total: since several goals can be pursued in parallel, there is no need to prefer one goal to another if they are not inconsistent with each other.
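The adoption procedure above can be sketched as a single function. This is an illustrative reconstruction of the slide's three steps, not the PRACTIONIST implementation; `possible`, `inconsistent`, and `preferred` are hypothetical relations supplied by the agent.

```python
# Decide whether option g becomes an intention, given the currently
# active goals. Inconsistent active goals that g is preferred to are
# dropped; if g loses to any of them, g is not adopted.

def adopt(g, active, possible, inconsistent, preferred):
    if not possible(g):
        return active, False                       # step 2: g must be possible
    clash = [a for a in active if inconsistent(g, a)]
    if clash and not all(preferred(g, a) for a in clash):
        return active, False                       # step 3: g not preferred
    kept = [a for a in active if a not in clash]   # drop the losers
    return kept + [g], True
```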

References
[1] Baral, C. and Gelfond, M. (2000). Reasoning Agents in Dynamic Domains. In Logic-Based Artificial Intelligence, J. Minker (ed.), Kluwer.
[2] Balduccini, M. (2005). Answer Set Based Design of Highly Autonomous, Rational Agents. PhD thesis, Texas Tech University.
[4] Shapiro, S. and Lespérance, Y. (2001). Modeling Multiagent Systems with the Cognitive Agents Specification Language: A Feature Interaction Resolution Application. In C. Castelfranchi and Y. Lespérance (eds.), Proc. ATAL-2000, pages 244–259. Springer-Verlag, Berlin.
[5] Sardina, S. and Shapiro, S. (2003). Rational Action in Agent Programs with Prioritized Goals. AAMAS.
[6] Shapiro, S., Lespérance, Y., and Levesque, H. (2005). Goal Change. In Proceedings of the IJCAI-05 Conference, Edinburgh, Scotland.
[7] Shapiro, S. and Brewka, G. (2007). Dynamic Interactions between Goals and Beliefs. IJCAI.
[8] d'Inverno, M., Kinny, D., Luck, M., and Wooldridge, M. (1998). A Formal Specification of dMARS. In Intelligent Agents IV: Proceedings of the Fourth International Workshop on Agent Theories, Architectures and Languages, Singh, Rao, and Wooldridge (eds.), Lecture Notes in Artificial Intelligence 1365, Springer-Verlag.

References (continued)
[9] Thangarajah, J., Padgham, L., and Harland, J. (2002). Representation and Reasoning for Goals in BDI Agents. In Proceedings of the Twenty-Fifth Australasian Computer Science Conference (ACSC 2002), Melbourne, Australia.
[10] Winikoff, M., Padgham, L., and Harland, J. (2001). Simplifying the Development of Intelligent Agents. In AI2001: Advances in Artificial Intelligence, 14th Australian Joint Conference on Artificial Intelligence, LNAI 2256, Adelaide.
[11] Winikoff, M., Padgham, L., Harland, J., and Thangarajah, J. (2002). Declarative and Procedural Goals in Intelligent Agent Systems. In Proceedings of the Eighth International Conference on Principles of Knowledge Representation and Reasoning, Toulouse.
[12] Pokahr, A., Braubach, L., and Lamersdorf, W. (2005). A Goal Deliberation Strategy for BDI Agent Systems. Third German Conference on Multi-Agent System Technologies.
[13] Duff, S., Harland, J., and Thangarajah, J. (2006). On Proactivity and Maintenance Goals. In Proceedings of the Fifth International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS'06), Hakodate.
[14] Morreale, V., Bonura, S., Francaviglia, G., Centineo, F., Cossentino, M., and Gaglio, S. (2006). Reasoning about Goals in BDI Agents: The PRACTIONIST Framework. Proc. of the Workshop on Objects and Agents, Catania, Italy.