Multi-Agent Systems University “Politehnica” of Bucharest Spring 2011 Adina Magda Florea curs.cs.pub.ro

Lecture 2: Agent architectures
- Simple agent models
- Cognitive agent architectures
- Reactive agent architectures
- Layered architectures

1. Simple agent models
- An agent perceives its environment through sensors and acts upon the environment through effectors.
- Aim: design an agent program = a function that implements the agent mapping from percepts to actions. Agent = architecture + program.
- A rational agent has a performance measure that defines its degree of success.
- A rational agent has a percept sequence; for each possible percept sequence, the agent should do whatever action is expected to maximize its performance measure.

Reactive agent model
E = {e1, .., e, ..}  - environment states
P = {p1, .., p, ..}  - percepts
A = {a1, .., a, ..}  - actions
see : E → P
action : P → A
env : E × A → E   (nondeterministic environment: env : E × A → P(E))
(Diagram: the agent's perception component implements see, its decision and execution components implement action; the agent is embedded in the environment env.)
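The reactive agent model above (see : E → P, action : P → A) can be sketched directly in code. A minimal illustration; the percept and action names are invented for the example:

```python
# Minimal sketch of the reactive agent model: see : E -> P, action : P -> A.
# Environment states, percepts, and actions are illustrative placeholders.

def see(env_state):
    """Map an environment state to a percept (see : E -> P)."""
    return "obstacle" if env_state.get("wall_ahead") else "clear"

def action(percept):
    """Map a percept directly to an action (action : P -> A): no internal state."""
    return "turn" if percept == "obstacle" else "move_forward"

def step(env_state):
    """One perceive-act cycle of a purely reactive agent."""
    return action(see(env_state))
```

Note that the composition action(see(e)) is the agent's entire behaviour: nothing about past percepts is remembered.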

Reactive agent model: several reactive agents
see_i : E → P_i
action_i : P_i → A_i
env : E × A_1 × ... × A_n → P(E)
The percept sets P_1, ..., P_i, ... are usually the same for all agents.
(Diagram: agents A1, A2, A3, each with its own perception, decision, and execution components, act in the shared environment env.)

Cognitive agent model
E = {e1, .., e, ..}, P = {p1, .., p, ..}, A = {a1, .., a, ..}
S = {s1, .., s, ..}  - internal states of the agent
see : E → P
next : S × P → S
action : S → A
env : E × A → P(E)
(Diagram: the perception component implements see, the decision component implements next and action, the execution component carries out the action; the agent is embedded in the environment env.)
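The difference from the reactive model is the internal state updated by next : S × P → S. A minimal sketch; the state representation is an illustrative choice:

```python
# Sketch of the cognitive agent model: the agent keeps an internal state S,
# updated by next : S x P -> S; the action depends on the state, action : S -> A.
# The state contents and names are illustrative.

def see(env_state):
    return "obstacle" if env_state.get("wall_ahead") else "clear"

def next_state(state, percept):
    """next : S x P -> S - fold the new percept into the internal state."""
    history = state.get("percepts", [])
    return {"percepts": history + [percept]}

def action(state):
    """action : S -> A - decide from the accumulated state, not the raw percept."""
    return "turn" if state["percepts"][-1] == "obstacle" else "move_forward"

def step(state, env_state):
    """One cycle: perceive, update the internal state, then act on the state."""
    state = next_state(state, see(env_state))
    return state, action(state)
```

Because action reads the state rather than the percept, the agent can in principle base decisions on its whole percept history.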

Cognitive agent model: several cognitive agents
see_i : E → P_i
next_i : S_i × P → S_i
action_i : S_i × I → A_i
inter_i : S_i → I
env : E × A_1 × ... × A_n → P(E)
I = {i1, .., ik, ...}  - interactions
The percept sets P_1, ..., P_i, ... are not always the same.
(Diagram: agents A1, A2, A3, each with perception, decision, execution, and interaction (inter) components, act in the shared environment env.)

Cognitive agent model (continued)
- Agent with states and goals: goal : E → {0, 1}
- Agent with utility: utility : E → R
- Nondeterministic environment: env : E × A → P(E)
- Utility theory = every state has a degree of usefulness to an agent, and the agent prefers states with higher utility
- Decision theory = an agent is rational if and only if it chooses the action that yields the highest expected utility, averaged over all possible outcomes of actions - Maximum Expected Utility

Maximum Expected Utility (MEU)
P(e' | e, a) = the probability that the result of action a executed in state e is the new state e'
EU(a | e) = Σ_e' P(e' | e, a) · utility(e') = the expected utility of action a in state e, from the point of view of the agent
MEU: choose a* = argmax_a EU(a | e)
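The MEU computation is a one-line sum per action plus an argmax. A sketch; the transition probabilities and utilities below are made-up example numbers:

```python
# Maximum Expected Utility, as defined above:
# EU(a | e) = sum over e' of P(e' | e, a) * utility(e'); choose argmax over a.
# The transition model and utilities are illustrative example values.

def expected_utility(action, transitions, utility):
    """transitions[action] maps each outcome state e' to P(e' | e, action)."""
    return sum(p * utility[e2] for e2, p in transitions[action].items())

def meu_action(actions, transitions, utility):
    """Choose the action with Maximum Expected Utility."""
    return max(actions, key=lambda a: expected_utility(a, transitions, utility))

transitions = {
    "safe":  {"ok": 1.0},                  # deterministic outcome
    "risky": {"great": 0.5, "bad": 0.5},   # two equally likely outcomes
}
utility = {"ok": 5.0, "great": 8.0, "bad": 0.0}
best = meu_action(["safe", "risky"], transitions, utility)
```

Here EU(safe) = 5.0 and EU(risky) = 0.5·8 + 0.5·0 = 4.0, so the MEU agent picks "safe" even though "risky" has the single best outcome.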

Example: getting out of a maze
- Reactive agent
- Cognitive agent
- Cognitive agent with utility
Three problems:
- what action to choose if several are available
- what to do if the outcomes of an action are not known
- how to cope with changes in the environment

2. Cognitive agent architectures
- Rational behaviour: AI and decision theory
- AI = models of searching the space of possible actions to compute some sequence of actions that will achieve a particular goal
- Decision theory = competing alternatives are taken as given, and the problem is to weigh these alternatives and decide on one of them (means-ends analysis is implicit in the specification of the competing alternatives)
- Problem = deliberation/decision:
  - the agents are resource bounded
  - the environment may change during deliberation

General cognitive agent architecture
(Diagram: Input from the environment and from other agents flows through Communication and Interaction components into a Control loop comprising a Reasoner, a Planner, a Scheduler & Executor, and the agent State, which produces the Output. The agent maintains information about itself - what it knows, what it believes, what it is able to do, how it is able to do it, what it wants - as well as knowledge and beliefs about the environment and the other agents.)

FOPL models of agency
- Symbolic representation of knowledge + inference in FOPL: deduction or theorem proving to determine what actions to execute
- Declarative problem-solving approach: agent behaviour is represented as a theory T, which can be viewed as an executable specification
(a) Deduction rules
  At(0,0) ∧ Free(0,1) ∧ Exit(east) → Do(move_east)
  Facts and rules about the environment:
  At(0,0)   Wall(1,1)   ∀x ∀y Wall(x,y) → ¬Free(x,y)
  Automatically update the current state and test for the goal state: At(0,3) or At(3,1)
(b) Use situation calculus = describe change in FOPL
  Function: Result(Action, State) = NewState
  At((0,0), S0) ∧ Free(0,1) ∧ Exit(east) → At((0,1), Result(move_east, S0))
  Try to prove the goal At((0,3), _) and determine the actions that lead to it (means-ends analysis)
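The deduction-rule style in (a) can be mimicked with simple matching over a fact database: a rule fires when all its premises are currently believed, and its conclusion names the action to execute. A toy sketch; the tuple encodings of facts and the rules themselves are invented for illustration:

```python
# Toy deduction-style agent for the maze example: facts are ground atoms
# stored as tuples, and a rule such as
#   At(0,0) & Free(0,1) & Exit(east) -> Do(move_east)
# fires when every premise is in the database. Encodings are illustrative.

facts = {("At", 0, 0), ("Free", 0, 1), ("Exit", "east")}

rules = [
    # (set of premises, action to perform)
    ({("At", 0, 0), ("Free", 0, 1), ("Exit", "east")}, "move_east"),
    ({("At", 0, 0), ("Free", 1, 0), ("Exit", "south")}, "move_south"),
]

def deduce_action(facts, rules):
    """Return the action of the first rule whose premises all hold, else None."""
    for premises, act in rules:
        if premises <= facts:      # premises is a subset of the known facts
            return act
    return None
```

A real theorem-proving agent would also handle variables and derived facts (e.g. Wall(x,y) → ¬Free(x,y)); this sketch only shows the ground-rule case.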

Advantages of FOPL:
- simple, elegant
- executable specifications
Disadvantages:
- difficult to represent changes over time (other logics are needed)
- decision making is deduction and selection of a strategy
- intractable and semi-decidable

BDI (Belief-Desire-Intention) architectures
- High-level specification of an architecture for a resource-bounded agent
- Beliefs = information the agent has about the world
- Desires = states of affairs that the agent would wish to bring about
- Intentions = desires the agent has committed to achieve
- BDI: a theory of practical reasoning (Bratman, 1988)
- Intentions play a critical role in practical reasoning: they limit the options to consider and make decision making simpler

BDI is particularly compelling because it has:
- a philosophical component: it is based on a theory of rational action in humans
- a software architecture: it has been implemented and successfully used in a number of complex fielded applications
  - IRMA (Intelligent Resource-bounded Machine Architecture)
  - PRS (Procedural Reasoning System)
- a logical component: the model has been rigorously formalized in a family of BDI logics (Rao & Georgeff, Wooldridge), relating for instance (Int i φ) and (Bel i φ)

BDI Architecture
(Diagram: percepts enter a belief revision function over the agent's beliefs/knowledge; a deliberation process over the desires, an opportunity analyzer, and a filter produce the intentions; a means-ends reasoner over a library of plans turns intentions into intentions structured as partial plans; an executor produces the actions.)
B = brf(B, p)
D = options(B, D, I)
I = filter(B, D, I)
π = plan(B, I)

BDI Agent control loop

B = B0; D = D0; I = I0
while true do
  get next percept p
  B = brf(B, p)
  D = options(B, D, I)
  I = filter(B, D, I)
  π = plan(B, I)
  execute(π)
end while
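One iteration of this loop can be sketched as a function. The brf, options, filter, and plan bodies below are trivial placeholders (a real agent supplies domain-specific versions); only the shape of the loop follows the slide:

```python
# Skeleton of one pass of the basic BDI control loop above.
# brf, options, filter_ and plan are placeholder stubs for illustration.

def brf(beliefs, percept):
    """Belief revision function: here, simply add the percept as a belief."""
    return beliefs | {percept}

def options(beliefs, desires, intentions):
    return desires                    # stub: every desire remains an option

def filter_(beliefs, desires, intentions):
    return set(sorted(desires)[:1])   # stub: commit to a single desire

def plan(beliefs, intentions):
    return [f"achieve({i})" for i in sorted(intentions)]

def bdi_step(beliefs, desires, intentions, percept):
    """Perceive, revise beliefs, deliberate (options + filter), then plan."""
    beliefs = brf(beliefs, percept)
    desires = options(beliefs, desires, intentions)
    intentions = filter_(beliefs, desires, intentions)
    pi = plan(beliefs, intentions)
    return beliefs, desires, intentions, pi
```

The outer while-true loop would simply call bdi_step on each new percept and execute the returned plan.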

Commitment strategies
- The option chosen by the agent as an intention = the agent is committed to that intention
- Commitments imply the temporal persistence of intentions
- Question: how long is an agent committed to an intention?
  - Blind commitment
  - Single-minded commitment
  - Open-minded commitment

BDI Agent control loop - blind commitment

B = B0; D = D0; I = I0
while true do
  get next percept p
  B = brf(B, p)
  D = options(B, D, I)
  I = filter(B, D, I)
  π = plan(B, I)
  while not (empty(π) or succeeded(I, B)) do
    α = head(π)
    execute(α)
    π = tail(π)
    get next percept p
    B = brf(B, p)
    if not sound(π, I, B) then π = plan(B, I)    (reactivity: replan)
  end while
end while

BDI Agent control loop - single-minded commitment

B = B0; D = D0; I = I0
while true do
  get next percept p
  B = brf(B, p)
  D = options(B, D, I)
  I = filter(B, D, I)
  π = plan(B, I)
  while not (empty(π) or succeeded(I, B) or impossible(I, B)) do
    (drop intentions that have succeeded or are impossible)
    α = head(π)
    execute(α)
    π = tail(π)
    get next percept p
    B = brf(B, p)
    if not sound(π, I, B) then π = plan(B, I)    (reactivity: replan)
  end while
end while

BDI Agent control loop - open-minded commitment

B = B0; D = D0; I = I0
while true do
  get next percept p
  B = brf(B, p)
  D = options(B, D, I)
  I = filter(B, D, I)
  π = plan(B, I)
  while not (empty(π) or succeeded(I, B) or impossible(I, B)) do
    α = head(π)
    execute(α)
    π = tail(π)
    get next percept p
    B = brf(B, p)
    if reconsider(I, B) then
      D = options(B, D, I)
      I = filter(B, D, I)
      π = plan(B, I)    (replan)
  end while
end while
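The three strategies differ mainly in when the inner execution loop gives up on the current plan. A sketch of just that stopping behaviour, with the succeeded/impossible tests reduced to flags observed after each step (the flag names and the world-trace encoding are invented for illustration; open-minded reconsideration of desires is not modelled here):

```python
# Sketch of the inner plan-execution loop under different commitment strategies.
# world_trace[i] is the world as perceived after executing plan step i; the
# "succeeded"/"impossible" flags stand in for the succeeded(I,B)/impossible(I,B)
# tests of the slides. Encodings are illustrative.

def execute_plan(plan, world_trace, strategy):
    """Run plan steps in order; stop according to the commitment strategy.

    - blind:  stop only when the intention has succeeded
    - single/open: additionally stop when the intention becomes impossible
    """
    executed = []
    for step, world in zip(plan, world_trace):
        executed.append(step)
        if world.get("succeeded"):
            break
        if strategy in ("single", "open") and world.get("impossible"):
            break
    return executed
```

A blindly committed agent grinds through the whole plan even after the intention becomes impossible, while a single-minded one drops it immediately.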

- The BDI architecture is the most popular agent architecture
- There is no unique BDI architecture
- PRS - Procedural Reasoning System (Georgeff)
- dMARS
- UM-PRS and JAM - C++
- JADE
- JACK
- JADEX - XML and Java
- JASON - Java

3. Reactive agent architectures
Subsumption architecture (Brooks, 1986)
- Decision making = {task-accomplishing behaviours}
- A task-accomplishing behaviour is represented by a competence module (c.m.)
- Every c.m. is responsible for a clearly defined, but not particularly complex, task - a concrete behaviour
- The c.m.s operate in parallel
- Lower layers in the hierarchy have higher priority and are able to inhibit the operation of higher layers
- The modules at the lower end of the hierarchy are responsible for basic, primitive tasks
- The higher modules reflect more complex patterns of behaviour and incorporate a subset of the tasks of the subordinate modules, hence "subsumption architecture"

- Module 1 can monitor and influence the inputs and outputs of Module 2
- M0 = avoid obstacles
- M1 = wanders around while avoiding obstacles ⊃ M0
- M2 = explores the environment looking for distant objects of interest while moving around ⊃ M1
- Incorporating the functionality of a subordinate c.m. into a higher module is performed using suppressors (which modify input signals) and inhibitors (which inhibit output)
(Diagram: sensors feed Competence Module (0) Avoid obstacles, Competence Module (1) Move around, and Competence Module (2) Investigate environment; suppressor and inhibitor nodes connect the modules' inputs (percepts) and outputs (actions) on the way to the effectors.)

More modules can be added:
- Replenishing energy
- Optimising paths
- Making a map of the territory
- Picking up and putting down objects
- Investigating the environment
- Moving around
- Avoiding obstacles

Behaviour (c, a) - a condition-action pair describing a behaviour
Beh = {(c, a) | c ⊆ P, a ∈ A}
R ⊆ Beh = set of behaviour rules
≺ ⊆ R × R - binary inhibition relation on the set of behaviours, a total ordering of R

function action(p : P)
  var fired : P(R)
  begin
    fired = {(c, a) | (c, a) ∈ R and p ∈ c}
    for each (c, a) ∈ fired do
      if ¬∃ (c', a') ∈ fired such that (c', a') ≺ (c, a) then
        return a
    return null
  end
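The action function above can be written directly in Python. A sketch in which the inhibition relation ≺ is encoded as a priority number per rule, and a rule fires when its condition set is contained in the current percepts (both encodings are illustrative choices, as are the example rules):

```python
# Python version of the subsumption action-selection function: among the
# fired behaviour rules, return the action of the rule that no other fired
# rule inhibits. The total ordering is encoded as a priority number
# (lower number = closer to the sensors = higher priority).

def select_action(percepts, rules):
    """rules: list of (priority, condition_set, action); percepts: set."""
    fired = [(prio, cond, act) for prio, cond, act in rules
             if cond <= percepts]          # the rule's condition holds
    if not fired:
        return None
    # the fired rule with minimal priority is inhibited by no other fired rule
    return min(fired, key=lambda r: r[0])[2]

rules = [
    (0, {"obstacle"}, "avoid"),        # c.m. 0: avoid obstacles
    (1, set(), "wander"),              # c.m. 1: move around (always fires)
    (2, {"object_far"}, "approach"),   # c.m. 2: investigate environment
]
```

With both "obstacle" and "object_far" perceived, the obstacle-avoidance rule wins because it sits lowest in the hierarchy.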

- Brooks extends the architecture to cope with a large number of competence modules - the Behavior Language
- Indirect communication
- Advantages of reactive architectures
- Disadvantages

4. Layered agent architectures
- Combine reactive and pro-active behaviour
- At least two layers, one for each type of behaviour
- Horizontal layering - i/o flows horizontally
- Vertical layering - i/o flows vertically

Layered agent architectures
(Diagram: in horizontal layering, each of the layers 1..n receives the perceptual input and produces action output in parallel; in vertical layering, the perceptual input enters at layer 1 and the action output leaves the stack, either in one pass up or in one pass up and one pass down.)

TouringMachines
- Horizontal layering - 3 activity-producing layers; each layer produces suggestions for actions to be performed
- reactive layer - a set of situation-action rules that react to percepts from the environment
- planning layer - pro-active behaviour; uses a library of plan skeletons called schemas; hierarchically structured plans are refined in this layer
- modelling layer - represents the world, the agent, and other agents; sets up goals and predicts conflicts; goals are given to the planning layer to be achieved
- Control subsystem - a centralized component containing a set of control rules; the rules suppress information from a lower layer to give control to a higher one, and censor the actions of layers, so as to control which layer performs the actions

InteRRaP
- Vertically layered two-pass agent architecture
- Based on the BDI concept but concentrates on the dynamic control process of the agent
Design principles:
- the three-layered architecture describes the agent using various degrees of abstraction and complexity
- both the control process and the knowledge bases are multi-layered
- the control process is bottom-up: a layer receives control over a process only when this exceeds the capabilities of the layer below
- every layer uses the operation primitives of the lower layer to achieve its goals
Every control layer consists of two modules:
- a situation recognition / goal activation module (SG)
- a planning / scheduling module (PS)

InteRRaP
(Diagram: three layers - the cooperative planning layer, the local planning layer, and the behaviour-based layer - each with its own SG and PS modules, paired respectively with the social KB, the planning KB, and the world KB; a world interface with sensors, effectors, and communication exchanges percepts and actions with the environment.)

BDI model in InteRRaP
(Diagram: beliefs comprise a social model, a mental model, and a world model; situations are classified as cooperative, local planning, or routine/emergency; goals as cooperative goals, local goals, or reactions; options as cooperative options, local options, or reactions; operational primitives as joint plans, local plans, or behaviour patterns; intentions as cooperative intentions, local intentions, or responses. The SG module recognizes the situation and activates goals, a filter selects options, and the PS module plans over them, from sensors to effectors.)

- Müller tested InteRRaP in a simulated loading area.
- A number of agents act as automatic forklifts that move in the loading area, remove and replace stock from various storage bays, and so compete with other agents for resources.