1 © 1999 Singh & Huhns1 Principles of Agents and Multiagent Systems
Munindar P. Singh, singh@ncsu.edu, http://www.csc.ncsu.edu/faculty/mpsingh/
Michael N. Huhns, huhns@sc.edu, http://www.ece.sc.edu/faculty/Huhns/

2 © 1999 Singh & Huhns2 Tremendous Interest in Agent Technology
Evidence:
–400 people at Autonomous Agents 98
–550 people at Agents World in Paris
Why?
–Vast information resources now accessible
–Ubiquitous processors
–New interface technology
–Problems in producing software

3 © 1999 Singh & Huhns3 What is an Agent? The term agent in computing covers a wide range of behavior and functionality. In general, an agent is an active computational entity –with a persistent identity –that can perceive, reason about, and initiate activities in its environment –that can communicate (with other agents)

4 © 1999 Singh & Huhns4 The Agent Test “A system containing one or more reputed agents should change substantively if another of the reputed agents is added to the system.”

5 © 1999 Singh & Huhns5 Social Engineering for Agents
Computers are making more and more decisions autonomously:
–when airplanes land and take off (fuel vs. tax)
–how phone calls are routed (pricing; choosing a carrier dynamically)
–how loads are controlled in an electrical grid
–when packages are delivered
–which stocks are bought and sold
–electronic marketplaces

6 © 1999 Singh & Huhns6 An Agent Should Act
Benevolently
Predictably
–consistent with its model of itself
–consistent with its model of other agents’ beliefs about itself

7 © 1999 Singh & Huhns7 Benevolence “A Mattress in the Road”

8 © 1999 Singh & Huhns8 A Collective Store
Benevolent agents might contribute information they have retrieved, filtered, and refined to a collective store database.
Access to the collective store might be predicated on contributions to it.
[Diagram: query agents drawing on the World Wide Web and contributing to a collective store database]

9 © 1999 Singh & Huhns9 Agent Behavior Testbed - University of South Carolina

10 © 1999 Singh & Huhns10 Agents in a Cooperative Information System Architecture
[Diagram: agents (mediators, proxies, aides, wrappers) connecting an application to an e-mail system, a web system, and a database system]

11 © 1999 Singh & Huhns11 Agent Characteristics/1
Locality: local or remote
Uniqueness: homogeneous or heterogeneous
Granularity: fine- or coarse-grained
Persistence: transient or long-lived
Level of Cognition: reactive or deliberative
Sociability: autistic, aware, responsible, team player
Friendliness: cooperative, competitive, or antagonistic
Construction: declarative or procedural
Semantic Level: communicate what or how
Mobility: stationary or itinerant

12 © 1999 Singh & Huhns12 Agent Characteristics/2
Autonomy: independent or controlled
Adaptability: fixed, teachable, or autodidactic
Sharing: degree and flexibility with respect to
–communication: vocabulary, language, protocol
–intellect: knowledge, goals, beliefs, specific ontologies
–skills: procedures, "standard" behaviors, implementation languages
Interactions: direct or via facilitators, mediators, or “nonagents”
Interaction Style/Quality/Nature: with each other, with “the world”, or both? Do the agents model their environment, themselves, or other agents?

13 © 1999 Singh & Huhns13 Dimensions of CIS: System
Scale (the number of agents): Individual → Committee → Society
Interactions: Reactive → Planned
Coordination (self interest): Antagonistic → Altruistic → Collaborative; Competitive → Cooperative → Benevolent
Agent Heterogeneity: Identical → Unique
Communication Paradigm: Point-to-Point → Multi-by-name/role → Broadcast

14 © 1999 Singh & Huhns14 Dimensions of CIS: Agent
Dynamism (the ability of an agent to learn): Fixed → Teachable → Autodidactic
Autonomy: Controlled → Independent
Interactions: Simple → Complex → Interdependent
Sociability (awareness): Autistic → Collaborative → Committing

15 © 1999 Singh & Huhns15 Challenges Doing the "right" thing Shades of autonomy Conventions: emergence and maintenance Coordination Collaboration Communication: semantics and pragmatics Interaction-oriented programming

16 © 1999 Singh & Huhns16 BASIC CONCEPTS

17 © 1999 Singh & Huhns17 Categories of Agent Research

18 © 1999 Singh & Huhns18 Agent Environments
Accessible vs. Inaccessible
Deterministic vs. Nondeterministic
Episodic vs. Nonepisodic
Static vs. Dynamic
Discrete vs. Continuous
Open information environments (e.g., InfoSleuth) are inaccessible, nondeterministic, nonepisodic, dynamic, and discrete

19 © 1999 Singh & Huhns19 Agent Abstractions/1 The traditional abstractions are from AI and are mentalistic –beliefs: agent’s representation of the world –knowledge: (usually) true beliefs –desires: preferred states of the world –goals: consistent desires –intentions: goals adopted for action

20 © 1999 Singh & Huhns20 Agent Abstractions/2 The agent-specific abstractions are inherently interactional –social: about collections of agents –organizational: about teams and groups –ethical: about right and wrong actions –legal: about contracts and compliance

21 © 1999 Singh & Huhns21 Agent Abstractions/3
Inherently interactional. Agents, when properly understood,
–lead naturally to multiagent systems
–provide a means to capture the fundamental abstractions that apply in all major applications and that are otherwise ignored by system builders

22 © 1999 Singh & Huhns22 Agents versus AI

23 © 1999 Singh & Huhns23 How to Apply the Abstractions
Consider the components of any practical situation involving large and dynamic software systems:
–Dynamism => autonomy
–Openness and compliance => ability to enter into and obey contracts
–Trustworthiness => ethical behavior

24 © 1999 Singh & Huhns24 Why Do These Abstractions Matter?
Because modern applications demand going beyond traditional metaphors and models:
–virtual enterprises: manufacturing supply chains, autonomous logistics
–electronic commerce: utility management
–communityware: social user interfaces
–problem-solving by teams

25 © 1999 Singh & Huhns25 A Rational Agent
Rationality depends on...
–the performance measure for success
–what the agent has perceived so far
–what the agent knows about the environment
–the actions the agent can perform
An ideal rational agent: for each possible percept sequence, it acts to maximize its expected utility, on the basis of its knowledge and the evidence from the percept sequence

26 © 1999 Singh & Huhns26 A Simple Reactive Agent
[Diagram: the agent senses the environment ("what the world is like now"), applies condition-action rules to decide "what action I should do now", and acts through its effectors]

27 © 1999 Singh & Huhns27 A Simple Reactive Agent
function Simple-Reactive-Agent(percept)
  static: rules, a set of condition-action rules
  state ← Interpret-Input(percept)
  rule ← Rule-Matching(state, rules)
  action ← Rule-Action(rule)
  return action
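
The following Python rendering of the pseudocode above is a minimal sketch; the rule representation (condition predicates paired with action names) and the thermostat example are illustrative assumptions, not part of the original slides.

# Minimal sketch of the simple reactive agent (illustrative rule representation).
def interpret_input(percept):
    # A real agent would build a state description from raw sensor data.
    return percept

def simple_reactive_agent(percept, rules):
    state = interpret_input(percept)
    for condition, action in rules:        # Rule-Matching
        if condition(state):
            return action                  # Rule-Action
    return None                            # no rule fired

# Usage: a thermostat-like agent.
rules = [
    (lambda s: s["temp"] > 25, "turn_on_cooler"),
    (lambda s: s["temp"] < 18, "turn_on_heater"),
]
print(simple_reactive_agent({"temp": 30}, rules))   # -> turn_on_cooler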

28 © 1999 Singh & Huhns28 A Reactive Agent with State
[Diagram: as in the simple reactive agent, but the agent also keeps an internal state, updated using models of "how the world evolves" and "what my actions do"]

29 © 1999 Singh & Huhns29 A Reactive Agent with State
function Reactive-Agent-with-State(percept)
  static: rules, a set of condition-action rules
          state, a description of the current world
  state ← Update-State(state, percept)
  rule ← Rule-Matching(state, rules)
  action ← Rule-Action(rule)
  state ← Update-State(state, action)
  return action

30 © 1999 Singh & Huhns30 A Goal-Based Agent
[Diagram: adds goals and a model of "what it will be like if I do action A" to the stateful agent, so actions are chosen by their predicted contribution to the goals]

31 © 1999 Singh & Huhns31 A Utility-Based Agent
[Diagram: as the goal-based agent, but with a utility function ("how happy I will be in such a state") in place of goals]

32 © 1999 Singh & Huhns32 A Utility-Based Agent
function Utility-Based-Agent(percept)
  static: probs, a set of probabilistic beliefs about the state of the world
  probs ← Update-Probs-for-Current-State(probs, percept, old-action)
  probs ← Update-Probs-for-Actions(probs, actions)
  action ← Select-Action-with-Highest-Utility(probs)
  return action
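
As a rough illustration of the selection step, the sketch below computes expected utilities from probabilistic beliefs; the function names, the dictionary-based belief representation, and the toy cooling example are assumptions, not the authors' code.

def expected_utility(action, beliefs, transition, utility):
    # Sum utility over possible current states and the states the action may lead to.
    eu = 0.0
    for state, p_state in beliefs.items():
        for next_state, p_next in transition(state, action).items():
            eu += p_state * p_next * utility(next_state)
    return eu

def select_action(beliefs, actions, transition, utility):
    # Pick the action with the highest expected utility under the current beliefs.
    return max(actions, key=lambda a: expected_utility(a, beliefs, transition, utility))

# Usage with a two-state toy model:
beliefs = {"hot": 0.7, "cool": 0.3}
transition = lambda s, a: {"cool": 1.0} if a == "cool_down" else {s: 1.0}
utility = lambda s: 1.0 if s == "cool" else 0.0
print(select_action(beliefs, ["wait", "cool_down"], transition, utility))  # -> cool_down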

33 © 1999 Singh & Huhns33 5. INTERACTION AND COMMUNICATION

34 © 1999 Singh & Huhns34 Cognitive Economy
Prefer the simpler (more economical) explanation ("but not too simple" - Einstein)
Implications of Cognitive Economy:
–Agents must represent their environment
–Agents must represent themselves
–Agents must represent other agents, ad infinitum
Zero-order model: other agents are the same as oneself

35 © 1999 Singh & Huhns35 Coordination
A property of interaction among a set of agents performing some activity in a shared state.
The degree of coordination is the extent to which they
–avoid extraneous activity
–reduce resource contention
–avoid livelock
–avoid deadlock
–maintain safety conditions
Cooperation is coordination among nonantagonistic agents. Typically,
–each agent must maintain a model of the other agents
–each agent must develop a model of future interactions

36 © 1999 Singh & Huhns36 The Contract Net Protocol
An important generic protocol:
–A manager announces the existence of tasks via a (possibly selective) multicast
–Agents evaluate the announcement; some of these agents submit bids
–The manager awards a contract to the most appropriate agent
–The manager and contractor communicate privately as necessary
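
A runnable toy rendering of one round of the protocol follows; the bid format, the scoring function, and the two delivery agents are illustrative assumptions, not part of the original slides.

# Sketch of one Contract Net round: announce, collect bids, award (names illustrative).
def contract_net_round(task, agents, score_bid):
    announcement = {"task": task}                      # task abstraction + bid spec
    bids = []
    for agent in agents:
        bid = agent["bid_fn"](announcement)            # None means not eligible / no bid
        if bid is not None:
            bids.append((agent["name"], bid))
    if not bids:
        return None                                    # announcement expires unanswered
    winner, _ = max(bids, key=lambda nb: score_bid(nb[1]))
    return winner                                      # award: full task spec goes to the winner privately

# Usage: two delivery agents bid their estimated cost; the cheaper bid wins.
agents = [
    {"name": "truck-1", "bid_fn": lambda ann: {"cost": 40}},
    {"name": "truck-2", "bid_fn": lambda ann: {"cost": 25}},
]
print(contract_net_round("deliver parcel P7", agents, score_bid=lambda b: -b["cost"]))
# -> truck-2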

37 © 1999 Singh & Huhns37 Task Announcement Message
Eligibility specification: criteria that a node must meet to be eligible to submit a bid
Task abstraction: a brief description of the task to be executed
Bid specification: a description of the expected format of the bid
Expiration time: a statement of the time interval during which the task announcement is valid
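
These four fields map naturally onto a simple data structure; the sketch below is illustrative (field names follow the slide, not any particular agent toolkit), and the example values are invented.

from dataclasses import dataclass

@dataclass
class TaskAnnouncement:
    eligibility_spec: str   # criteria a node must meet to bid
    task_abstraction: str   # brief description of the task
    bid_spec: str           # expected format of a bid
    expiration_time: float  # time until which the announcement is valid

ann = TaskAnnouncement(
    eligibility_spec="has a free delivery vehicle",
    task_abstraction="deliver parcel P7 to warehouse B",
    bid_spec="estimated cost and earliest completion time",
    expiration_time=1700.0,
)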

38 © 1999 Singh & Huhns38 Bid and Award Messages A bid consists of a node abstraction—a brief specification of the agent’s capabilities that are relevant to the task An award consists of a task specification—the complete specification of the task

39 © 1999 Singh & Huhns39 Applicability of Contract Net
The Contract Net is
–a high-level communication protocol
–a way of distributing tasks
–a means of self-organization for a group of agents
Best used when
–the application has a well-defined hierarchy of tasks
–the problem has a coarse-grained decomposition
–the subtasks minimally interact with each other, but cooperate when they do

40 © 1999 Singh & Huhns40 CONTROL

41 © 1999 Singh & Huhns41 Goals for Multiagent Control Develop Technologies for... Locating and allocating capabilities and resources that are dispersed in the environment Predicting, avoiding, or resolving contentions over capabilities and resources Mediating among more agents, with more heterogeneity and more complex interactions Maintaining stability, coherence, and effectiveness

42 © 1999 Singh & Huhns42 Control Challenges What makes control difficult can be broken down into several major characteristics of the overall system, including: The Agents that comprise the system The Problems that those agents are solving individually and/or collectively The Solution characteristics that are critical

43 © 1999 Singh & Huhns43 Control Challenges: Agents
Control is harder as agents are
–more numerous (quantity)
–more complex individually, e.g., more versatile (complexity)
–more heterogeneous in their capabilities, means of accomplishing capabilities, languages for describing capabilities, etc. (heterogeneity)

44 © 1999 Singh & Huhns44 Control Challenges: Problems
Control is harder as the problems agents solve are
–more interrelated (degree of interaction)
–changing more rapidly, or pursued in an uncertain and changing world (volatility)
–more unforgiving of control failures, e.g., involving irreversible actions (severity of failure)

45 © 1999 Singh & Huhns45 Control Challenges: Solutions
Control is harder as solutions to agent problems must be
–better for the circumstances, e.g., more efficient (quality / efficiency)
–more robust to changing circumstances (robustness)
–cheaper/faster to develop individually and in concert (low overhead)

46 © 1999 Singh & Huhns46 Technologies for Agent Control
Broker-based
Matchmaker-based
Market-based; auctions
BDI and commitment based
Decision theoretic
Workflow (procedural) based
Standard operating procedures
Learning / adaptive
Coordinated planning
Conventions / protocols
Stochastic or physics-based
Organizations: teams and coalitions
Constraint satisfaction / optimization

47 © 1999 Singh & Huhns47 Example Experiments: Capability Location
(1) Investigate matchmaking and distributed matchmaking complexities as a function of the number of agents
(2) Investigate brokering vs. matchmaking vs. direct interaction as a function of different task types and allocation mechanisms
[Plots: matchmaking activity vs. # of agents; brokering vs. matchmaking by task type and allocation mechanism]

48 © 1999 Singh & Huhns48 Example Experiments: Capability Allocation and Scheduling
(1) Investigate quality/cost of allocating scarce capabilities as the number of capabilities and their consumers rises
(2) Investigate quality/cost of scheduling reusable/nonsharable capabilities as volatility/uncertainty in agents’ future needs rises
[Plots: allocation costs vs. # of agents; capability utilization vs. volatility/uncertainty by scheduling mechanism]

49 © 1999 Singh & Huhns49 Parameters of Tasks and Experiments Number of tasks Types of tasks –number of resources –duration of resource need –complementarity/substitutability –sequencing of resource needs Resource contention/overlap in needs Types of resources –reusable/sharable/scaleable

50 © 1999 Singh & Huhns50 Dimensions of Control


53 © 1999 Singh & Huhns53 SOCIAL ABSTRACTIONS

54 © 1999 Singh & Huhns54 Social Abstractions Commitments: social, joint, collective,... Organizations and roles Teams and teamwork Mutual beliefs and problems Joint intentions Potential conflict with individual rationality

55 © 1999 Singh & Huhns55 Coherence and Commitments Coherence is how well a system behaves as a unit. It requires some form of organization, typically hierarchical. Social commitments are a means to achieve coherence.

56 © 1999 Singh & Huhns Example: Electronic Commerce Define an abstract sphere of commitment (SoCom) consisting of two roles: buyer and seller, which require capabilities and commitments about, e.g., –requests they will honor –validity of price quotes To adopt these roles, agents must have the capabilities and acquire the commitments.

57 Buyer and Seller Agents SoComs provide the context for the concepts represented & communicated.

58 © 1999 Singh & Huhns Example: Electronic Commerce Agents can join –during execution—requires publishing the definition of the commerce SoCom –when configured by humans The agents then behave according to the commitments Toolkit should help define and execute commitments, and detect conflicts.

59 Virtual Enterprises (VE)
Two sellers come together with a new proxy agent called VE.
Example of VE agent commitments:
–notify on change
–update orders
–guarantee the price
–guarantee delivery date

60 VE and EC Composed

61 © 1999 Singh & Huhns Social Commitments Operations on commitments (instantiated as social actions): –create –discharge (satisfy) –cancel –release (eliminate) –delegate (change debtor) –assign (change creditor).
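
A sketch of a commitment object supporting the operations above; the fields (debtor, creditor, condition, sphere of commitment) and the status values are illustrative assumptions, not the authors' toolkit.

from dataclasses import dataclass

@dataclass
class Commitment:
    debtor: str      # agent that owes the condition
    creditor: str    # agent to whom it is owed
    condition: str   # what the debtor is committed to bring about
    socom: str       # sphere of commitment providing the context
    status: str = "active"

    def discharge(self):            # satisfy the commitment
        self.status = "satisfied"

    def cancel(self):               # debtor withdraws (may violate a policy)
        self.status = "cancelled"

    def release(self):              # creditor or context eliminates the commitment
        self.status = "released"

    def delegate(self, new_debtor): # change debtor
        self.debtor = new_debtor

    def assign(self, new_creditor): # change creditor
        self.creditor = new_creditor

# create:
c = Commitment("seller-1", "buyer-3", "honor quoted price for 24 hours", socom="EC")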

62 © 1999 Singh & Huhns Policies and Structure Spheres of commitment (SoComs) –abstract specifications of societies –made concrete prior to execution Policies apply on performing social actions Policies related to the nesting of SoComs Role conflicts can occur when agents play multiple roles, e.g., because of nonunique nesting.

63 © 1999 Singh & Huhns63 ETHICAL ABSTRACTIONS

64 © 1999 Singh & Huhns64 Ethical Abstractions Utilitarianism Consequentialism Obligations Deontic logic Paradoxes

65 © 1999 Singh & Huhns65 Motivation The ethical abstractions help us specify agents who act appropriately. Intuitively, we think of ethics as just the basic way of distinguishing right from wrong. It is difficult to entirely separate ethics from legal, social, or even economic considerations

66 © 1999 Singh & Huhns66 Right and Good Right: that which is right in itself Good: that which is good for someone or for some end

67 © 1999 Singh & Huhns67 Deontological vs Teleological Deontological theories –right before good –being good does not mean being right –ends do not justify means Teleological theories –good before right –right maximizes good –ends justify means

68 © 1999 Singh & Huhns68 Deontological Theories
Constraints are
–negatively formulated
–narrowly framed, e.g., the constraint is against lying, not against not-telling-the-truth
–narrowly directed at the agent’s specific action
  not at its occurrence by other means
  not at the consequences that are not explicitly chosen

69 © 1999 Singh & Huhns69 Teleological Theories Based on how actions satisfy various goals, not their intrinsic rightness comparison-based preference-based

70 © 1999 Singh & Huhns70 Consequentialism

71 © 1999 Singh & Huhns71 Utilitarianism
This is the view that a moral action is one that is useful: it must be good for someone. Good may be interpreted as
–pleasure: hedonism
–preference satisfaction: microeconomic rationalism (assumes each agent knows its preferences)
–interest satisfaction: welfare utilitarianism
–aesthetic ideals: ideal utilitarianism

72 © 1999 Singh & Huhns72 Obligations For deontological theories, obligations are those actions that are impermissible to omit

73 © 1999 Singh & Huhns73 Applying Ethics
The deontological theories
–are narrower
–ignore practical considerations
–but are only meant as incomplete constraints (of all right actions, the agent can choose any)
The teleological theories
–are broader
–include practical considerations
–but leave fewer options for the agent, who must always choose the best available alternative

74 © 1999 Singh & Huhns74 LEGAL ABSTRACTIONS

75 © 1999 Singh & Huhns75 Legal Abstractions Contracts Directed obligations Hohfeldian concepts: right, duty, power, liability, immunity,... Following protocols Defining and testing compliance

76 © 1999 Singh & Huhns76 UNDERSTANDING COMMUNICATION

77 © 1999 Singh & Huhns77 Interaction and Communication
Interactions occur when agents exist and act in close proximity:
–resource contention, e.g., bumping into each other
Communication occurs when agents send messages to one another with a view to influencing beliefs and intentions. Implementation details are irrelevant:
–can occur over communication links
–can occur through shared memory
–can occur because of shared conventions

78 © 1999 Singh & Huhns78 Speech Act Theory Speech act theory, initially meant for natural language, views communications as actions. It considers three aspects of a message: Locution, or how it is phrased, e.g., –"It is hot here" or "Turn on the cooler" Illocution, or how it is meant by the sender or understood by the receiver, e.g., –a request to turn on the cooler or an assertion about the temperature Perlocution, or how it influences the recipient, e.g., –turns on the cooler, opens the window, ignores the speaker Illocution is the main aspect.

79 © 1999 Singh & Huhns79 Syntax, Semantics, Pragmatics
For message passing:
Syntax: requires a common language to represent information and queries, or languages that are intertranslatable
Semantics: requires a structured vocabulary and a shared framework of knowledge, i.e., a shared ontology
Pragmatics:
–knowing whom to communicate with and how to find them
–knowing how to initiate and maintain an exchange
–knowing the effect of the communication on the recipient

80 © 1999 Singh & Huhns80 KQML Semantics Each agent manages a virtual knowledge base (VKB) Statements in a VKB can be classified into beliefs and goals Beliefs encode information an agent has about itself and its environment Goals encode states of an agent’s environment that it will act to achieve Agents use KQML to communicate about the contents of their own and others’ VKBs
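
For concreteness, here is an illustrative KQML query composed as a Python string; the performative and parameter names follow common KQML usage (:sender, :receiver, :content, :language, :ontology, :reply-with), while the agent names, content expression, and ontology are invented for the example.

# An illustrative KQML ask-one message asking a stock server for a price.
ask_price = """
(ask-one
  :sender     buyer-agent
  :receiver   stock-server
  :content    (PRICE IBM ?price)
  :reply-with q1
  :language   KIF
  :ontology   NYSE-TICKS)
"""
print(ask_price)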

81 © 1999 Singh & Huhns81 Semantics of Communications What if the agents have different terms for the same concept? same term for different concepts? different class systems or schemas? differences in depth and breadth of coverage?

82 © 1999 Singh & Huhns82 Common Ontologies A shared representation is essential to successful communication and coordination For humans, this is provided by the physical, biological, and social world For computational agents, this is provided by a common ontology: –terms used in communication can be coherently defined –interaction policies can be shared Current efforts are –Cyc –DARPA ontology sharing project –Ontology Base (ISI) –WordNet (Princeton)

83 © 1999 Singh & Huhns83 ECONOMIC ABSTRACTIONS

84 © 1999 Singh & Huhns84 Motivation
The economic abstractions have a lot of appeal as an existing approach to capture complex systems of autonomous agents. By themselves they are incomplete.
They can provide a basis for achieving some of the contractual behaviors, especially in
–helping an agent decide what to do
–helping agents negotiate.

85 © 1999 Singh & Huhns85 Market-Oriented Programming An approach to distributed computation based on market price mechanisms Effective for coordinating the activities of many agents with minimal communication Goal: build computational economies to solve problems of distributed resource allocation

86 © 1999 Singh & Huhns86 Benefits For agents, the state of the world is described completely by current prices Agents do not need to consider the preferences or abilities of others Communications are offers to exchange goods at various prices Under certain conditions, a simultaneous equilibrium of supply and demand across all of the goods is guaranteed to exist, to be reachable via distributed bidding, and to be Pareto optimal.

87 © 1999 Singh & Huhns87 Market Behavior Agents interact by offering to buy or sell quantities of commodities at fixed unit prices At equilibrium, the market has computed the allocation of resources and dictates the activities and consumptions of the agents

88 © 1999 Singh & Huhns88 Agent Behavior Consumer agents: exchange goods Producer agents: transform some goods into other goods Assume individual impact on market is negligible Both types of agents bid so as to maximize profits (or utility)
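
The toy sketch below conveys the flavor of distributed bidding toward an equilibrium price for a single good; the linear demand and supply agents and the price-adjustment rule are assumptions for illustration, not any particular market-oriented programming system.

# Toy price adjustment: raise the price when demand exceeds supply, lower it otherwise.
def demand(price):   # consumer agents' aggregate demand at this price (illustrative)
    return max(0.0, 100.0 - 2.0 * price)

def supply(price):   # producer agents' aggregate supply at this price (illustrative)
    return 3.0 * price

def adjust_price(price=1.0, step=0.05, iters=1000):
    for _ in range(iters):
        excess = demand(price) - supply(price)
        if abs(excess) < 1e-6:
            break
        price += step * excess          # simple price-adjustment rule
    return price

print(round(adjust_price(), 2))         # -> 20.0, where demand equals supply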

89 © 1999 Singh & Huhns89 Principles of Negotiation
Negotiation involves a small set of agents
Actions are propose, counterpropose, support, accept, reject, dismiss, retract
Negotiation requires a common language and common framework (an abstraction of the problem and its solution)
RAD agents exchange DTMS justifications and class information
Specialized negotiation knowledge may be encoded in third-party agents
The only negotiation formalism is the unified negotiation protocol [Rosenschein, Hebrew U.]

90 © 1999 Singh & Huhns90 Negotiation
A deal is a joint plan between two agents that would satisfy both of their goals
The utility of a deal for an agent is the amount he is willing to pay for it minus the cost to him of the deal
The negotiation set is the set of all deals that have a positive utility for every agent
The possible situations for interaction are
–conflict: the negotiation set is empty
–compromise: agents prefer to be alone, but will agree to a negotiated deal
–cooperative: all deals in the negotiation set are preferred by both agents over achieving their goals alone
[Rosenschein and Zlotkin, 1994]
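
From these definitions the negotiation set can be computed directly; in the sketch below the utilities are supplied as a table rather than derived from joint plans, which is an illustrative simplification.

# Negotiation set: deals with positive utility for every agent (illustrative data).
def negotiation_set(deals, utilities):
    """deals: list of deal names; utilities: dict agent -> dict deal -> utility."""
    return [d for d in deals
            if all(utilities[agent][d] > 0 for agent in utilities)]

deals = ["d1", "d2", "d3"]
utilities = {
    "A1": {"d1": 4, "d2": 0, "d3": 2},
    "A2": {"d1": 1, "d2": 3, "d3": 5},
}
print(negotiation_set(deals, utilities))   # -> ['d1', 'd3']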

91 © 1999 Singh & Huhns91 Negotiation Mechanism
The agents follow a Unified Negotiation Protocol, which applies to any situation. In this protocol,
–the agents negotiate on mixed-joint plans, i.e., plans that bring the world to a new state that is better for both agents
–if there is a conflict, they "flip a coin" to decide which agent gets to satisfy his goal

92 © 1999 Singh & Huhns92 Negotiation Mechanism Attributes Efficiency Stability Simplicity Distribution Symmetry e.g., sharing book purchases, with cost decided by coin flip

93 © 1999 Singh & Huhns93 Third-Party Negotiation Resolves conflicts among antagonistic agents directly or through a mediator Handles multiagent, multiple-issue, multiple-encounter interactions using case-based reasoning and multiattribute utility theory Agents exchange messages that contain –the proposed compromise –persuasive arguments –agreement (or not) with the compromise or argument –requests for additional information –reasons for disagreement –utilities / preferences for the disagreed-upon issues [Sycara]

94 © 1999 Singh & Huhns94 Negotiation in RAD Resolves conflicts among agents during problem solving To negotiate, agents exchange –justifications, which are maintained by a DTMS –class information, which is maintained by a frame system Maintains global consistency, but only where necessary for problem solving

95 © 1999 Singh & Huhns95 Negotiation among Utility-Based Agents
Problem: How to design the rules of an environment so that agents interact productively and fairly, e.g.,
Vickrey’s Mechanism: the lowest bidder wins, but is paid the second-lowest bid (does this motivate telling the truth? is it best for the consumer?)
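
A sketch of the reverse (procurement) form of Vickrey's mechanism described above, in which truthfully bidding one's cost is the sensible strategy; the bidder names and bids are made up.

# Reverse Vickrey auction: lowest bid wins, winner is paid the second-lowest bid.
def vickrey_reverse_auction(bids):
    """bids: dict bidder -> asking price. Returns (winner, payment)."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1])
    winner, _ = ranked[0]
    payment = ranked[1][1]          # second-lowest bid
    return winner, payment

print(vickrey_reverse_auction({"A": 120, "B": 100, "C": 150}))  # -> ('B', 120)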

96 © 1999 Singh & Huhns96 Problem Domain Hierarchy Worth-Oriented Domains State-Oriented Domains Task-Oriented Domains

97 © 1999 Singh & Huhns97 Task-Oriented Domains
A TOD is a tuple <T, A, c>, where T is the set of tasks, A is the set of agents, and c(X) is a monotonic function for the cost of executing the set of tasks X
Examples
–delivery domain: c(X) is the length of the minimal path that visits X
–postmen domain: c(X) is the length of the minimal path plus the return
–database queries: c(X) is the minimal number of needed DB ops

98 © 1999 Singh & Huhns98 TODs
A deal is a redistribution of tasks
Utility of deal d for agent k is U_k(d) = c(T_k) - c(d_k)
The conflict deal, D, is no deal
A deal d is individual rational if d > D
Deal d dominates d’ if d is better for at least one agent and not worse for the rest
Deal d is Pareto optimal if there is no d’ > d
The set of all deals that are individual rational and Pareto optimal is the negotiation set, NS
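
A small illustration of these definitions; the cost function (simply the number of tasks) and the task allocations are invented for the example.

# U_k(d) = c(T_k) - c(d_k): gain from doing the deal's tasks instead of one's own.
def cost(tasks):                       # c(X): here, just the number of tasks to do
    return len(tasks)

def utility(deal, agent, original):
    return cost(original[agent]) - cost(deal[agent])

original = {"A1": {"t1", "t2"}, "A2": {"t3"}}            # T_k: initial allocations
deal     = {"A1": {"t1"},       "A2": {"t2", "t3"}}      # d_k: a proposed redistribution

for agent in original:
    print(agent, utility(deal, agent, original))
# A1 gains (1), A2 loses (-1): the deal is not individual rational for A2,
# so it lies outside the negotiation set.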

99 © 1999 Singh & Huhns99 Monotonic Concession Protocol
–Each agent proposes a deal
–If one agent matches or exceeds what the other demands, the negotiation ends
–Else, the agents propose the same or more (concede)
–If no agent concedes, the negotiation ends with the conflict deal
This protocol is simple, symmetric, distributed, and guaranteed to end in a finite number of steps in any TOD. What strategy should an agent adopt?
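
A sketch of the protocol loop; the strategies and utility functions are passed in as parameters, and the agreement and concession tests are simplified for illustration.

# Monotonic concession loop (sketch): strategyN(my_prev_offer, their_offer) -> new offer.
def monotonic_concession(strategy1, strategy2, utility1, utility2, conflict_deal, max_rounds=100):
    offer1, offer2 = strategy1(None, None), strategy2(None, None)   # opening offers
    for _ in range(max_rounds):
        # Agreement: one agent's offer gives the other at least what it demands for itself.
        if utility2(offer1) >= utility2(offer2):
            return offer1
        if utility1(offer2) >= utility1(offer1):
            return offer2
        new1, new2 = strategy1(offer1, offer2), strategy2(offer2, offer1)
        if new1 == offer1 and new2 == offer2:       # nobody concedes
            return conflict_deal
        offer1, offer2 = new1, new2
    return conflict_deal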

100 © 1999 Singh & Huhns100 Zeuthen Strategy
Offer the deal that is best for yourself among all deals in NS
Calculate the risks of self and opponent:
Risk_1 = (utility A1 loses by accepting A2’s offer) / (utility A1 loses by causing a conflict)
If your risk is smaller than your opponent’s, offer a minimal sufficient concession (a sufficient concession makes the opponent’s risk less than yours); else offer your original deal
If both use this strategy, they will agree on the deal that maximizes the product of their utilities (Pareto optimal)
The strategy is not stable (when both should concede on the last step, but it’s sufficient for only one to concede, then one can benefit by dropping the strategy)
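
The risk calculation can be written directly from the ratio above; the numeric utilities below are invented for illustration.

# Zeuthen risk = (utility I lose by accepting their offer) / (utility I lose by conflict).
def risk(my_util_my_offer, my_util_their_offer, my_util_conflict=0.0):
    loss_by_accepting = my_util_my_offer - my_util_their_offer
    loss_by_conflict = my_util_my_offer - my_util_conflict
    return 1.0 if loss_by_conflict == 0 else loss_by_accepting / loss_by_conflict

# Agent 1 values its own offer at 10 and the opponent's offer at 4; conflict is worth 0.
# Agent 2 values its own offer at 8 and agent 1's offer at 6.
r1 = risk(10, 4)    # 0.6
r2 = risk(8, 6)     # 0.25
print("agent", 2 if r2 < r1 else 1, "should concede")   # the lower-risk agent concedes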

101 © 1999 Singh & Huhns101 Deception-Free Protocols
The Zeuthen strategy requires full knowledge of
–tasks
–protocol
–strategies
–commitments
Possible deceptions: hidden tasks, phantom tasks, decoy tasks
[Diagram: example in which A1 hides one of its tasks from A2]

102 © 1999 Singh & Huhns102 8. SYNTHESIS

103 © 1999 Singh & Huhns103 Research Trends
Economic: Sycara, Rosenschein, Sandholm, Lesser
Social: organizational theory and open systems (Hewitt, Gasser, Castelfranchi)
Ethical:
Legal:
Communication:
Coordination:
Collaboration:
Formal Methods: Singh, Wooldridge, Jennings, Georgeff

104 © 1999 Singh & Huhns104 Interaction-Oriented Software Development Active modules, representing real-world objects Declarative specification (“what,” not “how”) Modules that volunteer Modules hold beliefs about the world, especially about themselves and others

105 © 1999 Singh & Huhns What is IOP? A collection of abstractions and techniques for programming MAS. Classified into three layers of mechanisms : –coordination: living in a shared environment –commitment: organizational or social coherence (adds stability over time) –collaboration: high-level interactions combining mental and social abstractions.

106 © 1999 Singh & Huhns IOP Contribution Enhances and formalizes ideas from different disciplines Separates them out in an explicit conceptual metamodel to use as a basis for programming and for programming methodologies Makes them programmable

107 © 1999 Singh & Huhns Benefits of IOP Like all conceptual modeling, IOP offers a higher-level starting point than traditionally available. Specifically: –key concepts of coordination, commitment, collaboration as first-class concepts that can be applied directly –aspects of the underlying infrastructure are separated, leading to improved portability.

108 © 1999 Singh & Huhns108 Representations for IOP Functionalities, which typically exist –effected by humans in some unprincipled way –hard-coded in applications –buried in operating procedures and manuals Information, which typically exists –in data stores –in the environment or with interacting entities. Problem: interactive aspects are not modeled.

109 © 1999 Singh & Huhns109 Lessons Advanced abstractions –must be simple –must reflect true status

110 © 1999 Singh & Huhns110 Challenges Formal semantics Operational semantics related to formal semantics Tools Design rules capturing useful patterns, but respecting the formal semantics

111 © 1999 Singh & Huhns111 To Probe Further
Readings in Agents (Huhns & Singh, eds.), Morgan Kaufmann, 1997: http://www.mkp.com/books_catalog/1-55860-495-2.asp
DAI mailing list: DAI-List-Request@ece.sc.edu
Journal of Autonomous Agents and Multiagent Systems
International Conference on Multiagent Systems (ICMAS)
International Joint Conference on Artificial Intelligence
International Workshop on Agent Theories, Architectures, and Languages (ATAL)

