Multi-Agent Systems: Overview and Research Directions CMSC 477/677 March 15, 2005 Prof. Marie desJardins

Outline
- Multi-Agent Systems
  - Cooperative multi-agent systems
  - Competitive multi-agent systems
- MAS Research Directions
  - Organizational structures
  - Communication limitations
  - Learning in multi-agent systems

Multi-agent systems
- Jennings et al.'s key properties:
  - Situated
  - Autonomous
  - Flexible:
    - Responsive to dynamic environment
    - Pro-active / goal-directed
    - Social interactions with other agents and humans
- Research question: How do we design agents to interact effectively to solve a wide range of problems in many different environments?
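As a minimal concrete illustration of these properties (my own sketch, not from the slides), the Python interface below models a situated, autonomous agent with a perceive/act cycle and a hook for social interaction; the environment methods observe() and apply() are assumed placeholders, not part of any particular framework.

```python
from abc import ABC, abstractmethod

class Agent(ABC):
    """Minimal agent interface: situated (perceives an environment),
    autonomous (chooses its own actions), and social (can receive messages)."""

    @abstractmethod
    def perceive(self, observation):
        """Update internal state from the latest observation of the environment."""

    @abstractmethod
    def act(self):
        """Choose the next action (reactive or pro-active / goal-directed)."""

    def receive(self, sender, message):
        """Handle a message from another agent or a human (social interaction)."""

def run(environment, agents, steps=100):
    """Simple synchronous sense-act loop over all agents."""
    for _ in range(steps):
        for agent in agents:
            agent.perceive(environment.observe(agent))   # assumed environment API
            environment.apply(agent, agent.act())        # assumed environment API
```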

Aspects of multi-agent systems
- Cooperative vs. competitive
- Homogeneous vs. heterogeneous
- Macro vs. micro
- Interaction protocols and languages
- Organizational structure
- Mechanism design / market economics
- Learning

Topics in multi-agent systems
- Cooperative MAS:
  - Distributed problem solving: Less autonomy
  - Distributed planning: Models for cooperation and teamwork
- Competitive or self-interested MAS:
  - Distributed rationality: Voting, auctions
  - Negotiation: Contract nets

Typical (cooperative) MAS domains
- Distributed sensor network establishment
- Distributed vehicle monitoring
- Distributed delivery

Distributed sensing
- Track vehicle movements using multiple sensors
- Distributed sensor network establishment:
  - Locate sensors to provide the best coverage
  - Centralized vs. distributed solutions
- Distributed vehicle monitoring:
  - Control sensors and integrate results to track vehicles as they move from one sensor's "region" to another's
  - Centralized vs. distributed solutions
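The sensor-placement subproblem can be made concrete with a small sketch (my own illustration, not from the slides): a centralized greedy heuristic that picks up to k sensor sites to maximize target coverage. The coverage(site, target) predicate is an assumed, domain-specific input.

```python
def greedy_sensor_placement(candidate_sites, targets, coverage, k):
    """Greedy heuristic for the establishment step: choose up to k sensor
    sites that cover the most targets. coverage(site, target) -> bool is an
    assumed, domain-specific predicate (e.g., target within sensing range)."""
    chosen, covered = [], set()
    for _ in range(k):
        best_site, best_gain = None, 0
        for site in candidate_sites:
            if site in chosen:
                continue
            gain = sum(1 for t in targets
                       if t not in covered and coverage(site, t))
            if gain > best_gain:
                best_site, best_gain = site, gain
        if best_site is None:        # no remaining site adds coverage; stop early
            break
        chosen.append(best_site)
        covered |= {t for t in targets if coverage(best_site, t)}
    return chosen

# Usage: sites and targets as 2-D points, coverage by Euclidean range.
sites = [(0, 0), (5, 0), (10, 0)]
targets = [(1, 1), (4, 0), (6, 1), (9, 0)]
in_range = lambda s, t, r=3.0: (s[0] - t[0])**2 + (s[1] - t[1])**2 <= r * r
print(greedy_sensor_placement(sites, targets, in_range, k=2))  # [(5, 0), (0, 0)]
```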

Distributed delivery
- Logistics problem: move goods from original locations to destination locations using multiple delivery resources (agents)
- Dynamic, partially accessible, nondeterministic environment (goals, situation, agent status)
- Centralized vs. distributed solution

Cooperative Multi-Agent Systems

Distributed problem solving/planning
- Cooperative agents, working together to solve complex problems with local information
- Partial Global Planning (PGP): A planning-centric distributed architecture
- SharedPlans: A formal model for joint activity
- Joint Intentions: Another formal model for joint activity
- STEAM: Distributed teamwork; influenced by Joint Intentions and SharedPlans

Distributed problem solving
- Problem solving in the classical AI sense, distributed among multiple agents
  - That is, formulating a solution/answer to some complex question
- Agents may be heterogeneous or homogeneous
- DPS implies that agents must be cooperative (or, if self-interested, then rewarded for working together)

Requirements for cooperative activity
- (Grosz) "Bratman (1992) describes three properties that must be met to have 'shared cooperative activity':
  - Mutual responsiveness
  - Commitment to the joint activity
  - Commitment to mutual support"

Joint intentions
- Theoretical framework for joint commitments and communication
- Intention: Commitment to perform an action while in a specified mental state
- Joint intention: Shared commitment to perform an action while in a specified group mental state
- Communication: Required/entailed to establish and maintain mutual beliefs and joint intentions

SharedPlans
- A SharedPlan for a group action specifies beliefs about how to do the action and its subactions
- Formal model captures intentions and commitments towards the performance of individual and group actions
- Components of a collaborative plan (p. 5):
  - Mutual belief of a (partial) recipe
  - Individual intentions-to perform the actions
  - Individual intentions-that collaborators succeed in their subactions
  - Individual or collaborative plans for subactions
- Very similar to joint intentions
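As a rough illustration only (the field names below are mine, not Grosz and Kraus's notation), the listed components of a collaborative plan could be represented by a data structure like this:

```python
from dataclasses import dataclass, field

@dataclass
class SharedPlan:
    """Illustrative container mirroring the components listed above."""
    group_action: str
    recipe: list[str]                                   # mutually believed (partial) recipe: subactions
    intentions_to: dict[str, str] = field(default_factory=dict)      # agent -> subaction it intends to perform
    intentions_that: list[str] = field(default_factory=list)         # propositions agents intend *that* hold (collaborators succeed)
    subplans: dict[str, "SharedPlan"] = field(default_factory=dict)  # subaction -> individual or collaborative plan

    def is_complete(self) -> bool:
        """A plan is complete (vs. partial) when every recipe step has both
        an intending agent and a subplan."""
        return all(s in self.intentions_to.values() and s in self.subplans
                   for s in self.recipe)
```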

STEAM: Now we're getting somewhere!
- Implementation of joint intentions theory
- Built in Soar framework
- Applied to three "real" domains
- Many parallels with SharedPlans
- General approach:
  - Build up a partial hierarchy of joint intentions
  - Monitor team and individual performance
  - Communicate when need is implied by changing mental state & joint intentions
- Key extension: Decision-theoretic model of communication selection (Teamcore)
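The key extension can be summarized with a tiny decision-theoretic sketch (the idea only, not STEAM/Teamcore's actual formulation): communicate exactly when the expected cost of miscoordination outweighs the cost of the message.

```python
def should_communicate(p_miscoordination, cost_of_miscoordination, cost_of_message):
    """Send a message only when its expected benefit (avoided miscoordination)
    exceeds its cost. All three inputs are assumed to be estimated elsewhere."""
    expected_benefit = p_miscoordination * cost_of_miscoordination
    return expected_benefit > cost_of_message

# e.g., a 20% chance of a costly coordination failure vs. a cheap message:
print(should_communicate(0.2, 10.0, 0.5))  # True
```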

Competitive Multi-Agent Systems

Distributed rationality
- Techniques to encourage/coax/force self-interested agents to play fairly in the sandbox
- Voting: Everybody's opinion counts (but how much?)
- Auctions: Everybody gets a chance to earn value (but how to do it fairly?)
- Contract nets: Work goes to the best bidder
- Issues:
  - Global utility
  - Fairness
  - Stability
  - Cheating and lying

Pareto optimality
- S is a Pareto-optimal solution iff ∀S': (∃x U_x(S') > U_x(S)) → (∃y U_y(S') < U_y(S))
  - i.e., if some agent X is better off in S', then some agent Y must be worse off
- Social welfare, or global utility, is the sum of all agents' utilities
- If S maximizes social welfare, it is also Pareto-optimal (but not vice versa)
- [Figure: candidate solutions plotted in X's-utility vs. Y's-utility space. Which solutions are Pareto-optimal? Which solutions maximize global utility (social welfare)?]
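To make the definition concrete, here is a small sketch (my own illustration) that filters a finite set of candidate solutions, each represented as a tuple of per-agent utilities, down to the Pareto-optimal ones and the social-welfare maximizers.

```python
def dominates(u_prime, u):
    """u_prime Pareto-dominates u: someone is strictly better off
    and no one is worse off."""
    return (all(a >= b for a, b in zip(u_prime, u))
            and any(a > b for a, b in zip(u_prime, u)))

def pareto_optimal(solutions):
    """Solutions not Pareto-dominated by any other solution.
    Each solution is a tuple of per-agent utilities (U_x, U_y, ...)."""
    return [u for u in solutions
            if not any(dominates(v, u) for v in solutions if v != u)]

def social_welfare_maximizers(solutions):
    """Solutions maximizing the sum of utilities (global utility)."""
    best = max(sum(u) for u in solutions)
    return [u for u in solutions if sum(u) == best]

# Example: four candidate outcomes for agents X and Y.
candidates = [(3, 3), (0, 5), (5, 0), (1, 1)]
print(pareto_optimal(candidates))             # [(3, 3), (0, 5), (5, 0)]
print(social_welfare_maximizers(candidates))  # [(3, 3)]
```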

Stability
- If an agent can always maximize its utility with a particular strategy (regardless of other agents' behavior), then that strategy is dominant
- A set of agent strategies is in Nash equilibrium if each agent's strategy S_i is locally optimal, given the other agents' strategies
  - No agent has an incentive to change strategies
  - Hence this set of strategies is locally stable

Prisoner's Dilemma (payoffs listed as A's payoff, B's payoff; A picks a row, B picks a column)

                  B: Cooperate    B: Defect
  A: Cooperate       3, 3           0, 5
  A: Defect          5, 0           1, 1

Prisoner's Dilemma: Analysis
- Pareto-optimal and social-welfare-maximizing solution: Both agents cooperate
- Dominant strategy and Nash equilibrium: Both agents defect (see the payoff matrix above)
- Why?
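Here is a small sketch (my own check, not part of the slides) that verifies the analysis mechanically using the payoff matrix above: Defect is each player's dominant strategy, and (Defect, Defect) is the only pure-strategy Nash equilibrium, even though it is Pareto-dominated by (Cooperate, Cooperate).

```python
STRATS = ["Cooperate", "Defect"]
PAYOFF = {  # (A's strategy, B's strategy) -> (A's payoff, B's payoff)
    ("Cooperate", "Cooperate"): (3, 3), ("Cooperate", "Defect"): (0, 5),
    ("Defect", "Cooperate"): (5, 0),    ("Defect", "Defect"): (1, 1),
}

def payoff(player, a, b):
    """Payoff to player 0 (A) or player 1 (B) for the profile (a, b)."""
    return PAYOFF[(a, b)][player]

def best_responses(player, opponent_strategy):
    """Strategies maximizing the player's payoff against a fixed opponent strategy."""
    def u(s):
        a, b = (s, opponent_strategy) if player == 0 else (opponent_strategy, s)
        return payoff(player, a, b)
    best = max(u(s) for s in STRATS)
    return {s for s in STRATS if u(s) == best}

def dominant_strategy(player):
    """A strategy that is a best response to every opponent strategy."""
    common = set.intersection(*(best_responses(player, o) for o in STRATS))
    return next(iter(common), None)

def pure_nash_equilibria():
    """Profiles where neither player gains by deviating unilaterally."""
    return [(a, b) for a in STRATS for b in STRATS
            if a in best_responses(0, b) and b in best_responses(1, a)]

print(dominant_strategy(0), dominant_strategy(1))  # Defect Defect
print(pure_nash_equilibria())                      # [('Defect', 'Defect')]
```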

Voting
- How should we rank the possible outcomes, given individual agents' preferences (votes)?
- Six desirable properties (which can't all simultaneously be satisfied):
  - Every combination of votes should lead to a ranking
  - Every pair of outcomes should have a relative ranking
  - The ranking should be asymmetric and transitive
  - The ranking should be Pareto-optimal
  - Irrelevant alternatives shouldn't influence the outcome
  - Share the wealth: No agent should always get their way

Voting protocols
- Plurality voting: the outcome with the highest number of votes wins
  - Irrelevant alternatives can change the outcome: The Ross Perot factor
- Borda voting: Agents' rankings are used as weights, which are summed across all agents
  - Agents can "spend" high rankings on losing choices, making their remaining votes less influential
- Binary voting: Agents rank sequential pairs of choices ("elimination voting")
  - Irrelevant alternatives can still change the outcome
  - Very order-dependent
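Plurality and Borda counting are easy to state in code. The sketch below (my own illustration; tie-breaking is arbitrary) also shows that the two protocols can disagree on the same set of ballots.

```python
from collections import Counter

def plurality(ballots):
    """Plurality: each ballot is a ranking (best first); only first choices count."""
    return Counter(b[0] for b in ballots).most_common(1)[0][0]

def borda(ballots):
    """Borda count: with n candidates, a candidate ranked in position p
    (0 = best) scores n - 1 - p points; scores are summed across ballots."""
    scores = Counter()
    for b in ballots:
        n = len(b)
        for position, candidate in enumerate(b):
            scores[candidate] += n - 1 - position
    return max(scores, key=scores.get)

# Example: 3 candidates, 7 voters (rankings listed best-to-worst).
ballots = [["A", "B", "C"], ["A", "B", "C"], ["A", "C", "B"],
           ["B", "C", "A"], ["B", "C", "A"], ["C", "B", "A"], ["C", "B", "A"]]
print(plurality(ballots))  # 'A' (3 first-place votes)
print(borda(ballots))      # 'B' (Borda scores: A=6, B=8, C=7)
```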

Auctions
- Many different types and protocols
- All of the common protocols yield Pareto-optimal outcomes
- But... bidders can agree to artificially lower prices in order to cheat the auctioneer
- What about when the colluders cheat each other? (Now that's really not playing nicely in the sandbox!)
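The slide does not commit to a particular protocol, so as one illustration the sketch below runs a single-item sealed-bid auction with either first-price or second-price (Vickrey) settlement, and shows how a bidder ring that agrees not to compete depresses the price paid to the auctioneer.

```python
def sealed_bid_auction(bids, pricing="second"):
    """Single-item sealed-bid auction sketch. bids: {bidder: bid amount}.
    pricing="first": the winner pays its own bid; pricing="second": the
    winner pays the second-highest bid (Vickrey)."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, top_bid = ranked[0]
    price = top_bid if pricing == "first" or len(ranked) == 1 else ranked[1][1]
    return winner, price

# Honest bids vs. a bidder ring colluding to depress the price.
honest = {"a1": 10, "a2": 9, "a3": 7}
collusive = {"a1": 10, "a2": 2, "a3": 2}   # a2 and a3 agree not to compete
print(sealed_bid_auction(honest))     # ('a1', 9)
print(sealed_bid_auction(collusive))  # ('a1', 2) -- the auctioneer is cheated
```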

Contract nets
- Simple form of negotiation
- Announce tasks, receive bids, award contracts
- Many variations: directed contracts, timeouts, bundling of contracts, sharing of contracts, ...
- There are also more sophisticated dialogue-based negotiation models
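A minimal announce/bid/award round in the spirit of the contract net protocol might look like the sketch below (illustrative only; real variants add directed contracts, timeouts, counter-proposals, and contract sharing). Here the "best" bid is the lowest estimated cost.

```python
def contract_net_round(task, contractors):
    """One announce/bid/award round. contractors: {name: cost_function},
    where cost_function(task) returns an estimated cost, or None to decline."""
    # 1. Announce the task and collect bids.
    bids = {}
    for name, cost_fn in contractors.items():
        cost = cost_fn(task)
        if cost is not None:
            bids[name] = cost
    if not bids:
        return None  # no acceptable bids; re-announce or decompose the task
    # 2. Award the contract to the best (lowest-cost) bidder.
    winner = min(bids, key=bids.get)
    return winner, bids[winner]

# Usage: three delivery agents bid on a delivery task by estimated cost.
contractors = {
    "truck1": lambda task: 12.0,
    "truck2": lambda task: 8.5,
    "truck3": lambda task: None,   # declines (e.g., no spare capacity)
}
print(contract_net_round("deliver package", contractors))  # ('truck2', 8.5)
```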

MAS Research Directions

Agent organizations
- "Large-scale problem solving technologies"
- Multiple (human and/or artificial) agents
- Goal-directed (goals may be dynamic and/or conflicting)
- Affects and is affected by the environment
- Has knowledge, culture, memories, history, and capabilities (distinct from individual agents)
- Legal standing is distinct from a single agent's
- Q: How are MAS organizations different from human organizations?

Organizational structures
- Exploit structure of task decomposition
- Establish "channels of communication" among agents working on related subtasks
- Organizational structure:
  - Defines (or describes) roles, responsibilities, and preferences
  - Used to identify control and communication patterns:
    - Who does what for whom: Where to send which task announcements/allocations
    - Who needs to know what: Where to send which partial or complete results
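As a hypothetical illustration of using the structure to identify control and communication patterns, the sketch below routes task announcements to agents filling the responsible role and forwards results to the downstream role; all of the role and flow tables are invented for the example.

```python
# Invented example tables: which agent fills which role, which role handles
# which subtask type, and which role consumes which role's results.
ROLE_OF_AGENT = {"a1": "sensor", "a2": "sensor", "a3": "tracker", "a4": "planner"}
ROLE_FOR_SUBTASK = {"establish_coverage": "sensor",
                    "integrate_tracks": "tracker",
                    "plan_routes": "planner"}
RESULTS_FLOW_TO = {"sensor": "tracker", "tracker": "planner"}

def recipients_for_announcement(subtask_type):
    """Who does what for whom: agents filling the responsible role."""
    role = ROLE_FOR_SUBTASK[subtask_type]
    return [a for a, r in ROLE_OF_AGENT.items() if r == role]

def recipients_for_result(sender):
    """Who needs to know what: agents in the role downstream of the sender's role."""
    downstream = RESULTS_FLOW_TO.get(ROLE_OF_AGENT[sender])
    return [a for a, r in ROLE_OF_AGENT.items() if r == downstream]

print(recipients_for_announcement("integrate_tracks"))  # ['a3']
print(recipients_for_result("a1"))                      # ['a3']
```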

Communication models
- Theoretical models: Speech act theory
- Practical models:
  - Shared languages like KIF, KQML, DAML
  - Service models like DAML-S
  - Social convention protocols

Communication strategies
- Send only relevant results, at the right time
  - Conserves bandwidth and reduces network congestion and the computational overhead of processing data
- Push vs. pull
- Reliability of communication (arrival and latency of messages)
- Use organizational structures, task decomposition, and/or analysis of each agent's task to determine relevance

Communication structures
- Connectivity (network topology) strongly influences the effectiveness of an organization
- Changes in connectivity over time can impact team performance:
  - Moving out of communication range → coordination failures
  - Changes in network structure → reduced (or increased) bandwidth, increased (or reduced) latency

Learning in MAS
- Emerging field investigating how teams of agents can learn, individually and as groups
- Distributed reinforcement learning: Behave as an individual, receive team feedback, and learn to individually contribute to team performance
  - Iteratively allocate "credit" for group performance to individual decisions
- Genetic algorithms: Evolve a society of agents (survival of the fittest)
- Strategy learning: In market environments, learn other agents' strategies
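One common concrete reading of "behave as an individual, receive team feedback" is a set of independent Q-learners updated with a shared team reward. The sketch below is illustrative only: env.observe and env.step are assumed interfaces, and credit assignment is simply "every agent receives the team reward".

```python
import random
from collections import defaultdict

class TeamQLearner:
    """One agent's tabular Q-learner, trained on the shared team reward so
    that it learns to contribute individually to team performance."""

    def __init__(self, actions, alpha=0.1, gamma=0.95, epsilon=0.1):
        self.q = defaultdict(float)   # (state, action) -> estimated value
        self.actions, self.alpha, self.gamma, self.epsilon = actions, alpha, gamma, epsilon

    def act(self, state):
        if random.random() < self.epsilon:                       # explore
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])  # exploit

    def update(self, state, action, team_reward, next_state):
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        target = team_reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (target - self.q[(state, action)])

# Training-loop skeleton (env.observe and env.step are assumed interfaces):
# agents = [TeamQLearner(actions) for _ in range(n_agents)]
# states = [env.observe(i) for i in range(n_agents)]
# joint_action = [ag.act(s) for ag, s in zip(agents, states)]
# team_reward, next_states = env.step(joint_action)
# for i, ag in enumerate(agents):
#     ag.update(states[i], joint_action[i], team_reward, next_states[i])
```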

Adaptive organizational dynamics
- Potential for change:
  - Change parameters of the organization over time
  - That is, change the structures, add/delete/move agents, ...
- Adaptation techniques:
  - Genetic algorithms
  - Neural networks
  - Heuristic search / simulated annealing
  - Design of new processes and procedures
  - Adaptation of individual agents

Conclusions and directions
- Different types of "multi-agent systems":
  - Cooperative vs. competitive
  - Heterogeneous vs. homogeneous
  - Micro vs. macro
- Lots of interesting/open research directions:
  - Effective cooperation strategies
  - "Fair" coordination strategies and protocols
  - Learning in MAS
  - Resource-limited MAS (communication, ...)