CASE − Cognitive Agents for Social Environments


1 CASE − Cognitive Agents for Social Environments
Yu Zhang
Trinity University | Laboratory for Distributed Intelligent Agent Systems

2 Outline
Introduction
CASE — Agent-Level Solution
CASE — Society-Level Solution
Experiment
A Case Study
Conclusion and Future Work

3 Introduction
Multi-Agent Systems
MAS for Social Simulation
Research Goal
Existing Approaches
Our Approach

4 Multi-Agent Systems
Agents form societies. Interactions are high-frequency and decentralized, and each agent maintains its own knowledge base (KB).
[Diagram: a society of agents, each with its own KB, linked by decentralized interactions]

5 Simulating Social Environments
1992: First ‘Simulating Societies’ workshop held.
1995: Series of workshops held in Italy and the USA; the field becomes more theoretically and methodologically grounded.
1996: The Santa Fe Institute becomes well known for developing ideas about complexity and studying them with computer simulations of real-world phenomena.
1997: First international conference on computer simulation and the social sciences; the hope is that computer simulation will achieve a disciplinary synthesis among the social sciences.
1998: Journal of Artificial Societies and Social Simulation first published.

6 Research Goal
Understanding how the decentralized interactions of agents can generate social conventions.

7 Current Approaches
Society level: focuses on static social structures.
Agent level: focuses on self-interested agents.

8 Our Approach
Cognitive Agents for Social Environments (CASE)
[Diagram: agents embedded in a network within an environment, linked to it by perception and action; social conventions operate at the society level, bounded rationality at the agent level]

9 Related Work
[Diagram: related systems arranged along an agent-complexity axis, from top-down cognitive architectures (COGENT, SOAR, ACT-R, CLARION) to bottom-up social simulations (Sugarscape, Schelling’s segregation model). CASE targets the meso level: agent behavior realistic but not too computationally complex.]

10 Outline
Background and Objective
CASE — Agent-Level Solution
CASE — Society-Level Solution
Experiment
A Case Study
Conclusion and Future Work

11 Our Approach
Cognitive Agents for Social Environments (CASE)
[Diagram repeated from slide 8, highlighting bounded rationality at the agent level]

12 Rationality vs. Bounded Rationality
Rationality means that agents calculate a utility value for the outcome of every action.
Bounded rationality means that agents use intuition and heuristics to determine whether one action is better than another, as in the sketch below.
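A minimal sketch of the contrast in Python. The `utility` function and the satisficing aspiration level are illustrative assumptions; the slide speaks of intuition and heuristics in general, not this particular heuristic.

```python
import random

def rational_choice(actions, utility):
    # Full rationality: score the outcome of every available action
    # and pick the one with the highest utility value.
    return max(actions, key=utility)

def bounded_choice(actions, utility, aspiration):
    # Bounded rationality (satisficing heuristic, one example of many):
    # examine actions in arbitrary order and stop at the first one that
    # clears an aspiration level, never enumerating the full action space.
    shuffled = random.sample(list(actions), len(actions))
    for a in shuffled:
        if utility(a) >= aspiration:
            return a
    return shuffled[0]  # nothing satisficed; fall back to a candidate
```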

13 Daniel Kahneman
[Photo: Daniel Kahneman, courtesy Google Images]

14 Two-Phase Decision Model
Phase I, Editing: framing sets the evaluation criteria, anchoring builds selective attention, and accessibility determines state similarity.
Phase II, Evaluation: the decision mode selects between two modes of function, intuition and deliberation, which produce the action.

15 Phase I - Editing, Phase II - Evaluation

16 Phase I - Editing
Framing: decide the evaluation criteria based on one’s attitude toward potential risk and reward.
Anchoring: build selective attention on information. Each piece of information i ∈ I (where I is the set of all information) has a salience in the context of the current decision; the anchored information is I* = {i ∈ I | salience(i) > threshold}.
Accessibility: determine the similarity (the accessibility relation ~) between the current state s_t and a memory state s_m using only I*, as in the sketch below.
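A minimal sketch of the editing phase, assuming states are dictionaries from pieces of information to their values and that salience scores are precomputed (both representation choices are assumptions):

```python
def anchor(salience, threshold):
    # Anchoring: keep only the information whose salience in the current
    # decision context exceeds the threshold,
    # i.e. I* = {i in I | salience(i) > threshold}.
    return {i for i, s in salience.items() if s > threshold}

def accessible(s_t, s_m, anchored):
    # Accessibility: compare the current state s_t with a memory state s_m
    # only on the anchored information I*, ignoring everything else.
    return all(s_t.get(i) == s_m.get(i) for i in anchored)
```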

17 Phase II - Evaluation
Intuition: if s_t ~ s_m, the optimal decision policies π*(s_t) and π*(s_m) should be close too, so a remembered policy can be reused.
Deliberation: optimize π*(s_t) directly, e.g. by maximizing the expected value function V^π(s_t) = E[Σ_k γ^k r_{t+k}] of a given policy π, where γ is the time discount factor. Both modes are sketched below.
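A minimal sketch of the two modes, assuming hashable states and a `deliberate` callback standing in for whatever optimizer is used (e.g. Q-learning):

```python
def evaluate(s_t, memory, similar, deliberate):
    # Intuition: if the current state is accessible from a remembered
    # state (s_t ~ s_m), reuse that state's cached action, since
    # pi*(s_t) and pi*(s_m) should be close.
    for s_m, cached_action in memory.items():
        if similar(s_t, s_m):
            return cached_action
    # Deliberation: no similar memory, so optimize pi*(s_t) directly
    # and cache the result for future intuitive reuse.
    action = deliberate(s_t)
    memory[s_t] = action
    return action
```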

18 Outline
Background and Objective
CASE — Agent-Level Solution
CASE — Society-Level Solution
Experiment
A Case Study
Conclusion and Future Work

19 Our Approach
Cognitive Agents for Social Environments (CASE)
[Diagram repeated from slide 8, highlighting social conventions at the society level]

20 Social Convention
A social law is a restriction on the set of actions available to agents. A social convention is a social law that restricts the agent’s behavior to one particular action.

21 Hard-Wired Design vs. Emergent Design
Hard-wired design means that social conventions are given to agents offline, before the simulation begins.
Emergent design is a run-time solution in which agents decide the most suitable conventions given the current state of the system.

22 Generating Social Conventions: Existing Rules
Highest Cumulative Reward (HCR): an agent switches to a new action if the total payoff obtained from that action is higher than the payoff obtained from the currently chosen action. It does not rely on global statistics about the system, and convergence is guaranteed in a 2-person 2-choice symmetric coordination game.
Simple Majority (SM): an agent switches to a new action if it has observed more instances of it in other agents than of the present action. It relies on global statistics about the system, and convergence has not been proved. Both rules are sketched below.
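A minimal sketch of the two rules, assuming each agent tracks a dictionary of cumulative payoffs per action (HCR) or of counts of actions observed in other agents (SM):

```python
def hcr_update(current, cumulative_payoff):
    # Highest Cumulative Reward: switch to the action whose total payoff
    # so far exceeds that of the current action. Only the agent's own
    # payoff history is needed (no global statistics).
    best = max(cumulative_payoff, key=cumulative_payoff.get)
    if cumulative_payoff[best] > cumulative_payoff[current]:
        return best
    return current

def sm_update(current, observed_counts):
    # Simple Majority: switch to the action observed more often in other
    # agents than the present one (relies on statistics about the society).
    best = max(observed_counts, key=observed_counts.get)
    if observed_counts[best] > observed_counts[current]:
        return best
    return current
```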

23 Generating Social Conventions: Our Rule
Generalized Simple Majority (GSM)
Definition. Assume an agent has K neighbors and that K_A of them are in state A. If the agent is in state B, it will change to state A with a probability that increases with K_A.
Theorem. In the limit, the rule becomes deterministic: the agent changes to state A exactly when more than K/2 neighbors are in state A, in a 2-person 2-choice symmetric coordination game.
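The slide's probability expression did not survive extraction; the sketch below uses one plausible functional form, a logistic in the neighbor majority with a sensitivity parameter `beta` (both are assumptions, not necessarily the paper's formula), chosen only because it reproduces the stated deterministic limit:

```python
import math
import random

def gsm_switch(k_a, k, beta):
    # GSM sketch: an agent in state B with k neighbors, k_a of them in
    # state A, switches to A with a probability that grows with the size
    # of the A-majority. As beta -> infinity the logistic becomes a step
    # function, switching exactly when k_a > k / 2.
    p = 1.0 / (1.0 + math.exp(-beta * (2 * k_a - k)))
    return random.random() < p
```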

24 Outline
Background and Objective
CASE — Agent-Level Solution
CASE — Society-Level Solution
Experiment
    Evaluating the Agent-Level Solution
    Evaluating the Society-Level Solution
A Case Study
Conclusion and Future Work

25 Evaluating the Agent-Level Solution
The Ultimatum Game: agent a proposes “I’ll take x, you get 10 - x.” Agent b either accepts (a gets x, b gets 10 - x) or rejects (both get 0).
The Bargaining Game: the same, except that b may also negotiate with a counter-offer “I’ll take y, you get 10 - y,” the roles alternating until one agent accepts; rejecting, or running out of steps, leaves both with 0. Both games are sketched below.
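A minimal sketch of the two payoff structures, assuming a pot of 10 and scripted offers and responses (the list-based driver is illustrative, not the agents' decision logic):

```python
def ultimatum(x, accept):
    # Ultimatum game: a keeps x and offers 10 - x; b accepts or rejects.
    return (x, 10 - x) if accept else (0, 0)

def bargain(offers, responses, max_steps):
    # Bargaining game: proposals alternate between a (even steps) and b
    # (odd steps); each proposer offers to keep `keep` out of 10. The
    # responder may accept, reject (both get 0), or counter-offer;
    # running out of steps also leaves both with 0.
    for step, keep in enumerate(offers[:max_steps]):
        a_proposes = (step % 2 == 0)
        if responses[step] == "accept":
            return (keep, 10 - keep) if a_proposes else (10 - keep, keep)
        if responses[step] == "reject":
            return (0, 0)
        # otherwise "negotiate": the other agent makes the next offer
    return (0, 0)

# Example: a asks for 6, b counters with 5, a accepts -> (5, 5).
print(bargain([6, 5], ["negotiate", "accept"], max_steps=10))
```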

26 Phase I - Editing
Framing: 11 states, $0, $1, …, $10.
Anchoring: use 500 iterations of Q-learning to develop the anchored states.
Accessibility: s_t ~ s_m if the two states agree on the anchored information.

27 Q-Learning
A well-studied reinforcement learning algorithm: it converges to the optimal decision policy, works in unknown environments, and estimates long-term reward from experience.
Update rule: Q(s_t, a_t) ← Q(s_t, a_t) + α [r_{t+1} + γ max_a Q(s_{t+1}, a) - Q(s_t, a_t)], where Q(s_t, a_t) is the old value, max_a Q(s_{t+1}, a) the max future value, r_{t+1} + γ max_a Q(s_{t+1}, a) the expected discounted reward, α the learning rate, and γ the discount factor.
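The update in code, storing Q as a dictionary keyed by (state, action) pairs (a representation choice, not prescribed by the slide; the default α and γ are illustrative):

```python
def q_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.9):
    # One step of the standard Q-learning update:
    #   Q(s, a) <- Q(s, a) + alpha * (r + gamma * max_a' Q(s', a') - Q(s, a))
    max_future = max(Q.get((s_next, a2), 0.0) for a2 in actions)
    old_value = Q.get((s, a), 0.0)
    Q[(s, a)] = old_value + alpha * (r + gamma * max_future - old_value)
```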

28 Phase II - Evaluation
1000 iterations of play with intuitive or deliberative decisions.

29 Results of the Ultimatum Game
[Figure: distributions of accepted split values (number accepted over time) for human players and rational players, and for CASE agents under three conditions: two-phase, intuition only, and deliberation only]

30 Results of the Bargaining Game
[Figure: split value and negotiation size per iteration, for human players and for CASE agents. The human players’ results are reproduced with kind permission of Springer Science.]

31 Outline
Background and Objective
CASE — Agent-Level Solution
CASE — Society-Level Solution
Experiment
    Evaluating the Agent-Level Solution
    Evaluating the Society-Level Solution
A Case Study
Conclusion and Future Work

32 Evaluating the Society-Level Solution
2-person 2-choice symmetric coordination game:

        A    B
  A     1   -1
  B    -1    1

Two optimal decisions: (A, A) and (B, B).
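A direct encoding of the payoff matrix above:

```python
def coord_payoff(a1, a2):
    # Symmetric 2-choice coordination game: both agents earn 1 when they
    # match, (A, A) or (B, B), and -1 when they miscoordinate.
    return (1, 1) if a1 == a2 else (-1, -1)
```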

33 Evaluating the Society-Level Solution
Intuitive and deliberative decisions N agents (N2) with random initial state, A or B, with probability 50% Agents connected by classic networks or complex networks Evaluating two rules Highest cumulative reward (HCR) Generalized simple majority (GSM) Performance measure: T90% The time it takes that 90% of the agents use the same convention Trinity University | Laboratory for Distributed Intelligent Agent Systems
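A sketch of the T90% measure, assuming the simulation records each step's list of agent actions:

```python
from collections import Counter

def t90(history):
    # T90%: the first time step at which at least 90% of the agents use
    # the same convention; `history` is a list of per-step action lists.
    for t, actions in enumerate(history):
        _, count = Counter(actions).most_common(1)[0]
        if count >= 0.9 * len(actions):
            return t
    return None  # threshold never reached in the recorded history
```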

34 Classic Networks
Complete network K_N: nodes fully connected to each other (shown: N = 8).
Lattice ring C_{N,K}: each node fully connected to its K nearest neighbors, giving local clustering (shown: N = 100, K = 6).
Random network R_{N,P}: nodes connected with equal probability (shown: N = 100, P = 5%).
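These topologies can be generated with networkx (an implementation assumption; the tooling behind the original experiments is not stated):

```python
import networkx as nx

# The slide's classic topologies, at the sizes shown there.
complete = nx.complete_graph(8)                  # K_N: every pair of nodes linked
lattice  = nx.watts_strogatz_graph(100, 6, 0.0)  # C_{N,K}: ring lattice, K nearest neighbors, no rewiring
random_g = nx.erdos_renyi_graph(100, 0.05)       # R_{N,P}: each edge present independently with probability P
```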

35 Complex Networks
Small-world network W_{N,K,P}: start with a C_{N,K} graph and rewire every link at random with probability P, combining local clustering and randomness (shown: N = 100, K = 6, P = 5%).
Scale-free network S_{N,K,γ}: the degree distribution is a power law, P(k) ~ k^(-γ); large networks can self-organize into a scale-free state, independent of the agents (shown: N = 100, K = 6, γ = 2.5).
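The complex topologies with networkx as well; note that networkx has no (N, K, γ) scale-free constructor, so the sketch uses Barabási-Albert preferential attachment, which fixes γ near 3 rather than the slide's 2.5:

```python
import networkx as nx

# Small-world W_{N,K,P}: ring lattice with each edge rewired with
# probability P, mixing local clustering with random shortcuts.
small_world = nx.watts_strogatz_graph(100, 6, 0.05)

# Scale-free (approximation): preferential attachment yields a power-law
# degree distribution, illustrating but not matching S_{N,K,gamma=2.5}.
scale_free = nx.barabasi_albert_graph(100, 3)
```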

36 Evaluating Highest Cumulative Reward

  Name  Topology              Size × 10^3 (N)              Parameters
  S_N   Scale-free network    1, 2.5, 5, 7.5, 10, 25, 50   γ = 2.5, <K> = 12
  C_N   Lattice ring          0.1, 0.25, 0.5, 0.75, 1      K = 12
  K_N   Complete network                                   none needed
  W_N   Small-world network   1, 2.5, 5, 7.5, 10           P = 0.1, <K> = 12

Scaling of T90%: lattice ring T90% = O(N^2.5), small-world T90% = O(N^1.5), scale-free and complete T90% = O(N log N).

37 Evaluating Generalized Simple Majority

  Name  Topology              Size × 10^3 (N)                        Parameters
  C_N   Lattice ring          0.1, 0.25, 0.5, 0.75                   K = 12
  K_N   Complete network      1, 2.5, 5, 7.5, 10, 25, 50, 75, 100    none needed
  S_N   Scale-free network                                           γ = 2.5, <K> = 12; γ = 3.0, <K> = 12
  W_N   Small-world network   1, 2.5, 5, 7.5, 10, 25, 50             P = 0.1, <K> = 12

Scaling of T90%: lattice ring T90% = O(N^2.5), small-world T90% = O(N^1.5), scale-free and complete T90% = O(N).

38 Evaluating HCR vs. GSM

  Topology             N      <K>   P
  Small-world network  10^4   12    0.05, 0.09, …, 0.9

[Figure: T90% versus the rewiring probability P, spanning the lattice-ring limit (small P) to the random-network limit (large P); the marked point is P = 0.09.]

