
CIPESS, Centro Interuniversitario di Psicologia ed Economia Sperimentali e Simulative
Agent-based simulation and Artificial Neural Networks
Alessandria, Palazzo Borsalino, May 13, 2010


Slide 1: Agent-based simulation and Artificial Neural Networks
Pietro TERNA, terna@econ.unito.it, http://web.econ.unito.it/terna
CIPESS, Centro Interuniversitario di Psicologia ed Economia Sperimentali e Simulative
Alessandria, Palazzo Borsalino, May 13, 2010

Slide 2
_______________________________________
A general structure for agent-based simulation models
_______________________________________

Slide 3: Social simulation as a computer-based way to execute complex artificial experiments, but also as a way to represent the complexity of the real world. Here, simulation = agent-based models.

Slide 4
_______________________________________
Building models: three ways
_______________________________________

Slide 5: Three different symbol systems:
- verbal argumentation
- mathematics
- computer simulation (agent-based)

Slide 6
_______________________________________
How to use agents in simulation models: a radical view
_______________________________________

Slide 7: The radical characterization of an ABM must be found (1) in the possibility of real (direct or indirect) interaction among the agents, (2) instead of modeling that interaction in a simplified way, with aggregated simultaneous equations. To build type-(1) models we need tools that are sophisticated, but also simple and transparent.
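The type-(1) idea can be sketched in a few lines of Python. This is a toy rule of my own (any pairwise interaction fits the same shape): agents are updated through direct, randomly matched encounters, rather than through one aggregate equation for a summary variable.

```python
import random

def direct_interaction_step(wealth, rng=random):
    """A toy type-(1) step: two randomly chosen agents interact
    directly, one passing a unit of wealth to the other.
    (Illustrative rule only; the point is the direct matching.)"""
    i, j = rng.sample(range(len(wealth)), 2)
    if wealth[i] > 0:
        wealth[i] -= 1
        wealth[j] += 1

# an aggregated-equations model would instead update a single
# summary variable (e.g. mean wealth), losing the heterogeneity
# that emerges from repeated direct interactions
wealth = [10] * 20
for _ in range(1000):
    direct_interaction_step(wealth)
```

After many steps the total is unchanged but the distribution across agents becomes heterogeneous, which is exactly what the aggregate description cannot show.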

Slide 8
_______________________________________
Agent-based simulation model weaknesses
_______________________________________

Slide 9: Weaknesses
- the difficulty of fully understanding the models without studying the program used to run the simulation;
- the necessity of carefully checking the computer code, to prevent inaccurate results arising from mere coding errors;
- the difficulty of systematically exploring the entire set of possible hypotheses in order to infer the best explanation. This is mainly due to the inclusion of the agents' behavioral rules within the hypotheses, which produces a space of possibilities that is difficult, if not impossible, to explore completely.

Slide 10
_______________________________________
We need simple and powerful tools …
_______________________________________

Slide 11:
- Swarm, http://www.swarm.org
- SLAPP, Swarm-Like Agent Protocol in Python, temporarily at http://eco83.econ.unito.it/terna/slapp ; Python at http://www.python.org
- JAS, http://jaslibrary.sourceforge.net/
- Ascape, http://www.brook.edu/dynamics/models/ascape/
- Repast, http://repast.sourceforge.net/
- StarLogo, http://education.mit.edu/starlogo/
- StarLogo TNG, http://education.mit.edu/starlogo-tng/
- NetLogo, http://ccl.northwestern.edu/netlogo/
- FLAME, https://trac.flame.ac.uk/wiki
- MetaABM, http://www.metascapeabm.com/
- SDML (based upon Smalltalk, as a declarative programming tool), http://www.cpm.mmu.ac.uk/sdml/
See also ABLE, http://www.research.ibm.com/able/ ; JADE, http://jade.tilab.com/ ; DAML, http://www.daml.org
Some of these, being nearly videogames, are also useful from a didactical perspective.

Slide 12
_______________________________________
Why a new tool, and why SLAPP (Swarm-Like Agent Protocol in Python) as a preferred tool?
_______________________________________

Slide 13:
- For didactical reasons, applying such a rigorous and simple object-oriented language as Python
- To build models upon transparent code: Python has no hidden parts or features that come from magic, and no obscure libraries
- To exploit the openness of Python
- To easily apply the Swarm protocol

Slide 14: The openness of Python (http://www.python.org)
- … going from Python to R (R is at http://cran.r-project.org/ ; the rpy library is at http://rpy.sourceforge.net/)
- … going from OpenOffice (Calc, Writer, …) to Python and vice versa (via the Python-UNO bridge, incorporated in OOo)
- … doing symbolic calculations in Python (via http://code.google.com/p/sympy/)
- … doing declarative programming with PyLog, a Prolog implementation in Python (http://christophe.delord.free.fr/pylog/index.html)
- … using Social Network Analysis from Python; examples: the igraph library, http://cneurocvs.rmki.kfki.hu/igraph/ ; libsna, http://www.libsna.org/ ; pySNA, http://www.menslibera.com.tr/pysna/

Slide 15: The Swarm protocol
What is SLAPP? Basically, a demonstration that we can easily implement the Swarm protocol [Minar, N., R. Burkhart, C. Langton, and M. Askenazi (1996), The Swarm simulation system: A toolkit for building multi-agent simulations. Working Paper 96-06-042, Santa Fe Institute, Santa Fe; http://www.swarm.org/images/b/bb/MinarEtAl96.pdf] in Python.
Key points (quoting from that paper): Swarm defines a structure for simulations, a framework within which models are built. The core commitment is to a discrete-event simulation of multiple agents using an object-oriented representation. To these basic choices Swarm adds the concept of the "swarm," a collection of agents with a schedule of activity.

Slide 16: The Swarm protocol
An absolutely clear and rigorous application of the Swarm protocol is contained in the original SimpleBug tutorial (1996?), with Objective-C code and text by Chris Langton and the Swarm development team (Santa Fe Institute), online at http://ftp.swarm.org/pub/swarm/apps/objc/sdg/swarmapps-objc-2.2-3.tar.gz (in the folder "tutorial", with the texts reported in the README files in the tutorial folder and in its subfolders).
The same tutorial has also been adapted to Java by Charles J. Staelin (jSIMPLEBUG, a Swarm tutorial for Java, 2000), at http://www.cse.nd.edu/courses/cse498j/www/Resources/jsimplebug11.pdf (text) or http://eco83.econ.unito.it/swarm/materiale/jtutorial/JavaTutorial.zip (text and code).
At http://eco83.econ.unito.it/terna/slapp you can find the same structure of files, now implementing the Swarm protocol in Python.
The Swarm protocol as a lingua franca in agent-based simulation models.

Slide 17
_______________________________________
Have a look at Swarm basics
_______________________________________

Slide 18: Swarm = a library of functions and a protocol. [Diagram: the modelSwarm creates the objects (Bug instances such as aBug, collected in a bugList), creates the actions (randomwalk, in a schedule), and runs.]

Slide 19: Swarm = a library of functions and a protocol. [Diagram, extended: an observerSwarm now runs the modelSwarm; the schedule includes randomwalk and reportPosition.]

Slide 20: Swarm = a library of functions and a protocol. [Diagram, further extended: the observerSwarm attaches probes to the agents; probes are still to be developed in SLAPP.]
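The modelSwarm/observerSwarm structure in the three diagrams above can be sketched in a few lines of Python. This is a minimal sketch of the protocol's shape, not the actual SLAPP source; class and method names follow the diagrams (Bug, bugList, random walk, report position), while the details are my own.

```python
import random

class Bug:
    """A minimal agent: it random-walks on a torus and reports itself."""
    def __init__(self, ident, x, y, world_size):
        self.ident, self.x, self.y = ident, x, y
        self.world_size = world_size

    def random_walk(self):
        self.x = (self.x + random.choice((-1, 0, 1))) % self.world_size
        self.y = (self.y + random.choice((-1, 0, 1))) % self.world_size

    def report_position(self):
        return f"Bug {self.ident} at ({self.x}, {self.y})"

class ModelSwarm:
    """Creates the objects, creates the actions (a schedule), runs them."""
    def __init__(self, n_bugs, world_size=10):
        self.bug_list = [Bug(i, random.randrange(world_size),
                             random.randrange(world_size), world_size)
                         for i in range(n_bugs)]
        # the schedule: an ordered list of actions repeated at each step
        self.schedule = ["random_walk"]

    def step(self):
        for action in self.schedule:
            for bug in self.bug_list:
                getattr(bug, action)()

class ObserverSwarm:
    """Owns the model swarm, runs it, and observes the agents."""
    def __init__(self, model):
        self.model = model

    def run(self, steps):
        for _ in range(steps):
            self.model.step()
            for bug in self.model.bug_list:
                print(bug.report_position())

observer = ObserverSwarm(ModelSwarm(n_bugs=3))
observer.run(steps=2)
```

The separation is the point: the model swarm knows only about agents and their schedule; the observer swarm wraps the model and handles reporting, exactly as in the diagrams.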

Slide 21
_______________________________________
(A digression) Environment, Agents and Rules representation: the ERA scheme
_______________________________________

Slide 22: [Diagram of the ERA scheme: agents with fixed rules; rule masters based on NN, CS, GA; avatars; reinforcement learning; microstructures, mainly related to time and parallelism.] http://web.econ.unito.it/terna/ct-era/ct-era.html

Slide 23
_______________________________________
Eating the pudding: the surprising world of the Chameleons, with SLAPP
From an idea of Marco Lamieri; a project work with Riccardo Taormina
http://eco83.econ.unito.it/terna/chameleons/chameleons.html
_______________________________________

Slide 24: The metaphorical model we use here is that of color-changing chameleons. We have chameleons of three colors: red, green, and blue. When two chameleons of different colors meet, they both change their color, assuming the third one. (If all the chameleons reach the same color, we have a steady state.)
The metaphor can also be interpreted in the following way: an agent diffusing innovation or ideas (including political ideas) can itself change via the interaction with other agents. As an example, think of an academic scholar working in a completely isolated context, versus one interacting with other scholars, or with private entrepreneurs applying the results of her work.
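The chameleon rule is simple enough to state directly in Python. This is a minimal sketch of the rule described above, not the SLAPP implementation on the project page:

```python
import random

COLORS = {"red", "green", "blue"}

def meet(chams, i, j):
    """When two chameleons of different colors meet,
    both take the third color."""
    if chams[i] != chams[j]:
        third = (COLORS - {chams[i], chams[j]}).pop()
        chams[i] = chams[j] = third

def step(chams, rng=random):
    """One tick: a random pair of chameleons meets."""
    i, j = rng.sample(range(len(chams)), 2)
    meet(chams, i, j)

def steady(chams):
    """Steady state: all chameleons share the same color."""
    return len(set(chams)) == 1

chams = ["red"] * 10 + ["green"] * 10 + ["blue"] * 10
for _ in range(10000):
    step(chams)
    if steady(chams):
        break
```

Whether the monochrome steady state is reached depends on the color counts and on chance; the run above simply stops early if it happens.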

Slide 25: The simple model moves the agents and changes their colors when necessary. But what if the chameleons of a given color want to preserve their identity?

Slide 26: Preserving identity! Reinforcement learning and pattern recognition, with bounded rationality. Each agent's brain is built upon 9 ANNs. Let's play.

Slide 27: Random moves. [Screenshot]

Slide 28: Red chameleons averse to changing color. [Screenshot]

Slide 29: Green chameleons too. [Screenshot]

Slide 30: Blue chameleons chase the other colors. [Screenshot]

Slides 31-34: [Simulation screenshots]

Slide 35
_______________________________________
Eating the pudding again: SLAPP and the Italian Central Bank model of the internal interbank payment system
_______________________________________

Slide 36: This figure, related to a StarLogo TNG implementation of the model, comes from: Luca Arciero+, Claudia Biancotti*, Leandro D'Aurizio*, Claudio Impenna+ (2008), An agent-based model for crisis simulation in payment systems, forthcoming. (+ Bank of Italy, Payment System Oversight Office; * Bank of Italy, Economic and Financial Statistics Department.)
[Diagram: Real Time Gross Settlement payment system; automatic settlements; treasurers' decisions?]

Slide 37: Delays* in payments … liquidity shortages … in the presence of unexpected negative operational or financial shocks … financial crisis (generated or amplified by *), with domino effects.

Slide 38: Two parallel, highly connected institutions: RTGS (Real Time Gross Settlement payment system) and eMID (electronic Market of Interbank Deposit). Starting from actual data, we simulate delays, looking at the emergent interest rate dynamics in the eMID. Agent-based simulation as a magnifying glass to understand reality.
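The delay-to-shortage mechanism of the previous slide can be sketched in a toy settlement step. This is an illustration of the domino idea only, emphatically not the Bank of Italy model; all names and the delay rule are my own assumptions.

```python
import random

def settlement_step(balances, payments, delay_prob, rng=random):
    """Toy RTGS step: each payment (payer, payee, amount) settles
    immediately unless it is randomly delayed or the payer lacks
    liquidity; unsettled payments are queued, so one bank's delay
    can starve another bank of liquidity (the domino effect)."""
    queued = []
    for payer, payee, amount in payments:
        if rng.random() < delay_prob or balances[payer] < amount:
            queued.append((payer, payee, amount))
        else:
            balances[payer] -= amount
            balances[payee] += amount
    return queued
```

In the real model the queued payments feed the eMID side, where banks borrow to cover the shortfalls and an interest rate dynamics emerges.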

Slide 39
_______________________________________
SLAPP and the Italian Central Bank model: a few complicated microstructures
_______________________________________

Slide 40: A "quasi-UML" representation. [Diagram] NB: prices are bid (or offered) by buyers and asked by sellers.

Slides 41-42: [Diagrams]

Slide 43
_______________________________________
Microstructures: effects on interest rate dynamics
_______________________________________

Slide 44: Model v.0.3.4. Parameters: number of steps 100; payments per step, max 30; number of banks 30; payment amount interval, max 30; time break at 20; observer interval 2; delay in payments, randomly set between 0 and max 18; probability of bidding a price B; probability of asking a price A. [Charts for A 0.1, B 0.1; A 0.5, B 0.5; A 0.9, B 0.9.] Case: parallel / last.

Slide 45: Model v.0.3.4, same parameters. [Charts for A 0.1, B 0.1; A 0.5, B 0.5; A 0.9, B 0.9.] Case: parallel / best.

Slide 46: Model v.0.3.4, same parameters. [Charts for A 0.1, B 0.1; A 0.5, B 0.5; A 0.9, B 0.9.] Case: immediate / last.

Slide 47: Model v.0.3.4, same parameters. [Charts for A 0.1, B 0.1; A 0.5, B 0.5; A 0.9, B 0.9.] Case: immediate / best.

Slide 48: What if there were no delays in payments? Look back at immediate / last: [Charts for A 0.9, B 0.9 with delay = 18; A 0.9, B 0.9 with delay = 6; A 0.9, B 0.9 with delay = 0, which yields a random walk.]

Slide 49
_______________________________________
What if we want to characterize our agents better (with an Aesop fairy tale on Artificial Neural Networks)
_______________________________________

Slide 50: Repeated question: why a new tool, and why SLAPP (Swarm-Like Agent Protocol in Python) as a preferred tool? … to create the new AESOP (Agents and Emergencies for Simulating Organizations in Python) tool, to model agents and their actions and interactions.

Slide 51
_______________________________________
Agents and schedule
_______________________________________

Slide 52: Bland* and tasty# agents. [Diagram: a population of agents.]
Rules operating "in the foreground" are explicitly managed via a script (with different sets of agents, each with a different number of elements). Rules operating "in the background" apply to all the agents, or only to the blue ones (to be decided).
* Bland = simple, unspecific, basic, insipid, …
# Tasty = specialized, with given skills, discretionary, …
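The foreground-script / background-rules split can be sketched as follows. This is a hypothetical miniature, not AESOP's actual schedule syntax: the agents just record what they are asked to do, the script addresses named agent sets at given ticks, and (one of the two choices left open above) the background rules here apply to all agents.

```python
class Agent:
    """A toy agent that records the actions it performs."""
    def __init__(self, ident):
        self.ident, self.log = ident, []

    def do(self, action):
        self.log.append(action)

def run_step(t, agent_sets, script, background_rules):
    # foreground: script entries explicitly scheduled for this tick,
    # each addressed to one named set of agents
    for when, set_name, action in script:
        if when == t:
            for agent in agent_sets.get(set_name, []):
                agent.do(action)
    # background: rules applied to every agent at every tick
    for agents in agent_sets.values():
        for agent in agents:
            for rule in background_rules:
                agent.do(rule)

agent_sets = {"bland": [Agent(0), Agent(1)], "tasty": [Agent(111)]}
script = [(1, "tasty", "eat"), (4, "bland", "dance")]
background_rules = ["move", "report_position"]
for t in range(1, 6):
    run_step(t, agent_sets, script, background_rules)
```

An empty script reproduces slide 53's case: only bland agents, driven purely by the background rules.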

Slide 53: Empty schedule (no tasty agents; only bland ones, operating with the background rules). [Screenshot]

Slide 54: Sample run output. (Margin notes: creation of the bland agents; bland agents acting with the background rules; all the agents reporting their position, a background operation; agent # 0 reporting, a background operation.)
How many 'bland' agents? 3
X Size of the world? 10
Y Size of the world? 10
How many cycles? (0 = exit) 5
World state number 0 has been created.
Agent number 0 has been created at 7, 1
Agent number 1 has been created at 3, 2
Agent number 2 has been created at 7, 0
Time = 1
agent # 0 moving
agent # 2 moving
agent # 1 moving
Time = 1 ask all agents to report position
Agent number 0 moved to X = 0.0131032296035 Y = 3.0131032296
Agent number 1 moved to X = 8.9868967704 Y = 0.0
Agent number 2 moved to X = 0.986896770397 Y = 3.9868967704
Time = 2
agent # 0 moving
agent # 1 moving
agent # 2 moving
Time = 2 ask first agent to report position
Agent number 0 moved to X = 6.18205342701 Y = 6.8441530322

Slide 55: Run output, continued.
Time = 3
agent # 1 moving
agent # 2 moving
agent # 0 moving
Time = 3 ask first agent to report position
Agent number 0 moved to X = 2.76682561579 Y = 6.8441530322
Time = 4
agent # 0 moving
agent # 2 moving
agent # 1 moving
agent 2 made a big jump
Time = 4 ask all agents to report position
Agent number 0 moved to X = 2.76682561579 Y = 2.32334710187
Agent number 1 moved to X = 2.81794657299 Y = 6.16895019741
Agent number 2 moved to X = 2.63504103748 Y = 9.46609084007
Time = 5
agent # 2 moving
agent # 0 moving
agent # 1 moving
agent 2 made a big jump
Time = 5 ask first agent to report position
Agent number 0 moved to X = 2.76682561579 Y = 2.32334710187
Time = 6

Slide 56: Schedule driving bland agents (no tasty agents). [Screenshot] Agent -> all agents; Agent0 -> bland agents; in this case the two sets coincide. The other sets are empty in this case. The schedule acts on bland (blue) agents and on tasty (red) ones.

Slide 57: Run output with the schedule driving the bland agents. (Margin note: bland agents.)
How many 'bland' agents? 3
X Size of the world? 10
Y Size of the world? 10
How many cycles? (0 = exit) 5
World state number 0 has been created.
Agent number 0 has been created at 7, 1
Agent number 1 has been created at 3, 2
Agent number 2 has been created at 7, 0
Time = 1
agent # 1 moving
agent # 2 moving
agent # 0 moving
I'm agent 1: nothing to eat here!
I'm agent 2: nothing to eat here!
I'm agent 0: nothing to eat here!
I'm agent 0: it's not time to dance!
I'm agent 1: it's not time to dance!
I'm agent 2: it's not time to dance!
Time = 1 ask all agents to report position
Agent number 0 moved to X = 0.972690201302 Y = 7.0273097987
Agent number 1 moved to X = 6.9726902013 Y = 2.0
Agent number 2 moved to X = 7.0 Y = 6.0273097987

Slide 58: Run output, continued.
Time = 2
agent # 1 moving
agent # 2 moving
agent # 0 moving
Time = 2 ask first agent to report position
Agent number 0 moved to X = 3.51562472467 Y = 7.0273097987
Time = 3
agent # 1 moving
agent # 2 moving
agent # 0 moving
Time = 3 ask first agent to report position
Agent number 0 moved to X = 3.51562472467 Y = 7.0273097987
Time = 4
agent # 0 moving
agent # 2 moving
agent # 1 moving
I'm agent 1: it's not time to dance!
Time = 4 ask all agents to report position
Agent number 0 moved to X = 3.51562472467 Y = 0.992771148789
Agent number 1 moved to X = 3.43870920817 Y = 1.00895353023
Agent number 2 moved to X = 6.00895353023 Y = 3.48437527533

Slide 59: Run output, continued. (Margin note: bland agents.)
Time = 5
agent # 0 moving
agent # 1 moving
agent # 2 moving
I'm agent 0: nothing to eat here!
I'm agent 1: nothing to eat here!
I'm agent 2: nothing to eat here!
I'm agent 1: it's not time to dance!
I'm agent 0: it's not time to dance!
I'm agent 2: it's not time to dance!
Time = 5 ask first agent to report position
Agent number 0 moved to X = 3.74036626026 Y = 0.992771148789
Time = 6

Slide 60: Schedule driving bland agents (with tasty agents). [Screenshot] Agent -> all agents; Agent0 -> background agents. Non-empty sets in this case. Effects on bland (blue) agents and on tasty (red) ones.

Slide 61: Agent-type files. agType1.txt contains the IDs 111 and 222, each followed by the specific attributes of that agent (111 ……, 222 ……); agType3.txt contains the ID 1111. Sets of agents can carry any kind of name; each line gives one agent's ID and its specific attributes.
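A reader for such agent-type files might look as follows. This is a sketch under the assumption of whitespace-separated lines (ID first, then attributes); the exact SLAPP/AESOP file layout may differ.

```python
def read_agent_file(path):
    """Read an agent-type file: each non-empty line starts with the
    agent's ID, followed by that agent's specific attributes."""
    agents = {}
    with open(path) as f:
        for line in f:
            fields = line.split()
            if fields:
                agents[fields[0]] = fields[1:]
    return agents
```

Calling it once per file (agType1.txt, agType3.txt, …) yields one set of tasty agents per file, keyed by ID.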

Slide 62: Run with bland and tasty agents. (Margin notes: tasty agents; bland and tasty agents.)
How many 'bland' agents? 3
X Size of the world? 3
Y Size of the world? 3
How many cycles? (0 = exit) 32
World state number 0 has been created.
Agent number 0 has been created at 0, 2
Agent number 1 has been created at 1, 0
Agent number 2 has been created at 0, 2
creating agType1 # 111
Agent number 111 has been created at 1, 1
creating agType1 # 222
Agent number 222 has been created at 2, 0
creating agType3 # 1111
Agent number 1111 has been created at 2, 2
Time = 1
agent # 2 moving
agent # 222 moving
agent # 0 moving
agent # 111 moving
agent # 1 moving
agent # 1111 moving

Slide 63: Run output, continued.
I'm agent 2: nothing to eat here!
I'm agent 1: nothing to eat here!
I'm agent 0: nothing to eat here!
I'm agent 1: it's not time to dance!
I'm agent 0: it's not time to dance!
I'm agent 2: it's not time to dance!
Time = 1 ask all agents to report position
Agent number 0 moved to X = 0.924426630933 Y = 2.92442663093
Agent number 1 moved to X = 1.0 Y = 0.924426630933
Agent number 2 moved to X = 0.924426630933 Y = 2.92442663093
Agent number 111 moved to X = 1.92442663093 Y = 1.92442663093
Agent number 222 moved to X = 1.07557336907 Y = 2.07557336907
Agent number 1111 moved to X = 2.92442663093 Y = 2.0
Time = 2
agent # 1 moving
agent # 111 moving
agent # 222 moving
agent # 0 moving
agent # 1111 moving
agent # 2 moving

Slide 64: Run output, continued (the output jumps from time 5 to time 31).
Time = 5
agent # 222 moving
agent # 1 moving
I'm agent 1111: nothing to eat here!
I'm agent 2: nothing to eat here!
I'm agent 111: nothing to eat here!
I'm agent 0: nothing to eat here!
I'm agent 222: nothing to eat here!
I'm agent 1: nothing to eat here!
I'm agent 0: it's not time to dance!
I'm agent 222: it's not time to dance!
I'm agent 1111: it's not time to dance!
I'm agent 2: it's not time to dance!
I'm agent 111: it's not time to dance!
I'm agent 1: it's not time to dance!
Time = 31
agent # 1 moving
agent # 111 moving
agent # 0 moving
agent # 2 moving
I'm agent 222: it's not time to dance!
Time = 31 ask all agents to report position

Slide 65
_______________________________________
Artificial neural networks inside the agents
_______________________________________

Slide 66: [Diagram: a population of agents.] Both bland and tasty agents can contain an ANN; networks of ANNs are built upon agent interaction.

Slide 67:
y = g(x) = f(B f(A x)), where y has dimension m and x has dimension n
or, with one network per output:
y_1 = g_1(x) = f(B_1 f(A_1 x)), output of dimension 1, input of dimension n
…
y_m = g_m(x) = f(B_m f(A_m x)), output of dimension 1, input of dimension n
Here x is the information and y the actions.
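The mapping y = f(B f(A x)) above is a plain two-layer feed-forward network and can be written out directly. A minimal sketch in pure Python, assuming a logistic squashing function f (the slide does not fix f; the SLAPP agents' actual implementation may differ):

```python
import math

def f(v):
    # logistic squashing function, applied elementwise
    return [1.0 / (1.0 + math.exp(-x)) for x in v]

def matvec(M, v):
    # plain matrix-vector product
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def g(x, A, B):
    """Two-layer feed-forward network: y = f(B f(A x)).
    x: information (length n); A: hidden weights (n_hidden rows of n);
    B: output weights (m rows of n_hidden); returns y: actions (length m)."""
    return f(matvec(B, f(matvec(A, x))))
```

The per-output variant of the slide is just m separate calls g(x, A_k, B_k), each with a single-row B_k.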

Slide 68: (a) Static ex-ante learning (on examples). A rule master supplies each agent with its own set of examples:
agent a: X_a -> Y_a, i.e. the pairs X_{a,1} Y_{a,1}, …, X_{a,m_a-1} Y_{a,m_a-1}, X_{a,m_a} Y_{a,m_a}
agent b: X_b -> Y_b, i.e. the pairs X_{b,1} Y_{b,1}, …, X_{b,m_b-1} Y_{b,m_b-1}, X_{b,m_b} Y_{b,m_b}
Different agents, with different sets of examples, estimate and use different sets A and B of parameters.

Slide 69: (b) Continuous learning (trials and errors).
z = g([x,y]) = f(B f(A [x,y])), where z has dimension p and [x,y] has dimension n+m; x is the information, y the actions, z the effects.
Different agents estimate and use different sets A and B of parameters (or use the same set).
At t = 0, or at given steps t = k, all or a few agents act randomly.
Learning from the simulation, the agents will choose z maximizing (i) individual utility U, with norms, or (ii) societal wellbeing.
Emergence of norms [modifying f(u), as norms do, or the set z, as new laws do].
A rule master accounts for laws.

Slide 70: (c) Continuous learning (cross-targets): developing internal consistency (EO, EP). A few ideas at http://web.econ.unito.it/terna/ct-era/ct-era.html. A rule master is involved here too.

Slide 71: Thanks for your attention. terna@econ.unito.it, http://web.econ.unito.it/terna. SLAPP & AESOP are at http://eco83.econ.unito.it/terna/slapp


