15.05.2007. Observational Learning in Random Networks. Julian Lorenz, Martin Marciniszyn, Angelika Steger. Institute of Theoretical Computer Science, ETH Zürich.

Presentation transcript:

1. Observational Learning in Random Networks. Julian Lorenz, Martin Marciniszyn, Angelika Steger. Institute of Theoretical Computer Science, ETH Zürich.

2. Observational Learning. When people make a decision, they typically look at how others have decided. Word-of-mouth learning, social learning: a decision process in a group where each individual combines their own opinion with observations of others. Examples: brand choice, fashion, bestseller lists; stock-market bubbles; (animal) mating: females choose males they observed being selected by other females (Gibson/Hoglund '92, Copying and Sexual Selection).

3. Model of Sequential Observational Learning (Bikhchandani, Hirshleifer, Welch 1998). A population of n agents makes a one-time decision between two alternatives, a and b, sequentially. Either a or b is a posteriori the superior choice for all agents (unknown during the decision process). Agents are Bayes-rational and decide using a stochastic private signal (correct with probability α > 0.5) and the observation of other agents' actions. What is the macro-behavior of such learning processes? How well does the population do as a whole?

4. Model of Sequential Observational Learning. Agents can observe the actions of all predecessors. Let n_a and n_b be the numbers of predecessors that chose option a and b, respectively. Bayes-optimal local decision rule in [BHW98]: majority vote over the observed actions and the private signal (n_a + n_b + 1 votes in total); if there is a tie, follow the private signal. One can show that this is the Bayes-optimal strategy for each agent (it maximizes the probability of a correct choice). Information externality: imitation is rational.
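The sequential process above can be sketched as a short simulation. This is a minimal illustration under stated assumptions, not code from the paper: the function name `bhw_process`, the parameter name `alpha` for the signal confidence, and the encoding of the correct option as 'a' are my own choices.

```python
import random

def bhw_process(n, alpha, rng):
    """Simulate the sequential learning process of [BHW98].

    Each of n agents receives a private signal that points to the
    correct option 'a' with probability alpha > 0.5, observes the
    actions of all predecessors, and takes a majority vote over the
    observed actions plus its own signal (ties follow the signal).
    Returns the list of chosen actions; 'a' is the correct option.
    """
    actions = []
    for _ in range(n):
        signal = 'a' if rng.random() < alpha else 'b'
        n_a = actions.count('a')
        n_b = actions.count('b')
        votes_a = n_a + (signal == 'a')   # private signal counts as one vote
        votes_b = n_b + (signal == 'b')
        if votes_a > votes_b:
            actions.append('a')
        elif votes_b > votes_a:
            actions.append('b')
        else:                             # tie: follow the private signal
            actions.append(signal)
    return actions

actions = bhw_process(200, 0.75, random.Random(0))
# Once a cascade has started, all later agents act identically,
# so the tail of the sequence is constant.
print(actions[-10:])
```

Running this repeatedly with different seeds shows both correct and incorrect cascades, matching the qualitative behavior described on the following slides.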

5. Sequential Observational Learning [BHW98]. Example sequence of actions: a, b, b, a, a, …

6. Informational Cascades in [BHW98]. Equivalent version of the decision rule: let d = n_a - n_b. The agent chooses a if d ≥ 2, chooses b if d ≤ -2, and follows the private signal if -1 ≤ d ≤ +1. Obviously, the key variable is d. Eventually, the process hits an absorbing state d ≥ 2 or d ≤ -2: in the long run, almost all agents make the same decision. Incorrect informational cascades are quite likely! Globally inefficient use of information.
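The equivalence between the majority-vote rule and this threshold rule on d = n_a - n_b can be checked exhaustively. A small sketch (the function names are mine, purely for illustration):

```python
def decide_votes(n_a, n_b, signal):
    # Majority vote over observed actions plus the private signal;
    # ties are broken by the private signal.
    votes_a = n_a + (signal == 'a')
    votes_b = n_b + (signal == 'b')
    if votes_a != votes_b:
        return 'a' if votes_a > votes_b else 'b'
    return signal

def decide_threshold(n_a, n_b, signal):
    # Equivalent formulation from the slide, in terms of d = n_a - n_b.
    d = n_a - n_b
    if d >= 2:
        return 'a'
    if d <= -2:
        return 'b'
    return signal   # -1 <= d <= +1: follow the private signal

# The two formulations agree on every input.
for n_a in range(6):
    for n_b in range(6):
        for signal in ('a', 'b'):
            assert decide_votes(n_a, n_b, signal) == decide_threshold(n_a, n_b, signal)
print("equivalent")
```

In particular, for d = +1 and signal b the vote is tied and the agent follows its signal, which is exactly what the threshold rule prescribes.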

7. Informational Cascades in [BHW98]. Both the probability of a correct cascade and the probability of an incorrect cascade are bounded away from zero; the higher the confidence α of the private signal, the higher the probability of a correct cascade. Remark: even in a cascade, imitation is rational. Locally rational vs. globally beneficial.

8. Wisdom of Crowds ("Why the Many Are Smarter Than the Few and How Collective Wisdom Shapes Business, Economies, Societies and Nations", 2004) vs. incorrect informational cascades? What would improve the global behavior? What if each agent observes the actions of only a subset of the others?

9. Learning in Random Networks. Agents can only observe the actions of their acquaintances, modeled by a random graph G(n, p): a random graph on n vertices, each edge present independently with probability p. Let a_i and b_i be the numbers of agent i's acquaintances that chose option a and b, respectively. The agent's local decision rule is the same as in [BHW98]: majority vote over a_i, b_i, and the private signal. For p = 1 we recover [BHW98].
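A sketch of this random-network variant. It assumes each edge between an agent and a predecessor is drawn independently with probability p at the moment the agent decides, which is equivalent to fixing G(n, p) in advance since each edge is examined exactly once; names are illustrative, not from the paper.

```python
import random

def network_process(n, p, alpha, rng):
    """Observational learning on G(n, p): agent i only sees the actions
    of those earlier agents it is linked to (each link present
    independently with probability p). For p = 1 this reduces to the
    full-observation process of [BHW98]."""
    actions = []
    for i in range(n):
        signal = 'a' if rng.random() < alpha else 'b'
        a_i = b_i = 0
        for j in range(i):
            if rng.random() < p:          # edge {i, j} present
                if actions[j] == 'a':
                    a_i += 1
                else:
                    b_i += 1
        votes_a = a_i + (signal == 'a')
        votes_b = b_i + (signal == 'b')
        if votes_a > votes_b:
            actions.append('a')
        elif votes_b > votes_a:
            actions.append('b')
        else:
            actions.append(signal)        # tie: follow the private signal
    return actions

actions = network_process(500, 0.1, 0.75, random.Random(1))
print(actions.count('a') / len(actions))  # fraction of correct agents
```

Setting p close to 0 makes all decisions (nearly) independent, while large p reproduces the cascading behavior of the dense case.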

10. Theorem (L., Marciniszyn, Steger 07). The network of agents is the random graph G(n, p), and the signal confidence is α > 0.5. Let X be the number of correct agents. Result: the macro-behavior of the process depends on p = p(n). (1) If the network is moderately linked (p = o(1) with np → ∞), then a.a.s. almost all agents are correct. (2) If p is constant, then with constant probability almost all agents are incorrect.

11. Remark: Sparse networks. (3) If the network is sparse (np bounded), there is no significant herding towards a or b. Why? A sparse random graph contains (with high probability) many isolated vertices, which make independent decisions.

12. Discussion. A generalization of [BHW98]. (1) a.a.s. almost all agents correct: each individual has less information, yet the entire population is better off; the entire population benefits from learning and imitation. Intuition: agents make independent decisions in the beginning, so information accumulates locally first. (2) p constant: with constant probability almost all agents incorrect.

13. Idea of Proof (I): (1) a.a.s. almost all agents correct. Suppose there is a correct bias among the first k ≫ p⁻¹ agents. Then w.h.p. the next agent has pk ≫ 1 neighbors, a majority of them correct, and so w.h.p. makes the correct decision. However, there are technical difficulties: we need to establish a correct critical mass, almost all subsequent agents must be correct, and everything must hold with high probability. The proof uses Chernoff-type bounds and techniques from random graph theory.

14. Idea of Proof (II): (2) p constant: with constant probability almost all agents incorrect. Because of the high density of the network, there is no local accumulation of information. With constant probability, an incorrect critical mass emerges; then herding follows as in (1).

15. Proof of (1): a.a.s. almost all agents correct. We split the process into three phases: Phase I (early adopters), Phase II (critical phase), Phase III (herding). We show: Phase I: w.h.p. a suitable fraction of the agents is correct. Phase II: w.h.p. the fraction of correct agents increases. Phase III: w.h.p. almost all agents are correct. Choosing the phase boundaries suitably, (1) follows.

16. Proof of (1), Phase II (critical phase). Lemma: during Phase II, the fraction of correct agents increases. More and more agents disregard their private signal. But: conditional probabilities and dependencies between agents arise in Phase II. Proof idea: consider groups W_i of agents that are almost independent.

17. Proof of (1), Phase II (cont.). W.h.p. there is an edge from each group W_i to earlier correct agents, and the number of correct agents is sharply concentrated. Iterating over the groups, w.h.p. the fraction of correct agents increases throughout Phase II.

18. Proof of (1), Phase III (herding). Claim: w.h.p. almost all agents in Phase III are correct. W.h.p. the next agent has ≫ 1 neighbors and follows their majority. Again there are technical difficulties (one considers groups of agents), but finally the claim follows.

19. Numerical Experiments. Signal confidence α = 0.75. Plot: relative frequency of cascades vs. population size n, for p = 1/log n (correct cascade), p = 0.5 (correct cascade), p = 0.5 (incorrect cascade), and one further value of p (correct cascade; the value is missing in the transcript).
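These frequencies can be reproduced in outline with a Monte Carlo estimate. This sketch counts a run as a correct cascade when at least 90% of the agents are correct, a heuristic proxy for "almost all"; the threshold, trial counts, and function names are my choices, not the paper's setup.

```python
import random

def run_once(n, p, alpha, rng):
    """One realization of observational learning on G(n, p).
    True encodes the a-posteriori correct option."""
    actions = []
    for i in range(n):
        signal = rng.random() < alpha
        n_corr = n_wrong = 0
        for j in range(i):                 # sample edges to predecessors
            if rng.random() < p:
                if actions[j]:
                    n_corr += 1
                else:
                    n_wrong += 1
        votes_corr = n_corr + signal       # private signal counts one vote
        votes_wrong = n_wrong + (not signal)
        if votes_corr != votes_wrong:
            actions.append(votes_corr > votes_wrong)
        else:
            actions.append(signal)         # tie: follow the signal
    return actions

def correct_cascade_frequency(n, p, alpha, trials, rng):
    """Fraction of runs in which at least 90% of agents are correct."""
    hits = sum(sum(run_once(n, p, alpha, rng)) >= 0.9 * n
               for _ in range(trials))
    return hits / trials

rng = random.Random(42)
print(correct_cascade_frequency(200, 1.0, 0.75, 200, rng))
```

For p = 1 this reproduces the full-observation setting of [BHW98]; sweeping p from near 0 to 1 shows the dependence of the cascade frequency on network density that the plot illustrates.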

20. Conclusion. The macro-behavior of observational learning depends on the density of the random network: moderately linked: w.h.p. a correct informational cascade; dense: incorrect informational cascades are possible. Intuition: a critical mass of independent decisions in the beginning (information accumulates), then correct herding of almost all subsequent agents. Future work: other types of random networks (scale-free networks, etc.).

21. Thank you very much for your attention! Questions?

