
1 The dynamics of iterated learning. Tom Griffiths, UC Berkeley, with Mike Kalish, Steve Lewandowsky, Simon Kirby, and Mike Dowman

2 Learning data

3 Iterated learning (Kirby, 2001) What are the consequences of learners learning from other learners?

4 Objects of iterated learning
Knowledge communicated across generations through the provision of data by learners
Examples:
– religious concepts
– social norms
– myths and legends
– causal theories
– language

5 Language
The languages spoken by humans are typically viewed as the result of two factors:
– individual learning
– innate constraints (biological evolution)
This limits the possible explanations for different kinds of linguistic phenomena.

6 Linguistic universals
Human languages possess universal properties, e.g. compositionality (Comrie, 1981; Greenberg, 1963; Hawkins, 1988)
Traditional explanation: linguistic universals reflect innate constraints specific to a system for acquiring language (e.g., Chomsky, 1965)

7 Cultural evolution
Languages are also subject to change via cultural evolution (through iterated learning)
Alternative explanation: linguistic universals emerge because language is learned anew by each generation, using general-purpose learning mechanisms (e.g., Briscoe, 1998; Kirby, 2001)

8 Analyzing iterated learning
P_L(h | d): probability of inferring hypothesis h from data d
P_P(d | h): probability of generating data d from hypothesis h
(Diagram: each learner infers a hypothesis from the previous learner's data via P_L(h | d), then generates data for the next learner via P_P(d | h).)

9 Markov chains
Transition matrix: T = P(x^(t+1) | x^(t))
x^(t+1) is independent of the history given x^(t)
Converges to a stationary distribution under easily checked conditions (i.e., if the chain is ergodic)
(Diagram: a sequence of states x → x → x → …)
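
A minimal sketch (not from the talk) of simulating such a chain in Python/NumPy; the three-state transition matrix is hypothetical, and rows index the current state, so T[i, j] = P(x^(t+1) = j | x^(t) = i).

```python
# Hypothetical example: simulate a finite Markov chain from its transition
# matrix. Rows index the current state: T[i, j] = P(x_{t+1} = j | x_t = i).
import numpy as np

rng = np.random.default_rng(0)

T = np.array([[0.9, 0.05, 0.05],
              [0.1, 0.8,  0.1 ],
              [0.2, 0.2,  0.6 ]])   # each row sums to 1

def simulate_chain(T, x0, n_steps):
    """x_{t+1} depends only on x_t (the Markov property), so each step
    just samples the next state from the row of T for the current state."""
    x, trajectory = x0, [x0]
    for _ in range(n_steps):
        x = rng.choice(len(T), p=T[x])
        trajectory.append(x)
    return np.array(trajectory)

traj = simulate_chain(T, x0=0, n_steps=10_000)
# Long-run state frequencies approximate the stationary distribution.
print(np.bincount(traj, minlength=len(T)) / len(traj))
```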

10 Stationary distributions
Stationary distribution: π(x) = Σ_x′ P(x | x′) π(x′)
In matrix form, π = Tπ, so π is the first eigenvector of the matrix T (the eigenvector with eigenvalue 1)
– easy to compute for finite Markov chains
– can sometimes be found analytically
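
A companion sketch for computing the stationary distribution numerically. With the row-as-current-state convention used in the sketch above, π satisfies π = πT, so it is the eigenvector of Tᵀ with eigenvalue 1 (the slide's "first eigenvector of T" is the same object under the column convention). The matrix is the same hypothetical example.

```python
# Hypothetical example: find pi with pi = pi @ T (rows index the current
# state), i.e. the eigenvector of T transposed with eigenvalue 1.
import numpy as np

def stationary_distribution(T):
    eigvals, eigvecs = np.linalg.eig(T.T)
    idx = np.argmin(np.abs(eigvals - 1.0))   # eigenvalue closest to 1
    pi = np.real(eigvecs[:, idx])
    return pi / pi.sum()                     # normalize to a probability vector

T = np.array([[0.9, 0.05, 0.05],
              [0.1, 0.8,  0.1 ],
              [0.2, 0.2,  0.6 ]])
print(stationary_distribution(T))   # matches the long-run frequencies above
```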

11 Analyzing iterated learning
Iterated learning produces a sequence d_0 → h_1 → d_1 → h_2 → d_2 → h_3 → …, alternating P_L(h | d) and P_P(d | h).
A Markov chain on hypotheses (h_1 → h_2 → h_3 → …), with transition probabilities Σ_d P_P(d | h) P_L(h′ | d)
A Markov chain on data (d_0 → d_1 → d_2 → …), with transition probabilities Σ_h P_L(h | d) P_P(d′ | h)
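
A minimal sketch of the two collapsed chains for finite hypothesis and data sets; the learning matrix P_L[d, h] = P_L(h | d) and the production matrix P_P[h, d] = P_P(d | h) are hypothetical (random) here, just to show how the transition matrices are assembled.

```python
# Hypothetical matrices: P_L[d, h] = P_L(h | d), P_P[h, d] = P_P(d | h),
# each row a probability distribution.
import numpy as np

rng = np.random.default_rng(1)
n_h, n_d = 3, 4
P_L = rng.dirichlet(np.ones(n_h), size=n_d)   # shape (n_d, n_h)
P_P = rng.dirichlet(np.ones(n_d), size=n_h)   # shape (n_h, n_d)

# Chain on hypotheses: T_h[h, h'] = sum_d P_P(d | h) * P_L(h' | d)
T_h = P_P @ P_L
# Chain on data: T_d[d, d'] = sum_h P_L(h | d) * P_P(d' | h)
T_d = P_L @ P_P

print(T_h.sum(axis=1), T_d.sum(axis=1))   # both row-stochastic (rows sum to 1)
```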

12 Bayesian inference
A rational procedure for updating beliefs
The foundation of many learning algorithms
Lets us make the inductive biases of learners precise
(Image: Reverend Thomas Bayes)

13 Bayes’ theorem
P(h | d) = P(d | h) P(h) / Σ_h′ P(d | h′) P(h′)
posterior probability ∝ likelihood × prior probability; the denominator is a sum over the space of hypotheses
h: hypothesis, d: data
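
A minimal sketch of this computation over a finite hypothesis space, with a made-up prior and a made-up likelihood vector for a single observed d.

```python
# Posterior over hypotheses for one observed d:
# P(h | d) = P(d | h) P(h) / sum_h' P(d | h') P(h')
import numpy as np

def posterior(likelihood_d, prior):
    """likelihood_d[h] = P(d | h) for the observed d; prior[h] = P(h)."""
    unnormalized = likelihood_d * prior
    return unnormalized / unnormalized.sum()

prior = np.array([0.7, 0.2, 0.1])          # hypothetical P(h)
likelihood_d = np.array([0.1, 0.5, 0.4])   # hypothetical P(d | h) for one d
print(posterior(likelihood_d, prior))
```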

14 Iterated Bayesian learning
Assume learners sample from their posterior distribution:
P_L(h | d) = P_P(d | h) P(h) / Σ_h′ P_P(d | h′) P(h′)
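
A minimal simulation sketch of this setup with hypothetical numbers: each learner samples a hypothesis from the posterior given the previous learner's data, then produces data for the next learner.

```python
# Hypothetical prior and production probabilities; learners are exact
# Bayesian posterior samplers.
import numpy as np

rng = np.random.default_rng(2)
n_h, n_d = 3, 4
prior = np.array([0.6, 0.3, 0.1])
P_P = rng.dirichlet(np.ones(n_d), size=n_h)       # P_P[h, d] = P_P(d | h)

def sample_posterior(d):
    post = P_P[:, d] * prior                      # P_P(d | h) P(h)
    return rng.choice(n_h, p=post / post.sum())   # draw h ~ P_L(h | d)

def iterated_learning(n_generations, d0=0):
    d, hs = d0, []
    for _ in range(n_generations):
        h = sample_posterior(d)                   # learner infers a hypothesis
        d = rng.choice(n_d, p=P_P[h])             # and produces data from it
        hs.append(h)
    return np.array(hs)

hs = iterated_learning(50_000)
print(np.bincount(hs, minlength=n_h) / len(hs))   # ≈ prior (see next slide)
print(prior)
```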

15 Stationary distributions
The Markov chain on h converges to the prior, P(h)
The Markov chain on d converges to the “prior predictive distribution”, P(d) = Σ_h P_P(d | h) P(h)
(Griffiths & Kalish, 2005)
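
A self-contained numerical check of this claim, again with hypothetical numbers: build the collapsed transition matrices with P_L set to the Bayesian posterior and compare their stationary distributions to the prior and to the prior predictive distribution.

```python
# Check: the chain on h is stationary at the prior; the chain on d is
# stationary at the prior predictive distribution. All numbers hypothetical.
import numpy as np

rng = np.random.default_rng(3)
n_h, n_d = 3, 4
prior = np.array([0.6, 0.3, 0.1])
P_P = rng.dirichlet(np.ones(n_d), size=n_h)   # P_P[h, d] = P_P(d | h)
P_L = (P_P * prior[:, None]).T                # unnormalized P(h | d), shape (n_d, n_h)
P_L /= P_L.sum(axis=1, keepdims=True)         # normalize rows -> posterior

def stationary(T):
    vals, vecs = np.linalg.eig(T.T)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    return pi / pi.sum()

print(stationary(P_P @ P_L), prior)           # chain on h -> prior
print(stationary(P_L @ P_P), prior @ P_P)     # chain on d -> prior predictive
```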

16 Explaining convergence to the prior
Intuitively: the data act once, the prior acts many times
Formally: iterated learning with Bayesian agents is a Gibbs sampler for P(d, h) (Griffiths & Kalish, submitted)

17 Gibbs sampling
For variables x = x_1, x_2, …, x_n:
draw x_i^(t+1) from P(x_i | x_-i), where x_-i = x_1^(t+1), x_2^(t+1), …, x_(i-1)^(t+1), x_(i+1)^(t), …, x_n^(t)
Converges to P(x_1, x_2, …, x_n)
(a.k.a. the heat bath algorithm in statistical physics) (Geman & Geman, 1984)
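
A minimal sketch of a Gibbs sampler for a small discrete joint distribution (a hypothetical 2×2 table), resampling one variable at a time from its conditional given the other.

```python
# Gibbs sampling for a discrete joint P(x1, x2): alternately draw
# x1 ~ P(x1 | x2) and x2 ~ P(x2 | x1); samples converge to the joint.
import numpy as np

rng = np.random.default_rng(4)
P = np.array([[0.1, 0.3],
              [0.4, 0.2]])          # hypothetical joint P[x1, x2], sums to 1

def gibbs(n_samples, x1=0, x2=0):
    counts = np.zeros_like(P)
    for _ in range(n_samples):
        x1 = rng.choice(2, p=P[:, x2] / P[:, x2].sum())   # x1 ~ P(x1 | x2)
        x2 = rng.choice(2, p=P[x1, :] / P[x1, :].sum())   # x2 ~ P(x2 | x1)
        counts[x1, x2] += 1
    return counts / n_samples

print(gibbs(100_000))   # empirical frequencies approximate P
```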

18 Gibbs sampling (illustration from MacKay, 2003)

19 Explaining convergence to the prior
When the target distribution is P(d, h), the conditional distributions are P_L(h | d) and P_P(d | h)

20 Implications for linguistic universals
When learners sample from P(h | d), the distribution over languages converges to the prior
– identifies a one-to-one correspondence between inductive biases and linguistic universals

21 Iterated Bayesian learning
Assume learners sample from their posterior distribution: P_L(h | d) = P_P(d | h) P(h) / Σ_h′ P_P(d | h′) P(h′)

22 From sampling to maximizing
Learners choose h with probability proportional to the posterior raised to a power r: r = 1 (sampling from the posterior), r = 2, r = ∞ (maximizing: always choosing the most probable hypothesis)

23 From sampling to maximizing
General analytic results are hard to obtain
– (r = ∞ is a stochastic version of the EM algorithm)
For certain classes of languages, it is possible to show that the stationary distribution gives each hypothesis h probability proportional to P(h)^r
– the ordering identified by the prior is preserved, but not the corresponding probabilities
(Kirby, Dowman, & Griffiths, submitted)
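
A minimal sketch of the r-exponent learner with hypothetical numbers: each learner chooses h with probability proportional to the posterior raised to the power r, so r = 1 reproduces posterior sampling and larger r approaches picking the most probable hypothesis. This only illustrates the mechanism; the P(h)^r result above is an analytic claim for particular classes of languages, not something this toy example establishes.

```python
# Learners sample h with probability proportional to P(h | d)^r.
import numpy as np

rng = np.random.default_rng(5)
n_h, n_d = 3, 4
prior = np.array([0.5, 0.3, 0.2])
P_P = rng.dirichlet(np.ones(n_d), size=n_h)   # P_P[h, d] = P_P(d | h)

def sample_hypothesis(d, r):
    post = (P_P[:, d] * prior) ** r           # proportional to posterior^r
    return rng.choice(n_h, p=post / post.sum())

def simulate(r, n_generations=20_000, d0=0):
    d, hs = d0, []
    for _ in range(n_generations):
        h = sample_hypothesis(d, r)           # learner picks a hypothesis
        d = rng.choice(n_d, p=P_P[h])         # and produces data from it
        hs.append(h)
    return np.bincount(hs, minlength=n_h) / n_generations

for r in (1, 2, 10):
    print(r, simulate(r))   # larger r gives a more peaked distribution over h
```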

24 Implications for linguistic universals
When learners sample from P(h | d), the distribution over languages converges to the prior
– identifies a one-to-one correspondence between inductive biases and linguistic universals
As learners move towards maximizing, the influence of the prior is exaggerated
– weak biases can produce strong universals
– cultural evolution is a viable alternative to traditional explanations for linguistic universals

25 Analyzing iterated learning
The outcome of iterated learning is determined by the inductive biases of the learners
– hypotheses with high prior probability ultimately appear with high probability in the population
This clarifies the connection between constraints on language learning and linguistic universals…
…and provides formal justification for the idea that culture reflects the structure of the mind

26 Conclusions
Iterated learning provides a lens for magnifying the inductive biases of learners
– small effects for individuals are big effects for groups
When cognition affects culture, studying groups can give us better insight into individuals


