
1 Ch 6. Language Change: Multiple Languages 6.1 Multiple Languages
Presented by Rhee, Je-Keun

2 Contents
6.1 Multiple Languages
6.1.1 The Language Acquisition Framework
6.1.2 From Language Learning to Population Dynamics

3 Multiple Languages
The previous chapter examined some preliminary models that arise when two languages are in competition with each other. Consider a more general case where n languages may potentially be present in the population at any point in time. The n-language case gives rise to (n-1)-dimensional discrete-time dynamical systems.

4 The Language Acquisition Framework

5 The Language Acquisition Framework
The author adopts probabilistic convergence as the notion of learnability. A grammar g_i is learnable if, on presentation of example sentences drawn i.i.d. according to P_i, the learner converges to the target with probability 1, i.e.,
lim_{n→∞} Pr[A(d_n) = g_i] = 1,
where d_n is a random data set of n sentences drawn according to P_i.
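To make the convergence criterion concrete, here is a minimal Monte Carlo sketch in Python. The three grammars, their sentences, and the distribution P_TARGET are invented for illustration, and the learner is a simple consistency-based stand-in for A, not the book's algorithm:

```python
import random

# Toy hypothesis space: each "grammar" is the finite set of sentences it
# generates. Names, sentences, and probabilities are invented assumptions.
GRAMMARS = {
    "g1": {"a", "ab"},
    "g2": {"a", "ba"},
    "g3": {"a", "ab", "ba"},   # the target grammar g_i
}
TARGET = "g3"
P_TARGET = {"a": 0.4, "ab": 0.3, "ba": 0.3}   # P_i over the target's sentences

def learn(data):
    """A(d_n): return the first grammar consistent with every example."""
    for name, language in GRAMMARS.items():
        if all(s in language for s in data):
            return name
    return None

def estimate_convergence(n, trials=5000):
    """Monte Carlo estimate of Pr[A(d_n) = g_i] for a sample of size n."""
    sentences, weights = zip(*P_TARGET.items())
    hits = sum(
        learn(random.choices(sentences, weights=weights, k=n)) == TARGET
        for _ in range(trials)
    )
    return hits / trials

for n in (1, 5, 20, 100):
    print(n, estimate_convergence(n))   # estimate approaches 1 as n grows
```

Once the sample is large enough to contain both "ab" and "ba", only the target remains consistent, so the estimated probability tends to 1, which is exactly the learnability criterion above.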

6 From Language Learning to Population Dynamics
The framework for language learning focuses on the behavior of learners attempting to infer grammars on the basis of linguistic data. At any point in time n (i.e., after encountering n example sentences), the learner may hold any of the grammatical hypotheses in the hypothesis space H. Let us denote by p_n(h) the probability with which it holds the hypothesis h ∈ H. Since an arbitrary learner has a probability p_n(h) of developing hypothesis h (for every h ∈ H), it follows that a fraction p_n(h) of the population of learners internalizes the grammar h after n examples. We therefore have the state of the learning population after n examples. The new generation produces sentences for the following generation of learners according to the distribution of grammars in its population.

7 Discrete-time dynamical system
A State Space
A set of system states S. Here the state space is the space of possible linguistic compositions of the population. Each state is described by a distribution P_pop on H describing the language spoken by the population. At any given point in time t, the system is in exactly one state s_t ∈ S.
An Update Rule
How the system states change from one time step to the next. Typically, this involves specifying a functional mapping f that maps s_t to s_{t+1}.
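A minimal sketch of such a discrete-time system follows; the two-grammar update map f is invented purely to illustrate the state-space/update-rule interface, not a rule from the book:

```python
from typing import Callable, List

State = List[float]  # a point in the state space: P_pop over the grammars

def iterate(f: Callable[[State], State], s0: State, steps: int) -> List[State]:
    """Run the discrete-time dynamical system s_{t+1} = f(s_t) from s0."""
    trajectory = [s0]
    for _ in range(steps):
        trajectory.append(f(trajectory[-1]))
    return trajectory

# A hypothetical linear update rule over two grammars; each column of
# coefficients sums to 1, so distributions are mapped to distributions.
f = lambda s: [0.5 * s[0] + 0.3 * s[1], 0.5 * s[0] + 0.7 * s[1]]

for t, s in enumerate(iterate(f, [0.9, 0.1], 5)):
    print(t, [round(x, 4) for x in s])   # converges toward a fixed point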

8 State space and update rule
As a linguistic example, consider the three-parameter syntactic space described in Gibson and Wexler (1994). This system defines eight possible "natural" grammars, so H has eight elements. The state is interpreted as the linguistic composition of the population. To see in detail how the update rule may be computed, consider the acquisition algorithm A, for instance Gibson and Wexler's Triggering Learning Algorithm (TLA), sketched below.
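Below is a toy sketch of the TLA's control structure (on a parse failure, flip one randomly chosen parameter and adopt the result only if it parses the sentence). The parse relation is an invented stand-in, not Gibson and Wexler's actual grammars; notably, this toy relation has no local maxima, whereas the real three-parameter space does for some target grammars.

```python
import random

N = 3  # three binary parameters, as in Gibson and Wexler's space

def parses(g, s):
    """Toy parse relation, an assumption for illustration only: grammar g
    licenses sentence s iff they differ in at most one parameter value."""
    return sum(a != b for a, b in zip(g, s)) <= 1

def tla_step(g, s):
    """One TLA update: on a parse failure, flip a single randomly chosen
    parameter (single-value constraint) and adopt the new grammar only if
    it parses the sentence (greediness); otherwise keep the old grammar."""
    if parses(g, s):
        return g
    i = random.randrange(N)
    flipped = g[:i] + (1 - g[i],) + g[i + 1:]
    return flipped if parses(flipped, s) else g

target = (1, 0, 1)
all_sentences = [tuple((i >> k) & 1 for k in range(N)) for i in range(2 ** N)]
target_language = [s for s in all_sentences if parses(target, s)]

g = (0, 1, 0)                      # initial hypothesis
for _ in range(500):               # i.i.d. examples from the target language
    g = tla_step(g, random.choice(target_language))
print(g)                           # almost surely (1, 0, 1) by now
```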

9 Two possibilities for attaining the mature target grammar
Finite sample case (identification in a fixed, finite time)
One draws n example sentences according to the distribution P, and the acquisition algorithm develops hypotheses. One can compute the probability p_n(h_i) with which the learner will posit hypothesis h_i after n examples.
Limiting sample case (identification in the limit)
Learnability requires that p_n(g_t) converge to 1 when a unique target grammar g_t exists. In general, however, there need not be a unique target grammar, since the linguistic population can be nonhomogeneous.
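A small numerical illustration of the two cases, with a hypothetical three-hypothesis transition matrix T in which the target is absorbing (the matrix entries are assumptions chosen for the demo):

```python
import numpy as np

# Hypothetical learner chain over three hypotheses; T[i, j] is the chance of
# moving from h_i to h_j on one example, with h_3 the (absorbing) target g_t.
T = np.array([[0.6, 0.3, 0.1],
              [0.0, 0.7, 0.3],
              [0.0, 0.0, 1.0]])
p0 = np.full(3, 1 / 3)             # learner starts with a uniform hypothesis

for n in (1, 5, 20, 100):          # finite sample case: p_n = p_0 T^n
    print(n, (p0 @ np.linalg.matrix_power(T, n)).round(4))
# Limiting sample case: p_n(g_t) -> 1 here, so g_t is identified in the limit.
```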

10 Turn from the individual child to the population
For each grammar h_i ∈ H, the individual child learner adopts (internalizes) the grammar with probability p_n(h_i) in the finite sample case, or with probability p(h_i) in the limiting sample case. In a population of such individuals one would therefore expect a proportion p_n(h_i) or p(h_i), respectively, to have internalized grammar h_i. In other words, the linguistic composition of the next generation is given by P_pop,t+1(h_i) = p_n(h_i) in the finite sample case and by P_pop,t+1(h_i) = p(h_i) in the limiting sample case.
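A sketch of the resulting generation-to-generation iteration. The construction of T as a mixture of per-teacher matrices weighted by the current population is an assumption made for illustration; the key line is the update P_pop,t+1 = p_n:

```python
import numpy as np

# Assumed for illustration: the learner's transition matrix is a mixture of
# per-teacher matrices, weighted by how common each grammar is in P_pop,t.
T_teacher = [np.array([[0.8, 0.2],    # examples from speakers of h_1
                       [0.4, 0.6]]),
             np.array([[0.6, 0.4],    # examples from speakers of h_2
                       [0.1, 0.9]])]
n_examples = 5                        # finite sample case: n examples per child
p_pop = np.array([0.9, 0.1])          # initial linguistic composition

for t in range(8):
    T = p_pop[0] * T_teacher[0] + p_pop[1] * T_teacher[1]
    # P_pop,t+1(h_i) = p_n(h_i): children start uniform, then hear n examples
    p_pop = np.full(2, 0.5) @ np.linalg.matrix_power(T, n_examples)
    print(t + 1, p_pop.round(4))
```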

11 A learning system triple

12 The Update Rule
For the finite sample case:
s_{t+1} = (1/2^n) 1' T^m
where 1 is the 2^n-dimensional column vector with all its components taking the value 1 (n here being the number of binary parameters, so 2^n grammars, with the learner's initial hypothesis uniform over them), and the (i, j) element of the matrix T^m is the probability with which the learner moves from an initial hypothesis h_i to hypothesis h_j after exactly m examples.
For the limiting sample case:
s_{t+1} = 1' (I - T + ONE)^{-1}
where ONE is a matrix with all ones.
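Both forms are easy to check numerically. The sketch below uses a random row-stochastic T as a stand-in (in the model itself, T is induced by the algorithm A and the current population), and verifies that the finite sample update approaches the limiting one as m grows:

```python
import numpy as np

rng = np.random.default_rng(0)
k = 4                                   # 2^n grammars for n = 2 parameters
T = rng.random((k, k))
T /= T.sum(axis=1, keepdims=True)       # a hypothetical row-stochastic T

one = np.ones(k)                        # the all-ones column vector "1"
ONE = np.ones((k, k))                   # the all-ones matrix "ONE"

m = 10                                  # finite sample: m examples per learner
s_finite = (one / k) @ np.linalg.matrix_power(T, m)

# Limiting sample: the stationary distribution pi solves pi(I - T + ONE) = 1',
# since pi T = pi and pi ONE = 1' for an ergodic chain.
s_limit = one @ np.linalg.inv(np.eye(k) - T + ONE)

print(s_finite.round(4))
print(s_limit.round(4))                 # s_finite approaches this as m grows
```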

13 Basic computational framework for modeling language change

