
1 Modeling Parameter Setting Performance in Domains with a Large Number of Parameters: A Hybrid Approach
CUNY / SUNY / NYU Linguistics Mini-conference, March 10, 2001
William Gregory Sakas & Dina Demner-Fushman

2 Primary point (to make [quickly, painlessly] on a Saturday just before lunch): It is not enough to build a series of computer simulations of a cognitive model of human language acquisition and claim that it mirrors the process by which a child acquires language. The (perhaps obvious) fact is that learners are acutely sensitive to cross-language ambiguity. Whether or not a learning model is ultimately successful as a cognitive model is an empirical issue; it depends on the ‘fit’ of the simulations with the facts about the distribution of ambiguity in human languages.

3 What’s coming:
1) Some early learnability results
2) A feasibility case study of one parameter-setting model: the Triggering Learning Algorithm (TLA) of Gibson and Wexler (1994)
3) Conjectures and a proposed research agenda

4 Why computationally model natural language acquisition? Pinker (1979): "...it may be necessary to find out how language learning could work in order for the developmental data to tell us how it does work." [emphasis mine]

5 Learnability: Is the learner guaranteed to converge on the target grammar for every language in a given domain? Gold (1967), Wexler and Culicover (1980), Gibson & Wexler (1994), Kanazawa (1994).
An early learnability result (Gold, 1967): Exposed to input strings of an arbitrary target language generated by grammar G_targ, it is impossible to guarantee that any learner can converge on G_targ if G_targ is drawn from any class in the Chomsky hierarchy (e.g., the context-free grammars).

6 Gold’s result is sometimes taken to be strong evidence for a nativist Universal Grammar.
1) Psycholinguistic research indicates that children learn grammar based on positive exemplar sentences.
2) Gold proves that the classes G_reg ⊂ G_cfg ⊂ G_cs ⊂ G_re can’t be learned this way.
Conclusion: some grammatical competence must be in place before learning commences.
(Gold’s result is often misapplied; more on this in the tag slides.)

7 Another learnability result: All classes of grammars possible within the principles-and-parameters framework are learnable because they are finite. In fact, a simple Blind Guess Learner is guaranteed to succeed in the long run for any finite class of grammars (a sketch in Python follows below).
Blind Guess Learner:
1. randomly hypothesize a current grammar
2. consume and attempt to parse a sentence from the linguistic environment
3. if the sentence is parsable by the current grammar, go to 2; otherwise go to 1
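A minimal Python sketch of the Blind Guess Learner, assuming two hypothetical helpers that the slides leave abstract: parses(g, s), which says whether grammar g can parse sentence s, and next_sentence(), which draws one sentence from the target language:

```python
import random

def blind_guess_learner(grammars, parses, next_sentence, num_inputs=100_000):
    """Blind Guess Learner over a finite grammar class (sketch).

    grammars:      list of candidate grammars (the finite hypothesis space)
    parses:        hypothetical callable parses(g, s) -> bool
    next_sentence: hypothetical callable drawing a sentence from L(G_targ)
    """
    current = random.choice(grammars)       # 1. randomly hypothesize a grammar
    for _ in range(num_inputs):
        s = next_sentence()                 # 2. consume a sentence
        if not parses(current, s):          # 3. parsable? keep it; else guess again
            current = random.choice(grammars)
    return current
```

Since the target grammar has nonzero probability of being drawn at every failure, the learner succeeds in the long run; the open question is how long that run is.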

8 Feasibility: Is acquisition possible within a reasonable amount of time and/or with a reasonable amount of work? Clark (1994, in press), Niyogi and Berwick (1996), Lightfoot (1989) (degree-0), Sakas (2000), Tesar and Smolensky (1996), and many PAC results concerning the induction of FSAs.
A feasibility measure (Sakas and Fodor, in press): a near-linear increase in the expected number of sentences consumed before a learner converges on the target grammar.

9 Feasibility result: The Blind Guess Learner succeeds only after consuming a number of sentences exponentially correlated with the number of parameters. If #parameters = 30, then #grammars = 2^30 = 1,073,741,824. The search space is huge!
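The arithmetic behind the exponential blow-up, as a quick check in plain Python:

```python
# Each binary parameter doubles the hypothesis space: 2**n grammars for n parameters.
for n_params in (3, 10, 20, 30, 40):
    print(f"{n_params:>2} parameters -> {2 ** n_params:,} grammars")
# 30 parameters -> 1,073,741,824 grammars, so blind search cannot scale.
```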

10 A feasibility case study: a three-parameter domain (Gibson and Wexler, 1994). Sentences are strings over the symbols S, V, O1, O2, aux, adv.
SV / VS: subject precedes verb / verb precedes subject
+V2 / -V2: the verb or aux must (respectively need not) be in the second position in the sentence
VO / OV: verb precedes object / object precedes verb
Example: "Allie will eat the birds" → S aux V O

11 Two example languages (finite, degree-0)
SV OV +V2 (German-like): S V, S V O, O V S, S V O2 O1, O1 V S O2, O2 V S O1, S AUX V, S AUX O V, O AUX S V, S AUX O2 O1 V, O1 AUX S O2 V, O2 AUX S O1 V, ADV V S, ADV V S O, ADV V S O2 O1, ADV AUX S V, ADV AUX S O V, ADV AUX S O2 O1 V
SV VO -V2 (English-like): S V, S V O, S V O1 O2, S AUX V, S AUX V O, S AUX V O1 O2, ADV S V, ADV S V O, ADV S V O1 O2, ADV S AUX V, ADV S AUX V O, ADV S AUX V O1 O2

12 Surprisingly, G&W’s simple 3-parameter domain presents nontrivial obstacles to several types of learning strategies, but the space is ultimately learnable. Big question: how will the learning process scale up in terms of feasibility as the number of parameters increases? Two problems for most acquisition strategies: 1) ambiguity, 2) the size of the domain.

13 Cross-language ambiguity (highlighting on the original slide marks a few ambiguous strings; see the check below)
SV OV +V2 (German-like): S V, S V O, O V S, S V O2 O1, O1 V S O2, O2 V S O1, S AUX V, S AUX O V, O AUX S V, S AUX O2 O1 V, O1 AUX S O2 V, O2 AUX S O1 V, ADV V S, ADV V S O, ADV V S O2 O1, ADV AUX S V, ADV AUX S O V, ADV AUX S O2 O1 V
SV VO -V2 (English-like): S V, S V O, S V O1 O2, S AUX V, S AUX V O, S AUX V O1 O2, ADV S V, ADV S V O, ADV S V O1 O2, ADV S AUX V, ADV S AUX V O, ADV S AUX V O1 O2
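The cross-language ambiguity is easy to verify mechanically. Encoding the two lists above as Python sets and intersecting them yields exactly the strings that both grammars generate:

```python
german_like = {
    "S V", "S V O", "O V S", "S V O2 O1", "O1 V S O2", "O2 V S O1",
    "S AUX V", "S AUX O V", "O AUX S V", "S AUX O2 O1 V",
    "O1 AUX S O2 V", "O2 AUX S O1 V", "ADV V S", "ADV V S O",
    "ADV V S O2 O1", "ADV AUX S V", "ADV AUX S O V", "ADV AUX S O2 O1 V",
}
english_like = {
    "S V", "S V O", "S V O1 O2", "S AUX V", "S AUX V O", "S AUX V O1 O2",
    "ADV S V", "ADV S V O", "ADV S V O1 O2", "ADV S AUX V",
    "ADV S AUX V O", "ADV S AUX V O1 O2",
}
# Strings parsable by both grammars: these give the learner no evidence.
print(sorted(german_like & english_like))
# ['S AUX V', 'S V', 'S V O']
```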

14 P&P acquisition: how can we obtain informative feasibility results while studying linguistically interesting domains with cognitively plausible learning algorithms?

15 So, how to answer questions of feasibility as the number of grammars (exponentially) scales up? One option: create an input space for a linguistically plausible domain and run simulations (Briscoe (2000), Clark (1992), Elman (1990, 1991, 1996), Yang (2000)). But this won’t work for large domains. Answer: introduce some formal notions in order to abstract away from the specific linguistic content of the input data.

16 A hybrid approach (formal/empirical):
1) formalize the learning process and the input space
2) use the formalization in a Markov structure to empirically test the learner across a wide range of learning scenarios
The framework gives general data on the expected performance of acquisition algorithms. It can answer the question: given learner L, if the input space exhibits characteristics x, y and z, is feasible learning possible?

17 Syntax acquisition can be viewed as a state-space search: nodes represent grammars, including a start state and a target state; arcs represent a possible change from one hypothesized grammar to another.
[Diagram: a possible state space for a parameter space with 3 parameters; the eight grammars 000, 001, 010, 011, 100, 101, 110, 111 as nodes, one of them marked as G_targ.]

18 The Triggering Learning Algorithm (TLA) (Gibson and Wexler, 1994) searches the (huge) grammar space using local heuristics:
repeat until convergence:
  receive a string s from L(G_targ)
  if s can be parsed by G_curr, do nothing [error-driven]
  otherwise, pick a grammar that differs by one parameter value from the current grammar [Single Value Constraint, SVC]
  if this grammar parses the sentence, make it the current grammar; otherwise do nothing [greediness]
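The same loop as a minimal Python sketch, again assuming the hypothetical helpers parses(g, s) and next_sentence(). Grammars are tuples of 0/1 parameter values; the convergence test is left out because the learner cannot observe the target directly:

```python
import random

def one_bit_neighbors(g):
    """Grammars differing from g in exactly one parameter value (the SVC)."""
    return [g[:i] + (1 - g[i],) + g[i + 1:] for i in range(len(g))]

def tla(g_init, parses, next_sentence, num_inputs=100_000):
    """Triggering Learning Algorithm (Gibson & Wexler 1994), sketched."""
    g_curr = g_init
    for _ in range(num_inputs):
        s = next_sentence()                  # a string from L(G_targ)
        if parses(g_curr, s):                # error-driven: no error, no change
            continue
        g_attempt = random.choice(one_bit_neighbors(g_curr))   # SVC
        if parses(g_attempt, s):             # greediness: adopt only if the new
            g_curr = g_attempt               # grammar parses the triggering input
    return g_curr
```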

19 Local state space for the TLA. If G_curr = 010, then G_attempt = a random G ∈ {000, 110, 011} [SVC]. The learner stays put unless s ∉ L(G_curr) [error-driven], and moves to G_attempt only if s ∈ L(G_attempt) [greediness].
[Diagram: node 010 with arcs to 000, 110 and 011, labeled: s ∉ L(G_curr) ∧ G_attempt = 110 ∧ s ∈ L(G_110); s ∉ L(G_curr) ∧ G_attempt = 011 ∧ s ∈ L(G_011); s ∉ L(G_curr) ∧ G_attempt = 000 ∧ s ∈ L(G_000).]

20 Probabilistic formulation of TLA performance:
α_i denotes the ambiguity factor: α_i = Pr(s ∈ L(G_targ) ∩ L(G_i))
β_i,j denotes the overlap factor: β_i,j = Pr(s ∈ L(G_i) ∩ L(G_j))
γ_i denotes the probability of picking, or "looking ahead" at, a new hypothesis grammar G_i

21 The probability that the learner moves from state G_curr to state G_new:
Pr(G_curr → G_new) = (1 − α_curr) [error-driven] × (γ_new) [SVC] × Pr(G_new can parse s | G_curr can’t parse s) [greediness]
After some algebra:
Pr(G_curr → G_new) = (γ_new)(α_new − β_curr,new)
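Filling in the algebra step: Pr(s ∉ L(G_curr)) = 1 − α_curr, and Pr(s ∈ L(G_new) ∧ s ∉ L(G_curr)) = α_new − β_curr,new, so the denominator of the conditional probability cancels the error-driven factor. A one-function Python rendering, using the reconstructed α/β/γ names from the previous slide:

```python
def transition_probability(alpha_new, beta_curr_new, gamma_new):
    """Pr(G_curr -> G_new) = gamma_new * (alpha_new - beta_curr_new).

    alpha_new:     Pr(s in L(G_targ) and L(G_new))  -- ambiguity factor
    beta_curr_new: Pr(s in L(G_curr) and L(G_new))  -- overlap factor
    gamma_new:     probability of looking ahead at G_new
    """
    return gamma_new * (alpha_new - beta_curr_new)
```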

22 Parameter space H_4 with G_targ = 1111. Each ring, or G-Ring, contains exactly those grammars at a certain Hamming distance from the target. For example, ring G_2 contains 0011, 0101, 1100, 1010, 1001 and 0110, all of which differ from the target grammar 1111 by 2 bits.
[Diagram: the sixteen grammars of H_4 arranged in concentric rings G_1 through G_4 around the target grammar G_targ.]
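A short sketch of how such rings can be computed; the code is illustrative rather than from the paper, but the distance-2 ring it prints matches the six grammars named on the slide:

```python
from itertools import product

def hamming(g, h):
    """Number of parameter values on which two grammars differ."""
    return sum(a != b for a, b in zip(g, h))

def g_rings(n_params, g_target):
    """Partition the 2**n grammar space into G-Rings by distance from the target."""
    rings = {d: [] for d in range(n_params + 1)}
    for g in product((0, 1), repeat=n_params):
        rings[hamming(g, g_target)].append(g)
    return rings

rings = g_rings(4, (1, 1, 1, 1))
print(rings[2])   # six grammars: 0011, 0101, 0110, 1001, 1010, 1100
```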

23 Smoothness: there exists a correlation between the similarity of grammars and the similarity of the languages that they generate.
Weak Smoothness Requirement: all the members of a G-Ring can parse s with equal probability.
Strong Smoothness Requirement: the parameter space is weakly smooth, and the probability that s can be parsed by a member of a G-Ring increases monotonically as distance from the target decreases.
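Read concretely, weak smoothness assigns one parse probability per ring, and strong smoothness makes that probability monotone in the distance. The numbers below are hypothetical, chosen only to illustrate the two conditions for H_4 (ring G_0 is the target itself, which always parses its own sentences):

```python
# Hypothetical per-ring parse probabilities for H_4 (rings G_0 .. G_4).
uniform = {0: 1.0, 1: 0.5, 2: 0.5, 3: 0.5, 4: 0.5}   # weakly smooth only
smooth  = {0: 1.0, 1: 0.8, 2: 0.6, 3: 0.4, 4: 0.2}   # strongly smooth

# Strong smoothness: probability strictly increases as distance decreases.
assert all(smooth[d] > smooth[d + 1] for d in range(4))
```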

24 Experimental setup:
1) adapt the formulas for the transition probabilities to work with G-Rings
2) build a generic transition matrix into which varying values of α and β can be plugged
3) use the standard Markov technique to calculate the expected number of inputs consumed by the system (construct the fundamental matrix)
Goal: find the ‘sweet spot’ for TLA performance
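Step 3 is the standard absorbing-Markov-chain computation. A sketch under these assumptions: P is a transition matrix over states (individual grammars, or G-Rings as in the setup above), the target state is absorbing, and the expected sentence count is read off the fundamental matrix N = (I - Q)^-1:

```python
import numpy as np

def expected_inputs(P, start, target):
    """Expected number of sentences consumed before reaching the target state.

    P:      square transition matrix (rows sum to 1), target row absorbing
    start:  index of the learner's initial state (must differ from target)
    target: index of the target state
    """
    transient = [i for i in range(P.shape[0]) if i != target]
    Q = P[np.ix_(transient, transient)]            # transitions among transient states
    N = np.linalg.inv(np.eye(len(transient)) - Q)  # fundamental matrix
    return N.sum(axis=1)[transient.index(start)]   # expected steps to absorption
```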

25 Three experiments:
1) G-Rings equally likely to parse an input sentence (uniform domain)
2) G-Rings are strongly smooth (smooth domain)
3) an anything-goes domain
Problem: how to find combinations of α and β that are optimal? Solution: use an optimization algorithm, GRG2 (Lasdon and Waren, 1978).

26 Result 1: The TLA performs worse than blind guessing in a uniform domain: an exponential increase in the number of sentences.
[Graph, logarithmic scale. Results obtained employing optimal values of α and β.]

27 Result 2: The TLA performs extremely well in a smooth domain, though with a still-nonlinear increase.
[Graph, linear scale. Results obtained employing optimal values of α and β.]

28 Result 3: The TLA performs a bit better in the anything-goes scenario; the optimizer chooses an ‘accelerating’ strong smoothness.
[Graph, linear scale. Results obtained employing optimal values of α and β.]

29 In summary. The TLA is an infeasible learner: with cross-language ambiguity uniformly distributed across the domain of languages, the number of sentences consumed by the TLA is exponential in the number of parameters. The TLA is a feasible learner: in strongly smooth domains, the number of sentences increases at a rate much closer to linear as the number of parameters increases (i.e., as the number of grammars increases exponentially).

30 No Best Strategy Conjecture (roughly in the spirit of Schaffer, 1994): algorithms may be extremely efficient in specific domains, but not in others; there is generally no best learning strategy. This recommends: we have to know the specific facts about the distribution, or shape, of ambiguity in natural language.

31 Research agenda: a three-fold approach to building a cognitive computational model of human language acquisition:
1) formulate a framework to determine what distributions of ambiguity make for feasible learning
2) conduct a psycholinguistic study to determine whether the facts of human (child-directed) language are in line with the conducive distributions
3) conduct a computer simulation to check for performance nuances and potential obstacles (e.g., local maxima based on defaults, or Subset Principle violations)

32 Tag slides (if time): Why is Gold’s theorem so often misapplied?

33 Gold’s result is sometimes taken to be strong evidence for a nativist Universal Grammar.
1) Psycholinguistic research indicates that children learn grammar based on positive exemplar sentences.
2) Gold says the classes G_reg ⊂ G_cfg ⊂ G_cs ⊂ G_re can’t be learned this way.
Conclusion: some grammatical competence must be in place before learning commences.
(Gold’s result is often misapplied, but there is much discussion.)

34 Gold misapplied. Tacit assumptions:
1) Human language is a computable process. By Church/Turing, no computational model is more powerful than L_re; hence L_human is a subset of L_re.
2) The Chomsky hierarchy is an appropriate framework in which to examine L_human. There are many, many interesting formal results about CFGs, automata, etc. But where does L_human lie?

35 Given the language-as-computation assumption and Gold’s result, it may be that the class of human languages intersects the classes of the Chomsky Hierarchy.
[Diagram: L_human cutting across the nested classes L_reg ⊂ L_cfg ⊂ L_cs ⊂ L_re.]
Angluin’s Theorem (1980) provides necessary and sufficient conditions for such a class to be learnable from positive data.

