Learning linguistic structure John Goldsmith Computer Science Department University of Chicago February 7, 2003.


2 Learning linguistic structure John Goldsmith Computer Science Department University of Chicago February 7, 2003

3 A large part of the field of computational linguistics has moved during the 1990s from developing grammars, speech recognition engines, etc., that simply work, to developing systems that learn language-specific parameters from large amounts of data.

4 Credo… Statistically driven methods of data analysis, when applied to natural language data, will produce results that shed light on linguistic structure.

5 Unsupervised learning Input: large texts in a natural language, with no prior knowledge of the language.

6 A bit more about the goal What’s the input? “Data” – which comes to the learner, in acoustic form, unsegmented: sentences not broken up into words; words not broken up into their components (morphemes); words not assigned to lexical categories (noun, verb, article, etc.). With a meaning representation?

7 Idealization of the language-learning scheme 1. Segment the soundstream into words; the words form the lexicon of the language. 2. Discover internal structure of words; this is the morphology of the language. 3. Infer a set of lexical categories for words; each word is assigned to (at least) one lexical category. 4. Infer a set of phrase-structure rules for the language.

8 Idealization? While these tasks are individually coherent, we make no assumption that any one must be completed before another can be begun.

9 Today’s task To develop an algorithm capable of learning the morphology of a language, given knowledge of the words of the language, and of a large sample of utterances.

10 Goals Given a corpus, learn: the set of word-roots, prefixes, and suffixes, and the principles of their combination; principles of automatic alternations (e.g., e drops before the suffixes -ing, -ity, and -ed, but not before -s); and the fact that some suffixes have one grammatical function (-ness) while others have more (e.g., -s: song-s versus sing-s).

11 Why? Practical applications: automatic stemming for multilingual information retrieval; a corpus broken into morphemes is far superior to a corpus broken into words for statistically driven machine translation; developing morphologies for speech recognition automatically.

12 Theoretically There is a strong bias currently in linguistics to underestimate the difficulty of language learning – For example, to identify language learning with the selection of a phrase-structure grammar, or with the independent setting of a small number of parameters.

13 Morphology The learning of morphology is a very difficult task, in the sense that every word W of length |W| can potentially be divided into 1, 2, …, |W| morphemes m_1, …, m_k, constrained only by |m_1| + … + |m_k| = |W| – and that’s ignoring labeling (which is the stem, which the affix). The number of potential morphologies for a given corpus is enormous.
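
To make the combinatorics concrete (a worked count, not taken from the slides): a word of length |W| has |W| − 1 internal positions, each of which can independently be a morpheme boundary or not, so the number of unlabeled segmentations is

\#\,\text{segmentations}(W) \;=\; 2^{\,|W|-1}, \qquad \text{e.g. } |W| = 10 \;\Rightarrow\; 2^{9} = 512 .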

14 So the task is a reality check for discussions of language learning

15 Ideally We would like to pose the problem of grammar-selection as an optimization problem, and cut our task into two parts: 1. Specification of the objective function to be optimized, and 2. Development of practical search techniques to find optima in reasonable time.

16 Current status Linguistica: a C++ Windows-based program available for download at http://humanities.uchicago.edu/faculty/goldsmith/Linguistica2000 Technical discussion in Computational Linguistics (June 2001). Good results with 5,000 words, very fine-grained results with 500,000 words (corpus length, not lexicon count), especially in European languages.

17 Today’s talk 1. Specify the task in explicit terms 2. Minimum Description Length analysis: what it is, and why it is reasonable for this task; how it provides our optimization criteria. 3. Search heuristics: (1) bootstrap heuristic, and (2) incremental heuristics. 4. Morphology assigns a probability distribution over its words. 5. Computing the length of the morphology.

18 Today’s talk (continued) 6. Results 7. Some work in progress: learning syntax to learn about morphology

19 Given a text (but no prior knowledge of its language), we want: a list of stems, suffixes, and prefixes; and a list of signatures. A signature is a list of all suffixes (or prefixes) appearing in a given corpus with a given stem. Hence a stem in a corpus has a unique signature, and a signature has a unique set of stems associated with it.

20 Example of a signature in English The signature NULL.ed.ing.s with the stems ask, call, point summarizes: ask, asked, asking, asks; call, called, calling, calls; point, pointed, pointing, points.
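
A minimal sketch of how signatures can be collected once each word has a candidate stem + suffix cut (an illustration of the idea, not Linguistica's actual code; the NULL suffix is represented here by the string "NULL"):

#include <map>
#include <set>
#include <string>
#include <utility>
#include <vector>

// Group stems by their signature: the alphabetized set of suffixes
// that occur with that stem in the corpus.
std::map<std::set<std::string>, std::set<std::string>>
buildSignatures(const std::vector<std::pair<std::string, std::string>>& cuts) {
    // cuts holds (stem, suffix) pairs, e.g. ("ask", "NULL"), ("ask", "ed"), ...
    std::map<std::string, std::set<std::string>> suffixesOfStem;
    for (const auto& c : cuts)
        suffixesOfStem[c.first].insert(c.second);

    // A std::set is kept in sorted (alphabetical) order, so it serves
    // directly as the signature.
    std::map<std::set<std::string>, std::set<std::string>> stemsOfSignature;
    for (const auto& e : suffixesOfStem)
        stemsOfSignature[e.second].insert(e.first);
    return stemsOfSignature;
}

On the slide's example, the stems ask, call, and point would all end up keyed by the signature {NULL, ed, ing, s}.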

21 We would like to characterize the discovery of a signature as an optimization problem Reasonable tack: formulate the problem in terms of Minimum Description Length (Rissanen, 1989)

22 Today’s talk 1. Specify the task in explicit terms 2. Minimum Description Length analysis: what it is, and why it is reasonable for this task; how it provides our optimization criteria. 3. Search heuristics: (1) bootstrap heuristic, and (2) incremental heuristics. 4. Morphology assigns a probability distribution over its words. 5. Computing the length of the morphology.

23 Minimum Description Length (MDL) Jorma Rissanen: Stochastic Complexity in Statistical Inquiry (1989) Work by Michael Brent and Carl de Marcken on word-discovery using MDL in the mid-1990s.

24 Essence of MDL If we are given 1. a corpus, and 2. a probabilistic morphology – which technically means that we are given a distribution over certain strings of stems and affixes – then we can compute an overall measure (“description length”) which we can seek to minimize over the space of all possible analyses.

25 Description length of a corpus C, given a morphology M The length, in bits, of the shortest formulation of the morphology expressible on a given Turing machine + Optimal compressed length of the corpus, using that morphology.

26 Probabilistic morphology To serve this function, the morphology must assign a distribution over the set of words it generates, so that the optimal compressed length of an actual, occurring corpus (the one we’re learning from) is −1 × the log of the probability the morphology assigns to it.
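
One way to write the two-part quantity of the last two slides, using L(M) for the length in bits of the morphology and P_M(C) for the probability the morphology assigns to the corpus (notation mine, not the slides'):

\mathrm{DL}(M, C) \;=\; L(M) \;-\; \log_2 P_M(C) ,

where the first term is the length of the morphology and the second is the compressed length of the corpus. MDL selects the morphology M that minimizes DL(M, C).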

27 Essence of MDL… The goodness of the morphology is also measured by how compact the morphology is. We can measure the compactness of a morphology in information theoretic bits.

28 How can we measure the compactness of a morphology? Let’s consider a naïve version of description length: count the number of letters. This naïve version is nonetheless helpful in seeing the intuition involved.

29 Naive minimum description length Corpus: jump, jumps, jumping; laugh, laughed, laughing; sing, sang, singing; the, dog, dogs (total: 62 letters). Analysis: stems: jump, laugh, sing, sang, dog (20 letters); suffixes: s, ing, ed (6 letters); unanalyzed: the (3 letters); total: 29 letters. Notice that the description length goes UP if we analyze sing into s + ing.
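
A toy check of the letter-count arithmetic on that slide (a sketch using the slide's own stems, suffixes, and unanalyzed word):

#include <cstddef>
#include <iostream>
#include <string>
#include <vector>

int main() {
    std::vector<std::string> stems    = {"jump", "laugh", "sing", "sang", "dog"};
    std::vector<std::string> suffixes = {"s", "ing", "ed"};
    std::vector<std::string> other    = {"the"};   // left unanalyzed

    std::size_t letters = 0;
    for (const auto& s : stems)    letters += s.size();   // 20 letters of stems
    for (const auto& s : suffixes) letters += s.size();   //  6 letters of suffixes
    for (const auto& s : other)    letters += s.size();   //  3 letters unanalyzed

    std::cout << "naive description length of the analysis: "
              << letters << " letters\n";                  // prints 29
}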

30 Essence of MDL… The best overall theory of a corpus is the one for which the sum of −log prob(corpus) and the length of the morphology (that sum is the description length) is the smallest.

31 Essence of MDL…

32 Overall logic Search through morphology space for the morphology which provides the smallest description length.

33 Brief foreshadowing of our calculation of the length of the morphology A morphology is composed of three lists: a list of stems, a list of suffixes (say), and a list of ways in which the two can be combined (“signatures”). The information content of a list is, roughly, the cost of stating its length plus the information content of each of its entries (developed below).

34 Stem list

35 Today’s talk 1. Specify the task in explicit terms 2. Minimum Description Length analysis: what it is, and why it is reasonable for this task; how it provides our optimization criteria. 3. Search heuristics: (1) bootstrap heuristic, and (2) incremental heuristics. 4. Morphology assigns a probability distribution over its words. 5. Computing the length of the morphology.

36 Bootstrap heuristic 1. Find a method to locate likely places to cut a word. 2. Allow no more than 1 cut per word (i.e., maximum of 2 morphemes). 3. Assume this is stem + suffix. 4. Associate with each stem an alphabetized list of its suffixes; call this its signature. 5. Accept only those word analyses associated with robust signatures…

37 …where a robust signature is one with a minimum of 5 stems (and at least two suffixes). Robust signatures are pieces of secure structure.

38 Heuristic to find likely cuts… The best is a modification of a good idea of Zellig Harris (1955). Current variant: cut words at certain peaks of successor frequency. Problems: can over-cut; can under-cut; and can put cuts too far to the right (the “aborti-” problem). [Not a problem!]

39 Successor frequency g o v e r n Empirically, only one letter follows “gover”: “n”

40 Successor frequency g o v e r n m Empirically, 6 letters follow “govern”: “m”, “i”, “o”, “s”, “e”, and “#” (end of word).

41 Successor frequency g o v e r n m e Empirically, only 1 letter follows “governm”: “e”. Successor frequencies along the word: g o v e r (1) n (6) m (1) e – the peak of successor frequency falls after “govern”.

42 Lots of errors… Successor frequencies along c o n s e r v a t i v e s: 9 18 11 6 4 1 2 1 1 2 1 1 – the peaks propose both wrong and right cuts.

43 Even so… We set conditions: accept cuts with stems at least 5 letters in length; demand that successor frequency be a clear peak: 1 … N … 1 (e.g. govern-ment). Then for each stem, collect all of its suffixes into a signature, and accept only signatures associated with at least 5 stems.
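
A small sketch of the successor-frequency computation (an assumption about the bookkeeping, not Goldsmith's code): for every prefix of every word, collect the distinct letters that can follow it, with '#' marking end-of-word; a cut is proposed where the count forms a clear 1 … N … 1 peak.

#include <map>
#include <set>
#include <string>
#include <vector>

// successorFrequencies(words)["govern"].size() would be 6 for the example above.
std::map<std::string, std::set<char>>
successorFrequencies(const std::vector<std::string>& words) {
    std::map<std::string, std::set<char>> successors;
    for (const auto& w : words)
        for (std::size_t i = 1; i <= w.size(); ++i) {
            char next = (i < w.size()) ? w[i] : '#';   // '#' marks end of word
            successors[w.substr(0, i)].insert(next);
        }
    return successors;
}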

44 The bootstrap heuristic as a sequence of calls in Linguistica:

Words->SuccessorFreq1( GetStems_Suffixed(), GetSuffixes(), GetSignatures(), SF1 );  // initial cuts at successor-frequency peaks
CheckSignatures();                   // keep only robust signatures
ExtendKnownStemsToKnownSuffixes();   // try known suffixes on known stems
TakeSignaturesFindStems();           // use known signatures to find new stems
ExtendKnownStemsToKnownSuffixes();
FromStemsFindSuffixes();             // use known stems to find new suffixes
ExtendKnownStemsToKnownSuffixes();
LooseFit();                          // propose analyses for remaining strings
CheckSignatures();

45 2. Incremental heuristics An enormous amount of detail is being skipped… let’s look at one simple case, loose fit (using known suffixes and signatures to split new strings): collect any string that precedes a known suffix; find all of its apparent suffixes; and use MDL to decide whether the analysis is worth doing.

46 Using MDL to judge a potential stem and potential signature Suppose we find: act, acted, action, acts. We have the suffixes NULL, ed, ion, and s, but not the signature NULL.ed.ion.s Let’s compute cost versus savings of signature NULL.ed.ion.s

47 Savings Stem savings: 3 copies of the stem act no longer need to be listed: that’s 3 × 3 = 9 letters = 40.5 bits (taking 4.5 bits/letter). Suffix savings: ed, ion, s: 6 letters, another 27 bits. Total: 67.5 bits.

48 Cost of NULL.ed.ion.s A pointer to each suffix. To give a feel for this: the total cost of the suffix list is about 30 bits. Add the cost of a pointer to the signature itself – all the stems using the signature chip in to pay for that cost, though.

49 Cost of the signature: about 43 bits. Savings: about 67 bits, with a slight worsening in the compressed length of these 4 words. So MDL says: Do it! Analyze the words as stem + suffix. Notice that the cost of the analysis would have been higher if one or more of the suffixes had not already “existed”.
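
Putting the slide's numbers together (the small corpus-compression penalty ε is left symbolic, since the slide only calls it "slight"):

\Delta\mathrm{DL} \;\approx\; 43\ \text{bits (cost of the new signature)} \;-\; 67.5\ \text{bits (letters saved)} \;+\; \varepsilon \;\approx\; -24.5\ \text{bits} + \varepsilon \;<\; 0 ,

so the proposed analysis lowers the description length and is accepted.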

50 Today’s talk 1. Specify the task in explicit terms 2. Minimum Description Length analysis: what it is, and why it is reasonable for this task; how it provides our optimization criteria. 3. Search heuristics: (1) bootstrap heuristic, and (2) incremental heuristics. 4. Morphology assigns a probability distribution over its words. 5. Computing the length of the morphology.

51 Frequency of an analyzed word W, analyzed as belonging to signature σ with stem T and suffix F:

prob(W) = ([σ]/[W]) × ([T]/[σ]) × ([F in σ]/[σ])

Actually what we care about is the log of this:

−log prob(W) = log([W]/[σ]) + log([σ]/[T]) + log([σ]/[F in σ])

where [W] is the total number of words, and [x] means the count of x’s in the corpus (token count).
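
A worked instance with invented counts (hypothetical numbers chosen only to illustrate the formula): suppose [W] = 100,000, the signature σ has [σ] = 2,000 tokens, the stem T has [T] = 100 tokens, and the suffix F occurs 500 times within σ. Then the compressed length of one occurrence of W is

-\log_2 \mathrm{prob}(W) \;=\; \log_2\frac{100000}{2000} + \log_2\frac{2000}{100} + \log_2\frac{2000}{500} \;\approx\; 5.64 + 4.32 + 2.00 \;\approx\; 11.97\ \text{bits}.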

52

53 Today’s talk 1. Specify the task in explicit terms 2. Minimum Description Length analysis: what it is, and why it is reasonable for this task; how it provides our optimization criteria. 3. Search heuristics: (1) bootstrap heuristic, and (2) incremental heuristics. 4. Morphology assigns a probability distribution over its words. 5. Computing the length of the morphology.

54 The length of a morphology A morphology is a set of 3 things: A list of stems; A list of suffixes; A list of signatures with the associated stems. We’ll make an effort to make our grammars consist primarily of lists, whose length is conceptually simple.

55 Length of a list A header telling us how long the list is, of length (roughly) log₂ N, where N is the number of entries; then the N entries themselves. What’s in an entry? Raw lists: a list of strings of letters, where the length of each letter is log₂(26) – the information content of a letter (we can use a more accurate conditional probability). Pointer lists: a list of pointers to the entries. Someday: the information contained in the meaning of each morpheme.
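
In symbols (a sketch consistent with the slide, using log₂ 26 ≈ 4.7 bits per letter for a raw list):

L(\text{raw list}) \;\approx\; \log_2 N \;+\; \sum_{i=1}^{N} |\mathrm{entry}_i| \cdot \log_2 26 .

For the three-suffix list {s, ing, ed}, that is roughly log₂ 3 + 6 × 4.7 ≈ 30 bits.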

56 Connections across lists Raw suffix list: ed, s, ing, ion, able, … Signature 1: suffixes: pointer to “ing”, pointer to “ed”. Signature 2: suffixes: pointer to “ing”, pointer to “ion”. The length of each pointer is −log(frequency of the suffix) – usually cheaper than spelling out the letters themselves.

57 The fact that a pointer to a symbol has a length equal to the log of the inverse of its frequency is the key: we want the shortest overall grammar, and that means maximizing the re-use of units (stems, affixes, signatures, etc.).

58 Length of the morphology so far: the number of letters in the lists, plus the list structure, plus the signatures, which we’ll get to shortly.

59 Information contained in the signature component: a list of pointers to the signatures, where the count written for a set X indicates the number of distinct elements in X.

60 Repair heuristics: using MDL We could compute the entire description length in one state of the morphology (original morphology + compressed data); make a change; compute the whole description length in the proposed, modified state (revised morphology + compressed data); and compare the two.

61 But it’s better to have a more thoughtful approach. Let’s define terms for the size of the punctuation (the list structure) of the three lists, and then compute directly how that size changes when we consider a modification.

62 The size of the suffix component, remember, was given above. The change in its size when we consider a modification to the morphology has four parts: 1. global effects of the change in the number of suffixes; 2. effects on suffixes present in both states whose counts change; 3. suffixes present only in state 1; 4. suffixes present only in state 2.

63 Suffix component change: the global effect of the change on all suffixes; the contribution of suffixes whose counts change; the contribution of suffixes that appear only in State 1; and the contribution of suffixes that appear only in State 2.

64 Digression on entropy, MDL, and morphology Why using MDL is closely related to measuring the complexity of the space of possible vocabularies You better save this for another day, John – you’ve only got 15 minutes left.

65 Today’s talk (continued) 6. Results 7. Some work in progress: learning syntax to learn about morphology

66 How good? In practice, on a large naturally occurring corpus of a European language: precision and recall are in the low 80% range. Precision: the proportion of predicted cuts that are correct. Recall: the proportion of actual cuts that are predicted.
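
In formulas, with an invented example (the counts are hypothetical, chosen only to show the computation):

\text{precision} = \frac{\#\,\text{correct predicted cuts}}{\#\,\text{predicted cuts}}, \qquad \text{recall} = \frac{\#\,\text{correct predicted cuts}}{\#\,\text{true cuts}} ;

for instance, 830 correct cuts out of 1,000 predicted, against 1,010 true cuts, gives precision 0.83 and recall ≈ 0.82.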

67 These numbers rise to about 98% if we use an artificial corpus containing all of the inflected forms of each word.

68 Real-life challenges include: alumnus; Johnson, Acheson, Adrianople; adenomas; Adirondacks; Abolition; Los Angeles.

69 Today’s talk (continued) 6. Results 7. Some work in progress: learning syntax to learn about morphology

70 Current research projects 1. Allomorphy: Automatic discovery of relationship between stems (lov~love, win~winn) 2. Use of syntax (automatic learning of syntactic categories) 3. Rich morphology: other languages (e.g., Swahili), other sub-languages (e.g., biochemistry sub-language) where the mean # morphemes/word is much higher 4. Ordering of morphemes

71 Allomorphy: automatic discovery of relationships between stems The system currently learns (unfortunately, over-learns) how to delete stem-final letters in order to simplify signatures; e.g., it deletes stem-final -e in English before the suffixes -ing, -ed, -ion (etc.).

72 Automatic learning of syntactic categories Work in progress with Misha Belkin: finding an eigenvector decomposition of a graph that represents word neighbors. “Using eigenvectors of the bigram graph to infer morpheme identity,” with Mikhail Belkin, Proceedings of the Morphology/Phonology Learning Workshop of ACL-02, Association for Computational Linguistics.

73 Disambiguating morphs? Automatic learning of morphology can provide us with a signature associated with a given stem: Signature = alphabetized list of affixes associated with a given stem in a corpus.

74 For example: Signature NULL.ed.ing.s: aid, ask, call, claim, help, kick. Signature NULL.ed.ing: add, assist, attend, consider. Signature NULL.s: achievement, acre, action, administrator, affair.

75 The signature NULL.ed.ing is much more truly a subsignature of NULL.ed.ing.s than NULL.s is, because of the ambiguity of -s (noun plural versus verbal -s).

76 How can we determine whether a given morph (“ed”, “s”) represents more than 1 morpheme? I don’t think that we can do this on the basis of morphological information.

77 Goal: find a way of describing syntactic behavior that depends only on a corpus – that is, a fashion that is language-independent but corpus-dependent, though the global structure induced from two corpora of the same language will be very similar.

78 French (left graph): clusters of feminine singular nouns, plural nouns, and finite verbs.

79 With such a method… We can look at words formed with the “same” suffix, putting words into buckets based on the signature their stem is in: Bucket 1 (NULL.ed.ing.s): aided, asked, called; Bucket 2 (NULL.ed.ing): added, assisted, attended. Q: do the average positions of the buckets form a tight cluster?

80 If the average locations of each bucket of –ed words form a tight cluster, then –ed is not ambiguous. If the average locations of the buckets (from distinct signatures) do not form a tight cluster, the morpheme is not the same across signatures.

81 Method Not a clustering method; neither top-down nor bottom-up. A two-step procedure: 1. Construct a nearest-neighbor graph. 2. Reduce the graph to two dimensions by means of eigenvector decomposition.

82 Nearest neighbors Following a long list of researchers: We begin by assuming that a word W’s distribution can be described by a vector L describing all of its left-hand neighbors and a vector R describing all of its right-hand neighbors.

83 Let V be the size of the corpus’s vocabulary. L_w and R_w are vectors that live in R^V. If the vocabulary is ordered alphabetically, then L_w = (4, 0, 0, 0, …), where the entries count the number of occurrences of each vocabulary word (“a”, “abandoned”, “abatuna”, …) immediately before w.
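
A sketch of how the left-neighbor counts behind L_w can be tallied from a tokenized corpus (a sparse map rather than a dense length-V vector; an illustration, not the original code):

#include <map>
#include <string>
#include <vector>

// For each word w, count how often each vocabulary item occurs
// immediately to its left; this is the sparse form of L_w.
std::map<std::string, std::map<std::string, int>>
leftNeighborCounts(const std::vector<std::string>& tokens) {
    std::map<std::string, std::map<std::string, int>> L;
    for (std::size_t i = 1; i < tokens.size(); ++i)
        ++L[tokens[i]][tokens[i - 1]];
    return L;
}

The right-neighbor vectors R_w are built the same way with the indices reversed.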

84 Similarity of syntactic behavior is modeled as closeness of L-vectors …where “closeness” of 2 vectors is modeled as the angle between them.

85 Construct an undirected graph: its vertices are the words W in V. For each word W: pick the K most similar words (K = 20 or 50), measured by the angle between L-vectors, and add an edge connecting W to each of those words.

86 Canonical matrix representation of a graph: M(i,j) = 1 iff there is an edge connecting w_i and w_j – that is, iff w_i and w_j are similar words as regards how they interact with the word immediately to the left.

87 Where is this matrix M? It’s a point in a space of dimension V(V−1)/2. Not very helpful, really. How can we optimally reduce it to a space of small dimension? Find the eigenvectors of the normalized Laplacian of the graph. See Chung; Malik and Shi; Belkin and Niyogi…

88 A graph and its matrix M The degree of a vertex (= word) is the number of edges adjacent (linked) to it. Notice that this is not fixed across words. The degree of vertex v_i is the sum of the entries of the i-th row.

89 The Laplacian of the graph Let D be the V×V diagonal matrix whose diagonal entry D(i,i) equals the degree of v_i. Then D − M is the Laplacian of the graph; its rows sum to 0.

90 Normalized Laplacian: for each i, divide all entries in the i-th row by √d(i); for each i, divide all entries in the i-th column by √d(i). Result: the diagonal elements are all 1; in general, the (i,j) entry is −1/√(d(i)d(j)) where there is an edge between v_i and v_j, and 0 otherwise.
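
A compact sketch of the computation on a toy adjacency matrix, using the Eigen library (the 4-word graph is invented purely to show the steps):

#include <cmath>
#include <iostream>
#include <Eigen/Dense>

int main() {
    const int V = 4;
    Eigen::MatrixXd M(V, V);            // toy 0/1 adjacency matrix
    M << 0, 1, 1, 0,
         1, 0, 1, 1,
         1, 1, 0, 0,
         0, 1, 0, 0;

    Eigen::VectorXd d = M.rowwise().sum();                     // degrees
    Eigen::MatrixXd L = Eigen::MatrixXd(d.asDiagonal()) - M;   // Laplacian D - M

    Eigen::MatrixXd N(V, V);            // normalized Laplacian
    for (int i = 0; i < V; ++i)
        for (int j = 0; j < V; ++j)
            N(i, j) = L(i, j) / std::sqrt(d(i) * d(j));

    // N is symmetric, so eigenvalues come back sorted in increasing order.
    Eigen::SelfAdjointEigenSolver<Eigen::MatrixXd> es(N);
    std::cout << "eigenvalues:\n" << es.eigenvalues().transpose() << "\n";
    // Columns 1 and 2 (the 2nd and 3rd smallest) give 2-D word coordinates.
    std::cout << "2-D coordinates:\n" << es.eigenvectors().middleCols(1, 2) << "\n";
}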

91 Eigenvector decomposition The eigenvectors form a spectrum, ranked by the value of their eigenvalues. Eigenvalues run from 0 to 2 (the normalized Laplacian is positive semi-definite). The eigenvector with eigenvalue 0 reflects each word’s frequency. But the next smallest gives us a good representation of the words…

92 …in the sense that the values associated with each word show how close the words are in the original graph. We can graph the first two eigenvectors of the Left (or Right) graph: each word is located at the coordinates corresponding to it in the eigenvector(s):

93 Spanish (left graph): clusters of masculine plural nouns, feminine plural nouns, finite verbs, feminine singular nouns, masculine singular nouns, and past participles.

94 German (left graph): clusters of neuter singular nouns, names of places, feminine singular nouns, and numbers/centuries.

95 English (right graph): clusters of prepositions + “to” + “of”, nouns, and modals.

96 English (left graph): clusters of infinitives, “the” + modals, and past-tense verbs.

97 Results of experiment If we define the size of the minimal box that includes all of the vocabulary as being 1 by 1, then we find a small (< 0.10) average distance to the mean for unambiguous suffixes (e.g., English -ed, French -ait) – and only for them.

98 Measure To repeat: we find the “virtual” location of the conflation of all of the stems of a given signature, plus the suffix in question (e.g., NULL.ed.ing_ed); we do this for all signatures containing “ed”; and we compute the average distance to the mean.
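
One way the coherence measure can be read (a sketch; the exact measure in the original work may differ): average each bucket's word locations to get one 2-D point per signature, then report the mean distance of those points from their common mean, in units of the unit bounding box.

#include <cmath>
#include <vector>

struct Point { double x, y; };

// bucketMeans: one averaged location per signature containing the suffix.
double averageDistanceToMean(const std::vector<Point>& bucketMeans) {
    if (bucketMeans.empty()) return 0.0;

    Point m{0.0, 0.0};
    for (const auto& p : bucketMeans) { m.x += p.x; m.y += p.y; }
    m.x /= bucketMeans.size();
    m.y /= bucketMeans.size();

    double total = 0.0;
    for (const auto& p : bucketMeans)
        total += std::hypot(p.x - m.x, p.y - m.y);
    return total / bucketMeans.size();
}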

99 Average distance to the mean, by suffix:

            Left graph   Right graph
  Expect coherence:
  -ed        0.050        0.054
  -ly        0.032        0.100
  's         0.000        0.012
  -al        0.002
  -ate       0.069        0.080
  -ment      0.012        0.009
  -ait       0.068        0.034
  -er        0.055        0.071
  -a         0.023        0.029
  -ant       0.063        0.088
  (average <= 0.10)

  Expect little/no coherence:
  -s         0.265        0.145
  -ing       0.096        0.143
  NULL       0.312        0.192
  -e         0.290        0.130
  (average > 0.10)

100 Rich morphologies A practical challenge for use in data mining and information retrieval in patent applications (de-oxy-ribo-nucle-ic, etc.); Swahili, Hungarian, Turkish, etc.

101 The End

102 Appendices

103 Corpus Pick a large corpus from a language -- 5,000 to 1,000,000 words.

104 [Diagram: corpus → bootstrap heuristic] Feed the corpus into the “bootstrapping” heuristic...

105 [Diagram: corpus → bootstrap heuristic → morphology] Out of which comes a preliminary morphology, which need not be superb.

106 [Diagram: corpus → bootstrap heuristic → morphology → incremental heuristics] Feed it to the incremental heuristics...

107 [Diagram: … → incremental heuristics → modified morphology] Out comes a modified morphology.

108 [Diagram: morphology vs. modified morphology] Is the modification an improvement? Ask MDL--

109 [Diagram: modified morphology replaces the old morphology, which is discarded] If it is an improvement, replace the morphology...

110 [Diagram: modified morphology → incremental heuristics] Send it back to the incremental heuristics again...

111 [Diagram: morphology → incremental heuristics → modified morphology] Continue until there are no improvements to try.

112 Consider the space of all words of length L, built from an alphabet of size b. How many ways are there to build a vocabulary of size N? Call that U(b,L,N). Clearly, U(b,L,N) = (b^L choose N): we choose N distinct words from the b^L possible strings of length L.

113 Compare that operation (choosing a set of N words of length L, alphabet size b) with the operation of choosing a set of T stems (of length t) and a set of F suffixes (of length f), where t + f = L. If we take the complexity of each task to be measured by the log of its size, then we’re asking to compare log U(b,L,N) with log U(b,t,T) + log U(b,f,F).

114 log U(b,L,N) is easy to approximate, however. Remember:
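
One standard approximation for the binomial coefficient (a reconstruction; the slide's own derivation is not reproduced in the transcript):

\log_2 U(b,L,N) \;=\; \log_2 \binom{b^L}{N} \;\approx\; N\,L\,\log_2 b \;-\; \log_2 N! \qquad (N \ll b^L),

where the first term is the number of bits needed to spell out N words of length L, and the second reflects the fact that the order of the list does not matter.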

115 The number of bits needed to list all the words: that is the analysis. The length of all the pointers to all the words: that is the compressed corpus. Thus the log of the number of vocabularies equals the description length of that vocabulary, in the terms we’ve been using.

116 That means that the difference in the sizes of the spaces of possible vocabularies is equal to the difference in description length in the two cases. Hence: the difference between the complexity of the “simplex word” analysis and the complexity of the analyzed-word analysis = log U(b,L,N) – log U(b,t,T) – log U(b,f,F) = the difference in the size of the morphologies plus the difference in the size of the compressed data.

117 But we’ve (over)simplified in this case by ignoring the frequencies inherent in real corpora. What’s of great interest in real life is the fact that some suffixes are used often, others rarely, and similarly for stems.

118 We know something about the distribution of words, but nothing about the distribution of stems and especially suffixes. But suppose we wanted to think about the statistics of vocabulary choice in which words could be selected more than once…

119 We want to select N words of length L, and the same word can be selected more than once. How many ways of doing this are there? A word can occur any number of times, and two selections containing the same words the same number of times are indistinguishable. How many such vocabularies are there, then?

120 The count involves Z(i), the number of words of frequency i (‘Z’ stands for “Zipf”). We don’t know much about the frequencies of suffixes, but Zipf’s law says that the frequency of the r-th most frequent word is roughly proportional to 1/r; hence we can estimate the corresponding count for a morpheme set that obeyed the Zipf distribution.

121 End of digression

