1 Syntactic category acquisition

2 Early words (Clark 2003)
Ages 1;0 to 1;6: daddy, mommy, bye, dog, hi, uh oh, baby, ball, no, eye, nose, banana, juice, shoe, kitty, bird, duck, car, book, balloon, bottle, night-night, woof, moo, ouch, baa baa, yum yum, apple, cheese, ear, cracker, keys, bath, peekaboo, vroom, up, down, that, grandpa, grandma, sock, hat, cat, fish, truck, boat, thank you, cup, spoon, back

3
people: daddy, mommy, baby
animals: dog, kitty, bird, duck
body parts: eye, nose, ear
food: banana, juice, apple, cheese
toys: ball, balloon, book
clothes: shoe, sock, hat
vehicles: car, truck, boat
household items: bottle, keys, bath, spoon
routines: bye, hi, uh oh, night-night, thank you, no
activities: up, down, back
sound imitation: woof, moo, ouch, baa baa, yum yum
deictics: that

4 How do children learn syntactic categories such as nouns, verbs, and prepositions?

5 The meaning of syntactic categories
Nouns typically denote objects, persons, and animals (nouns are non-relational and atemporal; Langacker).
Verbs typically denote events and states (verbs are relational and temporal; Langacker).

6 Cues for syntactic category acquisition
Semantic cues (Gentner 1982; Pinker 1984)
Pragmatic cues (Bruner 1975)
Phonological cues (Monaghan et al. 2005)
Distributional cues (Redington et al. 1998)

7 Maratsos and Chalkley (1980)
Nouns: the __, X-s
Verbs: will __, X-ing, X-ed
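Maratsos and Chalkley's frame idea can be sketched with a toy matcher: words that occur in the "the __" frame become noun candidates, words after "will" become verb candidates. The mini-corpus below is illustrative only, not data from the original study.

```python
# Toy illustration of distributional frames (Maratsos & Chalkley 1980):
# "the __" collects noun candidates, "will __" collects verb candidates.
corpus = [["the", "dog", "will", "run"],
          ["the", "cat", "will", "sleep"]]

noun_cands, verb_cands = set(), set()
for sent in corpus:
    for i, w in enumerate(sent[:-1]):
        if w == "the":
            noun_cands.add(sent[i + 1])   # word filling "the __"
        if w == "will":
            verb_cands.add(sent[i + 1])   # word filling "will __"

print(noun_cands)  # {'dog', 'cat'}
print(verb_cands)  # {'run', 'sleep'}
```

Real input would of course require many more frames (X-s, X-ing, X-ed) and larger samples before the candidate sets become reliable.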

8 Objections to distributional learning
Syntactic categories are commonly defined in terms of their distribution; thus, it cannot be a surprise that distributional information is informative about syntactic category status. The argument is trivial or even circular.
'Noisy input data': Det Adj __ P N …

9 Objections to distributional learning
The vast number of possible relationships that might be included in a distributional analysis is likely to overwhelm any distributional learning mechanism in a combinatorial explosion. (Pinker 1984)
Distributional learning mechanisms do not search blindly for all possible relationships between linguistic items; the search is focused on specific distributional cues (Redington et al. 1998).

10 Objections to distributional learning
The interesting properties of linguistic categories are abstract, and such abstract properties cannot be detected in the input. (Pinker 1984)
This assumption crucially relies on Pinker's particular view of grammar. If you take a construction grammar perspective, grammar (or syntax) is much more concrete (Redington et al. 1998).

11 Objections to distributional learning
Even if the child is able to determine certain correlations between distributional regularities and syntactic categories, this information is of little use because there are so many different cross-linguistic correlations that the child wouldn't know which ones are relevant in his/her language. (Pinker 1984)
Syntactic categories vary to some extent across languages (i.e. there are no fixed categories). Children recognize any distributional pattern regardless of the particular properties that categories in different languages may have (Redington et al. 1998).

12 Objections to distributional learning
Spurious correlations will occur in the input that will be misguiding. For instance, if the child hears "John eats meat. John eats slowly. The meat is good.", he may erroneously infer that "The slowly is good" is a possible English sentence. (Pinker 1984)
Children do not learn categories from isolated examples (Redington et al. 1998).

13 Redington et al. 1998 – Data
Speech of all adult speakers in the CHILDES database (2.5 million words).
Bigram statistics:
Target words: the 1000 most frequent words in the corpus
Context words: the 150 most frequent words in the corpus
Context size: 2 words preceding + 2 words following the target word:
x the __ of x
in the __ x x
will have __ the x
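The counting procedure just described can be sketched as follows; the toy corpus and the target and context word lists are illustrative stand-ins, not Redington et al.'s actual data.

```python
from collections import Counter

def context_vectors(corpus, targets, contexts, window=2):
    """For each target word, count how often each context word occurs
    within `window` positions before or after it (the +/-2 window
    used by Redington et al.)."""
    counts = {t: Counter() for t in targets}
    for sent in corpus:
        for i, word in enumerate(sent):
            if word not in counts:
                continue
            lo, hi = max(0, i - window), min(len(sent), i + window + 1)
            for j in range(lo, hi):
                if j != i and sent[j] in contexts:
                    counts[word][sent[j]] += 1
    # fixed-order count vector per target word
    return {t: [counts[t][c] for c in contexts] for t in targets}

corpus = [["the", "dog", "of", "the", "king"],
          ["the", "cat", "of", "the", "queen"],
          ["he", "will", "run", "in", "the", "park"]]
targets = ["dog", "cat", "run"]
contexts = ["the", "of", "will", "in"]
print(context_vectors(corpus, targets, contexts))
# {'dog': [2, 1, 0, 0], 'cat': [2, 1, 0, 0], 'run': [1, 0, 1, 1]}
```

Even on this tiny sample, the two noun-like words end up with identical vectors, distinct from the verb-like one.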

14 Bigram statistics

                          Target w. 1  Target w. 2  Target w. 3  Target w. 4  Etc.
Context w. 1 (the __ of)      210          376            0            1
Context w. 2 (at the __ is)   321          917            1            4
Context w. 3 (has __ him)       2            1         1078          987
Context w. 4 (He __ in)         0            5         1298         1398

Context vectors (one per target word):
Target word 1: 210-321-2-0
Target word 2: 376-917-1-5
Target word 3: 0-1-1078-1298
Target word 4: 1-4-987-1398

15 Statistical analysis
Hierarchical cluster analysis over context vectors: dendrogram
Treatment of polysemous words
'Slicing' of the dendrogram
Comparison of the clusters of the dendrogram to a 'benchmark' (Collins COBUILD lexical dictionary)

16 Hierarchical cluster analysis
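The clustering step can be sketched with a minimal average-linkage procedure. The four vectors are the toy counts from the bigram table above, the word labels are hypothetical, and plain Euclidean distance stands in for the similarity measure of the original study.

```python
import math

def euclidean(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def cluster(vectors, labels, k):
    """Repeatedly merge the two closest clusters (average linkage)
    until only k clusters remain; a full analysis would instead keep
    the whole merge history as a dendrogram and 'slice' it."""
    groups = [[i] for i in range(len(vectors))]
    while len(groups) > k:
        best = None
        for a in range(len(groups)):
            for b in range(a + 1, len(groups)):
                d = sum(euclidean(vectors[i], vectors[j])
                        for i in groups[a] for j in groups[b])
                d /= len(groups[a]) * len(groups[b])  # average linkage
                if best is None or d < best[0]:
                    best = (d, a, b)
        _, a, b = best
        groups[a] += groups.pop(b)
    return [sorted(labels[i] for i in g) for g in groups]

labels = ["dog", "cat", "run", "jump"]          # hypothetical labels
vectors = [[210, 321, 2, 0], [376, 917, 1, 5],
           [0, 1, 1078, 1298], [1, 4, 987, 1398]]
print(cluster(vectors, labels, 2))
# [['cat', 'dog'], ['jump', 'run']]
```

Slicing at two clusters cleanly separates the noun-like vectors from the verb-like ones.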

17 Exp 1: Context size
Result: Local contexts have the strongest effect; the word immediately preceding the target word is especially important.
"Learners might be innately biased towards considering only these local contexts, whether as a result of limited processing abilities (e.g. Elman 1993) or as a result of language specific representational bias." (Redington et al. 1998)

18 Exp 2: Number of target words
Result: Distributional learning is most efficient for high-frequency open class words.
[Graph: level of accuracy vs. number of target words]

19 Exp 3: Category type
Result: nouns < verbs < function words
"Although content words are typically much less frequent, their context is relatively predictable … Because there are many more content words, the context of function words will be relatively amorphous." (Redington et al. 1998)

20 Exp 4: Corpus size
[Graph: level of accuracy vs. number of words]

21 Exp 5: Utterance boundaries
Result: Including information about utterance boundaries did not improve the level of accuracy.

22 Exp 6: Frequency vs. occurrence
'Frequency vectors' were replaced by 'occurrence vectors':

Frequency vector      Occurrence vector
27-0-12-0-0-12-2      1-0-1-0-0-1-1
0-213-2-1-45-3-0      0-1-1-1-1-1-0

Result: The cluster analysis still revealed significant clusters, but performance was much better when frequency information was included.
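The conversion tested in Exp 6 amounts to replacing each count with a present/absent flag, which can be sketched in one line; the example vectors are the two from the slide.

```python
# Exp 6: reduce a 'frequency vector' to an 'occurrence vector' by
# keeping only whether each context occurred at all.
def to_occurrence(freq_vector):
    return [1 if f > 0 else 0 for f in freq_vector]

print(to_occurrence([27, 0, 12, 0, 0, 12, 2]))  # [1, 0, 1, 0, 0, 1, 1]
print(to_occurrence([0, 213, 2, 1, 45, 3, 0]))  # [0, 1, 1, 1, 1, 1, 0]
```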

23 Exp 7: Removing function words
Early child language includes very few function words. Redington et al. therefore removed all function words from the context and repeated the cluster analysis without them.
Result: Accuracy decreased, but the results were still significant.

24 Exp 8: Knowledge of word classes
The cluster analyses were performed over the distribution of individual items. It is conceivable that the child recognizes at some point discrete syntactic categories (cf. semantic bootstrapping), which may facilitate the categorization task.
Result: Representing particular word classes through discrete category labels (e.g. N) does not improve the categorization of other categories (e.g. V).

25 Mintz et al. 2002, Cognitive Science
(1) The man [in the yellow car] …
(2) She [has not yet been] to NY.
1. Information about phrasal boundaries improves performance.
2. Local contexts have the strongest effect (cf. Redington et al. 1998).
3. The results for Ns are better than the results for Vs (cf. Redington et al. 1998).

26 Monaghan et al. 2005, Cognition
Categories: (1) nouns vs. verbs; (2) open class vs. closed class
Cues: 1. distributional information; 2. phonological information

27 Phonological features of syntactic categories
1. Length: Open class words are longer than closed class words.
2. Stress: Closed class words usually do not carry stress.
3. Stress: Nouns tend to be trochaic more often than verbs (i.e. verbs are often iambic).
4. Consonants: Closed class words have fewer consonant clusters.
5. Reduced vowels: Closed class words include a higher proportion of reduced vowels than open class words.

28 Phonological features of syntactic categories
1. Interdentals: Closed class words are more likely to begin with an interdental fricative than open class words.
2. Nasals: Nouns are more likely than verbs to include nasals.
3. Final voicing: Nouns are more likely than verbs to end in a voiced consonant.
4. Vowel position: Nouns tend to include more back vowels than verbs.
5. Vowel height: The vowels of verbs tend to be higher than the vowels of nouns.
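A few of the cues listed above can be sketched as simple features read off a word's spelling. These orthographic heuristics (letter classes, letter runs as "clusters") are toy approximations of my own; real work of this kind operates on phonetic transcriptions, which is also what cues like stress and vowel reduction would require.

```python
import re

VOWEL_LETTERS = "aeiou"

def features(word):
    """Toy orthographic stand-ins for some phonological cues:
    word length, presence of a nasal, a (roughly) voiced final
    consonant, and runs of consonant letters as 'clusters'."""
    return {
        "length": len(word),
        "has_nasal": any(c in "mn" for c in word),
        "final_voiced": word[-1] in "bdgvzmnlr",  # rough voiced codas
        "clusters": len(re.findall(f"[^{VOWEL_LETTERS}]{{2,}}", word)),
    }

print(features("banana"))
# {'length': 6, 'has_nasal': True, 'final_voiced': False, 'clusters': 0}
print(features("the"))
# {'length': 3, 'has_nasal': False, 'final_voiced': False, 'clusters': 1}
```

Even these crude features separate a typical open class word from a typical closed class word on length and nasality.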

29 Results
Phonological features do not just reinforce distributional information; they seem to be especially powerful in domains in which distributional information is not so easily available.
1. Distributional information is especially useful for the categorization of high-frequency open class words.
2. Phonological information is more useful for the categorization of low-frequency open class words (Zipf 1935).
3. Phonological information is also useful for the distinction between open and closed class words.

