Slide 1: I256: Applied Natural Language Processing. Marti Hearst, Sept 18, 2006

Slide 2: Why do puns make us groan?
He drove his expensive car into a tree and found out how the Mercedes bends.
Isn't the Grand Canyon just gorges?

Slide 3: Why do puns make us groan?
Time flies like an arrow. Fruit flies like a banana.

Slide 4: Predicting Next Words
One reason puns make us groan is that they play on our assumptions about what the next word will be. They also exploit:
– homonymy: same sound, different spelling and meaning (bends, Benz; gorges, gorgeous)
– polysemy: same spelling, different meaning

Slide 5: Review: the ConditionalFreqDist() Data Structure
A CFD is a collection of FreqDist() objects, indexed by the "condition" being tested or compared.
Initialize a new one:
    cfd = ConditionalFreqDist()
Add a count:
    cfd['austen-emma'].inc('she')
    cfd['austen-pride'].inc('she')
    cfd['austen-pride'].inc('he')
Access each FreqDist object by indexing on its condition:
    cfd['austen-emma'].samples()   # ['she']
    cfd['austen-pride'].N()        # 2
Get the list of conditions from the cfd object:
    cfd.conditions()               # ['austen-emma', 'austen-pride']

Slide 6: Computing Next Words

Slide 7: Computing Bigrams by Storing Adjacent Word Counts
    cfd = ConditionalFreqDist()
    prev = None
    for word in sentence.split(" "):
        cfd[prev].inc(word.lower())
        prev = word.lower()
Trace for sentence = "The dog ate the crab":
    prev = None,  word = "the"
    prev = "the", word = "dog"
    prev = "dog", word = "ate"
    prev = "ate", word = "the"
    prev = "the", word = "crab"
    cfd['the'].samples()   # ['dog', 'crab']
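The slide uses the 2006 nltk_lite API, where counts are added with .inc(). For reference, a minimal runnable sketch of the same bigram counting with the modern NLTK API (assuming nltk is installed; FreqDist is a Counter subclass, so counts are added with +=):

```python
from nltk.probability import ConditionalFreqDist

sentence = "The dog ate the crab"
cfd = ConditionalFreqDist()
prev = None
for word in sentence.split(" "):
    cfd[prev][word.lower()] += 1   # count: word follows prev
    prev = word.lower()

print(list(cfd["the"]))    # ['dog', 'crab']
print(cfd["the"]["dog"])   # 1
```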

Slide 8: Auto-generate a Story
How to fix this? Use a random number generator.

Slide 9: Auto-generate a Story
The choice() function (from random import *) picks one item at random from a list.
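The slide's generation code did not survive the transcript. A minimal sketch of the likely idea, assuming the story is produced by repeatedly choosing a random observed successor of the previous word (the toy training text is made up for illustration):

```python
from random import choice
from nltk.probability import ConditionalFreqDist

# Build bigram counts from a toy text (illustrative only).
text = "the dog ate the crab and the dog ran to the sea"
cfd = ConditionalFreqDist()
prev = None
for word in text.split():
    cfd[prev][word] += 1
    prev = word

# Generate: start from a word observed at sentence start (after None),
# then repeatedly pick a random observed next word.
word = choice(list(cfd[None]))
story = [word]
for _ in range(10):
    successors = list(cfd[word])
    if not successors:   # dead end: this word was never followed by anything
        break
    word = choice(successors)
    story.append(word)
print(" ".join(story))
```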

Slide 10: Applications (adapted from a slide by Bonnie Dorr)
Why do we want to predict a word, given some preceding words?
– To rank the likelihood of sequences containing various alternative hypotheses, e.g., for speech recognition: "Theatre owners say popcorn/unicorn sales have doubled..."
– To assess the likelihood/goodness of a sentence, for text generation or machine translation: "The doctor recommended a cat scan." / "El doctor recomendó una exploración del gato."

Slide 11: Python Tip: Lists can build Lists
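The body of this slide is not in the transcript; judging by the title, it most likely covered list comprehensions (an assumption). A minimal sketch of that idea:

```python
# A list comprehension builds a new list from an existing one
# in a single expression, instead of an explicit loop.
words = ["The", "dog", "ate", "the", "crab"]
lowered = [w.lower() for w in words]           # ['the', 'dog', 'ate', 'the', 'crab']
long_words = [w for w in words if len(w) > 3]  # ['crab']
```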

Slide 12: Bigram Counts
How can we compactly get, from a CFD, the counts for all the bigrams starting with a given word? How can we include the words themselves along with their counts? (One answer is sketched below.)
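A possible answer, sketched with the modern NLTK API (the condition "the" is just an example):

```python
from nltk.probability import ConditionalFreqDist

# Rebuild the bigram counts from slide 7.
cfd = ConditionalFreqDist()
prev = None
for word in "The dog ate the crab".split():
    cfd[prev][word.lower()] += 1
    prev = word.lower()

# Words and counts together, for all bigrams starting with "the":
print([(w, cfd["the"][w]) for w in cfd["the"]])   # [('dog', 1), ('crab', 1)]
# FreqDist also has a built-in helper that sorts by frequency:
print(cfd["the"].most_common())
```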

Slide 13: Comparing Modal Verb Counts
How would you implement this? Which modals best characterize each genre? The hint on the slide did not survive the transcript; one possible implementation is sketched below.
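A plausible implementation with the modern NLTK API (assumes the Brown corpus is available via nltk.download('brown'); the particular modal list is an assumption, the one commonly used with this exercise):

```python
from nltk.corpus import brown
from nltk.probability import ConditionalFreqDist

modals = ["can", "could", "may", "might", "must", "will"]

# Count each modal's occurrences per Brown genre.
cfd = ConditionalFreqDist(
    (genre, word.lower())
    for genre in brown.categories()
    for word in brown.words(categories=genre)
    if word.lower() in modals
)
for genre in cfd.conditions():
    print(genre, [(m, cfd[genre][m]) for m in modals])
```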

Slide 14: Comparing Modals

Slide 15: Comparing Modals

Slide 16: Part-of-Speech Tagging

Slide 17: Terminology (modified from Diane Litman's version of Steve Bird's notes)
– Tagging: the process of associating labels with each token in a text
– Tags: the labels
– Tag set: the collection of tags used for a particular task

Slide 18: Example from the GENIA corpus
Typically a tagged text is a sequence of whitespace-separated base/tag tokens:
These/DT findings/NNS should/MD be/VB useful/JJ for/IN therapeutic/JJ strategies/NNS and/CC the/DT development/NN of/IN immunosuppressants/NNS targeting/VBG the/DT CD28/NN costimulatory/NN pathway/NN ./.
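This base/tag format can be split back into (word, tag) pairs. A minimal sketch with the modern NLTK helper (the nltk_lite version of the course called this tag.tag2tuple()):

```python
from nltk.tag import str2tuple

tagged = "These/DT findings/NNS should/MD be/VB useful/JJ"
pairs = [str2tuple(t) for t in tagged.split()]
# [('These', 'DT'), ('findings', 'NNS'), ('should', 'MD'), ...]
```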

Slide 19: What Does Tagging Do? (modified from Diane Litman's version of Steve Bird's notes)
1. Collapses distinctions: lexical identity may be discarded, e.g., all personal pronouns tagged PRP
2. Introduces distinctions: ambiguities may be resolved, e.g., "deal" tagged NN or VB
3. Helps in classification and prediction

Slide 20: Significance of Parts of Speech (modified from Diane Litman's version of Steve Bird's notes)
A word's POS tells us a lot about the word and its neighbors:
– Limits the range of meanings (deal), pronunciation (OBject vs. obJECT), or both (wind)
– Helps in stemming
– Limits the range of following words
– Can help select nouns from a document for summarization
– Basis for partial parsing (chunked parsing)
– Parsers can build trees directly on the POS tags instead of maintaining a lexicon

Slide 21: Choosing a Tagset (modified from Massimo Poesio's slide)
The choice of tagset greatly affects the difficulty of the problem. We need to strike a balance between:
– getting better information about context (more tags)
– keeping it possible for classifiers to do their job (fewer tags)

Slide 22: Some of the Best-Known Tagsets (modified from Massimo Poesio's slide)
– Brown corpus: 87 tags (more when tags are combined)
– Penn Treebank: 45 tags
– Lancaster UCREL C5 (used to tag the BNC): 61 tags
– Lancaster C7: 145 tags

Slide 23: The Brown Corpus (modified from Diane Litman's version of Steve Bird's notes)
– An early digital corpus (1961), built by Francis and Kučera at Brown University
– Contents: 500 texts, each 2000 words long, drawn from American books, newspapers, and magazines
– Represents genres such as science fiction, romance fiction, press reportage, scientific writing, and popular lore

Slide 24: Penn Treebank (modified from Diane Litman's version of Steve Bird's notes)
– The first large syntactically annotated corpus
– 1 million words from the Wall Street Journal
– Part-of-speech tags and syntax trees

Slide 25: What are the most frequent Brown tags?
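The slide's answer is not in the transcript; one way to compute it with the modern NLTK API (assumes the Brown corpus is downloaded):

```python
from nltk.corpus import brown
from nltk.probability import FreqDist

# Count how often each tag appears across the whole corpus.
tag_fd = FreqDist(tag for (word, tag) in brown.tagged_words())
print(tag_fd.most_common(10))   # e.g., NN and IN lead the list
```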

Slide 26: How Hard Is POS Tagging? (modified from Massimo Poesio's slide)
In the Brown corpus:

    Number of tags:        1      2     3    4   5   6   7
    Number of word types:  35340  3760  264  61  12  2   1

12% of word types are ambiguous; 40% of word tokens are ambiguous.

Slide 27: Tagging Methods
– Hand-coded rules
– Statistical taggers
– The Brill (transformation-based) tagger

Slide 28: The nltk_lite tag Package
Types of taggers: tag.Default(), tag.Regexp(), tag.Lookup(), tag.Affix(), tag.Unigram(), tag.Bigram(), tag.Trigram()
Actions: tag.tag(), tag.tagsents(), tag.untag(), tag.train(), tag.accuracy(), tag.tag2tuple(), tag.string2words(), tag.string2tags()
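For a feel of the simplest of these, a sketch using the modern NLTK equivalent of tag.Default() (the nltk_lite names above differ from today's API):

```python
from nltk.tag import DefaultTagger

# Tag every token with the single most common tag, NN: a crude baseline.
tagger = DefaultTagger("NN")
print(tagger.tag(["The", "dog", "ate", "the", "crab"]))
# [('The', 'NN'), ('dog', 'NN'), ('ate', 'NN'), ('the', 'NN'), ('crab', 'NN')]
```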

Slide 29: Hand-Coded Tagger
Make up some regexp rules that make use of morphology.
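A sketch of such rules with the modern NLTK regexp tagger (nltk_lite's tag.Regexp() plays the same role); the particular patterns are illustrative guesses, not the ones from the lecture:

```python
from nltk.tag import RegexpTagger

# Rules are tried in order; the first matching pattern wins.
patterns = [
    (r".*ing$", "VBG"),                 # gerunds: running, targeting
    (r".*ed$", "VBD"),                  # simple past: walked
    (r".*es$", "VBZ"),                  # 3rd-person singular present: goes
    (r".*s$", "NNS"),                   # plural nouns: dogs
    (r"^-?[0-9]+(\.[0-9]+)?$", "CD"),   # cardinal numbers
    (r".*", "NN"),                      # fall back to singular noun
]
tagger = RegexpTagger(patterns)
print(tagger.tag("the dogs were running".split()))
# [('the', 'NN'), ('dogs', 'NNS'), ('were', 'NN'), ('running', 'VBG')]
```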

Slide 30: Compare to Brown Tags

Slide 31: Tagging with Lexical Frequencies (modified from Massimo Poesio's lecture)
Secretariat/NNP is/VBZ expected/VBN to/TO race/VB tomorrow/NN
People/NNS continue/VBP to/TO inquire/VB the/DT reason/NN for/IN the/DT race/NN for/IN outer/JJ space/NN
Problem: assign a tag to "race" given its lexical frequency.
Solution: choose the tag with the greater probability, P(race|VB) or P(race|NN).
Actual estimates from the Switchboard corpus:
    P(race|NN) = .00041
    P(race|VB) = .00003
So a purely lexical tagger always chooses NN for "race": right in the second sentence, wrong in the first.
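A lexical-frequency tagger of exactly this kind is the modern NLTK UnigramTagger, which assigns each word its most frequent tag from the training data. A sketch (training on all of Brown is slow but simple; words unseen in training come back tagged None):

```python
from nltk.corpus import brown
from nltk.tag import UnigramTagger

# Learn each word's most frequent tag from the Brown corpus.
tagger = UnigramTagger(brown.tagged_sents())
print(tagger.tag("Secretariat is expected to race tomorrow".split()))
# "race" gets its overall most frequent tag (the noun reading),
# even where the context calls for a verb -- the weakness of
# purely lexical tagging.
```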

Slide 32: Next Time
– N-gram taggers
– Training, testing, and determining accuracy
– The Brill tagger

