Statistical Methods for NLP

Presentation transcript:

Statistical Methods for NLP Diana Trandabăț 2016-2017

Noisy Channel Model Statistical MT views translation as a noisy channel: a sent message in the source language (f) reaches us as a "broken" message, and we recover the received message in the target language (e). Two components are learned by statistical analysis of corpora: the Language Model P(e), which takes care of fluency in the target language (data: monolingual corpora in the target language), and the Translation Model P(f|e), which captures lexically faithful correspondence between the languages (data: aligned corpora in source and target languages). The decoder then searches for the target sentence with the highest score (argmax). Example: "Ce foame am" yields candidates such as "What hunger have I", "Hungry am I so", "I am so hungry", "Have I that hunger", ... and the best output is "I am so hungry".
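The decoder's argmax search mentioned on this slide is the standard noisy channel decoding equation; written out (a standard formulation, applying Bayes' rule and dropping the constant P(f)):

    \hat{e} = \arg\max_{e} P(e \mid f)
            = \arg\max_{e} \frac{P(f \mid e)\, P(e)}{P(f)}
            = \arg\max_{e} P(f \mid e)\, P(e)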

Uses of Language Models Speech recognition: “I ate a cherry” is a more likely sentence than “Eye eight uh Jerry”. OCR and handwriting recognition: more probable sentences are more likely to be correct readings. Machine translation: more likely sentences are probably better translations. Generation: more likely sentences are probably better NL generations. Context-sensitive spelling correction: “Their are problems wit this sentence.”

Completion Prediction A language model also supports predicting the completion of a sentence. Please turn off your cell _____ Your program does not ______ Predictive text input systems can guess what you are typing and give choices on how to complete it.

Letter-based Language Models Shannon’s Game. Guess the next letter: W... Wh... Wha... What... What d... What do... What do you think the next letter is? Guess the next word: What... What do... What do you... What do you think... What do you think the... What do you think the next... What do you think the next word is?

Real Word Spelling Errors They are leaving in about fifteen minuets to go to her house. The study was conducted mainly be John Black. Hopefully, all with continue smoothly in my absence. Can they lave him my messages? I need to notified the bank of…. He is trying to fine out.

Corpora Corpora are online collections of text and speech, e.g. the Brown Corpus, the Wall Street Journal, AP newswire, Hansards, TIMIT, DARPA/NIST text/speech corpora (Call Home, Call Friend, ATIS, Switchboard, Broadcast News, Broadcast Conversation, TDT, Communicator), TRAINS, and the Boston Radio News Corpus.

Counting Words in Corpora What is a word? e.g., are cat and cats the same word? September and Sept? zero and oh? Is _ a word? What about *, ?, ), ., and ,? How many words are there in don’t? Gonna? In Japanese and Chinese text, how do we identify a word?

Terminology Sentence: unit of written language. Utterance: unit of spoken language. Word form: the inflected form as it actually appears in the corpus. Lemma: an abstract form shared by word forms having the same stem, part of speech, and word sense; it stands for the class of words with the same stem. Types: number of distinct words in a corpus (vocabulary size). Tokens: total number of words.
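To make the types/tokens distinction concrete, here is a minimal Python sketch (the whitespace tokenizer and the sample sentence are illustrative assumptions):

    # Count word tokens (total occurrences) and word types (distinct forms).
    text = "the cat sat on the mat because the cat was tired"
    tokens = text.split()   # naive whitespace tokenization
    types = set(tokens)     # distinct word forms
    print(len(tokens))      # 11 tokens
    print(len(types))       # 8 types: the, cat, sat, on, mat, because, was, tired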

Word-based Language Models A model that enables one to compute the probability, or likelihood, of a sentence S, P(S). Simplest case: every word follows every other word with equal probability (0-gram). Assume |V| is the size of the vocabulary V; the likelihood of a sentence S of length n is then (1/|V|) × (1/|V|) × … × (1/|V|) = (1/|V|)^n. If English has 100,000 words, the probability of each next word is 1/100,000 = .00001.

Word Prediction: Simple vs. Smart Smarter: the probability of each next word is related to its word frequency (unigram). Likelihood of sentence S = P(w1) × P(w2) × … × P(wn); this assumes the probability of each word is independent of the other words. Even smarter: look at the probability given the previous words (N-gram). Likelihood of sentence S = P(w1) × P(w2|w1) × … × P(wn|wn-1); this assumes the probability of each word depends on the preceding word(s).

Chain Rule Conditional Probability P(w1,w2) = P(w1) · P(w2|w1) The Chain Rule generalizes to multiple events P(w1, …,wn) = P(w1) P(w2|w1) P(w3|w1,w2)…P(wn|w1…wn-1) Examples: P(the dog) = P(the) P(dog | the) P(the dog barks) = P(the) P(dog | the) P(barks| the dog)

Relative Frequencies and Conditional Probabilities Relative word frequencies are better than equal probabilities for all words In a corpus with 10K word types, each word would have P(w) = 1/10K Does not match our intuitions that some words are more likely to occur (e.g. the) than others Conditional probability - more useful than individual relative word frequencies dog may be relatively rare in a corpus But if we see barking, P(dog|barking) may be very large

N-Gram Models Estimate probability of each word given prior context. P(phone | Please turn off your cell) Number of parameters required grows exponentially with the number of words of prior context. An N-gram model uses only N-1 words of prior context. Unigram: P(phone) Bigram: P(phone | cell) Trigram: P(phone | your cell) The Markov assumption is the presumption that the future behavior of a dynamical system only depends on its recent history. In particular, in a kth-order Markov model, the next state only depends on the k most recent states; therefore an N-gram model is an (N-1)th-order Markov model.

N-Gram Model Formulas Word sequences: w1 … wn. Chain rule of probability: P(w1 … wn) = P(w1) P(w2|w1) P(w3|w1 w2) … P(wn|w1 … wn-1). Bigram approximation: P(w1 … wn) ≈ ∏k P(wk|wk-1). N-gram approximation: P(w1 … wn) ≈ ∏k P(wk|wk-N+1 … wk-1).

Maximum Likelihood Estimate (MLE) N-gram conditional probabilities can be estimated from raw text based on the relative frequency of word sequences. To have a consistent probabilistic model, append a unique start (<s>) and end (</s>) symbol to every sentence and treat these as additional words. Bigram: P(wn|wn-1) = C(wn-1 wn) / C(wn-1). N-gram: P(wn|wn-N+1 … wn-1) = C(wn-N+1 … wn-1 wn) / C(wn-N+1 … wn-1).
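A minimal sketch of MLE bigram estimation in Python (the toy corpus and helper names are illustrative assumptions, not part of the original slides):

    from collections import Counter

    # Toy corpus; each sentence gets explicit start/end symbols.
    corpus = [["i", "am", "so", "hungry"], ["i", "am", "here"]]
    unigrams, bigrams = Counter(), Counter()
    for sent in corpus:
        words = ["<s>"] + sent + ["</s>"]
        unigrams.update(words)
        bigrams.update(zip(words, words[1:]))

    def p_mle(w, prev):
        # P(w | prev) = C(prev w) / C(prev)
        return bigrams[(prev, w)] / unigrams[prev]

    print(p_mle("am", "i"))   # 2/2 = 1.0
    print(p_mle("so", "am"))  # 1/2 = 0.5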

Simple N-Grams An N-gram model uses the previous N-1 words to predict the next one: P(wn | wn-N+1 wn-N+2… wn-1 ) unigrams: P(dog) bigrams: P(dog | big) trigrams: P(dog | the big) quadrigrams: P(dog | chasing the big)

Using N-Grams Recall the N-gram approximation: P(wn|w1 … wn-1) ≈ P(wn|wn-N+1 … wn-1), and for a bigram: P(w1 … wn) ≈ ∏k P(wk|wk-1). For a bigram grammar, P(sentence) can therefore be approximated by multiplying all the bigram probabilities in the sequence. Example: P(I want to eat Chinese food) = P(I | <start>) P(want | I) P(to | want) P(eat | to) P(Chinese | eat) P(food | Chinese)

A Bigram Grammar Fragment
Eat on      .16     Eat Thai        .03
Eat some    .06     Eat breakfast
Eat lunch           Eat in          .02
Eat dinner  .05     Eat Chinese
Eat at      .04     Eat Mexican
Eat a               Eat tomorrow    .01
Eat Indian          Eat dessert     .007
Eat today           Eat British     .001

Additional Grammar
<start> I     .25     Want some           .04
<start> I'd   .06     Want Thai           .01
<start> Tell          To eat              .26
<start> I'm   .02     To have             .14
I want        .32     To spend            .09
I would       .29     To be
I don't       .08     British food        .60
I have                British restaurant  .15
Want to       .65     British cuisine
Want a        .05     British lunch

Computing Sentence Probability P(I want to eat British food) = P(I|<start>) P(want|I) P(to|want) P(eat|to) P(British|eat) P(food|British) = .25 × .32 × .65 × .26 × .001 × .60 ≈ .0000081, vs. P(I want to eat Chinese food) ≈ .00015. Probabilities seem to capture “syntactic” facts (eat is often followed by an NP) and “world knowledge” (British food is not too popular). N-gram models can be trained by counting and normalizing.
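A small sketch of this computation in Python, using only the bigram probabilities quoted in the grammar fragments above (the dictionary layout and function name are illustrative assumptions):

    # Bigram probabilities taken from the grammar fragments above.
    bigram_p = {
        ("<start>", "I"): 0.25, ("I", "want"): 0.32, ("want", "to"): 0.65,
        ("to", "eat"): 0.26, ("eat", "British"): 0.001, ("British", "food"): 0.60,
    }

    def sentence_prob(words):
        # Multiply P(wk | wk-1) over the sentence, starting from <start>.
        prob = 1.0
        for prev, w in zip(["<start>"] + words, words):
            prob *= bigram_p[(prev, w)]
        return prob

    print(sentence_prob(["I", "want", "to", "eat", "British", "food"]))  # ~8.1e-06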

N-grams Issues Sparse data: not all N-grams are found in the training data, so smoothing is needed. Change of domain: train on the Wall Street Journal, then attempt to model Shakespeare; it won't work. Higher-order N-grams are more reliable than (N-1)-grams, but even more sparse. Generating Shakespeare sentences with random unigrams: "Every enter now severally so, let". With bigrams: "What means, sir. I confess she? then all sorts, he is trim, captain." With trigrams: "Sweet prince, Falstaff shall die."

N-grams Issues To determine reliable sentence probability estimates, a model should have smoothing capabilities (to avoid zero counts) and apply back-off strategies: if an N-gram is not available, back off to (N-1)-grams. P("And nothing but the truth") ≈ 0.001; P("And nuts sing on the roof") ≈ 0.

Zipf’s Law Rank (r): the numerical position of a word in a list sorted by decreasing frequency (f). Zipf (1949) “discovered” that f × r = k, for a roughly constant k. Equivalently, if the probability of the word of rank r is pr and N is the total number of word occurrences, then pr = f/N ≈ A/r for a constant A.

Predicting Occurrence Frequencies By Zipf, a word appearing n times has rank rn = AN/n. If several words occur n times, assume rank rn applies to the last of these; therefore rn words occur n or more times and rn+1 words occur n+1 or more times. So the number of words appearing exactly n times is In = rn - rn+1 = AN/n - AN/(n+1) = AN / (n(n+1)). Since the number of distinct words is approximately r1 = AN, the fraction of words with frequency n is In / (AN) = 1/(n(n+1)). The fraction of words appearing only once is therefore 1/(1·2) = 1/2.
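A quick Python sketch of this prediction (purely illustrative; it just evaluates the formula derived above):

    # Predicted fraction of the vocabulary having frequency exactly n, under Zipf's law.
    def zipf_fraction(n):
        return 1.0 / (n * (n + 1))

    for n in range(1, 6):
        print(n, zipf_fraction(n))
    # n=1 -> 0.5, n=2 -> 0.167, n=3 -> 0.083, n=4 -> 0.05, n=5 -> 0.033

    print(sum(zipf_fraction(n) for n in range(1, 1000)))  # telescopes toward 1.0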

Zipf’s Law Impact on Language Analysis Good news: stopwords account for a large fraction of the tokens in a text, so eliminating them greatly reduces the amount of text that must be processed. Bad news: for most words, gathering sufficient data for meaningful statistical analysis (e.g. correlation analysis for query expansion) is difficult, since they are extremely rare.

Train and Test Corpora A language model must be trained on a large corpus of text to estimate good parameter values. The model can be evaluated based on its ability to predict a high probability for a disjoint (held-out) test corpus (testing on the training corpus would give an optimistically biased estimate). Ideally, the training (and test) corpus should be representative of the actual application data. It may be necessary to adapt a general model to a small amount of new (in-domain) data by adding a highly weighted small corpus to the original training data.

Evaluation of Language Models Ideally, evaluate use of the model in an end application (extrinsic, in vivo): realistic, but expensive. Alternatively, evaluate on the ability to model a test corpus (intrinsic): less realistic, but cheaper. Verify at least once that intrinsic evaluation correlates with an extrinsic one.

Evaluation and Data Sparsity Questions Perplexity and entropy: how do you estimate how well your language model fits a corpus once you’re done? Smoothing and backoff: how do you handle unseen n-grams?

Perplexity and Entropy Information-theoretic metrics, useful in measuring how well a grammar or language model (LM) models a natural language or a corpus. Entropy: with two LMs and a corpus, which LM is the better match for the corpus? How much information is in a grammar or LM about what the next word will be? (More is better.) For a random variable X ranging over, e.g., bigrams, with a probability function p(x), the entropy of X is the expected negative log probability: H(X) = -Σx p(x) log2 p(x).
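A minimal Python sketch of the entropy computation (the toy distributions are illustrative assumptions):

    import math

    def entropy(dist):
        # H(X) = -sum over x of p(x) * log2 p(x), skipping zero-probability events
        return -sum(p * math.log2(p) for p in dist.values() if p > 0)

    uniform = {"a": 0.25, "b": 0.25, "c": 0.25, "d": 0.25}
    skewed  = {"a": 0.70, "b": 0.10, "c": 0.10, "d": 0.10}
    print(entropy(uniform))  # 2.0 bits
    print(entropy(skewed))   # ~1.36 bits: more predictable, so lower entropy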

Perplexity At each choice point in a grammar, what is the average number of choices that can be made, weighted by their probabilities of occurrence? I.e., the weighted average branching factor. How much probability does a grammar or language model (LM) assign to the sentences of a corpus, compared to another LM? The more information, the lower the perplexity.

Perplexity Measure of how well a model “fits” the test data. Uses the probability that the model assigns to the test corpus, normalizes for the number of words N in the test corpus, and takes the inverse: PP(W) = P(w1 … wN)^(-1/N). Measures the weighted average branching factor in predicting the next word (lower is better).

Sample Perplexity Evaluation Models trained on 38 million words from the Wall Street Journal (WSJ) using a 19,979-word vocabulary, evaluated on a disjoint set of 1.5 million WSJ words.
Model       Unigram   Bigram   Trigram
Perplexity  962       170      109
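A small sketch of how test-set perplexity could be computed in Python with a bigram model (the toy model, the function names, and the test sentence are illustrative assumptions):

    import math

    def perplexity(test_words, bigram_prob):
        # PP(W) = P(w1 ... wN)^(-1/N), computed in log space for numerical stability.
        log_prob = 0.0
        prev = "<s>"
        for w in test_words:
            p = bigram_prob(w, prev)
            if p == 0:
                return float("inf")  # an unseen n-gram means infinite perplexity without smoothing
            log_prob += math.log2(p)
            prev = w
        return 2 ** (-log_prob / len(test_words))

    # Example with a hand-specified toy model.
    probs = {("<s>", "i"): 0.5, ("i", "am"): 0.8, ("am", "hungry"): 0.4}
    print(perplexity(["i", "am", "hungry"], lambda w, prev: probs.get((prev, w), 0)))  # ~1.84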

Smoothing Since there are a combinatorial number of possible word sequences, many rare (but not impossible) combinations never occur in training, so zero is incorrectly assigned to many parameters (a.k.a. sparse data). If a new combination occurs during testing, it is given a probability of zero and the entire sequence gets a probability of zero (i.e. infinite perplexity). In practice, parameters are smoothed (a.k.a. regularized) to reassign some probability mass to unseen events. Adding probability mass to unseen events requires removing it from seen ones (discounting) in order to maintain a joint distribution that sums to 1.

Unknown Words How to handle words in the test corpus that did not occur in the training data, i.e. out-of-vocabulary (OOV) words? Train a model that includes an explicit symbol for an unknown word (<UNK>): either choose a vocabulary in advance and replace all other words in the training corpus with <UNK>, or replace the first occurrence of each word in the training data with <UNK>.
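A minimal Python sketch of the first strategy (choosing a vocabulary in advance); the frequency cutoff of 2 used to pick the vocabulary is an illustrative assumption:

    from collections import Counter

    def build_vocab(train_tokens, min_count=2):
        counts = Counter(train_tokens)
        return {w for w, c in counts.items() if c >= min_count}

    def replace_oov(tokens, vocab):
        # Map anything outside the chosen vocabulary to the <UNK> symbol.
        return [w if w in vocab else "<UNK>" for w in tokens]

    train = ["the", "cat", "sat", "the", "cat", "ran", "home"]
    vocab = build_vocab(train)                        # {"the", "cat"}
    print(replace_oov(["the", "dog", "sat"], vocab))  # ['the', '<UNK>', '<UNK>']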

Laplace (Add-One) Smoothing “Hallucinate” additional training data in which each possible N-gram occurs exactly once and adjust estimates accordingly. Bigram: P(wn|wn-1) = (C(wn-1 wn) + 1) / (C(wn-1) + V). N-gram: P(wn|wn-N+1 … wn-1) = (C(wn-N+1 … wn-1 wn) + 1) / (C(wn-N+1 … wn-1) + V), where V is the total number of possible (N-1)-grams (i.e. the vocabulary size for a bigram model). Tends to reassign too much mass to unseen events, so it can be adjusted to add 0 < δ < 1 instead of 1 (normalized by δV instead of V).
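A sketch of add-one smoothing on top of bigram counts, in Python (the toy counts are illustrative assumptions):

    from collections import Counter

    unigrams = Counter({"<s>": 2, "i": 2, "am": 2, "so": 1, "hungry": 1, "here": 1})
    bigrams = Counter({("<s>", "i"): 2, ("i", "am"): 2, ("am", "so"): 1,
                       ("am", "here"): 1, ("so", "hungry"): 1})
    V = len(unigrams)  # vocabulary size used in the add-one denominator

    def p_laplace(w, prev):
        # (C(prev w) + 1) / (C(prev) + V)
        return (bigrams[(prev, w)] + 1) / (unigrams[prev] + V)

    print(p_laplace("am", "i"))      # (2+1)/(2+6) = 0.375, down from the MLE value of 1.0
    print(p_laplace("hungry", "i"))  # (0+1)/(2+6) = 0.125, instead of zero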

Advanced Smoothing Many advanced techniques have been developed to improve smoothing for language models: Good-Turing, interpolation, backoff, Kneser-Ney, and class-based (cluster) N-grams.

Model Combination As N increases, the power (expressiveness) of an N-gram model increases, but the ability to estimate accurate parameters from sparse data decreases (i.e. the smoothing problem gets worse). A general approach is to combine the results of multiple N-gram models of increasing complexity (i.e. increasing N).

Interpolation Linearly combine estimates of N-gram models of increasing order. Interpolated trigram model: P̂(wn|wn-2 wn-1) = λ1 P(wn|wn-2 wn-1) + λ2 P(wn|wn-1) + λ3 P(wn), where λ1 + λ2 + λ3 = 1. Learn proper values for the λi by training to (approximately) maximize the likelihood of an independent development (a.k.a. tuning) corpus.
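A minimal sketch of linear interpolation in Python (the component models and λ values are illustrative assumptions; in practice the λi would be tuned on held-out development data):

    def interpolated_trigram(w, w1, w2, p_uni, p_bi, p_tri, lambdas=(0.6, 0.3, 0.1)):
        # P_hat(w | w2 w1) = l1*P(w | w2 w1) + l2*P(w | w1) + l3*P(w), with the lambdas summing to 1.
        l1, l2, l3 = lambdas
        return l1 * p_tri(w, w2, w1) + l2 * p_bi(w, w1) + l3 * p_uni(w)

    # Toy component models (w1 is the previous word, w2 the one before it).
    p_uni = lambda w: 0.01
    p_bi = lambda w, w1: 0.10
    p_tri = lambda w, w2, w1: 0.0  # trigram unseen
    print(interpolated_trigram("food", "British", "eat", p_uni, p_bi, p_tri))  # 0.031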

Backoff Only use a lower-order model when data for the higher-order model is unavailable (i.e. its count is zero); recursively back off to weaker models until data is available. For a trigram: Pkatz(wn|wn-2 wn-1) = P*(wn|wn-2 wn-1) if C(wn-2 wn-1 wn) > 0, and α(wn-2 wn-1) Pkatz(wn|wn-1) otherwise, where P* is a discounted probability estimate that reserves mass for unseen events and the α's are back-off weights.
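For illustration, here is a much simpler variant in Python ("stupid backoff", which uses a fixed weight instead of properly normalized α's and a discounted P*); the counts and the 0.4 weight are illustrative assumptions:

    from collections import Counter

    trigrams = Counter({("to", "eat", "Chinese"): 3})
    bigrams = Counter({("to", "eat"): 10, ("eat", "Chinese"): 4, ("eat", "British"): 1})
    unigrams = Counter({"eat": 20, "Chinese": 5, "British": 2})

    def stupid_backoff(w, w2, w1, alpha=0.4):
        # Use the trigram estimate if seen; otherwise back off to the bigram, then the unigram.
        if trigrams[(w2, w1, w)] > 0:
            return trigrams[(w2, w1, w)] / bigrams[(w2, w1)]
        if bigrams[(w1, w)] > 0:
            return alpha * bigrams[(w1, w)] / unigrams[w1]
        return alpha * alpha * unigrams[w] / sum(unigrams.values())

    print(stupid_backoff("Chinese", "to", "eat"))  # trigram seen: 3/10 = 0.3
    print(stupid_backoff("British", "to", "eat"))  # backs off: 0.4 * 1/20 = 0.02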

A Problem for N-Grams: Long Distance Dependencies Many times local context does not provide the most useful predictive clues, which instead are provided by long-distance dependencies. Syntactic dependencies “The man next to the large oak tree near the grocery store on the corner is tall.” “The men next to the large oak tree near the grocery store on the corner are tall.” Semantic dependencies “The bird next to the large oak tree near the grocery store on the corner flies rapidly.” “The man next to the large oak tree near the grocery store on the corner talks rapidly.” More complex models of language are needed to handle such dependencies.

Summary Language models assign a probability that a sentence is a legal string in a language. They are useful as a component of many NLP systems, such as ASR, OCR, and MT. Simple N-gram models are easy to train on unsupervised corpora and can provide useful estimates of sentence likelihood. MLE gives inaccurate parameters for models trained on sparse data. Smoothing techniques adjust parameter estimates to account for unseen (but not impossible) events.

Google NGrams

Great!  Let’s go to practicalities!