CPSC 503 Computational Linguistics (Winter 2008), Lecture 5. Giuseppe Carenini.

Presentation transcript:


Today 22/9: finish spelling, n-grams, model evaluation.

Spelling: the problem(s), by error type (detection vs. correction):
– Non-word, isolated: detect the non-word; correct it by finding the most likely correct word (funn -> funny, funnel, ...).
– Non-word, in context: find the most likely correct word in this context (trust funn; a lot of funn).
– Real-word, isolated: ?! (nothing to detect in isolation).
– Real-word, in context: detection = is it an impossible (or very unlikely) word in this context? (I want too go their); correction = find the most likely substitute word in this context.

Key transition: up to this point we have mostly been discussing words in isolation. Now we switch to sequences of words, and we worry about assigning probabilities to sequences of words.

Knowledge-Formalisms Map (including probabilistic formalisms)
– State machines (and prob. versions): Finite State Automata, Finite State Transducers, Markov Models
– Rule systems (and prob. versions): e.g., (Prob.) Context-Free Grammars
– Logical formalisms (First-Order Logics); AI planners
– Linguistic levels covered: Morphology, Syntax, Semantics, Pragmatics (Discourse and Dialogue)

Only spelling?
A. Assign a probability to a sentence: part-of-speech tagging, word-sense disambiguation, probabilistic parsing.
B. Predict the next word: speech recognition, handwriting recognition, augmentative communication for the disabled.
Both require probabilities over word sequences, which at first sight seem impossible to estimate.

Impossible to estimate! Assuming 10^4 word types and an average sentence length of 10 words, the sample space is (10^4)^10 = 10^40 possible sentences. For comparison, the Google language model update (22 Sept. 2006) was based on a corpus of 95,119,665,584 sentences. Most sentences will not appear, or will appear only once. Key point in statistical NLP: your corpus should be >> your sample space!

Decompose: apply the chain rule. The chain rule, applied to a word sequence from position 1 to n, rewrites the joint probability as a product of conditional probabilities.
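The chain-rule formula on this slide is an image in the transcript; the standard form it refers to is:

```latex
\begin{align*}
P(w_1^n) &= P(w_1)\, P(w_2 \mid w_1)\, P(w_3 \mid w_1^2) \cdots P(w_n \mid w_1^{n-1})\\
         &= \prod_{k=1}^{n} P(w_k \mid w_1^{k-1})
\end{align*}
```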

Example sequence: "The big red dog barks". P(The big red dog barks) = P(The) * P(big|The) * P(red|The big) * P(dog|The big red) * P(barks|The big red dog). Note: P(The) is better expressed as P(The|<s>), i.e., conditioned on the sentence-start symbol <s>.

Not a satisfying solution: even for small n (e.g., 6) we would need a far too large corpus to estimate P(w_n | w_1 ... w_{n-1}). Markov assumption: the entire prefix history isn't necessary; condition only on the last word(s), giving the unigram, bigram and trigram models below.
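The approximation formulas are also images in the transcript; the standard forms are:

```latex
\begin{align*}
\text{unigram:} &\quad P(w_n \mid w_1^{n-1}) \approx P(w_n)\\
\text{bigram:}  &\quad P(w_n \mid w_1^{n-1}) \approx P(w_n \mid w_{n-1})\\
\text{trigram:} &\quad P(w_n \mid w_1^{n-1}) \approx P(w_n \mid w_{n-2}\, w_{n-1})
\end{align*}
```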

Probability of a sentence with N-grams (chain-rule simplifications):
– unigram: P(w_1^n) ≈ ∏_k P(w_k)
– bigram: P(w_1^n) ≈ ∏_k P(w_k | w_{k-1})
– trigram: P(w_1^n) ≈ ∏_k P(w_k | w_{k-2} w_{k-1})

Bigram: "The big red dog barks". P(The big red dog barks) = P(The|<s>) * ... (complete the product). And the trigram version? Both are worked out below.
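Filling in the product the slide asks for (the trigram line assumes the usual convention of padding the sentence start with <s> symbols):

```latex
\begin{align*}
\text{bigram:}\;\;  P(\text{The big red dog barks}) &\approx
  P(\text{The}\mid\langle s\rangle)\, P(\text{big}\mid\text{The})\, P(\text{red}\mid\text{big})\,
  P(\text{dog}\mid\text{red})\, P(\text{barks}\mid\text{dog})\\
\text{trigram:}\;\; P(\text{The big red dog barks}) &\approx
  P(\text{The}\mid\langle s\rangle\,\langle s\rangle)\, P(\text{big}\mid\langle s\rangle\,\text{The})\,
  P(\text{red}\mid\text{The big})\, P(\text{dog}\mid\text{big red})\, P(\text{barks}\mid\text{red dog})
\end{align*}
```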

Estimates for N-grams (maximum likelihood from counts). Bigram: P(w_n | w_{n-1}) = C(w_{n-1} w_n) / C(w_{n-1}). In general: P(w_n | w_{n-N+1}^{n-1}) = C(w_{n-N+1}^{n-1} w_n) / C(w_{n-N+1}^{n-1}).

Estimates for bigrams. Silly corpus: "The big red dog barks against the big pink dog". Note the distinction between word types and word tokens (counted in the sketch below).
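A minimal Python sketch (not from the slides) that applies the MLE estimate P(w_n | w_{n-1}) = C(w_{n-1} w_n) / C(w_{n-1}) to this silly corpus; it also shows the type/token distinction:

```python
from collections import Counter

corpus = "the big red dog barks against the big pink dog".lower().split()

unigram_counts = Counter(corpus)                  # tokens per word type
bigram_counts = Counter(zip(corpus, corpus[1:]))  # counts of adjacent word pairs

print("word tokens:", len(corpus))                # 10 tokens
print("word types: ", len(unigram_counts))        # 7 types

def bigram_mle(w_prev, w):
    """MLE estimate P(w | w_prev) = C(w_prev, w) / C(w_prev)."""
    return bigram_counts[(w_prev, w)] / unigram_counts[w_prev]

print(bigram_mle("the", "big"))   # 2/2 = 1.0
print(bigram_mle("big", "red"))   # 1/2 = 0.5
```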

Berkeley Restaurant Project (BERP, 1994). Table of bigram counts (shown on slide). Corpus: ~10,000 sentences, 1616 word types. What domain? Dialog? Reviews?

BERP table (shown on slide).

BERP table comparison: counts vs. probabilities (tables shown on slide). Do the probabilities sum to 1?

Some observations: what's being captured here?
– P(want|I) = .32
– P(to|want) = .65
– P(eat|to) = .26
– P(food|Chinese) = .56
– P(lunch|eat) = .055

Some observations: high values for P(I | I), P(want | I) and P(I | food) reflect utterances such as "I I I want", "I want I want to", "The food I want is", exactly what you would expect from a speech-based restaurant consultant!

Generation: choose N-grams according to their probabilities and string them together, e.g., I want -> want to -> to eat -> eat lunch, or I want -> want to -> to eat -> eat Chinese -> Chinese food (see the sampling sketch below).
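A rough sketch of generation by sampling: each next word is drawn from the bigram distribution conditioned on the previous word. The bigram_probs table and the <s>/</s> symbols are illustrative assumptions, not taken from the slides.

```python
import random

# Toy bigram model P(next | prev); the probabilities are made up for illustration.
bigram_probs = {
    "<s>":     {"i": 1.0},
    "i":       {"want": 1.0},
    "want":    {"to": 1.0},
    "to":      {"eat": 1.0},
    "eat":     {"lunch": 0.5, "chinese": 0.5},
    "chinese": {"food": 1.0},
    "lunch":   {"</s>": 1.0},
    "food":    {"</s>": 1.0},
}

def generate(max_len=10):
    """Sample words one at a time, each conditioned on the previous word."""
    prev, words = "<s>", []
    for _ in range(max_len):
        candidates, weights = zip(*bigram_probs[prev].items())
        prev = random.choices(candidates, weights=weights)[0]
        if prev == "</s>":
            break
        words.append(prev)
    return " ".join(words)

print(generate())   # e.g., "i want to eat lunch" or "i want to eat chinese food"
```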

Two problems with applying these N-gram estimates (a product of many estimated probabilities) to real sentences and corpora:

Problem (1): we may need to multiply many very small numbers (underflow!). Easy solution:
– Convert probabilities to logs and add them instead of multiplying.
– To get the real probability (if you need it), go back via the antilog (exponentiate).
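A minimal illustration of the fix (the numbers are made up): the product of many small probabilities underflows to 0.0, while the equivalent sum of log probabilities stays representable.

```python
import math

probs = [1e-5] * 100          # pretend these are 100 bigram probabilities

product = 1.0
for p in probs:
    product *= p              # underflows to 0.0 long before the loop ends

log_prob = sum(math.log(p) for p in probs)   # -100 * log(1e5) ~ -1151.3, no underflow

print(product, log_prob)
# In practice we just compare log-probabilities; exponentiating log_prob here
# would underflow again, which is exactly why we stay in log space.
```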

Problem (2): the probability matrix for N-grams is sparse. How can we assign a probability to a sequence where one of the component N-grams has a value of zero? Solutions:
– Add-one smoothing
– Good-Turing discounting
– Backoff and deleted interpolation

Add-One smoothing: add 1 to every count, so the zero counts become 1. Rationale: if you had seen these "rare" events, chances are you would have seen them only once. For unigrams: P(w) = (C(w) + 1) / (N + V), where N is the number of tokens and V the vocabulary size; equivalently, the seen counts are discounted to make room for the unseen ones.
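A small sketch of the add-one estimate for bigrams, reusing the bigram_counts and unigram_counts Counters from the earlier sketch (illustrative, not the slides' code):

```python
def bigram_addone(w_prev, w, bigram_counts, unigram_counts):
    """Add-one smoothed estimate: (C(w_prev, w) + 1) / (C(w_prev) + V)."""
    V = len(unigram_counts)                   # vocabulary size (word types)
    return (bigram_counts[(w_prev, w)] + 1) / (unigram_counts[w_prev] + V)

# With the toy corpus above (V = 7):
#   seen bigram:   P(big  | the) = (2 + 1) / (2 + 7) ~ 0.33
#   unseen bigram: P(pink | the) = (0 + 1) / (2 + 7) ~ 0.11
```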

Add-One for bigrams: P(w_n | w_{n-1}) = (C(w_{n-1} w_n) + 1) / (C(w_{n-1}) + V), where V is the vocabulary size (smoothed counts table shown on slide).

BERP: original vs. add-one smoothed counts (tables shown on slide).

Problems with add-one smoothing:
– An excessive amount of probability mass is moved to the zero counts.
– When compared empirically with MLE or other smoothing techniques, it performs poorly -> not used in practice.
– Detailed discussion in [Gale and Church 1994].

Better smoothing techniques: Good-Turing discounting (clear example in the textbook). More advanced: backoff and interpolation, which estimate an N-gram using "lower order" N-grams.
– Backoff: rely on the lower-order N-grams when we have zero counts for a higher-order one; e.g., use unigrams to estimate zero-count bigrams.
– Interpolation: mix the probability estimates from all the N-gram estimators; e.g., a weighted interpolation of trigram, bigram and unigram estimates (see the sketch below).
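A minimal sketch of simple linear interpolation. The lambda weights and the Counter-based interface are illustrative assumptions (in practice the weights are tuned on held-out data):

```python
def interpolated_trigram(w1, w2, w3, tri, bi, uni, total_tokens,
                         lambdas=(0.6, 0.3, 0.1)):
    """Weighted mix of trigram, bigram and unigram MLE estimates.

    tri, bi, uni are collections.Counter objects over trigram/bigram tuples and
    unigram strings (missing keys count as 0); lambdas must sum to 1.
    A zero denominator is treated as a zero contribution for that order.
    """
    l3, l2, l1 = lambdas
    p_tri = tri[(w1, w2, w3)] / bi[(w1, w2)] if bi[(w1, w2)] else 0.0
    p_bi  = bi[(w2, w3)] / uni[w2] if uni[w2] else 0.0
    p_uni = uni[w3] / total_tokens
    return l3 * p_tri + l2 * p_bi + l1 * p_uni
```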

N-grams summary: P(sentence) is impossible to estimate directly, since the sample space is much bigger than any realistic corpus, and the chain rule by itself does not help. The Markov assumption shrinks the problem (unigram, bigram, trigram: what is the sample space of each?), but the resulting count matrix is still sparse, hence smoothing techniques. For practical issues, see Sec. 4.8.

You still need a big corpus... and the biggest one is the Web! It is impractical to download every page on the Web to count the N-grams, so we rely on page counts, which are only approximations:
– a page can contain an N-gram multiple times;
– search engines round off their counts.
Such "noise" is tolerable in practice.

Today 22/9: finish spelling, n-grams, model evaluation.

Entropy. Def. 1: a measure of uncertainty. Def. 2: a measure of the information we need to resolve an uncertain situation.
– Let p(x) = P(X = x), where x ∈ X.
– H(p) = H(X) = -∑_{x ∈ X} p(x) log2 p(x)
– It is normally measured in bits.
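A direct translation of the definition into Python (the example distributions are illustrative):

```python
import math

def entropy(p):
    """H(X) = -sum_x p(x) * log2 p(x), in bits; zero-probability outcomes contribute 0."""
    return -sum(px * math.log2(px) for px in p.values() if px > 0)

print(entropy({"heads": 0.5, "tails": 0.5}))        # 1.0 bit (fair coin)
print(entropy({"a": 0.25, "b": 0.25, "c": 0.5}))    # 1.5 bits
```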

Model evaluation: p is the actual distribution and q is our approximation; how different are they? Relative entropy (KL divergence): D(p||q) = ∑_{x ∈ X} p(x) log( p(x) / q(x) ).

Entropy of a word sequence, entropy rate (per-word entropy), and the entropy of a language. Under the assumptions that the language is ergodic and stationary, the entropy can be computed by taking the average log probability of a very long sample (Shannon-McMillan-Breiman theorem). Does natural language satisfy these assumptions?

Cross-entropy: defined between the true probability distribution P and another distribution Q (a model for P): H(P, Q) = -∑_x P(x) log2 Q(x). Between two models Q1 and Q2, the more accurate is the one that assigns the higher probability to the data => lower cross-entropy => lower perplexity. Applied to language: approximate H(P, Q) by the average per-word -log2 Q over a long sample (see the sketch below).
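A sketch of the practical recipe: average the per-word negative log2 probability that the model Q assigns to a test sample, then take perplexity = 2^cross-entropy. The model_prob interface and the uniform toy model are assumptions for illustration, not from the slides.

```python
import math

def cross_entropy(test_words, model_prob):
    """Approximate H(P, Q) as the average -log2 Q(w_i | history) over the test words."""
    total = 0.0
    for i, w in enumerate(test_words):
        total += -math.log2(model_prob(test_words[:i], w))   # Q(w | preceding words)
    return total / len(test_words)

def perplexity(test_words, model_prob):
    """Perplexity = 2 ** cross-entropy; lower is better."""
    return 2 ** cross_entropy(test_words, model_prob)

# Toy example: a uniform model over a 10,000-word vocabulary.
uniform = lambda history, w: 1.0 / 10_000
print(perplexity("the big red dog barks".split(), uniform))   # ~10000.0
```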

Model evaluation in practice:
A: split the corpus into a training set and a testing set.
B: train the model Q on the training set (counting frequencies, smoothing).
C: apply the model to the testing set and compute its cross-entropy / perplexity.

Knowledge-Formalisms Map (including probabilistic formalisms): the same map as at the start of the lecture, relating state machines (Finite State Automata, Finite State Transducers, Markov Models), rule systems (e.g., (Prob.) Context-Free Grammars), logical formalisms (First-Order Logics) and AI planners to morphology, syntax, semantics, and pragmatics (discourse and dialogue).

Next time: Hidden Markov Models (HMMs); Part-of-Speech (POS) tagging.