Language Modeling Julia Hirschberg CS 4706

Approaches to Language Modeling
Context-Free Grammars
–Use in HTK
Ngram Models

Context-Free Grammars
Defined in formal language theory
–Terminals: e.g. cat
–Non-terminal symbols: e.g. NP, VP
–Start symbol: e.g. S
–Rewriting rules: e.g. S → NP VP
Start with the start symbol, rewrite using the rules; done when only terminals are left

A Fragment of English
S → NP VP
VP → V PP
NP → DetP N
N → cat | mat
V → is
PP → Prep NP
Prep → on
DetP → the
Input: the cat is on the mat

Derivations in a CFG
Grammar (as above): S → NP VP, VP → V PP, NP → DetP N, N → cat | mat, V → is, PP → Prep NP, Prep → on, DetP → the
Each slide shows one rewriting step together with the partial parse tree built so far:
S
⇒ NP VP
⇒ DetP N VP
⇒ the cat VP
⇒ the cat V PP
⇒ the cat is Prep NP
⇒ the cat is on DetP N
⇒ the cat is on the mat
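To make the rewriting process concrete, here is a minimal Python sketch (my own, not from the slides) that repeatedly rewrites the leftmost non-terminal with the fragment's rules; the names RULES, N_CHOICES and is_nonterminal are illustrative, and the rule choices are fixed so the derivation reproduces "the cat is on the mat".

```python
# Toy leftmost derivation with the CFG fragment from the slides.
RULES = {
    "S": ["NP", "VP"],
    "VP": ["V", "PP"],
    "NP": ["DetP", "N"],
    "PP": ["Prep", "NP"],
    "DetP": ["the"],
    "V": ["is"],
    "Prep": ["on"],
}
# N is ambiguous (cat | mat); expand it differently on each use.
N_CHOICES = iter([["cat"], ["mat"]])

def is_nonterminal(symbol):
    return symbol[0].isupper()

sentential_form = ["S"]
print(" ".join(sentential_form))
while any(is_nonterminal(s) for s in sentential_form):
    # find the leftmost non-terminal and rewrite it with its rule
    i = next(i for i, s in enumerate(sentential_form) if is_nonterminal(s))
    expansion = next(N_CHOICES) if sentential_form[i] == "N" else RULES[sentential_form[i]]
    sentential_form = sentential_form[:i] + expansion + sentential_form[i + 1:]
    print("=>", " ".join(sentential_form))
```

Running it prints every single-step rewrite, ending in the all-terminal string "the cat is on the mat", which is the same derivation the slides build one tree layer at a time.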

A More Complicated Fragment of English
S → NP VP
S → VP
VP → V PP
VP → V NP
VP → V
NP → DetP NP
NP → N NP
NP → N
PP → Prep NP
N → cat | mat | food | bowl | Mary
V → is | likes | sits
Prep → on | in | under
DetP → the | a
Input: Mary likes the cat bowl.

Using CFGs in Simple ASR Applications
LHS of rules is a semantic category:
–LIST -> show me | I want | can I see | …
–DEPARTTIME -> (after|around|before) HOUR | morning | afternoon | evening
–HOUR -> one|two|three…|twelve (am|pm)
–FLIGHTS -> (a) flight | flights
–ORIGIN -> from CITY
–DESTINATION -> to CITY
–CITY -> Boston | San Francisco | Denver | Washington

HTK Grammar Format
Variables start with $ (e.g., $city)
Terminals must be in capital letters (e.g., FRIDAY, TICKET)
X Y is concatenation (e.g., I WANT)
(X | Y) means X or Y (e.g., (WANT | NEED))
[X] means X is optional (e.g., [ON] FRIDAY)
Kleene closure: {X} means zero or more repetitions of X

Examples
$city = BOSTON | NEWYORK | WASHINGTON | BALTIMORE;
$time = MORNING | EVENING;
$day = FRIDAY | MONDAY;
(SENT-START ((WHAT TRAINS LEAVE) | (WHAT TIME CAN I TRAVEL) | (IS THERE A TRAIN)) (FROM | TO) $city (FROM | TO) $city ON $day [$time] SENT-END)
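As a rough illustration of what such a grammar licenses, here is a small Python sketch (my own code, not HTK's file format or tools) that enumerates sentences from a hand-translated version of the example; the variable names mirror the grammar, but the representation is just Python lists.

```python
import itertools

# Hand-translated version of the example grammar (illustration only, not HTK syntax).
city = ["BOSTON", "NEWYORK", "WASHINGTON", "BALTIMORE"]
time = ["MORNING", "EVENING"]
day = ["FRIDAY", "MONDAY"]
opening = ["WHAT TRAINS LEAVE", "WHAT TIME CAN I TRAVEL", "IS THERE A TRAIN"]
direction = ["FROM", "TO"]

count = 0
for o, d1, c1, d2, c2, dy, t in itertools.product(
        opening, direction, city, direction, city, day, [None] + time):
    sentence = f"{o} {d1} {c1} {d2} {c2} ON {dy}" + (f" {t}" if t else "")
    count += 1
    if count <= 3:                      # print just a few examples
        print(sentence)
print("total sentences licensed:", count)
```

Even this tiny grammar covers over a thousand distinct sentences, which is exactly why the format is attractive for small, well-scoped ASR tasks.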

Problems for Larger Vocabulary Applications
CFGs are complicated to build and hard to modify to accommodate new data:
–Add capability to make a reservation
–Add capability to ask for help
–Add ability to understand greetings
–…
Parsing input with large CFGs is slow for real-time applications
So, for large applications we use ngram models

Next Word Prediction The air traffic control supervisor who admitted falling asleep while on duty at Reagan National Airport has been suspended, and the head of the Federal Aviation Administration on Friday ordered new rules to ensure a similar incident doesn't take place. FAA chief Randy Babbitt said he has directed controllers at regional radar facilities to contact the towers of airports where there is only one controller on duty at night before sending planes on for landings. Babbitt also said regional controllers have been told that if no controller can be raised at the airport, they must offer pilots the option of diverting to another airport. Two commercial jets were unable to contact the control tower early Wednesday and had to land without gaining clearance.

Word Prediction
How do we know which words occur together?
–Domain knowledge
–Syntactic knowledge
–Lexical knowledge
Can we model this knowledge computationally?
–Simple statistical techniques do a good job when trained appropriately
–The most common way of constraining ASR predictions to conform to the probabilities of word sequences in the language: Language Modeling via N-grams

N-Gram Models of Language
Use the previous N-1 words in a sequence to predict the next word
Language Model (LM)
–unigrams, bigrams, trigrams, …
How do we train these models to discover co-occurrence probabilities?
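One way to see what "training" means here: count n-gram occurrences in a corpus and turn them into relative frequencies. A minimal sketch (the tiny corpus string and the helper bigram_prob are made up for illustration):

```python
from collections import Counter

corpus = "the mythical unicorn ate the mythical grass".split()   # toy corpus, made up

unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))

# Bigram probability estimate: P(y | x) = count(x, y) / count(x)
def bigram_prob(x, y):
    return bigrams[(x, y)] / unigrams[x]

print(bigram_prob("the", "mythical"))      # 2/2 = 1.0
print(bigram_prob("mythical", "unicorn"))  # 1/2 = 0.5
```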

Finding Corpora
Corpora are online collections of text and speech
–Brown Corpus
–Wall Street Journal, AP newswire, web
–DARPA/NIST text/speech corpora (Call Home, Call Friend, ATIS, Switchboard, Broadcast News, TDT, Communicator)

Tokenization: Counting Words in Corpora
What is a word?
–e.g., are cat and cats the same word? Cat and cat?
–September and Sept?
–zero and oh?
–Is _ a word? * ? ‘(‘ ? Uh ?
–Should we count parts of self-repairs? (go to fr- france)
–How many words are there in don’t? Gonna?
–Any token separated by white space from another?
–In Japanese, Thai, or Chinese text, how do we identify a word?
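The "any token separated by white space" heuristic takes one line of code, but it runs straight into the cases above. The sketch below (my own toy example, using a sentence echoing the earlier stock-report text) contrasts whitespace splitting with a simple regular-expression tokenizer:

```python
import re

text = "Stocks plunged this morning, despite a rally -- don't they always?"

# Naive: any token separated by white space (punctuation stays glued to words)
print(text.split())
# ['Stocks', 'plunged', 'this', 'morning,', ..., "don't", 'they', 'always?']

# A simple regex tokenizer: keep word-internal apostrophes, split punctuation off
print(re.findall(r"\w+(?:'\w+)?|[^\w\s]", text))
# ['Stocks', 'plunged', 'this', 'morning', ',', ..., "don't", 'they', 'always', '?']
```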

Terminology
Sentence: unit of written language (SLU)
Utterance: unit of spoken language (prosodic phrase)
Wordform: the inflected form as it actually appears in the corpus
Lemma: an abstract form shared by wordforms having the same stem, part of speech, and word sense; stands for the class of words with stem X
Types: number of distinct words in a corpus (vocabulary size)
Tokens: total number of words
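A quick sketch of the types vs. tokens distinction on a toy corpus (the sentence and counts below are made up, not from any real corpus):

```python
tokens = "the cat sat on the mat because the mat was warm".split()

print("tokens:", len(tokens))        # 11 total words
print("types:", len(set(tokens)))    # 8 distinct words
```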

Simple Word Probability
Assume a language has T word types and N tokens; how likely is word y to follow word x?
–Simplest model: 1/T
But is every word equally likely?
–Alternative 1: estimate the likelihood of y occurring in new text from its general frequency of occurrence in a corpus (unigram probability): ct(y)/N
But is every word equally likely in every context?
–Alternative 2: condition the likelihood of y on the context of previous words: ct(x,y)/ct(x)

Computing Word Sequence (Sentence) Probabilities
Compute the probability of a word given the preceding sequence
–P(the mythical unicorn…) = P(the | <s>) P(mythical | the) P(unicorn | the mythical) …
Joint probability: P(w_{n-1}, w_n) = P(w_n | w_{n-1}) P(w_{n-1})
–Chain Rule: decompose the joint probability, e.g. P(w_1, w_2, …, w_n) = P(w_1) P(w_2 | w_1) P(w_3 | w_1, w_2) … P(w_n | w_1, …, w_{n-1})
But… the longer the sequence, the less likely we are to find it in a training corpus:
P(Most biologists and folklore specialists believe that in fact the mythical unicorn horns derived from the narwhal)

Bigram Model
Markov assumption: the probability of a word depends only on a limited history
Approximate P(unicorn | the mythical) by P(unicorn | mythical)
Generalization: the probability of a word depends only on the previous n-1 words
–trigrams, 4-grams, 5-grams, …
–the higher n is, the more training data is needed
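Written out in standard n-gram notation, consistent with the chain rule above, the bigram and general N-gram approximations are:

```latex
P(w_1,\dots,w_n) = \prod_{k=1}^{n} P(w_k \mid w_1,\dots,w_{k-1})
\approx \prod_{k=1}^{n} P(w_k \mid w_{k-1})
\quad\text{(bigram / Markov approximation)}
\approx \prod_{k=1}^{n} P(w_k \mid w_{k-N+1},\dots,w_{k-1})
\quad\text{(general $N$-gram approximation)}
```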

From
–P(the mythical unicorn…) = P(the | <s>) P(mythical | the) P(unicorn | the mythical) …
To
–P(the, mythical, unicorn) = P(unicorn | mythical) P(mythical | the) P(the | <s>)

Bigram Counts (fragment)
[Table of bigram counts C(w_{n-1} w_n) over the vocabulary {a, the, unicorn, cat, mythical, honey, eats}: rows are the first word w_{n-1}, columns the second word w_n; the count values were not preserved in the transcript.]

Determining Bigram Probabilities
Normalization: divide each row's counts by the appropriate unigram count for w_{n-1}
Computing the bigram probability of mythical mythical:
–C(mythical, mythical) / C(all mythical-initial bigrams)
–p(mythical | mythical) = 2/35 ≈ .06
This is Maximum Likelihood Estimation (MLE): the relative frequency of the bigram, normalized by the unigram counts (a, mythical, cat, eats, honey in the table)
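In formula form, this is the standard MLE for bigrams, matching the ct(x,y)/ct(x) notation used earlier:

```latex
P_{\mathrm{MLE}}(w_n \mid w_{n-1}) = \frac{C(w_{n-1}\, w_n)}{C(w_{n-1})}
\qquad\text{e.g.}\qquad
P(\text{mythical} \mid \text{mythical}) = \frac{2}{35} \approx 0.06
```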

A Simple Example
P(a mythical cat…) = P(a | <s>) P(mythical | a) P(cat | mythical) … P(</s> | …) = 90/1000 * 5/200 * 8/35 * …
Needed:
–Bigram counts for each of these word pairs (x, y)
–Counts for each unigram (x) to normalize
–P(y|x) = ct(x,y)/ct(x)
Why do we usually represent bigram probabilities as log probabilities?
What do these bigrams intuitively capture?
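On the log-probability question: multiplying many small probabilities underflows to zero in floating point, while summing their logs stays numerically stable. A short sketch using the slide's three probabilities, padded with hypothetical low-probability continuations just to force the underflow:

```python
import math

probs = [90/1000, 5/200, 8/35]        # P(a|<s>), P(mythical|a), P(cat|mythical) from the slide
probs += [1e-5] * 80                  # hypothetical continuations for a long sentence

product = 1.0
for p in probs:
    product *= p                      # direct product underflows to 0.0
log_sum = sum(math.log(p) for p in probs)   # sum of logs stays finite

print(product)    # 0.0 (underflow)
print(log_sum)    # about -929, still a usable score for comparing hypotheses
```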

Training and Testing
N-Gram probabilities come from a training corpus
–overly narrow corpus: probabilities don't generalize
–overly general corpus: probabilities don't reflect task or domain
A separate test corpus is used to evaluate the model, typically using standard metrics
–held-out test set; development (dev) test set
–cross validation
–results tested for statistical significance: how do they differ from a baseline? From other results?

Evaluating Ngram Models: Perplexity
An information-theoretic, intrinsic metric that usually correlates with extrinsic measures (e.g. ASR performance)
At each choice point in a grammar or LM:
–Weighted average branching factor: the average number of choices y that can follow x, weighted by their probabilities of occurrence
–Put differently: if LM(1) assigns more probability to the test-set sentences than LM(2), then LM(1) has the lower perplexity and models the test set better
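Perplexity can be computed directly from the per-word probabilities the model assigns to the test set; a minimal sketch (the bigram probabilities below are placeholder values, not real estimates):

```python
import math

# P(w_i | w_{i-1}) assigned by some bigram LM to each test-set word (placeholder values)
word_probs = [0.09, 0.025, 0.23, 0.01, 0.12]

N = len(word_probs)
log_prob = sum(math.log2(p) for p in word_probs)
perplexity = 2 ** (-log_prob / N)     # PP(W) = P(w_1 .. w_N) ** (-1/N)

print(perplexity)   # about 17: as uncertain as choosing among ~17 equally likely words
```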

Ngram Properties
As we increase the value of N, the accuracy of an ngram model increases – why?
Ngrams are quite sensitive to the corpus they are trained on
A few events (words) occur with high frequency, e.g.?
–Easy to collect statistics on these
A very large number occur with low frequency, e.g.?
–You may wait an arbitrarily long time to get valid statistics on these
–Some of the zeroes in the table are real zeroes
–Others are just low-frequency events you haven't seen yet
–How to allow for these events in unseen data?

Ngram Smoothing
Every n-gram training matrix is sparse, even for very large corpora
–Zipf's law: a word's frequency is approximately inversely proportional to its rank in the word distribution
Solution:
–Estimate the likelihood of unseen n-grams
–Problem: how do we adjust the rest of the model to accommodate these 'phantom' n-grams?
–Many techniques are described in J&M
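One of the simplest of those techniques is add-one (Laplace) smoothing: pretend every bigram was seen once more than it actually was, and renormalize. A minimal sketch, reusing the toy counting scheme from the earlier example (corpus and numbers are made up):

```python
from collections import Counter

corpus = "the mythical unicorn ate the mythical grass".split()   # toy corpus, made up
unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))
V = len(unigrams)                                 # vocabulary size (5 here)

# Add-one (Laplace) smoothed bigram estimate:
# P(y | x) = (count(x, y) + 1) / (count(x) + V)
def laplace_bigram_prob(x, y):
    return (bigrams[(x, y)] + 1) / (unigrams[x] + V)

print(laplace_bigram_prob("the", "mythical"))     # seen bigram: (2+1)/(2+5)
print(laplace_bigram_prob("unicorn", "grass"))    # unseen bigram gets non-zero mass: (0+1)/(1+5)
```

Note the trade-off: the unseen bigram now gets probability mass, which had to be taken away from the seen ones.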

Backoff Methods (e.g. Katz '87)
For e.g. a trigram model:
–Compute unigram, bigram and trigram probabilities
–In use: where the trigram is unavailable, back off to the bigram if available, otherwise to the unigram probability
–E.g., "An omnivorous unicorn"
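A deliberately simplified sketch of the backoff idea (without the discounting and backoff weights that Katz's method needs so the probabilities still sum to one); the probability tables below are hypothetical:

```python
# Hypothetical probability tables; real Katz backoff also discounts and applies backoff weights.
trigram = {}                                      # ("an", "omnivorous", "unicorn") was never seen
bigram = {("omnivorous", "unicorn"): 0.004}
unigram = {"unicorn": 0.0001}

def backoff_prob(w1, w2, w3):
    """Return P(w3 | w1 w2), backing off trigram -> bigram -> unigram."""
    if trigram.get((w1, w2, w3), 0) > 0:
        return trigram[(w1, w2, w3)]
    if bigram.get((w2, w3), 0) > 0:
        return bigram[(w2, w3)]
    return unigram.get(w3, 0)

print(backoff_prob("an", "omnivorous", "unicorn"))   # falls back to the bigram estimate, 0.004
```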

LM Toolkits
–The CMU-Cambridge LM toolkit (CMULM)
–The SRILM toolkit

Next: Evaluating ASR systems