
CSA3202: Natural Language Processing
Statistics 3 – Spelling Models
Typing Errors, Error Models, Spellchecking, Noisy Channel Methods, Probabilistic Methods, Bayesian Methods

Introduction
– Slides based on lectures by Mike Rosner (2003/2004)
– Chapter 5 of Jurafsky & Martin, Speech and Language Processing

Speech Recognition and Text Correction
Commonalities:
– Both are concerned with accepting a string of symbols and mapping it to a sequence of progressively less likely words
– Spelling correction: the symbols are characters
– Speech recognition: the symbols are phones

Noisy Channel Model
– Language is generated and passed through a noisy channel
– The resulting noisy data is received
– The goal is to recover the original data from the noisy data
– The same model can be used in diverse areas of language processing, e.g. spelling correction, morphological analysis, pronunciation modelling, machine translation
– The metaphor comes from Jelinek’s (1976) work on speech recognition, but the algorithm is a special case of Bayesian inference (1763)
[Diagram: a source produces the original word, the noisy channel turns it into a noisy word, and the receiver guesses the original word]

Spelling Correction
Kukich (1992) breaks the field down into three increasingly broad problems:
– Detection of non-words (e.g. graffe)
– Isolated-word error correction (e.g. graffe → giraffe)
– Context-dependent error detection and correction, where the error may result in a valid word (e.g. there → three)

Spelling Error Patterns
According to Damerau (1964), 80% of all misspelled words are caused by single-error misspellings, which fall into the following categories (examples for the word the):
– Insertion (ther)
– Deletion (th)
– Substitution (thw)
– Transposition (teh)
Because of this study, much subsequent research focused on the correction of single-error misspellings.

Causes of Spelling Errors
Keyboard based:
– 83% of novice errors and 51% of all errors were keyboard related
– Immediately adjacent keys in the same row of the keyboard (50% of novice substitutions, 31% of all substitutions)
Cognitive:
– Phonetic: seperate – separate
– Homonym: there – their

Bayesian Classification
Given an observation, determine which of a set of classes it belongs to.
Spelling correction:
– Observation: a string of characters
– Classification: the word that was intended
Speech recognition:
– Observation: a string of phones
– Classification: the word that was said

Bayesian Classification
Given a string of observations O = (o1, o2, …, on), the Bayesian interpretation starts by considering all possible classes, i.e. the set of all possible words. Out of this universe, we want to choose the word w in the vocabulary V which is most probable given the observation O:
ŵ = argmax_{w ∈ V} P(w | O)
where argmax_x f(x) means “the x such that f(x) is maximised”.
Problem: whilst this is guaranteed to give us the optimal word, it is not obvious how to make the equation operational: for a given word w and a given O we do not know how to compute P(w | O).

Bayes’ Rule
The intuition behind Bayesian classification is to use Bayes’ rule to transform P(w | O) into a product of two probabilities, each of which is easier to compute than P(w | O).

Bayes’ Rule
Applying Bayes’ rule, we obtain
ŵ = argmax_{w ∈ V} P(O | w) P(w) / P(O)
– P(w) can be estimated from word frequency
– P(O | w) can also be estimated fairly easily
– P(O), the probability of the observation sequence, is harder to estimate, but since it is the same for every candidate w we can ignore it
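This decision rule is straightforward to turn into code once three ingredients are available: a candidate set, a word-frequency prior and a channel model P(O | w). The sketch below assumes all three are supplied by the caller; the names are illustrative, not from the slides.

```python
# Minimal sketch of the noisy-channel decision rule: choose the candidate w
# that maximises P(O|w) * P(w). P(O) is constant over w and is ignored.
# word_counts and channel_model are assumed inputs, not defined in the slides.

def best_correction(observation, candidates, word_counts, channel_model):
    """Return argmax over candidates of P(O|w) * P(w)."""
    total = sum(word_counts.values())
    def score(w):
        prior = word_counts.get(w, 0) / total       # P(w) estimated from frequency
        likelihood = channel_model(observation, w)  # P(O|w) from an error model
        return likelihood * prior
    return max(candidates, key=score)
```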

PDF Slides
See the accompanying PDF slides (spell.pdf), pages 11-20.

Confusion Set
The confusion set of a word w includes w along with all words in the dictionary D that can be derived from w by a single application of one of the four edit operations:
– Add a single letter
– Delete a single letter
– Replace one letter with another
– Transpose two adjacent letters
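A common way to enumerate such a set in code is to generate every string one edit away and intersect it with the dictionary. The sketch below assumes a lower-case Latin alphabet and a Python set as the dictionary; both are assumptions of this example rather than anything fixed by the slides.

```python
# Sketch of building a confusion set: all strings one edit operation away from
# the observed string, filtered against a dictionary. The alphabet and the use
# of a plain Python set for the dictionary are assumptions of this sketch.

ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def one_edit_away(word):
    splits     = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes    = [L + R[1:]               for L, R in splits if R]
    transposes = [L + R[1] + R[0] + R[2:] for L, R in splits if len(R) > 1]
    replaces   = [L + c + R[1:]           for L, R in splits if R for c in ALPHABET]
    inserts    = [L + c + R               for L, R in splits for c in ALPHABET]
    return set(deletes + transposes + replaces + inserts)

def confusion_set(observed, dictionary):
    # The observed string itself plus every dictionary word one edit away.
    return {observed} | (one_edit_away(observed) & dictionary)
```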

Error Model 1 (Mays, Damerau et al. 1991)
Let C be the number of words in the confusion set of w. The error model, for all O in the confusion set of w, is:
P(O | w) = α if O = w, and (1 - α)/(C - 1) otherwise
where α is the prior probability of a given typed word being correct.
Key idea: the remaining probability mass is distributed evenly among all other words in the confusion set.
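The uniform confusion-set model translates directly into code. In the sketch below, confusion is the confusion set of the intended word w, and the value of α is purely illustrative.

```python
# Sketch of Error Model 1: the correctly typed word keeps probability mass
# alpha, and the remainder is spread evenly over the other members of the
# confusion set. alpha = 0.99 is an illustrative value, not from the slides.

def error_model_1(observed, intended, confusion, alpha=0.99):
    """P(O | w) under the uniform confusion-set model."""
    C = len(confusion)
    if observed == intended:
        return alpha
    if observed in confusion and C > 1:
        return (1 - alpha) / (C - 1)
    return 0.0
```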

Error Model 2: Church & Gale (1991)
Church & Gale (1991) propose a more sophisticated error model based on the same confusion set (one edit operation away from w). Two improvements:
1. Unequal weightings are attached to the different editing operations.
2. Insertion and deletion probabilities are conditioned on context: the probability of inserting or deleting a character is conditioned on the letter appearing immediately to the left of that character.
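As a rough illustration of what such context conditioning can look like, the sketch below conditions insertion and deletion on the letter to the left. The count tables are assumed to have been gathered elsewhere, and the normalisation shown is an illustrative choice of this sketch, not a claim about the exact formulas used by Church & Gale.

```python
# Rough sketch of context-conditioned insertion and deletion probabilities:
# both depend on the letter immediately to the left of the edited character.
# del_counts, ins_counts, char_counts and bigram_counts are assumed inputs,
# and the normalisation is illustrative rather than taken from the paper.

def p_delete(left, deleted, del_counts, bigram_counts):
    """Approximate probability that `deleted` is dropped when it follows `left`."""
    return del_counts.get((left, deleted), 0) / max(bigram_counts.get(left + deleted, 0), 1)

def p_insert(left, inserted, ins_counts, char_counts):
    """Approximate probability that `inserted` is spuriously typed after `left`."""
    return ins_counts.get((left, inserted), 0) / max(char_counts.get(left, 0), 1)
```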

Obtaining Error Probabilities
The error probabilities are derived by first assuming all edits are equiprobable.
As a training corpus they use a set of space-delimited strings, found in a large collection of text, that (a) do not appear in their dictionary and (b) are no more than one edit away from a word that does appear in the dictionary.
They iteratively run the spell checker over the training corpus to find corrections, then use these corrections to update the edit probabilities.

Error Model 3: Brill and Moore (2000)
Let Σ be an alphabet. The model allows all operations of the form α → β, where α, β ∈ Σ*.
P(α → β) is the probability that when a user intends to type the string α they type β instead.
N.B. the model considers substitutions of arbitrary substrings, not just single characters.

Error Model 3: Brill and Moore (2000)
The model also tries to account for the fact that, in general, positional information is a powerful conditioning feature, e.g.
p(entler | antler) < p(reluctent | reluctant)
i.e. the probability is partially conditioned by the position in the string at which the edit occurs.
Further examples: artifact/artefact; correspondance/correspondence.

Three Stage Model
1. The person picks a word: physical
2. The person picks a partition of the characters within the word: ph y s i c al
3. The person types each segment of the partition, perhaps erroneously: f i s i k le
p(fisikle | physical) = p(f|ph) * p(i|y) * p(s|s) * p(i|i) * p(k|c) * p(le|al)
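For a fixed partition, the probability is just this product over segments. The sketch below reproduces the physical/fisikle example; the numbers in seg_probs are made up purely for illustration.

```python
# Sketch of stage 3: for a fixed partition, P(s|w) is the product of
# per-segment substitution probabilities. The values in seg_probs are made up.

from math import prod

def partition_probability(intended_segments, typed_segments, seg_probs):
    """Product of P(typed_i | intended_i) over aligned segments."""
    assert len(intended_segments) == len(typed_segments)
    return prod(seg_probs.get((a, b), 0.0)
                for a, b in zip(intended_segments, typed_segments))

intended = ["ph", "y", "s", "i", "c", "al"]   # physical, partitioned as on the slide
typed    = ["f",  "i", "s", "i", "k", "le"]   # fisikle
seg_probs = {("ph", "f"): 0.1, ("y", "i"): 0.2, ("s", "s"): 0.9,
             ("i", "i"): 0.95, ("c", "k"): 0.3, ("al", "le"): 0.05}
print(partition_probability(intended, typed, seg_probs))   # p(fisikle | physical) for this partition
```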

Formal Presentation
Let Part(w) be the set of all possible ways to partition the string w into substrings. For a particular R in Part(w) containing j contiguous segments, let Ri be the i-th segment, and let T be the corresponding partition of the typed string s into j segments. Then:
P(s | w) = Σ_{R ∈ Part(w)} P(R | w) Π_{i=1..j} P(Ti | Ri)

Simplification
By considering only the best partitioning of s and w, this simplifies to:
P(s | w) ≈ max_{R ∈ Part(w)} P(R | w) Π_{i=1..j} P(Ti | Ri)
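Searching for the best joint segmentation can be done with a short recursion over suffixes of s and w. The sketch below additionally treats P(R | w) as uniform and caps segment length; both are simplifying assumptions of this sketch, not of the slides.

```python
# Sketch of the simplified model: maximise the product of segment substitution
# probabilities over joint segmentations of s and w. Treating P(R|w) as uniform
# and capping segment length (max_seg) are simplifying assumptions here.

from functools import lru_cache

def best_partition_prob(s, w, seg_probs, max_seg=3):
    @lru_cache(maxsize=None)
    def best(i, j):
        # Best achievable product for the suffixes s[i:] and w[j:].
        if i == len(s) and j == len(w):
            return 1.0
        options = []
        for di in range(max_seg + 1):        # length of the next typed segment
            for dj in range(max_seg + 1):    # length of the next intended segment
                if (di == 0 and dj == 0) or i + di > len(s) or j + dj > len(w):
                    continue
                p = seg_probs.get((w[j:j + dj], s[i:i + di]), 0.0)
                if p > 0.0:
                    options.append(p * best(i + di, j + dj))
        return max(options, default=0.0)
    return best(0, 0)
```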

Training the Model
To train the model, we need a set of (s, w) word pairs. We begin by aligning the letters in (si, wi) based on minimum edit distance. For instance, the training pair (akgsual, actual) could be aligned as:
a c ε t u a l
a k g s u a l

Training the Model
This corresponds to the sequence of editing operations:
a → a, c → k, ε → g, t → s, u → u, a → a, l → l
To allow for richer contextual information, each non-match substitution is expanded to incorporate up to N additional adjacent edits. For example, for the first non-match edit in the example above, with N = 2, we would generate the following substitutions:

Training the Model
a c ε t u a l
a k g s u a l
c → k, ac → ak, c → kg, ac → akg, ct → kgs
We do the same for the other non-match edits, and give each of these substitutions a fractional count.
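One way to realise this expansion in code is sketched below. Representing the alignment as (intended, typed) pairs with the empty string standing for ε, and reading "up to N additional adjacent edits" as a window of at most N extra positions around the edit, are both assumptions of this sketch.

```python
# Sketch of expanding one non-match edit with up to N adjacent aligned edits,
# reproducing the c->k, ac->ak, c->kg, ac->akg, ct->kgs examples above.

def expand_edit(alignment, index, n=2):
    """Contextual substitutions generated around the edit at alignment[index]."""
    subs = set()
    for left in range(n + 1):
        for right in range(n + 1 - left):   # at most n additional positions in total
            lo = max(0, index - left)
            hi = min(len(alignment), index + right + 1)
            window = alignment[lo:hi]
            source = "".join(a for a, _ in window)   # intended side
            target = "".join(b for _, b in window)   # typed side
            subs.add((source, target))
    return subs

alignment = [("a", "a"), ("c", "k"), ("", "g"), ("t", "s"),
             ("u", "u"), ("a", "a"), ("l", "l")]     # actual vs akgsual
print(sorted(expand_edit(alignment, 1)))             # edits around c -> k
```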

Training the Model
We can then calculate the probability of each substitution α → β as count(α → β) / count(α).
– count(α → β) is simply the sum of the fractional counts derived from our training data as explained above.
– Estimating count(α) is harder, since we are not training from a text corpus but from a set of (s, w) tuples (without an associated corpus).
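Once both kinds of count are available, turning them into probabilities is a single normalisation step; the dictionary shapes in the sketch below are assumptions of this example.

```python
# Sketch of computing P(alpha -> beta) = count(alpha -> beta) / count(alpha).
# sub_counts maps (alpha, beta) pairs to fractional counts; alpha_counts maps
# each alpha to its count, estimated as described on the next slide.

def substitution_probs(sub_counts, alpha_counts):
    return {(a, b): c / alpha_counts[a]
            for (a, b), c in sub_counts.items()
            if alpha_counts.get(a, 0) > 0}
```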

Training the Model
To estimate count(α): from a large collection of representative text, count the number of occurrences of α, then adjust the count based on an estimate of the rate at which people make typing errors.