Statistical Machine Translation Part II – Word Alignments and EM Alex Fraser Institute for Natural Language Processing University of Stuttgart 2008.07.22.


Statistical Machine Translation Part II – Word Alignments and EM
Alex Fraser
Institute for Natural Language Processing, University of Stuttgart
EMA Summer School

Word Alignments
Recall that we build translation models from word-aligned parallel sentences
– The statistics involved in state-of-the-art SMT decoding models are simple
– Just count translations in the word-aligned parallel sentences
But what is a word alignment, and how do we obtain it?
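Below is a minimal Python sketch of what "just count translations" can look like in practice: collecting relative-frequency lexical translation estimates t(e|f) from alignment links. The data format and the function name lexical_table are my assumptions, not code from the lecture.

from collections import defaultdict

def lexical_table(parallel_corpus):
    """parallel_corpus: iterable of (e_words, f_words, links) triples,
    where links is a set of (e_index, f_index) alignment links."""
    count = defaultdict(float)   # count[(e, f)]: how often e was linked to f
    total = defaultdict(float)   # total[f]: how often f was linked at all
    for e_words, f_words, links in parallel_corpus:
        for (i, j) in links:
            count[(e_words[i], f_words[j])] += 1.0
            total[f_words[j]] += 1.0
    # relative-frequency estimate t(e|f)
    return {(e, f): c / total[f] for (e, f), c in count.items()}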

Word alignment is annotation of minimal translational correspondences
– Annotated in the context in which they occur
– Not idealized translations!
(In the figure: the solid blue lines were annotated by a bilingual expert)

Automatic word alignments are typically generated using a model called IBM Model 4
– No linguistic knowledge
– No correct alignments are supplied to the system: unsupervised learning
(In the figure: the red dashed line is the automatically generated hypothesis)

Uses of Word Alignment
Multilingual
– Machine Translation
– Cross-Lingual Information Retrieval
– Translingual Coding (Annotation Projection)
– Document/Sentence Alignment
– Extraction of Parallel Sentences from Comparable Corpora
Monolingual
– Paraphrasing
– Query Expansion for Monolingual Information Retrieval
– Summarization
– Grammar Induction

Outline
– Measuring alignment quality
– Types of alignments
– IBM Model 1
  – Training IBM Model 1 with Expectation Maximization
– IBM Models 3 and 4
  – Approximate Expectation Maximization
– Heuristics for high-quality alignments from the IBM models

How to measure alignment quality?
If we want to compare two word alignment algorithms, we can generate a word alignment with each algorithm for fixed training data
– Then build an SMT system from each alignment
– Compare performance of the SMT systems using BLEU
But this is slow; building SMT systems can take days of computation
– Question: Can we have an automatic metric like BLEU, but for alignment?
– Answer: yes, by comparing with gold-standard alignments

Measuring Precision and Recall
Precision is the percentage of links in the hypothesis that are correct
– If we hypothesize no links, we have 100% precision
Recall is the percentage of gold links that we hypothesized
– If we hypothesize all possible links, we have 100% recall

F_α-score
(Figure: a gold alignment grid and a hypothesis alignment grid over words f1–f5 and e1–e4)
Precision = 3/4: the hypothesized link (e3,f4) is wrong
Recall = 3/5: the gold links (e2,f3) and (e3,f5) are not in the hypothesis
Called F_α-score to differentiate it from the ambiguous term F-Measure

F_α is defined as 1 / (α/Precision + (1−α)/Recall)
Alpha allows a trade-off between precision and recall
But alpha must be set correctly for the task!
Alpha between 0.1 and 0.4 works well for SMT
– Biased towards recall
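A small Python sketch (my own illustration, not code from the lecture) that computes precision, recall, and F_α from sets of alignment links. The example link sets are made up to reproduce the slide's precision of 3/4 and recall of 3/5; only the links explicitly named on the slide are taken from it.

def alignment_f_alpha(gold, hyp, alpha=0.4):
    """gold, hyp: sets of (e_index, f_index) alignment links."""
    correct = len(gold & hyp)
    precision = correct / len(hyp) if hyp else 1.0   # no links: 100% precision
    recall = correct / len(gold) if gold else 1.0
    if correct == 0:
        return 0.0, precision, recall
    f_alpha = 1.0 / (alpha / precision + (1.0 - alpha) / recall)
    return f_alpha, precision, recall

# Hypothesis link (3, 4) is wrong; gold links (2, 3) and (3, 5) are missed.
gold = {(1, 1), (2, 2), (2, 3), (3, 5), (4, 4)}
hyp = {(1, 1), (2, 2), (3, 4), (4, 4)}
print(alignment_f_alpha(gold, hyp))   # precision 0.75, recall 0.6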

Slide from Koehn 2008

Generative Word Alignment Models
We observe a pair of parallel sentences (e,f)
We would like to know the highest-probability alignment a for (e,f)
Generative models are models that follow a series of steps
– We will pretend that e has been generated from f
– The sequence of steps to do this is encoded in the alignment a
– A generative model associates a probability p(e,a|f) with each alignment
In words, this is the probability of generating the alignment a and the English sentence e, given the foreign sentence f

IBM Model 1
A simple generative model; start with:
– the foreign sentence f
– a lexical mapping distribution t(EnglishWord|ForeignWord)
How to generate an English sentence e from f:
1. Pick a length for the English sentence at random
2. Pick an alignment function at random
3. For each English position, generate an English word by looking up the aligned ForeignWord given by the alignment function

Slide from Koehn 2008

p(e,a|f) = ε/5^4 × t(the|das) × t(house|Haus) × t(is|ist) × t(small|klein)
         = ε/625 × 0.7 × 0.8 × 0.8 × 0.4
         ≈ 0.00029 × ε

Slide modified from Koehn 2008
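A tiny Python sketch (mine, not the lecture's) that scores p(e,a|f) under Model 1 and reproduces the arithmetic above, with ε set to 1 for illustration; the table t holds the example values from the slide.

def model1_score(e_words, f_words, alignment, t, epsilon=1.0):
    """alignment[j] is the foreign position aligned to English position j
    (indices into f_words; the NULL word is left out of this sketch)."""
    l_e, l_f = len(e_words), len(f_words)
    prob = epsilon / (l_f + 1) ** l_e            # uniform alignment/length term
    for j, e in enumerate(e_words):
        prob *= t[(e, f_words[alignment[j]])]    # lexical translation term
    return prob

t = {("the", "das"): 0.7, ("house", "Haus"): 0.8,
     ("is", "ist"): 0.8, ("small", "klein"): 0.4}
print(model1_score(["the", "house", "is", "small"],
                   ["das", "Haus", "ist", "klein"],
                   [0, 1, 2, 3], t))             # ≈ 0.000287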

Slide from Koehn 2008

Unsupervised Training with EM
Expectation Maximization (EM)
– Unsupervised learning
– Maximize the likelihood of the training data
  – Likelihood is (informally) the probability the model assigns to the training data
– E-Step: predict according to the current parameters
– M-Step: reestimate the parameters from the predictions
– Amazing but true: if we iterate the E and M steps, we increase the likelihood!

Slide from Koehn 2008

For two English words e1, e2 and three foreign tokens f0, f1, f2, the sum of p(e,a|f) over all alignments factorizes into a product of per-position sums:

  t(e1|f0) t(e2|f0) + t(e1|f0) t(e2|f1) + t(e1|f0) t(e2|f2)
+ t(e1|f1) t(e2|f0) + t(e1|f1) t(e2|f1) + t(e1|f1) t(e2|f2)
+ t(e1|f2) t(e2|f0) + t(e1|f2) t(e2|f1) + t(e1|f2) t(e2|f2)
= t(e1|f0) [t(e2|f0) + t(e2|f1) + t(e2|f2)]
+ t(e1|f1) [t(e2|f0) + t(e2|f1) + t(e2|f2)]
+ t(e1|f2) [t(e2|f0) + t(e2|f1) + t(e2|f2)]
= [t(e1|f0) + t(e1|f1) + t(e1|f2)] × [t(e2|f0) + t(e2|f1) + t(e2|f2)]

Slide modified from Koehn 2008
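A compact Python sketch of EM training for Model 1 in the spirit of the pseudocode in Koehn (2008); the variable names, the uniform initialization, and the omission of the NULL word are my simplifications.

from collections import defaultdict

def train_model1(corpus, iterations=10):
    """corpus: list of (e_words, f_words) sentence pairs.
    Returns lexical translation probabilities t[(e, f)]."""
    e_vocab = {e for e_words, _ in corpus for e in e_words}
    t = defaultdict(lambda: 1.0 / len(e_vocab))      # uniform initialization
    for _ in range(iterations):
        count = defaultdict(float)                   # expected counts c(e, f)
        total = defaultdict(float)                   # expected counts c(f)
        for e_words, f_words in corpus:
            for e in e_words:
                # normalization: the sum over alignments factorizes per position,
                # as shown on the slide above
                z = sum(t[(e, f)] for f in f_words)
                for f in f_words:
                    delta = t[(e, f)] / z            # E-step: expected link count
                    count[(e, f)] += delta
                    total[f] += delta
        # M-step: renormalize the expected counts
        t = defaultdict(float,
                        {(e, f): c / total[f] for (e, f), c in count.items()})
    return t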

Slide from Koehn 2008

Outline
– Measuring alignment quality
– Types of alignments
– IBM Model 1
  – Training IBM Model 1 with Expectation Maximization
– IBM Models 3 and 4
  – Approximate Expectation Maximization
– Heuristics for improving IBM alignments

Slide from Koehn 2008

Training IBM Models 3/4/5
Approximate Expectation Maximization
– Focusing the probability on a small set of the most probable alignments

Slide from Koehn 2008

Maximum Approximation
Mathematically, P(e|f) = Σ_a P(e,a|f)
– An alignment represents one way e could be generated from f
But for IBM Models 3, 4, and 5 we approximate
Maximum approximation: P(e|f) ≈ max_a P(e,a|f)
Another approximation close to this will be discussed in a few slides

Model 3/4/5 training: Approximate EM
(Diagram: starting from initial parameters, the E-Step produces Viterbi alignments from the current translation model, the M-Step produces refined parameters from those alignments, and the bootstrap cycle repeats)

Model 3/4/5 E-Step
E-Step: search for Viterbi alignments
Solved using local hillclimbing search
– Given a starting alignment, we can permute the alignment by making small changes, such as swapping the incoming links for two words
Algorithm (a sketch in code follows below):
– Begin: given the starting alignment A, make a list of possible small changes (e.g. every possible swap of the incoming links for two words)
– for each possible small change: create a new alignment A2 by copying A and applying the small change; if score(A2) > score(best) then best = A2
– end for
– choose the best alignment as the new starting point and go to Begin (stop when no change improves the score)
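A simplified Python sketch of the hillclimbing E-Step (my own illustration: score() stands in for p(e,a|f) under the current Model 3/4 parameters, and only swaps of incoming links are considered, whereas real implementations also try moving a single link):

def hillclimb(alignment, score):
    """alignment[j] is the foreign position linked to English position j;
    score(a) is a placeholder for p(e,a|f) under the current parameters."""
    best = list(alignment)
    best_score = score(best)
    improved = True
    while improved:                       # "goto Begin" until nothing improves
        improved = False
        for j1 in range(len(best)):       # every possible swap of incoming links
            for j2 in range(j1 + 1, len(best)):
                cand = list(best)
                cand[j1], cand[j2] = cand[j2], cand[j1]
                s = score(cand)
                if s > best_score:
                    best, best_score = cand, s
                    improved = True
    return best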

Model 3/4/5 M-Step
M-Step: reestimate parameters
– Count events in the neighborhood of the Viterbi alignment
  – Neighborhood approximation: consider only those alignments reachable by one change to the alignment
  – Calculate p(e,a|f) only over this neighborhood, then divide so it sums to 1 to get p(a|e,f)
– Sum counts over sentences, weighted by p(a|e,f)
– Normalize the counts to sum to 1
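A short sketch of the neighborhood approximation (again my illustration, with the same placeholder score() and swap-only neighborhood as above): p(e,a|f) is computed for the Viterbi alignment and its one-change neighbors, then normalized to give p(a|e,f), which weights the counts.

def neighborhood_posteriors(viterbi, score):
    """Return (alignment, posterior) pairs over the Viterbi alignment and the
    alignments reachable from it by one change, normalized to sum to 1."""
    neighborhood = [list(viterbi)]
    for j1 in range(len(viterbi)):
        for j2 in range(j1 + 1, len(viterbi)):
            cand = list(viterbi)
            cand[j1], cand[j2] = cand[j2], cand[j1]
            neighborhood.append(cand)
    scores = [score(a) for a in neighborhood]        # p(e,a|f) per alignment
    z = sum(scores)
    return [(a, s / z) for a, s in zip(neighborhood, scores)]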

IBM Models: the 1-to-N Assumption
– Multi-word "cepts" (words in one language translated as a unit) are only allowed on the target side
– The source side is limited to single-word "cepts"
– We are therefore forced to create M-to-N alignments using heuristics

Slide from Koehn 2008

Discussion
Most state-of-the-art SMT systems are built as presented here
Use the IBM Models to generate both:
– a one-to-many alignment
– a many-to-one alignment
Combine these two alignments using a symmetrization heuristic (a sketch follows below)
– the output is a many-to-many alignment
– used for building the decoder
Moses toolkit for implementation:
– uses Och's GIZA++ tool for Model 1, HMM, Model 4
However, there is newer work on alignment that is interesting!
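A minimal Python sketch of symmetrization (my own simplification: just the intersection and union of the two link sets; the grow-diag-final heuristic commonly used with Moses starts from the intersection and selectively adds links from the union):

def symmetrize(e2f_links, f2e_links):
    """e2f_links: links from the one-to-many run, f2e_links: links from the
    many-to-one run, both as sets of (e_index, f_index) pairs."""
    intersection = e2f_links & f2e_links   # high precision
    union = e2f_links | f2e_links          # high recall
    return intersection, union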

Practice
1. Implement EM for IBM Model 1
– Please see my home page:
Stuck? My office is two doors away from this room
Lecture tomorrow: 14:00, this room, phrase-based modeling and decoding

Thank You!