Machine Translation
Presented By: Sparsh Gupta, Anmol Popli, Hammad Abdullah Ayyubi

Machine Translation
The task of translating a word/sentence/document from a source language S to a target language T.
ENGLISH → [Translation System] → SPANISH
"winter is coming" → "viene el invierno"

Machine Translation - Applications

Evaluation of Machine Translation Systems
Key points to judge:
- Adequacy: word overlap with the reference translation
- Fluency: phrase overlap with the reference translation
- Length of the translated sentence
Key challenges:
- Some words have multiple meanings/translations
- There can be more than one correct translation for a given sentence

BLEU Score: n-gram precision
Candidate: the the the the the the the
Reference 1: The cat is on the mat
Reference 2: There is a cat on the mat
Unigram Precision: 7/7 !!
Every candidate unigram ("the") appears in some reference, so plain unigram precision rewards this degenerate output.

BLEU Score: modified n-gram precision
Candidate: the the the the the the the
Reference 1: The cat is on the mat
Reference 2: There is a cat on the mat
Modified Unigram Precision: 2/7
Each candidate n-gram count is clipped at its maximum count in any single reference ("the" appears at most twice in a reference), which fixes the degenerate case above.
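A minimal sketch of this clipped count computation for the example above; the function name, lowercase tokenization, and use of collections.Counter are illustrative choices, not part of the slides.

```python
from collections import Counter

def modified_ngram_precision(candidate, references, n=1):
    """Clipped n-gram precision of a candidate against multiple references."""
    def ngrams(tokens):
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

    cand_counts = ngrams(candidate)
    # Clip each candidate n-gram count at its maximum count in any one reference.
    max_ref = Counter()
    for ref in references:
        for gram, cnt in ngrams(ref).items():
            max_ref[gram] = max(max_ref[gram], cnt)
    clipped = sum(min(cnt, max_ref[gram]) for gram, cnt in cand_counts.items())
    return clipped / max(sum(cand_counts.values()), 1)

cand = "the the the the the the the".split()
refs = ["the cat is on the mat".split(), "there is a cat on the mat".split()]
print(modified_ngram_precision(cand, refs))  # 2/7 ≈ 0.2857
```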

BLEU Score
Modified n-gram precision is computed on a per-sentence basis. For the corpus-level score, the clipped n-gram matches of every sentence are summed to form the numerator; similarly, the denominator sums the total number of candidate n-grams over the entire corpus.

BLEU Score
Candidate: of my
Reference 1: I repaid my friend's loan.
Reference 2: I repaid the loan of my friend.
Modified Unigram Precision: 2/2 !!
Modified Bigram Precision: 1/1 !!
A translated sentence should be neither too long nor too short compared to the ground-truth translation. Modified n-gram precision already penalizes overly long candidates, but short candidates like this one still score perfectly, so a separate length penalty is needed.

BLEU Score
BLEU = BP · exp( Σ_{n=1..N} w_n · log p_n ), with uniform weights w_n = 1/N; N is generally set to 4.
BP (brevity penalty) is set to 1 if the candidate corpus length c is greater than the reference corpus length r, and to an exponentially decaying factor e^(1 − r/c) otherwise, to penalize short candidates.
r: total length of the reference corpus
c: total length of the candidate corpus
The higher the BLEU score, the better.
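A corpus-level BLEU sketch following the description above (clipped matches summed over the corpus, uniform weights w_n = 1/N, brevity penalty). Smoothing and real tokenization are omitted, and using the closest-length reference per sentence for r is one common convention, assumed here rather than stated on the slides.

```python
import math
from collections import Counter

def ngram_counts(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def corpus_bleu(candidates, references_list, N=4):
    # c, r: candidate and (closest-match) reference corpus lengths.
    c = sum(len(cand) for cand in candidates)
    r = sum(len(min(refs, key=lambda ref: abs(len(ref) - len(cand))))
            for cand, refs in zip(candidates, references_list))
    bp = 1.0 if c > r else math.exp(1 - r / c)   # brevity penalty

    log_p = []
    for n in range(1, N + 1):
        clipped = total = 0
        for cand, refs in zip(candidates, references_list):
            cand_counts = ngram_counts(cand, n)
            max_ref = Counter()
            for ref in refs:
                for g, cnt in ngram_counts(ref, n).items():
                    max_ref[g] = max(max_ref[g], cnt)
            clipped += sum(min(cnt, max_ref[g]) for g, cnt in cand_counts.items())
            total += sum(cand_counts.values())
        log_p.append(math.log(clipped / total) if clipped else float("-inf"))

    # Uniform weights w_n = 1/N; a -inf log precision yields a BLEU of 0.
    return bp * math.exp(sum(log_p) / N)
```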

IBM Model 1

IBM Model 1 - Word Alignments

IBM Model 1 - Word Alignments
But we do not know the alignment of words from the source language to the target language! This alignment is learnt using the EM (Expectation-Maximization) algorithm, which can broadly be understood in 4 steps (see the sketch below):
1. Initialize the model parameters
2. Assign probabilities to the missing data (the alignments)
3. Re-estimate the model parameters from those probabilities
4. Repeat steps 2 and 3 until convergence
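A compact sketch of this EM loop for IBM Model 1, in the spirit of the standard pseudocode from Koehn's SMT textbook; the NULL word and a proper convergence test are omitted for brevity, and the toy corpus and all names are illustrative.

```python
from collections import defaultdict

def train_ibm_model1(corpus, iterations=10):
    """corpus: list of (foreign_tokens, english_tokens) sentence pairs."""
    e_vocab = {e for _, es in corpus for e in es}
    # Step 1: initialize t(e|f) uniformly over the English vocabulary.
    t = defaultdict(lambda: 1.0 / len(e_vocab))

    for _ in range(iterations):
        count = defaultdict(float)   # expected counts c(e, f)
        total = defaultdict(float)   # expected counts c(f)
        # Step 2 (E-step): distribute each English word's probability mass
        # over all foreign words it could align to.
        for fs, es in corpus:
            for e in es:
                z = sum(t[(e, f)] for f in fs)   # normalize over alignments
                for f in fs:
                    p = t[(e, f)] / z
                    count[(e, f)] += p
                    total[f] += p
        # Step 3 (M-step): re-estimate t(e|f) from the expected counts.
        for (e, f), c in count.items():
            t[(e, f)] = c / total[f]
        # Step 4: repeat (a fixed iteration count stands in for convergence).
    return t

# Toy usage: over the iterations, t[("house", "haus")] comes to dominate.
corpus = [("das haus".split(), "the house".split()),
          ("das buch".split(), "the book".split())]
t = train_ibm_model1(corpus)
```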

IBM Model 1 - Word Probabilities
The translation probabilities are computed from training data by maintaining a count of the observed word translations, e.g. for the word "haus":

Translation of "haus"   Count
house                    8000
building                 1600
home                      200
household                 150
shell                      50

Normalizing the counts gives the probabilities, e.g. t(house | haus) = 8000 / 10000 = 0.8.

IBM Model 1
Given a sentence in the source (foreign) language and an alignment function a, IBM Model 1 generates the translated sentence that maximizes the probability

p(e, a | f) = K / (l_f + 1)^{l_e} · Π_{j=1..l_e} t(e_j | f_{a(j)})

K: constant factor
l_e: length of the English sentence
l_f: length of the foreign sentence
t: translation probability
f_{a(j)}: foreign word aligned with the j-th English word
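Once t(·|·) is trained (e.g. with the EM sketch above), scoring a sentence pair under a fixed alignment is just the product in this formula. A small illustrative sketch; the default K and the floor value for unseen pairs are assumptions, not from the slides.

```python
def model1_prob(e_sent, f_sent, alignment, t, K=1.0):
    """p(e, a | f) = K / (l_f + 1)^l_e * product over j of t(e_j | f_a(j))."""
    p = K / (len(f_sent) + 1) ** len(e_sent)
    for j, e in enumerate(e_sent):
        # alignment[j] gives the foreign position aligned to English word j;
        # unseen word pairs get a tiny floor probability (an assumed choice).
        p *= t.get((e, f_sent[alignment[j]]), 1e-9)
    return p
```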

IBM Model 1 - Translation
ENGLISH → [IBM Model 1] → PIG LATIN
"i love deep learning" → "iway ovelay eepday earninglay"

Need for Neural Machine Translation
- NMT systems understand similarities between words: word embeddings model word relationships
- NMT systems consider the entire sentence: recurrent neural networks allow long-term dependencies
- NMT systems learn complex relationships between languages: hidden layers learn more complex features built upon simple features like n-gram similarities