Statistical NLP: Lecture 13
Statistical Alignment and Machine Translation
March 20, 22 & 24

Overview

MT is very hard: translation programs available today do not perform very well. Different approaches to MT:
- Word-for-word
- Syntactic transfer approaches
- Semantic transfer approaches
- Interlingua

Most MT systems are a mix of probabilistic and non-probabilistic components, though there are a few completely statistical translation systems.

Overview (Cont'd)

A large part of implementing an MT system (e.g., probabilistic parsing, word sense disambiguation) is not specific to MT and is discussed in other, more general chapters. The parts of MT that are specific to it are text alignment and word alignment.

Definition: in the sentence alignment problem, one seeks to say that some group of sentences in one language corresponds in content to some other group of sentences in another language. Such a grouping is referred to as a bead of sentences.

Overview of the Lecture
- Text alignment
- Word alignment
- A fully statistical attempt at MT

Text Alignment: Aligning Sentences and Paragraphs

Text alignment is useful for bilingual lexicography and MT, but also as a first step toward using bilingual corpora for other tasks. Text alignment is not trivial because translators do not always translate one sentence in the input into one sentence in the output, although they do so in about 90% of cases. Another problem is that of crossing dependencies, where the order of sentences is changed in the translation.

Different Approaches to Text Alignment
- Length-based approaches: short sentences will be translated as short sentences and long sentences as long sentences.
- Offset alignment by signal processing techniques: these approaches do not attempt to align beads of sentences, but rather just to align position offsets in the two parallel texts.
- Lexical methods: use lexical information to align beads of sentences.

Length-Based Methods I: General Approach

Goal: find the alignment A with highest probability given the two parallel texts S and T:

  argmax_A P(A | S, T) = argmax_A P(A, S, T)

(the two are equal because P(S, T) is constant for a given text pair). To estimate this probability, the aligned text is decomposed into a sequence of aligned beads, where each bead is assumed to be independent of the others:

  P(A, S, T) ≈ ∏_{k=1..K} P(B_k)

The question, then, is how to estimate the probability of a certain type of alignment bead given the sentences in that bead.
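Spelled out as a single chain (a standard derivation, added here for completeness; it is not on the original slide): dividing by P(S, T) changes nothing because the text pair is fixed, and the bead independence assumption turns the joint probability into a product.

```latex
\hat{A} = \operatorname*{arg\,max}_{A} P(A \mid S, T)
        = \operatorname*{arg\,max}_{A} \frac{P(A, S, T)}{P(S, T)}
        = \operatorname*{arg\,max}_{A} P(A, S, T)
        \approx \operatorname*{arg\,max}_{A} \prod_{k=1}^{K} P(B_k)
```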

Length-Based Methods II: Gale and Church, 1993

The algorithm uses sentence length (measured in characters) to evaluate how likely an alignment of some number of sentences in L1 is with some number of sentences in L2. It uses a dynamic programming technique that allows the system to efficiently consider all possible alignments and find the minimum-cost alignment (see the sketch below). The method performs well, at least on related languages: it gets a 4% error rate overall, and it works best on 1:1 alignments (only a 2% error rate), but it has a high error rate on more difficult alignments.
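The following is a minimal Python sketch of a Gale-and-Church-style aligner. The bead inventory, the prior penalties, and the constants c and s2 are illustrative placeholders rather than the paper's estimated values; only the overall structure (a Gaussian length-match cost minimized by dynamic programming over bead types) reflects the method described above.

```python
import math

# Candidate bead types (# source sentences, # target sentences) with a
# prior penalty (a stand-in for -log prior); the values are illustrative.
BEADS = {(1, 1): 0.0, (1, 0): 4.0, (0, 1): 4.0,
         (2, 1): 2.5, (1, 2): 2.5, (2, 2): 5.0}

def match_cost(src_len, tgt_len, c=1.0, s2=6.8):
    """Length-match cost: squared normalized difference between the target
    length and its expectation given the source length (a Gaussian score)."""
    mean = (src_len + tgt_len / c) / 2.0
    if mean == 0:
        return 0.0
    delta = (tgt_len - c * src_len) / math.sqrt(s2 * mean)
    return delta * delta

def align(src_lens, tgt_lens):
    """Minimum-cost alignment over two lists of sentence lengths (in chars)."""
    I, J = len(src_lens), len(tgt_lens)
    INF = float("inf")
    cost = [[INF] * (J + 1) for _ in range(I + 1)]
    back = [[None] * (J + 1) for _ in range(I + 1)]
    cost[0][0] = 0.0
    for i in range(I + 1):            # forward DP in row-major order
        for j in range(J + 1):
            if cost[i][j] == INF:
                continue
            for (di, dj), penalty in BEADS.items():
                ni, nj = i + di, j + dj
                if ni > I or nj > J:
                    continue
                new = cost[i][j] + penalty + match_cost(
                    sum(src_lens[i:ni]), sum(tgt_lens[j:nj]))
                if new < cost[ni][nj]:
                    cost[ni][nj] = new
                    back[ni][nj] = (i, j)
    beads, i, j = [], I, J            # trace back the best path
    while (i, j) != (0, 0):
        pi, pj = back[i][j]
        beads.append(((pi, i), (pj, j)))
        i, j = pi, pj
    return list(reversed(beads))      # [((src_from, src_to), (tgt_from, tgt_to)), ...]
```

For example, align([120, 35, 80], [118, 110]) should pair the first sentences as a 1:1 bead and group the last two source sentences against the second target sentence as a 2:1 bead, since those groupings keep the length-match cost low.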

Length-Based Methods III: Other Approaches

Brown et al., 1991: same approach as Gale and Church, except that sentence lengths are compared in terms of words rather than characters. Another difference is the goal: Brown et al. didn't want to align entire articles, just a subset of the corpus suitable for further research.

Wu, 1994: Wu applies Gale and Church's method to a corpus of parallel English and Cantonese text. The results are not much worse than on related languages. To improve accuracy, Wu uses lexical cues.

Offset Alignment by Signal Processing Techniques I: Church, 1993

Church argues that length-based methods work well on clean text but may break down in real-world situations (noisy OCR output or unknown markup conventions). His method is to induce an alignment by using cognates (words that are similar across languages) at the level of character sequences. The method consists of building a dot-plot: the source and translated texts are concatenated, and a square graph is made with this text on both axes. A dot is placed at (x, y) wherever there is a match; the matching unit is character 4-grams.

Offset Alignment by Signal Processing Techniques II: Church, 1993 (Cont'd)

Signal processing methods are then used to compress the resulting plot. The interesting parts of a dot-plot are the bitext maps, which show the correspondence between the two languages. In the bitext maps one can find faint, roughly straight diagonals corresponding to cognates. A heuristic search along these diagonals provides an alignment in terms of offsets in the two texts.
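A minimal sketch of the dot-plot construction, assuming the concatenated text is matched at the level of character 4-grams as above; the compression of the plot and the heuristic diagonal search are omitted, and the function name is invented for illustration.

```python
from collections import defaultdict

def dot_plot(source_text, target_text, n=4):
    """Return (x, y) dots where the concatenation of the two texts repeats
    a character n-gram. Matches between the two halves (cognates) show up
    as faint diagonals in the off-diagonal quadrants: the bitext maps."""
    text = source_text + target_text
    positions = defaultdict(list)
    for i in range(len(text) - n + 1):
        positions[text[i:i + n]].append(i)
    dots = []
    for occurrences in positions.values():
        for x in occurrences:
            for y in occurrences:
                if x != y:
                    dots.append((x, y))
    return dots
```

On real text one would cap very frequent n-grams, since the inner loops are quadratic in the number of occurrences of each n-gram.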

Offset Alignment by Signal Processing Techniques III: Fung & McKeown, 1994

Fung and McKeown's algorithm works:
- without having found sentence boundaries;
- on only roughly parallel text (with certain sections missing in one language);
- with unrelated language pairs.

The technique is to infer a small bilingual dictionary that will give points of alignment. For each word, a signal is produced: an arrival vector of integers giving the number of words between successive occurrences of the word at hand (see the sketch below).
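A minimal sketch of the arrival-vector signal for a single word; the subsequent step of comparing signals across the two texts to pick out matching word pairs is not shown.

```python
def arrival_vector(tokens, word):
    """Gaps (in word positions) between successive occurrences of `word`."""
    positions = [i for i, t in enumerate(tokens) if t == word]
    return [b - a for a, b in zip(positions, positions[1:])]

# Toy example: "the" occurs at positions 0, 4, and 7.
tokens = "the cat sat on the mat near the door".split()
print(arrival_vector(tokens, "the"))  # [4, 3]
```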

Lexical Methods of Sentence Alignment I: Kay & Roscheisen, 1993

Assume the first and last sentences of the texts align; these are the initial anchors. Then, until most sentences are aligned:
1. Form an envelope of possible alignments.
2. Choose pairs of words that tend to co-occur in these potential partial alignments (one illustrative scoring is sketched below).
3. Find pairs of source and target sentences which contain many possible lexical correspondences.

The most reliable of these pairs are used to induce a set of partial alignments which will be part of the final result.
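One way to make step 2 concrete is to score candidate word pairs by a co-occurrence statistic over the sentence pairs currently inside the envelope. The Dice coefficient used below is an illustrative choice, not necessarily Kay & Roscheisen's exact formula.

```python
from collections import Counter

def dice_scores(candidate_pairs):
    """Score (source word, target word) pairs by the Dice coefficient over
    a list of candidate (source_tokens, target_tokens) sentence pairs."""
    src_freq, tgt_freq, joint = Counter(), Counter(), Counter()
    for src, tgt in candidate_pairs:
        src_set, tgt_set = set(src), set(tgt)
        src_freq.update(src_set)
        tgt_freq.update(tgt_set)
        for s in src_set:
            for t in tgt_set:
                joint[(s, t)] += 1
    return {(s, t): 2 * c / (src_freq[s] + tgt_freq[t])
            for (s, t), c in joint.items()}
```

Word pairs with high scores then serve as the lexical correspondences counted in step 3.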

Lexical Methods of Sentence Alignment II: Chen, 1993

Chen does sentence alignment by constructing a simple word-to-word translation model as he goes along. The best alignment is the one that maximizes the likelihood of generating the corpus given the translation model, and it is found using dynamic programming.
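In symbols (a paraphrase of the objective using the bead notation from earlier in the lecture, not a formula taken from the slide), with θ denoting the word-to-word translation model estimated along the way:

```latex
\hat{A} = \operatorname*{arg\,max}_{A} P(S, T \mid A;\, \theta)
        \approx \operatorname*{arg\,max}_{A} \prod_{k=1}^{K} P(B_k \mid \theta)
```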

Lexical Methods of Sentence Alignment III: Haruno & Yamazaki, 1996

Their method is a variant of Kay & Roscheisen (1993) with the following differences:
- For structurally very different languages, function words impede alignment, so they eliminate function words using a POS tagger.
- When aligning short texts, there are not enough repeated words for reliable alignment using Kay & Roscheisen (1993), so they use an online dictionary to find matching word pairs.

Word Alignment

A common use of aligned texts is the derivation of bilingual dictionaries and terminology databases. This is usually done in two steps:
1. The text alignment is extended to a word alignment.
2. Some criterion, such as frequency, is used to select aligned pairs for which there is enough evidence to include them in the bilingual dictionary.

Using a χ² measure works well unless one word in L1 occurs with more than one word in L2; then it is useful to assume a one-to-one correspondence. Future work is likely to use existing bilingual dictionaries.
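A minimal sketch of the χ² association step, assuming sentence-aligned input: for each candidate word pair, build a 2×2 contingency table over the aligned sentence pairs and compute the χ² statistic. The one-to-one filtering mentioned above is not shown.

```python
def chi_square(n11, n12, n21, n22):
    """Chi-square statistic for a 2x2 contingency table, where
    n11 = sentence pairs containing both words, n12 = source word only,
    n21 = target word only, n22 = neither word."""
    n = n11 + n12 + n21 + n22
    numerator = n * (n11 * n22 - n12 * n21) ** 2
    denominator = (n11 + n12) * (n21 + n22) * (n11 + n21) * (n12 + n22)
    return numerator / denominator if denominator else 0.0

# Hypothetical counts for a word pair over 100 aligned sentence pairs:
print(chi_square(59, 6, 8, 27))  # ~47.5; a large value means strong association
```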

Fully Statistical MT I

MT has been attempted using a noisy channel model. Such a model requires:
- a language model;
- a translation model (supplying the translation probabilities);
- a decoder.

An evaluation of the model found that only 48% of French sentences were translated correctly. The errors were either incorrect decodings or ungrammatical decodings.
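The decomposition behind such a system is the standard noisy channel equation (the textbook formulation; the slide itself only lists the components): for a given French sentence f, the decoder searches for the English sentence ê that maximizes the product of the language model and translation model scores.

```latex
\hat{e} = \operatorname*{arg\,max}_{e} P(e \mid f)
        = \operatorname*{arg\,max}_{e} \frac{P(e)\, P(f \mid e)}{P(f)}
        = \operatorname*{arg\,max}_{e} \underbrace{P(e)}_{\text{language model}}\;
          \underbrace{P(f \mid e)}_{\text{translation model}}
```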

Fully Statistical MT II: Problems with the Model
- Fertility is asymmetric
- Independence assumptions
- Sensitivity to training data
- Efficiency
- No notion of phrases
- Non-local dependencies
- Morphology
- Sparse data problems

In summary, non-linguistic models are fairly successful for word alignments, but they fail for MT.