
Transfer-based MT with Strong Decoding for a Miserly Data Scenario Alon Lavie Language Technologies Institute Carnegie Mellon University Joint work with: Stephan Vogel, Kathrin Probst, Erik Peterson, Ari Font-Llitjos, Lori Levin, Rachel Reynolds, Jaime Carbonell, Richard Cohen

July 21, 2003, TIDES MT Evaluation Workshop

Rationale and Motivation
Our transfer-based MT approach is specifically designed for limited-data scenarios
The Hindi SLE was the first open-domain, large-scale test for our system, but...
Hindi turned out not to be a limited-data scenario
–1.5 million words of parallel text
Lessons learned by the end of the SLE:
–The basic XFER system did not have a strong decoder
–"Noisy" statistical lexical resources interfere with transfer rules in our basic XFER system

Rationale and Motivation
Research questions:
–How would we do in a more "realistic" minority-language scenario, with very limited resources?
–How does XFER compare with EBMT and SMT under such a scenario?
–How well can we do when we add a strong decoder to our XFER system?
–What is the effect of multi-engine combination when using a strong decoder?

A Limited Data Scenario for Hindi-to-English
Put together a scenario with "miserly" data resources:
–Elicited data corpus: phrases
–Cleaned portion (top 12%) of the LDC dictionary: ~2,725 Hindi words (23,612 translation pairs)
–Manually acquired resources during the SLE:
 500 manual bigram translations
 72 manually written phrase transfer rules
 105 manually written postposition rules
 48 manually written time expression rules
–No additional parallel text!

Learning Transfer Rules from Elicited Data
Rationale:
–Large bilingual corpora are not available
–Bilingual native informant(s) can translate and word-align a well-designed elicitation corpus, using our elicitation tool
–The Controlled Elicitation Corpus is designed to be typologically comprehensive and compositional
–We significantly enhance the elicitation corpus using a new technique for extracting appropriate data from an uncontrolled corpus
–The transfer-rule engine and learning approach support acquisition of generalized transfer rules from the data

The CMU Elicitation Tool

Elicited Data Collection
Goal: acquire high-quality word-aligned Hindi-English data to support system development, especially grammar development and automatic grammar learning
Recruited a team of ~20 bilingual speakers
Extracted a corpus of phrases (NPs and PPs) from the Brown Corpus section of the Penn TreeBank
The extracted corpus was divided into files and assigned to translators, here and in India
The Controlled Elicitation Corpus was also translated into Hindi
Resulting in a total of word-aligned translated phrases

XFER System Architecture
[Architecture diagram: a Learning Module (elicitation process, SVS learning process) produces transfer rules; at run time, SL input passes through the SL parser, transfer engine, and TL generator, and a decoder module produces the TL output]

The Transfer Engine
Analysis: source text is parsed into its grammatical structure, which determines transfer application ordering.
–Example: 他 看 书。("he read book") parses as S → NP(N 他) VP(V 看 NP(N 书))
Transfer: a target-language tree is created by reordering, insertion, and deletion: S → NP(N he) VP(V read NP(DET a, N book)). The article "a" is inserted into the object NP; source words are translated with the transfer lexicon.
Generation: target-language constraints are checked and the final translation is produced, e.g. "reads" is chosen over "read" to agree with "he".
Final translation: "He reads a book"
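The three stages above can be sketched in miniature. This is a toy illustration of the analysis/transfer/generation pipeline, not the actual CMU engine: the parse, the lexicon entries, and the agreement check are hard-coded assumptions for the 他 看 书 example only.

```python
# Toy transfer lexicon: source word -> target lemma (assumed entries)
LEXICON = {"他": "he", "看": "read", "书": "book"}

def analyze(tokens):
    """Analysis: assign a grammatical structure (hard-coded toy parse)."""
    # S -> NP(N) VP(V NP(N))
    return ("S", ("NP", tokens[0]), ("VP", tokens[1], ("NP", tokens[2])))

def transfer(tree):
    """Transfer: translate words and insert the article 'a' into the object NP."""
    _, (_, subj), (_, verb, (_, obj)) = tree
    return [LEXICON[subj], LEXICON[verb], "a", LEXICON[obj]]

def generate(words):
    """Generation: enforce target constraints, e.g. 3sg subject-verb agreement."""
    if words[0] == "he" and words[1] == "read":
        words[1] = "reads"
    return " ".join(words).capitalize()

print(generate(transfer(analyze(["他", "看", "书"]))))  # He reads a book
```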

Transfer Rule Formalism
–Type information
–Part-of-speech/constituent information
–Alignments
–x-side constraints
–y-side constraints
–xy-constraints, e.g. ((Y1 AGR) = (X1 AGR))

; SL: the man, TL: der Mann
NP::NP [DET N] -> [DET N]
(
 (X1::Y1)
 (X2::Y2)
 ((X1 AGR) = *3-SING)
 ((X1 DEF) = *DEF)
 ((X2 AGR) = *3-SING)
 ((X2 COUNT) = +)
 ((Y1 AGR) = *3-SING)
 ((Y1 DEF) = *DEF)
 ((Y2 AGR) = *3-SING)
 ((Y2 GENDER) = (Y1 GENDER))
)
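As a rough illustration of how the x-side constraints of the rule above could be checked, here is a minimal sketch. The dictionary-based feature structures and the `x_side_ok` helper are assumptions for exposition, not the formalism's actual implementation.

```python
# x-side constraints of NP::NP [DET N] -> [DET N], as (index, feature, value)
RULE_X_CONSTRAINTS = [
    (1, "AGR", "*3-SING"),   # ((X1 AGR) = *3-SING)
    (1, "DEF", "*DEF"),      # ((X1 DEF) = *DEF)
    (2, "AGR", "*3-SING"),   # ((X2 AGR) = *3-SING)
    (2, "COUNT", "+"),       # ((X2 COUNT) = +)
]

def x_side_ok(x_feats, constraints):
    """Unify each (Xi FEAT) = VALUE constraint: fail on a clash,
    fill in the value when the feature is unspecified."""
    for i, feat, val in constraints:
        fs = x_feats[i - 1]
        if feat in fs and fs[feat] != val:
            return False
        fs[feat] = val
    return True

# "the man": a definite determiner and a singular count noun
det = {"DEF": "*DEF"}
noun = {"AGR": "*3-SING", "COUNT": "+"}
print(x_side_ok([det, noun], RULE_X_CONSTRAINTS))  # True
```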

Example Transfer Rule
;; PASSIVE OF SIMPLE PAST (NO AUX) WITH LIGHT VERB
;; passive of 43 (7b)
{VP,28}
VP::VP : [V V V] -> [Aux V]
(
 (X1::Y2)
 ((x1 form) = root)
 ((x2 type) =c light)
 ((x2 form) = part)
 ((x2 aspect) = perf)
 ((x3 lexwx) = 'jAnA')
 ((x3 form) = part)
 ((x3 aspect) = perf)
 (x0 = x1)
 ((y1 lex) = be)
 ((y1 tense) = past)
 ((y1 agr num) = (x3 agr num))
 ((y1 agr pers) = (x3 agr pers))
 ((y2 form) = part)
)

Rule Learning - Overview
Goal: acquire syntactic transfer rules
Use available knowledge from the source side (grammatical structure)
Three steps:
1. Flat Seed Generation: first guesses at transfer rules; no syntactic structure
2. Compositionality: use previously learned rules to add structure
3. Seeded Version Space Learning: refine rules by generalizing with validation (learn appropriate feature constraints)
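Step 1 above can be sketched as follows. The `flat_seed` helper and its inputs are hypothetical: it merely illustrates abstracting a word-aligned phrase pair to a flat, POS-level rule pattern with X::Y alignment links, dropping the lexical items.

```python
def flat_seed(src_words, tgt_words, alignment, src_pos, tgt_pos):
    """Flat seed generation (toy): abstract a word-aligned pair to POS
    sequences and keep the alignment as X::Y links; no syntactic structure."""
    links = " ".join("(X%d::Y%d)" % (s, t) for s, t in alignment)
    return "[%s] -> [%s] ( %s )" % (" ".join(src_pos),
                                    " ".join(tgt_pos), links)

# A word-aligned ADJ+N phrase pair, aligned 1-1 and 2-2 (illustrative input)
rule = flat_seed(["srcADJ", "srcN"], ["big", "house"],
                 [(1, 1), (2, 2)], ["ADJ", "N"], ["ADJ", "N"])
print(rule)  # [ADJ N] -> [ADJ N] ( (X1::Y1) (X2::Y2) )
```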

Examples of Learned Rules (I)
{NP,14244} ;;Score:
NP::NP [N] -> [DET N]
( (X1::Y2) )

{NP,14434} ;;Score:
NP::NP [ADJ CONJ ADJ N] -> [ADJ CONJ ADJ N]
( (X1::Y1) (X2::Y2) (X3::Y3) (X4::Y4) )

{PP,4894} ;;Score:
PP::PP [NP POSTP] -> [PREP NP]
( (X2::Y1) (X1::Y2) )

Examples of Learned Rules (II)
;; OF DEQUINDRE AND 14 MILE ROAD EAST
PP::PP [N CONJ NUM N N N POSTP] -> [PREP N CONJ NUM N N N]
( (X7::Y1) (X1::Y2) (X2::Y3) (X3::Y4) (X4::Y5) (X5::Y6) (X6::Y7) )

NP::NP [ADJ N] -> [ADJ N]
(
 (X1::Y1)
 (X2::Y2)
 ((X1 NUM) = (Y2 NUM))
 ((X2 CASE) = (X1 CASE))
 ((X2 GEN) = (X1 GEN))
 ((X2 NUM) = (X1 NUM))
)

NP::NP [N N] -> [N N]
(
 (X1::Y1)
 (X2::Y2)
 ((Y2 NUM) = P)
)

Basic XFER System for Hindi
Three passes:
–Pass 1: match against phrase-to-phrase entries (full forms, no morphology)
–Pass 2: morphologically analyze input words and match against the lexicon; matches are allowed to feed into higher-level transfer grammar rules
–Pass 3: match the original word against the lexicon; provides only word-to-word translation, no feeding into grammar rules
"Weak" decoding: greedy left-to-right search that prefers longer input segments
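The "weak" decoding described above can be sketched as a greedy longest-match scan. The phrase-table entries (romanized toy Hindi) are assumptions for illustration; the point is the search strategy, not the data.

```python
PHRASES = {  # source segment (tuple of words) -> translation (toy entries)
    ("ghar",): "house",
    ("mere", "ghar"): "my house",
    ("mere", "ghar", "mem"): "in my house",
}

def weak_decode(words, phrases, max_len=4):
    """Greedy left-to-right search preferring the longest matching segment."""
    out, i = [], 0
    while i < len(words):
        for n in range(min(max_len, len(words) - i), 0, -1):
            seg = tuple(words[i:i + n])
            if seg in phrases:
                out.append(phrases[seg])
                i += n
                break
        else:  # no match at any length: pass the word through untranslated
            out.append(words[i])
            i += 1
    return " ".join(out)

print(weak_decode(["mere", "ghar", "mem"], PHRASES))  # in my house
```

Because the search is greedy, it commits to the longest segment at each step and never revisits a choice; this is exactly the limitation the "strong" decoder later removes.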

Manual Grammar Development
The manual grammar was developed only late in the SLE exercise, after morphology and lexical-resource issues were resolved
Covers mostly NPs, PPs and VPs (verb complexes)
~70 grammar rules, covering basic and recursive NPs and PPs, and verb complexes of the main tenses in Hindi

Manual Transfer Rules: Example
;; PASSIVE OF SIMPLE PAST (NO AUX) WITH LIGHT VERB
;; passive of 43 (7b)
{VP,28}
VP::VP : [V V V] -> [Aux V]
(
 (X1::Y2)
 ((x1 form) = root)
 ((x2 type) =c light)
 ((x2 form) = part)
 ((x2 aspect) = perf)
 ((x3 lexwx) = 'jAnA')
 ((x3 form) = part)
 ((x3 aspect) = perf)
 (x0 = x1)
 ((y1 lex) = be)
 ((y1 tense) = past)
 ((y1 agr num) = (x3 agr num))
 ((y1 agr pers) = (x3 agr pers))
 ((y2 form) = part)
)

Manual Transfer Rules: Example
; NP1 ke NP2 -> NP2 of NP1
; Example: jIvana ke eka aXyAya
; life of (one) chapter ==> a chapter of life
;
{NP,12}
NP::NP : [PP NP1] -> [NP1 PP]
(
 (X1::Y2)
 (X2::Y1)
 ; ((x2 lexwx) = 'kA')
)

{NP,13}
NP::NP : [NP1] -> [NP1]
( (X1::Y1) )

{PP,12}
PP::PP : [NP Postp] -> [Prep NP]
(
 (X1::Y2)
 (X2::Y1)
)
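A toy application of the {NP,12} pattern above: "NP1 ke NP2" reorders to "NP2 of NP1". The helper and the already-translated English constituents are illustrative assumptions; the real rule operates on constituent structures, not strings.

```python
def apply_np12(np1_en, np2_en):
    """[PP NP1] -> [NP1 PP] (toy): swap the constituents; 'ke' surfaces as 'of'."""
    return "%s of %s" % (np2_en, np1_en)

# jIvana ke eka aXyAya ==> a chapter of life
print(apply_np12("life", "a chapter"))  # a chapter of life
```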

Adding a "Strong" Decoder
The XFER system produces a full lattice
Edges are scored using word-to-word translation probabilities, trained from the limited bilingual data
The decoder uses an English LM (70M words)
The decoder can also reorder words or phrases (up to 4 positions ahead)
For XFER (strong), ONLY edges from the basic XFER system are used!
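The scoring idea can be sketched with a tiny monotone lattice search: each edge carries a translation log-probability, and a bigram LM re-scores the path. The lattice, probabilities, and LM values are toy assumptions, and the sketch omits the reordering the real decoder performs; note how the LM prefers "reads" even though the translation model scores "read" higher.

```python
import math

# lattice[i] = list of (end_node, target_phrase, translation_logprob) edges
LATTICE = {
    0: [(1, "he", math.log(0.9))],
    1: [(2, "read", math.log(0.5)), (2, "reads", math.log(0.4))],
    2: [(3, "a book", math.log(0.8)), (3, "book", math.log(0.2))],
}

BIGRAM_LM = {  # log P(w2 | w1), toy values
    ("<s>", "he"): math.log(0.2),
    ("he", "read"): math.log(0.1),
    ("he", "reads"): math.log(0.3),
    ("read", "a"): math.log(0.2),
    ("reads", "a"): math.log(0.2),
    ("read", "book"): math.log(0.05),
    ("reads", "book"): math.log(0.05),
    ("a", "book"): math.log(0.4),
}

def lm_score(prev, phrase):
    """Bigram LM log-score of a phrase given the previous word."""
    score = 0.0
    for w in phrase.split():
        score += BIGRAM_LM.get((prev, w), math.log(1e-4))
        prev = w
    return score, prev

def decode(lattice, n_positions):
    """Dynamic-programming search for the best TM+LM scoring lattice path."""
    # best[i] = (logprob, last word, output words) of the best path to node i
    best = {0: (0.0, "<s>", [])}
    for i in range(n_positions):  # edges only go forward, so best[i] is final
        if i not in best or i not in lattice:
            continue
        logp, prev, words = best[i]
        for j, phrase, tlogp in lattice[i]:
            lscore, last = lm_score(prev, phrase)
            cand = (logp + tlogp + lscore, last, words + phrase.split())
            if j not in best or cand[0] > best[j][0]:
                best[j] = cand
    return " ".join(best[n_positions][2])

print(decode(LATTICE, 3))  # he reads a book
```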

Testing Conditions
Tested on a section of the JHU-provided data: 258 sentences with four reference translations
–SMT system (stand-alone)
–EBMT system (stand-alone)
–XFER system (naïve decoding)
–XFER system with "strong" decoder:
 No grammar rules (baseline)
 Manually developed grammar rules
 Automatically learned grammar rules
–XFER+SMT with strong decoder (MEMT)

Results on JHU Test Set
Systems compared on BLEU, M-BLEU, and NIST:
–EBMT
–SMT
–XFER (naïve), manual grammar
–XFER (strong), no grammar
–XFER (strong), learned grammar
–XFER (strong), manual grammar
–XFER+SMT

Effect of Reordering in the Decoder

Observations and Lessons (I)
XFER with the strong decoder outperformed SMT even without any grammar rules
–SMT was trained on elicited phrases that are very short
–SMT has insufficient data to train more discriminative translation probabilities
–XFER takes advantage of morphology
 Token coverage without morphology:
 Token coverage with morphology:
The manual grammar is currently quite a bit better than the automatically learned grammar
–Learned rules did not use version-space learning
–Large room for improvement in rule learning
–Importance of effective, well-founded scoring of learned rules

Observations and Lessons (II)
A strong decoder for the XFER system is essential, even with extremely limited data
The XFER system with a manual or automatically learned grammar outperforms SMT and EBMT in the extremely limited data scenario
–Where is the cross-over point?
MEMT based on the strong decoder produced the best results in this scenario
Reordering within the decoder provided very significant score improvements
–Much room for more sophisticated grammar rules
–The strong decoder can carry some of the reordering "burden"
Conclusion: transfer rules (both manual and learned) offer significant contributions that can complement existing data-driven approaches
–Also in medium- and large-data settings?

Conclusions
Initial steps toward development of a statistically grounded transfer-based MT system with:
–Rules that are scored based on a well-founded probability model
–Strong and effective decoding that incorporates the most advanced techniques used in SMT decoding
Working from the "opposite" end of research on incorporating models of syntax into "standard" SMT systems [Knight et al.]
Our direction makes sense in the limited-data scenario

Future Directions
Significant work on automatic rule learning (especially Seeded Version Space Learning)
Improved leveraging of manual grammar resources and interaction with bilingual speakers
Developing a well-founded model for assigning scores (probabilities) to transfer rules
Improving the strong decoder to better fit the specific characteristics of the XFER model
Improved MEMT:
–Combination of output from different translation engines with different scorings
–Strong decoding capabilities

Debug Output with Sources
praXAnamaMwrI atalajI, rAjyapAla SrI BAI mahAvIra va muKyamaMwrI SrI xigvijayasiMha sahiwa aneka newAoM ne Soka vyakwa kiyA hE |
gyAwavya ho ki jile ke cAroM kRewroM meM mawaxAna wIna aktUbara ko honA hE |

Main CMU Contributions to SLE Shared Resources
OFFICIAL CREDIT ON SLE WEBSITE, "PROCESSED RESOURCES":
–CMU Phrase Lexicon: Joyphrase.gz (Ying Zhang, 3.5 MB)
–Cleaned IBM lexicon: ibmlex-cleaned.txt.gz (Ralf Brown, 1.5 MB)
–CMU Aligned Sentences: CMU-aligned-sentences.tar.gz (Lori Levin, 1.3 MB)
–Indian Government Parallel Text: ERDC.tgz (Raj Reddy and Alon Lavie, 338 MB)
–CMU Phrases and Sentences: CMU-phrases+sentences.zip (Lori Levin, 468 KB)
–Bilingual Named Entity List: IndiaTodayLPNETranslists.tar.gz (Fei Huang, 54 KB)
OFFICIAL CREDIT ON SLE WEBSITE, "FOUND RESOURCES":
–Osho

Other CMU Contributions to SLE Shared Resources
FOUND RESOURCES, BUT NO CREDIT: [From TidesSLList Archive website]
Vogel 6/2:
–Hindi Language Resources:
–General information on Hindi script:
–Dictionaries at:
–English-to-Hindi dictionary in different formats:
–A small English-to-Urdu dictionary:
–The Bible at:
–The EMILLE Project:
–[Hardcopy phrasebook references]
–A Monthly Newsletter of Vigyan Prasar
–Morphological Analyser:

Other CMU Contributions to SLE Shared Resources
FOUND RESOURCES, BUT NO CREDIT (cont.): [From TidesSLList Archive website]
Tribble, via Vogel 6/2, possible parallel websites:
– (English)
– (Hindi)
–
–
– (English)
– (Hindi)
–
–
Vogel 6/2:
–
– [Already listed]
–
–
–
–
–The Gita Supersite
–Press Information Bureau, Government of India
 English:
 Hindi:

Other CMU Contributions to SLE Shared Resources
FOUND RESOURCES, BUT NO CREDIT (cont.): [From TidesSLList Archive website]
6/20, parallel Hindi/English webpages:
–GAIL (Natural Gas Co.), UTF-8 [found by the CMU undergrad Web team; Mike Maxwell, LDC, found it at the same time]
SHARED PROCESSED RESOURCES NOT ON LDC WEBSITE: [From TidesSLList Archive website]
Frederking 6/3 [announced], 6/4 [provided]:
–Ralf Brown's idenc encoding classifier
Frederking 6/5:
–PDF extractions from LanguageWeaver URLs
Frederking 6/5:
–Richard Wang's Perl ident.pl encoding classifier and ISCII-UTF8.pl converter
Frederking 6/11:
–Erik Peterson's Perl wrapper for the IIIT Morphology package, so that the input can be UTF-8

Other CMU Contributions to SLE Shared Resources
SHARED PROCESSED RESOURCES NOT ON LDC WEBSITE (cont.): [From TidesSLList Archive website]
Levin 6/13:
–Directory of elicited word-aligned English-Hindi translated phrases
Frederking 6/20:
–Undecoded but believed-to-be-parallel webpages
–PDF extractions from same
Frederking 6/24:
–Several individual parallel webpages; sites may have more:
 mohfw.nic.in/kk/95/books1.htm
 mohfw.nic.in/oph.htm
 wwww.mp.nic.in