Stat-XFER: A General Framework for Search-based Syntax-driven MT Alon Lavie Language Technologies Institute Carnegie Mellon University Joint work with: Greg Hanneman, Vamshi Ambati, Alok Parlikar, Edmund Huber, Jonathan Clark, Erik Peterson, Christian Monson, Abhaya Agarwal, Kathrin Probst, Ari Font Llitjos, Lori Levin, Jaime Carbonell, Bob Frederking, Stephan Vogel

August 21, 2008IC-2008: Stat-XFER2 Outline Context and Rationale CMU Statistical Transfer MT Framework Broad Resource Scenarios: Chinese-to-English Low Resource Scenarios: Hebrew-to-English Open Research Challenges Conclusions

August 21, 2008IC-2008: Stat-XFER3 Rule-based vs. Statistical MT Traditional Rule-based MT: –Expressive and linguistically-rich formalisms capable of describing complex mappings between the two languages –Accurate “clean” resources –Everything constructed manually by experts –Main challenge: obtaining broad coverage Phrase-based Statistical MT: –Learn word and phrase correspondences automatically from large volumes of parallel data –Search-based “decoding” framework: Models propose many alternative translations Effective search algorithms find the “best” translation –Main challenge: obtaining high translation accuracy

Research Goals Long-term research agenda (since 2000) focused on developing a unified framework for MT that addresses the core fundamental weaknesses of previous approaches: –Representation – explore richer formalisms that can capture complex divergences between languages –Ability to handle morphologically complex languages –Methods for automatically acquiring MT resources from available data and combining them with manual resources –Ability to address both rich and poor resource scenarios Main research funding sources: NSF (AVENUE and LETRAS projects) and DARPA (GALE) August 21, 20084IC-2008: Stat-XFER

August 21, 2008IC-2008: Stat-XFER5 CMU Statistical Transfer (Stat-XFER) MT Approach Integrate the major strengths of rule-based and statistical MT within a common framework: –Linguistically rich formalism that can express complex and abstract compositional transfer rules –Rules can be written by human experts and also acquired automatically from data –Easy integration of morphological analyzers and generators –Word and syntactic-phrase correspondences can be automatically acquired from parallel text –Search-based decoding from statistical MT adapted to find the best translation within the search space: multi-feature scoring, beam-search, parameter optimization, etc. –Framework suitable for both resource-rich and resource- poor language scenarios

August 21, 2008IC-2008: Stat-XFER6 Stat-XFER Main Principles Framework: Statistical search-based approach with syntactic translation transfer rules that can be acquired from data but also developed and extended by experts Automatic Word and Phrase translation lexicon acquisition from parallel data Transfer-rule Learning: apply ML-based methods to automatically acquire syntactic transfer rules for translation between the two languages Elicitation: use bilingual native informants to produce a small high-quality word-aligned bilingual corpus of translated phrases and sentences Rule Refinement: refine the acquired rules via a process of interaction with bilingual informants XFER + Decoder: –XFER engine produces a lattice of possible transferred structures at all levels –Decoder searches and selects the best scoring combination

August 21, 2008IC-2008: Stat-XFER7 Stat-XFER MT Approach [MT pyramid diagram] From Source (e.g. Quechua) to Target (e.g. English): Direct approaches (SMT, EBMT) at the base, Transfer Rules (Statistical-XFER) at the intermediate level, Interlingua at the top; analysis side: Syntactic Parsing, Semantic Analysis; generation side: Sentence Planning, Text Generation

Stat-XFER Framework [architecture diagram] Source Input → Preprocessing → Morphology → Transfer Engine (uses Transfer Rules and Bilingual Lexicon) → Translation Lattice → Second-Stage Decoder (uses Language Model and Weighted Features) → Target Output August 21, 20088IC-2008: Stat-XFER
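A toy, self-contained Python sketch of how these stages compose at runtime; all function names and the tiny lexicon below are illustrative stand-ins, not the actual Stat-XFER modules (the real transfer engine and decoder are described on later slides):

def preprocess(text):
    return text.split()

def morphology(tokens):
    # one trivial analysis per token: (start, end, lemma); the real analyzer may return a lattice
    return [(i, i + 1, tok) for i, tok in enumerate(tokens)]

def transfer_engine(analyses, lexicon):
    # look up each analysis in a bilingual lexicon, producing lattice arcs (start, end, translation)
    return [(s, e, lexicon.get(lemma, lemma)) for s, e, lemma in analyses]

def decoder(lattice, n):
    # trivially monotone: take the arc starting at each position, left to right
    by_start = {arc[0]: arc for arc in lattice}
    return " ".join(by_start[i][2] for i in range(n) if i in by_start)

lexicon = {"B": "in", "H": "the", "BIT": "house"}   # toy Hebrew-to-English entries
tokens = preprocess("B H BIT")
print(decoder(transfer_engine(morphology(tokens), lexicon), len(tokens)))   # -> in the house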

Transfer Engine Language Model + Additional Features Transfer Rules {NP1,3} NP1::NP1 [NP1 "H" ADJ] -> [ADJ NP1] ((X3::Y1) (X1::Y2) ((X1 def) = +) ((X1 status) =c absolute) ((X1 num) = (X3 num)) ((X1 gen) = (X3 gen)) (X0 = X1)) Translation Lexicon N::N |: ["$WR"] -> ["BULL"] ((X1::Y1) ((X0 NUM) = s) ((Y0 lex) = "BULL")) N::N |: ["$WRH"] -> ["LINE"] ((X1::Y1) ((X0 NUM) = s) ((Y0 lex) = "LINE")) Source Input בשורה הבאה Decoder English Output in the next line Translation Output Lattice (0 1 (1 1 (2 2 (1 2 "THE (0 2 "IN (0 4 "IN THE NEXT Preprocessing Morphology

August 21, 2008IC-2008: Stat-XFER10 Transfer Rule Formalism Type information Part-of-speech/constituent information Alignments x-side constraints y-side constraints xy-constraints, e.g. ((Y1 AGR) = (X1 AGR)) ; SL: the old man, TL: ha-ish ha-zaqen NP::NP [DET ADJ N] -> [DET N DET ADJ] ( (X1::Y1) (X1::Y3) (X2::Y4) (X3::Y2) ((X1 AGR) = *3-SING) ((X1 DEF) = *DEF) ((X3 AGR) = *3-SING) ((X3 COUNT) = +) ((Y1 DEF) = *DEF) ((Y3 DEF) = *DEF) ((Y2 AGR) = *3-SING) ((Y2 GENDER) = (Y4 GENDER)) )

August 21, 2008IC-2008: Stat-XFER11 Transfer Rule Formalism Value constraints Agreement constraints ;SL: the old man, TL: ha-ish ha-zaqen NP::NP [DET ADJ N] -> [DET N DET ADJ] ( (X1::Y1) (X1::Y3) (X2::Y4) (X3::Y2) ((X1 AGR) = *3-SING) ((X1 DEF) = *DEF) ((X3 AGR) = *3-SING) ((X3 COUNT) = +) ((Y1 DEF) = *DEF) ((Y3 DEF) = *DEF) ((Y2 AGR) = *3-SING) ((Y2 GENDER) = (Y4 GENDER)) )
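The sketch below shows one way such a rule could be represented and applied in Python; the dictionary layout and the simplified treatment of unification (plain value checks and feature copying, without the = versus =c distinction) are assumptions made for illustration, not the engine's actual data structures:

RULE = {
    "type": ("NP", "NP"),                       # source type :: target type
    "src": ["DET", "ADJ", "N"],                 # x-side constituent sequence
    "tgt": ["DET", "N", "DET", "ADJ"],          # y-side constituent sequence
    "align": [(1, 1), (1, 3), (2, 4), (3, 2)],  # (x position, y position), 1-based
    "x_constraints": [((1, "AGR"), "*3-SING"), ((1, "DEF"), "*DEF"),
                      ((3, "AGR"), "*3-SING"), ((3, "COUNT"), "+")],
    "y_constraints": [((1, "DEF"), "*DEF"), ((3, "DEF"), "*DEF"), ((2, "AGR"), "*3-SING")],
    "yy_agreement": [((2, "GENDER"), (4, "GENDER"))],   # ((Y2 GENDER) = (Y4 GENDER))
}

def x_side_ok(rule, x_feats):
    # value constraints checked against the matched source constituents
    return all(x_feats.get(i, {}).get(feat) == val for (i, feat), val in rule["x_constraints"])

def project(rule, x_feats):
    # copy features across the alignments, then impose y-side value and agreement constraints
    y_feats = {i + 1: {} for i in range(len(rule["tgt"]))}
    for xi, yi in rule["align"]:
        y_feats[yi].update(x_feats.get(xi, {}))
    for (i, feat), val in rule["y_constraints"]:
        y_feats[i][feat] = val
    for (yi, fi), (yj, fj) in rule["yy_agreement"]:
        y_feats[yi][fi] = y_feats[yj].get(fj)
    return y_feats

x = {1: {"AGR": "*3-SING", "DEF": "*DEF"}, 2: {"GENDER": "M"},
     3: {"AGR": "*3-SING", "COUNT": "+", "GENDER": "M"}}
if x_side_ok(RULE, x):
    print(project(RULE, x))   # target-side feature structures for DET N DET ADJ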

August 21, 2008IC-2008: Stat-XFER12 Translation Lexicon: Examples PRO::PRO |: ["ANI"] -> ["I"] ( (X1::Y1) ((X0 per) = 1) ((X0 num) = s) ((X0 case) = nom) ) PRO::PRO |: ["ATH"] -> ["you"] ( (X1::Y1) ((X0 per) = 2) ((X0 num) = s) ((X0 gen) = m) ((X0 case) = nom) ) N::N |: ["$&H"] -> ["HOUR"] ( (X1::Y1) ((X0 NUM) = s) ((Y0 NUM) = s) ((Y0 lex) = "HOUR") ) N::N |: ["$&H"] -> ["hours"] ( (X1::Y1) ((Y0 NUM) = p) ((X0 NUM) = p) ((Y0 lex) = "HOUR") )

August 21, 2008IC-2008: Stat-XFER13 Hebrew Transfer Grammar Example Rules {NP1,2} ;;SL: $MLH ADWMH ;;TL: A RED DRESS NP1::NP1 [NP1 ADJ] -> [ADJ NP1] ( (X2::Y1) (X1::Y2) ((X1 def) = -) ((X1 status) =c absolute) ((X1 num) = (X2 num)) ((X1 gen) = (X2 gen)) (X0 = X1) ) {NP1,3} ;;SL: H $MLWT H ADWMWT ;;TL: THE RED DRESSES NP1::NP1 [NP1 "H" ADJ] -> [ADJ NP1] ( (X3::Y1) (X1::Y2) ((X1 def) = +) ((X1 status) =c absolute) ((X1 num) = (X3 num)) ((X1 gen) = (X3 gen)) (X0 = X1) )

August 21, 2008IC-2008: Stat-XFER14 The Transfer Engine Input: source-language input sentence, or source- language confusion network Output: lattice representing collection of translation fragments at all levels supported by transfer rules Basic Algorithm: “bottom-up” integrated “parsing- transfer-generation” guided by the transfer rules –Start with translations of individual words and phrases from translation lexicon –Create translations of larger constituents by applying applicable transfer rules to previously created lattice entries –Beam-search controls the exponential combinatorics of the search-space, using multiple scoring features
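A toy, self-contained sketch of this bottom-up process, under strong simplifying assumptions (flat reordering rules only, no unification constraints, no beam pruning, brute-force search over arc combinations); the rule is a Hebrew-to-English analogue of the NP::NP rule shown earlier ("ha-ish ha-zaqen" becoming "the old man"):

from itertools import product

# lexical arcs from the translation lexicon: (start, end, source-side type, translated words)
lexical_arcs = [(0, 1, "DET", "the"), (1, 2, "N", "man"),
                (2, 3, "DET", "the"), (3, 4, "ADJ", "old")]

# flat rules: (result type, source type sequence, target order as indices into that sequence)
rules = [("NP", ["DET", "N", "DET", "ADJ"], [0, 3, 1])]

def grow(arcs, rules):
    lattice = list(arcs)
    added = True
    while added:                       # keep applying rules until no new arcs appear
        added = False
        for result_type, seq, order in rules:
            for combo in product(lattice, repeat=len(seq)):
                if [a[2] for a in combo] != seq:
                    continue           # constituent types must match the rule's sequence
                if any(combo[i][1] != combo[i + 1][0] for i in range(len(combo) - 1)):
                    continue           # arcs must be adjacent in the source
                new_arc = (combo[0][0], combo[-1][1], result_type,
                           " ".join(combo[i][3] for i in order))
                if new_arc not in lattice:
                    lattice.append(new_arc)
                    added = True
    return lattice

for arc in grow(lexical_arcs, rules):
    print(arc)                         # word arcs plus (0, 4, 'NP', 'the old man')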

August 21, 2008IC-2008: Stat-XFER15 The Transfer Engine Some Unique Features: –Works with either learned or manually-developed transfer grammars –Handles rules with or without unification constraints –Supports interfacing with servers for morphological analysis and generation –Can handle ambiguous source-word analyses and/or SL segmentations represented in the form of lattice structures

August 21, 2008IC-2008: Stat-XFER16 XFER Output Lattice (28 28 "AND" "W" "(CONJ,0 'AND')") (29 29 "SINCE" "MAZ " "(ADVP,0 (ADV,5 'SINCE')) ") (29 29 "SINCE THEN" "MAZ " "(ADVP,0 (ADV,6 'SINCE THEN')) ") (29 29 "EVER SINCE" "MAZ " "(ADVP,0 (ADV,4 'EVER SINCE')) ") (30 30 "WORKED" "&BD " "(VERB,0 (V,11 'WORKED')) ") (30 30 "FUNCTIONED" "&BD " "(VERB,0 (V,10 'FUNCTIONED')) ") (30 30 "WORSHIPPED" "&BD " "(VERB,0 (V,12 'WORSHIPPED')) ") (30 30 "SERVED" "&BD " "(VERB,0 (V,14 'SERVED')) ") (30 30 "SLAVE" "&BD " "(NP0,0 (N,34 'SLAVE')) ") (30 30 "BONDSMAN" "&BD " "(NP0,0 (N,36 'BONDSMAN')) ") (30 30 "A SLAVE" "&BD " "(NP,1 (LITERAL 'A') (NP2,0 (NP1,0 (NP0,0 (N,34 'SLAVE')) ) ) ) ") (30 30 "A BONDSMAN" "&BD " "(NP,1 (LITERAL 'A') (NP2,0 (NP1,0 (NP0,0 (N,36 'BONDSMAN')) ) ) ) ")

August 21, 2008IC-2008: Stat-XFER17 The Lattice Decoder Simple Stack Decoder, similar in principle to simple Statistical MT decoders Searches for best-scoring path of non-overlapping lattice arcs No reordering during decoding Scoring based on log-linear combination of scoring features, with weights trained using Minimum Error Rate Training (MERT) Scoring components: –Statistical Language Model –Bi-directional MLE phrase and rule scores –Lexical Probabilities –Fragmentation: how many arcs to cover the entire translation? –Length Penalty: how far from expected target length?
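A minimal sketch of the decoding step as described above (assumed simplifications: a toy feature set without a language model, and monotone dynamic programming over source positions rather than a stack decoder):

# lattice arcs: (start, end, target string, {feature: value})
arcs = [(0, 2, "in the line", {"logprob": -1.2}),
        (0, 1, "in",          {"logprob": -0.3}),
        (1, 2, "the line",    {"logprob": -0.9}),
        (2, 4, "the next",    {"logprob": -0.8}),
        (2, 3, "the",         {"logprob": -0.2}),
        (3, 4, "next",        {"logprob": -0.6})]

weights = {"logprob": 1.0, "frag": -0.5}   # log-linear weights; the real system tunes these with MERT

def arc_score(arc):
    # model score plus a fragmentation penalty for every arc used
    return weights["logprob"] * arc[3]["logprob"] + weights["frag"]

def decode(arcs, n):
    best = {0: (0.0, [])}   # source position -> (score, best sequence of arcs reaching it)
    for pos in range(n):
        if pos not in best:
            continue
        score, path = best[pos]
        for arc in arcs:
            if arc[0] != pos:
                continue
            candidate = (score + arc_score(arc), path + [arc])
            if arc[1] not in best or candidate[0] > best[arc[1]][0]:
                best[arc[1]] = candidate
    return " ".join(a[2] for a in best[n][1]), best[n][0]

print(decode(arcs, 4))   # -> ('in the line the next', -3.0)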

August 21, 2008IC-2008: Stat-XFER18 XFER Lattice Decoder 0 0 ON THE FOURTH DAY THE LION ATE THE RABBIT TO A MORNING MEAL Overall: , Prob: , Rules: 0, Frag: , Length: 0, Words: 13, < : B H IWM RBI&I (PP,0 (PREP,3 'ON')(NP,2 (LITERAL 'THE') (NP2,0 (NP1,1 (ADJ,2 (QUANT,0 'FOURTH'))(NP1,0 (NP0,1 (N,6 'DAY')))))))> 918 < : H ARIH AKL AT H $PN (S,2 (NP,2 (LITERAL 'THE') (NP2,0 (NP1,0 (NP0,1 (N,17 'LION')))))(VERB,0 (V,0 'ATE'))(NP,100 (NP,2 (LITERAL 'THE') (NP2,0 (NP1,0 (NP0,1 (N,24 'RABBIT')))))))> 584 < : L ARWXH BWQR (PP,0 (PREP,6 'TO')(NP,1 (LITERAL 'A') (NP2,0 (NP1,0 (NNP,3 (NP0,0 (N,32 'MORNING'))(NP0,0 (N,27 'MEAL')))))))>

August 21, 2008IC-2008: Stat-XFER19 Stat-XFER MT Systems General Stat-XFER framework under development for past seven years Systems so far: –Chinese-to-English –French-to-English –Hebrew-to-English –Urdu-to-English –German-to-English –Hindi-to-English –Dutch-to-English –Mapudungun-to-Spanish In progress or planned: –Arabic-to-English –Brazilian Portuguese-to-English –Native-Brazilian languages to Brazilian Portuguese –Hebrew-to-Arabic –Quechua-to-Spanish –Turkish-to-English

MT Resource Acquisition in Resource-rich Scenarios Scenario: Significant amounts of parallel-text at sentence-level are available –Parallel sentences can be word-aligned and parsed (at least on one side, ideally on both sides) Goal: Acquire both broad-coverage translation lexicons and transfer rule grammars automatically from the data Syntax-based translation lexicons: –Broad-coverage constituent-level translation equivalents at all levels of granularity –Can serve as the elementary building blocks for transfer trees constructed at runtime using the transfer rules August 21, IC-2008: Stat-XFER

Syntax-driven Resource Acquisition Process Automatic Process for Extracting Syntax-driven Rules and Lexicons from sentence-parallel data: 1.Word-align the parallel corpus (GIZA++) 2.Parse the sentences independently for both languages 3.Run our new PFA Constituent Aligner over the parsed sentence pairs 4.Extract all aligned constituents from the parallel trees 5.Extract all derived synchronous transfer rules from the constituent-aligned parallel trees 6.Construct a “data-base” of all extracted parallel constituents and synchronous rules with their frequencies and model them statistically (assign them relative-likelihood probabilities) August 21, IC-2008: Stat-XFER

PFA Constituent Node Aligner Input: a bilingual pair of parsed and word-aligned sentences Goal: find all sub-sentential constituent alignments between the two trees which are translation equivalents of each other Equivalence Constraint: a pair of constituents (S, T) is considered a translation equivalent if: –All words in the yield of S are aligned only to words in the yield of T (and vice-versa) –If S has a sub-constituent S' that is aligned to a constituent T', then T' must be a sub-constituent of T (and vice-versa) Algorithm is a bottom-up process starting from word-level, marking nodes that satisfy the constraints August 21, IC-2008: Stat-XFER
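A simplified, self-contained sketch of the yield test (one reading of the constraint above, not the actual PFA aligner; the sub-constituent condition is omitted, and unaligned words simply impose no restriction):

# word alignment as a set of (source index, target index) links
alignment = {(0, 0), (1, 1), (2, 4), (3, 3), (4, 2)}

def yields_equivalent(src_span, tgt_span, alignment):
    # src_span and tgt_span are (start, end) token ranges over the two sentences
    s0, s1 = src_span
    t0, t1 = tgt_span
    for si, ti in alignment:
        inside_src = s0 <= si < s1
        inside_tgt = t0 <= ti < t1
        if inside_src != inside_tgt:   # an alignment link crosses the constituent boundary
            return False
    return True

# toy constituent yields (token spans) taken from the two parse trees
source_nodes = {"VP_src": (0, 2), "NP_src": (2, 5)}
target_nodes = {"VP_tgt": (0, 2), "NP_tgt": (2, 5)}

for s_label, s_span in source_nodes.items():
    for t_label, t_span in target_nodes.items():
        if yields_equivalent(s_span, t_span, alignment):
            print(s_label, "<->", t_label)   # VP_src <-> VP_tgt and NP_src <-> NP_tgt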

PFA Node Alignment Algorithm Example Words don’t have to align one-to-one Constituent labels can be different in each language Tree Structures can be highly divergent August 21, IC-2008: Stat-XFER

PFA Node Alignment Algorithm Example Aligner uses a clever arithmetic manipulation to enforce equivalence constraints Resulting aligned nodes are highlighted in figure August 21, IC-2008: Stat-XFER

PFA Node Alignment Algorithm Example Extraction of Phrases: Get the Yields of the aligned nodes and add them to a phrase table tagged with syntactic categories on both source and target sides Example: NP # NP :: 澳洲 # Australia August 21, IC-2008: Stat-XFER
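A small sketch of that extraction step using the node pairs from this example; the data layout (token spans plus a list of aligned node pairs) is assumed for illustration:

source_tokens = ["澳洲", "是", "与", "北韩", "有", "邦交", "的", "少数", "国家", "之一", "。"]
target_tokens = ["Australia", "is", "one", "of", "the", "few", "countries",
                 "that", "have", "diplomatic", "relations", "with", "North", "Korea", "."]

# aligned node pairs: (source category, (start, end), target category, (start, end))
aligned_nodes = [("NP", (0, 1), "NP", (0, 1)),      # 澳洲 <-> Australia
                 ("NP", (5, 6), "NP", (9, 11)),     # 邦交 <-> diplomatic relations
                 ("NP", (3, 4), "NP", (12, 14))]    # 北韩 <-> North Korea

def extract_phrases(aligned_nodes, src, tgt):
    table = []
    for s_cat, (s0, s1), t_cat, (t0, t1) in aligned_nodes:
        table.append("%s # %s :: %s # %s" % (s_cat, t_cat,
                                             " ".join(src[s0:s1]), " ".join(tgt[t0:t1])))
    return table

for entry in extract_phrases(aligned_nodes, source_tokens, target_tokens):
    print(entry)   # e.g. NP # NP :: 澳洲 # Australia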

PFA Node Alignment Algorithm Example All Phrases from this tree pair: 1.IP # S :: 澳洲 是 与 北韩 有 邦交 的 少数 国家 之一 。 # Australia is one of the few countries that have diplomatic relations with North Korea. 2.VP # VP :: 是 与 北韩 有 邦交 的 少数 国家 之一 # is one of the few countries that have diplomatic relations with North Korea 3.NP # NP :: 与 北韩 有 邦交 的 少数 国家 之一 # one of the few countries that have diplomatic relations with North Korea 4.VP # VP :: 与 北韩 有 邦交 # have diplomatic relations with North Korea 5.NP # NP :: 邦交 # diplomatic relations 6.NP # NP :: 北韩 # North Korea 7.NP # NP :: 澳洲 # Australia August 21, IC-2008: Stat-XFER

Recent Improvements Tree-to-Tree (T2T) method is high precision but suffers from low recall Alternative: Tree-to-String (T2S) method uses trees on ONE side and projects the nodes based on word alignments –High recall, but lower precision Recent work by Vamshi Ambati: combine both methods (T2T*) by seeding with the T2T correspondences and then adding in projected nodes from the T2S method –Can be viewed as restructuring target tree to be maximally isomorphic to source tree –Produces richer and more accurate syntactic phrase tables that improve translation quality (versus T2T and T2S) August 21, IC-2008: Stat-XFER
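A rough sketch of one way the T2T* seeding idea could work (an assumed reading: keep all high-precision T2T node alignments, and add T2S-projected alignments only for source nodes that T2T left uncovered; the node identifiers below are hypothetical):

t2t_alignments = {("NP_src_3", "NP_tgt_5"), ("VP_src_1", "VP_tgt_2")}       # high-precision seed
t2s_alignments = {("NP_src_3", "NP_tgt_6"), ("PP_src_4", "SPAN_tgt_7_9")}   # projected from word alignments

def combine_t2t_star(t2t, t2s):
    covered_src = {s for s, _ in t2t}
    combined = set(t2t)
    for s, t in t2s:
        if s not in covered_src:      # defer to the T2T decision wherever both methods fire
            combined.add((s, t))
            covered_src.add(s)
    return combined

print(sorted(combine_t2t_star(t2t_alignments, t2s_alignments)))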

Transfer Rule Learning Input: Constituent-aligned parallel trees Idea: Aligned nodes act as possible decomposition points of the parallel trees –The sub-trees of any aligned pair of nodes can be broken apart at any lower-level aligned nodes, creating an inventory of “treelet” correspondences –Synchronous “treelets” can be converted into synchronous rules Algorithm: –Find all possible treelet decompositions from the node aligned trees –“Flatten” the treelets into synchronous CFG rules August 21, IC-2008: Stat-XFER
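A toy sketch of the flattening step (assumed frontier representation: aligned sub-nodes become constituent variables carrying an alignment id, everything else stays lexicalized); it reproduces the sample NP rule shown on the following slides:

# frontier items: ("VAR", category, alignment id) for aligned sub-nodes, ("LEX", word) otherwise
src_frontier = [("VAR", "VP", 1), ("LEX", "的"), ("VAR", "CD", 2),
                ("LEX", "国家"), ("LEX", "之一")]
tgt_frontier = [("LEX", "one"), ("LEX", "of"), ("LEX", "the"), ("VAR", "CD", 2),
                ("LEX", "countries"), ("LEX", "that"), ("VAR", "VP", 1)]

def render(frontier):
    return " ".join(item[1] if item[0] == "VAR" else '"%s"' % item[1] for item in frontier)

def alignments(src, tgt):
    # constituent positions are 1-based over the whole sequence, as in the rule formalism
    src_pos = {item[2]: i + 1 for i, item in enumerate(src) if item[0] == "VAR"}
    tgt_pos = {item[2]: i + 1 for i, item in enumerate(tgt) if item[0] == "VAR"}
    return [(src_pos[k], tgt_pos[k]) for k in sorted(src_pos) if k in tgt_pos]

def flatten(src_cat, tgt_cat, src, tgt):
    align = " ".join("(X%d::Y%d)" % (x, y) for x, y in alignments(src, tgt))
    return "%s::%s [%s] -> [%s] ( %s )" % (src_cat, tgt_cat, render(src), render(tgt), align)

print(flatten("NP", "NP", src_frontier, tgt_frontier))
# NP::NP [VP "的" CD "国家" "之一"] -> ["one" "of" "the" CD "countries" "that" VP] ( (X1::Y7) (X3::Y4) )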

Rule Extraction Algorithm Sub-Treelet extraction: Extract Sub-tree segments including synchronous alignment information in the target tree. All the sub-trees and the super-tree are extracted. August 21, IC-2008: Stat-XFER

Rule Extraction Algorithm Flat Rule Creation: Each of the treelets pairs is flattened to create a Rule in the ‘Avenue Formalism’ – Four major parts to the rule: 1. Type of the rule: Source and Target side type information 2. Constituent sequence of the synchronous flat rule 3. Alignment information of the constituents 4. Constraints in the rule (Currently not extracted) August 21, IC-2008: Stat-XFER

Rule Extraction Algorithm Flat Rule Creation: Sample rule: IP::S [ NP VP.] -> [NP VP.] ( ;; Alignments (X1::Y1) (X2::Y2) ;;Constraints ) August 21, IC-2008: Stat-XFER

Rule Extraction Algorithm Flat Rule Creation: Sample rule: NP::NP [VP 的 CD 国家 之一 ] -> [one of the CD countries that VP] ( ;; Alignments (X1::Y7) (X3::Y4) ) Note: 1.Any one-to-one aligned words are elevated to Part-Of-Speech in the flat rule. 2.Any non-aligned words on either source or target side remain lexicalized August 21, IC-2008: Stat-XFER

Rule Extraction Algorithm All rules extracted: VP::VP [VC NP] -> [VBZ NP] ( (*score* 0.5) ;; Alignments (X1::Y1) (X2::Y2) ) NP::NP [NR] -> [NNP] ( (*score* 0.5) ;; Alignments (X1::Y1) ) VP::VP [ 与 NP VE NP] -> [ VBP NP with NP] ( (*score* 0.5) ;; Alignments (X2::Y4) (X3::Y1) (X4::Y2) ) NP::NP [VP 的 CD 国家 之一 ] -> [one of the CD countries that VP] ( (*score* 0.5) ;; Alignments (X1::Y7) (X3::Y4) ) IP::S [ NP VP ] -> [NP VP ] ( (*score* 0.5) ;; Alignments (X1::Y1) (X2::Y2) ) NP::NP [ “ 北韩 ”] -> [“North” “Korea”] ( ;Many to one alignment is a phrase ) August 21, IC-2008: Stat-XFER

Combining Syntactic and Standard Phrase Tables Recent work by Greg Hanneman, Alok Parlikar and Vamshi Ambati Syntax-based phrase tables are still significantly lower in coverage than “standard” heuristic-based phrase extraction used in Statistical MT Can we combine the two approaches and obtain superior results? Experimenting with two main combination methods: –Direct Combination: Extract phrases using both approaches and then jointly score (assign MLE probabilities) them –Prioritized Combination: For source phrases that are syntactic – use the syntax-extracted method, for non-syntactic source phrases - take them from the “standard” extraction method Direct Combination appears to be slightly better so far Grammar builds upon syntactic phrases, decoder uses both August 21, IC-2008: Stat-XFER
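A compact sketch of the two combination strategies (hypothetical toy counts; scoring reduced to relative-frequency MLE over pooled counts):

from collections import Counter

syntactic = Counter({("邦交", "diplomatic relations"): 30, ("北韩", "North Korea"): 80})
standard  = Counter({("邦交", "diplomatic relations"): 45, ("邦交", "relations"): 5,
                     ("有 邦交", "have diplomatic relations"): 12})

def mle(table):
    # P(target | source) by relative frequency
    src_totals = Counter()
    for (src, _), count in table.items():
        src_totals[src] += count
    return {pair: count / src_totals[pair[0]] for pair, count in table.items()}

# Direct combination: pool counts from both extraction methods, then score jointly
direct = mle(syntactic + standard)

# Prioritized combination: keep syntactic entries for source phrases the syntactic extractor
# covers; fall back to the standard table only for the remaining source phrases
syntactic_sources = {src for src, _ in syntactic}
prioritized_counts = Counter(syntactic)
for (src, tgt), count in standard.items():
    if src not in syntactic_sources:
        prioritized_counts[(src, tgt)] += count
prioritized = mle(prioritized_counts)

print(direct)
print(prioritized)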

Chinese-English System Developed over past year under DARPA/GALE funding (within IBM-led “Rosetta” team) Participated in recent NIST MT-08 Evaluation Large-scale broad-coverage system Integrates large manual resources with automatically extracted resources Current performance-level is still inferior to state-of-the-art phrase-based systems August 21, IC-2008: Stat-XFER

Chinese-English System Lexical Resources: –Manual Lexicons (base forms): LDC, ADSO, Wiki Total number of entries: 1.07 million –Automatically acquired from parallel data: Approx 5 million sentences LDC/GALE data Filtered down to phrases < 10 words in length Full (inflected) forms Total number of entries: 2.67 million August 21, IC-2008: Stat-XFER

August 21, 2008IC-2008: Stat- XFER 37 Translation Example SrcSent 3 澳洲是与北韩有邦交的少数国家之一。 Gloss: Australia is with north korea have diplomatic relations DE few country world Reference: Australia is one of the few countries that have diplomatic relations with North Korea. Translation:Australia is one of the few countries that has diplomatic relations with north korea. Overall: , Prob: , Rules: , TransSGT: , TransTGS: , Frag: , Length: , Words: 11,15 ( 0 10 "Australia is one of the few countries that has diplomatic relations with north korea" " 澳洲 是 与 北韩 有 邦交 的 少数 国家 之一 " "(S1, (S, (NP,2 (NB,1 (LDC_N,1267 'Australia') ) ) (VP, (MISC_V,1 'is') (NP, (LITERAL 'one') (LITERAL 'of') (NP, (NP, (NP,1 (LITERAL 'the') (NUMNB,2 (LDC_NUM,420 'few') (NB,1 (WIKI_N,62230 'countries') ) ) ) (LITERAL 'that') (VP, (LITERAL 'has') (FBIS_NP,11916 'diplomatic relations') ) ) (FBIS_PP,84791 'with north korea') ) ) ) ) ) ") ( "." " 。 " "(MISC_PUNC,20 '.')")

August 21, 2008IC-2008: Stat- XFER 38 Example: Syntactic Lexical Phrases (LDC_N,1267 'Australia') (WIKI_N,62230 'countries') (FBIS_NP,11916 'diplomatic relations') (FBIS_PP,84791 'with north korea')

August 21, 2008IC-2008: Stat- XFER 39 Example: XFER Rules ;;SL::(2,4) 对 台 贸易 ;;TL::(3,5) trade to taiwan ;;Score::22 {NP, } NP::NP [PP NP ] -> [NP PP ] ((*score* ) (X2::Y1) (X1::Y2)) ;;SL::(2,7) 直接 提到 伟 哥 的 广告 ;;TL::(1,7) commercials that directly mention the name viagra ;;Score::5 {NP, } NP::NP [VP " 的 " NP ] -> [NP "that" VP ] ((*score* ) (X3::Y1) (X1::Y3)) ;;SL::(4,14) 有 一 至 多 个 高 新 技术 项目 或 产品 ;;TL::(3,14) has one or more new, high level technology projects or products ;;Score::4 {VP, } VP::VP [" 有 " NP ] -> ["has" NP ] ((*score* 0.1) (X2::Y2))

MT Resource Acquisition in Resource-poor Scenarios Scenario: Very limited amounts of parallel-text at sentence-level are available –Significant amounts of monolingual text available for one of the two languages (i.e. English, Spanish) Approach: –Manually acquire and/or construct translation lexicons –Transfer rule grammars can be manually developed and/or automatically acquired from an elicitation corpus Strategy: –Learn transfer rules by syntax projection from major language to minor language –Build MT system to translate from minor language to major language August 21, IC-2008: Stat-XFER

August 21, 2008IC-2008: Stat-XFER41 Learning Transfer-Rules for Languages with Limited Resources Rationale: –Large bilingual corpora not available –Bilingual native informant(s) can translate and align a small pre-designed elicitation corpus, using elicitation tool –Elicitation corpus designed to be typologically comprehensive and compositional –Transfer-rule engine and new learning approach support acquisition of generalized transfer-rules from the data

August 21, 2008IC-2008: Stat-XFER42 Elicitation Tool: English-Hindi Example

August 21, 2008IC-2008: Stat-XFER43 Elicitation Tool: English-Arabic Example

August 21, 2008IC-2008: Stat-XFER44 Elicitation Tool: Spanish-Mapudungun Example

August 21, 2008IC-2008: Stat-XFER45 Hebrew-to-English MT Prototype Initial prototype developed within a two month intensive effort Accomplished: –Adapted available morphological analyzer –Constructed a preliminary translation lexicon –Translated and aligned Elicitation Corpus –Learned XFER rules –Developed (small) manual XFER grammar –System debugging and development –Evaluated performance on unseen test data using automatic evaluation metrics

August 21, 2008IC-2008: Stat-XFER46 Challenges for Hebrew MT Paucity in existing language resources for Hebrew –No publicly available broad coverage morphological analyzer –No publicly available bilingual lexicons or dictionaries –No POS-tagged corpus or parse tree-bank corpus for Hebrew –No large Hebrew/English parallel corpus Scenario well suited for Stat-XFER framework for languages with limited resources

August 21, 2008IC-2008: Stat-XFER47 Modern Hebrew Spelling Two main spelling variants –“KTIV XASER” (deficient): spelling that assumes the vowel diacritics; when the diacritics are removed, only the consonants of the word remain –“KTIV MALEH” (full): words with I/O/U vowels are written with long vowels, which include a letter KTIV MALEH is predominant, but not strictly adhered to even in newspapers and official publications → inconsistent spelling Example: –niqud (spelling): NIQWD, NQWD, NQD –When written as NQD, could also be read as niqed, naqed, nuqad

August 21, 2008IC-2008: Stat-XFER48 Morphological Analyzer We use a publicly available morphological analyzer distributed by the Technion’s Knowledge Center, adapted for our system Coverage is reasonable (for nouns, verbs and adjectives) Produces all analyses or a disambiguated analysis for each word Output format includes lexeme (base form), POS, morphological features Output was adapted to our representation needs (POS and feature mappings)

August 21, 2008IC-2008: Stat-XFER49 Morphology Example Input word: B$WRH Possible segmentations: | B$WRH | (full word) |--B--|--$WR--|--H--| (B + $WR + H) |--B--|--H--|--$WRH--| (B + H + $WRH)

August 21, 2008IC-2008: Stat-XFER50 Morphology Example Y0: ((SPANSTART 0) (SPANEND 4) (LEX B$WRH) (POS N) (GEN F) (NUM S) (STATUS ABSOLUTE)) Y1: ((SPANSTART 0) (SPANEND 2) (LEX B) (POS PREP)) Y2: ((SPANSTART 1) (SPANEND 3) (LEX $WR) (POS N) (GEN M) (NUM S) (STATUS ABSOLUTE)) Y3: ((SPANSTART 3) (SPANEND 4) (LEX $LH) (POS POSS)) Y4: ((SPANSTART 0) (SPANEND 1) (LEX B) (POS PREP)) Y5: ((SPANSTART 1) (SPANEND 2) (LEX H) (POS DET)) Y6: ((SPANSTART 2) (SPANEND 4) (LEX $WRH) (POS N) (GEN F) (NUM S) (STATUS ABSOLUTE)) Y7: ((SPANSTART 0) (SPANEND 4) (LEX B$WRH) (POS LEX))
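A small illustrative sketch (assumed representation, not the analyzer's actual output API) of how these analyses form a lattice of arcs over the analyzer's span positions, which the transfer engine can then extend:

analyses = [
    {"SPANSTART": 0, "SPANEND": 4, "LEX": "B$WRH", "POS": "N",    "GEN": "F", "NUM": "S"},
    {"SPANSTART": 0, "SPANEND": 2, "LEX": "B",     "POS": "PREP"},
    {"SPANSTART": 1, "SPANEND": 3, "LEX": "$WR",   "POS": "N",    "GEN": "M", "NUM": "S"},
    {"SPANSTART": 0, "SPANEND": 1, "LEX": "B",     "POS": "PREP"},
    {"SPANSTART": 1, "SPANEND": 2, "LEX": "H",     "POS": "DET"},
    {"SPANSTART": 2, "SPANEND": 4, "LEX": "$WRH",  "POS": "N",    "GEN": "F", "NUM": "S"},
]

def arcs_from(analyses, position):
    # all lattice arcs leaving a given span position
    return [a for a in analyses if a["SPANSTART"] == position]

# a derivation can start with the whole word B$WRH or with either analysis of the prefix B
for arc in arcs_from(analyses, 0):
    print(arc["LEX"], arc["POS"], (arc["SPANSTART"], arc["SPANEND"]))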

August 21, 2008IC-2008: Stat-XFER51 Translation Lexicon Constructed our own Hebrew-to-English lexicon, based primarily on existing “Dahan” H-to-E and E-to-H dictionary made available to us, augmented by other public sources Coverage is not great but not bad as a start –Dahan H-to-E is about 15K translation pairs –Dahan E-to-H is about 7K translation pairs Base forms, POS information on both sides Converted Dahan into our representation, added entries for missing closed-class entries (pronouns, prepositions, etc.) Had to deal with spelling conventions Recently augmented with ~50K translation pairs extracted from Wikipedia (mostly proper names and named entities)

August 21, 2008IC-2008: Stat-XFER52 Manual Transfer Grammar (human-developed) Initially developed by Alon in a couple of days, extended and revised by Nurit over time Current grammar has 36 rules: –21 NP rules –one PP rule –6 verb complexes and VP rules –8 higher-phrase and sentence-level rules Captures the most common (mostly local) structural differences between Hebrew and English

August 21, 2008IC-2008: Stat-XFER53 Transfer Grammar Example Rules {NP1,2} ;;SL: $MLH ADWMH ;;TL: A RED DRESS NP1::NP1 [NP1 ADJ] -> [ADJ NP1] ( (X2::Y1) (X1::Y2) ((X1 def) = -) ((X1 status) =c absolute) ((X1 num) = (X2 num)) ((X1 gen) = (X2 gen)) (X0 = X1) ) {NP1,3} ;;SL: H $MLWT H ADWMWT ;;TL: THE RED DRESSES NP1::NP1 [NP1 "H" ADJ] -> [ADJ NP1] ( (X3::Y1) (X1::Y2) ((X1 def) = +) ((X1 status) =c absolute) ((X1 num) = (X3 num)) ((X1 gen) = (X3 gen)) (X0 = X1) )

August 21, 2008IC-2008: Stat-XFER54 Example Translation Input: – לאחר דיונים רבים החליטה הממשלה לערוך משאל עם בנושא הנסיגה –Gloss: After debates many decided the government to hold referendum in issue the withdrawal Output: –AFTER MANY DEBATES THE GOVERNMENT DECIDED TO HOLD A REFERENDUM ON THE ISSUE OF THE WITHDRAWAL

August 21, 2008IC-2008: Stat-XFER55 Noun Phrases – Construct State החלטת הנשיא הראשון decision.3SF-CS the-president.3SM the-first.3SM → THE DECISION OF THE FIRST PRESIDENT החלטת הנשיא הראשונה decision.3SF-CS the-president.3SM the-first.3SF → THE FIRST DECISION OF THE PRESIDENT

August 21, 2008IC-2008: Stat-XFER56 Noun Phrases - Possessives הנשיא הכריז שהמשימה הראשונה שלו תהיה למצוא פתרון לסכסוך באזורנו HNSIA HKRIZ $HM$IMH HRA$WNH $LW THIH (the-president announced that-the-task.3SF the-first.3SF of-him will.3SF) LMCWA PTRWN LSKSWK BAZWRNW (to-find solution to-the-conflict in-region-POSS.1P) Without transfer grammar: THE PRESIDENT ANNOUNCED THAT THE TASK THE BEST OF HIM WILL BE TO FIND SOLUTION TO THE CONFLICT IN REGION OUR With transfer grammar: THE PRESIDENT ANNOUNCED THAT HIS FIRST TASK WILL BE TO FIND A SOLUTION TO THE CONFLICT IN OUR REGION

August 21, 2008IC-2008: Stat-XFER57 Subject-Verb Inversion אתמול הודיעה הממשלה שתערכנה בחירות בחודש הבא ATMWL HWDI&H HMM$LH (yesterday announced.3SF the-government.3SF) $T&RKNH BXIRWT BXWD$ HBA (that-will-be-held.3PF elections.3PF in-the-month the-next) Without transfer grammar: YESTERDAY ANNOUNCED THE GOVERNMENT THAT WILL RESPECT OF THE FREEDOM OF THE MONTH THE NEXT With transfer grammar: YESTERDAY THE GOVERNMENT ANNOUNCED THAT ELECTIONS WILL ASSUME IN THE NEXT MONTH

August 21, 2008IC-2008: Stat-XFER58 Subject-Verb Inversion לפני כמה שבועות הודיעה הנהלת המלון שהמלון יסגר בסוף השנה LPNI KMH $BW&WT HWDI&H HNHLT HMLWN (before several weeks announced.3SF management.3SF.CS the-hotel) $HMLWN ISGR BSWF H$NH (that-the-hotel.3SM will-be-closed.3SM at-end.3SM.CS the-year) Without transfer grammar: IN FRONT OF A FEW WEEKS ANNOUNCED ADMINISTRATION THE HOTEL THAT THE HOTEL WILL CLOSE AT THE END THIS YEAR With transfer grammar: SEVERAL WEEKS AGO THE MANAGEMENT OF THE HOTEL ANNOUNCED THAT THE HOTEL WILL CLOSE AT THE END OF THE YEAR

August 21, 2008IC-2008: Stat-XFER59 Evaluation Results Test set of 62 sentences from Haaretz newspaper, 2 reference translations Metrics: BLEU, NIST, P, R, METEOR Systems compared: No Grammar, Learned Grammar, Manual Grammar

August 21, 2008IC-2008: Stat-XFER60 Major Research Directions Automatic Transfer Rule Learning: –Under different scenarios: From manually word-aligned elicitation corpus From large volumes of automatically word-aligned “wild” parallel data, with parse trees on one or both sides In the absence of morphology or POS annotated lexica –Compositionality and generalization –Identifying “good” rules from “bad” rules –Effective models for rule scoring for Decoding: using scores at runtime Pruning the large collections of learned rules –Learning Unification Constraints

August 21, 2008IC-2008: Stat-XFER61 Major Research Directions Advanced Methods for Extracting and Combining Phrase Tables from Parallel Data: –Leveraging from both syntactic and non-syntactic extraction methods –Can we “syntactify” the non-syntactic phrases or apply grammar rules on them? Syntax-aware Word Alignment: –Current word alignments are naïve and unaware of syntactic information –Can we remove incorrect word alignments to improve the syntax-based phrase extraction? –Develop new syntax-aware word alignment methods

August 21, 2008IC-2008: Stat-XFER62 Major Research Directions Syntax-based LMs: –Our MT approach performs parsing and translation as integrated processes –Our translations come out with syntax trees attached to them –Add syntax-based LM features that can discriminate between good and bad trees, on both target and source sides!

August 21, 2008IC-2008: Stat-XFER63 Major Research Directions Algorithms for XFER and Decoding –Integration and optimization of multiple features into search-based XFER parser –Complexity and efficiency improvements –Non-monotonicity issues (LM scores, unification constraints) and their consequences on search

Aug 29, 2007Statistical XFER MT64 Major Research Directions Building Elicitation Corpora: –Feature Detection –Corpus Navigation Automatic Rule Refinement Translation for highly polysynthetic languages such as Mapudungun and Iñupiaq

Conclusions Stat-XFER is a promising general MT framework, suitable for a variety of MT scenarios and languages Provides a complete solution for building end-to-end MT systems from parallel data, akin to phrase-based SMT systems (training, tuning, runtime system) No open-source, publicly available toolkits yet, but extensive collaboration activities with other groups Complex but highly interesting set of open research issues Prediction: this is the future direction of MT! August 21, IC-2008: Stat-XFER

August 21, 2008IC-2008: Stat-XFER66 Questions?