
1  Applying Automated Metrics to Speech Translation Dialogs
Sherri Condon, Jon Phillips, Christy Doran, John Aberdeen, Dan Parvaz, Beatrice Oshika, Greg Sanders, and Craig Schlenoff
LREC 2008
© 2008 The MITRE Corporation. All rights reserved

2  DARPA TRANSTAC: Speech Translation for Tactical Communication
DARPA objective: rapidly develop and field two-way translation systems for spontaneous communication in real-world tactical situations
[Diagram: English speaker ("How many men did you see?") and Iraqi Arabic speaker ("There were four men") communicating via Speech Recognition, Machine Translation, and Speech Synthesis]

3  Evaluation of Speech Translation
• Few precedents for speech translation evaluation compared to machine translation of text
• High-level human judgments
  – CMU (Gates et al., 1996)
  – Verbmobil (Nübel, 1997)
  – Binary or ternary ratings combine assessments of accuracy and fluency
• Humans score abstract semantic representations
  – Interlingua Interchange Format (Levin et al., 2000)
  – Predicate-argument structures (Belvin et al., 2004)
  – Fine-grained, low-level assessments

4  Automated Metrics
• High correlation with human judgments for translation of text, but dialog differs from text
  – Relies on context rather than explicitness
  – Variability: contractions, sentence fragments
  – Utterance length: TIDES text averages 30 words/sentence
• Studies have primarily involved translation into English and other European languages, but Arabic differs from Western languages
  – Highly inflected
  – Variability: orthography, dialect, register, word order
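To make the utterance-length point concrete, here is a minimal, self-contained sketch (not the TRANSTAC evaluation code) of the clipped n-gram precision that BLEU multiplies across n = 1..4; on a four-word dialog utterance, a single inflectional difference already zeroes out the higher-order terms:

```python
from collections import Counter

def ngram_precision(hyp, ref, n):
    """Clipped (modified) n-gram precision of a hypothesis against one reference."""
    hyp_ngrams = Counter(tuple(hyp[i:i + n]) for i in range(len(hyp) - n + 1))
    ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
    if not hyp_ngrams:
        return 0.0
    clipped = sum(min(count, ref_ngrams[gram]) for gram, count in hyp_ngrams.items())
    return clipped / sum(hyp_ngrams.values())

ref = "there were four men".split()
hyp = "there was four men".split()        # one inflectional difference
for n in range(1, 5):
    print(n, round(ngram_precision(hyp, ref, n), 2))
# 1: 0.75, 2: 0.33, 3: 0.0, 4: 0.0 -- the 3- and 4-gram terms that BLEU
# multiplies together are left with nothing to match on a short utterance.
```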

5  TRANSTAC Evaluations
• Directed by NIST with support from MITRE (see Weiss et al. for details)
• Live evaluations
  – Military users
  – Iraqi Arabic bilinguals (English speaker is masked)
  – Structured interactions (information is specified)
• Offline evaluations
  – Recorded dialogs held out from training data
  – Military users and Iraqi Arabic bilinguals
  – Spontaneous interactions elicited by scenario prompts

6  TRANSTAC Measures
• Live evaluations
  – Global binary judgments of 'high level concepts'
  – Speech input was or was not adequately communicated
• Offline evaluations
  – Automated measures
    • WER for speech recognition
    • BLEU for translation
    • TER for translation
    • METEOR for translation
  – Likert-style human judgments for a sample of offline data
  – Low-level concept analysis for a sample of offline data
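Of these measures, WER is the simplest to spell out: the minimum number of word substitutions, insertions, and deletions needed to turn the recognizer output into the reference transcript, divided by the reference length. A minimal sketch follows (illustrative only; NIST's scoring additionally applies the normalizations discussed on later slides):

```python
def wer(reference, hypothesis):
    """Word error rate: (substitutions + insertions + deletions) / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = minimum edits needed to turn hyp[:j] into ref[:i]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i                                   # i deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j                                   # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,            # deletion
                          d[i][j - 1] + 1,            # insertion
                          d[i - 1][j - 1] + sub)      # match or substitution
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("how many men did you see", "how many men you see"))   # 1 deletion / 6 words ~ 0.17
```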

7  Issues for Offline Evaluation
• Initial focus was similarity to live inputs
  – Scripted dialogs are not natural
  – Wizard methods are resource intensive
• Training data differs from use of device
  – Disfluencies
  – Utterance lengths
  – No ability to repeat and rephrase
  – No dialog management
    • "I don't understand"
    • "Please try to say that another way"
• Same speakers in both training and test sets

8  Training Data Unlike Actual Device Use
• then %AH how is the water in the area what's the -- what's the quality how does it taste %AH is there %AH %breath sufficient supply?
• the -- the first thing when it comes to %AH comes to fractures is you always look for %breath %AH fractures of the skull or of the spinal column %breath because these need to be these need to be treated differently than all other fractures.
• and then if in the end we find tha- -- that %AH -- that he may be telling us the truth we'll give him that stuff back.
• would you show me what part of the -- %AH %AH roughly how far up and down the street this %breath %UM this water covers when it backs up?

9  Selection Process
• Initial selection of representative dialogs (Appen)
  – Percentage of word tokens and types that occur in other scenarios: mid range (87-91% in January)
  – Number of times a word in the dialog appears in the entire corpus: average for all words is maximized
  – All scenarios are represented, roughly proportionately
  – A variety of speakers and genders are represented
• Criteria for selecting dialogues for the test set
  – Gender, speaker, scenario distribution
  – Exclude dialogs with weak content or other issues, such as excessive disfluencies and utterances directed to the interpreter ("Greet him", "Tell him we are busy")
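A rough sketch of the coverage statistics behind the first two selection criteria, assuming dialogs are available as token lists; the function name and data layout are illustrative, not Appen's actual selection code:

```python
from collections import Counter

def coverage_stats(dialog_tokens, other_scenario_types, corpus_counts):
    """Coverage statistics for one candidate dialog.

    dialog_tokens: list of word tokens in the candidate dialog
    other_scenario_types: set of word types occurring in the other scenarios
    corpus_counts: Counter of token frequencies over the entire corpus
    """
    types = set(dialog_tokens)
    token_coverage = sum(t in other_scenario_types for t in dialog_tokens) / len(dialog_tokens)
    type_coverage = sum(t in other_scenario_types for t in types) / len(types)
    avg_corpus_freq = sum(corpus_counts[t] for t in dialog_tokens) / len(dialog_tokens)
    return token_coverage, type_coverage, avg_corpus_freq

# Dialogs with token/type coverage in the mid range (roughly 87-91%) and a high
# average corpus frequency were preferred as representative.
```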

10  July 2007 Offline Data
• About 400 utterances for each translation direction
  – From 45 dialogues using 20 scenarios
  – Drawn from the entire set held back from data collected in 2007
• Two selection methods from held-out data (200 each)
  – Random: select every nth utterance
  – Hand: select fluent utterances (1 dialogue per scenario)
• 5 Iraqi Arabic dialogues selected for rerecording
  – About 140 utterances for each language
  – Selected from the same dialogues used for hand selection

11  Human Judgments
• High-level adequacy judgments (Likert-style)
  – Completely Adequate
  – Tending Adequate
  – Tending Inadequate
  – Inadequate
  – Score is the proportion judged Completely Adequate or Tending Adequate
• Low-level concept judgments
  – Each content word (c-word) in the source language is a concept
  – Translation score based on insertion, deletion, and substitution errors
  – DARPA score is represented as an odds ratio
  – For comparison to automated metrics here, it is given as total correct c-words / (total correct c-words + total errors), as sketched below
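A minimal sketch of that low-level concept score; the function and the counts in the example are illustrative:

```python
def concept_score(correct, insertions, deletions, substitutions):
    """Proportion of source-language content words (c-words) translated correctly."""
    errors = insertions + deletions + substitutions
    return correct / (correct + errors)

print(concept_score(correct=40, insertions=2, deletions=3, substitutions=5))   # 40 / 50 = 0.8
```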

12  Measures for Iraqi Arabic to English
[Chart: automated metrics and human judgments for TRANSTAC systems A-E]

13  Measures for English to Iraqi Arabic
[Chart: automated metrics and human judgments for TRANSTAC systems A-E]

14  Directional Asymmetries in Measures
[Charts: BLEU scores and human adequacy judgments, English to Arabic vs. Arabic to English]

15  Normalization for Automated Scoring
• Normalization for WER has become standard
  – NIST normalizes reference transcriptions and system outputs
  – Contractions, hyphens to spaces, reduced forms (wanna)
  – Partial matching on fragments
  – GLM mappings
• Normalization for BLEU scoring is not standard
  – Yet BLEU depends on matching n-grams
  – METEOR's stemming addresses some of the variation
• Translations can communicate meaning in spite of inflectional errors
  – "two book", "him are my brother", "they is there"
• English-Arabic translation introduces much variation

16  Orthographic Variation: Arabic
• Short vowel / shadda inclusions: جَمهُورِيَّة, جمهورية
• Variation in whether explicit nunation is written: أحيانا, أحياناً
• Omission of the hamza: شي, شيء
• Misplacement of the seat of the hamza: الطوارئ or الطوارىء
• Variation where the taa marbuta should be used: بالجمجمة, بالجمجمه
• Confusions between yaa and alif maksura: شي, شى
• Initial alif with or without hamza/madda/wasla: اسم, إسم
• Variations in spelling of Iraqi words: وياي, ويايا
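Several of these variants can be collapsed with simple character-level rules before scoring. A minimal sketch, assuming standard Unicode code points for the Arabic characters involved; it is broader than the normalization actually applied in the evaluation (next slide):

```python
import re

DIACRITICS = re.compile("[\u064B-\u0652\u0670]")      # tanween, short vowels, shadda, sukun, dagger alef

def normalize_arabic(text):
    """Collapse common Arabic orthographic variants (illustrative rules only)."""
    text = DIACRITICS.sub("", text)                   # drop short vowels / shadda / nunation marks
    text = re.sub("[\u0622\u0623\u0625\u0671]", "\u0627", text)   # alef variants -> bare alef
    text = text.replace("\u0640", "")                 # remove tatweel (kashida)
    text = text.replace("\u0629", "\u0647")           # taa marbuta -> haa
    text = text.replace("\u0649", "\u064A")           # alif maksura -> yaa
    return text

print(normalize_arabic("جَمهُورِيَّة"))                 # -> جمهوريه
```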

17  Data Normalization
Two types of normalization were applied to both ASR/MT system outputs and references:
1. Rule-based: simple diacritic normalization
   – e.g. آ,أ,إ => ا
2. GLM-based: lexical substitution
   – e.g. doesn't => does not
   – e.g. ﺂﺑﺍی => ﺂﺒﻫﺍی
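A self-contained sketch of how the two passes might be composed and applied identically to system outputs and references. Whether the Norm1/Norm2 conditions on the next two slides are cumulative is not stated, so the level argument here is an assumption for illustration, and the GLM table is a toy stand-in for the real file:

```python
import re

GLM = {"doesn't": "does not", "wanna": "want to"}     # toy stand-in for the real GLM file

def rule_normalize(text):
    """Rule-based pass: collapse alef variants (cf. آ,أ,إ => ا) and strip diacritics."""
    text = re.sub("[\u0622\u0623\u0625]", "\u0627", text)
    return re.sub("[\u064B-\u0652]", "", text)

def glm_normalize(text):
    """GLM pass: whole-token lexical substitution."""
    return " ".join(GLM.get(tok, tok) for tok in text.split())

def normalize(text, level):
    """Assumed levels: 0 = unchanged, 1 = rule-based, 2 = rule-based + GLM."""
    if level >= 1:
        text = rule_normalize(text)
    if level >= 2:
        text = glm_normalize(text)
    return text

# The same normalization is applied to both the MT output and the reference
# translations before BLEU is computed, so the metric compares like with like.
```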

18  Normalization for English to Arabic Text: BLEU Scores

            Norm0    Norm1    Norm2
  Average   0.227    0.240    0.241

*CS = Statistical MT version of CR, which is rule-based

19  Normalization for Arabic to English Text: BLEU Scores

            Norm0    Norm1    Norm2
  Average   0.412    0.414    0.440

20  Summary
• For Iraqi Arabic to English MT, there is good agreement on relative scores among all the automated measures and human judgments of the same data
• For English to Iraqi Arabic MT, there is fairly good agreement among the automated measures, but relative scores are less similar to human judgments of the same data
• Automated MT metrics exhibit a strong directional asymmetry, with Arabic to English scoring higher than English to Arabic in spite of much lower WER for English
• Human judgments exhibit the opposite asymmetry
• Normalization improves BLEU scores

21  Future Work
• More Arabic normalization, beginning with function words orthographically attached to a following word
• Explore ways to overcome Arabic morphological variation without perfect analyses
  – Arabic WordNet?
• Resampling to test for significance and stability of scores (see the sketch below)
• Systematic contrast of live inputs and training data
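One common way to do that resampling is a bootstrap over test utterances. This is a minimal sketch for per-utterance scores (corpus-level BLEU would instead be recomputed on each resampled set), offered as an assumption about how the resampling might be set up rather than a description of planned tooling:

```python
import random

def bootstrap_interval(scores, n_resamples=1000, alpha=0.05):
    """Percentile bootstrap interval for the mean of per-utterance scores."""
    means = []
    for _ in range(n_resamples):
        sample = random.choices(scores, k=len(scores))   # resample utterances with replacement
        means.append(sum(sample) / len(sample))
    means.sort()
    lower = means[int((alpha / 2) * n_resamples)]
    upper = means[int((1 - alpha / 2) * n_resamples) - 1]
    return lower, upper

# Non-overlapping intervals for two systems suggest the score difference is stable.
```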

22  Rerecorded Scenarios
• Scripted from dialogs held back for training
  – New speakers recorded reading scripts
  – Based on the 5 dialogs used for hand selection
• Dialogues are edited minimally
  – Disfluencies, false starts, fillers removed from transcripts
  – A few entire utterances deleted
  – Instances of قل له "tell him" removed
• Scripts recorded at DLI
  – 138 English utterances, 141 Iraqi Arabic utterances
  – 89 English and 80 Arabic utterances have corresponding utterances in the hand and randomly selected sets

23  WER: Original vs. Rerecorded Utterances

            English Offline   English Rerecorded   Arabic Offline   Arabic Rerecorded
  Average        26.36              23.7                50.76             35.54

24  English to Iraqi Arabic BLEU Scores: Original vs. Rerecorded Utterances

            Original   Rerecorded
  Average     0.178      0.187

*E2 = Statistical MT version of E, which is rule-based

25  Iraqi Arabic to English BLEU Scores: Original vs. Rerecorded Utterances

            Original   Rerecorded
  Average     0.260      0.334

