© 2008 The MITRE Corporation. All rights reserved

Applying Automated Metrics to Speech Translation Dialogs
Sherri Condon, Jon Phillips, Christy Doran, John Aberdeen, Dan Parvaz, Beatrice Oshika, Greg Sanders, and Craig Schlenoff
LREC 2008
DARPA TRANSTAC: Speech Translation for Tactical Communication
• DARPA objective: rapidly develop and field two-way translation systems for spontaneous communication in real-world tactical situations
• Example exchange: English speaker: "How many men did you see?" / Iraqi Arabic speaker: "There were four men"
• Pipeline: Speech Recognition → Machine Translation → Speech Synthesis
Evaluation of Speech Translation
• Few precedents for speech translation evaluation compared to machine translation of text
• High-level human judgments
  – CMU (Gates et al., 1996)
  – Verbmobil (Nübel, 1997)
  – Binary or ternary ratings combine assessments of accuracy and fluency
• Humans score abstract semantic representations
  – Interlingua Interchange Format (Levin et al., 2000)
  – Predicate-argument structures (Belvin et al., 2004)
  – Fine-grained, low-level assessments
Automated Metrics
• High correlation with human judgments for translation of text, but dialog differs from text
  – Relies on context rather than explicitness
  – Variability: contractions, sentence fragments
  – Utterance length: TIDES average is 30 words/sentence
• Studies have primarily involved translation into English and other European languages, but Arabic differs from Western languages
  – Highly inflected
  – Variability: orthography, dialect, register, word order
TRANSTAC Evaluations
• Directed by NIST with support from MITRE (see Weiss et al. for details)
• Live evaluations
  – Military users
  – Iraqi Arabic bilinguals (the English speaker is masked)
  – Structured interactions (the information to exchange is specified)
• Offline evaluations
  – Recorded dialogs held out from training data
  – Military users and Iraqi Arabic bilinguals
  – Spontaneous interactions elicited by scenario prompts
TRANSTAC Measures
• Live evaluations
  – Global binary judgments of 'high-level concepts'
  – Speech input was or was not adequately communicated
• Offline evaluations
  – Automated measures (a toy WER computation follows this slide)
    • WER for speech recognition
    • BLEU for translation
    • TER for translation
    • METEOR for translation
  – Likert-style human judgments for a sample of the offline data
  – Low-level concept analysis for a sample of the offline data
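As a rough illustration of the first automated measure, the sketch below computes word error rate as word-level edit distance divided by reference length. This is a minimal sketch, not the NIST scoring pipeline; real scoring also applies the transcript normalization discussed on later slides.

```python
# Minimal WER sketch: word-level Levenshtein distance / reference length.
# Illustrative only; NIST scoring additionally normalizes transcripts.

def wer(reference: str, hypothesis: str) -> float:
    ref = reference.split()
    hyp = hypothesis.split()
    # Dynamic-programming table for edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i                          # deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j                          # insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub,               # substitution or match
                          d[i - 1][j] + 1,   # deletion
                          d[i][j - 1] + 1)   # insertion
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

print(wer("there were four men", "there was four men"))  # 0.25
```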
Issues for Offline Evaluation
• Initial focus was similarity to live inputs
  – Scripted dialogs are not natural
  – Wizard-of-Oz methods are resource intensive
• Training data differs from actual device use
  – Disfluencies
  – Utterance lengths
  – No ability to repeat and rephrase
  – No dialog management ("I don't understand", "Please try to say that another way")
• Same speakers appear in both training and test sets
Training Data Unlike Actual Device Use
• "then %AH how is the water in the area what's the -- what's the quality how does it taste %AH is there %AH %breath sufficient supply?"
• "the -- the first thing when it comes to %AH comes to fractures is you always look for %breath %AH fractures of the skull or of the spinal column %breath because these need to be these need to be treated differently than all other fractures."
• "and then if in the end we find tha- -- that %AH -- that he may be telling us the truth we'll give him that stuff back."
• "would you show me what part of the -- %AH %AH roughly how far up and down the street this %breath %UM this water covers when it backs up?"
Selection Process
• Initial selection of representative dialogs (Appen)
  – Percentage of word tokens and types that also occur in other scenarios: mid range (87–91% in January); a sketch of this statistic follows this slide
  – Number of times each dialog word appears in the entire corpus: the average over all words is maximized
  – All scenarios are represented, roughly proportionately
  – A variety of speakers and genders is represented
• Criteria for selecting dialogs for the test set
  – Gender, speaker, and scenario distribution
  – Exclude dialogs with weak content or other issues, such as excessive disfluencies and utterances directed to the interpreter ("Greet him", "Tell him we are busy")
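For concreteness, here is one way the token/type overlap statistic could be computed. This is a hypothetical sketch: the slide does not spell out Appen's exact procedure, and the assumption here is simply that each dialog and the remaining scenarios are whitespace-tokenized word lists.

```python
# Hypothetical sketch of the overlap statistic used to pick representative
# dialogs: what share of a dialog's word tokens and word types also occur
# in the other scenarios? Assumes pre-tokenized, lowercased text.

def overlap_stats(dialog_tokens: list[str], other_tokens: list[str]):
    other_vocab = set(other_tokens)
    tokens_in_other = sum(1 for t in dialog_tokens if t in other_vocab)
    types = set(dialog_tokens)
    types_in_other = len(types & other_vocab)
    return (tokens_in_other / len(dialog_tokens),   # token overlap
            types_in_other / len(types))            # type overlap

dialog = "how many men did you see".split()
rest = "there were four men on the road".split()
print(overlap_stats(dialog, rest))  # (0.166..., 0.166...)
```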
July 2007 Offline Data
• About 400 utterances for each translation direction
  – From 45 dialogs using 20 scenarios
  – Drawn from the entire set held back from the data collected in 2007
• Two selection methods applied to the held-out data (200 utterances each)
  – Random: select every nth utterance
  – Hand: select fluent utterances (1 dialog per scenario)
• 5 Iraqi Arabic dialogs selected for rerecording
  – About 140 utterances for each language
  – Selected from the same dialogs used for hand selection
Human Judgments
• High-level adequacy judgments (Likert-style)
  – Completely Adequate
  – Tending Adequate
  – Tending Inadequate
  – Inadequate
  – Score: proportion judged Completely Adequate or Tending Adequate
• Low-level concept judgments
  – Each content word (c-word) in the source language is a concept
  – Translation score is based on insertion, deletion, and substitution errors
  – The DARPA score is reported as an odds ratio
  – For comparison to the automated metrics here, it is given as total correct c-words / (total correct c-words + total errors), sketched below
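A minimal sketch of that c-word score computed from judgment counts, with the odds-ratio form used in DARPA reporting shown alongside for contrast. The function names are ours, not the paper's.

```python
# Concept-level translation score from c-word judgment counts.
# correct = c-words translated correctly;
# errors  = insertions + deletions + substitutions.

def concept_score(correct: int, insertions: int,
                  deletions: int, substitutions: int) -> float:
    errors = insertions + deletions + substitutions
    # Proportion form used here for comparison with automated metrics:
    return correct / (correct + errors)

def concept_odds(correct: int, errors: int) -> float:
    # Odds-ratio form used in DARPA reporting: correct c-words per error.
    return correct / errors

print(concept_score(80, 5, 10, 5))  # 0.8
print(concept_odds(80, 20))         # 4.0
```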
Measures for Iraqi Arabic to English
[Charts: automated metrics and human judgments for TRANSTAC systems A, B, C, D, E]
Measures for English to Iraqi Arabic
[Charts: automated metrics and human judgments for TRANSTAC systems A, B, C, D, E]
Directional Asymmetries in Measures
[Charts: BLEU scores and human adequacy judgments, English to Arabic vs. Arabic to English]
Normalization for Automated Scoring
• Normalization for WER has become standard
  – NIST normalizes reference transcriptions and system outputs
  – Contractions, hyphens to spaces, reduced forms (wanna)
  – Partial matching on word fragments
  – GLM mappings
• Normalization for BLEU scoring is not standard
  – Yet BLEU depends on matching n-grams (illustrated below)
  – METEOR's stemming addresses some of the variation
• Translations can communicate meaning in spite of inflectional errors
  – "two book", "him are my brother", "they is there"
• English-to-Arabic translation introduces much variation
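To make the n-gram point concrete, here is a toy sentence-level BLEU computation (modified n-gram precision with a brevity penalty, single reference). Real evaluations use tooling such as NIST's mteval; this sketch only shows how an unnormalized surface variant loses n-gram matches.

```python
# Toy sentence-level BLEU (single reference, up to 4-grams, brevity
# penalty) showing why surface variation hurts: "doesn't" vs "does not"
# loses most n-gram matches. A tiny floor avoids log(0) for zero matches.
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(reference: str, hypothesis: str, max_n: int = 4) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    log_prec = 0.0
    for n in range(1, max_n + 1):
        hyp_ng, ref_ng = ngrams(hyp, n), ngrams(ref, n)
        clipped = sum(min(c, ref_ng[g]) for g, c in hyp_ng.items())
        total = max(sum(hyp_ng.values()), 1)
        log_prec += math.log(max(clipped, 1e-9) / total) / max_n
    bp = min(1.0, math.exp(1 - len(ref) / max(len(hyp), 1)))  # brevity penalty
    return bp * math.exp(log_prec)

ref = "he does not want to go"
print(bleu(ref, "he does not want to go"))  # 1.0
print(bleu(ref, "he doesn't wanna go"))     # near zero: most n-grams miss
```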
Orthographic Variation: Arabic (a rule-based normalization sketch follows this list)
• Short vowel / shadda inclusions: جَمهُورِيَّة vs. جمهورية
• Variation in including explicit nunation: أحيانا vs. أحياناً
• Omission of the hamza: شي vs. شيء
• Misplacement of the seat of the hamza: الطوارئ vs. الطوارىء
• Variation where the taa marbuta should be used: بالجمجمة vs. بالجمجمه
• Confusion between yaa and alif maksura: شي vs. شى
• Initial alif with or without hamza/madda/wasla: اسم vs. إسم
• Variation in the spelling of Iraqi words: وياي vs. ويايا
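A minimal sketch of rule-based normalization covering several of the variation types above (diacritic stripping, alif-variant folding, alif maksura and taa marbuta folding). The evaluation's actual rule inventory is not fully specified beyond the alif example on the next slide, so the specific mappings here are illustrative assumptions.

```python
# Illustrative rule-based Arabic orthographic normalization. Covers some
# variation types from the slide; treat the exact rule set as an assumption.
import re
import unicodedata

# Tanween, harakat, shadda, sukun (U+064B..U+0652) and dagger alif (U+0670).
DIACRITICS = re.compile(r'[\u064B-\u0652\u0670]')

CHAR_FOLDS = str.maketrans({
    '\u0622': '\u0627',  # alif madda       آ -> ا
    '\u0623': '\u0627',  # alif hamza above أ -> ا
    '\u0625': '\u0627',  # alif hamza below إ -> ا
    '\u0649': '\u064A',  # alif maksura     ى -> yaa ي
    '\u0629': '\u0647',  # taa marbuta      ة -> haa ه
})

def normalize_arabic(text: str) -> str:
    text = unicodedata.normalize('NFC', text)
    text = DIACRITICS.sub('', text)   # drop short vowels, shadda, nunation
    return text.translate(CHAR_FOLDS)

print(normalize_arabic('جَمهُورِيَّة') == normalize_arabic('جمهورية'))  # True
```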
Data Normalization
Two types of normalization were applied to both ASR/MT system outputs and references:
1. Rule-based: simple diacritic normalization, e.g., آ, أ, إ => ا (in the spirit of the sketch above)
2. GLM-based: lexical substitution, e.g., doesn't => does not, ﺂﺑﺍی => ﺂﺒﻫﺍی (sketched below)
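A minimal sketch of the GLM-style substitution step, assuming the GLM is a flat source-form => replacement table applied to tokenized text. The actual GLM file format used in NIST scoring is richer, so this is illustrative only, and the two entries are just the examples from the slides.

```python
# Illustrative GLM-style lexical substitution: map surface variants to a
# canonical form before scoring. Assumes a flat "variant -> replacement"
# table over lowercased, whitespace-tokenized text.

GLM = {
    "doesn't": "does not",
    "wanna": "want to",
}

def apply_glm(text: str, glm: dict[str, str] = GLM) -> str:
    return ' '.join(glm.get(tok, tok) for tok in text.lower().split())

print(apply_glm("He doesn't wanna go"))  # "he does not want to go"
```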
Normalization for English to Arabic Text: BLEU Scores
[Table: BLEU scores per system under Norm0, Norm1, and Norm2, with averages]
*CS = Statistical MT version of CR, which is rule-based
Normalization for Arabic to English Text: BLEU Scores
[Table: BLEU scores per system under Norm0, Norm1, and Norm2, with averages]
Summary
• For Iraqi Arabic to English MT, there is good agreement on relative scores among all the automated measures and human judgments of the same data
• For English to Iraqi Arabic MT, there is fairly good agreement among the automated measures, but relative scores are less similar to human judgments of the same data
• The automated MT metrics exhibit a strong directional asymmetry, with Arabic to English scoring higher than English to Arabic, in spite of much lower WER for English
• Human judgments exhibit the opposite asymmetry
• Normalization improves BLEU scores
Future Work
• More Arabic normalization, beginning with function words orthographically attached to a following word
• Explore ways to overcome Arabic morphological variation without perfect analyses (Arabic WordNet?)
• Resampling to test for significance and stability of scores
• Systematic contrast of live inputs and training data
Rerecorded Scenarios
• Scripted from dialogs held back for training
  – New speakers were recorded reading the scripts
  – Based on the 5 dialogs used for hand selection
• Dialogs were edited minimally
  – Disfluencies, false starts, and fillers removed from transcripts
  – A few entire utterances deleted
  – Instances of قل له "tell him" removed
• Scripts recorded at DLI
  – 138 English utterances, 141 Iraqi Arabic utterances
  – 89 English and 80 Arabic utterances have corresponding utterances in the hand-selected and randomly selected sets
WER: Original vs. Rerecorded Utterances
[Table: WER for English offline, English rerecorded, Arabic offline, and Arabic rerecorded utterances, with averages]
English to Iraqi Arabic BLEU Scores: Original vs. Rerecorded Utterances
[Table: BLEU scores per system for original vs. rerecorded utterances, with averages]
*E2 = Statistical MT version of E, which is rule-based
Iraqi Arabic to English BLEU Scores: Original vs. Rerecorded Utterances
[Table: BLEU scores per system for original vs. rerecorded utterances, with averages]