Dr. Stephen Doherty & Dr. Sharon O’Brien


Assessing the Usability of Machine Translated Content: A User-Centred Study using Eye Tracking
Dr. Stephen Doherty & Dr. Sharon O’Brien
Centre for Next Generation Localisation
School of Applied Language & Intercultural Studies
Dublin City University

Outline
- Introduction
- Research Aims
- Methods
- Results
- Conclusions

Introduction
- Increased need for translation
- Diversity of content and users
- Rise in prevalence of machine translation [MT], both off- and online
- Mixed reports of quality: attitudes and expectations
- Divergence in R&D: translation studies vs. computer science
- Evaluation metrics: human and automatic
- Our focus here is on usability

Research Aims
- To investigate whether there are differences in usability between the English [source language] and the unedited machine-translated target languages [FR, DE, SP, JP]; in other words: how usable is machine-translated content?
- Adoption of the ISO/TR 16982 definition of usability
- Importance of ecological validity: real materials and real users

Methods
- User-centred approach [n = 30]; task-driven ‘new user’ scenario
- Eye tracking [Tobii 1750]:
  - Fixation count and average duration
  - Attentional shifts; percentage of time in each window
  - Textual regressions
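The fixation-based measures above can be derived from exported gaze data. A minimal sketch of computing fixation count, average fixation duration, and percentage of gaze time per window; the record format (participant, window, duration in milliseconds) and all values are hypothetical, not the actual Tobii 1750 export format:

```python
from statistics import mean

# Illustrative fixation records: (participant, window, duration_ms).
# Values are invented for demonstration only.
fixations = [
    ("P01", "task", 210), ("P01", "instructions", 180),
    ("P01", "task", 250), ("P02", "task", 190),
]

def fixation_metrics(records):
    """Fixation count and mean fixation duration across all records."""
    durations = [d for _, _, d in records]
    return {"count": len(durations), "mean_duration_ms": mean(durations)}

def percent_time_in_window(records, window):
    """Percentage of total fixation time spent in one window."""
    total = sum(d for _, _, d in records)
    in_window = sum(d for _, w, d in records if w == window)
    return 100.0 * in_window / total

print(fixation_metrics(fixations))
print(percent_time_in_window(fixations, "task"))
```

Comparing these per-participant summaries across the five language conditions is what underlies the significance tests reported in the results slides.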

Methods
- Post-task questionnaire; five-point Likert scale:
  - Comprehension
  - Task completion
  - Potential improvement
  - Future reuse
  - Recommendation
  - Recall

Methods
- Usability measures:
  - Satisfaction
  - Efficiency [task success / task time]
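The efficiency measure above is a simple ratio: successful task completion divided by the time taken. A minimal sketch, with per-condition success rates and times that are hypothetical placeholders rather than the study’s actual figures:

```python
# Efficiency as task success over task time, per language condition.
# The numbers below are illustrative assumptions, not study data.
def efficiency(success_rate, task_time_s):
    """Proportion of tasks completed successfully per second of task time."""
    return success_rate / task_time_s

conditions = {"EN": (0.95, 300.0), "JP": (0.70, 420.0)}

for lang, (success, seconds) in sorted(conditions.items()):
    print(lang, efficiency(success, seconds))
```

A higher value means users achieved more of the task per unit time, which is how a condition like EN can dominate both components at once.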

Eye Tracking
- Task time: lowest for EN [sig. JP]
- Fixation count and average duration: lowest for EN [sig. JP] for both
- Attentional shifts; percentage of time in each window:
  - EN and FR spent most time in the task window
  - EN showed the fewest shifts of attention [sig. JP]
- Textual regressions:
  - Raw number and distance: EN and SP [sig. JP]
  - ‘Long’ regressions: JP [sig. all others]

Questionnaire Results
- Comprehension: EN rated highest [sig. for FR and JP]
- Task completion: EN rated highest [sig. for JP]
- Potential improvement: SP & EN rated as needing least improvement, but could still be improved upon
- Future reuse: FR & EN rated highest
- Recommendation: EN rated highest [sig. for JP and DE]
- Recall: EN scored highest [sig. for JP and DE]

Usability Results
- Satisfaction: EN rated highest [sig. for FR, DE, and JP]
- Task completion: EN and SP more successful [sig. JP]
- Efficiency: EN most efficient [sig. JP and DE]

Conclusions
- So, just how usable is raw MT?
- Similar results for EN, SP, and FR; DE and JP more problematic [MT system]
- Functionally usable [more than just ‘gisting’]
- UX best for EN users
- MT viable for certain pairs
- Human intervention necessary to ensure the best UX

Questions?
stephen.doherty@dcu.ie
sharon.obrien@dcu.ie

This research is supported by Science Foundation Ireland (Grant 07/CE/I1142) as part of the Centre for Next Generation Localisation (www.cngl.ie) at Dublin City University.

Predictors of Positive UX
- Satisfied users: comprehension & task time
- Satisfied users: recommend to others
- Task completion: textual regressions
- Cognitive effort: instructions aiding task completion