Stephen Cox, Michael Lincoln and Judy Tryggvason, School of Information Systems, University of East Anglia, Norwich NR4 7TJ, U.K.; Melanie Nakisa, Royal National Institute for Deaf People, 19–23 Featherstone Street, London EC1Y 8SL, U.K.; Mark Wells, Marcus Tutt and Sanja Abbott, Televirtual Ltd., Anglia House, Agricultural Hall Plain, Norwich, U.K.

What is TESSA?
• TESSA is an experimental system that aims to aid transactions between a deaf person and a clerk in a post office.
• A speech recognizer recognizes the clerk's speech.
• A phrase lookup identifies the sequence of British Sign Language (BSL) signs corresponding to this speech.
• A 3-D digital avatar, TESSA, signs this sequence to the deaf user.
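The flow described above can be summarized as a three-stage pipeline: recognize the clerk's phrase, look up the corresponding signs, and have the avatar sign them. The sketch below is only an illustration of that flow, assuming invented function names, an example phrase, and made-up sign glosses; it is not the TESSA implementation.

def recognize_phrase(audio):
    """Stub for the speech recognizer, which in TESSA is constrained to a known phrase set."""
    # The real system uses a speech recognizer adapted to the clerk's voice;
    # here we simply return a fixed example phrase.
    return "how would you like to pay"

def lookup_signs(phrase, phrase_to_signs):
    """Map a recognized phrase to its pre-recorded sequence of BSL signs."""
    return phrase_to_signs.get(phrase, [])

def sign_with_avatar(sign_sequence):
    """Stub for the avatar: play back the motion data recorded for each sign."""
    for sign in sign_sequence:
        print(f"Avatar plays sign clip: {sign}")

# Hypothetical fragment of the phrase-to-signs dictionary (glosses are illustrative only).
phrase_to_signs = {
    "how would you like to pay": ["PAY", "HOW", "YOU", "WANT"],
}

sign_with_avatar(lookup_signs(recognize_phrase(None), phrase_to_signs))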

Scope and Aim of TESSA
• The system deals with a fairly limited vocabulary, restricted to Post Office (PO) transactions.
• It defines a limited set of phrases that can be identified and translated.
• It is important to note that, while it might seem simpler to just display the recognized speech as text on a screen, a large part of the deaf community has been profoundly deaf from birth.
• For these users BSL is the primary language and English is a second language.
• Hence, signing is more effective than English plain text.

System Design
The post office clerk wears a headset microphone. The speech recognizer is continuously active and responds when the clerk utters a phrase. Prior to system design, the researchers obtained transcripts of PO transactions and derived a list of 115 phrases that covered about 90% of everyday transactions.
Speaker adaptation: the speech recognizer can be adapted to an individual clerk's voice within an hour, substantially reducing errors when converting speech to text.
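To make the coverage figure concrete, the toy sketch below counts how many transactions consist entirely of phrases from a fixed "legal" list. The transcripts, phrases, and counting rule are invented for illustration; they are not the data from the study.

legal_phrases = {
    "can i see your passport please",
    "how would you like to pay",
    "that will be three pounds fifty",
}

transactions = [
    ["how would you like to pay", "that will be three pounds fifty"],
    ["can i see your passport please"],
    ["do you want next day delivery"],  # contains a phrase outside the legal set
]

# A transaction counts as covered only if every phrase in it is on the legal list.
covered = sum(all(p in legal_phrases for p in t) for t in transactions)
print(f"coverage: {covered / len(transactions):.0%}")  # -> coverage: 67%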

System Design
More sophisticated systems in the future will adapt to a wider, less restricted range of words. Currently, the clerk can use any of the words from the "legal" phrase list, and a probabilistic model identifies the intended phrase. The system can also be adapted to other spoken languages, so it has considerable potential for translation as well.
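The slide describes a probabilistic model that maps the recognized words onto one of the legal phrases. As a simple stand-in for that model, the minimal sketch below scores each legal phrase by word overlap with the recognition output; it only illustrates the idea and is not the method used in TESSA.

def best_phrase(recognized_words, legal_phrases):
    """Return the legal phrase that shares the most words with the recognition output."""
    recognized = set(recognized_words)
    def score(phrase):
        words = set(phrase.split())
        return len(recognized & words) / len(words)  # fraction of the phrase's words that were heard
    return max(legal_phrases, key=score)

legal_phrases = [
    "can i see your passport please",
    "how would you like to pay",
    "that will be three pounds fifty",
]

print(best_phrase("you like to pay how".split(), legal_phrases))  # -> how would you like to pay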

System Design
[Diagram: the interactions between the PO clerk and the deaf customer]

TESSA – The Avatar
• The simplest way to sign phrases would be to play video recordings of the signs for the words consecutively.
• This is not the best approach, since a system with a large vocabulary would require a very large library of such videos.
• Instead, the virtual human TESSA is capable of seamlessly joining the signed phrases.
• The exact position of the avatar can be manipulated.
• Using 18 motion sensors on each hand, magnetic sensors on the wrists and upper arms, and 18 sensors on the signer's face, the movements for the 115 phrases were recorded to drive the avatar.
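One reason an animated avatar can join phrases more smoothly than concatenated video is that recorded motion data can be interpolated between clips. The sketch below cross-fades the joint angles at the boundary of two toy clips; the frame format and blend length are assumptions for illustration, not details of the TESSA implementation.

def blend(frame_a, frame_b, t):
    """Linearly interpolate between two poses (lists of joint angles)."""
    return [a * (1 - t) + b * t for a, b in zip(frame_a, frame_b)]

def join_clips(clip_a, clip_b, blend_frames=5):
    """Concatenate two motion clips, cross-fading over the last few frames of the first."""
    joined = clip_a[:-blend_frames]
    for i in range(blend_frames):
        t = (i + 1) / (blend_frames + 1)
        joined.append(blend(clip_a[-blend_frames + i], clip_b[0], t))
    return joined + clip_b

# Two toy "clips"; each frame is a list of joint angles.
clip_a = [[0.0, 0.0]] * 10
clip_b = [[1.0, 1.0]] * 10
print(len(join_clips(clip_a, clip_b)))  # -> 20 frames, with a smooth transition in the middle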

TESSA – The Avatar Evaluated
• The researchers evaluated:
  – the quality of the signs
  – the difficulty of performing a transaction with TESSA
  – the scope for improving the interface for both deaf users and clerks
• Quality of signs:
  – measured how self-explanatory the signs were
  – measured how acceptable they were as expressions of the intended meaning

TESSA – The Avatar Evaluated
• Difficulty of transactions:
  – Transactions took about twice as long, but this can be attributed to the clerks' lack of training with the system.
  – The researchers believe this could be alleviated with a little more training time for the clerks.
• Feedback from both sides:
  – All clerks said communication with TESSA was "slightly easier" or "much easier".
  – The deaf participants said TESSA would be extremely helpful in handling PO problems ranging from mildly to highly complex.
  – Both sides look forward to a more comprehensive TESSA, which would dramatically simplify transactions.