Considerations for the Development and Fitting of Hearing-Aids for Auditory-Visual Communication
Ken W. Grant and Brian E. Walden
Walter Reed Army Medical Center, Army Audiology and Speech Center, Washington, DC

Typical Listening Environments for Multi-Memory Hearing Aids
- Quiet
- Background Noise
  - Low-frequency
  - High-frequency
  - Multiple Talkers
- Reverberation
- Music and Environmental Sounds

Auditory-Visual Listening Environments
Face-to-face communication is the most common of all listening environments. Should hearing aids be programmed differently when visual speech cues are available?

- If we improve auditory-only speech recognition, do we necessarily improve auditory-visual speech recognition?
- What speech information is provided by speechreading?
- What speech information is provided by hearing aids?
- To what extent is the information provided by speechreading and by hearing aids redundant?
- What frequency regions best convey information that is complementary to speechreading?

If we improve auditory-only speech recognition, do we necessarily improve auditory-visual speech recognition?
- Recognition of medial consonants (/aCa/) spoken by a female talker and recorded on optical disk.
- Manipulated the auditory intelligibility by band-pass filtering (see the sketch below).
- Compared A and AV speech recognition scores for normal-hearing subjects.
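The band-pass manipulation can be approximated with standard DSP tools. The sketch below is a minimal illustration: the sample rate, filter order, and cutoff frequencies are assumptions for demonstration only (they are not the filter conditions used in the study), and white noise stands in for a recorded /aCa/ token.

```python
# Minimal sketch of a band-pass manipulation of speech intelligibility.
# Sample rate, filter order, and cutoffs are illustrative assumptions.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def bandpass_speech(x, fs, lo_hz, hi_hz, order=6):
    """Zero-phase band-pass filter a speech waveform."""
    sos = butter(order, [lo_hz, hi_hz], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

fs = 16000
token = np.random.randn(fs)  # 1-s placeholder standing in for an /aCa/ token
# Three hypothetical filter conditions covering different frequency regions.
conditions = {"low": (100, 700), "mid": (700, 1400), "high": (1400, 2800)}
filtered = {name: bandpass_speech(token, fs, lo, hi)
            for name, (lo, hi) in conditions.items()}
```

Each filtered condition would then be presented auditorily (A) and with the accompanying video (AV) to obtain the paired recognition scores compared in the next figure.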

[Figure: Auditory-visual consonant recognition (%) plotted against auditory consonant recognition (%); r = 0.38.]

What speech information is provided by speechreading?
- Recognition of medial consonants (/aCa/) spoken by a female talker and recorded on optical disk.
- Speechreading only.
- Measured percent information transmission of voicing, manner, and place cues (the analysis is sketched after the figure below).

[Figure: Visual feature distribution, % information transmitted re: total information received. Place-of-articulation accounts for 93%; voicing, manner, and other features account for the remaining 0-4% each.]
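The voicing, manner, and place percentages come from a feature information transmission analysis of the consonant confusion matrix (in the style of Miller and Nicely): the confusion matrix is collapsed by feature category, the mutual information between presented and reported categories is computed, and the result is normalized by the input entropy. The sketch below uses a small hypothetical confusion matrix and feature labels purely to show the computation; the numbers are not the study's data.

```python
import numpy as np

def transmitted_information(confusions, categories):
    """Relative information transmitted for one feature.

    confusions: square count matrix (rows = presented consonant,
                columns = reported consonant).
    categories: feature category index for each consonant
                (e.g., 0 = voiceless, 1 = voiced).
    Returns T(x;y) / H(x), the proportion of feature information transmitted.
    """
    categories = np.asarray(categories)
    n_cat = categories.max() + 1
    # Collapse the consonant confusion matrix into a feature confusion matrix.
    feat = np.zeros((n_cat, n_cat))
    for i in range(n_cat):
        for j in range(n_cat):
            feat[i, j] = confusions[np.ix_(categories == i, categories == j)].sum()
    p = feat / feat.sum()
    px = p.sum(axis=1, keepdims=True)   # presented-category probabilities
    py = p.sum(axis=0, keepdims=True)   # reported-category probabilities
    with np.errstate(divide="ignore", invalid="ignore"):
        mi_terms = np.where(p > 0, p * np.log2(p / (px * py)), 0.0)
        h_terms = np.where(px > 0, px * np.log2(px), 0.0)
    return mi_terms.sum() / -h_terms.sum()

# Hypothetical 4-consonant example (/p, b, t, d/) with a voicing feature.
conf = np.array([[40, 5, 5, 0],
                 [4, 38, 2, 6],
                 [6, 1, 41, 2],
                 [0, 7, 3, 40]])
voicing = [0, 1, 0, 1]  # voiceless vs. voiced
print(f"Voicing information transmitted: {100 * transmitted_information(conf, voicing):.1f}%")
```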

HYPOTHESIS: The amount of benefit obtained from the combination of visual and auditory speech cues depends on the degree of redundancy between the two modalities.
- Speechreading provides information primarily about place-of-articulation.
- Hearing aids that provide primarily (redundant) place information will result in small AV benefit.
- Hearing aids that provide (complementary) voicing and manner information will result in large AV benefit.

[Figure: AV benefit (AV - A) plotted against auditory voicing+manner information and against auditory place information (both re: information received); one panel shows r = 0.88.]
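The relationship shown in the figure reduces to two correlations computed over filter conditions: AV benefit (AV minus A) against auditory place information, and AV benefit against auditory voicing-plus-manner information. The sketch below uses made-up per-condition scores chosen only to illustrate the computation and the expected direction of the effects; the values are not the study's data.

```python
import numpy as np

# Hypothetical per-condition proportions, purely illustrative.
a_scores       = np.array([0.35, 0.45, 0.55, 0.60, 0.70])  # auditory-only recognition
av_scores      = np.array([0.75, 0.80, 0.82, 0.78, 0.85])  # auditory-visual recognition
place_info     = np.array([0.20, 0.35, 0.55, 0.65, 0.80])  # place info (re: info received)
voicing_manner = np.array([0.70, 0.60, 0.45, 0.30, 0.25])  # voicing+manner info (re: info received)

av_benefit = av_scores - a_scores

r_place = np.corrcoef(place_info, av_benefit)[0, 1]
r_vm = np.corrcoef(voicing_manner, av_benefit)[0, 1]
print(f"AV benefit vs. place information:          r = {r_place:+.2f}")
print(f"AV benefit vs. voicing+manner information: r = {r_vm:+.2f}")
```

With data of this shape, benefit correlates negatively with (redundant) place information and positively with (complementary) voicing-plus-manner information, which is the pattern the hypothesis predicts.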

What speech information is provided by hearing aids?
- Twenty-five patients fit with the ReSound BT2 multi-band wide dynamic range compression hearing system.
- Recognition of medial consonants (/aCa/) under four receiving conditions (speech level at 50 dB SPL):
  - Unaided Listening (without hearing aid or visual cues)
  - Aided Listening (with hearing aid, no visual cues)
  - Unaided Speechreading (without hearing aid)
  - Aided Speechreading (with hearing aid)

[Figure: Mean audiograms, hearing threshold (dB HL) as a function of frequency (Hz), for right and left ears (N = 25 each). NU-6 word recognition: right M = 86.1% (SD 6.7%), left M = 85.1% (SD 6.8%).]

[Figure: Voicing, manner, and place scores for listening with hearing aid, listening with speechreading, and hearing aid plus speechreading, plotted against consonant recognition without hearing aid (%).]

To what extent is the information provided by speechreading and by hearing aids redundant?
- Amplification and speechreading provide somewhat redundant information.
  - The hearing aid provided information primarily about place-of-articulation. Smaller gains over unaided hearing were achieved for voicing and manner cues.
  - Speechreading provided substantial improvement over unaided hearing for place and some improvement for manner. No benefit for voicing.

What frequency regions best convey information that is complementary to speechreading?
- Auditory recognition of medial consonants (/aCa/) by normal-hearing subjects.
- Band-pass filtered speech conditions with equal Articulation Index (see the sketch below for how equal-AI bands can be chosen).
- Analyzed confusions for information transmission of voicing, manner, and place features.
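Equating band-pass conditions for Articulation Index amounts to choosing passbands whose summed band-importance weights, assuming full audibility within the band, reach the same target AI. A minimal sketch under that assumption, using a flat hypothetical band-importance function (real importance functions weight mid frequencies more heavily):

```python
import numpy as np

# Hypothetical one-third-octave band centers and a flat importance function
# that sums to 1.0; placeholders, not a published band-importance function.
center_freqs = np.array([200, 250, 315, 400, 500, 630, 800, 1000,
                         1250, 1600, 2000, 2500, 3150, 4000, 5000], dtype=float)
importance = np.full(center_freqs.size, 1.0 / center_freqs.size)

def band_for_target_ai(start_idx, target_ai):
    """Grow a contiguous passband upward from start_idx until its summed
    band importance (at full audibility) reaches target_ai."""
    ai, stop = 0.0, start_idx
    while stop < center_freqs.size and ai < target_ai:
        ai += importance[stop]
        stop += 1
    return center_freqs[start_idx], center_freqs[stop - 1], ai

# Example: bands with AI of roughly 0.2 starting at different lower edges.
for start in (0, 4, 8):
    lo, hi, ai = band_for_target_ai(start, 0.2)
    print(f"{lo:5.0f}-{hi:5.0f} Hz  ->  AI = {ai:.2f}")
```

Bands built this way carry equal predicted contributions to overall intelligibility, so differences in transmitted voicing, manner, or place information across bands reflect what kind of information each frequency region carries rather than how much.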

[Figures: Percent information transmitted for voicing (V), manner (M), and place (P) features as a function of band center frequency (Hz), for band-pass conditions equated at AI = 0.1, AI = 0.2, and AI = 0.3.]

SUMMARY
- Improving auditory speech recognition does not necessarily improve AV speech recognition.
- To improve AV speech recognition, A and V cues should be maximally complementary.
- Speechreading provides information about place-of-articulation.
- Hearing aids tend to provide mostly place information, making them somewhat redundant with speechreading.
- Complementary cues to speechreading (voicing and manner) are best conveyed by low frequencies.

Recommendations for designing hearing aids for Auditory-Visual speech communication
- Programming should focus on improving the recognition of voicing information and, to a lesser extent, manner-of-articulation information.
- Since voicing and manner cues are carried primarily by low frequencies, extend the frequency response to include this region.
- May need to consider the effects of compression on the low-frequency amplitude envelope (a sketch of one way to quantify this follows below).
- Traditional concerns about upward spread of masking may not be warranted under auditory-visual conditions.
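One way to quantify the compression concern in the third recommendation is to compare the modulation depth of the low-frequency amplitude envelope before and after compression. The sketch below uses a crude instantaneous compression rule and a Hilbert-transform envelope on a synthetic signal (a 500 Hz carrier with a 4 Hz, syllable-rate modulation); the compression rule, ratio, and signal parameters are assumptions for illustration, not a model of any particular hearing aid.

```python
import numpy as np
from scipy.signal import hilbert, butter, sosfiltfilt

fs = 16000
t = np.arange(fs) / fs
# Stand-in for a low-frequency speech band: 500 Hz carrier, 4 Hz envelope.
signal = (1.0 + 0.8 * np.sin(2 * np.pi * 4 * t)) * np.sin(2 * np.pi * 500 * t)

def compress(x, ratio=3.0):
    """Crude instantaneous compression: shrink envelope variations by `ratio`."""
    level = np.abs(hilbert(x)) + 1e-9
    return x * level ** (1.0 / ratio - 1.0)

def modulation_depth(x, fs):
    """Peak-to-trough depth of the smoothed amplitude envelope."""
    env = np.abs(hilbert(x))
    sos = butter(4, 20, btype="lowpass", fs=fs, output="sos")
    env = sosfiltfilt(sos, env)
    return (env.max() - env.min()) / (env.max() + env.min())

print(f"Envelope modulation depth, input:      {modulation_depth(signal, fs):.2f}")
print(f"Envelope modulation depth, compressed: {modulation_depth(compress(signal), fs):.2f}")
```

In this synthetic example the compressed output shows a clearly reduced modulation depth, the kind of envelope flattening that could degrade the low-frequency voicing and manner cues these recommendations aim to preserve.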

Acknowledgements
- NIH Grant DC00792
- ReSound Corporation, Redwood City, CA
- Department of Clinical Investigation, Walter Reed Army Medical Center, Washington, DC