Considerations for the Development and Fitting of Hearing-Aids for Auditory-Visual Communication Ken W. Grant and Brian E. Walden Walter Reed Army Medical Center Army Audiology and Speech Center Washington, DC
Typical Listening Environments for Multi-Memory Hearing Aids
Quiet
Background Noise
–Low-frequency
–High-frequency
–Multiple Talkers
Reverberation
Music and Environmental Sounds
Auditory-Visual Listening Environments
Face-to-face communication is the most common of all listening environments. Should hearing aids be programmed differently when visual speech cues are available?
If we improve auditory-only speech recognition, do we necessarily improve auditory-visual speech recognition?
What speech information is provided by speechreading?
What speech information is provided by hearing aids?
To what extent is the information provided by speechreading and by hearing aids redundant?
What frequency regions best convey information that is complementary to speechreading?
If we improve auditory-only speech recognition, do we necessarily improve auditory-visual speech recognition?
Recognition of medial consonants (/aCa/) spoken by a female talker and recorded on optical disk.
Manipulated auditory intelligibility by band-pass filtering.
Compared A and AV speech recognition scores for normal-hearing subjects.
What speech information is provided by speechreading?
Recognition of medial consonants (/aCa/) spoken by a female talker and recorded on optical disk.
Speechreading only.
Measured percent information transmission of voicing, manner, and place cues.
[Figure: Visual feature distribution — percent information transmitted re: total information received. Place dominates at 93%; voicing, manner, and other features account for the remainder (0%, 3%, 4%).]
HYPOTHESIS: The amount of benefit obtained from the combination of visual and auditory speech cues depends on the degree of redundancy between the two modalities.
Speechreading provides information primarily about place-of-articulation.
Hearing aids that provide primarily (redundant) place information will result in small AV benefit.
Hearing aids that provide (complementary) voicing and manner information will result in large AV benefit.
[Figure: Scatterplots of AV benefit (AV−A) against auditory voicing+manner information (re: information received) and against auditory place information (re: information received); r = 0.88 for one panel, the other r value missing.]
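The correlation analysis behind this slide can be sketched in a few lines of Python. This is an illustration only: the subject values below are hypothetical and invented for the sketch (only the r = 0.88 figure comes from the slide); the idea is simply that AV benefit is correlated with how much complementary (voicing+manner) information the auditory signal carries.

```python
import numpy as np

# Hypothetical per-condition values (for illustration only; the study's
# actual data are not reproduced here).
#   x = auditory voicing+manner information (re: information received)
#   y = AV benefit (AV minus A, in percentage points)
voicing_manner_info = np.array([0.15, 0.30, 0.45, 0.60, 0.80])
av_benefit = np.array([8.0, 14.0, 20.0, 27.0, 35.0])

# Pearson correlation: the hypothesis predicts a strong positive r when
# the auditory signal carries cues complementary to speechreading.
r = np.corrcoef(voicing_manner_info, av_benefit)[0, 1]
```

The same computation run with auditory *place* information on the x-axis would, under the hypothesis, yield a weak or negative correlation, since place cues are redundant with speechreading.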
What speech information is provided by hearing aids?
Twenty-five patients fit with the ReSound BT2 multi-band wide dynamic range compression hearing system.
Recognition of medial consonants (/aCa/) under four receiving conditions (speech level at 50 dB SPL):
–Unaided Listening (without hearing aid or visual cues)
–Aided Listening (with hearing aid, no visual cues)
–Unaided Speechreading (without hearing aid)
–Aided Speechreading (with hearing aid)
[Figure: Mean audiograms (hearing threshold in dB HL as a function of frequency in Hz) for right and left ears (N=25 each). NU-6 word recognition — Right: M=86.1% (SD 6.7%); Left: M=85.1% (SD 6.8%).]
[Figure: Voicing, manner, and place consonant recognition without hearing aid (%) compared with three conditions: listening with hearing aid, listening with speechreading, and hearing aid plus speechreading.]
To what extent is the information provided by speechreading and by hearing aids redundant?
Amplification and speechreading provide somewhat redundant information.
–The hearing aid provided information primarily about place-of-articulation. Smaller gains over unaided hearing were achieved for voicing and manner cues.
–Speechreading provided substantial improvement over unaided hearing for place and some improvement for manner, but no benefit for voicing.
What frequency regions best convey information that is complementary to speechreading?
Auditory recognition of medial consonants (/aCa/) by normal-hearing subjects.
Band-pass filtered speech conditions with equal Articulation Index.
Analyzed confusions for information transmission of voicing, manner, and place features.
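The feature information-transmission analysis referenced on this and earlier slides can be sketched as follows. This is a minimal Python illustration in the style of Miller and Nicely's method, not the authors' actual analysis code; the function names and the toy confusion matrix are assumptions. A consonant confusion matrix is collapsed by feature category (e.g. voiced vs. voiceless), and the transmitted information is expressed as a percent of the feature's input entropy:

```python
import numpy as np

def transmitted_information(counts):
    """Mutual information T(x;y) in bits from a stimulus-by-response count matrix."""
    p = counts / counts.sum()
    px = p.sum(axis=1, keepdims=True)        # stimulus (row) marginals
    py = p.sum(axis=0, keepdims=True)        # response (column) marginals
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = np.where(p > 0, p * np.log2(p / (px * py)), 0.0)
    return terms.sum()

def percent_feature_transmission(counts, feature_of):
    """Collapse a consonant confusion matrix by a feature (e.g. voicing)
    and return transmitted information as a percent of the feature's entropy."""
    cats = sorted(set(feature_of))
    idx = {c: i for i, c in enumerate(cats)}
    collapsed = np.zeros((len(cats), len(cats)))
    for s in range(counts.shape[0]):
        for r in range(counts.shape[1]):
            collapsed[idx[feature_of[s]], idx[feature_of[r]]] += counts[s, r]
    px = collapsed.sum(axis=1) / collapsed.sum()
    px = px[px > 0]
    h_feature = -(px * np.log2(px)).sum()    # input entropy of the feature
    return 100.0 * transmitted_information(collapsed) / h_feature

# Toy example: four consonants /p, b, t, d/ with voicing labels [0, 1, 0, 1].
# All confusions stay within voicing class, so voicing transmission is 100%.
toy = np.array([[5, 0, 5, 0],
                [0, 5, 0, 5],
                [5, 0, 5, 0],
                [0, 5, 0, 5]], dtype=float)
print(percent_feature_transmission(toy, [0, 1, 0, 1]))   # → 100.0
```

Running the same computation separately on voicing, manner, and place groupings of a single confusion matrix yields the per-feature percentages plotted in the following figures.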
[Figures: Percent information transmitted for voicing (V), manner (M), and place (P) as a function of band center frequency (Hz), for band-pass filter conditions equated at AI = 0.1, 0.2, and 0.3.]
SUMMARY
Improving auditory speech recognition does not necessarily improve AV speech recognition.
To improve AV speech recognition, A and V cues should be maximally complementary.
Speechreading provides information about place-of-articulation.
Hearing aids tend to provide mostly place information, making them somewhat redundant with speechreading.
Complementary cues to speechreading (voicing and manner) are best conveyed by the low frequencies.
Recommendations for designing hearing aids for auditory-visual speech communication
Programming should focus on improving the recognition of voicing information and, to a lesser extent, manner-of-articulation information.
Since voicing and manner cues are carried primarily by low frequencies, extend the frequency response to include this region.
May need to consider effects of compression on the low-frequency amplitude envelope.
Traditional concerns about upward spread of masking may not be warranted under auditory-visual conditions.
Acknowledgements
NIH Grant DC00792
ReSound Corporation, Redwood City, CA
Department of Clinical Investigation, Walter Reed Army Medical Center, Washington, DC