1 What matters more, the right information or the right place?
TRACK: Behavior and the Brain
SYMPOSIUM: Auditory Learning in Profoundly Deaf Adults with Cochlear Implants. Monday, February 16, 12:30 p.m. - 2:00 p.m.
TITLE: What matters more, the right information or the right place?
AUTHORS: Stuart Rosen and Andrew Faulkner, Dept. of Phonetics & Linguistics, University College London
SPEAKER: Stuart Rosen. Telephone:

2 Cochlear implants have proved a boon for deaf people, providing access to the auditory world. Yet there is still a great deal of variability in outcome among people given implants, and there are many possible reasons for this. The one we focus on here is the depth to which the electrode array is inserted into the cochlea. All current cochlear implants try to mimic the normal working of the inner ear by dividing sound into separate frequency bands and delivering those bands to the appropriate nerve fibres. In the normal ear, different groups of auditory nerve fibres innervate different parts of the cochlea, with each part of the cochlea most sensitive to a particular sound frequency. For children, who have little auditory experience and much capacity to adapt, exactly which nerve fibres receive a given band of information is probably not very important.

3 But for deafened adults, with many years of auditory experience, the match between frequency band and auditory nerve fibres seems to be very important, at least initially. Matching frequency bands to cochlear locations across the full frequency range of hearing requires a complete insertion of the electrode array, deep into the cochlea. But electrodes are often not inserted fully, for a variety of reasons. Because frequency tuning in the normal cochlea runs from high to low as one moves from the base (where the electrode is inserted) to the apex, an incomplete insertion means that the lower-frequency regions of the cochlea are not reached. There are then two obvious choices in the case of incomplete insertion. One can match frequency bands to cochlear locations, but lose the low-frequency information corresponding to the cochlear locations the electrode does not reach. Or one can decide which frequency regions of the speech are most important to preserve, and present them to the electrodes regardless of their position. This is equivalent to shifting the frequency content, or spectrum, of the sounds upwards. The drawback of this approach is that speech is then presented with a different frequency content.
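The size of the mismatch produced by a short insertion can be put in rough numbers with Greenwood's (1990) place-frequency function for the human cochlea. The sketch below is illustrative only: the constants are Greenwood's published human fit, but the 35 mm cochlear length and the example place are assumptions for illustration, not values taken from these studies.

```python
def greenwood_frequency(x_mm, cochlea_length_mm=35.0):
    """Greenwood (1990) place-frequency map for the human cochlea.

    x_mm is distance from the apex; returns the characteristic
    frequency (Hz) at that place. Constants are Greenwood's human fit;
    the 35 mm total length is an assumed typical value.
    """
    x = x_mm / cochlea_length_mm          # proportion of cochlear length
    return 165.4 * (10 ** (2.1 * x) - 1)  # characteristic frequency, Hz

# Illustrative example (the 20 mm place is an assumption): a band
# intended for one place, delivered 6.5 mm more basally, lands on
# fibres tuned well over an octave higher.
f_intended = greenwood_frequency(20.0)
f_delivered = greenwood_frequency(20.0 + 6.5)
print(f"intended ~{f_intended:.0f} Hz place, "
      f"delivered to a ~{f_delivered:.0f} Hz place")
```

This kind of calculation is what makes a 6.5 mm shortfall sound so drastic: the shift is large on a log-frequency scale, where speech information lives.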

4 Because of the difficulty of doing well-controlled studies in genuine cochlear implant patients, our studies (and many performed by others) have used simulations of the sounds that cochlear implants deliver, played to normal-hearing listeners (examples are available as WAV files). Initial studies by Bob Shannon and his colleagues (another speaker at this symposium) showed that shifting the frequency content of speech sufficiently far can cause an immediate and dramatic drop in intelligibility. We replicated this finding, simulating an electrode array inserted 6.5 mm short: for understanding words in sentences, performance dropped from about 65% correct to 0%. Audio examples of these stimuli, and spectrograms, or 'voiceprints', of them, appear on the next page. (A spectrogram shows the dynamic frequency content of a sound: time runs along the x-axis, frequency along the y-axis, and the darker the trace, the more energy in that frequency region at that time.)
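The simulations referred to here are noise-excited vocoders of the kind introduced by Shannon and colleagues: split the speech into bands, keep only each band's envelope, and use it to modulate band-limited noise. The following is a minimal numpy sketch of the general technique, not the processing used in these studies; the FFT-mask filtering, channel count, band edges and 10 ms envelope smoothing are all illustrative assumptions.

```python
import numpy as np

def noise_vocode(signal, fs, n_channels=12, f_lo=100.0, f_hi=5000.0, seed=0):
    """Crude noise-excited vocoder of the kind used in CI simulations.

    The signal is split into log-spaced bands, each band's envelope is
    extracted and used to modulate noise limited to the same band, and
    the modulated bands are summed. Filtering is done by FFT masking,
    a simplification of the filter banks in the real studies.
    """
    rng = np.random.default_rng(seed)
    n = len(signal)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)  # log-spaced bands
    spectrum = np.fft.rfft(signal)
    out = np.zeros(n)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (freqs >= lo) & (freqs < hi)
        band = np.fft.irfft(spectrum * mask, n)
        # Envelope: rectify, then smooth with a ~10 ms moving average.
        k = max(1, int(0.01 * fs))
        env = np.convolve(np.abs(band), np.ones(k) / k, mode="same")
        # Modulate noise restricted to the same band.
        noise = rng.standard_normal(n)
        noise_band = np.fft.irfft(np.fft.rfft(noise) * mask, n)
        out += env * noise_band
    return out
```

Speech processed this way stays intelligible with surprisingly few channels when the bands sit at their natural places; the shift experiments then move the output bands upward relative to the analysis bands.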

5 Simulations of incomplete insertions: 0 mm, 2.2 mm, 4.3 mm and 6.5 mm short (12 noise-vocoded channels)

6 But the listeners in these first experiments were given no chance to adapt to the altered stimuli. The acoustic characteristics of speech vary a great deal from person to person, because of differences in sex, age, size, accent and emotional state, among other things. So one essential aspect of our abilities as perceivers of speech is to adapt to changes in the particular acoustic form of the speech. In fact, people can adapt to extreme changes in the form of speech. One example we have recently studied (in an MSc thesis by Ruth Finn) is speech that has had its spectrum rotated. See the next page for spectrograms and audio examples of a sentence in its normal form (top) and rotated (bottom), so that its spectrogram looks upside down.

7 A more extreme transformation: spectrally-rotated speech. Rotate the spectrum of speech around 2 kHz (Blesser, 1972): low frequencies become high, and vice versa. This preserves aspects of voice melody and buzziness/noisiness. "She cut with her knife."
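Rotation about 2 kHz maps a component at f Hz to 4000 − f Hz. Below is a simple FFT-based sketch of the idea; Blesser's original implementation used modulation rather than FFTs, and the function name and pivot handling here are illustrative assumptions.

```python
import numpy as np

def rotate_spectrum(signal, fs, pivot_hz=2000.0):
    """Rotate the spectrum about pivot_hz: a component at f Hz moves to
    (2*pivot - f) Hz, so low frequencies become high and vice versa.
    Components above 2*pivot are discarded. Assumes fs > 4*pivot.
    FFT-based sketch; Blesser (1972) used analogue modulation.
    """
    n = len(signal)
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    in_band = freqs <= 2.0 * pivot_hz
    rotated = np.zeros_like(spec)
    # Reversing the in-band bins mirrors f onto 2*pivot - f; irfft
    # enforces the symmetry needed for a real output signal.
    rotated[in_band] = np.conj(spec[in_band][::-1])
    return np.fft.irfft(rotated, n)
```

A 500 Hz component in the input emerges at 3500 Hz in the output, which is why the rotated spectrogram looks upside down.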

8 The following box plots show performance at identifying words in unknown spectrally-rotated sentences, for a male and a female speaker, for two groups of listeners. The control group were simply tested on three occasions, with no other experience of rotated speech. The experimental group were tested before, in the middle of, and after 6 hours of training by a female speaker using live connected speech that was spectrally rotated. The median, or 'typical', score is shown by the horizontal bar in the middle of each box. Note that performance in both groups is very low before training, and does not change for the controls. For the group who received training, performance increased significantly. We suspect that performance for the female speaker improves faster because all the training involved a female speaker, albeit a different one. Also note that 6 hours of training is very little compared to what one would experience through even a few days of normal listening, so we expect that performance would improve further with more training.

9 Identifying words in sentences (sound alone): male speaker | female speaker

10 To return to the question of adapting to incomplete electrode insertions: we simulated an insertion that was 6.5 mm too shallow, and chose to preserve the full frequency range of the speech rather than lose the low frequencies. As mentioned before, this is equivalent to an upward shift in the frequency content of the speech. We trained listeners on this shifted speech for 3 hours. The following box plots show performance at identifying words in unknown sentences before, during, and after training. Even after so little training, performance rises from near zero to about half the level possible with the unshifted stimuli. (Imperfect performance even for the unshifted stimuli is due to several aspects of the processing meant to simulate what cochlear implants deliver, including smearing of spectral detail.) Rosen, S., Faulkner, A., & Wilkinson, L. (1999) Adaptation by normal listeners to upward spectral shifts of speech: implications for cochlear implants. J Acoust Soc Am 106:

11 Deleterious effects of spectral shifting can be ameliorated through experience. Words in sentences over 3 hours of experience of continuous speech. Pre-training → Post-training

12 Given that listeners can adapt, at least partially, to spectral shifts, there remains the question of how much better this is than simply avoiding the need for adaptation by matching frequency information to cochlear place (but losing some low-frequency components of the speech). The next slide gives audio examples and spectrograms of the effect of a fully inserted electrode (top) compared to one 8 mm short of full insertion (bottom). Note the higher range of frequencies (not very important for intelligibility) and the missing low-frequency components for the short insertion.

13 We therefore made a direct comparison of shifted vs. matched conditions in a crossover training study, with 3 hours of training per condition. Looking only at results for sentences, the box plots on the next page show that for the male talker performance is always better in the shifted condition, whereas for the female talker matched is better. This is easily understood, given that male voices have more crucial information in the low-frequency region that is lost in the matched condition. On the other hand, performance in the shifted condition benefits more from training. This too is easily understood: in the shifted condition the information is still present, but presented in a new way, whereas in the matched condition crucial information is lost. It also seems likely that with further training, performance in the shifted condition would improve further still, leading to better performance even for the female talker. Faulkner, A., Rosen, S., & Norman, C. (2001) The right information matters more than frequency-place alignment: simulations of cochlear implant processors with an electrode array insertion depth of 17 mm. Speech, Hearing and Language: Work in Progress 13:

14 Shifting vs. matching: sentences
Male talker: shifted > matched. Significant training effect: training helps in matched only when it comes first.
Female talker: shifted < matched. Significant training effect: training helps more in shifted.

15 In a final example of the ability of listeners to adapt to transformations of the speech signal, our PhD student Matt Smith has been simulating the effects of a missing region of nerve fibres, a so-called 'hole' in hearing. As the next slide illustrates, it is normally assumed that the residual auditory nerve fibres necessary for the functioning of a cochlear implant are spread reasonably uniformly through the cochlea.

16 'Normal' cochlear representation (diagram: an analysis filter bank mapped onto cochlear location, from high frequency at the base to low frequency at the apex)

17 But suppose auditory nerve fibres do not survive in a region normally tuned to the crucial mid-frequency range of speech. Again we have two choices. We can preserve the frequency-to-electrode relationship and avoid the need for adaptation, accepting that a frequency region is dropped from the signal, as illustrated in the next slide.

18 A 'hole' can mean dropped frequencies (diagram: analysis filter bank mapped onto cochlear location, base to apex, with the bands over the hole discarded)

19 Or, as in the shifted condition earlier, we can warp the representation so that frequencies from the hole are delivered to adjacent regions, as illustrated in the next slide. Here, though, we would expect adaptation to the altered acoustic structure of speech to be necessary.
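The difference between the two treatments of a hole comes down to how analysis bands are routed to the surviving cochlear places. The toy sketch below is not the processing used in the study: the function name is invented, and the simple proportional warp by band index stands in for the frequency-based allocation a real processor would use.

```python
def allocate_bands(n_bands, hole):
    """Two ways to route analysis bands to electrode places when the
    places in `hole` (a set of place indices) have no surviving neurons.

    'dropped': keep the frequency-to-place match; bands over the hole
    are discarded (information lost).
    'warped': squeeze all bands onto the surviving places in order
    (information kept, but delivered to shifted places).
    Both are returned as dicts mapping band index -> place index.
    """
    alive = [p for p in range(n_bands) if p not in hole]
    dropped = {b: b for b in range(n_bands) if b not in hole}
    # Warp: spread all n_bands proportionally across surviving places.
    warped = {b: alive[round(b * (len(alive) - 1) / (n_bands - 1))]
              for b in range(n_bands)}
    return dropped, warped

# Illustrative case: 12 bands, mid-frequency hole over places 5-7.
dropped, warped = allocate_bands(12, {5, 6, 7})
```

In the dropped map three bands simply vanish; in the warped map all twelve survive, but bands near the hole land on neighbouring places, which is exactly the altered structure listeners must adapt to.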

20 A 'hole' can mean warped frequencies (diagram: analysis filter bank mapped onto cochlear location, base to apex, with bands rerouted around the hole)

21 Audio examples and spectrograms

22 The box plots on the next slide show performance in the three conditions as listeners receive 3 hours of experience in the warped condition. Interestingly, even before any training the warped condition gives better performance than the dropped condition, and performance then increases markedly for the warped condition but relatively little for the dropped one. This study shows a clear advantage for altering the acoustic structure of speech so as to preserve crucial information, because people show a great deal of plasticity in adapting to altered acoustic structure.

23 Adapting to a warped spectrum

24 Conclusions
Adaptation to spectrally shifted, warped and rotated speech suggests considerable plasticity and scope for learning by adult listeners in general, and implant users in particular.
This ability needs to be allowed for in any study of speech processing, simulated or real: perceptual testing without an opportunity for learning is likely to seriously underestimate the intelligibility of signals whose acoustic structure has been transformed.
Speech processors should deliver the most informative frequency range irrespective of electrode position.

25 WAV files available: I
ice_cream_no_shift.WAV (slide 5, left column, top): "The ice cream was pink." Simulation of a cochlear implant (CI) with full electrode insertion.
ice_cream_22mm_shift.WAV (slide 5, left column, one down): "The ice cream was pink," shifted condition. Simulation of a CI electrode incompletely inserted by 2.2 mm.
ice_cream_43mm_shift.WAV (slide 5, left column, two down): "The ice cream was pink," shifted condition. Simulation of a CI electrode incompletely inserted by 4.3 mm.
ice_cream_65mm_shift.WAV (slide 5, left column, bottom): "The ice cream was pink," shifted condition. Simulation of a CI electrode incompletely inserted by 6.5 mm.
buying_bread_normal.WAV (slide 7, top spectrogram): "They're buying some bread." Normal speech.
buying_bread_rotate.WAV (slide 7, bottom spectrogram): "They're buying some bread." Spectrally-rotated speech.
cut_knife_normal.WAV (slide 7, bottom left): "She cut with her knife." Normal speech.
cut_knife_rotated.WAV (slide 7, bottom right): "She cut with her knife." Spectrally-rotated speech.
green_tomatoes_0mm_match_m.WAV (slide 12, top spectrogram): "The green tomatoes are small." Simulation of a CI with full electrode insertion.
green_tomatoes_8mm_match_m.WAV (slide 12, bottom spectrogram): "The green tomatoes are small," matched condition. Simulation of a CI electrode incompletely inserted by 8 mm.

26 WAV files available: II
birch_normal.wav (slide 21, top spectrogram): "The birch canoe slid on the smooth planks." Simulation of a cochlear implant (CI) with full representation of auditory nerve fibres.
birch_dropped.wav (slide 21, middle spectrogram): "The birch canoe slid on the smooth planks," dropped condition. Simulation of a CI with a mid-frequency hole in auditory nerve fibres, preserving the frequency-to-cochlear-place relationship; information is lost.
birch_warped.wav (slide 21, bottom spectrogram): "The birch canoe slid on the smooth planks," warped condition. Simulation of a CI with a mid-frequency hole in auditory nerve fibres, in which the representation of the frequency information is warped; information is preserved, but represented differently.

27 Thank you!