1 Possible Implications of Interaural Mismatch in the Place-of-Stimulation on Spatial Release from Masking in Cochlear Implant Listeners
Alan Kan, Corey Stoelb, Matthew Goupell, Ruth Litovsky

2 Cochlear Implants (CI) restore hearing
Cochlear implants are used to restore hearing in profoundly deaf individuals by converting acoustic sound into electrical signals that stimulate the auditory nerve, taking advantage of the tonotopic organization of the cochlea. Bilateral cochlear implantation has become much more common in the last 8-10 years.

3 Binaural cues tell us where sounds come from
Theoretically, with bilateral implantation, binaural cues become available to the cochlear implant user: interaural time differences (ITDs), the difference in a sound's time of arrival at the two ears, and interaural level differences (ILDs), the difference in its intensity at the two ears. These cues should ideally allow the bilateral user to tell where sounds are coming from on the horizontal plane. Research has shown that bilateral users perform better than unilateral users at sound localization and at understanding speech in noisy environments, which you would expect, since they have access to binaural cues.
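To make the ITD cue concrete (a background illustration of ours, not a slide from the talk), the classic Woodworth spherical-head formula approximates the ITD of a distant source from its azimuth; the head radius and speed of sound below are common textbook values, and ILDs are left out because they are strongly frequency dependent and have no comparably simple closed form:

```python
import math

def woodworth_itd_s(azimuth_deg, head_radius_m=0.0875, speed_of_sound=343.0):
    # Woodworth spherical-head approximation for a distant source:
    # ITD = (a / c) * (theta + sin(theta)), with theta the azimuth in radians
    # measured from straight ahead (positive = source on the right).
    theta = math.radians(azimuth_deg)
    return (head_radius_m / speed_of_sound) * (theta + math.sin(theta))

for azimuth in (0, 30, 60, 90):
    print(f"azimuth {azimuth:2d} deg -> ITD ~ {woodworth_itd_s(azimuth) * 1e6:3.0f} us")
```

At 90 degrees this gives roughly 650 µs, in line with the ITD magnitudes tested later in the talk.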

4 Spatial Release from Masking (SRM)
The improvement in speech-in-noise understanding gained from a separation between target and masker sources. Another expected benefit of bilateral implantation is spatial release from masking, or SRM, since implantees can, to some degree, tell where sounds are coming from.
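The talk does not spell out the arithmetic, but under the usual convention SRM is the difference between the speech reception thresholds (SRTs) measured with the masker co-located with the target and with the masker moved away; a minimal sketch:

```python
def srm_db(srt_colocated_db, srt_separated_db):
    # Spatial release from masking: the drop in speech reception threshold
    # (SRT) when target and masker are separated. Positive = separation helped.
    return srt_colocated_db - srt_separated_db

# e.g. the SRT improves from +2 dB SNR (co-located) to -4 dB SNR (separated):
print(srm_db(2.0, -4.0))  # 6.0 dB of spatial release
```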

5 SRM in CI users and normal hearing listeners
This figure, taken from Loizou et al. (2009), compares binaural listening in CI users and normal hearing (NH) listeners. I particularly want to focus on the first row, which shows the data for an SRM condition. SRM for CI users is on average about 2-4 dB, whereas in NH listeners it is much higher, at 6-8 dB. So CI users seem to get a small amount of SRM, but not as much as NH listeners, and much of their SRM is due to head shadow rather than a true benefit of spatial release. There are a few possible reasons why CI users show so little SRM: neural survival after deafness; how well the acoustic sounds are encoded and transmitted as electrical signals; and surgical issues leading to differences in the insertion depths of the implants in the two cochleae. This last issue is the one I want to focus on for the rest of my talk.

6 Interaural place-of-stimulation mismatch
[Figure: left and right electrode arrays (electrodes 2-22, base to apex) alongside a 20 Hz-20,000 Hz frequency axis, with example bands at 4000 Hz and 7000 Hz.] Let me explain what I mean by an interaural mismatch and its implications. Imagine two electrode arrays, one inserted into each cochlea but at different depths. Typically, a CI speech processor divides acoustic information into different frequency bands, and the information in each band is sent to the electrode number allocated to that band. However, if the electrode arrays are not inserted to the same depth, the same frequency band may excite a different region of the cochlea in each ear, leading to an interaural mismatch in the acoustic information being sent to the brain for processing. Now imagine that the acoustic information contained cues for a sound source location, that is, ITD and ILD cues: how would the brain interpret this information, particularly if frequency matching is important for ITD/ILD processing physiologically? The answer might affect how much advantage CI users can get from SRM.
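One way to put numbers on such a mismatch (again our illustration, not from the talk) is the Greenwood (1990) place-frequency function for the human cochlea; the insertion depths below are hypothetical, chosen only to show how a 3 mm offset shifts the characteristic frequency at the stimulated place:

```python
def greenwood_hz(dist_from_apex_mm, cochlea_length_mm=35.0):
    # Greenwood (1990) place-frequency map for the human cochlea:
    # F = 165.4 * (10 ** (2.1 * x) - 0.88), x = fraction of length from apex.
    x = dist_from_apex_mm / cochlea_length_mm
    return 165.4 * (10 ** (2.1 * x) - 0.88)

# Hypothetical example: the "same" electrode ends up 3 mm more basal
# (2 electrode spaces at 1.5 mm each) in the right cochlea than in the left.
left_hz = greenwood_hz(15.0)   # assumed place of the left-ear contact
right_hz = greenwood_hz(18.0)  # same contact, shallower insertion on the right
print(f"left ~{left_hz:.0f} Hz vs right ~{right_hz:.0f} Hz")
```

Under these assumed depths, a 2-electrode mismatch moves the stimulated place from roughly 1200 Hz to roughly 1800 Hz, a substantial interaural discrepancy.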

7 Mismatch affects SRM Here's some SRM data that we've been collecting in NH listeners using a CI simulator with different amounts of interaural mismatch. Here, I'm plotting SRM as a function of interaural mismatch, shown in mm along the cochlea. There is quite a bit of individual variation in how much mismatch affects SRM. For SYV, SRM is fairly consistent except at large mismatches, where performance decreases significantly; for SRE, performance drops dramatically for negative mismatches but not positive ones; and for SRS, performance increases at positive mismatches. What might cause these individual differences? I believe some of our work investigating the effect of mismatch on sound image fusion and binaural sensitivity might shed some light on this matter.

8 What is the effect of mismatch on sound image fusion?
[Figure: left and right electrode arrays (electrodes 2-22, base to apex) on a 20 Hz-20,000 Hz frequency axis.] We conducted a study that examined the effect of interaural mismatch on sound image fusion in CI listeners.

9 Find a matched pair via pitch matching
[Figure: left and right electrode arrays with the pitch-matched pair (Δ=0) marked.] To set up this experiment, an interaurally pitch-matched pair near the middle of the electrode array was found using two tasks. This is a common technique, and it assumes that electrodes that have the same pitch excite the same area of the cochlea in the two ears. First, a pitch-rating task was used, in which the subject rated the pitch of the sound presented on one of the electrodes on a scale from 1 to 100, where 1 was a low-pitched sound and 100 a high-pitched sound. Here is the result for one subject: the electrodes that sounds were presented on are shown on the x-axis and the range of ratings on the y-axis, with the markers showing the average rating over 10 trials and the error bars the standard deviation. On average there is a trend, but there is also high variability, so we used this task only as a guide for picking electrodes to test in the second task. In the second task, subjects compared the pitch of two sounds across the ears, responding whether the second sound was "much lower", "lower", "the same", "higher" or "much higher" in pitch than the first. We kept one electrode fixed (here, L12) and varied the electrode we stimulated on the other side, looking for the pair with the highest number of "same" responses (here, L12 and R11).
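The selection rule at the end of that procedure reduces to a simple tally of "same" responses per candidate pair; a sketch with made-up responses (the electrode numbers and answers below are invented for illustration):

```python
from collections import Counter

# Hypothetical responses from the across-ear comparison task: the left
# electrode is fixed at L12, each trial pairs it with one right-ear
# electrode, and the listener answers "much lower" ... "much higher".
responses = [
    (10, "higher"), (10, "same"),  (10, "higher"),
    (11, "same"),   (11, "same"),  (11, "same"),
    (12, "lower"),  (12, "same"),  (12, "lower"),
]

same_counts = Counter(elec for elec, answer in responses if answer == "same")
best_electrode, n_same = same_counts.most_common(1)[0]
print(f"pitch-matched pair: (L12, R{best_electrode}) with {n_same} 'same' responses")
```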

10 Mismatch conditions
[Figure: left and right electrode arrays with the tested conditions Δ = 0, ±2, ±4, ±8 marked, at 1.5 mm electrode spacing.] We then held one of the electrodes in the pair constant and varied the electrode we stimulated in the contralateral ear to create mismatched pairs. For this talk, I will use delta to describe the amount of mismatch in terms of the number of electrodes away from the pitch-matched pair; for example, Δ=2 means a mismatch of 2 electrode spaces, or 3 mm. Positive deltas mean the electrode in the right ear is at a higher-frequency place than the left-ear electrode, and negative deltas mean the left-ear electrode is at a higher-frequency place than the right.

11 What do you hear? In the experiment, subjects described what they heard in a 10-alternative forced-choice task. This is the GUI for the experiment; the 10 categories were divided into 3 groups: the top row for a single fused image, the middle row for a split image, and the bottom row for a diffuse sound image. The listeners were 9 post-lingually deafened Cochlear Nucleus implant users. The stimuli were 300-ms, 100-pulse-per-second, constant-amplitude pulse trains delivered directly to the implants using synchronized research processors, with no ITD or ILD applied. There were 20 trials per mismatch condition, for 140 trials per CI user.
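As a rough sketch of the pulse-train timing only (the actual stimuli were delivered by synchronized research processors, and per-subject current levels are not modeled here):

```python
def pulse_onsets_ms(duration_ms=300.0, rate_pps=100.0):
    # Onset times of a constant-amplitude pulse train: one pulse every
    # 1000 / rate_pps milliseconds for the duration of the stimulus.
    period_ms = 1000.0 / rate_pps
    onsets, t = [], 0.0
    while t < duration_ms:
        onsets.append(t)
        t += period_ms
    return onsets

train = pulse_onsets_ms()
print(len(train), train[:3])  # 30 pulses: [0.0, 10.0, 20.0]
```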

12 Mismatch affects where a sound is perceived
Orientation: the response icons are along the y-axis and mismatch is along the x-axis (left end: left ear higher in frequency; right end: right ear higher in frequency), with Δ=0 the pitch-matched pair. Color represents the percentage of trials at a given mismatch on which a given category was chosen. Things to note: a single fused image, centered at Δ=0, which lateralizes with mismatch (here, subject IBD follows the LOW frequencies).

13 Mismatch affects people differently
Things to note: this subject follows the high frequencies, and shows a split image at large mismatch. (Axes as before: left end of the x-axis, left ear higher in frequency; right end, right ear higher in frequency.)

14 Increasing mismatch leads to more split sound images
If I pool the group data and count the trials on which multiple sound sources were perceived, we see that the percentage of such trials increases with increasing mismatch.
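The pooling itself is just a per-condition tally; a sketch with invented trial data (the categories and counts below are made up, only the bookkeeping is real):

```python
# Hypothetical pooled trials as (mismatch delta, response category) pairs;
# here only the "split" category counts as multiple perceived images.
trials = [(0, "single"), (0, "single"), (2, "single"), (2, "split"),
          (4, "split"), (4, "single"), (8, "split"), (8, "split")]

MULTIPLE = {"split"}
for delta in sorted({d for d, _ in trials}):
    cats = [c for d, c in trials if d == delta]
    pct = 100.0 * sum(c in MULTIPLE for c in cats) / len(cats)
    print(f"delta={delta}: {pct:3.0f}% multiple-image trials")
```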

15 Most people lateralize towards high frequencies
Here I've plotted the proportion of subjects who are "high-frequency followers", "low-frequency followers" and "multiple-image perceivers", and the proportions are not the same: about 2/3 of subjects fuse the sound and lateralize towards the side of the higher-frequency electrode.

16 Possible Implications
What are the possible implications? If we have a target talker in front, a small mismatch would lead to a lateralized sound image (which would happen for most listeners), while at a large mismatch the target talker may become a split image.

17 How does mismatch affect sounds off the mid-line?
Typical values: ILD = 0, ±2, ±5, ±10 CU; ITD = 0, ±100, ±200, ±400, ±800 µs; negative values mean the sound is to the left. Now, what about a sound located off the mid-line? We take the same stimuli and now apply either an ILD or an ITD. ILDs were applied by changing the current levels at each ear, which changed the loudness of the stimulus at one ear relative to the other, and ITDs were applied by delaying the onset of one sound relative to the other. Subjects were asked to indicate on the GUI how many sounds they heard and where they heard each sound; for each sound perceived, a colored bar appeared on the face. Here, we have an example of two sounds. Subjects were also asked to rank the dominance of each sound, placing the location of the primary or "most dominant" sound on the top bar, and so on.
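A sketch of how such cues could be imposed on a bilateral pulse train, under stated assumptions: the base level of 180 CU is hypothetical, and splitting the ILD evenly across the two ears is our choice for the illustration, not necessarily how the research processors were programmed:

```python
def apply_itd_ild(onsets_ms, level_cu, itd_us=0.0, ild_cu=0.0):
    # Impose interaural cues on an otherwise identical pair of pulse trains:
    # an ITD delays the onset of the lagging ear's train, and an ILD offsets
    # the current level (CU = clinical current units) between the ears.
    # Positive values favor the right ear, matching the slide's convention
    # that negative values put the sound to the left.
    left = {"onsets_ms": list(onsets_ms), "level_cu": float(level_cu)}
    right = {"onsets_ms": list(onsets_ms), "level_cu": float(level_cu)}
    delay_ms = abs(itd_us) / 1000.0
    if itd_us > 0:    # right ear leads, so the left train starts later
        left["onsets_ms"] = [t + delay_ms for t in onsets_ms]
    elif itd_us < 0:  # left ear leads
        right["onsets_ms"] = [t + delay_ms for t in onsets_ms]
    right["level_cu"] += ild_cu / 2.0  # e.g. ILD = +10 CU: right up 5 CU,
    left["level_cu"] -= ild_cu / 2.0   # left down 5 CU
    return left, right

left, right = apply_itd_ild([0.0, 10.0, 20.0], level_cu=180, itd_us=400)
print(left["onsets_ms"][0], right["onsets_ms"][0])  # left lags: 0.4 vs 0.0 ms
```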

18 Good lateralization at no mismatch
Here’s data for the same 2 subjects as before. We see that with no mismatch, these subjects hear the sound lateralize from the left to the right with changes in ILD.

19 Mismatch affects ILDs Now, with large mismatches, we see that the lateralization curves shift, such that 0 ILD is no longer heard at the center of the head. Subjects still appear able to lateralize sounds, since the loudness change seems to compensate for the shift.

20 Mismatch affects ILDs If I show the intermediate mismatches, we see that there is a systematic shift of the lateralization curves.

21 Mismatch affects ITDs even more
However, with ITDs, subjects are unable to lateralize when there is a large mismatch in the place of stimulation across the ears.

22 Non-midline crossings increase with mismatch
So how many subjects are unable to lateralize sounds as mismatch increases? Here, I've plotted the percentage of subjects whose lateralization curves do not cross the midline at different amounts of mismatch. With increasing mismatch, a growing number of subjects have lateralization curves that never cross the midline, and mismatch affects ITD lateralization more than ILD lateralization. A simple way to operationalize this measure is sketched below.
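The data below are invented; the check is simply whether a listener's lateralization responses (centered at zero) ever land on both sides of center across the tested cue range:

```python
def crosses_midline(lateralization_responses):
    # A curve "crosses the midline" if responses fall on both sides of
    # center (negative = left of center, positive = right of center).
    sides = {(-1 if r < 0 else 1) for r in lateralization_responses if r != 0}
    return len(sides) == 2

# Hypothetical listener whose ITD curve stays left for every ITD tested:
print(crosses_midline([-3, -2, -2, -1, -1]))  # False: never crosses
print(crosses_midline([-3, -1, 0, 2, 3]))     # True: both sides reached
```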

23 Possible Implications
What are the possible implications? If we have a masker talker on the side, then with mismatch it may be perceived on the wrong side, or it too may become a split image.

24 Possible implications on SRM
So, what are the possible implications for SRM? 1. Mismatch might cause both target and masker to lateralize to the same side => no SRM.

25 Possible implications on SRM
2. The target and maskers might lateralize to different sides => possibly increased SRM.

26 Possible implications on SRM
3. The target and maskers are both split => zero or negative SRM due to confusion.

27 Summary Interaural place-of-stimulation mismatch leads to lateralized or split sound images, which affect the perceived location of sounds. Depending on the locations of the target and masker, interaural mismatch might increase or hinder spatial release from masking.

28 Thank you We'd like to thank our research participants, and Cochlear Ltd. for supplying the equipment and technical support. This work is supported by NIH-NIDCD R (Litovsky) and in part by a core grant to the Waisman Center from the NICHD (P30 HD03352).


