INTRODUCTION

Sounds in a reverberant room can interfere with the direct sound source. The normal-hearing (NH) auditory system has a mechanism by which echoes, or reflections, are perceptually suppressed. This is called the precedence effect (PE): a perceptual phenomenon in which the cues of the preceding direct sound, the "lead," dominate over later-arriving reflections, the "lag." The PE is measured by varying the temporal delay between the lead and the lag, and it has been studied primarily in NH listeners using pure tones or broadband stimuli. The PE has two key features:

Fusion: At short lead-lag delays (LLDs), a single sound source is perceived (Fig. 1c). The echo threshold (ET) is the delay at which two sound sources begin to be perceived (Fig. 1b). NH ETs are in the range of 5-10 ms (Litovsky et al., 1999).

Lateralization/Localization dominance: At short LLDs, the percept of the leading sound dominates the overall perception of where the sound is located in space.

The mechanism thought to underlie the PE is the binaural network in the brainstem that processes interaural time differences (ITDs), the very short delays between the two ears, which are on the scale of microseconds (Brown and Stecker, 2013).

In the current study, we manipulated the lead-lag delay of narrowband noise bursts presented over headphones to assess the effect of frequency on fusion and localization dominance in the PE. All NH listeners exhibited the PE similarly for the two narrowband noise frequencies tested (4 kHz and 10 kHz), although echo thresholds were slightly elevated relative to previous PE literature (Litovsky et al., 1999; Brown and Stecker, 2013). The comparable effect across frequencies, in both the fusion and lateralization measures, shows that the PE is robustly present across frequencies for a more complex signal; the use of narrowband stimuli may have contributed to the elevated echo thresholds.
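To make the fusion measure concrete, here is a minimal sketch (not the authors' analysis code) of how an echo threshold can be read off a listener's fusion responses: the ET is taken as the lead-lag delay at which the proportion of "two sounds" reports crosses 50%, found by linear interpolation between the two bracketing delays. The function name and the response proportions are hypothetical.

```python
# Sketch: estimating an echo threshold (ET) from fusion responses.
# Hypothetical data: proportion of trials on which a listener reported
# "two sounds" at each lead-lag delay (LLD). The ET is the LLD at which
# this proportion crosses the 50% criterion.

def echo_threshold(llds_ms, p_two, criterion=0.5):
    """Interpolate the LLD at which p("two sounds") crosses the criterion."""
    points = list(zip(llds_ms, p_two))
    for (d0, p0), (d1, p1) in zip(points, points[1:]):
        if p0 < criterion <= p1:
            # linear interpolation between the two bracketing points
            return d0 + (criterion - p0) * (d1 - d0) / (p1 - p0)
    return None  # criterion never crossed in the tested range

# Hypothetical psychometric data over the 1-64 ms LLD range used here
llds = [1, 2, 4, 8, 16, 32, 64]
p_two = [0.0, 0.02, 0.10, 0.30, 0.55, 0.90, 1.0]

print(echo_threshold(llds, p_two))  # crosses 50% between 8 and 16 ms
```

With the made-up proportions above, the threshold falls between the 8 and 16 ms delays, in the same neighborhood as the ~16 ms averages reported below.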
These data also have implications for cochlear implant (CI) listeners: because information in CI processors is delivered through electrodes with narrow frequency bands, it may be possible to observe the PE in this population as well. Future studies will test CI listeners with this paradigm to explore whether they perceive the sounds of a lead-lag pair at varying delays.

Poster #
Precedence Effect in Normal Hearing Listeners
Frieda Powell, Tanvi Thakkar, and Ruth Litovsky
University of Wisconsin-Madison, USA
Binaural Hearing and Speech Laboratory, Waisman Center
SRI Presentation, Madison, WI, 4 November 2014

METHODS

Subjects: 6 normal-hearing listeners, 3 male and 3 female, ranging in age from years old.

Stimuli: "Lead-lag" pairs of either a 4 kHz or a 10 kHz narrowband noise burst, presented in separate blocks; a low-pass noise masker was present in additional blocks. The lead and lag were played at opposing interaural time differences of +500 µs and -500 µs, with LLDs varied from 1-64 ms. Stimuli were presented over ER2 ear-insert headphones in a soundproof booth.

Task: A simultaneous fusion (hearing one vs. two locations) and lateralization (indicating the location of the fused image in the upper panel, or the location of the left-most of two images in the lower panel) task. Participants touched the screen to indicate where they heard the sound. Listeners completed 350 trials (50 trials at each LLD).

FUSION RESULTS

Figure 3: Average responses to whether one or two sounds were perceived (marked with points, with individual responses behind), by frequency and by presence of the masker. Conditions: 4 kHz; 4 kHz + masker; 10 kHz; 10 kHz + masker.

LOCALIZATION DOMINANCE RESULTS

Figure 4: Points show average lateralization responses for trials on which subjects reported one (black) or two (red) sounds. Participants performed as expected.

CONCLUSIONS

Listeners exhibited the PE in all four conditions tested:
- 4 kHz noise bursts with and without masker: average thresholds were comparable across these conditions (~16 ms).
- 10 kHz noise bursts with and without masker: average thresholds were comparable across these conditions (~16 ms).
- ETs for the two frequencies spanned the same range of thresholds, illustrating that there was no effect of frequency.

Lateralization/Localization dominance: Localization dominance was generally robust in both the 4 and 10 kHz conditions, meaning that there was likewise no effect of frequency on the perceived location of the lead vs. the lag when listeners were presented with opposing locations of a lead-lag pair. Two locations were always reported when the LLD was above or near the ET, and one location was always reported at short LLDs below the ET.

REFERENCES

Brown AD, Stecker GC (2013). "The precedence effect: Fusion and lateralization measures for headphone stimuli lateralized by interaural time and level differences." J Acoust Soc Am 133(5).
Brown AD et al. (2013). "The Precedence Effect: Insights from electric hearing." Presented at the 16th Conference on Implantable Auditory Prostheses, Lake Tahoe, CA.
Litovsky RY, Colburn HS, et al. (1999). "The precedence effect." J Acoust Soc Am 106(4 Pt 1).
Wallach H et al. (1949). "The precedence effect in sound localization." Am J Psychol 62(3).

ACKNOWLEDGEMENTS

We would like to thank all our participants and Cochlear Ltd for providing equipment and technical assistance. A special thank you to Shelly Godar for helpful comments and to Andrew Brown for development of the software and program. Funding: NIH-NIDCD (R01 DC to RYL) and NIH-NICHD (P30 HD03352 to the Waisman Center).

Figure 2: a) and b) The response screens on which listeners indicated whether they heard one or two locations and where the sound was heard ("One Location" panel above, "Two Locations" panel below, each spanning Left to Right). Listeners were asked to judge the intracranial location and number of images heard on each trial.
If two images were perceived, listeners were instructed to indicate the left-most image in the lower of the two panels. c) Stimuli presented to listeners: single lead-lag pairs, played at opposing ITDs (lead +500 µs, lag -500 µs) and varying lead-lag delays (1-64 ms).

Figure 1: Perception of a direct sound and a reflection as a function of the delay between them. a) The location of the direct sound in relation to the listener. b) When the LLD is greater than the ET (on average, greater than 5-10 ms), two sounds are perceived. c) When the LLD is below the ET (less than 5-10 ms), only the leading/direct sound is perceived.
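The lead-lag stimulus timeline illustrated in Figures 1 and 2 can be sketched in code. This is an illustrative reconstruction, not the actual stimulus-generation software: the sample rate and the Gaussian (broadband) noise placeholder are assumptions, while the opposing ITDs (±500 µs) and the LLD range follow the poster.

```python
import random

FS = 44100  # sample rate in Hz; assumed, not stated on the poster

def noise_burst(dur_ms, seed=0):
    """Gaussian noise burst (broadband placeholder for the narrowband bursts)."""
    rng = random.Random(seed)
    n = int(FS * dur_ms / 1000)
    return [rng.gauss(0.0, 1.0) for _ in range(n)]

def lead_lag_pair(burst, lld_ms, itd_us=500):
    """Return (left, right) channels for one trial: the lead carries a +500 µs
    ITD (right ear first), the lag a -500 µs ITD (left ear first), with the
    lag onset delayed by the lead-lag delay lld_ms."""
    itd = int(FS * itd_us / 1_000_000)  # ITD in samples (~22 at 44.1 kHz)
    lld = int(FS * lld_ms / 1000)       # lead-lag delay in samples
    total = lld + itd + len(burst)
    left = [0.0] * total
    right = [0.0] * total
    for i, s in enumerate(burst):
        right[i] += s              # lead: arrives at the right ear first
        left[i + itd] += s         # ...and 500 µs later at the left ear
        left[lld + i] += s         # lag: arrives at the left ear first
        right[lld + i + itd] += s  # ...and 500 µs later at the right ear
    return left, right

# One trial: a 4 ms burst with an 8 ms lead-lag delay
left, right = lead_lag_pair(noise_burst(4), lld_ms=8)
```

At short LLDs the two bursts overlap in each channel; listeners below their echo threshold would nonetheless report a single fused image, lateralized toward the lead (right in this convention).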