Hearing Research Center


Spatial release from masking of chirp trains in a simulated anechoic environment

Norbert Kopčo
Hearing Research Center, Boston University
Technical University Košice, Slovakia

Studies of binaural and spatial hearing

Distance perception in reverberant environments
- Is consistent experience necessary for accurate distance perception?
- Related studies vary other parameters (monaural vs. binaural presentation, anechoic vs. reverberant, real vs. simulated environments).

"Room learning" and its effect on localization
- Are localization accuracy and "room learning" affected by changes in listener position in a room?

Spatial cuing and localization
- How do automatic attention, strategic attention, and room acoustics affect the perceived location of a sound preceded by an informative cuing sound?

Spatial release from masking
- Effect of signal and masker location on the detectability/intelligibility of pure tones, broadband non-speech stimuli, and speech in anechoic and reverberant environments.

Spatial release from masking of chirp trains in a simulated anechoic environment

Collaborators:
- Barbara Shinn-Cunningham (Boston University), thesis advisor
- Courtney Lane (Mass. Eye and Ear Infirmary)
- Bertrand Delgutte (Mass. Eye and Ear Infirmary)

Intro: Spatial release from masking

"Spatial unmasking" (or spatial release from masking, SRM) is an improvement in the signal detection threshold when signal and noise are spatially separated.

Intro: Spatial release from masking

Spatial unmasking of low-frequency pure-tone stimuli depends on:
- acoustic factors (the change in signal-to-noise ratio, SNR, caused by the change in location)
- binaural processing (improved signal detectability due to the interaural cues of signal and noise)

Intro: Spatial release from masking

Spatial unmasking of broadband stimuli depends on (Gilkey and Good, 1995):
- energetic factors for all stimuli
- additional binaural factors for low-frequency stimuli

Broadband stimuli: two possible mechanisms

1. The auditory system integrates information across multiple frequency channels.
2. The auditory system chooses the single channel with the most favorable SNR (the "single-best-filter" model).

The best-channel hypothesis is supported by a comparison of single-unit thresholds from the cat inferior colliculus with human behavioral data (Lane et al., 2003).

Current study

Goal: test the single-best-filter hypothesis of spatial unmasking for broadband and lowpass stimuli.
- Measure spatial unmasking for broadband and lowpass chirp-train signals in noise in human listeners.
- Compare performance to single-best-filter predictions.

Experimental methods: procedure
- five listeners with normal hearing
- simulated anechoic environment (i.e., presentation over headphones)
- detection threshold measured for combinations of signal (S) and noise (N) locations (all at 1 m)
- signal azimuth fixed at 0°, 30°, or 90°; noise azimuth varied
- three-interval, two-alternative forced-choice task
- 3-down-1-up adaptive procedure varying the noise (N) level, tracking 79.4% correct
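The 3-down-1-up rule converges on the 79.4%-correct point of the psychometric function because 0.5^(1/3) ≈ 0.794. A minimal sketch of such a tracker follows; it treats the tracked variable generically as an SNR in dB (in the study the noise level itself was adapted), and the starting value, step size, and reversal count are illustrative assumptions, not the authors' settings:

```python
def staircase_3down1up(respond, start_level=60.0, step=2.0, n_reversals=8):
    """Adaptive 3-down-1-up track: the level moves toward harder after
    3 consecutive correct responses and toward easier after any error,
    converging on ~79.4% correct. `respond(level)` returns True for a
    correct trial. Threshold = mean level at the reversal points."""
    level, correct_run, reversals = start_level, 0, []
    direction = None
    while len(reversals) < n_reversals:
        if respond(level):                 # correct trial
            correct_run += 1
            if correct_run == 3:           # 3 in a row: make task harder
                correct_run = 0
                if direction == 'up':      # turning point
                    reversals.append(level)
                direction = 'down'
                level -= step
        else:                              # any error: make task easier
            correct_run = 0
            if direction == 'down':        # turning point
                reversals.append(level)
            direction = 'up'
            level += step
    return sum(reversals) / len(reversals)
```

With a deterministic observer that is correct whenever the level exceeds 50, the track oscillates around that point and the reversal average lands just above it.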

Experimental methods: stimuli
- signal: 200-ms, 40-Hz chirp train
  - broadband: 0.3–12 kHz
  - lowpass: 0.3–1.5 kHz
- noise: 250-ms white noise
  - broadband: 0.2–14 kHz
  - lowpass: 0.2–2 kHz
- stimuli convolved with non-individualized anechoic human HRTFs to simulate source locations
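A 40-Hz chirp train is a periodic sequence of frequency sweeps, one per 25-ms period. The sketch below generates such a train with the broadband passband above; the linear sweep shape and sample rate are assumptions for illustration, not necessarily the exact stimulus used in the study:

```python
import numpy as np

def chirp_train(dur=0.2, rate=40.0, f_lo=300.0, f_hi=12000.0, fs=44100):
    """Train of upward frequency sweeps repeating at `rate` Hz.
    Each 1/rate-s period holds one linear sweep from f_lo to f_hi.
    Illustrative sketch; the study's exact chirp shape may differ."""
    period = 1.0 / rate
    t = np.arange(0, period, 1.0 / fs)
    k = (f_hi - f_lo) / period            # sweep rate in Hz/s
    # phase is the integral of the instantaneous frequency f_lo + k*t
    phase = 2 * np.pi * (f_lo * t + 0.5 * k * t**2)
    one_chirp = np.sin(phase)
    n_chirps = int(round(dur * rate))     # 0.2 s * 40 Hz = 8 chirps
    return np.tile(one_chirp, n_chirps)
```

Simulating a source location then amounts to convolving this waveform with the left- and right-ear HRTF impulse responses for the desired azimuth.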

Single-best-filter model
- Filterbank: 60 log-spaced gammatone filters per ear (Johannesma, 1972)
- SNR computed in each filter
- Single best filter selected across all 120 filters
- Predicted threshold = −SNR − T0, where T0 is a model parameter fitted to the data
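The prediction step of the model can be sketched as follows, assuming per-filter signal and noise levels in dB have already been computed for the 60 filters in each ear (the gammatone filtering itself is omitted; the function name and inputs are hypothetical):

```python
import numpy as np

def best_filter_threshold(sig_db_left, noise_db_left,
                          sig_db_right, noise_db_right, t0):
    """Single-best-filter prediction: take the filter with the most
    favorable SNR across both ears (2 x 60 = 120 filters) and predict
    detection threshold = -SNR - T0, with T0 fitted to the data.
    Inputs are per-filter signal/noise levels in dB (assumed
    precomputed from a gammatone filterbank, not shown here)."""
    snr = np.concatenate([
        np.asarray(sig_db_left) - np.asarray(noise_db_left),
        np.asarray(sig_db_right) - np.asarray(noise_db_right),
    ])
    best_snr = snr.max()       # the single best of all 120 filters
    return -best_snr - t0
```

Because only the maximum SNR enters the prediction, the model is insensitive to how many filters carry usable information, which is exactly why it cannot capture the bandwidth dependence reported later in the talk.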

Results: broadband stimuli
Data:
- spatial unmasking of nearly 30 dB
Single-best-filter model:
- produces accurate predictions (within 4 dB)
- tends to overestimate spatial unmasking
- the single best filter is at a high frequency, so binaural processing is unlikely to contribute

The single-best-filter model predicts the broadband data.

Results: lowpass stimuli
Data:
- thresholds worse than broadband
- spatial unmasking smaller than broadband
Single-best-filter model:
- produces accurate predictions (within 3 dB)
- generally underestimates unmasking
- the underestimation may be due to binaural processing

The single-best-filter model predicts the lowpass data.

Results: broadband vs. lowpass stimuli
Data:
- for all azimuths, broadband thresholds are better than lowpass thresholds
Single-best-filter model:
- predicts roughly equal broadband and lowpass thresholds when signal and noise are near each other

The single-best-filter model cannot predict the lowpass and broadband data at the same time.

Results: narrowband vs. other stimuli
Data:
- thresholds improve with increasing bandwidth
- highpass and broadband thresholds are similar
- 10-ERB thresholds approach broadband thresholds
- single-ERB thresholds are 10 dB worse than broadband and approximately equal across bands, indicating roughly equal SNR and information in each ERB
Single-best-filter model:
- predicts approximately equal thresholds for all conditions

The single-best-filter model fails to predict the bandwidth dependence of the thresholds.

Conclusions: data
For these broadband stimuli, spatial unmasking
- improves thresholds by nearly 30 dB
- is dominated by energetic effects at high frequencies
For these lowpass stimuli, spatial unmasking
- improves thresholds by at most 12 dB
- is dominated by low-frequency energetic effects
The binaural contribution is fairly small.
Detection thresholds improve with bandwidth.

Conclusions: model
The single-best-filter model predicts the amount of spatial unmasking for broadband or lowpass stimuli. However, the model's threshold parameter T0 must differ between the two conditions to achieve these fits. More generally, the model cannot predict the observed dependence on signal bandwidth.

Discussion
It is unlikely that any single-best-filter, SNR-based model (regardless of exact implementation) can account for these results. For broadband signal detection in noise, there appears to be across-frequency integration: only a model that combines information across multiple frequency channels is likely to account for these observations. Brain centers above the midbrain appear necessary for this integration of information across frequency.

Acknowledgements
Research supported by AFOSR and the National Academy of Sciences.
Thanks to Steve Colburn and others in the BU Hearing Research Center for comments on this work.