Learning the cues associated with non-individualised HRTFs
John Worley and Jonas Braasch, Binaural and Spatial Hearing Group

Head-Related Transfer Functions

Individual Differences in HRTFs
- HRTFs are highly idiosyncratic.
- Differences in HRTF magnitude arise from differences in the size and shape of the pinnae.
- Inter-subject level differences can be as large as 28 dB (Wightman & Kistler, '89a).
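For readers unfamiliar with how HRTFs are used in headphone presentation, the sketch below renders a mono signal at a given direction by convolving it with left- and right-ear head-related impulse responses (HRIRs). The HRIRs here are placeholder arrays, not measured data; only the convolution step reflects standard practice.

```python
# Minimal sketch of binaural rendering with HRIRs (Python / NumPy).
# The impulse responses below are placeholders; in a real system they
# come from individual or dummy-head HRTF measurements.
import numpy as np

def render_binaural(mono, hrir_left, hrir_right):
    """Convolve a mono signal with left/right HRIRs to place it in space."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right], axis=0)  # shape: (2, n_samples)

fs = 44100
noise = np.random.randn(fs // 2)            # 500 ms noise burst
hrir_l = np.zeros(256); hrir_l[10] = 1.0    # placeholder left-ear HRIR
hrir_r = np.zeros(256); hrir_r[25] = 0.7    # placeholder right-ear HRIR
binaural = render_binaural(noise, hrir_l, hrir_r)
```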

Using Non-Individualised HRTFs
- A given pair of binaural cues (ITD and ILD) is consistent with multiple source locations.
- The resulting cones of confusion lead to reversal errors.
- Non-individualised HRTFs produce roughly a three-fold increase in reversals compared with individualised HRTFs. (A worked example of the front-back ambiguity follows below.)
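The ambiguity can be illustrated with a simple spherical-head ITD model (Woodworth's approximation, a standard textbook formula, not taken from the slides): a front source and its rear mirror image across the interaural axis produce identical ITDs.

```python
# Spherical-head ITD (Woodworth's approximation) illustrating the
# front-back ambiguity behind the cone of confusion. Head radius and
# speed of sound are typical textbook values, not from the slides.
import math

def itd_seconds(azimuth_deg, head_radius=0.0875, c=343.0):
    """ITD for a far-field source; azimuth 0 = front, 90 = right."""
    az = math.radians(azimuth_deg % 360)
    # Fold to the lateral angle measured from the median plane:
    lateral = math.asin(abs(math.sin(az)))
    return (head_radius / c) * (lateral + math.sin(lateral))

print(itd_seconds(30))   # front-right source, ~261 microseconds
print(itd_seconds(150))  # its rear mirror image: identical ITD
```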

Learning Non-Individualised HRTFs
- Listeners adapt to long-term pinna modifications (Hofman et al., '98).
- Scaling non-individualised HRTFs improves localisation (Middlebrooks, '99).
- Localisation is unaffected by smoothing HRTFs (Kulkarni & Colburn, '98); a smoothing sketch follows below.
- The visual system calibrates auditory input:
  - Evidence from early-blind listeners (Zwiers & Van Opstal, '01).
  - Compressed visual space compresses auditory space (Zwiers, '03).
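As an illustration of the smoothing idea, the sketch below truncates the Fourier series of the log-magnitude spectrum, in the spirit of Kulkarni & Colburn's procedure; it is not their exact method, and the HRIR is a random placeholder rather than measured data.

```python
# Sketch of HRTF magnitude smoothing in the spirit of Kulkarni &
# Colburn ('98): keep only the low-order Fourier components of the
# log-magnitude spectrum, discarding fine spectral detail.
import numpy as np

def smooth_hrtf_magnitude(hrir, n_coeffs=16):
    """Return |HRTF| rebuilt from the first n_coeffs components of log|HRTF|."""
    log_mag = np.log(np.abs(np.fft.fft(hrir)) + 1e-12)
    coeffs = np.fft.fft(log_mag)
    coeffs[n_coeffs:-n_coeffs] = 0.0         # zero the fine-structure terms
    return np.exp(np.real(np.fft.ifft(coeffs)))

hrir = np.random.randn(256)                  # placeholder impulse response
smoothed = smooth_hrtf_magnitude(hrir, n_coeffs=16)
```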

Methodology
- Headphone-based azimuth localisation of scrambled noise.
- HRTFs from a prototype Neumann KU80 dummy head (pinnae moulded from an average pinna).
- 12 possible locations surrounding the listener.
- Listeners were instructed to respond to the auditory event.
- Responses recorded via the GELP method (Gilkey et al., 1995).
- Auditory localisation tested over time:
  - Day 1: auditory alone.
  - Days 1 to 9: auditory location cued by visual stimuli.
  - Day 10: auditory alone.
(A common response-scoring convention is sketched below.)
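The slides do not spell out how responses were scored; a convention commonly used in localisation studies counts a response as a front-back reversal when its mirror image across the interaural axis lies closer to the target than the response itself. The sketch below implements that assumed convention, with the 12 azimuths assumed evenly spaced.

```python
# Assumed front-back reversal scoring convention (the slides do not
# specify the exact criterion): a response counts as a reversal if its
# front-back mirror is closer to the target than the response itself.
def wrap180(angle):
    """Wrap an angle in degrees to (-180, 180]."""
    return (angle + 180.0) % 360.0 - 180.0

def is_front_back_reversal(target_az, response_az):
    mirror = wrap180(180.0 - response_az)    # reflect across interaural axis
    err = abs(wrap180(response_az - target_az))
    err_mirror = abs(wrap180(mirror - target_az))
    return err_mirror < err

# The 12 test azimuths, assumed evenly spaced around the listener:
locations = [i * 30 for i in range(12)]
print(is_front_back_reversal(target_az=30, response_az=150))  # True
```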

Methodology - Testing
[Slide figure: example test location at 330°, marked with an X.]

Methodology - Training
[Slide figure: example training location at 210°.]

Results: reading the response plots
[Slide figures: front/rear target-response plots illustrating perfect localisation versus front-to-back mislocalisation.]

Results: Day 1 Testing
- Responses clustered in the rear hemisphere.

Results: Training
- Listeners split into Type I (2 listeners) and Type II (3 listeners) response patterns.
- Listeners show a response bias.

Results: All Days, Testing
- The majority of reversals were front-to-back.
- No reduction in reversals as a function of time (a per-day tabulation sketch follows below).
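One way to check for a learning trend is to tabulate the reversal rate on each test day. The sketch below does this with fabricated example trials, clearly not the study's data, and reuses the is_front_back_reversal helper sketched earlier.

```python
# Per-day reversal-rate tabulation (illustrative trial records only;
# is_front_back_reversal is the scoring helper sketched above).
from collections import defaultdict

# Each trial: (day, target azimuth, response azimuth) - fabricated examples.
trials = [(1, 30, 150), (1, 60, 60), (10, 30, 30), (10, 330, 210)]

counts = defaultdict(lambda: [0, 0])          # day -> [reversals, trials]
for day, target, response in trials:
    counts[day][0] += is_front_back_reversal(target, response)
    counts[day][1] += 1

for day in sorted(counts):
    rev, n = counts[day]
    print(f"day {day}: {rev}/{n} reversals ({100 * rev / n:.0f}%)")
```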

Results: All Days, Testing (Type I vs Type II)
- Type I: no reversal predisposition.
- Type II: majority of front-to-back reversals.
- Response bias significantly determines reversal type.

Conclusions
- Listeners display a response bias.
- Response bias determines reversal type.
- Overall, a non-significant reduction in reversals over the training period. Why?
  - Passive versus dynamic listening.
  - Individualised head versus dummy head.