2 Hearing in Time. Slides used for a talk to accompany Roger go to yellow three … Sarah Hawkins* and Antje Heinrich**. *Centre for Music and Science, University of Cambridge. **Medical Research Council Institute of Hearing Research, Nottingham. The material in this document is copyright and is to be used only for educational purposes.

3 What have performers learned so far? People often understand remarkably little of sung text, and many go to listen to the music without caring about the words. But there are times when you want to hear the words, and people whose hearing is impaired, or who are not native speakers of the language, may feel especially disadvantaged.

4 Auditory streaming. To understand a single voice, listeners must correctly group together the sounds that come from a single source (singer). To understand polyphonic texts, listeners must distinguish each of the individual streams that make up the set of ‘competing’ voices. Rhythm and relative pitch are important in this.

5 Auditory streaming: one sound source or two? Adapted from Bob Carlyon's website (http://www.mrc-cbu.cam.ac.uk/research/speech-language/hearing/#_streaming). This is a demonstration of one factor influencing whether the brain processes different pitches as coming from one place/source or two. Click on the upper loudspeaker icon: most people hear the two pitches as coming from a single sound source. Click on the lower loudspeaker icon: by the time the clip finishes, most people hear the two pitches as coming from two different sound sources.
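The kind of stimulus behind such demos is simple to synthesize. Below is a minimal sketch in Python (assuming numpy and scipy are installed; the frequencies, durations, and file names are illustrative choices, not those of the actual demo on the website):

```python
# Sketch of an alternating low-high tone sequence, the classic
# auditory-streaming demo stimulus. Small frequency separation:
# usually heard as ONE stream; large separation: usually TWO.
import numpy as np
from scipy.io import wavfile

SR = 44100  # sample rate in Hz

def tone(freq_hz, dur_s, amp=0.3):
    """A pure tone with short raised-cosine on/off ramps to avoid clicks."""
    t = np.arange(int(SR * dur_s)) / SR
    x = amp * np.sin(2 * np.pi * freq_hz * t)
    ramp = int(0.005 * SR)  # 5 ms ramps
    env = np.ones_like(x)
    env[:ramp] = 0.5 * (1 - np.cos(np.pi * np.arange(ramp) / ramp))
    env[-ramp:] = env[:ramp][::-1]
    return x * env

def alternating_sequence(low_hz, high_hz, n_pairs=30, tone_dur=0.1):
    """Low-high-low-high ... tone sequence used in streaming demos."""
    pair = np.concatenate([tone(low_hz, tone_dur), tone(high_hz, tone_dur)])
    return np.tile(pair, n_pairs)

# Separation of about one semitone: usually heard as ONE stream.
wavfile.write("one_stream.wav", SR,
              alternating_sequence(400, 424).astype(np.float32))
# Separation of one octave: usually splits into TWO streams.
wavfile.write("two_streams.wav", SR,
              alternating_sequence(400, 800).astype(np.float32))
```

Playing one_stream.wav and two_streams.wav back to back gives a rough feel for the effect; as the slide notes, in the real demo the two-stream percept often builds up only after several seconds of listening.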

6 The power of unison. What have auditory scientists learned so far? [Figure: % correct (0 to 100) vs. number of distractor voices (0 to 3), with separate lines for 1, 2, and 3 target voices.] When there are no distractors, intelligibility is good (90% or better). Increasing the number of distractor voices decreases intelligibility. Unison increases intelligibility even in the absence of distractor voices.

7 The power of unison. What have auditory scientists learned so far? [Figure: % correct (0 to 100) vs. number of distractor voices (0 to 3), with separate lines for 1, 2, and 3 target voices.] Unison increases intelligibility even in the absence of distractor voices. To ensure or maintain intelligibility, have AT LEAST as many target voices singing in unison as there are distractors; a trivial coding of that rule of thumb follows below.
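The guideline on this slide can be written as a one-line check (this encodes only the stated rule of thumb, not the underlying data):

```python
def unison_rule_ok(n_targets: int, n_distractors: int) -> bool:
    """Rule of thumb from this slide: intelligibility is maintained when
    at least as many target voices sing in unison as there are
    competing (distractor) voices."""
    return n_targets >= n_distractors

# e.g. three targets against three distractors passes the guideline,
# but two targets against three distractors does not
assert unison_rule_ok(3, 3) and not unison_rule_ok(2, 3)
```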

8 The power of unison. What have auditory scientists learned so far? [Figure: two panels, Native English speakers and Non-native English speakers, each showing % correct (0 to 100) vs. number of distractor voices (0 to 3), with separate lines for 1, 2, and 3 target voices.] The same patterns are seen for native and non-native speakers, but non-native speakers tend to be more disadvantaged in harder conditions.

9 Language proficiency. What have auditory scientists learned so far? [Figure: % correct (0 to 100) for native vs. non-native listeners across conditions 1T0D, 1T1D, 1T2D, 2T2D, 2T3D, and 3T3D, where T = number of target voices and D = number of distractors.] Compared with native speakers: the biggest difference between language groups occurs with ONE target voice. Non-native speakers have disproportionate trouble when there are more distractor voices than target voices, and when there is only one target and one distractor.

10 Still to be explored. Is the extra intelligibility of unison simply the result of the particular skill of experienced singers? Is unison more intelligible because it is louder, or because the sound comes from more spatial locations? Also still to be explored: intelligibility of combinations of male vs. female voices (vs. of individuals); the singers' physical locations relative to one another in a room; particular types of words and music (lab experiments).

11 How does the brain process auditory streams and speech in noisy environments? [Image: human head and brain diagram, http://en.wikipedia.org/wiki/File:Human_head_and_brain_diagram.svg]

12 How does the brain process auditory streams and speech in noisy environments? [Image: brain schematic with the four cortical lobes, http://www.cognitiveneurosciencearena.com/brain-scans/brunswick/brunswick04.php] As there are no imaging studies on the intelligibility of sung speech, we will use results from studies of spoken speech.

13 How does the brain process auditory streams and speech in noisy environments? [Image: a flipped version of the same brain, with the Superior, Middle, and Inferior Temporal Gyri labelled, http://www.cognitiveneurosciencearena.com/brain-scans/brunswick/brunswick04.php] The arrows point to the two lobes that are most intimately involved in sound processing: the temporal lobe and the frontal lobe. The area marked in red is the primary auditory cortex; this is the first (earliest) area to process sound after it reaches the cortex. It responds to ALL sounds and reacts to any acoustic differences between them.

14 How does the brain process auditory streams and speech in noisy environments? Peelle JE, Johnsrude IS & Davis MH (2010). Hierarchical processing for speech in human auditory cortex and beyond. Frontiers in Human Neuroscience, 4. [Figure labels: word comprehension; sentence comprehension; semantic representations?; action/production?; reconstruct speech.] If the task is speech comprehension, then other areas in addition to primary auditory cortex are involved; these areas provide a consistent neural response to speech (words, phonemes) regardless of acoustic differences (who says them and what the acoustic environment is). They include large portions of the superior temporal gyrus, both anterior and posterior. Both neuroimaging and lesion data suggest that single-word comprehension activates posterior areas in both hemispheres, whereas connected speech and sentence comprehension activate anterior areas, mainly in the left hemisphere. Left inferior frontal areas are densely connected to temporal auditory areas, and it is suggested that they "recover" meaning when the speech is difficult to hear.

15 How does the brain process auditory streams? Stream segregation is typically studied with tones: tones that are similar in frequency are heard as a single stream, while tones that differ in frequency are heard as two different streams. The bigger the frequency separation between the A and B tones, the more likely it is that two streams are heard, and the greater the activation in primary auditory cortex (the red blob). Brain activity distinguishing one stream from two is thus seen in primary auditory cortex; the reason that activation for stream segregation is seen mostly in PAC might have to do with the use of tones as stimuli. Bee MA & Micheyl C (2008). The cocktail party problem: What is it? How can it be solved? And why should animal behaviourists study it? Journal of Comparative Psychology, 122, 235-251. Wilson EC, Melcher JR, Micheyl C, Gutschalk A & Oxenham AJ (2007). Cortical fMRI activation to sequences of tones alternating in frequency: Relationship to perceived rate and streaming. Journal of Neurophysiology, 97, 2230-2238.
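For illustration, the alternating-tone (ABA-) sequences used in this literature can be sketched as follows (a hedged sketch: the tone durations, base frequency, and separations are illustrative values, not those of the cited studies):

```python
# Sketch of an ABA- triplet sequence: A, B, A, silence, repeated.
# Increasing the A-B frequency separation makes it more likely that
# the A and B tones are heard as two separate streams.
import numpy as np

SR = 44100  # sample rate in Hz

def pure_tone(freq_hz, dur_s, amp=0.3):
    t = np.arange(int(SR * dur_s)) / SR
    return amp * np.sin(2 * np.pi * freq_hz * t)

def aba_sequence(a_hz=500.0, df_semitones=4.0, n_triplets=20,
                 tone_dur=0.1, gap_dur=0.1):
    """ABA- triplets: A, B, A, gap, repeated n_triplets times."""
    b_hz = a_hz * 2 ** (df_semitones / 12)  # B is df semitones above A
    silence = np.zeros(int(SR * gap_dur))
    triplet = np.concatenate([pure_tone(a_hz, tone_dur),
                              pure_tone(b_hz, tone_dur),
                              pure_tone(a_hz, tone_dur),
                              silence])
    return np.tile(triplet, n_triplets)

seq_fused = aba_sequence(df_semitones=1)   # small separation: one stream
seq_split = aba_sequence(df_semitones=10)  # large separation: two streams
```

With a small df_semitones the triplets are typically heard as a single 'galloping' stream; with a large separation the A and B tones split into two slower, regular streams.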

16 How does the brain process auditory streams and speech in noisy environments? Deike S, Gaschler-Markefski B, Brechmann A & Scheich H (2004). Auditory stream segregation relying on timbre involves left auditory cortex. Neuroreport, 15(9), 1511-1514. Listeners heard either a stream of tones interleaved from TWO instruments (trumpet and organ) or a stream of tones from ONE instrument (either trumpet or organ). There was more activation when two streams had to be grouped and segregated: to do the task (perceive small changes in one instrument's melody), listeners had to group the melodies of each instrument, and this led to increased activity in lateral primary auditory cortex.
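Loosely in the spirit of that design (not the study's actual stimuli), one can interleave two melodies that share a pitch range but differ in harmonic spectrum, i.e. in timbre; the spectra and melodies below are illustrative assumptions:

```python
# Sketch of streaming by timbre: two interleaved melodies played with
# different harmonic spectra, crude stand-ins for two instruments.
import numpy as np

SR = 44100  # sample rate in Hz

def complex_tone(f0_hz, dur_s, harmonic_amps):
    """Sum of harmonics; the amplitude pattern sets the timbre."""
    t = np.arange(int(SR * dur_s)) / SR
    x = sum(a * np.sin(2 * np.pi * f0_hz * (k + 1) * t)
            for k, a in enumerate(harmonic_amps))
    return 0.2 * x / max(abs(x).max(), 1e-9)

BRIGHT = [1.0, 0.9, 0.8, 0.7, 0.6]  # strong upper harmonics ("trumpet-like")
HOLLOW = [1.0, 0.0, 0.5, 0.0, 0.3]  # mostly odd harmonics ("organ-like")

melody_a = [262, 294, 330, 294]  # two short interleaved melodies (Hz)
melody_b = [330, 262, 294, 330]

# Interleave the notes: instrument A, instrument B, A, B, ...
seq = np.concatenate([complex_tone(f, 0.15, amps)
                      for fa, fb in zip(melody_a, melody_b)
                      for f, amps in ((fa, BRIGHT), (fb, HOLLOW))])
```

Because the two note sequences overlap in pitch and differ only in spectral shape, grouping each 'instrument' into its own melody must rely on timbre rather than on frequency separation.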

