Presentation on theme: "Chapter 10 Perception of Speech"— Presentation transcript:
1 Chapter 10: Perception of Speech. Perry C. Hanavan, Au.D.
2 Question: How do we perceive speech? Individual sounds (phonemes)? Syllables? Words? Sentences? Listening to Mozart?
3 Speech Perception. How do we perceive speech? Individual sounds (phonemes)? Syllables? Words? Sentences? How do we derive meaning from the ocean of sounds we hear? Speech is variable: speakers vary in how they speak. Variant or invariant cues?
4 Question: What is Pattern Playback? A music group from India. A talking machine built by Dr. Franklin S. Cooper and colleagues at Haskins Laboratories. A brain-based device for speech perception.
6 Question: What is an invariant speech cue? Phonemes coarticulated. A phoneme produced in isolation. The transition from one phoneme to the next.
7 Excitation, Sensation, Cognition. Excitation: the pattern of neural responses elicited by a given stimulus. Sensation: an internal representation of the stimulus. Cognition: the interpretation of a sensation on the basis of stored knowledge.
8 Discrimination. The ability to distinguish between two levels of a stimulus parameter (e.g., different wavelengths of light). Measured by the just-noticeable-difference (JND) threshold. Uses a sensory representation. Modality dependent.
9 Recognition. The ability to categorize a stimulus as belonging to a particular class (e.g., colour or object type). Uses a cognitive representation: needs to refer to stored knowledge. Representation dependent.
13 The relationship between discrimination and recognition. Recognition relies on discrimination, but does recognition also influence discrimination? Discriminability seems to be affected by category structures: this is categorical perception.
14 Categorical perception: discrimination across category boundaries is more sensitive than discrimination within categories.
15 First example of categorical perception: the phoneme boundary effect. Phonemes are the sounds that make up language, e.g., /b/ and /p/. The phonemes /b/ and /p/ differ in voice onset time: the delay between the release of the stop and the onset of voicing.
16 Alvin Liberman (1917–2000). Liberman and colleagues (1957) showed a phoneme boundary effect: a smaller change in delay was necessary to distinguish /b/ from /p/ than to distinguish two sounds within these categories.
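The boundary effect described above can be sketched in a few lines of code. This is a toy illustration, not a model of the actual Liberman et al. (1957) stimuli: the ~25 ms boundary value and the example voice-onset times are assumed for illustration only.

```python
# Toy sketch of the phoneme boundary effect: stops are labeled /b/ or /p/
# by voice onset time (VOT), and discrimination is easy only when two
# stimuli fall on opposite sides of the category boundary.
# The 25 ms boundary is an illustrative assumption, not measured data.

BOUNDARY_MS = 25  # assumed /b/-/p/ category boundary (VOT in ms)

def label_phoneme(vot_ms):
    """Label a stop consonant as 'b' or 'p' from its VOT."""
    return "b" if vot_ms < BOUNDARY_MS else "p"

def same_category(vot_a, vot_b):
    """Within-category pairs are hard to discriminate; cross-boundary pairs are easy."""
    return label_phoneme(vot_a) == label_phoneme(vot_b)

# The same 10 ms difference, within a category vs. straddling the boundary:
print(same_category(5, 15))   # within /b/ -> True (hard to tell apart)
print(same_category(20, 30))  # crosses the boundary -> False (easy to tell apart)
```

The point of the sketch is that labeling, not raw acoustic distance, predicts discriminability: both pairs differ by 10 ms, but only the second pair crosses the category boundary.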
17 The phoneme boundary effect. Motor theory of speech perception: the phoneme boundary effect is caused by activation of the motor program required to produce a phoneme.
18 Category boundary effects in the colour domain. Question: is the way we sense colour affected by the words for colours in our language? Benjamin Lee Whorf.
19 The question about colour perception can be operationalized: colour can be objectively measured in terms of its wavelength. [Figure: the visible spectrum on a wavelength axis, beginning at 400 nm.]
20 The question about colour perception can be operationalized: the number of basic colour terms in a language can be measured. Basic colour terms are: single words; not subsumed by another term; not restricted to a particular class of objects.
21 Early research on colour naming. Languages differ in the number of words they have for colour categories. Dani (New Guinea): two basic colour terms, mili (light) and mola (dark). English: eleven basic colour terms: white, black, grey, red, green, blue, yellow, orange, purple, pink, brown.
22 Kay and Kempton (1984) compared English and Tarahumara speakers. Tarahumara does not make a distinction between blue and green. Kay and Kempton theorized that the perceptual distance between blue and green would be exaggerated in English speakers.
23 Kay and Kempton (1984). Chip triplets used in the task: 3 green (G G G); 2 green, 1 blue (G G B); 3 blue (B B B).
24 Kay and Kempton (1984). Tarahumara speakers were equally likely to choose either extreme for all three types of triplet.
25 Kay and Kempton (1984). English speakers did the same when all chips came from the same category. When there was an odd one out, they were more likely to choose that one.
26 Perception of Vowels. The vowel /a/ has the greatest intensity, while the unvoiced /θ/ is the weakest consonant. Front vowels are perceived on the basis of the F1 frequency and the average of F2 and F3, whereas back vowels are perceived on the basis of the average of F1 and F2, as well as F3. So is it the absolute frequency values of the formants? Or the ratio of F2 to F1? Perhaps it is the invariant cues (frequency changes that occur with coarticulation).
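The F2/F1-ratio idea above can be made concrete with a minimal sketch. The formant values and the ratio cutoff below are rough textbook-style approximations chosen for illustration, not measured data, and real vowel perception is far less clean than a single threshold.

```python
# Illustrative sketch of classifying a vowel as front or back from the
# ratio of its first two formants. Front vowels have a high F2 relative
# to F1; back vowels have F2 close to F1. The 3.0 cutoff is an assumption.

def vowel_backness(f1_hz, f2_hz, ratio_cutoff=3.0):
    """Guess front vs. back vowel from the F2/F1 ratio."""
    return "front" if f2_hz / f1_hz >= ratio_cutoff else "back"

# Approximate adult formant values:
print(vowel_backness(270, 2290))  # /i/ as in "beet" -> front
print(vowel_backness(300, 870))   # /u/ as in "boot" -> back
```

Using the ratio rather than absolute frequencies is one way to express the slide's question: a ratio is more stable across talkers with different vocal-tract lengths than raw formant frequencies are.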
27 Invariant and Variant Cues. Spectrograms show how the onset formant transitions that perceptually define the consonant [d] differ depending on the identity of the following vowel. (Formants highlighted by red dotted lines; transitions are the bending beginnings of the formant trajectories.) [Figure panels: /di/, /da/, /du/.]
28 Perception of Diphthongs. Perceived on the basis of formant transitions. Salient feature: the rapidity of the transition.
29 Consonant Perception. Perception works differently for consonants than for vowels: there is a greater variety of consonant types than vowel types, and greater complexity in consonants.
30 Question: Which is TRUE regarding the following statements about categorical perception? The experience of perceptual invariance in sensory phenomena that can be varied along a continuum. Can be inborn or can be induced by learning. Related to how neural networks in our brains detect the features that allow us to sort the things in the world into separate categories. All of the above are true. All of the above are false.
31 Categorical Perception. The experience of perceptual invariance in sensory phenomena that can be varied along a continuum. Can be inborn or can be induced by learning. Related to how neural networks in our brains detect the features that allow us to sort the things in the world into separate categories. An area in the left prefrontal cortex has been localized as the place in the brain responsible for phonetic categorical perception.
33 CI Speech Coding Strategies. ACE™: unique to Cochlear's Nucleus® 24 CI system; ACE optimizes detailed pitch and timing information of sound. SPEAK (spectral peak): increases the richness of important pitch information by stimulating electrodes across the entire electrode array. MPEAK: multipeak. CIS (continuous interleaved sampling): a high-rate strategy that uses a fixed set of electrodes and emphasizes the detailed timing information of speech.
34 ACE Strategy. Sound enters the speech processor through the microphone and is divided into a maximum of 22 frequency bands. Up to 20 narrow-band filters divide the sound into corresponding frequency (pitch) ranges. Each frequency band stimulates a specific electrode along the electrode array. The electrode stimulated depends on the pitch of the sound. For example, in the word "show," the high-pitch sound (sh) stimulates electrodes placed near the entrance of the cochlea, where hearing nerve fibers respond to high-pitch sounds. The low-pitch sound (ow) stimulates electrodes further into the cochlea, where hearing nerve fibers respond to low-pitch sounds. ACE varies the rate of stimulation of the electrodes, with a total maximum stimulation rate of 14,400 pulses per second.
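The 14,400 pulses-per-second figure quoted above is a total across channels, so the rate any one electrode sees depends on how many channels are active. A back-of-the-envelope sketch (the channel counts below are illustrative, not taken from a specific device configuration):

```python
# Rough arithmetic on the ACE total stimulation rate: dividing the quoted
# 14,400 pulses/s evenly across the stimulated channels gives an
# approximate per-channel rate. Channel counts here are assumptions.

TOTAL_RATE_PPS = 14400  # total maximum stimulation rate quoted for ACE

def per_channel_rate(n_channels):
    """Approximate per-electrode pulse rate, assuming an even split."""
    return TOTAL_RATE_PPS / n_channels

for n in (8, 12, 22):
    print(f"{n} channels -> {per_channel_rate(n):.0f} pulses/s per channel")
```

The even-split assumption is a simplification; real strategies trade off the number of stimulated channels against the per-channel rate in more nuanced ways.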
35 SPEAK. Sound enters the speech processor through the microphone and is divided into 20 frequency bands. SPEAK selects the six to ten frequency bands containing the maximum speech information. Each frequency band stimulates a specific electrode along the electrode array. The electrode stimulated depends on the pitch of the sound. For example, in the word "show," the high-pitch sound (sh) stimulates electrodes placed near the entrance of the cochlea, where the hearing nerve fibers respond to high-pitch sounds. The low-pitch sound (ow) stimulates electrodes further into the cochlea, where the hearing nerve fibers respond to low-pitch sounds. SPEAK's dynamic stimulation along 20 electrodes allows the listener to perceive the detailed pitch information of natural sound.
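The peak-picking step that gives SPEAK its name can be sketched as follows. This is a minimal illustration of selecting the highest-energy bands; the band energies below are made-up numbers, and the real strategy's selection logic is more involved.

```python
# Sketch of SPEAK-style spectral peak picking: from the energies of 20
# analysis bands, keep only the highest-energy bands (six to ten in the
# strategy described above). Energies here are invented for illustration.

def pick_peaks(band_energies, n_maxima=6):
    """Return indices of the n highest-energy bands, lowest band first."""
    ranked = sorted(range(len(band_energies)),
                    key=lambda i: band_energies[i], reverse=True)
    return sorted(ranked[:n_maxima])

energies = [0.1, 0.9, 0.3, 0.8, 0.05, 0.7, 0.2, 0.6, 0.4, 0.5,
            0.15, 0.25, 0.35, 0.45, 0.55, 0.65, 0.05, 0.1, 0.2, 0.3]
print(pick_peaks(energies))  # -> [1, 3, 5, 7, 14, 15]
```

Only the electrodes for these selected bands would be stimulated in a given cycle, which is how the strategy concentrates stimulation on the spectrally dominant parts of the signal.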
36 CIS. Sound enters the speech processor through the microphone and is divided into 4, 6, 8, or 12 bands, depending on the number of channels used. Each band stimulates one specific electrode along the electrode array, sequentially. The same sites along the electrode array are stimulated for every sound, at a fast rate, to deliver the rapid timing cues of speech.
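The "interleaved" part of CIS means the electrodes fire one after another in a fixed, repeating order, never simultaneously. A minimal sketch of that sequencing (the six-band count is one of the options mentioned above; electrode numbering is assumed):

```python
# Minimal sketch of CIS interleaved sequencing: each analysis band maps to
# one electrode, and electrodes are stimulated strictly one at a time in a
# fixed, repeating order. Band count and numbering are illustrative.

from itertools import cycle

N_BANDS = 6  # CIS systems described above use 4, 6, 8, or 12 bands
electrode_order = cycle(range(1, N_BANDS + 1))

# One full stimulation cycle: every electrode fires exactly once, in order.
one_cycle = [next(electrode_order) for _ in range(N_BANDS)]
print(one_cycle)  # -> [1, 2, 3, 4, 5, 6]
```

Because no two electrodes are ever active at the same instant, interleaving avoids the channel interaction that simultaneous stimulation of nearby electrodes would cause, while the high cycle rate preserves the timing cues the slide mentions.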