2 SPEECH PERCEPTION 2 DAY 17 – OCT 4, 2013 Brain & Language LING 4110-4890-5110-7960 NSCI 4110-4891-6110 Harry Howard Tulane University

3 Course organization The syllabus, these slides, and my recordings are available at http://www.tulane.edu/~howard/LING4110/. If you want to learn more about EEG and neurolinguistics, you are welcome to participate in my lab. This is also a good way to get started on an honors thesis. The grades are posted to Blackboard.

4 REVIEW The quiz was the review.

5 Linguistic model, Fig. 2.1 p. 37 [Diagram: the linguistic model, with acoustic INPUT feeding feature extraction and acoustic phonetics into segmental phonology (perception); a word level comprising word prosody and morphology; a sentence level comprising sentence prosody and syntax; a discourse model at the top; and a production pathway through segmental phonology (production), articulatory phonetics, and speech motor control.]

6 Categorical perception Chinchillas do this too! The Clinton-Kennedy continuum
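A toy labeling model makes the effect concrete. The following is a minimal Python sketch, assuming a logistic identification function along a synthetic VOT continuum; the boundary location (~25 ms), the slope, and the nine-step design are illustrative assumptions, not values from the lecture.

```python
import numpy as np

def identify_p(vot_ms, boundary=25.0, slope=0.8):
    """P(listener labels a token as the voiceless category, e.g. 'pin')."""
    return 1.0 / (1.0 + np.exp(-slope * (vot_ms - boundary)))

vot_continuum = np.linspace(0, 60, 9)        # nine synthetic steps, /b/ -> /p/
p_voiceless = identify_p(vot_continuum)

# Discrimination of adjacent steps tracks the change in labeling: it peaks
# at the category boundary and is near chance within a category, which is
# the signature of categorical perception.
discrimination = np.abs(np.diff(p_voiceless))

for vot, p in zip(vot_continuum, p_voiceless):
    print(f"VOT {vot:4.1f} ms -> P(voiceless) = {p:.2f}")
```

The steep sigmoid is why identification looks step-like even though the acoustic continuum is perfectly gradual; chinchillas produce the same labeling curve, which is the point of the slide.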

7 SPEECH PERCEPTION Ingram §6

8 Category boundary shifts The VOT boundary between 'bin' and 'pin' shifts with context. Thus the phonetic feature detectors must compensate for the context – because they know how speech is produced? But Japanese quail do this too.
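Continuing the sketch above, the compensation can be pictured as the same labeling function with a context-dependent boundary; identify_p and every number here are illustrative assumptions, not measurements from the lecture.

```python
# Context moves the boundary, so an acoustically identical token can flip
# categories; e.g. a faster speaking rate shifts the /b/-/p/ boundary toward
# shorter VOTs. All values are assumptions for illustration.
ambiguous_vot = 22.0                                # a token near the neutral boundary

p_fast = identify_p(ambiguous_vot, boundary=18.0)   # fast context: mostly 'pin'
p_slow = identify_p(ambiguous_vot, boundary=30.0)   # slow context: mostly 'bin'
print(f"fast: P('pin') = {p_fast:.2f}   slow: P('pin') = {p_slow:.2f}")
```

The open question on the slide is whether the detectors implement this shift by modeling how speech is produced or by a general auditory mechanism; the quail result favors the latter.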

9 Duplex speech (or perception) A and B label the two ears; B is also called the base. A sketch of how such a stimulus is assembled follows.
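A rough Python picture of the assembly: the base (the syllable minus its F3 transition) goes to one ear, and the isolated F3 transition, heard alone as a chirp, goes to the other. The sinusoids here are crude stand-ins for real formant synthesis, and the sample rate, durations, and frequencies are all assumed for illustration.

```python
import numpy as np

fs = 16000                                   # sample rate in Hz (assumed)
t = np.arange(int(0.05 * fs)) / fs           # 50 ms transition portion

# Isolated F3 transition (the "chirp") for ear A: a short frequency sweep.
# Varying the sweep's onset frequency across nine steps gives the continuum
# whose endpoints are heard as different syllables (e.g. /da/ vs /ga/).
f3_track = np.linspace(2700.0, 2400.0, t.size)
chirp = 0.5 * np.sin(2 * np.pi * np.cumsum(f3_track) / fs)

# Base for ear B: the rest of the syllable, faked here as steady F1/F2 tones.
base = 0.25 * (np.sin(2 * np.pi * 700 * t) + np.sin(2 * np.pi * 1200 * t))

# Dichotic presentation: one signal per channel.
stereo = np.stack([base, chirp], axis=1)     # column 0 = ear B, column 1 = ear A
```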

10 Results Listeners hear a syllable in the ear that gets the base (B), but it is not ambiguous: its identification is determined by which of the nine F3 transitions is presented to the other ear (A). Listeners also hear a non-speech "chirp" in the ear that gets the isolated transition (A).

11 Implications The fact that the same stimulus is simultaneously part of two quite distinct types of percepts argues that the percepts are produced by separate mechanisms that are both sensitive to the same range of stimuli. Discrimination of the isolated "chirp" and of the speech percept is quite different, despite the fact that the acoustic event responsible for both is the same: the speech percept exhibits categorical perception; the chirp percept exhibits continuous perception. If the intensity of the isolated transition is lowered below the threshold of hearing, so that listeners cannot tell reliably whether or not it is there on a given trial, it is still capable of disambiguating the speech percept. [HH: hold that thought]

12 Later research Tried to control for the potential temporal delay of dichotic listening by manipulating the intensity (loudness) of the chirp with respect to the base. Only if the chirp and the base have the same intensity are they perceived as a single speech sound.

13 Gokcen & Fox (2001)

14 Discussion Even if the latency differences arise simply because linguistic and nonlinguistic components must be routed to two different brain areas for processing, and coordinating these two processing sources in order to identify a stimulus takes longer, the data would still be consistent with the contention of separate modules for phonetic and auditory stimuli. We would argue that these data do not support the claim that there is only a single unified cognitive module that processes all auditory information, because the speech-only and duplex stimuli contained identical components and were equal in complexity.

15 Back to sine-wave speech What is this? It is this.
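As a reminder of the technique: sine-wave speech replaces each formant with a single time-varying sinusoid and discards everything else (voicing, harmonics, noise, bandwidths). A minimal sketch with made-up formant trajectories; real resynthesis would estimate the tracks from a recorded utterance.

```python
import numpy as np

fs = 16000                                   # sample rate in Hz (assumed)
t = np.arange(int(0.3 * fs)) / fs            # a 300 ms token

# Made-up formant trajectories for a [ba]-like token (illustrative values).
f1 = np.linspace(400, 700, t.size)           # rising F1 out of the labial closure
f2 = np.linspace(1000, 1200, t.size)
f3 = np.linspace(2200, 2500, t.size)

def tone(track):
    """A sinusoid whose instantaneous frequency follows one formant track."""
    return np.sin(2 * np.pi * np.cumsum(track) / fs)

# Three tones and nothing else: no voicing source, no noise, no harmonics.
sws = (tone(f1) + tone(f2) + tone(f3)) / 3
```

Because the signal keeps only the formant trajectories, naive listeners often hear whistles until they are told it is speech, which is what makes it useful for probing the speech mode.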

16 Dehaene-Lambertz et al. (2005) … used ERP and fMRI to investigate sine-wave [ba]-[da] sounds. For the EEG, the subjects had to be trained to hear the sounds as speech. In the MRI, most subjects heard the sounds as speech immediately. Switching to the speech mode significantly enhanced activation in the posterior parts of the left superior temporal sulcus.

17 Summary

Methodology              Supports strong SMH?
dichotic listening       yes, but Morse code shows the same response (p. 127)
categorical perception   no, because animals have the same response
duplex perception        no, because animals have the same response
sine-wave speech         yes

18 NEXT TIME P5 Finish Ingram §6; start §7. ☞ Go over questions at end of chapter.

