2 Pragmatically-guided perceptual learning. Tanya Kraljic, Arty Samuel, Susan Brennan. Adaptation Project mini-Conference, May 7, 2007.

3 1-Minute Background on Speech Perception, Part 1: Perceptual constancy. [Figure: Speaker and Listener] Speech sounds (phonemes) differ depending on: who is speaking, and what the immediate phonetic context is.

4 Perceptual constancy. [Figure: Speaker and Listener] Speech sounds (phonemes) differ depending on: who is speaking, and what the immediate phonetic context is. And yet…

5 1-Minute Background on Speech Perception, Part 2: Solutions? [Figure: Speaker and Listener] 1. Learn the acoustic invariants as children, then extract those and discard everything else as we're listening. Problem: what acoustic invariants?

6 1-Minute Background on Speech Perception, Part 2: Solutions. [Figure: Speaker and Listener] 1. Learn the acoustic invariants as children, then extract those and discard everything else as we're listening. Problem: what acoustic invariants? 2. Represent (learn) every variation that is encountered. Problem: memory (if every variant is stored separately), 'catastrophic interference' (if you keep changing the same representation).

7 Getting at the Question: How does the perceptual system decide what to learn? General idea in perception: maybe the system tries to learn invariants of the distal objects that produce the stimuli (in this case, that would mean the speaker) and not of the stimuli themselves (in this case, the acoustic signal). Our hypothesis: maybe the system tries to learn those aspects of the signal that reflect characteristic properties of the speaker (and therefore are likely to remain stable across contexts and situations).

8 Getting at the Question: How does the perceptual system decide what to learn? Specifically: how might it determine which variations are characteristic? Our test: two kinds of information the system might use. 1. A 'first impressions' heuristic: in the absence of any other information, the properties that are present during the first encounter are assumed to be representative and stable. 2. Pragmatic cues indicating that the variation is incidental (e.g., seeing that the speaker is talking with a pen in her mouth), which can override the influence of primacy.

9 What does perceptual learning look like? 2-phase method. 1. Exposure Phase (lexical decision task). Purpose: to expose participants to a speaker who pronounces a particular sound in an ambiguous way, midway between /s/ and /S/ (written '?' here). Method: the ambiguous sound occurs in the context of words that cause it to be perceived as one or the other phoneme (e.g., dino?aur or impa?ent).

10 What does perceptual learning look like? 2-phase method. 1. Exposure Phase (lexical decision task). Purpose: to expose participants to a speaker who pronounces a particular sound in an ambiguous way, midway between /s/ and /S/. Method: the ambiguous sound occurs in the context of words that cause it to be perceived as one or the other phoneme (e.g., dino?aur or impa?ent). Listeners hear both 'odd' (dino?aur) and good (legacy) versions of the phonemes from the same speaker. 2. Test Phase (category identification). Purpose: tests whether perceptual learning has occurred. Method: participants hear items from a continuum that ranges from /s/ to /S/, with several ambiguous points in between, and have to label each sound as S or SH.
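
To make the test phase concrete, here is a minimal Python sketch (not the authors' analysis code) of how identification responses along the continuum might be scored; perceptual learning would show up as a shift in the proportion of "S" responses on the ambiguous steps. The continuum steps and responses below are hypothetical.

    # Minimal scoring sketch for the category-identification test phase.
    # All data here are invented for illustration.
    from collections import defaultdict

    def proportion_s(responses):
        """responses: list of (continuum_step, label) pairs, label 'S' or 'SH'.
        Returns {step: proportion of 'S' labels} for each continuum step."""
        counts = defaultdict(lambda: [0, 0])      # step -> [n_S, n_total]
        for step, label in responses:
            counts[step][1] += 1
            if label == "S":
                counts[step][0] += 1
        return {step: n_s / n for step, (n_s, n) in sorted(counts.items())}

    # Hypothetical listener: a 6-step continuum (1 = clear /s/, 6 = clear /S/),
    # two repetitions per step.
    listener = [(1, "S"), (1, "S"), (2, "S"), (2, "S"), (3, "S"), (3, "SH"),
                (4, "S"), (4, "SH"), (5, "SH"), (5, "SH"), (6, "SH"), (6, "SH")]
    print(proportion_s(listener))

A listener who has learned to accept the odd pronunciation as /s/ would show a higher proportion of "S" responses on the middle, ambiguous steps than an unexposed listener.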

11 Manipulation: 2 x 2. *All manipulations are during the Exposure phase.* Modality (Audio Only, AudioVisual) X Pronunciation attribute (Characteristic, Incidental) (really X another factor of 2: Phoneme, ?S or ?SH).
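
As a way of spelling out the 2 x 2 (x 2) design, here is a tiny illustrative Python sketch that simply enumerates the exposure-phase cells; the variable names are mine, not the original condition labels.

    # Enumerate the exposure-phase design cells: Modality x Pronunciation
    # attribute, counterbalanced over the ambiguous phoneme (?S vs ?SH).
    from itertools import product

    modalities = ["Audio Only", "AudioVisual"]
    attributes = ["Characteristic", "Incidental"]
    phonemes = ["?S", "?SH"]

    for cell, (modality, attribute, phoneme) in enumerate(
            product(modalities, attributes, phonemes), start=1):
        print(f"Cell {cell}: modality={modality}, attribute={attribute}, "
              f"ambiguous phoneme={phoneme}")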

14 Manipulation: 2 x 2 (all manipulations during the Exposure phase). Pronunciation attribute varies by modality. Audio-Only modality = Order manipulation (to test the 'first impressions' heuristic):

Order      1st half    2nd half    Attribution       Prediction
Odd 1st    dino?aur    legacy      Characteristic    learning
Odd 2nd    legacy      dino?aur    Incidental        no learning
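
The order manipulation itself is easy to express in code. Below is a minimal sketch under the assumption that exposure simply splits into two halves; dino?aur, impa?ent, and legacy come from the slides, while the remaining item is hypothetical.

    # Build exposure sequences for the Audio-Only order manipulation.
    odd_items = ["dino?aur", "impa?ent"]    # ambiguously pronounced items (from the slides)
    good_items = ["legacy", "melody"]       # normally pronounced items; "melody" is hypothetical

    def exposure_list(order):
        """Return the exposure sequence for 'odd_first' or 'odd_second'."""
        if order == "odd_first":
            # odd tokens heard first -> attributed as characteristic -> learning predicted
            return odd_items + good_items
        if order == "odd_second":
            # odd tokens heard later -> attributed as incidental -> no learning predicted
            return good_items + odd_items
        raise ValueError(order)

    print(exposure_list("odd_first"))
    print(exposure_list("odd_second"))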

15 Results: Audio modality. Odd First: perceptual learning (F(1,62)=5.93, p=.018). Odd Second: no perceptual learning (F(1,62)=.29, p=.59). [Figure: identification results along the /s/-/S/ continuum]
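
For readers who want to see the shape of the statistics, here is a hedged sketch of the kind of between-group comparison an F value like those above summarizes: compare a per-participant learning measure (e.g., proportion of "S" responses to ambiguous test tokens) across exposure orders. The numbers are invented and do not reproduce the reported results; scipy is assumed to be available.

    # Illustrative one-way comparison of a learning measure across groups.
    from scipy.stats import f_oneway

    # Hypothetical per-participant proportions of "S" responses to ambiguous tokens
    odd_first_group = [0.71, 0.65, 0.80, 0.68, 0.74, 0.77]    # shifted (learning)
    odd_second_group = [0.52, 0.49, 0.55, 0.47, 0.58, 0.50]   # near baseline (no learning)

    f_stat, p_value = f_oneway(odd_first_group, odd_second_group)
    print(f"F = {f_stat:.2f}, p = {p_value:.3f}")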

16 Manipulation: 2 x 2 (all manipulations during the Exposure phase). Pronunciation attribute varies by modality. AudioVisual modality = Pragmatic manipulation (can it override the 'first impressions' heuristic?):

Pragmatic           Order       Attribution       Prediction
No pen in mouth*    odd first   Characteristic    learning
Pen in mouth        odd first   Incidental        no learning

*The no-pen-in-mouth condition is just an AV version of our Audio, odd-first condition.

17 Manipulation: 2 x 2. Example of the manipulation: [Images: speaker with no pen in mouth vs. with pen in mouth]

18 Results: AudioVisual modality. No Pen in Mouth: perceptual learning (F(1,68)=6.29, p=.015). Pen in Mouth: no perceptual learning (F(1,68)=.04, p>.05). [Figure: identification results along the /s/-/S/ continuum]

19 Overall results / Conclusions. Results: the same acoustic signal is handled differently depending on whether it is assumed to be a characteristic pronunciation or an incidental (perhaps transient) one. Main effect of phoneme (SH vs. S); no interaction with Modality; significant interaction with Pronunciation attribute.

20 Overall results / Conclusions. Converging evidence: our work on idiolectal/dialectal STR shows learning for the ambiguous ?s-S sound when it is speaker-driven, but not when it is contextually-driven. Conclusion: perceptual learning is a powerful mechanism applied conservatively. Pragmatic information plays an immediate role in guiding learning.

21 Thank you

22 Design Elaboration:

              ?S                   ?SH
Audio         odd 1st | odd 2nd    odd 1st | odd 2nd
AudioVisual   odd 1st              odd 1st

23 Design Elaboration (with pragmatic manipulation):

              ?S                          ?SH
Audio         odd 1st | odd 2nd           odd 1st | odd 2nd
AudioVisual   odd 1st (Pen | No Pen)      odd 1st (Pen | No Pen)

