Presentation transcript:

Categorical perception of speech: Task variations in infants and adults
Bob McMurray, Jessica Maye, Andrea Lathrop, and Richard N. Aslin
And a big thanks to Julie Markant

Categorical Perception & Task Variations: Overview
Previous work: Categorical perception and gradient sensitivity to subphonemic detail.
Categorical perception in infants: Reassessing this with HTPP & AEM; infants show gradient sensitivity.
A new methodology.
Adult analogues.

Categorical Perception
Is subphonemic detail retained (and used) during speech perception? For a long time the answer was no: subphonemic variation was assumed to be discarded in favor of a discrete label.

Non-categorical Perception
A number of psychophysical results showed listeners' sensitivity to within-category detail:
4IAX task: Pisoni & Lazarus (1974)
Speeded response: Carney, Widin & Viemeister (1977)
Training: Samuel (1977); Pisoni, Aslin, Hennessy & Perey (1982)
Rating task: Miller (1997); Massaro & Cohen (1983)

Word Recognition
These results did not reveal:
Whether on-line word recognition is sensitive to such detail.
Whether such sensitivity is useful during recognition.

Word Recognition
Mounting evidence that word recognition is sensitive to such detail:
Lahiri & Marslen-Wilson (1991): vowel nasalization
Andruski, Blumstein & Burton (1994): VOT
Gow & Gordon (1995): word segmentation
Salverda, Dahan & McQueen (in press): embedded words and vowel length
Dahan, Magnuson, Tanenhaus & Hogan (2001): coarticulatory cues in vowels

Gradient Sensitivity
McMurray, Tanenhaus & Aslin (2002): Eye-movements to objects after hearing items from a 9-step VOT continuum. There is a systematic relationship between VOT and looks to the competitor.
[Figure: proportion of fixations to the competitor (e.g., the bear picture) as a function of VOT (5-40 ms), with the category boundary marked.]
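As a rough illustration of what a "systematic relationship" means analytically, here is a minimal sketch of regressing competitor fixations on within-category VOT; the VOT steps and fixation proportions below are invented values, not the reported data.

import numpy as np
from scipy import stats

# Within the /b/ side of the continuum, does the proportion of looks to the
# competitor increase linearly with VOT? (Values below are illustrative only.)
vot_ms = np.array([0.0, 5.0, 10.0, 15.0])
competitor_fixations = np.array([0.02, 0.03, 0.045, 0.06])

slope, intercept, r, p, se = stats.linregress(vot_ms, competitor_fixations)
print(f"slope = {slope:.4f} per ms, r = {r:.2f}, p = {p:.3f}")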

Gradient Sensitivity
…BUT: a systematic, gradient relationship between lexical activation and acoustic detail would allow the system to take advantage of fine-grained regularities in the signal, e.g., to anticipate upcoming material and resolve ambiguity (Gow, McMurray & Tanenhaus, Sat., 6:00 poster session).
If fine-grained detail is useful, we might expect infants and children to:
Show gradient sensitivity to variation.
Tune their sensitivity to the learning environment.

Categorical Perception in Infants
Early findings of categorical perception in infants (e.g., Eimas, Siqueland, Jusczyk & Vigorito, 1971) have never been refuted. But most studies use:
Habituation (many repetitions)
Synthetic speech
A single continuum
Perhaps a different method would be more sensitive?

Head-Turn Preference Procedure (Jusczyk & Aslin, 1995)
Infants are exposed to a chunk of language:
Words in running speech
A stream of continuous speech (à la statistical learning)
A word list
After exposure, memory for the exposed items (or abstractions over them) is assessed by comparing listening time for consistent items with listening time for inconsistent items.

How do we measure listening time? After exposure… the center light blinks, bringing the infant's attention to the center.

How do we measure listening time? When the infant looks at the center… one of the side lights blinks.

How do we measure listening time? When the infant looks at the side light… she hears a word ("Beach… Beach… Beach…").

How do we measure listening time? The word keeps playing as long as she keeps looking…

Experiment 1: Gradiency in Infants
7.5-month-old infants were exposed to either four b-words or four p-words (80 repetitions total):
Bomb, Bear, Bail, Beach vs. Palm, Pear, Pail, Peach
The idea: infants form a category of the exposed class of words; if they show gradient sensitivity, listening time should track VOT. We then measure listening time for:
Bear / Pear (original word)
Pear / Bear (opposite category)
Bear* / Pear* (VOT closer to the boundary)

Experiment 1: Stimuli
Stimuli were constructed by cross-splicing natural, recorded tokens of each endpoint.
B: M = 3.6 ms VOT    B*: M = 11.9 ms VOT
P: M = 40.7 ms VOT   P*: M = 30.2 ms VOT
All four stimuli were judged /b/ or /p/ at least 90% consistently by adult listeners: B: 98.5%, B*: 97%, P: 99%, P*: 96%.

Measuring Gradient Sensitivity
Looking time is an indication of interest. After hearing all of those B-words, P sounds pretty interesting, so infants should listen longer to pear than to bear. What about in between?
[Schematic: listening time for Bear, Bear*, and Pear under a gradient account (monotonic increase) vs. a categorical account (Bear* patterns with Bear).]

Individual Differences
Novelty/familiarity preference varies across infants and experiments, and we are only interested in the middle stimuli (b*, p*). Infants were therefore categorized as novelty- or familiarity-preferring by their performance on the endpoints (a simple classification rule is sketched below).

             Novelty   Familiarity
B exposure   27        11
P exposure   19        10

Within each group, will we see evidence for gradiency?
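A minimal sketch of that classification rule, assuming the criterion is simply which endpoint the infant listened to longer; the exact criterion and the listening-time values here are assumptions, not taken from the study.

def classify_preference(listen_trained_endpoint_ms, listen_opposite_endpoint_ms):
    # Longer listening to the opposite-category endpoint than to the trained
    # endpoint is taken as a novelty preference; otherwise familiarity.
    if listen_opposite_endpoint_ms > listen_trained_endpoint_ms:
        return "novelty"
    return "familiarity"

print(classify_preference(5200.0, 7900.0))  # -> "novelty"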

Novelty Results
Novelty infants trained on B: main effect of VOT, p = .001; linear trend, p = .001. Pairwise: B vs. B*, p = .14; B* vs. P, p = .004.
[Figure: listening time (ms, roughly 4000-10000) for B, B*, and P.]
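For concreteness, a minimal sketch of the kind of linear-trend test reported here, using a within-subject linear contrast over the three test stimuli; the listening times below are invented, not the study's data.

import numpy as np
from scipy import stats

# One row per infant; columns are mean listening time (ms) to B, B*, and P test trials.
listening_ms = np.array([
    [5200.0, 6100.0, 7900.0],
    [4800.0, 5900.0, 8300.0],
    [5600.0, 6400.0, 7500.0],
    [5100.0, 5800.0, 8100.0],
])

# Linear contrast weights for a three-level within-subject factor.
weights = np.array([-1.0, 0.0, 1.0])

# Each infant's trend score; testing the mean against zero asks whether listening
# time increases monotonically from B through B* to P.
trend_scores = listening_ms @ weights
t, p = stats.ttest_1samp(trend_scores, 0.0)
print(f"linear trend: t = {t:.2f}, p = {p:.4f}")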

Novelty Results
Novelty infants trained on P: main effect of VOT, p = .001; linear trend, p = .001. Pairwise: P vs. P*, p = .1; P* vs. B, p = .001.
[Figure: listening time (ms) for P, P*, and B.]

Familiarity Results
Familiarity infants showed similar effects.
B exposure: trend, p = .001; B vs. B*, p = .19; B* vs. P, p = .21.
P exposure: trend, p = .009; P vs. P*, p = .057; P* vs. B, p = .096.
[Two panels: trained on B, trained on P.]

Experiment 1: Conclusions
7.5-month-old infants show gradient sensitivity to subphonemic detail.
There are individual differences in familiarity/novelty preferences. Why? Length of exposure? Individual factors?
Limitations of the paradigm may hinder further study; we would like:
More repeated measures
A better understanding of the "task"
A wider age range

Anticipatory Eye-Movements: A New Methodology
An ideal methodology would:
Yield an arbitrary identification response
Yield a response to a single stimulus
Yield many repeated measures
Much like a forced-choice identification task.
Anticipatory Eye-Movements (AEM): train infants to look left or right in response to a single auditory stimulus.

Anticipatory Eye-Movements
A visual stimulus (e.g., for "teak" or "lamb") moves under an occluder, and its reemergence serves as the "reinforcer". A concurrent auditory stimulus predicts the endpoint of the occluded trajectory. Subjects make anticipatory eye-movements to the expected location before the stimulus reappears.

Anticipatory Eye-Movements
After training on the original stimuli, infants are tested on a mixture of:
Original, trained stimuli (reinforced): maintain interest in the experiment and provide an objective criterion for inclusion.
New, generalization stimuli (unreinforced): examine category structure/similarity relative to the trained stimuli.

Experiment 2: Pitch and Duration
Goals:
Use AEM to assess auditory categorization.
Assess infants' ability to "normalize" for variations in pitch and duration… or, put differently, are infants sensitive to acoustic detail during a lexical identification task?

Experiment 2: Pitch and Duration
Training: "teak!" -> rightward trajectory; "lamb!" -> leftward trajectory.
Test: lamb and teak with changes in:
Duration: 33% and 66% longer
Pitch: 20% and 40% higher
If infants ignore irrelevant variation in pitch or duration, performance should be good on the generalization stimuli. If infants' lexical representations are sensitive to this variation, performance will degrade.
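To make the manipulations concrete, a toy sketch: only the 33%/66% and 20%/40% steps come from the slide; the baseline duration and F0 are assumptions, not the actual recordings.

# Toy computation of the test-stimulus manipulations described above.
base_duration_ms = 500.0   # assumed duration of a training token
base_f0_hz = 220.0         # assumed mean pitch (F0) of a training token

test_durations_ms = [base_duration_ms * (1 + step) for step in (0.33, 0.66)]
test_pitches_hz   = [base_f0_hz * (1 + step) for step in (0.20, 0.40)]

print(test_durations_ms)  # [665.0, 830.0]
print(test_pitches_hz)    # [264.0, 308.0]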

The Stimuli Training stimulus

The Stimuli Testing stimulus

Results
Each trial is scored as:
Correct: longer looking time to the correct side.
Incorrect: longer looking time to the incorrect side.
This gives a binary DV, similar to 2AFC. On the trained stimuli, 11 of 29 infants performed better than chance; this is a tough task for infants, and perhaps more training is needed.
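A minimal sketch of that scoring and a per-infant chance test, assuming a one-sided binomial test against 50%; the looking times below are invented and the actual inclusion criterion may differ.

from scipy import stats

def score_trials(look_correct_ms, look_incorrect_ms):
    # A trial is correct if the infant looked longer to the side predicted by the word.
    return [c > i for c, i in zip(look_correct_ms, look_incorrect_ms)]

# Illustrative looking times (ms) on the trained, reinforced test trials.
correct = score_trials([1200, 900, 1500, 800, 1300, 1100],
                       [600, 1100, 700, 650, 500, 900])

# Binary DV, so a binomial test against p = 0.5 asks whether this infant beat chance.
result = stats.binomtest(sum(correct), n=len(correct), p=0.5, alternative="greater")
print(sum(correct), "of", len(correct), "correct; p =", round(result.pvalue, 3))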

Results
On the generalization stimuli: pitch, p > .1; duration, p = .002.
[Figure: proportion of correct trials for the training stimuli and the two duration (D1, D2) and pitch (P1, P2) generalization steps.]

Experiment 2: Conclusions
Infants' developing lexical categories show graded sensitivity to variation in duration, but possibly not to pitch; this might be an effect of "task relevance".
AEM yields more repeated measurements and a better-understood task (2AFC). Could it yield a picture of the entire developmental time course? Is AEM applicable to a wider age range?

Treating Undergraduates Like Babies
Extreme case: adult perception. Adults generally won't look at blinking lights… suck on pacifiers… or kick their feet at mobiles… As a result, few infant methodologies allow direct analogues with adults. But adults do make eye-movements… could AEM be adapted?

Treating Undergraduates Like Babies
Pilot study: 5 adults exposed to the AEM stimuli.
Training: "ba" -> left; "pa" -> right.
Test: a ba-pa (0-40 ms) VOT continuum.

Results
A second group of subjects was run in an explicit 2AFC task. Both tasks yielded the same category boundary, but the 2AFC function had a steeper slope, i.e., less sensitivity to within-category VOT than AEM.
[Figure: proportion /p/ responses as a function of VOT (5-40 ms) for the 2AFC and AEM tasks.]
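One conventional way to quantify the boundary/slope comparison is to fit a logistic psychometric function to each task's identification data; a minimal sketch follows, with illustrative response proportions rather than the reported data.

import numpy as np
from scipy.optimize import curve_fit

def logistic(vot, boundary, slope):
    # Proportion of /p/ responses; 'boundary' is the 50% crossover, 'slope' the steepness.
    return 1.0 / (1.0 + np.exp(-slope * (vot - boundary)))

vot = np.array([0, 5, 10, 15, 20, 25, 30, 35, 40], dtype=float)
p_2afc = np.array([0.02, 0.03, 0.05, 0.15, 0.60, 0.92, 0.97, 0.99, 0.99])  # illustrative
p_aem  = np.array([0.10, 0.15, 0.22, 0.35, 0.58, 0.75, 0.85, 0.90, 0.93])  # illustrative

(b1, s1), _ = curve_fit(logistic, vot, p_2afc, p0=[20.0, 0.5])
(b2, s2), _ = curve_fit(logistic, vot, p_aem,  p0=[20.0, 0.5])
print(f"2AFC: boundary = {b1:.1f} ms, slope = {s1:.2f}")
print(f"AEM:  boundary = {b2:.1f} ms, slope = {s2:.2f}")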

Adult AEM: Conclusions
The AEM paradigm can be used essentially unchanged with adults, and should work with older children as well. Results show the same category boundary as traditional 2AFC tasks, with perhaps more sensitivity to fine-grained acoustic detail.
Potentially useful for speech categorization when the categories are not nameable, pictureable, or immediately obvious.

Conclusions
Like adults, 7.5-month-old infants show gradient sensitivity to subphonemic detail:
VOT
Duration
Perhaps not pitch (with respect to lexical categories)

Conclusions
Task makes the difference:
Moving from habituation to HTPP revealed subphonemic sensitivity; taking individual differences into account was crucial.
Moving to AEM yields a better ability to examine tuning over time and the ability to assess perception across the lifespan with a single paradigm.

Natural Stimuli
Infants may show more sensitivity to natural speech. Stimuli were constructed from natural tokens of actual words ("palm", "bomb") with progressive cross-splicing.

Experiment 1: Reprise
It is difficult to examine how sensitivity might be tuned to environmental factors in the head-turn preference procedure:
High variance/individual differences: can't predict novelty/familiarity.
Only a single point to look at.
Between-subjects comparison.
The interaction of interest would be difficult to obtain.
[Schematic: listening time for Bear, Bear*, and Pear at 6, 8, and 10 months.]

Experiment 1: Reprise
AEM presents a potential solution:
1) Looking at the whole continuum (Bear through Pear, at 6, 8, and 10 months) would yield more power.
2) Is AEM applicable to a wider age range?

The Stimuli Training stimulus

Data analysis Data coded by naive coders from video containing pupil & scene monitors.

Data analysis
Coders label where the infant is looking on each frame (Left, Left-Out, Left-In, Right, Right-Out, Right-In, Center, Start, Off). For analysis:
Left-Out, Right-Out, Center, and Start are treated as "neither".
Left-In and Left are treated as anticipation to the left.
Right-In and Right are treated as anticipation to the right.
Eye-movements are coded from the point of maximal stimulus size to the object's first reappearance (or the end of the trial).
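A minimal sketch of the collapsing rule described above; the exact label strings are assumptions about how the coders' annotations are stored, and "Off" is left unassigned here because the slide does not assign it.

# Map raw frame-by-frame gaze labels onto the analysis categories.
LEFT_LOOKS = {"Left", "Left-In"}
RIGHT_LOOKS = {"Right", "Right-In"}
NEITHER = {"Left-Out", "Right-Out", "Center", "Start"}

def collapse(label):
    if label in LEFT_LOOKS:
        return "left"
    if label in RIGHT_LOOKS:
        return "right"
    if label in NEITHER:
        return "neither"
    return label  # e.g., "Off" is not assigned by the slide; left as-is here

print([collapse(x) for x in ["Left-In", "Center", "Right", "Off"]])
# -> ['left', 'neither', 'right', 'Off']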