Mothers Consistently Alter Their Unique Vocal Fingerprints When Communicating with Infants
Elise A. Piazza, Marius Cătălin Iordan, Casey Lew-Williams
Current Biology, Volume 27, Issue 20, Pages 3162-3167.e3 (October 2017). DOI: 10.1016/j.cub.2017.08.074. Copyright © 2017 Elsevier Ltd.

Figure 1. MFCC Feature Vectors from All Utterances for One Representative Participant. Each vector (dashed lines) represents the time-averaged set of Mel-frequency cepstral coefficients for a single utterance of either adult-directed speech (ADS, shown in blue) or infant-directed speech (IDS, shown in pink). Each bold line represents the average MFCC vector across all 20 utterances for a given condition. Error bars on the averaged vectors represent ±SEM across 20 utterances. Figure S1 depicts average MFCC vectors for each of the 12 English-speaking participants; the vectors displayed in this figure come from participant s12.
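A minimal sketch (not the authors' code) of how such time-averaged MFCC vectors could be computed, assuming one WAV file per utterance and the librosa library; the directory names, file layout, and choice of 13 coefficients are illustrative assumptions rather than details from the paper.

import glob
import numpy as np
import librosa

def utterance_mfcc_vector(path, n_mfcc=13):
    """Load one utterance and return its time-averaged MFCC vector."""
    y, sr = librosa.load(path, sr=None)                      # keep the native sampling rate
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)   # shape: (n_mfcc, n_frames)
    return mfcc.mean(axis=1)                                 # average over time frames

# One vector per utterance, e.g. 20 ADS and 20 IDS utterances for one speaker
# (the "ads/" and "ids/" folders are hypothetical placeholders)
ads_vectors = np.array([utterance_mfcc_vector(p) for p in sorted(glob.glob("ads/*.wav"))])
ids_vectors = np.array([utterance_mfcc_vector(p) for p in sorted(glob.glob("ids/*.wav"))])

# Condition means and SEMs, as plotted for the bold lines and error bars
ads_mean = ads_vectors.mean(axis=0)
ads_sem = ads_vectors.std(axis=0, ddof=1) / np.sqrt(len(ads_vectors))
ids_mean = ids_vectors.mean(axis=0)
ids_sem = ids_vectors.std(axis=0, ddof=1) / np.sqrt(len(ids_vectors))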

Figure 2. Accuracy Rates for Classifying Mothers’ IDS versus ADS Using MFCC Vectors. The first two bars indicate results from training and testing the classifier on English (first bar) and on all other languages (second bar). The third bar shows results from training the classifier on English data and testing on non-English data (and vice versa for the fourth bar). Chance (dashed line) is 50%. N = 12. Classification performance is represented as mean percent correct and ±SEM across cross-validation folds (leave-one-subject-out). ∗∗∗p < 0.001.
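A minimal sketch of the leave-one-subject-out classification scheme described above, using scikit-learn. The linear SVM is a stand-in classifier (the caption does not specify the model), and X, y, and groups are assumed to hold one time-averaged MFCC vector, one IDS/ADS label, and one speaker ID per utterance.

import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def loso_accuracy(X, y, groups):
    """Train on all speakers but one, test on the held-out speaker, and repeat."""
    clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))   # stand-in classifier
    fold_acc = []
    for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups):
        clf.fit(X[train_idx], y[train_idx])
        fold_acc.append(clf.score(X[test_idx], y[test_idx]))
    fold_acc = np.array(fold_acc)
    return fold_acc.mean(), fold_acc.std(ddof=1) / np.sqrt(len(fold_acc))

The cross-language bars would instead fit the same pipeline on all English utterances and score it on all non-English utterances (and vice versa), rather than holding out one speaker at a time.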

Figure 3. Accuracy Rates for Classifying IDS versus ADS Based on Timbre, after Controlling for Pitch and Formants. The first (darkest blue) bars indicate results for MFCC vectors derived from original speech, the second bars indicate results from speech with F0 regressed out, and the third bars indicate results from speech with F0, F1, and F2 regressed out. Bars corresponding to “original speech” are derived from only the segments of each utterance in which an F0 value was obtained, for direct comparison with the regression results (see STAR Methods). Chance (dashed line) is 50%. N = 12. Classification performance is represented as mean percent correct and ±SEM across cross-validation folds (leave-one-subject-out). ∗∗p < 0.01, ∗∗∗p < 0.001.
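A minimal sketch of one way to regress pitch and formant information out of frame-level timbre features before classification; the authors' exact procedure is given in their STAR Methods, so the covariate choices and the frame-wise linear model here are assumptions for illustration only.

import numpy as np
from sklearn.linear_model import LinearRegression

def residualized_mfcc_vector(mfcc_frames, covariates):
    """Remove the part of each MFCC coefficient linearly predictable from the covariates.

    mfcc_frames: (n_frames, n_mfcc) MFCCs from the voiced frames of one utterance.
    covariates:  (n_frames, k) columns such as F0 alone, or F0, F1, and F2.
    """
    reg = LinearRegression().fit(covariates, mfcc_frames)    # one regression per coefficient
    residuals = mfcc_frames - reg.predict(covariates)
    return residuals.mean(axis=0)                            # time-averaged residual vector

These residual vectors would then replace the original MFCC vectors in the leave-one-subject-out classifier sketched after Figure 2.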

Figure 4. Accuracy Rates for Classifying IDS versus ADS Based on Vocal Timbre versus Background Noise. All “speech” (blue) bars are duplicated exactly from Figure 2 and appear again here for visual comparison. “Silence” (yellow) bars are derived from cropped segments containing no sounds except for ambient noise from recordings of English speakers and non-English speakers. Chance (dashed line) is 50%. N = 12. Classification performance is represented as mean percent correct and ±SEM across cross-validation folds (leave-one-subject-out). Figure S2 displays accuracy rates for classifying individual speakers based on speech versus background noise. ∗p < 0.05, ∗∗∗p < 0.001.
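A minimal sketch of the background-noise control, reusing the hypothetical helpers from the earlier sketches; the "silence/" folder of cropped ambient-noise segments is an illustrative placeholder, not a path from the paper.

import glob
import numpy as np

# Reuse utterance_mfcc_vector() and loso_accuracy() from the sketches above.
silence_vectors = np.array(
    [utterance_mfcc_vector(p) for p in sorted(glob.glob("silence/*.wav"))]
)
# Feeding these vectors (with the same IDS/ADS labels and speaker groups) into
# loso_accuracy() tests whether ambient noise alone can distinguish the conditions.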