Cultural Confusions Show that Facial Expressions Are Not Universal

Presentation transcript:

Cultural Confusions Show that Facial Expressions Are Not Universal
Rachael E. Jack, Caroline Blais, Christoph Scheepers, Philippe G. Schyns, Roberto Caldara
Current Biology, Volume 19, Issue 18, Pages 1543-1548 (September 2009)
DOI: 10.1016/j.cub.2009.07.051
Copyright © 2009 Elsevier Ltd

Figure 1. Fixation Distributions
(A) Fixation distributions for each observer group, collapsed across race of face and the seven expression categories (see Figure S2 for fixation distributions for each condition separately). Color-coded distributions represent the density of fixations across face regions, with red showing the most densely fixated regions. Note that for East Asian (EA) observers, fixations are biased toward the upper part of the face, whereas for Western Caucasian (WC) observers, fixations are more evenly distributed across the face, including the mouth.
(B) Fixation distributions for “surprise,” “fear,” “disgust,” and “anger.” Color-coded distributions presented on grayscale sample stimuli show the relative distributions of fixations across face regions. Color coding is as follows: blue, “left eye”; green, “right eye”; yellow, “bridge of nose”; orange, “center of face”; red, “mouth.” Higher color saturation indicates higher fixation density, shown relative to all conditions. Note that the red “mouth” fixations are less intense for EA observers than for WC observers across conditions. Color-coded bars to the left of each face represent the mean categorization accuracy for that condition, with red indicating a significant difference in categorization errors between groups (p < 0.05). Error bars indicate standard error of the mean.
Current Biology 2009, 19, 1543-1548. DOI: 10.1016/j.cub.2009.07.051. Copyright © 2009 Elsevier Ltd.
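The caption does not say how the color-coded density maps were produced; the sketch below shows one standard way to turn discrete fixation coordinates into a smoothed map of the kind plotted in (A) and (B). The function name, the Gaussian bandwidth `sigma_px`, and the pixel-grid representation are illustrative assumptions, not the authors' method.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fixation_density_map(fixations_xy, image_shape, sigma_px=15.0):
    """Accumulate fixation coordinates into a pixel grid and smooth them
    into a density map (red = most densely fixated in the figure).

    fixations_xy : iterable of (x, y) fixation positions in pixels
    image_shape  : (height, width) of the face stimulus
    sigma_px     : Gaussian smoothing bandwidth in pixels (assumed value)
    """
    h, w = image_shape
    counts = np.zeros((h, w))
    for x, y in fixations_xy:
        xi, yi = int(round(x)), int(round(y))
        if 0 <= yi < h and 0 <= xi < w:
            counts[yi, xi] += 1                         # raw fixation counts
    density = gaussian_filter(counts, sigma=sigma_px)   # spatial smoothing
    return density / density.max() if density.max() > 0 else density
```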

Figure 2. Fixation Sequences for “Surprise,” “Fear,” “Anger,” and “Disgust”
Successions of color-coded circles represent the fixation sequences extracted via minimum description length analysis, with each circle representing a face region. Face regions are color-coded as in Figure 1B. For example, the succession of blue → green → blue circles (indicated by the black arrow) corresponds to the fixation sequence “left eye” → “right eye” → “left eye.” Single color-coded circles correspond to fixations that do not appear as part of a sequence. Black and white bars to the right of the fixation sequences represent how frequently each fixation sequence appeared in the data set, with black indicating correct trials and white indicating incorrect trials. Different levels of gray in each condition represent the order of the fixation sequences (see Experimental Procedures). Note the higher number of fixation sequences for EA observers compared to WC observers across expressions (see also Figure S3).
Current Biology 2009, 19, 1543-1548. DOI: 10.1016/j.cub.2009.07.051. Copyright © 2009 Elsevier Ltd.
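The sequences in this figure come from a minimum description length analysis described in the Experimental Procedures; the sketch below is not that analysis but a simplified sliding-window count that conveys the underlying bookkeeping: tally how often each region-to-region subsequence occurs, separately for correct and incorrect trials. The trial format and the window length are assumptions.

```python
from collections import Counter

def count_fixation_sequences(trials, length=3):
    """Tally region-to-region fixation subsequences of a given length,
    separately for correct and incorrect trials (simplified stand-in for
    the paper's minimum description length analysis).

    trials : iterable of (regions, is_correct), where `regions` is the
             ordered list of fixated face regions on one trial, e.g.
             (["left eye", "right eye", "left eye"], True)
    """
    correct, incorrect = Counter(), Counter()
    for regions, is_correct in trials:
        # slide a window of `length` consecutive fixations over the trial
        for i in range(len(regions) - length + 1):
            seq = tuple(regions[i:i + length])
            (correct if is_correct else incorrect)[seq] += 1
    return correct, incorrect
```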

Figure 3. The Model Observer: Illustration of the Procedure to Compute Estimated Patterns of Confusion
(A) Information samples. To compute estimated patterns of confusion, we used the model to sample face information from the stimulus expression (e.g., “fear”) and from the same location on the other expressions (e.g., “surprise,” “anger,” and “disgust”). The face images illustrate an example of the information sampled.
(B) Pattern of confusions. The model then Pearson correlated the stimulus expression sample with each of the other expression samples. These correlations (plotted in dashed color-coded lines beneath each corresponding face) represented the confusions of the model and were fitted (using ordinary least squares) against the behavioral confusions of the EA observers (plotted in black). The behavioral confusions of the EA observers were calculated by categorizing each incorrect trial by response for each expression (e.g., for “fear” trials, the numbers of incorrect responses were computed for “surprise,” “anger,” and “disgust”). We repeated the sampling and correlation process for 10,000 individual samples selected randomly across the face and finally sorted each information sample according to its fit to the behavioral confusions of the EA observers (“best” to “worst” fits are shown in Figure 4). We followed the same procedure for each expression (see Figure S5 for a full illustration).
Current Biology 2009, 19, 1543-1548. DOI: 10.1016/j.cub.2009.07.051. Copyright © 2009 Elsevier Ltd.
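The procedure in (A) and (B) can be summarized as a short loop: sample the same location from every expression image, Pearson-correlate the stimulus sample with each of the other samples, fit those model confusions to the observers' behavioral confusion counts by ordinary least squares, and repeat for many random locations. The sketch below follows that outline; the patch size, grayscale image format, and goodness-of-fit measure (residual sum of squares) are assumptions standing in for the details given in the Experimental Procedures and Figure S5.

```python
import numpy as np
from scipy.stats import pearsonr

PATCH = 20  # assumed square sampling aperture, in pixels

def sample_patch(img, x, y):
    """Extract the face information at location (x, y) of a grayscale image."""
    return img[y:y + PATCH, x:x + PATCH].ravel()

def model_confusions(stim_img, other_imgs, x, y):
    """Pearson-correlate the stimulus sample with the samples taken at the
    same location on the other expressions (higher r = more confusable)."""
    stim = sample_patch(stim_img, x, y)
    return np.array([pearsonr(stim, sample_patch(img, x, y))[0]
                     for img in other_imgs])

def fit_error(model_conf, behavioral_conf):
    """Ordinary least squares fit of the behavioral confusion counts on the
    model confusions; return the residual sum of squares (lower = better fit)."""
    X = np.column_stack([model_conf, np.ones_like(model_conf)])
    coef, *_ = np.linalg.lstsq(X, behavioral_conf, rcond=None)
    return float(np.sum((behavioral_conf - X @ coef) ** 2))

def rank_samples(stim_img, other_imgs, behavioral_conf, n_samples=10_000, seed=0):
    """Sample many random face locations and sort them by goodness of fit
    (the paper uses 10,000 samples per expression)."""
    rng = np.random.default_rng(seed)
    h, w = stim_img.shape
    results = []
    for _ in range(n_samples):
        x = int(rng.integers(0, w - PATCH))
        y = int(rng.integers(0, h - PATCH))
        err = fit_error(model_confusions(stim_img, other_imgs, x, y),
                        behavioral_conf)
        results.append((err, x, y))
    return sorted(results)  # best-fitting locations first
```

Ranking the sampled locations by this fit is what produces the “best” to “worst” contour ordering shown in Figure 4.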

Figure 4. Model Observer and EA Observer Fixation Maps
Contour plots: color-coded lines represent the rank order of information samples according to fit, with red showing the “best” fit (scale on left). For example, by sampling the eyebrow (delimited in red) and eye region (delimited in orange) in “fear,” the model produced a pattern of confusions most similar to that of the EA observers. In contrast, the lower part of the face (delimited in blue and green) produced a pattern of confusions most dissimilar to that of the EA observers.
Fixation patterns: for each expression, fixations leading to behavioral confusions are shown by relative distributions presented on grayscale sample stimuli. Red areas indicate higher fixation density for each expression (scale on right). Note the higher density of EA observer fixations within face regions ranked as “best fit.” This demonstrates that the behavioral confusions of the EA observers are symptomatic of an information sampling strategy that selects ambiguous information (i.e., the eyes and eyebrows) while neglecting more diagnostic features (i.e., the mouth).
Current Biology 2009, 19, 1543-1548. DOI: 10.1016/j.cub.2009.07.051. Copyright © 2009 Elsevier Ltd.