1 Understanding Emotion: auditory emotion recognition
Human Social Interaction: perspectives from neuroscience
Dr. Roger Newport, Room B47
Student Drop-in Time: Tuesdays 12-2
2 This lecture:
Recognising emotion from prosody
Theories of emotion processing
Current research
3 Understanding emotion from auditory cues: prosody
Prosody is the melody or musical quality of the spoken voice, conveyed by changes in e.g. pitch, syllable duration and volume.
We can differentiate many emotions from prosody alone, e.g. anger, sadness, happiness.
It is a universal and early-developing skill.
What are the neural bases for this ability?
Are they the same as for language?
Are they the same as for differentiating emotion from visual cues?
4 Prosody skills: innate and special abilities
Babies can differentiate between the sounds of voices before they can understand speech.
Children can produce the melody or intonation of speech before they can produce two-word combinations.
We can tell the difference between a spontaneous and a mechanical smile.
We think we can tell something about size and attractiveness from the sound of someone's voice.
We can understand cartoon characters even though they do not speak.
5 Uses of prosody in the film industry
Examples: trombone, toy whistle, badger, walrus, mad dog, starving bear.
6 Emotion from prosody: the same as from facial expression?
Reminder on emotion from facial expressions, from last week:
Clear evidence for involvement of the amygdala in fear recognition, and of the insula/striatum in the recognition of disgust.
Not so clear for other emotions.
Does the same hold true for recognising emotions from the expression of the voice?
7 Emotion from prosody, the same as from facial expression? The insula and disgust
Most emotions are identified from prosody at 50-60% accuracy (chance = 10-20%).
Disgust, however, is nearly impossible to recognise from prosody, and very difficult to measure under experimental conditions.
Therefore there has been very little successful research on the recognition of disgust from prosody.
8 Evidence from human neuropsychology: the amygdala and fear
Adolphs et al. (2001): recognition of emotion from prosody in 15 unilateral left and 11 unilateral right amygdala-damaged patients, plus 50 brain-damaged (BD) controls. No differences between groups. (But note that bilateral amygdala damage is usually necessary to abolish facial fear recognition.)
Adolphs and Tranel (1999): 2 patients with complete bilateral amygdala damage (including SM), 15 BD and 14 normal controls. SM showed normal emotion recognition from prosody.
So the amygdala's role is not as critical for prosody as it is for facial emotion.
Which regions might be involved? Are prosody and facial skills dissociable?
9 Evidence from human neuropsychology for right-hemisphere involvement: patient KB and amusia
Music activates brain regions that are also associated with emotion processing.
Trained musicians are better at identifying emotion from prosody and from tone sequences that imitate prosody.
Children who study keyboard (vs. drama or nothing) are also better.
KB: amusic following a right-hemisphere stroke; unable to discriminate pitch or rhythm patterns in linguistic or musical stimuli. Also impaired on prosodic perception tasks (e.g. discriminating statements from questions).
10 Evidence from human neuropsychology: hemispheric asymmetry
Barrett et al. (1999): a patient with a large left-hemisphere lesion showed normal emotional prosody yet a severe inability to process propositional speech.
Schmitt et al. (1997): 27 RH patients, 25 LH patients; disproportionately impaired recognition of emotion from facial and prosodic cues in the RH group when judging multimodal video clips.
Peper and Irle (1997): the RH is disproportionately important for processing emotion from prosody.
Pell (2005): RH patients (mixed damage!) were impaired at recognising emotion from prosody; LH patients were impaired at interpreting the prosodic code within language content.
11 Brain-damaged patient groups and TMS
Adolphs et al. (2002): lesion analysis of 66 brain-damaged subjects.
Van Rijn et al. (2005): 1 Hz TMS for 12 minutes over the somatosensory area associated with the lips/tongue/jaw slowed RTs to 'withdrawal' emotions (fear/sadness), but not to approach emotions (e.g. happiness).
12 Human imaging of voice-selective cortex
Belin et al. (2000): stimulus conditions included bells, human non-vocal sounds, amplitude-modulated noise, vocal sounds and scrambled voices.
13 Human imaging of voice-selective cortex
Friederici and Alter (2004): prosodic data adapted from Plante et al. (2001).
Using fMRI, Buchanan et al. showed that the detection of emotional prosody is associated with increased activation in the right hemisphere (right inferior frontal lobe and right anterior auditory cortex).
14 Wildgruber et al. (2005): fMRI experiment
Designed to separate phonetic from affective prosodic components.
Emotionally neutral spoken German sentences such as 'Der Gast hat sich für Donnerstag ein Zimmer reserviert' ('The guest has reserved a room for Thursday') or 'Die Anrufe werden automatisch beantwortet' ('The calls are answered automatically'), read with 5 different emotions (happiness, sadness, anger, fear and disgust).
Recognition was tested behaviourally prior to fMRI: all emotions at 90-95% accuracy except disgust (77%).
2 tasks in the scanner: say the emotion; say the vowel after the first 'a'.
15 fMRI results:
In the scanner, disgust (and fear) recognition dropped to near 50% accuracy (difference n.s.).
16 The trouble with fMRI: both tasks involve listening and automatic processing of linguistic, syntactic, phonological and prosodic information, as well as motor responses.
So we get activation of auditory cortex, the phonological store, supplementary motor areas, etc.
17 Use the subtraction method:
No activations were found for specific emotions. Note: no amygdala activation for fear and no insula activation for disgust (but the study had low power).
2 areas were associated with emotion identification: right STS and right inferior frontal cortex (rIFC). The rIFC is involved in emotion comprehension from both facial and prosodic cues.
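The logic of the subtraction method can be sketched numerically. The following Python toy (all voxel values and locations are invented for illustration, not real data) shows how activity shared by the emotion task and the vowel control task cancels out, leaving only the emotion-specific voxels:

```python
import numpy as np

# Toy sketch of the fMRI subtraction method. Activity common to both
# tasks (listening, phonological and motor processing) appears in both
# condition maps and cancels in the subtraction; only task-specific
# activity survives. All values here are invented for illustration.
rng = np.random.default_rng(0)
shared = rng.normal(1.0, 0.1, size=(4, 4))   # auditory/phonological/motor activity

emotion_specific = np.zeros((4, 4))
emotion_specific[1, 2] = 0.8                 # stand-in for a right STS voxel
emotion_specific[2, 3] = 0.6                 # stand-in for a right IFC voxel

emotion_task = shared + emotion_specific     # "say the emotion" condition
vowel_task = shared                          # "say the vowel" control condition

contrast = emotion_task - vowel_task         # the subtraction
active = contrast > 0.5                      # simple threshold
print(np.argwhere(active))                   # only the emotion-specific voxels survive
```

Real analyses average over many trials and apply proper statistics, but the cancellation of shared activity is the same idea.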
18 Imaging of specific emotions: prosody, anger and attention
Sander et al. (2005): attend to the left or right ear; make a gender judgment.
Right middle STS more active for anger vs. neutral, irrespective of attention.
Right amygdala more active for anger vs. neutral, irrespective of attention.
Right OFC more active for attended anger only.
Right cuneus more active for attended anger only.
19 Summary
The recognition of emotion from prosody is not analogous to the recognition of emotion from facial expression.
Recognising emotional prosody draws on multiple structures distributed between the left and right hemispheres.
The roles of these structures are not all equal; each may be most apparent in processing specific auditory features that provide cues for recognising the emotion.
Despite the distributed nature of the processing, the right hemisphere appears most critical: in particular the right inferior frontal regions, working together with the right superior temporal region, the left frontal regions and subcortical structures, all interconnected by white matter.
21 So far we have looked at:
Types of emotion expression (facial/prosodic).
Brain regions associated with various emotions and types of expression.
So we know what the brain does with emotionally expressive stimuli, and where might be important for doing this, but we have not looked at HOW the brain might process emotionally expressive stimuli.
Broadly speaking, there are 2 main theories of emotion processing:
1. Theory theory
2. Simulation theory
22 Theory theory:
Children as young as 5 have extensive causal knowledge in the form of intuitive theories (e.g. the 'screening-off' task).
23 Theory theory of emotion recognition: an information-based account
Employs a naive psychological 'folk' theory to infer the emotional states of others, drawing on:
visually obtained knowledge of the facial configuration of the target;
semantic knowledge concerning facial configurations;
general knowledge concerning a given emotion, i.e. its typical elicitors or behavioural effects;
knowledge that facial configuration 'C' is paired with emotion label 'E'.
24 Simulation theory
We interpret the emotions of others by covertly simulating their response and matching the outcome to our knowledge of outcomes.
How might this work for emotion recognition?
Person A sees person B pulling facial configuration 'C'.
A covertly facsimiles 'C' (or what she thinks 'C' to be).
A attributes the resulting emotion label 'E' to person B.
25 Evidence for the simulation-theory account of emotion recognition
Paired emotion deficits on face-based emotion recognition (FaBER) tasks.
Paired deficits for fear: e.g. patients SM and NM.
SM: bilateral amygdala damage, neighbouring areas spared. She did not express fear and could not recognise the expression of fear in others. She knew what fear was supposed to be, what should cause it, and even what the response might be, but could not show it. She was unable to learn the significance of unpleasant situations and does not show fear conditioning.
26 Paired deficits for disgust
Imaging studies: Phillips et al. (1997, 1998), fMRI. Observing FaBER disgust stimuli activates the right insula; the insula is known to be involved in the experience of unpleasant tastes and smells.
Wicker et al., fMRI: watching videos of facial expressions made in response to a pleasant or disgusting smell vs. experiencing the pleasant or disgusting smell directly. Both disgust conditions preferentially activated the left anterior insula and right anterior cingulate cortex.
Patient studies: NK (Calder et al., 2000), insula and basal ganglia damage; paired impairment on disgust measures (questionnaire and FaBER).
27 Paired deficits for anger
The dopamine system is a neural subsystem involved in the processing of aggression in social-agonistic encounters across a wide variety of species, and plays an important role in mediating the experience of anger.
Dopamine levels in rats and other species are elevated in social-agonistic encounters.
Administration of a dopamine antagonist (e.g. sulpiride) selectively impairs responses to agonistic encounters.
In humans, sulpiride administration selectively disrupts FaBER for anger: subjects were worse at recognising angry faces, with no such impairment for other emotions.
28 Generate and test
How might simulation work? One proposal is a generate-and-test loop:
1. Generate a hypothesised emotion. (How? Random generation would be too slow; from a theory? How?)
2. Produce a (covert) facial expression.
3. Test the expression: does it match the expression of the other? (How? Visual-to-proprioceptive matching; learnt or innate?)
4. If no, return to step 1. If yes, classify your own emotional state and attribute it to the other.
Prediction: a deficit in the production of an emotion (or its facsimile) leads to an impairment in the recognition of that emotion.
29 Reverse simulation
1. Visual representation of the other's facial expression.
2. Activation of the facial muscles that imitate the other's facial expression. (Such imitation is innate, and adults covertly mimic FaBER stimuli, measurable by EMG: Dimberg et al.)
3. Experience of the emotion.
4. Classify your own state and attribute it to the other.
Here activation of the facial musculature precedes the emotion: the model engages the cognitive processes in reverse. Emotional state and facial expression are bidirectional: the sensation of an emotion produces a facial expression, and a facial expression produces a mild sensation of the emotion.
30 Reverse simulation with an 'as if' loop
1. Visual representation of the other's facial expression.
2. An 'as if' step replaces actual muscle activation: a direct link from visual input of the other's face to a somatosensory representation of what it would feel like to pull that face.
3. Experience of the emotion.
4. Classify your own state and attribute it to the other.
Bypassing muscle activation gets round the Möbius problem, but there are no details of how the link might work.
31 Unmediated resonance model (shared manifold hypothesis)
1. Observation of the other's facial expression.
2. Direct, automatic activation (mirroring) of the neural systems associated with that facial emotion; requires no mediating structures or processes.
3. Shared emotion.
4. Labelling of the emotion.
32 Wild et al. (2003): contagious emotions
You are slower to make a facial movement that is incongruent with an observed expression than one that is congruent with it.
33 Theory theory of emotion recognition: can you lesion this model?
visually obtained knowledge of the facial configuration of the target;
semantic knowledge concerning facial configurations;
general knowledge concerning a given emotion, i.e. its typical elicitors or behavioural effects;
knowledge that facial configuration 'C' is paired with emotion label 'E'.
34 Summary
Research into understanding emotions has revealed that deficits in face-based recognition are paired with deficits in the production of the same emotion.
Of the theory-theory and simulation approaches, simulation theory seems to offer the best explanation of the data.
The precise mechanisms by which simulation might work are still unclear, but reverse-simulation models (with an 'as if' loop) and the more recent unmediated resonance model can both account for the neuroscientific data.
But some people still don't believe it.
35 Current research (July 2007)
Audiovisual integration of emotional signals in voice and face: an event-related fMRI study.
Rationale: there has been plenty of research into visual emotion recognition and some into auditory emotion recognition, but (almost) no-one has studied audiovisual integration of (dynamic) emotional stimuli.
Methods: behavioural and event-related fMRI tasks. Participants viewed and heard faces and words, either A only, V only or AV; neutral + 6 basic emotions (surprise!). The analysis looked for areas of AV overlap that were not explained by A or V alone: (AV>A) ∩ (AV>V).
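The conjunction (AV>A) ∩ (AV>V) can be illustrated with a short Python sketch (the per-voxel activation values below are invented for illustration): a voxel counts as an audiovisual integration site only if the AV condition exceeds BOTH unimodal conditions.

```python
import numpy as np

# Toy sketch of the conjunction analysis (AV>A) AND (AV>V).
# Each array holds an invented activation value for four voxels.
A = np.array([2.0, 1.0, 0.5, 1.5])    # auditory-only activation
V = np.array([0.5, 1.2, 2.0, 1.4])    # visual-only activation
AV = np.array([1.9, 2.0, 1.0, 1.3])   # audiovisual activation

av_gt_a = AV > A                      # the (AV>A) contrast
av_gt_v = AV > V                      # the (AV>V) contrast
integration = av_gt_a & av_gt_v       # the conjunction: both must hold

print(np.nonzero(integration)[0])     # only voxel 1 passes both contrasts
```

The conjunction is stricter than a simple AV vs. baseline contrast: voxel 0 responds strongly to AV but no more than to A alone, so it is (correctly) excluded.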
37 Results
Behavioural results: people were better when both A and V information was available.
Imaging results: bilateral posterior superior temporal gyrus activation, plus right thalamus (not shown).
This is what they predicted (honest).
38 Current research (September 2006)
Amygdala damage impairs emotion recognition from music.
Rationale: we know the amygdala is implicated in the recognition of fear (from faces), and patients following temporal-lobe removal (including the amygdala) are impaired at recognising scary music. Is the amygdala specifically necessary for scary-music recognition?
Methods: neuropsychology (patient SM); music discrimination and emotion recognition/rating tasks.
Results: SM was fine at discrimination but poor at recognising negative emotions in music (sad and scary).
39 Current research (September 2006)
Beyond the right hemisphere: brain mechanisms mediating vocal emotional processing. TICS review article.
42 Current research (2008)
Behold the voice of wrath: cross-modal modulation of visual attention by anger prosody (Brosch et al., 2008).
Rationale: we know the amygdala directs our attention to socially relevant visual stimuli, and that the amygdala responds to anger prosody. Does anger prosody direct our visual attention?
Methods: dichotic listening with a cueing paradigm.
Results: yes it does. Visual targets were detected faster when they appeared on the same side as the anger was delivered.
43 Current research (2007)
Emotional prosodic processing in auditory hallucinations.
Rationale: patients with schizophrenia are impaired at prosody recognition, and prosodic cues are important for speaker identity. Could a prosodic deficit be responsible for the misattribution of voices in auditory hallucinations?
Methods: rate emotional (but semantically neutral) spoken sentences from sad to happy on a Likert scale. Groups: normal controls, schizophrenic patients with hallucinations, and schizophrenic patients without.
Results: only the hallucinating patients were impaired compared to controls.
44 Next lecture: revision/FAQ lecture on 3 November. What to do now:
Don't panic (I'll tell you when).
Read some articles and start planning and writing up an experiment.
Submit revision questions using the feedback page before the revision/feedback lecture on 3 November.