Human Social Interaction perspectives from neuroscience Dr. Roger Newport Room B47 Student Drop-in Time: Tuesdays 12-2 www.psychology.nottingham.ac.uk/staff/rwn.


1 Human Social Interaction: perspectives from neuroscience. Dr. Roger Newport, Room B47. Student Drop-in Time: Tuesdays 12-2. Understanding Emotion: auditory emotion recognition

2 This lecture:
- Recognising emotion from prosody
- Theories of emotion processing
- Current research

3 Understanding emotion from auditory cues: prosody
- Prosody is the melody or musical quality of the spoken voice, conveyed by changes in e.g. pitch, syllable duration and volume
- We can differentiate many emotions from prosody alone, e.g. anger, sadness, happiness
- A universal and early-developing skill
- What are the neural bases for this ability? Are they the same as for language? Are they the same as for differentiating emotion from visual cues?

4 Prosody skills: innate and special abilities
- We can tell the difference between a spontaneous and a mechanical smile
- We think we can tell something about size and attractiveness from the sound of someone's voice
- Babies can differentiate between the sounds of voices before they can understand speech
- We can understand cartoon characters even though they do not speak
- Children can produce the melody or intonation of speech before they can produce two-word combinations

5 Uses of prosody in the film industry: Trombone, Toy whistle, Badger, Walrus, Mad Dog, Starving Bear

6 Emotion from prosody: the same as from facial expression?
A reminder from last week on emotion from facial expressions: there is clear evidence for involvement of the amygdala in fear recognition and of the insula/striatum in the recognition of disgust, but the picture is not so clear for other emotions. Does the same hold true for recognising emotions from the expression of the voice?

7 Emotion from prosody: the same as from facial expression? The insula and disgust
- Most emotions are identified from prosody at 50-60% accuracy (chance = 10-20%)
- Disgust is nearly impossible to recognise from prosody and very difficult to measure in experimental conditions
- Therefore there is very little successful research on the recognition of disgust from prosody

8 Evidence from human neuropsychology: the amygdala and fear
- Adolphs et al. (2001): recognition of emotion from prosody in 15 unilateral left and 11 unilateral right amygdala patients and 50 brain-damaged (BD) controls. No differences between groups (but bilateral amygdala damage is usually necessary to abolish facial fear recognition)
- Adolphs and Tranel: complete bilateral amygdala damage (inc. patient SM), 15 BD and 14 normal controls (NC). SM showed normal emotion recognition from prosody
- So the amygdala's role is not as critical for prosody as for facial emotion. Which regions might be involved? Are prosody and facial skills dissociable?

9 Evidence from human neuropsychology for right hemisphere involvement: patient KB and amusia
- Music activates brain regions that are also associated with emotion processing
- Trained musicians are better at identifying emotion from prosody and from tone imitations of prosody
- Children who study keyboard (vs drama or nothing) are also better
- KB: amusic following a right hemisphere (RH) stroke; unable to discriminate pitch or rhythm patterns in linguistic or musical stimuli; also impaired on prosodic perception tasks (e.g. discriminating statements from questions)

10 Evidence from human neuropsychology: hemispheric asymmetry
- Barrett et al. (1999): a patient with a large left hemisphere lesion showed normal emotional prosody yet a severe inability to process propositional speech
- Schmitt et al. (1997): 27 RH patients, 25 LH patients; recognition of emotion from facial and prosodic cues was disproportionately impaired in the RH group when judging multimodal video clips
- Peper and Irle (1997): the RH is disproportionately important for processing emotion from prosody
- Pell (2005): RH patients (mixed damage!) were impaired at recognising emotion from prosody; LH patients were impaired at interpreting the prosodic code within language content

11 Lesion analysis and TMS
- Adolphs et al. (2002): lesion analysis across brain-damaged patient groups, 66 brain-damaged subjects
- Human imaging (TMS), van Rijn et al.: TMS for 12 minutes over the somatosensory area associated with the lips/tongue/jaw slowed RTs to withdrawal emotions (fear/sadness), but not to approach emotions (e.g. happy)

12 Human imaging of voice-selective cortex
Belin et al.: stimuli included bells, human non-vocal sounds, amplitude-modulated noise, vocal sounds and scrambled voices

13 Human imaging of voice-selective cortex
- Friederici and Alter (2004): prosodic data adapted from Plante et al.
- Using fMRI, Buchanan et al. showed that the detection of emotional prosody is associated with increased activation in the right hemisphere (right inferior frontal lobe and right anterior auditory cortex)

14 Wildgruber et al. (2005): an fMRI experiment designed to separate phonetic from affective prosodic components
- Emotionally neutral spoken sentences such as "Der Gast hat sich für Donnerstag ein Zimmer reserviert" ("The guest has reserved a room for Thursday") or "Die Anrufe werden automatisch beantwortet" ("The calls are answered automatically")
- Read with 5 different emotions (happiness, sadness, anger, fear and disgust)
- Tested for recognition behaviourally prior to fMRI (all 90-95% accuracy except disgust, 77%)
- 2 tasks: say the emotion; say the vowel after the first "a"

15 fMRI results: disgust (and fear) recognition dropped to near 50% accuracy

16 The trouble with fMRI: both tasks involve listening and the automatic processing of linguistic, syntactic, phonological and prosodic information, as well as motor responses. So you get activation of auditory cortex, the phonological store, supplementary motor areas, etc.

17 Use the subtraction method: 2 areas were associated with emotion identification, the right superior temporal sulcus (STS) and right inferior frontal cortex (rIFC). The rIFC is involved in emotion comprehension from both facial and prosodic cues.
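The logic of the subtraction method can be sketched numerically. This is a minimal toy illustration of the idea (activation in the emotion task minus activation in the control vowel task), not the authors' analysis pipeline: the voxel values, the threshold and the helper function are all invented for the example.

```python
# Toy sketch of the fMRI subtraction method: activation during the
# emotion task minus activation during the control (vowel) task.
# All numbers are invented for illustration only.

def subtraction_contrast(emotion_task, control_task, threshold=0.5):
    """Return indices of 'voxels' where the emotion task exceeds the
    control task by more than the threshold."""
    assert len(emotion_task) == len(control_task)
    return [i for i, (e, c) in enumerate(zip(emotion_task, control_task))
            if e - c > threshold]

# Toy 1-D 'brain' of six voxels (mean signal per task):
emotion = [1.2, 0.8, 2.1, 0.9, 1.9, 1.0]
vowel   = [1.1, 0.9, 1.0, 0.8, 1.0, 1.1]

active = subtraction_contrast(emotion, vowel)
print(active)  # -> [2, 4]: only voxels 2 and 4 survive the subtraction
```

Processing shared by both tasks (listening, phonology, motor response) subtracts out, leaving only regions where the emotion task adds activation, which is the reasoning behind isolating the STS and rIFC.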

18 Imaging of specific emotions: prosody, anger and attention
Sander et al. (2005): attend to the left or right ear and make a gender judgment

19 Summary
- The recognition of emotion from prosody is not analogous to the recognition of emotion from facial expression
- Recognising emotional prosody draws on multiple structures distributed between the left and right hemispheres
- The roles of these structures are not all equal, but may be most apparent in processing the specific auditory features that provide cues for recognising the emotion
- Despite the distributed nature of the processing, the right hemisphere appears most critical, in particular the right inferior frontal regions working together with the right superior temporal region, the left frontal regions and subcortical structures, all interconnected by white matter

20 Break

21 So far we have looked at the types of emotion expression (facial/prosodic) and the brain regions associated with various emotions and types of expression. So we know what the brain does with emotionally expressive stimuli and which regions might be important for doing this, but we have not looked at HOW the brain might process emotionally expressive stimuli. Broadly speaking, there are 2 main theories of emotion processing: 1. theory theory; 2. simulation theory

22 Theory theory: children as young as 5 have extensive causal knowledge in the form of intuitive theories (e.g. the screening-off task)

23 Theory theory of emotion recognition: an information-based account that employs a naive psychological folk theory to infer the emotional states of others, drawing on:
- visually obtained knowledge of the facial configuration of the target
- semantic knowledge concerning facial configurations
- general knowledge concerning a given emotion, i.e. its typical elicitors or behavioural effects
- knowledge that facial configuration C is paired with emotion label E

24 Simulation theory: we interpret the emotions of others by covertly simulating their response and matching the outcome to our knowledge of outcomes. How might this work for emotion recognition?
- Person A sees person B pulling facial configuration C
- A covertly produces a facsimile of C (or of what she thinks C to be)
- A attributes the resulting emotion label E to person B

25 Evidence for a simulation theory account of emotion recognition: paired deficits for fear
Paired emotion deficits on face-based emotion recognition (FaBER) tasks, e.g. patients SM and NM (fear). SM: bilateral amygdala damage with neighbouring areas spared. She did not express fear and could not recognise the expression of fear in others, yet she knew what fear was supposed to be, what should cause it, and even what the response might be; she simply could not show it. She was unable to learn the significance of unpleasant situations and does not show fear conditioning.

26 Paired deficits for disgust
Imaging studies: Phillips et al. (1997, 1998), fMRI: observing FaBER disgust stimuli activates the right insula, and the insula is known to be involved in the experience of unpleasant tastes and smells. Wicker et al., fMRI: watching a video of facial expressions made in response to a pleasant or disgusting smell vs. experiencing a pleasant or disgusting smell; both disgust conditions preferentially activated the left anterior insula and right anterior cingulate cortex.
Patient studies: NK (Calder et al., 2000), insula and basal ganglia (BG) damage; paired impairment on disgust measures (questionnaire and FaBER)

27 Paired deficits for anger
- The dopamine system is a neural subsystem involved in the processing of aggression in social-agonistic encounters in a wide variety of species, and plays an important role in mediating the experience of anger
- Dopamine levels in rats and other species are elevated in social-agonistic encounters
- Administration of a dopamine antagonist (e.g. sulpiride) selectively impairs responses to agonistic encounters
- Sulpiride administration also selectively disrupts FaBER for anger: subjects were worse at recognising angry faces, with no such impairment for other emotions

28 Generate and test: a deficit in the production of an emotion (or its facsimile) leads to an impairment in the recognition of that emotion.
The cycle: generate a hypothesised emotion; produce a facial expression; test the expression (does it match the expression of the other?); if no, generate again; if yes, classify your own emotional state and attribute this to the other.
Open questions: how is the generation done (randomly? too slow; by theory? how?), and is the V to P matching learnt or innate?
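The generate-and-test cycle can be sketched as a loop. This is a toy illustration of the control flow only; the candidate list and the stand-in matching function are hypothetical, and no claim is made about how the brain actually searches or matches.

```python
# Toy sketch of the generate-and-test simulation cycle.
# The candidate emotions and produce_expression() are hypothetical
# stand-ins; the point is only the control flow:
# generate -> produce -> test -> classify.

CANDIDATES = ["happiness", "sadness", "anger", "fear", "disgust"]

def produce_expression(emotion):
    # Stand-in for producing a (covert) facial facsimile of the emotion.
    return f"face:{emotion}"

def generate_and_test(observed_expression):
    for emotion in CANDIDATES:                    # generate a hypothesised emotion
        facsimile = produce_expression(emotion)   # produce a facial expression
        if facsimile == observed_expression:      # test: does it match the other?
            return emotion                        # classify own state, attribute it
    return None                                   # no hypothesis matched

print(generate_and_test("face:fear"))  # -> fear
```

A sequential loop like this makes the "too slow" objection concrete: an unguided search has to produce and test each candidate in turn before it can settle on a label.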

29 Reverse simulation: activation of the facial musculature precedes emotion. This engages the cognitive processes in reverse, since emotional state and facial expression are bidirectional (sensation of emotion --> facial expression; facial expression --> mild sensation of emotion).
The chain: visual representation of the other's facial expression --> activation of facial muscles that imitate the other's facial expression --> experience of emotion --> classify own state and attribute that to the other.
Such imitation is innate, and adults covertly mimic FaBER stimuli, measurable by EMG (Dimberg et al.)

30 "As if" reverse simulation: a direct link from visual input of the other's face to a somatosensory representation of what it would feel like to pull that face. This bypasses muscle activation, so it gets round the Möbius problem, but offers no details of how the link might work.

31 Unmediated resonance model (or shared manifold hypothesis): observation of the other's facial expression leads to automatic activation (mirroring) of the neural systems associated with that facial emotion, then to shared emotion, then to labelling of the emotion. This is direct activation: it requires no mediating structures or processes.

32 Contagious emotions (Wild et al., 2003): you are slower to make an incongruent facial movement than a congruent one

33 Theory theory of emotion recognition: can you lesion this model?
- visually obtained knowledge of the facial configuration of the target
- semantic knowledge concerning facial configurations
- general knowledge concerning a given emotion, i.e. its typical elicitors or behavioural effects
- knowledge that facial configuration C is paired with emotion label E

34 Summary
- Research into understanding emotions has revealed that deficits in face-based recognition are paired with deficits in the production of the same emotion
- Of the theory and simulation approaches, simulation theory seems to offer the best explanation of the data
- The precise mechanisms by which simulation might work are still unclear, but reverse simulation models (with the "as if" loop) and the more recent unmediated resonance model can both account for the neuroscientific data
- But some people still don't believe it

35 Current research (July 2007): Audiovisual integration of emotional signals in voice and face: an event-related fMRI study
Rationale: there has been plenty of research into visual emotion recognition and some into auditory emotion recognition, but (almost) no-one has studied the audiovisual integration of (dynamic) emotional stimuli.
Methods: behavioural and event-related fMRI task. Participants viewed and heard faces and words, either A only, V only or AV; neutral + 6 basic emotions (surprise!). The analysis looked for areas of AV overlap that were not A or V: (AV > A) and (AV > V).
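The conjunction (AV > A) and (AV > V) can be sketched as the intersection of two comparisons. This is a minimal toy illustration of the logic, not the study's analysis; the voxel values are invented for the example.

```python
# Toy sketch of a conjunction analysis: keep only 'voxels' where the
# audiovisual (AV) condition beats BOTH the auditory-only (A) and
# visual-only (V) conditions. All numbers are invented for illustration.

def conjunction(av, a, v):
    """Indices where AV exceeds both unimodal conditions."""
    return [i for i in range(len(av)) if av[i] > a[i] and av[i] > v[i]]

av = [2.0, 1.0, 1.8, 0.9]
a  = [1.0, 1.2, 1.0, 1.0]
v  = [1.5, 0.8, 1.9, 0.7]

print(conjunction(av, a, v))  # -> [0]: only voxel 0 beats both A and V
```

Requiring both comparisons at once is what isolates integration regions: a voxel driven purely by sound would pass AV > V but fail AV > A, and vice versa for a purely visual voxel.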

36 Example stimuli: Happy, Sad, Fear, Disgust, Anger, ?

37 Results
Behavioural results: people were better when both A and V information was available. Imaging results: bilateral posterior superior temporal gyrus activation and right thalamus (not shown). This is what they predicted (honest).

38 Current research (September 2006): Amygdala damage impairs emotion recognition from music
Rationale: we know the amygdala is implicated in the recognition of fear (from faces), and patients following temporal lobe removal (including the amygdala) are impaired at recognising scary music. Is the amygdala specifically necessary for scary music recognition?
Methods: neuropsychology (patient SM); music discrimination and emotion recognition/rating tasks.
Results: SM was OK at discrimination but poor at recognising negative emotions in music (sad and scary).

39 Current research (September 2006), review article: Beyond the right hemisphere: brain mechanisms mediating vocal emotional processing. TICS.


42 Current research (2008): Behold the voice of wrath: cross-modal modulation of visual attention by anger prosody (Brosch et al.)
Rationale: we know the amygdala directs our attention to socially relevant visual stimuli, and we also know the amygdala responds to anger prosody. Does anger prosody direct our visual attention?
Methods: dichotic listening with a cueing paradigm.
Results: yes it does; visual targets were detected faster when on the same side as the anger prosody.

43 Current research (2007): Emotional prosodic processing in auditory hallucinations
Rationale: patients with schizophrenia are impaired at prosody recognition, and prosodic cues are important for speaker identity. Could a prosodic deficit be responsible for the misattribution of voices in auditory hallucinations?
Methods: rate emotional (but semantically neutral) spoken sentences from sad to happy on a Likert scale. Groups: normal controls, schizophrenia with hallucinations, schizophrenia without.
Results: only hallucinating patients were impaired compared to controls.

44 Next lecture: revision/FAQ lecture on the 3rd of November.
What to do now: don't panic (I'll tell you when). Read some articles and start planning and writing up an experiment. Submit revision questions using the feedback page before the revision/feedback lecture on the 3rd of November.

