Slide 1 — Experience with emotion labelling in naturalistic data
L. Devillers, S. Abrilian, J.-C. Martin (LIMSI-CNRS, Orsay)
E. Cowie, C. Cox, I. Sneddon (QUB, Belfast)

Slide 2 — QUB - LIMSI
QUB and LIMSI are developing complementary approaches (coding schemes and tools) for annotating naturalistic emotional behaviour in English and French TV videos. This cooperation will make it possible:
- to study cross-cultural issues in naturalistic emotions
- to compare and eventually combine discrete/continuous coding schemes
QUB and LIMSI have already exchanged some data and begun to annotate them.

Slide 3 — Outline
1. Challenges in annotating naturalistic emotion
2. Experiments in emotion labelling on audio and audio-visual data: call centres, movies, TV
3. Experiment in emotion labelling on EmoTV1
4. On-going work: new emotions and metadata coding scheme
5. Illustrative examples (ANVIL + Feeltrace)
   1. EmoTV1
   2. Belfast Naturalistic database

Slide 4 — 1. Challenges in annotating naturalistic emotion
Goals: detection of "real-life" emotion; simulation of "real-life" emotion with ECAs
- Which emotions are modelled?
- Which context is annotated?
- Which representation?
Descriptors of emotional states:
- Verbal categories (Ekman, Plutchik)
- Abstract dimensions (Osgood, Cowie)
- Appraisal-based description (Scherer), OCC interaction model (Ortony)

Slide 5 — Categories and dimensions: redundancy/complementarity
Verbal categories:
- Applied to segments: speaker turns, sub-units
- Chosen from a finite, task-dependent label list: finance, emergency, TV interviews
- Limited in number to remain tractable
Dimensions: continuous (Feeltrace; Cowie, Schröder) or scales (Craggs; Devillers & Vidrascu; Abrilian et al.):
- Segments: sequence, sub-units of the sequence
- 3 dimensions: Activation (Intensity) / Valence / Control
Dimensions map onto categories (coarse classes) but do not allow one to distinguish, for example, fear from anger (see the sketch below).
Goal: study the redundancy and complementarity of verbal categories and dimensions.
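A minimal sketch of that limitation, assuming Feeltrace-style axes running over [-1, +1]; the class names and thresholds are illustrative, not part of any of the coding schemes cited above:

```python
def coarse_class(valence: float, activation: float) -> str:
    """Map a (valence, activation) point to a coarse quadrant class.

    Both axes are assumed to run from -1.0 (negative/passive)
    to +1.0 (positive/active), as in a Feeltrace-style space.
    """
    if activation >= 0.0:
        return "positive-active" if valence >= 0.0 else "negative-active"
    return "positive-passive" if valence >= 0.0 else "negative-passive"

# Fear and anger both sit in the negative-valence, high-activation
# quadrant, so the dimensional description alone cannot separate them.
print(coarse_class(valence=-0.7, activation=0.8))  # anger? -> negative-active
print(coarse_class(valence=-0.6, activation=0.7))  # fear?  -> negative-active
```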

Slide 6 — Context annotation for naturalistic emotion
- Contextual information has to be taken into account for naturalistic emotion, at multiple levels.
- Different contextual information is needed for different types of applications and modalities.
- A tentative proposal of relevant contextual cues is in progress for the EmoTV corpus. This scheme can be refined through work with different databases.

Slide 7 — In practice
Iterative annotation protocol:
- Definition
  - emotion labels / abstract dimensions: task-dependent labels, universal dimensions
  - segmental units: overall sequence, utterance, sub-units, words, etc.
- Annotation
  - one label, or combined labels, with abstract dimensions per segment
  - meta-annotation: context situation, appraisal-related descriptors
- Validation
  - inter-annotator agreement
  - perceptual tests

Slide 8 — 2. Naturalistic data: audio and audio-visual
Audio: call centres
- Pros: natural human-human interaction. Cons: social aspects, phone channel.
- Task-dependent emotion: financial matters, emergency, etc.
Audio-visual: TV, movies
- TV: more or less natural depending on the type of broadcast (games, reality shows, news, interviews, etc.), live/non-live, recording context, etc.
  - EmoTV1: interviews, highly variable emotional behaviour
- Realistic fiction: less naturalistic, but emotions (such as fear or distress) in abnormal, dangerous situations are impossible to collect in real life.
Goals:
- Call centres and movies: detection of emotion (audio-based)
- EmoTV: provocative corpus for studying ECA specification

Slide 9 — Task-dependent annotations (1)
FP5 Amities project, LIMSI/ENST collaboration
Call centre 1: stock exchange customer service centre — fear of losing money!
- Labels: fear/apprehension, anger/irritation, satisfaction, excuse, neutral
- 2 annotators; 12% of speaker turns with emotion; kappa; dialogs, 5000 speaker turns
[Devillers, Vasilescu, LREC 2004; Speech Prosody 2004; ICPhS 2003]
Call centre 2: Capital Bank service centre — fear of missing money!
- Labels: fear/apprehension, anger/irritation, satisfaction, excuse, neutral
- 2 annotators on 1K speaker turns randomly extracted; 10% of speaker turns with emotion; kappa; dialogs, 5000 speaker turns extracted
[Devillers, Vasilescu, LREC 2004]

Slide 10 — Task-dependent annotations (2)
LIMSI collaboration with an emergency call centre
Call centre 3: emergency service centre — fear of being sick, real fear/panic, calls for help
- Wider range of emotional behaviour than in the financial call centres
- 18 classes obtained after label selection from the HUMAINE emotion list (R. Cowie)
- 5 persons (transcribers), labels retained by majority voting (see sketch below): anxiety, stress, fear, panic, annoyance, cold anger, hot anger, disappointment, sadness, despair, hurt, dismay, embarrassment, relief, interest, amusement, compassion, surprise + negative, positive and neutral
Annotation of 20h (on-going process with the Transcriber tool, refinement of the label list):
1. manual segmentation (sub-speaker-turn segments)
2. segment annotation with major/minor emotion and abstract dimensions (scale)
3. meta-annotation: contextual information (motive for the call, patient/kin relation, etc.) and audio information (quality, accent, pathological voice, etc.)
PhD student: Laurence Vidrascu (LIMSI)
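A minimal sketch of majority voting over transcriber labels, under the assumption that a label is kept when at least three of the five coders agree; the actual decision rule used at LIMSI is not specified on the slide:

```python
from collections import Counter

def majority_label(labels: list[str], threshold: int = 3) -> str:
    """Return the label chosen by at least `threshold` coders,
    or 'no-majority' when the coders are too split."""
    label, count = Counter(labels).most_common(1)[0]
    return label if count >= threshold else "no-majority"

# Five transcribers label the same segment:
print(majority_label(["anxiety", "stress", "anxiety", "anxiety", "fear"]))
# -> anxiety
print(majority_label(["fear", "panic", "stress", "anxiety", "hurt"]))
# -> no-majority
```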

Slide 11 — Task-dependent annotations (3)
ENST/LIMSI/THALES collaboration
Fiction: fear manifestations in realistic movies, for a video surveillance application — fear of aggression!
- Labels: fear vs. other negative emotions, other emotions, neutral
- Dimensions: Valence, Intensity, Control
- Video helps fear annotation by providing context (e.g. aggressor, victim)
Poster by I. Vasilescu [Clavel, Vasilescu, Devillers, Ehrette, ICSLP 2004]
PhD student: Chloé Clavel

Slide 12 — Task-dependent annotations (4): EmoTV (FP6 HUMAINE)
EmoTV1 — large number of topics; TV interviews from the news
51 clips, various contexts, 14 emotion labels, multimodal annotation
[Ref: S. Abrilian, L. Devillers, J.-C. Martin, S. Buisine, WP5 Summer School]

Slide 13 — 3. Experience of labelling on EmoTV1
Goal: study the influence of the modalities on the perception of emotions
Two independent annotators: master's students in psychology — coder 1 (male), coder 2 (female)
Annotations with the Anvil tool (Kipp 2001) under 3 conditions:
- audio without video
- video without audio
- audio with video

Slide 14 — Segmentation and annotation protocol for the 3 conditions
Instructions: detect emotional events
Segmentation (free), followed by an agreed segmentation
Annotation scheme combining:
- labels (free choice)
- two abstract dimensions:
  - Intensity (from very low to very high)
  - Valence (from negative to positive)
- context: theme, what-for, etc. (for the audio+video condition)
[Ref: Buisine, Abrilian, Devillers, Martin, WP3 poster]

Slide 15 — Step 1: audio-only and video-only conditions
1. Segmenting the extracts: 2 independent coders; separate segmentation of the audio and video extracts
2. Unifying the segments: intersection for the video corpus, union for the audio corpus
3. Labeling the segments
Analyses: inter-coder reliability for categories of labels (Cohen's kappa) on the audio and video corpora

Slide 16 — Step 2: audio+video condition
1. Segmenting the extracts: 2 independent coders; separate segmentation of the audio-video extracts, then unifying the segments
2. Labeling the segments
3. Analyses: inter-coder reliability for categories of labels (Cohen's kappa) on the audio-video corpus

Slide 17 — Analysis: speech vs. audio-visual differences in segmentation
Segmentation (free, two annotators):
Audio-only and video-only:
- Twice as many segments in the video condition as in the audio condition, for both annotators
- Automatic decision to obtain a common set of segments (semantically motivated):
  - intersection for the video condition: 295 segments
  - union for the audio condition: 181 segments
Audio+video:
- The coders agreed on a common set of 281 emotional segments
- Using the audio-only segmentation for audio+video is not straightforward: audio+video segments are included in the audio-only segments
A sketch of the intersection/union step follows.
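A minimal sketch of the unification step, assuming segments are plain (start, end) time intervals in seconds; the actual tooling and its tie-breaking rules are not described on the slide:

```python
def intersect(segs_a, segs_b):
    """Keep only the spans marked as emotional by BOTH coders."""
    out = []
    for a_start, a_end in segs_a:
        for b_start, b_end in segs_b:
            start, end = max(a_start, b_start), min(a_end, b_end)
            if start < end:
                out.append((start, end))
    return sorted(out)

def union(segs_a, segs_b):
    """Merge the spans marked as emotional by EITHER coder."""
    merged = []
    for start, end in sorted(segs_a + segs_b):
        if merged and start <= merged[-1][1]:
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged

coder1 = [(0.0, 2.5), (4.0, 6.0)]
coder2 = [(1.0, 3.0), (5.5, 7.0)]
print(intersect(coder1, coder2))  # [(1.0, 2.5), (5.5, 6.0)]
print(union(coder1, coder2))      # [(0.0, 3.0), (4.0, 7.0)]
```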

Slide 18 — Emotional labels
From the three annotation experiments: a list of 176 different labels after normalization, classified into a set of 14 labels:
anger, despair, disgust, doubt, exaltation, fear, irritation, joy, neutral, pain, sadness, serenity, surprise and worry.
A sketch of such a label-normalization step follows.
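A minimal sketch of mapping free-choice labels onto the 14 agreed classes; the mapping entries below are hypothetical examples, not the actual table built from the 176 normalized labels:

```python
# Hypothetical excerpt of a free-label -> 14-class mapping table.
LABEL_MAP = {
    "fury": "anger",
    "rage": "anger",
    "annoyance": "irritation",
    "happiness": "joy",
    "delight": "joy",
    "anxiety": "worry",
    "apprehension": "worry",
    "grief": "sadness",
}

def normalize(free_label: str) -> str:
    """Lower-case and strip a free-choice label, then map it to one
    of the 14 classes; unmapped labels fall back to 'neutral'."""
    return LABEL_MAP.get(free_label.strip().lower(), "neutral")

print(normalize("  Fury "))   # -> anger
print(normalize("delight"))   # -> joy
```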

Slide 19 — Analysis: speech vs. audio-visual differences in annotation
Inter-coder agreement, kappa values (on segments):
- Emotional labels (14 values): audio+video 0.37, video 0.43, audio 0.54
- 2 abstract dimensions:
  - Intensity: low inter-coder agreement except for audio; video and audio+video very low, audio 0.69
  - Valence (Neg/?/Pos): high agreement only for audio; audio+video 0.3, video 0.4, audio 0.75
- The low kappa for valence reflects positive/negative confusion: audio+video 11%, video 7%, audio 3%
Real-life emotions are blended and ambiguous, hence difficult to annotate.
A sketch of the kappa computation follows.
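For reference, a minimal sketch of Cohen's kappa for two coders labelling the same segments; the toy labels below are illustrative, not the EmoTV data:

```python
from collections import Counter

def cohens_kappa(coder1: list[str], coder2: list[str]) -> float:
    """Cohen's kappa for two coders over the same segments:
    observed agreement corrected for chance agreement."""
    n = len(coder1)
    p_observed = sum(a == b for a, b in zip(coder1, coder2)) / n
    freq1, freq2 = Counter(coder1), Counter(coder2)
    p_chance = sum(freq1[label] * freq2[label] for label in freq1) / n**2
    return (p_observed - p_chance) / (1 - p_chance)

c1 = ["anger", "anger", "joy", "sadness", "anger", "joy"]
c2 = ["anger", "irritation", "joy", "sadness", "anger", "sadness"]
print(round(cohens_kappa(c1, c2), 2))  # 0.67 observed agreement -> kappa 0.54
```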

Slide 20 — Emotion annotation agreement for the 3 conditions (1)

Slide 21 — Emotion annotation agreement for the 3 conditions (2)
- Anger, irritation, joy, exaltation, sadness: high agreement in the video, audio and audio+video conditions
- Surprise, worry: agreement in the video condition (visual cues)
- Doubt: agreement in the audio or video conditions, not in audio+video
- Pain: agreement in the audio and audio+video conditions (acoustic cues)
- Serenity: agreement only in the audio+video condition (much more subtle)
- Neutral: 1% agreement in the video condition

Slide 22 — Valence and emotion, audio+video condition
(figure)

Slide 23 — Clip 29: Joy/Disgust — which valence?

Slide 24 — Emotion perception, high subjectivity: examples
Different perceptions between coder 1 (male) and coder 2 (female):
- Within the same valence class, e.g. clip 3, audio and video conditions: anger/sadness, anger/despair → blended emotion
- Between negative/positive classes, e.g. a woman crying for joy (relief), clip 4:
  - audio condition: sadness/sadness
  - video condition: sadness/don't know
  - audio+video condition: joy/sadness
  → cause-effect conflicts

Slide 25 — Clip 4: Joy (relief)/Sadness — which valence?

Slide 26 — Assessment of annotations: next steps
Inter-annotator agreement:
- Kappa is low (14 classes) → ambiguously annotated, but also rich, data
- Study the disagreements in order to:
  - define the different types of complex or blended emotions: low-intensity, cause-effect, masked, sequential (transition), ambiguous, etc.
  - define hierarchical levels of annotation
Perceptual tests: multilingual, cross-cultural perceptual tests
- for validating annotation labels and types of emotions
- for studying the emotional perceptual abilities of coders: personality, sensitivity to different emotional cues in audio, face, gesture, etc.
Collaboration WP3-WP5: UniGe, QUB, LIMSI

Slide 27 — Emotion categories: fine to coarse grain
Hierarchical levels of annotation: fine- to coarse-grained labels
(diagram: fine labels such as surprise, shame, embarrassment, doubt, pride and pain grouped into coarser classes, e.g. shame/embarrassment vs. neutral/other)

Slide 28 — 4. New annotation scheme (on-going)
Annotation of the global sequence, and of the emotional segments within it, with:
- non-basic emotion patterns: blended, masked, sequential, etc.
- two emotion labels (major/minor)
- Activation, Intensity, Control, Valence (scale 1-5)
- a discrete temporal pattern describing the temporal evolution inside segments
Contextual annotations include derived appraisal-based descriptions: the event that causes the emotion.
Global multimodal descriptors: audio, face (eyes, gaze), torso, gesture (free-text fields).
Emotions and metadata coding scheme: annotation guide → WP5 exemplar.
A sketch of a segment record under this scheme follows.
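A minimal sketch of what one segment record could look like under this scheme; the field names and defaults are illustrative assumptions, not the actual ANVIL specification:

```python
from dataclasses import dataclass, field

@dataclass
class EmotionSegment:
    """One emotional segment under the new scheme (illustrative)."""
    major_label: str                    # e.g. "anger"
    minor_label: str | None = None      # e.g. "despair" in blended cases
    pattern: str = "simple"             # blended, masked, sequential, ...
    intensity: int = 3                  # all four dimensions on a 1-5 scale
    activation: int = 3
    control: int = 3
    valence: int = 3
    temporal_pattern: str = "stable"    # discrete intra-segment evolution
    multimodal_cues: dict[str, str] = field(default_factory=dict)

seg = EmotionSegment(
    "anger", minor_label="despair", pattern="blended",
    intensity=5, activation=4, control=1, valence=1,
    multimodal_cues={"gaze": "averted", "gesture": "clenched fist"},
)
```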

Slide 29 — Intra-segment temporal evolution of emotion
- Abstract dimensions are much more suitable than categorical descriptions for capturing gradual and mixed emotional behaviour.
- In the ANVIL scheme, temporal evolution is given by the sequence of emotional segments (some of which are transitions), but intra-segment dynamics are lacking.
On-going study: temporal evolution + categorical labels:
- Feeltrace continuous dimension annotation (LIMSI/QUB)
- discrete temporal patterns describing intra-segment evolution (LIMSI/Paris 8)

Slide 30 — Context annotation for naturalistic emotion (on-going)
A tentative proposal of relevant contextual annotations:
Emotion context (some derived from appraisal-related descriptors):
- Cause of emotion: free text
- Time of emotion: immediate, near past, past, future
- Relation of person to emotion: societal subject, true story told by self, by kin
- Degree of implication: low, normal, high
Overall communicative goal:
- What for: to claim, to share a feeling, etc.
- To whom: public, kin, society, etc.
Scene descriptors: theme, type of interaction
Character descriptors: age, gender, race
Recording: quality, camera/person position, channel and time

Slide 31 — 5. Example: EmoTV clip 3

Slide 32 — Global sequence annotation

Slide 33 — Illustration of segmentation problems
Segmentation/annotation of clip 3 (Summer School, Belfast); each row lists one coder's successive segment labels, with segment boundaries differing across coders:
- Coder 1: anger, anger, anger, anger, despair, despair, despair, sadness
- Coder 2: anger, despair, sadness
- Coder 3: ?, despair, irritation, anger, anger
- Coder 4: anger, anger, anger, disappointment
- Final:
On-going study to find adequate rules for segmenting an audio-video sequence into emotional units.

Slide 34 — Emotional annotations per segment by several coders
French coders, new scheme: major/minor labels, (I, A, C, V) on a 1-5 scale.
(Flattened table: three coders' per-segment major/minor labels — anger, worry, despair, sadness, disgust — with (I, A, C, V) values such as (4,4,3,2), (4,4,2,1), (5,5,2,1), "blended" flags, and weighted label combinations such as worry 0.66 / anger 0.34; anger 0.5 / sadness 0.34 / disgust 0.16; despair 0.5 / anger 0.5.)
Instead of an a priori choice, a weighted vector of categories could be kept (see the sketch below).
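A minimal sketch of how such a weighted vector could be built, assuming each coder's total weight of 1 is split evenly over the major/minor labels they chose and then averaged over coders. This reproduces 1/3-step weights like 0.34, 0.5 and 0.66 for three coders, but it is a guess at the actual combination rule, which the slide does not spell out:

```python
from collections import defaultdict

def soft_vector(annotations: list[list[str]]) -> dict[str, float]:
    """Combine per-coder label lists into a weighted category vector.

    Each coder contributes a total weight of 1, split evenly over the
    labels they chose (major/minor); weights are averaged over coders,
    so the resulting vector sums to 1."""
    weights: dict[str, float] = defaultdict(float)
    for labels in annotations:
        for label in labels:
            weights[label] += 1 / len(labels) / len(annotations)
    return dict(weights)

# Three coders, each giving major (and optionally minor) labels:
print(soft_vector([["anger"], ["worry", "anger"], ["worry"]]))
# -> {'anger': 0.5, 'worry': 0.5}
```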

Slide 35 — Feeltrace annotations combined with ANVIL labels
(Feeltrace plot for two French LIMSI coders, with the ANVIL labels worry, anger and despair overlaid.)

Slide 36 — Clip 3 annotated by the QUB team with Feeltrace
(Feeltrace plots for coders Cate, Ellen and Ian.)
High similarity between the Feeltrace annotations (QUB and LIMSI) for this clip.

Slide 37 — Clip 3: global sequence annotations from LIMSI/QUB
QUB (label + intensity):
- Cate: Angry Strong, Sad Medium, Hurt Medium
- Ellen: Angry Strong, Hurt Strong, Despair Strong
- Ian: Angry Medium, Resentful Medium, Despair Weak
LIMSI (label + (I, A, C, V)):
- Sarkis: Anger (5, 5, 2, 1)
- Jean-Claude: Anger (5, 4, 1, 1)
- Laurence: Anger (5, 4, 2, 1)
High similarity between the global label annotations (QUB and LIMSI) for this clip.

Slide 38 — Belfast Naturalistic Corpus, QUB/LIMSI collaboration
Example: clip 61b A+

Slide 39 — Weighted vectors combining emotion annotations from coders
French coders (clip 61); major/minor labels, (I, A, C, V) scale; 3 coders.
Instead of an a priori choice, a weighted vector of categories is kept per segment.
(Flattened table: per-segment weighted vectors over the labels joy, pleased, pride, exaltation, serenity, doubt and worry — e.g. joy 0.66 / pleased 0.34; exaltation 0.5 / joy 0.5; joy 0.75 / pride 0.25 — with (I, A, C, V) values such as (4,4,3,4), (4,4,4,5), (4,4,3,5), (3,3,4,4), (5,5,5,5), (4,4,5,4), (3,3,3,3).)

Slide 40 — Clip 61b: Feeltrace
(Feeltrace plots for LIMSI coders 1 and 2.)

Slide 41 — Clip 61b: global annotation by QUB and LIMSI
QUB coders (intensity: strong, medium, weak), coders 1-6 (flattened table):
- first labels: Happy, Affectionate, Affectionate, Affectionate, Happy, Confident
- second labels: Affectionate, Happy, Happy, Happy, Excited, Amused
- also: Agreeable
LIMSI coders, (I, A, V, C) scale:
- Coders 1-3: Joy ( ), Joy ( ), Joy ( )
High similarity between the global label annotations (QUB and LIMSI) for this clip.

Slide 42 — Conclusion / Perspectives
Conclusions:
- Annotation of 2 verbal labels per segment for naturalistic emotions
- Combination of emotion annotations from several coders ("soft categories")
- Combination of categorical and dimensional emotion representations (QUB/LIMSI)
On-going work:
- Temporal emotion evolution for ECAs (Univ. Paris 8/LIMSI/QUB)
- Validation of the new annotation scheme
- Re-annotation of EmoTV1 (other coders)
- Perceptual tests (UNIGE/QUB/LIMSI)
Perspectives:
- Correlation between multimodal and emotion annotations
- ECAs with "real-life" emotion (Univ. Paris 8/LIMSI)
- EmoTV2

Slide 43 — Next talk
Manual Annotation of Multimodal Behaviors in Emotional TV Interviews with ANVIL, by Jean-Claude Martin.
Thank you for your attention.