HUMAINE - Workshop on Databases - Belfast


The EmoTV Corpus
Naturalistic data. Emotion in dialogue/interaction: television databases from QUB and LIMSI.
Sarkis Abrilian, Laurence Devillers, Jean-Claude Martin, Ellen Douglas-Cowie

Theoretical Issues
- Study naturalistic, non-acted data
- How to annotate real-life, non-basic emotions? Define a typology of non-basic emotions
- How do emotion and multimodality correlate?
- Cross-cultural studies
- Specification of real-life emotions in ECAs: coordination between modalities, specification of mixed emotions
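One question raised above is how to record non-basic and mixed emotions at all. The sketch below is purely illustrative and is not the project's actual coding scheme: it assumes a segment-level record in which several emotion terms can co-occur, each with a hypothetical 0-1 intensity assigned by a coder.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class EmotionLabel:
    """One emotion term with a coder-assigned intensity (hypothetical 0-1 scale)."""
    term: str          # e.g. "anger", "despair"
    intensity: float

@dataclass
class Segment:
    """A time-stamped stretch of a clip annotated by one coder."""
    start_s: float
    end_s: float
    labels: List[EmotionLabel] = field(default_factory=list)

# A blended (non-basic) emotion is represented as several co-occurring labels.
seg = Segment(10.2, 14.8, [EmotionLabel("anger", 0.7), EmotionLabel("despair", 0.4)])
print(seg)
```

Representing blends as sets of weighted labels, rather than forcing a single category, is one way a coding scheme can accommodate the mixed emotions mentioned above.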

Collection: video selection criteria in EmoTV1
- TV monologue in interview context
- Realistic, non-acted situations
- Presence of emotional behavior
- Speaker's face and upper body visible (close-medium shot)
- Multimodal signs: speech, head, face, gaze, gesture, torso
- French language
- One person
- Ordinary people
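To make the screening step concrete, here is a minimal sketch of how such criteria could be applied to candidate clip metadata. The field names and the filter function are hypothetical and are not the tooling actually used for EmoTV1.

```python
from dataclasses import dataclass

@dataclass
class CandidateClip:
    """Metadata noted while screening a candidate TV excerpt (illustrative fields)."""
    is_interview_monologue: bool
    is_acted: bool
    shows_emotional_behavior: bool
    face_and_upper_body_visible: bool
    language: str              # ISO code, e.g. "fr"
    num_speakers_in_focus: int
    speaker_is_ordinary_person: bool

def meets_emotv1_criteria(clip: CandidateClip) -> bool:
    """True if the clip satisfies the selection criteria listed above."""
    return (
        clip.is_interview_monologue
        and not clip.is_acted
        and clip.shows_emotional_behavior
        and clip.face_and_upper_body_visible
        and clip.language == "fr"
        and clip.num_speakers_in_focus == 1
        and clip.speaker_is_ordinary_person
    )
```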

EmoTV-1 video clips
- 51 clips from French TV channels, mostly news interviews
- 48 different persons
- 24 different topics: politics, sport, law, religion…
- Duration: 12 min in total (4-43 seconds per clip)
- Words: 2,500 (800 different words)
- Wide range of positive/negative emotions
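Summary figures like these can be recomputed from per-clip metadata. The sketch below assumes a simple list of (duration in seconds, transcript) pairs, which is not the actual EmoTV1 data format.

```python
def corpus_stats(clips):
    """clips: list of (duration_seconds, transcript) pairs -- a hypothetical format."""
    durations = [d for d, _ in clips]
    words = [w for _, text in clips for w in text.lower().split()]
    return {
        "num_clips": len(clips),
        "total_minutes": round(sum(durations) / 60, 1),
        "min_clip_s": min(durations),
        "max_clip_s": max(durations),
        "num_words": len(words),
        "num_distinct_words": len(set(words)),
    }

# Toy example with two invented clips:
print(corpus_stats([(30.0, "je suis en colère"), (12.5, "c'est incroyable vraiment incroyable")]))
```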

Topic distribution
The selection criteria are:
- The speaker's face is in the frame (close enough to the camera for coders to analyse facial expressions);
- Focus on a single person;
- Presence of speech;
- The speaker speaks French;
- Good audio quality (no overlapping speech, etc.);
- The persons filmed are unknown (ordinary people);
- The situation is realistic;
- Presence of one or more emotional events, even subtle ones.

Advantages & Drawbacks Spontaneous Various contexts Reveals requirements on annotation schemes at several levels Drawbacks Visibility of facial expression: glasses / hairs / beard gestures Video and audio quality Les critères de sélection sont : La présence du visage du locuteur dans le champ (suffisamment proche de la caméra pour que les codeurs puissent analyser les expressions faciales) ; Le focus sur une seule personne ; La présence de paroles ; Le fait que le locuteur parle français ; La bonne qualité du signal sonore (pas de recouvrements, etc.) ; Le fait que les personnes soient inconnues ; Le réalisme de la situation ; La présence d’un ou plusieurs événements émotionnels, même subtils.

Future directions
- Copyrights: TF1 (70% of the corpus), ELDA
- EmoTV2: more news, interaction between 2 persons
- Annotations with new coding schemes
- Study the use for specification of ECAs
- Perceptual tests

Output for exemplar « Provocative » corpus
- Coding schemes: emotion, context, multimodal signs
- Samples of annotation files and video
- Protocol and annotation guides: collection, annotation, validation
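The slide mentions samples of annotation files combining emotion, context and multimodal signs. As an illustration only (the real EmoTV files, formats and tag names may differ), such a record could be laid out like this:

```python
import json

# Hypothetical layout for one clip's annotations; all identifiers and values are invented.
annotation = {
    "clip_id": "emotv1_clip_042",
    "context": {"topic": "law", "setting": "street interview"},
    "emotion_segments": [
        {"start_s": 3.1, "end_s": 7.4, "labels": ["anger"], "intensity": 0.8},
    ],
    "multimodal_signs": {
        "speech": [{"start_s": 3.1, "end_s": 7.4, "transcript": "..."}],
        "gesture": [{"start_s": 4.0, "end_s": 5.2, "type": "deictic"}],
        "gaze":    [{"start_s": 3.1, "end_s": 7.4, "direction": "towards interviewer"}],
    },
}

print(json.dumps(annotation, ensure_ascii=False, indent=2))
```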

Related talks
- Annotation of emotion and context
- Annotation of multimodal signs