Nataliya Nadtoka, James Edge, Philip Jackson, Adrian Hilton
CVSSP, Centre for Vision, Speech & Signal Processing, University of Surrey

Motivation  Non-verbal cues convey additional information  Existing visual speech from audio methods produce plausible animation of neutral speech, but fail to generate realistic expressive content  The factors that contribute to emotional speech are vastly understudied Aim  Learn the emotional characteristics  Model the emotional characteristics of speech

Overview

Dataset  4D sequence of geometry and texture (60 fps) and synchronized audio (44100Hz) recorded with 3dMD scanner  Emotions: Anger, Surprise, Fear, Happiness, Disgust, Sadness  All sentences are repeated in Neutral to facilitate cross- comparison  110 sentences with a strong expressive content  Phonetically balanced IR projector IR stereo cameras colour camera

Post-processing  Surface registration is done by using painted visual markers  Lip contour is tracked by using blue lipstick  Audio is used to phonetically annotate the data  Differences in duration are further used for emotion analysis

Durational differences
[Figure: phone-level durations over time (t, sec) for the sentence "Don't ask me to carry an oily rag like that", compared across Neutral, Anger, Disgust, Fear, Happiness, Sadness and Surprise]

Isolated region analysis
[Figure: isolated face-region analysis of the sentence "Don't ask me to carry an oily rag like that", shown for Neutral, Anger, Disgust, Fear, Happiness, Sadness and Surprise]
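A minimal sketch of what isolating a face region for per-region analysis could look like, assuming registered markers stored as a (frames, markers, 3) array; the marker count, the index range defining the upper face, and the simple motion measure are all invented for illustration.

```python
# Minimal sketch: isolate an upper-face region and compute a per-frame
# activity measure. Shapes and indices are hypothetical placeholders.
import numpy as np

markers = np.random.rand(300, 79, 3)   # (frames, markers, xyz), registered
UPPER_FACE = np.arange(0, 30)          # hypothetical indices: brows + forehead

upper = markers[:, UPPER_FACE, :]      # (frames, 30, 3)
# Per-frame motion: summed displacement magnitude between consecutive frames.
motion = np.linalg.norm(np.diff(upper, axis=0), axis=2).sum(axis=1)
```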

PCA-based analysis
[Figure: first principal component (PC 1, 55% of total variance) plotted over time (t, sec) for Neutral, Happiness and Surprise renditions of the sentence]
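A minimal sketch of the PCA step, assuming the registered upper-face coordinates of each frame are flattened into one vector; the array shapes are placeholders, and on random data PC 1 will not reach the roughly 55% of variance reported on the real corpus.

```python
# Minimal sketch: PCA over flattened per-frame coordinates; PC 1 projected
# back over time gives the 1-D trajectory plotted per emotion.
import numpy as np
from sklearn.decomposition import PCA

frames = np.random.rand(300, 30 * 3)    # (frames, flattened upper-face coords)

pca = PCA(n_components=5)
scores = pca.fit_transform(frames)       # (frames, 5)
pc1_over_time = scores[:, 0]             # trajectory compared across emotions
print(pca.explained_variance_ratio_[0])  # fraction of variance in PC 1
```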

Emotion Transfer
[Diagram: Sentence A is recorded in Neutral and in an emotion; their phonetic transcriptions and emphasis are aligned by DTW, and the difference Δ = Emotion − Neutral forms a model of the emotion. Given the audio and phonetic transcription of Sentence B together with a neutral animation of it, the model produces an animation of Sentence B in that emotion.]
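A minimal sketch of the align-and-difference idea behind this pipeline, assuming per-frame feature vectors for each rendition; the plain DTW, the random arrays, and the crude truncation of Δ to Sentence B's length are illustrative simplifications, not the actual system.

```python
# Minimal sketch: DTW-align neutral and emotional renditions of Sentence A,
# take the per-frame difference, and add it to a neutral animation of
# Sentence B. A real system would warp the difference to B's timing.
import numpy as np

def dtw_path(a, b):
    """Plain O(n*m) DTW on per-frame feature vectors; returns aligned index pairs."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        i, j = (i - 1, j - 1) if step == 0 else (i - 1, j) if step == 1 else (i, j - 1)
    return path[::-1]

neutral_A = np.random.rand(120, 90)   # Sentence A, neutral animation frames
emotion_A = np.random.rand(140, 90)   # Sentence A spoken with emotion
neutral_B = np.random.rand(110, 90)   # Sentence B, neutral animation

# Delta = Emotion - Neutral, computed on DTW-aligned frames of Sentence A.
path = dtw_path(neutral_A, emotion_A)
delta = np.array([emotion_A[j] - neutral_A[i] for i, j in path])

emotional_B = neutral_B + delta[: len(neutral_B)]  # crude timing: truncate delta
```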

Conclusions  This work presents an isolated upper face region analysis for selected sentences  Promising relation between the principal component features and emotion  Observed dynamics reflects non-constant nature of emotion within a sentence  Future work will focus on expressive features with respect to emotion transfer