Audio-based Emotion Recognition for Advanced Information Retrieval in Judicial Domain. ICT4JUSTICE 2008, Thessaloniki, October 24. G. Arosio, E. Fersini, E. Messina, F. Archetti

Presentation transcript:

Audio-based Emotion Recognition for Advanced Information Retrieval in Judicial Domain. ICT4JUSTICE 2008, Thessaloniki, October 24. G. Arosio, E. Fersini, E. Messina, F. Archetti. Dipartimento di Informatica, Sistemistica e Comunicazione, Università degli Studi di Milano-Bicocca

Emotion Recognition
 Affective Computing: learning the emotional state of a human being
 Learning from:
   Vocal signals
   Facial expressions
   Biometric signals
   Multimodal sources
 Applications:
   Games (personal robots)
   Call centers
   Automotive
 JUMAS: Emotion Recognition in the Judicial Domain for Semantic Retrieval

JUMAS Project
 Current scenario (audio and video streams, analogical/digital acquisition):
   Automatic recording
   Manual transcription
   Manual information extraction
   Manual retrieval
 Future scenario (audio and video documents, digital acquisition):
   Automatic audio transcription
   Automatic audio and video annotation (including emotion annotation)
   Automatic information extraction
   Automatic semantic retrieval

Emotion Recognition
 Output: XML searchable tags (the slide shows an audio stream annotated with segments such as Neutral, Fear, Neutral)
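The searchable XML output described on this slide could be produced with the standard library; the sketch below uses illustrative element and attribute names, which are assumptions rather than the actual JUMAS schema:

```python
import xml.etree.ElementTree as ET

def annotate(segments):
    """Build a searchable XML annotation from (start, end, emotion)
    segments. Tag and attribute names are illustrative assumptions,
    not the actual JUMAS annotation schema."""
    root = ET.Element("audio_annotation")
    for start, end, emotion in segments:
        seg = ET.SubElement(root, "segment",
                            start=f"{start:.2f}", end=f"{end:.2f}")
        ET.SubElement(seg, "emotion").text = emotion
    return ET.tostring(root, encoding="unicode")

# Two recognized segments, as in the slide's example
xml_doc = annotate([(0.0, 4.2, "Neutral"), (4.2, 7.8, "Fear")])
```

Downstream, a retrieval system can then index these tags alongside the transcription and filter query results by emotional state.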

Emotion Recognition
 Challenges:
   Which features can describe and discriminate between different emotional states?
   Which kind of environment influences emotional state recognition?
   Which learning models produce the best performance?
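On the first challenge, emotion recognition from speech typically relies on global prosodic statistics such as energy and pitch-related measures. A simplified, self-contained sketch of two such features (short-time energy statistics and zero-crossing rate) is shown below; this is not the feature set actually used in the paper:

```python
import math

def prosodic_features(samples, sr, frame_len=400):
    """Global prosodic statistics of the kind commonly used for
    speech emotion recognition (a simplified sketch, not the
    paper's actual feature set)."""
    # Short-time energy per frame
    energies = []
    for i in range(0, len(samples) - frame_len, frame_len):
        frame = samples[i:i + frame_len]
        energies.append(sum(s * s for s in frame) / frame_len)
    mean_e = sum(energies) / len(energies)
    var_e = sum((e - mean_e) ** 2 for e in energies) / len(energies)
    # Zero-crossing rate: a rough proxy for spectral content / voicing
    zc = sum(1 for a, b in zip(samples, samples[1:]) if a * b < 0)
    return {"energy_mean": mean_e,
            "energy_std": math.sqrt(var_e),
            "zcr": zc / (len(samples) / sr)}

# Toy signal: one second of a 100 Hz sine sampled at 8 kHz
sig = [math.sin(2 * math.pi * 100 * t / 8000) for t in range(8000)]
feats = prosodic_features(sig, 8000)
```

Real systems add pitch (F0) contour statistics, speaking rate, and spectral features such as MFCCs on top of these basics.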

Step 1: Vocal Signature Acquisition (emotion recognition from vocal signatures)
 Italian DB: 391 samples
   Sentences from movies
   5 emotional states: Anger, Happiness, Sadness, Neutral, Fear
 German DB: 531 samples
   Acted sentences (emotion on request)
   7 emotional states: Anger, Fear, Happiness, Sadness, Neutral, Disgust, Boredom

Preliminary Experimental Results: Flat Models
 Learning models are biased by:
   Language
   Gender
   The neutral emotional state

Multi-Layer Support Vector Machines
 Hierarchical classification with multi-layer Support Vector Machines
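The hierarchical idea can be sketched as a two-stage decision: a first classifier separates neutral from emotional speech, and a second discriminates among the emotional classes. In the paper each layer is an SVM; the sketch below uses arbitrary callables as stand-ins, and this particular decomposition is an illustrative assumption rather than the exact hierarchy from the slides:

```python
def hierarchical_classify(features, layer1, layer2):
    """Two-stage (multi-layer) classification: layer1 decides
    neutral vs. emotional, layer2 discriminates among the emotional
    classes. Each layer would be an SVM in the paper; here they are
    arbitrary callables, and this decomposition is an assumption."""
    if layer1(features) == "Neutral":
        return "Neutral"
    return layer2(features)

# Toy stand-in classifiers: threshold rules over a single
# hypothetical "arousal" feature
layer1 = lambda f: "Neutral" if f["arousal"] < 0.2 else "Emotional"
layer2 = lambda f: "Anger" if f["arousal"] > 0.7 else "Sadness"

pred = hierarchical_classify({"arousal": 0.9}, layer1, layer2)
```

Structuring the decision this way lets the first layer absorb the dominant neutral class, which is one plausible reason a hierarchy outperforms a single flat multi-class model.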

Experimental Results (chart compares flat vs. multi-layer models on the German DB and the Italian DB)
 Conclusions:
   Multi-layer SVMs outperform traditional learning techniques
 Future Work:
   Dynamic techniques
   Integration with the semantic information retrieval system
   Cooperation with deception recognition