Affective Interfaces: Present and Future Challenges. Introductory statement by Antonio Camurri (Univ. of Genoa) and Marc Leman (Univ. of Ghent). MEGA IST-20410 Multisensory Expressive Gesture Applications.

Presentation transcript:

Affective Interfaces: Present and Future Challenges
Introductory statement by Antonio Camurri (Univ. of Genoa) and Marc Leman (Univ. of Ghent)
MEGA IST-20410: Multisensory Expressive Gesture Applications

Introductory Statement (1/3)
KANSEI, Affect, Emotion, Personality, Expressiveness, Gesture, ...
Psychology, AI, CHI, Art and Humanities
Interfaces:
– Enhance interfaces by means of non-verbal expressive data modeling and communication
– Embed models of emotion and personality in natural language dialog (e.g., web agents, ...)
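A minimal, purely illustrative sketch of the second bullet, showing one way a simple emotion/personality state could shape a web agent's reply. The EmotionState fields, the thresholds, and the style_reply heuristic are assumptions made for illustration, not a model from the MEGA project.

```python
from dataclasses import dataclass

@dataclass
class EmotionState:
    """A toy affective state for a dialogue agent (illustrative assumption)."""
    valence: float   # -1.0 (negative) .. +1.0 (positive)
    arousal: float   #  0.0 (calm)     ..  1.0 (excited)

def style_reply(text: str, emotion: EmotionState) -> str:
    """Adjust the surface form of a reply from a valence/arousal state
    (a deliberately crude heuristic, for illustration only)."""
    if emotion.arousal > 0.7:          # high arousal: more emphatic delivery
        text = text + "!"
    if emotion.valence < -0.3:         # negative valence: add an empathic opener
        text = "I'm sorry to hear that. " + text
    elif emotion.valence > 0.3:        # positive valence: add an upbeat opener
        text = "Great! " + text
    return text

# Example: an upbeat, excited agent persona.
print(style_reply("Here is the information you asked for",
                  EmotionState(valence=0.5, arousal=0.8)))
```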

...Introductory Statement... (2/3)
(USA) Affective Computing: Bates/CMU, B. Hayes-Roth/Stanford, Picard/MIT, ...; the O'Rorke-Ortony model
– Games/entertainment, web agents with personality, wearables, mobile devices, ...
(Japan) KANSEI Information Processing: S. Hashimoto/Waseda, K. Mase/ATR, ...
– KANSEI industry (art, entertainment, fashion, cosmetics, Kansei-bots, ...); the HUMANOID Project (e.g., elderly assistance by "Kansei machines/interfaces")

...Introductory Statement (3/3)
An example: a spectator observes a "microdance":
– What are the lower-level "cues" in movement that may explain "expressiveness"? (e.g., rigid/fluid, direct/flexible, extrovert, energy, rhythm features, ...; see the sketch below)
– Does the spectator understand the dancer's intended emotional expression? (e.g., "sad", "angry")
– "KANSEI" or "sensible" content in the microdance: how intensely is the spectator "hit" by an inspired dance performance? (similar to the "arousal" problem in theories of emotion psychology)
Music, Sound and Voice
Expressive spaces as a new perspective on multimodality
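The sketch below (not part of the original slides) suggests how such lower-level movement cues might be computed from a tracked trajectory of body positions. The function name, the choice of cues (mean speed as a rough "energy" measure, mean jerk for rigid vs. fluid, path directness for direct vs. flexible), and the frame rate are illustrative assumptions, not the MEGA analysis modules.

```python
import numpy as np

def movement_cues(positions, fps=25.0):
    """Compute illustrative low-level expressive cues from a (T, 2) array of
    tracked body-centroid positions sampled at `fps` frames per second."""
    positions = np.asarray(positions, dtype=float)
    dt = 1.0 / fps
    velocity = np.gradient(positions, dt, axis=0)          # (T, 2)
    accel = np.gradient(velocity, dt, axis=0)
    jerk = np.gradient(accel, dt, axis=0)

    speed = np.linalg.norm(velocity, axis=1)                # per-frame speed
    path_length = np.sum(np.linalg.norm(np.diff(positions, axis=0), axis=1))
    net_displacement = np.linalg.norm(positions[-1] - positions[0])

    return {
        # rough "energy" cue: how much the body moves on average
        "quantity_of_motion": float(speed.mean()),
        # rigid vs. fluid cue: fluid movement has low average jerk
        "jerkiness": float(np.linalg.norm(jerk, axis=1).mean()),
        # direct vs. flexible cue: straight paths score close to 1
        "directness": float(net_displacement / (path_length + 1e-9)),
    }

# Example: a short, smooth diagonal trajectory sampled at 25 fps.
trajectory = np.cumsum(np.full((50, 2), 0.01), axis=0)
print(movement_cues(trajectory))
```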

MEGA IST-20410: Toward an Original European Route
MEGA (EU IST-20410) aims at consolidating a third original European route, grounded on:
– art and humanistic theories of non-verbal communication
– "expressive gestures": expressive cues, emotion, "sensible"/KANSEI processing
Focus: non-verbal expressive gesture communication in artistic scenarios, aiming to improve technology through the arts and humanities

The notion of "Gesture"
– a non-verbal epistemic unit: e.g., movements of the human body, particular occurrences in music
– defined by criteria related to human information processing, in particular in motor and perceptual actions
– has a typical beginning and ending
– describes trajectories in a feature space
– may occur at different levels: e.g., in music an individual note may be considered a gesture, with a given attack, sustain, and decay pattern; a sequence of notes, or a repeating rhythm pattern, may also be considered a gesture
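One hedged way to turn this notion into code (not a MEGA specification): represent a gesture as a bounded trajectory in a feature space, cut out of a continuous feature stream by a simple energy threshold. The Gesture class and the segmentation rule below are assumptions chosen for illustration.

```python
from dataclasses import dataclass
from typing import List, Sequence

@dataclass
class Gesture:
    """A gesture as a bounded trajectory in a feature space."""
    start: int                          # index of the first frame (beginning)
    end: int                            # index of the last frame (ending)
    trajectory: List[Sequence[float]]   # one feature vector per frame

def segment_gestures(features, energy, threshold=0.1):
    """Cut a continuous feature stream into gestures wherever the
    accompanying energy signal stays above a threshold (illustrative rule)."""
    gestures, start = [], None
    for i, e in enumerate(energy):
        if e >= threshold and start is None:
            start = i                                   # a gesture begins
        elif e < threshold and start is not None:
            gestures.append(Gesture(start, i - 1, list(features[start:i])))
            start = None                                # the gesture ends
    if start is not None:                               # stream ended mid-gesture
        gestures.append(Gesture(start, len(energy) - 1, list(features[start:])))
    return gestures

# Example: two bursts of activity separated by stillness.
energy = [0.0, 0.3, 0.5, 0.2, 0.0, 0.0, 0.4, 0.6, 0.0]
features = [[e] for e in energy]        # trivially, energy itself as the feature
for g in segment_gestures(features, energy):
    print(f"gesture from frame {g.start} to {g.end}, {len(g.trajectory)} frames")
```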

"Expressive Gesture"
– A gesture, in principle, may be neutral, hence operating as a self-contained unit, yet many gestures are called "expressive"
– Expressive gestures refer to a semantics outside the gesture pattern itself. This semantics may relate to synaesthetic (e.g., sound-colour), kinaesthetic (sound-movement, cf. Laban), and cenaesthetic issues (sound-sensation-emotion)
– The notion of gesture is thus a fundamental category of human action and perception, one that ultimately relates to affective and sensitive computing

"Gesture" in MEGA
– Definition of the notion of gesture in multimodal artistic and cultural scenarios
– Intermodality, i.e., the mapping of gestures from one domain to another
– Recognition of features that are relevant for expressive gestures, in particular also for affective and sensitive computing

MEGA: Research
Analysis of expressive gesture in:
– movement (incl. dance): WP3
– audio (incl. music): WP4
Synthesis of expressive gesture in:
– movement and visual media: WP5
– sound (incl. music): WP6
Multimodal integration
Mapping strategies from input modalities to output media (illustrated in the sketch below)
Open software platform (EyesWeb) to support research and to enable commercial exploitation
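As a hedged illustration of the mapping item above, the sketch below maps analysed movement cues onto sound-synthesis parameters. The cue names, the parameter names, and the linear scalings are assumptions chosen for clarity; the actual MEGA mapping strategies were developed per application and per artistic scenario.

```python
def clamp(x, lo=0.0, hi=1.0):
    """Keep a value inside [lo, hi]."""
    return max(lo, min(hi, x))

def map_cues_to_sound(cues):
    """Map movement cues (assumed normalised to [0, 1]) onto hypothetical
    sound-synthesis parameters (illustrative, linear mapping)."""
    energy = clamp(cues.get("quantity_of_motion", 0.0))
    return {
        "loudness": energy,                                   # energetic movement: louder
        "tempo_scale": 0.5 + 0.5 * energy,                    # ... and faster
        "legato": clamp(1.0 - cues.get("jerkiness", 0.0)),    # fluid movement: smoother articulation
        "brightness": clamp(cues.get("directness", 0.0)),     # direct movement: brighter timbre
    }

# Example: a fluid, moderately energetic, fairly direct movement.
print(map_cues_to_sound({"quantity_of_motion": 0.6,
                         "jerkiness": 0.2,
                         "directness": 0.8}))
```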

MEGA: Open Architecture
Based on EyesWeb (DIST):
– open platform supporting MEGA research and applications
– the platform integrates "modules" for expressive analysis, synthesis, and mapping strategies
– visual language; wizard for plug-in development
– based on industrial standards (MS COM/DCOM)
– engineered release, derived from previous versions used in concrete applications
– EyesWeb user community
– integrated with other tools available in MEGA: IPEM ToolBox (IPEM), DOVRE (Telenor), ...
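To make the module-and-patch idea concrete, here is a deliberately simplified dataflow chain in Python. This is not the EyesWeb API (EyesWeb modules are native components assembled in a visual language on top of MS COM/DCOM); the sketch only illustrates the analysis-to-mapping pipeline such a patch typically expresses, with all class and parameter names invented for the example.

```python
class Module:
    """A minimal dataflow block: consumes one input frame, produces one output."""
    def process(self, frame):
        raise NotImplementedError

class EnergyAnalysis(Module):
    """Turn a list of joint speeds into a single normalised energy cue."""
    def process(self, frame):
        return min(1.0, sum(frame) / (len(frame) or 1))

class SoundMapping(Module):
    """Map the energy cue onto a loudness value in decibels (illustrative)."""
    def process(self, energy):
        return {"loudness_db": -30.0 + 30.0 * energy}

class Patch:
    """Run a chain of modules, feeding each output into the next module."""
    def __init__(self, *modules):
        self.modules = modules
    def process(self, frame):
        for module in self.modules:
            frame = module.process(frame)
        return frame

patch = Patch(EnergyAnalysis(), SoundMapping())
print(patch.process([0.1, 0.4, 0.3]))   # prints a loudness of about -22 dB
```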

MEGA: Applications
Exploitation:
– Art and theatre: MEGA consortium
– Musical instruments: GeneralMusic
– Visual "soundtrack": Eidomedia
– Distributed cooperative applications: Telenor
Demos at IBC 2001?
Public artistic events planned: music theatre, interactive museum exhibits