eNTERFACE’08 Multimodal Communication with Robots and Virtual Agents


1 eNTERFACE’08 Multimodal Communication with Robots and Virtual Agents

2 Overview
Context: exploitation of multimodal signals for the development of an active robot/agent listener.
Storytelling experience:
–Speakers tell the story of an animated cartoon they have just seen:
1- Watch the cartoon
2- Tell the story to a robot or an agent

3 Overview
Active listening:
–During natural interaction, speakers check whether their statements have been correctly understood (or at least heard).
–Robots/agents should also have active listening skills…
Characterization of multimodal signals as inputs of the feedback model:
–Speech analysis: prosody, keyword recognition, pauses
–Partner analysis: face tracking, smile detection
Robot/agent feedback (outputs):
–Lexical and non-verbal behaviors
Dialog management:
–Feedback model: exploitation of both input and output signals
Evaluation:
–Storytelling experiences are usually evaluated by annotation

4 Organization: Work packages:
WP1: Speech feature extraction and analysis
WP2: Partner analysis: face tracking and analysis
WP3: Robot and Agent Behavior Analysis
WP4: Dialog management for feedback behaviors
WP5: Evaluation and Annotation
WP6: Deliverables, reports

5 Speech Analysis
Automatic detection of prominence during the interaction
Computational attention algorithms:

6 eNTERFACE’08 Multimodal Communication with Robots and Virtual Agents
Speech analysis for prominence detection

7 Computational attention algorithms

9 Computational attention algorithms
These algorithms have more recently been tested for audio event detection:
M. Mancas, L. Couvreur, B. Gosselin, B. Macq, "Computational Attention for Event Detection", Proceedings of the ICVS Workshop on Computational Attention & Applications (WCAA-2007), Bielefeld, Germany, March 2007.
In this project, we intend to test them for the automatic detection of salient speech events, to trigger avatar/robot feedback.
–Underlying hypothesis: the listener is a child with limited language knowledge  test the bottom-up approach, as opposed to the more language-driven top-down approach:
"A Top-Down Auditory Attention Model for Learning Task Dependent Influences on Prominence Detection in Speech", Ozlem Kalinli and Shrikanth Narayanan, ICASSP’08.
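As a rough illustration of the bottom-up idea, a salient speech event can be modeled as a frame whose energy is statistically rare relative to the rest of the contour. The sketch below is a minimal toy version of that principle; the thresholding rule, window handling, and parameter values are illustrative assumptions, not the algorithm of the cited papers.

```python
# Toy bottom-up prominence detector on a 1-D frame-energy contour:
# a frame is flagged as salient when its energy exceeds the contour's
# mean by more than k standard deviations. The choice k=2.0 is an
# illustrative assumption.

def detect_prominence(energies, k=2.0):
    """Return indices of frames whose energy is an outlier (> mean + k*std)."""
    n = len(energies)
    if n == 0:
        return []
    mean = sum(energies) / n
    var = sum((e - mean) ** 2 for e in energies) / n
    std = var ** 0.5
    if std == 0:
        return []  # perfectly flat contour: nothing stands out
    return [i for i, e in enumerate(energies) if e > mean + k * std]
```

For example, a flat contour with a single loud burst yields exactly that frame index; a real system would of course work on windowed statistics of prosodic features rather than a global mean.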

10 Partner analysis
Analysis of human behaviour (non-verbal interaction).
Development of a component able to detect the face and key features for feedback analysis: head shaking, smiling…
Methodology:
Face detection: Viola & Jones face detection
Head-shake detection: frequency analysis of interest points
Smile detection: combining colorimetric and geometric approaches
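The head-shake cue above can be sketched as a frequency-style analysis of the tracked face position: an oscillatory "no" shake produces many horizontal direction reversals in a short time. The sketch below assumes per-frame horizontal face-centre coordinates (e.g. from a Viola & Jones detector) are already available; the reversal and amplitude thresholds are illustrative assumptions.

```python
# Heuristic head-shake detector: count direction reversals in the
# horizontal face-centre trajectory. Many reversals above a jitter
# threshold within the window suggest an oscillatory shake.
# min_reversals and min_amplitude are illustrative assumptions.

def is_head_shake(xs, min_reversals=3, min_amplitude=2.0):
    """Return True if the x-trajectory reverses direction often enough."""
    reversals = 0
    prev_dx = 0.0
    for a, b in zip(xs, xs[1:]):
        dx = b - a
        if abs(dx) < min_amplitude:
            continue  # ignore sub-threshold tracking jitter
        if prev_dx and dx * prev_dx < 0:
            reversals += 1  # sign flip = one direction reversal
        prev_dx = dx
    return reversals >= min_reversals
```

A static or steadily turning head produces few or no reversals, so it is not flagged; a fuller implementation would inspect the dominant oscillation frequency of the trajectory rather than just counting sign flips.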

11 Robot and Agent Behavior Analysis
Integration of existing tools to produce an ECA/robot able to display expressive backchannels.
The ECA architecture follows the SAIBA framework. It is composed of several modules connected to each other via a representation language. FML (Functional Markup Language) connects the 'intent planning' module to 'behavior planning', and BML (Behavior Markup Language) connects 'behavior planning' to the 'behavior realizer'. Modules are connected via Psyclone, a whiteboard architecture.
Tasks:
-define the capabilities the ECA/robot ought to have
-create BML entries for the lexicon
-integrate modules that will endow the ECA with such expressive capabilities
-work out carefully the synchronization scheme between modules, in particular between the Speaker and Listener modules
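To make the lexicon task concrete, a backchannel entry might look like the following BML fragment describing a nod combined with a light smile. The ids, timings, and attribute values are hypothetical illustrations, not entries from the project's actual lexicon.

```xml
<!-- Hypothetical lexicon entry: a nod-plus-smile backchannel.
     All ids, lexeme choices, and timing values are illustrative. -->
<bml id="bc1" xmlns="http://www.bml-initiative.org/bml/bml-1.0">
  <head id="h1" lexeme="NOD" repetition="2" start="0" end="0.8"/>
  <faceLexeme id="f1" lexeme="smile" amount="0.6" start="h1:start"/>
</bml>
```

The behavior planner would emit such a fragment toward the behavior realizer whenever the dialog manager requests a positive backchannel.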

12 Dialog Management
Development of a feedback model with respect to the input signals (common) and the output capabilities (behavior)
Methodology:
Representation of input data:
–EMMA: Extensible MultiModal Annotation markup language
–Definition of a task-oriented representation
Dialog management:
–State Chart XML (SCXML): State Machine Notation for Control Abstraction
–Interpretation of the speaker’s conversation
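The SCXML-based feedback loop can be sketched as a small state chart: the agent idles in a listening state, and a salient speech event or a partner cue triggers a backchannel before control returns to listening. State and event names below are illustrative assumptions, not the project's actual model.

```xml
<!-- Hypothetical SCXML sketch of the feedback loop.
     State ids and event names are illustrative only. -->
<scxml xmlns="http://www.w3.org/2005/07/scxml" version="1.0" initial="listening">
  <state id="listening">
    <transition event="speech.prominence" target="backchannel"/>
    <transition event="partner.smile" target="backchannel"/>
  </state>
  <state id="backchannel">
    <onentry><send event="agent.nod"/></onentry>
    <transition event="agent.done" target="listening"/>
  </state>
</scxml>
```

In a full model, the choice of backchannel (nod, smile, verbal "mm-hm") would depend on the EMMA-encoded input rather than a single fixed action.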

13 Evaluation and Annotation
Investigate the impact of the feedback provided by the robot and the virtual agent on the user.
A single feedback model will be defined but implemented differently on the robot and the agent, since they have different communication capabilities.
The system will be partly simulated (Wizard-of-Oz, WOZ). If time allows, a functional version of the system will be evaluated.
Tasks:
Evaluation protocol: scenario, variables …
System implementation: WOZ
Data collection: recordings
Data analysis: coding schemes, analysis of annotations, computation of evaluation metrics

14 Thanks for your attention…

