

1 Multimodal Interfaces: Scope & focus in 2003
NCP meeting, Jan 27-28, 2003, Brussels
Colette Maloney, Interfaces, Knowledge and Content technologies, Applications & Information Market, DG INFSO

2 Outline
 Why interface technologies?
 Goals
 What research?
 Focus of first calls in FP6
 Where are we today?
 Ongoing activities

3 Goals of Interface Technologies
 Allow people to use the surrounding high-tech easily, in a way that suits humans rather than computers
 Facilitate interpersonal communication anywhere, anytime, beyond linguistic and cultural barriers
 Assist people and augment their abilities when interacting and communicating

4 Interface Technologies
 Strongly multidisciplinary, with many component technologies
 Significant system-level integration issues
 Research resulting in autonomous (self-learning, self-organising), adaptive (time-varying) systems that can work with partial and uncertain information

5 Actions, Workprogramme 2003-2004
 Multimodal Interfaces (in call 1): natural and adaptive multimodal interfaces
 Objective: to develop natural and adaptive multimodal interfaces that respond intelligently to speech and language, vision, gesture, haptics and other senses.
Focus 1: Multimodal interaction
Focus 2: Multilingual communication

6 Multimodal Interaction
Interaction between and among humans and the virtual and physical environment:
 intuitive multimodal interfaces that are autonomous and capable of learning and adapting to the user environment in dynamically changing contexts. They should recognise emotive user reaction and feature robust dialogue capability with unconstrained speech and language input.
Human-to-human: technology-mediated communication
Human-to-things: virtual and physical
Human-to-self: health, wellbeing
Human-to-content: information retrieval/browsing
Device-to-device: human-mediated device communication
Human-to-embodied robots

7 Multilingual Communication
Facilitating translation for unrestricted domains, especially for spontaneous (unrestricted) or ill-formed (speech) inputs, in task-oriented settings:
 Unrestricted spontaneous speech-to-speech translation in task-oriented settings
 Statistical/mixed approaches to translation
 Adaptation to task/user, learning
 Robustness
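The "statistical approaches to translation" listed above typically follow the classic noisy-channel formulation: choose the target sentence e that maximises P(e) * P(f|e), combining a translation model with a target-side language model. A minimal toy sketch, with all words and probabilities purely illustrative (not from any trained system):

```python
# Noisy-channel sketch: score every candidate target sequence by
# translation-model probability P(f|e) times a bigram language model P(e).
# Vocabulary and probabilities are toy values for illustration only.
from itertools import product

# P(f|e): candidate target words (with channel probabilities) per source word.
translation = {
    "la": {"the": 0.9, "it": 0.1},
    "maison": {"house": 0.8, "home": 0.2},
}

# P(e): toy bigram language model over target words; "<s>" starts a sentence.
bigram = {
    ("<s>", "the"): 0.5, ("<s>", "it"): 0.3,
    ("the", "house"): 0.4, ("the", "home"): 0.1,
    ("it", "house"): 0.01, ("it", "home"): 0.01,
}

def translate(source_words):
    """Exhaustively score all candidate sequences (feasible for toy data)."""
    best, best_score = None, 0.0
    for cand in product(*(translation[f].items() for f in source_words)):
        words = [w for w, _ in cand]
        channel = 1.0
        for _, p in cand:          # P(f|e), word by word
            channel *= p
        lm, prev = 1.0, "<s>"
        for w in words:            # P(e) under the bigram model
            lm *= bigram.get((prev, w), 1e-6)
            prev = w
        if channel * lm > best_score:
            best, best_score = words, channel * lm
    return best

print(translate(["la", "maison"]))  # -> ['the', 'house']
```

Real systems replace the exhaustive search with beam search and learn both models from parallel and monolingual corpora; the "mixed approaches" mentioned combine such statistics with rule-based components.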

8 Basic Research
Basic research, component technologies. Examples:
 accurate vision
 gesture tracking
 speech and audio processing
 language technologies
 affective computing
 machine learning
 autonomous systems
 fusion of multiple channels
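"Fusion of multiple channels" can be done early (at the feature level) or late (at the decision level). A minimal late-fusion sketch, where each modality independently scores interpretation hypotheses and a weighted sum combines them; the modality names, weights, and scores below are illustrative assumptions, not from any cited project:

```python
# Late-fusion sketch: each channel (speech, gesture, gaze) emits class
# scores on its own; fusion is a per-label weighted sum of those scores.
# Labels, weights, and scores are hypothetical values for illustration.

def late_fusion(modality_scores, weights):
    """Combine per-modality score dicts; return (best label, fused scores)."""
    fused = {}
    for modality, scores in modality_scores.items():
        w = weights[modality]
        for label, score in scores.items():
            fused[label] = fused.get(label, 0.0) + w * score
    return max(fused, key=fused.get), fused

scores = {
    "speech":  {"greeting": 0.7, "question": 0.3},
    "gesture": {"greeting": 0.4, "question": 0.6},
    "gaze":    {"greeting": 0.6, "question": 0.4},
}
weights = {"speech": 0.5, "gesture": 0.3, "gaze": 0.2}

label, fused = late_fusion(scores, weights)
print(label)  # -> greeting
```

Late fusion keeps each channel's recogniser independent, which eases the system-level integration issues the slides raise; early fusion can capture cross-channel correlations but requires synchronised multimodal data.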

9 System Level Integration
Proof of concept in challenging application domains, including:
 wearable interfaces and smart clothes
 intelligent rooms and interfaces for collaborative working tools
 cross-cultural communications
 usability issues and evaluation

10 Shared Infrastructure
 Data: large amounts of multimodal data, synchronisation and IPR issues
 Metrology, technology evaluation, usability
 Infrastructural knowledge: machine learning, applied mathematics
 Best practice and standards
 Tools and platforms
 Socio-economic issues (e.g. human factors)

11 Interface Technologies: Ongoing Activities
 Collection, processing and browsing of multimodal meeting data: systems that enable recording, structuring, browsing and querying of an archive of multimodal recordings of meetings.
 Multimodal, multicultural, multilingual communication: integration of multiple communication modes (vision, speech and object manipulation), combining the physical and virtual worlds to support multi-cultural communication and problem solving.
 Speech-to-speech translation: development of speech-to-speech translation and its integration in automated e-commerce and e-service environments; data collection for speech-to-speech translation.

12 Interface Technologies: Ongoing Activities
 Preparing future multisensorial interaction research: providing technological baselines, comparative evaluations, and assessment of prospects of core technologies for speech-to-speech translation, the detection and expression of emotional states, and technologies for children's speech.
 Automatic animation of human models: design and development of a virtual person animation system in controlled environments, enabling the modelling, analysis and simulation of human motion.
 Recognition of the user's emotional state: human-computer interaction that can interpret the user's attitude or emotional state from their speech and/or facial gestures and expressions.
See www.hltcentral.org for more information.

13 Partnership
 Highly competent and reliable partners
 Complementarity: cover all areas you need
 Duplication of competence: acceptable for IPs depending on project needs; necessary for NoEs
 At least 3 partners from EU Member States, NAS or associated states
 Industry/SME/NAS/academia participation depending solely on project needs

14 Expected Outcome, Calls 1 and 2: Multimodal Interfaces and Cognitive Systems
 ~90 M€ funding
 2/3 of funds devoted to new instruments
 6-10 new instruments
 8-12 old instruments
!! PROVISIONAL !!


