
1 German Research Center for Artificial Intelligence, DFKI GmbH, Stuhlsatzenhausweg 3, 66123 Saarbruecken, Germany
phone: (+49 681) 302-5252/4162, fax: (+49 681) 302-5341, e-mail: wahlster@dfki.de, WWW: http://www.dfki.de/~wahlster
Wolfgang Wahlster: Language Technologies for the Mobile Internet Era

2 © W. Wahlster Multimodal Interfaces to 3G Mobile Services
Market studies (May 2002) predict for multimodal UMTS systems:
- Cumulative revenues of almost 1 trillion € from launch until 2010
- Non-voice service revenues will dominate voice revenues by year 3 and comprise 66% of 3G service revenues by 2010
- 322 billion € in revenues in 2010
- In 2010 the average 3G subscriber will spend about 30 € per month on 3G data services

3 © W. Wahlster Multimodal UMTS Systems: Intelligent Interaction with Mobile Internet Services
- Access to web content and web services anywhere and anytime
- Access to corporate networks and virtual private networks from any device
- Access to edutainment and infotainment services
- Access to all messages (voice, email, multimedia, MMS) from any single device
- Personalization
- Localization

4 © W. Wahlster Mobile Messaging Services Evolution: From SMS to MMS
[Diagram: evolution from SMS (SS7/SMSC infrastructure, standard phones, text, ubiquity, youth focus) via EMS (limited enhancement, enhanced text, EMS phones) to MMS (MMS relay and servers, UMTS, IP/MPLS protocols, MMS phones with integrated image capture, smart phones; pictures, audio, video, multimedia; personalized and location-based services, emotional experience, enhanced message creation)]
Language technologies for MMS:
- Speech synthesis (with affect)
- Multimodal authoring interface
- Speech-based retrieval of media objects

5 © W. Wahlster Outline of the Talk
1. Using all Human Senses for Intuitive Interfaces
2. Media Fusion and Fission in the SmartKom System
3. Client/Server Architectures for Mobile Multimodal Dialogue Systems
4. Added-Value Mobile Services
5. Research Roadmaps for Multimodal Interfaces
6. Conclusions

6 © W. Wahlster From Spoken Dialogue to Multimodal Dialogue
- Verbmobil / today's cell phone: speech only
- SmartKom / third-generation UMTS phone: speech, graphics and gesture

7 © W. Wahlster Merging Various User Interface Paradigms
Spoken dialogue, graphical user interfaces, gestural interaction, facial expressions and haptic input converge in multimodal interaction.

8 © W. Wahlster Using All Human Senses for Intuitive Interaction: Code, Media and Modalities
- Code (systems of symbols): language, graphics, gesture, facial expression
- Media (physical information carriers): the system's input and output channels and storage (HD drive, DVD)
- Modalities (human senses): visual, auditory, tactile, haptic

9 © W. Wahlster Symbolic and Subsymbolic Fusion of Multiple Modes
Input analyzers: speech recognition, gesture recognition, prosody recognition, facial expression recognition, lip reading. Their hypotheses are combined by:
- Subsymbolic fusion: neural networks, hidden Markov models
- Symbolic fusion: graph unification, Bayesian networks
The fused result feeds reference resolution and disambiguation, yielding a common semantic representation.
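
Graph unification, one of the symbolic fusion techniques named above, can be illustrated with a toy sketch (not SmartKom's actual implementation): feature structures from two modalities are merged recursively, and fusion fails on conflicting values. The example hypotheses below are invented for illustration.

```python
def unify(a, b):
    """Recursively unify two feature structures (nested dicts).

    Returns the merged structure, or None if the two structures
    assign conflicting atomic values to the same feature.
    """
    if isinstance(a, dict) and isinstance(b, dict):
        merged = dict(a)
        for key, value in b.items():
            if key in merged:
                result = unify(merged[key], value)
                if result is None:
                    return None  # conflict: unification fails
                merged[key] = result
            else:
                merged[key] = value
        return merged
    return a if a == b else None

# Hypothetical hypotheses: speech supplies the dialogue act, the pointing
# gesture supplies the identity of the referent ("this church").
speech = {"act": "request_info", "object": {"type": "church"}}
gesture = {"object": {"type": "church", "id": "st_peter"}}

fused = unify(speech, gesture)
print(fused)
# {'act': 'request_info', 'object': {'type': 'church', 'id': 'st_peter'}}
```

The fused structure resolves the deictic reference: the gesture's object identifier fills the slot the spoken utterance left open, which is exactly the kind of cross-modal completion symbolic fusion provides.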

10 © W. Wahlster Mutual Disambiguation of Multiple Input Modes
The combination of speech and vision analysis increases the robustness and understanding capabilities of multimodal user interfaces:
- Speech recognition + lip reading: increased robustness in noisy environments
- Speech recognition + gesture recognition (XTRA, SmartKom): referential disambiguation and focus control
- Speech recognition + facial expression recognition (SmartKom): recognition of irony and sarcasm, scope disambiguation
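
A minimal sketch of how mutual disambiguation can work numerically (assuming a simple late-fusion scheme; the scores and words are invented): each recognizer produces an n-best list, and multiplying per-modality scores lets a hypothesis that neither modality ranks first win overall.

```python
# Hypothetical n-best lists for one noisy utterance:
# scores from the speech recognizer and from the lip reader.
speech_scores = {"knife": 0.40, "night": 0.35, "light": 0.25}
lip_scores    = {"night": 0.45, "light": 0.45, "knife": 0.10}

def fuse(*score_dicts):
    """Naive late fusion: multiply per-modality scores and renormalize."""
    candidates = set.intersection(*(set(d) for d in score_dicts))
    joint = {c: 1.0 for c in candidates}
    for d in score_dicts:
        for c in candidates:
            joint[c] *= d[c]
    total = sum(joint.values())
    return {c: joint[c] / total for c in joint}

fused = fuse(speech_scores, lip_scores)
best = max(fused, key=fused.get)
print(best)  # 'night'
```

The speech recognizer alone would have chosen "knife"; the lip reader's evidence demotes it, so the combined system recovers the intended word. Real systems weight modalities by reliability rather than treating them equally as here.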

11 © W. Wahlster SmartKom: A Transportable Interface Agent
The media analysis kernel of the SmartKom interface agent combines interaction management, application management and media design. It is deployed in three scenarios:
- SmartKom-Public: a multimodal communication kiosk
- SmartKom-Mobile: a handheld communication assistant
- SmartKom-Home/Office: a multimodal portal to information services

12 © W. Wahlster SmartKom: Intuitive Multimodal Interaction
The SmartKom Consortium: main contractor DFKI Saarbrücken; partners include MediaInterface, the European Media Lab, Univ. of Munich, Univ. of Stuttgart and Univ. of Erlangen, with sites in Saarbrücken, Aachen, Dresden, Berkeley, Stuttgart, Munich, Heidelberg and Ulm.
Project budget: € 25.5 million, funded by BMBF (Dr. Reuse) and industry.
Project duration: 4 years (September 1999 – September 2003).

13 © W. Wahlster SmartKom's SDDP Interaction Metaphor
SDDP = Situated Delegation-oriented Dialogue Paradigm: the user specifies a goal and delegates the task to a personalized interaction agent; user and agent cooperate on problems, the user asks questions, and the agent presents results obtained from web services (Service 1, Service 2, Service 3).
See: Wahlster et al. 2001, Eurospeech

14 © W. Wahlster Multimodal Input and Output in the SmartKom System Where would you like to sit?

15 © W. Wahlster Personalized Interaction with WebTVs via SmartKom (DFKI with Sony, Philips, Siemens)
Example: multimodal access to electronic program guides for TV
User: Switch on the TV.
Smartakus: Okay, the TV is on.
User: Which channels are presenting the latest news right now?
Smartakus: CNN and NTV are presenting news.
User: Please record this news channel on a videotape.
Smartakus: Okay, the VCR is now recording the selected program.

16 © W. Wahlster Using Facial Expression Recognition for Affective Personalization
Processing ironic or sarcastic comments: the same words lead to different system reactions depending on the user's facial expression.
(1) Smartakus: Here you see the CNN program for tonight.
(2) User: That's great.
(3) Smartakus: I'll show you the program of another channel for tonight.
(2') User: That's great.
(3') Smartakus: Which of these features do you want to see?

17 © W. Wahlster The SmartKom Demonstrator System Camera for Gestural Input Microphone Multimodal Control of TV-Set Multimodal Control of VCR/DVD Player

18 © W. Wahlster A Demonstration of SmartKom’s Multimodal Interface for the German President Dr. Rau

19 © W. Wahlster Salient Characteristics of SmartKom
- Seamless integration and mutual disambiguation of multimodal input and output on semantic and pragmatic levels
- Situated understanding of possibly imprecise, ambiguous, or incomplete multimodal input
- Context-sensitive interpretation of dialog interaction on the basis of dynamic discourse and context models
- Adaptive generation of coordinated, cohesive and coherent multimodal presentations
- Semi- or fully automatic completion of user-delegated tasks through the integration of information services
- Intuitive personification of the system through a presentation agent

20 © W. Wahlster Multimodal Input and Output in SmartKom: Fusion and Fission of Multiple Modalities
Both input by the user and output by the presentation agent combine speech, gesture and facial expressions.

21 © W. Wahlster The Need for Personalization: Adaptive Interaction with Mobile Devices
Output must adapt to very different displays, e.g. from 1024 x 768 pixels in 24-bit color on a desktop down to 60 x 90 pixels in b/w on a mobile device.

22 © W. Wahlster PEACH: "Beaming" a Life-Like Character from a Large Public Display to a Mobile Personal Device
PEACH: Personalized Edutainment in Museums (IRST – DFKI)

23 © W. Wahlster A "Web of Meaning" has more Personalization Potential than a "Web of Links"
Three layers of webpage annotations:
- Content (OWL, DAML+OIL): high personalization potential
- Structure (XML): medium personalization potential
- Layout (HTML): low personalization potential
cf.: Dieter Fensel, James Hendler, Henry Lieberman, Wolfgang Wahlster (eds.): Spinning the Semantic Web, MIT Press, November 2002

24 © W. Wahlster Personalization: Mapping Web Content onto a Variety of Structures and Layouts
From the "one-size-fits-all" approach of static webpages to the "perfect personal fit" approach of adaptive webpages: one content representation (OWL) is mapped onto several structures (XML 1 ... XML n), each of which is rendered in several layouts (HTML).
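
The one-content, many-layouts mapping above can be sketched in a few lines (a toy illustration; the record and rendering functions are invented, not from SmartKom): a single device-neutral content object is serialized into a structure layer and then into device-specific layout layers.

```python
# One content object (the "web of meaning" layer), rendered into
# device-specific layouts. All names below are illustrative.
content = {"type": "Hotel", "name": "Ritter", "city": "Heidelberg", "stars": 4}

def to_xml(item):
    """Structure layer: device-neutral XML."""
    fields = "".join(f"<{k}>{v}</{k}>" for k, v in item.items())
    return f"<item>{fields}</item>"

def to_html_desktop(item):
    """Layout layer: rich rendering for a large color display."""
    return (f"<h1>{item['name']} ({'*' * item['stars']})</h1>"
            f"<p>{item['type']} in {item['city']}</p>")

def to_html_phone(item):
    """Layout layer: terse rendering for a small b/w display."""
    return f"<b>{item['name']}</b>, {item['city']}"

print(to_xml(content))
print(to_html_desktop(content))
print(to_html_phone(content))
```

Because layout is derived from content rather than hand-written, adding a new device class means adding one renderer instead of re-authoring every page, which is the point of the adaptive-webpage approach.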

25 © W. Wahlster SmartKom: Towards Multimodal and Mobile Dialogue Systems for Indoor and Outdoor Navigation
- Seamless integration of various positioning technologies: GSM/UMTS cells, GPS, infrared, WaveLAN, Bluetooth
- Using the same device for driving and walking directions
- Speech and gesture input; graphics and speech output
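
One simple reading of "seamless integration of positioning technologies" is a fallback chain: use the most accurate source that currently has a fix, so the same device works outdoors (GPS) and indoors (WaveLAN, Bluetooth, GSM cell). The sketch below assumes this policy; the accuracy figures are rough illustrative guesses, not measured values.

```python
# Positioning sources with assumed typical accuracy in meters.
SOURCES = [
    ("gps",        5),    # precise, but usually outdoors only
    ("wavelan",   30),    # indoor coverage
    ("bluetooth", 10),    # short-range indoor beacons
    ("gsm_cell", 500),    # coarse, but almost always available
]

def locate(available_fixes):
    """Return (source, position) from the most accurate source
    that currently has a fix; raise if none is available."""
    for name, _accuracy in sorted(SOURCES, key=lambda s: s[1]):
        if name in available_fixes:
            return name, available_fixes[name]
    raise LookupError("no positioning source available")

# Indoors there is no GPS fix, so the Bluetooth estimate is used.
indoors = {"bluetooth": (49.2402, 7.0023), "gsm_cell": (49.24, 7.00)}
print(locate(indoors))  # ('bluetooth', (49.2402, 7.0023))
```

The same call works outdoors: as soon as a GPS fix appears in `available_fixes`, it outranks the indoor sources, which is what makes the handover between driving and walking directions seamless from the dialogue system's point of view.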

26 © W. Wahlster Spoken Dialogues with the Car Navigation System: SENECA
Product announcement for the E-Class Mercedes: end of 2002

27 © W. Wahlster Getting Driving and Walking Directions via SmartKom
SmartKom can be used for multimodal navigation dialogues in a car:
User: I want to drive to Heidelberg.
Smartakus: Do you want to take the fastest or the shortest route?
User: The fastest.
Smartakus: Here you see a map with your route from Saarbrücken to Heidelberg.

28 © W. Wahlster Getting Driving and Walking Directions via SmartKom
Smartakus: You are now in Heidelberg. Here is a sightseeing map of Heidelberg.
User: I would like to know more about this church!
Smartakus: Here is some information about St. Peter's Church.
User: Could you please give me walking directions to this church?
Smartakus: In this map, I have highlighted your walking route.

29 © W. Wahlster SmartKom: Multimodal Dialogues with a Hybrid Navigation System

30 © W. Wahlster Spoken Navigation Dialogues with SmartKom
User: SmartKom, please look for the nearest parking lot.
Smartakus: The parking garage at the main station provides 300 slots. Opening hours are from 6 am to 11 pm. Do you want to get there?
User: No, please tell me about the next parking option.
Smartakus: The Market parking lot provides 150 slots. It is open 24 hours a day. Do you want to get there?
User: Yes, please.
Smartakus: I'll bring you to the Market parking lot.

31 © W. Wahlster The High-Level Control Flow of SmartKom (slides 31–44 step through this diagram in successive animation stages)

45 © W. Wahlster A Spectrum of Client/Server Architectures for Mobile Multimodal Systems: From Thin to Fat Clients
- Remote speech understanding: Java-based voice streaming to a server-side speech understanding system
- Distributed speech understanding: Aurora speech features sent to a speech understanding system with a feature interface
- Embedded speech understanding: understanding on the device itself, with content access (e.g. map updates) and web services provided over the network

46 © W. Wahlster M3I: A Mobile, Multimodal, and Modular Interface of DFKI
Implemented with IBM Embedded ViaVoice on an iPAQ and a Jornada (C++, Embedded Java), connected via Java-based voice streaming to SmartKom's multimodal dialogue engine.
1. Hybrid speech understanding = embedded (small vocabulary) + remote/distributed (large vocabulary, topic detection)
2. Resource-adaptive speech processing: the availability of a server improves the coverage and quality
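
The resource-adaptive policy of point 2 can be sketched as follows (an illustrative sketch, not M3I's code: a small embedded recognizer always works, and the server's large-vocabulary result is preferred whenever the server is reachable; all names and the tiny vocabulary are invented).

```python
# Words the assumed embedded small-vocabulary engine can handle on-device.
EMBEDDED_VOCAB = {"left", "right", "stop", "map"}

def embedded_recognize(utterance):
    # Placeholder: a real embedded engine would decode audio here.
    return utterance if utterance in EMBEDDED_VOCAB else None

def remote_recognize(utterance, server_up):
    # Placeholder for server-side large-vocabulary recognition
    # (e.g. over a voice stream); returns None when offline.
    return utterance if server_up else None

def recognize(utterance, server_up):
    """Prefer the server result; fall back to the embedded engine."""
    return remote_recognize(utterance, server_up) or embedded_recognize(utterance)

print(recognize("stop", server_up=False))        # embedded engine suffices
print(recognize("Heidelberg", server_up=True))   # needs the server vocabulary
print(recognize("Heidelberg", server_up=False))  # None: out of coverage
```

The design point is graceful degradation: losing the network shrinks coverage to the embedded vocabulary instead of disabling speech input entirely.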

47 © W. Wahlster Example of Embedded Multimodal Dialogue System M3I for Pedestrian Navigation (DFKI) Spoken and Gestural Input combined with graphics and speech output on an iPAQ

48 © W. Wahlster Java-Based Voice Streaming for Hybrid Speech Understanding in M3I (DFKI)

49 © W. Wahlster SmartKom's Added-Value Mobile Service: ActiveList
"Please let me know when I pass a shop selling batteries."
SmartKom sends a note to the user or activates an alarm as soon as the user approaches an exhibit that matches the specification of an item on the ActiveList.
ActiveList's spatial alarm can be combined with:
- route planning and navigation
- temporal and spatial optimization of a visit
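
At its core the spatial alarm is a geofence check, which can be sketched like this (an assumption-laden toy: the list entries, shop records, coordinates and 50 m radius are all invented for illustration).

```python
import math

def distance_m(a, b):
    """Approximate great-circle distance in meters (haversine)."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371000 * math.asin(math.sqrt(h))

# Hypothetical ActiveList entries: a wanted item plus an alarm radius.
active_list = [
    {"item": "batteries", "category": "electronics", "radius_m": 50},
]
shops = [
    {"name": "ElectroShop", "category": "electronics", "pos": (49.2354, 6.9969)},
    {"name": "Bakery",      "category": "food",        "pos": (49.2360, 6.9980)},
]

def spatial_alarms(user_pos):
    """Return shops that match an ActiveList entry within its radius."""
    return [s["name"] for s in shops for e in active_list
            if s["category"] == e["category"]
            and distance_m(user_pos, s["pos"]) <= e["radius_m"]]

print(spatial_alarms((49.2355, 6.9970)))  # user is next to ElectroShop
```

Each position update from the positioning layer would trigger one such check; combining it with route planning just means feeding the matched shop's position to the navigation component.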

50 © W. Wahlster SmartKom's Added-Value Mobile Service: SpotInspector
"What's going on at the castle right now?"
SmartKom allows the user to have remote visual access to various interesting spots via a selection of webcams, showing current waiting queues, special events and activities.
SpotInspector can be combined with:
- multimedia presentations of the expected program for these spots
- route planning and navigation to these spots

51 © W. Wahlster SmartKom's Added-Value Mobile Service: PartnerRadar
"Where are Lisa and Tom? What are they looking at?"
SmartKom helps to locate and to bring together members of the same party. Involved technologies:
- navigation and tour instructions
- monitoring of group activity
- additional information on exhibits that are interesting for the whole party

52 © W. Wahlster Ultimate Simplicity: One-Button Mobile Devices
(8hertz technologies, Germany; Cyber Assist Research Center (CARC), Japan)
Components: reflectors, photo detector, speaker, command button, microphone, fingerprint recognizer

53 UMTS-Doit: The First Test and Evaluation Center for UMTS-based Multimodal Speech Services in Germany
[Diagram: a Node B at DFKI Saarbrücken connects via E1/ATM and a Gigastream switch to an RNC in Munich; the mobile network links the UMTS-Doit server to the Internet, content providers, a UMTS navigation service and the PSTN/telephone system. Shown as a cooperation between two partners (logos in the original slide).]

54 © W. Wahlster UMTS Applications in a Mercedes: Webcam Providing a Look-Ahead of the Traffic Situation

55 © W. Wahlster UMTS Application in a Mercedes: Language-based Music Download DFKI Spin-off: Natural Language Music Search

56 © W. Wahlster MP3 music files from the Web Rist & Herzog for Blaupunkt Personalized Car Entertainment (DFKI for Bosch)

57 © W. Wahlster Research Roadmap of Multimodality 2002–2005
(from the Dagstuhl Seminar "Fusion and Coordination in Multimodal Interaction", 2 Nov. 2001, edited by W. Wahlster)
The roadmap leads from empirical and data-driven models of multimodality (2002) via advanced methods for multimodal communication towards mobile, human-centered, and intelligent multimodal interfaces (2005), along the tracks "adequate corpora for MM research", "computational models of multimodality", "multimodal interface toolkit" and "toolkits for multimodal systems". Milestones include:
- XML-encoded MM human-human and human-machine corpora
- Standards for the annotation of MM training corpora
- Situated and task-specific MM corpora
- Corpora with multimodal artefacts and new multimodal input devices
- Collection of the hardest and most frequent/relevant phenomena
- Examples of the added value of multimodality
- Common representation of multimodal content
- Markup languages for multimodal dialogue semantics
- Models of MM mutual disambiguation
- Decision-theoretic, symbolic and hybrid modules for MM input fusion
- Multimodal barge-in
- Mobile multimodal interaction tools
- Plug-and-play infrastructure
- Reusable components for multimodal analysis and generation
- Multimodal toolkit for universal access
- Task-, situation- and user-aware multimodal interaction
- Models for effective and trustworthy MM HCI
- Multiparty MM interaction

58 © W. Wahlster Research Roadmap of Multimodality 2006–2010
(from the Dagstuhl Seminar "Fusion and Coordination in Multimodal Interaction", 2 Nov. 2001, edited by W. Wahlster)
The roadmap continues from toolkits for multimodal systems (2006) towards ecological multimodal interfaces (2010). Milestones include:
- Usability evaluation methods for MM systems
- Multimodal feedback and grounding
- Tailored and adaptive MM interaction
- Incremental feedback between modalities during generation
- Models of MM collaboration
- Parametrized models of multimodal behaviour
- Demonstration of performance advances through multimodal interaction
- Real-time localization and motion/eye-tracking technology
- Multimodality in VR and AR environments
- Resource-bounded multimodal interaction
- Users' theories of the system's multimodal capabilities
- Multicultural adaptation of multimodal presentations
- Affective MM communication
- Test suites and benchmarks for multimodal interaction
- Multimodal models of engagement and floor management
- Non-monotonic MM input interpretation
- Computational models of the acquisition of MM communication skills
- Non-intrusive and invisible MM input sensors
- Biologically-inspired intersensory coordination models

59 © W. Wahlster Burning Issues in Multimodal Interaction
- Multimodality: from alternate modes of interaction towards mutual disambiguation and synergistic combinations
- Discourse models: from information-seeking dialogs towards argumentative dialogs and negotiations
- Domain models: from closed-world assumptions towards the open world of web services
- Dialog behaviour: from automata models towards a combination of probabilistic and plan-based models
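
The automata model the last point wants to move beyond can be made concrete with a tiny sketch (states and transitions invented here, loosely following the parking-lot dialogue of slide 30): a finite-state dialogue manager maps (state, user input) pairs to the next state, and anything outside the table is simply ignored.

```python
# A minimal finite-state dialogue automaton for a parking-lot dialogue.
# Unknown (state, input) pairs leave the state unchanged, which is
# precisely the rigidity that motivates probabilistic/plan-based models.
TRANSITIONS = {
    ("offer_lot", "yes"): "navigate",
    ("offer_lot", "no"):  "offer_lot",   # present the next parking option
    ("navigate",  None):  "done",
}

def step(state, user_input):
    return TRANSITIONS.get((state, user_input), state)

state = "offer_lot"
for reply in ["no", "yes"]:   # "next option, please" ... "yes, please"
    state = step(state, reply)
print(state)  # 'navigate'
```

Such an automaton cannot handle over-answering ("yes, and find the cheapest one"), topic shifts, or uncertain input; probabilistic and plan-based models address exactly these gaps.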

60 © W. Wahlster Conclusions
- Multimodal interfaces increase the robustness of interaction, enable mutual disambiguation, and lead to intuitive and efficient dialogues
- Hybrid and resource-adaptive client/server architectures improve the quality and coverage of mobile multimodal interfaces
- The combination of indoor and outdoor navigation for drivers and pedestrians on a single device, across various wireless technologies, is one of the "killer apps" for UMTS services

61 http://smartkom.dfki.de/ URL of this Presentation: http://www.dfki.de/~wahlster/LangTech-2002

62 © W. Wahlster © 2002 DFKI Design by R.O. Thank you very much for your attention

