
1 Multimodal Interfaces to Mobile Webservices. Wolfgang Wahlster, German Research Center for Artificial Intelligence DFKI GmbH, Stuhlsatzenhausweg 3, 66123 Saarbruecken, Germany. Phone: (+49 681) 302-5252/4162, fax: (+49 681) 302-5341, e-mail: wahlster@dfki.de, WWW: http://www.dfki.de/~wahlster. Nederlands ICT-Kenniscongress 2002, Den Haag, 05 September 2002.

2 © W. Wahlster From Spoken Dialogue to Multimodal Dialogue. Verbmobil (today's cell phone): speech only. SmartKom (third-generation UMTS phone): speech, graphics and gesture.

3 © W. Wahlster Merging Various User Interface Paradigms: spoken dialogue, graphical user interfaces, and gestural interaction converge into multimodal interaction.

4 © W. Wahlster SmartKom: A Transportable Interface Agent in three scenarios: SmartKom-Public, a multimodal communication kiosk; SmartKom-Mobile, a handheld communication assistant; SmartKom-Home/Office, a multimodal portal to information services. The kernel of the SmartKom interface agent comprises media analysis, interaction management, application management, and media design.

5 © W. Wahlster SmartKom's SDDP Interaction Metaphor (SDDP = Situated Delegation-oriented Dialogue Paradigm): the user specifies a goal and delegates the task to a personalized interaction agent; user and agent cooperate on problems, the agent asks questions and presents results, and it accesses webservices (Service 1, Service 2, Service 3) on the user's behalf. See: Wahlster et al. 2001, Eurospeech.
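To make the delegation idea concrete, here is a minimal sketch of an SDDP-style delegation loop. All class, slot, and service names are hypothetical illustrations, not SmartKom's actual API.

```python
# Minimal sketch of an SDDP-style delegation loop (hypothetical names):
# the user states a goal, the agent fills missing parameters by asking
# back, calls a web service, and presents the result.

from dataclasses import dataclass, field

@dataclass
class Task:
    goal: str                                     # e.g. "reserve cinema tickets"
    slots: dict = field(default_factory=dict)     # parameters collected so far

class InteractionAgent:
    REQUIRED = {"reserve cinema tickets": ["movie", "seats"]}

    def __init__(self, services):
        self.services = services                  # name -> callable web service stub

    def delegate(self, task, ask_user):
        # Cooperate on problems: ask back until all required slots are filled.
        for slot in self.REQUIRED.get(task.goal, []):
            if slot not in task.slots:
                task.slots[slot] = ask_user(f"Please specify {slot}.")
        # Complete the task via a web service and present the result.
        result = self.services["reservation"](task.goal, task.slots)
        return f"Done: {result}"

# Usage: the user only states the goal; the agent drives the dialogue.
agent = InteractionAgent({"reservation": lambda goal, slots: f"{goal} for {slots}"})
reply = agent.delegate(Task("reserve cinema tickets", {"movie": "A Little Christmas Story"}),
                       ask_user=lambda question: "these two seats")
print(reply)
```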

6 © W. Wahlster Multimodal Input and Output in the SmartKom System. Smartakus: "Where would you like to sit?"

7 © W. Wahlster Multimodal Interaction with a Life-like Character. User input: speech and gesture; Smartakus output: speech, gesture, and facial expressions. User: "I'd like to reserve tickets for this movie." Smartakus: "Where would you like to sit?" User: "I'd like these two seats."

8 © W. Wahlster Using Facial Expression Recognition for Affective Personalization: processing ironic or sarcastic comments. (1) Smartakus: "Here you see the CNN program for tonight." (2) User: "That's great." (with a negative facial expression, interpreted as irony) (3) Smartakus: "I'll show you the program of another channel for tonight." (2') User: "That's great." (with a positive facial expression) (3') Smartakus: "Which of these features do you want to see?"
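A hedged sketch of the idea, not SmartKom's actual algorithm: the literal sentiment of the utterance is combined with the recognized facial affect, and a clash between the two is treated as irony, which flips the dialogue move. Function and label names are made up for illustration.

```python
# Illustrative only: combine utterance sentiment with facial affect;
# positive words plus a negative face is read as irony/sarcasm.

def interpret_feedback(utterance_sentiment: str, facial_affect: str) -> str:
    """Return the next dialogue move given the two affect cues."""
    if utterance_sentiment == "positive" and facial_affect == "negative":
        return "offer_alternative"      # e.g. show another channel
    if utterance_sentiment == "positive" and facial_affect == "positive":
        return "elaborate_current"      # e.g. ask which feature to watch
    return "clarify"                    # fall back to an explicit question

# Same words, different faces, different system reactions (as on the slide).
print(interpret_feedback("positive", "negative"))   # offer_alternative
print(interpret_feedback("positive", "positive"))   # elaborate_current
```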

9 © W. Wahlster SmartKom: Intuitive Multimodal Interaction. The SmartKom Consortium, with DFKI Saarbrücken as main contractor, includes MediaInterface, the European Media Lab, the Univ. of Munich, the Univ. of Stuttgart, and the Univ. of Erlangen, with partner sites in Saarbrücken, Aachen, Dresden, Berkeley, Stuttgart, Munich, Heidelberg, and Ulm. Project budget: € 25.5 million; project duration: 4 years (September 1999 to September 2003).

10 © W. Wahlster Salient Characteristics of SmartKom: seamless integration and mutual disambiguation of multimodal input and output on semantic and pragmatic levels; situated understanding of possibly imprecise, ambiguous, or incomplete multimodal input; context-sensitive interpretation of dialog interaction on the basis of dynamic discourse and context models; adaptive generation of coordinated, cohesive and coherent multimodal presentations; semi- or fully automatic completion of user-delegated tasks through the integration of information services; intuitive personification of the system through a presentation agent.
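To illustrate mutual disambiguation, here is a minimal, hypothetical sketch: speech and gesture hypotheses are scored jointly so that each modality resolves ambiguity in the other. The hypothesis structures and scores are invented for illustration; SmartKom's real fusion operates on richer typed representations.

```python
# Illustrative mutual disambiguation of speech and gesture hypotheses:
# each modality delivers scored hypotheses, and fusion keeps the pair
# whose semantic types are compatible and whose joint score is highest.

from itertools import product

speech_hyps = [                      # speech understanding hypotheses
    {"intent": "show_info", "object_type": "church", "score": 0.6},
    {"intent": "show_info", "object_type": "cinema", "score": 0.4},
]
gesture_hyps = [                     # pointing-gesture hypotheses
    {"referent": "st_peters_church", "type": "church", "score": 0.5},
    {"referent": "castle_cinema",    "type": "cinema", "score": 0.5},
]

def fuse(speech, gesture):
    """Score every speech/gesture pair; incompatible types get zero."""
    best, best_score = None, 0.0
    for s, g in product(speech, gesture):
        compatible = s["object_type"] == g["type"]
        score = s["score"] * g["score"] if compatible else 0.0
        if score > best_score:
            best, best_score = {"intent": s["intent"], "referent": g["referent"]}, score
    return best

print(fuse(speech_hyps, gesture_hyps))   # {'intent': 'show_info', 'referent': 'st_peters_church'}
```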

11 © W. Wahlster SmartKom-Home on a Portable Webpad: Fujitsu Stylistic™ 3500X, 500 MHz Intel® Celeron™, 10.4" XGA TFT (1024x768 pixels), 256 MB SDRAM, 15 GB shock-mounted disk. Provides electronic program guides (EPG) for TV, controls consumer electronics like VCRs, and accesses standard applications like phone and e-mail. Lean-forward mode: coordinated speech and gesture input; lean-backward mode: voice input alone.

12 © W. Wahlster SmartKom-Mobile: can be added to a car navigation system or carried by a pedestrian. Additional services, such as route planning and interactive navigation through a city, can be accessed via GPS and GSM/UMTS connectivity.

13 © W. Wahlster Multimodal Input and Output in SmartKom: speech, gesture, and facial expressions are used both as input by the user and as output by the presentation agent.

14 © W. Wahlster SmartKom's Data Collection of Multimodal Dialogs. The recording setup includes a side-view camera and a bird's-eye camera on the user, a face-tracking camera with microphone, an environmental-noise microphone, a microphone array, a loudspeaker, a screen with a projected webpage, an LCD beamer, and a SIVIT camera for gesture tracking.

15 © W. Wahlster Combination of Speech and Gesture in SmartKom. User: "Which feature films are shown tonight on TV?" Smartakus: "I show you a survey of tonight's TV films." User: "I can't find anything interesting. Then I'll go to the movies." Smartakus: "Here you see a programme listing of the movies shown in Heidelberg today." User: "This one I would like to see. Where is it shown?" Smartakus: "On this map all movie theatres are highlighted that are showing 'A Little Christmas Story'."

16 © W. Wahlster Multimodal Input and Output in SmartKom. User: "There I would like to get a reservation." Smartakus: "In this movie theatre a reservation is not possible." User: "Then let's check another theatre. What about this one?" Smartakus: "This overview lists all show times for the movie 'A Little Christmas Story' in the movie theatre 'Castle'." User: "Here I would like to get a reservation." Smartakus: "Please show me where you would like to be seated." User: "I would like to get two seats here. Is this okay?" Smartakus: "Sure. I have reserved the seats. Your confirmation number is 635. You can pick up the tickets up to half an hour before the show at the ticket box." User: "Okay. Thank you. Good bye." Smartakus: "Good bye."

17 © W. Wahlster Multimodal Access to Telephony Applications via SmartKom User: I would like to make a call. Smartakus: Please use this phone and dial the requested number. The user operates the virtual push button phone via natural tapping gestures. Smartakus can explain the functionality of the phone and help the user operate the virtual phone by a combination of verbal and gestural input.

18 © W. Wahlster Personalized Interaction with WebTVs via SmartKom (DFKI with Sony, Philips, Siemens). Example: multimodal access to electronic program guides for TV. User: "Switch on the TV." Smartakus: "Okay, the TV is on." User: "Which channels are presenting the latest news right now?" Smartakus: "CNN and NTV are presenting news." User: "Please record this news channel on a videotape." Smartakus: "Okay, the VCR is now recording the selected program."

19 © W. Wahlster Mobile Presentation Unit for SmartKom-Public: 2 Sony DSR-PD100AP video cameras, an LCD beamer (ASK C5), a SIVIT gesture recognition unit with infrared camera, microphones (microphone array), speakers, and 3 dual Pentium III PCs (500 MHz).

20 © W. Wahlster The Need for Personalization: Adaptive Interaction with Mobile Devices. Display capabilities vary widely, e.g. from 60 x 90 pixel b/w screens to 1024 x 768 pixel 24-bit color screens.

21 © W. Wahlster A "Web of Meaning" has more personalization potential than a "Web of Links". Three layers of webpage annotations and their personalization potential: Content (OWL, DAML+OIL): high; Structure (XML): medium; Layout (HTML): low. Cf. Dieter Fensel, James Hendler, Henry Lieberman, Wolfgang Wahlster (eds.): Spinning the Semantic Web, MIT Press, November 2002.

22 © W. Wahlster Personalization: Mapping Web Content Onto a Variety of Structures and Layouts. From the "one-size-fits-all" approach of static webpages to the "perfect personal fit" approach of adaptive webpages: a single content representation (OWL) is mapped onto several structures (XML 1 ... XML n), and each structure onto several layouts (HTML 11 ... HTML 3p).
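A minimal sketch of this content-to-structure-to-layout pipeline, assuming a hypothetical device profile with width and color fields; the record, templates, and function names are illustrative stand-ins for the OWL/XML/HTML layers, not SmartKom code.

```python
# Hypothetical pipeline: one content record (standing in for an OWL
# description) is rendered through a structure template and a layout
# chosen from the device profile.

CONTENT = {"type": "TVBroadcast", "title": "Evening News",
           "channel": "CNN", "start": "20:00"}

STRUCTURES = {
    # Different structures (field selections) over the same content.
    "full":    ["title", "channel", "start"],
    "compact": ["title", "start"],
}

def render(content, device):
    # Pick structure and layout from the device profile.
    fields = STRUCTURES["compact" if device["width"] < 200 else "full"]
    lines = [f"{f}: {content[f]}" for f in fields]
    if device["color"]:
        return "<div class='rich'>" + " | ".join(lines) + "</div>"
    return "\n".join(lines)            # plain layout for small b/w displays

print(render(CONTENT, {"width": 90,   "color": False}))   # small-device style
print(render(CONTENT, {"width": 1024, "color": True}))    # kiosk style
```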

23 © W. Wahlster SmartKom: Towards Multimodal and Mobile Dialogue Systems for Indoor and Outdoor Navigation. Seamless integration of various positioning technologies (GSM/UMTS cells, GPS, infrared, WaveLAN, Bluetooth); the same device is used for driving and walking directions, with speech and gesture input and graphics and speech output.
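A hypothetical sketch of how such a seamless hand-over between positioning sources might look: each source reports a fix with an accuracy estimate, and the navigation component simply uses the most accurate fix currently available. The source names and accuracy figures are rough illustrations, not SmartKom's implementation.

```python
# Illustrative hand-over between positioning sources: outdoors GPS is
# usually the most accurate fix; indoors it drops out and infrared
# beacons or WLAN/Bluetooth cells take over automatically.

from typing import NamedTuple

class Fix(NamedTuple):
    source: str
    lat: float
    lon: float
    accuracy_m: float        # estimated error radius in meters

def best_fix(fixes):
    """Pick the most accurate of the currently available fixes (or None)."""
    available = [f for f in fixes if f is not None]
    return min(available, key=lambda f: f.accuracy_m, default=None)

outdoor = [Fix("gps", 49.24, 6.99, 8.0), Fix("gsm_cell", 49.24, 7.0, 300.0), None]
indoor  = [None, Fix("gsm_cell", 49.24, 7.0, 300.0), Fix("infrared", 49.2401, 6.9902, 2.0)]
print(best_fix(outdoor).source)   # gps
print(best_fix(indoor).source)    # infrared
```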

24 © W. Wahlster Information Booth: Adaptation to Time Pressure. When the user is under time pressure, the presentation switches to a bird's-eye view, runs at a higher speed, and omits the presentation agent.
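A small illustrative sketch of this adaptation policy; the parameter names and the 0.5 threshold are assumptions, not values from the system.

```python
# Illustrative mapping from estimated time pressure to presentation settings.

def presentation_settings(time_pressure: float) -> dict:
    """time_pressure in [0, 1]; 1 means the user is in a great hurry."""
    hurried = time_pressure > 0.5
    return {
        "view": "birds_eye" if hurried else "walkthrough",
        "playback_speed": 1.5 if hurried else 1.0,
        "show_agent": not hurried,      # drop the presentation agent when time is short
    }

print(presentation_settings(0.8))   # fast bird's-eye view without agent
```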

25 © W. Wahlster IRREAL: Indoor Navigation Example. The presentation adapts to the walking speed and the orientation of the user by using a stochastic broadcast protocol.

26 © W. Wahlster Getting Driving and Walking Directions via SmartKom User: I want to drive to Heidelberg. Smartakus: Do you want to take the fastest or the shortest route? User: The fastest. Smartakus: Here you see a map with your route from Saarbrücken to Heidelberg. SmartKom can be used for Multimodal Navigation Dialogues in a Car

27 © W. Wahlster Getting Driving and Walking Directions via SmartKom Smartakus: You are now in Heidelberg. Here is a sightseeing map of Heidelberg. User: I would like to know more about this church! Smartakus: Here is some information about St. Peter's Church. User: Could you please give me walking directions to this church? Smartakus: In this map, I have highlighted your walking route.

28 © W. Wahlster SmartKom: Multimodal Dialogues with a Hybrid Navigation System

29 © W. Wahlster Spoken Navigation Dialogues with SmartKom. User: "SmartKom, please look for the nearest parking lot." Smartakus: "The parking garage at the main station has 300 parking spaces. Opening hours are from 6 am to 11 pm. Do you want to get there?" User: "No, please tell me about the next parking option." Smartakus: "The Market parking lot has 150 parking spaces. It is open 24 hours a day. Do you want to get there?" User: "Yes, please." Smartakus: "I'll bring you to the Market parking lot."

30 © W. Wahlster The High-Level Control Flow of SmartKom (slides 30-43 present a stepwise build-up of the same architecture diagram)

44 © W. Wahlster SmartKom's Added-Value Mobile Service ActiveList. Example request: "Please let me know when I pass a shop selling batteries." SmartKom sends a note to the user or activates an alarm as soon as the user approaches an exhibit or shop that matches the specification of an item on the ActiveList. ActiveList's spatial alarm can be combined with: - route planning and navigation - temporal and spatial optimization of a visit
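A hypothetical sketch of the spatial alarm as a simple proximity check: the client keeps the ActiveList item specifications and fires a notification whenever the current position comes within a trigger radius of a matching point of interest. The data, radius, and planar distance approximation are illustrative assumptions.

```python
# Illustrative ActiveList spatial alarm: match nearby points of interest
# against user-specified items and return alarm messages.

import math

ACTIVE_LIST = [{"wanted": "batteries"}]            # user-specified items
POIS = [
    {"name": "Electronics shop", "sells": {"batteries", "cables"}, "pos": (120.0, 40.0)},
    {"name": "Bakery",           "sells": {"bread"},               "pos": (300.0, 80.0)},
]
TRIGGER_RADIUS = 50.0                              # meters (illustrative)

def check_active_list(position):
    """Return alarm messages for nearby POIs matching ActiveList items."""
    alarms = []
    for item in ACTIVE_LIST:
        for poi in POIS:
            if item["wanted"] in poi["sells"] and math.dist(position, poi["pos"]) <= TRIGGER_RADIUS:
                alarms.append(f"{poi['name']} nearby sells {item['wanted']}.")
    return alarms

print(check_active_list((100.0, 30.0)))   # ['Electronics shop nearby sells batteries.']
```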

45 © W. Wahlster SmartKom's Added-Value Mobile Service SpotInspector. Example request: "What's going on at the castle right now?" SmartKom allows the user to have remote visual access to various interesting spots via a selection of webcams, showing current waiting queues, special events and activities. SpotInspector can be combined with: - multimedia presentations of the expected program for these spots - route planning and navigation to these spots

46 © W. Wahlster SmartKom's Added-Value Mobile Service PartnerRadar. Example requests: "Where are Lisa and Tom? What are they looking at?" SmartKom helps to locate and bring together members of the same party. Involved technologies: - navigation and tour instructions - monitoring of group activity - additional information on exhibits that are interesting for the whole party

47 © W. Wahlster Personalized Car Entertainment (DFKI for Bosch): MP3 music files from the Web (Rist & Herzog for Blaupunkt).

48 © W. Wahlster Research Roadmap of Multimodality 2002-2005 (from the Dagstuhl Seminar "Fusion and Coordination in Multimodal Interaction", 2 Nov. 2001, edited by W. Wahlster). Overall goal for 2005: mobile, human-centered, and intelligent multimodal interfaces. Research tracks: empirical and data-driven models of multimodality, computational models of multimodality, advanced methods for multimodal communication, adequate corpora for MM research, and a multimodal interface toolkit. Milestones include: XML-encoded MM human-human and human-machine corpora; standards for the annotation of MM training corpora; situated and task-specific MM corpora; corpora with multimodal artefacts and new multimodal input devices; collection of the hardest and most frequent/relevant phenomena; examples of the added value of multimodality; multimodal barge-in; markup languages for multimodal dialogue semantics; common representation of multimodal content; models of MM mutual disambiguation; decision-theoretic, symbolic and hybrid modules for MM input fusion; models for effective and trustworthy MM HCI; task-, situation- and user-aware multimodal interaction; multiparty MM interaction; mobile multimodal interaction tools; plug-and-play infrastructure; reusable components for multimodal analysis and generation; a multimodal toolkit for universal access; and toolkits for multimodal systems.

49 © W. Wahlster Research Roadmap of Multimodality 2006-2010 (from the same Dagstuhl Seminar, 2 Nov. 2001, edited by W. Wahlster). Overall goal for 2010: ecological multimodal interfaces. The tracks on empirical and data-driven models of multimodality, advanced methods for multimodal communication, and toolkits for multimodal systems continue with milestones such as: usability evaluation methods for MM systems; testsuites and benchmarks for multimodal interaction; multimodal feedback and grounding; tailored and adaptive MM interaction; incremental feedback between modalities during generation; models of MM collaboration; parametrized models of multimodal behaviour; demonstration of performance advances through multimodal interaction; real-time localization and motion/eye-tracking technology; multimodality in VR and AR environments; resource-bounded multimodal interaction; users' theories of the system's multimodal capabilities; multicultural adaptation of multimodal presentations; affective MM communication; multimodal models of engagement and floor management; non-monotonic MM input interpretation; computational models of the acquisition of MM communication skills; non-intrusive and invisible MM input sensors; and biologically-inspired intersensory coordination models.

50 © W. Wahlster Conclusions: SmartKom is a multimodal dialog system that combines speech, gesture, and facial expressions for both input and output. Spontaneous speech understanding is combined with the video-based recognition of natural gestures. One of the major scientific goals of SmartKom is to design new computational methods for the seamless integration and mutual disambiguation of multimodal input and output on a semantic and pragmatic level. SmartKom is based on the situated delegation-oriented dialog paradigm, in which the user delegates a task to a virtual communication assistant, visualized as a life-like character on a graphical display.

51 © W. Wahlster © 2002 DFKI Design by R.O. Thank you very much for your attention

