Break-out Group D: Research Issues in Multimodal Interaction

What are the different types? Speech, haptics, gesture, deictic, head and eye movement, EEG (electroencephalograms), physiological measurements.
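The taxonomy above can be captured as a small data structure. This is a hypothetical sketch (the `Modality` enum and `describe` helper are not from the original slides), just to show how interaction events might be tagged by source modality:

```python
from enum import Enum, auto

class Modality(Enum):
    """Hypothetical taxonomy of the input modalities listed above."""
    SPEECH = auto()
    HAPTICS = auto()
    GESTURE = auto()
    DEICTIC = auto()          # pointing gestures
    HEAD_MOVEMENT = auto()
    EYE_MOVEMENT = auto()
    EEG = auto()              # electroencephalogram signals
    PHYSIOLOGICAL = auto()    # e.g., heart rate, skin conductance

def describe(modality: Modality) -> str:
    """Human-readable label for a modality, derived from the enum name."""
    return modality.name.lower().replace("_", " ")
```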

What has been done so far? Semantic fusion of information (speech and gesture). Preliminary efforts on which types of modalities to integrate; this is application dependent. Standardization is needed at the level of devices and of the types of information to be fused.
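Speech-and-gesture fusion of the kind mentioned above is often illustrated with "put that there" style interaction. The following is a minimal late-fusion sketch, assuming hypothetical `SpeechEvent`/`PointEvent` records: each deictic word in the utterance is resolved against the temporally closest pointing event inside a time window. It is an illustration of the idea, not any particular system's method:

```python
from dataclasses import dataclass

@dataclass
class SpeechEvent:
    text: str   # recognized utterance, e.g. "put that there"
    t: float    # timestamp in seconds

@dataclass
class PointEvent:
    target: str  # object or location the user pointed at
    t: float

def fuse(speech: SpeechEvent, points: list, window: float = 1.0):
    """Resolve each deictic word ('this', 'that', 'here', 'there') against
    the pointing event closest in time, within `window` seconds."""
    resolved, available = [], list(points)
    for word in speech.text.split():
        if word in ("this", "that", "here", "there") and available:
            nearest = min(available, key=lambda p: abs(p.t - speech.t))
            if abs(nearest.t - speech.t) <= window:
                resolved.append((word, nearest.target))
                available.remove(nearest)
    return resolved
```

For example, fusing the utterance "put that there" at t=0.5 s with pointing events at a red cube (t=0.2 s) and the table (t=0.8 s) pairs "that" with the cube and "there" with the table.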

Open Research Problems: Should we stay with current paradigms or invent new methodologies? There is no unifying framework for interaction in terms of devices/semantic integration. This is due to the lack of general-purpose applications; we see only specific applications, e.g., simulation and medical training.

Open Research Questions: How to deal with specific tasks in terms of fusing channels? How should channels be fused? How to handle transitions between tasks, e.g., manipulation vs. locomotion? We need more experimentation and a theory of where VR is needed.
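One way to frame the task-transition question above is as a state machine whose current state (manipulation, locomotion, etc.) determines which fusion rules apply. This is a hypothetical sketch; the states and triggering events are invented for illustration:

```python
# Hypothetical task states and the events that switch between them.
TRANSITIONS = {
    ("locomotion", "grab"): "manipulation",
    ("manipulation", "release"): "locomotion",
    ("locomotion", "speak_menu"): "system_control",
    ("system_control", "dismiss"): "locomotion",
}

def step(state: str, event: str) -> str:
    """Return the next task state; unknown events leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)
```

A fusion engine could then look up per-state rules, e.g., ignoring locomotion gestures while in the manipulation state.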

Open Research Questions: Formal study of tasks within applications (e.g., manipulation, selection, navigation, changing of attributes, numerical input). More research is needed on output; so far it is mostly visual and aural.

First Breakout Group: Taxonomy; semantics; cross-modal representations (actions/perceptions).

Applications/Output Group, Second Meeting: New issues discussed in the afternoon.

DM: Third Breakout Group: Applications/Output. Human perception of the environment; integration with input; relationships to basic principles.

Human Perceptive Abilities: Vision technology: limitations in terms of lighting and real-time rendering. Limitations for other channels: haptics, audio, olfaction, taste. The type/mix of output depends on the application; this is related to the internal representation.

Continued: The issue of using many modalities to offset the limitations of each modality. –Right now we do not have enough research data to support that. Do we need to represent the environment exactly or not? –Application dependent.

Continued: Abstraction vs. exact representation. –Application dependent. Exact physical simulation vs. fake physics: is it acceptable to fool the user or not? –Probably application and technology dependent.

Other Human Perceptive Modalities: Olfaction and taste: very little research. Some modalities are better understood than others (e.g., visual vs. haptic or olfactory).

Continued (Summary). Big issues: –Sensory substitution –Level of detail (variable resolution) –Sampled vs. synthetic generation –Online or offline computation –Preservation (or not) of individuality, e.g., two people with different senses of taste or heat –Higher-level emotional augmentation.

Integration with Input: Haptics is the most widely used output sense that is also used for input. –Head orientation, whole-body position, and eye gaze are also used. Some output must be tightly coupled to input (it operates at the physical level) –e.g., head motion to view changes, 3D audio.
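The tight coupling of head motion to view changes amounts to recomputing the view direction from the tracked head pose every frame. A minimal sketch, assuming a hypothetical yaw/pitch head-tracking convention (yaw about the vertical axis, pitch about the lateral axis):

```python
import math

def view_direction(yaw_deg: float, pitch_deg: float):
    """Convert tracked head orientation (degrees) into a unit view vector.
    yaw = 0, pitch = 0 looks straight down the +z axis."""
    yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
    return (math.cos(pitch) * math.sin(yaw),
            math.sin(pitch),
            math.cos(pitch) * math.cos(yaw))
```

In a real system this update runs inside the render loop at display rate, since any lag between head input and visual output is directly perceptible.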

Integration with Input (cont.): Eye-gaze-based control requires some interpretation. Intentional vs. unintentional movement –when is a gesture a gesture?
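The "when is a gesture a gesture?" question is often attacked with simple segmentation heuristics before any recognition runs. A hypothetical threshold-based sketch (the speed threshold and minimum duration are invented parameters): sustained fast hand motion is treated as intentional, while short bursts are dismissed as unintentional movement.

```python
def segment_gestures(speeds, threshold=0.5, min_len=3):
    """Return (start, end) frame index pairs for runs of at least `min_len`
    consecutive frames whose hand speed exceeds `threshold`; shorter
    bursts are dismissed as unintentional movement."""
    gestures, start = [], None
    for i, s in enumerate(speeds):
        if s > threshold and start is None:
            start = i                      # a candidate gesture begins
        elif s <= threshold and start is not None:
            if i - start >= min_len:
                gestures.append((start, i))  # long enough: keep it
            start = None                     # too short: discard
    if start is not None and len(speeds) - start >= min_len:
        gestures.append((start, len(speeds)))
    return gestures
```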

Relationship to Basic Principles: Mapping semantics to output. –One or multiple representations for all modalities, e.g., language and visual output, where a common representation gets translated differently for each output. –Spatio-temporal synchronization. –Cross-modal representation (actions/perceptions). Account for individual differences.
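The common-representation idea above can be sketched as one semantic event that each modality renders in its own way. This is an invented illustration (the event fields and per-modality output strings are hypothetical), not a description of any actual system:

```python
# One shared semantic representation of a world event.
EVENT = {"action": "door_open", "location": "north wall"}

def render(event: dict, modality: str) -> str:
    """Translate the same semantic event into modality-specific output."""
    if modality == "speech":
        return f"The door on the {event['location']} is now open."
    if modality == "visual":
        return f"animate(door@{event['location']}, 'open')"
    if modality == "haptic":
        return "pulse(controller, duration=0.2)"
    raise ValueError(f"unsupported modality: {modality}")
```

Spatio-temporal synchronization then reduces to issuing the per-modality renderings of the same event with aligned timestamps.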

Future Paper Topics: All the previously mentioned open problems. Short-term: –Update of the NRC report on modalities. Medium-term: –Modeling, coupling, and output of modalities. –In particular, modeling smell and taste.

Future Paper Topics: Long-term: –Further modeling and coupling. –Advanced display technology. –Personalization of output.