Queen Mary, University of London

Automatic Generation of Multimodal Interaction Models from Behavioural Data
Marie-Luce Bourguet
Computer Science Dept., Queen Mary, University of London

Multimodal Interaction Models
[Architecture diagram: speech recognition, gesture recognition and other recognisers feed a multimodal engine, which drives the application.]

Modeling Multimodal Interaction
[Finite state machine diagram: the pen events "pen down", "pen move", "pen up" and the speech event "remove" drive transitions leading to the commands gesture delete, speech remove, or reset.]
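The finite state machine on this slide can be sketched in code. The following is an illustrative minimal implementation, not the original system: state and command names (`pen_down`, `gesture_delete`, etc.) are assumptions chosen to mirror the diagram, where a pen sequence or the spoken word "remove" each lead to a delete command, and an out-of-sequence event resets the machine.

```python
# Hypothetical sketch of the slide's FSM: a "delete" command is issued
# either by a pen gesture (down, move, up) or by the spoken word
# "remove"; any unexpected event mid-gesture triggers a reset.
class InteractionFSM:
    def __init__(self):
        self.state = "idle"

    def handle(self, event):
        """Advance the FSM on one input event; return a command or None."""
        if self.state == "idle":
            if event == "pen_down":
                self.state = "pen_down"
            elif event == "speech:remove":
                return self._emit("speech_remove")
        elif self.state == "pen_down":
            if event == "pen_move":
                self.state = "pen_move"
            else:
                return self._emit("reset")
        elif self.state == "pen_move":
            if event == "pen_up":
                return self._emit("gesture_delete")
            elif event == "pen_move":
                pass  # stay in this state while the stroke continues
            else:
                return self._emit("reset")
        return None

    def _emit(self, command):
        self.state = "idle"  # any emitted command returns the FSM to idle
        return command

fsm = InteractionFSM()
assert [fsm.handle(e) for e in ["pen_down", "pen_move", "pen_up"]][-1] == "gesture_delete"
assert fsm.handle("speech:remove") == "speech_remove"
```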

Modeling Multimodal Interaction: Synchronisation Patterns
[Three variants of the same finite state machine, each combining the pen events "pen down", "pen move", "pen up" with the speech event "remove" in a different temporal order; all three patterns lead to gesture delete, speech remove, or reset.]
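One way to capture a synchronisation pattern is to test whether the speech token falls inside a time window around the pen stroke. The sketch below is a hedged illustration; the 2-second tolerance and the function name are assumptions, not values from the original work.

```python
# Illustrative synchronisation test: the speech event belongs to the
# same multimodal command as the pen stroke if it overlaps the stroke's
# time interval, extended by a tolerance (an assumed 2 s default).
def synchronised(pen_interval, speech_time, tolerance=2.0):
    """True if the speech event overlaps the pen stroke (+/- tolerance, s)."""
    start, end = pen_interval
    return (start - tolerance) <= speech_time <= (end + tolerance)

assert synchronised((0.0, 1.2), 0.8)        # speech during the stroke
assert synchronised((0.0, 1.2), 2.5)        # speech shortly after
assert not synchronised((0.0, 1.2), 10.0)   # too late: separate commands
```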

Modeling Multimodal Interaction: Mutual Disambiguation
[Two finite state machines side by side: one combines the pen events with the speech event "remove" (gesture delete / speech remove / reset), the other combines them with the speech event "move"; running both in parallel lets the gesture context disambiguate the two confusable words.]
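The idea of mutual disambiguation can be illustrated with a toy filter: an ambiguous speech n-best list (e.g. "move" vs "remove") is pruned by the concurrent gesture evidence, because each command model only accepts hypotheses consistent with its modality context. All names below are hypothetical.

```python
# Illustrative sketch of mutual disambiguation (names hypothetical):
# gesture evidence filters an ambiguous speech n-best list, keeping
# only the hypotheses a matching command model could accept.
def disambiguate(n_best, gesture):
    """Return the best speech hypothesis consistent with the gesture."""
    consistent = {"delete_stroke": {"remove"}, "drag": {"move"}}
    candidates = [w for w in n_best if w in consistent.get(gesture, set())]
    return candidates[0] if candidates else None

# The recognizer cannot decide between "move" and "remove", but the
# concurrent delete gesture rules "move" out.
assert disambiguate(["move", "remove"], "delete_stroke") == "remove"
assert disambiguate(["move", "remove"], "drag") == "move"
```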

Design Tool

Simulation Tool

Automatic Generation of Interaction Models
[Diagram: speech recognition, gesture recognition and other recognisers feed a training module, which generates the interaction models automatically.]

Conclusion
Building interaction models from user interaction data offers several advantages:
- It bypasses the difficult task of designing multimodal interaction models by hand.
- The resulting interaction models are highly personalised.
- They should enable natural and robust interaction.
The FSM-based formalism is well suited, because a very simple analysis of the data is enough to generate the interaction models automatically.
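The "very simple analysis" the conclusion alludes to can be sketched as building an FSM transition table directly from logged event traces, one trace per successfully completed command. This is an assumed reconstruction for illustration only; the trace format and state-naming scheme are not from the original work.

```python
# Hypothetical sketch: derive FSM transitions from behavioural logs.
# Each trace is the event sequence a user produced for one command;
# shared prefixes merge into shared states automatically.
from collections import defaultdict

def build_fsm(traces):
    """traces: list of (event_sequence, command) pairs from user logs."""
    transitions = defaultdict(dict)
    for events, command in traces:
        state = "idle"
        for event in events:
            # Reuse the existing successor state, or create a new one
            # named after the path that reaches it.
            state = transitions[state].setdefault(event, f"{state}>{event}")
        transitions[state]["<end>"] = command  # final state emits the command
    return dict(transitions)

logs = [
    (["pen_down", "pen_move", "pen_up"], "gesture_delete"),
    (["speech:remove"], "speech_remove"),
]
fsm = build_fsm(logs)
assert fsm["idle>pen_down>pen_move>pen_up"]["<end>"] == "gesture_delete"
```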