Conversational Applications Workshop Introduction Jim Larson

W3C started with the Speech Interface Framework
[Diagram: components include a Speech Recognizer, DTMF Tone Recognizer, Dialog Manager, Speech Synthesizer, Prerecorded Audio Player, and Telephone System connecting the User to the World Wide Web; the languages shown are SRGS 1.0, Semantic Interpretation 1.0, SSML 1.0, PLS 1.0, VoiceXML 2.0/2.1, and CCXML 1.0]
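
To make the framework concrete, the following is a minimal VoiceXML 2.1 sketch (the grammar file name, field name, and prompt wording are invented for this illustration) showing how the dialog manager ties the other components together: the prompt is rendered by the speech synthesizer and may contain SSML markup, the grammar is an SRGS 1.0 document, and the recognizer's Semantic Interpretation result fills the field.

  <?xml version="1.0" encoding="UTF-8"?>
  <vxml version="2.1" xmlns="http://www.w3.org/2001/vxml">
    <form id="order">
      <field name="drink">
        <!-- SRGS 1.0 grammar; drinks.grxml is a hypothetical file -->
        <grammar src="drinks.grxml" type="application/srgs+xml"/>
        <!-- Prompt sent to the speech synthesizer; SSML elements such as emphasis are allowed -->
        <prompt>Would you like <emphasis>coffee</emphasis> or tea?</prompt>
        <filled>
          <!-- The field variable now holds the Semantic Interpretation result from the recognizer -->
          <prompt>You said <value expr="drink"/>.</prompt>
        </filled>
      </field>
    </form>
  </vxml>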

Next came the W3C Multimodal Interaction Framework

[Diagram: Input Components]

[Diagram: Output Components]

World Wide Web Consortium Standardizes Languages
Voice Browser Working Group
– VoiceXML 2.0 & 2.1
– Speech Recognition Grammar Specification 1.0
– Speech Synthesis Markup Language 1.1
– Semantic Interpretation for Speech Recognition 1.0
– Pronunciation Lexicon 1.0
– Call Control XML 1.0
– State Chart XML 1.0 (a minimal sketch follows this list)
Multimodal Interaction Working Group
– Multimodal Architecture and Interfaces 1.0
– Extensible MultiModal Annotation (EMMA) 1.0
– Emotion Markup Language 1.0
– InkML 1.0
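
For readers who have not yet seen the newest Voice Browser deliverable, the sketch below is a minimal State Chart XML 1.0 document (the state and event names are invented) illustrating the kind of flow control it standardizes: a welcome state hands control to an order-collection state and then to a final state.

  <?xml version="1.0" encoding="UTF-8"?>
  <scxml xmlns="http://www.w3.org/2005/07/scxml" version="1.0" initial="welcome">
    <!-- Hypothetical three-step flow driven by application events -->
    <state id="welcome">
      <transition event="done.welcome" target="collectOrder"/>
    </state>
    <state id="collectOrder">
      <transition event="done.order" target="goodbye"/>
    </state>
    <final id="goodbye"/>
  </scxml>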

Goal of this Workshop
Advise the W3C Voice Browser and Multimodal Interaction Working Groups what to do next to better enable conversational voice systems
Identify and justify new languages, for example:
– Context-Sensitive Grammar Language
– Statistical Markup Language
– Semantic Representation Language
Identify and justify extensions to existing languages, for example:
– PLS 1.0: parts of speech, grammatical features (see the sketch after this list)
– SRGS 1.1: Boolean constraints
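
To make the PLS bullet concrete: PLS 1.0 can already list alternative pronunciations, and its generic role attribute can carry a part-of-speech hint, but there is no standard vocabulary for parts of speech or other grammatical features. The sketch below uses an invented mypos namespace to mark the present-tense and past-tense pronunciations of "read"; a standardized way to express exactly this kind of distinction is what the proposed extension would add.

  <?xml version="1.0" encoding="UTF-8"?>
  <lexicon version="1.0" xmlns="http://www.w3.org/2005/01/pronunciation-lexicon"
           xmlns:mypos="http://www.example.org/pos"
           alphabet="ipa" xml:lang="en-US">
    <!-- mypos is a made-up part-of-speech vocabulary; PLS 1.0 leaves its meaning to the application -->
    <lexeme role="mypos:present">
      <grapheme>read</grapheme>
      <phoneme>ɹiːd</phoneme>
    </lexeme>
    <lexeme role="mypos:past">
      <grapheme>read</grapheme>
      <phoneme>ɹɛd</phoneme>
    </lexeme>
  </lexicon>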

Not the Goal of this Workshop
Do not specify architectures
– Languages should work under multiple architectures
– Vendors are free to design their own architectures to support W3C languages
Do not specify language details
– That is the responsibility of the W3C working groups
– Take care to avoid IP issues
We may need to discuss architectures and language details to provide context for the use of a new language and to explain its purpose

To justify a new language or language extension
Explain what new applications are enabled by the language
– Use cases
– Concrete examples
Identify existing implementations of the language
– Demonstrate that it is implementable and useful
– Demonstrate real interest in the language among vendors

Prioritize new languages and language extensions
– Must have, should have, nice to have

Workshop Deliverables
Summary of discussions
– Minute takers send minutes to Kazuyuki, who will integrate them into a web page
Document listing new languages and extensions to existing languages
– Brief description
– Use cases and concrete examples
– Justification
– Existing implementations

Our agenda
First day
– Identify suggestions for new languages and extensions to existing languages by reviewing position papers
Second day
– Justify each new language and extension to an existing language
  - Brief description (one paragraph)
  - Use cases
  - Identify existing implementations
– Prioritize recommendations