Observing Lip and Vertical Larynx Movements During Smiled Speech (and Laughter) - work in progress - Sascha Fagel, Jürgen Trouvain, Eva Lasarcyk

Observing Lip and Vertical Larynx Movements During Smiled Speech (and Laughter) - work in progress - Sascha Fagel¹, Jürgen Trouvain², Eva Lasarcyk² (¹Berlin Institute of Technology, ²Saarland University)

2 Outline
– Motivation
– Speech & laughter corpus
– 3D Motion Capture of the face
– Image processing of the larynx region
– Sparse preliminary results

3 Previous Findings
– smiling in speech can be identified auditorily
– synthetic speech with a raised larynx is identified as more smiled
– larynx lowering can compensate for reduced lip rounding
– conflict between reaching the linguistic requirements for rounded vowels and the para-linguistic signaling of smiling

4 Goal
– differences in lip rounding/spreading between neutral and smiled speech
– differences in vertical larynx position between neutral and smiled speech
– co-production of lip spreading/rounding and vertical larynx movements

5 Corpus
controlled material:
– sustained isolated vowels [i: y: a: u:], each in three conditions:
  i) neutral
  ii) with slightly retracted mouth corners
  iii) with maximally retracted mouth corners
– plus swallowing and inhaling
spontaneous material:
– laughter

6 3D Motion Capture
– synchronous video recordings from different viewpoints (+ audio)
– camera calibration
– tracking of feature points over time
– triangulation of 3D points from multiple 2D views
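The last step, triangulating a 3D point from its tracked 2D positions in several calibrated views, can be sketched with the standard linear (DLT) method. This is a generic illustration, not the authors' implementation; the function name and the use of NumPy are my assumptions.

```python
import numpy as np

def triangulate(projections, points_2d):
    """Linear (DLT) triangulation of one 3D point from its observations
    in several calibrated views.

    projections -- list of 3x4 camera projection matrices
    points_2d   -- list of (u, v) pixel observations, one per view
    """
    rows = []
    for P, (u, v) in zip(projections, points_2d):
        # Each view contributes two linear constraints on the homogeneous
        # point X: u*(P[2]@X) = P[0]@X and v*(P[2]@X) = P[1]@X.
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    # The homogeneous solution is the right singular vector of the
    # stacked system with the smallest singular value.
    _, _, vt = np.linalg.svd(np.stack(rows))
    X = vt[-1]
    return X[:3] / X[3]
```

With exact observations from two or more views, the stacked system has a one-dimensional null space and the point is recovered exactly; with noisy tracks, the same code returns the linear least-squares estimate.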

7 Experimental Setup
– larynx region (1) with black background (2)
– left (3), center (4), and right (5) views of the face

8 Multiple View Recordings
– 4 × DragonflyExpress (Point Grey Research)
– 30 (60) fps = 33 (17) ms per frame
– VGA resolution = 640 × 480 pixels, full frame

9 Camera Calibration
– calibration object with known marker positions
– system demo
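One standard way to exploit a calibration object with known marker positions is the direct linear transform (DLT): each 3D marker and its observed image position give two linear equations in the twelve entries of the camera's projection matrix. The sketch below is an assumed illustration of that idea (the slides do not state the calibration algorithm used); the function name is mine and NumPy is assumed.

```python
import numpy as np

def calibrate_dlt(points_3d, points_2d):
    """Estimate a 3x4 camera projection matrix by the direct linear
    transform, given >= 6 known 3D marker positions (not all coplanar)
    and their observed 2D image coordinates."""
    rows = []
    for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
        Xh = [X, Y, Z, 1.0]
        # Two equations per marker, linear in the 12 entries of P.
        rows.append([*Xh, 0.0, 0.0, 0.0, 0.0, *[-u * c for c in Xh]])
        rows.append([0.0, 0.0, 0.0, 0.0, *Xh, *[-v * c for c in Xh]])
    _, _, vt = np.linalg.svd(np.asarray(rows))
    # Null-space vector of the stacked system, reshaped to 3x4
    # (defined only up to scale).
    return vt[-1].reshape(3, 4)
```

The recovered matrix reprojects the calibration markers onto their observed image positions; in practice a nonlinear refinement step and lens-distortion model would follow.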

11 Data Analysis
– 3D reconstruction
– derived measures
– motion analysis
– statistical analysis
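As an illustration of "derived measures", lip spreading could be computed as the per-frame distance between the reconstructed mouth-corner markers, and protrusion as the forward displacement of the lip markers relative to a rest frame. The slides do not define these measures; the functions below are a hypothetical sketch under that assumption, using NumPy.

```python
import numpy as np

def lip_spreading(left_corner, right_corner):
    """Per-frame Euclidean distance between the two mouth-corner
    trajectories, each given as a (T, 3) array of 3D positions."""
    d = np.asarray(left_corner) - np.asarray(right_corner)
    return np.linalg.norm(d, axis=1)

def protrusion(lip_markers, rest_frame):
    """Mean forward (z-axis) displacement of a set of lip markers,
    given (T, M, 3) trajectories and an (M, 3) rest configuration."""
    dz = np.asarray(lip_markers)[:, :, 2] - np.asarray(rest_frame)[:, 2]
    return dz.mean(axis=1)
```

Both return one value per frame, so they can be compared across the neutral and retracted conditions or entered directly into the statistical analysis.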

12 Data Analysis

13 Vertical Larynx Position
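For the image processing of the larynx region mentioned in the outline, one simple way to extract a vertical trajectory from the black-background larynx view is to cross-correlate row-intensity profiles between frames. This is a sketch of one possible approach, not the authors' method; NumPy is assumed.

```python
import numpy as np

def vertical_shift(ref_frame, cur_frame):
    """Estimate the vertical displacement (in pixel rows, positive =
    downward) between two grayscale frames of the larynx region by
    cross-correlating their mean-removed row-intensity profiles."""
    ref = ref_frame.mean(axis=1) - ref_frame.mean()
    cur = cur_frame.mean(axis=1) - cur_frame.mean()
    corr = np.correlate(cur, ref, mode="full")
    # Index of the correlation peak, converted to a signed lag.
    return int(np.argmax(corr)) - (len(ref) - 1)
```

Note that such a measure captures the motion of the whole region, so it would still face the problem raised on the next slide: separating larynx movement from simultaneous head and thorax movement.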

14 Sparse Preliminary Results
[simulation stills: /u/ neutral, /u/ slightly retracted, /u/ maximally retracted]

15 Sparse Preliminary Results
Eyeballing - larynx:
– Adam's apple not sufficiently visible
– simultaneous head, thorax and larynx movements: what is the vertical larynx position?
Eyeballing - face:
– less protrusion and more mouth corner retraction in rounded vowels during mechanical smiles
– inner lip contour hardly predictable from the outer lip contour
– protrusion strategy changes with lip retraction

16 Future Work
– achieve final results
– spontaneous material:
  – smiled speech
  – speech with an induced smile (interview method)
  – speech in interaction

17 ?