electronic visualization laboratory, university of illinois at chicago

Designing an Expressive Avatar of a Real Person
9/20/2010
Sangyoon Lee, Gordon Carlson, Steve Jones, Andrew Johnson, Jason Leigh, and Luc Renambot
University of Illinois at Chicago

Outline
Background / Introduction
Related Work
Lifelike Responsive Avatar Framework
Design of an Avatar
Pilot Study
Conclusion

Background / Introduction
An avatar, a human-like computer interface, has been actively studied in various areas and is becoming more and more prevalent
As advanced technology spreads, lifelike avatars are becoming capable of increasingly natural interaction
However, developing such a realistic avatar is a non-trivial process in many respects (e.g. it requires a fair amount of manual intervention and/or high-fidelity hardware)

Background / Introduction
Our proposed framework and design method reduce this initial barrier and are applicable to various domains
– We focus especially on the expressiveness of an avatar and its realistic visualization
– We developed prototype applications and an avatar of a specific person
– A pilot study confirmed that our method is partially successful in conveying expressions

Related Work
Effectiveness of avatars: better in subjective responses, but not much gain in performance (Yee 07)
In contrast, a more natural agent model showed better results on both measures (Bickmore 05, 09)
For expressive avatars, various graphics techniques contribute to the subjective certainty and quality of expression conveyance (Wallraven 05, 06, Courgeon 09, de Melo 09)

Lifelike Responsive Avatar Framework

Lifelike Responsive Avatar Framework
A speech-enabled lifelike computer interface
Supports the various aspects necessary to realize natural interaction between a user and the computer (the avatar)
The framework is written in C++ / Python
The framework's underlying modules rely on several open-source libraries (e.g. graphics and speech)

System Architecture

Application Logic Control
Two separate methods can be used to control avatar behavior from external logic
– Network communication via TCP
– Python bindings to the LRAF activity manager

Python Binding Example
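The code listing for this slide did not survive transcription. A minimal sketch of what a Python binding to an activity manager could look like; the class and method names (ActivityManager, set_expression, speak) are illustrative stand-ins, not the real LRAF API:

```python
# Hypothetical sketch of driving avatar behavior through Python bindings.
# A stand-in class is used here so the example is self-contained; in LRAF
# the activity manager would be exposed from the C++ side.

class ActivityManager:
    """Stand-in for the activity manager object exposed to Python."""
    def __init__(self):
        self.log = []  # record of issued avatar commands

    def set_expression(self, name, weight):
        self.log.append(("expression", name, weight))

    def speak(self, text):
        self.log.append(("speak", text))

def greet(manager, visitor_name):
    """Application-logic script layered on top of the binding."""
    manager.set_expression("Smile", 1.0)
    manager.speak(f"Hello, {visitor_name}. Nice to meet you.")

if __name__ == "__main__":
    mgr = ActivityManager()
    greet(mgr, "Alex")
    print(mgr.log[0])  # -> ('expression', 'Smile', 1.0)
```

Compared with the TCP route, in-process bindings avoid serialization and let scripts manipulate framework objects directly.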

Fullbody Motion Synthesis
Motion acquisition via an optical motion capture system
Constructed an action-based motion DB
Semi-Deterministic Hierarchical Finite State Machine (SDHFSM)
– Enables random selection of a motion within a pool of the motion DB (categorized motion clips)
– Minimizes monotonous repetition of motion

SDHFSM Model
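The semi-deterministic selection idea behind the SDHFSM can be sketched as follows: transitions between states are deterministic, but the clip played inside a state is drawn at random from that state's pool, never repeating the previous clip. State and clip names below are invented for illustration:

```python
import random

# Sketch of per-state random clip selection, as in an SDHFSM: each state
# (e.g. "idle", "talking") owns a pool of categorized motion clips, and
# picking a clip avoids an immediate repeat to reduce monotonous motion.

class MotionState:
    def __init__(self, name, clips):
        self.name = name
        self.clips = list(clips)
        self._last = None  # last clip played in this state

    def next_clip(self, rng=random):
        """Pick a random clip from the pool, never the same one twice in a row."""
        candidates = [c for c in self.clips if c != self._last] or self.clips
        self._last = rng.choice(candidates)
        return self._last

if __name__ == "__main__":
    idle = MotionState("idle", ["idle_sway", "idle_shift", "idle_look"])
    seq = [idle.next_clip() for _ in range(10)]
    # No two consecutive clips repeat:
    assert all(a != b for a, b in zip(seq, seq[1:]))
```

The hierarchical part would layer such states under parent states (e.g. a "conversing" superstate containing "listening" and "talking"), with deterministic transitions driven by the application logic.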

Facial Expression Synthesis
Blendshape-based weight-parametric model
A total of 40 shape models are used
– 7 base emotions, 16 modifiers, 16 visemes
Three-phase blending
– Emotional state changes
– Pseudo-random expressions (e.g. blinking)
– Lip synchronization to speech (TTS, wave)
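The three-phase blending can be sketched as a per-morph-target combination of weight channels. The slides do not specify the combination rule, so the sum-then-clamp below is an assumption; the channel names follow the parameters listed later in the pilot-study slides (Smile, BlinkLeft, Phoneme B, ...):

```python
# Sketch of blendshape weight blending across the three phases: emotional
# state, pseudo-random expressions (blinking), and lip-sync visemes. Each
# phase contributes a {morph_target: weight} dict; contributions are summed
# per target and clamped to [0, 1]. The combination rule is an assumption.

def blend_phases(*phases):
    """Merge per-phase weight dicts into final blendshape weights."""
    weights = {}
    for phase in phases:
        for target, w in phase.items():
            weights[target] = weights.get(target, 0.0) + w
    # Clamp each accumulated weight into the valid [0, 1] range.
    return {t: min(1.0, max(0.0, w)) for t, w in weights.items()}

if __name__ == "__main__":
    emotion = {"Smile": 1.0, "BrowUpLeft": 0.2, "BrowUpRight": 0.1}
    random_expr = {"BlinkLeft": 0.2, "BlinkRight": 0.1}
    lipsync = {"Phoneme B": 0.65}
    final = blend_phases(emotion, random_expr, lipsync)
    print(final["Smile"])  # -> 1.0
```

The final weight vector would then drive the 40 morph targets of the head model each frame.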

Facial Expression Samples

Design of an Avatar
The target person: Dr. Alex Schwarzkopf
An avatar of Dr. Alex Schwarzkopf

In Brief
The head model is generated from photos of the target person using the FaceGen software
– 40 blendshape morph targets are available for facial expression
Skeleton-based full-body skin mesh model
– Modeled in a commercial modeling tool (Maya)
– Physically matched to the target person
– 70 bones rigged to the body meshes

Skin Texture Refinement
A high-resolution photo-based texture map greatly enhances rendering quality
– Default texture: 512 x 512 resolution
– Photo-based texture: 4k x 4k resolution

Skin Texture Refinement
A high-resolution photo-based texture map greatly enhances rendering quality
– Rendering result with the default texture
– Rendering result with the photo-based texture

Face Normal Map
The most effective way to preserve skin details (e.g. skin pores and shallow wrinkles)
– Rendering result with the diffuse texture only
– Rendering result with diffuse and normal textures

Reflective Material Shader

Pilot Study

Overview
Can we identify emotions correctly in both the human and the avatar?
Ekman's six classic emotions were used
Three pairs of images for each emotion
n = 1,744 (online survey of university students)
– Nearly even split of gender (864 vs. 867)
– Ages from 18 to 64 (mean 23.5)

Sample Preparation
Photographed the target person's expressions
Adjusted the avatar's expression parameters to match
Parameters: Smile (1.0), BlinkLeft (0.2), BlinkRight (0.1), BrowUpLeft (0.2), BrowUpRight (0.1), and Phoneme B (0.65)

Sample Emotions

Sample Emotions

Results
[Confusion matrix: presented emotion (Anger, Disgust, Fear, Happiness, Sadness, Surprise) vs. perceived emotion, with paired human / avatar recognition percentages per cell; the numeric values were lost in transcription]

Summary
Two emotions (Happiness, Sadness) were recognized successfully in both cases
The other four emotions showed mixed results
– The chosen person's expressions may not be prototypical
– Reading human emotion is not a trivial task
– A context-less still image may not be enough to convey emotion

Conclusion
Demonstrated our avatar framework and design method for a specific person
Our implementation is partially capable of successfully conveying human emotions
Future work
– Better visualization needs to be studied
– The temporal aspect of expression
– Context sensitivity in expression
– Task-oriented performance measures

Thank you!
More information on the project website
* This project is supported by NSF, awards CNS , CNS
We would also like to thank all co-authors!