Presentation transcript:

Image and Media Understanding Laboratory
Performance Evaluation of Vision-based Real-time Motion Capture
Naoto Date, Hiromasa Yoshimoto, Daisaku Arita, Satoshi Yonemoto, Rin-ichiro Taniguchi
Kyushu University, Japan

Background of Research
Motion capture systems
– Interaction of humans and machines in a virtual space
– Remote control of humanoid robots
– Creating character actions in 3D animations or video games
Sensor-based motion capture systems
– Use special sensors (magnetic type, infrared type, etc.)
– The user's actions are restricted by the attached sensors
Vision-based motion capture systems
– No sensor attachments
– Multiple cameras and a PC cluster

Key Issue
The features that the vision process can acquire are limited.
– Only the head, hands, and feet can be detected robustly.
How to estimate human postures from these limited visual features?
– Three kinds of estimation algorithms
– A comparative study of them

System Overview
[Diagram: person, cameras, PCs, CG model]

System Overview
Using 10 cameras for robust motion capture

System Overview
1 top-view camera on the ceiling

System Overview
9 side-view cameras around the user

System Overview
Using a PC cluster for real-time feature extraction

System Overview
First, take images with each camera

System Overview
Extract image features on the first-stage PCs

System Overview
Reconstruct the human CG model from the feature parameters of each image

System Overview
Synchronized IEEE 1394 cameras: 15 fps

System Overview
CPU: Pentium III 700 MHz × 2
OS: Linux
Network: Gigabit LAN / Myrinet

Top-view camera process
– Background subtraction
– Opening operation
– Feature extraction: inertia principal axis and body direction, which are then transmitted
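
A minimal sketch of this top-view stage, assuming OpenCV and NumPy; the function name and threshold values are illustrative, and the deck does not say how the front/back ambiguity of the body direction is resolved:

```python
import cv2
import numpy as np

def top_view_features(frame, background, thresh=30):
    """Silhouette centroid and inertia principal axis from a top-view frame."""
    # Background subtraction: keep pixels that differ from the background.
    gray = cv2.cvtColor(cv2.absdiff(frame, background), cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)

    # Opening operation: erosion then dilation removes small noise blobs.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

    # Inertia principal axis from second-order central image moments.
    m = cv2.moments(mask, binaryImage=True)
    if m["m00"] == 0:
        return None                      # empty silhouette
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
    angle = 0.5 * np.arctan2(2.0 * m["mu11"], m["mu20"] - m["mu02"])
    return (cx, cy), angle               # centroid and axis orientation
```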

Side-view camera process
– Background subtraction
– Calculate the centroids of skin-color blobs
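
The side-view stage might look like the following sketch, again assuming OpenCV; the HSV skin bounds and the minimum blob area are rough placeholder values, not the deck's:

```python
import cv2
import numpy as np

def skin_blob_centroids(frame, fg_mask, min_area=50):
    """Return image centroids of skin-colored foreground blobs."""
    # Skin-color segmentation in HSV; these bounds are rough guesses.
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    skin = cv2.inRange(hsv, (0, 40, 60), (25, 180, 255))
    skin = cv2.bitwise_and(skin, fg_mask)  # keep foreground pixels only

    # Connected components; each sufficiently large blob yields one centroid.
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(skin)
    return [tuple(centroids[i]) for i in range(1, n)
            if stats[i, cv2.CC_STAT_AREA] >= min_area]
```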

Estimate 3D positions of skin-color blobs
From all combinations of cameras and blob centroids, we form every possible pair of lines of sight and compute the intersection point of each pair. If the distance between the two lines is not smaller than a threshold, we decide that the pair has no intersection point.
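
A sketch of this pseudo-intersection test, using the standard closest-points construction for two 3D lines; the function name, threshold, and units are assumptions:

```python
import numpy as np

def line_pair_intersection(p1, d1, p2, d2, max_dist=0.05):
    """Pseudo-intersection of two lines of sight.

    p1, p2: camera centers; d1, d2: unit direction vectors.
    Returns the midpoint of the common perpendicular, or None when the
    lines pass farther apart than max_dist.
    """
    b = np.dot(d1, d2)
    denom = 1.0 - b * b
    if denom < 1e-9:                     # nearly parallel lines: reject
        return None
    w = p1 - p2
    # Parameters of the closest points on each line.
    t1 = (b * np.dot(d2, w) - np.dot(d1, w)) / denom
    t2 = (np.dot(d2, w) - b * np.dot(d1, w)) / denom
    c1, c2 = p1 + t1 * d1, p2 + t2 * d2  # closest points on each line
    if np.linalg.norm(c1 - c2) >= max_dist:
        return None                      # lines do not (nearly) intersect
    return 0.5 * (c1 + c2)
```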

Estimate 3D positions of skin-color blobs
The calculated points are clustered according to their distances from the feature points (head, hands, feet) of the previous frame. Clusters where the points are dense are selected as the 3D positions of the true feature points.
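
One plausible reading of this clustering step (the deck does not give the exact rule) is a nearest-previous-feature assignment, with each cluster's mean taken as the new position; all names below are illustrative:

```python
import numpy as np

def assign_blobs(points, prev_features):
    """Cluster candidate 3D points by the nearest previous-frame feature."""
    names = list(prev_features)
    centers = np.array([prev_features[n] for n in names])
    # Distance from every candidate point to every previous feature point.
    dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
    nearest = np.argmin(dists, axis=1)
    new_positions = {}
    for i, name in enumerate(names):
        members = points[nearest == i]   # this feature's cluster
        if len(members) > 0:
            # The cluster mean stands in for the true feature position.
            new_positions[name] = members.mean(axis=0)
    return new_positions
```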

Estimate 3D position of torso
A method based on a simple body model [figure: head, right shoulder, torso center point, lines L1 and L2]
– V is the vector that is perpendicular to both the body axis and the body direction.
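
A vector perpendicular to two given vectors is their normalized cross product; the deck's full torso construction from V, L1, and L2 is not recoverable from the transcript, so only that step is sketched:

```python
import numpy as np

def torso_vector(body_axis, body_direction):
    """Vector perpendicular to both the body axis and the body direction."""
    v = np.cross(body_axis, body_direction)
    return v / np.linalg.norm(v)
```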

Performance evaluation of right hand position estimation

Estimate 3D positions of elbows and knees
Three estimation methods:
– Inverse Kinematics (IK)
– Search by Reverse Projection (SRP)
– Estimation with Physical Restrictions (EPR)

Estimate 3D positions of elbows and knees
IK
– The remaining free joint angle is assumed to be a constant
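
One common analytic-IK construction consistent with this slide: given the shoulder and hand positions and the limb lengths, the elbow lies on a circle around the shoulder-hand axis, and holding the swivel angle constant fixes a single position. All names and defaults below are assumptions, not the deck's:

```python
import numpy as np

def elbow_ik(shoulder, hand, upper_len, fore_len, swivel=0.0,
             ref=np.array([0.0, 0.0, 1.0])):
    """Analytic IK: place the elbow on its circle about the shoulder-hand axis."""
    v = hand - shoulder
    d = np.linalg.norm(v)
    axis = v / d
    d = min(d, upper_len + fore_len - 1e-6)     # clamp unreachable targets
    # Law of cosines gives the elbow circle's center and radius.
    a = (upper_len**2 - fore_len**2 + d**2) / (2 * d)
    r = np.sqrt(max(upper_len**2 - a**2, 0.0))
    center = shoulder + a * axis
    # Orthonormal basis of the circle's plane (ref must not be parallel to axis).
    u = ref - np.dot(ref, axis) * axis
    u /= np.linalg.norm(u)
    w = np.cross(axis, u)
    # Holding the swivel angle constant picks one elbow position.
    return center + r * (np.cos(swivel) * u + np.sin(swivel) * w)
```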

Estimate 3D positions of elbows and knees
SRP

Estimate 3D positions of elbows and knees
EPR
– An arm is modeled as two connected springs.
– The outer ends of the springs are fixed at the shoulder position and the hand position.
– The elbow converges to the position where each spring reaches its natural length (the natural lengths are the lengths of the upper arm and the forearm, acquired beforehand).
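
A minimal relaxation sketch of this two-spring model; the step size, iteration count, and initialization are assumptions:

```python
import numpy as np

def elbow_epr(shoulder, hand, upper_len, fore_len, elbow_init,
              step=0.5, iters=100):
    """Relax a two-spring arm model until both springs reach natural length."""
    elbow = elbow_init.astype(float).copy()
    for _ in range(iters):
        force = np.zeros(3)
        # Each spring pulls or pushes the elbow toward its natural length.
        for anchor, rest in ((shoulder, upper_len), (hand, fore_len)):
            delta = elbow - anchor
            length = np.linalg.norm(delta)
            if length > 1e-9:
                # Hooke-style force along the spring, zero at natural length.
                force += (rest - length) * (delta / length)
        elbow += step * force
    return elbow
```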

Accuracy of estimating right elbow position

Accuracy of posture parameters

Visual comparison of the three methods

Computation time required by each algorithm
– Top-view camera processing: 50 ms
– Side-view camera processing: 26 ms
– 3D blob calculation: 2 ms
– IK calculation: 9 ms
– SRP calculation: 34 ms
– EPR calculation: 22 ms

Online demo movie (EPR)

Conclusions
We have constructed a vision-based real-time motion capture system and evaluated its performance.
Future work
– Improvement of the posture estimation algorithm
– Construction of various applications: human-machine interaction in a virtual space, a humanoid robot remote control system