Hand Gesture Recognition System for HCI and Sign Language Interfaces
Cem Keskin, Ayşe Naz Erkan, Furkan Kıraç, Özge Güler, Lale Akarun



The System Overview
A real-time gesture recognition system:
–Marker-based hand segmentation
–3D from stereo vision using two webcams
–Support for hand posture recognition
–Modules to:
 –register the markers
 –train the system
 –calibrate the cameras
 –choose HCI commands to trigger for gestures
 –support tracking of two hands

Methodology
–Camera Calibration: special calibration object, using a least-squares approach
–Marker Segmentation: hue-based connected components using double thresholding
–3D Reconstruction: 3D reconstruction from stereo vision with a least-squares approach
–Vector Quantization: 15 codewords symbolizing 3D spatial motion
–Gesture Modeling: Left-Right HMMs for gesture modeling
–Gesture Spotting: a dynamic threshold HMM to spot meaningful gestures
–Gesture Training: Baum-Welch algorithm to estimate HMM parameters
Pipeline: Right Image / Left Image → Marker Segmentation → 2D Kalman Filter → 3D Reconstruction → 3D Kalman Filter → Vector Quantizer → Gesture Spotter → Application
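The vector-quantization step can be sketched as follows: each 3D motion vector between consecutive tracked hand positions is snapped to the nearest of 15 codeword directions, producing the discrete symbol sequence that the HMMs consume. The codebook below is an illustrative stand-in, not the system's actual codebook.

```python
import math

# Illustrative codebook of 15 motion codewords: 14 directions plus a
# "no motion" symbol. The real system's codewords are not given here.
CODEBOOK = [
    (1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1),
    (1, 1, 0), (1, -1, 0), (-1, 1, 0), (-1, -1, 0),
    (1, 0, 1), (1, 0, -1), (-1, 0, 1), (-1, 0, -1),
    (0, 0, 0),  # "no motion" codeword
]

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v) if n > 1e-6 else (0.0, 0.0, 0.0)

def quantize(motion):
    """Return the index of the codeword nearest to the motion vector."""
    m = normalize(motion)
    dists = [sum((a - b) ** 2 for a, b in zip(m, normalize(c)))
             for c in CODEBOOK]
    return dists.index(min(dists))

def to_symbols(trajectory):
    """Turn a 3D trajectory into the codeword sequence fed to the HMMs."""
    return [quantize(tuple(b - a for a, b in zip(p, q)))
            for p, q in zip(trajectory, trajectory[1:])]
```

For example, `to_symbols([(0, 0, 0), (1, 0, 0), (1, 1, 0)])` maps the two motion steps to the "+x" and "+y" codewords of this toy codebook.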

Test Results
We train the system with 8 3D gestures. We bind the gestures to commands of a third-party Windows painting application and test the recognition rates.
Per-gesture results table (columns: # States, # Training, # Trials, # Correct, # Wrong, Rate %, Tool; the numeric entries were not preserved) for the gestures Zoom, Arrow, Freehand, Eraser, Brush, Selection, Draw Object and Draw Line.
With 2 misclassifications out of 160 trials, the system yields a recognition performance of 98.75%.
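Classification in these tests amounts to scoring the quantized motion sequence under each gesture's left-right HMM (trained with Baum-Welch) and picking the highest-scoring model. A minimal scaled forward-algorithm sketch, with toy parameters rather than the trained models:

```python
import math

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a codeword sequence under an HMM.
    pi: initial state probabilities, A: transition matrix,
    B: emission matrix. Scaled at each step to avoid underflow."""
    n = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    loglik = 0.0
    for t in range(1, len(obs)):
        s = sum(alpha)
        if s <= 0:
            return float("-inf")
        loglik += math.log(s)
        alpha = [x / s for x in alpha]
        alpha = [B[j][obs[t]] * sum(alpha[i] * A[i][j] for i in range(n))
                 for j in range(n)]
    s = sum(alpha)
    return loglik + math.log(s) if s > 0 else float("-inf")

def classify(obs, models):
    """models: dict gesture_name -> (pi, A, B); return best gesture."""
    return max(models, key=lambda g: forward_loglik(obs, *models[g]))
```

A left-right topology is obtained simply by zeroing the entries of `A` below the diagonal; the spotting step would additionally compare the winning score against a threshold model.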

Improved Hand Tracking and Gesture Recognition with Posture Information
Advanced Hand Tracking:
–Aim: robust hand tracking without using markers
–Considerations: different color spaces for the connected-components algorithm
 –RGB, normalized RGB, RGB ratios, HSI, TSL, LUX, CIE L*a*b* and CIE L*u*v*
–Mean-shift segmentation
 –A kernel-based density-estimation technique for detecting and clustering skin color
–Particle filters
 –A method that approximates a solution to the generalized tracking problem
 –Similar to a genetic algorithm
 –Overcomes restrictions of the Kalman filter
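The particle filter mentioned above can be sketched as a predict-weight-resample loop. The sketch below tracks a single scalar (say, one hand coordinate) with illustrative Gaussian motion and measurement models, not the presentation's actual models:

```python
import math
import random

def particle_filter_step(particles, measurement, motion_std=1.0, meas_std=2.0):
    """One predict-weight-resample cycle over a list of particle positions."""
    # Predict: diffuse each particle with the (assumed) motion model.
    predicted = [p + random.gauss(0.0, motion_std) for p in particles]
    # Weight: likelihood of the measurement given each particle.
    weights = [math.exp(-((p - measurement) ** 2) / (2 * meas_std ** 2))
               for p in predicted]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # Resample: draw particles in proportion to their weights.
    return random.choices(predicted, weights=weights, k=len(particles))

def estimate(particles):
    """Point estimate of the tracked state: the particle mean."""
    return sum(particles) / len(particles)
```

Unlike the Kalman filter, nothing here requires the motion or measurement models to be linear or the posterior to be Gaussian, which is the restriction the slide alludes to.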

Hand Shape Recognition Module
A module to identify the static pose of the hand:
–Required for sign language and similar applications
–Also useful for HCI applications
Our Approach:
–Model-based analysis of hand shapes: minimize the difference between a predefined model and the input images
–We use a genetic algorithm (GA) for global search and the downhill-simplex method (DS) for local search
–The similarity measure for model-input matching is the non-overlapping area of the model silhouette and the hand region in the input image
Hand Model:
–A complete geometric hand model constructed from simple quadrics, namely cylinders and spheres, with 22 DOFs
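The similarity measure described above, the non-overlapping area, is the symmetric difference between the rendered model silhouette and the segmented hand region. Assuming both are available as binary masks of equal size (the quadric rendering step is taken as given), it can be sketched as:

```python
def non_overlap_area(model_mask, hand_mask):
    """Count pixels covered by exactly one of the two binary masks.
    Masks are lists of rows of 0/1 values; lower is a better match,
    so this is the cost the GA/DS search minimizes over the 22 DOFs."""
    return sum(m != h
               for mrow, hrow in zip(model_mask, hand_mask)
               for m, h in zip(mrow, hrow))
```

Because the cost is a plain scalar over the pose parameters, it plugs directly into both the GA fitness evaluation and the derivative-free downhill-simplex refinement.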

Test Results
GA/DS settings: population size 600, crossover 70%, mutation 8%, elitism 8%, plus a DS tolerance setting (values not preserved)
–Population size 600: 25.13 s for the GA generations, 15.47 s for DS
–Population size 600: 20.84 s for the GA generations, 15.81 s for DS
–Population size 400: 5.28 s for the GA generations, 14.84 s for DS
–For the general case, where all parameters are estimated, this module does not work in real time
–We will test the module on classification problems with restricted subsets

Fusion of Hand Gesture and Posture Information
We will convert the HMMs to Input/Output HMMs (IOHMMs):
–Take the gesture codeword sequence as the output sequence
–Take the pose information sequence as the conditional input sequence
There are several reasons to use IOHMMs instead of HMMs.
Disadvantages of HMMs:
–Weak incorporation of context
–Ineffective coding of the actual duration of gestures and gesture parts
–Poor at prediction
–Poor at synthesis (needed for a visualization module)
IOHMMs overcome these problems:
–Better learning of long-term dependencies
–Effective modeling of duration
–Represent data with richer, non-linear models
–More discriminative training
Current research area: a dynamic threshold model for IOHMMs
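The fusion idea can be sketched by conditioning the HMM's transition and emission tables on the input symbol (the pose label) while scoring the output symbols (the motion codewords); all tables below are illustrative toy values, not trained models:

```python
import math

def iohmm_forward_loglik(inputs, outputs, pi, A, B):
    """Scaled forward pass for a discrete IOHMM.
    inputs[t]: pose symbol, outputs[t]: motion codeword.
    A[u][i][j] = P(state j | state i, input u)
    B[u][i][o] = P(output o | state i, input u)"""
    n = len(pi)
    alpha = [pi[i] * B[inputs[0]][i][outputs[0]] for i in range(n)]
    loglik = 0.0
    for t in range(1, len(outputs)):
        s = sum(alpha)
        if s <= 0:
            return float("-inf")
        loglik += math.log(s)
        alpha = [x / s for x in alpha]
        u = inputs[t]  # the pose at time t selects the tables used
        alpha = [B[u][j][outputs[t]] * sum(alpha[i] * A[u][i][j]
                                           for i in range(n))
                 for j in range(n)]
    s = sum(alpha)
    return loglik + math.log(s) if s > 0 else float("-inf")
```

With an ordinary HMM the tables do not depend on `u`, so the pose stream could not change the score of a motion sequence; here the same codeword sequence scores differently under different pose sequences, which is exactly the context the slide says plain HMMs incorporate weakly.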