REAL TIME EYE TRACKING FOR HUMAN COMPUTER INTERFACES
Subramanya Amarnag, Raghunandan S. Kumaran and John Gowdy
Dept. of Electrical and Computer Engineering, Clemson University

Presentation transcript:

Eye Tracking: Intrusive vs. Non-Intrusive

Intrusive
– Advantages: can be highly accurate.
– Disadvantages: can be very cumbersome for the user; not ideal for practical purposes.

Non-Intrusive
– Advantages: user friendly.
– Disadvantages: the accuracy of the systems developed so far is poor compared to intrusive systems.

Our System - Highlights
– Non-IR based and non-intrusive.
– Uses an ordinary camera to track the eyes.
– Uses a dynamic training strategy, making it invariant to the user and to lighting conditions.
– Ideal for systems where high accuracy is not required.

Pre-Processing
In this stage pixel intensity is used to eliminate a large number of pixels. A threshold of 0.27 was experimentally determined to be suitable for most cases: if a pixel's intensity is above the threshold, the pixel is eliminated. The remaining pixels are passed to the next stage (a sketch of this filter is given below).

Bayesian Classifier
This stage classifies the remaining pixels into eye and non-eye classes. A Bayesian classifier is used as the binary classifier, with Gaussian PDFs modelling both the eye and the non-eye class. The means and covariances of the two classes are updated dynamically after each frame is processed (see the classifier sketch below).

Clustering
The Bayesian classifier does not eliminate all non-eye pixels, especially facial hair and other dark pixels. Clustering is therefore performed to identify the 'dark islands' in the remaining image. The algorithm can be viewed as an unsupervised c-means algorithm, with the difference that no assumptions are made about the number of clusters or the cluster centers. The flowchart on the poster corresponds to the following pseudocode (noe = number of exemplars; a runnable version is given below):

    noe = 0
    for i = 1 to N
        matched = false
        for j = 1 to noe
            if dist(x(i), exemplar(j)) < threshold
                update exemplar(j)        // x(i) joins cluster j
                matched = true
                break
        if not matched
            noe = noe + 1                 // create a new cluster
            exemplar(noe) = x(i)

Post-Processing
Clustering returns the set of 'dark islands' in the image; post-processing identifies the eyes among them. The first step merges clusters that are close to each other (less than 5 pixels apart). The next step uses geometric features of the clusters, such as size, width and height, to eliminate implausible candidates. Ideally, two clusters remain, representing the eyes. The locations of the eyes are used to limit the search region for the next frame (see the post-processing sketch below).

Results
The system was implemented on an Intel Pentium III 997 MHz machine and achieved a frame rate of 26 fps. It was tested on two databases: the Clemson University Audio Visual Experiments (CUAVE) database [3] and the CMU audio-visual dataset [2]. Accuracy achieved:
– CMU database: 88.3%
– CUAVE database, stationary speaker: 86.4%
– CUAVE database, moving speaker: 76.5%

System Flow
Per input frame: Frame Search Region → Pre-Processing → Bayesian Classifier → Clustering → Post-Processing → "Eyes located successfully?" If yes, report the location of the eyes, update the means and covariances, and update the frame search region; if no, process the next frame. Illustrative Python sketches of the individual stages are given below.
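Pre-processing sketch. This minimal sketch implements the intensity pre-filter described above, assuming grayscale intensities normalised to [0, 1] (the poster does not state the scale, but the 0.27 threshold suggests it); the function name and return format are illustrative, not from the original system.

    import numpy as np

    # Intensity pre-filter: pixels brighter than the 0.27 threshold are
    # discarded, since the eye regions are expected to be dark.
    INTENSITY_THRESHOLD = 0.27

    def preprocess(frame_gray):
        """Return (row, col) coordinates of the candidate (dark) pixels."""
        return np.argwhere(frame_gray <= INTENSITY_THRESHOLD)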
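Bayesian classifier sketch. The poster states only that Gaussian PDFs model the eye and non-eye classes and that their means and covariances are updated dynamically after every frame; the per-pixel feature vector, the class priors, and the exponential update rule below are assumptions made for illustration.

    import numpy as np
    from scipy.stats import multivariate_normal

    class GaussianClass:
        """One class (eye or non-eye) modelled by a Gaussian PDF."""
        def __init__(self, mean, cov, prior):
            self.mean = np.asarray(mean, dtype=float)
            self.cov = np.asarray(cov, dtype=float)
            self.prior = prior

        def log_score(self, x):
            # log p(x | class) + log P(class)
            return multivariate_normal.logpdf(x, self.mean, self.cov) + np.log(self.prior)

        def update(self, samples, alpha=0.1):
            # Exponential re-estimation after a frame (one plausible form of
            # the dynamic training described above; alpha is an assumption).
            samples = np.atleast_2d(np.asarray(samples, dtype=float))
            if samples.shape[0] < 2:
                return
            self.mean = (1 - alpha) * self.mean + alpha * samples.mean(axis=0)
            self.cov = (1 - alpha) * self.cov + alpha * np.cov(samples, rowvar=False)

    def classify(features, eye_cls, noneye_cls):
        """Boolean mask: True where a pixel is labelled 'eye'."""
        return eye_cls.log_score(features) > noneye_cls.log_score(features)

The poster does not describe how the class models are initialised; one plausible bootstrap is to fit them to hand-labelled eye and non-eye pixels from the first frame.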
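Clustering sketch. This is a runnable version of the exemplar-clustering pseudocode above; the Euclidean distance, the default threshold value, and the running-mean exemplar update are assumptions (the poster only says an exemplar is updated when a pixel falls within the threshold and that a new cluster is created otherwise).

    import numpy as np

    def cluster_dark_pixels(points, threshold=3.0):
        """Group candidate pixel coordinates into 'dark islands'.

        points    : (N, 2) array of (row, col) coordinates
        threshold : maximum distance from an exemplar to join its cluster
        Returns (clusters, exemplars): a list of point lists and their centres.
        """
        exemplars, clusters, counts = [], [], []
        for p in np.asarray(points, dtype=float):
            if exemplars:
                dists = np.linalg.norm(np.asarray(exemplars) - p, axis=1)
                j = int(np.argmin(dists))
                if dists[j] < threshold:
                    clusters[j].append(p)
                    counts[j] += 1
                    # running-mean update of the exemplar
                    exemplars[j] = exemplars[j] + (p - exemplars[j]) / counts[j]
                    continue
            # no exemplar close enough: start a new cluster
            exemplars.append(p.copy())
            clusters.append([p])
            counts.append(1)
        return clusters, exemplars

Assigning each pixel to its nearest exemplar, rather than the first one within the threshold, is an implementation choice; either reading is consistent with the flowchart, since no assumption is made about the number of clusters.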
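Post-processing sketch. The 5-pixel merge distance comes from the poster; the size, width and height limits used below are placeholders, since the actual geometric criteria are not given.

    import numpy as np

    MERGE_DIST = 5.0                       # from the poster
    MIN_SIZE, MAX_SIZE = 10, 400           # pixels per cluster (assumed)
    MAX_WIDTH, MAX_HEIGHT = 40, 25         # bounding-box limits (assumed)

    def merge_close_clusters(clusters, exemplars):
        """Merge clusters whose exemplars are less than MERGE_DIST apart."""
        merged, used = [], [False] * len(clusters)
        for i in range(len(clusters)):
            if used[i]:
                continue
            group, used[i] = list(clusters[i]), True
            for j in range(i + 1, len(clusters)):
                if not used[j] and np.linalg.norm(
                        np.asarray(exemplars[i]) - exemplars[j]) < MERGE_DIST:
                    group.extend(clusters[j])
                    used[j] = True
            merged.append(np.asarray(group))
        return merged

    def select_eye_clusters(clusters):
        """Keep clusters with eye-like geometry; return the two best."""
        keep = []
        for c in clusters:
            height = c[:, 0].max() - c[:, 0].min() + 1
            width = c[:, 1].max() - c[:, 1].min() + 1
            if MIN_SIZE <= len(c) <= MAX_SIZE and width <= MAX_WIDTH and height <= MAX_HEIGHT:
                keep.append(c)
        keep.sort(key=len, reverse=True)   # ideally exactly two clusters remain
        return keep[:2]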
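Per-frame driver sketch. The loop below ties the stages together in the order of the block diagram: pre-process the search region, classify, cluster, post-process, then either update the class models and the search region (eyes found) or move on to the next frame. The extract_features callback, the search margin, and the fall-back to a whole-frame search when the eyes are not found are all assumptions.

    import numpy as np

    def track_eyes(frames, eye_cls, noneye_cls, extract_features, margin=20):
        region = None                       # (r0, r1, c0, c1); None = whole frame
        for frame in frames:
            r0, r1, c0, c1 = region if region else (0, frame.shape[0], 0, frame.shape[1])
            window = frame[r0:r1, c0:c1]
            candidates = preprocess(window)                  # dark-pixel filter
            feats = extract_features(window, candidates)     # assumed helper
            eye_mask = classify(feats, eye_cls, noneye_cls)
            clusters, exemplars = cluster_dark_pixels(candidates[eye_mask])
            eyes = select_eye_clusters(merge_close_clusters(clusters, exemplars))
            if len(eyes) == 2:                               # eyes located successfully
                eye_cls.update(feats[eye_mask])
                noneye_cls.update(feats[~eye_mask])
                pts = np.vstack(eyes) + [r0, c0]             # back to full-frame coordinates
                region = (max(int(pts[:, 0].min()) - margin, 0), int(pts[:, 0].max()) + margin,
                          max(int(pts[:, 1].min()) - margin, 0), int(pts[:, 1].max()) + margin)
                yield pts
            else:
                region = None                                # search the whole next frame
                yield None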
References
[1] S. Baluja and D. Pomerleau, "Non-Intrusive Gaze Tracking Using Artificial Neural Networks," Technical Report CMU-CS, Carnegie Mellon University.
[2] Advanced Multimedia Processing Lab, CMU.
[3] E. K. Patterson, S. Gurbuz, Z. Tufekci, and J. N. Gowdy, "CUAVE: A New Audio-Visual Database for Multimodal Human-Computer Interface Research," ICASSP, Orlando, May.

[Figure: performance of the system against complex backgrounds]
[Figure: results for a sequence of frames from the CMU dataset]
[Figure: results for a sequence of frames from the CUAVE dataset]

Abstract
In recent years, considerable interest has developed in real-time eye tracking for various applications, including lip tracking. Although many lip tracking algorithms exist, their successful implementation is bound by a number of constraints, such as the color of the lips, the size and shape of the lips, and constant motion of the lips. Eye tracking algorithms, however, can be designed to overcome these constraints. Eye tracking therefore appears to be a reasonable solution to the lip tracking problem, since a fix on the speaker's eyes gives a rough estimate of the position of the lips.