Robust Tracking and Remapping of Eye Appearance with Passive Computer Vision
Presented by Jason Moore

1. Introduction This paper discusses a single-camera iris-tracking and remapping approach based on passive computer vision. In other words: using a camera to track an individual's gaze, specifically for the purpose of computer control.

Gaze Estimation Applications Ophthalmology, psychology, neurology, marketing and advertising, HCI, and aids for the disabled.

Pupil & Iris The pupil and iris are monitored for gaze estimation.

Gaze Estimation Techniques Categorized by: degree of intrusiveness, technology employed, active vs. passive, cost, and target application domains.

Intrusive Techniques Require equipment to be put in physical contact with the user. Examples of equipment: electrodes, contact lenses, and head-mounted devices.

Nonintrusive Techniques Use cameras to capture images of the eye. Most commercial devices use IR light reflected by the eye. These systems are fairly accurate but require special, expensive hardware. They retain a degree of intrusiveness because of active light emission, and they can perform poorly in bad lighting conditions or if the user is wearing glasses.

More IR Problems Most IR-based systems require the user's head to remain still, which limits usability. IR-based systems that do allow head movement do not yield great accuracy.

Active vs. Passive Active approaches rely on light emission to track the eye. Passive approaches rely only on natural light and use off-the-shelf hardware to perform iris localization and tracking. The iris is ideal for tracking due to its perfectly circular shape and its strong contrast with the sclera.

Gaze Tracking for HCI Is Difficult The remapping transformation from pupil position to the computer screen is time-dependent and changes whenever the user moves their head. Although difficult, it is an interesting problem because of its potential social and commercial impact.

2. Iris Tracking The tracker is composed of three states (a minimal state-machine sketch follows):
Iris Localization: iris candidates are selected and passed to the tracing state.
Iris Tracing: the iris is searched for. If it is found, wait for the next frame; otherwise, go back to iris localization. If the eye is closed, go to the Wait state.
Wait: this state handles both voluntary and involuntary eye blinks.
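A minimal Python sketch of this three-state loop, assuming hypothetical caller-supplied callables `localize`, `trace`, and `eye_closed` for the per-state work (none of these names come from the paper):

```python
from enum import Enum, auto

class State(Enum):
    LOCALIZATION = auto()
    TRACING = auto()
    WAIT = auto()

def run_tracker(frames, localize, trace, eye_closed):
    """Drive the three-state iris tracker over a frame sequence.
    `localize`, `trace`, and `eye_closed` are hypothetical callables."""
    state, hypotheses, pos = State.LOCALIZATION, [], None
    for frame in frames:
        if state is State.LOCALIZATION:
            hypotheses = localize(frame)      # candidate iris positions
            state = State.TRACING
        if state is State.TRACING:
            if eye_closed(frame):             # blink: enter Wait
                state, pos = State.WAIT, None
            else:
                pos = trace(frame, pos, hypotheses)
                if pos is None:               # iris lost: relocalize
                    state = State.LOCALIZATION
        elif state is State.WAIT:
            if not eye_closed(frame):         # eye reopened
                state = State.LOCALIZATION
        yield pos                             # None while the iris is unseen
```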

Iris Tracking

Iris Localization In this state, the current frame is analyzed to generate initial guesses about the position of the iris. The image is filtered to enhance the contrast between the iris and the sclera, and potential iris locations are selected based on image intensity.
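The slide does not name the filter; as a hedged OpenCV sketch of the idea, one plausible reading rather than the authors' exact method (histogram equalization, the blur size, and the projection scoring are all assumptions):

```python
import cv2
import numpy as np

def iris_candidates(eye_gray, n_peaks=2):
    """Propose iris positions from intensity projections of a grayscale
    eye image. Filter choice and parameters are assumptions."""
    enhanced = cv2.equalizeHist(eye_gray)          # boost iris/sclera contrast
    blurred = cv2.GaussianBlur(enhanced, (7, 7), 0)
    darkness = 255.0 - blurred.astype(np.float32)  # iris is dark -> high score
    col_score = darkness.sum(axis=0)               # projection onto the x-axis
    row_score = darkness.sum(axis=1)               # projection onto the y-axis
    xs = np.argsort(col_score)[-n_peaks:]          # darkest columns
    ys = np.argsort(row_score)[-n_peaks:]          # darkest rows
    return [(int(x), int(y)) for x in xs for y in ys]
```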

Iris Localization One point of interest has been identified on the x-axis. Two points of interest have been identified on the y-axis. Both hypotheses will be passed to the tracing state.

Iris Tracing Before considering the hypotheses proposed by the Iris Localization state, search for the iris near its last known location. Only if the iris is not found near the last position are the localization hypotheses considered.
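A tiny sketch of that fallback order, with `fit_iris` as a hypothetical local fitter that returns an iris estimate or None:

```python
def trace(frame, last_pos, hypotheses, fit_iris):
    """Search near the previous iris position first; only then fall
    back to the localization hypotheses."""
    candidates = ([last_pos] if last_pos is not None else []) + list(hypotheses)
    for center in candidates:
        found = fit_iris(frame, center)   # local search around `center`
        if found is not None:
            return found
    return None                           # iris lost in this frame
```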

Iris Tracing The estimated iris position after initial failure to find the iris based on its previous location.

The RANSAC Algorithm RANSAC stands for RANdom SAmple Consensus. It is a popular algorithm for robustly fitting a model to a data set containing both inliers and outliers. Here the RANSAC algorithm has been used to fit a line to a set of data points despite the large number of outliers.
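For reference, a minimal textbook RANSAC line fit in Python/NumPy (not the paper's code; iteration count and inlier tolerance are arbitrary):

```python
import numpy as np

def ransac_line(points, n_iters=200, tol=1.0, seed=0):
    """Fit a line to (N, 2) points: repeatedly fit to a random minimal
    sample (two points) and keep the model with the most inliers."""
    rng = np.random.default_rng(seed)
    best_inliers, best_model = None, None
    for _ in range(n_iters):
        p1, p2 = points[rng.choice(len(points), size=2, replace=False)]
        d = p2 - p1
        if not np.any(d):
            continue                                     # degenerate sample
        n = np.array([-d[1], d[0]]) / np.linalg.norm(d)  # unit normal
        dist = np.abs((points - p1) @ n)                 # point-to-line distances
        inliers = dist < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
            best_model = (p1, d / np.linalg.norm(d))     # point + direction
    return best_model, best_inliers
```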

C-RANSAC RANSAC modified to incorporate prior knowledge about the tracing task, namely the range of physically possible ellipse dimensions. The left image shows standard RANSAC failing to find the iris properly; the right image shows C-RANSAC succeeding.
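A hedged sketch of the constraint step: inside each RANSAC iteration, the candidate ellipse would be discarded early if its axes fall outside an iris-plausible range. The thresholds below are assumed values, and `cv2.fitEllipse` stands in for the paper's ellipse estimator:

```python
import cv2
import numpy as np

# Assumed plausible iris axis range in pixels; the slides give no values.
AXIS_MIN, AXIS_MAX = 10.0, 80.0

def constrained_ellipse(sample_points):
    """One C-RANSAC-style check: fit an ellipse to sampled edge points
    and reject it if its dimensions are not iris-like."""
    pts = np.asarray(sample_points, dtype=np.float32)
    if len(pts) < 5:                      # fitEllipse needs >= 5 points
        return None
    (cx, cy), (w, h), angle = cv2.fitEllipse(pts)
    if not (AXIS_MIN <= w <= AXIS_MAX and AXIS_MIN <= h <= AXIS_MAX):
        return None                       # impossible iris: discard sample
    return (cx, cy), (w, h), angle
```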

C-RANSAC Left image: success in finding the iris while the user is wearing glasses. Right image: success in low-light conditions. Left image (second pair): success when the eyebrow is in the interest window. Right image (second pair): success when the iris is in a lateral position.

Eye Blink Detection Blinking is detected by a vertical shift in the cumulative histogram. The eyelashes have a similar intensity level to the iris but are not in the same vertical position.
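One plausible reading of "vertical shift in the cumulative histogram," sketched in NumPy (both thresholds are assumptions, and the row-wise dark-pixel profile is an interpretation of the slide, not the paper's exact statistic):

```python
import numpy as np

def dark_row_cumsum(eye_gray, dark_thresh=80):
    """Cumulative count of dark pixels per image row."""
    return np.cumsum((eye_gray < dark_thresh).sum(axis=1))

def blink_detected(open_profile, current_profile, shift_thresh=5):
    """Flag a blink when the row holding half of the dark mass shifts
    vertically by more than shift_thresh rows."""
    def median_row(profile):
        return int(np.searchsorted(profile, profile[-1] / 2.0))
    return abs(median_row(current_profile) - median_row(open_profile)) > shift_thresh
```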

3. Remapping Remapping iris position to screen position is somewhat math-intensive. It involves an initial calibration phase and takes head motion into consideration.
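The paper's transformation is more involved and compensates for head motion; as a minimal sketch of the fit-then-remap idea only, here is a least-squares affine map from calibration data (the affine form is an assumption, not the paper's model):

```python
import numpy as np

def fit_screen_map(iris_pts, screen_pts):
    """Least-squares affine map from iris coordinates to screen
    coordinates, fit to (N, 2) calibration arrays."""
    A = np.hstack([iris_pts, np.ones((len(iris_pts), 1))])  # rows [x, y, 1]
    M, *_ = np.linalg.lstsq(A, screen_pts, rcond=None)      # 3x2 matrix
    return M

def remap(M, iris_pt):
    """Map a new iris center to a screen position."""
    return np.append(iris_pt, 1.0) @ M
```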

4. Experimental Results Hardware: a digital camera with 12x optical zoom and 640x480 image resolution; a 19" computer screen with a resolution of 1024x768; a standard laptop with a 1.73 GHz processor.

User / Hardware Positions

Tracking Results 594 frames (approx. 23.7s @ 25fps) were recorded with the user looking at several different points on the screen. Iris position and shape were manually annotated. The manual annotations were then compared to the results from RANSAC and C-RANSAC.

RANSAC vs. Ground Truth Tracking accuracy for the y-coordinate of the ellipse center. Notice the two spikes due to ocular occlusions.

C-RANSAC vs. Ground Truth Tracking accuracy for the y-coordinate of the ellipse center. C-RANSAC does not spike when the eye is occluded.

RANSAC vs. C-RANSAC: Y RANSAC experiences more error on the y-axis than C-RANSAC.

RANSAC vs. C-RANSAC: X RANSAC and C-RANSAC do not differ much in their x-axis error.

RANSAC vs. C-RANSAC The distributions are similar, but C-RANSAC shows less error when estimating the y-position of the iris center.

Calibration The user looks at eight points on the screen, staring at each point for four seconds. The points are arranged as follows: four at the corners of the screen, and four (in a rhomboidal layout) around its center. One hundred measurements of the iris center are collected for each calibration point.
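A small sketch of the calibration layout and averaging step (the screen margin and rhombus radius are assumed values, not from the paper); the averaged cluster centers would then feed a map fit like the one sketched in Section 3:

```python
import numpy as np

def calibration_targets(w=1024, h=768, margin=20):
    """Eight targets: four near the screen corners plus four in a
    rhomboidal layout around the center."""
    cx, cy, r = w // 2, h // 2, min(w, h) // 6
    corners = [(margin, margin), (w - margin, margin),
               (margin, h - margin), (w - margin, h - margin)]
    rhombus = [(cx, cy - r), (cx + r, cy), (cx, cy + r), (cx - r, cy)]
    return corners + rhombus

def cluster_centers(samples_per_target):
    """Average the ~100 iris measurements collected at each target."""
    return [np.mean(np.asarray(s), axis=0) for s in samples_per_target]
```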

Compensated vs. Uncompensated Head Movement Circle/Red represents uncompensated movement. Square/Green represents compensated movement. The compensated clusters are more compact and more reasonably arranged.

Remapping Calibration Clusters to the Screen The crosses represent the centers of the calibration circles.

Remapping Without Feedback Twenty random screen points are denoted by crosses. Results obtained shortly after calibration are represented by circles. Results obtained ten minutes after calibration are represented by stars.

Remapping With Feedback Results obtained with head compensation are represented by circles; results obtained without head compensation are represented by stars. With visual feedback, the lack of head compensation actually performs better!

Line Tracing With Visual Feedback

Conclusion A robust, single-camera, real-time eye-tracking algorithm is presented. The eye blink detector works equally well for both voluntary and involuntary eye closures. A constrained RANSAC approach for iris tracking is proposed that performs better than standard RANSAC in the presence of distracters and occlusions in the image sequence. The on-screen remapping method is capable of compensating for small head movements. The experiments highlighted the importance of providing visual feedback to the user and the benefit gained from performing head compensation, especially during image-to-screen map calibration.

Future Work Further improve the image-to-screen mapping model by explicitly taking into account the spherical shape of the eyeball. Relax the "neutral expression" constraint set for head compensation. Generalize the approach to passive interaction surfaces such as books, newspapers, and paintings. Extend the framework to the problem of determining the 3D coordinates of a location pointed at in space.