Presentation transcript:

Next-Generation Portable Eye Tracking
Susan M. Munn* and Jeff B. Pelz
Multidisciplinary Vision Research Laboratory, Chester F. Carlson Center for Imaging Science, Rochester Institute of Technology

Abstract

Portable eye tracking allows us to monitor and analyze a person's eye movements as he or she moves around and performs natural tasks. Video-based portable eye trackers include two cameras: an eye camera that records a video of the person's eye, and a scene camera that records a video of the central portion of the person's field of view. These videos are typically processed to determine where the person is looking – his or her point-of-regard (POR) – in each frame of the scene video. Because portable eye tracking is fairly new to the eye tracking field, some current algorithms and techniques for processing eye tracking data may not be suitable for portable eye trackers. For instance, the typical method for determining how the eye has moved (based on the movements of its features through the eye video) is very sensitive to movements of the eye camera with respect to the eye; this problem is most severe when the infrared-emitting diode (IRED) – which illuminates the eye and creates a corneal reflection (CR) – is close to the eye, as it is in portable eye trackers. In addition, algorithms that process POR data to identify fixations (periods of time when the subject is looking at the same location) expect these data to be defined in static two-dimensional scene images. We developed a new technique that determines movements of the eye more robustly in the presence of movements of the eye camera with respect to the eye; this technique also reduces noise in the final eye movement data. Additionally, a fixation-identification algorithm has been implemented and tested by comparing its results to those produced by humans. These algorithms will greatly impact the field of portable eye tracking and allow us to do much more with our eye tracking data. We present one example application that uses these algorithms to obtain three-dimensional locations of fixations.

1.0 Equipment: The RIT Wearable Eye Tracker [1]

Point-of-regard (POR): the point in the scene where the subject is looking. The headgear carries two cameras – a scene camera (SCENE CAM) and an eye camera (EYE CAM) – plus an IRED that illuminates the eye and produces the CR. [Figure: headgear diagram labeling the pupil, CR, headgear, scene camera, eye camera, and IRED, together with the outputs: position, head orientation, and POR.]

2.0 Compensating for headgear motion [2,3]

PROBLEM: Headgear movements, which result in movements of the eye camera with respect to the eye, can be mistaken for eye movements. [Figure: effect of headgear movement on the tracked signals; the key distinguishes frames where the headgear moved from frames where the eye moved, shown against the actual movement of the eye.]

SOLUTION: We track the centers of the pupil and the CR to determine how the eye is moving.

NOISE REDUCTION: The headgear moves slowly, which means the headgear (CR) position array can be smoothed; the resulting final Eye array contains about as much noise as the robust array of pupil positions (Pupil).
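The poster reports that the algorithms were implemented in MATLAB but does not show code, so the compensation-and-smoothing step above might look like the following minimal sketch. The function name, array layout, and window length are assumptions for illustration, not the authors' implementation.

    % Headgear-motion compensation sketch: subtract a smoothed CR track
    % from the pupil track. pupil and cr are N-by-2 arrays of (x, y)
    % feature centers, one row per eye-video frame.
    function eyePos = compensateHeadgear(pupil, cr, win)
        if nargin < 3
            win = 15;                    % smoothing window in frames (assumed)
        end
        k = ones(win, 1) / win;          % moving-average kernel
        crSmooth = zeros(size(cr));
        for d = 1:2                      % smooth x and y independently
            crSmooth(:, d) = conv(cr(:, d), k, 'same');
        end
        % The CR moves with the eye camera, so subtracting the smoothed CR
        % track removes slow headgear motion while keeping the low noise of
        % the pupil positions. Frames near the ends of the recording are
        % distorted by the convolution's zero padding.
        eyePos = pupil - crSmooth;
    end

Smoothing the CR track (rather than subtracting it raw) is what keeps the CR's own tracking noise from leaking into the final Eye signal.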
3.0 Identification of fixations in dynamic scenes [4]

GOAL: Determine whether a velocity-based fixation-identification algorithm can be applied to eye tracking data in dynamic scenes.

Comparison of algorithm results to three human coders: Two subjects performed two different tasks, and the start and end of each fixation was coded by the algorithm and by three experienced humans.

Task 1: Watch a computer animation.
Task 2: Walk through a hallway (find a specific room number).

[Figure: fixation signal produced by each human and the algorithm, with fixations labeled F1 through F7. Time is shown across the horizontal axis, and each strip continues from the end of the strip above. The top four rows (HUMAN 1, HUMAN 2, HUMAN 3, ALGORITHM) mark each frame as in a fixation or between fixations; the bottom row (ERRORS) marks frames where the algorithm disagrees with all three humans – for example, the algorithm reporting a break between two fixations where all three humans agree on one single fixation.]

TIME TO CODE VIDEOS

Coded by      Task 1         Task 2
Human 1       50 minutes*    60 minutes
Human 2       70 minutes     90 minutes
Human 3       105 minutes    90 minutes
Algorithm†    11.8 seconds   5.9 seconds

* Error made (later corrected)
† Implemented in MATLAB on an Apple 2.16 GHz MacBook

4.0 Application: 3D POR, position and head orientation for each fixation [5]

For each fixation identified by the algorithm, the subject's position, head orientation, and three-dimensional POR are recovered. Seven vertices of a cubic structure were fixated; the corresponding PORs were reconstructed and are shown (connected) in the figure on the right, from Fixation 1 (F1) through Fixation 7 (F7).

References

[1] Babcock, J. S. and J. B. Pelz. Building a lightweight eyetracking headgear. In ETRA 2004: Proceedings of the Eye Tracking Research & Applications Symposium. ACM Press.
[2] Kolakowski, S. M. and J. B. Pelz. Compensating for eye tracker camera movement. In ETRA 2006: Proceedings of the Eye Tracking Research & Applications Symposium, pp. 79-85. ACM Press.
[3] Li, F., S. M. Munn and J. B. Pelz. A model-based approach to video-based eye tracking. Journal of Modern Optics, Vol. 55, Nos. 4-5.
[4] Munn, S. M., L. Stefano and J. B. Pelz. Fixation identification in dynamic scenes: comparison of an algorithm to human coders. In APGV 2008: Symposium on Applied Perception in Graphics and Visualization, submitted.
[5] Munn, S. M. and J. B. Pelz. 3D point-of-regard, position and head orientation from a portable monocular video-based eye tracker. In ETRA 2008: Proceedings of the Eye Tracking Research and Applications Symposium. ACM Press.
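As a closing illustration of the velocity-based criterion from Section 3.0: the poster does not spell out the algorithm, so the sketch below is a generic velocity-threshold (I-VT-style) labeler, not the authors' method [4], which is designed for dynamic scenes. The names, the velocity threshold, and the minimum-duration filter are all assumptions.

    % Generic velocity-threshold fixation labeler (illustrative only).
    % por:  N-by-2 array of (x, y) POR coordinates, one row per scene frame
    % fps:  frame rate of the scene video
    % vmax: velocity threshold separating fixations from saccades
    function inFixation = labelFixations(por, fps, vmax)
        dxy = diff(por);                           % frame-to-frame displacement
        speed = hypot(dxy(:, 1), dxy(:, 2)) * fps; % speed in pixels per second
        inFixation = [speed < vmax; false];        % pad back to N samples
        % Drop fixation runs shorter than a minimum duration (assumed 100 ms).
        minRun = round(0.1 * fps);
        runStart = 1;
        for i = 2:numel(inFixation) + 1
            if i > numel(inFixation) || inFixation(i) ~= inFixation(runStart)
                if inFixation(runStart) && (i - runStart) < minRun
                    inFixation(runStart:i-1) = false;  % too short: not a fixation
                end
                runStart = i;
            end
        end
    end

For example, inFix = labelFixations(por, 30, 800) would label a 30 fps POR track using an 800 pixels-per-second threshold (both values assumed).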
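Similarly, for the application in Section 4.0: the monocular recovery of camera pose in [5] is beyond a short sketch, but once a per-frame eye position and gaze direction are available, the 3D POR of a fixation can be estimated by least-squares intersection of the gaze rays. All inputs and names below are assumed for illustration.

    % Least-squares intersection of gaze rays to estimate a 3D POR.
    % c: M-by-3 ray origins (eye positions), d: M-by-3 gaze directions,
    % one row per frame of a single fixation.
    function p = triangulatePOR(c, d)
        A = zeros(3); b = zeros(3, 1);
        for i = 1:size(c, 1)
            di = d(i, :)' / norm(d(i, :));   % ensure unit length
            P = eye(3) - di * di';           % projector orthogonal to ray i
            A = A + P;
            b = b + P * c(i, :)';
        end
        % Solve for the point minimizing summed squared distance to all rays.
        % A is singular if the rays are parallel; small head movements during
        % a fixation provide the baseline that conditions the solution.
        p = A \ b;
    end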