Research Background: Depth Exam Presentation


1 Research Background: Depth Exam Presentation
Susan Kolakowski. Committee: Juan Cockburn (Chair), Jeff Pelz (Adviser), Andrew Herbert, Mitchell Rosen, Carl Salvaggio. Eye Tracking: Why We Use It and Trying To Improve It. March 20, 2006

2 Research Background Introduction Human Visual System Eye Movements
Eye Trackers RIT Wearable Eye Tracker My Research First I will give a brief introduction to why eye tracking is used. Then I will talk a little bit about the Human Visual System and Eye Movements - we move our eyes more than 150,000 times per day. I will conclude the first part of this presentation by mentioning a few eye trackers and the eye tracker my research uses, the RIT Wearable Eye Tracker.

3 Introduction Why are eye trackers used? Examples:
Objective measure of where people look Interest in Human Visual System Examples: Understanding Behaviors: How do humans read? Improving Skill: Train people to move their eyes as an expert would. Improving Quality: What parts of an image are important to the image's overall quality? Study the HVS - how we capture an image, where we are looking when we make decisions, the order in which we move our eyes. We make more than 150,000 eye movements a day and could not possibly describe where we are looking throughout the day or as we perform a specific task. Eye trackers allow us to see where a subject is looking during a specific task. For instance, where do we look when we are trying to judge the overall quality of an image? This information may be useful when creating images but may not be easy to describe on our own - eye trackers can show us exactly where we are looking. What if we want to understand how humans read? Watching subjects' eye movements while they read an excerpt can help us see this. VIDEO Finally, why are some people better at search tasks than others? We can eye track someone who is really good at finding a target and use this information to train others. In many computer vision tasks it is desirable to mimic the HVS (have a robot perform a task as well as a human).

4 The Human Eye Optic Axis Pupil Cornea Iris Ciliary Muscle Retina
Eye Lens The Cornea and Eye Lens bend light rays to form an image on the retina. The retina is analogous to the film in a camera; it is where light is received. The fovea is a small portion of the retina where the most detailed view is created (more on this later). The ciliary muscle contracts or relaxes to change the shape of the lens and thus its focus. The iris is the stop which determines how much light may pass through its opening, the pupil. The optic nerve transfers the signal received within the retina to be processed. Optic Nerve Fovea

5 Human Visual System What we see is determined by
How the photoreceptors in our retina are connected and distributed How our brain processes this information What we already accept as truth (previous knowledge) How we move our eyes throughout a scene The way we receive information from our eyes is determined by how our rods and cones are connected, how the light they perceive gets to our visual cortex, and what we already accept as truth. Processes… adaptations… aberrations

6 The Retina Contains two types of photoreceptors
Rods that offer wide field of view (and night vision) Cones that provide high acuity (and color vision) Our retina is like the film in a camera - it collects the light to create the image. The retina contains two types of photoreceptors: cones, which perceive detail, and rods, which are more sensitive and used in low light. Rods are more interconnected - they act as an averaging filter and blur the periphery. These photoreceptors are connected such that the signal from neighboring receptors inhibits the signal at a receptor. This is called lateral inhibition, and it explains much about how we perceive things and the illusions we may see.
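The lateral inhibition described above can be sketched numerically. This is a minimal, illustrative model (not taken from the slides): each receptor's response is its input minus a fraction of its neighbors' average, which exaggerates edges much as the retina does.

```python
import numpy as np

# Hypothetical 1D stimulus: a step edge between a dark and a bright region.
stimulus = np.array([10.0] * 8 + [20.0] * 8)

# Lateral inhibition: each receptor's response is its own input minus a
# fraction of the average of its two neighbors (an assumed, simplified model).
k = 0.4
neighbors = (np.roll(stimulus, 1) + np.roll(stimulus, -1)) / 2.0
response = stimulus - k * neighbors

# Away from the edge, neighbors equal the center, so the response is flat.
# At the edge, the dark-side receptor is inhibited by a brighter neighbor
# (undershoot) and the bright-side receptor by a darker one (overshoot) -
# the Mach-band-like edge enhancement the slides describe.
interior = response[2:7]                       # flat dark region
dark_edge, bright_edge = response[7], response[8]
print(dark_edge < interior[0], bright_edge > response[10])
```

The same center-surround idea, in 2D, underlies the grey-square illusion on the next slides.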

7 The Craik-O’Brien Illusion
The Visual System responds to changes, so it gets most of its information from the edge, where there is a sharp intensity change, and fills in the rest where the change is very gradual; we therefore perceive two uniform regions, one lighter than the other. There is much more to color perception than the cone mechanisms and additive and subtractive color production systems. The complexity of the human visual system gives it huge advantages in image processing and understanding, but can also lead to incorrect interpretations of objects. The visual system is more sensitive to edges than it is to large areas. The "Cornsweet edge" shown above separates two identical colors with a narrow edge, inducing the perception of a color difference across the edge. Covering the edge allows the visual system to correctly interpret the two colors as identical.

8 Lateral Inhibition Center grey squares have SAME intensity

9 Effect of Previous Knowledge
Rotating Mask

10 Effect of Previous Knowledge
ot/cog_dalmatian/

11 The Fovea At its center: contains only cones (no rods)
Perceives the greatest detail and provides color vision To get the most detailed representation of a scene, you must move your eyes rapidly so that different areas of the scene fall on your fovea Along the visual axis - lowest potential for aberrations This slide needs to be completely redone

12 Serial Execution (fovea covers <0.1% of the field)
Most vision experiments are performed under ‘reduced conditions’ to make it easier to analyze the results. While this makes life easier for the experimenter, it has led to the situation in which we know a great deal about how vision works when an observer is seated in a dark room, his head fixed by a chin and forehead rest, with a 2 degree field illuminated for 200 msec!


41 Eye Movements… … and lack thereof Saccades Smooth Pursuit
Optokinesis (OKN) Vestibular-Ocular Reflex (VOR) Fixations … and lack thereof To study with monocular eye tracking…

42 Fixations Stabilizations of the eye for higher acuity at a given point
Drifts and tremors of the eye occur during fixations, so the view is always changing slightly - the eye responds to change.

43 Saccades Eye Movements
Rapid ballistic movement of the eye from one position to another Shifts the point of gaze so that a new region falls on the fovea Can make up to 4 saccades per second Amplitudes from less than 1 deg to greater than 45 deg Velocities up to about 500 deg/sec Preprogrammed: if the target the eye is moving toward moves during the saccade, the eye cannot change direction mid-saccade.
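The velocity figures above suggest how eye-movement analysts commonly separate saccades from fixations: a velocity threshold. The sketch below is generic, not the deck's algorithm; the 100 deg/sec threshold, the 250 Hz sampling rate, and the synthetic trace are all assumptions.

```python
import numpy as np

# Synthetic horizontal eye position (degrees) sampled at an assumed 250 Hz:
# a fixation, a rapid 10-degree saccade, then another fixation.
fs = 250.0
position = np.concatenate([
    np.zeros(50),            # fixation at 0 deg
    np.linspace(0, 10, 10),  # ~10 deg in 40 ms, i.e. hundreds of deg/sec
    np.full(50, 10.0),       # fixation at 10 deg
])

# Velocity-threshold saccade detection (threshold is an assumed value;
# real implementations tune it and often add duration criteria).
velocity = np.abs(np.gradient(position)) * fs   # deg/sec
THRESHOLD = 100.0                               # deg/sec
is_saccade = velocity > THRESHOLD

print("saccade samples detected:", int(is_saccade.sum()))
```

During fixations the velocity is near zero, so only the rapid segment crosses the threshold.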

44 Smooth Pursuit Eye Movements
Smooth eye movement to track a moving target Involuntary - cannot be produced without a moving object The velocity of the eye matches that of the object up to about 30 deg/sec Follow a small moving target

45 Vestibular-Ocular Reflex
Eye Movements Optokinesis Invoked to stabilize an image on the retina: the eye rotates with a large object or with its field-of-view - for example, when the field of view moves while looking out the window of a car. Vestibular-Ocular Reflex Invoked to stabilize an image on the retina as the head or body moves relative to the image. Similar to smooth pursuit.

46 Eye Trackers Invasive Restrictive Modern Video-Based Trackers
Painful devices which cause discomfort to the subject's eye Restrictive Devices that require strict stabilization of the subject's head, not allowing for natural movement The first permanent objective eye movement records to be made non-invasively were recorded in 1901 by Dodge and Cline, who captured horizontal eye movements on a photographic plate. Before this, eye movements were studied through subject introspection or experimenter observation, or recorded invasively - such as Delabarre's method of 1898, in which he attached a mechanical stalk to the eye using plaster of paris. Early invasive methods such as these were criticized for impeding motion and straining the eye. In the 1960s, Yarbus used suction, mechanical stalks and mirrors. Robinson (1963) used the scleral search coil - two orthogonal wire coils that perturb a magnetic field surrounding the subject's head. Because of the high level of discomfort, these methods are no longer used today. Video-based eye trackers came about in the early 1970s. Limbus trackers give a very rapid measure of horizontal movement but a poor measure of vertical movement. Dark-pupil tracking requires contrast between the pupil and iris; bright-pupil tracking uses on-axis illumination to produce a bright pupil with larger contrast. Everything up to this point tracked the eye in relation to the head, so the head needed to be completely stabilized to determine where the eye was looking in the world. Later systems measured two features of the eye - adding the corneal reflection - to account for head movement. They still required the head to be restrained with a bite bar or chin rest but allowed for slight movements of the head. In 1973 Cornsweet and Crane introduced the Dual Purkinje Image eye tracker, which detects the first and fourth Purkinje images - reflections off the outer surface of the cornea and the rear of the lens. A series of servo motors is adjusted in response to the movement of these images; the degrees the servos move equal the eye rotation (independent of head rotation). THE HEAD MUST STILL BE IN A BITE BAR OR CHIN REST so that the eye can be detected by the instruments.
Remarkably fast and accurate (limited only by the speed of the servo motors). Head-mounted eye trackers point a scene camera at the subject's field-of-view; it moves with the subject's head so that the point-of-regard can be superimposed on the scene camera's image. Remote eye trackers have been developed to allow some head movement while the subject sits in front of a computer for 2D stimulus presentation. Modern Video-Based Trackers Remote - constrained to 2D stimuli Head-mounted - allows natural movement

47 Intrusive Eye Trackers
Delabarre 1898 Yarbus 1965 Mechanical stalk

48 Intrusive Eye Trackers
Robinson 1963, Search Coils 3D eye movements

49 Video-based Eye Trackers
Early 1970’s, Limbus RESTRICTIVE

50 Video-based Eye Trackers
Cornsweet and Crane 1973, Dual Purkinje RESTRICTIVE

51 Video-based Eye Trackers
Early 1970’s Dark-Pupil Bright Pupil Show illumination angle

52 Video-based Eye Trackers
Head-Mounted Remote

53 R.I.T. Wearable Eye Tracker
Video-based Eye Trackers R.I.T. Wearable Eye Tracker SCENE CAMERA Most eye trackers require the subject to sit still - they can only track subjects looking at a screen or image. Our tracker allows people to walk around and perform tasks like walking through the woods. Talk about the backpack, what's inside it and how it works. IR LED EYE CAMERA

54 R.I.T. Wearable Eye Tracker
How it works Off-axis illumination Off-line processing Off-axis illumination produces dark-pupil image. Off-line processing allows us to perform extra processing on the data without the constraint of a real-time application.

55 Example Video

56 My Research Objective: Improve the performance of video-based eye trackers in the processing stage. Compensate for camera movement with respect to the subject’s head Reduce noise

57 R.I.T. Wearable Eye Tracker
Advantage: Subject is less constrained, can perform more natural tasks Disadvantage: Camera (eye tracker) not stabilized - need to account for any movement of camera relative to head RIT wearable eye trackers LOWER PRECISION

58 Lower Precision Analysis of Disadvantages
Accounting for movement of the camera with respect to the head requires additional data: the corneal reflection. Corneal Reflection data is not as precise as Pupil data. Show large corneal reflection, same size as pupil. Too bad we can't just use the Pupil data

59 Oversimplifying Assumption
Analysis of Disadvantages Oversimplifying Assumption Assumption: When the camera moves with respect to the head, the pupil and corneal reflection move the same amount. To account for camera movement: the assumption, why it's wrong, the problems it causes SHOW P-CR EQUATION and images The virtual image of the pupil is affected by the optics of the eye - the cornea and eye lens - so as the camera moves, light bends at different angles to create a virtual image that does not move the exact same amount. See an illustration of this later.

60 Why this assumption is wrong
Corneal Reflection data comes from the center of the reflection off the curved outer surface of the eye. Pupil data comes from the center of the flat virtual image of the pupil inside the eye. These features do not lie on a flat surface that the camera is translating with respect to… SHOW ILLUSTRATION Show pupil and CR arrays during camera movement only. THEY DON'T MOVE THE SAME AMOUNT WHEN THE CAMERA MOVES

61 Result of Oversimplification
P-CR vector difference changes with camera movement Artifacts in final data SHOW DATA, P-CR changes May appear to be a saccade or noise The eye is looking horizontally through three points while the camera is moving; the resulting P-CR shows circular artifacts. Show Pupil array, CR array and Camera array
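The artifact reduces to two lines of arithmetic. Suppose (hypothetical numbers, chosen only for illustration) a camera translation shifts the pupil image by 1.0 pixel but the corneal reflection by only 0.85 pixel, as the slides argue happens; the P-CR vector then changes even though the eye never moved:

```python
# Hypothetical pixel displacements caused by a pure camera movement
# (the eye itself is stationary; the numbers are illustrative only).
pupil_shift = 1.0   # virtual pupil image moves with the camera
cr_shift = 0.85     # corneal reflection, from a curved surface, moves less

# Under the oversimplifying assumption, P - CR should be unchanged by a
# camera movement.  It isn't:
p_minus_cr_change = pupil_shift - cr_shift
print(p_minus_cr_change)  # nonzero, so the output registers a spurious "eye movement"
```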

62 The Solution Determine the actual relationship between the pupil and corneal reflection during BOTH: Camera movements Eye movements Use these relationships to develop a new equation in terms of pupil and corneal reflection position Camera movements WITH RESPECT TO HEAD

63 Eye Movements When you look into a person’s eye, the pupil which you are seeing is actually the virtual image of the pupil as produced by the optics of the eye. CR hardly moves Pupil moves Eye gain - amount the CR moves when the pupil moves 1 degree during an eye movement

64 Camera Movements Pictures from paper, Cam gain

65 Camera and Eye Gains Eye Gain: amount corneal reflection moves when pupil moves 1 degree during an eye movement Camera Gain: amount corneal reflection moves when pupil moves 1 degree during a camera movement

66 The Equations 4 Initial Equations, 4 Unknowns (equations (1)-(4) shown on slide)
4 unknowns - given that E and C can be found experimentally
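The equations themselves appear only as images on the slide. One plausible linear model consistent with the gain definitions on slide 65 - an assumption of this sketch, not necessarily the paper's exact system - is that the pupil displacement is the sum of the eye and camera components, while the CR moves by gain E per unit of eye movement and gain C per unit of camera movement. That 2x2 system can then be inverted once E and C are known:

```python
# Assumed linear model (a sketch, not the paper's verified formulation):
#   pupil = eye + cam
#   cr    = E * eye + C * cam
E = 0.35   # hypothetical eye gain (not reported in the transcript)
C = 0.85   # hypothetical camera gain (slides report ~0.85 empirically)

def separate(pupil, cr, E=E, C=C):
    """Invert the 2x2 system for the eye and camera components."""
    # cr = E*eye + C*(pupil - eye)  =>  eye = (C*pupil - cr) / (C - E)
    eye = (C * pupil - cr) / (C - E)
    cam = pupil - eye
    return eye, cam

# Synthetic check: 3 deg of eye movement plus 2 units of camera movement
# should be recovered exactly from the combined pupil and CR signals.
true_eye, true_cam = 3.0, 2.0
pupil = true_eye + true_cam
cr = E * true_eye + C * true_cam
eye, cam = separate(pupil, cr)
print(round(eye, 6), round(cam, 6))
```

The inversion only works because E and C differ; if the pupil and CR moved identically (the oversimplifying assumption), the system would be singular.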

67 Added Benefit Can smooth Camera array without loss of information from Pupil array: Assuming camera moves more slowly than eye moves. Result is on same level as Pupil only data Compensate for cam movement better AND Reduce noise Show smoothing animation - Eye array changes as camera array is smoothed

68 Determining the Gains Eye Gain: (Instruct subject to…)
Look at the center of the field-of-view. Keep camera and head perfectly still. Look through the calibration points. Cam Gain: (Instruct subject to…) Keep the eye fixated while moving the camera on the nose. How to do this: start by looking at the center of the field-of-view - no bias value to worry about and can deal with fraction … Move the camera a very small amount; the camera would not move far off the subject's nose during a task, so make realistic camera movements that we would like to compensate for. Linear regression - why is this okay? Show graph
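The regression step can be sketched directly: record pupil and CR positions during a pure camera movement (eye fixated), fit a line of CR against pupil, and take the slope as the camera gain. Everything below is synthetic, and an ordinary least-squares fit stands in for whatever regression routine the author used.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic calibration trial: the subject fixates while the camera is
# nudged, so every displacement is camera movement.  The true gain here is
# assumed; the slides report a comparable empirical fit
# (e.g. slope 0.8143, R^2 = 0.9768 for one subject).
TRUE_GAIN = 0.85
pupil = np.linspace(0, 20, 40)                          # pupil position (pixels)
cr = TRUE_GAIN * pupil + 4.6 + rng.normal(0, 0.2, pupil.size)

# Slope of the least-squares line CR vs. pupil estimates the camera gain.
slope, intercept = np.polyfit(pupil, cr, 1)
print("estimated camera gain:", round(slope, 3))
```

Small eye movements during the trial show up as residuals around this line, which is why the deck averages gains over several subjects.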

69 Eye Gain Results Single subject ABC

71 Eye Gain Results y = …x, R² = … (fit values shown on slide) Single subject ABC

72 Camera Gain Results Single subject ABC
If we look at the line with the eye equation that we saw before, we see that it appears as though the subject made some small eye movements during the trial. These errors in the linear regression are averaged out when we calculate the gains for multiple subjects and take the average.


74 Camera Gain Results y = 0.8143x + 4.5981 R2 = 0.9768
Single subject ABC If we look at the line with the eye equation that we saw before, we see that it appears as though the subject made some small eye movements during the trial. These errors in the linear regression are averaged out when we calculate the gains for multiple subjects and take the average.


76 Camera Gain Results slope = average gain of 5 subjects = 0.8524
Single subject ABC If we look at the line with the eye equation that we saw before, it appears as though the subject made some small eye movements during the trial. These errors in the linear regression are averaged out when we calculate the gains for multiple subjects and take the average.

77 Testing the Algorithm Collect data: Extract eye movements:
5 subjects looked through 9 calibration points while moving the eye tracker's headgear Extract eye movements: Use the average gains to calculate the Camera array; smooth the Camera array; subtract the smoothed Camera array from the Pupil array to obtain the Eye array Experiments, subjects, use of average gains, smoothing filters used The Eye array can also be considered a "Corrected Pupil array" - it represents the amount the pupil has moved during eye movements only

78 Horizontal Results
Have subjects start looking at center calibration point - call this 0 degrees


81 Vertical Results Results Continued
Have subjects start looking at center calibration point - call this 0 degrees


86 Results Continued Noise Reduction Get color version of these images

87 Noise Reduction Results Continued Get color version of these images
It may seem as though the reduction in noise for the third trial was not as successful, but if we look at the corresponding pupil data we see that the pupil data was noisier for the third trial; that is why the eye array is also noisier than for the first trial shown on the previous slide.

88 Conclusions Successful application to head-mounted video-based eye trackers that track both the pupil and corneal reflection. The same gain values can be used for all subjects. Final Eye array precision is on the order of the Pupil array precision. Noise due to the Corneal Reflection data is reduced: the CR is used to determine camera movement but does not affect the final data precision.

89 Next Steps Calibration - Eye array represents
eye movement in the head - need to map this to the world (via the scene camera) Investigate realistic camera movements and alternative smoothing options for the Camera array Obtain gain values for a larger group of subjects Test on larger eye movements Revision for remote trackers The Eye array is the corrected pupil array: it shows how the eye is moving but needs to be mapped to the scene image to show where the subject is looking in the scene. HOW DO WE APPLY THE CAMERA ARRAY???? Revise the method for remote trackers: new gains will need to be calculated since the camera is further away, and the camera gain is likely to approach the eye gain. Smoothing the Camera array: understand typical camera movements - amplitude and velocity - to design a smoothing filter appropriate for this data (the current smoothing method is a median filter followed by a gaussian filter).

94 Questions, Suggestions…

95 R.I.T. Wearable Eye Tracker
Advantage: Subject is less constrained, can perform more natural tasks Disadvantages: Head not stabilized - need to know where subject is looking at all times Camera (eye tracker) not stabilized - need to account for any movement of camera relative to head RIT wearable eye trackers LOWER PRECISION

96 Compensating for Eye Tracker Camera Movement
Susan M. Kolakowski and Jeff B. Pelz Visual Perception Laboratory Rochester Institute of Technology March 28, 2006

