
Gaze-Centered Updating of Remembered Visual Space During Active Whole-Body Translations
Stan Van Pelt and W. Pieter Medendorp
J. Neurophysiol. 97: 1209-1220, 2007
Journal Club, January 17, 2008

Research on rotational (eye) movements suggests that various cortical and subcortical structures update the gaze-centred coordinates of remembered visual stimuli, maintaining an accurate representation of visual space from which motor plans can be produced. Translation poses an extra challenge for the visual system: motion parallax shifts the retinal positions of objects in front of and behind the fixation point to opposite sides of the retina.
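To make the parallax geometry concrete, here is a minimal sketch (my own toy example, not from the paper; the fixation and target distances are made-up values) that computes the horizontal retinal angle of a near and a far target while an observer at different lateral positions keeps fixating FP:

```python
import numpy as np

def retinal_angle(obs_x, target, fp=(0.0, 1.0)):
    """Horizontal angle (deg) of a target relative to the gaze line, for an
    observer at (obs_x, 0) fixating fp. Coordinates are (lateral x, depth)."""
    gaze = np.arctan2(fp[0] - obs_x, fp[1])        # azimuth of the gaze line
    t = np.arctan2(target[0] - obs_x, target[1])   # azimuth of the target
    return np.degrees(t - gaze)

near, far = (0.0, 0.7), (0.0, 1.5)   # in front of / behind FP at 1.0 m (made up)
for x in (-0.2, 0.0, 0.2):           # three lateral body positions (m)
    print(f"x = {x:+.1f} m   near: {retinal_angle(x, near):+6.2f} deg"
          f"   far: {retinal_angle(x, far):+6.2f} deg")
```

As the body translates, the near and far targets move to opposite sides of the gaze line, which is exactly the parallax problem the updating mechanism has to solve.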

The authors aim to investigate which reference frame is used as the computational basis during translations. To decide, they propose two prediction models: a gaze-dependent model and a gaze-independent model.

Paradigm (figure):
1. Observer on one side, fixating FP
2. Target flashed in front of or behind FP
3. Observer translates, still fixating FP
4. Observer reaches to the memorised target

Targets Tf and Tn end up at reversed retinal positions after the translation. The translation must therefore be estimated correctly for the memorised targets to be updated correctly; if it is not, reach errors result. Basic assumption: observers generally misestimate the amount of self-motion when translating.

Gaze-dependent prediction: the update simulates the motion parallax, so a misestimated translation produces reach errors in opposite directions for targets Tf and Tn.

Gaze-independent prediction: in a body-fixed reference frame the parallax geometry plays no role, so a misestimated translation produces reach errors in the same direction for targets Tf and Tn.
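The logic behind the two predictions can be sketched with a toy computation (my own construction, not the authors' code; the geometry and gain value are made up). Assume the observer underestimates the translation by a gain G < 1 and updates the remembered target either in gaze-centred or in body-fixed coordinates:

```python
import numpy as np

FP = np.array([0.0, 1.0])        # fixation point (lateral x, depth), made up
X0, DX, G = -0.3, 0.6, 0.7       # start position, translation, gain (G < 1: underestimate)

def azimuth(obs_x, point):
    return np.arctan2(point[0] - obs_x, point[1])

def reach_gaze_dependent(target):
    # Update in gaze-centred coordinates using the believed translation G*DX,
    # then reconstruct the reach endpoint (at the target's true depth) from
    # the true end position X0 + DX.
    xb, x1 = X0 + G * DX, X0 + DX
    alpha = azimuth(xb, target) - azimuth(xb, FP)   # mis-updated retinal angle
    return x1 + target[1] * np.tan(azimuth(x1, FP) + alpha)

def reach_gaze_independent(target):
    # Update in body-fixed coordinates: simply subtract the believed translation.
    return (X0 + DX) + (target[0] - X0 - G * DX)

for name, t in [("Tn (near)", np.array([0.0, 0.7])),
                ("Tf (far) ", np.array([0.0, 1.5]))]:
    print(f"{name}: gaze-dep error {reach_gaze_dependent(t) - t[0]:+.3f} m,"
          f" gaze-indep error {reach_gaze_independent(t) - t[0]:+.3f} m")
```

With these numbers the gaze-dependent model produces errors of opposite sign for the near and far targets, while the gaze-independent model produces the same error for both: exactly the dissociation the experiment exploits.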

Methods
- Main experiment and 3 control experiments
- 12 observers in the main experiment, 5 in the controls
- Optotrak 3020 tracking of the head and index finger; EyeLink II eye tracker
- 2 conditions: stationary task (160 trials) and translation task (160 trials)
- Catch trials (32 (x2?) trials)
- Run on two separate occasions; both times, half of the translation trials first, then half of the stationary trials

Methods
- Observer on one side, fixating FP
- Target flashed in front of or behind FP (4 positions)
- Observer translates sideways
- Reach to the memorised target
- Position of the observer and fingertip tracked
- Recordings of horizontal body position, gaze vergence, gaze version, and reaching

Speaker notes: the figure describes the translation task; the stationary task was basically the same. The observer was placed on one side. When the trial started (signal?), he had to look at FP. After 1.5 s, a target was flashed for 0.5 s in one of 4 positions. The target went off and the observer moved to the other side. An auditory signal indicated when to start reaching, and another signal when to stop. As I understand it, the observer reached at the remembered position.

Data analysis
- Relationship between the two models assessed by Model II regression (NV and ME); the slope and confidence limits were estimated by the bootstrap method (sketched below)
- Stationary task used as a measure of the errors due to perceptual and motor effects, assuming an equal contribution in both tasks
- 2-D vectorial analysis of the interaction between initial target position, translational motion, and reach response, for each of the two reference frames
- Trial exclusion: saccades, fixation deviating more than 3 deg from FP, or reaching or stepping too early
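The slide does not spell out the statistics, but a percentile-bootstrap estimate of a Model II (major-axis) regression slope can be sketched as follows (synthetic data; the paper may have used a different Model II variant):

```python
import numpy as np

rng = np.random.default_rng(0)

def major_axis_slope(x, y):
    # Model II (major-axis) regression: both variables carry measurement error.
    sxx, syy = np.var(x), np.var(y)
    sxy = np.cov(x, y, bias=True)[0, 1]
    return (syy - sxx + np.sqrt((syy - sxx)**2 + 4 * sxy**2)) / (2 * sxy)

def bootstrap_slope(x, y, n_boot=2000, alpha=0.05):
    # Percentile-bootstrap confidence limits for the slope.
    n = len(x)
    slopes = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)        # resample trials with replacement
        slopes[i] = major_axis_slope(x[idx], y[idx])
    lo, hi = np.percentile(slopes, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return major_axis_slope(x, y), lo, hi

# Synthetic stand-ins for, e.g., predicted vs observed errors
x = rng.normal(0.0, 1.0, 100)
y = 0.8 * x + rng.normal(0.0, 0.3, 100)
slope, lo, hi = bootstrap_slope(x, y)
print(f"slope = {slope:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```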

Main findings
- Reach responses showed parallax-sensitive updating errors
- Errors reversed in lateral direction for targets presented at opposite depths from FP
- Errors increased with larger depth from FP

Reach error = translation-task error − mean stationary-task error, computed per target distance. Random points from each target distance were compared with random points from another target distance, in the pairs T1/T4 and T2/T3. Reach responses showed parallax-sensitive updating errors: errors reversed in lateral direction for targets presented at opposite depths from FP, with larger errors at the outermost positions.
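The baseline correction on this slide is a per-target subtraction; a minimal sketch with made-up endpoint data:

```python
import numpy as np

# Toy lateral reach endpoints (m), a few trials per target (values made up)
translation = {"T1": np.array([0.06, 0.08, 0.05]),
               "T4": np.array([-0.07, -0.09, -0.06])}
stationary  = {"T1": np.array([0.01, 0.02, 0.00]),
               "T4": np.array([0.01, 0.00, 0.02])}

# Reach error = translation-task error minus mean stationary error per target
reach_error = {t: translation[t] - stationary[t].mean() for t in translation}
print(reach_error)
```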

Combinations of T1/T4 and T2/T3 are plotted against each other according to their values, with the best-fit line through the data. The result favours the gaze-dependent model.

[table]

Predicted updating error E
- Ti: position before translation (estimated from the average stationary response)
- Tf: position after translation
- (Tf − Ti): actual translational motion
- R: reach response
- E: predicted updating error

Regression: R − Ti = a(Tf − Ti) + b

Coordinate axes: for the gaze-dependent model, along and orthogonal to the gaze line; for the gaze-independent model, along and orthogonal to the shoulder line.

The gaze-dependent model was the closer match: the better prediction in 9/12 observers, with higher correlation coefficients (p < 0.01). The authors suggest that this model can account for the systematic errors in the data, whereas the gaze-independent model does not.

[table]
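One way to read the slide's regression (my algebraic reading, not an equation taken from the paper): if the fitted line is used as the prediction rule, the implied updating error is

$$
\hat{R} = T_i + a\,(T_f - T_i) + b
\quad\Longrightarrow\quad
E = \hat{R} - T_f = (a - 1)\,(T_f - T_i) + b,
$$

so a slope a = 1 with intercept b = 0 corresponds to perfect updating, and a misestimated translation shows up as a slope different from 1.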

Control experiments (5 observers, horizontal reaching errors); the conclusion remains:
1. No feedback from the hand
2. Fixate the FP position the whole time
3. FP disappears, but fixate its location until reaching

Authors' conclusions
According to their quantitative geometrical analyses, translational updating errors were better described in gaze-centred than in gaze-independent coordinates. The authors conclude that spatial updating for translational motion operates in gaze-centred coordinates.

Considerations from the authors
- 3/12 observers did not favour the model
- The authors admit that their models rest on very simple geometry, and that the brain might have a far more complex representation of visual space
- They focused on the central representation of body translation as the mechanism underlying the errors; there may be many other sources of updating errors

And now… us!
- Are the models valid?
- Are there fundamental problems in the experimental setup?
- Is this a sensible way of processing these data?
- Do we believe the results?

Horizontal data [figure]

Vectorial analysis [figure]