From cortical anisotropy to failures of 3-D shape constancy
Qasim Zaidi and Elias H. Cohen, State University of New York College of Optometry



Shape Constancy
Shape is the geometric property of an object that is invariant to location, rotation, and scale. The ability to perceive the shape of a rigid object as constant across viewpoints has been considered essential to perceiving objects accurately. The visual system does not discount all perspective distortions, however, so the shapes of many 3-D objects change with viewpoint. Can shape constancy be expected for rotations within the image plane? [Figure: north and south views of The Future Building, Manhattan (Griffiths & Zaidi, 2000)]

3-D shape constancy across image rotations? Does rotating a shape from vertical to oblique preserve its perceived depth? [Figure: vertical and oblique views of convex and concave wedges]

Stimuli
Shapes: perspective projections of convex and concave wedges, presented in a circular window. Experiment 1 compared 5 vertical shapes to 5 oblique shapes in depth (concave to concave, convex to convex).
Texture: sine-wave gratings at 3 spatial frequencies (1, 3, and 6 cpd), oriented at 90, ±67.5, ±45, and ±22.5 degrees with respect to the 3-D axis, added in randomized phases to make 10 different textures per shape.
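The texture construction can be sketched as a sum of oriented sine-wave gratings in randomized phases behind a circular window. This is a minimal sketch; the image size, unit coordinates, and contrast are illustrative, not the authors' stimulus code:

```python
import numpy as np

# Minimal sketch of the texture construction (assumption: the 256-px
# image and unit coordinates are illustrative, not the authors' code).
# Each texture is a sum of oriented sine-wave gratings in randomized
# phases, shown through a circular window.
rng = np.random.default_rng(0)
size = 256
yy, xx = np.mgrid[0:size, 0:size] / size          # coordinates in [0, 1)

freqs = [1, 3, 6]                                 # cycles (stand-in for cpd)
orients = np.deg2rad([90, 67.5, -67.5, 45, -45, 22.5, -22.5])

texture = np.zeros((size, size))
for f in freqs:
    for a in orients:
        phase = rng.uniform(0, 2 * np.pi)         # randomized phase
        texture += np.sin(2 * np.pi * f * (xx * np.cos(a) + yy * np.sin(a)) + phase)

# circular window, as in the stimuli
texture[np.hypot(yy - 0.5, xx - 0.5) > 0.5] = 0.0
```

Drawing a fresh set of phases generates each of the 10 textures per shape.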

Exp. 1: Failures of 3-D shape constancy. A vertical vs. oblique comparison task: subjects viewed two shapes sequentially and judged which shape was greater in depth.

[Trial sequence: 500 msec presentations]

Exp. 1: Shape comparison results. The same shape was perceived to be deeper when it was oriented vertically than when it was oriented obliquely. Oblique shapes were matched by vertical shapes with 0.77 times the depth of the oblique shape (S.E. = .007).

3-D Shape from Texture
Perception of shape from texture depends on patterns of orientation flows (Li & Zaidi, 2001; 2004). [Figure: textured shape with no orientation component orthogonal to the axis of curvature]
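The local orientations that form such flows can be estimated in many ways; as a stand-in for whatever oriented-filter front end is assumed, a pooled structure-tensor estimate is sketched below (the function name and grating test are illustrative):

```python
import numpy as np

# Sketch (assumption: a pooled structure-tensor estimate stands in for
# whatever oriented-filter front end extracts the orientation flows).
def dominant_orientation(img):
    """Dominant gradient orientation of `img`, in degrees in [0, 180)."""
    gy, gx = np.gradient(img.astype(float))       # axis 0 is y (rows)
    jxx = (gx * gx).sum()
    jyy = (gy * gy).sum()
    jxy = (gx * gy).sum()
    theta = 0.5 * np.arctan2(2.0 * jxy, jxx - jyy)
    return np.degrees(theta) % 180.0

# sanity check on a synthetic grating whose gradient points at 30 deg
yy, xx = np.mgrid[0:64, 0:64] / 64
theta = np.deg2rad(30)
grating = np.sin(2 * np.pi * 4 * (xx * np.cos(theta) + yy * np.sin(theta)))
est = dominant_orientation(grating)               # close to 30
```

Applying the same estimate in local patches, rather than pooled over the whole image, yields an orientation-flow map.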

Origins of the oblique bias for 3-D shape. Is the 3-D oblique bias (OB) explained by an OB for the 2-D oriented components? Is there a corresponding OB for single 2-D angles?

Exp. 2: Failures of 2-D angle constancy. A vertical vs. oblique comparison task: subjects viewed two angles sequentially and judged which angle was sharper.

[Trial sequence: 500 msec presentations]

Exp. 2: Angle comparison results. The same angle was perceived to be sharper when it was oriented vertically than when it was oriented obliquely. On average, oblique angles were matched by vertical angles 4.5° shallower.

Predicting the 3-D depth bias from the 2-D angle bias
The average ratio of perceptually equivalent 2-D slopes (S.E. = .001) predicts the ratio of perceptually equivalent 3-D depths (S.E. = .007), irrespective of the wedge size h. 3-D depth inconstancy can thus be explained by anisotropy in the perception of 2-D features.
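The step from matched 2-D slopes to matched 3-D depths is plain wedge geometry. A sketch, under the assumption that the wedge has half-width h and depth d, so each face has slope d/h and the projected apex angle is 2·atan(h/d) (the function names are illustrative):

```python
import math

# Wedge geometry sketch (assumption: half-width h and depth d, so each
# face has slope d / h and the apex angle is 2 * atan(h / d)).
def apex_angle_deg(h, d):
    """Full 2-D apex angle of a wedge with half-width h and depth d."""
    return 2.0 * math.degrees(math.atan2(h, d))

def matched_depth(h, d, rho):
    """Depth of the wedge whose faces have rho times the original slope."""
    return (rho * (d / h)) * h        # = rho * d, for every h

# scaling slopes by rho scales depth by the same rho, irrespective of h
for h in (0.5, 1.0, 2.0):
    assert math.isclose(matched_depth(h, d=1.0, rho=0.77), 0.77)
```

So a 0.77 ratio of perceptually equivalent slopes translates directly into a 0.77 ratio of perceptually equivalent depths, whatever the wedge size.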

[Figures: orientation anisotropies in cat V1 cells (Li et al., 2003); oriented energy in natural images (Hansen & Essock, 2004)]

Stimulus orientation decoded from cortical responses
The probability that an orientation-tuned cell will give a spike in response to an orientation θ is determined by its tuning curve f_i(θ) (Sanger, 1996). The probability of cell i giving n_i spikes is given by a Poisson distribution:

P(n_i \mid \theta) = \frac{f_i(\theta)^{n_i}\, e^{-f_i(\theta)}}{n_i!}

For independently responding neurons, the probability of n_i spikes each from k cells is given by the product of the probabilities:

P(n_1, \dots, n_k \mid \theta) = \prod_{i=1}^{k} P(n_i \mid \theta)
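The spike-generation model above can be sketched directly; a von Mises-style curve stands in for the measured tuning curves f_i(θ), and the gain and width are illustrative:

```python
import math
import numpy as np

# Sketch (assumption: a von Mises-style curve stands in for the measured
# tuning curves f_i(theta); the gain and width are illustrative).
def tuning(theta, pref, kappa=2.0, gain=10.0):
    """Mean spike count of a cell preferring `pref` (pi-periodic)."""
    return gain * np.exp(kappa * (np.cos(2.0 * (theta - pref)) - 1.0))

rng = np.random.default_rng(1)
prefs = np.linspace(0, np.pi, 16, endpoint=False)   # k = 16 cells
theta = np.deg2rad(60)                              # stimulus orientation
rates = tuning(theta, prefs)                        # f_i(theta)
spikes = rng.poisson(rates)                         # n_i ~ Poisson(f_i(theta))

# independence: the joint log probability is a sum over cells,
# log P(n | theta) = sum_i [n_i log f_i - f_i - log(n_i!)]
loglik = sum(n * math.log(f) - f - math.lgamma(n + 1)
             for n, f in zip(spikes, rates))
```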

Stimulus orientation decoded from cortical responses
Using Bayes' formula, the optimal estimate of the stimulus is the peak of the posterior probability distribution, where P(θ) is the probability of θ in natural images:

P(\theta \mid n_1, \dots, n_k) \propto P(\theta) \prod_{i=1}^{k} P(n_i \mid \theta)

Equivalently, the peak of the log posterior:

\hat{\theta} = \arg\max_{\theta} \Big[ \sum_{i=1}^{k} \big( n_i \log f_i(\theta) - f_i(\theta) \big) + \log P(\theta) \Big]

Given d_i cells tuned to each orientation θ_i, the sum is grouped using the average responses \bar{n}_i:

\hat{\theta} = \arg\max_{\theta} \Big[ \sum_{i} d_i \big( \bar{n}_i \log f_i(\theta) - f_i(\theta) \big) + \log P(\theta) \Big]
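The MAP read-out can be sketched by maximizing the log posterior over a grid of candidate orientations; the same von Mises stand-in tuning and a flat prior replace the measured curves and the natural-image prior P(θ):

```python
import numpy as np

# MAP decoding sketch (assumption: von Mises stand-in tuning and a flat
# prior replace the measured curves and the natural-image prior).
def tuning(theta, pref, kappa=2.0, gain=10.0):
    return gain * np.exp(kappa * (np.cos(2.0 * (theta - pref)) - 1.0))

def decode(spikes, prefs):
    """Peak of sum_i [n_i log f_i(theta) - f_i(theta)] (+ log P, flat)."""
    grid = np.linspace(0, np.pi, 1800, endpoint=False)   # candidate thetas
    f = tuning(grid[:, None], prefs[None, :])            # (grid, cells)
    log_post = (spikes[None, :] * np.log(f) - f).sum(axis=1)
    return grid[np.argmax(log_post)]

rng = np.random.default_rng(2)
prefs = np.linspace(0, np.pi, 16, endpoint=False)
spikes = rng.poisson(tuning(np.deg2rad(60), prefs))      # stimulus at 60 deg
theta_hat = np.degrees(decode(spikes, prefs))            # lands near 60
```

With isotropic tuning like this, the decode is unbiased; anisotropic tuning widths shift the peak, which is the mechanism the talk invokes.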

Stimulus angle decoded from cortical responses
Using orientation-tuned cells in V1, plus cross-orientation inhibition, we derived a matrix-valued tuning function for (V4?) cells selective for angles φ composed of two lines θ_p and θ_q. For the prior P(φ) we made a rough approximation. Finally, stimulus angles were decoded from the population responses of orientation-tuned cells using an equation similar to that for orientations.
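The derived tuning function is not reproduced here, so the sketch below substitutes a simple stand-in: an angle cell responds as the product of two orientation tunings, one per limb, and the angle is read out as the MAP limb pair under a flat prior. The cross-orientation inhibition and measured anisotropies of the actual model are omitted, so this isotropic version decodes without bias:

```python
import numpy as np

# Sketch (assumption: an "angle cell" is modeled as the product of two
# orientation tunings, one per limb; the talk's cross-orientation
# inhibition and measured anisotropies are omitted).
rng = np.random.default_rng(3)

def ori_tuning(theta, pref, kappa=2.0, gain=8.0):
    return gain * np.exp(kappa * (np.cos(2.0 * (theta - pref)) - 1.0))

prefs = np.deg2rad(np.arange(0, 180, 15))            # limb preferences
tp, tq = np.deg2rad(20), np.deg2rad(160)             # limbs of a 140-deg angle
rates = ori_tuning(tp, prefs)[:, None] * ori_tuning(tq, prefs)[None, :] / 8.0
spikes = rng.poisson(rates)                          # one count per angle cell

# MAP over candidate limb pairs (theta_p, theta_q), flat prior P(phi)
grid = np.deg2rad(np.arange(0.0, 180.0, 1.0))
fp = ori_tuning(grid[:, None], prefs[None, :])       # (candidates, prefs)
A = np.log(fp) @ spikes.sum(axis=1)                  # evidence for theta_p
B = np.log(fp) @ spikes.sum(axis=0)                  # evidence for theta_q
S = fp.sum(axis=1)
log_post = A[:, None] + B[None, :] - np.outer(S, S) / 8.0
i, j = np.unravel_index(np.argmax(log_post), log_post.shape)
decoded = abs(np.degrees(grid[j] - grid[i]))         # close to 140
```

In the full model, it is the measured tuning anisotropies, together with the inhibition, that push the decoded oblique and vertical angles apart.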

Assumption: the observer perceives an angle equal to the optimally decoded angle, i.e., the peak of the posterior probability distribution. [Figure: stimulus angle 140°; decoded oblique angle 142°; decoded vertical angle 138°]

From cortical anisotropy to shape inconstancy
1. We show an oblique bias for 3-D appearance.
2. The 3-D effect can be explained by an oblique bias for 2-D angles.
3. Simulations show that the anisotropy in orientation tuning of cortical neurons, plus cross-orientation inhibition, explains the 2-D oblique bias.
4. Anisotropy in numbers of cells predicts the opposite bias.
5. The predictions were insensitive to the prior distribution.

Consequences of the oblique bias for angle perception [Figures: examples from Zucker et al., Fleming et al., Cohen & Singh, and Tse]

Conclusions
1. If the perception of 3-D shape depends on the extraction of simple image features, then bias in the appearance of the image features will lead to bias in the appearance of 3-D shape.
2. Variations in properties within neural populations can have direct effects on visual percepts, and need to be included in neural decoding models.

Reference: Cohen, E. H., & Zaidi, Q. Fundamental failures of shape constancy due to cortical anisotropy. Journal of Neuroscience (under review).