From: A real head turner: Horizontal and vertical head directions are multichannel coded. Journal of Vision. 2011;11(9):17. doi:10.1167/11.9.17

Figure Legend: Opponent versus multichannel coding. The graphs on the left schematically represent the responses of a multichannel system with three hypothetical channels preferring heads oriented to the left (or down in the case of vertical head direction; red), heads facing directly ahead (black), and heads oriented to the right (or up in the case of vertical head direction; blue). The graphs on the right schematically represent the responses of an opponent-coding system comprising two hypothetical opponent-coded pools of cells preferring heads oriented to the left (or down in the case of vertical head direction; red) and heads oriented to the right (or up in the case of vertical head direction; blue). The graphs show responses in the baseline condition (a, d), following adaptation to alternating left–right (or up–down) oriented heads (b, e), and following adaptation to direct-facing heads (c, f). Gray-shaded regions indicate the range in which a probe would be categorized as "direct"; this range is determined by the crossover points between the channels. In the graphs for the adaptation conditions, dashed lines represent channel engagement following adaptation; for comparison, responses in the baseline condition are graphed as solid lines. The bar graphs at the top of the figure indicate the proposed engagement of each channel for three optimal stimuli: left- (or down-), direct-, and right- (or up-) oriented heads.
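The multichannel logic described in the legend (direction-tuned channels whose crossover points bound the "direct" category, with adaptation reducing the engagement of the adapted channels) can be illustrated with a small simulation. The sketch below is not code from the paper: the Gaussian tuning curves, the channel preferences and width, and the multiplicative gain reduction used to stand in for adaptation are all illustrative assumptions.

```python
# Minimal sketch (illustrative, not from the paper): three Gaussian-tuned
# channels for head direction, adaptation modeled as a multiplicative gain
# reduction, and the "direct" range defined by the crossover points between
# the central channel and its flankers.
import numpy as np

angles = np.linspace(-60, 60, 1201)           # probe head direction, degrees
prefs = {"left": -40.0, "direct": 0.0, "right": 40.0}  # assumed preferences
sigma = 25.0                                  # assumed tuning width, degrees

def channel_response(theta, pref, gain=1.0):
    """Gaussian tuning curve scaled by an adaptation gain."""
    return gain * np.exp(-0.5 * ((theta - pref) / sigma) ** 2)

def direct_range(gains):
    """Return the span of angles where the 'direct' channel responds most
    strongly, i.e. the region bounded by its crossovers with the flankers."""
    resp = {name: channel_response(angles, pref, gains[name])
            for name, pref in prefs.items()}
    direct_wins = (resp["direct"] >= resp["left"]) & (resp["direct"] >= resp["right"])
    covered = angles[direct_wins]
    return covered.min(), covered.max()

# Baseline: all channels at full gain.
print("baseline:", direct_range({"left": 1.0, "direct": 1.0, "right": 1.0}))

# Adapting to alternating left/right heads reduces the flanking channels,
# so the crossover points move outward and the 'direct' range widens.
print("left/right adaptation:", direct_range({"left": 0.6, "direct": 1.0, "right": 0.6}))

# Adapting to direct heads reduces the central channel,
# so the crossover points move inward and the 'direct' range narrows.
print("direct adaptation:", direct_range({"left": 1.0, "direct": 0.6, "right": 1.0}))
```

Under these assumed parameters, the printed baseline range of roughly ±20° widens after left/right adaptation and narrows after adaptation to direct-facing heads, which is the qualitative pattern the crossover logic in the legend implies for a multichannel system.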