Presentation on theme: "Biases: An Example. Non-accidental properties: properties that appear in an image that are very unlikely to have been produced by chance, and therefore are likely to reflect properties of the 3-D world."— Presentation transcript:

1 Biases: An Example
Non-accidental properties: properties that appear in an image that are very unlikely to have been produced by chance, and therefore are likely to reflect properties of the 3-D world. Examples (view-point invariant properties): straight lines, parallel lines.

2 [Diagram: a hierarchy of visual representations]
mental representation of objects (view-point invariant): book, key, cat, tree, etc.
? (what comes in between?)
mental representation of bars of light (view-point dependent)

3 [Diagram: the hierarchy, with the first candidate answer]
mental representation of objects (view-point invariant): book, key, cat, tree, etc.
1) Direct models
mental representation of bars of light (view-point dependent)

4 1) Direct models: Lades et al. Model; Poggio & Edelman Model
mental representation of objects (view-point invariant): book, key, cat, tree, etc.

Lades et al. Model:
- input layer is like the representation in V1 (called "Gabor jets")
- inputs on that layer are matched to inputs in a memory layer
- the object is identified based on the match with the least distortion

Poggio & Edelman Model:
- input layer is like the representation in V1
- a 3-layer network is trained to rotate all views of an image to one view
- the hidden units are seen as a way of rotating images to match memory images ("radial basis functions")

mental representation of bars of light (view-point dependent)
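The least-distortion matching step described for the Lades et al. Model can be sketched as nearest-neighbor matching over feature vectors. This is an illustration only: the feature values and object names below are hypothetical stand-ins for Gabor-jet responses, not the model's actual representation.

```python
import math

# Hypothetical stand-ins for Gabor-jet feature vectors: each stored
# object is a short list of feature activations from a V1-like input layer.
memory = {
    "book": [0.9, 0.1, 0.4],
    "key":  [0.2, 0.8, 0.5],
    "cat":  [0.5, 0.5, 0.9],
}

def identify(input_features):
    """Identify the object whose stored representation matches the
    input with the least distortion (smallest Euclidean distance)."""
    def distortion(stored):
        return math.sqrt(sum((a - b) ** 2
                             for a, b in zip(input_features, stored)))
    return min(memory, key=lambda name: distortion(memory[name]))

print(identify([0.85, 0.15, 0.35]))  # closest to "book"
```

The point of the sketch is that recognition here is purely a comparison of 2-D image-like representations, which is exactly the property criticized on a later slide.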

6 2) View-point invariant models: Lowe’s SCERPO Model; Ullman’s Model
mental representation of objects (view-point invariant): book, key, cat, tree, etc.
mental representation of the non-accidental properties of an image (view-point invariant)
- the input layer takes information represented as it is in V1
- view-point invariant information is extracted
- this allows the input image to be rotated in order to fit an image stored in memory
mental representation of bars of light (view-point dependent)
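The rotate-to-fit step described here can be sketched as a search over candidate rotations. This is a toy 2-D version under assumed point features, not Lowe's or Ullman's actual algorithms:

```python
import math

# Stored memory shape: a few 2-D feature points (hypothetical).
stored = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0)]

def rotate(points, theta):
    """Rotate 2-D points about the origin by theta radians."""
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x - s * y, s * x + c * y) for x, y in points]

def best_fit_error(observed):
    """Sweep candidate rotations and return the smallest summed
    point-to-point distance between the rotated input and memory."""
    def error(theta):
        return sum(math.dist(p, q)
                   for p, q in zip(rotate(observed, theta), stored))
    return min(error(math.radians(d)) for d in range(0, 360, 5))

# A view rotated 45 degrees still matches the stored shape almost exactly.
view = rotate(stored, math.radians(45))
print(round(best_fit_error(view), 6))  # prints 0.0
```

The extracted view-point invariant properties would, in the real models, constrain which rotations are worth trying; the brute-force sweep here just makes the idea concrete.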

7 The problem with all of these theories:
The representation of objects in memory is stored as a two-dimensional image, to which visual input is rotated, distorted, and matched. But in actuality, objects are three-dimensional things in the world. So let's build a model in which the basic units that make up mental representations of objects are three-dimensional solids, rather than lines and edges.

8 [Diagram: the hierarchy, with geons added]
mental representation of objects (view-point invariant): book, key, cat, tree, etc.
mental representation of geons (view-point invariant)
mental representation of the non-accidental properties of an image (view-point invariant)
mental representation of bars of light (view-point dependent)

9 Geons
Non-accidental properties that distinguish geons:
- straight, parallel edges
- curved, parallel edges
- corners
- Y intersections
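A simplified sketch of how non-accidental edge properties could pick out a geon type. The two properties and four geon names below follow Biederman's scheme only loosely and are an assumed illustration, not the full theory:

```python
# Assumed, simplified mapping: two view-point invariant edge properties
# are enough to distinguish four geon types (loosely after Biederman).
def geon_type(edges_straight, sides_parallel):
    if sides_parallel:
        return "brick" if edges_straight else "cylinder"
    return "wedge" if edges_straight else "cone"

print(geon_type(edges_straight=False, sides_parallel=True))  # cylinder
```

Because both properties survive rotation in depth, the classification itself is view-point invariant, which is the motivation for building object representations out of geons.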

11 What processing and where processing
mental representation of objects (view-point invariant): book, key, briefcase, cat, etc.
mental representation of geons (view-point invariant)
What processing (geon units): rectangle unit, cylinder unit, cone unit, tube unit
Where processing (relation units): above unit, below unit, left-of unit, right-of unit
Geon units and relation units are linked by temporal binding.
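The geon-plus-relations scheme on this slide can be sketched as structural descriptions: an object is a set of (geon, relation, geon) triples, and recognition is matching the input description against memory. The stored objects, geons, and relations below are hypothetical illustrations:

```python
# Hypothetical structural descriptions: each object is a set of
# (geon, spatial-relation, geon) triples. The description is view-point
# invariant because both the geons and relations like "above" survive
# rotation in depth.
memory = {
    "briefcase": {("rectangle", "above", "rectangle"),
                  ("tube", "above", "rectangle")},   # handle above body
    "bucket":    {("tube", "above", "cylinder")},    # handle above body
}

def recognize(description):
    """Return the stored object sharing the most geon-relation triples
    with the input description."""
    return max(memory, key=lambda name: len(memory[name] & description))

print(recognize({("tube", "above", "cylinder")}))  # "bucket"
```

The "temporal binding" on the slide solves what this sketch gets for free: in a network of separate what-units and where-units, something must mark which geon unit goes with which relation unit, and synchronized firing is one proposed mechanism.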

12 Evidence for geons? Experiment 1: Visual Priming
Reaction Time: 900 ms. [Figure: trial timeline; first image, response, second image, response]

13 Evidence for geons? Experiment 1: Visual Priming
Reaction Time: 700 ms. [Figure: trial timeline; first image, response, second image, response]

14 Evidence for geons? Experiment 1: Visual Priming
Reaction Time: 800 ms. [Figure: trial timeline; first image, response, second image, response]

15 Evidence For Geons: Priming Studies
Shared between first and second presentation?
Lines/Edges   Geons   Basic Cat.   Response Time
Yes           Yes     Yes          700 ms
No            Yes     Yes          700 ms
No            No      No           900 ms

16 Evidence For Geons: Priming Studies
Shared between first and second presentation?
Lines/Edges   Geons   Spec. Cat.   Basic Cat.   Resp. Time
Yes           Yes     Yes          Yes          700 ms
No            No      Yes          Yes          800 ms
No            No      No           Yes          800 ms

17 Objects, Faces and Rotation:

18 Face Recognition
Is recognizing faces an entirely different problem (computationally) from recognizing objects?
Objects: "M"-shaped rotation function
Faces: shape of the rotation function?

19 Face Recognition Experiment

20 Objects, Faces and Rotation:

21 Face Recognition: different processes?
Is recognizing faces an entirely different problem (computationally) from recognizing objects?
Objects: "M"-shaped rotation function; less affected by illumination; recognition by components (geon theory)
Faces: shape of the rotation function?; more affected by illumination; recognition by coordinates (templates)

22 Double Dissociation
Patient Studies
Prosopagnosia: objects can be recognized, faces cannot.
Processes responsible for object recognition:
Processes responsible for face recognition:

23 Double Dissociation
Patient Studies
Prosopagnosia: objects can be recognized, faces cannot.
Patient CK: faces can be recognized, objects cannot.
Processes responsible for object recognition:
Processes responsible for face recognition:

24 Double Dissociation
Patient Studies
Prosopagnosia: objects can be recognized, faces cannot.
Patient CK: faces can be recognized, objects cannot.
Neuroimaging Studies
Faces activate the FFA (fusiform face area) in the temporal lobe.
Objects activate the PPA (parahippocampal place area).

25 Double Dissociation: General Form
IF: the ability to perform some task X is affected by or correlated with some factor A (damage to an area, activity in a part of the brain, or performing some other task) but not by factor B (damage to a different area, activity in a different part, or some other task),
AND: the ability to perform some task Y is affected by or correlated with some factor B but not by factor A,
THEN: there is a double dissociation between X and Y, and the mechanisms required to perform them are functionally independent.
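The general form above is just a logical predicate, and can be written down directly. The factor names and the truth values in the example table are hypothetical, chosen to mirror the face/object dissociation discussed on the surrounding slides:

```python
# The general form of a double dissociation as a predicate.
# affects[(task, factor)] is True when the factor changes performance
# on the task (hypothetical data for illustration).
def double_dissociation(affects, x, y, a, b):
    return (affects[(x, a)] and not affects[(x, b)]
            and affects[(y, b)] and not affects[(y, a)])

affects = {
    ("face recognition", "FFA damage"):   True,
    ("face recognition", "PPA damage"):   False,
    ("object recognition", "PPA damage"): True,
    ("object recognition", "FFA damage"): False,
}

print(double_dissociation(affects, "face recognition", "object recognition",
                          "FFA damage", "PPA damage"))  # True
```

Note the asymmetry the definition guards against: a single dissociation (X impaired, Y spared) could just mean X is harder; only the crossed pattern licenses the inference of functionally independent mechanisms.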

26 Dissociation: What and Where Pathways
Some patients:
- can identify objects by shape
- cannot say where objects are
- cannot navigate the world
Damage: parietal lobe (dorsal visual stream)

27 Dissociation: What and Where Pathways
Some patients:
- cannot identify objects by shape
- can say where objects are
- can navigate the world
Damage: temporal lobe (ventral visual stream)

28 Dissociation: What and Where Pathways
Patient D.F.
Cannot: explicitly line up an envelope with a line. (Task: "Take this envelope and line it up with this line.")
Can: use the motor system (part of the "where" pathway) to do the same task she just failed at. (Task: "Take this envelope and 'mail the letter,' pretending that this line is a mailbox.")

29 Dissociation: Two Face Recognition Systems
1) Recognizing faces the normal way: "Hey, I know you!"
2) Recognizing by a change in skin conductance for familiar faces.

30 Dissociation: Two Face Recognition Systems
Person A can recognize faces the normal way, but shows no change in skin conductance.
Person B cannot recognize faces the normal way, but does show a change in skin conductance for familiar faces.

