Chapter 2.


1 Chapter 2

2 Outline
Linear filters
Visual system (retina, LGN, V1)
Spatial receptive fields: V1, LGN, retina
Temporal receptive fields in V1
Direction selectivity

3 Linear filter model
The linear estimate of the firing rate is $r_{est}(t) = r_0 + \int_0^\infty D(\tau)\, s(t-\tau)\, d\tau$. Given the stimulus s(t) and the response r(t), what is the kernel D?
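As an aside, a minimal Python sketch of what the linear filter model computes once D is known, approximated with discrete time bins; the bin size, background rate and example kernel are illustrative assumptions, not taken from the slides.

    import numpy as np

    dt = 0.002                                 # time bin in seconds (assumed)
    tau = np.arange(0.0, 0.3, dt)              # kernel support, 0-300 ms
    D = np.exp(-tau / 0.05) * np.sin(2 * np.pi * tau / 0.1)   # illustrative kernel

    rng = np.random.default_rng(0)
    stimulus = rng.standard_normal(5000)       # some stimulus s(t)

    r0 = 20.0                                  # background firing rate in Hz (assumed)
    # Discrete causal convolution approximating the integral over the kernel.
    r_est = r0 + dt * np.convolve(stimulus, D, mode="full")[: len(stimulus)]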

4

5 White noise stimulus
White noise is used as a stimulus to measure the spike-triggered average response, as in the electric fish experiment.
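A sketch of the reverse-correlation recipe in Python: with a Gaussian white-noise stimulus, the spike-triggered average of the stimulus is proportional to the optimal linear kernel D. The toy spike train below is generated from a known kernel only so the example is self-contained; all names and parameter values are illustrative.

    import numpy as np

    rng = np.random.default_rng(1)
    dt = 0.002                                 # bin size in seconds (assumed)
    n_bins = 200_000
    stimulus = rng.standard_normal(n_bins)     # unit-variance approximate white noise

    # Toy ground-truth kernel, used only to generate a testable spike train.
    tau = np.arange(0.0, 0.2, dt)
    true_D = np.exp(-tau / 0.03) * np.cos(2 * np.pi * tau / 0.08)
    drive = dt * np.convolve(stimulus, true_D, mode="full")[:n_bins]
    rate = np.clip(20.0 + 1000.0 * drive, 0.0, None)   # rectified rate in Hz
    spikes = rng.random(n_bins) < rate * dt            # Bernoulli approx. of Poisson spiking

    # Spike-triggered average: mean stimulus segment preceding each spike,
    # indexed so sta[k] is the average stimulus at lag k*dt before a spike.
    n_lags = len(tau)
    spike_bins = np.nonzero(spikes)[0]
    spike_bins = spike_bins[spike_bins >= n_lags]
    sta = np.mean([stimulus[t - n_lags + 1 : t + 1][::-1] for t in spike_bins], axis=0)
    # For white noise, sta is proportional to the kernel, so it should resemble true_D.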

6 Fourier transform
In practice, only approximately white noise signals can be generated, with a flat spectrum up to a cut-off frequency.
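A sketch of one way to generate such a stimulus: take Gaussian noise and zero its Fourier components above the cut-off, giving a spectrum that is flat (on average) below the cut-off and zero above it. The sampling rate and cut-off frequency are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(2)
    fs = 500.0                                 # sampling rate in Hz (assumed)
    n = 10_000
    cutoff = 100.0                             # cut-off frequency in Hz (assumed)

    white = rng.standard_normal(n)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    spectrum = np.fft.rfft(white)
    spectrum[freqs > cutoff] = 0.0             # remove everything above the cut-off
    band_limited = np.fft.irfft(spectrum, n)

    # Check: the power spectrum is approximately flat up to the cut-off, zero beyond.
    power = np.abs(np.fft.rfft(band_limited)) ** 2 / n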

7 H1 neuron in visual system of blowfly
A: the stimulus is a velocity profile. B: the response of the H1 neuron in the fly visual system. C: the estimate $r_{est}(t)$ obtained with the linear kernel D(τ) (solid line) and the actual firing rate r(t) agree when the rate varies slowly. D(τ) is constructed using white noise.

8 Deviation from linearity

9

10 Early visual system: Retina
Five cell types:
Rods and cones: photo-transduction of light into an electrical signal.
Bipolar cells interact laterally through horizontal cells; this local computation uses graded potentials, not action potentials.
Retinal ganglion cells fire action potentials and are coupled by amacrine cells.
Note that G_1 shows an OFF response and G_2 an ON response.
A: anatomy of the retina of the dog (Cajal, 1911). B: recordings in the mudpuppy (an amphibian); the stimulus is a 1 s flash of light.

11 Pathway from retina via LGN to V1
Lateral geniculate nucleus (LGN) cells receive input from retinal ganglion cells of both eyes. The two LGNs each represent both eyes but different parts of the visual world. Neurons in the retina, LGN and visual cortex have receptive fields: a neuron fires only in response to higher or lower illumination within its receptive field, and its response depends only indirectly on illumination outside the receptive field.

12 Simple and complex cells
Cells in the retina, LGN and V1 are classified as simple or complex.
Simple cells: modeled as a linear filter.
Complex cells: show invariance to spatial position within the receptive field; poorly described by a linear model.

13 Retinotopic map
Neighboring image points are mapped onto neighboring neurons in V1. The visual world is centered on the fixation point; the left/right visual world maps to the right/left V1. Distances on the display screen (eccentricity) are measured in degrees of visual angle, obtained by dividing by the distance to the eye.

14 Retinotopic map
A: The pattern on the cortex was produced by imaging a radioactive analogue of glucose that was taken up by active neurons while a monkey viewed a visual image consisting of concentric circles and radial lines. Parameters: $\lambda = 12\ \mathrm{mm}$, $\epsilon_0 = 1^\circ$.
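These parameter values correspond to the log-polar map used in Dayan and Abbott; assuming that convention (eccentricity $\epsilon$, azimuth $a$ in radians), the cortical coordinates are approximately

$$ x = \lambda \ln\!\left(1 + \frac{\epsilon}{\epsilon_0}\right), \qquad y = -\frac{\lambda\,\epsilon\, a}{\epsilon_0 + \epsilon}, \qquad \lambda = 12\ \mathrm{mm},\ \ \epsilon_0 = 1^\circ, $$

so that, once $\epsilon \gg \epsilon_0$, multiplying the eccentricity by a fixed factor shifts the cortical position by a fixed distance.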

15 Retinotopic map

16 Visual stimuli

17 Nyquist Frequency
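Stated generally (this is standard sampling theory, not text from the slide): a stimulus sampled with temporal step $\Delta t$ or pixel spacing $\Delta x$ can only represent frequencies up to the Nyquist limit,

$$ f_{\mathrm{Nyq}} = \frac{1}{2\,\Delta t}, \qquad k_{\mathrm{Nyq}} = \frac{1}{2\,\Delta x}; $$

components above these frequencies are aliased onto lower ones, so grating stimuli and reconstructed receptive fields are only meaningful below the Nyquist frequency.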

18 Spatial receptive fields

19 V1 spatial receptive fields

20 Gabor functions
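A minimal sketch of a Gabor spatial receptive field, a Gaussian envelope times a cosine grating, using the parameterization common in this context ($\sigma_x$, $\sigma_y$ for the envelope, $k$ for the preferred spatial frequency, $\phi$ for the phase); all numerical values are illustrative.

    import numpy as np

    def gabor(x, y, sigma_x=1.0, sigma_y=2.0, k=2.0, phi=0.0):
        """Gabor receptive field D_s(x, y): Gaussian envelope times a cosine grating."""
        envelope = np.exp(-x**2 / (2 * sigma_x**2) - y**2 / (2 * sigma_y**2))
        envelope /= 2 * np.pi * sigma_x * sigma_y
        return envelope * np.cos(k * x - phi)

    # Evaluate on a grid; positive values are ON subregions, negative values OFF.
    xs = np.linspace(-5.0, 5.0, 101)
    X, Y = np.meshgrid(xs, xs)
    Ds = gabor(X, Y)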

21 Response to grating

22 Temporal receptive fields
Space-time evolution of a cat V1 receptive field: the ON/OFF boundary changes to an OFF/ON boundary over time, but the locations of the extrema do not change with time, so the kernel is separable: $D(x,y,t) = D_s(x,y)\,D_t(t)$.
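A sketch of such a separable kernel: the outer product of a one-dimensional spatial profile and a biphasic temporal filter. The temporal form below, $\alpha e^{-\alpha\tau}[(\alpha\tau)^5/5! - (\alpha\tau)^7/7!]$, is a common textbook choice; all parameters are illustrative.

    import numpy as np
    from math import factorial

    def D_t(tau, alpha=1.0 / 0.015):
        """Biphasic temporal kernel (positive then negative lobe); tau in seconds."""
        at = alpha * tau
        return alpha * np.exp(-at) * (at**5 / factorial(5) - at**7 / factorial(7))

    def D_s(x, sigma=0.5, k=2.0 * np.pi, phi=0.0):
        """One-dimensional Gabor-like spatial profile; x in degrees."""
        return np.exp(-x**2 / (2 * sigma**2)) * np.cos(k * x - phi)

    x = np.linspace(-2.0, 2.0, 81)
    tau = np.linspace(0.0, 0.3, 121)
    # Separable kernel D(x, t) = D_s(x) * D_t(t). Because it factorizes, the zero
    # crossings (ON/OFF boundaries) stay at fixed x while their sign flips in time,
    # which is exactly the behaviour described above.
    D = np.outer(D_s(x), D_t(tau))             # shape (len(x), len(tau))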

23 Temporal receptive fields

24 Space-time receptive fields

25 Space-time receptive fields

26 Space-time receptive fields

27 Direction selective cells

28 Complex cells
Early stage of invariant object recognition

29 Example of non-separable receptive fields LGN X cell

30 Example of non-separable receptive fields LGN X cell
Cat LGN X cell

31 Comparison model and data

32 Constructing V1 receptive fields
Oriented V1 spatial receptive fields can be constructed from LGN center surround neurons
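A sketch of this classic Hubel-Wiesel construction: summing difference-of-Gaussians (ON-center, OFF-surround) receptive fields whose centers lie along a line produces an elongated, oriented ON subregion flanked by OFF regions. All sizes and positions are illustrative.

    import numpy as np

    def center_surround(x, y, x0, y0, sigma_c=0.3, sigma_s=0.9):
        """ON-center OFF-surround (difference of Gaussians) receptive field at (x0, y0)."""
        r2 = (x - x0) ** 2 + (y - y0) ** 2
        center = np.exp(-r2 / (2 * sigma_c**2)) / (2 * np.pi * sigma_c**2)
        surround = np.exp(-r2 / (2 * sigma_s**2)) / (2 * np.pi * sigma_s**2)
        return center - surround

    xs = np.linspace(-3.0, 3.0, 121)
    X, Y = np.meshgrid(xs, xs)

    # LGN-like centers placed along the vertical axis; their sum is a vertically
    # oriented, simple-cell-like receptive field.
    offsets = np.linspace(-1.5, 1.5, 5)
    v1_rf = sum(center_surround(X, Y, 0.0, y0) for y0 in offsets)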

33

34 Stochastic neural networks
The top two layers form an associative memory whose energy landscape models the low-dimensional manifolds of the digits; the energy valleys have names (the label units). Architecture: a 28 x 28 pixel image feeds two hidden layers of 500 neurons each, topped by 2000 top-level neurons connected to 10 label neurons. The model learns to generate combinations of labels and images. To perform recognition we start with a neutral state of the label units and do an up-pass from the image, followed by a few iterations of the top-level associative memory. (Hinton)
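A minimal sketch of the alternating Gibbs-sampling step that such a top-level associative memory (a restricted Boltzmann machine) iterates during recognition and generation. The weights here are untrained random placeholders, and the layer sizes only loosely follow the slide (500 features plus 10 labels as the visible layer, 2000 top-level units).

    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    n_vis, n_hid = 510, 2000                         # placeholder sizes (500 features + 10 labels)
    W = 0.01 * rng.standard_normal((n_vis, n_hid))   # untrained random weights (placeholder)
    b_vis = np.zeros(n_vis)
    b_hid = np.zeros(n_hid)

    def gibbs_step(v):
        """One alternating update: sample hidden given visible, then visible given hidden."""
        h = (rng.random(n_hid) < sigmoid(v @ W + b_hid)).astype(float)
        v_new = (rng.random(n_vis) < sigmoid(h @ W.T + b_vis)).astype(float)
        return v_new, h

    v = (rng.random(n_vis) < 0.5).astype(float)      # arbitrary initial visible state
    for _ in range(10):                              # "a few iterations" of the memory
        v, h = gibbs_step(v)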

35 Samples generated by letting the associative memory run with one label clamped using Gibbs sampling
Hinton

36 Examples of correctly recognized handwritten digits that the neural network had never seen before
Hinton

37 How well does it discriminate on the MNIST test set with no extra information about geometric distortions?
Generative model based on RBMs: %
Support Vector Machine (Decoste et al.): %
Backprop with 1000 hidden units (Platt): ~1.6%
Backprop with >300 hidden units: ~1.6%
K-Nearest Neighbor: ~3.3%
See LeCun et al. for more results.
It is better than backprop and much more neurally plausible, because the neurons only need to send one kind of signal and the teacher can be another sensory input. Hinton

38 Summary
Linear filters: white noise stimulus for optimal estimation.
Visual system (retina, LGN, V1); visual stimuli.
V1: spatial receptive fields, temporal receptive fields, space-time receptive fields, non-separable receptive fields, direction selectivity.
LGN and retina: non-separable ON-center OFF-surround cells.
V1 direction-selective simple cells as a sum of LGN cells.

39 Exercise 2.3
Based on Kara, Reinagel and Reid (Neuron, 2000): simultaneous single-unit recordings of retinal ganglion cells, LGN relay cells and simple cells in primary visual cortex. Spike-count variability (Fano factor) is less than Poisson, roughly doubling from RGC to LGN and from LGN to cortex. The data are explained by a Poisson process with a refractory period (Figs. 1, 2, 3).
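A sketch of the model the exercise refers to: spikes whose interspike intervals are an absolute refractory period plus an exponential waiting time (a Poisson process with refractory period). Counting spikes in fixed windows then gives a Fano factor below 1, i.e. sub-Poisson variability. The rate, refractory period and window length are illustrative choices.

    import numpy as np

    rng = np.random.default_rng(3)

    def refractory_poisson(rate, t_max, t_ref, rng):
        """Spike times up to t_max (s): each interspike interval is the refractory
        period t_ref (s) plus an exponential interval with mean 1/rate (s)."""
        spikes, t = [], 0.0
        while True:
            t += t_ref + rng.exponential(1.0 / rate)
            if t >= t_max:
                return np.array(spikes)
            spikes.append(t)

    t_max, window = 1000.0, 0.1
    spikes = refractory_poisson(rate=40.0, t_max=t_max, t_ref=0.005, rng=rng)

    # Fano factor: variance of the spike count in a window divided by its mean.
    counts = np.histogram(spikes, bins=np.arange(0.0, t_max + window, window))[0]
    fano = counts.var() / counts.mean()        # < 1 for the refractory model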

