1 Reading population codes: a neural implementation of ideal observers Sophie Deneve, Peter Latham, and Alexandre Pouget

2 Encoding and decoding (schematic): stimulus (s) → [neurons encode] → response (r) → [decode]

3 Tuning curves: sensory and motor information is often encoded in "tuning curves"; each neuron gives a characteristic bell-shaped response centered on its preferred stimulus value.
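
A generic example of such a bell-shaped tuning curve (illustrative, not taken from the slides) is a Gaussian centered on the neuron's preferred stimulus:

$f(s) = r_{\max}\, e^{-(s - s_{\mathrm{pref}})^2 / 2\sigma_w^2}$

where $s_{\mathrm{pref}}$ is the preferred stimulus, $r_{\max}$ the peak rate, and $\sigma_w$ the tuning width.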

4 Difficulty of decoding: noisy neurons produce variable responses to the same stimulus, so the brain must estimate the encoded variables from the "noisy hill" of the population response.

5 Population vector estimator: assign each neuron a vector whose length is proportional to its activity and whose direction is the neuron's preferred direction, then sum the vectors.
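
A minimal sketch of this estimator in Python (all names and parameter values are illustrative, not from the paper):

```python
import numpy as np

# Population vector estimator: sum unit vectors along each neuron's
# preferred direction, weighted by its firing rate; the direction of
# the summed vector is the estimate.
n_neurons = 64
preferred = np.linspace(0, 2 * np.pi, n_neurons, endpoint=False)

def population_vector(rates):
    x = np.sum(rates * np.cos(preferred))
    y = np.sum(rates * np.sin(preferred))
    return np.arctan2(y, x) % (2 * np.pi)

# Noisy cosine-tuned responses to a stimulus at 90 degrees
true_dir = np.pi / 2
rates = np.maximum(0.0, 10 * np.cos(preferred - true_dir))
rates += np.random.randn(n_neurons)   # additive Gaussian noise
print(population_vector(rates))       # close to pi/2
```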

6 Population vector estimator: summing the vectors is equivalent to fitting a cosine function to the population response; the peak of the fitted cosine is the estimate of direction.
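
The equivalence can be stated compactly (a standard identity, added here for reference). Writing the vector sum as a complex number,

$\hat{\theta}_{PV} = \arg \sum_j r_j e^{i\theta_j} = \operatorname{argmax}_\theta \sum_j r_j \cos(\theta - \theta_j)$

since $\sum_j r_j \cos(\theta - \theta_j) = \mathrm{Re}\big[ e^{-i\theta} \sum_j r_j e^{i\theta_j} \big]$, which is maximized when $\theta$ equals the phase of the sum.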

7 How good is an estimator? Compare the variance of the estimate over repeated presentations to a theoretical lower bound; the maximum likelihood estimator attains the lowest achievable variance for a given amount of independent noise.
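
The lower bound in question is the Cramér-Rao bound (standard form, stated here for reference):

$\mathrm{Var}(\hat{\theta}) \ge \frac{1}{I_F(\theta)}$

where $I_F(\theta)$ is the Fisher information of the population response; an estimator that reaches this bound, as ML does asymptotically, is called efficient.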

8 Encoding and decoding (schematic, revisited): stimulus (s) → [neurons encode] → response (r) → [decode]

9 Maximum likelihood decoding: encoding specifies the likelihood P(r | s) of a response given the stimulus; the maximum likelihood estimator decodes by choosing the stimulus that maximizes this likelihood.
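
In the standard notation:

$\hat{s}_{ML} = \operatorname{argmax}_s P(r \mid s)$

For independent Gaussian noise of fixed variance, this amounts to finding the noise-free tuning-curve profile that best fits the observed "noisy hill" in the least-squares sense.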

10 Goal: a biological ML estimator. A recurrent neural network with broadly tuned units can achieve the ML estimate when the noise is independent of firing rate, and can approximate the ML estimate when the noise is activity-dependent.

11 General architecture: units are fully connected and arranged in frequency columns and orientation rows; the recurrent weights implement a 2-D Gaussian filter. [Figure: grid of units indexed by preferred orientation (P_θ) and preferred spatial frequency (P_λ).]
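
A sketch of such weights, assuming a 2-D Gaussian profile over differences in preferred orientation and preferred frequency with wrap-around distance (the grid size and widths are illustrative assumptions, not the paper's exact parameters):

```python
import numpy as np

N_ORI, N_FREQ = 20, 20  # assumed grid dimensions

def circ_dist(i, j, n):
    """Shortest wrap-around distance between grid indices on a ring."""
    d = np.abs(i - j)
    return np.minimum(d, n - d)

def gaussian_weights(sigma_ori=2.0, sigma_freq=2.0):
    """w[i, j, k, l]: weight from unit (k, l) onto unit (i, j)."""
    i = np.arange(N_ORI)[:, None, None, None]
    j = np.arange(N_FREQ)[None, :, None, None]
    k = np.arange(N_ORI)[None, None, :, None]
    l = np.arange(N_FREQ)[None, None, None, :]
    return np.exp(-circ_dist(i, k, N_ORI) ** 2 / (2 * sigma_ori ** 2)
                  - circ_dist(j, l, N_FREQ) ** 2 / (2 * sigma_freq ** 2))
```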

12 Input tuning curves: circular normal functions with some spontaneous activity; independent Gaussian noise is added to the inputs.
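
A circular-normal parameterization consistent with this description (the symbols $K$, $\nu$, $\sigma$, $\sigma_n$ are placeholders, not necessarily the paper's):

$f_j(\theta) = K \exp\!\big[ (\cos(\theta - \theta_j) - 1) / \sigma^2 \big] + \nu$

$a_j = f_j(\theta) + \epsilon_j, \qquad \epsilon_j \sim \mathcal{N}(0, \sigma_n^2)$

where $\nu$ is the spontaneous activity, $\sigma$ sets the tuning width, and $\epsilon_j$ is the added Gaussian noise.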

13 Unit updates and normalization: on each iteration, unit activities are convolved with the Gaussian filter (local excitation) and the responses are then normalized divisively (global inhibition).
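
A minimal sketch of one iteration, assuming a pointwise squaring nonlinearity between the filtering and the divisive step (the constants and the exact nonlinearity are assumptions, not taken from the slide):

```python
import numpy as np

def iterate(a, w, s=0.1, mu=0.002):
    """One network update: local excitation then global divisive inhibition.

    a : (N_ORI, N_FREQ) current unit activities
    w : (N_ORI, N_FREQ, N_ORI, N_FREQ) Gaussian weights (see above)
    """
    u = np.tensordot(w, a, axes=([2, 3], [0, 1]))  # convolve with the filter
    u = u ** 2                                     # pointwise nonlinearity (assumed)
    return u / (s + mu * u.sum())                  # global divisive normalization
```

Iterating this map lets the activity relax toward a smooth, stereotyped hill whose peak serves as the network's estimate.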

14 Results: the network converges rapidly; the speed of convergence is strongly dependent on contrast.

15 Results: the network's response curve is sigmoidal after 3 iterations and becomes a step function after 20. [Figure: network response curves compared with an actual neuron's.]

16 Noise effects: the width of the input tuning curve was held constant while the width of the output tuning curve was varied by adjusting the spatial extent of the weights, under two noise regimes. [Figure panels: flat noise, proportional noise.]
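
The two regimes can be written as follows (notation matches the tuning-curve symbols assumed above):

flat noise: $\epsilon_j \sim \mathcal{N}(0, \sigma_n^2)$, variance independent of the stimulus;

proportional noise: $\epsilon_j \sim \mathcal{N}(0, \alpha f_j(\theta))$, variance proportional to the mean response, as for Poisson-like variability.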

17 Analysis. Q1: Why does the optimal width depend on the noise? Q2: Why does the network perform better for flat noise? [Figure panels: flat noise, proportional noise.]

18 Analysis: the smallest achievable variance is the inverse of the Fisher information. For Gaussian noise with mean $\mathbf{f}(\theta)$ and covariance $R(\theta)$:

$\sigma^2_{\min} = \frac{1}{I_F(\theta)}, \qquad I_F(\theta) = \mathbf{f}'(\theta)^T R^{-1}(\theta)\, \mathbf{f}'(\theta) + \frac{1}{2} \mathrm{Tr}\!\left[ R'(\theta) R^{-1}(\theta)\, R'(\theta) R^{-1}(\theta) \right]$

where $R^{-1}$ is the inverse of the covariance matrix of the noise and $\mathbf{f}'(\theta)$ is the vector of derivatives of the input tuning curves with respect to $\theta$. The trace term is 0 when $R$ is independent of $\theta$ (flat noise).
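
A numerical sketch of this bound for the two noise regimes, using the circular-normal tuning curves assumed earlier (all parameter values illustrative):

```python
import numpy as np

n = 64
prefs = np.linspace(0, 2 * np.pi, n, endpoint=False)
K, sigma_t, nu = 20.0, 0.5, 0.5           # assumed tuning parameters

def f(theta):
    """Circular-normal tuning curves with spontaneous activity."""
    return K * np.exp((np.cos(theta - prefs) - 1) / sigma_t ** 2) + nu

def fprime(theta, h=1e-5):
    return (f(theta + h) - f(theta - h)) / (2 * h)

theta = 1.0
fp = fprime(theta)

# Flat noise: R = sigma_n^2 * I is independent of theta, so the
# trace term vanishes and I_F = f'^T f' / sigma_n^2.
sigma_n2 = 1.0
I_flat = fp @ fp / sigma_n2

# Proportional noise: R = diag(alpha * f(theta)) depends on theta,
# so the trace term (1/2) Tr[(R' R^-1)^2] contributes.
alpha = 1.0
R = alpha * f(theta)                      # diagonal of the covariance
Rp = alpha * fp                           # its derivative
I_prop = fp @ (fp / R) + 0.5 * np.sum((Rp / R) ** 2)

print("CR variance, flat noise:        ", 1 / I_flat)
print("CR variance, proportional noise:", 1 / I_prop)
```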

19 Summary: the network gives a good approximation of the optimal (ML) estimator; the optimal tuning curve width is determined by the ML analysis; the type of noise (flat vs. proportional) affects both the variance and the optimal tuning width.

