What is the language of single cells? What are the elementary symbols of the code? Most typically, we think about the response as a firing rate, r(t), or a modulated spiking probability, P(spike | s(t)). Two extremes of description: (1) a Poisson model, in which spikes are generated randomly with rate r(t); however, most spike trains are not Poisson (refractoriness, internal dynamics). (2) Fine temporal structure might be meaningful: consider spike patterns or "words", i.e. symbols made up of multiple spikes and the intervals between them. In retinal ganglion cells such patterns can carry both "when" and "how much".
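As a minimal sketch of the "words" idea, the snippet below bins a synthetic spike train into binary letters and tallies word frequencies. The bin size, word length, and spike train are all illustrative assumptions, not values from the lecture.

```python
import numpy as np

# Turn a spike train into binary "words" and count their frequencies.
rng = np.random.default_rng(0)
spike_times = np.sort(rng.uniform(0, 10.0, size=400))  # synthetic spikes over 10 s

dt = 0.002          # 2 ms bins (assumption)
word_len = 8        # 8 bins per word -> 16 ms words (assumption)
bins = np.arange(0, 10.0 + dt, dt)
counts = np.histogram(spike_times, bins)[0]
binary = (counts > 0).astype(int)

# Slice the binary train into non-overlapping words and tally them
n_words = len(binary) // word_len
words = binary[: n_words * word_len].reshape(n_words, word_len)
labels = ["".join(map(str, w)) for w in words]
unique, freq = np.unique(labels, return_counts=True)
for w, f in sorted(zip(unique, freq), key=lambda x: -x[1])[:5]:
    print(w, f / n_words)   # most common words and their probabilities
```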

Multiple spike symbols from the fly motion-sensitive neuron (figure): the spike-triggered average, the 2-spike-triggered average at 10 ms separation, and the 2-spike-triggered average at 5 ms separation.

Spike statistics. A stochastic process that generates a sequence of events is a point process. If the probability of an event at time t depends only on the preceding event, it is a renewal process. If all events are statistically independent, it is a Poisson process. Homogeneous Poisson: r(t) = r, independent of time, so the probability of seeing n spikes depends only on the length of the observation window T: P_T[n] = (rT)^n exp(-rT) / n!. Exercise: the mean of this distribution is rT, and the variance is also rT. Hence the Fano factor (variance/mean) = 1 for a Poisson process, and the coefficient of variation C_V (STD/mean of the interspike intervals) = 1. The interspike-interval distribution is P(T) = r exp(-rT).
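A quick numerical check of these statements, as a sketch with an assumed rate and counting window (not from the lecture):

```python
import numpy as np

# Simulate a homogeneous Poisson spike train and check Fano factor and C_V.
rng = np.random.default_rng(1)
rate, duration = 20.0, 200.0                     # Hz, seconds (assumptions)

# Exponential ISIs with mean 1/rate generate a homogeneous Poisson process
isis = rng.exponential(1.0 / rate, size=int(rate * duration * 2))
spikes = np.cumsum(isis)
spikes = spikes[spikes < duration]

# Spike counts in 1-second windows
counts = np.histogram(spikes, bins=np.arange(0, duration + 1, 1.0))[0]
fano = counts.var() / counts.mean()              # expect ~1
cv = isis.std() / isis.mean()                    # expect ~1
print(f"Fano factor ~ {fano:.2f}, C_V ~ {cv:.2f}")
```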

The Poisson model (homogeneous). Figure: probability of n spikes in time T as a function of n, for a given value of the product rate × T; the Poisson distribution approaches a Gaussian for large rT (here rT = 10).

How good is the Poisson model? Fano factor. Figure: spike-count variance versus mean for area MT neurons, with the data fit to variance = A × mean^B (fitted values of A and B shown).
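One way to obtain A and B is a linear fit in log-log coordinates. The sketch below uses synthetic spike-count statistics, since the MT data themselves are not reproduced here:

```python
import numpy as np

# Fit variance = A * mean**B by linear regression on log-transformed data.
rng = np.random.default_rng(2)
means = np.logspace(-1, 2, 30)                             # assumed mean counts
variances = 1.2 * means**1.1 * rng.lognormal(0, 0.1, 30)   # fake measurements

logB, logA = np.polyfit(np.log(means), np.log(variances), 1)
A, B = np.exp(logA), logB
print(f"A ~ {A:.2f}, B ~ {B:.2f}")   # a Poisson process predicts A = B = 1
```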

How good is the Poisson model? ISI analysis. Figure: the ISI distribution from an area MT neuron, compared with the ISI distribution generated by a Poisson model with a Gaussian refractory period.

How good is the Poisson model? C_V analysis. Figure: coefficients of variation for a set of V1 and MT neurons, compared with the predictions of a Poisson model and of a Poisson model with a refractory period.
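To see why a refractory period pulls C_V below 1, here is a small sketch with assumed rate and dead time (illustrative values only):

```python
import numpy as np

# C_V of a Poisson process with and without an absolute refractory period.
rng = np.random.default_rng(3)
rate, t_ref = 50.0, 0.002        # 50 Hz, 2 ms refractory period (assumptions)

isis_poisson = rng.exponential(1.0 / rate, size=100_000)
isis_refractory = isis_poisson + t_ref    # add an absolute dead time to each ISI

for name, isi in [("Poisson", isis_poisson), ("Poisson + refractory", isis_refractory)]:
    print(name, "C_V =", round(isi.std() / isi.mean(), 3))
```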

Modeling spike generation. Diagram: stimulus X(t) → spike-triggering stimulus feature f_1 → projection x_1 → decision function P(spike|x_1) → spike output Y(t). Decompose the neural computation into a linear stage and a nonlinear stage. Given a stimulus, when will the system spike? To what feature in the stimulus is the system sensitive? (Gerstner, spike response model; Aguera y Arcas et al., 2001, 2003; Keat et al., 2001.) Simple example: the integrate-and-fire neuron.
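A minimal leaky integrate-and-fire sketch, with assumed membrane parameters and a hypothetical noisy input current, to illustrate a simple spike-generation model:

```python
import numpy as np

# Leaky integrate-and-fire neuron: maps an input current into spike times.
dt, T = 0.0001, 1.0                                # 0.1 ms steps, 1 s simulation
tau, v_rest, v_thresh, v_reset, R = 0.02, -0.065, -0.050, -0.065, 1e7  # assumptions

rng = np.random.default_rng(4)
current = 2e-9 + 1e-9 * rng.standard_normal(int(T / dt))   # noisy input (amperes)

v, spikes = v_rest, []
for i, I in enumerate(current):
    v += dt / tau * (-(v - v_rest) + R * I)   # leaky integration of the input
    if v >= v_thresh:                         # threshold crossing -> emit a spike
        spikes.append(i * dt)
        v = v_reset
print(f"{len(spikes)} spikes, mean rate ~ {len(spikes) / T:.1f} Hz")
```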

Predicting the firing rate. Let's start with a rate response, r(t), and a stimulus, s(t). The optimal linear estimator convolves the stimulus with a kernel K and is chosen to be as close as possible to the measured rate. We want to solve for K: multiply by s(t − τ′) and integrate over t. Note that this produces terms which are simply correlation functions. Because the result is a convolution, Fourier transform it; we then have a straightforward algebraic equation for K(ω), and solving and inverse-transforming gives K(τ). The steps are written out below.
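The equations on this slide were lost in transcription; the following is a standard reconstruction of the optimal-linear-estimator derivation, consistent with the steps described above (time averages over the stimulus; overall duration factors cancel):

```latex
\begin{align*}
r_{\rm est}(t) &= \int d\tau\, K(\tau)\, s(t-\tau) \\
\int dt\, r(t)\, s(t-\tau') &= \int d\tau\, K(\tau)\int dt\, s(t-\tau)\, s(t-\tau') \\
C_{rs}(\tau') &= \int d\tau\, K(\tau)\, C_{ss}(\tau'-\tau) \\
\tilde C_{rs}(\omega) &= \tilde K(\omega)\,\tilde C_{ss}(\omega) \\
\tilde K(\omega) &= \frac{\tilde C_{rs}(\omega)}{\tilde C_{ss}(\omega)},
\qquad K(\tau) = \int \frac{d\omega}{2\pi}\, e^{i\omega\tau}\,\tilde K(\omega)
\end{align*}
```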

Predicting the firing rate. For white noise, the stimulus autocorrelation function is a delta function, C_ss(τ) = σ² δ(τ), so K(τ) is simply proportional to the cross-correlation C_rs(τ). Going back to the estimate r_est(t) = ∫ dτ K(τ) s(t − τ), the optimal kernel for a white-noise stimulus is K(τ) = C_rs(τ)/σ².

Modeling spike generation. Diagram: stimulus X(t) → spike-triggering stimulus feature f_1 → projection x_1 → decision function P(spike|x_1) → spike output Y(t). The decision function is P(spike|x_1). Derive it from data using Bayes' theorem: P(spike|x_1) = P(spike) P(x_1|spike) / P(x_1). P(x_1) is the prior: the distribution of all stimulus projections onto f_1. P(x_1|spike) is the spike-conditional ensemble: the distribution of projections onto f_1 given that there has been a spike. P(spike) is proportional to the mean firing rate.
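A sketch of this histogram-ratio estimate on synthetic data; the feature f_1, the "true" nonlinearity, and all parameters are illustrative assumptions:

```python
import numpy as np

# Estimate P(spike | x1) from histograms via Bayes' theorem:
# P(spike | x1) = P(spike) * P(x1 | spike) / P(x1).
rng = np.random.default_rng(5)
n = 200_000
stimulus = rng.standard_normal(n)

f1 = np.exp(-np.arange(50) / 10.0)           # assumed spike-triggering feature
f1 /= np.linalg.norm(f1)
x1 = np.convolve(stimulus, f1, mode="same")  # filtered stimulus (projection onto f1)

p_true = 0.5 / (1.0 + np.exp(-(x1 - 2.0) / 0.3))   # "true" nonlinearity (assumption)
spikes = rng.random(n) < p_true

edges = np.linspace(-4, 4, 41)
p_x, _ = np.histogram(x1, edges, density=True)               # prior P(x1)
p_x_spike, _ = np.histogram(x1[spikes], edges, density=True)  # P(x1 | spike)
p_spike = spikes.mean()                                       # ~ mean rate per bin

valid = p_x > 0
decision = p_spike * p_x_spike[valid] / p_x[valid]   # Bayes estimate per bin
print(np.round(decision[-8:], 3))   # should rise toward 0.5 for large x1
```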

Models of neural function. Diagram: stimulus X(t) → spike-triggering stimulus feature f_1 → projection x_1 → decision function P(spike|x_1) → spike output Y(t). What are the weaknesses of this model?

Reverse correlation: a geometric view. Figure: the Gaussian prior stimulus distribution, the spike-conditional distribution, the STA, and the spike-triggered covariance.

Dimensionality reduction. The relevant covariance matrix is simply the difference between the spike-conditional and prior stimulus covariances, ΔC_ij = ⟨x_i x_j⟩_spike − ⟨x_i x_j⟩_prior. Properties: if the computation is low-dimensional, there will be only a few eigenvalues significantly different from zero; the number of such eigenvalues is the relevant dimensionality; and the corresponding eigenvectors span the subspace of the relevant features. (Bialek et al., 1997.)
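A sketch of spike-triggered covariance (STC) analysis on synthetic data: a model neuron built to be sensitive to two features should yield two outlying eigenvalues of the covariance difference. The features, stimulus, and spike-generation rule here are all illustrative assumptions:

```python
import numpy as np

# Spike-triggered covariance on a synthetic two-feature (energy-like) model.
rng = np.random.default_rng(6)
n, d = 100_000, 40                      # samples, stimulus dimensionality
X = rng.standard_normal((n, d))         # white-noise stimulus segments

f1 = np.sin(np.linspace(0, np.pi, d)); f1 /= np.linalg.norm(f1)
f2 = np.sin(np.linspace(0, 2 * np.pi, d)); f2 /= np.linalg.norm(f2)
drive = (X @ f1) ** 2 + (X @ f2) ** 2   # quadratic sensitivity to f1 and f2
spikes = rng.random(n) < 0.05 * drive / drive.mean()

C_prior = np.cov(X, rowvar=False)
C_spike = np.cov(X[spikes], rowvar=False)
eigvals, eigvecs = np.linalg.eigh(C_spike - C_prior)

print(np.round(eigvals[-4:], 2))        # the top two eigenvalues stand out;
                                        # their eigenvectors span {f1, f2}
```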

Functional models of neural function. Diagram: stimulus X(t) → a single spike-triggering stimulus feature f_1 → projection x_1 → decision function P(spike|x_1) → spike output Y(t).

Functional models of neural function. Diagram: stimulus X(t) → several spike-triggering stimulus features f_1, f_2, f_3 → projections x_1, x_2, x_3 → multidimensional decision function → spike output Y(t).

Functional models of neural function. Diagram: stimulus X(t) → spike-triggering stimulus features f_1, f_2, f_3 → projections x_1, x_2, x_3 → multidimensional decision function → spike output Y(t), with spike-history feedback; the question marks indicate the components of the model that must be identified from data.

Covariance analysis. Let's develop some intuition for how this works using the Keat model (Keat, Reinagel, Reid and Meister, "Predicting every spike," Neuron, 2001). Spiking is controlled by a single filter, and spikes generally happen on an upward threshold crossing of the filtered stimulus, so we expect 2 modes: the filter F(t) and its time derivative F'(t).
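The following is not the Keat model itself but a minimal threshold-crossing caricature, with assumed filter shape and threshold, intended only to illustrate why two covariance modes (related to F and F') appear:

```python
import numpy as np

# Threshold-crossing model + spike-triggered covariance on white noise.
rng = np.random.default_rng(7)
n, d = 400_000, 60
stim = rng.standard_normal(n)

t = np.arange(d)
F = (t / 10.0) * np.exp(-t / 10.0)      # assumed filter shape
F /= np.linalg.norm(F)

g = np.convolve(stim, F)[:n]            # filtered stimulus
theta = 2.0                             # threshold (assumption)
# spike on each upward crossing of the threshold
spike_idx = np.where((g[1:] >= theta) & (g[:-1] < theta))[0] + 1
spike_idx = spike_idx[spike_idx >= d]

# stimulus segments preceding each spike
segs = np.stack([stim[i - d: i] for i in spike_idx])
C_prior = np.eye(d)                     # unit-variance white noise: prior = I
C_spike = np.cov(segs, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(C_spike - C_prior)
print("extreme eigenvalues:", np.round(eigvals[:2], 2), np.round(eigvals[-2:], 2))
# the outlying eigenvalues correspond to modes built from F and its derivative
```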

Covariance analysis

Let's try some real neurons: rat somatosensory cortex (Ras Petersen, Mathew Diamond, SISSA; SfN 2003). Recordings from single units in barrel cortex.

Covariance analysis. Figure: the spike-triggered average (normalised velocity vs. pre-spike time in ms).

Covariance analysis Is the neuron simply not very responsive to a white noise stimulus?

Covariance analysis. Figure: the prior covariance, the spike-triggered covariance, and their difference.

Covariance analysis. Figure: the eigenspectrum of the covariance difference and the leading modes (velocity vs. pre-spike time in ms).

Covariance analysis. Figure: input/output relations with respect to the first two filters, each alone and combined in quadrature.

Covariance analysis. How about the other modes? Figure: the next pair of modes with positive eigenvalues and the pair with negative eigenvalues (velocity in arbitrary units vs. pre-spike time in ms).

Input/output relations for the negative pair: the firing rate decreases with increasing projection, so these are suppressive modes (Simoncelli et al.).