Presentation transcript:

Spike Train Statistics
Sabri, IPM

Review of spike trains
- Extracting information from spike trains
- Noisy environment:
  - in vitro
  - in vivo
  - measurement noise
  - unknown inputs and states
- What kind of code?
  - rate coding: information carried by the firing rate (bunches of spikes)
  - temporal coding: information carried by individual spike times
[Dayan and Abbott, 2001]

Non-parametric Methods
[Figure: raster plot of a recording over repeated trials, aligned to stimulus onset]
- Firing rate estimation methods:
  - PSTH (peri-stimulus time histogram)
  - kernel density estimation
- The information is in how the firing rate differs over time (see the sketch below)
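As a rough illustration of these two estimators, here is a minimal Python/NumPy sketch (not part of the original slides); the function names, the pooled spike times, and all parameter values are illustrative assumptions.

```python
import numpy as np

def psth(spike_times, n_trials, t_start, t_stop, bin_width):
    """Trial-averaged firing rate (Hz) in fixed bins (peri-stimulus time histogram)."""
    edges = np.arange(t_start, t_stop + bin_width, bin_width)
    counts, _ = np.histogram(spike_times, bins=edges)
    return edges[:-1], counts / (n_trials * bin_width)

def kernel_rate(spike_times, n_trials, t_eval, sigma):
    """Firing rate (Hz) from a Gaussian kernel of width sigma centred on each spike."""
    t_eval = np.asarray(t_eval)[:, None]           # shape (time points, 1)
    spikes = np.asarray(spike_times)[None, :]      # shape (1, spikes)
    kernel = np.exp(-0.5 * ((t_eval - spikes) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    return kernel.sum(axis=1) / n_trials

# Hypothetical spike times (s) pooled over 10 repeated trials
rng = np.random.default_rng(0)
pooled_spikes = np.sort(rng.uniform(0.0, 2.0, size=300))
t, rate = psth(pooled_spikes, n_trials=10, t_start=0.0, t_stop=2.0, bin_width=0.05)
smooth_rate = kernel_rate(pooled_spikes, n_trials=10, t_eval=t, sigma=0.05)
```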

Parametric Methods
[Figure: raster plots of a recording over repeated trials, aligned to stimulus onset, for two stimuli; the model parameters take two different sets of values for the two raster plots]
- Parameter estimation methods:
  - ML: maximum likelihood
  - MAP: maximum a posteriori
  - EM: expectation-maximization
(A minimal estimation sketch follows.)
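For the simplest parametric case, a homogeneous Poisson model of the spike count, the ML and MAP estimates of the rate have closed forms. The sketch below (not from the slides) assumes NumPy; the counts and the Gamma-prior hyperparameters are made up for illustration.

```python
import numpy as np

# Hypothetical spike counts per trial in a 1 s window
spike_counts = np.array([12, 9, 15, 11, 13])
K, T = len(spike_counts), 1.0

# Maximum likelihood: the Poisson log-likelihood sum_i [n_i log(rT) - rT - log(n_i!)]
# is maximized at r_ML = total spikes / total observation time.
r_ml = spike_counts.sum() / (K * T)

# MAP with a conjugate Gamma(alpha, beta) prior on the rate (hyperparameters illustrative):
# the posterior is Gamma(alpha + sum(n), beta + K*T), and its mode gives r_MAP.
alpha, beta = 2.0, 0.1
r_map = (alpha + spike_counts.sum() - 1.0) / (beta + K * T)

print(f"r_ML = {r_ml:.2f} Hz, r_MAP = {r_map:.2f} Hz")
```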

Models based on distributions: definitions & symbols
- Fitting distributions to spike trains:
  - assign a probability to every sequence of spikes that can be evoked by the stimulus
  - P[...]: probability of an event (e.g. a single spike)
  - p(...): probability density function, so that P = p Δt for a small interval Δt
  - spike times: t_1, t_2, ..., t_n
  - joint probability of n spikes at specified times: P[t_1, ..., t_n] = p(t_1, ..., t_n) (Δt)^n

Discrete random processes
- Point processes: the probability of an event may depend on the entire history of preceding events
- Renewal processes: the dependence extends only to the immediately preceding event
- Poisson processes: no dependence at all on preceding events
[Figure: timeline of event times ..., t_{i-1}, t_i, ...]

Firing rate
- r(t) Δt: the probability of firing a single spike in a small interval Δt around t_i
- The firing rate alone is generally not sufficient to predict the probability of a spike sequence
- If the probability of generating a spike is independent of the presence or timing of other spikes, the firing rate is all we need to compute the probabilities of all possible spike sequences
[Figure: raster plot over repeated trials]
- Homogeneous distributions: the firing rate is taken to be constant over time
- Inhomogeneous distributions: the firing rate is taken to be time dependent

Homogeneous Poisson Process
- Poisson: each event is independent of the others
- Homogeneous: the probability of firing is constant over the period [0, T]
- Probability of a particular sequence of n spikes at times t_1, ..., t_n:
  P[t_1, ..., t_n] = n! P_T[n] (Δt / T)^n
- Probability of n events in [0, T]:
  P_T[n] = ((rT)^n / n!) exp(-rT)
[Figure: Poisson spike-count distribution P_T[n] for rT = 10]
[Dayan and Abbott, 2001]
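A small simulation sketch (assumed, not from the slides): generate homogeneous Poisson spike trains and check the spike-count formula P_T[n] numerically. The rate, window, and trial numbers are arbitrary.

```python
import numpy as np
from math import factorial

rng = np.random.default_rng(1)
r, T, n_trials = 10.0, 1.0, 5000          # rate (Hz), window (s), trials

# One way to simulate: draw the spike count from Poisson(rT), then place the
# spikes uniformly and independently in [0, T].
counts = rng.poisson(r * T, size=n_trials)
trains = [np.sort(rng.uniform(0.0, T, size=n)) for n in counts]

# Compare the empirical count distribution with P_T[n] = (rT)^n exp(-rT) / n!
n_vals = np.arange(counts.max() + 1)
empirical = np.bincount(counts, minlength=len(n_vals)) / n_trials
theoretical = np.array([(r * T) ** n * np.exp(-r * T) / factorial(int(n)) for n in n_vals])
print(np.abs(empirical - theoretical).max())   # small for large n_trials
```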

Fano Factor
- A check for validating a fitted distribution
- The ratio of the variance to the mean of the spike count: F = sigma_n^2 / <n>
- For the homogeneous Poisson model, F = 1 (see the sketch below)
[Figure: Fano factors of MT neurons in an alert macaque monkey responding to moving visual images; spike counts taken over a 256 ms counting period, 94 cells recorded under a variety of stimulus conditions]
[Dayan and Abbott, 2001]
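A minimal sketch of the corresponding computation, assuming NumPy; the trial counts are synthetic and only illustrate that Poisson-distributed counts give F close to 1.

```python
import numpy as np

def fano_factor(spike_counts):
    """Variance-to-mean ratio of spike counts across trials (equals 1 for a Poisson process)."""
    counts = np.asarray(spike_counts, dtype=float)
    return counts.var(ddof=1) / counts.mean()

# Hypothetical counts in a 256 ms counting window over repeated trials
counts = np.random.default_rng(2).poisson(5.0, size=94)
print(fano_factor(counts))   # close to 1 for Poisson-distributed counts
```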

Interspike Interval (ISI) Distribution
- A check for validating a fitted distribution
- The probability density of the time intervals between adjacent spikes, tau = t_{i+1} - t_i
- For the homogeneous Poisson model: p(tau) = r exp(-r tau)
[Figure: ISI histograms for an MT neuron and for a Poisson model with a stochastic refractory period]
[Dayan and Abbott, 2001]
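A short numerical check (illustrative, not from the slides): draw exponential ISIs at rate r and compare their histogram with p(tau) = r exp(-r tau).

```python
import numpy as np

rng = np.random.default_rng(3)
r = 20.0                                   # firing rate (Hz)

# For a homogeneous Poisson process the ISIs are i.i.d. exponential with mean 1/r.
isis = rng.exponential(1.0 / r, size=100_000)

# Empirical ISI density vs. the exponential prediction at the bin centres
edges = np.linspace(0.0, 0.3, 61)
centres = 0.5 * (edges[:-1] + edges[1:])
empirical, _ = np.histogram(isis, bins=edges, density=True)
predicted = r * np.exp(-r * centres)
print(isis.mean(), 1.0 / r)                # sample mean ISI vs. 1/r
```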

Coefficient of Variation
- A check for validating a fitted distribution
- For the ISI distribution: C_V = sigma_tau / <tau>
- For the homogeneous Poisson process: C_V = 1
- For any renewal process, the Fano factor over long time intervals approaches C_V^2
- C_V = 1 is therefore a necessary but not sufficient condition for identifying a Poisson spike train
[Figure: coefficients of variation for V1 and MT neurons compared with a Poisson model with a refractory period]
[Dayan and Abbott, 2001]
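A minimal sketch for computing C_V from a spike train, assuming NumPy; the Poisson and gamma trains below are synthetic and just illustrate C_V close to 1 versus C_V close to 1/sqrt(k).

```python
import numpy as np

def cv_isi(spike_times):
    """Coefficient of variation of the interspike intervals (C_V = 1 for a Poisson process)."""
    isis = np.diff(np.sort(np.asarray(spike_times, dtype=float)))
    return isis.std(ddof=1) / isis.mean()

rng = np.random.default_rng(4)
poisson_train = np.cumsum(rng.exponential(1 / 20.0, size=10_000))
gamma_train = np.cumsum(rng.gamma(shape=4.0, scale=1 / 80.0, size=10_000))  # same mean rate
print(cv_isi(poisson_train))   # ~1.0
print(cv_isi(gamma_train))     # ~0.5, i.e. 1/sqrt(k) for a gamma process of order k = 4
```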

Renewal Processes
- For Poisson processes, the probability of a spike in a small interval depends only on the rate: P[spike in (t, t + Δt)] = r(t) Δt
- For renewal processes: P[spike in (t, t + Δt)] = H(t - t_0) Δt, in which t_0 is the time of the last spike and H is the hazard function
- With these definitions the ISI distribution is: p(tau) = H(tau) exp(-∫_0^tau H(s) ds)
- Commonly used renewal processes (densities given up to normalization, in standard parameterizations):
  - Gamma process (the most commonly used non-Poisson process): p(tau) ∝ tau^(k-1) exp(-tau/theta)
  - Log-normal process: p(tau) ∝ (1/tau) exp(-(ln tau - mu)^2 / (2 sigma^2))
  - Inverse Gaussian process: p(tau) ∝ tau^(-3/2) exp(-lambda (tau - mu)^2 / (2 mu^2 tau))
(A simulation sketch follows.)
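A simulation sketch under the renewal definition above: draw i.i.d. ISIs from each of the three families and accumulate them into spike times. All distribution parameters are illustrative assumptions, not values from the cited figures.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 50_000

# A renewal process is defined by i.i.d. ISIs; cumulative sums give the spike times.
# Parameters chosen so the mean ISI is roughly 40 ms in each case.
isi_gamma    = rng.gamma(shape=4.0, scale=0.01, size=n)              # gamma (order 4)
isi_lognorm  = rng.lognormal(mean=np.log(0.04), sigma=0.5, size=n)   # log-normal
isi_invgauss = rng.wald(mean=0.04, scale=0.2, size=n)                # inverse Gaussian (Wald)

for name, isi in [("gamma", isi_gamma), ("log-normal", isi_lognorm), ("inverse Gaussian", isi_invgauss)]:
    spike_times = np.cumsum(isi)
    cv = isi.std(ddof=1) / isi.mean()
    print(f"{name:16s}  mean ISI = {isi.mean() * 1e3:5.1f} ms   CV = {cv:.2f}")
```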

[Figure: ISI distributions of renewal processes; van Vreeswijk, 2010]

Gamma distribution fit to the spiking activity of a single mushroom-body alpha-lobe extrinsic neuron of the honeybee, in response to N = 66 repeated stimulations with the same odor [Meier et al., Neural Networks, 2008]

Renewal process fits [Barbieri et al., 2001, J. Neurosci. Methods]
[Figure: spike train from rat CA1 hippocampal pyramidal neurons recorded while the animal executed a behavioral task, fit with inhomogeneous Poisson, inhomogeneous gamma, and inhomogeneous inverse Gaussian models]

Spike train models with memory
- Biophysical features that might be important:
  - Bursting: a short ISI is more probable after a short ISI
  - Adaptation: a long ISI is more probable after a short ISI
- Some examples:
  - Hidden Markov processes: the neuron can be in one of N states; each state has its own spiking distribution and its own transition probabilities to the next state (see the sketch below)
  - Processes with memory for the N most recent ISIs
  - Processes with adaptation
  - Doubly stochastic processes
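As a toy illustration of the hidden-Markov idea, here is a sketch (assumed, not from the slides) of a two-state rate model whose ISI statistics show burst-like behaviour; the states, rates, and transition probabilities are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hidden state (0 = tonic, 1 = burst) switches with fixed probabilities at each 1 ms step;
# a spike is drawn as a Bernoulli event with the current state's rate. All numbers illustrative.
rates = np.array([5.0, 80.0])            # Hz in each state
trans = np.array([[0.995, 0.005],        # row s: P(next state | current state s)
                  [0.050, 0.950]])
dt, n_steps = 1e-3, 100_000

state, spikes = 0, np.zeros(n_steps, dtype=bool)
for t in range(n_steps):
    spikes[t] = rng.random() < rates[state] * dt
    state = rng.choice(2, p=trans[state])

isis = np.diff(np.flatnonzero(spikes)) * dt
print("CV of ISIs:", isis.std(ddof=1) / isis.mean())   # > 1, a signature of bursting
```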

Take-home messages
- One class of parametric interpretations of neural data is fitting point processes
- Point processes are categorized by their dependence on history (memory):
  - Poisson processes: no memory
  - Renewal processes: dependence on the last event only (here, the last spike); can capture refractory-period effects
  - General point processes: dependence on more of the history; can capture bursting and adaptation
- Statistics to consider:
  - Fano factor
  - Coefficient of variation
  - Interspike interval distribution

Spike train autocorrelation
- The distribution of time differences between any two spikes in the train
- Useful for detecting patterns in spike trains (such as oscillations)
[Figure: autocorrelation and cross-correlation in cat primary visual cortex]
- Cross-correlation between two neurons:
  - a peak at zero lag: synchronous firing
  - a peak at a non-zero lag: phase-locked firing
[Dayan and Abbott, 2001]
(A correlogram sketch follows.)
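A minimal correlogram sketch, assuming NumPy; the helper function and the oscillatory test train are illustrative, using a brute-force pairwise histogram rather than an optimized implementation.

```python
import numpy as np

def correlogram(times_a, times_b, bin_width, max_lag, auto=False):
    """Histogram of spike-time differences (t_b - t_a) within +/- max_lag.
    For an autocorrelogram, pass the same spike train twice with auto=True,
    which drops each spike paired with itself at zero lag."""
    edges = np.arange(-max_lag, max_lag + bin_width, bin_width)
    counts = np.zeros(len(edges) - 1)
    for t in times_a:
        diffs = times_b - t
        diffs = diffs[np.abs(diffs) <= max_lag]
        if auto:
            diffs = diffs[diffs != 0.0]
        counts += np.histogram(diffs, bins=edges)[0]
    return 0.5 * (edges[:-1] + edges[1:]), counts

# Example: autocorrelogram of a 25 Hz rate-modulated spike train
rng = np.random.default_rng(7)
t_grid = np.arange(0.0, 60.0, 1e-3)
rate = 20.0 * (1.0 + np.cos(2 * np.pi * 25.0 * t_grid))       # Hz
spikes = t_grid[rng.random(t_grid.size) < rate * 1e-3]
lags, ac = correlogram(spikes, spikes, bin_width=2e-3, max_lag=0.1, auto=True)
# 'ac' shows peaks at multiples of 40 ms, the oscillation period
```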

Neural Code
- In a single neuron:
  - Independent-spike code: the firing rate is enough (e.g. a Poisson process)
  - Correlation code: information is also carried by correlations between spike times (estimated to add no more than about 10% to the information in rate codes)
- In a population:
  - individual neurons
  - correlations between neurons add more information:
    - synchrony
    - rhythmic oscillations (e.g. place cells)
[Dayan and Abbott, 2001]