Kinetic Theory for the Dynamics of Fluctuation-Driven Neural Systems David W. McLaughlin Courant Institute & Center for Neural Science New York University.


Kinetic Theory for the Dynamics of Fluctuation-Driven Neural Systems David W. McLaughlin Courant Institute & Center for Neural Science New York University Toledo – June ‘06

Happy Birthday, Peter & Louis

Kinetic Theory for the Dynamics of Fluctuation-Driven Neural Systems In collaboration with: David Cai, Louis Tao, Michael Shelley, and Aaditya Rangan

Visual Pathway: Retina --> LGN --> V1 --> Beyond

Integrate and Fire Representation

τ ∂t v = -(v - V_R) - g (v - V_E)
σ ∂t g = -g + Σ_l f δ(t - t_l) + (Sa/N) Σ_{l,k} δ(t - t_l^k)

plus spike firing and reset: v(t_k) = 1; v(t = t_k^+) = 0
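As a concrete numerical sketch, the dynamics above can be simulated with a forward-Euler scheme. The reset V_R = 0 and threshold 1 follow the slide; the time scales, reversal potential V_E, synaptic strength f, and Poisson rate ν0 below are assumed values chosen only for demonstration, not those of the talk:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumed, not from the talk)
tau, sigma = 0.020, 0.005              # membrane and synaptic (AMPA) time scales, s
V_R, V_E, V_T = 0.0, 14.0 / 3.0, 1.0   # reset, excitatory reversal, spike threshold
f, nu0 = 0.01, 500.0                   # feedforward synaptic strength, Poisson rate (Hz)

dt, T = 1e-4, 1.0
v, g, spikes = 0.0, 0.0, []

for i in range(int(T / dt)):
    # sigma dg/dt = -g + f * sum_l delta(t - t_l): each arrival jumps g by f/sigma
    g += rng.poisson(nu0 * dt) * f / sigma
    g -= dt / sigma * g
    # tau dv/dt = -(v - V_R) - g (v - V_E)
    v += dt / tau * (-(v - V_R) - g * (v - V_E))
    if v >= V_T:                        # spike-threshold nonlinearity: record and reset
        spikes.append(i * dt)
        v = V_R

print("spike count:", len(spikes))
```

Each Poisson arrival increments the conductance by f/σ, so a single kick contributes total area f to the conductance time course, matching the δ-function forcing in the equation above.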

Nonlinearity from spike-threshold: whenever V(x,t) = 1, the neuron "fires", the spike time is recorded, and V(x,t) is reset to 0.

The “primary visual cortex (V1)” is a “layered structure”, with O(10,000) neurons per square mm, per layer.

O(10^4) neurons per mm^2. Map of Orientation Preference, with both regular & random patterns of neurons’ preferences

Lateral Connections and Orientation -- Tree Shrew. Bosking, Zhang, Schofield & Fitzpatrick, J. Neuroscience, 1997

Line-Motion-Illusion LMI

Coarse-Grained Asymptotic Representations Needed for “Scale-up” Larger lateral area Multiple layers

First, tile the cortical layer with coarse-grained (CG) patches

Coarse-Grained Reductions for V1: average firing rate models [Wilson & Cowan (’72); ….; Shelley & McLaughlin (’02)]. Average firing rate of an excitatory (inhibitory) neuron, within a coarse-grained patch located at x in the cortical layer: m_α(x,t), α = E,I

Cortical networks have very “noisy” dynamics: strong temporal fluctuations, on the synaptic timescale; fluctuation-driven spiking

Experimental Observation: Fluctuations in Orientation Tuning (cat data from Ferster’s lab). Ref: Anderson, Lampl, Gillespie & Ferster, Science (2000)

Fluctuation-driven spiking. Solid: average (over 72 cycles). Dashed: 10 temporal trajectories (very noisy dynamics, on the synaptic time scale)

To accurately and efficiently describe these networks requires that fluctuations be retained in a coarse-grained representation. “Pdf” representations – ρ_α(v,g; x,t), α = E,I – will retain fluctuations, but will not be very efficient numerically. Needed – a reduction of the pdf representation which retains 1. means & 2. variances. Kinetic Theory provides this representation. Ref: Cai, Tao, Shelley & McLaughlin, PNAS (2004)

Kinetic Theory begins from PDF representations ρ_α(v,g; x,t), α = E,I. Knight & Sirovich; Nykamp & Tranchina, Neural Comp (2001); Haskell, Nykamp & Tranchina, Network (2001)

For convenience of presentation, I’ll sketch the derivation for a single CG patch, with 200 excitatory Integrate & Fire neurons. First, replace the 200 neurons in this CG cell by an equivalent pdf representation; then derive the kinetic theory from the pdf representation. The results extend to interacting CG cells which include inhibition, as well as to different cell types such as “simple” & “complex” cells.

N excitatory neurons (within one CG cell); random coupling throughout the CG cell; AMPA synapses (with a short time scale σ):

τ ∂t v_i = -(v_i - V_R) - g_i (v_i - V_E)
σ ∂t g_i = -g_i + Σ_l f δ(t - t_l) + (Sa/N) Σ_{l,k} δ(t - t_l^k)

plus spike firing and reset: v_i(t_i^k) = 1; v_i(t = t_i^{k+}) = 0

N excitatory neurons (within one CG cell); random coupling throughout the CG cell; AMPA synapses (with time scale σ):

τ ∂t v_i = -(v_i - V_R) - g_i (v_i - V_E)
σ ∂t g_i = -g_i + Σ_l f δ(t - t_l) + (Sa/N) Σ_{l,k} δ(t - t_l^k)

ρ(g,v,t) ≡ N^{-1} Σ_{i=1,N} E{δ[v - v_i(t)] δ[g - g_i(t)]}, with the expectation “E” taken over the Poisson spike train {t_l}
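To make the definition concrete, here is a hedged sketch that estimates this pdf empirically, by simulating the N-neuron, all-to-all coupled network and histogramming the population over (v,g). All parameter values and the uniform initial condition are assumptions for illustration, and for simplicity a network spike is broadcast to every neuron, including the one that fired:

```python
import numpy as np

rng = np.random.default_rng(1)

# One coarse-grained cell of N excitatory I&F neurons (assumed parameters)
N = 200
tau, sigma = 0.020, 0.005
V_R, V_E, V_T = 0.0, 14.0 / 3.0, 1.0
f, nu0 = 0.002, 500.0          # feedforward drive
Sa = 0.002                     # recurrent strength: each spike jumps every g by Sa/(N sigma)

dt, T = 1e-4, 0.5
v = rng.uniform(V_R, V_T, N)   # spread initial voltages to avoid artificial synchrony
g = np.zeros(N)
vs, gs = [], []

for i in range(int(T / dt)):
    g += rng.poisson(nu0 * dt, N) * f / sigma   # independent feedforward Poisson trains
    g -= dt / sigma * g
    v += dt / tau * (-(v - V_R) - g * (v - V_E))
    fired = v >= V_T
    v[fired] = V_R
    g += fired.sum() * Sa / (N * sigma)         # all-to-all recurrent kick
    if i % 50 == 0:
        vs.append(v.copy()); gs.append(g.copy())

# Empirical rho(v, g): a normalized 2-D histogram over neurons and sample times
rho, v_edges, g_edges = np.histogram2d(np.concatenate(vs), np.concatenate(gs),
                                       bins=30, density=True)
mass = rho.sum() * np.diff(v_edges)[0] * np.diff(g_edges)[0]
print("total probability mass:", round(float(mass), 6))
```

With `density=True` the histogram integrates to one, so it plays the role of the population-averaged pdf ρ(g,v,t) at the sampled times.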

τ ∂t v_i = -(v_i - V_R) - g_i (v_i - V_E)
σ ∂t g_i = -g_i + Σ_l f δ(t - t_l) + (Sa/N) Σ_{l,k} δ(t - t_l^k)

Evolution of the pdf ρ(g,v,t), valid when (i) N >> 1, and (ii) the total input to each neuron is a (modulated) Poisson spike train:

∂t ρ = τ^{-1} ∂v {[(v - V_R) + g (v - V_E)] ρ} + ∂g {(g/σ) ρ}
 + ν0(t) [ρ(v, g - f/σ, t) - ρ(v,g,t)]
 + N m(t) [ρ(v, g - Sa/(Nσ), t) - ρ(v,g,t)]

where ν0(t) = modulated rate of the incoming Poisson spike train, and m(t) = average firing rate of the neurons in the CG cell = ∫ J_v(v,g; ρ)|_{v=1} dg, with the flux J_v(v,g; ρ) = -τ^{-1} [(v - V_R) + g (v - V_E)] ρ

∂t ρ = τ^{-1} ∂v {[(v - V_R) + g (v - V_E)] ρ} + ∂g {(g/σ) ρ} + ν0(t) [ρ(v, g - f/σ, t) - ρ(v,g,t)] + N m(t) [ρ(v, g - Sa/(Nσ), t) - ρ(v,g,t)]

For N >> 1, f << 1, ν0 f = O(1):

∂t ρ = τ^{-1} ∂v {[(v - V_R) + g (v - V_E)] ρ} + ∂g {([g - G(t)]/σ) ρ} + (σ_g²/σ) ∂gg ρ + …

where σ_g² = ν0(t) f²/(2σ) + m(t) (Sa)²/(2Nσ), and G(t) = ν0(t) f + m(t) Sa

Kinetic Theory Begins from Moments of ρ(g,v,t):

ρ^(g)(g,t) = ∫ ρ(g,v,t) dv
ρ^(v)(v,t) = ∫ ρ(g,v,t) dg
μ1^(v)(v,t) = ∫ g ρ(g,t | v) dg, where ρ(g,v,t) = ρ(g,t | v) ρ^(v)(v,t)

Starting from

∂t ρ = τ^{-1} ∂v {[(v - V_R) + g (v - V_E)] ρ} + ∂g {([g - G(t)]/σ) ρ} + (σ_g²/σ) ∂gg ρ + …,

integrating the ρ(g,v,t) equation over v yields:

σ ∂t ρ^(g) = ∂g {[g - G(t)] ρ^(g)} + σ_g² ∂gg ρ^(g)

Fluctuations in g are Gaussian:

σ ∂t ρ^(g) = ∂g {[g - G(t)] ρ^(g)} + σ_g² ∂gg ρ^(g)
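This Gaussian limit can be checked by Monte Carlo: evolve many independent copies of the feedforward conductance equation and compare the empirical stationary mean with G = ν0 f, the variance with the feedforward part of σ_g² = ν0 f²/(2σ), and the skewness with the Gaussian value 0. The parameter values are assumptions for illustration; with a finite jump size f/σ a small residual skewness remains:

```python
import numpy as np

rng = np.random.default_rng(2)

sigma = 0.005                        # synaptic time scale (s)
f, nu0 = 0.01, 2000.0                # assumed feedforward strength and Poisson rate
G_pred = nu0 * f                     # predicted mean:      G = nu0 * f
var_pred = nu0 * f**2 / (2 * sigma)  # predicted variance:  feedforward part of sigma_g^2

# Evolve M independent copies of  sigma dg/dt = -g + f * sum_l delta(t - t_l)
M, dt, steps = 8000, 1e-4, 1000      # 0.1 s = 20 sigma: enough to reach stationarity
g = np.zeros(M)
for _ in range(steps):
    g += rng.poisson(nu0 * dt, M) * f / sigma   # each arrival jumps g by f/sigma
    g -= dt / sigma * g                          # exponential decay toward 0

z = (g - g.mean()) / g.std()
print("mean     ", g.mean(), " predicted", G_pred)
print("variance ", g.var(), " predicted", var_pred)
print("skewness ", np.mean(z**3), " (0 for an exact Gaussian)")
```

The mean and variance should land near the predictions up to Euler discretization and sampling error; the skewness is small but not exactly zero, since the Gaussian diffusion limit is only reached as the jump size shrinks relative to the spread.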

Integrating the ρ(g,v,t) equation over g yields:

∂t ρ^(v) = τ^{-1} ∂v [(v - V_R) ρ^(v) + μ1^(v) (v - V_E) ρ^(v)]

Integrating the [g ρ(g,v,t)] equation over g yields an equation for μ1^(v)(v,t) = ∫ g ρ(g,t | v) dg, where ρ(g,v,t) = ρ(g,t | v) ρ^(v)(v,t)

One obtains:

∂t μ1^(v) = -σ^{-1} [μ1^(v) - G(t)] + τ^{-1} {[(v - V_R) + μ1^(v) (v - V_E)] ∂v μ1^(v)}
 + Σ²(v)/(τ ρ^(v)) ∂v [(v - V_E) ρ^(v)] + τ^{-1} (v - V_E) ∂v Σ²(v)

where Σ²(v) = μ2^(v) - (μ1^(v))². Closure: (i) ∂v Σ²(v) = 0; (ii) Σ²(v) = σ_g²

∂t ρ^(v) = τ^{-1} ∂v [(v - V_R) ρ^(v) + μ1^(v) (v - V_E) ρ^(v)]

∂t μ1^(v) = -σ^{-1} [μ1^(v) - G(t)] + τ^{-1} {[(v - V_R) + μ1^(v) (v - V_E)] ∂v μ1^(v)}
 + σ_g²/(τ ρ^(v)) ∂v [(v - V_E) ρ^(v)]

together with a diffusion equation for ρ^(g)(g,t):

σ ∂t ρ^(g) = ∂g {[g - G(t)] ρ^(g)} + σ_g² ∂gg ρ^(g)

[Figure: Fluctuation-driven dynamics (N=75, σ=5 ms, S=0.05, f=0.01): pdf of v and firing rate (Hz), kinetic theory vs. I&F (solid) vs. Fokker-Planck; the mean-driven limit gives hard thresholding.]

[Figure: as above (N=75, σ=5 ms, S=0.05, f=0.01), with experimental firing rates added for comparison.]

Bistability and Hysteresis: network of simple, excitatory-only cells, with relatively strong cortical coupling; N=16. Mean-driven vs. fluctuation-driven.

Mean-Driven (N=16): Bistability and Hysteresis. Network of simple, excitatory-only cells, with relatively strong cortical coupling.
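The mean-driven bistability can be sketched with a simple self-consistency calculation: replace g by its mean G = ν0 f + m·Sa, compute the deterministic I&F firing rate m(G) in closed form (for reset 0 and threshold 1, as on the earlier slides), and iterate m = rate(ν0 f + m·Sa) while sweeping the feedforward drive up and then down. Everything below (τ, V_E, Sa, the sweep range) is an assumed illustration of an excitatory-only network with no refractory period, so the rate magnitudes are not physiological, but the up- and down-sweep branches separate, which is the hysteresis of this slide:

```python
import numpy as np

tau, V_E = 0.020, 14.0 / 3.0   # membrane time scale and excitatory reversal (assumed)

def rate(G):
    """Firing rate of tau dv/dt = -v - G (v - V_E) with threshold 1, reset 0."""
    v_inf = G * V_E / (1.0 + G)            # attracting fixed point of the drift
    if v_inf <= 1.0:
        return 0.0                          # subthreshold: the neuron never fires
    T = tau / (1.0 + G) * np.log(v_inf / (v_inf - 1.0))  # time from reset to threshold
    return 1.0 / T

def self_consistent_rate(nu0f, Sa, m0):
    """Damped fixed-point iteration for m = rate(nu0*f + m*Sa)."""
    m = m0
    for _ in range(500):
        m = 0.5 * m + 0.5 * rate(nu0f + m * Sa)
    return m

Sa = 0.004                                  # recurrent coupling strength (assumed)
drives = np.linspace(0.10, 0.60, 26)        # feedforward mean drive nu0*f

up, m = [], 0.0                             # sweep up, starting from the quiescent state
for d in drives:
    m = self_consistent_rate(d, Sa, m)
    up.append(m)

down = []                                   # sweep down, starting from the active state
for d in drives[::-1]:
    m = self_consistent_rate(d, Sa, m)
    down.append(m)
down = down[::-1]

gap = max(abs(u - w) for u, w in zip(up, down))
print("max up/down branch separation (Hz):", round(gap, 1))
```

Over an intermediate range of drives the up sweep stays on the quiescent branch while the down sweep stays on the active branch, so the two curves disagree: the network's firing rate depends on its history, which is the hysteresis loop of the slide.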

Computational Efficiency: for statistical accuracy in these CG patch settings, Kinetic Theory is more efficient than I&F.

Realistic Extensions: extensions to coarse-grained local patches, to excitatory and inhibitory neurons, and to neurons of different types (simple & complex). The pdf then takes the form ρ_{α,λ}(v,g; x,t), where x is the coarse-grained label, α = E,I, and λ labels the cell type

Three Dynamic Regimes of Cortical Amplification (excitatory cells shown): 1) weak cortical amplification (no bistability/hysteresis); 2) near-critical cortical amplification; 3) strong cortical amplification (bistability/hysteresis)

Firing rate vs. input conductance for 4 networks with varying pN: 25 (blue), 50 (magenta), 100 (black), 200 (red). Hysteresis occurs for pN = 100 and 200. Fixed synaptic coupling S_exc/pN.

Summary
Kinetic Theory is a numerically efficient (more efficient than I&F), and remarkably accurate, method for “scale-up”. Ref: PNAS (2004)
Kinetic Theory introduces no new free parameters into the model, and has a large dynamic range, from the rapid-firing “mean-driven” regime to a fluctuation-driven regime.
Sub-networks of point neurons can be embedded within kinetic theory to capture spike-timing statistics, with a range from test neurons to fully interacting sub-networks. Ref: Tao, Cai & McLaughlin, PNAS (2004)

Too good to be true? What’s missing? First, the zeroth moment is more accurate than the first moment, as in many moment closures

Too good to be true? What’s missing? Second, again as in many moment closures, existence can fail (Tranchina et al., 2006). That is, at low but realistic firing rates, the equations are too rigid to have steady-state solutions which satisfy the boundary conditions. Diffusion (in v) fixes this existence problem, by introducing boundary layers.

Too good to be true? What’s missing? But a far more serious problem: Kinetic Theory does not capture detailed “spike-timing” information

Why does the kinetic theory (the Boltzmann-type approach in general) not work?

Too good to be true? What’s missing? But a far more serious problem: Kinetic Theory does not capture detailed “spike-timing” statistics. And most likely the cortex works, on very short time scales, through neurons correlated by detailed spike timing. Take, for example, the line-motion illusion.

Line-Motion-Illusion LMI

Direct ‘naïve’ coarse graining may not suffice: the priming mechanism relies on recruitment; recruitment relies on locally correlated cortical firing events; a naïve ensemble average destroys locally correlated events. [Figure: model voltage and model NMDA conductance, space vs. time, over trials, for the stimulus; 40% ‘coarse’ vs. 0% ‘coarse’.]

Conclusion
Kinetic Theory is numerically efficient (more efficient than I&F) and remarkably accurate.
Kinetic Theory accurately captures firing rates in fluctuation-dominated systems.
Kinetic Theory does not capture detailed spike-timed correlations, which may be how the cortex works, as it has no time to average.
So we’ve returned to integrate & fire networks, and have developed fast “multipole” algorithms for integrate & fire systems (Cai and Rangan, 2005).