Discrete and non-discrete models of neuronal networks. Winfried Just, Department of Mathematics, Ohio University; Sungwoo Ahn, Xueying Wang; David Terman, Department of Mathematics, Ohio State University.


Discrete and non-discrete models of neuronal networks (Dyskretne i niedyskretne modele sieci neuronów). Winfried Just, Department of Mathematics, Ohio University; Sungwoo Ahn, Xueying Wang; David Terman, Department of Mathematics, Ohio State University; Mathematical Biosciences Institute.

A neuronal system consists of 3 components: 1) intrinsic properties of cells; 2) synaptic connections between cells; 3) network architecture. Each of these involves many parameters and multiple time scales. Basic questions: Can we classify network behavior? Can we design a network that does something of interest?

Outline of the talk: ODE models of single-cell dynamics; a small network suggests a discrete model; definition of discrete dynamics; reduction of ODE dynamics to discrete dynamics; network connectivity and discrete dynamics; relation to other models of discrete dynamics.

Single Cell: v' = f(v,w), w' = ε g(v,w). A cell may be excitable or oscillatory. The variable v measures the voltage across the membrane; it changes on the fast timescale. The variable w is called the "gating variable" and roughly measures the proportion of open ion channels; it changes on the slow timescale. (Figure: phase plane with the v-nullcline and the w-nullcline.)

Two Mutually Coupled Cells:
v_1' = f(v_1, w_1) - g_syn s_2 (v_1 - v_syn)
w_1' = ε g(v_1, w_1)
s_1' = α(1 - s_1)H(v_1 - θ) - β s_1
v_2' = f(v_2, w_2) - g_syn s_1 (v_2 - v_syn)
w_2' = ε g(v_2, w_2)
s_2' = α(1 - s_2)H(v_2 - θ) - β s_2
Here s is the fraction of open synaptic channels, H is the Heaviside function, and g_syn is a constant maximal conductance. Synapses may be fast or slow (depending on α and β) and excitatory or inhibitory (depending on v_syn). Sometimes we consider indirect synapses:
x_i' = ε α_x (1 - x_i)H(v_i - θ) - ε β_x x_i
s_i' = α(1 - s_i)H(x_i - θ_x) - β s_i
This introduces a delay in the response of the synapse.

Empirical observations: When the dynamics of this system is simulated on a computer, one observes sharply defined episodes of roughly equal length during which groups of cells fire (reside on the right branch of the v-nullcline) together, while the other cells rest (reside on the left branch of the v-nullcline). Membership in these groups may change from episode to episode, a phenomenon called dynamic clustering. Experimental studies of actual neuronal networks, such as the olfactory bulb in insects or the thalamic cells involved in sleep rhythms, appear to show similar patterns. This suggests that one could attempt to reduce the ODE dynamics to a simpler discrete model and study the properties of the discrete model instead.

Reduction to discrete dynamics. Transient: a linear sequence of activations. Period: a stable, cyclic sequence of activations. Example firing sequence: (1,6) → (4,5) → (2,3,7) → (1,5,6) → (2,4,7) → (3,6) → (1,4,5) → … Assumption: a cell does not fire in consecutive episodes.

Some other solutions (network architecture shown in figure). Reference orbit: (1,6) → (4,5) → (2,3,7) → (1,5,6) → (2,4,7) → (3,6) → (1,4,5). Different transient, same attractor: (1,3,7) → (4,5,6) → … Different transient, different attractor: (1,2,5) → (4,6,7) → (2,3,5) → (1,6,7) → (3,4,5) → (1,2,7) → (3,4,5,6) → …

What is the state transition graph of the dynamics? How many attractors and transients are there? Network architecture

Remarks: 1) We have assumed that the refractory period = 1: if a cell fires, then it must wait one episode before it can fire again. 2) We have assumed that the threshold = 1: if a cell is ready to fire, then it will fire if it receives input from at least one other active cell. For now, we assume that the refractory period of every cell = p and the threshold for every cell = 1.


Discrete Dynamics. Start with a directed graph D = [V_D, A_D] and an integer p. A state s(k) at discrete time k is a vector s(k) = [s_1(k), …, s_n(k)], where s_i(k) ∈ {0, 1, …, p} for each i (n = # cells). The state s_i(k) = 0 means neuron i fires at time k. Dynamics on the discrete network:
If s_i(k) < p, then s_i(k+1) = s_i(k) + 1.
If s_i(k) = p and there exists a j with s_j(k) = 0 and ⟨j,i⟩ ∈ A_D, then s_i(k+1) = 0.
If s_i(k) = p and there is no j with s_j(k) = 0 and ⟨j,i⟩ ∈ A_D, then s_i(k+1) = p.
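This update rule is simple enough to simulate directly. A minimal sketch in Python (all names are illustrative; since the 7-cell network from the earlier slides is not fully specified, a small 3-cell cycle is used for illustration):

```python
# One step of the discrete dynamics. The digraph is given as a set of
# arcs (j, i), meaning neuron j sends input to neuron i; p is the common
# refractory period.

def step(state, arcs, p):
    """Advance the discrete network by one time step.

    state[i] == 0 means neuron i fires now; otherwise state[i] counts
    steps since the last firing, capped at the refractory period p.
    """
    firing = {i for i, s in enumerate(state) if s == 0}
    nxt = []
    for i, s in enumerate(state):
        if s < p:
            nxt.append(s + 1)  # still refractory: age by one
        elif any((j, i) in arcs for j in firing):
            nxt.append(0)      # ready and received input: fire
        else:
            nxt.append(p)      # ready but no input: stay at p
    return nxt

# Illustration: a 3-cell directed cycle 0 -> 1 -> 2 -> 0 with p = 1.
arcs = {(0, 1), (1, 2), (2, 0)}
state = [0, 1, 1]
for _ in range(4):
    state = step(state, arcs, p=1)
    print(state)
# prints [1, 0, 1], [1, 1, 0], [0, 1, 1], [1, 0, 1]: a period-3 orbit
```

Activity travels around the cycle, so the orbit is periodic with period 3 and every cell is active, matching the notion of a fully active attractor defined later.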

Two Issues 1)When can we reduce the differential equations model to the discrete model? 2)What can we prove about the discrete model? In particular, how does the network connectivity influence the discrete dynamics?

Reducing the neuronal model to discrete dynamics. Given integers n (size of network) and p (refractory period), can we choose intrinsic and synaptic parameters so that for any network architecture, every orbit of the discrete model can be realized by a stable solution of the neuronal model? Answer: No for purely inhibitory networks; Yes for excitatory-inhibitory networks.

Post-inhibitory rebound. We will consider networks of neurons in which the w-nullcline intersects the left branch of the v-nullcline(s). If such a neuron receives excitatory input, the v-nullcline moves up; if it receives inhibitory input, the v-nullcline moves down. If two such neurons are coupled by inhibitory synapses, the resulting dynamics is known as post-inhibitory rebound.

Purely Inhibitory Network. (Figure: cells 1-4 with g = 0, clusters C(0), C(1), C(2), firing in the pattern (1,2), (3,4).) Note that the distance between cells within each cluster increases.

Excitatory-Inhibitory Networks E-cells I-cells excitation inhibition

Formally reduce an E-I network → a purely inhibitory network. (Figure: E-cells and I-cells, with excitation and inhibition.) An E-cell fires → I-cells fire → E-cells fire due to rebound. We can then define a graph on the set of E-cells and define discrete dynamics as before.

More precisely: the vertex set of the digraph consists of the numbers of all E-neurons. An arc ⟨i,j⟩ is included in the digraph if and only if there is some I-neuron x that may receive excitatory input from i and may send inhibitory input to j.
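This construction of the reduced digraph can be sketched in a few lines (a hypothetical helper; all names are illustrative):

```python
# Build the digraph on the E-cells from the bipartite E-I connectivity:
# exc_arcs holds pairs (i, x) meaning E-neuron i excites I-neuron x;
# inh_arcs holds pairs (x, j) meaning I-neuron x inhibits E-neuron j.

def e_cell_digraph(exc_arcs, inh_arcs):
    """Return the arcs (i, j) of the reduced digraph on the E-neurons:
    (i, j) is an arc iff some I-neuron x receives excitation from i
    and sends inhibition to j."""
    return {(i, j)
            for (i, x) in exc_arcs
            for (y, j) in inh_arcs
            if x == y}

# E-neuron 1 excites I-neuron 'a', which inhibits E-neurons 2 and 3:
print(e_cell_digraph({(1, 'a')}, {('a', 2), ('a', 3)}))
# → {(1, 2), (1, 3)}
```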

Rigorously reducing E-I networks to discrete dynamics Assume: All-to-all coupling among I-cells Inhibitory synapses are indirect (slow) Suitable functions f and g The ODE dynamics is assumed to be the dynamics on the slow timescale; all trajectories move along the v-nullclines; jumps are instantaneous

Discrete vs. ODE models Consider any such network with any fixed architecture and fix p, the refractory period. We can then define both the continuous neuronal and discrete models. Let P(0) be any state of the discrete model. We then wish to show that there exists a solution of the neuronal system in which different subsets of cells take turns jumping up to the active phase. The active cells during each subsequent episode are precisely those determined by the discrete orbit of P(0), and this exact correspondence to the discrete dynamics remains valid throughout the trajectory of the initial state. We will say that such a solution realizes the orbit predicted by the discrete model. This solution will be stable in the sense that there is a neighborhood of the initial state such that every trajectory that starts in this neighborhood realizes the same discrete orbit.

Main Theorem (Terman, Ahn, Wang, Just; Physica D, 237(3) (2008)). Suppose a discrete model defined by a digraph is given. Then there are intervals for the choice of the intrinsic parameters of the cells and the synaptic parameters in the ODE model so that: 1. Every orbit of the discrete model is realized by a stable solution of the differential equations model. 2. Every solution of the differential equations model eventually realizes a periodic orbit of the discrete model. That is, if X(t) is any solution of the differential equations model, then there exists T > 0 such that the solution {X(t) : t > T} realizes a periodic orbit or a steady state of the discrete model.

Strategy. Suppose we are given an E-I network. Let s(0) be any initial state of the discrete system. We wish to choose initial positions of the E- and I-cells so that the E-I network produces firing patterns as predicted by the discrete system. (Figure: E-cells positioned along the nullcline with g = 0.)

We construct disjoint intervals J_k, k = 0, …, p, with the following property (the figure shows the case p = 2, with intervals J_0, J_1, J_2). Let s(0) = (s_1, …, s_n) and consider the E-cells (v_i, w_i). Assume: if s_i(0) = k, then w_i(0) ∈ J_k. Then there exists T* > 0 such that if s_i(1) = k, then w_i(T*) ∈ J_k. The only E-cells that fire for t ∈ [0, T*] are those with s_i(0) = 0.

Generalized Discrete Dynamics. Start with a directed graph D = [V_D, A_D] and vectors of integers p = [p_1, …, p_n] and th = [th_1, …, th_n]. A state s(k) at discrete time k is a vector s(k) = [s_1(k), …, s_n(k)], where s_i(k) ∈ {0, 1, …, p_i} for each i (n = # cells). The state s_i(k) = 0 means neuron i fires at time k. Dynamics on the discrete network:
If s_i(k) < p_i, then s_i(k+1) = s_i(k) + 1.
If s_i(k) = p_i and there exist at least th_i nodes j with s_j(k) = 0 and ⟨j,i⟩ ∈ A_D, then s_i(k+1) = 0.
If s_i(k) = p_i and there are fewer than th_i nodes j with s_j(k) = 0 and ⟨j,i⟩ ∈ A_D, then s_i(k+1) = p_i.
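A sketch of this generalized rule, with per-neuron refractory periods p_i and firing thresholds th_i (all names are illustrative):

```python
def step_general(state, arcs, p, th):
    """One step of the generalized discrete dynamics.

    state[i] lies in {0, ..., p[i]}; state[i] == 0 means neuron i fires
    now. arcs is a set of pairs (j, i): neuron j sends input to neuron i.
    """
    firing = {i for i, s in enumerate(state) if s == 0}
    nxt = []
    for i, s in enumerate(state):
        if s < p[i]:
            nxt.append(s + 1)  # still refractory: age by one
        else:
            inputs = sum(1 for j in firing if (j, i) in arcs)
            # fire only if at least th[i] presynaptic neurons fire now
            nxt.append(0 if inputs >= th[i] else p[i])
    return nxt

# Neuron 2 has threshold 2: it fires only when 0 and 1 fire together.
arcs = {(0, 2), (1, 2)}
print(step_general([0, 0, 1], arcs, p=[1, 1, 1], th=[1, 1, 2]))
# → [1, 1, 0]
```

Continuing from that state, neuron 2 fires once and the network then settles into the steady state [1, 1, 1], since no neuron receives further input.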

Rigorous analysis of discrete model Start with a directed graph D = [ V D, A D ] p i = refractory period of neuron i th i = threshold of neuron i n = # of vertices. How does the expected dynamics of the discrete model depend on the density of connections? We will study this question by considering random initial states in random digraphs with a specified connection probability.

Some Definitions. Let L = {s(1), …, s(k)} be an attractor. Act(L) = {i : s_i(t) = 0 for some t} (the active set of L). L is fully active if Act(L) = [n] = {1, …, n}. L is a minimal attractor if Act(L) ≠ Ø and, for each i ∈ Act(L), the sequence s_i(1), …, s_i(k) cycles through 0, 1, …, p_i. Let MA = {states that belong to a minimal attractor} and FAMA = {states that belong to a fully active minimal attractor}.
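These definitions can be checked algorithmically by iterating the dynamics until a state repeats. A sketch, with the threshold-1 update rule repeated inline so the snippet is self-contained (all names are illustrative):

```python
def find_attractor(state, arcs, p):
    """Iterate the discrete dynamics (threshold 1, refractory-period
    vector p) until a state repeats; return the attractor as a list of
    state tuples in trajectory order."""
    def step(s):
        firing = {i for i, v in enumerate(s) if v == 0}
        return tuple(v + 1 if v < p[i]
                     else (0 if any((j, i) in arcs for j in firing)
                           else p[i])
                     for i, v in enumerate(s))
    seen = {}
    s, t = tuple(state), 0
    while s not in seen:
        seen[s] = t
        s = step(s)
        t += 1
    start = seen[s]  # first index on the cycle
    return [x for x, k in sorted(seen.items(), key=lambda kv: kv[1])
            if k >= start]

def is_minimal(cycle, p):
    """L is minimal iff Act(L) is nonempty and every active node i
    steps through 0, 1, ..., p_i cyclically along the attractor."""
    act = {i for s in cycle for i, v in enumerate(s) if v == 0}
    if not act:
        return False
    k = len(cycle)
    return all(cycle[(t + 1) % k][i] == (cycle[t][i] + 1) % (p[i] + 1)
               for t in range(k) for i in act)
```

For example, on the directed 3-cycle with p_i = 1 the period-3 orbit is fully active but not minimal, since a minimally cycling cell with p_i = 1 must fire every second episode; two mutually coupled cells firing in alternation do form a fully active minimal attractor.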

Consider random digraphs: π(n) = probability that ⟨i,j⟩ ∈ A_D for a given pair i ≠ j. Theorem: Assume that the p_i and th_i are bounded independently of n. (i) If π(n) = ω(ln n / n), then as n → ∞, with probability one, |FAMA|/|states| → 1. (ii) If π(n) = o(ln n / n), then as n → ∞, with probability one, |MA|/|states| → 0. A phase transition occurs when π(n) ~ ln n / n. (Just, Ahn, Terman; Physica D 237(24) (2008))

Autonomous sets. Definition: Let s(0) = [s_1(0), …, s_n(0)] be a state. We say A ⊆ V_D is autonomous for s(0) if for every i ∈ A, s_i(t) is minimally cycling (that is, s_i(0), s_i(1), …, s_i(t) cycles through {0, …, p_i}) in the discrete system that is obtained by restricting the set of nodes to A. Example: active sets of minimal attractors are autonomous. Note that the dynamics on an autonomous set does not depend on the states of the remaining nodes.

The following result suggests that there exists another phase transition at π(n) ~ C/n. Theorem: Assume that the p_i and th_i are bounded independently of n. There exists a constant C such that if π(n) > C/n, then with probability tending to one as n → ∞, a randomly chosen state will have an autonomous set of size proportional to n. In particular, most states have a large set of minimally cycling nodes. (Just, Ahn, Terman; Physica D 237(24) (2008))

Numerical explorations

Current work in progress When the connection probability is ~ 1/n, another phase transition occurs for the case when all refractory periods and all firing thresholds are 1. Below this phase transition, with high probability the basin of attraction of the steady state [1, …, 1] becomes the whole state space. We are investigating what happens for connection probabilities slightly above this phase transition. Theoretical results predict longer transients near the phase transition, and this is what we are seeing in simulations. One question we are interested in is whether chaotic dynamics would be generic for connection probabilities in a critical range.

Other ongoing research Can we generalize our first theorem to architectures where the connections between the I-cells are somewhat random rather than all-to-all? How to incorporate learning and processing of inputs into this model? Can we obtain analogous results for networks based on different single-cell ODE dynamics?

Hopfield Networks. Networks are modeled as digraphs with weighted arcs; weights may be positive or negative. Each neuron has a firing threshold th_i. At each step, neurons are in state zero or one. The successor state of a given state is determined by summing the weights of all incoming arcs that originate at neurons in state one. If this sum exceeds th_i, the neuron goes into state one (fires); otherwise it goes into state zero.
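A minimal sketch of one synchronous Hopfield update step as described above (strict threshold; all names are illustrative):

```python
def hopfield_step(state, weights, th):
    """state[i] is 0 or 1; weights[i][j] is the (possibly negative)
    weight of the arc from neuron j to neuron i; th[i] is the firing
    threshold of neuron i."""
    n = len(state)
    return [1 if sum(weights[i][j] for j in range(n) if state[j] == 1)
                 > th[i]
            else 0
            for i in range(n)]

# Two mutually excitatory neurons pass activity back and forth:
w = [[0, 1],
     [1, 0]]
print(hopfield_step([1, 0], w, th=[0.5, 0.5]))  # → [0, 1]
```

A negative weight models inhibition, which Terman networks (as noted on the next slide) do not allow: with w[0][1] = -1, an active neuron 1 silences neuron 0 on the next step.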

Hopfield vs. Terman Networks Hopfield networks don’t model refractory periods Terman networks don’t allow negatively weighted arcs For refractory period p = 1, both kinds are examples of Boolean networks Dynamics of random Hopfield networks tends to become more chaotic as connectivity increases Random Terman networks may allow chaotic dynamics only for narrow range of connectivity

Why am I interested in this? My major interests are centered around models of gene regulatory networks. These can be modeled with ODE systems; but Boolean and other discrete models are also being studied in the literature, with the (generally) unrealistic assumption of synchronous updating. Question: Under which conditions can we prove a correspondence between discrete and continuous models of gene regulatory networks as in our first theorem?