
1 Discrete and Non-Discrete Models of Neuronal Networks. Winfried Just, Department of Mathematics, Ohio University. Sungwoo Ahn, Xueying Wang, David Terman, Department of Mathematics, Ohio State University; Mathematical Biosciences Institute.

2 A neuronal system consists of 3 components:
1) Intrinsic properties of cells
2) Synaptic connections between cells
3) Network architecture
Each of these involves many parameters and multiple time scales.
Basic questions: Can we classify network behavior? Can we design a network that does something of interest?

3 Outline of the talk
Network connectivity and discrete dynamics
Definition of discrete dynamics
Reduction of ODE dynamics to discrete dynamics
A small network suggests a discrete model
ODE models of single-cell dynamics
Relation to other models of discrete dynamics

4 Single Cell
v' = f(v, w)
w' = ε g(v, w)
A cell may be excitable or oscillatory. The variable v measures the voltage across the membrane; it changes on the fast timescale. The variable w is called the "gating variable" and roughly measures the proportion of open ion channels; it changes on the slow timescale.
[Phase-plane figure: solid curve = v-nullcline, dashed curve = w-nullcline.]
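As a concrete illustration, here is a minimal Python sketch of such a two-variable fast-slow model. The specific choice of f and g (a FitzHugh-Nagumo-style cubic/linear pair) and all parameter values are illustrative assumptions, not the functions used in the talk.

```python
from scipy.integrate import solve_ivp

# Hypothetical stand-ins for f and g: cubic v-nullcline, linear w-nullcline.
eps = 0.05            # small parameter: w evolves on the slow timescale
I_app = 0.5           # applied current; roughly, low values give an excitable
                      # cell and intermediate values an oscillatory one

def f(v, w):          # fast voltage dynamics
    return v - v**3 / 3.0 - w + I_app

def g(v, w):          # slow gating dynamics
    return v + 0.7 - 0.8 * w

def rhs(t, y):
    v, w = y
    return [f(v, w), eps * g(v, w)]

sol = solve_ivp(rhs, (0.0, 400.0), [-1.0, -0.5], max_step=0.5)
print("final (v, w):", sol.y[0, -1], sol.y[1, -1])
```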

5 Two Mutually Coupled Cells
v_1' = f(v_1, w_1) − g_syn s_2 (v_1 − v_syn)
w_1' = ε g(v_1, w_1)
s_1' = α (1 − s_1) H(v_1 − θ) − β s_1
v_2' = f(v_2, w_2) − g_syn s_1 (v_2 − v_syn)
w_2' = ε g(v_2, w_2)
s_2' = α (1 − s_2) H(v_2 − θ) − β s_2
Here s_i is the fraction of open synaptic channels, H is the Heaviside function, and g_syn is a constant maximal conductance.
Synapses may be fast or slow (depending on α and β) and excitatory or inhibitory (depending on v_syn).
Sometimes we consider indirect synapses:
x_i' = ε α_x (1 − x_i) H(v_i − θ) − ε β_x x_i
s_i' = α (1 − s_i) H(x_i − θ_x) − β s_i
This introduces a delay in the response of the synapse.
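The following is a sketch of the coupled system with direct synapses, using simple forward-Euler integration. The functions f and g are the same hypothetical stand-ins as in the previous sketch, and the values of α, β, θ, g_syn, and v_syn are illustrative only.

```python
import numpy as np

eps, alpha, beta, theta = 0.05, 5.0, 0.2, 0.0
g_syn, v_syn = 0.5, -2.0        # v_syn well below rest -> inhibitory synapses

def f(v, w): return v - v**3 / 3.0 - w + 0.5
def g(v, w): return v + 0.7 - 0.8 * w
def H(x):    return (x > 0).astype(float)   # Heaviside function

dt, T = 0.05, 600.0
v = np.array([-1.2, 0.6])       # start the two cells at different phases
w = np.array([-0.6, 0.1])
s = np.zeros(2)

for _ in range(int(T / dt)):
    # cell 1 is driven by s_2 and cell 2 by s_1, hence the reversal s[::-1]
    dv = f(v, w) - g_syn * s[::-1] * (v - v_syn)
    dw = eps * g(v, w)
    ds = alpha * (1.0 - s) * H(v - theta) - beta * s
    v, w, s = v + dt * dv, w + dt * dw, s + dt * ds

print("final state:", v, w, s)
```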

6 Empirical observations. When the dynamics of this system is simulated on a computer, one observes rather sharply defined episodes of roughly equal length, during which groups of cells fire together (reside on the right branch of the v-nullcline) while the other cells rest (reside on the left branch of the v-nullcline). Membership in these groups may change from episode to episode, a phenomenon called dynamic clustering. Experimental studies of actual neuronal networks, such as the olfactory bulb in insects or the thalamic cells involved in sleep rhythms, appear to show similar patterns. This suggests that one could attempt to reduce the ODE dynamics to a simpler discrete model and study the properties of the discrete model instead.

7 Reduction to discrete dynamics. Example episode sequence: (1,6) → (4,5) → (2,3,7) → (1,5,6) → (2,4,7) → (3,6) → (1,4,5).
Transient: linear sequence of activation. Period: stable, cyclic sequence of activation.
Assumption: a cell does not fire in consecutive episodes.

8 Some other solutions (same network architecture):
(1,6) (4,5) (2,3,7) (1,5,6) (2,4,7) (3,6) (1,4,5) — different transient, same attractor
(1,3,7) (4,5,6) — different transient, different attractor
(1,2,5) (4,6,7) (2,3,5) (1,6,7) (3,4,5) (1,2,7) (3,4,5,6)
[Figure: the network architecture.]

9 Network architecture. [Figure: a digraph on cells 1–7.] What is the state transition graph of the dynamics? How many attractors and transients are there?

10 [Figure-only slide.]

11 [Figure: portion of the state transition graph; nodes are labeled by the sets of co-active cells.]

12 [Figure: continuation of the state transition graph.]

13 Remarks
1) We have assumed that the refractory period = 1: if a cell fires, then it must wait one episode before it can fire again.
2) We have assumed that the threshold = 1: if a cell is ready to fire, then it will fire if it receives input from at least one other active cell.
For now, we assume that the refractory period of every cell = p and the threshold of every cell = 1.

14 Discrete Dynamics. Start with a directed graph D = [V_D, A_D] and an integer p. [Figure: the example digraph on cells 1–7.]

15 Discrete Dynamics. Start with a directed graph D = [V_D, A_D] and an integer p. A state s(k) at discrete time k is a vector s(k) = [s_1(k), …, s_n(k)], where s_i(k) ∈ {0, 1, …, p} for each i (n = number of cells). The state s_i(k) = 0 means that neuron i fires at time k.
Dynamics on the discrete network:
If s_i(k) < p, then s_i(k+1) = s_i(k) + 1.
If s_i(k) = p and there exists j with s_j(k) = 0 and ⟨j, i⟩ ∈ A_D, then s_i(k+1) = 0.
If s_i(k) = p and there is no j with s_j(k) = 0 and ⟨j, i⟩ ∈ A_D, then s_i(k+1) = p.
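In code, the update rule reads as follows. This is a minimal sketch; the encoding of the digraph as a set of arcs (j, i), meaning "j sends input to i", and the small example network are assumptions for illustration.

```python
def step(s, arcs, p):
    """One synchronous step of the discrete dynamics (threshold 1)."""
    firing = {i for i, si in enumerate(s) if si == 0}     # cells with s_i(k) = 0
    nxt = []
    for i, si in enumerate(s):
        if si < p:
            nxt.append(si + 1)                            # refractory: count up
        elif any((j, i) in arcs for j in firing):
            nxt.append(0)                                 # ready and got input: fire
        else:
            nxt.append(p)                                 # ready but no input: wait
    return tuple(nxt)

# Tiny example: cells 0 -> 1 -> 2 -> 0 in a directed cycle, p = 1.
arcs = {(0, 1), (1, 2), (2, 0)}
s = (0, 1, 1)
for _ in range(4):
    print(s)
    s = step(s, arcs, p=1)   # prints (0,1,1), (1,0,1), (1,1,0), (0,1,1)
```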

16 Two Issues
1) When can we reduce the differential equations model to the discrete model?
2) What can we prove about the discrete model? In particular, how does the network connectivity influence the discrete dynamics?

17 Reducing the neuronal model to discrete dynamics. Given integers n (size of network) and p (refractory period), can we choose intrinsic and synaptic parameters so that, for any network architecture, every orbit of the discrete model can be realized by a stable solution of the neuronal model? Answer: No for purely inhibitory networks; Yes for excitatory-inhibitory networks.

18 Post-inhibitory rebound. We will consider networks of neurons in which the w-nullcline intersects the left branch of the v-nullcline(s). If such a neuron receives excitatory input, the v-nullcline moves up; if it receives inhibitory input, the v-nullcline moves down. If two such neurons are coupled by inhibitory synapses, the resulting dynamics is known as post-inhibitory rebound.

19 Purely Inhibitory Network. [Figure: cells 1, 2, 3, 4 grouped into clusters (1,2) and (3,4); labels C(0), C(1), C(2) and g = 0.] Note that the distance between cells within each cluster increases.

20 Excitatory-Inhibitory Networks. [Figure: E-cells and I-cells; excitation runs from E-cells to I-cells, and inhibition from I-cells to E-cells.]

21 Formally reduce an E-I network to a purely inhibitory network. [Figure: E-cells and I-cells with excitatory and inhibitory connections.] An E-cell fires → I-cells fire → E-cells fire due to rebound. We can then define a graph on the set of E-cells and define discrete dynamics as before.

22 More precisely: the vertex set of the digraph consists of the numbers of all E-neurons. An arc ⟨i, j⟩ is included in the digraph if and only if there is some I-neuron x that may receive excitatory input from i and may send inhibitory input to j.
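A sketch of this construction, assuming the E-I connectivity is given as adjacency dictionaries (which E-cells excite each I-cell, and which E-cells each I-cell inhibits); the encoding and the example are hypothetical.

```python
def reduced_digraph(e_to_i, i_to_e):
    """Arcs <i, j> between E-cells: i excites some I-cell x that inhibits j."""
    arcs = set()
    for x, exciters in e_to_i.items():       # x ranges over I-cells
        for i in exciters:                   # E-cell i excites x ...
            for j in i_to_e.get(x, ()):      # ... and x inhibits E-cell j
                arcs.add((i, j))
    return arcs

# Example: I-cell 'a' is excited by E-cell 1 and inhibits E-cells 2 and 3.
print(reduced_digraph({'a': {1}}, {'a': {2, 3}}))   # {(1, 2), (1, 3)}
```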

23 Rigorously reducing E-I networks to discrete dynamics. Assume:
All-to-all coupling among I-cells.
Inhibitory synapses are indirect (slow).
Suitable functions f and g.
The ODE dynamics is taken to be the dynamics on the slow timescale: all trajectories move along the v-nullclines, and jumps are instantaneous.

24 Discrete vs. ODE models Consider any such network with any fixed architecture and fix p, the refractory period. We can then define both the continuous neuronal and discrete models. Let P(0) be any state of the discrete model. We then wish to show that there exists a solution of the neuronal system in which different subsets of cells take turns jumping up to the active phase. The active cells during each subsequent episode are precisely those determined by the discrete orbit of P(0), and this exact correspondence to the discrete dynamics remains valid throughout the trajectory of the initial state. We will say that such a solution realizes the orbit predicted by the discrete model. This solution will be stable in the sense that there is a neighborhood of the initial state such that every trajectory that starts in this neighborhood realizes the same discrete orbit.

25 Main Theorem (Terman, Ahn, Wang, Just; Physica D, 237(3) (2008)). Suppose a discrete model defined by a digraph is given. Then there are intervals for the choice of the intrinsic parameters of the cells and the synaptic parameters in the ODE model so that: 1. Every orbit of the discrete model is realized by a stable solution of the differential equations model. 2. Every solution of the differential equations model eventually realizes a periodic orbit of the discrete model. That is, if X(t) is any solution of the differential equations model, then there exists T > 0 such that the solution {X(t) : t > T} realizes a periodic orbit or a steady state of the discrete model.

26 Strategy. Suppose we are given an E-I network. Let s(0) be any initial state of the discrete system. We wish to choose initial positions of the E- and I-cells so that the E-I network produces firing patterns as predicted by the discrete system. [Figure: E-cells positioned along the curve g = 0.]

27 We construct disjoint intervals J_k, k = 0, …, p, so that the following holds. Let s(0) = (s_1, …, s_n) and consider the E-cells (v_i, w_i). Assume that if s_i(0) = k, then w_i(0) ∈ J_k. Then there exists T* > 0 such that if s_i(1) = k, then w_i(T*) ∈ J_k. Moreover, the only E-cells that fire for t ∈ [0, T*] are those with s_i(0) = 0. [Figure: the intervals J_0, J_1, J_2 (case p = 2).]

28 Generalized Discrete Dynamics. Start with a directed graph D = [V_D, A_D] and vectors of integers p = [p_1, …, p_n] and th = [th_1, …, th_n]. A state s(k) at discrete time k is a vector s(k) = [s_1(k), …, s_n(k)], where s_i(k) ∈ {0, 1, …, p_i} for each i (n = number of cells). The state s_i(k) = 0 means that neuron i fires at time k.
Dynamics on the discrete network:
If s_i(k) < p_i, then s_i(k+1) = s_i(k) + 1.
If s_i(k) = p_i and there exist at least th_i nodes j with s_j(k) = 0 and ⟨j, i⟩ ∈ A_D, then s_i(k+1) = 0.
If s_i(k) = p_i and there are fewer than th_i nodes j with s_j(k) = 0 and ⟨j, i⟩ ∈ A_D, then s_i(k+1) = p_i.
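The same rule in code, with per-cell refractory periods and thresholds; it continues the sketch from slide 15 (arc encoding as before, for illustration only).

```python
def step_general(s, arcs, p, th):
    """One synchronous step with refractory periods p[i] and thresholds th[i]."""
    firing = {i for i, si in enumerate(s) if si == 0}
    nxt = []
    for i, si in enumerate(s):
        if si < p[i]:
            nxt.append(si + 1)                                  # still refractory
        else:
            inputs = sum(1 for j in firing if (j, i) in arcs)   # firing in-neighbors
            nxt.append(0 if inputs >= th[i] else p[i])          # fire iff threshold met
    return tuple(nxt)
```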

29 Rigorous analysis of the discrete model. Start with a directed graph D = [V_D, A_D]. Here p_i = refractory period of neuron i, th_i = threshold of neuron i, and n = number of vertices. How does the expected dynamics of the discrete model depend on the density of connections? We will study this question by considering random initial states in random digraphs with a specified connection probability.

30 Some Definitions. Let L = {s(1), …, s(k)} be an attractor.
Act(L) = {i : s_i(t) = 0 for some t} (the active set of L).
L is fully active if Act(L) = [n] = {1, …, n}.
L is a minimal attractor if Act(L) ≠ Ø and, for each i ∈ Act(L), the states s_i(0), …, s_i(k) cycle through 0, 1, …, p_i.
Let MA = {states that belong to a minimal attractor} and FAMA = {states that belong to a fully active minimal attractor}.
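These definitions translate directly into code. The sketch below finds the attractor reached from a given state by iterating step_general() from the previous sketch and then tests the properties above; it is illustrative only.

```python
def attractor(s0, arcs, p, th):
    """Iterate until a state repeats; return the cyclic part of the orbit."""
    seen, orbit, s = {}, [], s0
    while s not in seen:
        seen[s] = len(orbit)
        orbit.append(s)
        s = step_general(s, arcs, p, th)
    return orbit[seen[s]:]

def active_set(L):
    return {i for s in L for i, si in enumerate(s) if si == 0}

def is_minimal(L, p):
    """Each active cell steps through 0, 1, ..., p_i around the whole cycle."""
    act = active_set(L)
    return bool(act) and all(
        L[(t + 1) % len(L)][i] == (L[t][i] + 1) % (p[i] + 1)
        for i in act for t in range(len(L)))

def is_fully_active(L, n):
    return active_set(L) == set(range(n))
```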

31 Consider random digraphs: π(n) = the probability that ⟨j, i⟩ ∈ A_D for a given pair ⟨j, i⟩.
Theorem: Assume that the p_i and th_i are bounded independently of n.
(i) If π(n) = ω(ln n / n), then as n → ∞, with probability one, |FAMA| / |states| → 1.
(ii) If π(n) = o(ln n / n), then as n → ∞, with probability one, |MA| / |states| → 0.
A phase transition occurs when π(n) ~ ln n / n.
(Just, Ahn, Terman; Physica D 237(24) (2008))
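A Monte Carlo sketch of this statement: sample random digraphs with arc probability π(n) and random states, and estimate the fraction of sampled states that lie on a fully active minimal attractor. It reuses the helpers sketched above; the network size, trial count, and multiples of ln n / n are illustrative choices.

```python
import math, random

def estimate_fama_fraction(n, pi_n, trials=200, p_val=1, th_val=1):
    p, th = [p_val] * n, [th_val] * n
    hits = 0
    for _ in range(trials):
        # random digraph: each arc (j, i), j != i, present with probability pi_n
        arcs = {(j, i) for j in range(n) for i in range(n)
                if j != i and random.random() < pi_n}
        s0 = tuple(random.randint(0, p_val) for _ in range(n))
        L = attractor(s0, arcs, p, th)
        if s0 in L and is_minimal(L, p) and is_fully_active(L, n):
            hits += 1
    return hits / trials

n = 40
for c in (0.2, 1.0, 3.0):        # connection probability = c * ln n / n
    print(c, estimate_fama_fraction(n, c * math.log(n) / n))
```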

32 Autonomous sets. Definition: Let s(0) = [s_1(0), …, s_n(0)] be a state. We say that A ⊆ V_D is autonomous for s(0) if, for every i ∈ A, s_i(t) is minimally cycling (that is, s_i(0), s_i(1), …, s_i(t) cycles through {0, …, p_i}) in the discrete system obtained by restricting the nodes of the system to A. Example: the active sets of minimal attractors are autonomous. Note that the dynamics on an autonomous set does not depend on the states of the remaining nodes.

33 The following result suggests that there exists another phase transition at π(n) ~ C/n. Theorem: Assume that the p_i and th_i are bounded independently of n. If π(n) ≥ C/n for a sufficiently large constant C, then with probability tending to one as n → ∞, a randomly chosen state will have an autonomous set of size at least a fixed positive fraction of n. In particular, most states have a large set of minimally cycling nodes. (Just, Ahn, Terman; Physica D 237(24) (2008))

34 Numerical explorations

35 Current work in progress When the connection probability is ~ 1/n, another phase transition occurs for the case when all refractory periods and all firing thresholds are 1. Below this phase transition, with high probability the basin of attraction of the steady state [1, …, 1] becomes the whole state space. We are investigating what happens for connection probabilities slightly above this phase transition. Theoretical results predict longer transients near the phase transition, and this is what we are seeing in simulations. One question we are interested in is whether chaotic dynamics would be generic for connection probabilities in a critical range.

36 Other ongoing research
Can we generalize our first theorem to architectures where the connections between the I-cells are somewhat random rather than all-to-all?
How can we incorporate learning and processing of inputs into this model?
Can we obtain analogous results for networks based on different single-cell ODE dynamics?

37 Hopfield Networks
Networks are modeled as digraphs with weighted arcs; weights may be positive or negative.
Each neuron i has a firing threshold th_i.
At each step, neurons are in state zero or one.
The successor state of a given state is determined by summing, for each neuron i, the weights of all incoming arcs that originate at neurons in state one. If this sum exceeds th_i, neuron i goes into state one (fires); otherwise it goes into state zero.
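A minimal sketch of this synchronous update rule, assuming the weights are given as a matrix W with W[j][i] the weight of the arc from neuron j to neuron i (zero if absent); the encoding and the tiny example are hypothetical.

```python
def hopfield_step(state, W, th):
    """Synchronous update: neuron i fires iff its summed input exceeds th[i]."""
    n = len(state)
    return tuple(
        1 if sum(W[j][i] for j in range(n) if state[j] == 1) > th[i] else 0
        for i in range(n))

# Two mutually exciting neurons: under synchronous updating they alternate.
W = [[0, 1],
     [1, 0]]
print(hopfield_step((1, 0), W, [0.5, 0.5]))   # -> (0, 1)
```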

38 Hopfield vs. Terman Networks
Hopfield networks don't model refractory periods.
Terman networks don't allow negatively weighted arcs.
For refractory period p = 1, both kinds are examples of Boolean networks.
The dynamics of random Hopfield networks tends to become more chaotic as connectivity increases.
Random Terman networks may allow chaotic dynamics only for a narrow range of connectivities.

39 Why am I interested in this? My major interests are centered around models of gene regulatory networks. These can be modeled with ODE systems, but Boolean and other discrete models are also being studied in the literature, usually with the (generally) unrealistic assumption of synchronous updating. Question: Under which conditions can we prove a correspondence between discrete and continuous models of gene regulatory networks, as in our first theorem?

