# Example Project and Numerical Integration: Computational Neuroscience 03, Lecture 11


Example Project: Investigation into Integrate-and-Fire Models

Build a model integrate-and-fire neuron using

τ_m dV/dt = E_L - V + R_m I_e

augmented by the rule that if V reaches V_th an action potential is fired, after which V is reset to V_reset. Use E_L = -80 mV, V_rest = -70 mV, R_m = 10 MΩ and τ_m = 10 ms. Initially set V = V_rest. When the membrane potential reaches V_th = -54 mV, make the neuron fire a spike and reset the potential to V_reset = -80 mV. Show sample voltage traces (with spikes) for a 300 ms current pulse centred in a 500 ms simulation.
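As a concrete starting point, the project above can be sketched in a few lines of Python. This is a minimal sketch, not a reference solution: the membrane parameters come from the slide, but the 3 nA pulse amplitude, the 0.1 ms timestep and all variable names are our own choices.

```python
# Minimal leaky integrate-and-fire sketch (parameters from the slide;
# the 3 nA pulse amplitude and 0.1 ms timestep are our choices).
E_L = -80e-3      # leak reversal potential (V)
V_rest = -70e-3   # initial membrane potential (V)
V_reset = -80e-3  # post-spike reset potential (V)
V_th = -54e-3     # spike threshold (V)
R_m = 10e6        # membrane resistance (Ohm)
tau_m = 10e-3     # membrane time constant (s)
dt = 0.1e-3       # integration timestep (s)

def simulate(I_e, t_total=0.5, pulse=(0.1, 0.4)):
    """Return (voltage trace, spike times) for a current pulse."""
    V, trace, spikes = V_rest, [], []
    for i in range(int(t_total / dt)):
        t = i * dt
        I = I_e if pulse[0] <= t < pulse[1] else 0.0
        V += (dt / tau_m) * (E_L - V + R_m * I)   # Euler step
        if V >= V_th:                             # spike-and-reset rule
            spikes.append(t)
            V = V_reset
        trace.append(V)
    return trace, spikes

trace, spikes = simulate(3e-9)   # 3 nA pulse for 300 ms of a 500 ms run
print(len(spikes))               # number of spikes during the pulse
```

Plotting `trace` against time (e.g. with matplotlib) gives the voltage traces the project asks for; drawing a vertical line at each spike time makes the spikes visible.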

Determine the firing rate of the model and compare it with the theoretical prediction from the interspike interval:

r_isi = [ τ_m ln( (R_m I_e + E_L - V_reset) / (R_m I_e + E_L - V_th) ) ]^(-1)

which is valid whenever R_m I_e + E_L > V_th (otherwise the neuron never fires).
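The theoretical prediction is easy to evaluate directly. The parameters below are the slide's; the 3 nA injected current is our choice, picked so that R_m I_e + E_L exceeds threshold.

```python
import math

# Theoretical interspike-interval firing rate of the leaky IAF
# (only valid when R_m*I_e + E_L > V_th, i.e. the neuron fires at all).
tau_m = 10e-3; E_L = -80e-3; V_reset = -80e-3; V_th = -54e-3
R_m = 10e6
I_e = 3e-9   # our choice of injected current

rate = 1.0 / (tau_m * math.log((R_m * I_e + E_L - V_reset) /
                               (R_m * I_e + E_L - V_th)))
print(round(rate, 1))   # prints 49.6 (spikes per second)
```

Counting spikes in a simulated run over the pulse duration should land close to this value, approaching it as the timestep shrinks.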

Investigate the behaviour of this neuron and see how the outputs change as different parameters are varied; e.g. above we see what happens if the current injection is not constant.

Alternatively, one could introduce spike-rate adaptation by adding a conductance:

τ_m dV/dt = E_L - V - r_m g_sra (V - E_K) + R_m I_e

where the adaptation conductance relaxes between spikes as

τ_sra dg_sra/dt = -g_sra

and after a spike is fired g_sra → g_sra + Δg_sra. Here use r_m Δg_sra = 0.06, τ_sra = 100 ms and E_K = -70 mV.
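A sketch of how the adaptation loop might look, tracking the dimensionless product r_m g_sra directly. The adaptation constants are the slide's; the constant 3 nA input, 500 ms duration and all names are our choices.

```python
# Spike-rate adaptation sketch: g stands for the dimensionless
# product r_m * g_sra.  Constant 3 nA input is our choice.
E_L = -80e-3; V_reset = -80e-3; V_th = -54e-3; E_K = -70e-3
R_m = 10e6; tau_m = 10e-3; tau_sra = 100e-3
dt = 0.1e-3; I_e = 3e-9

V, g = -70e-3, 0.0
isis, last = [], None
for i in range(int(0.5 / dt)):
    t = i * dt
    V += (dt / tau_m) * (E_L - V - g * (V - E_K) + R_m * I_e)
    g -= (dt / tau_sra) * g          # tau_sra dg/dt = -g between spikes
    if V >= V_th:
        if last is not None:
            isis.append(t - last)    # record interspike interval
        last, V = t, V_reset
        g += 0.06                    # r_m * Delta g_sra = 0.06 per spike
print(len(isis) >= 2 and isis[-1] > isis[0])   # intervals lengthen
```

The printed check captures the signature of adaptation: each spike increments g, so successive interspike intervals get longer before settling down.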

Or one can simulate adding an excitatory synaptic conductance, with I_e = 0:

τ_m dV/dt = E_L - V - r_m g_s P_s (V - E_s) + R_m I_e

P_s is the probability that the synaptic channel is open; between presynaptic spikes it decays as

τ_s dP_s/dt = -P_s

and whenever the presynaptic neuron fires it is updated as P_s → P_s + P_max (1 - P_s). Here use E_s = 0, r_m g_s = 0.5, τ_s = 10 ms and P_max = 0.5. Plot V(t) and the synaptic current, triggering synaptic events at times 50, 150, 190, 300, 320 and 400 ms.

Another interesting avenue is to add a calcium-based conductance and see if one can get bursting behaviour. Or one can couple two IAF neurons, etc.
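The synaptic variant might be sketched as below. The event times and synaptic constants are the slide's; E_L, V_reset and the variable names are carried over from the project, and plotting is left out.

```python
# Excitatory synaptic input sketch with I_e = 0.  Event times and
# synaptic constants from the slide; names are ours.
E_L = -80e-3; V_reset = -80e-3; V_th = -54e-3; E_s = 0.0
tau_m = 10e-3; tau_s = 10e-3
rm_gs = 0.5; P_max = 0.5
dt = 0.1e-3
events = [0.050, 0.150, 0.190, 0.300, 0.320, 0.400]  # seconds

V, P_s, trace = -70e-3, 0.0, []
for i in range(int(0.5 / dt)):
    t = i * dt
    if any(abs(t - e) < dt / 2 for e in events):
        P_s += P_max * (1.0 - P_s)       # presynaptic spike arrives
    V += (dt / tau_m) * (E_L - V - rm_gs * P_s * (V - E_s))
    P_s -= (dt / tau_s) * P_s            # tau_s dP_s/dt = -P_s
    if V >= V_th:                        # never reached at these values
        V = V_reset
    trace.append(V)

pre = trace[int(round(0.049 / dt))]      # just before the first event
peak = max(trace[int(round(0.050 / dt)):int(round(0.100 / dt))])
print(peak - pre > 0.003)                # each event gives a clear EPSP
```

With these parameters the cell depolarises by several millivolts per event but never reaches threshold; the closely spaced events at 300 and 320 ms show temporal summation.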

Numerical Integration

However, whatever scheme one attempts, it will be necessary to use some numerical integration (although the IAF equation can be solved analytically for constant I_e). The idea behind numerical integration is simple and intuitive and can be seen from the following figure: we estimate the area under the curve by summing the areas of the trapezia. But what if we are dealing with a differential equation?
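The trapezium idea in the figure takes only a few lines; the function name and the test integrand x² are our choices.

```python
# Trapezium-rule quadrature: sum the areas of n trapezia under f.
def trapezium(f, a, b, n):
    h = (b - a) / n
    area = 0.5 * (f(a) + f(b))       # end points get half weight
    for i in range(1, n):
        area += f(a + i * h)         # interior points get full weight
    return area * h

approx = trapezium(lambda x: x * x, 0.0, 1.0, 1000)
print(abs(approx - 1.0 / 3.0) < 1e-6)   # integral of x^2 on [0,1] is 1/3
```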

Difference Equations

E.g. consider the IAF equation. The key idea is to go back to the definition of the derivative, i.e.

dy/dt = lim_{h→0} [y(t+h) - y(t)] / h

We can then replace the derivative (the LHS above) with the finite difference (the RHS above) for small h. Alternatively, we note that

y(t+h) ≈ y(t) + h dy/dt

and, writing dy/dt = f(y, t), iteratively solve

y(t+h) = y(t) + h f(y(t), t)

This is Euler's method.
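Euler's method in code, with dy/dt = -y as our test problem (exact solution e^(-t)):

```python
# Euler's method for dy/dt = f(y, t): repeatedly add h * gradient.
def euler(f, y0, t0, t1, n):
    h = (t1 - t0) / n
    y, t = y0, t0
    for _ in range(n):
        y += h * f(y, t)   # gradient at t assumed constant over [t, t+h]
        t += h
    return y

# Test problem dy/dt = -y, y(0) = 1, whose exact solution is e^(-t).
y1 = euler(lambda y, t: -y, 1.0, 0.0, 1.0, 1000)
print(round(y1, 3))   # prints 0.368, close to e^-1 = 0.3679
```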

[Figure: one step of Euler's method from t to t+h, using the gradient dy/dt evaluated at t.]

Euler's method effectively evaluates the function at the future timestep by evaluating the gradient at t and assuming it is constant over the range h. Thus the accuracy of y(t+h) is increased (in one sense, see later) by making the time-step h small.

[Figure: successive Euler steps from t to t+h to t+2h.]

We then iteratively produce the rest of the y-values as illustrated above, i.e. dy/dt is evaluated at t+h, and h times this value is added to y(t+h) to get y(t+2h), and so on.

We can make the reliance of the accuracy of Euler's method on h explicit by examining the Taylor expansion of y:

y(t+h) = y(t) + h dy/dt + (h²/2) d²y/dt² + (h³/6) d³y/dt³ + …

So if we say y(t+h) = y(t) + h dy/dt + err, then the error term is

err = (h²/2) d²y/dt² + O(h³)

Thus for small h the error is dominated by the factor h², i.e. it is O(h²), and the method is said to be first-order accurate (as we would divide by h to estimate dy/dt). NB the error also depends on higher-order derivatives, so we normally have to assume y is sufficiently smooth.
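First-order accuracy is easy to see numerically: halving h should roughly halve the global error. The test problem dy/dt = -y is our choice.

```python
import math

# First-order accuracy in practice: halving h roughly halves Euler's
# global error on dy/dt = -y, y(0) = 1, integrated to t = 1.
def euler_err(n):
    y, h = 1.0, 1.0 / n
    for _ in range(n):
        y += h * (-y)
    return abs(y - math.exp(-1.0))

ratio = euler_err(1000) / euler_err(2000)
print(round(ratio, 1))   # prints 2.0
```

(The global error is O(h) even though the per-step error is O(h²), because the number of steps grows as 1/h; this is the "divide by h" point above.)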

However, this procedure is asymmetric, as we assume dy/dt has a constant value evaluated at the beginning of the time interval. It is more symmetric (and more accurate) to use an estimate from the middle of the interval, i.e. at t + h/2.

[Figure: comparing a step using the gradient at t with a step using the gradient at the midpoint t + h/2.]

However, to do this we need the value of y at t + h/2, which we do not yet know. Thus we first estimate y(t+h/2) using dy/dt evaluated at t, and then use this estimate to evaluate dy/dt at t + h/2, as illustrated below. I.e. if dy/dt = f(y, t):

y(t+h/2) = y(t) + (h/2) f(y(t), t)        (estimate 1 in the figure)

y(t+h) = y(t) + h f(y(t+h/2), t+h/2)      (estimate 2 in the figure)

This is the 2nd-order Runge-Kutta (midpoint) method.
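The two estimates translate directly into code; again dy/dt = -y is our test problem.

```python
# 2nd-order Runge-Kutta (midpoint) sketch for dy/dt = f(y, t).
def rk2(f, y0, t0, t1, n):
    h = (t1 - t0) / n
    y, t = y0, t0
    for _ in range(n):
        y_mid = y + 0.5 * h * f(y, t)      # estimate 1: y at t + h/2
        y += h * f(y_mid, t + 0.5 * h)     # estimate 2: full step
        t += h
    return y

# Test problem dy/dt = -y, y(0) = 1; exact answer is e^-1.
y1 = rk2(lambda y, t: -y, 1.0, 0.0, 1.0, 100)
print(abs(y1 - 0.3678794) < 1e-4)   # 100 steps already beat Euler's 1000
```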

As the name suggests, it is 2nd-order accurate (a method is called nth-order if its local error term is O(h^(n+1))). Why? Taylor-expanding f(y(t+h/2), t+h/2) about t and substituting into the update, then comparing the result with the Taylor expansion of y(t+h), the h and h² terms match exactly, so we see the error is O(h³).

Using a similar procedure, but with four evaluations of dy/dt, we get the commonly used 4th-order Runge-Kutta method (again writing dy/dt as f(y, t)):

k1 = h f(y(t), t)

k2 = h f(y(t) + k1/2, t + h/2)   [≈ h f(y(t+h/2), t+h/2)]

k3 = h f(y(t) + k2/2, t + h/2)

k4 = h f(y(t) + k3, t + h)

y(t+h) = y(t) + k1/6 + k2/3 + k3/3 + k4/6 + O(h⁵)
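The four evaluations above, transcribed directly, with dy/dt = -y as our test problem:

```python
# Classic 4th-order Runge-Kutta step for dy/dt = f(y, t).
def rk4_step(f, y, t, h):
    k1 = h * f(y, t)
    k2 = h * f(y + k1 / 2, t + h / 2)
    k3 = h * f(y + k2 / 2, t + h / 2)
    k4 = h * f(y + k3, t + h)
    return y + k1 / 6 + k2 / 3 + k3 / 3 + k4 / 6

# Integrate dy/dt = -y from y(0) = 1 to t = 1 with only 10 steps.
y, t, h = 1.0, 0.0, 0.1
for _ in range(10):
    y = rk4_step(lambda y, t: -y, y, t, h)
    t += h
print(abs(y - 0.3678794) < 1e-6)   # the O(h^5) local error pays off
```

Ten RK4 steps here land within about 3e-7 of the exact answer, versus an error of order 1e-4 for a thousand Euler steps: four function evaluations per step buy far more than four times the accuracy.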

However, note that high order does NOT imply high accuracy: rather, it means the method is more efficient, i.e. quicker to run for a given accuracy. Generally, for difficult (non-smooth) functions the simpler methods will be more accurate. Errors can also arise as a consequence of round-off in the computer's floating-point arithmetic. Unfortunately, while the truncation error decreases as we lower h, round-off errors increase, since more steps mean more accumulated round-off. This places constraints on the step-sizes used, so that the algorithm remains stable and round-off errors do not swamp the solution.

Note that one can evaluate the gradient at the current timestep (an explicit, forward-difference scheme) or at the advanced timestep (an implicit, backward-difference scheme). Implicit schemes, or mixtures of the two such as Crank-Nicolson, are a lot more computationally intensive, as we then have to solve a set of simultaneous equations (though the system is usually tridiagonal), but it is often worth doing so, as implicit schemes are much more stable.
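The stability difference is easy to demonstrate on the stiff linear test problem dy/dt = -a y (a = 50 and h = 0.1 are our choices). For this scalar case the implicit "simultaneous equations" reduce to a single division.

```python
# Explicit vs implicit Euler on dy/dt = -a*y with a step h well above
# the explicit stability limit h < 2/a.
a, h, n = 50.0, 0.1, 20

y_exp = y_imp = 1.0
for _ in range(n):
    y_exp = y_exp + h * (-a * y_exp)   # gradient at current timestep
    y_imp = y_imp / (1.0 + a * h)      # gradient at advanced timestep:
                                       # y(t+h) = y(t) + h*(-a*y(t+h))
print(abs(y_exp) > 1.0, abs(y_imp) < 1e-6)   # prints True True
```

The true solution decays to zero; the explicit iterate oscillates and blows up because |1 - a h| > 1, while the implicit iterate decays for any positive h.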

E.g. back to the IAF equation:

τ_m dV/dt = E_L - V + R_m I_e

An Euler step with timestep Δt gives:

V(t+Δt) = V(t) + (Δt/τ_m)(E_L - V(t) + R_m I_e)

A timestep of around 1 ms is adequate here, as long as the input doesn't vary too quickly.

If there are gating variables for conductances etc., they can be integrated in the same way, e.g. τ_z dz/dt = z_∞ - z. These should be updated at times staggered with respect to the voltage, i.e. if V is computed at times 0, Δt, 2Δt, 3Δt, etc., then z would be updated at times Δt/2, 3Δt/2, 5Δt/2, etc. If calcium-based conductances are included, they should be updated simultaneously with V. Also, remember that if you can integrate an equation analytically, you should: e.g. when z_∞ and τ_z are constant over a step, the gating equation has the exact update z → z_∞ + (z - z_∞) e^(-Δt/τ_z).
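A quick check of why the analytic update is worth having, on the gating-style equation τ dz/dt = z_∞ - z (the parameter values and names here are ours; this assumes z_∞ and τ are constant over each step):

```python
import math

# Exact one-step update z -> z_inf + (z - z_inf)*exp(-dt/tau) versus a
# plain Euler step, for tau dz/dt = z_inf - z (names are ours).
tau, z_inf, dt = 5e-3, 0.8, 1e-3

z_exact = z_euler = 0.0
for _ in range(10):
    z_exact = z_inf + (z_exact - z_inf) * math.exp(-dt / tau)
    z_euler += (dt / tau) * (z_inf - z_euler)

closed = z_inf * (1.0 - math.exp(-10 * dt / tau))   # true solution
print(abs(z_exact - closed) < 1e-12, abs(z_euler - closed) > 0.01)
```

The exponential update reproduces the true solution to machine precision at any step size, while Euler with the same dt is already off by a few percent; inside a voltage simulation the exact update is applied per step with the current z_∞ and τ_z.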
