
1 GUM*02 tutorial session, UTSA, San Antonio, Texas
Parameter searching in neural models
Mike Vanier, Caltech

2 The problem
- you want to build a model of a neuron
- you have a body of data
- you know a lot about the neuron's
  - morphology
  - physiology
  - ion channel kinetics
- but you don't know everything!

3 Typical preliminary data set
- anatomy
  - rough idea of morphology
  - detailed reconstruction

4 Typical preliminary data set
- physiology
  - current clamp
  - synaptic potentials
  - potentiation
  - modulators

5 Typical preliminary data set
- ion channels
  - identities of the major types
  - kinetics
  - modulation

6 Missing data?
- ion channels
  - identities of ALL channels
  - densities (µS/µm²)
  - detailed kinetics
- anatomy
  - detailed reconstructions?
  - variability?
- physiology
  - voltage clamp, neuromodulators, etc.
- ???

7 Harsh reality
- most experiments are not done with models in mind
- more than half of model parameters are loosely constrained or unconstrained
- experiments to collect model parameters are not very sexy

8 A different approach
- collect the data set the model should match
- collect plausible parameters
  - those known to be correct
  - educated guesses
- build the model
- test model performance
- modify parameters until you get a match

9 How to modify parameters? manually
- 10 parameters at 5 values each: 5^10 = 9,765,625 possible simulations
  - at 1 sim/minute, that's about 19 years!
- use previous results to guide searching
  - non-linear interactions?
- tedious!!!
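The arithmetic behind that estimate, spelled out as a trivial check in Python:

```python
# Exhaustive grid over 10 parameters at 5 values each,
# at one simulation per minute.
n_sims = 5 ** 10                               # 9,765,625 simulations
years = n_sims / (60 * 24 * 365)               # minutes -> years
print(f"{n_sims:,} sims = {years:.1f} years")  # ~18.6 years
```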

10 How to modify parameters? automatically
- set ranges for each parameter
- define update algorithm
- start parameter search
- go home!
- check results in a day, week, ...
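A minimal sketch of that loop in Python. The update rule here is pure random sampling (the "stochastic search" of slide 27); `run_simulation`, `match_function`, and the parameter names and ranges are hypothetical stand-ins, not GENESIS calls:

```python
import random

# Hypothetical parameter ranges (name -> (low, high)); invented for illustration.
ranges = {"gmax_Na": (100.0, 3000.0), "gmax_Kdr": (50.0, 1500.0)}

def run_simulation(params):
    return params                     # stub: would run the model, return its output

def match_function(output):
    return sum(v ** 2 for v in output.values())  # stub: 0 = perfect match

best, best_err = None, float("inf")
for _ in range(10_000):               # runs unattended: "go home!"
    params = {k: random.uniform(lo, hi) for k, (lo, hi) in ranges.items()}
    err = match_function(run_simulation(params))
    if err < best_err:
        best, best_err = params, err  # periodically check these
print(best_err, best)
```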

11 match function
- need to quantify goodness of fit
- reduce entire model to one number
  - 0 = perfect match
- match:
  - spike rates
  - spike times
  - voltage waveform

12 simple match function
- inputs: different current levels
  - e.g. 0.05, 0.1, 0.15, 0.2, 0.25, 0.3 nA
- outputs: spike times
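One plausible way to reduce spike times to a single number (a sketch; the talk does not pin down the exact formula): average the absolute spike-time differences across all current levels, with a penalty for missing or extra spikes.

```python
def spike_time_error(model_spikes, exp_spikes, miss_penalty=0.1):
    """model_spikes, exp_spikes: dicts mapping current level (nA) to a
    list of spike times (s). Returns 0 for a perfect match.
    miss_penalty (s per spike) is an arbitrary choice."""
    total, count = 0.0, 0
    for level, exp in exp_spikes.items():
        mod = model_spikes.get(level, [])
        total += sum(abs(m - e) for m, e in zip(mod, exp))
        total += miss_penalty * abs(len(mod) - len(exp))
        count += max(len(mod), len(exp), 1)
    return total / count
```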

13 waveform match function
- inputs: hyperpolarized current levels
  - e.g. -0.05, -0.1 nA
- outputs: V_m(t)
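A common choice for comparing voltage waveforms (again a sketch, not necessarily the talk's definition) is the root-mean-square difference between traces sampled at the same times:

```python
import numpy as np

def waveform_error(v_model, v_exp):
    """v_model, v_exp: arrays of V_m(t) with identical sampling.
    Returns 0 for a perfect match."""
    v_model, v_exp = np.asarray(v_model), np.asarray(v_exp)
    return float(np.sqrt(np.mean((v_model - v_exp) ** 2)))
```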

14 other match functions
- some data might be more important to match than the rest
  - adaptation
  - bursting behavior
- incorporate into more complex match functions
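One simple way to fold several criteria into a single number is a weighted sum of component errors, reusing the two sketches above; the weights here are invented:

```python
def total_error(model, exp, w_spike=1.0, w_wave=0.5):
    """Hypothetical combined match function: spike-time error for
    depolarized inputs plus waveform error for hyperpolarized inputs."""
    return (w_spike * spike_time_error(model["spikes"], exp["spikes"])
            + w_wave * waveform_error(model["vm"], exp["vm"]))
```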

15 weight early spikes more
- w_ij: weighting params
- set w_i0 < w_i1 < w_i2 < ...
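The slide title suggests the per-spike error is divided by w_ij, so the increasing weights make later spikes count less; that reading, and the geometric weight schedule below, are assumptions:

```python
def weighted_spike_time_error(model_spikes, exp_spikes, w0=1.0, growth=1.5):
    """Spike j at current level i contributes |t_model - t_exp| / w_ij,
    with w_ij = w0 * growth**j (a hypothetical schedule satisfying
    w_i0 < w_i1 < w_i2 < ...), so early spikes dominate the error."""
    total, count = 0.0, 0
    for level, exp in exp_spikes.items():
        mod = model_spikes.get(level, [])
        for j, (m, e) in enumerate(zip(mod, exp)):
            total += abs(m - e) / (w0 * growth ** j)
            count += 1
    return total / count if count else float("inf")
```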

16 harder match functions
- bursting
  - Purkinje cell, pyramidal cell
- transitions between complex behaviors
  - regular spiking <-> bursting

17 the data set
- need exceptionally clean data set
  - noise in data set: model will try to replicate it!
- need wide range of inputs

18 typical data set for neuron model
- current clamp over wide range
  - hyperpolarized (passive)
  - depolarized (spiking)

19 the process (1)
- build model
  - anatomy
  - channel params from the literature
- match passive data
  - hyperpolarized inputs

20 the process (2)
- create match function
  - waveform match for hyperpolarized
  - spike match for depolarized
- run a couple of simulations
  - check that results aren't ridiculous
  - get into ballpark of right params

21 the process (3)
- choose params to vary
  - channel densities
  - channel kinetics: m_inf(V), tau(V) curves
  - passive params
- choose parameter ranges

22 the process (4)
- select a param search method
  - conjugate gradient
  - genetic algorithm
  - simulated annealing
- set meta-params for method

23 the process (5)
- run parameter search
- periodically check best results
  - marvel at your own ingenuity
  - curse at your stupid computer
- figure out why it did/didn't work

24 results (motivation)

25 parameter search methods
- different methods have different attributes
  - local or global optima?
  - efficiency?
- depends on nature of parameter space
  - smooth or ragged?

26 the shapes of space [figure: smooth vs. ragged parameter landscapes]

27 genesis param search methods
- Conjugate gradient descent (CG)
- Genetic algorithm (GA)
- Simulated annealing (SA)
- Brute force (BF)
- Stochastic search (SS)

28 conjugate gradient (CG) "The conjugate gradient method is based on the idea that the convergence to the solution could be accelerated if we minimize Q over the hyperplane that contains all previous search directions, instead of minimizing Q over just the line that points down gradient. To determine x_{i+1} we minimize Q over x_0 + span(p_0, p_1, p_2, ..., p_i), where the p_k represent previous search directions."

29 no, really...
- take a point in parameter space
- find the line of steepest descent (gradient)
- minimize along that line
- repeat, sort of
  - along conjugate directions only
  - i.e. ignore subspace of previous lines
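In practice one rarely hand-rolls CG. A sketch using SciPy's built-in nonlinear conjugate-gradient optimizer; the quadratic match function is a smooth stand-in for a real model evaluation:

```python
import numpy as np
from scipy.optimize import minimize

def match(params):
    """Stand-in match function: smooth and quadratic, so CG behaves well."""
    target = np.array([1.0, -2.0, 0.5])
    return float(np.sum((params - target) ** 2))

x0 = np.zeros(3)                        # starting point in parameter space
res = minimize(match, x0, method="CG")  # gradients estimated numerically
print(res.x, res.fun)                   # lands on target; match ~ 0
```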

30 CG method: good and bad
- for smooth parameter spaces:
  - guaranteed to find local minimum
- for ragged parameter spaces:
  - guaranteed to find local minimum ;-)
  - not what we want...

31 genetic algorithm
- pick a bunch of random parameter sets
  - a "generation"
- evaluate each parameter set
- create new generation
  - copy the most fit sets
  - mutate randomly, cross over
- repeat until you get acceptable results

32 genetic algorithm (2)
- amazingly, this often works
- global optimization method
- many variations
- many meta-params
  - mutation rate
  - crossover type (single, double) and rate
- no guarantees
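A bare-bones GA sketch showing the generation loop and the meta-params named above (mutation rate, single-point crossover); the match function and all numeric settings are invented:

```python
import random

TARGET = [1.0, -2.0, 0.5]
POP, ELITE, MUT_RATE, MUT_SIZE, GENS = 50, 10, 0.2, 0.3, 100  # meta-params

def match(ind):                          # stand-in match function; 0 = perfect
    return sum((p - t) ** 2 for p, t in zip(ind, TARGET))

def mutate(ind):                         # each gene mutates with prob MUT_RATE
    return [p + random.gauss(0, MUT_SIZE) if random.random() < MUT_RATE else p
            for p in ind]

def crossover(a, b):                     # single-point crossover
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

pop = [[random.uniform(-5, 5) for _ in TARGET] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=match)                  # evaluate each parameter set
    elite = pop[:ELITE]                  # copy the most fit sets
    pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                   for _ in range(POP - ELITE)]
pop.sort(key=match)
print(pop[0], match(pop[0]))
```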

33 simulated annealing
- make noise work for you!
- noisy version of "simplex algorithm"
  - evaluate points on simplex
  - add noise to result based on "temperature"
  - move simplex through space accordingly
- gradually decrease temperature to zero

34 simulated annealing (2)
- some nice properties:
  - guaranteed to find global optimum
    - but may take forever ;-)
  - when temp = 0, finds local minimum
- how fast to decrease temperature?
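The slides describe a noisy-simplex variant; the sketch below is the more common single-point Metropolis form, which is enough to show the temperature at work and the cooling-rate meta-param:

```python
import math, random

def match(params):                       # stand-in match function; 0 = perfect
    return sum((p - t) ** 2 for p, t in zip(params, [1.0, -2.0, 0.5]))

x = [random.uniform(-5, 5) for _ in range(3)]
err = match(x)
T = 1.0                                  # initial "temperature"
while T > 1e-4:
    cand = [p + random.gauss(0, 0.5) for p in x]
    cand_err = match(cand)
    # accept uphill moves with probability exp(-dE/T): noise works for you
    if cand_err < err or random.random() < math.exp((err - cand_err) / T):
        x, err = cand, cand_err
    T *= 0.999                           # cooling rate: how fast is a judgment call
print(x, err)
```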

35 comparing methods (1)

36 comparing methods (2)

37 comparing methods (3)

38 recommendations
- Passive models: SA, CG
- Small active models: SA
- Large active models: SA, GA
- Network models: usually SOL

39 genesis tutorial (1)
- objects:
  - paramtableGA
  - paramtableSA
  - paramtableCG
- task: parameterize simple one-compartment neuron
  - Na, K_dr, K_M channels

40 genesis tutorial (2)
- parameters:
  - g_max of Na, K_dr, K_M
  - K_M tau(V) scaling
  - K_M m_inf(V) midpoint
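As a plain-Python picture of that five-parameter search space (the ranges are invented for illustration; in GENESIS they would live in the paramtable objects):

```python
# Hypothetical ranges for the tutorial's five parameters; not from the talk.
ranges = {
    "gmax_Na":      (100.0, 3000.0),  # peak Na conductance density
    "gmax_Kdr":     (50.0, 1500.0),   # delayed-rectifier K
    "gmax_KM":      (1.0, 100.0),     # muscarinic K (K_M)
    "KM_tau_scale": (0.2, 5.0),       # multiplies the K_M tau(V) curve
    "KM_minf_mid":  (-0.05, -0.02),   # volts; shifts the K_M m_inf(V) midpoint
}
```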

41 Conclusions
- param search algorithms are useful
- but: pitfalls, judgment
  - modeler must help computer
- failure is not always bad!
- will continue to be active research area

