
HYPE: Hybrid method for parameter estimation in biochemical models. Anne Poupon, Biology and Bioinformatics of Signalling Systems, PRC, Tours, France.


1 HYPE: Hybrid method for parameter estimation in biochemical models. Anne Poupon, Biology and Bioinformatics of Signalling Systems, PRC, Tours, France

2 The question. [Reaction scheme: species A (y0), B (y1, y2) and C (y3) linked by rate constants k0 to k4.] How do we simulate the evolution of the different quantities as a function of time?

3 The question. [Diagram: two species, A and B.]

4 [Diagram: A and B.] This is the topology of the model, also called the static model, the inference graph, or the influence graph...

5 The question. [Diagram: A ⇄ B with rate constants k0 and k1.] Ordinary differential equations (ODE). The dynamical model: topology + time-evolution rules.

6 The question. [Diagram: A ⇄ B with rate constants k0 and k1.] Using the mass action law: d[A]/dt = -k0[A] + k1[B] and d[B]/dt = k0[A] - k1[B].

7 The question. Using the mass action law. [Diagrams: the A ⇄ B scheme with k0 and k1, and a second scheme involving A, B and a third species C.]

8 The question. [Plot: simulation of A ⇄ B with k0 = 1, k1 = 1, [A](0) = 10, [B](0) = 0.]

9 The question. [Plot: same model with k0 = 2, k1 = 1, [A](0) = 10, [B](0) = 0.]
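As an illustration (not part of the original slides), this minimal SciPy script integrates the A ⇄ B mass-action ODEs with the values from slide 9:

```python
import numpy as np
from scipy.integrate import solve_ivp

k0, k1 = 2.0, 1.0  # forward and backward rate constants (slide 9)

def rhs(t, y):
    A, B = y
    # Mass-action kinetics for A <-> B
    return [-k0 * A + k1 * B, k0 * A - k1 * B]

sol = solve_ivp(rhs, (0.0, 5.0), [10.0, 0.0], dense_output=True)
t = np.linspace(0.0, 5.0, 50)
A, B = sol.sol(t)
print(A[-1], B[-1])  # approaches the equilibrium ratio B/A = k0/k1 = 2
```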

10 The question. [Diagram: A ⇄ B, k0, k1.] What if we don't know the values of the parameters?

11 The question. [Diagram: A ⇄ B, k0, k1.] What if we don't know the values of the parameters? Experimental values [plotted as points].

12 The question. [Diagram: A ⇄ B, k0, k1.] What if we don't know the values of the parameters? Experimental values. Try to find k0 and k1 such that the simulated curves fit the experimental values.

13 The question. Try to find k0 and k1 such that the simulated curves fit the experimental values. That's parameter estimation!

14 The question. Iterative methodology: start from initial k0, k1 → simulate → compare with experimental values (objective function) → change k0, k1 → simulate again, and repeat until the fit is good enough. Done! (A generic sketch of this loop follows below.)
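A generic sketch of this loop (not HYPE's actual code); `objective` stands for the simulate-and-compare step, and the random multiplicative update is just one possible way to "change k0, k1":

```python
import numpy as np

def estimate(objective, k_init, step=0.1, tol=1e-4, max_iter=10_000, seed=0):
    """Iterative loop from the slide: simulate, compare, change k, repeat."""
    rng = np.random.default_rng(seed)
    k, best = np.asarray(k_init, float), objective(k_init)
    for _ in range(max_iter):
        if best < tol:            # "Done!"
            break
        trial = k * np.exp(step * rng.standard_normal(k.size))  # change k0, k1
        err = objective(trial)    # simulate + compare with exp. values
        if err < best:            # keep the change only if the fit improves
            k, best = trial, err
    return k, best
```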

15 The question. [Diagram: A ⇄ B, k0, k1.] Why do we want to simulate?

16 The question. Why do we want to simulate? If we can find the parameters of the model with sufficient precision, we can simulate its behavior in any condition without doing the experiments!

17 The question. In order to parameterize biochemical models, we need a parameter estimation method that is: fast, so that different topologies can be explored; robust, so we can be sure the parameter set found is correct and unique; flexible, so that different types of data can be used: dose-response, time series, relative measurements, etc. However...

18 Test models. In order to develop the method, we need a benchmark... We cannot use real systems because their parameters are unknown! We will use synthetic models. Synthetic models allow us to evaluate the influence of experimental uncertainty. We also designed the different models to evaluate the importance of 2 different features: the number of molecular species, and the range between the smallest and biggest parameter values.

19 Test models. Model 1: all parameters equal to 1. Model 2: parameters from 2×10^-3 to 1.28×10^2 (5 logs). Model 3: parameters from 5×10^-7 to 1×10^5 (12 logs). 4 equations, 8 parameters. [Scheme: A (y0), B (y1, y2), C (y3), rate constants k0 to k4.] Conserved totals: k5 = A + y0; k6 = B + y1 + y2; k7 = C + y3.

20 Test models. Model 4: 5 equations, 14 parameters. [Scheme: A (y0), B (y1), C (y2, y3), D (y4), rate constants k0 to k9.] Conserved totals: k10 = A + y0; k11 = B + y1; k12 = C + y2 + y3; k13 = D + y4.

21 Test models. Model 5: 10 equations, 27 parameters. [Scheme: A (y0), B (y1, y2), C (y3), D (y4, y5), E (y6, y7), F (y8), G (y9), rate constants k0 to k18.] Conserved totals: k20 = A + y0; k21 = B + y1 + y2; k22 = C + y3; k23 = D + y4 + y5; k24 = E + y6 + y7; k25 = F + y8; k26 = G + y9.

22 Test models. Model 6: 16 equations, 42 parameters. Conserved totals: k34 = A + y0; k35 = B + y1 + y2; k36 = C + y3 + y4; k37 = D + y5 + y6 + y7; k38 = E + y8 + y9; k39 = F + y10; k40 = G + y11 + y12; k41 = H + y13 + y14 + y15.

23 Principle. Theoretical parameters → ODE integration → concentrations at chosen time points → add perturbation (pert) → "experimental" data → parameter estimation with HYPE. The observed error compares the simulations with the perturbed data; the real error compares them with the true values.
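A sketch of this benchmark principle; the relative Gaussian noise model and the function names are assumptions for illustration:

```python
import numpy as np
from scipy.integrate import solve_ivp

def make_synthetic_data(rhs, k_true, y0, t_obs, stdev=0.1, seed=0):
    """Integrate the ODEs with the theoretical parameters, then add a
    relative perturbation to mimic experimental uncertainty."""
    rng = np.random.default_rng(seed)
    sol = solve_ivp(lambda t, y: rhs(t, y, k_true), (0.0, t_obs[-1]), y0,
                    t_eval=t_obs)
    perturbed = sol.y * (1.0 + stdev * rng.standard_normal(sol.y.shape))
    return sol.y, perturbed   # real values vs "experimental" data
```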

24 Objective function. The objective function is what we need to minimize.

25-30 Objective function. Built up over slides 25 to 30: the normalized difference between observed and simulated values, in %; summed over all the time points for an observable; divided by the number of time points for that observable; summed over the observables; divided by the number of observables. The function also depends on the standard deviation of the measurements.
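The formula itself appeared only as an image. A plausible reconstruction from the description above (the symbols are mine, and the standard-deviation dependence is omitted because the slide does not give its exact form):

```latex
F(\mathbf{k}) \;=\; \frac{1}{N_{\mathrm{obs}}}
  \sum_{o=1}^{N_{\mathrm{obs}}} \frac{1}{N_t(o)}
  \sum_{j=1}^{N_t(o)} 100 \times
  \frac{\bigl|\, y^{\mathrm{sim}}_{o}(t_j;\mathbf{k}) - y^{\mathrm{exp}}_{o}(t_j) \,\bigr|}
       {y^{\mathrm{exp}}_{o}(t_j)}
```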


33 Optimization method. Now we need an optimization method... Evolutionary methods are usually very good at finding extrema in a large and rough solution space! Most popular: the genetic algorithm.

34 Genetic algorithm. 1 individual = (x, y), a point in the (parameter 1, parameter 2) plane. [Figure.]

35 Genetic algorithm. Mutation. [Figure.]

36 Genetic algorithm. Cross-over. [Figure.]

37 Genetic algorithm. New parents. [Figure.]

38 Genetic algorithm. Cross-over: global exploration. Mutation: local exploration. After enough generations, all the individuals are close to the global minimum. (A toy sketch follows below.)
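A toy sketch of such a genetic algorithm (illustrative only, not HYPE's implementation; the log10 parameter encoding, population size, and uniform cross-over are assumptions):

```python
import numpy as np

def genetic_algorithm(objective, n_params, pop_size=50, n_gen=200,
                      mut_sigma=0.1, seed=0):
    """Toy GA: cross-over for global exploration, mutation for local."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-3, 3, size=(pop_size, n_params))   # individuals = log10(k)
    for _ in range(n_gen):
        errors = np.array([objective(10.0 ** ind) for ind in pop])
        parents = pop[np.argsort(errors)[: pop_size // 2]]  # selection
        # Uniform cross-over: each child mixes the genes of two random parents
        i = rng.integers(len(parents), size=(pop_size, 2))
        mask = rng.random((pop_size, n_params)) < 0.5
        children = np.where(mask, parents[i[:, 0]], parents[i[:, 1]])
        # Mutation: small Gaussian moves around each child
        pop = children + mut_sigma * rng.standard_normal(children.shape)
    errors = np.array([objective(10.0 ** ind) for ind in pop])
    return 10.0 ** pop[np.argmin(errors)], errors.min()
```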

39 Genetic algorithm: best error per model.
Model     Best error
Model 1   0.0045
Model 2   0.0076
Model 3   0.00018
Model 4   0.026
Model 5   0.083
Model 6   0.17

40 Genetic algorithm. [Same table as slide 39.] Not so bad... but not that good!

41 CMA-ES. [Figure: candidate points in the (parameter 1, parameter 2) plane.]

42 CMA-ES. [Figure.]

43 CMA-ES. [Figure.]

44 CMA-ES. New parent: weighted average. [Figure.]

45 CMA-ES. Best direction. [Figure.]

46 CMA-ES. [Figure.]

47 CMA-ES. New generation. [Figure.]

48 Genetic algorithm vs CMA-ES: best error per model (NC: no convergence).
Model     Genetic Algorithm   CMA-ES
Model 1   0.0045              0
Model 2   0.0076              0
Model 3   0.00018             NC
Model 4   0.026               NC
Model 5   0.083               NC
Model 6   0.17                NC

49 Genetic algorithm vs CMA-ES. [Same table as slide 48.] When CMA-ES converges it's very good, but...

50 Hybrid method. Let's try to combine them! The genetic algorithm always converges, but not very close to the solution. CMA-ES doesn't converge often, but when it does, it gets very close.

51 Hybrid method. Several GA runs feed CMA-ES runs, and the best CMA-ES result is kept. Four strategies for seeding CMA-ES from the GA runs: (i) One/Best, (ii) Average/Best, (iii) Median/Best, (iv) Best/Best. (A sketch of the Best/Best variant follows below.)

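A minimal sketch of the Best/Best strategy, reusing the `genetic_algorithm` sketch above and assuming the third-party `cma` package for CMA-ES (the actual HYPE settings may differ):

```python
import cma  # pip install cma

def hybrid_best_best(objective, n_params, n_ga_runs=10, n_cma_runs=10):
    """Best/Best: seed every CMA-ES run from the best GA result,
    then keep the best CMA-ES result."""
    ga_results = [genetic_algorithm(objective, n_params, seed=s)
                  for s in range(n_ga_runs)]
    best_k, _ = min(ga_results, key=lambda r: r[1])
    cma_results = []
    for s in range(n_cma_runs):
        x, es = cma.fmin2(objective, best_k, 0.5,
                          options={'seed': s, 'verbose': -9})
        cma_results.append((x, es.result.fbest))
    return min(cma_results, key=lambda r: r[1])
```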

53 Hybrid method: best error per model (NC: no convergence).
Model     Genetic Algorithm   CMA-ES   Hybrid
Model 1   0.0045              0        0
Model 2   0.0076              0        0
Model 3   0.00018             NC       0
Model 4   0.026               NC       2×10^-6
Model 5   0.083               NC       6.3×10^-4
Model 6   0.17                NC       1.4×10^-4

54 Hybrid method. What happens if we have uncertainties on the measured values? Real errors: compare the simulations with the real values, not the perturbed ones.
Model     stdev=0     stdev=0.1   stdev=0.2
Model 1   0           2.4×10^-3   1×10^-2
Model 2   0           1×10^-4     5.4×10^-3
Model 3   0           6×10^-5     1.1×10^-3
Model 4   2×10^-6     2.7×10^-2   3.5×10^-2
Model 5   6.3×10^-4   3×10^-2     7×10^-2
Model 6   1.4×10^-4   2.6×10^-2   4.7×10^-2
The error is significantly lower than the uncertainty on the experimental data.

55 Comparison with other methods. One random parameter set, one experimental data set. Each Copasi method (EP, GA, Praxis, PS, HJ, LM, SD) is run 10 times, giving 7 parameter sets from which the best is kept (Copasi error). SSmGO is run 10 times (best set, SSmGO error), and HYPE is run 10 times (best set, HYPE error).

56 Comparison with other methods: best error per model.
Method    M1      M2       M3       M4      M5      M6
EP        0.039   1.3      1.53     0.64    0.1     0.56
GA-SR     0.034   0.36     0.56     0.89    0.15    0.31
HJ        0.042   0.4      3.32     0.96    0.093   0.086
LM        0.039   0.087    0.077    0.61    0.079   1.56
Praxis    0.039   0.72     5.57     0.67    0.1     0.35
P Swarm   0.039   0.37     0.19     225     0.07    0.2
SD        0.025   1.93     15.42    0.62    0.47    1.36
SSmGO     0.039   0.057    441      0.041   0.07    7.93
HYPE      0.011   0.0054   0.0011   0.035   0.07    0.047

57 Comparison with other methods. Simulation in control conditions (model 2). [Scheme: A (y0), B (y1, y2), C (y3).] Errors: Copasi 0.088, HYPE 0.0054.

58 Comparison with other methods. Simulation in perturbed conditions (k1/100). [Scheme: A (y0), B (y1, y2), C (y3).] Errors: Copasi 0.17, HYPE 0.06.

59 Comparison with other methods. [Figure: prediction errors, values 0.61 and 7.93 shown; significance levels *, **, ***.] For all 6 models, HYPE is significantly more predictive.

60 Comparison with other methods. [Same figure as slide 59.] For all 6 models, HYPE is significantly more predictive. If the model is good enough, it is predictive!

61 A real model... Control of the balance between G protein and β-arrestin pathways at the angiotensin receptor. Heitzler et al., Competing G protein-coupled receptor kinases balance G protein and β-arrestin signaling. Molecular Systems Biology, 2012; 8:590.

62 A real model... What is the situation? 11 equations; 3 observables: DAG, PKC and ERK in control conditions, plus 4 perturbed conditions; 32 unknown parameters for the control conditions. If we use only the control conditions, we don't have enough data! So we also use the perturbed conditions, but then: 55 ODEs, 36 unknown parameters.

63 A real model... [Figure panels: DAG; PKC; ERK ctl + Ro; ERK ctl + si barr2; ERK ctl + si GRK2/3; ERK ctl + si GRK5/6.]

64 A real model... Is the model predictive?

65 A real model... Is the model predictive? Make a prediction...

66 A real model... Is the model predictive? Make a prediction... then do the experiment!

67 A real model...

68 Four parameter sets with very low error. [Figure: band between the highest and lowest simulated values.]

69 A real model... Are the estimated parameters reliable? Estimations of log(k10). Darker bar: value of the parameter in the set. Colored region: values of the parameter for which the error remains small (less than a 3-fold increase). For k10, the same value is found in all sets.

70 A real model


72 A second type of behavior: the values differ between the sets, but the value 0 would be acceptable in all 4 sets.

73 A real model. 25 parameters have the same values in the 4 sets.

74 A real model. 25 parameters have the same values in the 4 sets; 8 parameters have only upper bounds.

75 A real model. 25 parameters have the same values in the 4 sets; 8 parameters have only upper bounds; the 2 remaining parameters have the same values in 3 of the 4 sets.

76 What next? 3 problems remain. Identifiability of the parameters: can we do something more formal (and more generic!)? Convergence efficiency: only 4 good parameter sets out of 60 estimations. Computation time: one optimisation of the model takes about 3 weeks on a single core.

77 Identifiability. [Figure: log(observed error) as a function of log(k5); k5 = 0.002.]

78 Identifiability. [Figures for all parameters: k0 = 0.07, k1 = 128, k2 = 24, k3 = 1, k4 = 0.6, k5 = 0.002, k6 = 0.01, k7 = 1.]

79 When we use unperturbed data, we can reach very small errors, and the estimated parameter values are very close to the expected ones. The problem is, in the real world we don't have unperturbed data!

80 Identifiability. [Figure; expected value 0.0655.]

81 Identifiability. [Figure; expected value 1×10^5.]

82 Identifiability. Let's come back to the equations...

83 Identifiability. p2 appears in many places...

84 Identifiability. ...whereas p5 appears only once, as a product with p0.

85 Identifiability. [Figure; expected value 0.05.]

86 Identifiability. Can we do something more formal... Can we express the variation of the error as a function of the variation of the parameters?

87 Identifiability. Can we express the variation of the error as a function of the variation of the parameters? Write the Taylor expansion: for t close to a point a, the concentrations at a + t are y_i(a+t) = Σ_n (t^n / n!) y_i^(n)(a).

88 Taylor expansion. Since we use only mass-action laws, the right-hand side of each ODE is a polynomial of degree at most 2 in the concentrations; using the ODEs, the higher derivatives can then be computed recursively (see the reconstruction below).
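The equations on slides 87 and 88 were images; a standard reconstruction of what mass-action kinetics implies (notation mine) is:

```latex
\frac{dy_i}{dt} = \sum_j \alpha_{ij}\, y_j \;+\; \sum_{j \le l} \beta_{ijl}\, y_j\, y_l ,
\qquad
y_i^{(n+1)} = \sum_j \alpha_{ij}\, y_j^{(n)}
  \;+\; \sum_{j \le l} \beta_{ijl} \sum_{m=0}^{n} \binom{n}{m}\, y_j^{(m)}\, y_l^{(n-m)} ,
```

so every higher derivative, and hence every Taylor coefficient, follows recursively from the current concentrations (Leibniz rule on the quadratic terms).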

89 Taylor expansion. t has to be close to a, but how close? Within the convergence radius. We cannot compute the exact convergence radius, but we can derive a bound for it. [The bound and its derivation were shown as equations on the slide.]

90 Taylor expansion. This result is very interesting because the bound depends neither on i nor on a! Why not use it to integrate the ODEs? ODE integration is the most CPU-consuming task in the whole process... (A sketch follows below.)
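A minimal sketch of the idea for the linear A ⇄ B example from slide 6 (the real integrator handles quadratic mass-action terms and monitors the convergence-radius bound, which this sketch does not):

```python
import numpy as np

# Mass-action model A <-> B with rate constants k0 (A->B) and k1 (B->A).
# dy/dt = M y, so y^(n) = M^n y and the order-N Taylor step is
# y(t+h) = sum_{n=0}^{N} (h^n / n!) M^n y(t).
k0, k1 = 2.0, 1.0
M = np.array([[-k0,  k1],
              [ k0, -k1]])

def taylor_step(y, h, order=10):
    term = y.copy()
    out = y.copy()
    for n in range(1, order + 1):
        term = (h / n) * (M @ term)   # builds (h^n / n!) M^n y incrementally
        out += term
    return out

y = np.array([10.0, 0.0])             # [A](0) = 10, [B](0) = 0
for _ in range(100):                  # integrate to t = 1 with h = 0.01
    y = taylor_step(y, 0.01)
print(y)                              # close to the analytic solution
```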

91 Taylor expansion. Globally very satisfying! Less computation time and more convergence. Computation times (min) per good optimisation:
Model   Rosenbrock   Taylor
M1      101          19.5
M2      573          8094
M3      604          5273
M4      570          533
M5      5515         1866
M6      87134        18158

92 Taylor expansion. [Same table as slide 91.] Globally very satisfying! Less computation time and more convergence. But...

93 Taylor expansion. When the convergence radius becomes too small, Rosenbrock is more efficient. Hybrid: if R < 0.05, go back to Rosenbrock. Computation times (min) per good optimisation:
Model   Rosenbrock   Taylor   Hybrid
M1      101          19.5     23
M2      573          8094     1611
M3      604          5273     270
M4      570          533      170
M5      5515         1866     1498
M6      87134        18158    6888
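A sketch of the switching rule; the 0.05 threshold comes from the slide, while `radius_bound` and the Radau fallback (SciPy has no Rosenbrock solver, so a stiff Radau step stands in) are assumptions:

```python
from scipy.integrate import solve_ivp

R_MIN = 0.05  # threshold from the slide

def hybrid_step(rhs, t, y, h, radius_bound):
    """One step of size h: Taylor series while the convergence-radius bound
    stays comfortable, otherwise fall back to a stiff solver."""
    if radius_bound(y) < R_MIN:
        sol = solve_ivp(rhs, (t, t + h), y, method='Radau')  # Rosenbrock stand-in
        return sol.y[:, -1]
    return taylor_step(y, h)   # Taylor integrator sketched above
```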

94 "Drifting" parameters. What about the "drifting" parameters?

95 "Drifting" parameters. We can evaluate the difference between the simulations made using the 2 parameter sets at a given time using the Taylor expansion. By definition [expansion shown on slide], and we can compute the first term:

96 "Drifting" parameters. For model 3 and i = 0, this term [shown on slide]: if p1 doesn't vary and the product p0·p5 is held constant, then since p0 is small (5×10^-7) the term is negligible as long as the deviation remains small. Consequently p0 "wanders" between 10^-11 and 0.1 while p5 varies between 1 and 10^8, but their product reaches the expected value. However, p0 cannot be zero, or nothing happens in the system!

97 Taylor expansion. When one parameter reaches very high values, the convergence radius gets very small. Consequently, we can limit this parameter drifting by adding a small penalty to the error when the convergence radius becomes too low (sketched below). Computation times (min) per good optimisation:
Model   Rosenbrock   Taylor   Hybrid   Hybrid + penalty
M1      101          19.5     23       17
M2      573          8094     1611     1330
M3      604          5273     270      280
M4      570          533      170      112
M5      5515         1866     1498     941
M6      87134        18158    6888     6627
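The penalty itself can be sketched as below, reusing `R_MIN` from the previous sketch; its exact form and weight are not given on the slide, so these are assumptions:

```python
def penalized_objective(objective, radius_bound, k, weight=0.01):
    """Add a small penalty when the convergence radius drops below the
    threshold, discouraging parameters from drifting to extreme values."""
    err = objective(k)
    r = radius_bound(k)
    if r < R_MIN:
        err += weight * (R_MIN - r) / R_MIN   # gentle push back
    return err
```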

98 Conclusions. It is possible to reliably estimate the parameters of a model, even when: the standard deviations on the experimental data are large; the number of observables is small; the parameters have very different values. In practice the models are identifiable, thanks to mass action! The problem of drifting can be identified and contained to a certain extent. We can reach errors small enough that the model becomes predictive.

99 People... BIOS group: Domitille Heitzler, Eric Reiter, Pascale Crépieux, Astrid Musnier, Kelly Leon, Guillaume Durand, Nathalie Gallay-Langonne, Laurence Dupuy, Christophe Gauthier, Vincent Piketty. LRI, Orsay: Jérôme Azé, Nikolaus Hansen. INRIA Rocquencourt: Frédérique Clément, François Fages, Aurélien Rizk. Duke University: Robert J. Lefkowitz, Seungkirl Ahn, Jihee Kim, Jonathan D. Violin.

