
1 Economic evaluation of health programmes Department of Epidemiology, Biostatistics and Occupational Health Class no. 17: Economic Evaluation using Decision Analytic Modelling III Nov 5, 2008

2 Plan of class
 Patient-level simulations: an example
 Assessment of uncertainty in decision-analytic models
 Assessment of uncertainty due to sampling variation in individual studies

3 3rd alternative: patient-level simulation
 Each individual encounters events with probabilities that can be made path-dependent
 Virtually infinite flexibility
 But how to “populate” all model parameters?

4 Example of a study using patient-level simulation: Stahl JE, Rattner D, et al., Reorganizing the system of care surrounding laparoscopic surgery: A cost-effectiveness analysis using discrete event simulation, Medical Decision Making, Sep-Oct 2004, 461–471.

5 Study background

6 Base case process of care

9 Arriving or exiting process


16 Sensitivity analysis results

17 Average cost of patients cared for per day


21 Conclusions
 The new system yields a lower cost per patient treated
 Cost is slightly higher if patient volume is lower
 Reason: the higher cost per minute is more than compensated by the higher throughput of patients
 Sensitivity analyses point to robustness of the conclusion to several changes in assumptions, evaluated one at a time

22 Significance
 Use of a simulation model allows representation of a complex process that neither a decision tree nor a Markov model could represent
 Obtaining valid data may be an issue: the process is represented in greater detail, but on what basis are those details defined?

23 Dealing with uncertainty in decision-analytic models

24 Types of uncertainty in DAMs and how to handle them

Type of uncertainty: Method for handling it
 Methodological: Reference case / sensitivity analysis
 Parameter uncertainty: Probabilistic sensitivity analysis
 Modeling uncertainty
  Structure: Sensitivity analysis
  Process: No established method
 Generalizability/transferability: Sensitivity analysis

(From Box 3.3 of Drummond et al. 2005)

25 Limitations of one-way sensitivity analyses
 Stahl et al. 2004 varied key parameters one at a time
 Limitations to this:
 Conscious or unconscious bias in the selection of parameters to vary
 Subjective interpretation: when do we conclude results are too sensitive to variation in a parameter or other feature of the model?
 Variation one at a time ignores potential interactions and also covariation
 In many DAMs there are too many parameters to be able to represent the results of such analyses meaningfully

26 Probabilistic sensitivity analysis
1. Represent uncertainty in parameters by means of a distribution
 For example, a beta distribution for a parameter between 0 and 1
 Use a joint distribution for parameters that are correlated
2. Propagate uncertainty through the model
 Monte Carlo simulation
 E.g., 10,000 replications, using each time a different set of randomly selected parameter values
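As an illustration of these two steps, a minimal Python sketch. The event-cost model and every parameter value below are hypothetical; only the mechanics (beta/gamma draws propagated through the model) follow the slide:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 10_000  # number of Monte Carlo replications

# Step 1: represent parameter uncertainty with distributions.
# Beta for probabilities (bounded 0-1), gamma for costs (non-negative, skewed).
p_event_control = rng.beta(20, 80, N)                    # event probability, ~0.20
rel_risk = rng.beta(60, 40, N)                           # relative risk under treatment, ~0.60
cost_event = rng.gamma(shape=25, scale=400, size=N)      # cost per event, ~$10,000
cost_treatment = 1_500.0                                 # fixed treatment cost (assumed known)

# Step 2: propagate each draw through the model.
p_event_treated = p_event_control * rel_risk
delta_cost = cost_treatment + (p_event_treated - p_event_control) * cost_event
delta_effect = p_event_control - p_event_treated         # events averted per patient

print("mean incremental cost:  ", delta_cost.mean())
print("mean incremental effect:", delta_effect.mean())
```

Each of the 10,000 (delta_cost, delta_effect) pairs is one replication of the model under a different set of parameter values; their spread is the propagated parameter uncertainty.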

27 The beta distribution
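One common way to choose the beta distribution's two parameters is the method of moments: pick alpha and beta so the distribution reproduces an observed mean and standard error. A sketch (the 0.20 mean and 0.04 standard error are made-up inputs):

```python
def beta_from_moments(mean, sd):
    """Method-of-moments alpha, beta for a probability with a given mean and SD."""
    var = sd ** 2
    # A beta distribution's variance cannot exceed mean * (1 - mean)
    assert 0 < mean < 1 and var < mean * (1 - mean)
    common = mean * (1 - mean) / var - 1
    return mean * common, (1 - mean) * common

a, b = beta_from_moments(0.20, 0.04)
# The fitted distribution's mean, a / (a + b), matches the target
assert abs(a / (a + b) - 0.20) < 1e-9
```

For a probability estimated from count data (r events in n trials), alpha = r and beta = n - r is the other standard choice.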

28 Probabilistic sensitivity analysis
 Present the implications of parameter uncertainty:
 Confidence intervals around the ICER, or around incremental net benefit
 Cost-effectiveness acceptability curve
 May also show a scatter plot on the cost-effectiveness plane

29 Cost-effectiveness acceptability curves

30 Incremental cost-effectiveness ratio

ICER = (C_E - C_C) / (E_E - E_C)

where C_E and C_C are the average costs per person in the experimental (E) and control (C) groups, and E_E and E_C are the average values of the effectiveness measure in the two groups.
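In code the ratio is a one-liner; the group averages below are invented purely for illustration:

```python
def icer(cost_exp, cost_con, eff_exp, eff_con):
    """Incremental cost-effectiveness ratio: (C_E - C_C) / (E_E - E_C)."""
    return (cost_exp - cost_con) / (eff_exp - eff_con)

# Hypothetical averages: treatment costs $4,000 more and yields 0.25 extra life-years
print(icer(12_000, 8_000, 1.75, 1.50))  # 16000.0, i.e. $16,000 per life-year gained
```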

31 Representing uncertainty of the ICER
 The ratio nature of the ICER complicates things
 Analytic methods exist but tend to oversimplify reality
 Bootstrapping methods are now widely used instead

32 Using the bootstrap to obtain a measure of the sampling variability of the ICER

Suppose we have n_EXP and n_CON observations in the experimental and control groups, respectively. One way to estimate the uncertainty around an ICER is to:
1. Sample n_CON cost-effect pairs from the control group, with replacement
2. Sample n_EXP cost-effect pairs from the experimental group, with replacement
3. Calculate the ICER from those two new sets of cost-effect pairs
4. Repeat steps 1 to 3 many times, e.g., 1,000 times
5. Plot the resulting 1,000 ICER values on the cost-effectiveness plane

See Drummond & McGuire, Eds., Economic evaluation in health care, Oxford, 2001, p. 189
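The five steps above can be sketched as follows. The trial data are made up, and the percentile interval at the end is just one common way to summarize the replications:

```python
import numpy as np

def bootstrap_icers(cost_con, eff_con, cost_exp, eff_exp, n_rep=1000, seed=0):
    """Bootstrap the ICER by resampling (cost, effect) pairs within each arm."""
    rng = np.random.default_rng(seed)
    cost_con, eff_con = np.asarray(cost_con), np.asarray(eff_con)
    cost_exp, eff_exp = np.asarray(cost_exp), np.asarray(eff_exp)
    icers = np.empty(n_rep)
    for r in range(n_rep):
        i = rng.integers(0, len(cost_con), len(cost_con))   # step 1: resample control pairs
        j = rng.integers(0, len(cost_exp), len(cost_exp))   # step 2: resample experimental pairs
        d_cost = cost_exp[j].mean() - cost_con[i].mean()
        d_eff = eff_exp[j].mean() - eff_con[i].mean()
        icers[r] = d_cost / d_eff                           # step 3: ICER of this replication
    return icers                                            # steps 4-5: repeat, then plot

# Made-up trial data: 50 patients per arm
rng = np.random.default_rng(1)
icers = bootstrap_icers(
    cost_con=rng.normal(8_000, 1_000, 50), eff_con=rng.normal(1.50, 0.2, 50),
    cost_exp=rng.normal(12_000, 1_500, 50), eff_exp=rng.normal(1.80, 0.2, 50),
)
lo, hi = np.percentile(icers, [2.5, 97.5])  # a percentile 95% confidence interval
```

Scatter-plotting the 1,000 replications on the cost-effectiveness plane shows the joint uncertainty directly; the percentile interval is only meaningful when all replications fall in one quadrant (a caveat taken up below).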

33 An illustration of step 1 (Note: These are made-up data)

34 Going over the next steps again…
 Do exactly the same steps for data from the experimental group, independently
 Calculate the ICER from the 2 bootstrapped samples
 Store this ICER in memory
 Repeat the steps all over again
 Of course, this is done by computer; Stata is one program that can be used to do this fairly readily

35 Bootstrapped replications of an ICER with 95% confidence interval
Source: Drummond & McGuire 2001, p. 189
Note: the ellipses here are derived using Van Hout's method and are too large; the bootstrap gives better results

36 Two common problems with bootstrapped confidence intervals

1. The magnitude of negative ICERs conveys no useful information:
 A: (1 LY, -$2,000): ICER = -2,000 $/LY
 B: (2 LY, -$2,000): ICER = -1,000 $/LY
 C: (2 LY, -$1,000): ICER = -500 $/LY
 B is preferred, yet its ICER is intermediate in value
2. Positive ICERs from the NE and SW quadrants have opposite interpretations:
 In the NE quadrant, fewer $ per LY gained favors the new treatment; in the SW quadrant, fewer $ saved per LY lost favors the old one

As a result, if enough bootstrapped replications fall in quadrants other than the NE, the 95% confidence interval will be uninterpretable.
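The ordering problem can be seen by comparing the ICER with the incremental net monetary benefit for options A, B, and C (the $50,000/LY ceiling ratio used here is an arbitrary choice for illustration):

```python
# Options as (delta effect in LY, delta cost in $), from the A/B/C example
options = {"A": (1.0, -2_000.0), "B": (2.0, -2_000.0), "C": (2.0, -1_000.0)}
lam = 50_000.0  # assumed ceiling ratio, $ per life-year

for name, (d_eff, d_cost) in options.items():
    icer = d_cost / d_eff              # negative, and ranks the options misleadingly
    nmb = lam * d_eff - d_cost         # net monetary benefit: higher is always better
    print(f"{name}: ICER = {icer:+.0f} $/LY, NMB = ${nmb:,.0f}")

# The ICERs order the options A < B < C, but net benefit correctly puts B first
# ($102,000 for B vs $101,000 for C and $52,000 for A)
```

This is one motivation for working with incremental net benefit, mentioned earlier as an alternative summary: unlike the ICER, it is well behaved in every quadrant.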

37 Bootstrapped replications that fall in all 4 quadrants Source: Drummond & McGuire 2001, p. 193

38 A solution: the cost-effectiveness acceptability curve

Strategy:
 We recognize that the decision-maker may in fact have a ceiling ratio, or shadow price R_C: a maximum amount of $ per unit of benefit he or she is willing to pay
 So we estimate, based on our bootstrapped replications, the probability that the ICER is less than or equal to the ceiling ratio, as a function of the ceiling ratio
 If the ceiling ratio is $0, the probability that the ICER is less than or equal to 0 is the p-value from testing the null hypothesis that the costs of the 2 groups are the same
 Recall that this one-sided p-value is the probability of observing, by chance, a difference in costs at least as extreme as the one seen in the data set if the true difference is in fact 0
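Given bootstrapped replications of incremental cost and effect, a CEAC can be computed by counting, at each candidate ceiling ratio, the replications with non-negative incremental net monetary benefit (the replication data below are made up):

```python
import numpy as np

def ceac(d_cost, d_eff, ceilings):
    """Probability the intervention is acceptable at each ceiling ratio R_C.

    A replication is acceptable when its net monetary benefit
    R_C * delta_effect - delta_cost is non-negative; this sidesteps the
    quadrant problems of working with the ICER itself.
    """
    d_cost, d_eff = np.asarray(d_cost), np.asarray(d_eff)
    return np.array([(rc * d_eff - d_cost >= 0).mean() for rc in ceilings])

# Made-up bootstrap replications of (delta cost, delta effect)
rng = np.random.default_rng(7)
d_cost = rng.normal(4_000, 1_500, 1000)
d_eff = rng.normal(0.25, 0.10, 1000)

ceilings = [0, 10_000, 20_000, 50_000]
probs = ceac(d_cost, d_eff, ceilings)  # plot probs vs ceilings to draw the CEAC
```

At R_C = $0 the curve's value is the probability that the incremental cost is at most zero, matching the p-value interpretation above; as R_C grows, acceptability is driven increasingly by the effect difference.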

39 Cost-effectiveness acceptability curve (CEAC) Source: Drummond & McGuire 2001, p. 195
