
Slide 1
Some methodological issues in value of information analysis: an application of partial EVPI and EVSI to an economic model of Zanamivir
Karl Claxton and Tony Ades

Slide 2
Partial EVPIs: light at the end of the tunnel… …maybe it's a train

Slide 3
A simple model of Zanamivir

Slide 4 (figure only; no text captured)

Slide 5 (figure only; no text captured)
Slide 6
[Figure: distribution of incremental net benefit (inb). Normal distribution, mean = (£0.51), std dev = £12.52; axis from (£40.00) to £40.00.]

Slide 7
EVPI for the decision
EVPI = EV(perfect information) - EV(current information)
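In Monte Carlo terms this definition can be sketched as below; a minimal illustration assuming just two strategies (standard care vs Zanamivir) and the incremental net benefit distribution reported on slide 6, not the full Zanamivir model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Incremental net benefit (inb) of Zanamivir vs standard care, drawn from
# the distribution shown on slide 6: Normal, mean -0.51, sd 12.52 (pounds).
inb = rng.normal(-0.51, 12.52, size=100_000)

# EV(current information): commit to one strategy now, using expected inb
# (0.0 is the comparator's incremental net benefit).
ev_current = max(0.0, inb.mean())

# EV(perfect information): choose the best strategy for each resolution
# of the uncertainty, then average.
ev_perfect = np.maximum(inb, 0.0).mean()

evpi = ev_perfect - ev_current
print(f"per-patient EVPI ~ £{evpi:.2f}")
```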

Slide 8 (figure only; no text captured)

Slide 9
Partial EVPI
EVPI_pip = EV(perfect information about pip) - EV(current information)
= expectation, over all resolutions of pip, of:
EV(optimal decision for a particular resolution of pip) - EV(prior decision for the same resolution of pip)
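This two-level expectation can be sketched with a nested Monte Carlo loop; the linear stand-in model, its coefficients, and the Beta/Normal priors below are all invented for illustration, not the Zanamivir model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in model: incremental net benefit of treating as a
# function of pip and one other uncertain input (coefficients invented).
def inb(pip, other):
    return 50.0 * pip - other

N_OUTER, N_INNER = 2_000, 2_000

# Illustrative priors: pip ~ Beta(2, 8), other ~ Normal(8, 3).
pip_draws = rng.beta(2, 8, size=N_OUTER)

# Prior decision under current information: treat iff E[inb] > 0.
# Here E[inb] = 50 * 0.2 - 8 = 2 > 0, so the prior decision is "treat".
ev_optimal = np.empty(N_OUTER)
ev_prior = np.empty(N_OUTER)
for i, pip in enumerate(pip_draws):                 # outer: resolutions of pip
    other = rng.normal(8.0, 3.0, size=N_INNER)      # inner: remaining uncertainty
    enb = inb(pip, other).mean()                    # E[inb | this resolution of pip]
    ev_optimal[i] = max(enb, 0.0)                   # best decision knowing pip
    ev_prior[i] = enb                               # stick with the prior decision

evpi_pip = (ev_optimal - ev_prior).mean()           # expectation of the difference
print(evpi_pip)
```

The difference inside the mean is non-zero only when a resolution of pip would flip the decision, which is the first implication on the next slide.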

Slide 10
Partial EVPI: some implications
- information about an input is only valuable if it changes our decision
- information is only valuable if pip does not resolve at its expected value
- general solution: works for linear and non-linear models
- inputs can be (spuriously) correlated

Slide 11 (figure only; no text captured)

Slide 12
Felli and Hazen (98) "short cut"
- EVPI_pip = EVPI when all other inputs are resolved at their expected values
- appears counter-intuitive: we resolve all other uncertainties, then ask what the value of pip is, i.e. a "residual" EVPI_pip?
- but: resolving the other inputs at their expected values does not give us any information
- correct if: net benefit is linear in the inputs, and the inputs are not correlated

13
So why different values? The model is linear The inputs are independent?

14
“Residual” EVPI wrong current information position for partial EVPI what is the value of resolving pip when we already have perfect information about all other inputs? Expect residual EVPI pip < partial EVPI pip EVPI when resolve all other inputs at each realisation ?

Slide 15
Thompson and Evans (96) and Thompson and Graham (96)
- inb simplifies to a linear expression [formula not captured]
- rearranging gives inb as a function of each input in turn: pip, pcz, phz, rsd, upd, phs, pcs [formulas not captured]
- Felli and Hazen (98) used a similar approach
- Thompson and Evans (96) is a linear model
- emphasis on EVPI when the other inputs are set to their joint expected value
- requires payoffs as a function of the input of interest

Slide 16
Reduction in cost of uncertainty
- intuitive appeal; consistent with conditional probabilistic analysis
- RCU_E(pip) = EVPI - EVPI(pip resolved at its expected value)
- but pip may not resolve at E(pip), and prior decisions may change
- value of perfect information if forced to stick to the prior decision, i.e. the value of a reduction in variance
- expect RCU_E(pip) < partial EVPI_pip

Slide 17
Reduction in cost of uncertainty: spurious correlation again?
RCU_pip = E_pip[ EVPI - EVPI(given realisation of pip) ] = partial EVPI
RCU_pip = EVPI - E_pip[ EVPI(given realisation of pip) ]
= [EV(perfect information) - EV(current information)]
  - E_pip[ EV(perfect information, pip resolved) - EV(current information, pip resolved) ]

Slide 18
EVPI for strategies
- value of including a strategy?
- EVPI with and without the strategy included demonstrates bias
- is the difference the EVPI associated with the strategy?
EV(perfect information, all strategies included) - EV(perfect information, strategy excluded)
= E_all inputs[ Max_d(NB_d | all inputs) ] - E_all inputs[ Max_(d-1)(NB_(d-1) | all inputs) ]
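The last expression can be sketched numerically; the three strategies and their net-benefit distributions below are invented for illustration (only the Zanamivir column echoes the inb distribution of slide 6):

```python
import numpy as np

rng = np.random.default_rng(2)

M = 100_000
# Hypothetical net benefits for d = 3 strategies over M joint resolutions
# of the model inputs (numbers invented for illustration).
nb = np.column_stack([
    np.zeros(M),                          # standard care (reference)
    rng.normal(-0.51, 12.52, size=M),     # Zanamivir (slide 6 inb distribution)
    rng.normal(-2.0, 15.0, size=M),       # a third, made-up strategy
])

ev_pi_all = nb.max(axis=1).mean()          # E[max over all d strategies]
ev_pi_excl = nb[:, :2].max(axis=1).mean()  # E[max with the third strategy excluded]

diff = ev_pi_all - ev_pi_excl              # the "EVPI associated with the strategy"?
print(diff)
```

Note the difference is always non-negative, since adding a strategy can only raise the inner maximum; that is why the slide questions whether it really measures the EVPI of the strategy.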

Slide 19
Conclusions on partials
Life is beautiful… Hegel was right: progress is a dialectic.
Maths don't lie… but brute-force empiricism can mislead.

Slide 20
EVSI… it may well be a train. Hegel's right again: contradiction follows synthesis.

Slide 21
EVSI for model inputs
1. generate a predictive distribution for a sample of size n
2. sample from the predictive and prior distributions to form a preposterior
3. propagate the preposterior through the model
4. compute the value of information for a sample of size n
5. find the n* that maximises EVSI - cost of sampling

Slide 22
EVSI for pip: an epidemiological study of size n
prior: pip ~ Beta(α, β)
predictive: rip ~ Bin(n, pip)
preposterior: pip' = (pip(α + β) + rip) / (α + β + n)
- as n increases, var(rip/n) falls towards var(pip)
- var(pip') < var(pip); the posterior uncertainty remaining after the study falls with n
- the pip' are the possible posterior means
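A quick simulation of this preposterior; the Beta(2, 8) prior and the study sizes below are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

a, b, M = 2.0, 8.0, 200_000        # illustrative Beta(alpha, beta) prior for pip
prior = rng.beta(a, b, size=M)

def preposterior(n):
    """Possible posterior means of pip after an epidemiological study of size n."""
    pip = rng.beta(a, b, size=M)    # draw pip from its prior
    rip = rng.binomial(n, pip)      # predictive draws of the study result
    # Slide 22's update, with pip set to its prior mean a / (a + b):
    # pip' = (pip*(a+b) + rip) / (a+b+n) = (a + rip) / (a + b + n)
    return (a + rip) / (a + b + n)

for n in (10, 50, 200):
    pp = preposterior(n)
    print(n, round(pp.mean(), 3), round(pp.var(), 5), round(prior.var(), 5))
```

Each pip' draw is a posterior mean the study could leave us with: the mean of pip is preserved, and var(pip') stays below var(pip).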

Slide 23
EVSI_pip = reduction in the cost of uncertainty due to n observations on pip
= the difference in partials: EVPI_pip - EVPI_pip'
= E_pip[ E_other[Max_d(NB_d | other, pip)] - Max_d E_other(NB_d | other, pip) ]
  - E_pip'[ E_other[Max_d(NB_d | other, pip')] - Max_d E_other(NB_d | other, pip') ]
pip' has the smaller variance, so any realisation is less likely to change the decision:
E_pip[ E_other[Max_d(NB_d | other, pip)] ] > E_pip'[ E_other[Max_d(NB_d | other, pip')] ]
and E(pip') = E(pip), so:
E_pip[ Max_d E_other(NB_d | other, pip) ] = E_pip'[ Max_d E_other(NB_d | other, pip') ]
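Putting the pieces together, a self-contained sketch of EVSI as the difference in partial EVPIs; the linear model, its coefficients, the Beta(2, 8) prior, and n = 50 are all invented for illustration (because the toy model is linear, the other inputs are fixed at their expected values):

```python
import numpy as np

rng = np.random.default_rng(4)

def enb(pip):
    # Expected inb given pip, with the other inputs fixed at their means
    # (valid here because the toy model is linear; coefficients invented).
    return 50.0 * pip - 8.0

a, b, n, M = 2.0, 8.0, 50, 200_000

def partial_evpi(pip_draws):
    # The prior decision is "treat" (E[enb] = 50 * 0.2 - 8 = 2 > 0), so the
    # value of resolving pip is E[ max(enb, 0) - enb ].
    e = enb(pip_draws)
    return (np.maximum(e, 0.0) - e).mean()

pip = rng.beta(a, b, size=M)               # prior draws
rip = rng.binomial(n, pip)                 # predictive study results
pip_post = (a + rip) / (a + b + n)         # preposterior means (slide 22's update)

evsi = partial_evpi(pip) - partial_evpi(pip_post)   # EVPI_pip - EVPI_pip'
print(evsi)
```

Because pip_post is less dispersed than pip, its partial EVPI is smaller, so the difference, the EVSI of the n-observation study, is positive.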

Slide 24 (figure only; no text captured)

Slide 25
EVSI_pip: why not the difference in prior and preposterior EVPI?
- that captures the effect of pip' only through var(NB)
- the decision may change for the realisation of pip' once the study is completed
- so the difference in prior and preposterior EVPI will underestimate EVSI_pip

Slide 26
Implications
- EVSI for any input that is conjugate
- generate the preposterior for the log odds ratio for complication, hospitalisation, etc.
- trial design for an individual endpoint (rsd)
- trial designs with a number of endpoints (pcz, phz, upd, rsd)
- n for an endpoint will be uncertain (n_pcz = n*pip, etc.)
- consider optimal n and allocation (search for n*)
- combine different designs, e.g. an observational study (pip) and a trial (upd, rsd), or an observational study (pip, upd) and a trial (rsd), etc.
