OPC (Koustenis, Breiter): presentation transcript
General Comments
–A surrogate for a control group
–A benchmark for minimally acceptable values
–Not a control group
–Driven by historical data
–Requires pooling of different investigations
(continued)
–Periodic re-evaluation and updating of the OPCs
–Policy not yet formalized
–Specific guidance on methodology to derive an OPC is urgently needed
Bayesian Issues in Developing an OPC
Objective means?
–Derived from (conditionally?) exchangeable studies
–Non-informative hyper-prior
For new Bayesian trials, should the OPC be expressed as a (presumably tight) posterior distribution rather than a fixed number?
–E.g. logit(opc) ~ Normal(?, ?), etc.
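A minimal sketch of the "OPC as a distribution" idea: pool historical complication rates from studies treated as exchangeable, and approximate a posterior for logit(opc) by a normal on the logit scale. All numbers below are illustrative stand-ins, not from any real device database.

```python
import numpy as np

# Hypothetical historical complication rates from exchangeable studies
# (illustrative values only).
rates = np.array([0.08, 0.10, 0.07, 0.12, 0.09])
logits = np.log(rates / (1 - rates))

# Crude normal approximation: logit(opc) ~ Normal(mean, se),
# i.e. a (tight) distribution for the OPC rather than a fixed number.
mu = logits.mean()
se = logits.std(ddof=1) / np.sqrt(len(logits))

# Draws from the implied OPC distribution, back-transformed to the rate scale
rng = np.random.default_rng(0)
draws = 1 / (1 + np.exp(-rng.normal(mu, se, 10_000)))
```

A full hierarchical model with a non-informative hyper-prior would replace the normal approximation, but the output has the same form: a distribution for the OPC instead of a single number.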
Does the OPC Preempt an Informative Prior?
An objective informative prior would be derived from some of the same trials used to set the OPC. This could be dealt with by computing the joint posterior distribution of opc and p_new. But this would be extremely burdensome to implement for anything but an in-house OPC (Breiter). A non-informative prior might be least burdensome.
Bayesian Endpoints
Superiority:
–P(p_new < opc | New Data)
Non-inferiority:
–P(p_new < opc + δ | New Data)
–P(p_new < k·opc | New Data)
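These endpoints are directly computable when the posterior for p_new is conjugate. A sketch with a Jeffreys (non-informative) prior and made-up trial counts; the OPC, margin δ, and data are all hypothetical:

```python
from scipy.stats import beta

# Hypothetical new-device trial: 9 complications in 180 subjects
x, n = 9, 180
opc, delta = 0.06, 0.02          # illustrative OPC and non-inferiority margin

# Jeffreys prior Beta(0.5, 0.5) gives a Beta posterior for p_new
post = beta(0.5 + x, 0.5 + n - x)

p_superior = post.cdf(opc)             # P(p_new < opc | New Data)
p_noninferior = post.cdf(opc + delta)  # P(p_new < opc + delta | New Data)
```

The multiplicative version is just `post.cdf(k * opc)` for a chosen k > 1.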
OPC as an Agreed-upon Standard
Historical data + ??? are evaluated to produce an agreed-upon OPC as a fixed number with no uncertainty. Can I use some of these same data to develop an informative prior? Probably yes, but it needs work. The issue is what claim will be made for a successful device trial.
The Prior Depends on the Claim
Claim: The complication rate (say) of the new device is not larger than (say) the median of comparable devices + δ.
–If the new device is exchangeable with a subset of comparable devices, then the correct prior for the new device is the joint distribution of (p_new, opc) prior to the new data.
–If the new device is not exchangeable with any comparable devices, then a non-informative prior should be used.
(continued)
Claim: The complication rate of the new device is not greater than a given number (opc + δ).
–The prior can be based on device trials that are considered exchangeable with the planned trial (e.g. in house).
Logic Chopping?
Not necessarily. Consider:
–The average male U of IA professor is taller than the average male professor.
vs
–The average male U of IA professor is taller than 5'11".
How you or I arrived at the 5'11" is not relevant to the posterior probability.
But Perhaps That's a Bit Disingenuous
The regulatory goal is clearly to set an OPC that will not permit the reduction of the average safety or efficacy of a class of devices. Of necessity, it has to be related to an estimate of some sort of average. So a claim of superiority or non-inferiority to an OPC is clearly made, at least indirectly, with reference to a control.
Would It Make Sense to Express the OPC as a Predictive Distribution?
If the OPC is derived from a hierarchical analysis of exchangeable device trials, it would be possible to compute the predictive distribution of x_new. Could inferiority (superiority) be defined as the observed x_new being below the 5th (above the 95th) percentile of the predictive distribution?
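A Monte Carlo sketch of this predictive check: given draws of the hierarchical mean and between-study SD on the logit scale (stand-ins for real MCMC output; all numbers illustrative), simulate a new study's rate and then its count x_new, and read off the 5th/95th percentiles.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in posterior draws for the hierarchical mean and SD (logit scale)
mu_draws = rng.normal(-2.3, 0.1, 20_000)
sd_draws = np.abs(rng.normal(0.25, 0.05, 20_000))

# Predictive distribution of x_new for a planned trial of n subjects:
# draw a new study's rate from the hierarchy, then its binomial count.
n = 200
p_new = 1 / (1 + np.exp(-rng.normal(mu_draws, sd_draws)))
x_new = rng.binomial(n, p_new)

# Observed x_new below `lo` (above `hi`) would trigger the 5th/95th
# percentile rule on the slide.
lo, hi = np.percentile(x_new, [5, 95])
```

The percentile rule then reduces to a simple comparison of the observed count against `lo` and `hi`.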
Binary Response Setup
i = arm (T or C), j = center, k = subject
Response variable: y_ijk ~ Bernoulli(p_ij)
logit(p_Cj) = α_j
logit(p_Tj) = α_j + δ + γ_j
Primary: δ > -Δ
Secondary: the γ_j's are within clinical tolerance
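A simulation sketch of this setup, assuming the logistic model with center effects α_j, a common treatment effect δ, center-by-treatment interactions γ_j, and a non-inferiority margin Δ on the logit scale. All parameter values are illustrative, not from the talk.

```python
import numpy as np

rng = np.random.default_rng(2)

def invlogit(z):
    return 1 / (1 + np.exp(-z))

# Illustrative parameters: J centers, n subjects per arm per center
J, n = 6, 50
delta, Delta = 0.10, 0.40            # treatment effect and NI margin (logit scale)
alpha = rng.normal(-1.0, 0.3, J)     # center effects alpha_j
gamma = rng.normal(0.0, 0.15, J)     # center-by-treatment interactions gamma_j

p_C = invlogit(alpha)                 # logit(p_Cj) = alpha_j
p_T = invlogit(alpha + delta + gamma) # logit(p_Tj) = alpha_j + delta + gamma_j

# y_ijk ~ Bernoulli(p_ij), aggregated to per-center binomial counts
y_C = rng.binomial(n, p_C)
y_T = rng.binomial(n, p_T)
```

The primary question δ > -Δ and the secondary tolerance on the γ_j's are then statements about the generating parameters, to be recovered from (y_C, y_T).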
Specify the Secondary Goal?
"If the difference between the treatment groups varies more than twice the non-inferiority margin [2Δ]..."
Possible interpretations:
–Random C×T interaction: σ_γ < 2Δ
–Multiple comparisons: max |γ_j – γ_k| < 2Δ
(continued)
Modify... Liu et al. ...
Center j is non-inferior: δ + γ_j > -kΔ
All centers must be non-inferior? Identify the inferior centers?
Why Bootstrap Resample?
–To increase the n of subjects in clusters? Probably invalid.
–To generate a better approximation of the null sampling distribution? OK, but what are the details? Do you combine the two arms and resample?
Why not use a random-effects model (e.g. SAS GLIMMIX) if you want to stick to frequentist methods?
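One hedged reading of "combine the two arms and resample": pool each center's arms to get a null rate, then resample counts within centers under that null (a within-center parametric bootstrap). The per-center counts below are made up for illustration; this is one possible set of details, not the method the talk critiques.

```python
import numpy as np

rng = np.random.default_rng(5)

# Illustrative per-center (events, n) for control and treatment arms
ctrl = [(12, 50), (9, 50), (15, 50), (11, 50)]
trt  = [(10, 50), (8, 50), (13, 50), (12, 50)]

def rate_diff(c, t):
    pc = sum(e for e, _ in c) / sum(n for _, n in c)
    pt = sum(e for e, _ in t) / sum(n for _, n in t)
    return pt - pc

obs = rate_diff(ctrl, trt)

# Null resampling: pool the two arms within each center, then redraw
# both arms' counts from the pooled (no-difference) rate.
B, null = 2000, []
for _ in range(B):
    c_b, t_b = [], []
    for (ec, nc), (et, nt) in zip(ctrl, trt):
        p0 = (ec + et) / (nc + nt)          # pooled null rate for this center
        c_b.append((rng.binomial(nc, p0), nc))
        t_b.append((rng.binomial(nt, p0), nt))
    null.append(rate_diff(c_b, t_b))

p_value = float(np.mean(np.abs(null) >= abs(obs)))
```

A random-effects GLMM would avoid having to hand-specify these resampling details, which is the slide's point.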
Bayesian Analysis
–Ad-hoc pooling is not necessary.
–Can produce the posterior distribution of any function of the parameters.
–Can use non-informative hyper-priors, so it is objective (data driven).
–Will have the best frequentist operating characteristics (which could be calculated by simulation).
Bayesian Setup
Define δ_j = δ + γ_j (so the logit of p in the T arm is α_j + δ_j)
(α_j, δ_j) ~ iid N((μ_α, μ_δ), Σ); μ_α, μ_δ, Σ have near non-informative priors
Primary goal: P(μ_δ > -Δ | Data) (or δ-bar)
Secondary goal(s): ??
–P(σ_δ < 2Δ | Data) (or σ_γ)
–For each (j, k): P(|δ_j – δ_k| < 2Δ | Data)
–For each j: P(|δ_j – μ_δ| < 2Δ | Data)
–For each j: P(δ_j > -kΔ | Data)
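Once posterior draws are available, each of these probabilities is just a proportion over draws. A sketch using synthetic stand-in draws for μ_δ, σ_δ, and the center-specific effects δ_j (in practice these would come from MCMC on the hierarchical model; every number here is illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in posterior draws (illustrative, in place of real MCMC output)
S, J = 10_000, 4
Delta, k = 0.40, 0.5
mu_delta = rng.normal(0.12, 0.08, S)            # posterior of mean effect
sigma_delta = np.abs(rng.normal(0.15, 0.04, S)) # posterior of between-center SD
delta_j = mu_delta[:, None] + rng.normal(0, 1, (S, J)) * sigma_delta[:, None]

primary = np.mean(mu_delta > -Delta)               # P(mu_delta > -Delta | Data)
spread = np.mean(sigma_delta < 2 * Delta)          # P(sigma_delta < 2*Delta | Data)
per_center = np.mean(delta_j > -k * Delta, axis=0) # P(delta_j > -k*Delta | Data), each j
```

Pairwise contrasts P(|δ_j – δ_k| < 2Δ | Data) follow the same pattern, applied to columns of `delta_j`.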
Bayes Could Use the Original Metric
p_Cj = 1/(1 + exp(-α_j))
p_Tj = 1/(1 + exp(-α_j - δ_j))
p_C = 1/(1 + exp(-μ_α))
p_T = 1/(1 + exp(-μ_α - μ_δ))
Primary: P(p_T – p_C > -Δ | Data)
Secondary:
–e.g. P(p_Tj – p_Cj > k(p_T – p_C) | Data)
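Moving to the probability scale is a one-line transformation of the logit-scale posterior draws. A sketch with synthetic stand-in draws for μ_α and μ_δ (illustrative values, with Δ here a margin on the probability scale):

```python
import numpy as np

rng = np.random.default_rng(4)

def invlogit(z):
    return 1 / (1 + np.exp(-z))

# Stand-in posterior draws on the logit scale (in place of real MCMC output)
S = 10_000
mu_alpha = rng.normal(-1.0, 0.1, S)
mu_delta = rng.normal(0.12, 0.08, S)
Delta_p = 0.10                          # illustrative margin, probability scale

p_C = invlogit(mu_alpha)                # p_C = 1/(1+exp(-mu_alpha))
p_T = invlogit(mu_alpha + mu_delta)     # p_T = 1/(1+exp(-mu_alpha - mu_delta))

primary = np.mean(p_T - p_C > -Delta_p) # P(p_T - p_C > -Delta_p | Data)
```

The center-level secondary endpoint works the same way, using draws of (α_j, δ_j) to form p_Tj – p_Cj before taking the proportion.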