David Ohlssen (Novartis Pharmaceuticals Corporation)


Applying Bayesian evidence synthesis in comparative effectiveness research
David Ohlssen (Novartis Pharmaceuticals Corporation)

Overview
Part 1: Bayesian DIA CER sub-team
Part 2: Overview of Bayesian evidence synthesis

Part 1: Bayesian DIA CER sub-team

Team Members
Chair: David Ohlssen
Co-chair: Haijun Ma
Other team members: Fanni Natanegara, George Quartey, Mark Boye, Ram Tiwari, Yu Chang

Problem Statement
Comparative effectiveness research (CER) is designed to inform health-care decisions by providing evidence on the effectiveness, benefits, and harms of different treatment options.
Timely research and dissemination of CER results allow clinicians, patients, policymakers, health plans, and other payers to make informed decisions at both the individual and population levels.
Bayesian approaches provide a natural framework for combining information from a variety of sources in comparative effectiveness research.
There has been rapid technical development, as evidenced by a recent flurry of publications, but understanding of how Bayesian techniques should be applied in practice remains limited.

Objectives
Encourage the appropriate application of Bayesian approaches to the problem of comparative effectiveness.
Provide input into ongoing initiatives on comparative effectiveness within the medical product development setting, through white papers, publications, and sessions at future meetings.

Project Scope
Analysis of patient benefit-risk using existing data, initially focused on:
1) the use of Bayesian evidence synthesis techniques such as mixed treatment comparisons
2) joint modeling in benefit-risk assessment

Current aims for 2012
Literature review of Bayesian methods in CER (Q4 2012).
Gain an understanding and appreciation of other CER working groups (Q4 2012): decide on the list of CER working groups to contact, and understand the objectives and status of each group.

Part 2: Overview of Bayesian evidence synthesis

Introduction
Evidence synthesis in drug development
The ideas and principles behind evidence synthesis date back to the work of Eddy et al. (1992). However, widespread application has been driven by the need for quantitative health technology assessment: cost effectiveness and comparative effectiveness.
The ideas are often closely linked with Bayesian principles and methods: good decision making should ideally be based on all relevant information, and MCMC makes the computations practical.

Recent developments in comparative effectiveness
Health agencies have become increasingly interested in health technology assessment and the comparative effectiveness of various treatment options.
Statistical approaches include extensions of standard meta-analysis models allowing multiple treatments to be compared.
FDA Partnership in Applied Comparative Effectiveness Science (PACES), including projects on utilizing historical data in clinical trials and on subgroup analysis.

Aims of this talk
Introduce some basic concepts of evidence synthesis.
Illustrate them through a series of applications: a motivating public health example, network meta-analysis, using historical data in the design and analysis of clinical trials, and subgroup analysis.
Focus on principles and understanding of critical assumptions rather than technical details.

Basic concepts
Framework and notation for evidence synthesis:
Y1,…,YS: data from S sources
θ1,…,θS: source-specific parameters/effects of interest (e.g. a mean difference)
Question related to θ1,…,θS (e.g. average effect, or effect in a new study)
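The graphic on this slide did not survive extraction; as a hedged sketch of the two-level structure it conveys (the normal random-effects form is one common choice, assumed here rather than stated on the slide):

```latex
% Generic evidence-synthesis model (a common normal random-effects form)
% Y_s: data from source s;  \theta_s: source-specific effect of interest
\begin{align*}
Y_s \mid \theta_s &\sim p(y_s \mid \theta_s), \qquad s = 1,\dots,S \\
\theta_s \mid \mu, \tau &\sim N(\mu, \tau^2) \\
\text{questions:}\quad & p(\mu \mid Y_1,\dots,Y_S) \ \text{(average effect)}, \qquad
p(\theta_{\mathrm{new}} \mid Y_1,\dots,Y_S) \ \text{(effect in a new study)}
\end{align*}
```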

Strategies for HIV screening

Ades and Cliffe (2002)
HIV: synthesizing evidence from multiple sources
Aim: to compare strategies for screening for HIV in prenatal clinics: universal screening of all women, or targeted screening of current injecting drug users (IDU) or women born in sub-Saharan Africa (SSA).
Use synthesis to determine the optimal policy.

Key parameters (Ades and Cliffe, 2002)
a: proportion of women born in sub-Saharan Africa (SSA)
b: proportion of women who are injecting drug users (IDU)
c: HIV infection rate in SSA
d: HIV infection rate in IDU
e: HIV infection rate in non-SSA, non-IDU
f: proportion of HIV already diagnosed in SSA
g: proportion of HIV already diagnosed in IDU
h: proportion of HIV already diagnosed in non-SSA, non-IDU
NO direct evidence concerning e and h!

A subset of the data used in the synthesis (Ades and Cliffe, 2002)
HIV prevalence, women not born in SSA, 1997-8: [db + e(1 − a − b)]/(1 − a); observed 74/136,139
Overall HIV prevalence in pregnant women, 1999: ca + db + e(1 − a − b); observed 254/102,287
Diagnosed HIV in SSA women as a proportion of all diagnosed HIV, 1999: fca/[fca + gdb + he(1 − a − b)]; observed 43/60

Implementation of the evidence synthesis (Ades and Cliffe, 2002)
The evidence was synthesized by placing all data sources within a single Bayesian model; this is easy to code in WinBUGS.
Key assumption: consistency of evidence across the different data sources.
This can be checked by comparing direct and indirect evidence at various "nodes" in the graphical model (conflict p-value).
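As an illustration of "placing all data sources within a single Bayesian model", here is a minimal R sketch of the joint likelihood implied by the three data items on the previous slide. The flat priors on (0, 1) are an assumption, the full Ades and Cliffe model uses many more sources, and the fitting step (MCMC, e.g. in WinBUGS) is omitted:

```r
## Hedged sketch: a subset of the Ades & Cliffe synthesis as one joint model.
## Each data source informs a *function* of the basic parameters a..h.

log_post <- function(p) {
  # p = c(a, b, c, d, e, f, g, h), each a proportion in (0, 1)
  a <- p[1]; b <- p[2]; c <- p[3]; d <- p[4]
  e <- p[5]; f <- p[6]; g <- p[7]; h <- p[8]
  if (any(p <= 0 | p >= 1)) return(-Inf)

  q1 <- (d * b + e * (1 - a - b)) / (1 - a)    # prevalence, women not born in SSA
  q2 <- c * a + d * b + e * (1 - a - b)        # overall prevalence, pregnant women
  q3 <- f * c * a /
        (f * c * a + g * d * b + h * e * (1 - a - b))  # diagnosed share, SSA women

  # Binomial likelihood contributions; flat priors on (0,1) add nothing
  dbinom(74,  136139, q1, log = TRUE) +
  dbinom(254, 102287, q2, log = TRUE) +
  dbinom(43,  60,     q3, log = TRUE)
}

# Evaluate at an arbitrary test point (illustration only)
log_post(rep(0.1, 8))
```

Note how e and h, which have no direct evidence, are still identified: they enter the likelihood only through the combined quantities q1, q2, and q3.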

Network meta-analysis

Motivation for Network Meta-Analysis
There are often many treatments for a health condition, but published systematic reviews and meta-analyses typically focus on pair-wise comparisons: there are more than 20 separate Cochrane reviews for adult smoking cessation, and more than 20 for chronic asthma in adults.
An alternative approach extends standard meta-analysis techniques to accommodate multiple treatments. This emerging field has been described as both network meta-analysis and mixed treatment comparisons.

Network meta-analysis graphic
[Diagram: a network of treatments (nodes B through H) linked by the direct comparisons available between them]

Network meta-analysis: key assumptions
Three key assumptions (Song et al., 2009):
Homogeneity assumption: studies in the network meta-analysis that compare the same treatments must be sufficiently similar.
Similarity assumption: when comparing A and C indirectly via B, the patient populations of the trial(s) investigating A vs B and those investigating B vs C must be sufficiently similar.
Consistency assumption: direct and indirect comparisons, when done separately, must be roughly in agreement.
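To make the consistency assumption concrete, here is a minimal R sketch of a Bucher-style indirect comparison with a simple direct-versus-indirect agreement check (a frequentist stand-in for the Bayesian conflict p-value mentioned earlier; all numbers are invented for illustration):

```r
## Indirect A-vs-C estimate via common comparator B, plus a consistency check.

# log relative risks and standard errors from (hypothetical) pairwise analyses
lrr_AB <- log(0.80); se_AB <- 0.10        # A vs B, direct
lrr_CB <- log(1.10); se_CB <- 0.12        # C vs B, direct
lrr_AC_dir <- log(0.70); se_AC_dir <- 0.15  # A vs C, direct

# Indirect comparison: log RR(A vs C) = log RR(A vs B) - log RR(C vs B)
lrr_AC_ind <- lrr_AB - lrr_CB
se_AC_ind  <- sqrt(se_AB^2 + se_CB^2)

# Consistency check: do direct and indirect estimates agree?
z <- (lrr_AC_dir - lrr_AC_ind) / sqrt(se_AC_dir^2 + se_AC_ind^2)
p_inconsistency <- 2 * pnorm(-abs(z))     # small p suggests inconsistency

c(indirect_RR = exp(lrr_AC_ind), direct_RR = exp(lrr_AC_dir),
  p_inconsistency = p_inconsistency)
```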

Example 2: Network meta-analysis
Trelle et al (2011), Cardiovascular safety of non-steroidal anti-inflammatory drugs. Primary endpoint: myocardial infarction.
Data synthesis: 31 trials in 116,429 patients with more than 115,000 patient-years of follow-up were included. A network random-effects meta-analysis was used.
Critical aspect: the assumptions regarding the consistency of evidence across the network. How reasonable is it to rank and compare treatments with this technique?
[Network diagram: placebo, naproxen, ibuprofen, diclofenac, celecoxib, etoricoxib, lumiracoxib, rofecoxib]

Results from Trelle et al: myocardial infarction analysis
Relative risk with 95% confidence interval compared to placebo:
Treatment | RR estimate | lower limit | upper limit
Celecoxib | 1.35 | 0.71 | 2.72
Diclofenac | 0.82 | 0.29 | 2.20
Etoricoxib | 0.75 | 0.23 | 2.39
Ibuprofen | 1.61 | 0.50 | 5.77
Lumiracoxib | 2.00 | — | 6.21
Naproxen | — | 0.37 | 1.67
Rofecoxib | 2.12 | 1.26 | 3.56
Authors' conclusion: although uncertainty remains, little evidence exists to suggest that any of the investigated drugs are safe in cardiovascular terms. Naproxen seemed least harmful.

Comments on Trelle et al
Drug doses could not be considered (data not available), and the average duration of exposure differed across trials. The ranking of treatments therefore relies on the strong assumption that the risk ratio is constant across time for all treatments.
The authors conducted extensive sensitivity analyses, and the results appeared to be robust.

Additional Example
Using network meta-analysis for Phase IIIB: probability of success in a pricing trial.
[Network diagram: placebo, treatments A-D, and a combination product]

Use of Historical controls

Introduction: Objective and Problem Statement
Design a study with a control arm and treatment arm(s), using historical control data in the design and analysis. Ideally this gives a smaller trial comparable to a standard trial. This has been used in some of Novartis's phase I and II trials.
Design options:
Standard design: "n vs. n"
New design: "n* + (n − n*) vs. n", with n* = "prior sample size"
How can the historical information be quantified? How much is it worth?

The Meta-Analytic-Predictive Approach
Framework and notation:
Y1,…,YH: historical control data from H trials
θ1,…,θH: control "effects" (unknown)
'Relationship/similarity' (unknown), ranging from no relation to the same effect in every trial
θ*: effect in the new trial (unknown)
Y*: data in the new study (yet to be observed)
Design objective: [θ* | Y1,…,YH]
The method is a simple Bayesian hierarchical model; the notation above describes it.
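A minimal sketch of the meta-analytic-predictive calculation on a normal scale, assuming a known between-trial standard deviation tau and invented historical estimates (the actual approach places a prior on tau and fits the model by MCMC):

```r
## Hedged sketch: MAP predictive distribution with tau fixed for simplicity.

y   <- c(-1.2, -0.9, -1.1, -1.4)   # historical control effects (e.g. log odds)
s   <- c(0.20, 0.25, 0.15, 0.30)   # their standard errors
tau <- 0.2                          # assumed between-trial SD

# Posterior for the population mean mu (flat prior, known variances)
w      <- 1 / (s^2 + tau^2)
mu_hat <- sum(w * y) / sum(w)
v_mu   <- 1 / sum(w)

# Predictive distribution for the control effect in the NEW trial:
# theta* | Y1..YH ~ N(mu_hat, v_mu + tau^2)
pred_mean <- mu_hat
pred_sd   <- sqrt(v_mu + tau^2)

# Rough "prior sample size" n*: predictive precision expressed in units of
# one observation's sampling variance (sigma = 1 is an assumption)
sigma  <- 1
n_star <- sigma^2 / pred_sd^2

c(pred_mean = pred_mean, pred_sd = pred_sd, n_star = n_star)
```

The design choice is visible in the formulas: a large tau (historical trials barely related) inflates the predictive variance and drives n* toward zero, while tau near zero treats the historical controls as fully exchangeable with the new trial.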

Example: meta-analytic-predictive approach to form priors
Application: a random-effects meta-analysis provides prior information for the control group in a new study, corresponding to a prior sample size n*.

Bayesian setup using historical control data
[Flow diagram] Meta-analysis of historical data: observed control response rates from historical trials 1-8 are combined in a meta-analysis, yielding a predictive distribution of the control response rate in a new study.
Study analysis: that predictive distribution serves as the prior distribution of the control response rate and is combined with the observed control data to give the posterior distribution of the control response rate. A prior distribution of the drug response rate is combined with the observed drug data to give the posterior distribution of the drug response rate. Together these yield the posterior distribution of the difference in response.

Utilization in a quick kill quick win PoC design
With N = 60, 2:1 Active:Placebo, and interim analyses after 20 and 40 patients:
Positive PoC if P(d ≥ 0.2) ≥ 70% at the 1st interim, ≥ 50% at the 2nd interim, ≥ 50% at the final analysis.
Negative PoC if P(d < 0.2) ≥ 90% at the 1st interim, ≥ 90% at the 2nd interim, > 50% at the final analysis.
Operating characteristics (pPlacebo = 0.15, 10,000 runs):
Scenario | First interim (stop for efficacy / futility) | Second interim (stop for efficacy / futility) | Final (claim efficacy / fail) | Overall power
d = 0 | 1.6% / 49.0% | 1.4% / 26.0% | 0.2% / 21.9% | 3.2%
d = 0.2 | 33.9% / 5.1% | 27.7% / 3.0% | 8.8% / 21.6% | 70.4%
d = 0.5 | 96.0% / 0.0% | 4.0% / — | — / — | 100.0%
At each analysis the treatment effect is compared with a clinically relevant threshold via a posterior probability statement, giving multiple opportunities to kill the study or move development forward. The frequentist operating characteristics of the design are then examined for fine-tuning.
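A hedged sketch of how operating characteristics like the table above can be simulated. The per-look sample splits, the Beta(1,1) priors, and the response probabilities are assumptions for illustration (the actual design could use the MAP prior for the control arm); the decision thresholds are the ones read off the slide:

```r
## Simulating operating characteristics of the quick-kill/quick-win design.

set.seed(1)
sim_trial <- function(p_trt, p_pbo = 0.15, npost = 4000) {
  trt <- rbinom(40, 1, p_trt)                    # responses, active arm
  pbo <- rbinom(20, 1, p_pbo)                    # responses, placebo arm
  looks    <- rbind(c(13, 7), c(27, 13), c(40, 20))  # assumed cumulative n per arm
  win_thr  <- c(0.70, 0.50, 0.50)                # positive PoC: P(d >= 0.2) >= ...
  kill_thr <- c(0.90, 0.90, 0.50)                # negative PoC: P(d < 0.2) >= ...
  for (k in 1:3) {
    xt <- sum(trt[1:looks[k, 1]]); xc <- sum(pbo[1:looks[k, 2]])
    # Posterior P(d >= 0.2) under independent Beta(1,1) priors, by simulation
    d <- rbeta(npost, 1 + xt, 1 + looks[k, 1] - xt) -
         rbeta(npost, 1 + xc, 1 + looks[k, 2] - xc)
    p_win <- mean(d >= 0.2)
    if (p_win >= win_thr[k])     return(paste0("win_look", k))
    if (1 - p_win >= kill_thr[k]) return(paste0("kill_look", k))
  }
  "no_decision"   # essentially unreachable: the final rules are complementary
}

# Approximate the d = 0.2 scenario of the table (p_trt = 0.35 vs p_pbo = 0.15)
res <- replicate(2000, sim_trial(p_trt = 0.35))
round(table(res) / length(res), 3)
```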

R package available for design investigation

Subgroup Analysis

Introduction to Subgroup analysis
For biological reasons, treatments may be more effective in some populations of patients than others: risk factors, genetic factors, demographic factors.
This motivates interest in statistical methods that can explore and identify potential subgroups of interest.

Challenges with exploratory subgroup analysis: random high bias (Fleming, 2010)
Effects of 5-Fluorouracil Plus Levamisole on Patient Survival Presented Overall and Within Subgroups, by Sex and Age*
Hazard ratio (risk of mortality):
Analysis | North Central Treatment Group Study (n = 162) | Intergroup Study #0035 (n = 619)
All patients | 0.72 | 0.67
Female | 0.57 | 0.85
Male | 0.91 | 0.50
Young | 0.60 | 0.77
Old | 0.87 | 0.59

Assumptions to deal with extremes (Jones et al, 2011)
Similar methods to those used when combining historical data; however, the focus is on the individual subgroup parameters g1,…,gG rather than the prediction of a new subgroup.
Unrelated parameters, g1,…,gG (u): assumes a different treatment effect in each subgroup.
Equal parameters, g1 = … = gG (c): assumes the same treatment effect in each subgroup.
Compromise (r): effects are similar/related to a certain degree.
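A minimal sketch of the three structures on a normal scale, with the between-subgroup standard deviation tau fixed by hand: a very large tau approximates the unrelated analysis (u), a very small tau the equal analysis (c), and an intermediate tau the compromise (r). Estimates and standard errors are invented:

```r
## Shrinkage across subgroups under a normal hierarchical model.

g_hat <- c(0.57, 0.91, 0.60, 0.87)   # subgroup treatment-effect estimates
se    <- c(0.20, 0.22, 0.18, 0.25)   # their standard errors

shrink <- function(tau) {
  w      <- 1 / (se^2 + tau^2)
  mu_hat <- sum(w * g_hat) / sum(w)       # overall effect (flat prior on mu)
  B      <- se^2 / (se^2 + tau^2)         # shrinkage factor per subgroup
  B * mu_hat + (1 - B) * g_hat            # estimates shrunk toward mu_hat
}

rbind(unrelated  = shrink(1e6),    # effectively no pooling (u)
      compromise = shrink(0.15),   # partial pooling (r)
      equal      = shrink(1e-6))   # effectively complete pooling (c)
```

Running this shows the extreme subgroup estimates pulled toward the overall effect under the compromise structure, which is exactly the adjustment for random high bias discussed on the next slide.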

Comments on shrinkage estimation
This type of approach is sometimes called shrinkage estimation; it attempts to adjust for random high bias.
When relating subgroups, it is often desirable and logical to use structures that allow greater similarity between some subgroups than others.
A variety of possible subgroup structures can be examined to assess robustness.

Subgroup analysis: extension to multiple studies
Data summary from several studies; subgroup analysis in a meta-analytic context.
Efficacy comparison T vs. C, with data from 7 studies.
8 subgroups defined by 3 binary baseline covariates A, B, C, each high (+) or low (−), describing burden of disease (BOD).
Idea: patients with higher BOD at baseline might show better efficacy.

Graphical model: subgroup analysis involving several studies
Y1,…,YS: data from S studies
θ1,…,θS: study-specific parameters, which allow data to be combined from multiple studies
g1,…,gG: subgroup parameters, the main parameters of interest
Various modeling structures can be examined.

Extension to multiple studies
Example 3: sensitivity analyses across a range of subgroup structures defined by 3 binary baseline covariates A, B, C, each high (+) or low (−), describing burden of disease (BOD).

Summary: Subgroup analysis
It is important to distinguish between exploratory and confirmatory subgroup analysis. Exploratory subgroup analysis can be misleading due to random high bias; evidence synthesis techniques that account for similarity among subgroups help adjust for this bias.
Examine a range of subgroup models to assess the robustness of any conclusions.

Conclusions
There is general agreement that good decision making should be based on all relevant information; however, this is not easy to do in a formal, quantitative way.
Evidence synthesis offers fairly well-developed methodologies, has many areas of application, is particularly useful for company-internal decision making (we have used and will increasingly use evidence synthesis in our phase I and II trials), and has become an important tool when making public health policy decisions.

References

Evidence Synthesis/Meta-Analysis
DerSimonian, Laird (1986). Meta-analysis in clinical trials. Controlled Clinical Trials, 7: 177-88
Gould (1991). Using prior findings to augment active-controlled trials and trials with small placebo groups. Drug Information Journal, 25: 369-380
Normand (1999). Meta-analysis: formulating, evaluating, combining, and reporting (Tutorial in Biostatistics). Statistics in Medicine, 18: 321-359. See also Letters to the Editor by Carlin (2000) 19: 753-59, and Stijnen (2000) 19: 759-761
Spiegelhalter et al. (2004); see main reference
Stangl, Berry (eds) (2000). Meta-Analysis in Medicine and Health Policy. Marcel Dekker
Sutton, Abrams, Jones, Sheldon, Song (2000). Methods for Meta-analysis in Medical Research. John Wiley & Sons
Trelle et al. (2011). Cardiovascular safety of non-steroidal anti-inflammatory drugs: network meta-analysis. BMJ, 342: c7086

Historical Controls
Ibrahim, Chen (2000). Power prior distributions for regression models. Statistical Science, 15: 46-60
Neuenschwander, Branson, Spiegelhalter (2009). A note on the power prior. Statistics in Medicine, 28: 3562-3566
Neuenschwander, Capkun-Niggli, Branson, Spiegelhalter (2010). Summarizing historical information on controls in clinical trials. Clinical Trials, 7: 5-18
Pocock (1976). The combination of randomized and historical controls in clinical trials. Journal of Chronic Diseases, 29: 175-88
Spiegelhalter et al. (2004); see main reference
Thall, Simon (1990). Incorporating historical control data in planning phase II studies. Statistics in Medicine, 9: 215-28

Subgroup Analyses
Berry, Berry (2004). Accounting for multiplicities in assessing drug safety: a three-level hierarchical mixture model. Biometrics, 60: 418-26
Davis, Leffingwell (1990). Empirical Bayes estimates of subgroup effects in clinical trials. Controlled Clinical Trials, 11: 37-42
Dixon, Simon (1991). Bayesian subgroup analysis. Biometrics, 47: 871-81
Fleming (2010). Clinical trials: discerning hype from substance. Annals of Internal Medicine, 153: 400-406
Hodges, Cui, Sargent, Carlin (2007). Smoothing balanced single-error-term analysis of variance. Technometrics, 49: 12-25
Jones, Ohlssen, Neuenschwander, Racine, Branson (2011). Bayesian models for subgroup analysis in clinical trials. Clinical Trials, 8: 129-143
Louis (1984). Estimating a population of parameter values using Bayes and empirical Bayes methods. JASA, 79: 393-98
Pocock, Assman, Enos, Kasten (2002). Subgroup analysis, covariate adjustment and baseline comparisons in clinical trial reporting: current practice and problems. Statistics in Medicine, 21: 2917-2930
Spiegelhalter et al. (2004); see main reference
Thall, Wathen, Bekele, Champlin, Baker, Benjamin (2003). Hierarchical Bayesian approaches to phase II trials in diseases with multiple subtypes. Statistics in Medicine, 22: 763-80

Poisson network meta-analysis model
Model extension to K treatments: Lu, Ades (2004). Combination of direct and indirect evidence in mixed treatment comparisons. Statistics in Medicine, 23: 3105-3124
Different choices for the µ's and δ's: they can be common (over studies), fixed (unconstrained), or "random".
Note: random δ's imply a (K − 1)-dimensional random-effects distribution.
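The model formula on this slide did not survive extraction. A standard formulation along the lines of Lu and Ades (2004) is given below as a hedged reconstruction, not necessarily the slide's exact parameterization:

```latex
% Poisson network meta-analysis (hedged reconstruction, Lu & Ades style)
% y_{sk}: events in study s, arm k;  E_{sk}: exposure (e.g. patient-years)
\begin{align*}
y_{sk} &\sim \mathrm{Poisson}(\lambda_{sk} E_{sk}) \\
\log \lambda_{sk} &= \mu_s + \delta_{sk}, \qquad \delta_{s b_s} = 0
  \quad \text{($b_s$: baseline arm of study $s$)} \\
\delta_{sk} &\sim N\!\left(d_{t_{sk}} - d_{t_{s b_s}},\, \sigma^2\right),
  \qquad d_1 = 0
\end{align*}
```

Here the µ's are the study baselines and the δ's the within-study treatment contrasts; taking the δ's as random rather than fixed induces the (K − 1)-dimensional random-effects distribution noted above.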

Acknowledgements
Stuart Bailey, Björn Bornkamp, Beat Neuenschwander, Heinz Schmidli, Min Wu, Andrew Wright