Information-Based Sample Size Re-estimation for Binomial Trials


1 Information-Based Sample Size Re-estimation for Binomial Trials. Keaven Anderson, Ph.D.; Amy Ko, MPH; Nancy Liu, Ph.D.; Yevgen Tymofyeyev, Ph.D. Merck Research Laboratories, June 9, 2010.

2 Objective: Fit-for-purpose sample-size adaptation
The examples here are restricted to binary outcomes.
We wish to find a sample size to definitively test for a treatment effect δ ≥ δ_min
- The minimum difference of clinical interest, δ_min, is KNOWN
- It may be a risk difference, relative risk, or odds ratio
- We do not care about SMALLER treatment differences
We desire to limit the sample size to that needed if δ ≠ δ_min
The control group event rate is UNKNOWN
Follow-up allows an interim analysis to terminate the trial without 'substantial' enrollment over-running

3 Case Study 1: the CAPTURE trial (Lancet 1997; 349)
- Unstable angina patients undergoing angioplasty
- 30-day cardiovascular event endpoint
- The control event rate may range from 10% to 20%
- We wish for 80% power to detect δ_min = a 1/3 reduction in relative risk

4 Case Study 2: response rate study
- The control rate may range from 10% to 25%
- δ_min = 10% absolute difference

5 Can we adapt sample size?
Gao, Ware and Mehta [2010] take a conditional power approach to sample size re-estimation
- Presented by Cyrus Mehta at a recent KOL lecture
- Would presumably plan for an effect δ_0 > δ_min and adapt the sample size up if the interim treatment effect is "somewhat promising"
Information-based group sequential design:
1. Estimate the statistical information at each analysis (blinded)
2. Perform the (interim or final) analysis based on the proportion of the final desired information (spending function approach)
3. If the maximum information AND the maximum sample size have not been reached:
   - If the desired information is likely to be reached by the next analysis, make that the final analysis
   - Otherwise, go to the next interim analysis
   - Return to step 1
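A minimal R sketch of the monitoring loop in steps 1-3 above. The function and argument names are mine, and estimate_information() and do_analysis() are hypothetical placeholders for the blinded information estimate and the spending-function test discussed on later slides.

```r
# Schematic of the information-based monitoring loop (illustrative only).
run_info_monitoring <- function(info_target, n_max, n_step,
                                estimate_information, do_analysis) {
  n <- 0
  repeat {
    n <- min(n + n_step, n_max)                   # enroll the next block of patients
    info_now <- estimate_information(n)           # step 1: blinded information estimate
    res <- do_analysis(info_now / info_target)    # step 2: spend alpha at this fraction
    # step 3: stop if a boundary was crossed or no more information can be gained
    if (res$stop || info_now >= info_target || n >= n_max)
      return(list(n = n, info = info_now, result = res))
    # otherwise check whether the next analysis could be the final one
    info_next <- estimate_information(min(n + n_step, n_max))
    final_next <- info_next >= info_target
    # (if final_next is TRUE, the next analysis would be sized as the final look)
  }
}
```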

6 Fair comparison?
The scenarios here are set up so that the information-based design is preferred
- Other scenarios may point to a conditional power approach
- It is important to understand your own situation in order to choose the appropriate method!
- Scenarios where the information-based approach works well are reasonably common
- Blinded approaches such as the information-based design are considered "well understood" in the FDA draft guidance

7 Information-based approach (flow diagram): enroll patients continuously; estimate the current information; analyze the data; estimate the next analysis and either go to the next interim analysis, go to the final analysis (which may be adapted), or stop if done (stopping enrollment).

8 Example adaptation (plot): the target information is re-scaled over the course of the trial, and the sample size is adapted up to finish.

9 Estimating information: Notation
p_C, p_E: event rates in the control and experimental arms
n_C, n_E: numbers of patients in the control and experimental arms; N = n_C + n_E
ξ: proportion of patients randomized to Arm E, so n_E = ξN
δ: treatment difference on the chosen scale (risk difference or log relative risk), with estimate δ̂
I = 1 / Var(δ̂): the statistical information for δ

10 Variance of δ̂ (note: ξ = proportion of patients in Arm E, so n_E = ξN and n_C = (1 - ξ)N)
General form: Var(δ̂) is a sum of per-arm contributions depending on p_C, p_E, n_C and n_E, and the statistical information is I = 1 / Var(δ̂)
Absolute difference (δ = p_C - p_E): Var(δ̂) = p_C(1 - p_C)/n_C + p_E(1 - p_E)/n_E
Relative risk (δ = log(p_E / p_C)): Var(δ̂) = (1 - p_C)/(n_C p_C) + (1 - p_E)/(n_E p_E)
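These formulas translate directly into a few lines of R. A self-contained sketch follows; the function names are illustrative and are not taken from any package.

```r
# Variance of the estimated treatment difference for binomial outcomes,
# following the formulas above. pC, pE: event rates; N: total sample size;
# xi: proportion of patients randomized to Arm E.

var_risk_difference <- function(pC, pE, N, xi = 0.5) {
  nE <- xi * N
  nC <- (1 - xi) * N
  pC * (1 - pC) / nC + pE * (1 - pE) / nE
}

var_log_relative_risk <- function(pC, pE, N, xi = 0.5) {
  nE <- xi * N
  nC <- (1 - xi) * N
  (1 - pC) / (nC * pC) + (1 - pE) / (nE * pE)
}

# Statistical information is the reciprocal of the variance.
info_risk_difference <- function(pC, pE, N, xi = 0.5) {
  1 / var_risk_difference(pC, pE, N, xi)
}

# Example: a CAPTURE-like scenario, 15% control rate, 10% experimental rate,
# 2800 patients total with 1:1 randomization.
info_risk_difference(pC = 0.15, pE = 0.10, N = 2800)
```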

11 Estimating variance and information
Event rates are estimated by:
- Assuming the overall blinded event rate
- Assuming the alternate hypothesis
- Using MLE estimates for the treatment group event rates (as in the Miettinen and Nurminen (M&N) method)
These event rates are then used to estimate the statistical information.
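As a rough illustration of this idea on the risk-difference scale (a simple algebraic split rather than the exact MLE-based estimate used in the talk), one can back out arm-specific rates from the blinded pooled rate under the assumed alternative and invert the variance:

```r
# Illustration of a blinded information estimate on the risk-difference scale.
# p_bar is the blinded pooled event rate; the alternative pC - pE = delta_min
# is assumed; xi is the randomization fraction to Arm E.

blinded_rates <- function(p_bar, delta_min, xi = 0.5) {
  # p_bar = xi * pE + (1 - xi) * pC with pC - pE = delta_min
  pE <- p_bar - (1 - xi) * delta_min
  pC <- pE + delta_min
  list(pC = pC, pE = pE)
}

blinded_information <- function(p_bar, delta_min, N, xi = 0.5) {
  p <- blinded_rates(p_bar, delta_min, xi)
  nE <- xi * N
  nC <- (1 - xi) * N
  1 / (p$pC * (1 - p$pC) / nC + p$pE * (1 - p$pE) / nE)
}

# Example: blinded pooled event rate of 12.5% after 1400 patients,
# assuming delta_min = 5% (absolute).
blinded_information(p_bar = 0.125, delta_min = 0.05, N = 1400)
```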

15 CAPTURE: information-based approach
- Plan for a maximum sample size of 2800
- Analyze every 350 patients
- At each analysis: compute the proportion of the planned information, perform the analysis, and adapt appropriately
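For orientation, a hedged planning sketch using gsDesign's nBinomial() function (assuming the package is installed). It uses the default risk-difference scale with one-sided alpha = 0.025 and 80% power, so it only approximates the relative-risk-based planning above.

```r
# Fixed-design sample sizes for roughly a one-third relative reduction,
# across plausible control event rates.
library(gsDesign)

control_rates <- c(0.10, 0.15, 0.20)
sapply(control_rates, function(pc)
  nBinomial(p1 = pc, p2 = (2 / 3) * pc, alpha = 0.025, beta = 0.2))

# The required sample size varies substantially with the (unknown) control
# rate, which is why the design caps enrollment at 2800 and lets the
# information-based interim analyses decide when enough information has accrued.
```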

19 Case Study 2: response rate study
- The control rate may range from 10% to 25%
- δ_min = 10% absolute difference

20 Execution of the IA strategy: conditional power approach of Gao et al.
At the interim analysis, calculate the observed rate difference (Diff):
- Diff < 3.86%†: stop for futility
- 3.86% ≤ Diff < 16.7%: compute conditional power (CP); if CP ≤ 0.85, re-estimate the sample size, otherwise (CP > 0.85) continue with the planned sample size
- Diff ≥ 16.7%‡: stop for efficacy
† corresponding to a CP of 15%; ‡ corresponding to a P < (value not captured)
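For context, the generic conditional power calculation that such zone rules are applied to can be sketched as follows. This is the standard group sequential formula for a normally distributed test statistic, not the specific Gao, Ware and Mehta procedure, and the numbers in the example are purely illustrative.

```r
# Conditional power of crossing the final efficacy bound, given the interim
# z-statistic, assuming a treatment effect theta (per unit of statistical
# information) for the remainder of the trial.

conditional_power <- function(z_interim, info_interim, info_final, theta,
                              z_final_bound = qnorm(0.975)) {
  info_remaining <- info_final - info_interim
  mean_final <- z_interim * sqrt(info_interim) + theta * info_remaining
  pnorm((mean_final - z_final_bound * sqrt(info_final)) / sqrt(info_remaining))
}

# Example using the "current trend" convention (theta set to the interim
# estimate z / sqrt(I)), with illustrative information values.
z1 <- 1.5; I1 <- 200; I2 <- 400
conditional_power(z_interim = z1, info_interim = I1, info_final = I2,
                  theta = z1 / sqrt(I1))
```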

21 Overall power of the study: IA without SSR vs. IA with SSR
The initial sample size is 289 per group in each case; the maximum possible sample size is 578 per group (2 × 289, the cap for the SSR).
The table compares, for NSAIDS / Drug A response-rate scenarios of 10%/20%, 15%/25%, 20%/30% and 25%/35%, the expected sample size per group (E(N)†) and the power of the gsDesign (efficacy, futility) design against the adaptive design (gsDesign + SSR); the power values shown include 90.0%, 92.4%, 78.2%, 73.3%, 86.8%, 81.9% and 78.0% (the cell-by-cell assignment is not reproduced here).
† E(N) = expected sample size, i.e., the average sample size for such a design; the actual sample size of a given study will vary.

22 Information-based approach

24 Information-based approach
- Maximum sample size of 1100
- Plan analyses at 200, 400, 600, 800, and 1100 patients
- Adapt assuming a target δ_min = 0.10 (absolute response-rate improvement)
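A small sketch of how the information accrued at these planned looks depends on the unknown control rate. The alpha and power used for the fixed-design information target are assumptions of the sketch, not values taken from the slides.

```r
# Information fraction at each planned analysis for case study 2, under two
# possible control response rates, assuming a 10% absolute improvement and
# 1:1 randomization.

info_at_n <- function(N, pC, delta_min = 0.10, xi = 0.5) {
  pE <- pC + delta_min            # response rates, so higher is better here
  nE <- xi * N
  nC <- (1 - xi) * N
  1 / (pC * (1 - pC) / nC + pE * (1 - pE) / nE)
}

# Fixed-design information target (alpha and power are assumptions here).
info_target <- function(delta_min = 0.10, alpha = 0.025, beta = 0.1) {
  ((qnorm(1 - alpha) + qnorm(1 - beta)) / delta_min)^2
}

looks <- c(200, 400, 600, 800, 1100)
round(sapply(c(0.10, 0.25), function(pc) info_at_n(looks, pc)) / info_target(), 2)
```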

27 Some comments
Computations were performed using the gsDesign R package
- Available on CRAN
- For the CAPTURE example, 10,000 simulations were performed for a large number of scenarios; parallel computing was easily implemented using Rmpi (free) or Parallel R (REvolution Computing)
- For the smaller number of scenarios used in the second case study, sequential processing on a PC was fine
- My objective is to produce a vignette making this method available
Technical issues
- Various issues such as over-running and "reversing information time" need to be considered
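As a sketch of the parallelization idea using base R's parallel package rather than Rmpi or Parallel R; simulate_one_trial() and the scenario grid are hypothetical placeholders, not the authors' simulation code.

```r
library(parallel)

# Hypothetical per-scenario simulation: replace with a full simulation of one
# information-based group sequential trial under the given scenario.
simulate_one_trial <- function(scenario) {
  # ... simulate one trial under 'scenario'; return summary statistics ...
  c(reject = NA_real_, n_final = NA_real_)
}

scenarios <- expand.grid(control_rate = seq(0.10, 0.20, by = 0.025),
                         relative_risk = c(2 / 3, 1))

# One scenario per task; each task runs its own 10,000 replications.
results <- mclapply(seq_len(nrow(scenarios)), function(i) {
  reps <- replicate(10000, simulate_one_trial(scenarios[i, ]))
  rowMeans(reps)
}, mc.cores = max(1L, detectCores() - 1L))
```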

28 Objective: Fit-for-purpose sample-size adaptation
The examples here are restricted to binary outcomes.
We wish to find a sample size to definitively test for a treatment effect δ ≥ δ_min
- The minimum clinical difference of interest, δ_min, is KNOWN
- It may be a risk difference, relative risk, or odds ratio
- We do not care about SMALLER treatment differences
We desire to limit the sample size to that needed if δ ≠ δ_min
The control group event rate is UNKNOWN
Follow-up allows an interim analysis to terminate the trial without 'substantial' enrollment over-running

29 Summary
The information-based group sequential design for binary outcomes is
- Effective at adapting the maximum sample size to provide power for a treatment effect δ ≥ δ_min
- Able, through its group sequential aspects, to terminate early for futility or for a large efficacy difference
Results were demonstrated for absolute difference and relative risk examples
If you can posit a minimum effect size of interest, this may be an effective adaptation method