Evaluating Poolability of Continuous and Binary Endpoints Across Centers
Roseann White, Clin/Reg Fellow, Abbott Vascular – Cardiac Therapies
Evaluating Poolability, 09/29/2006, Company Confidential © 2006 Abbott

Background
– MA in Statistics from UC Berkeley
– 2nd-generation statistician (my mother specialized in econometrics, i.e., statistics for economics)
– 15 years as a professional statistician in the biotechnology industry, providing statistical support for research, analytical method development, diagnostics, clinical, and manufacturing
– Favorite quote: "Statisticians speak for the data when the data can't speak for themselves."
Evaluating Poolability across Centers
– Key issues
– Current methods
– Proposed alternative method
– Potential Bayesian approach
Key Issues
– Centers are not chosen at random
  – Sponsors try to include centers that will represent the patient population across the geography
  – Often there are no centers, or only centers that receive very few, of the type of patients needed
– Clinical trials tend to initiate more centers than they potentially need, to:
  – accelerate enrollment
  – involve key opinion leaders
  – provide visibility to the product
– In device trials, it's often difficult to blind the clinician to the product being used
Key Issues (cont.)
– Assessing poolability is rarely discussed prospectively from both a clinical and a statistical perspective
  – No definition of what constitutes a clinically meaningful difference among sites
  – When assessing poolability for centers with a small number of patients, how should they be combined?
    – Based on the size of the center?
    – Based on the geographical region where the center is located?
    – Based on standard practices?
Current Methodology
– Centers that have fewer than a pre-specified number of patients are combined into a single pooled center
– The interaction effect between center and treatment is tested: if the p-value is less than a pre-specified value, there is evidence of a lack of poolability of the sites
Challenges to the Current Methodology
– Reflexive: does not take into consideration whether a clinically meaningful interaction would be detected
– Combining all the smaller sites may dilute regional differences
– What p-value does one choose?
  – < 0.05 picks up only extreme differences, i.e., increases specificity and decreases sensitivity
  – > 0.05 increases sensitivity but decreases specificity
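The slides describe the interaction test only at a high level. As a sketch, one standard realization for a binary endpoint is a Cochran-style homogeneity statistic on the per-center treatment risk differences, with a pure-Python chi-square tail probability; the function names and the choice of statistic here are illustrative assumptions, not the author's stated method.

```python
import math

def _gamma_p(a, x, eps=1e-12, itmax=1000):
    """Regularized lower incomplete gamma P(a, x) by series expansion."""
    if x <= 0:
        return 0.0
    term = 1.0 / a
    total = term
    denom = a
    for _ in range(itmax):
        denom += 1.0
        term *= x / denom
        total += term
        if abs(term) < abs(total) * eps:
            break
    return total * math.exp(-x + a * math.log(x) - math.lgamma(a))

def chi2_sf(x, df):
    """Survival function (upper tail) of the chi-square distribution."""
    return 1.0 - _gamma_p(df / 2.0, x / 2.0)

def interaction_test(centers):
    """Cochran-style test of homogeneity of treatment risk differences.

    centers: list of (events_t1, n_t1, events_t2, n_t2) per center grouping;
    assumes each grouping has at least one event and one non-event.
    Returns (Q statistic, p-value); a small p-value suggests a non-zero
    center-by-treatment interaction, i.e., lack of poolability.
    """
    diffs, weights = [], []
    for e1, n1, e2, n2 in centers:
        p1, p2 = e1 / n1, e2 / n2
        var = p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2
        diffs.append(p1 - p2)
        weights.append(1.0 / var)
    dbar = sum(w * d for w, d in zip(weights, diffs)) / sum(weights)
    q = sum(w * (d - dbar) ** 2 for w, d in zip(weights, diffs))
    return q, chi2_sf(q, len(centers) - 1)
```

This makes the slide's critique concrete: the p-value answers "is any interaction detectable?", not "is the interaction clinically meaningful?".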
Proposed Alternative Process
– Prospectively define what constitutes a clinically meaningful interaction
– Determine the sample size necessary to detect that difference
[Figure: measure compared between Site 1 and Site 2]
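The "determine sample size" step can be sketched with the standard normal-approximation formula for comparing two proportions. The constants below (one-sided alpha = 0.05, 80% power) are my assumptions, not stated on the slides, though with the example's 9% base rate and a shift of twice the 5% margin they land near the minimum grouping size of 150 quoted later.

```python
import math

def n_per_arm(p1, p2, z_alpha=1.6449, z_power=0.8416):
    """Normal-approximation sample size per arm to detect p1 vs p2.

    z defaults correspond to one-sided alpha = 0.05 and 80% power.
    """
    var = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_power) ** 2 * var / (p1 - p2) ** 2)
```

For example, detecting a shift of twice a 5% non-inferiority margin from a 9% rate, `n_per_arm(0.09, 0.19)`, gives 146 per arm.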
Proposed Alternative Process (cont.)
– Combine smaller centers (where enrollment is too low to detect differences) with larger similar centers, where "similar" is pre-specified:
  – geographically similar (same country or region)
  – same patient population (urban vs. rural)
  – same standard practices (concomitant medication use)
– If center groupings are still too small, use the bootstrap method of resampling to get the appropriate number from each site
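The resampling step can be sketched as below: draw with replacement from a grouping's observed 0/1 outcomes until the pre-specified grouping size is reached, and summarize the resampled event rate. The function name and summary statistics are illustrative assumptions; the slides give no implementation.

```python
import random

def bootstrap_group_rate(outcomes, target_n, n_boot=1000, seed=1):
    """Resample a grouping's 0/1 outcomes with replacement to the
    pre-specified grouping size and summarize the resampled event rate.

    Works whether the observed grouping is smaller or larger than target_n,
    matching the two cases on the Bootstrap slide.
    """
    rng = random.Random(seed)
    rates = sorted(
        sum(rng.choices(outcomes, k=target_n)) / target_n
        for _ in range(n_boot)
    )
    return {
        "mean": sum(rates) / n_boot,
        "lower95": rates[int(0.025 * n_boot)],
        "upper95": rates[int(0.975 * n_boot) - 1],
    }

# e.g. a grouping of 80 patients with 8 events, resampled to a needed size of 150:
# bootstrap_group_rate([1] * 8 + [0] * 72, target_n=150)
```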
Example – Binary Endpoint
– Primary endpoint: non-inferiority in outcome rate
  – Assumptions: T1 = 9%, T2 = 9%, margin = 5%
  – N = 1400
– Clinically meaningful interaction between treatment groups: the difference between the treatment groups varies by more than twice the non-inferiority margin
– Minimum grouping size = 150
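The clinically meaningful interaction rule above can be sketched as a simple screen. One plausible reading, assumed here, is that a grouping's treatment-rate difference is compared against the pooled difference across all groupings; the function name and data layout are hypothetical.

```python
def flag_meaningful_interaction(groupings, ni_margin=0.05):
    """Flag groupings whose treatment-rate difference departs from the pooled
    difference by more than twice the non-inferiority margin.

    groupings: dict name -> (events_t1, n_t1, events_t2, n_t2)
    """
    e1 = sum(g[0] for g in groupings.values())
    n1 = sum(g[1] for g in groupings.values())
    e2 = sum(g[2] for g in groupings.values())
    n2 = sum(g[3] for g in groupings.values())
    pooled = e1 / n1 - e2 / n2
    limit = 2 * ni_margin  # twice the NI margin, per the slide
    return {
        name: abs(ge1 / gn1 - ge2 / gn2 - pooled) > limit
        for name, (ge1, gn1, ge2, gn2) in groupings.items()
    }
```

Unlike the p-value screen, this flags only departures larger than the pre-specified clinical limit, regardless of how many patients a grouping enrolled.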
Bootstrap
– Use the bootstrap when:
  – the actual possible group size is lower than needed
  – the actual possible group size is greater than needed
– Simulation results: [table with columns N, Actual Grouping Size, Needed Grouping Size, and % p < …; the values did not survive extraction]
– Limitation: for binary outcomes, grouping sizes of less than 50 can lead to misleading results
Does Bayesian Play a Role in Determining Poolability?
Modify the approach presented by Jen-Pei Liu et al. in "A Bayesian noninferiority approach to evaluation of bridging studies" (J Biopharm Stat. May;14(2)):
– Step 1: Develop a prior for the treatment difference based on the largest center grouping
– Step 2: Use the data from the next-largest center grouping and the prior distribution to obtain the mean and variability of the posterior distribution
– Step 3: Evaluate the posterior probability that the difference is within some clinically acceptable limit
– Step 4: If the posterior probability is sufficiently large, say 80%, conclude similarity between the two center groupings
– Step 5: Repeat the same process with the next center grouping
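Steps 1 through 4 can be sketched with a conjugate normal approximation to the rate difference. The grouping data, function names, and the use of P(|difference| < limit) as the similarity criterion are illustrative assumptions for this deck's workflow, not Liu et al.'s exact formulation.

```python
import math

def _normal_cdf(x, mu, sd):
    """Cumulative distribution function of a normal distribution."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sd * math.sqrt(2.0))))

def _rate_diff(e1, n1, e2, n2):
    """Observed treatment-rate difference and its standard error."""
    p1, p2 = e1 / n1, e2 / n2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return p1 - p2, se

def posterior_similarity(prior_group, next_group, limit=0.10):
    """Normal-normal update: prior from the largest center grouping,
    data from the next grouping (each as (e1, n1, e2, n2)).
    Returns the posterior probability that |difference| < limit;
    if large (say > 0.80), conclude the groupings are similar."""
    d0, s0 = _rate_diff(*prior_group)
    d1, s1 = _rate_diff(*next_group)
    w0, w1 = 1.0 / s0**2, 1.0 / s1**2        # precision-weighted update
    mu = (w0 * d0 + w1 * d1) / (w0 + w1)     # posterior mean
    sd = math.sqrt(1.0 / (w0 + w1))          # posterior standard deviation
    return _normal_cdf(limit, mu, sd) - _normal_cdf(-limit, mu, sd)
```

Step 5 would then treat this posterior as the prior for the next grouping and repeat.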
Conclusion
– Pre-specify the clinically meaningful difference up front
– Group smaller sites by commonalities, not size
– If the group size is smaller or much larger than needed, a potential solution is the bootstrap, but it needs more investigation
– Explore a Bayesian approach to evaluating poolability