Evaluating Poolability of Continuous and Binary Endpoints Across Centers
Roseann White, Clin/Reg Fellow, Abbott Vascular – Cardiac Therapies
Evaluating Poolability – 09/29/2006 – Company Confidential © 2006 Abbott

Background
– MA in Statistics from UC Berkeley
– 2nd-generation statistician (my mother specialized in econometrics, i.e., statistics for economics)
– 15 years as a professional statistician in the biotechnology industry, providing statistical support for research, analytical method development, diagnostics, clinical work, and manufacturing
– Favorite quote: "Statisticians speak for the data when the data can't speak for themselves."
Evaluating Poolability Across Centers
– Key issues
– Current methods
– Proposed alternative method
– Potential Bayesian approach
Key Issues
– Centers are not chosen at random
  – Sponsors try to include centers that will represent the patient population across the geography
  – Often a region has no centers, or only centers that receive very few of the type of patients needed
– Clinical trials tend to initiate more centers than they potentially need, to
  – Accelerate enrollment
  – Involve key opinion leaders
  – Provide visibility for the product
– In device trials, it's often difficult to blind the clinician to the product being used
Key Issues (cont.)
– Assessing poolability is rarely discussed prospectively from both a clinical and a statistical perspective
  – No definition of what constitutes a clinically meaningful difference among sites
  – When assessing poolability, should centers with small numbers of patients be combined
    – based on the size of the center?
    – based on the geographical region in which the center is located?
    – based on standard practices?
Current Methodology
– Centers that have fewer than a pre-specified number of patients are combined into a single pooled center
– The interaction effect between center and treatment is tested: if the p-value is less than a pre-specified value, there is evidence of lack of poolability of the sites
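The slides do not specify which interaction test is used; in practice it is often a treatment-by-center term in a logistic model. A lighter-weight sketch of the same idea is a Cochran-style Q test of whether per-grouping risk differences are homogeneous. The three center groupings below are made-up illustrative data, not numbers from the trial:

```python
import math

def homogeneity_q(centers):
    """Cochran-style Q statistic for homogeneity of the treatment effect
    (risk difference) across center groupings.
    centers: list of (events_T1, n_T1, events_T2, n_T2) tuples."""
    diffs, weights = [], []
    for e1, n1, e2, n2 in centers:
        p1, p2 = e1 / n1, e2 / n2
        # large-sample variance of the per-grouping risk difference
        var = p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2
        diffs.append(p1 - p2)
        weights.append(1.0 / var)
    d_bar = sum(w * d for w, d in zip(weights, diffs)) / sum(weights)
    # Under homogeneity, Q ~ chi-square with (k - 1) degrees of freedom
    return sum(w * (d - d_bar) ** 2 for w, d in zip(weights, diffs))

# Three hypothetical center groupings: (events, n) per treatment arm
centers = [(14, 150, 13, 150), (12, 140, 15, 145), (20, 160, 11, 155)]
q = homogeneity_q(centers)
# With k = 3 groupings, df = 2, the chi-square survival function has the
# closed form exp(-Q / 2), so no special function library is needed here
p_value = math.exp(-q / 2)
```

A small p-value flags heterogeneity across groupings, which is exactly the threshold-choice dilemma discussed on the next slide.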
Challenges to the Current Methodology
– Reflexive: does not take into consideration whether a clinically meaningful interaction would be detected
– Combining all the smaller sites into one may dilute regional differences
– What p-value threshold does one choose?
  – < 0.05, to pick up only extreme differences, i.e., increase specificity and decrease sensitivity
  – > 0.05, to increase sensitivity but decrease specificity
Proposed Alternative Process
– Prospectively define what constitutes a clinically meaningful interaction
– Determine the sample size necessary to detect that difference
[Figure: illustrative plot of the measure at Site 1 vs. Site 2]
Proposed Alternative Process (cont.)
– Combine smaller centers (where enrollment is too low to detect differences) with larger similar centers, where "similar" is pre-specified:
  – Geographically similar (same country or region)
  – Same patient population (urban vs. rural)
  – Same standard practices (e.g., concomitant medication use)
– If center groupings are still too small, use the bootstrap method of resampling to draw the appropriate number from each site
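The bootstrap step above can be sketched as resampling a grouping's binary outcomes, with replacement, to the pre-specified size. The grouping data (50 patients, 5 events) and target of 150 are illustrative assumptions:

```python
import random

def bootstrap_rates(outcomes, target_size, n_boot=1000, seed=1):
    """Resample a center grouping's binary outcomes with replacement up
    (or down) to a pre-specified target size; return the event rate of
    each bootstrap replicate."""
    rng = random.Random(seed)
    return [sum(rng.choices(outcomes, k=target_size)) / target_size
            for _ in range(n_boot)]

# Hypothetical small grouping: 50 patients with 5 events (10% rate),
# resampled to a pre-specified minimum grouping size of 150
outcomes = [1] * 5 + [0] * 45
rates = bootstrap_rates(outcomes, target_size=150)
mean_rate = sum(rates) / len(rates)  # centers near the observed 10% rate
```

Note that resampling cannot add information: the replicates inherit whatever noise the 50 observed patients carry, which is why very small groupings remain problematic (see the simulation limitation two slides ahead).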
Example – Binary Endpoint
– Primary endpoint: non-inferiority in outcome rate
  – Assumptions: T1 = 9%, T2 = 9%, margin = 5%
  – N = 1400
– Clinically meaningful interaction between treatment groups: the difference between the treatment groups varies by more than twice the non-inferiority margin
– Minimum grouping size = 150
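As a rough check of these assumptions, the standard normal-approximation sample-size formula for non-inferiority between two proportions can be sketched as below. The one-sided alpha of 0.05 and 90% power are assumptions (the slide does not state them), and the resulting total of about 1,124 sits below the slide's N = 1400, which presumably also folds in factors such as attrition:

```python
import math

def ni_n_per_arm(p_c, p_t, margin, z_alpha, z_beta):
    """Normal-approximation sample size per arm for a non-inferiority
    comparison of two proportions on the risk-difference scale."""
    var = p_c * (1 - p_c) + p_t * (1 - p_t)
    return math.ceil((z_alpha + z_beta) ** 2 * var
                     / (p_c - p_t + margin) ** 2)

# Slide assumptions: 9% vs. 9% event rates, 5% margin; the z-values
# (1.645 for one-sided alpha 0.05, 1.282 for 90% power) are assumed
n_arm = ni_n_per_arm(0.09, 0.09, 0.05, z_alpha=1.645, z_beta=1.282)
total = 2 * n_arm  # about 1,124 before attrition or other adjustments
```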
Bootstrap
– Use the bootstrap when
  – the actual achievable group size is lower than needed
  – the actual achievable group size is greater than needed
– Simulation results:

  N     | Actual grouping size | Needed grouping size | % p < 0.05
  1000  | 25                   | 150                  | 0.15
  1000  | 50                   | 150                  | 0.09
  1000  | 100                  | 150                  | 0.054

– Limitation: for binary outcomes, grouping sizes of less than 50 can lead to misleading results
Does Bayesian Analysis Play a Role in Determining Poolability?
Modify the approach presented by Jen-Pei Liu et al., "A Bayesian noninferiority approach to evaluation of bridging studies," J Biopharm Stat. 2004 May;14(2):291–300.
– Step 1: Develop a prior for the treatment difference based on the largest center grouping
– Step 2: Use the data from the next-largest center grouping and the prior distribution to obtain the mean and variability of the posterior distribution
– Step 3: Evaluate the posterior probability that the difference is within some clinically acceptable limit
– Step 4: If the posterior probability is sufficiently large, say 80%, conclude similarity between the two center groupings
– Step 5: Repeat the same process with the next center grouping
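Steps 1–4 can be sketched with conjugate Beta priors per arm and Monte Carlo sampling of the posterior difference. All counts, the prior, and the 0.10 limit (twice the 5% margin) below are hypothetical, and the Beta-Binomial parameterization is one plausible reading of the approach rather than the authors' exact model:

```python
import random

def prob_within_limit(prior, arm1, arm2, limit, n_draws=20000, seed=7):
    """Monte Carlo estimate of P(|p1 - p2| <= limit | data), where each
    arm's event rate has a Beta prior (built from the largest grouping)
    updated with (events, n) observed in the next grouping."""
    a1, b1, a2, b2 = prior
    e1, n1 = arm1
    e2, n2 = arm2
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_draws):
        p1 = rng.betavariate(a1 + e1, b1 + n1 - e1)  # posterior draw, arm 1
        p2 = rng.betavariate(a2 + e2, b2 + n2 - e2)  # posterior draw, arm 2
        hits += abs(p1 - p2) <= limit
    return hits / n_draws

# Hypothetical prior from the largest grouping (27/300 events per arm),
# updated with the next grouping (14/150 vs. 13/150 events)
prior = (27, 273, 27, 273)
prob = prob_within_limit(prior, (14, 150), (13, 150), limit=0.10)
similar = prob >= 0.80  # Step 4 threshold from the slide
```

Step 5 would then carry the updated posterior forward as the prior for the next center grouping.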
Conclusion
– Pre-specify the clinically meaningful difference up front
– Group smaller sites by commonalities, not by size
– If a group size is smaller, or much larger, than needed, the bootstrap is a potential solution, but it needs more investigation
– Explore a Bayesian approach to evaluating poolability