Evaluating Poolability of Continuous and Binary Endpoints Across Centers
Roseann White, Clin/Reg Fellow, Abbott Vascular – Cardiac Therapies
Background
- MA in Statistics from UC Berkeley
- 2nd-generation statistician (my mother specialized in econometrics, i.e. statistics for economics)
- 15 years as a professional statistician in the biotechnology industry, providing statistical support for research, analytical method development, diagnostics, clinical, and manufacturing
- Favorite quote: "Statisticians speak for the data when the data can't speak for themselves"
Evaluating Poolability 09/29/2006 Company Confidential © 2006 Abbott
Evaluating Poolability across Centers
- Key issues
- Current methods
- Proposed alternative method
- Potential Bayesian approach
Key Issues
- Centers are not chosen at random. Sponsors try to include centers that will represent the patient population across the geography, yet often there are no centers, or only centers that receive very few, of the type of patients needed.
- Clinical trials tend to initiate more centers than they potentially need, in order to accelerate enrollment, involve key opinion leaders, and provide visibility for the product.
- In device trials, it is often difficult to "blind" the clinician to the product being used.
Key Issues (cont'd)
- Assessing poolability is rarely discussed prospectively from both a clinical and a statistical perspective.
- There is no definition of what constitutes a clinically meaningful difference among sites.
- When assessing poolability, how should centers with a small number of patients be combined?
  - Based on the size of the center?
  - Based on the geographical region where the center is located?
  - Based on standard practices?
Current Methodology
- Centers that have fewer than a pre-specified number of patients are combined into a single "center."
- The interaction effect between center and treatment is tested: if the p-value is less than a pre-specified value, there is evidence of a lack of poolability across the sites.
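The interaction test above can be illustrated with a deliberately simplified sketch. The full method fits a model with a center-by-treatment interaction term; the version below, assumed for illustration only, reduces to two center groupings with a binary endpoint and tests whether the treatment difference is the same in both via a z-test on the difference of differences. All counts are hypothetical.

```python
from math import sqrt
from statistics import NormalDist

def treatment_diff(e1, n1, e2, n2):
    """Difference in event rates between the two treatment arms of one
    center grouping, plus the variance of that difference."""
    p1, p2 = e1 / n1, e2 / n2
    var = p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2
    return p1 - p2, var

def interaction_p_value(grouping_a, grouping_b):
    """Two-sided p-value for a center-by-treatment interaction between two
    groupings. Each argument is (events_t1, n_t1, events_t2, n_t2)."""
    d_a, v_a = treatment_diff(*grouping_a)
    d_b, v_b = treatment_diff(*grouping_b)
    z = (d_a - d_b) / sqrt(v_a + v_b)
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical example: grouping A shows no treatment difference,
# grouping B shows a large one, so the interaction is flagged.
p = interaction_p_value((9, 100, 9, 100), (25, 100, 5, 100))
```

A small p-value here is evidence that the treatment effect differs between the groupings, i.e. the sites may not be poolable.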
Challenges to the Current Methodology
- Reflexive: it does not take into consideration whether a clinically meaningful interaction could be detected.
- Combining all the smaller sites may dilute regional differences.
- What p-value does one choose?
  - < 0.05 to pick up only extreme differences, i.e. increase specificity and decrease sensitivity
  - > 0.05 to increase sensitivity but decrease specificity
Proposed Alternative Process
- Prospectively define what a clinically meaningful interaction is.
- Determine the sample size necessary to detect that difference.
[Figure: endpoint measure compared across sites (Site 1 vs. Site 2)]
Proposed Alternative Process (cont'd)
- Combine smaller centers (where enrollment is too low to detect differences) with larger "similar" centers, where "similar" is pre-specified:
  - geographically similar (same country or region)
  - same patient population (urban vs. rural)
  - same standard practices (e.g. concomitant medication use)
- If center groupings are still too small, use the bootstrap method of resampling to get the "appropriate number" from each site.
Example – Binary Endpoint
- Primary endpoint: non-inferiority in outcome rate
- Assumptions: T1 = 9%, T2 = 9%, margin = 5%, N = 1400
- Clinically meaningful interaction between treatment groups: the difference between the treatment groups varies by more than twice the non-inferiority margin
- Minimum grouping size = 150
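The N ≈ 1400 figure can be reproduced with the standard sample-size formula for a noninferiority comparison of two proportions. The slide does not state alpha or power, so the one-sided alpha = 0.025 and 90% power below are assumptions chosen because they make the total come out near 1400.

```python
from math import ceil
from statistics import NormalDist

def ni_sample_size(p1, p2, margin, alpha=0.025, power=0.90):
    """Per-group sample size for a noninferiority test of two proportions,
    using the normal-approximation formula."""
    z = NormalDist().inv_cdf
    z_a, z_b = z(1 - alpha), z(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_a + z_b) ** 2 * variance / (margin - (p2 - p1)) ** 2)

# T1 = T2 = 9%, margin = 5%: roughly 690 per group, ~1400 total.
n_per_group = ni_sample_size(0.09, 0.09, 0.05)
```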
Bootstrap
- Use the bootstrap when the actual possible group size is lower than needed, or greater than needed.
- Simulation results:

  N     Actual grouping size   Needed grouping size   % p < 0.05
  1000          25                    150                0.15
  1000          50                    150                0.09
  1000         100                    150                0.054

- Limitation: for binary outcomes, grouping sizes less than 50 can lead to misleading results.
Does Bayesian Analysis Play a Role in Determining Poolability?
Modify the approach presented by Jen-Pei Liu et al. in "A Bayesian noninferiority approach to evaluation of bridging studies," J Biopharm Stat May;14(2):
- Step 1: Develop a prior for the treatment difference based on the largest center grouping.
- Step 2: Use the data from the next largest center grouping and the prior distribution to obtain the mean and variability of the posterior distribution.
- Step 3: Evaluate the posterior probability that the difference is within some clinically acceptable limit.
- Step 4: If the posterior probability is sufficiently large, say 80%, conclude similarity between the two center groupings.
- Step 5: Repeat the same process with the next center grouping.
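Steps 1–3 can be sketched with a normal-normal conjugate update on the treatment difference. This is a hedged approximation, not the cited paper's exact model: the event counts, the 5% acceptable limit, and the normal approximation are all illustrative assumptions.

```python
from math import sqrt
from statistics import NormalDist

def diff_and_var(e1, n1, e2, n2):
    """Observed treatment difference in a center grouping and its variance."""
    p1, p2 = e1 / n1, e2 / n2
    return p1 - p2, p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2

# Step 1: prior from the largest center grouping (hypothetical counts).
prior_mean, prior_var = diff_and_var(27, 300, 26, 300)

# Step 2: update with the next largest grouping via precision weighting.
like_mean, like_var = diff_and_var(14, 150, 13, 150)
post_var = 1 / (1 / prior_var + 1 / like_var)
post_mean = post_var * (prior_mean / prior_var + like_mean / like_var)

# Step 3: posterior probability the difference lies within the limit.
limit = 0.05  # assumed clinically acceptable limit
posterior = NormalDist(post_mean, sqrt(post_var))
prob_similar = posterior.cdf(limit) - posterior.cdf(-limit)

# Step 4: conclude similarity if prob_similar exceeds, say, 0.80.
similar = prob_similar > 0.80
```

With these illustrative counts (both groupings near a 9% event rate in each arm), the posterior mass inside ±5% is high and the groupings would be judged similar.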
Conclusions
- Pre-specify the clinically meaningful difference up front.
- Group smaller sites by commonalities, not by size.
- If the group size is smaller or much larger than needed, the bootstrap is a potential solution, but it needs more investigation.
- Explore a Bayesian approach to evaluating poolability.