Paul Gallo 2019 Joint Statistical Meetings July 31, 2019


Implementing Effective DMC Decision-Making in Complex Clinical Trial Designs
Paul Gallo
2019 Joint Statistical Meetings, July 31, 2019

Expanding utilization of DMCs
The use of Data Monitoring Committees has increased in recent years, and so has the scope of their potential recommendations, e.g.:
- stopping or modifying a trial for ethical reasons based on safety concerns
- stopping for demonstrated efficacy (usually based on a group sequential scheme)
- stopping for lack of effect, or futility
- implementing an adaptive design scheme
These must be implemented efficiently while maintaining the integrity and interpretability of trial results.
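The group sequential stopping mentioned above can be made concrete. The Python sketch below (my own illustration, not part of the talk) uses Monte Carlo simulation to estimate the overall one-sided type I error of a two-look design; the boundary values 2.80 / 1.98 are illustrative O'Brien-Fleming-type numbers, and a real design would derive exact boundaries from an alpha-spending function.

```python
import math
import random

def simulate_type1_error(boundaries, info_fracs, n_sims=20000, seed=1):
    """Monte Carlo estimate of the overall one-sided type I error of a
    group sequential design under the null (no treatment effect).

    boundaries : critical z-values at each look
    info_fracs : cumulative information fractions at each look
    """
    random.seed(seed)
    rejections = 0
    for _ in range(n_sims):
        b, t_prev = 0.0, 0.0
        for crit, t in zip(boundaries, info_fracs):
            # Brownian-motion increment of the score process under H0
            b += random.gauss(0.0, math.sqrt(t - t_prev))
            t_prev = t
            z = b / math.sqrt(t)  # standardized test statistic at this look
            if z > crit:
                rejections += 1
                break  # trial stops for efficacy
    return rejections / n_sims

# Two equally spaced looks with illustrative O'Brien-Fleming-type boundaries
alpha_hat = simulate_type1_error([2.80, 1.98], [0.5, 1.0])
```

The conservative first-look boundary is what lets the final look be tested near the conventional level while still controlling the overall error rate.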

Some issues
- Understanding relevant benefit-risk considerations
- DMC role in complex / adaptive trials
- Level / details of unblinding in DMC reports
- Data quality
- Stated role of the DMC statistician

1) Benefit-risk
Recently, often heard / seen: "DMCs evaluate benefit-risk", "DMCs assess benefit-risk", etc.
Do they really? In the same manner as:
- FDA, when deciding whether a treatment should be available to patients, or what a label should describe?
- Me and my doctor, when deciding on a course of treatment for me?

Data access
It's increasingly accepted – correctly – that DMCs should have access to relevant safety and efficacy data, regardless of the specific focus of the monitoring plan, in order to ensure they can carry out their responsibilities to trial participants.

Challenges
Interim data often cannot convey a clear picture of either risk or benefit, much less both:
- misleading signals (especially early) are inevitable
- different schedules / time courses of different outcomes
- some important safety concerns are quite long-term
Expert DMCs sensibly integrate the strength of numerical signals, relevant scientific knowledge, consistency of patterns, etc. (e.g., what types of safety risks might be expected?) – but sometimes there are surprises . . .
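The point that misleading early signals are inevitable can be shown numerically. The sketch below (my own illustration; the effect size, sigma, and per-arm sample sizes are arbitrary assumptions) simulates many trials and compares the spread of the estimated treatment difference at a small interim sample versus the full sample:

```python
import math
import random

def effect_estimate_sd(n_per_arm, true_diff=0.0, sigma=1.0,
                       n_trials=5000, seed=2):
    """Empirical standard deviation of the estimated treatment difference
    across simulated two-arm trials with n_per_arm patients per arm."""
    random.seed(seed)
    ests = []
    for _ in range(n_trials):
        trt = [random.gauss(true_diff, sigma) for _ in range(n_per_arm)]
        ctl = [random.gauss(0.0, sigma) for _ in range(n_per_arm)]
        ests.append(sum(trt) / n_per_arm - sum(ctl) / n_per_arm)
    mean = sum(ests) / len(ests)
    return math.sqrt(sum((e - mean) ** 2 for e in ests) / (len(ests) - 1))

# Early interim (25 patients/arm) vs. final analysis (200 patients/arm)
sd_interim = effect_estimate_sd(25)
sd_final = effect_estimate_sd(200)
```

With a quarter-to-eighth of the data, the estimate's standard error is several-fold larger, so apparent "signals" at early looks are routinely just noise.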

So, what does the DMC do for B-R?
Is this more correctly stated as "consider" rather than "assess" or "evaluate"?
What is the question the DMC addresses as it considers whether a trial should continue? How about:
"Is this study a rational, ethical medical experiment that can provide important information on therapies that may have favorable benefit-risk profiles, without exposing participants to undue risks, relative to the answers it may provide?"
Usually, this is not something that lends itself to simple algorithms.

2) Adaptive designs
Adaptive designs have received increasing focus in recent years.
Examples of aspects that might be changed: sample size, treatment arms, patient population, randomization allocation, etc.
Validity of conclusions depends on adequate pre-specification of an adaptive plan.
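One standard ingredient of a pre-specified adaptation rule (e.g., sample size re-estimation) is conditional power under a Brownian-motion model of the test statistic. The sketch below is a minimal illustration of that calculation; all numeric inputs (information fraction 0.5, final critical value 1.96, interim z of 1.2, "current trend" drift) are assumptions chosen for the example, not values from the talk.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def conditional_power(z_interim, info_frac, drift, z_crit=1.96):
    """Probability of crossing z_crit at the final analysis, given the
    interim z-statistic, the information fraction, and an assumed drift
    (the expected z-value at full information), under a Brownian-motion
    model for the score statistic."""
    b_interim = z_interim * math.sqrt(info_frac)   # score statistic B(t)
    remaining = 1.0 - info_frac
    mean = b_interim + drift * remaining           # E[B(1) | B(t)]
    return 1.0 - norm_cdf((z_crit - mean) / math.sqrt(remaining))

# "Current trend" assumption: project the observed interim effect forward
z1, t1 = 1.2, 0.5
cp = conditional_power(z1, t1, drift=z1 / math.sqrt(t1))
```

A pre-planned rule might, for instance, increase the sample size when this quantity falls in a pre-specified "promising" band; the key point from the slide is that the rule, not the DMC's improvisation, is what must be specified in advance.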

DMCs in adaptive designs
A DMC seems a natural party to consider for a role in the adaptation decision process:
- they're already allowed access to interim results
- independence, objectivity
Possible misunderstanding: what changes can a DMC implement proactively (apart from their safety responsibilities, which always take precedence)?

DMC role in adaptive trials
What the DMC can't do:
- Proactively initiate changes based on their review – essentially re-designing the trial in a more favorable direction; pre-specification is a fundamental tenet of a valid design
What the DMC could do:
- Play a role in implementing a pre-planned adaptation scheme (note: this doesn't mean that actions are fully algorithmic)

DMC as implementer
FDA 2010 draft AD guidance: "a DMC . . . can help implement the adaptation decision according to the prospective adaptation algorithm, but it should not be in a position to otherwise change the study design except for serious safety-related concerns that are the usual responsibility of a DMC"
FDA 2018: ". . . revisions based on non-prospectively planned analyses can create difficulty in controlling the Type I error probability and in interpreting the trial results"

One board, or two?
Should adaptation decisions be made by a group separate from a more "familiar" DMC?
- Both single-board and separate-board models seem to have been used; FDA 2018 acknowledges both possibilities
- Adaptations are an aspect of trial design – decisions may not be completely independent of other DMC recommendations
- Personally, I'd usually recommend a single DMC (as per Antonijevic et al., TIRS 2013)

3) Coding / unblinding
Fully unblinded outputs? Coded, e.g., A / B?
- Arguments in support of coding sometimes focus on the "accidental access" possibility
- I actually saw this once: A / B, then C / D, E / F . . .
- Different coding for efficacy vs. safety (or other parameters with potential to de-code)? But then, how can the DMC take into account critical benefit-risk considerations?

Fundamental principle
We must NOT be in the mindset that added clarity of information for a DMC in any way compromises a trial.

Options
- Putting a de-coding envelope in the hands of the DMC usually satisfies most perspectives: it can be opened whenever they want (even on Day 1), with no need to request it, or to tell anyone
- In situations that are structurally more complex, or where full labelling sensibly facilitates review (e.g., multiple doses), we'd unblind up front
- If a DMC initially insists on explicit labelling, we wouldn't object

4) Data quality / completeness
- Data of less than perfect quality is a source of uncertainty – but so is having less data!
- Timely decisions, based on currently available information, have important ethical implications
- There is an inherent conflict between the quality / completeness of data and its "recent-ness": can't recent, not-fully-cleaned data be relevant?
- Overheard from an experienced DMC expert: "If it's clean, it's too old!"

Data quality
A principle? A DMC should never make what turns out to be an incorrect decision because they didn't understand limitations in the data they reviewed!
- I like the idea of making essentially everything available, clearly explaining any limitations
- Providing multiple versions? – e.g., clean / all? adjudicated / all? (+ adjudication confirmation rates)
- Allocate resources sensibly

5) DMC statistician – a 2nd-class citizen?
Proposals I've seen / heard:
- The DMC statistician is "a non-voting member"
- "No statistician is needed because the DMC is only reviewing safety"
DMC composition reflects a variety of expertise and perspectives relevant to a decision, and statistics is unquestionably one of these.
Is "voting" over-emphasized anyway? For major decisions, DMCs aim to arrive at a consensus that all members are comfortable with.

Stated role of the DMC statistician
Safety review may be the setting where statistical perspective is most indispensable! Strength of signal vs. scientific plausibility vs. degree of exploration vs. consistency of related outcomes – what are the data really saying?
I would recommend resisting any specification of the DMC statistician as less than a full-fledged member.

Thank you !!