
1 CCEB STATISTICAL CONSIDERATIONS IN CLINICAL TRIALS Susan S. Ellenberg, Ph.D. University of Pennsylvania School of Medicine ASENT Clinical Trials Course Arlington, VA March 6, 2008

2 CCEB TOPICS Randomization Sample size determination Dropouts and noncompliance Multiplicity Interim monitoring

3 CCEB WHY WE NEED CONTROLS Changes from baseline could be due to factors other than intervention –Natural variation in disease course –Patient expectations/psychological effects –Regression to the mean Cannot assume investigational treatment is cause of observed changes

4 CCEB

5

6 RANDOMIZATION [Diagram: a pool of patients with good (G) and poor (P) prognosis is randomized, so that both prognostic types are distributed across Treatment A and Treatment B]

7 CCEB STRATIFIED RANDOMIZATION Randomization in principle should produce groups that are prognostically equivalent In practice, not uncommon to observe imbalances in treatment assignments Performing randomization within strata defined by prognostic factors reduces risk of such imbalances

8 CCEB BLOCKED RANDOMIZATION “Blocking” refers to a constraint on randomization that forces the numbers assigned to treatment and control to be equal after every X assignments, where X is the specified block size X must be a multiple of the number of treatment arms In a 2-arm trial, X typically is 2, 4 or 6
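A permuted-block scheme like this is straightforward to generate in code. Below is a minimal sketch in Python of blocked randomization, applied separately within strata as described on the previous slide; the stratum labels, block size, and seed are illustrative choices, not taken from the slides.

```python
import random

def blocked_assignments(n_blocks, block_size=4, arms=("A", "B"), rng=None):
    """Permuted-block randomization list: within every block of
    `block_size` assignments, each arm appears equally often, so the
    arms are balanced after each completed block."""
    if block_size % len(arms) != 0:
        raise ValueError("block size must be a multiple of the number of arms")
    rng = rng or random.Random()
    assignments = []
    for _ in range(n_blocks):
        block = list(arms) * (block_size // len(arms))
        rng.shuffle(block)            # random order within the block
        assignments.extend(block)
    return assignments

# Stratified randomization: a separate blocked list for each stratum,
# so balance also holds within every stratum (stratum names are hypothetical).
rng = random.Random(2024)
randomization_lists = {
    stratum: blocked_assignments(n_blocks=25, block_size=4, rng=rng)
    for stratum in ("age < 50", "age >= 50")
}
print(randomization_lists["age < 50"][:8])   # first two blocks: 4 A's and 4 B's in random order
```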

9 CCEB CENTRAL VS LOCAL RANDOMIZATION Local randomization allows the site more flexibility in the timing of randomization More quality assurance concerns when each site has its own randomization list, especially in an open-label study –Inadvertent bias –Deliberate subversion of randomization –Use of envelopes particularly problematic

10 CCEB SAMPLE SIZE DEPENDS ON Size of effect (“difference”) to be detected (or ruled out, for noninferiority trials) Desired limits on error rates (α, β) Variability of outcome

11 CCEB CONTROL OF ERROR Significance level = Type I error = α: the probability of concluding there is a treatment difference when there is really no difference (false positive) Type II error = β: the probability of concluding there is no treatment difference when there truly is a difference of the given size (false negative); Power = 1 - β

12 CCEB WHEN WE ARE INVESTIGATING PROPORTIONS The smaller the desired error rates... The smaller the difference to be detected... The closer the expected event rates to 0.5... THE LARGER THE SAMPLE SIZE WILL BE

13 CCEB SAMPLE SIZE BY ERROR RATE AND DIFFERENCE TO BE DETECTED

Event/success rates   α=0.05, power=0.80   α=0.05, power=0.90   α=0.01, power=0.90
0.10 vs 0.20                 438                  572                  794
0.20 vs 0.30                 626                  824                 1152
0.20 vs 0.40                 182                  236                  328
0.40 vs 0.60                 214                  278                  386

From Fleiss, Levin and Paik, Statistical Methods for Rates and Proportions, John Wiley
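Entries in tables like this one can be reproduced, at least approximately, from the standard normal-approximation formula for comparing two proportions. The sketch below uses the Fleiss continuity-corrected version and assumes, as the figures suggest, that each entry is the total sample size across both arms with 1:1 allocation and a two-sided test; treat it as an illustration rather than a substitute for the published tables.

```python
import math
from statistics import NormalDist

def total_n_two_proportions(p1, p2, alpha=0.05, power=0.80):
    """Total sample size (both arms, 1:1 allocation) to detect p1 vs p2
    with a two-sided test: normal approximation plus the Fleiss
    continuity correction."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    pbar = (p1 + p2) / 2
    delta = abs(p1 - p2)
    # uncorrected per-group sample size
    n = (z_a * math.sqrt(2 * pbar * (1 - pbar))
         + z_b * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2 / delta ** 2
    # Fleiss continuity correction
    n_corr = (n / 4) * (1 + math.sqrt(1 + 4 / (n * delta))) ** 2
    return 2 * math.ceil(n_corr)

print(total_n_two_proportions(0.10, 0.20))                          # 438
print(total_n_two_proportions(0.20, 0.30, alpha=0.01, power=0.90))  # 1152
```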

14 CCEB SURVIVAL (TIME-TO-EVENT) DATA Sample size depends on the number of events that are expected The expected number of events increases if –sample size increases –patients are followed longer –higher risk patients are entered –more outcomes are counted as events (e.g., recurrence and death instead of just death)
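For this reason, sample size planning for time-to-event outcomes is usually framed in terms of the number of events required. Below is a minimal sketch of one widely used approximation (the Schoenfeld formula) for a two-arm trial with 1:1 allocation and a two-sided log-rank test; the hazard ratio in the example call is purely illustrative.

```python
import math
from statistics import NormalDist

def required_events(hazard_ratio, alpha=0.05, power=0.80):
    """Approximate number of events (Schoenfeld formula) needed to detect
    a given hazard ratio with a two-sided log-rank test, 1:1 allocation."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    return math.ceil(4 * (z_a + z_b) ** 2 / math.log(hazard_ratio) ** 2)

# e.g., detecting a hazard ratio of 0.75 with 80% power requires
print(required_events(0.75))   # roughly 380 events
```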

15 CCEB INTENT-TO-TREAT (ITT) PRINCIPLE All randomized patients should be included in the (primary) analysis, in their assigned treatment groups, regardless of compliance with the assigned treatment.

16 CCEB IMPORTANT IMPLICATION OF ITT PRINCIPLE All patients entered into the study should be followed for the study outcome, regardless of compliance with the assigned treatment or other aspects of the protocol.

17 CCEB WHY DO INVESTIGATORS WANT TO EXCLUDE SUBJECTS FROM ANALYSIS? They refused the assigned treatment They didn’t return for evaluations They were found to be ineligible after randomization They started taking other, protocol-violating treatments after randomization

18 CCEB EFFECT OF RANDOMIZATION [Diagram repeated from slide 6: randomization distributes good-prognosis (G) and poor-prognosis (P) patients across Treatment A and Treatment B]

19 CCEB EXAMPLE: BIAS DUE TO EXCLUSION FROM ANALYSIS Randomized trial of cancer therapy following surgery to remove tumor –Arm 1: chemotherapy after recovery from surgery –Arm 2: no further therapy after surgery Not blinded—side effects of chemotherapy would reveal treatment Protocol called for treatment to commence no later than 6 weeks post-surgery

20 CCEB EXAMPLE (cont.) What if treatment did not start until more than 6 weeks post-surgery? –Rationale for therapy is that it will kill any remaining cancer not removed at surgery –If therapy is not started shortly after surgery, it won’t work; including such patients will dilute the treatment effect What’s wrong with this?

21 CCEB EXAMPLE (cont.) Only those assigned to post-surgical treatment are at risk of being excluded What if those who start therapy late are those who had the most extensive surgery and thus required longer recovery period? What if those with most extensive surgery are most likely to have remaining unseen cancer?

22 CCEB CORONARY DRUG PROJECT Five-year mortality by treatment group

Treatment group      N     % mortality
clofibrate         1065       18.2
placebo            2695       19.4

Coronary Drug Project Research Group, JAMA, 1975

23 CCEB CORONARY DRUG PROJECT Five-year mortality by adherence to clofibrate

Adherence      N     % mortality
< 80%         357       24.6
≥ 80%         708       15.0

Coronary Drug Project Research Group, NEJM, 1980

24 CCEB CORONARY DRUG PROJECT Five-year mortality by adherence to clofibrate and placebo

                 Clofibrate             Placebo
Adherence       N    % mortality      N    % mortality
< 80%          357      24.6         882      28.2
≥ 80%          708      15.0        1813      15.1

Coronary Drug Project Research Group, NEJM, 1980

25 CCEB MULTIPLICITY Suppose we do a study in which we compare placebo A with placebo B We do not expect the results to be identical, but we do expect them to be similar If we look at the data in enough ways, however, we may well find an occasional “statistically significant” difference: a FALSE POSITIVE

26 CCEB PRE-SPECIFYING OBJECTIVES In designing a trial, need to decide how treatment will be evaluated Often not straightforward—may be many ways of measuring treatment effect Problem: if we don’t determine primary measure of effect in advance, the multiplicity issue arises

27 CCEB RCT EXAMPLE: “LIQUID STITCHES” New material developed that surgeon can apply to close wound, stop bleeding Need to study how quickly and effectively bleeding is stopped Possible outcomes of interest: –time to cessation of bleeding –whether bleeding stopped within X sec –total amount of blood loss –whether further effort was needed to stop bleeding –whether blood loss greater than Y ml

28 CCEB MORE MULTIPLICITY Which statistical test? How to handle missing data? What baseline factors (e.g., size of wound) should be taken into account?

29 CCEB OTHER MULTIPLICITY CONCERNS Subset effects: no overall treatment effect, but effect seen in a subset, e.g., –women –those over age 50 –those with early stage disease –those treated in specialty clinics Time effect – no overall treatment effect at the prespecified time point, but effect seen at an earlier time point

30 CCEB COMMON PHRASES RELATED TO THE MULTIPLICITY PROBLEM Testing to a foregone conclusion Data dredging Torturing the data until they confess

31 CCEB Subset analyses are important in developing information about optimal treatment strategies BUT Subset analyses may be unreliable since multiple analyses frequently produce spuriously positive (or negative) results

32 CCEB PROBABILITY OF POSITIVE SUBSET WHEN NO TRUE DIFFERENCES

No. subsets*    Prob. ≥ 1 subset with p < 0.05
     2                    0.10
     5                    0.23
    10                    0.40
    20                    0.64

*non-overlapping
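The values in this table follow from a one-line calculation: if the subsets are independent (non-overlapping) and there is no true difference anywhere, the chance of at least one subset reaching p < 0.05 is 1 minus 0.95 raised to the number of subsets. A minimal sketch:

```python
def prob_at_least_one_positive(n_subsets, alpha=0.05):
    """Chance of at least one nominally significant (p < alpha) subset
    when there is no true difference, assuming independent subsets."""
    return 1 - (1 - alpha) ** n_subsets

for k in (2, 5, 10, 20):
    print(k, round(prob_at_least_one_positive(k), 2))
# 2 0.1, 5 0.23, 10 0.4, 20 0.64 -- matching the table above
```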

33 CCEB ALSO WORKS OTHER WAY Suppose a clinical trial is positive What will you find when you examine results in subsets? The more subsets examined, the greater the chance you will find a subset in which the result is in the opposite direction from overall result

34 CCEB MULTIPLE TESTING AND EARLY STOPPING Repeated testing of data to evaluate any emerging differences between arms offers multiple chances to observe a nominally significant (i.e., p<0.05) result Conducting repeated tests at the nominal level will inflate the false positive rate

35 CCEB MULTIPLE LOOKS AND TYPE I ERROR Probability of a nominally significant result (%), by nominal significance level and number of repeated tests

Nominal significance    No. of repeated tests
level                    1     2     3     4     5    10    25    50    200
.01                      1    1.8   2.4   2.9   3.3   4.7   7.0   8.8   12.6
.05                      5    8.3  10.7  12.6  14.2  19.3  26.6  32.0   42.4

From McPherson K, New England Journal of Medicine; 290:501-2, 1974

36 CCEB DEVELOPMENT OF NEW MONITORING APPROACHES Recognition that simple monitoring at nominal significance level was inadequate led to new statistical methods for interim monitoring Most common approach currently: group sequential designs –Pre-specified number of interim analyses –Overall significance level controlled at desired level (e.g., 0.05)
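Both the inflation from repeated looks and its control by a group sequential boundary can be illustrated with a small simulation under the null hypothesis. The sketch below uses a one-sample z-test on accumulating normal data; the number of looks, patients per look, simulation count, and the constant critical value of about 2.41 (roughly the Pocock value often quoted for five equally spaced looks at overall alpha = 0.05) are illustrative assumptions, not taken from the slides.

```python
import random

def prob_any_look_significant(z_crit, n_looks=5, per_look=50, n_sims=20_000, seed=1):
    """Monte Carlo estimate of the chance that at least one of n_looks interim
    z-tests crosses z_crit when there is no true treatment effect
    (one-sample z-test on accumulating standard-normal data)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sims):
        running_sum, n, crossed = 0.0, 0, False
        for _ in range(n_looks):
            for _ in range(per_look):            # accrue another group of patients
                running_sum += rng.gauss(0, 1)   # outcome under the null
                n += 1
            if abs(running_sum / n ** 0.5) > z_crit:   # z-statistic on all data so far
                crossed = True
        hits += crossed
    return hits / n_sims

print(prob_any_look_significant(1.96))  # ~0.14: five looks at the nominal 0.05 level
print(prob_any_look_significant(2.41))  # ~0.05: constant, Pocock-style boundary
```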

37 CCEB

38 FUTILITY TESTING Evaluates whether the possibility of an eventual positive result can be ruled out No repeated-testing issues: futility can be evaluated on any schedule, with any frequency Termination on this basis is unlikely until most of the trial has been completed
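One common way to make futility assessment concrete is conditional power: the probability, given the data accumulated so far, that the trial would still reach a positive final result. Below is a minimal sketch under a standard Brownian-motion approximation, assuming the currently observed trend continues; the interim z-value and information fraction in the example call are illustrative.

```python
from statistics import NormalDist

def conditional_power(z_interim, info_frac, alpha=0.05):
    """Probability of a positive final result given the interim z-statistic,
    assuming the currently observed trend continues (Brownian-motion
    approximation; two-sided final test at level alpha, upper boundary only)."""
    z_final = NormalDist().inv_cdf(1 - alpha / 2)   # final critical value
    drift = z_interim / info_frac ** 0.5            # current-trend estimate of the drift
    b_t = z_interim * info_frac ** 0.5              # Brownian-motion value at the interim look
    mean_remaining = drift * (1 - info_frac)        # expected further increase under the trend
    sd_remaining = (1 - info_frac) ** 0.5
    return 1 - NormalDist(mean_remaining, sd_remaining).cdf(z_final - b_t)

# Halfway through the trial with essentially no effect observed:
print(round(conditional_power(z_interim=0.2, info_frac=0.5), 3))   # very low, about 0.01
```

Conditional power well below a prespecified threshold (often somewhere around 0.10 to 0.20) is then taken as evidence of futility.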

39 CCEB REFERENCES This is just a brief overview of some important issues in designing, conducting and analyzing clinical trials Good starting references –Friedman, Furberg, DeMets, Fundamentals of Clinical Trials (Springer) –International Conference on Harmonization Guidance: Statistical Principles for Clinical Trials www.fda.gov/cder/guidance/ICH_E9-fnl.PDF

