1 Adaptive Design Methods in Clinical Trials
Shein-Chung Chow, PhD
Department of Biostatistics and Bioinformatics, Duke University School of Medicine, Durham, North Carolina
Presented at the 2011 MBSW, Muncie, Indiana, May 23, 2011
2 Lecture 1 - Outline
- What is adaptive design?
- Types of adaptive designs
- Possible benefits
- Regulatory perspectives
- Protocol amendments
3 What is adaptive design?
- There is no universal definition. Examples include adaptive randomization, group sequential designs, and sample size re-estimation.
- PhRMA (2006); FDA (2010)
- Adaptive design is also known as flexible design (EMEA, 2002, 2006) and attractive design (Uchida, 2006).
4 PhRMA's definition
PhRMA (2006). J. Biopharm. Stat., 16(3), 275-283.
An adaptive design is a clinical trial design that uses accumulating data to decide how to modify aspects of the study as it continues, without undermining the validity and integrity of the trial.
5 PhRMA's definition
Characteristics
- Adaptation is a design feature.
- Changes are made by design, not on an ad hoc basis.
Comments
- It does not reflect real practice.
- It may not be as flexible as it is meant to be.
6 FDA's definition
US FDA Guidance for Industry - Adaptive Design Clinical Trials for Drugs and Biologics, February 2010.
An adaptive design clinical study is defined as a study that includes a prospectively planned opportunity for modification of one or more specified aspects of the study design and hypotheses based on analysis of data (usually interim data) from subjects in the study.
7 FDA's definition
Characteristics
- Adaptation is a prospectively planned opportunity.
- Changes are made based on analysis of data (usually interim data).
- Does it not include medical devices?
Comments
- It classifies adaptive designs into well-understood and less well-understood designs.
- It does not reflect real practice (protocol amendments).
- It is not a guidance but a document of caution.
8 Adaptation
An adaptation is defined as a change or modification made to a clinical trial before or during the conduct of the study. Examples include:
- Relaxing inclusion/exclusion criteria
- Changing study endpoints
- Modifying dose and treatment duration
- etc.
9 Types of adaptations
Prospective adaptations
- Adaptive randomization
- Interim analysis
- Stopping the trial early due to safety, futility, or efficacy
- Sample size re-estimation
- etc.
Concurrent adaptations
- Trial procedures
- Implemented by protocol amendments
Retrospective adaptations
- Statistical procedures
- Implemented in the statistical analysis plan prior to database lock and/or data unblinding
11 Adaptive randomization design
A design that allows modification of randomization schedules:
- Unequal probabilities of treatment assignment
- Increases the probability of success
Types of adaptive randomization:
- Treatment-adaptive
- Covariate-adaptive
- Response-adaptive
12 Comments
- The randomization schedule may not be available prior to the conduct of the study.
- It may not be feasible for a large trial or a trial with a relatively long treatment duration.
- Statistical inference on the treatment effect is often difficult, if not impossible, to obtain.
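The response-adaptive case can be illustrated with the randomized play-the-winner urn scheme, a classical example of skewing assignment probabilities toward the arm that appears to be doing better. This is a minimal sketch, not a method from the lecture; the arm names, success probabilities, and RPW(1,1) urn parameters are illustrative.

```python
import random

def rpw_trial(p_success, n_patients, seed=0):
    """Simulate a two-arm randomized play-the-winner RPW(1,1) urn.

    p_success: dict arm -> true success probability (unknown in practice).
    Start with one ball per arm; draw an arm with probability proportional
    to its ball count, observe the response, then add one ball for the
    assigned arm on success or for the other arm on failure. Over time the
    allocation probability drifts toward the better-performing arm.
    """
    rng = random.Random(seed)
    urn = {"A": 1, "B": 1}
    counts = {"A": 0, "B": 0}
    for _ in range(n_patients):
        total = urn["A"] + urn["B"]
        arm = "A" if rng.random() < urn["A"] / total else "B"
        counts[arm] += 1
        success = rng.random() < p_success[arm]
        other = "B" if arm == "A" else "A"
        urn[arm if success else other] += 1
    return counts

# With a much better arm A, most patients end up assigned to A.
counts = rpw_trial({"A": 0.8, "B": 0.2}, 200)
```

This also illustrates the comment above: because each assignment probability depends on accumulated responses, the full randomization schedule cannot exist before the trial starts.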
13 Group sequential design
An adaptive design that allows for prematurely stopping a trial due to safety, efficacy/futility, or both, based on interim analysis results. It may also allow:
- Sample size re-estimation
- Other adaptations
14 Comments
- A well-understood design when there are no additional adaptations.
- The overall type I error rate may not be preserved when there are additional adaptations (e.g., changes in hypotheses and/or study endpoints) or when there is a shift in the target patient population.
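Group sequential stopping is usually implemented with boundaries that spend very little alpha early. A minimal sketch of the O'Brien-Fleming boundary shape on the z-scale follows; note that the constant c must be calibrated numerically so the overall type I error equals the nominal level, and the value 2.024 used below is only roughly the K = 4, two-sided 5% constant, assumed here for illustration.

```python
from math import sqrt

def obf_boundaries(K, c):
    """O'Brien-Fleming-type z-scale boundaries for K equally spaced looks.

    The boundary at look k is c*sqrt(K/k): very stringent at early looks,
    close to the fixed-sample critical value at the final look. The
    constant c is an input here, not computed; in practice it is found by
    numerical integration or simulation so the overall alpha is preserved.
    """
    return [c * sqrt(K / k) for k in range(1, K + 1)]

bounds = obf_boundaries(4, 2.024)  # roughly the K=4, two-sided 5% constant
```

The resulting sequence decreases from about 4.05 at the first look to 2.024 at the final look, which is why early stopping for efficacy under this scheme requires an extreme result.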
15 Flexible sample size re-estimation
An adaptive design that allows for sample size adjustment or re-estimation based on the observed data at interim, with blinding or unblinding. Sample size adjustment or re-estimation is usually performed based on criteria such as:
- Variability
- Conditional power
- Reproducibility probability
- etc.
16 Comments
- Can we always start with a small number of subjects and perform sample size re-estimation at interim?
- It should be noted that sample size re-estimation is performed based on estimates from the interim analysis.
- A flexible sample size re-estimation design is also known as an N-adjustable design.
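The variability-based criterion can be sketched as follows: plug the interim estimate of the standard deviation into the usual two-sample normal-approximation formula. This is a generic sketch, not the lecture's specific method; the default z-values assume a two-sided 5% test with 80% power, and delta is the clinically meaningful difference the trial was originally sized for.

```python
from math import ceil

def reestimated_n_per_arm(sigma_hat, delta, z_alpha=1.96, z_beta=0.84):
    """Re-estimate the per-arm sample size for a two-arm comparison of means.

    Uses the standard formula n = 2 * ((z_alpha + z_beta) * sigma / delta)^2
    with the interim variability estimate sigma_hat in place of the
    planning value of sigma.
    """
    n = 2 * ((z_alpha + z_beta) * sigma_hat / delta) ** 2
    return ceil(n)

# If the interim SD estimate is 1.0 and delta = 0.5, about 63 per arm are needed.
n_new = reestimated_n_per_arm(1.0, 0.5)
```

Because sigma_hat is itself an estimate, the re-estimated n inherits its sampling error, which is exactly the second comment above.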
17 Drop-the-losers design
A drop-the-losers design is a multiple-stage adaptive design that allows dropping the inferior treatment groups. General principles:
- Drop the inferior arms
- Retain the control arm
- Possibly modify or add additional arms
It is useful when there are uncertainties regarding the dose levels.
18 Comments
- The selection criteria and decision rules play important roles in drop-the-losers designs.
- Dose groups that are dropped may contain valuable information regarding the dose response of the treatment under study. How can all of the data be utilized in a final analysis?
- Some people prefer the term pick-the-winner.
19 Adaptive dose finding design
A typical example is an adaptive dose escalation design. A dose escalation design is often used in early-phase clinical development to identify the maximum tolerable dose (MTD), which is usually considered the optimal dose for later-phase clinical trials:
- Adaptations to the traditional "3+3" escalation rule
- The continual reassessment method (CRM) in conjunction with a Bayesian approach
20 Comments
- How should the initial dose be selected?
- How should the dose range under study be selected?
- How can statistical significance be achieved with the desired power given a limited number of subjects?
- What are the selection criteria and decision rules?
- What is the probability of achieving the optimal dose?
21 Biomarker-adaptive design
A design that allows for adaptation based on the responses of biomarkers, such as genomic markers, for assessment of the treatment effect. It involves:
- Qualification and standardization
- An optimal screening design
- Establishment of a predictive model
- Validation of the established predictive model
22 Comments
- A classifier marker usually does not change over the course of the study and can be used to separate the patient population who would benefit from the treatment from those who would not (e.g., DNA markers and other baseline markers for population selection).
- A prognostic marker informs the clinical outcome, independent of treatment.
- A predictive marker informs the treatment effect on the clinical endpoint.
- A predictive marker can be population-specific; that is, a marker can be predictive for population A but not population B.
23 Comments
- A classifier marker is commonly used in the enrichment process of targeted clinical trials.
- Prognostic vs. predictive markers: correlation between a biomarker and the true endpoint makes a prognostic marker, but it does not make a predictive biomarker.
- There is a gap between identifying genes associated with clinical outcomes and establishing a predictive model between the relevant genes and clinical outcomes.
24 Adaptive treatment-switching design
A design that allows the investigator to switch a patient's treatment from the initial assignment to an alternative treatment if there is evidence of lack of efficacy or a safety concern with the initial treatment.
- Commonly employed in cancer trials
- In practice, a high percentage of patients may switch due to disease progression
25 Comments
- Estimation of survival is a challenge to biostatisticians.
- A high percentage of subjects switching could lead to a change in the hypotheses to be tested.
- Sample size adjustment for achieving the desired power is critical to the success of the study.
26 Adaptive-hypotheses design
A design that allows changes in hypotheses based on interim analysis results, often considered before database lock and/or prior to data unblinding. Examples:
- Switching from a superiority hypothesis to a non-inferiority hypothesis
- Changing study endpoints (e.g., switching the primary and secondary endpoints)
27 Comments
- Switching between non-inferiority and superiority raises the issues of selecting the non-inferiority margin and of the sample size calculation.
- When switching between the primary endpoint and the secondary endpoints, one should perhaps consider switching from the primary endpoint to a co-primary endpoint or a composite endpoint instead.
28 Seamless adaptive trial design
A seamless adaptive (e.g., phase II/III) trial design combines two separate trials (e.g., a phase IIb and a phase III trial) into one trial and uses data from patients enrolled before and after the adaptation in the final analysis:
- Learning phase (e.g., phase II)
- Confirmatory phase (e.g., phase III)
29 Comments
Characteristics
- Able to address the study objectives of the individual (e.g., phase IIb and phase III) studies.
- Utilizes data collected from both phases (e.g., phase IIb and phase III) for the final analysis.
Questions/Concerns
- Is it efficient?
- How do we control the overall type I error rate?
- How do we perform power analysis for sample size calculation/allocation?
- How do we perform a combined analysis if the study objectives/endpoints are different at different phases?
30 Multiple adaptive design
A multiple adaptive design is any combination of the above adaptive designs:
- Very flexible
- Very attractive
- Very complicated: statistical inference is often difficult, if not impossible, to obtain
- Very painful
32 Possible benefits
- Correct wrong assumptions
- Select the most promising option early
- Make use of emerging information external to the trial
- React earlier to surprises (positive and/or negative)
- May speed up the development process
33 Possible benefits
- May offer a second chance to re-design the trial after seeing data from the trial itself at interim (or externally).
- Sample size: may start out with a smaller up-front commitment of sample size.
- More flexible, but more problematic operationally due to potential bias.
34 Regulatory perspectives
- May introduce operational bias.
- May not preserve the type I error rate.
- P-values may not be correct.
- Confidence intervals may not be reliable.
- May result in a totally different trial that is unable to address the medical questions the original study intended to answer.
- Validity and integrity may be in doubt.
35 Protocol amendments
- On average, a given clinical trial may have 2-3 protocol amendments during its conduct.
- It is not uncommon to have 5-10 protocol amendments regardless of the size of the trial.
- Some protocols may have up to 12 protocol amendments.
36 Protocol amendments
Rationale for changes
- Clinical
- Statistical
Review process
- Internal protocol review
- IRB
- Regulatory agencies
37 Ad hoc adaptations
- Inclusion/exclusion criteria (slow enrollment)
- Changes in dose/dose regimen or treatment duration (safety concerns)
- Changes in study endpoints (to increase the probability of success)
- Increased frequency of data monitoring or administrative looks
- Others
38 Concerns
- Protocol amendments may result in a similar but different target patient population.
- Protocol amendments (with major changes) could result in a totally different trial that is unable to address the questions the original trial intended to answer.
39 Target patient population
- Has the disease under study
- Inclusion criteria describe the target patient population
- Exclusion criteria remove heterogeneity
- Subpopulations may be defined based on some baseline demographics and/or patient characteristics
40 Target patient population
Denote the target patient population by (μ, σ), where μ and σ are the population mean and standard deviation, respectively. After a modification is made to the trial procedures, the target patient population (μ, σ) leads to the actual patient population (μ + ε, Cσ), where ε is the shift in mean and C > 0 is the change in scale.
42 Target patient population
E = μ/σ is usually referred to as the effect size. The effect size after the modification is |μ + ε|/(Cσ), i.e., the original effect size inflated or reduced by the factor Δ = |1 + ε/μ|/C. The "clinically meaningful difference" may have changed after the modification (adaptation) is made.
43 Target patient population
Δ = |1 + ε/μ|/C is referred to as the sensitivity index. When ε = 0 and C = 1 (i.e., there is no impact on the target patient population after the modifications are made), we have Δ = 1 (i.e., the sensitivity index is 1).
44 Sensitivity index
A shift in the mean of the target patient population may be offset by inflation (or reduction) of the variability; e.g., a shift of +10% (-10%) in the mean could be offset by a 10% inflation (reduction) of the variability. Δ may therefore not be sensitive, due to the masking effect between ε and C.
45 Moving target patient population
Under a moving target patient population, the effect size is the original effect size times the sensitivity index. How will this impact statistical inference?
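The sensitivity-index arithmetic can be checked numerically. In Chow's notation, the original population has mean μ and SD σ, the actual population after modification has mean μ + ε and SD Cσ, and the new effect size equals Δ times the old one with Δ = |1 + ε/μ|/C. A minimal sketch (the specific numbers are illustrative):

```python
def sensitivity_index(mu, sigma, eps, C):
    """Sensitivity index Delta after a shift in the target population.

    Original population: mean mu, SD sigma (effect size E = mu/sigma).
    Actual population after modification: mean mu + eps, SD C*sigma.
    The new effect size is Delta * |E| with Delta = |1 + eps/mu| / C.
    """
    return abs(1 + eps / mu) / C

# No shift (eps = 0, C = 1) leaves the effect size unchanged:
no_shift = sensitivity_index(1.0, 2.0, 0.0, 1.0)

# A +10% shift in mean offset by a 10% inflation of variability (masking):
masked = sensitivity_index(1.0, 2.0, 0.10, 1.10)
```

The second call returns 1 even though the population has clearly moved, which is exactly the masking effect between ε and C noted on the previous slide.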
46 Two possible approaches
Adjustment for covariates
- Assumes that the change in population can be linked by a covariate.
- Chow SC and Shao J (2005). J. Biopharm. Stat., 15.
Assessment of the sensitivity index
- Assumes that the shift and/or scale parameter is random, and derives an unconditional inference for the original patient population.
- Chow SC, Shao J, and Hu OYP (2002). J. Biopharm. Stat., 12.
47 Adjustment for covariates - motivation
An asthma trial (Chow and Shao, 2005):
- Primary endpoint: FEV1 change from baseline
- Adaptation: inclusion criteria (slow enrollment)

Range of baseline FEV1 in inclusion criterion | Number of patients | Mean of FEV1 change | SD of FEV1 change
1.5~2.0 | 9 | 0.31 | 1.86
1.5~2.5 | 15 | 0.42 | 2.30
1.5~3.0 | 16 | 0.54 | 2.79

Notes: This is an asthma drug trial found in Chow and Shao (2005). The primary study endpoint is the change in FEV1, the difference between FEV1 after treatment and baseline FEV1. During the trial, the protocol was amended twice due to slow enrollment; for each amendment, the inclusion criterion regarding baseline FEV1 was changed. The amendments result in shifts in the target population, and there seems to be some relationship between them. As a result, statistical inference based on the collected data that ignores the shift in target population could be biased and hence misleading. This is a trial with a continuous endpoint; for a binary response, a subject might be defined as a responder if the FEV1 change is larger than 10% of baseline, and statistical inferences based on binary responses with protocol amendments can be derived similarly.
48 Adjustment for covariates
The idea is to relate the means before and after protocol amendments by means of some covariates. In other words, μ_k = g(x_k), k = 0, 1, ..., K, where μ_k and x_k are the mean and the corresponding covariate after the kth protocol amendment, g is a given function (linear or non-linear), and K is the number of protocol amendments.
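The covariate-adjustment idea can be sketched with the subgroup summary numbers from the example slides: assume a linear link between the subgroup mean response and the subgroup mean baseline FEV1, fit it across the amendment subgroups, and read off the mean at the covariate value of the originally intended population. This is a simplified sketch, not the Chow-Shao estimator itself; the linear link and the choice of 1.86 as the original population's covariate value are assumptions for illustration.

```python
def fit_line(xs, ys):
    """Ordinary least squares fit of y = b0 + b1*x (closed form)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    b1 = sxy / sxx
    b0 = my - b1 * mx
    return b0, b1

x = [1.86, 2.30, 2.79]   # subgroup mean baseline FEV1, one value per amendment
y = [0.31, 0.42, 0.54]   # subgroup mean FEV1 change
b0, b1 = fit_line(x, y)

# Predicted mean response at the (assumed) original population's covariate value.
mu_target = b0 + b1 * 1.86
```

The fitted slope is positive, reflecting the apparent relationship between the shifting inclusion criterion and the mean response that motivates the adjustment.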
49 An example - summary statistics
Test drug:
Range of baseline FEV1 in inclusion criterion | Number of patients | Mean of FEV1 change | SD of FEV1 change | Mean of baseline FEV1
1.5~2.0 | 9 | 0.31 | 0.14 | 1.86
2.0~2.5 | 15 | 0.42 | | 2.30
2.5~3.0 | 16 | 0.54 | 0.16 | 2.79
Placebo: 8, 0.15, 1.81, 26.9, 6.3, 6.19, 9, 0.20, 2.84
50 Results of the proposed method
Method | Estimate for test drug | Estimate for placebo | Difference | Estimated SE | P-value
Covariate-adjusted method | 0.33 | 0.17 | 0.16 | 0.057 | 0.0021
Classical method | 0.44 | 0.19 | 0.25 | 0.066 | 0.0116
P-values were obtained by testing a one-sided hypothesis. The classical method, which ignores the shift in patient population, tends to overestimate the treatment effect.
51 Example for sample size calculation
Hypotheses | Without adjustment (total / each treatment) | With adjustment (total / each treatment) | Factor R
Superiority | 200 / 100 | 312 / 156 | 1.56
Non-inferiority | 109 / 55 | 170 / 85 |
Equivalence | 275 / 138 | 431 / 216 |

Notes: To illustrate the use of the sample size formulas, suppose that the true response rates of the test treatment and the control treatment are 50% and 30%, respectively. Two protocol amendments are considered, with equal allocation, and ν and β can be estimated from a pilot study or based on historical data. Under this setting, for the superiority test with δ = 0.03 (i.e., δ chosen as 3%), the original total sample size is 200, but after the protocol amendments a total of 312 patients are needed in order to achieve the desired power.
52 Summary
Endpoint | Estimation
- Continuous: the WLS method is used to estimate the model parameters.
- Binary: the ML method is used to estimate the model parameters.

Notes: In this study, we follow ideas similar to those of Chow and Shao. We assume that there is a logistic relationship between the response rate and the true mean of the covariate, fit the model to link the response rates, and then estimate the parameter of interest, p_0. When comparing two treatments, note that for each amendment, patients are selected by the same criteria and then randomly allocated to either the test treatment or the control treatment group, so the relationships between the binary response and the covariate for both treatments can be described by a single model.
53 Assessment of the sensitivity index
The sensitivity index Δ can be estimated by plugging estimates of the shift parameter ε and the scale parameter C into the expression for Δ.
54 Assessment of the sensitivity index
There are four scenarios for assessment of the sensitivity index:
(i) both ε and C are fixed
(ii) ε is random and C is fixed
(iii) ε is fixed and C is random
(iv) both ε and C are random
In addition, the sample size between two protocol amendments is random.
55 Sample size adjustment - random location shift
Sample size formulas, without and with adjustment, for the superiority, non-inferiority, and equivalence hypotheses under a random location shift.
56 Sample size adjustment - random scale shift
Sample size formulas, without and with adjustment, for the superiority, non-inferiority, and equivalence hypotheses under a random scale shift.
58 Dose finding trials
Fundamental questions:
- Is there any drug effect?
- What doses exhibit a response different from control?
- What is the nature of the dose-response relationship?
- What is the optimal dose?
60 Dose response trials
ICH E4 (1994), Guideline on Dose-Response Information to Support Drug Registration:
- Randomized parallel dose-response designs
- Crossover dose-response design
- Forced titration design
- Dose escalation design
- Optimal titration design (placebo-controlled titration to endpoint)
63 Dose escalation trial
Primary objectives:
- Is there any evidence of a drug effect?
- What is the nature of the dose response?
- What is the optimal dose?
Principles:
- Fewer patients exposed to toxicity
- More patients treated at potentially efficacious dose levels
64 Dose escalation trial
Algorithm-based designs: the traditional dose escalation rule (TER)
- The "3+3" TER
- The "a+b" TER
Model-based designs:
- The continual reassessment method (CRM)
- CRM in conjunction with a Bayesian approach
65 Dose escalation trial
Dose-limiting toxicity (DLT): an unacceptable or unmanageable safety profile, pre-defined by some criteria such as Grade 3 or greater hematological toxicity according to the US National Cancer Institute's Common Toxicity Criteria (CTC).
Maximum tolerable dose (MTD): the highest possible but still tolerable dose with respect to some pre-specified DLT.
66 Dose escalation trial
The "3+3" TER: the traditional escalation rule is to enter three patients at a new dose level and then enter another three patients when one DLT is observed. The assessment of all six patients is then performed to determine whether the trial should be stopped at that level or escalated to the next dose level.
67 Traditional escalation rule
Basically, there are two types of escalation rules:
- Traditional escalation rule (TER): does not allow dose de-escalation.
- Strict traditional escalation rule (STER): allows dose de-escalation if two of three patients have DLTs.
The "3+3" TER can be generalized to the "a+b" design, with or without dose de-escalation.
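The "3+3" decision at a single dose level can be sketched as a small function. This is a minimal sketch of the TER variant only (no de-escalation); the function name and return labels are illustrative.

```python
def three_plus_three_decision(dlt_first3, dlt_extra3=None):
    """Decision at one dose level under the classical "3+3" rule (sketch).

    dlt_first3: DLTs among the first cohort of 3 patients.
    dlt_extra3: DLTs among the expansion cohort of 3 (None if not enrolled).
    Returns "escalate", "expand" (enroll 3 more at this level), or "stop"
    (this level is too toxic; the previous level is taken as the MTD).
    De-escalation variants (STER) are not modeled here.
    """
    if dlt_extra3 is None:
        if dlt_first3 == 0:
            return "escalate"
        if dlt_first3 == 1:
            return "expand"
        return "stop"
    # Six patients total: escalate only if at most 1 of 6 had a DLT.
    return "escalate" if dlt_first3 + dlt_extra3 <= 1 else "stop"
```

For example, one DLT in the first cohort triggers the expansion cohort, and a second DLT among the six stops escalation.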
68 Continual reassessment method
In the CRM, the dose-response relationship is continually reassessed based on accumulating data collected from the trial, and the next patient who enters the trial is assigned to the potential MTD level. The steps are:
- Dose-toxicity modeling
- Dose level selection
- Reassessment of model parameters
- Assignment of the next patient
69 Dose toxicity modeling
Assumptions:
- There is a monotonic relationship between dose and toxicity.
- The biologically inactive dose is lower than the active dose, which is in turn lower than the toxic dose.
Model: the logistic model is often considered.
70 Dose toxicity modeling
With the logistic model p(d) = 1/(1 + b·exp(-a·d)), where p(d) is the probability of toxicity associated with dose d, and a and b are positive parameters to be determined, the MTD can be expressed as MTD = (1/a)·log(bθ/(1 - θ)), where θ is the probability of DLT at the MTD.
71 Dose toxicity modeling
Remarks:
- For an aggressive tumor and a transient, non-life-threatening DLT, θ could be as high as 0.5.
- For a persistent DLT and less aggressive tumors, θ could be as low as 0.1 to 0.25.
- A commonly used value for θ is somewhere between 0 and 1/3 = 0.33 (Crowley, 2001).
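The logistic dose-toxicity model and its inversion for the MTD can be sketched directly. The parameterization p(d) = 1/(1 + b·exp(-a·d)) is one common form (assumed here); solving p(MTD) = θ gives MTD = (1/a)·log(bθ/(1 - θ)). The parameter values below are illustrative.

```python
from math import exp, log

def p_tox(dose, a, b):
    """Logistic dose-toxicity curve p(d) = 1 / (1 + b*exp(-a*d)).

    With a, b > 0, toxicity probability increases monotonically with dose.
    """
    return 1.0 / (1.0 + b * exp(-a * dose))

def mtd(a, b, theta):
    """Dose at which the DLT probability equals theta.

    Inverting p(d) = theta gives MTD = (1/a) * log(b*theta / (1 - theta)).
    """
    return log(b * theta / (1 - theta)) / a

# With illustrative parameters a = 1, b = 3 and target DLT rate theta = 1/3,
# the implied MTD satisfies p_tox(MTD) == 1/3 by construction.
d_star = mtd(1.0, 3.0, 1 / 3)
```

The round trip (compute the MTD, then evaluate the toxicity curve there) is a quick sanity check that the inversion is consistent with the model.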
72 Dose level selection
General principles for the starting dose:
- It should be low enough to avoid severe toxicity.
- It should be high enough to observe some activity or potential efficacy in humans.
The commonly used starting dose is based on the dose at which 10% mortality occurs in mice.
73 Dose level selection
The subsequent dose levels are usually selected based on a multiplicative set, d_{i+1} = f_i · d_i, where f_i is called the dose escalation factor.
74 Dose level selection
Remarks:
- The highest dose level should be selected such that it covers the biologically active dose but remains lower than the toxic dose.
- In general, the CRM does not require pre-determined dose intervals.
75 Reassessment of model parameters
- The key is to estimate the parameters in the dose-response model.
- An initial assumption or prior about the parameters is necessary in order to assign patients to dose levels based on the dose-toxicity relationship.
- The estimates are continually updated based on cumulative data observed from the trial.
76 Reassessment of model parameters
Remarks: the estimation method can be a frequentist or a Bayesian approach.
Frequentist approach:
- Maximum likelihood or least squares estimates are commonly considered.
Bayesian approach:
- Requires a prior distribution for the parameters.
- Provides the posterior distribution and predictive probabilities of the MTD.
77 Assignment of the next patient
The updated dose-toxicity model is used to choose the dose level for the next patient. In other words, the next patient enrolled in the trial is assigned to the currently estimated MTD based on the dose-response model.
78 Assignment of the next patient
Remarks:
- Assigning patients to the most recently updated MTD leads to the majority of patients being assigned to dose levels near the MTD, which allows a more precise estimate of the MTD with a minimum number of patients.
- In practice, this assignment is subject to safety constraints such as a limited dose jump and delayed responses.
79 Criteria for design selection
- Expected number of patients
- Expected number of DLTs
- Toxicity rate
- Probability of observing a DLT prior to the MTD
- Probability of correctly achieving the MTD
- Probability of overdosing
- Others, e.g., flexibility of dose de-escalation
80 An example - radiation therapy
- Small cohort sizes at the lower dose levels, to minimize the number of patients in the lower dose groups
- The majority of patients near the MTD (ideally, in the last two dose cohorts under study)
- Flexibility for dose de-escalation
- A limited dose jump if the CRM is used
- A higher probability of reaching the MTD
- A lower probability of overdosing
- Moderate AEs also taken into consideration
81 Example - clinical trial simulation
- Simulation runs = 5,000
- Initial dose = 0.5 mCi/kg; dose range from 0.5 mCi/kg to 4.5 mCi/kg
- Number of dose levels = 6; a modified Fibonacci sequence is considered, i.e., dose levels of 0.5, 1, 1.6, 2.5, 3.5, and 4.7
- Maximum dose de-escalations allowed = 1 (for the STER)
- DLT rate at the MTD is assumed to be 1/3 = 33%
- A logistic toxicity model is assumed for the CRM, with a uniform prior
- Number of doses allowed to be skipped = 0
82 Summary of simulation results (based on 5,000 simulation runs)
Design | Expected # patients (N) | Expected # of DLTs | Mean MTD (SD) | Prob. of selecting correct MTD
"3+3" TER | 18.8 | | 1.52 (0.507) | 0.392
STER* | 17.59 | 3.2 | 1.70 (0.499) | 0.208
CRM** | 13.8 | 2.2 | 2.33 (0.451) | 0.696
* Allows dose de-escalation. ** A uniform prior was used.
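Operating characteristics like those in the table are obtained by simulating the escalation rule many times over an assumed true dose-toxicity curve. A minimal sketch for the "3+3" TER follows; the dose ladder, true DLT probabilities, and number of runs are illustrative, not the study's, and the STER and CRM arms are not modeled.

```python
import random

def simulate_3plus3(p_dlt, n_runs=5000, seed=1):
    """Simulate the "3+3" TER over a fixed dose ladder.

    p_dlt: true DLT probability at each dose level (monotone increasing).
    Returns the average number of patients used per trial and the
    distribution of the selected MTD level (-1 means even the lowest
    dose was declared too toxic).
    """
    rng = random.Random(seed)
    total_n = 0
    mtd_counts = {}
    for _ in range(n_runs):
        n_used = 0
        mtd_level = len(p_dlt) - 1  # reached if we escalate off the top
        for level, p in enumerate(p_dlt):
            dlts = sum(rng.random() < p for _ in range(3))
            n_used += 3
            if dlts == 1:
                dlts += sum(rng.random() < p for _ in range(3))
                n_used += 3
                if dlts <= 1:
                    continue  # at most 1/6 DLTs: escalate
            elif dlts == 0:
                continue      # 0/3 DLTs: escalate
            mtd_level = level - 1  # this level too toxic: previous is MTD
            break
        total_n += n_used
        mtd_counts[mtd_level] = mtd_counts.get(mtd_level, 0) + 1
    return total_n / n_runs, mtd_counts

avg_n, mtd_dist = simulate_3plus3([0.05, 0.10, 0.33, 0.60], n_runs=2000)
```

From the returned distribution one can read off the expected sample size and the probability of selecting each level as the MTD, which is how columns like those in the table are produced.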
83 What is a seamless adaptive design?
A seamless trial design is defined as a design that combines two separate (independent) trials into a single study. The single study is able to address the study objectives that are normally achieved through the conduct of the two trials.
84 Seamless adaptive trial design
A seamless adaptive trial design is a seamless design that applies adaptations during the conduct of the trial. It uses data collected from patients enrolled before and after the adaptation in the final analysis.
85 Characteristics
- Combines two separate trials into a single trial; also known as a two-stage adaptive design.
- The single trial consists of two stages (phases): a learning (exploratory) phase and a confirmatory phase.
- There is an opportunity for adaptations based on accrued data at the end of the learning phase.
86 Advantages
Opportunity for savings
- Stopping early for safety and/or futility/efficacy
Efficiency
- Can reduce the lead time between the learning and confirmatory phases
Combined analysis
- Data collected in the learning phase are combined with data obtained in the confirmatory phase for the final analysis
87 Limitations (regulatory concerns)
- May introduce operational bias (adaptations related to dose, hypotheses, endpoints, etc.).
- May not be able to control the overall type I error rate, especially when the study objectives/endpoints are different at different stages.
- Statistical methods for the combined analysis are not well established.
- Complexity depends upon the adaptations applied.
88 An example
A two-stage phase II/III study:
- Learning (exploratory) phase: dose finding
- Confirmatory phase: efficacy confirmation
89 Comparison of type I errors
Let α_II and α_III be the type I error rates for the phase II and phase III studies, respectively. Then the overall alpha for the traditional approach is α_II · α_III if one phase III study is required, and α_II · α_III² if two phase III studies are required. In an adaptive seamless phase II/III design, the actual alpha is that of the single trial. The alpha for a seamless design is therefore 1/α_III (or 1/α_III²) times larger than that of the traditional approach.
90 Comparison of power
Let P_II and P_III be the power for the phase II and phase III studies, respectively. Then the power for the traditional approach is P_II · P_III if one phase III study is required, and P_II · P_III² if two phase III studies are required. In an adaptive seamless phase II/III design, the power is that of the single trial. The power for a seamless design is therefore 1/P_II times larger than that of the traditional approach.
91 Comparison
| Traditional approach | Seamless design
Significance level | 1/20 x 1/20 | 1/20
Power | 0.8 x 0.8 | 0.8
Lead time | 6 months - 1 year | reduced lead time
Sample size | |
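The arithmetic behind the comparison above can be checked directly: with a 1/20 significance level and 0.8 power at each phase, the two-trial path multiplies, while the seamless design uses a single test.

```python
# Per-phase operating characteristics from the comparison slide.
alpha_II = alpha_III = 1 / 20
power_II = power_III = 0.8

# Traditional path: phase IIb followed by one phase III, both must succeed.
alpha_traditional = alpha_II * alpha_III   # 1/400
power_traditional = power_II * power_III   # 0.64

# Seamless design: a single test at 1/20 with power 0.8.
ratio_alpha = (1 / 20) / alpha_traditional  # 1/alpha_III = 20 times larger
ratio_power = 0.8 / power_traditional       # 1/power_II = 1.25 times larger
```

So relative to the traditional two-trial path, the seamless design's alpha is 20 times larger and its power 1.25 times larger, matching the 1/α_III and 1/P_II factors discussed on the preceding slides.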
92 Seamless adaptive designs
In practice, a seamless adaptive design may combine two separate (independent) trials with similar but different study objectives into a single trial, e.g., a phase II trial for dose selection and a phase III study for efficacy confirmation. In some cases, the study endpoints considered in the two separate trials may be different, e.g., a biomarker or surrogate endpoint versus a regular clinical endpoint.
93 Seamless adaptive designs
- Category I (SS): same study objectives, same study endpoints
- Category II (SD): same study objectives, different study endpoints
- Category III (DS): different study objectives, same study endpoints
- Category IV (DD): different study objectives, different study endpoints
94 Categories of two-stage seamless adaptive designs
Crossing the study objectives at the two stages (same vs. different) with the study endpoints at the two stages (same vs. different) gives the four categories: I = SS, II = SD, III = DS, IV = DD.
95 Seamless adaptive designs
- Category I (SS): similar to a typical group sequential design
- Category II (SD): biomarker (or surrogate endpoint, or the clinical endpoint with a shorter duration) versus the clinical endpoint
- Category III (DS): dose finding versus early efficacy
- Category IV (DD): treatment selection with a biomarker (or surrogate endpoint, or clinical endpoint with a different treatment duration) versus efficacy confirmation with the regular clinical endpoint
96 Frequently asked questions
- How do we perform power analysis for sample size calculation/allocation?
- How do we control the overall type I error rate at a pre-specified level of significance, especially when the study objectives at different stages are different?
- How do we combine data collected from both stages for a valid final analysis, especially when the study objectives and study endpoints at different stages are different?
97 Statistical analysis for Category I (SS) designs
- Similar to a group sequential trial design with planned interim analyses.
- Can be treated as a multiple-stage trial design with adaptations such as stopping the trial early for futility/efficacy, dropping the losers, and sample size re-estimation.
98 Hypothesis testing
Consider a K-stage design and suppose we are interested in testing the null hypothesis formed from H_01, ..., H_0K, where H_0k is the null hypothesis at the kth stage.
99 Stopping rules
Let T_k be the test statistic associated with the null hypothesis at the kth stage. Then:
- Stop for efficacy if T_k ≤ α_k
- Stop for futility if T_k > β_k
- Continue with adaptations if α_k < T_k ≤ β_k
where α_k < β_k for k = 1, ..., K - 1, and α_K = β_K.
100 Test based on individual p-values
This method is referred to as the method of individual p-values (MIP). The test statistic at the kth stage is the stagewise p-value based on the subsample from that stage. For K = 2 (a two-stage design), the stopping boundaries at the two stages determine the overall type I error rate.
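The two-stage MIP decision rule can be sketched as a small function. Under the standard assumptions that the stagewise p-values are independent and uniform under the null, the overall type I error of this rule is α1 + (β1 - α1)·α2; the specific boundary values used in the example are illustrative.

```python
def mip_two_stage(p1, alpha1, beta1, p2=None, alpha2=None):
    """Two-stage decision with the method of individual p-values (MIP).

    Stage 1: reject (stop for efficacy) if p1 <= alpha1; accept (stop for
    futility) if p1 > beta1; otherwise continue, possibly with adaptations.
    Stage 2: reject if and only if p2 <= alpha2.
    """
    if p1 <= alpha1:
        return "reject"
    if p1 > beta1:
        return "accept"
    if p2 is None:
        return "continue"
    return "reject" if p2 <= alpha2 else "accept"

def mip_overall_alpha(alpha1, beta1, alpha2):
    """Overall type I error under independent, uniform stagewise p-values:
    P(p1 <= alpha1) + P(alpha1 < p1 <= beta1) * P(p2 <= alpha2)."""
    return alpha1 + (beta1 - alpha1) * alpha2
```

For example, with α1 = 0.005 and β1 = 0.20, the stage-2 boundary α2 can be chosen so that the overall alpha equals a pre-specified 0.05.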
106 Practical issues
- Based on MIP, MSP, or MPP, the sample size for achieving the desired power while controlling the overall type I error rate at a pre-specified level of significance can be obtained.
- The statistical methodology was derived under the assumptions of Category I (SS) designs and did not address the issue of sample size allocation.
- The target patient population may have shifted after adaptations. How do we preserve the type I error rate?
- Adaptations are made based on "estimates." How will this affect the power?
- More adaptations result in a more complicated adaptive design.
107 Statistical analysis for Category II (SD) designs
Let x and y denote the data observed from stage 1 (the learning phase) and stage 2 (the confirmatory phase), respectively. Assume that there is a relationship between x and y. The idea is to use the predicted values of the stage-2 endpoint for the first-stage subjects in the final combined analysis.
108 Assumptions
x and y can be related by a model, y = f(x) + e, where e is an error term with zero mean and a finite variance.
110 Sample size allocation
Let n_1 and n_2 be the sample sizes at the first and second stages, respectively. Also, let ρ = n_1/n_2. Then the total sample size is N = n_1 + n_2, where ρ is referred to as the allocation factor.
111 Sample size allocation
For testing the hypothesis of equality, sample size formulas incorporating the allocation factor can be derived (see Chow and Tu, 2008, for the explicit expressions).
112 Remarks
- Following a similar idea, formulas for sample size calculation/allocation for testing the hypotheses of superiority, non-inferiority, and equivalence can also be derived.
- A similar idea can be applied to count data (binary responses) and time-to-event data, assuming that there is a well-established relationship between the two study endpoints at the different stages.
113 Category II (SD) designs (continuous and binary endpoints)
Hypothesis-testing procedures for equality, non-inferiority (δ < 0), superiority (δ > 0), and equivalence are available for both continuous endpoints and binary responses.
114 Category II (SD) designs (time-to-event data)
Hypothesis-testing procedures for equality, non-inferiority (δ < 0), superiority (δ > 0), and equivalence are available for time-to-event data under the Weibull distribution and under Cox's proportional hazards model.
115 Note
The definitions of the notation given in the previous slides can be found in the following reference:
Chow, S.C. and Tu, Y.H. (2008). On two-stage seamless adaptive design in clinical trials. Journal of the Formosan Medical Association, 107(12), S51-S59.
116 Remarks
- One of the key assumptions of the proposed method is that there is a well-established relationship between the endpoints (e.g., biomarker vs. clinical endpoint, or the same clinical endpoint with different durations).
- When there is a shift in the patient population (e.g., due to protocol amendments), the proposed method needs to be modified.
117 An example - the HCV study
Study objectives: to evaluate the safety and efficacy of a test treatment for treating patients with hepatitis C virus (HCV) genotype 1 infection.
- Dose selection (phase II)
- Efficacy confirmation (phase III)
Study design: a two-stage phase II/III seamless adaptive design; subjects are randomly assigned to five treatments (four active and one placebo).
118 An example - the HCV study
Characteristics: the study objectives are similar but different, and the study endpoints are different.
Study endpoints:
- First stage: early virologic response (EVR) at week 12
- Second stage: sustained virologic response (SVR) at week 72 (i.e., 24 weeks after 48 weeks of treatment)
119 Adaptations considered
Two planned interim analyses:
- The first interim analysis will be performed when all Stage 1 subjects have completed study week 12.
- The second interim analysis will be conducted when all Stage 2 subjects have completed week 12 of the study and about 75% of Stage 1 subjects have completed Stage 1 treatment.
O'Brien-Fleming-type boundaries are applied.
120Criteria for dose selection at Stage 1
Dose selection is performed based on a precision analysis: based on EVR, the dose with the highest confidence level for achieving a statistical difference from the control arm (i.e., for the observed difference not being due to chance alone) is selected.
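The precision-analysis selection rule can be sketched as follows: for each dose, compute a normal-approximation z-statistic for the difference in EVR rates versus control and take its one-sided confidence level, then carry forward the dose with the highest level. The counts below are entirely hypothetical, invented for illustration:

```python
from math import sqrt
from statistics import NormalDist

norm = NormalDist()

def confidence_level(x_d, n_d, x_c, n_c):
    """One-sided confidence level that a dose beats control, based on a
    normal approximation to the difference in EVR response rates."""
    p_d, p_c = x_d / n_d, x_c / n_c
    se = sqrt(p_d * (1 - p_d) / n_d + p_c * (1 - p_c) / n_c)
    return norm.cdf((p_d - p_c) / se)

# Hypothetical Stage 1 EVR counts (responders, enrolled): 4 doses vs placebo
control = (12, 60)
doses = {"dose A": (20, 60), "dose B": (25, 60),
         "dose C": (31, 60), "dose D": (28, 60)}

levels = {d: confidence_level(x, n, *control) for d, (x, n) in doses.items()}
best = max(levels, key=levels.get)   # dose carried into Stage 2
```

With these made-up counts, the dose with 31/60 responders has the highest confidence level and would be selected.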
121An example – the HCV study
Notations:
…: treatment effect of the ith dose group at the jth stage based on the surrogate endpoint
…: treatment effect of the ith dose group at the jth stage based on the regular clinical endpoint
i = 1, …, k (dose group); j = 1, 2 (stage)
122Study design of the HCV study
[Diagram] Two-stage seamless adaptive design (Stage 1 → Stage 2), viewed as a multiple-stage design (Stages 1–4): 1st interim analysis (decision-making) at the end of Stage 1; 2nd interim analysis (sample size re-estimation) before the end of study.
123An example – the HCV study
This two-stage seamless design can then be viewed as a 4-stage design.
Hypotheses of interest: …
124An example – the HCV study
Testing procedure
Stage 1: If …, then stop the trial. If …, then the treatment will proceed to Stage 2, where …
Stage 2: If …, then stop the trial. If … but …, then move to Stage 3.
125An example – the HCV study
Stage 3: If …, stop the trial; otherwise move to Stage 4.
Stage 4: Reject …
126Controlling the type I error rate
It can be shown that the maximum probability of wrongly rejecting … is achieved when … and …
Denote by … and … the two vectors. Then we have …
127Sample size calculation
Power can be evaluated at … and …
Denote by … and … the two vectors. Then we have …
128Sample size calculation
Let N be the total sample size. Then …
Similar to Thall, Simon and Ellenberg (1988), we can choose the parameters such that … is minimized.
129Remaining issues
- How should the sample size be allocated?
- Are the usual O'Brien-Fleming-type boundaries appropriate?
- How do we combine data collected from both stages for a valid final analysis?
- What if there is a shift in the target patient population?
- Can clinical trial simulation help?
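On the last question, clinical trial simulation can at least check whether a candidate design holds the type I error rate. A minimal Monte Carlo sketch (illustrative, not the study's actual design) for a two-look group sequential test under H0, using the classical O'Brien-Fleming two-sided boundaries for K = 2 analyses:

```python
import random

random.seed(1)

def simulate_type1(b1=2.797, b2=1.977, reps=20_000):
    """Monte Carlo type I error of a two-look group sequential test under
    H0 (no treatment effect), with equal information per look and classical
    O'Brien-Fleming two-sided boundaries for K = 2 analyses."""
    rejections = 0
    for _ in range(reps):
        z1 = random.gauss(0.0, 1.0)        # stage-1 z-score under H0
        if abs(z1) >= b1:
            rejections += 1                # early rejection
            continue
        z2 = random.gauss(0.0, 1.0)        # independent stage-2 increment
        z_final = (z1 + z2) / 2 ** 0.5     # pooled z at the final look
        if abs(z_final) >= b2:
            rejections += 1
    return rejections / reps

alpha_hat = simulate_type1()               # should be close to 0.05
```

The same skeleton extends to the less tractable features of the HCV design (dose dropping, endpoint switching, sample size re-estimation), where analytic type I error control is hard to verify.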
130Remarks
- The usual sample size calculation for a two-stage design with different study objectives/endpoints needs adjustment.
- One of the key assumptions is that there is a well-established relationship between the different endpoints. This relationship may not exist or cannot be verified in practice.
- When there is a shift in the patient population (e.g., as the result of protocol amendments), the above method needs to be modified.
131Future perspectives
Well-understood design:
- Group sequential design
Less well-understood designs:
- Adaptive group sequential design
- Adaptive dose finding
- Two-stage seamless adaptive design
Less well-understood designs should be used with caution.
132Future perspectives
- Design-specific guidances need to be developed (to prevent misuse and abuse).
- Statistical methods need to be derived (for validity and reliability).
- Monitoring of adaptive trial designs is needed (for integrity).
133Concluding remarks
Clinical:
- Adaptive design reflects real clinical practice in clinical development.
- Adaptive design is very attractive due to its flexibility and efficiency.
- It has potential use in early clinical development.
Statistical:
- The use of adaptive methods in clinical development will make current good statistics practice even more complicated.
- The validity of adaptive methods is not well established.
134Concluding remarks
Regulatory:
- Regulatory agencies may not realize it, but adaptive methods have been employed in the review/approval of regulatory submissions for years.
- Specific guidelines regarding the different types of less well-understood adaptive designs need to be developed.
135Selected references
Special issues of Biometrics, Statistics in Medicine, Journal of Biopharmaceutical Statistics, Biometrical Journal, Pharmaceutical Statistics, etc.
Gallo, P., et al. (2006). Adaptive design in clinical drug development – an executive summary of the PhRMA Working Group (with discussions). Journal of Biopharmaceutical Statistics, 16(3), 275-283.
Chow, S.C. and Chang, M. (2006). Adaptive Design Methods in Clinical Trials. Chapman & Hall/CRC Press, Taylor & Francis, New York, NY.
Chow, S.C. and Chang, M. (2008). Adaptive design methods in clinical trials – a review. The Orphanet Journal of Rare Diseases, 3, 1-11.
Pong, A. and Chow, S.C. (2010). Handbook of Adaptive Design in Pharmaceutical Research and Development. Chapman & Hall/CRC Press, Taylor & Francis, New York, NY.