Choosing Endpoints and Sample Size Considerations


Choosing Endpoints and Sample Size Considerations. Methods in Clinical Cancer Research, March 3, 2015.

Sample Size and Power
The most common reason statisticians get contacted. Sample size is contingent on design, analysis plan, and outcome. With the wrong sample size, you will either:
- not be able to draw conclusions because the study is "underpowered", or
- waste time and money because your study is larger than it needed to be to answer the question of interest.
And with the wrong sample size, you might have problems interpreting your result:
- Did I fail to find a significant result because the treatment does not work, or because my sample size is too small?
- Did the treatment REALLY work, or is the effect I saw too small to warrant further consideration of this treatment?
This is an issue of CLINICAL versus STATISTICAL significance.

Sample Size and Power
Sample size calculation ALWAYS requires the investigator to make some assumptions:
- How much better do you expect the experimental therapy group to perform than the standard therapy group?
- How much variability do we expect in measurements?
- What would be a clinically relevant improvement?
The statistician CANNOT tell you what these numbers should be (unless you provide data). It is the responsibility of the clinical investigator to define these parameters.

Sample Size and Power: review of power
Power = the probability of concluding that the new treatment is effective if it truly is effective.
Type I error = the probability of concluding that the new treatment is effective if it truly is NOT effective.
(Type I error = alpha level of the test; Type II error = 1 − power.)
When your study is too small, it is hard to conclude that your treatment is effective.

Three common settings
- Binary outcome: e.g., response vs. no response, disease vs. no disease
- Continuous outcome: e.g., number of units of blood transfused, CD4 cell counts
- Time-to-event outcome: e.g., time to progression, time to death

Most to least powerful outcome types:
1. Continuous
2. Time-to-event
3. Binary/categorical
Example: in a mouse study, metastases yes vs. no (binary) is less powerful than volume or number of metastatic nodules (continuous).

Continuous outcomes
Easiest to discuss. Sample size depends on:
- Δ: the difference to be detected (the alternative-hypothesis value minus the null value)
- α: type 1 error
- β: type 2 error
- σ: standard deviation
- r: ratio of the numbers of subjects in the two groups (usually r = 1)

Continuous Outcomes
We usually solve for sample size, but we can instead fix N and solve for power, or for the detectable difference Δ. For Phase III cancer trials, it is most typical to solve for N.

Example: sample size in EACA study in spine surgery patients*
The primary goal of this study is to determine whether epsilon-aminocaproic acid (EACA) is an effective strategy to reduce the morbidity and costs associated with allogeneic blood transfusion in adult patients undergoing spine surgery (Berenholtz).
- Comparative study with an EACA arm and a placebo arm.
- Primary endpoint: number of allogeneic blood units transfused per patient through day 8 post-operation.
- The average number of units transfused without EACA is expected to be 7.
- Investigators would be interested in regularly using EACA if it could reduce the number of units transfused by 30% (to 5 units or less).
* Berenholtz et al. Spine, 2009 Sept. 1.

Example: sample size in EACA study in spine surgery patients
H0: μ1 − μ2 = 0 versus H1: μ1 − μ2 ≠ 0.
We want to know what sample size we need to have large power and small type I error:
- If the treatment DOES work, we want a high probability of concluding that H1 is "true."
- If the treatment DOES NOT work, we want a low probability of concluding that H1 is "true."

Two-sample t-test approach
- Assume that the standard deviation of units transfused is 4.
- Assume that the difference we are interested in detecting is μ1 − μ2 = 2.
- Assume that N is large enough for the Central Limit Theorem to 'kick in'.
- Choose a two-sided alpha of 0.05.

Two-sample t-test approach

Two-sample t-test approach
For testing the difference in two means, with equal allocation to each arm:
    n per arm = 2 σ² (z_{1−α/2} + z_{1−β})² / Δ²
With UNequal allocation to each arm, where n2 = r n1:
    n1 = σ² (1 + 1/r) (z_{1−α/2} + z_{1−β})² / Δ²,  with n2 = r n1
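As a sketch of the formula above, here is a normal-approximation calculation using the EACA example's numbers (σ = 4, Δ = 2); the exact t-based answer from sample size software may differ by a subject or two:

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(delta, sigma, alpha=0.05, power=0.80, r=1.0):
    """Normal-approximation sample size for detecting a difference
    `delta` in means with common SD `sigma`. Returns (n1, n2), n2 = r*n1."""
    nd = NormalDist()
    z = nd.inv_cdf(1 - alpha / 2) + nd.inv_cdf(power)
    n1 = z**2 * sigma**2 * (1 + 1 / r) / delta**2
    return ceil(n1), ceil(r * n1)

# EACA example: sigma = 4, clinically relevant difference of 2 units
print(n_per_arm(delta=2, sigma=4))              # -> (63, 63) at 80% power
print(n_per_arm(delta=2, sigma=4, power=0.90))  # -> (85, 85) at 90% power
```

Note how halving Δ would quadruple n: the detectable difference enters the denominator squared.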

Sample size = 30, Power = 26%
Sampling dist'n under H0: μ1 − μ2 = 0; sampling dist'n under H1: μ1 − μ2 = 2. Vertical line defines the rejection region for μ1 − μ2.

Sample size = 60, Power = 48%
Sampling dist'n under H0: μ1 − μ2 = 0; sampling dist'n under H1: μ1 − μ2 = 2. Vertical line defines the rejection region for μ1 − μ2.

Sample size = 120, Power = 78%
Sampling dist'n under H0: μ1 − μ2 = 0; sampling dist'n under H1: μ1 − μ2 = 2. Vertical line defines the rejection region for μ1 − μ2.

Sample size = 240, Power = 97%
Sampling dist'n under H0: μ1 − μ2 = 0; sampling dist'n under H1: μ1 − μ2 = 2. Vertical line defines the rejection region for μ1 − μ2.

Sample size = 400, Power > 99%
Sampling dist'n under H0: μ1 − μ2 = 0; sampling dist'n under H1: μ1 − μ2 = 2. Vertical line defines the rejection region for μ1 − μ2.
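The power curve in the slides above can be sketched with a normal approximation (ignoring the negligible lower-tail rejection probability), using the example's σ = 4 and Δ = 2. The results are close to, though not identical to, the t-based figures on the slides:

```python
from statistics import NormalDist

def power_two_sample(n_total, delta=2.0, sigma=4.0, alpha=0.05):
    """Approximate power of a two-sided two-sample z-test, with
    n_total subjects split equally between the two arms."""
    nd = NormalDist()
    n_arm = n_total / 2
    se = sigma * (2 / n_arm) ** 0.5      # SE of the difference in means
    z_crit = nd.inv_cdf(1 - alpha / 2)
    return nd.cdf(delta / se - z_crit)   # upper-tail power only

for n in (30, 60, 120, 240, 400):
    print(n, round(power_two_sample(n), 2))
```

The qualitative message matches the slides: power climbs steeply from about a quarter at N = 30 to essentially 1 by N = 400.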

Likelihood Approach
Not as common, but very logical. The resulting sample size equation is the same, but the paradigm is different: create a likelihood ratio comparing the likelihood assuming different means vs. a common mean.

Other outcomes
Binary: more complex equations than continuous. Why? Because the mean and variance both depend on p. Exact tests are often appropriate, and their use is often necessary when the study will be small. If using the continuity correction with the χ2 test, there is no closed-form solution.
Time-to-event: similar to continuous. Parametric vs. non-parametric: the assumptions can be harder to satisfy for parametric approaches.

Single arm, response rate
H0: p = 0.20 versus Ha: p = 0.40, one-sided alpha 0.05.

N = 10; power = 37%

N = 25; power = 73%

N = 50; power = 90%

N = 80; power = 99%
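The power figures above come from the exact binomial test. A sketch of the calculation (find the smallest rejection cutoff holding one-sided α ≤ 0.05 under p0 = 0.20, then evaluate the rejection probability at p1 = 0.40) reproduces the slide values:

```python
from math import comb

def binom_sf(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def exact_power(n, p0=0.20, p1=0.40, alpha=0.05):
    """Power of the exact one-sided binomial test of H0: p = p0 vs Ha: p = p1."""
    # smallest cutoff r with P(X >= r | p0) <= alpha
    r = next(r for r in range(n + 1) if binom_sf(r, n, p0) <= alpha)
    return binom_sf(r, n, p1)

for n in (10, 25, 50, 80):
    print(n, round(exact_power(n), 2))  # 0.37, 0.73, ... as on the slides
```

Because the cutoff must be an integer, the achieved α jumps around below 0.05 as N changes, which is why exact power does not grow perfectly smoothly with N.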

Time to event endpoints
Power depends on the number of events, not directly on the number of patients: for the same number of patients, accrual time, and expected hazard ratio, the power may be very different. The number of events expected at the time of analysis determines power.

Example: Median PFS 4 months vs. 8 months HR = 0.5 12 month accrual, 12 month follow-up Two-sided alpha = 0.05 Power = 94%

Example: Median PFS 12 months vs. 24 months HR = 0.5 12 month accrual, 12 month follow-up Two-sided alpha = 0.05 Power = 77%
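The dependence on events rather than patients can be sketched with Schoenfeld's approximation for the number of events required by a log-rank test with 1:1 allocation. This is a back-of-the-envelope formula, not the exact software calculation behind the slides' 94% and 77% figures:

```python
from math import ceil, log
from statistics import NormalDist

def events_needed(hazard_ratio, alpha=0.05, power=0.80):
    """Schoenfeld approximation: events required for a two-sided
    log-rank test with equal allocation between arms."""
    nd = NormalDist()
    z = nd.inv_cdf(1 - alpha / 2) + nd.inv_cdf(power)
    return ceil(4 * z**2 / log(hazard_ratio) ** 2)

print(events_needed(0.5))              # HR = 0.5, 80% power -> 66 events
print(events_needed(0.5, power=0.90))  # 90% power -> 88 events
```

This is why the two PFS examples with the same hazard ratio differ in power: when median PFS is long relative to the accrual-plus-follow-up window, fewer of the required events have occurred by the time of analysis.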

Choosing endpoints
Mostly a phase II question. Common predicaments:
- PFS vs. response
- OS vs. PFS
- binary PFS vs. time-to-event PFS

Choosing type I and II errors
Phase III: type I error of one-sided 0.025 or two-sided 0.05; type II error of 20% (i.e., power of 80%).
Phase II: more balanced; common to have 10% of each, and common to see one-sided tests, especially with single-arm studies.

Other issues in comparative trials
Unbalanced design: why? It might help accrual, there might be more interest in the new treatment, or one treatment may be very expensive. As the allocation ratio deviates from 1, the overall sample size increases (or power decreases).
Accrual rate in time-to-event studies: the length of follow-up per person affects power, so we need to account for the accrual rate and the length of the study.

Equivalence and Non-inferiority trials
When using the frequentist approach, we usually switch H0 and Ha relative to a "superiority" trial: in a "non-inferiority" trial, the null hypothesis is that the new treatment is unacceptably worse.

Equivalence and Non-inferiority trials
Slightly more complex. To calculate power, we usually define H0: δ ≥ d versus Ha: δ < d, usually one-sided. Choosing β and α is now a little trickier: you need to think about what the consequences of the type I and II errors will be. The sample size calculation takes the same form, but we usually want a small margin d, so the sample size is usually much bigger for equivalence trials than for standard comparative trials.

Equivalence and Non-inferiority trials
Confidence intervals are more natural to some: we want the CI for the difference to exclude the tolerance level. E.g., a 95% CI of (−0.2, 1.3) would support declaring equivalence if the margin is δ = 2. Problems with CIs: hard-and-fast cutoffs (the same problem as hypothesis tests with a fixed α), and the ends of a CI do not carry the same evidential weight as its middle. A likelihood approach is probably best (though it still involves a hard-and-fast rule).

Non-inferiority example
Recent PRC study: sorafenib vs. sorafenib + A in hepatocellular cancer. Primary objective: demonstrate that the safety of the combination is no worse than that of sorafenib alone.

Example
The toxicity rate of sorafenib alone is assumed to be 40%. A toxicity rate of no more than 50% would be considered 'non-inferior'. Hypothesis test for the combination (c) versus sorafenib alone (s):
H0: pc − ps ≥ 0.10 versus H1: pc − ps < 0.10

Example  

Calculations
You must specify the rate in each group and delta. Note that the assumed difference in rates need not equal delta. Example: Trt A vs. Trt B. Equivalent safety profiles might be implied by a delta of 0.10 (i.e., no more than 10% worse), but you may expect that Trt B (the novel treatment) actually has a better safety profile.

Non-inferiority sample sizes

                               Example 1        Example 2        Example 3
                               (new trt has     (new trt has     (new trt has
                               lower toxicity)  equal toxicity)  worse toxicity)
Alpha                          5%               5%               5%
Power                          80%              80%              80%
Toxicity rate, control group   40%              40%              40%
Toxicity rate, novel trt group 30%              40%              45%
Delta                          10%              10%              10%
Sample size required (total)   140              594*             2414

*If there is truly no difference between the standard and experimental treatment, then 594 patients are required to be 80% sure that the upper limit of a one-sided 95% confidence interval (or equivalently a 90% two-sided confidence interval) will exclude a difference in favor of the standard group of more than 10%.
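The totals in the non-inferiority table can be approximately reproduced with the standard normal-approximation formula for comparing two proportions against a margin. This is a sketch; sample size software may round differently (it gives 2412 rather than the table's 2414 in the third scenario):

```python
from math import ceil
from statistics import NormalDist

def ni_total_n(p_control, p_novel, margin, alpha=0.05, power=0.80):
    """Total N (two equal arms) for a one-sided non-inferiority test of
    H0: p_novel - p_control >= margin vs. H1: difference < margin."""
    nd = NormalDist()
    z = nd.inv_cdf(1 - alpha) + nd.inv_cdf(power)  # one-sided alpha
    var = p_control * (1 - p_control) + p_novel * (1 - p_novel)
    n_per_arm = z**2 * var / (margin - (p_novel - p_control)) ** 2
    return 2 * ceil(n_per_arm)

print(ni_total_n(0.40, 0.30, 0.10))  # new trt has lower toxicity
print(ni_total_n(0.40, 0.40, 0.10))  # equal toxicity
print(ni_total_n(0.40, 0.45, 0.10))  # worse toxicity
```

The denominator shows why the three examples differ so dramatically: what matters is the gap between the margin and the true difference, and that gap shrinks from 0.20 to 0.10 to 0.05 across the three scenarios.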

Other considerations: cluster randomization
Example: a prayer-based intervention in women with breast cancer. To implement it, churches in S.E. Baltimore were identified, and women within a church attend the same 'group' therapy sessions. Consequence: women from the same church have correlated outcomes. The group dynamic will affect the outcome, and it is likely that, in general, women within a church are more similar (spiritually and otherwise) than those from different churches.
Power and sample size? The lack of independence means a larger sample size is needed to detect the same effect. The calculations are straightforward, with a correction for the correlation. The hardest part: getting a good prior estimate of the correlation!
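The correction mentioned above is usually done through the design effect, DEFF = 1 + (m − 1) × ICC, where m is the cluster size and ICC the intra-cluster correlation. A minimal sketch (the numbers 126, 20, and 0.05 are hypothetical, not from the study):

```python
from math import ceil

def clustered_n(n_independent, cluster_size, icc):
    """Inflate a sample size computed under independence by the
    design effect for equal-sized clusters."""
    deff = 1 + (cluster_size - 1) * icc
    return ceil(n_independent * deff)

# e.g., 126 women needed under independence, ~20 women per church,
# intra-church correlation (ICC) guessed at 0.05
print(clustered_n(126, 20, 0.05))  # -> 246
```

Even a modest ICC nearly doubles the required N here, which is why a good prior estimate of the correlation is the hard part.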

Other Considerations: Non-adherence
Example: the side effects of treatment are unpleasant enough to 'encourage' drop-out or non-adherence. Effect? We need to increase the sample size to detect the same difference. This is especially common in time-to-event studies, where we need to follow individuals for a long time to see events. Adjusted sample size equations are available (instead of just increasing N by some percentage). Cross-over is an adherence problem that can be exacerbated; example: vitamin D studies.
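One common adjusted equation of the kind mentioned above inflates N by 1/(1 − R_o − R_i)², where R_o is the drop-out rate and R_i the drop-in (cross-over) rate, reflecting the dilution of the intention-to-treat effect. The rates below are hypothetical:

```python
from math import ceil

def adherence_adjusted_n(n, drop_out, drop_in=0.0):
    """Inflate a sample size for dilution of the treatment effect caused
    by non-adherence (drop-out) and cross-over into treatment (drop-in)."""
    return ceil(n / (1 - drop_out - drop_in) ** 2)

print(adherence_adjusted_n(200, 0.10))        # 10% drop-out        -> 247
print(adherence_adjusted_n(200, 0.10, 0.05))  # plus 5% cross-over  -> 277
```

Because the inflation factor is squared, non-adherence is far more expensive than simply topping up N by the non-adherent percentage.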

Glossed over: interim analyses
These will increase your sample size, but usually not by much. The goal is to maintain the same OVERALL type I and II errors: more looks, more room for error. But asymmetric looks are a little different.

Futility-only stopping
At stage 1, you can only declare 'fail to reject' the null; at stage 2, you can 'fail to reject' or 'reject' the null. That gives two opportunities for a type II error but only one opportunity for a type I error. Ignoring the interim look in planning increases the type II error (decreases power) and decreases the type I error. Why? There are two hurdles to reject the null. This is a 'non-binding' stopping boundary.

Practical Considerations
We don't always have the luxury of solving for N: often N is fixed by feasibility, and we then 'vary' power or the clinical effect size instead. But sometimes even that is difficult; we don't always have good guesses for all of the parameters we need to know.

Not always so easy
More complex designs require more complex calculations, and usually more assumptions as well. Examples: longitudinal studies, cross-over studies, correlated outcomes. Often, simulations are required to get a sample size estimate.

[Tables: simulated power for a matched case-control biomarker study, by (1) odds ratio between cases and controls for a one-standard-deviation change in marker (1.12 to 1.28, in units of control SDs), (2) SD(a)/SD(marker) of 0.25, 0.5, or 1, and (3) 1:1 vs. 1:2 matching; power shown both (4) for X1 as simulated and (5) for X1 replaced with the median of its quintile.]

Helpful Hint: Use a computer!
In this day and age, do NOT include a sample size formula in a proposal or protocol. For common situations, software is available.
Good software available for purchase:
- Stata (binomial and continuous outcomes)
- NQuery
- PASS
- Power and Precision
- etc.
FREE STUFF ALSO WORKS!
- Cedars Sinai software: https://risccweb.csmc.edu/biostats/
- Cancer Research and Biostatistics (non-profit, related to SWOG): http://stattools.crab.org/