
1 Design of Clinical Trials for Treatment of Invasive Fungal Infections
John H. Powers, MD, FACP, FIDSA
Senior Medical Scientist, SAIC, in support of the Collaborative Clinical Research Branch, Division of Clinical Research, National Institute of Allergy and Infectious Diseases, National Institutes of Health

2 Disclosures
Consultant for: Acureon, Johnson and Johnson, AstraZeneca, Merck, Centegen, Methylgene, Cerexa, Octoplus, CoNCERT, Takeda, Destiny, Theravance, Forest, Wyeth

3 Introduction
Why is appropriate design of trials important?
How do clinical practice and clinical research differ?
What are the principles for designing an adequate and well-controlled, internally valid clinical trial?
How can we do better to address these issues?

4 Why is Design Important?
Four possible reasons for the results of a trial:
1. Random error – results due to chance alone
2. Bias – systematic error that causes the results to deviate from the true results (inaccurate measurement)
3. Confounding – error in which the measured result is the actual measure but is not causally related to the treatment received; factors such as disease severity are not confounders in randomized trials, but effect modifiers
If the above reasons are ruled out, then...

5 Why is Design Important?
4. Valid results – validity means the ability of a study to measure what it purports to measure
Internal validity – ability of the study to measure what it purports to measure
External validity – ability to generalize (transfer) results to the population rather than just the sample measured
A trial that does not have internal validity cannot have external validity

6 Why is Design Important?
Random error is addressed by an adequate sample size (see the sketch below)
The P value addresses the probability that the results may be due to chance; it does not address the likelihood that the hypothesis is true
Bias and confounding are addressed by appropriate design; there is no statistical fix after the study is over
An increased sample size can increase the effects of bias and confounding on the results
The only way to obtain valid results is through appropriate design, conduct, and analysis of the trial
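
As a minimal illustration of how sample size addresses random error, here is a sketch using the standard normal-approximation formula for comparing two proportions; the success rates, alpha, and power below are illustrative assumptions, not numbers from the talk.

```python
from math import ceil
from scipy.stats import norm

def n_per_arm(p1, p2, alpha=0.05, power=0.80):
    """Approximate subjects per arm to compare two proportions
    (two-sided alpha, normal approximation)."""
    z_a = norm.ppf(1 - alpha / 2)  # critical value for the two-sided test
    z_b = norm.ppf(power)          # value for the desired power
    p_bar = (p1 + p2) / 2
    num = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
           + z_b * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return ceil(num / (p1 - p2) ** 2)

# Hypothetical: detecting 70% vs. 80% success needs about 294 per arm;
# halving the detectable difference roughly quadruples the sample size.
print(n_per_arm(0.70, 0.80))
```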

7 Why is Design Important?
Invalid clinical trial results can lead to important clinical consequences:
Ineffective therapies used widely in patients (this cannot be sorted out later, since it is difficult to determine cause and effect in individual patients)
Unwarranted harms to patients in the absence of benefits
Emergence of resistance and elimination of benefits for other patients
Ethical issues of exposing subjects to harm in scientifically invalid research
Belmont Report, Ethical Principles and Guidelines for Research Involving Human Subjects, http://ohsr.od.nih.gov/guidelines/belmont.html

8 Ioannidis JP. Why Most Published Research Findings Are False. PLoS Medicine 2005;2(8):e124

9 Clinical Trials and Clinical Practice
Clinical practice and clinical research differ
Clinical practice is based on interventions designed solely to enhance the well-being of an individual patient or client and that have a reasonable expectation of success (Belmont Report, p. 3)
Clinical research is an activity designed to test a hypothesis in groups of subjects and thereby to develop or contribute to generalizable knowledge
Belmont Report, Ethical Principles and Guidelines for Research Involving Human Subjects, http://ohsr.od.nih.gov/guidelines/belmont.html

10 Clinical Trials and Clinical Practice
The question is not whether an individual clinician believes a drug will be effective for an individual patient in clinical practice
The question is how to study the drug to demonstrate safety and effectiveness in a group of patients in a clinical trial, in order to then generalize to clinical practice
Medical need is a reason to do a trial, not a reason to accept invalid trials or lesser evidence
Designing trials based on previously held beliefs, in the absence of evidence, does not allow gathering the evidence needed to validate those beliefs

11 What are the Principles?
1. Clear statement of the objectives of the trial
2. Study design that permits a valid quantitative comparison with a control
3. Selection of patients with the disease (treatment) or at risk of the disease (prevention)
4. Baseline comparability (randomization)
5. Minimization of bias (blinding, etc.)
6. Appropriate methods of assessment of outcomes
7. Appropriate methods of analysis
8. Appropriate measurement of potential harms

12 How Can We Do Better?
1) Clear objective:
Define the disease and its clinical time course – mixing various infections together makes interpretation of results challenging
Differentiate treatment from prevention trials, and salvage from primary treatment
Differentiate explanatory trials from strategy/management trials
Differentiate measurement of effectiveness from measurement of harms
Obtain better natural history data – what is an invasive infection? Does in vitro resistance affect clinical outcomes, and by how much?
This allows better enrollment criteria, a more homogeneous population, less variability, and appropriate timing of outcomes
2) Quantitative comparison with a control:
Absence of a control makes it challenging to assess the causality of outcomes
Choice of control: no treatment, placebo, dose-response, active, historical
Choice of study design: superiority, non-inferiority

13 Quantitative Comparison with a Control
Many ID clinical trials are designed as noninferiority (NI) trials
There are misconceptions about the goals of NI trials:
An NI trial rules out a margin by which the test intervention may be less effective than the control intervention
It does not show that the experimental intervention is as good as or equivalent to the control unless it shows statistical superiority
The experimental intervention can be statistically inferior (or superior) and noninferior at the same time, as long as it is not more inferior than the margin specified before the trial
Designing a noninferiority trial means one is willing to accept less effectiveness with the experimental intervention (for what trade-off?)

14 Designing a Valid Noninferiority Trial
1. A quantitative assessment, reliable and reproducible (based on trials that are themselves adequate and well controlled), of the benefit of the control over placebo, with a suitably conservative evaluation examining variability (not just point estimates)
2. Maintenance of the effect of the control from trial to trial (the constancy assumption): similar definition of disease, endpoints, and timing of endpoints; attention to changes in medical practice, adjunctive therapies, and antimicrobial resistance
3. Selection of a margin of loss of effect of the control that is less than the benefit of the control over placebo found in step 1 (a numeric sketch follows below)
International Conference on Harmonisation Guidance E10, Choice of Control Group and Related Issues in Clinical Trials, www.ich.org
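
To make the margin logic concrete, here is a minimal sketch with wholly illustrative numbers (none come from the talk): suppose historical trials conservatively establish that the control beats placebo by 15 percentage points in success rate, and the margin is set at 7.5 points to preserve half of that effect; noninferiority is then declared only if the lower bound of the confidence interval for (test − control) stays above −7.5 points.

```python
from math import sqrt
from scipy.stats import norm

def ni_check(x_test, n_test, x_ctrl, n_ctrl, margin, alpha=0.05):
    """Declare noninferiority if the lower 95% confidence bound of
    (p_test - p_ctrl) lies above -margin (normal approximation)."""
    p_t, p_c = x_test / n_test, x_ctrl / n_ctrl
    se = sqrt(p_t * (1 - p_t) / n_test + p_c * (1 - p_c) / n_ctrl)
    lower = (p_t - p_c) - norm.ppf(1 - alpha / 2) * se
    return lower, lower > -margin

# Hypothetical trial: 210/300 vs. 220/300 successes, margin 0.075
# (half of an assumed 0.15 control-over-placebo effect).
lower, noninferior = ni_check(210, 300, 220, 300, margin=0.075)
print(f"lower bound = {lower:.3f}, noninferior: {noninferior}")  # fails here
```

Note that with these hypothetical counts the point estimate is only 3.3 points worse, yet the trial fails NI because the interval extends past the margin: the margin constrains the whole interval, not the point estimate.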

15 Designing a Valid Noninferiority Trial
If these conditions are not met, a demonstration of similarity means the experimental and control interventions may be similarly effective or similarly ineffective
The experimental intervention may be no more effective than placebo, even if the control agent was previously effective
The link to external control data in NI trials makes them similar to externally (historically) controlled trials, with similar biases
There are other forms of bias in NI trials beyond statistical issues:
Not ensuring that subjects have the disease under study
Blinding is less effective at preventing bias, since investigators know all subjects are receiving an active intervention
Greater bias due to inappropriate conduct of trials, concomitant medications, missing data, etc.

16 How Can We Do Better?
3) Selection of subjects with disease (treatment) or at risk of disease (prevention):
Rapid diagnostics that evaluate host response as well as the presence of organisms
Biomarkers can be useful in diagnosis, but only in the presence of signs and symptoms of disease – the positive predictive value of a test is related to the pre-test probability (see the sketch after this slide)
Better current natural history data in prevention trials, to better select populations at risk
4) Baseline comparability:
Randomization controls for selection bias as well as for measured and unmeasured confounders, and is the basis for the statistics
Appropriate development of severity classifications (comparing baseline variables to clinical outcomes) to stratify subjects at baseline and decrease variability
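
A minimal sketch of the pre-test-probability point, using Bayes' theorem with illustrative sensitivity and specificity values (the 90%/95% assay below is hypothetical, not any test discussed in the talk): the same test yields a very different positive predictive value in a low-prevalence screening setting than in patients who already have signs and symptoms of disease.

```python
def ppv(sensitivity, specificity, prevalence):
    """Positive predictive value via Bayes' theorem."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Hypothetical assay: 90% sensitivity, 95% specificity.
for prev in (0.01, 0.30):  # screening vs. symptomatic population
    print(f"pre-test probability {prev:.0%} -> PPV {ppv(0.90, 0.95, prev):.0%}")
# -> PPV is about 15% at 1% prevalence but about 89% at 30% prevalence
```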

17 How Can We Do Better?
5) Minimizing bias:
Blinding of microbiological data to the persons assessing outcomes, in situations where the impact of in vitro resistance on clinical outcomes is unclear
Unblinded third parties could assess culture results in serious diseases
This will allow correlation of clinical outcomes with in vitro testing, to better define resistance
Evaluate the clinical outcome at the time of the culture result in any case
Control for concomitant medications
Minimize loss to follow-up and missing data

18 How Can We Do Better?
6) More accurate and sensitive outcome measures:
The effect of antimicrobials in severe disease is based on a decrease in all-cause mortality
Biomarkers can make it more difficult to show effects in some diseases, since they add another criterion to the assessment of outcomes
Develop well-defined clinical outcome criteria, based on the natural history of the disease and independent of clinician judgment (which can cause misclassification bias and increased variability, and hence an increased sample size)
Expert outcome assessment does not eliminate bias and calls the generalizability of results into question
Timing of outcomes – time-to-event analyses in superiority trials can inform duration of therapy, increase the power to detect differences, decrease the sample size, and answer the clinically relevant question of the magnitude of effect (see the sketch below)
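
One way to see why time-to-event analyses can decrease sample size is Schoenfeld's approximation, which counts the events (not the subjects) a log-rank test needs to detect a given hazard ratio; the hazard ratio, alpha, and power below are illustrative assumptions, not numbers from the talk.

```python
from math import ceil, log
from scipy.stats import norm

def events_needed(hazard_ratio, alpha=0.05, power=0.80, alloc=0.5):
    """Schoenfeld's approximation: events required for a log-rank test
    to detect a given hazard ratio (two-sided alpha; alloc = fraction
    of subjects randomized to the test arm)."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return ceil(z ** 2 / (alloc * (1 - alloc) * log(hazard_ratio) ** 2))

# Hypothetical: detecting a hazard ratio of 0.70 with 80% power requires
# about 247 events; the subject count then depends on the event rate.
print(events_needed(0.70))
```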

19 Multiple/Composite Endpoints
Hierarchy of endpoints:
All-cause mortality
Non-fatal clinical events
Symptoms of disease
Surrogate endpoints
We are interested in multiple aspects of how a disease may affect patients' lives
Lubsen J, et al. Stat Med 2003;21:2159-70.

20 Multiple/Composite Endpoints
Same hierarchy: all-cause mortality, non-fatal clinical events, symptoms of disease, surrogate endpoints
Success based on events lower on the hierarchy should not supersede failure based on events higher on the hierarchy that occur during the course of the trial, even when a surrogate is used as part of the primary outcome (see the sketch below)
Lubsen J, et al. Stat Med 2003;21:2159-70.
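
A minimal sketch of that hierarchical rule; the event names and their encoding are hypothetical, and only the ordering principle comes from the slide: a subject's overall result is driven by the worst event observed, so a favorable surrogate cannot rescue a death or a non-fatal clinical event.

```python
# Severity rank: earlier in the list = higher on the hierarchy.
HIERARCHY = ["death", "non_fatal_event", "symptom_failure", "surrogate_failure"]

def composite_outcome(events):
    """Classify a subject by the highest-ranked failure observed;
    success on lower-ranked components cannot override it."""
    for failure in HIERARCHY:
        if failure in events:
            return f"failure ({failure})"
    return "success"

print(composite_outcome({"death"}))          # failure (death), whatever the surrogate showed
print(composite_outcome({"surrogate_failure"}))  # failure (surrogate_failure)
print(composite_outcome(set()))              # success
```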

21 How Can We Do Better?
7) Appropriate analysis:
Decrease the proportion of subjects who are indeterminate or unevaluable by eliminating inappropriate exclusions from the per-protocol analysis – all post-randomization events included
Evaluation of the intent-to-treat (or modified intent-to-treat) analysis protects against selection bias and maintains the integrity of randomization
Make appropriate adjustments for multiple comparisons in secondary endpoints and subgroup analyses
Gatekeeping (stepwise) hypothesis testing controls for false-positive results, but requires a priori specification of the order of hypothesis testing (see the sketch below)
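
A minimal sketch of fixed-sequence (gatekeeper) testing; the endpoint names and p-values are hypothetical: each hypothesis is tested at the full alpha only if every earlier one in the pre-specified order was rejected, and testing stops at the first failure, which is what controls the overall false-positive rate.

```python
def fixed_sequence_test(ordered_pvalues, alpha=0.05):
    """Fixed-sequence (gatekeeper) testing: test hypotheses in their
    pre-specified order at the full alpha; stop at the first non-rejection."""
    results = {}
    for name, p in ordered_pvalues:
        if p <= alpha:
            results[name] = "rejected"
        else:
            results[name] = "not rejected; testing stops"
            break
    return results

# Hypothetical pre-specified order: primary endpoint first, then secondaries.
# secondary_2 fails, so secondary_3 is never tested even though its p = 0.03.
print(fixed_sequence_test([("primary", 0.01), ("secondary_1", 0.04),
                           ("secondary_2", 0.20), ("secondary_3", 0.03)]))
```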

22 8. Analysis of Harms
Safety analysis requires an adequate number of subjects to assess adverse events
Rule of threes – observing no events in a given trial allows one to rule out an event rate of about 3 divided by the number of subjects studied (3/300 = 1%); see the sketch below
Harms are usually not evaluated for statistical significance, since most clinical trials are not testing a hypothesis about harms but developing one
The overall assessment of risks and benefits depends on the nature and magnitude of both:
Greater risks are acceptable when the treatment has a large effect on clinically important endpoints like death
Serious adverse events are less acceptable when the benefits are small
Risks are unacceptable if the benefits compared to placebo are unclear
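
The rule of three is the first-order solution of (1 − p)^n = 0.05 for p, the largest event rate still consistent (at the one-sided 95% level) with seeing zero events in n subjects; the small script below compares the approximation with the exact bound.

```python
from math import exp, log

def rule_of_three(n):
    """Approximate upper 95% bound on the event rate when 0/n events seen."""
    return 3 / n

def exact_bound(n, conf=0.95):
    """Exact bound: solve (1 - p)**n = 1 - conf for p."""
    return 1 - exp(log(1 - conf) / n)

n = 300
print(f"rule of three: {rule_of_three(n):.4f}")  # 0.0100, i.e., 1%
print(f"exact bound:   {exact_bound(n):.4f}")    # ~0.0099
```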

23 Conclusions
We need to accept that we can improve on the current level of evidence and answer questions that are still unclear
There are many opportunities to develop more clinically relevant and more efficient clinical trials
The result can be more information for clinicians and patients, and optimal use of antimicrobials, by describing who benefits, by how much, and with a quantitative comparison to risks

