


1 Clinical trials and pitfalls in planning a research project Dr. D. W. Green Consultant Anaesthetist King's College Hospital Denmark Hill London SE5 9RS with grateful thanks to Professor Alan Aitkenhead

2 Seven deadly scientific sins: insufficient information; poor research; inadequate sample size (no power analysis, no confidence intervals); bias; confounding factors (e.g. mixed sexes for PONV); vague end points (e.g. severity of pain not clearly defined); straying from the hypothesis.

3 New drugs: types of study. Laboratory: structure/activity analysis. Animal: does it work in animals? is it toxic? Human volunteers: Phase 1 - is it toxic? Phase 2 - does it work? Phase 3 - does it work better than existing drugs? Phase 4 - post-marketing surveillance, what is it like in the real world?

4 Background: has it been done before? is it worth doing, clinically and scientifically? An essential step: has anything similar been done before, and what methods were used by others?

5 Protocol: the introduction. Background information; justification (why, what gap will it fill, what benefits); be succinct, but don't miss out relevant information.

6 Methodology: ethics and consent. Crucial: the Declaration of Helsinki; benefit to patients; benefit to society. Information to patients: purpose, what it involves, potential benefits, risks and disadvantages, and the ability to withdraw without prejudice. Special issues: children and incompetent adults.

7 Selection of patients: age (efficacy and current disease); ASA status; sex (pharmacokinetics and dynamics, e.g. PONV); type of surgery (applicability and availability); ability to give consent (e.g. ICU); pregnancy.

8 Designs: prospective vs retrospective; open vs blind (single or double); randomisation (acceptable methods, e.g. envelopes opened after entering the trial); use of placebo (ethics and other treatments); block design (blocks of patients, analysed after each block, so the trial can stop as soon as results are available); stratification; sequential analysis.

9 Pitfalls: funding (salaries; drugs, equipment and investigations, e.g. NHS costs; statistics and data collection; design); time (how long do we go on for?); a negative result (do, or should, we publish?); results that contradict other studies; statistical vs clinical effects; rival investigators.

10 Assessment and measurements: which techniques (validity, accuracy, objectivity, analysis); which observer (blinded? nurses? how many make the measurements, and are they trained?); how often (science, statistics, practicality over long periods, and the placebo effect of frequent assessments); number of variables (the fewer the better); availability of the test, e.g. troponin T.

11 Documentation: ethics committee approval; patient information; data collection forms (data type, storage, security, confidentiality, safety); consent forms.

12 Disproving the null hypothesis. The 'null' hypothesis is that there is no difference between the treatments. The probability value 'p' tells you how often a difference as large as the one observed could have arisen by chance if the null hypothesis were true. p < 0.05 means 1 in 20 or less (statistically significant); p < 0.01 means 1 in 100 or less (highly statistically significant).
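
A minimal sketch of how such a p value is obtained in practice, using a two-sample t-test from scipy on made-up pain scores; the data and the choice of test are illustrative assumptions, not taken from the presentation.

```python
# Minimal sketch: testing the null hypothesis of "no difference" between
# two treatment groups, using hypothetical pain scores (not real trial data).
from scipy import stats

control   = [6.1, 5.8, 7.0, 6.4, 5.9, 6.6, 6.2, 5.7]   # hypothetical scores
treatment = [5.2, 4.9, 5.8, 5.5, 5.1, 5.6, 4.8, 5.3]

t_stat, p_value = stats.ttest_ind(control, treatment)

# p is how often a difference at least this large would occur by chance
# alone if the null hypothesis were true.
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("statistically significant at the 1-in-20 (p < 0.05) level")
```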

13 Disproving the null hypothesis. A Type I error is where a difference is shown which has in fact occurred by chance. If 'p' is reported at the 0.05 level, 1 in 20 trials will show a difference where none exists; multiple subgroup analyses within a trial may likewise throw up spurious subgroup treatment differences, and a statistically significant result is more likely to be reported!
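
A short simulation sketch of the subgroup problem just described: twenty comparisons run on pure noise, so any 'significant' result is a Type I error. The number of subgroups, the sample sizes and the test are all illustrative assumptions.

```python
# Sketch of the Type I error problem: 20 subgroup comparisons on pure noise.
# With p < 0.05 as the threshold, roughly 1 spuriously "significant" result
# is expected even though no real treatment difference exists.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_subgroups, n_per_arm = 20, 30

false_positives = 0
for _ in range(n_subgroups):
    # Both arms are drawn from the same distribution: the null hypothesis is true.
    arm_a = rng.normal(loc=0.0, scale=1.0, size=n_per_arm)
    arm_b = rng.normal(loc=0.0, scale=1.0, size=n_per_arm)
    _, p = stats.ttest_ind(arm_a, arm_b)
    false_positives += p < 0.05

print(f"'significant' subgroups by chance alone: {false_positives} of {n_subgroups}")
```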

14 Disproving the null hypothesis. A Type II error is showing no difference where one actually exists; it is almost always due to insufficient numbers and can mask beneficial treatment effects. BUT if a trial is large enough it may produce a statistically significant effect whose clinical significance is marginal.

15 Size of study. Power is the ability of the study to show a difference in treatment (e.g. a 70% chance of demonstrating a 15% difference at p < 0.05), i.e. to disprove the null hypothesis with minimal or no Type II error. A pilot study may be needed to estimate the likely treatment differences. Large numbers are required if the differences are small or if there is great variability in treatment outcomes; lower power (smaller numbers) may be acceptable if the outcome is important (e.g. leukaemia).
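
A rough sketch of a power calculation for a two-arm trial comparing event rates, using the standard normal approximation; the event rates and sample sizes below are illustrative assumptions, not figures from the presentation.

```python
# Sketch: approximate power of a two-arm trial comparing two proportions,
# using the standard normal-approximation formula.
import math
from scipy.stats import norm

def approx_power(p1, p2, n_per_group, alpha=0.05):
    """Approximate power to detect a difference between proportions p1 and p2."""
    se = math.sqrt(p1 * (1 - p1) / n_per_group + p2 * (1 - p2) / n_per_group)
    z_alpha = norm.ppf(1 - alpha / 2)        # ~1.96 for a two-sided 5% level
    return norm.cdf(abs(p1 - p2) / se - z_alpha)

# Hypothetical example: 40% vs 25% event rate (a 15% absolute difference).
for n in (100, 150, 200):
    print(f"n = {n} per group -> power ~ {approx_power(0.40, 0.25, n):.0%}")
```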

16 Assessment of population size 15% of patients die within one year of admission to hospital for suspected myocardial infarction. Preventing 1/3rd of these deaths would be a major advance. Roughly, how many patients are needed for a clinical trial if doctors want to be 90% sure that a difference between treatments as large as the prevention of 1/3rd of deaths will not be missed at the p < 0.05 level?
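
One way to answer this, sketched with the usual normal-approximation sample-size formula for two proportions (mortality reduced from 15% to 10%, 90% power, two-sided p < 0.05). The exact answer depends on the method and assumptions used, but it comes out at roughly 900 patients per group, i.e. around 1,800 in total.

```python
# Sketch: approximate sample size per group for the question above.
import math
from scipy.stats import norm

p1, p2 = 0.15, 0.10        # control mortality vs mortality with 1/3 of deaths prevented
alpha, power = 0.05, 0.90

z_alpha = norm.ppf(1 - alpha / 2)   # ~1.96 for a two-sided 5% level
z_beta  = norm.ppf(power)           # ~1.28 for 90% power

n_per_group = ((z_alpha + z_beta) ** 2 *
               (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2) ** 2)

print(f"~{math.ceil(n_per_group)} patients per group "
      f"(~{2 * math.ceil(n_per_group)} in total)")
```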

17 Presentation of results. Significance: clinical versus statistical; p values; 95% confidence intervals (approximately +/- 2 SE); risk reduction (relative and absolute); numbers needed to treat; odds ratios.

18 Measures of risk reduction. Relative risk reduction: is it meaningful? Headline "50% reduction in mortality": if normal mortality is 50/100 this is a great result (mortality falls by 25/100); if normal mortality is 1/100, it falls only to 1 in 200. Number needed to treat is a better measure: the reciprocal of the absolute risk reduction, i.e. 4 in the first case (25/100) and 200 in the second (0.5/100). If the cost of treatment is £10,000 ......... !!

19 Number needed to treat. Control event rate is 9 cases in 30 (CER = 0.30); experimental event rate is 1 case in 29 (EER = 0.034). Then NNT = 1/(CER - EER) = 1/(0.30 - 0.034) = 3.8, i.e. about 4. This measure captures both relative and absolute risk because it is anchored to the control event rate.
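
The same calculation as a minimal sketch, using the figures on this slide:

```python
# Number needed to treat from control and experimental event rates (slide figures).
control_event_rate      = 9 / 30    # CER = 0.30
experimental_event_rate = 1 / 29    # EER ~ 0.034

absolute_risk_reduction = control_event_rate - experimental_event_rate
nnt = 1 / absolute_risk_reduction

print(f"ARR = {absolute_risk_reduction:.3f}, NNT = {nnt:.1f} (about 4)")
```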

20 Number needed to treat. Diabetic neuropathy, 6.5-year prospective trial: 9.6% developed DN on conventional treatment, 2.8% on intensive treatment. Relative risk reduction = (9.6 - 2.8)/9.6 = 71%. Absolute risk reduction = 9.6 - 2.8 = 6.8%. Number needed to treat = 1/0.068, i.e. about 15 people treated for 6.5 years to prevent one case of DN.

21 Odds ratios. ORs are used where it is difficult to calculate the relative risk, e.g. in case-control studies. A value greater than 1 indicates increased risk. 95% confidence intervals give the overall picture (e.g. if the CI crosses 1 then the result may not be significant).

22 Odds ratio calculation. Calculated as the odds of the event in the experimental group divided by the odds in the control group: using the figures above, (1/28) divided by (9/21) = 0.08. The relationship between OR and NNT is not linear and is very confusing ... even to statisticians!
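
A sketch of the same calculation, with an approximate 95% confidence interval for the odds ratio added (the log/Woolf method, which is an addition here rather than something shown on the slides):

```python
# Odds ratio from the 2x2 table built from the earlier NNT example.
import math

events_ctl, no_events_ctl = 9, 21    # control group: 9 events out of 30 patients
events_exp, no_events_exp = 1, 28    # experimental group: 1 event out of 29 patients

odds_ratio = (events_exp / no_events_exp) / (events_ctl / no_events_ctl)

# Approximate 95% CI on the log-odds scale (Woolf method).
se_log_or = math.sqrt(1 / events_exp + 1 / no_events_exp +
                      1 / events_ctl + 1 / no_events_ctl)
ci_low  = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f}, 95% CI {ci_low:.2f} to {ci_high:.2f}")
# The interval does not cross 1, which is what the previous slide means by
# the CI giving the overall picture.
```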

23 Evidence based medicine The process of systematically finding, appraising and using contemporaneous research findings as a basis for clinical decisions

24 Evidence based medicine. Accurate identification of the clinical question to be investigated; a search of the literature to select relevant articles; evaluation of the evidence; implementation of the findings into clinical practice.

