Introduction to Critical Appraisal January 9, 2006 Carin Gouws.


1 Introduction to Critical Appraisal January 9, 2006 Carin Gouws

2 Objectives What is evidence-based clinical practice? A hierarchy of evidence Review of study designs and concepts Review of criteria for a good randomized controlled trial (RCT) Review of criteria for a good systematic review and meta-analysis

3 Evidence-Based Clinical Practice Using scientific evidence to inform clinical decision-making, in conjunction with experience, a strong educational background, and consideration of the patient’s values and situation Evidence alone is never enough –Framing the case correctly (e.g. correct diagnosis) –Value judgments, weighing costs and benefits to individuals and stakeholders –Ethics Not all evidence is equal

4 A Hierarchy of Evidence Unsystematic clinical observation Physiologic studies (BP, CO, exercise capacity) Single observational study addressing patient-important outcomes Systematic review of observational studies addressing patient-important outcomes Single randomized controlled trial (RCT) Systematic reviews of RCTs N-of-1 RCT

5 Hierarchy within the Hierarchy Internal Validity The ability of the study results to support a cause-effect relationship between the treatment and the observed outcome External Validity The generalizability of the study results to patients outside the study

6 Steps to Evidence-Based Practice 1.Identify the question (yours and the study’s) –Who are the patients? –What is the intervention? –What is the outcome of interest? 2.Collect the data 3.Critically appraise the evidence 4.Integrate what we have learned into clinical practice

7 Research Design Elements Experimental vs. Observational Intervention controlled by the researcher or not Prospective vs. Retrospective Collecting data after the start of a study (forward) vs. analyzing data already collected (backward) Concurrent Control vs. Historical Control Comparing the outcome of a control group that participated in the study at the same time as the treatment group vs. comparing the treatment group with patient data that already exist from previous studies Dependent variables, independent variables, control groups, confounders, interactions, bias, surrogate endpoints, composite endpoints

8 Research Design Elements Superiority, Non-inferiority, Equivalence Statistical significance vs. Clinical significance Efficacy (Explanatory) trial vs. Effectiveness (Management) trial –Does it work under ideal circumstances vs. in the real world? –Positive explanatory trial? –Negative efficacy trial? –Positive management trial? –Negative effectiveness trial? [Diagram: difference scale centered at 0, with lower and upper equivalence thresholds]
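The superiority/non-inferiority/equivalence distinction on the slide's threshold diagram can be sketched as a simple decision rule. This is an illustrative sketch only: the function name, labels, and the convention that a positive treatment-minus-control difference favors treatment are all assumptions, and real trials pre-specify the margin and analysis in the protocol.

```python
def classify_trial(ci_low, ci_high, margin):
    """Classify a two-arm comparison from the 95% CI of the
    treatment-minus-control difference (higher = better).

    `margin` is the pre-specified positive equivalence/non-inferiority
    threshold; names and logic are illustrative, not a standard API.
    """
    if ci_low > 0:
        return "superior"                     # whole CI above zero
    if -margin < ci_low and ci_high < margin:
        return "equivalent"                   # CI inside both thresholds
    if ci_low > -margin:
        return "non-inferior"                 # CI clears the lower threshold only
    return "inconclusive or inferior"
```

Note that a trial can demonstrate non-inferiority without demonstrating superiority: the confidence interval may cross zero yet still exclude the lower threshold.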

9 Research Designs Theory Building –Descriptive Observational Case reports, case series, descriptive statistics, surveys, correlation studies, qualitative studies Hypothesis Testing –Experimental Random assignment, e.g. RCTs –Quasi-experimental Non-random assignment –Analytical Observational No intervention Cohort (incidence), case-control (exposure proportions; use the odds ratio, OR), cross-sectional (prevalence) Evidence Summaries –Systematic reviews, meta-analyses
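The odds ratio used in case-control studies comes straight from the 2×2 table of exposure by case status. A minimal sketch, with illustrative counts and Woolf's log-scale method for the confidence interval:

```python
from math import exp, log, sqrt

def odds_ratio(exposed_cases, unexposed_cases,
               exposed_controls, unexposed_controls, z=1.96):
    # Cross-product odds ratio from a 2x2 case-control table
    or_ = (exposed_cases * unexposed_controls) / (unexposed_cases * exposed_controls)
    # Woolf's method: standard error of log(OR), then 95% CI on the log scale
    se = sqrt(1/exposed_cases + 1/unexposed_cases
              + 1/exposed_controls + 1/unexposed_controls)
    ci = (exp(log(or_) - z * se), exp(log(or_) + z * se))
    return or_, ci
```

For example, 30 exposed among 100 cases vs. 20 exposed among 100 controls gives OR = (30×80)/(70×20) ≈ 1.71.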

10 Critical Appraisal Approach 1: A general approach that can be used for most types of studies Approach 2: Specific to RCTs assessing treatment outcome Approach 3: Specific to systematic reviews and meta-analyses –What is the difference between a systematic review and a meta-analysis?

11 Critical Appraisal 1: most studies 1.Clearly stated objectives & appropriate study design? 2.Is the study relevant – external validity? 3.Inclusions and exclusions described & justified? 4.Group allocation – control group? Random? 5.Procedures appropriately defined & applied? 6.Equal scrutiny of ALL groups (“blinding” & objective measures)? 7.Outcome measures & follow up adequate? 8.Biases / confounders dealt with?

12 Critical Appraisal 1: most studies 9.Statistical Analysis – appropriate tools? 10.Statistical Analysis – proper interpretation? 11.Sample Size or power described? “n” increases with SMALLER treatment effect, LARGER variation among individuals, MORE covariates, MORE groups 12.Clear and Complete Results? 13.Conclusions appropriate and complete?
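The claim that "n" grows as the treatment effect shrinks can be verified with the standard normal-approximation formula for comparing two proportions. An illustrative sketch only, not a full power analysis (no continuity correction, no dropout allowance):

```python
import math
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    # Per-group sample size for a two-sided test of two proportions
    # (normal approximation; illustrative inputs and defaults)
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # critical value for alpha
    z_b = NormalDist().inv_cdf(power)           # critical value for power
    p_bar = (p1 + p2) / 2
    numerator = (z_a * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_b * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p1 - p2) ** 2)
```

Detecting a drop from 50% to 40% needs roughly 388 patients per group at 80% power, while a drop from 50% to 30% needs only about 93 — the smaller the effect, the larger the trial.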

13 Critical Appraisal 2: RCT INTERNAL VALIDITY: Are the results valid? Were patients randomized? –Adequacy of the allocation sequence –Simple & restricted randomization ~ balanced prognostic factors Was randomization concealed? –Selection bias, confounding bias Were patients/clinicians/outcome assessors aware of group allocation? –“Blinding”: placebo effect, interviewer bias, bias of interpretation –Trials where blinding was inadequate demonstrate (on average) a 17% overestimation of treatment effect

14 Critical Appraisal 2: RCT INTERNAL VALIDITY: Are the results valid? Were patients in treatment and control groups similar? –Balance of known prognostic factors (e.g. age, gender) Were patients analyzed in the groups to which they were randomized? –Intention-to-treat analysis –More conservative, preserve balance of prognostic factors, minimize type I error, greater generalizability Was follow-up complete? –Techniques for missing data: exclusion, assume worst-case scenario, last outcome carried forward, growth curve analysis, hot-deck method, regression, multiple imputation methods
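Of the missing-data techniques the slide lists, last observation carried forward (LOCF) is the simplest to illustrate. A minimal sketch, assuming `None` marks a missed visit; note that LOCF can bias results and modern practice generally prefers multiple imputation:

```python
def locf(values):
    # Last observation carried forward: fill each missing visit (None)
    # with the most recent observed value. Illustrative sketch only.
    filled, last = [], None
    for v in values:
        if v is not None:
            last = v
        filled.append(last)
    return filled
```

For a blood-pressure series with two missed follow-ups, `locf([120, None, 118, None, None])` yields `[120, 120, 118, 118, 118]`.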

15 Critical Appraisal 2: RCT What are the results? How large was the treatment effect? How precise was the estimate of the treatment effect? EXTERNAL VALIDITY: How can I apply the results to patient care? Were the study patients similar to the patients in my practice? Were all clinically important outcomes considered? Are the likely treatment benefits worth the potential harm and costs?
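"How large" and "how precise" map onto standard effect measures: relative risk, absolute risk reduction, number needed to treat, and a confidence interval. A sketch with illustrative event counts (function and variable names are assumptions):

```python
from math import exp, log, sqrt

def treatment_effect(events_t, n_t, events_c, n_c, z=1.96):
    # Effect size and precision for a two-arm trial with binary outcomes
    risk_t, risk_c = events_t / n_t, events_c / n_c
    rr = risk_t / risk_c              # relative risk (size of effect)
    arr = risk_c - risk_t             # absolute risk reduction
    nnt = 1 / arr                     # number needed to treat
    # 95% CI for RR, computed on the log scale (precision of effect)
    se = sqrt((1 - risk_t) / events_t + (1 - risk_c) / events_c)
    ci = (exp(log(rr) - z * se), exp(log(rr) + z * se))
    return rr, arr, nnt, ci
```

With 30/100 events on treatment vs. 40/100 on control, RR = 0.75, ARR = 0.10, and NNT = 10; the CI then tells you how much chance could explain.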

16 Critical Appraisal 2: RCT ADDITIONAL ELEMENTS Subgroup Analysis (Interactions) –Is there indirect evidence supporting the hypothesized interaction? –Did the hypothesis precede the analysis? –How many comparisons were made? –Statistically significant effect given appropriate analysis (one test for both groups)? –Magnitude of the effect Large differences may arise by chance alone, especially with a small sample size, so look at the precision (confidence intervals) –Is the interaction consistent across studies?

17 Critical Appraisal 2: RCT ADDITIONAL ELEMENTS Surrogate Endpoints –Valid if established causal relationship between surrogate and patient-important outcome Composite Endpoints –Are the component endpoints of similar patient importance? –Do the more and less important endpoints occur with similar frequency? –Is the underlying biology for each outcome similar enough that comparable risk reductions are expected? –Are the point estimates of each component similar and are the confidence intervals narrow?

18 Critical Appraisal 3: Syst. Rev. A priori question/hypothesis formation Appropriate/relevant eligibility criteria Complete search of the literature Selection of relevant studies and appropriate resolution of disagreement about inclusion Assessment of publication bias Quality assessment of included studies and appropriate resolution of disagreement between assessors –Blinding was the only criterion with a significant influence on effect size (overestimation by 25%) – not concealment of randomization or loss to follow-up –Concealment of randomization may be a greater threat to internal validity in placebo-controlled trials –Blinding may be a greater threat to internal validity in trials using subjective outcomes –Completeness of follow-up may be a greater threat to internal validity when there is a higher expected rate of loss to follow-up or when loss is related to the treatment

19 Critical Appraisal 3: Syst. Rev. Blinded extraction of the data, agreement between independent assessors Analysis and presentation of results –Original patient data vs. summary data –Pooling the data: Fixed-Effects and Random-Effects Models –Choice of summary measure (RR, OR, etc.) –Forest plot –Sensitivity analysis Examination of the robustness of the statistical findings How does the estimate of effect change? Differences in Study Results: Heterogeneity and appropriateness of pooling Subgroup Analysis
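The fixed-effects pooling the slide mentions is, at its core, an inverse-variance weighted average: more precise studies (smaller variance) get more weight. A minimal sketch, assuming each study contributes an effect on the log scale (e.g. a log odds ratio) with its variance; the numbers in the test are illustrative, not from any real review:

```python
def fixed_effect_pool(studies):
    # Inverse-variance fixed-effect model.
    # `studies` is a list of (log_effect, variance) pairs, e.g. log odds
    # ratios with their variances from individual trials.
    weights = [1.0 / var for _, var in studies]
    pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
    pooled_var = 1.0 / sum(weights)   # pooled estimate is more precise than any single study
    return pooled, pooled_var
```

A random-effects model differs by adding a between-study variance component to each weight, widening the interval when the studies are heterogeneous — which is why heterogeneity and the appropriateness of pooling must be examined first.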

20 Summary There is always evidence: –Critical appraisal: given the estimated treatment effect, the quality of the evidence and the direction of influence of biases need to be considered in assessing what the true treatment effect might be Evidence alone is never enough: –When applying results, clinical judgment, patient values and the correct clinical context are important

21 Preparation for Next Session February meeting: workshop in critical appraisal Materials: I will select 3-4 studies on relevant topics and make them available to you in advance Please read and analyze them according to the most suitable approach or using your own criteria (slides will be available in the resource centre) Be ready to discuss the studies

22 Conclusion Questions? Feedback? Further Reading: Guyatt, G., and Rennie, D. Users’ Guides to the Medical Literature: A Manual for Evidence-Based Clinical Practice. USA: AMA Press, 2002. (originally published as a series in JAMA) Sackett, D.L., Haynes, R.B., and Tugwell, P. Clinical Epidemiology: A Basic Science for Clinical Medicine. 2nd ed. Boston: Little, Brown & Co., 1991.

