Presentation on theme: "Critical appraisal of (Systematic review) Meta-analysis"— Presentation transcript:

1 Critical appraisal of (Systematic review) Meta-analysis
羅政勤, Show Chwan Memorial Hospital, Changhua

2 Objectives
To understand the different terminology: meta-analysis vs. systematic review
To understand the key criteria for critical appraisal
To select an appropriate checklist or other instrument to use for critical appraisal: Validity, Impact, Practicability (CASP)

3 Terminology
Review: synthesises the results and conclusions of two or more publications
Overview (systematic literature review): a review that strives to comprehensively identify and track down all literature on a given topic
Meta-analysis: a specific statistical strategy for assembling the results of several studies into a single estimate

4 Introduction Systematic reviews form a potential method for overcoming the barriers faced by clinicians when trying to access and interpret evidence to inform their practice

5 Systematic reviews
Concise summaries of the best available evidence that address defined questions
A scientific tool used to appraise, summarise, and communicate the results and implications of otherwise unmanageable quantities of research

6 Systematic reviews: defining a question
A good question will have four components:
Type of person involved
Type of exposure
Type of control
Outcomes

7 Question for systematic review
Poor: Do vitamins prevent cancer?
Good: Does taking a daily multivitamin prevent occurrence of lung cancer in smokers, compared with no multivitamin?

8 Systematic reviews
Develop an answerable question
Make a comprehensive search for appropriate sources
Select the sources (inclusion/exclusion criteria)
Critically appraise the sources
Synthesise the evidence
Draw appropriate inferences

9 Evidence-based practice

10 SR and meta-analysis
Systematic reviews may or may not include a statistical synthesis called meta-analysis, depending on whether the studies are similar enough for combining their results to be meaningful.

11 Meta-analysis
A statistical method for combining the results of trials into a single estimate (a minimal worked sketch follows)
Most appropriate for randomized trials
May also be appropriate for observational studies
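To make "combining the results of trials into a single estimate" concrete, here is a minimal Python sketch of fixed-effect (inverse-variance) pooling. It assumes numpy and scipy are available; the per-trial log odds ratios and standard errors are hypothetical values chosen only for illustration, not data from any study mentioned in this presentation.

# Minimal fixed-effect (inverse-variance) meta-analysis sketch.
# The log odds ratios and standard errors below are hypothetical.
import numpy as np
from scipy import stats

log_or = np.array([-0.30, -0.15, -0.45, -0.20])   # per-trial log odds ratios
se = np.array([0.20, 0.25, 0.30, 0.15])           # their standard errors

weights = 1.0 / se**2                              # inverse-variance weights
pooled = np.sum(weights * log_or) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))

ci_low, ci_high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
z = pooled / pooled_se
p = 2 * stats.norm.sf(abs(z))
print(f"Pooled OR = {np.exp(pooled):.2f} "
      f"(95% CI {np.exp(ci_low):.2f} to {np.exp(ci_high):.2f}), p = {p:.3f}")

When the trials are not homogeneous, a random-effects model (e.g. DerSimonian–Laird) adds a between-study variance component to these weights; the fixed-effect version is shown only because it is the simplest case.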

12 Results of a meta-analysis
Forest plots of a meta-analysis of four randomized trials (2, 5–7) comparing no adjuvant chemotherapy with adjuvant chemotherapy in early-stage ovarian cancer, for overall survival (A) and recurrence-free survival (B). The position of each square indicates the hazard ratio, and the area of the square is proportional to the variance of the estimated effect. The length of the horizontal line through the square indicates the 99% confidence interval (CI), and the inner tick marks indicate the 95% CI. An arrow at the end of a horizontal line indicates that the 99% CI extends beyond the scale of the figure. The diamond indicates the hazard ratio (middle of the diamond) and the 95% CI (extremes of the diamond) for the combined data from the four randomized trials. Linear trend and heterogeneity of the hazard ratios were assessed by a χ² test for trend and a χ² test for heterogeneity, respectively; degrees of freedom for each χ² test are given in parentheses. The hazard ratios for overall survival (95% CI upper limit 0.942; trend χ²(1) = 5.740, P = .017; heterogeneity χ² = 1.474, P = .688) and for recurrence-free survival (95% CI upper limit 0.831; trend P < .001; heterogeneity χ² = 2.101, P = .552) were both in favor of adjuvant chemotherapy. O−E = number of events observed minus number of events expected under the null hypothesis. Variance = variance of the logarithm of the hazard ratio. Source: JNCI Cancer Spectrum 95(2).
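The sketch below shows one way a forest plot of this kind could be drawn with matplotlib; the trial names, hazard ratios, and confidence intervals are hypothetical placeholders, not the values from the figure described above.

# Sketch of a forest plot for hypothetical hazard ratios (log scale).
import numpy as np
import matplotlib.pyplot as plt

trials = ["Trial A", "Trial B", "Trial C", "Trial D", "Combined"]
hr = np.array([0.75, 0.82, 0.68, 0.90, 0.78])      # hypothetical hazard ratios
ci_lo = np.array([0.55, 0.60, 0.45, 0.65, 0.66])   # hypothetical lower 95% CI limits
ci_hi = np.array([1.02, 1.12, 1.03, 1.25, 0.92])   # hypothetical upper 95% CI limits

y = np.arange(len(trials))[::-1]                   # plot top to bottom
err = np.vstack([hr - ci_lo, ci_hi - hr])          # asymmetric error bars

fig, ax = plt.subplots()
ax.errorbar(hr, y, xerr=err, fmt="s", color="black", capsize=3)
ax.axvline(1.0, linestyle="--", color="grey")      # line of no effect
ax.set_xscale("log")
ax.set_yticks(y)
ax.set_yticklabels(trials)
ax.set_xlabel("Hazard ratio (95% CI)")
plt.tight_layout()
plt.show()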

13 Advantages of meta-analysis
Allows pooling of several studies, increasing the effective sample size
Gathers the literature in one place
Provides a quantitative summary (possibly less biased than a narrative review)
Generates hypotheses
Provides information for future trials

14 Disadvantages of meta-analysis
Even randomized studies often differ significantly in their design, outcome, and exposure measures
Publication bias (a funnel-plot sketch follows this list)
Studies differ in quality
Time trends
Health studies tend to be (comparatively) few
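Publication bias, mentioned in the list above, is often explored with a funnel plot of each study's effect estimate against its standard error; marked asymmetry suggests (but does not prove) that small unfavourable studies may be missing. A rough sketch with hypothetical data, assuming numpy and matplotlib, follows.

# Funnel plot sketch for publication bias (hypothetical study data).
import numpy as np
import matplotlib.pyplot as plt

log_or = np.array([-0.35, -0.10, -0.50, -0.22, -0.05, -0.60])  # hypothetical effects
se = np.array([0.10, 0.15, 0.30, 0.20, 0.12, 0.35])            # hypothetical standard errors

fig, ax = plt.subplots()
ax.scatter(log_or, se)
ax.invert_yaxis()                                  # most precise studies at the top
ax.axvline(np.average(log_or, weights=1/se**2), linestyle="--", color="grey")
ax.set_xlabel("log odds ratio")
ax.set_ylabel("standard error")
ax.set_title("Funnel plot: asymmetry may indicate publication bias")
plt.show()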

15 Interpreting the results of a meta-analysis
Was the process valid (question, search strategy, reproducibility)?
Are the studies comparable?
Are the results similar?
What is the estimate, and how precise is it?

16 Conclusion
Systematic reviews: top of the hierarchy of evidence
Caution before accepting the findings of any systematic review without first appraising it
Systematic reviews appear at the top of the hierarchy of evidence. This reflects the fact that, when rigorously conducted, they should give us the best possible estimate of any true effect. However, caution must be exercised before accepting the findings of any systematic review without first appraising it. Like any piece of research, a systematic review may be done poorly; not all systematic reviews are rigorous and unbiased. Little attention may have been paid to the intervention, the patient selection group, or the search strategy; or the review may have combined studies in a meta-analysis that should not have been pooled because they differ in the interventions used or the participants included. It is therefore important that users of systematic reviews become familiar with the steps involved.

17 Caution
Little attention may have been paid to the patient selection group, intervention, or search strategy; or the systematic review may have pooled studies in a meta-analysis that differ in the interventions used or the participants included.

18 Three reasons a finding may not be valid
1) Chance
2) Bias
3) Confounding

19 Chance
Chance = random variation.
It is addressed by statistical analysis (hypothesis testing and estimation).
Random variation is reduced by an adequate sample size (illustrated in the simulation below).
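As a rough illustration of why an adequate sample size matters, the simulation below uses a hypothetical true event rate of 30% and shows how the range of estimates produced purely by chance narrows as the sample size grows.

# Simulation: random variation shrinks as sample size increases.
import numpy as np

rng = np.random.default_rng(0)
true_rate = 0.30                                   # hypothetical true event rate

for n in (20, 200, 2000):
    estimates = rng.binomial(n, true_rate, size=5000) / n
    lo, hi = np.percentile(estimates, [2.5, 97.5])
    print(f"n = {n:4d}: 95% of chance estimates fall between {lo:.3f} and {hi:.3f}")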

20 Bias
Systematic (non-random) error in the estimation of a population characteristic, e.g. the effect of a treatment compared with a control in a population.
Systematic means the error is not due to chance, so it does not diminish as the sample size increases.

21 Classification of sources of bias in analytical studies
Allocation
Performance
Placebo-effect
Attrition
Detection
Analytical
Reporting
Broad stages at which bias arises: selection, measurement, analysis

22 1. Allocation bias
Any treatment allocation method that causes a systematic difference in participant characteristics at the start of the trial (baseline), i.e. in independent prognostic characteristics (confounders):
failure to plan, e.g. confounding by indication
failure to execute

23 2. Performance bias
Systematic differences in the care of the two groups, other than the intervention being investigated:
nursing and supportive care
monitoring for adverse effects

24 3. Placebo-effect bias
Placebo effect: a beneficial effect gained because the participant believes he or she is receiving effective therapy (this includes a satisfying patient-doctor relationship as well as the medicinal intervention).
In trials with a "no-treatment" arm, confounding due to a differential placebo effect may occur if subjects are aware they are not receiving active therapy.

25 Reasons for bias: confounding
Confounding occurs when a non-causal association, due to a common cause of both T and H, prevents us from quantifying any causal association.

26 Confounding – measured & unmeasured common causes
Random variation (chance) → imprecise estimates
Systematic variation (bias) → inaccurate estimates
Confounder: a factor prognostically linked to the outcome and unevenly distributed between study groups
Known confounders: stratify the results (see the simulation sketch below)
Unknown confounders: randomisation
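As a rough illustration (with entirely hypothetical numbers), the simulation below creates a confounder that is linked both to who gets treated and to the outcome. The treatment has no real effect, yet the crude risk ratio is biased away from 1; stratifying by the confounder recovers risk ratios close to 1 in each stratum.

# Simulation: a confounder (here labelled "smoker") distorts the crude comparison;
# stratifying by it removes the spurious association.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
smoker = rng.random(n) < 0.4                            # 40% smokers (hypothetical)
treated = rng.random(n) < np.where(smoker, 0.7, 0.3)    # smokers more likely to be treated
event = rng.random(n) < np.where(smoker, 0.20, 0.05)    # outcome depends on smoking only

def risk(mask):
    return event[mask].mean()

print(f"Crude risk ratio: {risk(treated) / risk(~treated):.2f}")        # well above 1
for s in (True, False):
    stratum = (smoker == s)
    rr = risk(stratum & treated) / risk(stratum & ~treated)
    print(f"Stratum smoker={s}: risk ratio = {rr:.2f}")                 # close to 1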

27 Confounding – measured & unmeasured common causes
Figure: non-causal association between drug and cancer arising from common causes (smoking, supportive care, placebo effect).

28 4. Attrition bias
All clinical trials have a period of follow-up; attrition occurs when subjects do not complete the follow-up process (loss to follow-up).
This is harmful because attrition causes loss of information, and hence less precise estimates of the treatment effect, if too many subjects cannot be analyzed.
Systematic differences between groups in the loss of participants to follow-up may cause bias if the analysis is improper, e.g. analyzing only participants who had complete follow-up or who were fully compliant (per-protocol analysis).

29 5. Detection bias
Systematic differences in outcome assessment between groups:
measurement method
follow-up frequency for outcomes

30 6. Analytical bias
Bias arising because of the method of analysis:
choice of subjects to analyze (the analysis dataset)
choice of statistical estimators (biased vs. unbiased estimators)
choice of multivariate models

31 7. Reporting bias
Selective reporting of:
clinical outcomes, e.g. surrogate endpoints or subgroups
time-points, e.g. early time-points
Use of composite endpoints whose component events are not equally significant

32 What is appraisal?
A technique to increase the effectiveness of reading by excluding research studies too poorly designed to inform practice.

33 Why appraisal?
To free time to concentrate on a more systematic evaluation of the studies that cross the quality threshold, and to extract their salient points.

34 How to appraise?
Appraising a secondary study (review):
Validity
Impact (results)
Practicability (application)
Instruments and tools such as CASP

35 Critical Appraisal Skills Programme (CASP)

36 Appraisal tools for systematic reviews
10 questions to help you make sense of reviews:
Is the study valid?
What are the results?
Will the results help locally?
The 10 questions are adapted from: Oxman AD, Cook DJ, Guyatt GH. Users' guides to the medical literature. VI. How to use an overview. JAMA 1994;272(17).

37 Screening questions
The first two questions are screening questions and can be answered quickly. If the answer to both is "yes", it is worth proceeding with the remaining questions.

38 Screening questions
1. Did the review ask a clearly focused question? (Yes / Can't tell / No)
Consider whether the question is focused in terms of:
– the population studied
– the intervention given or exposure
– the outcomes considered
2. Did the review include the right type of study? (Yes / Can't tell / No)
Consider whether the included studies:
– address the review's question
– have an appropriate study design
Is it worth continuing?

39 3. Did the reviewers try to identify all relevant studies? (Yes / Can't tell / No)
Consider:
– which bibliographic databases were used
– whether there was follow-up from reference lists
– whether there was personal contact with experts
– whether the reviewers searched for unpublished studies
– whether the reviewers searched for non-English-language studies
4. Did the reviewers assess the quality of the included studies? (Yes / Can't tell / No)
Consider whether a clear, pre-determined strategy was used to determine which studies were included. Look for:
– a scoring system
– more than one assessor

40 5. If the results of the studies have been combined, was it reasonable to do so?
Consider whether:
– the results of each study are clearly displayed
– the results were similar from study to study (look for tests of heterogeneity; a worked sketch follows this slide)
– the reasons for any variations in results are discussed
6. How are the results presented and what is the main result?
Consider:
– how the results are expressed (e.g. odds ratio, relative risk, etc.)
– how large the result is and how meaningful it is
– how you would sum up the bottom-line result of the review in one sentence
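Question 5's "look for tests of heterogeneity" usually means Cochran's Q test and the I² statistic. The sketch below computes both for hypothetical study-level log risk ratios, assuming numpy and scipy are available; it is illustrative only.

# Cochran's Q and I-squared for between-study heterogeneity (hypothetical data).
import numpy as np
from scipy import stats

log_rr = np.array([-0.25, -0.40, -0.10, -0.30])    # hypothetical per-study log risk ratios
se = np.array([0.15, 0.20, 0.18, 0.25])            # hypothetical standard errors

w = 1.0 / se**2
pooled = np.sum(w * log_rr) / np.sum(w)            # fixed-effect pooled estimate
Q = np.sum(w * (log_rr - pooled)**2)               # Cochran's Q statistic
df = len(log_rr) - 1
p_het = stats.chi2.sf(Q, df)
I2 = max(0.0, (Q - df) / Q) * 100                  # I-squared as a percentage

print(f"Q = {Q:.2f} (df = {df}), p = {p_het:.3f}, I^2 = {I2:.0f}%")

A non-significant Q with a low I² supports pooling; a substantial I² suggests the studies differ more than chance alone would explain, and pooling (or at least a fixed-effect model) may not be reasonable.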

41 7. How precise are these results?
Consider:
– whether a confidence interval was reported. Would your decision about whether or not to use this intervention be the same at the upper confidence limit as at the lower confidence limit?
– whether a p-value is reported where confidence intervals are unavailable

42 8. Can the results be applied to the local population? (Yes / Can't tell / No)
Consider whether:
– the population sample covered by the review could be different from your population in ways that would produce different results
– your local setting differs much from that of the review
– you can provide the same intervention in your setting
9. Were all important outcomes considered? (Yes / Can't tell / No)
Consider outcomes from the point of view of the:
– individual
– policy makers and professionals
– family/carers
– wider community

43 10. Should policy or practice change as a result of the evidence contained in this review? (Yes / Can't tell / No)
Consider:
– whether any benefit reported outweighs any harm and/or cost. If this information is not reported, can it be filled in from elsewhere?

