
1 Instructor Resource Chapter 19 Copyright © Scott B. Patten, 2015. Permission granted for classroom use with Epidemiology for Canadian Students: Principles, Methods & Critical Appraisal (Edmonton: Brush Education Inc. www.brusheducation.ca).

2 Chapter 19. Steps in critical appraisal

3 Objectives Describe the goals of critical appraisal. List steps for critical appraisal, and describe their hierarchy. Explain critical appraisal as a skill honed with practice, feedback, and knowledge.

4 The goals of critical appraisal Critical appraisal is a necessary skill for scientists, clinicians, and public health officials. It reflects the reality that authors’ conclusions must be examined critically—they cannot be accepted at face value. Research is a conduit for information and knowledge. Critical appraisal gauges the success of research in achieving this goal. Critical appraisal pervades every aspect of research from proposals to peer-reviewed publications.

5 Critical appraisal (continued) The default position in critical appraisal is, of course, a critical stance. Results are not accepted as true because investigators assert that they are true. It is better to assume that results are false, and to only accept them if vulnerabilities to random error, systematic error, and confounding cannot be uncovered. It is helpful to have an organized approach to critical appraisal.

6 Critical appraisal step by step
Step 1: Identify the study’s research question or hypothesis.
Step 2: Identify the exposure and disease variables.
Step 3: Identify the study design.
Step 4: Assess for selection bias.
Step 5: Assess for misclassification bias.
Step 6: Assess for confounding and effect measure modification.
Step 7: Assess the role of chance.
Step 8: Address causality.
Step 9: Assess generalizability.
Step 10: Report the critical appraisal.

7 Identifying the study’s research question or hypothesis A study’s topic of inquiry should be clearly stated in a form that aligns with an estimate or comparison. It should be:
in the form of a hypothesis (e.g., that an exposure is significantly associated with a disease, or that a treatment leads to better outcomes than placebo treatment), or
in the form of a question (e.g., Does this exposure increase the risk of this disease? Does this treatment lead to a better outcome than treatment with placebo?)

8 Identifying the study’s research question or hypothesis (continued) Some studies are poorly conceptualized. Investigators may not express goals in ways that fit the needs of critical appraisal. For example, they may state that their goal was to “explore” or “look at” an issue. They may then present analyses, and attempt to form conclusions, based on interpreting these estimates and models. In this situation, it is best to set aside the authors’ narrative and focus on a particular reported estimate, allowing appraisal to occur. In essence, this converts the authors’ vagueness into real methodological issues: vulnerability to type I error or other problems that emerge with the selective reporting of exploratory or descriptive results.

9 Identify the exposure and disease variables You can easily get off track in critical appraisal without a clear sense of the exposure and disease of interest. A precise focus on specific exposure-disease associations allows evaluation of issues such as measurement accuracy, or attrition related to disease and exposure status.

10 Identify the study design This is usually easy to do based on: the unit of analysis (individual or aggregate) the timing of measurement (prospective, retrospective) the direction of logical inquiry (forward, backward) If the study is interventional, the method of assignment of exposure is also key.

11 Identify the study design (continued) Usually, the authors will have already classified the study design and made a statement about what sort of study has been conducted. However, a little skepticism makes sense here, because authors may use terminology incorrectly or choose terminology that they think will put a positive spin on their results.

12 Identify the study design (continued) The advantage of identifying the study design is that each design has strengths and weaknesses. Identifying the design helps to organize critical thinking, pointing towards potential pitfalls. For example, determining that a study is cross-sectional flags likely difficulties in determining temporality—an issue with implications for causal reasoning. However, there are no absolute rules about what designs are better or worse overall. A poorly conducted study of any design (even a randomized controlled trial) may give erroneous results.

13 Red flags with various designs
Study design: Red flags
Ecological studies: ecological fallacy; difficulty detecting and controlling confounding
Cross-sectional studies: lack of temporal clarity
Case-control studies: recall bias (differential misclassification of exposure); selection bias
Prospective cohort studies: bias due to attrition (selection bias); differential misclassification of outcome
Randomized controlled trials: diminished generalizability

14 Assessment of selection bias Why assess selection bias first? Systematic error has priority over random error, because efforts to handle random error (e.g., P values, confidence intervals) are pointless in the face of systematic error. Also, selection occurs before measurement.

15 Assessment of selection bias (continued) To assess selection bias, you need to conceptualize the selection probabilities and apply the rules that determine when selection bias will occur. With selection probabilities, it is important to think broadly. Selection probabilities are influenced by more than just sampling. They are also influenced by factors such as provision of consent, provision of information, and attrition. Anything that affects participation must be considered.

16 Assessment of selection bias (continued) Selection probabilities, however, are almost never known—not to critical appraisers, and generally not to researchers either. Therefore, the assessment of selection bias involves a qualitative assessment of the likely values of selection probabilities.

17 Assessment of selection bias (continued) In a study that is estimating a frequency such as prevalence, the rules governing the assessment of selection bias are simple. If the selection probability depends on the attribute whose frequency is being assessed, bias will occur.
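To make this rule concrete, here is a minimal numeric sketch in Python (the population size and selection probabilities are invented for illustration):

```python
# Hypothetical population: 200 diseased and 800 non-diseased people,
# so the true prevalence is 0.20. Suppose diseased people are more
# likely to participate (selection probability 0.8 vs 0.4).
diseased, healthy = 200, 800
p_sel_diseased, p_sel_healthy = 0.8, 0.4

sampled_diseased = diseased * p_sel_diseased  # 160 expected participants
sampled_healthy = healthy * p_sel_healthy     # 320 expected participants

true_prev = diseased / (diseased + healthy)
observed_prev = sampled_diseased / (sampled_diseased + sampled_healthy)

print(f"true prevalence:     {true_prev:.2f}")      # 0.20
print(f"observed prevalence: {observed_prev:.2f}")  # 0.33, biased upward
```

Because people with the disease are more likely to participate, the sample over-represents them and the prevalence estimate is pushed upward.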

18 Assessment of selection bias (continued) The rules are more complex when the targets of estimation are parameters such as odds ratios or risk differences. Here, selection probabilities often depend explicitly on disease (e.g., in a case-control study) or on exposure (e.g., in many prospective cohort studies). This in itself does not cause selection bias.

19 Assessment of selection bias (continued) Selection bias will occur if the probability of selection of cases and controls—in a case-control study, for example—depends on exposure in some way that differs between the cases and controls. Similarly, in a prospective cohort study, attrition may introduce bias if it depends on disease in a way that differs between those with and without exposure.
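A sketch of the case-control version of this rule, with invented counts, shows both points: selection that depends on disease status alone rescales whole rows of the 2×2 table and leaves the odds ratio intact, whereas selection that depends on exposure differently in cases than in controls distorts it:

```python
# Source-population 2x2 table (invented counts):
#            exposed  unexposed
# cases          100        100
# controls       400        800
def odds_ratio(a, b, c, d):
    """OR from a 2x2 table: a, b = exposed/unexposed cases;
    c, d = exposed/unexposed controls."""
    return (a / b) / (c / d)

print(odds_ratio(100, 100, 400, 800))  # 2.0: the true OR

# Selection depending on disease only (90% of cases, 10% of controls)
# rescales whole rows and leaves the OR unchanged:
print(odds_ratio(100 * 0.9, 100 * 0.9, 400 * 0.1, 800 * 0.1))  # 2.0

# Selection depending on exposure differently in cases (exposed cases
# oversampled at 0.9; every other cell sampled at 0.5) biases the OR:
print(odds_ratio(100 * 0.9, 100 * 0.5, 400 * 0.5, 800 * 0.5))  # 3.6
```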

20 Schematic of an unbiased case-control study

21 Schematic of a biased case-control study* * Selection probability is higher in exposed cases.

22 Schematic of an unbiased prospective cohort study* * This is a special purpose cohort with oversampling of an exposed group.

23 Schematic of a biased prospective cohort study* * Attrition occurs at a greater rate among the exposed than nonexposed cases of incident disease.

24 Schematic of a biased prospective cohort study* * Attrition occurs at a greater rate among the exposed than nonexposed cases of incident disease. Here, the measure of association is a risk difference.

25 Two parts to bias assessment Assessment of bias (including selection bias) should go beyond just saying whether bias may have occurred. It should specify: the direction of bias the magnitude of bias

26 Assessment of misclassification bias In a study estimating a frequency, this is a straightforward issue: Insensitivity of the measure used to classify the attribute will lead to false negatives, deflating the numerator and leading to underestimation of the frequency. A lack of specificity in such a measure will lead to false positives, inflating the numerator and leading to overestimation.
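A rough numeric sketch makes both tendencies visible (the true prevalence, sensitivity, and specificity values are invented); the expected observed prevalence is simply true positives plus false positives:

```python
# Expected observed prevalence from an imperfect measure, given true
# prevalence p, sensitivity se, and specificity sp (values invented):
def observed_prevalence(p, se, sp):
    # true positives + false positives, as proportions of the population
    return se * p + (1 - sp) * (1 - p)

p = 0.10  # true prevalence
print(observed_prevalence(p, se=0.70, sp=1.00))  # 0.070: insensitivity deflates
print(observed_prevalence(p, se=1.00, sp=0.95))  # 0.145: non-specificity inflates
print(observed_prevalence(p, se=0.70, sp=0.95))  # 0.115: both effects combined
```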

27 Assessment of misclassification bias (continued) In a study estimating parameters such as odds ratios, risk ratios, or risk differences, the situation is more complex. The critical appraiser must decide whether there is misclassification and whether it is differential or nondifferential.

28 Schematic for a case-control study without misclassification

29 Schematic depicting recall bias* * Differential misclassification of exposure, with lower sensitivity in controls.
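To see why the differential/nondifferential distinction matters, here is a sketch with invented counts and accuracy figures: nondifferential misclassification of exposure tends to bias the odds ratio toward the null, while the differential pattern of recall bias can bias it in either direction (here, away from the null).

```python
# Expected effect of exposure misclassification on a case-control OR.
# All counts and accuracy figures are invented for illustration.
def observed_cells(exposed, unexposed, se, sp):
    """Expected (exposed, unexposed) counts after misclassification."""
    obs_exposed = se * exposed + (1 - sp) * unexposed
    return obs_exposed, (exposed + unexposed) - obs_exposed

def odds_ratio(a, b, c, d):
    return (a / b) / (c / d)

# Truth: cases 100 exposed / 100 unexposed; controls 50 / 150.
# True OR = (100/100) / (50/150) = 3.0.

# Nondifferential: the same se/sp in cases and controls.
a, b = observed_cells(100, 100, se=0.9, sp=0.9)
c, d = observed_cells(50, 150, se=0.9, sp=0.9)
print(odds_ratio(a, b, c, d))  # ~2.3: biased toward the null

# Recall bias (differential): lower sensitivity in controls.
a, b = observed_cells(100, 100, se=0.9, sp=0.9)  # cases recall well
c, d = observed_cells(50, 150, se=0.6, sp=0.9)   # controls under-report
print(odds_ratio(a, b, c, d))  # ~3.4: biased away from the null
```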

30 Assessment of confounding and effect measure modification Effect measure modification should be assessed first. A study would normally do this with a test for heterogeneity or with an interaction term in a model. If effect measure modification has not been assessed, then the study’s estimates may be meaningless averages of the effect in people with and without the modifying variable.
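As an illustration (all counts invented), stratum-specific odds ratios that point in opposite directions make a single pooled estimate uninterpretable:

```python
# Invented stratified counts illustrating effect measure modification:
# the exposure-disease OR differs sharply between strata, so a single
# pooled OR averages two genuinely different effects.
def odds_ratio(a, b, c, d):
    return (a / b) / (c / d)

# (exposed cases, unexposed cases, exposed controls, unexposed controls)
stratum_a = (40, 10, 100, 100)  # OR = 4.0
stratum_b = (10, 40, 100, 100)  # OR = 0.25

print(odds_ratio(*stratum_a))  # 4.0
print(odds_ratio(*stratum_b))  # 0.25

# Collapsing the strata gives an estimate that describes neither group:
crude = tuple(x + y for x, y in zip(stratum_a, stratum_b))
print(odds_ratio(*crude))  # 1.0
```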

31 Assessment of confounding and effect measure modification (continued) In critical appraisal there are usually 3 concerns about confounding:
1. Does a potential confounder exist that has not been addressed in the study (e.g., was it unmeasured)?
2. Has the issue of confounding been properly dealt with in the analysis (e.g., if confounding is present, is an adjusted measure of association reported)?
3. Were some of the variables that were treated as confounders in an analysis actually mediators of the effect being studied?
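The second concern can be illustrated with a small sketch of Mantel-Haenszel adjustment (the counts are invented; the within-stratum odds ratios are exactly 1.0, yet the crude table suggests a strong association):

```python
# Invented example of confounding: within each stratum of the confounder
# the OR is 1.0, but the crude (collapsed) table suggests a strong
# association. A Mantel-Haenszel summary recovers the stratum-level truth.
def odds_ratio(a, b, c, d):
    return (a / b) / (c / d)

def mantel_haenszel_or(strata):
    """Mantel-Haenszel summary OR over (a, b, c, d) 2x2 strata."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    return num / den

# (exposed cases, unexposed cases, exposed controls, unexposed controls)
stratum1 = (80, 20, 40, 10)    # OR = 1.0; confounder present
stratum2 = (20, 80, 160, 640)  # OR = 1.0; confounder absent

crude = tuple(x + y for x, y in zip(stratum1, stratum2))
print(odds_ratio(*crude))                        # 3.25: confounded
print(mantel_haenszel_or([stratum1, stratum2]))  # 1.0: adjusted
```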

32 Assessing the role of chance The approach depends on whether statistical tests or confidence intervals are reported. Statistical testing: Consider type I and type II error. Confidence intervals: Examine the width and location of the confidence interval. This provides information about certainty, and also about the plausible range of values consistent with the data. Remember that these considerations are subordinate to validity: for example, P values are of little interest if the estimate is invalid!
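As a sketch of the confidence-interval approach (the 2×2 counts are invented), a Woolf-type approximate interval for an odds ratio shows how study size drives interval width:

```python
import math

# Approximate 95% CI for an odds ratio (Woolf / log-OR method);
# the 2x2 counts are invented for illustration.
def or_with_ci(a, b, c, d, z=1.96):
    or_est = (a / b) / (c / d)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    low = math.exp(math.log(or_est) - z * se_log)
    high = math.exp(math.log(or_est) + z * se_log)
    return or_est, low, high

# A small study: a wide interval that includes the null value of 1.0.
print(or_with_ci(30, 70, 20, 80))      # OR 1.71, CI roughly 0.89 to 3.29

# The same OR from a study 10x larger: a much narrower interval.
print(or_with_ci(300, 700, 200, 800))  # OR 1.71, CI roughly 1.40 to 2.11
```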

33 Addressing causality The concept of causality is hierarchically related to the concept of validity. If an estimate is invalid, there is often no point in considering its causal significance. Note, however, that this is not always the case—for example, in the case of a substantial association in the presence of nondifferential misclassification bias. If an estimate is unbiased, causation may be considered. A common approach is to consider causal criteria, as described, for example, by Bradford Hill.

34 Assessing generalizability If an estimate is judged to be invalid, then there is no point assessing its generalizability. Internal validity is a necessary precondition for any consideration of external validity. External validity is a study’s applicability to populations beyond its target population. Its assessment is entirely a matter of judgement: considered opinion, not fact.

35 Reporting a critical appraisal A critical appraisal can usually be written up in a few pages, or delivered verbally in 5 to 10 minutes. It should normally follow the sequence of steps described in this chapter. This means that it would often start with a brief summary of the work, including the objectives of the study, the exposure and disease variables, and the study design.

36 Reporting a critical appraisal (continued) This would be followed by an examination of the key issues of bias, effect modification, and confounding, as outlined above. If the results of the study are considered invalid, then the appraisal may end there. If the estimates are determined to be valid, then a discussion of their causal significance and generalizability to the local population often follows.

37 Critical appraisal as a skill Critical appraisal is a skill and, like all skills, requires practice and feedback. It also requires knowledge. For example, consideration of biological plausibility very often depends on knowledge of physiology. Judgements about external validity require detailed knowledge of different populations. The best way to develop the skill of critical appraisal—to build your fluency with study designs and types of error, to identify critical gaps in your knowledge—is to apply it in a setting where feedback is available, such as internal peer-review groups and journal clubs.

38 End

