Chapter 7. Getting Closer: Grading the Literature and Evaluating the Strength of the Evidence.

1 Chapter 7. Getting Closer: Grading the Literature and Evaluating the Strength of the Evidence

2 Quality: Methodological and Non-Methodological Factors
Some researchers distinguish between a study's methodological and its non-methodological quality. Methodological quality is the extent to which all aspects of a study's design and implementation combine to protect its findings against biases. A focus on methodological quality means intensively examining factors such as the scientific soundness of the research design and sampling strategy, measurement, data analysis, and adherence to a protocol.

3 Quality: Methodological and Non-Methodological Factors
Evaluating non-methodological quality means analyzing the clarity and relevance of the program's objectives, theoretical basis, and content; the trustworthiness of the research's funding source; and the adequacy of the resources, setting, and ethical considerations. Non-methodological quality criteria are important because they are indicators of a program's potential pertinence and acceptability. HIGH METHODOLOGICAL QUALITY PRODUCES VALID EVIDENCE. HIGH NON-METHODOLOGICAL QUALITY IS ESSENTIAL IN PRODUCING EVIDENCE THAT MATTERS.

4 Rating the Quality of the Evidence
The U.S. Agency for Healthcare Research and Quality (AHRQ) commissioned the Evidence-based Practice Center at Research Triangle Institute International-University of North Carolina to produce a report that would (1) describe systems that rate the quality of evidence in individual studies, (2) describe systems that grade the strength of entire bodies of evidence, and (3) provide guidance on best practices in the field of grading evidence. The Center developed systems for identifying and rating critical domains for RCTs and for observational studies (that is, all studies that are not RCTs).

5 AHRQ's Domains for Evaluating the Quality of RCTs and Observational/Nonrandomized Studies (critical domains are italicized in the original slide)
Randomized controlled trials: study question; study population; randomization; blinding; interventions; outcomes; statistical analysis; results; discussion; funding or sponsorship.
Observational/nonrandomized studies: study question; study population; comparability of participants; exposure or program/intervention; outcome measures; statistical analysis; results; discussion; funding or sponsorship.

6 RCT Quality and the CONSORT Statement
To help the research consumer evaluate the quality of research reports, that is, to begin to grade the extent to which they adequately cover each domain, proponents of evidence-based medicine have developed the Consolidated Standards of Reporting Trials (CONSORT) statement (www.consort-statement.org). The CONSORT statement is available in several languages and has been endorsed by prominent medical, clinical, and psychological journals. CONSORT consists of a checklist and flow diagram to help improve the quality of reports of randomized controlled trials. CONSORT covers all domains listed by the U.S. AHRQ.

7 Observational Studies and TREND
The AHRQ critical domains for nonrandomized studies include comparability of participants, exposure or program/intervention, outcome measures, and statistical analysis. To address these domains, the American Public Health Association has issued the TREND (Transparent Reporting of Evaluations with Nonrandomized Designs) statement. The TREND statement applies to all studies that do not use randomization (including quasi-experimental designs). Source: Des Jarlais DC, Lyles C, Crepaz N. Improving the reporting quality of nonrandomized evaluations of behavioral and public health interventions: the TREND statement. Am J Public Health. 2004;94(3):361-366.

8 U.S. Preventive Services Task Force: Grading the Evidence
The U.S. Preventive Services Task Force (USPSTF) is an independent panel of experts in primary care and prevention that systematically reviews evidence of effectiveness and develops recommendations for clinical preventive services. The USPSTF combines quality, quantity, and consistency in grading evidence.

9 U.S. Preventive Services Task Force: Grading the Evidence (Continued)
Level I, the highest quality, consists of evidence obtained from at least one properly randomized controlled trial.
Level II-1 is evidence obtained from well-designed controlled trials without randomization.
Level II-2 consists of evidence obtained from well-designed cohort or case-control analytic studies.
Level II-3 is evidence obtained from multiple time series with or without the intervention.
Level III, the weakest evidence, consists of opinions of respected authorities, descriptive studies and case reports, and reports of expert committees.
The USPSTF grades its recommendations according to one of five classifications (A, B, C, D, I) reflecting the strength of evidence and the magnitude of net benefit (benefits minus harms).
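
Read as an ordered classification, the hierarchy above can be sketched in code. The Python fragment below is purely illustrative: the design labels, the helper name evidence_level, and the mapping are assumptions made for this example, not USPSTF definitions.

```python
# Illustrative mapping of study designs to USPSTF evidence levels.
# The design labels and helper name are hypothetical; the level descriptions
# paraphrase the hierarchy summarized on the slide above.
USPSTF_LEVELS = {
    "I":    "at least one properly randomized controlled trial",
    "II-1": "well-designed controlled trial without randomization",
    "II-2": "well-designed cohort or case-control analytic study",
    "II-3": "multiple time series with or without the intervention",
    "III":  "opinions of respected authorities, descriptive studies, "
            "case reports, or reports of expert committees",
}

def evidence_level(design: str) -> str:
    """Return a USPSTF evidence level for a (hypothetical) design label."""
    mapping = {
        "rct": "I",
        "nonrandomized controlled trial": "II-1",
        "cohort": "II-2",
        "case-control": "II-2",
        "time series": "II-3",
        "expert opinion": "III",
    }
    return mapping.get(design.lower(), "unclassified")

print(evidence_level("cohort"))  # II-2
```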

10 Is the Literature Review Reliable? Valid?
A reliable review consistently provides the same information about methods and content from one time to the next for a single reviewer ("within") and among several reviewers ("across"). A valid review is an accurate one. Large, comprehensive literature reviews nearly always have more than one reviewer. The measure of agreement between two reviewers is called kappa, defined as the agreement beyond chance divided by the amount of agreement possible beyond chance.
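
As a worked illustration of this definition, the Python sketch below computes Cohen's kappa for two reviewers making include/exclude decisions on the same set of abstracts; the function name and the counts are invented for the example.

```python
def cohens_kappa(both_include, both_exclude, only_a, only_b):
    """Cohen's kappa for two reviewers making include/exclude decisions.

    Kappa = (observed agreement - chance agreement) / (1 - chance agreement),
    i.e., agreement beyond chance divided by the agreement possible beyond chance.
    """
    total = both_include + both_exclude + only_a + only_b
    observed = (both_include + both_exclude) / total

    # Marginal proportions of "include" decisions for each reviewer.
    a_include = (both_include + only_a) / total
    b_include = (both_include + only_b) / total
    chance = a_include * b_include + (1 - a_include) * (1 - b_include)

    return (observed - chance) / (1 - chance)


# Hypothetical screening of 100 abstracts: 40 included by both, 40 excluded
# by both, 10 included only by reviewer A, 10 included only by reviewer B.
print(round(cohens_kappa(40, 40, 10, 10), 2))  # 0.6, often read as moderate agreement
```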

11 Systematic Reviews
Reviews of the literature are either systematic or narrative. A systematic review follows a detailed set of procedures, or protocol (a sketch of such a protocol follows this slide), that includes:
a focused study question (e.g., PICO);
a specific search strategy that defines the search terms and databases to be used as well as the criteria for including and excluding studies;
specific instructions on the types of data to be abstracted on study objective, methods, findings, and quality.
Systematic literature reviews take two forms:
qualitative summaries of the literature based on trends the reviewer finds across studies;
meta-analysis, which uses statistical methods to produce a quantitative summary of program effects.
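
To make the protocol elements concrete, here is a hypothetical sketch of a review protocol recorded as structured data; every field name and value is illustrative, not taken from the slides.

```python
# Hypothetical sketch of a systematic-review protocol as structured data.
# All field names and example values are invented for illustration.
review_protocol = {
    "study_question": {  # focused question in PICO format
        "population": "adults with type 2 diabetes",
        "intervention": "telephone-based self-management coaching",
        "comparison": "usual care",
        "outcome": "HbA1c at 12 months",
    },
    "search_strategy": {
        "databases": ["MEDLINE", "Embase", "CINAHL"],
        "search_terms": ["diabetes", "self-management", "telephone coaching"],
        "inclusion_criteria": ["randomized or quasi-experimental design",
                               "published 2010 or later", "English language"],
        "exclusion_criteria": ["case reports", "conference abstracts"],
    },
    "data_to_abstract": ["study objective", "methods", "findings", "quality"],
}
```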

12 Systematic Reviews: Meta-Analysis
A meta-analysis is a systematic review of the literature that uses formal statistical techniques to sum up the results ("effect sizes") of separate studies on the same topic. The idea is that the larger numbers obtained by combining study findings provide greater statistical power than any of the individual studies. The concept of effect size is central to meta-analysis. An effect is the extent to which an outcome is present in the population, and it is a crucial component of sample size calculations. Meta-analyses are themselves observational studies (they are retrospective reviews), and some are of higher quality than others.
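
As a minimal illustration of pooling, the sketch below computes a fixed-effect (inverse-variance) summary of invented study effect sizes; this is one common pooling method, assumed here for the example rather than prescribed by the slides.

```python
import math

def fixed_effect_pool(effects, standard_errors):
    """Fixed-effect (inverse-variance) pooled effect size and its standard error.

    Each study's effect is weighted by 1 / SE^2, so larger, more precise
    studies contribute more to the summary estimate.
    """
    weights = [1.0 / se**2 for se in standard_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# Three hypothetical studies reporting standardized mean differences.
effects = [0.30, 0.45, 0.10]
ses = [0.12, 0.20, 0.15]
pooled, se = fixed_effect_pool(effects, ses)
print(f"pooled effect = {pooled:.2f}, "
      f"95% CI = ({pooled - 1.96*se:.2f}, {pooled + 1.96*se:.2f})")
```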

13 Meta-Analysis: Questions for Reviewing Quality
Are the objectives of the meta-analysis unambiguously stated?
Are the inclusion and exclusion criteria explicit?
Are the search strategies described in detail?
Is a standardized protocol used to screen the literature?
Is a standardized protocol or abstraction form used to collect data?
Do the authors fully explain their method of combining or "pooling" results?
Does the report summarize the flow of studies, provide descriptive data for each study, and summarize key findings?
Is the review current?

