1 Systematic Review Module 8: Assessing Applicability. C. Michael White, PharmD, FCP, FCCP, Professor and Director, University of Connecticut/Hartford Hospital Evidence-based Practice Center. The speaker has no actual or potential conflicts of interest in relation to this activity.

2 Learning Objectives
The successful learner will be able to:
– Describe applicability and substantiate its importance
– Delineate a systematic approach to assessing applicability, based on the PICOTS domains
– Apply a standard approach to discerning whether a study is evaluating efficacy or effectiveness

3 Applicability of Studies

4 Defining Applicability
Applicability definition:
– "Inferences about the extent to which a causal relationship holds over variations in persons, settings, treatments, and outcomes" (Shadish and Cook, 2002)
Applicable study results likely reflect expected outcomes in the real world
Other terms used synonymously with applicability include external validity, generalizability, and relevance
Shadish W, Cook T. Experimental and quasi-experimental designs for generalized causal inference. Boston: Houghton Mifflin; 2002.

5 Framing Applicability Issues
Frame issues of applicability with reference to the specific clinical or policy questions the review is intended to inform
Applicability needs to be considered at the outset:
– When the scope of the review is determined
– When the key questions are identified
Atkins D. Assessing Applicability. Methods Guide.

6 Applicability Resources
Clinical experts and stakeholders can provide general information important in framing applicability issues:
– What the population of interest looks like (e.g., mostly female, mostly elderly, a particular ethnic makeup)
– What types of care or procedures are routine or represent the standard of care
– Whether certain subpopulations are characteristically different from others (biologically, clinically)
Atkins D. Assessing Applicability. Methods Guide.

7 Other Applicability Resources
Registry or epidemiological information, practice guidelines, consensus papers, book chapters, and general reviews can provide useful applicability information:
– Applicability issues do not have to be reviewed for each study
– These sources are used to place the available literature in context
Applicability should be a factor in rating the strength of evidence
Atkins D. Assessing Applicability. Methods Guide.

8 General Considerations in Judging Applicability
Applicability judgments should be based on stepwise consideration of a number of specific issues
– However, applicability is a general rather than an absolute construct; there are no validated formulaic criteria
Atkins D. Assessing Applicability. Methods Guide.

9 General Considerations in Judging Applicability
Stepwise approach to applicability (a minimal record-keeping sketch follows this slide):
– Consider applicability based on the nature of the interventions and outcomes
– Identify a few factors that are most relevant to applicability
– Summarize findings in a consistent way using the PICOTS framework
– Summarize the reasoning behind judgments made about applicability to other populations or interventions
Atkins D. Assessing Applicability. Methods Guide.
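The consistent PICOTS summary called for above can be captured with a simple per-study record. The sketch below is illustrative only; it is not part of the module or the AHRQ Methods Guide, and the class name, field names, and example values are assumptions.

```python
# Minimal sketch (assumed structure): record per-study applicability findings
# under the PICOTS framework so they can be summarized consistently.
from dataclasses import dataclass, field

@dataclass
class ApplicabilityRecord:
    """Applicability notes for one study, organized by PICOTS domain."""
    study_id: str
    population: str = ""    # eligibility, demographics, severity, run-in, event rates
    intervention: str = ""  # dose, schedule, intensity, cointerventions, operator skill
    comparator: str = ""    # comparator regimen; whether it reflects the best alternative
    outcomes: str = ""      # surrogate vs. clinical endpoints, benefits and harms
    timing: str = ""        # length of follow-up
    setting: str = ""       # geographic and clinical setting
    limiting_factors: list = field(default_factory=list)  # conditions limiting applicability
    reasoning: str = ""     # judgment about applicability to the target population

# Illustrative use, based on the FIT example discussed later in the module:
record = ApplicabilityRecord(
    study_id="FIT",
    population="4,000 of 54,000 screened women enrolled; younger, healthier, more adherent",
    limiting_factors=["narrow eligibility criteria", "high exclusion rate"],
    reasoning="Benefits may be overestimated relative to typical osteoporosis patients.",
)
print(record.limiting_factors)
```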

10 Population and Applicability
Data to abstract | Conditions that limit applicability
Eligibility criteria, proportion of screened individuals enrolled | Narrow eligibility criteria, high exclusion rate
Demographics (range and mean): age, gender, race, ethnicity | Differences between study population and patients in community
Severity or stage of illness (referral or primary care population) | Narrow or unrepresentative severity or stage of illness
Run-in period: attrition rate before randomization and reasons (nonadherence, side effects, no response) | Run-in periods with high exclusion rates
Event rates in treatment and control groups | Event rates markedly different than in community
Prevalence of disease (for diagnostic studies) | Disease prevalence in study population different than in community
Atkins D. Assessing Applicability. Methods Guide. Gartlehner G. J Clin Epidemiol 2006;59:1040-8.
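To make the right-hand column concrete, the following sketch flags population-level concerns when abstracted study data diverge from community data. It is not from the Methods Guide; the dictionary keys and the thresholds (25% enrollment, 10-year age gap, 2x event-rate ratio) are arbitrary assumptions chosen for illustration.

```python
# Illustrative sketch only: flag population features that may limit applicability.
def population_concerns(study: dict, community: dict) -> list:
    concerns = []
    if study["enrolled"] / study["screened"] < 0.25:
        concerns.append("narrow eligibility criteria / high exclusion rate")
    if abs(study["mean_age"] - community["mean_age"]) > 10:
        concerns.append("study demographics differ from community patients")
    ratio = study["control_event_rate"] / community["event_rate"]
    if ratio > 2 or ratio < 0.5:
        concerns.append("event rates markedly different than in community")
    return concerns

# Example: a screening-heavy trial enrolling younger, lower-risk patients.
print(population_concerns(
    study={"enrolled": 4000, "screened": 54000, "mean_age": 60, "control_event_rate": 0.02},
    community={"mean_age": 75, "event_rate": 0.08},
))
```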

11 Population and Applicability: Examples
In the FIT trial, only 4,000 of 54,000 women screened were enrolled. Enrolled women were younger, healthier, and more adherent than typical osteoporosis patients.
A trial of etanercept for juvenile rheumatoid arthritis excluded patients who had side effects during an active run-in period; the trial found a low incidence of adverse events.
Clinical trials used to inform Medicare decisions enrolled patients who were younger (60 vs. 75 years) and more often male (75% vs. 42%) than Medicare patients with cardiovascular disease.
Atkins D. Assessing Applicability. Methods Guide.

12 Intervention and Applicability
Data to abstract | Conditions that limit applicability
Medication dose, schedule, duration | Regimen not reflective of current practice
Intensity of behavioral interventions | Intensity of intervention not feasible for routine use
Adherence interventions | Monitoring practices or visit frequency not used in practice
Version of rapidly changing technology | Versions not in common use
Cointerventions | Cointerventions that likely modify effectiveness of therapy
Training/skill level of intervention team (surgery/diagnostics) | Level of training not widely available
Atkins D. Assessing Applicability. Methods Guide.

13 Intervention and Applicability: Examples
Studies of behavioral modification to promote a healthy diet employ a larger number and longer duration of visits than are available to most community patients.
Antiretroviral trials' use of pill counts does not always translate into effectiveness in real-world practice.
Combining iron and zinc attenuates the ability of iron to raise hemoglobin levels.
Trials of carotid endarterectomy selected surgeons with extensive experience and low complication rates, who were not representative of average vascular surgeons.
Atkins D. Assessing Applicability. Methods Guide.

14 Comparator, Outcomes, and Applicability
Data to abstract | Conditions that limit applicability
Comparator:
Medication dose, schedule, duration (if applicable) | Regimen not reflective of current practice
Comparator chosen versus others available (if applicable) | Use of substandard alternative therapy
Outcomes:
Outcomes (benefits and harms) and how they were defined | Surrogate endpoints, improper definitions for outcomes, composite endpoints
Atkins D. Assessing Applicability. Methods Guide.

15 Comparator and Applicability: Examples
A fixed-dose study compared high-dose duloxetine (80 to 120 mg) with low-dose paroxetine (20 mg).
Many trials evaluating magnesium in acute myocardial infarction were conducted before thrombolytics, antiplatelet agents, beta-blockers, and primary percutaneous coronary intervention (PCI) were in use.
Only 1 of 23 trials comparing bypass surgery with PCI used drug-eluting stents.
Atkins D. Assessing Applicability. Methods Guide.

16 Outcomes and Applicability: Examples
Trials of biologics for rheumatoid arthritis use radiographic progression rather than symptom evaluations.
Trials comparing cyclooxygenase-2 inhibitors and nonsteroidal antiinflammatory drugs use endoscopy-evaluated ulceration rather than symptomatic ulcers.
Atkins D. Assessing Applicability. Methods Guide.

17 Timing, Setting, and Applicability
Data to abstract | Conditions that limit applicability
Timing:
Timing of follow-up/outcome measures | Follow-up too short to detect important benefits or harms; measuring effects at inappropriate times
Setting:
Geographic setting | Settings where standards of care differ markedly from the setting of interest
Clinical setting | Specialty population or level of care that differs from the community
Atkins D. Assessing Applicability. Methods Guide.

18 Timing and Applicability: Examples
Alzheimer's disease trials evaluate surrogate endpoints (cognitive function scales) at 6 months, which may not reflect long-term outcomes (institutionalization rates).
Trials evaluate the QTc interval-prolonging effects of drugs using single-dose, end-of-dosing-interval evaluations rather than evaluations at maximum blood concentrations.
Atkins D. Assessing Applicability. Methods Guide.

19 Setting and Applicability: Examples
Studies evaluating the benefits of breast self-exams were conducted in Shanghai and St. Petersburg, settings that do not employ routine mammography screening as in the US.
– Would self-exam be as effective where routine mammography picks up cancer at earlier stages?
Studies of open surgical abdominal aortic aneurysm repair found an inverse relationship between hospital volume and short-term mortality.
Atkins D. Assessing Applicability. Methods Guide.

20 Efficacy or Effectiveness
Seven criteria are used; meeting 5 of 7 is indicative of an effectiveness trial (a scoring sketch follows this list):
1. Enrolled a primary care population
2. Less stringent eligibility criteria
3. Assessment of health-related outcomes
4. Long study duration, clinically relevant treatment modalities
5. Assessment of adverse events
6. Adequate sample size to assess a minimally important difference from a patient perspective
7. Intention-to-treat analysis
Gartlehner G. Int J Tech Assessment Health Care 2009;25:323-30. Gartlehner G. J Clin Epidemiol 2006;59:1040-8.
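The 5-of-7 rule is simple enough to express as a small scoring function. This is a sketch under assumptions, not code from the Gartlehner papers; the criterion identifiers are invented labels for the seven items above.

```python
# Minimal sketch (assumed labels): classify a trial as effectiveness vs. efficacy
# by counting how many of the seven criteria it meets (>= 5 -> effectiveness).
EFFECTIVENESS_CRITERIA = [
    "primary_care_population",
    "less_stringent_eligibility",
    "health_related_outcomes",
    "long_duration_relevant_modalities",
    "adverse_events_assessed",
    "adequate_sample_for_mid",     # minimally important difference, patient perspective
    "intention_to_treat_analysis",
]

def classify_trial(criteria_met: set) -> str:
    """Return 'effectiveness' if at least 5 of the 7 criteria are met, else 'efficacy'."""
    n_met = sum(1 for c in EFFECTIVENESS_CRITERIA if c in criteria_met)
    return "effectiveness" if n_met >= 5 else "efficacy"

# Example: a trial meeting six criteria is classified as an effectiveness trial.
print(classify_trial({
    "primary_care_population", "less_stringent_eligibility",
    "health_related_outcomes", "adverse_events_assessed",
    "adequate_sample_for_mid", "intention_to_treat_analysis",
}))
```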

21 Assessment of the Effectiveness Decision Tool
EPC directors reviewed 26 trials
– Without using the scale, 20 were judged subjectively as effectiveness trials and 6 as efficacy trials
– Using the scale, 17 of the 20 effectiveness trials met at least five criteria, while only 1 of the 6 efficacy trials did
Atkins D. Assessing Applicability. Methods Guide.

22 Guidance for Assessing Applicability (I)
Overarching principle:
– Be practical; focus on a limited number of features that are most important to the key questions and objectives of the review
Step 1: Report a priori factors affecting the applicability of the questions being asked, using the PICOTS format
– These considerations should be reflected in the key questions and in the inclusion and exclusion criteria for the review
Atkins D. Assessing Applicability. Methods Guide.

23 Guidance for Assessing Applicability (II)
Step 1 actions:
– Identify general challenges and specific factors that may affect applicability; the factors chosen will vary based on the nature of the intervention, the perspective (clinician, policymaker, patient), and the outcome (benefit, harm)
– Consult stakeholders and review background material to identify factors critical to determining whether the evidence is applicable to the decisions they need to make, and to understand current practice so the review can assess the extent to which studies reflect it
– Extract specific information using the PICOTS format
Atkins D. Assessing Applicability. Methods Guide.

24 Guidance for Assessing Applicability (III)
Step 2: Review and synthesize the evidence with explicit attention to crucial factors within the PICOTS format
Step 2 actions:
– Identify which of your trials are effectiveness trials and which are efficacy trials; if you have a mix, compare and contrast their findings
– Judge whether differences between the body of efficacy trials and the real world are important enough to limit its value in making health care decisions
Atkins D. Assessing Applicability. Methods Guide.

25 Guidance for Assessing Applicability (IV)
Step 2 actions (cont.):
– Examine observational studies with more representative populations (population-based studies, pharmacoepidemiologic studies, registries) to inform judgments about the applicability of trial data
– Assess the applicability of the aggregated evidence: highlight the results of effectiveness trials, and identify important factors in trials that may affect applicability and the direction and magnitude of the bias
Atkins D. Assessing Applicability. Methods Guide.

26 Guidance for Assessing Applicability (V)
Step 2 actions (cont.):
– Consider subgroup analyses; seek evidence of an empirical relationship between characteristics and effect size
– In trials done predominantly in males, subgroup analyses reporting results by gender can inform the direction and magnitude of the bias
– Comparison of event rates across studies can illustrate variation based on population characteristics
Atkins D. Assessing Applicability. Methods Guide.

27 Guidance for Assessing Applicability (VI)
Step 3: Summarize the evidence in the PICOTS format
Step 3 actions:
– For each PICOTS domain, indicate a judgment about whether characteristics of the evidence raise applicability concerns
– Describe not only what a study did (e.g., excluded patients with a history of bleeds) but also the effect this had (low risk of bleeding) and the extent to which it reduced applicability
– Note when major questions of applicability are not addressed, and the implications for applicability
Atkins D. Assessing Applicability. Methods Guide.

28 Summary Applicability Table
Domain | Summary conclusion | Description of applicability for evidence compared to question
P | Describe general characteristics of enrolled patients (proportion with a characteristic is more helpful than ranges) | Describe how enrolled populations differ from the target population and how this may affect benefits or harms
I | Describe general characteristics of interventions | Describe how interventions compare to routine use and how this might affect benefits and harms
C | Describe comparators used | Describe whether comparators reflect the best alternative treatment and how this might influence treatment effect
Atkins D. Assessing Applicability. Methods Guide.

29 Summary Applicability Table (continued)
Domain | Summary conclusion | Description of applicability for evidence compared to question
O | Describe what outcomes are most frequently reported | Describe whether measured outcomes reflect the most important clinical benefits and harms
T | Describe range of follow-up | Describe whether the follow-up used is sufficient to detect clinically important benefits and harms
S | Describe settings where studies are conducted | Describe whether the settings used may affect applicability
Atkins D. Assessing Applicability. Methods Guide.
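As a rough illustration of how the summary applicability table above could be assembled, the sketch below renders one line per PICOTS domain and flags domains with no entry. It is entirely illustrative; the domain keys, defaults, and example text are assumptions, not content from the Methods Guide.

```python
# Illustrative sketch: render the PICOTS summary applicability table as plain text.
PICOTS = ["Population", "Intervention", "Comparator", "Outcomes", "Timing", "Setting"]

def render_summary(rows: dict) -> str:
    """One line per domain: summary conclusion | applicability compared to question."""
    lines = ["Domain | Summary conclusion | Applicability compared to question"]
    for domain in PICOTS:
        summary, description = rows.get(
            domain, ("Not reported", "Major applicability question not addressed")
        )
        lines.append(f"{domain} | {summary} | {description}")
    return "\n".join(lines)

print(render_summary({
    "Population": (
        "Mostly women under 65 with few comorbidities",
        "Younger and healthier than community patients; benefits may be overestimated",
    ),
}))
```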

30 Key Messages
Applicability is important and distinct from internal validity
The reviewer needs to evaluate applicability by comparing and contrasting the target population and the study population using the PICOTS format
There are discernible differences between efficacy and effectiveness studies
– Effectiveness studies have high applicability
Transparency is an important aspect of the Effective Health Care Program
– A standard approach improves transparency

