Development of a Tool to Evaluate the Quality of Non-randomized Studies of Interventions or Exposures
Presented by Nancy D. Berkman, PhD, and Meera Viswanathan, PhD
AHRQ 2009 Annual Conference, Bethesda, Maryland, September 15, 2009
RTI International is a trade name of Research Triangle Institute

Acknowledgements
Project funding provided by
–Phase 1: Grant from RTI Independent Research and Development (IR&D) funds
–Phase 2: Contract from the Agency for Healthcare Research and Quality (AHRQ), U.S. Department of Health and Human Services, through the Evidence-based Practice Centers (EPC) program

Context for the Project
Increasing demand to include non-randomized studies in systematic literature reviews and comparative effectiveness reviews to capture
–The effects of interventions or exposures on a more broadly defined population than can be observed through RCTs
–Topics where RCTs would be logistically or ethically inappropriate
–Longer-term outcomes and harms (side effects)
The trade-off for the wider applicability of findings from observational studies, compared with RCTs, is a potentially wider range of sources of bias, including bias in selection, performance, detection of effects, and attrition.

Background: Rating the Quality of Non-randomized Studies
The quality (internal validity) of each study included in a review needs to be evaluated:
–Well-established criteria and instruments exist for evaluating the quality of RCTs, but not for non-randomized (observational) studies
–PIs conducting systematic reviews generally lack access to validated and adaptable instruments for evaluating the quality of observational studies
–Each new review therefore often develops its own quality rating tool, "reinventing the wheel" and leading to inconsistent standards within and across reviews

Project Goals
To create a practical and validated tool for evaluating the quality of non-randomized studies of interventions or exposures that:
–Reflects a comprehensive theoretical framework: captures all relevant domains
–Has broad applicability: can be used "off the shelf" by different PIs
–Is modifiable: can be adapted to different topic areas
–Is easy to use and understand: can be used by reviewers with varying levels of expertise or experience
–Is validated: users can be confident of their evaluation of study quality
–Advances the methodology in the field
–Can be disseminated widely

Methods: Phase 1
Item development
–Reviewed the literature on the evaluation of the quality of observational studies
–Collected quality review items used in early tools to evaluate non-RCTs through the published literature and 90 AHRQ-sponsored EPC reviews
–Categorized all potential items into the 12 quality domains identified in Evaluating non-randomized intervention studies (Deeks et al., 2003)

Methods: Phase 1 (continued)
Item bank development
–Selected the best items for measuring each of the included domains
–Modified selected items where necessary to ensure that critical domains were included and to improve readability
–Developed a pre-specified set of responses
–Developed explanatory text to be used by PIs and abstractors to individualize as well as standardize interpretation
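
As a rough illustration only, the sketch below shows one way an item-bank entry of this kind might be represented: a question tied to its quality domain, a pre-specified response set, and the explanatory text for PIs and abstractors. The field names, response options, and notes are hypothetical and are not drawn from the actual instrument.

```python
# Hypothetical sketch of a single item-bank entry; field names and the
# response set are illustrative, not taken from the published tool.
from dataclasses import dataclass

@dataclass
class ItemBankEntry:
    domain: str            # e.g., one of the 12 quality domains (Deeks et al., 2003)
    question: str          # quality question shown to abstractors
    responses: list[str]   # pre-specified response set
    pi_instructions: str   # guidance the PI tailors to the review topic
    abstractor_notes: str  # standardized interpretation for abstractors

example_item = ItemBankEntry(
    domain="Intervention/exposure",
    question="What is the level of detail in describing the intervention or exposure?",
    responses=["High", "Medium", "Low", "Not reported"],
    pi_instructions=("Specify which details need to be stated, e.g., intensity, "
                     "duration, frequency, route, setting, and timing."),
    abstractor_notes="Rate against the details the PI has specified for this review.",
)
print(example_item.question)
```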

Methods: Phase 2
Technical Expert Panel input
–Conceptual framework, to ensure that we included all relevant domains
–Face validity
Cognitive interviews with potential users
–Readability
–Conceptualization
Validation
–Content/face validity
–Inter-rater reliability testing
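
Inter-rater reliability of this kind is commonly summarized with an agreement statistic such as Cohen's kappa. The sketch below is a minimal, self-contained illustration of that calculation on invented ratings; it is not the project's actual analysis.

```python
# Minimal illustration of Cohen's kappa for two abstractors rating the same
# studies on one item. The ratings and categories below are invented.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    # Chance agreement from each rater's marginal proportions per category
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

rater_1 = ["Yes", "No", "Unclear", "Yes", "No", "Yes"]
rater_2 = ["Yes", "No", "No", "Yes", "Unclear", "Yes"]
print(round(cohens_kappa(rater_1, rater_2), 2))  # 0.45 for this toy example
```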

Conceptual Underpinnings of the Instrument
Evaluation of quality can rely on either a description of methods or an assessment of validity and precision
Methods description approach
–Follows the reporting structure of many manuscripts
–Relies less on judgment than on reporting
Validity and precision approach
–What we really care about
–More challenging to evaluate
–Greater reliance on judgment

Domains for Quality Evaluation Approaches
Methods description approach
–Background/context
–Sample definition and selection
–Intervention/exposure
–Creation of treatment groups
–Follow-up
–Specification of outcomes
–Analysis: comparability of groups
–Analysis: outcomes
–Interpretation
Validity and precision approach
–Selection bias
–Performance bias
–Information bias
–Detection bias
–Attrition bias
–Reporting bias
–Precision

Tool Results
Comprehensive: bank of 39 questions
Modifiable: includes relevant items appropriate for all non-randomized study types
Easy to use: instructions for PIs and abstractors assist in appropriate interpretation of questions
Example: What is the level of detail in describing the intervention or exposure? [PI: specify which details need to be stated, e.g., intensity, duration, frequency, route, setting, and timing of the intervention/exposure. For case-control studies, consider whether the condition, timing, frequency, and setting of symptoms are provided in the case definition.]

Next Steps
–Finalize inter-rater reliability results
–Publish findings and disseminate the tool
Proposed Phase III:
–Design-specific validation, including inter-rater reliability testing by study type
–Reduce the number of questions needed to address specific domains
–Develop a web-based platform for generating design- and topic-specific instruments from the item bank
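
One plausible way such a platform could generate a design- or topic-specific instrument is by filtering the item bank on the study designs each question applies to. The sketch below is a hypothetical illustration of that idea only; the questions, fields, and function are invented and do not reflect the planned platform.

```python
# Hypothetical sketch: select item-bank questions applicable to a given
# study design (e.g., cohort, case-control). All entries are invented.
item_bank = [
    {"question": "How were participants selected into the study?",
     "domain": "Selection bias", "designs": {"cohort", "case-control", "cross-sectional"}},
    {"question": "Is the case definition explicit and applied consistently?",
     "domain": "Selection bias", "designs": {"case-control"}},
    {"question": "Was loss to follow-up comparable across treatment groups?",
     "domain": "Attrition bias", "designs": {"cohort"}},
]

def build_instrument(bank, study_design):
    """Return the subset of item-bank questions relevant to one study design."""
    return [item for item in bank if study_design in item["designs"]]

for item in build_instrument(item_bank, "case-control"):
    print(f'[{item["domain"]}] {item["question"]}')
```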