Evaluability Assessments: Achieving Better Evaluations, Building Stronger Programs
Nicola Dawkins, PhD, MPH, ICF Macro

Project Team
The findings and conclusions presented are those of the authors and do not necessarily represent the official position of the agencies.
Robert Wood Johnson Foundation: Laura Leviton, PhD
Centers for Disease Control and Prevention: DNPAO, Laura Kettel Khan, PhD; DASH, Leah Robin, PhD, and Seraphine Pitt Barnes, PhD, MPH, CHES; DACH/PRC, Jo Anne Grunbaum, EdD
Centers for Disease Control and Prevention Foundation: Danielle Jackson, MPH; John Moore, PhD, RN; and Holly Wethington, PhD
Macro International Inc.: David Cotton, PhD, MPH; Nicola Dawkins, PhD, MPH; Karen Cheung, MPH; Mary Ann Hall, MPH; Thearis Osuji, MPH; and Starr Rice, BA

Will Discuss Today
– Introduction to EA and comparison with full evaluation
– Purpose of the Early Assessment project
– One unique process for using multiple EAs
– Steps in the project
– Results
– Insights and conclusions

Evaluability Assessment Assesses:
1. Underlying program logic
2. Current state of program implementation
3. Feasibility of conducting rigorous outcomes-focused evaluation or other sorts of evaluation

Assists in the improvement of program design, implementation, and evaluation characteristics:
– Is the intervention promising?
– Does the intervention have program design integrity and realistic, achievable goals?
– Is the intervention implemented as intended and at an appropriate developmental level?
To answer these questions: (1) Is there a feasible design? (2) Are data available or feasible to collect?
(Slide decision point: Evaluable Intervention? Yes / No)

CDC Framework for Program Evaluation: Steps
1. Engage stakeholders
2. Describe the program
3. Focus the evaluation design
4. Gather credible evidence
5. Justify conclusions
6. Ensure use and share lessons learned

Evaluability Steps Compared to CDC's Evaluation Framework
CDC Framework step and corresponding evaluability step:
– Engage stakeholders: Involve stakeholders and intended users
– Describe the program: Clarify program intent; determine program implementation
– Focus the evaluation design: Work with stakeholders to prioritize key evaluation questions
– Gather credible evidence: Explore designs and measurements
– Justify conclusions: Agree on intended uses
– Ensure use and share lessons learned

Multiple EA Example
– Convene a panel of experts to identify and review potential environmental programs and policies
– Assess environmental programs' and policies' readiness for evaluation
– Synthesize findings and share promising practices with the field
– Develop a network of public health and evaluation professionals with the skills to conduct evaluability assessments

Unique Systematic Screening and Assessment (SSA) Method
(Slide diagram showing inputs, steps, products, and guidance)
Steps:
1. CHOOSE priorities
2. SCAN environmental interventions
3. REVIEW AND IDENTIFY interventions that warrant evaluability assessment
4. EVALUABILITY ASSESSMENTS of priority interventions
5. REVIEW AND RATE interventions for promise/readiness for evaluation
6. USE information
7. SYNTHESIZE what is known
Inputs: expert review panel; nominations, existing inventories, and descriptions; distributed network of practitioners/researchers. Products: focus; brief descriptions; list of interventions; report on each intervention; ratings and reports; constructive feedback; plan for rigorous evaluation; report of intervention and evaluation issues. Guidance: communicate with all stakeholders throughout.

Systematic Process
Nominations received / met inclusion criteria (Year 1):
– After School/Daycare: 81 received, 34 met criteria
– Food Access: 55 received, 23 met criteria
– School District Local Wellness Policies: 146 received, 58 met criteria

Systematic Process, Cont'd
The expert panel selected 26 using these criteria:
– Potential impact
– Innovativeness
– Reach
– Acceptability to stakeholders
– Feasibility of implementation
– Feasibility of adoption
– Sustainability
– Generalizability/transportability
– Staff/organization capacity for evaluation

Selected Programs and Policies (Year 1)
7 After School and 3 Daycare programs:
– 5 programs: PA time, nutritious snacks
– 4 programs: PA time, nutrition education
– 1 policy: PA, nutrition, TV screen time
10 Food Access programs:
– 5 farmers' markets
– 3 supermarket or corner store programs
– 2 restaurant programs
6 School District Local Wellness Policies:
– All selected policies addressed PA and nutrition

Evaluability Assessment
– Review of documents, including a draft logic model
– 2-3 day site visit: interviews (program description, logic model, staffing, funding, sustainability, evaluation activities), observations, and a TA/debriefing session
– Reports and recommendations
– Follow-up TA call with CDC experts

Readiness for Evaluation
Review of site visit reports identified four classifications:
1. Ready for stand-alone, outcome evaluation
2. Appropriate for cluster evaluation
3. Theoretically sound but needing further development
4. Technical assistance needed in specific areas

Results for Year 1
The expert panel determined:
– 14 ready for stand-alone, outcome evaluation
– 2 best suited for cluster evaluation
– 3 theoretically sound but needing further development
– 6 needing TA in specific areas

Results for Year 1, Cont'd
– Dissemination of results from Year 1
– Full evaluation planned for the New York City Daycare Policy

Discovering Practice-Based Evidence
The SSA Method builds the evidence base through practice-based evidence.
Year 1: 282 nominations; 26 EAs; 9 with high potential impact and ready for evaluation

Year 2
Year 2 completed EAs of 27 initiatives.
Nominations received / met inclusion criteria / selected:
– After School/Daycare
– Food Access: 29 received, 11 met criteria, 8 selected
– Comprehensive School PA: 39 received, 7 met criteria, 2 selected
– Built Environment for PA: 22 received, 14 met criteria, 4 selected

Discovering Practice-Based Evidence
The SSA Method builds the evidence base through practice-based evidence.
Year 2: 176 nominations; 27 EAs; 11 with high potential impact and ready for evaluation

Key Lessons Learned
– Use an expert panel for diverse perspectives
– Solicit broadly to maximize return
– Include programs/policies beyond the start-up phase to ensure implementation
– Centralize oversight for methodological integrity
– Provide technical assistance as an incentive to sites

Recap: It's a Process
1. Choose priorities for the scan
2. Scan environmental programs and policies
3. Review and identify those that warrant evaluability assessment
4. Evaluability assessment of programs and policies
5. Review and rate for promise and readiness for evaluation
6. Use information: position for rigorous evaluation; feedback to innovators; cross-site synthesis

Overview of General EA vs. SSA Method
What is different?
– EA is one component of a process of discovery
– The SSA Method explicitly provides feedback to innovators
– The SSA Method provided insights on clusters of projects
– The SSA Method helped identify policies and programs worthy of further attention
What is the same?
– Review documents
– Discuss with stakeholders
– Develop a logic model
– Iterate the process
– Determine what can be evaluated

The Cost-Savings Factor
Of 458 innovations nominated in both years:
– 174 met criteria for inclusion, and 53 were selected for evaluability assessments
– 20 were of high potential impact and ready for stand-alone evaluation
– Yet all of the nominations were viewed as important by stakeholders
– If all of them had simply been evaluated, there would have been only about a 4% chance of picking one likely to show success (see the worked calculation below)
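The 4% figure above, and the roughly 20-to-1 ratio in Conclusion 1, follow directly from the screening counts reported in the earlier slides. Below is a minimal sketch of that arithmetic, assuming the combined two-year counts given here; the variable names are illustrative and not part of the original SSA materials.

```python
# Back-of-the-envelope check of the screening funnel, using only the
# counts reported in these slides (Years 1 and 2 combined).

nominated = 282 + 176   # innovations nominated across both years -> 458
assessed = 26 + 27      # evaluability assessments completed      -> 53
ready = 9 + 11          # high potential impact, ready for rigorous evaluation -> 20

hit_rate_without_screening = ready / nominated   # chance a randomly chosen nominee is "ready"
evaluations_per_success = nominated / ready      # evaluations needed per promising innovation

print(f"Nominated: {nominated}, assessed: {assessed}, ready: {ready}")
print(f"Hit rate without screening: {hit_rate_without_screening:.1%}")        # ~4.4%
print(f"Evaluations per promising innovation: ~{evaluations_per_success:.0f}")  # ~23
```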

Conclusion 1
Without a systematic process, one would need to conduct at least 20 evaluations to discover one that might be successful. The process is cost-effective for funders and decision makers, and it reduces uncertainty about evaluation investments.

Conclusion 2
Innovators found the process very helpful. Evaluability assessment plays a program development role.

Conclusion 3
Themes and issues emerged for clusters of policies and programs. Evaluability assessments can be configured to cast new light on:
– developments in the field
– families or clusters of policies and programs

Impact on the Field of Prevention
"Translating practice into evidence"
– A new method of topic selection and program identification
– Researchers were highly engaged by learning about practice
– Stimulated discussion of new research agendas

Nicola Dawkins