Right-sized Evaluation


Right-sized Evaluation
Lisa Parker, PhD, February 2, 2015

Learning outcomes
- Improved ability to articulate information needs and to identify an appropriate method for information gathering
- Understanding of the types of evaluation and the ability to select among them appropriately
- Familiarity with the OVC survey toolkit as a resource for evaluation

Your experience with evaluation (10 mins)
Think individually about an evaluation that was recently conducted in your country:
- Why did you conduct it / why was it conducted?
- How did you use the data?
- Did the type of decisions made (based on the data) justify the cost and time it took to get the data?
If time permits, briefly discuss at your tables.

Where are we in the Framework? [framework diagram]

Definitions
Evaluation is:
- the systematic collection and analysis of information about the characteristics, outcomes, and impact of programs and projects (PEPFAR, 2014; USAID, 2011)
- the systematic investigation of the merit (quality), worth (value), or significance of an object (Scriven, 1999, cited by CDC)

Evaluation Policies

Why conduct an evaluation?
- To determine the effectiveness and efficiency of a program or intervention
- To ensure accountability and transparency
- To support program / intervention scale-up

Types of evaluation (PEPFAR)
- Process
- Outcome
- Impact
- Economic

Process evaluation
Purpose: Determine how the program is implemented and valued, and why results are or are not occurring
Methods: Document and routine data review, key informant interviews
Frequency: Usually once only
Data user: USG and programs
Timeline: 6 weeks to 6 months (or more)
Cost: $25K to several hundred thousand dollars

From PEPFAR: “A type of evaluation that focuses on program or intervention implementation, including, but not limited to, access to services, whether services reach the intended population, how services are delivered, client satisfaction and perceptions about needs and services, and management practices. In addition, a process evaluation might provide an understanding of the cultural, socio-political, legal, and economic context that affects implementation of the program or intervention.”

Example of a question asked: Are activities delivered as intended, and are the right participants being reached? (A coverage calculation illustrating this question follows below.)
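One way routine data review can answer the "right participants" question is a simple coverage calculation. The sketch below is a minimal, hypothetical illustration only: the district names, counts, and the 80% target are all invented for this example, not drawn from PEPFAR guidance.

```python
# Minimal sketch: checking reach of the intended population from
# hypothetical routine service records (all figures invented).

routine_data = {
    "District A": {"eligible": 1200, "served": 950},
    "District B": {"eligible": 800, "served": 310},
}

TARGET = 0.80  # assumed coverage target for illustration

for district, d in routine_data.items():
    coverage = d["served"] / d["eligible"]
    flag = "" if coverage >= TARGET else "  <- below 80% target"
    print(f"{district}: {coverage:.0%} of eligible households reached{flag}")
```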

Outcome evaluation
Purpose: Assess changes in program beneficiaries over time
Methods: Pre-/post-test using quantitative and/or qualitative methods (a pre/post comparison is sketched below)
Frequency: Non-routine (2+ points in time)
Data user: Program and USG
Timeline: 3-5 years
Cost: $300K+

From PEPFAR: “A type of evaluation that determines if, and by how much, intervention activities or services achieved their intended outcomes.” It focuses on “outputs and outcomes (including unintended effects) to judge program effectiveness, but may also assess program process to understand how outcomes are produced.” It is possible to use statistical techniques in some instances when control or comparison groups are not available (e.g., for the evaluation of a national program).
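A minimal sketch of the quantitative pre-/post-test idea, assuming a hypothetical beneficiary well-being score measured at baseline and endline (all numbers invented). Note that a pre/post change alone shows whether outcomes shifted, not that the program caused the shift; attribution is the job of impact evaluation.

```python
# Minimal pre-/post-test sketch with a paired t-test (hypothetical data).
from scipy import stats

# Well-being scores for the same 8 beneficiaries at baseline and endline.
pre = [52, 48, 60, 55, 47, 58, 50, 53]
post = [58, 50, 66, 60, 49, 64, 55, 57]

mean_change = sum(post) / len(post) - sum(pre) / len(pre)
t_stat, p_value = stats.ttest_rel(post, pre)

print(f"Mean change: {mean_change:.1f} points")
print(f"Paired t = {t_stat:.2f}, p = {p_value:.3f}")
```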

Impact evaluation
Purpose: Assess changes in program beneficiaries over time that are attributable to the program
Methods: Experimental or quasi-experimental design with a control/comparison group (a difference-in-differences sketch follows below)
Frequency: Non-routine (2+ points in time)
Data user: Stakeholders globally
Timeline: 3+ years
Cost: $500K to several million

From PEPFAR: Impact evaluations measure the change in an outcome that is attributable to a defined intervention by comparing actual impact to what would have happened in the absence of the intervention (the counterfactual scenario). IEs are based on models of cause and effect and require a rigorously defined counterfactual to control for factors other than the intervention that might account for the observed change. There is a range of accepted approaches to applying a counterfactual analysis, though IEs in which comparisons are made between beneficiaries that are randomly assigned to either an intervention or a control group provide the strongest evidence of a relationship between the intervention under study and the outcome measured to demonstrate impact.
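One common quasi-experimental approach to the counterfactual is difference-in-differences. The sketch below uses invented group means purely to show the arithmetic: the comparison group's change over time stands in for what would have happened without the program.

```python
# Difference-in-differences (DiD) sketch; all figures hypothetical.

# Mean outcome (e.g., % of children with a birth certificate) by group/time.
treat_pre, treat_post = 40.0, 65.0   # intervention districts
comp_pre, comp_post = 42.0, 50.0     # comparison districts

# DiD: change in treatment group minus change in comparison group.
# The comparison group's trend approximates the counterfactual.
did = (treat_post - treat_pre) - (comp_post - comp_pre)
print(f"Estimated program impact: {did:.1f} percentage points")
# -> (65 - 40) - (50 - 42) = 17.0 percentage points
```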

When is an IE a good idea?
- When you are testing a new intervention or replicating a tested intervention in a new context
- When stakeholders globally will benefit from knowing the answer to your questions

When is an IE not warranted?
- If you need information primarily to show accountability and transparency
- If your audience will not demand attribution to make changes (find out beforehand!)
- For methodological reasons, e.g., no suitable control group, the intervention has already been rolled out, etc.

Economic evaluation
Purpose: Identify, measure, value, and compare the costs and outcomes of alternative interventions
Methods: Cost-minimization, cost-effectiveness, cost-utility, or cost-benefit analysis (a cost-effectiveness sketch follows below)
Frequency: Depends
Data user: Program and USG (context specific)
Timeline: Depends on method
Cost: Depends on method

From PEPFAR: The use of applied analytical techniques to identify, measure, value, and compare the costs and outcomes of alternative interventions. Economic evaluation is a systematic and transparent framework for assessing efficiency, focusing on the economic costs and outcomes of alternative programs or interventions. This framework is based on a comparative analysis of both the costs (resources consumed) and outcomes (health, clinical, economic) of programs or interventions. The main types of economic evaluation are cost-minimization analysis (CMA), cost-effectiveness analysis (CEA), cost-utility analysis (CUA), and cost-benefit analysis (CBA), ranked in increasing immediate impact on decision making and decreasing concreteness of the constructs being measured.

Example of a question asked: What is the cost-effectiveness of this intervention in improving patient outcomes as compared to other treatment models?

A major issue is actually collecting the data on what it costs to run a program; there are no agreed guidelines or systems for doing this.
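For cost-effectiveness analysis specifically, the core quantity is the incremental cost-effectiveness ratio (ICER): extra cost per extra unit of outcome versus an alternative. The sketch below is illustrative only; the costs, the outcome measure, and the `icer` helper are all invented for this example.

```python
# Minimal cost-effectiveness sketch: incremental cost-effectiveness
# ratio (ICER) of a new intervention vs. standard of care
# (hypothetical helper and figures).

def icer(cost_new, cost_std, effect_new, effect_std):
    """Incremental cost per additional unit of effect."""
    return (cost_new - cost_std) / (effect_new - effect_std)

# Hypothetical totals: program cost (USD) and outcomes achieved
# (e.g., children retained in school).
ratio = icer(cost_new=120_000, cost_std=80_000,
             effect_new=500, effect_std=300)
print(f"ICER: ${ratio:,.0f} per additional child retained")
# -> $40,000 / 200 = $200 per additional child retained
```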

Our message to you
Do what is needed, and nothing more. If you cannot clearly articulate what you will do with the data (and the added value of conducting a more complex evaluation), you don't need it!

Mapping information needs to evaluation types (30 mins)
Map the questions on the handout to evaluation / research types. Consider:
- Who wants to know?
- What will you (they) use the data for?
- Has the program started?
- How will beneficiaries be selected?
- When do you want results?
- What is your budget?
[Facilitator note: Introduce the group work.]

What did you learn? Elicit feedback from participants

The research presented here has been supported by the President’s Emergency Plan for AIDS Relief (PEPFAR) through the United States Agency for International Development (USAID) under the terms of MEASURE Evaluation cooperative agreement AID-OAA-L-14-00004. Views expressed are not necessarily those of PEPFAR, USAID or the United States government. MEASURE Evaluation is implemented by the Carolina Population Center at the University of North Carolina at Chapel Hill in partnership with Futures Group, ICF International, John Snow, Inc., Management Sciences for Health, and Tulane University.