1
Right-sized Evaluation
Lisa Parker, PhD
February 2, 2015
2
Learning outcomes
- Improved ability to articulate information needs and to identify an appropriate method for information gathering
- Understanding of the types of evaluation and the ability to select among them appropriately
- Familiarity with the OVC survey toolkit as a resource for evaluation
3
Your experience with evaluation (10 mins)
Think individually about an evaluation that was recently conducted in your country:
- Why did you conduct it / why was it conducted?
- How did you use the data?
- Did the type of decisions made (based on the data) justify the cost and time it took to get the data?
If time permits, briefly discuss at your tables.
4
Where are we in the Framework?
5
Definitions
- The systematic collection and analysis of information about the characteristics, outcomes, and impact of programs and projects (PEPFAR, 2014; USAID, 2011)
- The systematic investigation of the merit (quality), worth (value), or significance of an object (Scriven, 1999, cited by CDC)
6
Evaluation Policies
7
Why conduct an evaluation?
- To determine the effectiveness and efficiency of a program or intervention
- To ensure accountability and transparency
- To support program / intervention scale-up
8
Types of evaluation (PEPFAR)
- Process
- Outcome
- Impact
- Economic
9
Process evaluation
Purpose: Determine how the program is implemented, how it is valued, and why results are or are not occurring
Methods: Document and routine data review, key informant interviews
Frequency: Usually once only
Data user: USG and programs
Timeline: 6 weeks to 6 months (or more)
Cost: $25K to several hundred thousand dollars
From PEPFAR: “A type of evaluation that focuses on program or intervention implementation, including, but not limited to, access to services, whether services reach the intended population, how services are delivered, client satisfaction and perceptions about needs and services, and management practices. In addition, a process evaluation might provide an understanding of the cultural, socio-political, legal, and economic context that affects implementation of the program or intervention.”
Example question: Are activities delivered as intended, and are the right participants being reached?
10
Outcome evaluation
Purpose: Assess changes in program beneficiaries over time
Methods: Pre-/post-test using quantitative and/or qualitative methods
Frequency: Non-routine (2+ points in time)
Data user: Program and USG
Timeline: 3-5 years
Cost: $300K+
From PEPFAR: An outcome evaluation is “a type of evaluation that determines if, and by how much, intervention activities or services achieved their intended outcomes.” It focuses on “outputs and outcomes (including unintended effects) to judge program effectiveness, but may also assess program process to understand how outcomes are produced.” It is possible to use statistical techniques in some instances when control or comparison groups are not available (e.g., for the evaluation of a national program).
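To make the pre-/post-test method concrete, here is a minimal sketch of a paired baseline/endline comparison; the beneficiaries, scores, and sample size are all invented for illustration:

```python
# Illustrative sketch only: a minimal pre-/post-test comparison of the kind
# an outcome evaluation might use. All data below are invented.
import numpy as np
from scipy import stats

# Hypothetical outcome scores for the same 8 beneficiaries at baseline and endline
pre = np.array([52, 61, 48, 55, 60, 47, 58, 50], dtype=float)
post = np.array([58, 66, 55, 54, 68, 53, 61, 57], dtype=float)

change = post - pre
t_stat, p_value = stats.ttest_rel(post, pre)  # paired t-test: same individuals measured twice

print(f"Mean change: {change.mean():.1f} points")
print(f"Paired t-test: t = {t_stat:.2f}, p = {p_value:.3f}")
# Note: without a comparison group this shows change over time,
# not change attributable to the program (that is the impact evaluation's job).
```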
11
Impact evaluation
Purpose: Assess changes in program beneficiaries over time that are attributable to the program
Methods: Experimental or quasi-experimental design with a control/comparison group
Frequency: Non-routine (2+ points in time)
Data user: Stakeholders globally
Timeline: 3+ years
Cost: $500K to several million
From PEPFAR: Impact evaluations measure the change in an outcome that is attributable to a defined intervention by comparing actual impact to what would have happened in the absence of the intervention (the counterfactual scenario). IEs are based on models of cause and effect and require a rigorously defined counterfactual to control for factors other than the intervention that might account for the observed change. There is a range of accepted approaches to applying a counterfactual analysis, though IEs in which comparisons are made between beneficiaries randomly assigned to either an intervention or a control group provide the strongest evidence of a relationship between the intervention under study and the outcome measured to demonstrate impact.
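To make the counterfactual logic concrete, here is a toy difference-in-differences calculation, one common quasi-experimental approach to estimating a counterfactual; every figure is invented for illustration:

```python
# Illustrative sketch only: a difference-in-differences impact estimate.
# All numbers invented; outcome is, e.g., % of children receiving a basic service.
treated_before, treated_after = 40.0, 55.0  # program districts
control_before, control_after = 42.0, 48.0  # comparison districts

# The control group's change stands in for what would have happened
# to the treated group without the program (the counterfactual).
counterfactual_change = control_after - control_before    # +6.0
observed_change = treated_after - treated_before          # +15.0
impact_estimate = observed_change - counterfactual_change # +9.0 points

print(f"Observed change in program areas: {observed_change:+.1f}")
print(f"Counterfactual change (controls): {counterfactual_change:+.1f}")
print(f"Estimated program impact:         {impact_estimate:+.1f} points")
```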
12
When is an IE a good idea?
- When you are testing a new intervention or replicating a tested intervention in a new context
- When stakeholders globally will benefit from knowing the answer to your questions
13
When is an IE not warranted?
- If you need information primarily to show accountability and transparency
- If your audience will not demand attribution to make changes (find out beforehand!)
- For methodological reasons, e.g., no suitable control group, the intervention has already rolled out, etc.
14
Economic evaluation
Purpose: Identify, measure, value, and compare the costs and outcomes of alternative interventions
Methods: Cost-minimization, cost-effectiveness, cost-utility, and cost-benefit analysis
Frequency: Depends on the method
Data user: Program and USG (context specific)
Timeline: Depends on the method
Cost: Depends on the method
From PEPFAR: Economic evaluation is the use of applied analytical techniques to identify, measure, value, and compare the costs and outcomes of alternative interventions. It is a systematic and transparent framework for assessing efficiency, focusing on the economic costs and outcomes of alternative programs or interventions. This framework is based on a comparative analysis of both the costs (resources consumed) and outcomes (health, clinical, economic) of programs or interventions. The main types of economic evaluation are cost-minimization analysis (CMA), cost-effectiveness analysis (CEA), cost-utility analysis (CUA), and cost-benefit analysis (CBA), ranked in increasing immediate impact on decision making and decreasing concreteness of the constructs being measured.
Example question: What is the cost-effectiveness of this intervention in improving patient outcomes as compared to other treatment models?
A major practical issue is collecting data on what it actually costs to run a program; there are no agreed guidelines or systems for doing this.
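As a worked example of the core cost-effectiveness arithmetic, the sketch below computes an incremental cost-effectiveness ratio (ICER) for a hypothetical new intervention compared with standard care; all figures are invented:

```python
# Illustrative sketch only: an incremental cost-effectiveness ratio (ICER),
# the basic arithmetic behind a cost-effectiveness analysis. Figures invented.
cost_new, cost_standard = 120_000.0, 80_000.0  # total program costs (USD)
effect_new, effect_standard = 400.0, 250.0     # e.g., patients retained in care

incremental_cost = cost_new - cost_standard        # 40,000 USD more
incremental_effect = effect_new - effect_standard  # 150 additional patients
icer = incremental_cost / incremental_effect       # cost per additional outcome

print(f"ICER: ${icer:,.0f} per additional patient retained")  # about $267
```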
15
Do what is needed, and nothing more
Our message to you: Do what is needed, and nothing more. If you cannot clearly articulate what you will do with the data (and the added value of conducting a more complex evaluation), you don’t need it!
16
Mapping information needs to evaluation types (30 mins)
Map the questions on the handout to evaluation / research types. Consider:
- Who wants to know?
- What will you (they) use the data for?
- Has the program started?
- How will beneficiaries be selected?
- When do you want results?
- What is your budget?
(Facilitator note: Introduce the group work.)
17
What did you learn? (Facilitator note: Elicit feedback from participants.)
18
The research presented here has been supported by the President’s Emergency Plan for AIDS Relief (PEPFAR) through the United States Agency for International Development (USAID) under the terms of MEASURE Evaluation cooperative agreement AID-OAA-L. Views expressed are not necessarily those of PEPFAR, USAID, or the United States government. MEASURE Evaluation is implemented by the Carolina Population Center at the University of North Carolina at Chapel Hill in partnership with Futures Group, ICF International, John Snow, Inc., Management Sciences for Health, and Tulane University.