1 The RealWorld Evaluation Approach to Impact Evaluation
With reference to the chapter in the book Country-Led Monitoring and Evaluation Systems
Michael Bamberger and Jim Rugh
Note: more information is available at:
2 The extensive use of weak impact evaluation designs
– Most impact evaluations are not able to use the textbook design of pre-test/post-test comparisons between a project group and a control group
– Most assessments of impact are based on methodologically weak designs
– Many claims about project impacts are not justified, and there tends to be a positive bias in many evaluation reports
– Very few evaluation reports assess the validity of their own methodology and findings
3 Weak evaluation designs are due to:
– Time constraints
– Budget constraints
– Data constraints: non-availability (including lack of baseline data) and poor quality
– Political constraints: lack of an evaluation culture (lack of understanding of the value of evaluation, unwillingness to accept criticism, lack of expertise), and use of information as a political tool
4 The Real-World Evaluation Approach
Step 1: Planning and scoping the evaluation
A. Defining client information needs and understanding the political context
B. Defining the program theory model
C. Identifying time, budget, data and political constraints to be addressed by the RWE
D. Selecting the design that best addresses client needs within the RWE constraints
E. Assessing methodological quality and validity and defining minimum acceptable design standards
Step 2: Addressing budget constraints
A. Modify the evaluation design
B. Rationalize data needs
C. Look for reliable secondary data
D. Revise the sample design
E. Use economical data collection methods
Step 3: Addressing time constraints
All Step 2 tools, plus:
F. Commissioning preparatory studies
G. Hiring more resource persons
H. Revising the format of project records to include critical data for impact analysis
I. Using modern data collection and analysis technology
Step 4: Addressing data constraints
A. Reconstructing baseline data
B. Recreating comparison groups
C. Working with non-equivalent comparison groups
D. Collecting data on sensitive topics or from difficult-to-reach groups
E. Using multiple methods
Step 5: Addressing political influences
A. Accommodating pressures from funding agencies or clients on the evaluation design
B. Addressing stakeholder methodological preferences
C. Recognizing the influence of professional research paradigms
Step 6: Assessing and addressing the strengths and weaknesses of the evaluation design (an integrated checklist for multi-method designs)
A. Objectivity/confirmability
B. Replicability/dependability
C. Internal validity/credibility/authenticity
D. External validity/transferability/fittingness
Step 7: Helping clients use the evaluation
A. Utilization
B. Application
C. Orientation
D. Action
5 How RWE contributes to country-led monitoring and evaluation
Increasing the uptake of evidence into policy making by:
– Involving stakeholders in the design, implementation, analysis and dissemination of the evaluation
– Using program theory to base the evaluation on stakeholders' understanding of the program and its objectives, and to ensure the evaluation focuses on key issues
– Presenting findings when they are needed, using the client's preferred communication style
6 The quality challenge: matching technical rigor and policy relevance
– Adapting the evaluation design to the level of rigor required by decision makers
– Using the threats-to-validity checklist at several points in the evaluation cycle
– Defining minimum acceptable levels of methodological rigor
– Avoiding positive bias in the evaluation design and the presentation of findings, including how to present negative findings
7 Adapting country-led evaluation to real-world constraints
– Adapting the system to real-world budget, time and data constraints
– Ensuring evaluations produce useful and actionable information
– Adapting the M&E system to national political, administrative and evaluation cultures
– Focusing on the institutionalization of M&E systems, not just ad hoc evaluations
– Developing evaluation capacity
– Focusing on quality assurance
8 The RealWorld Evaluation Approach
An integrated approach to ensuring acceptable standards of methodological rigor while operating under real-world budget, time, data and political constraints.
See the summary chapter and workshop presentations at for more details
9 Reality check: real-world challenges to evaluation
– All too often, project designers do not think evaluatively; the evaluation is not designed until the end
– There was no baseline, or at least not one with data comparable to the evaluation
– There was, or can be, no control or comparison group
– Time and resources for the evaluation are limited
– Clients have prior expectations about what the evaluation findings will say
– Many stakeholders do not understand evaluation, distrust the process, or even see it as a threat (a dislike of being judged)
10 Determining an appropriate (and feasible) evaluation design
Based on an understanding of client information needs, the required level of rigor, and what is possible given the constraints, the evaluator and client need to determine which evaluation design is both required and feasible under the circumstances.
11 Design #1: Longitudinal quasi-experimental
Project participants: P1 (baseline) → X → P2 (midterm) → X → P3 (end of project) → P4 (post-project evaluation)
Comparison group: C1 → C2 → C3 → C4 (observed at the same four points)
(P = project-group observation, C = comparison-group observation, X = intervention)
12 Design #2: Quasi-experimental (pre + post, with comparison)
Project participants: P1 (baseline) → X → P2 (end of project evaluation)
Comparison group: C1 → C2
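In this notation, the impact estimate for a pre + post design with a comparison group is the double difference (P2 − P1) − (C2 − C1), which nets out the change the comparison group experienced anyway. A minimal sketch with fabricated illustrative numbers (the function name and values are not from the source):

```python
# Double-difference (difference-in-differences) impact estimate for a
# pre+post design with a comparison group (Design #2).
# p1, p2: project-group means at baseline and end of project.
# c1, c2: comparison-group means at the same two points.
def double_difference(p1, p2, c1, c2):
    change_project = p2 - p1         # gross change among participants
    change_comparison = c2 - c1      # change that would have happened anyway
    return change_project - change_comparison  # net impact estimate

# Illustrative (made-up) household-income means:
impact = double_difference(p1=100.0, p2=130.0, c1=100.0, c2=112.0)
print(impact)  # 18.0: a 30-point gross gain minus 12 points of secular change
```

The same arithmetic shows why Design #6 (no comparison group) is weaker: it can only report the gross change P2 − P1, with no way to subtract the secular trend.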
13 Design #2+: Randomized control trial
Project participants: P1 (baseline) → X → P2 (end of project evaluation)
Control group: C1 → C2
Research subjects are randomly assigned to either the project group or the control group.
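The defining feature of Design #2+ is that assignment to project or control is random, so the two groups are statistically equivalent at baseline in expectation. A sketch of how such an assignment might be generated (hypothetical subject IDs; the seed is only there to make the split reproducible):

```python
import random

def randomize(subjects, seed=42):
    """Randomly split subjects into equal-sized project and control groups."""
    rng = random.Random(seed)   # fixed seed so the assignment can be audited
    pool = list(subjects)
    rng.shuffle(pool)
    half = len(pool) // 2
    return pool[:half], pool[half:]  # (project group, control group)

project, control = randomize(range(100))
print(len(project), len(control))  # 50 50
```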
14 Design #3: Truncated longitudinal
Project participants: X → P1 (midterm) → X → P2 (end of project evaluation)
Comparison group: C1 → C2
15 Design #4: Pre + post of project; post-only comparison
Project participants: P1 (baseline) → X → P2 (end of project evaluation)
Comparison group: C (end of project only)
16 Design #5: Post-test only of project and comparison
Project participants: X → P (end of project evaluation)
Comparison group: C (end of project)
17 Design #6: Pre + post of project; no comparison
Project participants: P1 (baseline) → X → P2 (end of project evaluation)
18 Design #7: Post-test only of project participants
Project participants: X → P (end of project evaluation)
19 Other questions to answer as you plan an impact evaluation:
1. What are the key questions to be answered? For whom? What evidence will adequately inform them?
2. Will there be a next phase, or other projects designed based on the findings of this evaluation?
3. Is this a simple, complicated or complex situation (see next slide)?
20 As presented at the Cairo Impact Evaluation conference by Patricia Rogers, RMIT University
Implications for understanding impact and using impact evaluation:
– Question answered. Simple: What works? Complicated: What works for whom, in what contexts? Complex: How do multiple interventions combine to produce the impact? What's working?
– Process needed. Simple: knowledge transfer. Complicated: knowledge translation. Complex: knowledge generation.
– Nature of direction. Simple: a single way to do it. Complicated: contingent. Complex: dynamic and emergent.
– Metaphor for direction. Simple: written directions. Complicated: map and timetable. Complex: compass.
21 Other questions to answer as you plan an impact evaluation:
1. Will focusing on one quantifiable indicator adequately represent impact?
2. Is it feasible to expect a clear, linear cause-effect chain attributable to one unique intervention? Or will we have to account for plausible contributions by multiple agencies and actors to higher-level impact?
3. Would one data collection method suffice, or should a combination of methods be used?
22 Ways to reconstruct baseline conditions
A. Secondary data
B. Project records
C. Recall
D. Key informants
E. PRA and other participatory techniques, such as timelines and critical incidents, to help establish the chronology of important changes in the community
23 Assessing the utility of potential secondary data
– Reference period
– Population coverage
– Inclusion of required indicators
– Completeness
– Accuracy
– Freedom from bias
24 Ways to reconstruct comparison groups
– Judgmental matching of comparison communities
– Where project services are introduced in phases, beneficiaries entering in later phases can be used as a pipeline control group
– Internal controls, where different subjects receive different combinations and levels of services
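Judgmental matching can be made more systematic by pairing each project community with the non-project community closest to it on a few observable characteristics. A sketch using a crude Euclidean distance over two hypothetical indicators (population and poverty rate); the data, indicators and distance metric are illustrative assumptions, not the RWE procedure itself:

```python
import math

# Each community: (name, population in thousands, poverty rate)
project_communities = [("A", 12.0, 0.40), ("B", 30.0, 0.25)]
candidate_comparisons = [("X", 11.0, 0.42), ("Y", 29.0, 0.27), ("Z", 80.0, 0.10)]

def distance(a, b):
    # Crude Euclidean distance over the two indicators; a real matching
    # exercise would standardize the indicators and use more of them.
    return math.sqrt((a[1] - b[1]) ** 2 + (a[2] - b[2]) ** 2)

# For each project community, pick the nearest candidate comparison community.
matches = {
    name: min(candidate_comparisons, key=lambda c: distance((name, pop, pov), c))[0]
    for name, pop, pov in project_communities
}
print(matches)  # {'A': 'X', 'B': 'Y'}
```

Even with a systematic distance, such matching only balances the characteristics that were measured, which is why the non-equivalence of reconstructed comparison groups must be flagged as a threat to validity.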
25 The importance of validity
Evaluations provide recommendations for future decisions and action. If the findings and their interpretation are not valid:
– Programs which do not work may continue or even be expanded
– Good programs may be discontinued
– Priority target groups may not gain access or benefit
26 RWE quality control goals
– The evaluator must achieve the greatest possible methodological rigor within the limitations of a given context
– Standards must be appropriate for different types of evaluation
– The evaluator must identify and control for methodological weaknesses in the evaluation design
– The evaluation report must identify methodological weaknesses and how these affect generalization to broader populations
27 Main RWE messages
1. Evaluators must be prepared for real-world evaluation challenges
2. There is considerable experience to draw on
3. A toolkit of rapid and economical RealWorld evaluation techniques is available (see )
4. Never use time and budget constraints as an excuse for sloppy evaluation methodology
5. A threats-to-validity checklist helps keep you honest by identifying potential weaknesses in your evaluation design and analysis