Presented by Jim Rugh to NONIE Conference in Paris 28 March 2011.






5 An introduction to various evaluation designs: illustrating the need for a quasi-experimental longitudinal time-series evaluation design. [Chart: scale of a major impact indicator over time, for project participants vs. a comparison group, marked at baseline, end-of-project evaluation, and post-project evaluation.]

6 … one at a time, beginning with the most rigorous design.

7 Notation used in the design diagrams:
X = Intervention (treatment), i.e. what the project does in a community
O = Observation event (e.g. baseline, mid-term evaluation, end-of-project evaluation)
P (top row) = Project participants
C (bottom row) = Comparison (control) group

8 Design #1: Longitudinal quasi-experimental
P1 X P2 X P3 P4 (project participants)
C1   C2   C3 C4 (comparison group)
Observations at baseline, midterm, end-of-project evaluation, and post-project evaluation.

9 Design #2: Quasi-experimental (pre+post, with comparison)
P1 X P2 (project participants)
C1   C2 (comparison group)
Observations at baseline and end-of-project evaluation.
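With Design #2, the project's effect is commonly estimated as the change among participants minus the change in the comparison group (a difference-in-differences). A minimal sketch in Python, using hypothetical indicator values that are not from the slides:

```python
def diff_in_diff(p1, p2, c1, c2):
    """Effect estimate for a pre+post design with a comparison group:
    change among project participants minus change in the comparison group."""
    return (p2 - p1) - (c2 - c1)

# Hypothetical values of the major impact indicator:
# participants move 40 -> 55, the comparison group 40 -> 45.
effect = diff_in_diff(p1=40, p2=55, c1=40, c2=45)
print(effect)  # 10
```

Subtracting the comparison group's change is what nets out whatever would have happened anyway; without the C1/C2 observations the 15-point change among participants could not be separated from background trends.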

10 Design #2+: Typical Randomized Control Trial
P1 X P2 (project participants)
C1   C2 (control group)
Observations at baseline and end-of-project evaluation. Research subjects are randomly assigned either to the project group or to the control group.
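The defining feature of Design #2+ is the random allocation step itself: chance, not a sampling procedure, decides who receives the treatment. A sketch assuming a simple 50/50 split (the function name, seed, and split are illustrative assumptions, not from the slide):

```python
import random

def randomize(subjects, seed=0):
    """Randomly assign subjects to the project group or the control group,
    so the two groups differ only by chance before the intervention."""
    rng = random.Random(seed)
    pool = list(subjects)
    rng.shuffle(pool)
    half = len(pool) // 2
    return pool[:half], pool[half:]  # (project group, control group)

project, control = randomize(range(100))
print(len(project), len(control))  # 50 50
```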

11 Design #3: Truncated QED
X P1 X P2 (project participants)
   C1   C2 (comparison group)
Observations at midterm and end-of-project evaluation (no baseline).

12 Design #4: Pre+post of project; post-only comparison
P1 X P2 (project participants)
        C (comparison group)
Project participants observed at baseline and end-of-project evaluation; the comparison group only at end-of-project.

13 Design #5: Post-test only of project and comparison
X P (project participants)
   C (comparison group)
Both groups observed only at the end-of-project evaluation.

14 Design #6: Pre+post of project; no comparison
P1 X P2 (project participants)
Observations at baseline and end-of-project evaluation.

15 Design #7: Post-test only of project participants
X P (project participants)
Observation only at the end-of-project evaluation.

16 Summary of the seven designs (P = project group, C = comparison/control group):

Design  T1 (baseline)  X (intervention)  T2 (midterm)  X (intervention, cont.)  T3 (endline)  T4 (ex-post)
1       P1 C1          X                 P2 C2         X                        P3 C3         P4 C4
2       P1 C1          X                               X                        P2 C2
3                      X                 P1 C1         X                        P2 C2
4       P1             X                               X                        P2 C2
5                      X                               X                        P1 C1
6       P1             X                               X                        P2
7                      X                               X                        P1

Note: These 7 evaluation designs are described in the RealWorld Evaluation book.

17 What kinds of evaluation designs are actually used in the real world (of international development)? Findings from meta-evaluations of 336 evaluation reports of an INGO:
Post-test only: 59%
Before-and-after: 25%
With-and-without: 15%
Other counterfactual: 1%

18 Even proponents of RCTs have acknowledged that RCTs are only appropriate for perhaps 5% of development interventions. An empirical study by Forss and Bandstein, examining evaluations in the OECD/DAC DEReC database by bilateral and multilateral organisations, found that only 5% used even a counterfactual design. While we recognize that experimental and quasi-experimental designs have a place in the toolkit for impact evaluations, we think that more attention needs to be paid to the roughly 95% of situations where these designs would not be possible or appropriate.


20 One form of Program Theory (Logic) Model: Design → Inputs → Implementation Process → Outputs → Outcomes → Impacts → Sustainability. This results chain operates within: the economic context in which the project operates; the political context in which the project operates; the institutional and operational context; and the socio-economic and cultural characteristics of the affected populations. Note: The orange boxes (the results chain) are included in conventional program theory models; the addition of the blue boxes (the contexts) provides the recommended, more complete analysis.


22 A problem tree: the central PROBLEM (with its consequences above it) is explained by primary causes 1, 2, and 3; each primary cause by secondary causes (e.g. 2.1, 2.2, 2.3); and each secondary cause by tertiary causes (e.g. 2.2.1, 2.2.2, 2.2.3).

23 The corresponding objectives tree: the DESIRED IMPACT (with its consequences) is achieved through outcomes 1, 2, and 3; each outcome through outputs (e.g. 2.1, 2.2, 2.3); and each output through interventions (e.g. 2.2.1, 2.2.2, 2.2.3).

24 Example problem tree: Children are malnourished (consequence: high infant mortality rate). Causes include diarrheal disease, insufficient food, and poor quality of food; behind these lie unsanitary practices (people do not wash hands before eating, do not use facilities correctly), contaminated water, flies and rodents, and the need for improved health policies.

25 Example objectives tree: Women empowered (consequence: reduction in poverty), through young women educated, women in leadership roles, and economic opportunities for women; supported by increased female enrollment rates, improved curriculum, improved educational policies, parents persuaded to send girls to school, schools built, and a school system that hires and pays teachers.

26 Program goal at impact level: young women educated. Contributing project goals: an Advocacy Project to get improved educational policies enacted (an ASSUMPTION that others will do this); a Construction Project to build more classrooms (a PARTNER will do this); and a Teacher Education Project to improve the quality of the curriculum (OUR project). To have synergy and achieve impact, all of these need to address the same target population.

27 We need to recognize which evaluative process is most appropriate for measurement at each level of the results chain (inputs, activities, outputs, outcomes, impact): performance monitoring covers the lower levels (inputs, activities, outputs), project evaluation the middle levels (outputs and outcomes), and program evaluation the higher levels (outcomes and impact).

28 The Rosetta Stone of Logical Frameworks: how different agencies label the levels of the results chain.

Generic: Ultimate Impact | End Outcomes | Intermediate Outcomes | Outputs | Interventions
Needs-based: Higher Consequence | Specific Problem | Cause | Solution | Process | Inputs
American Red Cross: Program Goal | Project Impact | Outcomes | Outputs | Activities | Inputs
AusAID: Scheme Goal | Major Development Objectives | Outputs | Activities | Inputs
CARE logframe: Program Goal | Project Final Goal | Intermediate Objectives | Outputs | Activities | Inputs
CARE terminology: Program Impact | Project Impact | Effects | Outputs | Activities | Inputs
CIDA + GTZ: Overall Goal | Project Purpose | Results/Outputs | Activities | Inputs
CRS Proframe: Goal | Strategic Objective | Intermediate Results | Outputs | Activities | Inputs
DANIDA + DfID: Goal | Purpose | Outputs | Activities
EIDHR: Overall Objectives | Specific Objective | Expected Results | Activities
European Union: Overall Objective | Project Purpose | Results | Activities
FAO + UNDP + NORAD: Development Objective | Immediate Objectives | Outputs | Activities | Inputs
PC/LogFrame: Goal | Purpose | Outputs | Activities
Peace Corps: Purpose | Goals | Results | Objectives | Activities | Volunteers
SAVE Results Framework: Goal | Strategic Objective | Intermediate Results | Outputs | Activities | Inputs
UNHCR: Sector Objective | Goal | Project Objective | Outputs | Activities | Inputs/Resources
USAID LogFrame: Final Goal | Strategic Objective | Intermediate Results | Activities | Inputs
USAID Results Framework: Goal | Strategic Objective | Intermediate Results | (Outputs) | (Activities) | (Inputs)
World Bank: Long-term Objectives | Short-term Objectives | Outputs | Inputs
World Vision International: Program Goal | Project Goal | Outcomes | Outputs | Activities | (Inputs)


30 How do we know whether the observed changes in project participants or communities (income, health, attitudes, school attendance, etc.) are due to the implementation of the project (credit, water supply, transport vouchers, school construction, etc.) or to other, unrelated factors (changes in the economy, demographic movements, other development programs, etc.)?

31 What change would have occurred in the relevant condition of the target population if there had been no intervention by this project?

32 Control group = randomized allocation of subjects to the project and non-treatment groups.
Comparison group = a separate procedure for sampling project and non-treatment groups that are as similar as possible in all respects except the treatment (intervention).
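The distinction between the two terms can be made concrete in code. A sketch, assuming subjects are represented by a single observable characteristic such as baseline income (all names and numbers here are hypothetical illustrations, not from the slides):

```python
import random

def control_group(subjects, seed=0):
    """CONTROL group: randomized allocation of subjects between the
    project (treatment) group and the non-treatment group."""
    rng = random.Random(seed)
    pool = list(subjects)
    rng.shuffle(pool)
    mid = len(pool) // 2
    return pool[:mid], pool[mid:]  # (project group, control group)

def comparison_group(participants, candidates, key):
    """COMPARISON group: a separate sampling procedure that picks, for each
    participant, the most similar non-participant on an observed characteristic."""
    matched, remaining = [], list(candidates)
    for p in participants:
        best = min(remaining, key=lambda c: abs(key(c) - key(p)))
        remaining.remove(best)
        matched.append(best)
    return matched

# Participants' baseline incomes vs. a pool of non-participants:
print(comparison_group([10, 20, 30], [12, 19, 28, 50], key=lambda x: x))
# [12, 19, 28]
```

The matching function captures the "as similar as possible" idea on one observed variable only; real comparison-group construction must also worry about unobserved differences, which is exactly what randomization avoids.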

33 "J-PAL is best understood as a network of affiliated researchers … united by their use of the randomized trial methodology…" [Timeline of J-PAL's growth, 2003–2010.]

34 So, are Randomized Control Trials (RCTs) the Gold Standard, and should they be used in most if not all program impact evaluations? Yes or no? Why or why not? If so, under what circumstances should they be used? If not, under what circumstances would they not be appropriate?

35 (Adapted from Patricia Rogers, RMIT University.) The assumptions behind the simple "what works" view of evidence:
Question needed for evidence-based policy: What works?
What interventions look like: Discrete, standardized intervention
How interventions work: Pretty much the same everywhere
Process needed for evidence uptake: Knowledge transfer

36 Situations that challenge such designs:
- Complicated, complex programs where there are multiple interventions by multiple actors
- Projects working in evolving contexts (e.g. countries in transition, conflicts, natural disasters)
- Projects with multiple layered logic models, or unclear cause-effect relationships between outputs and higher-level vision statements (as is often the case in the real world of international development projects)

37 Alternative ways to address the counterfactual:
- Reliable secondary data that depicts relevant trends in the population
- Longitudinal monitoring data (if it includes the non-reached population)
- Qualitative methods to obtain the perspectives of key informants, participants, neighbors, etc.

38 A conventional statistical counterfactual (with random selection into treatment and control groups) is often not possible or appropriate:
- When conducting the evaluation of complex interventions
- When the project involves a number of interventions which may be used in different combinations in different locations
- When each project location is affected by a different set of contextual factors
- When it is not possible to use standard implementation procedures for all project locations
- When many outcomes involve complex behavioral changes
- When many outcomes are multidimensional or difficult to measure through standardized quantitative indicators

39 A: Theory-based approaches
1. Program theory / logic models
2. Realistic evaluation
3. Process tracing
4. Venn diagrams and many other PRA methods
5. Historical methods
6. Forensic detective work
7. Compilation of a list of plausible alternative causes
8. … (for more details see …)

40 B: Quantitatively oriented approaches
1. Pipeline design
2. Natural variations
3. Creative uses of secondary data
4. Creative creation of comparison groups
5. Comparison with other programs
6. Comparing different types of interventions
7. Cohort analysis
8. … (for more details see …)

41 C: Qualitatively oriented approaches
1. Concept mapping
2. Creative use of secondary data
3. Many PRA techniques
4. Process tracing
5. Compiling a book of possible causes
6. Comparisons between different projects
7. Comparisons among project locations with different combinations and levels of treatment
(for more details see …)


43 Simple vs. complicated vs. complex:
Simple: following a recipe. Recipes are tested to assure easy replication; the best recipes give good results every time.
Complicated: sending a rocket to the moon. Sending one rocket to the moon increases assurance that the next will also be a success; there is a high degree of certainty of outcome.
Complex: raising a child. Raising one child provides experience but is no guarantee of success with the next; uncertainty of outcome remains.
Sources: Westley et al. (2006) and Stacey (2007), cited in Patton 2008; also presented by Patricia Rogers at the Cairo impact conference, 2009.

44 What's a conscientious evaluator to do when facing such a complex world?

45 Returning to the objectives tree (desired impact, outcomes 1-3, outputs, interventions, and consequences): a simple RCT tests only a single intervention-to-outcome link, whereas a more comprehensive design addresses the full model.

46 Expanding the results chain for a multi-donor, multi-component program (donor, government, and other donors all contribute inputs):
Outputs: credit for small farmers, rural roads, schools, health services
Intermediate outcomes: increased rural household income, increased production, increased school enrolment, increased use of health services, access to off-farm employment
Impacts: improved education performance, improved health, increased political participation
Attribution gets very difficult! Consider the plausible contributions each makes.


48 OECD-DAC (2002: 24) defines impact as "the positive and negative, primary and secondary long-term effects produced by a development intervention, directly or indirectly, intended or unintended. These effects can be economic, sociocultural, institutional, environmental, technological or of other types." Is it limited to direct attribution? Or does it point to the need for counterfactuals or Randomized Control Trials (RCTs)?

49 What should impact evaluation measure?
1. A direct cause-effect relationship between one output (or a very limited number of outputs) and an outcome that can be measured by the end of the research project? Pretty clear attribution. … OR …
2. Changes in higher-level indicators of sustainable improvement in the quality of life of people, e.g. the MDGs (Millennium Development Goals)? More significant, but assessing plausible contribution is more feasible than assessing unique direct attribution.

50 Rigorous impact evaluation should include (but is not limited to):
1) thorough consultation with and involvement by a variety of stakeholders,
2) articulating a comprehensive logic model that includes relevant external influences,
3) getting agreement on desirable impact-level goals and indicators,
4) adapting evaluation design as well as data collection and analysis methodologies to respond to the questions being asked, …

51 Rigorous impact evaluation should include (continued):
5) adequately monitoring and documenting the process throughout the life of the program being evaluated,
6) using an appropriate combination of methods to triangulate evidence being collected,
7) being sufficiently flexible to account for evolving contexts, …

52 Rigorous impact evaluation should include (continued):
8) using a variety of ways to determine the counterfactual,
9) estimating the potential sustainability of whatever changes have been observed,
10) communicating the findings to different audiences in useful ways,
11) etc. …

53 The point is that the list of what's required for rigorous impact evaluation goes way beyond initial randomization into treatment and control groups.

54 To attempt to conduct an impact evaluation of a program using only one pre-determined tool is to suffer from myopia, which is unfortunate. On the other hand, to prescribe to donors and senior managers of major agencies that there is a single preferred design and method for conducting all impact evaluations can have, and has had, unfortunate consequences for all of those involved in the design, implementation and evaluation of international development programs.

55 We must be careful that in using the "Gold Standard" we do not violate the Golden Rule: "Judge not, that you be not judged!" In other words: evaluate others as you would have them evaluate you.

56 Caution: too often what is called Impact Evaluation is based on a "we will examine and judge you" paradigm. When we want our own programs evaluated, we prefer a more holistic approach.

57 To use the language of the OECD/DAC, let's be sure our evaluations are consistent with these criteria:
RELEVANCE: The extent to which the aid activity is suited to the priorities and policies of the target group, recipient and donor.
EFFECTIVENESS: The extent to which an aid activity attains its objectives.
EFFICIENCY: Measures the outputs (qualitative and quantitative) in relation to the inputs.
IMPACT: The positive and negative changes produced by a development intervention, directly or indirectly, intended or unintended.
SUSTAINABILITY: Whether the benefits of an activity are likely to continue after donor funding has been withdrawn. Projects need to be environmentally as well as financially sustainable.

58 The bottom line is defined by this question: are our programs making plausible contributions towards positive impact on the quality of life of our intended beneficiaries? Let's not forget them!

59 Thank you!
