1 Monitoring? Evaluation? Impact Evaluation? Appreciating and Taking Advantage of the Differences
Workshop at the Cairo conference on Impact Evaluation, 29 March 2009
Burt Perrin, La Masque, Vissec, FRANCE
2 Alternative title: Putting the “and” back in MandE
3 Plan for the workshop
Participative approach – small group exercises, your real-world examples, general discussion
Consider differences between monitoring and evaluation
Strengths and limitations of each
Use and misuse of performance indicators
How to use monitoring and evaluation approaches appropriately and in a complementary fashion
What is “impact evaluation” and where does it fit in?
4 What do we mean by Monitoring, and by Evaluation?
5 Monitoring – the concept and common definitions
Tracking progress in accordance with previously identified objectives, indicators, or targets (plan vs. reality)
RBM, performance measurement, performance indicators …
In French: “suivi” (tracking) vs. “contrôle” (control)
Some other uses of the term: any ongoing activity involving data collection on performance (usually internal, sometimes seen as self-evaluation)
6 Evaluation – some initial aspects
Systematic, data-based
Often can use data from monitoring as one source of information
Can consider any aspect of a policy, programme, project
Major focus on assessing the impact of the intervention (i.e. attribution, cause)
E-valua-tion
7 Frequent status of M&E
Often run together as one undifferentiated word – “monitoringandevaluation” – or reduced to RBM (monitoring), with evaluation treated separately
8 Ideal situation – Monitoring and Evaluation complementary
9 Monitoring and Evaluation
Evaluation:
  Generally episodic, often external
  Can question the rationale and relevance of the programme and its objectives
  Can identify unintended as well as planned impacts and effects
  Can address “how” and “why” questions
  Can provide guidance for future directions
  Can use data from different sources and from a wide variety of methods
Monitoring:
  Periodic, using data routinely gathered or readily obtainable, generally internal
  Assumes appropriateness of programme, activities, objectives, indicators
  Tracks progress against a small number of targets/indicators (one at a time)
  Usually quantitative
  Cannot indicate causality
  Difficult to use for impact assessment
10 Monitoring, Evaluation and Impact Assessment
The results chain:
  Inputs – investments (resources, staff …) and activities
  Outputs – products
  Outcomes – immediate achievements of the project
  Impact – long-term, sustainable changes
Monitoring: what has been invested, done and produced, and how are we progressing towards the achievement of the objectives?
Evaluation: what occurred and what has been achieved as a result of the project?
Impact assessment: what long-term, sustainable changes have been produced (e.g. the contribution towards the elimination of child labour)?
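One way to see how the three questions attach to different levels of the chain is a small data sketch. The Python fragment below is illustrative only – the class, field names, and example entries are hypothetical, loosely echoing the slide's child-labour example rather than defining any standard:

```python
from dataclasses import dataclass, field

@dataclass
class ResultsChainLevel:
    """One level of the inputs -> outputs -> outcomes -> impact chain."""
    name: str          # e.g. "Inputs"
    description: str   # what this level of the chain covers
    examples: list = field(default_factory=list)

# Hypothetical child-labour project, echoing the slide's example
chain = [
    ResultsChainLevel("Inputs", "Investments (resources, staff) and activities",
                      ["budget spent", "trainers hired"]),
    ResultsChainLevel("Outputs", "Products",
                      ["training sessions delivered", "manuals printed"]),
    ResultsChainLevel("Outcomes", "Immediate achievements of the project",
                      ["children enrolled in school"]),
    ResultsChainLevel("Impact", "Long-term, sustainable changes",
                      ["reduced incidence of child labour"]),
]

# Each approach asks its question at a different point along the chain
questions = {
    "Monitoring": "What has been invested, done and produced, and how are we progressing?",
    "Evaluation": "What occurred and what has been achieved as a result of the project?",
    "Impact assessment": "What long-term, sustainable changes have been produced?",
}

for level in chain:
    print(f"{level.name}: {level.description} (e.g. {', '.join(level.examples)})")
for approach, question in questions.items():
    print(f"{approach}: {question}")
```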
11 Evaluation vs. Research
Research: primary objective is knowledge generation
Evaluation: reference to a particular type of situation; utilisation in some form is an essential component
But: evaluation makes use of research methodologies
12 Monitoring data: quantitative only, or also qualitative?
Some/most guidelines specify quantitative only
Some nominally allow qualitative information, but the reporting form still looks like: Indicator | Q1 | Q2 | Q3 | Q4 | Yr
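As a sketch of the point, here is a hypothetical quarterly reporting row in Python. Everything here (the record class, the indicator name, the figures) is invented for illustration; it simply shows that a monitoring form can carry a qualitative note alongside the quantitative value, even though typical forms leave room only for the number:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class QuarterlyRecord:
    """One quarter's entry for a monitoring indicator."""
    value: Optional[float] = None   # the quantitative figure the form asks for
    note: Optional[str] = None      # qualitative context, rarely given space

# Hypothetical indicator row: Indicator | Q1 | Q2 | Q3 | Q4 | Yr
indicator = "Children withdrawn from hazardous work"   # invented name
quarters = {
    "Q1": QuarterlyRecord(120),
    "Q2": QuarterlyRecord(95, note="School closures reduced outreach"),
    "Q3": QuarterlyRecord(140),
    "Q4": QuarterlyRecord(160),
}

# The "Yr" column: a simple annual total of the quantitative values
year_total = sum(q.value for q in quarters.values() if q.value is not None)
print(f"{indicator}: Yr = {year_total}")
for label, rec in quarters.items():
    line = f"  {label}: {rec.value}"
    if rec.note:
        line += f"  [{rec.note}]"
    print(line)
```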
13 Performance Indicators
A consideration of their limitations and potential for misuse
See, for example:
  Burt Perrin, “Effective Use and Misuse of Performance Measurement,” American Journal of Evaluation, Vol. 19, No. 3, 1998.
  Burt Perrin, “Performance Measurement: Does the Reality Match the Rhetoric?” American Journal of Evaluation, Vol. 20, No. 1, 1999.
14 Common flaws, limitations, and misuse of performance indicators – 1
Goal displacement
Terms and measures interpreted differently
Distorted or inaccurate data
Meaningless and irrelevant data
Cost shifting vs. cost savings
Critical subgroup differences hidden
20 Common flaws, limitations, and misuse of performance indicators – 2
Do not take into account the larger context/complexities
Limitations of objective-based approaches to evaluation
Useless for decision making and resource allocations
Can result in less focus on innovation, improvement and outcomes
21 The process of developing indicators should include:
Involvement of stakeholders in the development, interpretation and revision of indicators
Allocation of time and resources to the development of indicators
Provision of training and expertise
Thinking about potential forms of misuse in advance
Pretesting, testing, review and revision
22 Using indicators appropriately – some basic strategic considerations
First, do no harm
Meaningful and useful at the grassroots – the programme, staff, local stakeholders
NOT linked to budget allocations or managerial rewards
Use only where it makes sense, e.g. Mintzberg, Pollitt/OECD:
  Standardised programmes – recurrent products/services
  Established programmes with a basis for identifying meaningful indicators and targets
  NOT for tangible individual services
  NOT for non-tangible ideal services
23 Using indicators appropriately – strategic considerations – 2
Use indicators as indicators:
  At best, a window onto reality, not reality itself
  To raise questions rather than to provide the “answer”
Different levels (e.g. inputs, activities, outputs, outcomes – where it makes sense)
24 Using indicators appropriately – strategic considerations – 3
Focus on results vs. busy-ness
Performance information vs. performance data
Descriptive vs. numerical indicators
Performance MANAGEment vs. MEASUREment (original intent diverted from management to control)
Periodically review the overall picture – ask if the “data” makes sense, identify questions arising
Indicators as part of a broad evaluation strategy
25 Using indicators appropriately – operational considerations
Look at subgroup differences (see the sketch below)
Indicators/targets indicating direction vs. assessing performance
  If the latter, don’t set the programme up for failure
Dynamic vs. static:
  Never right the first time
  Constantly reassess validity and meaningfulness
  Pre-test, pre-test, pre-test
  Update and revise
Provide feedback – and assistance as needed
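On subgroup differences: a tiny, entirely hypothetical example shows how a healthy-looking aggregate can hide a failing subgroup. The figures and group names below are invented for illustration:

```python
# Invented figures: an aggregate placement rate that looks healthy
# while one subgroup fares far worse than the rest.
results = {
    # subgroup: (number placed, number served)
    "urban men":   (180, 200),
    "urban women": (150, 200),
    "rural men":   (60, 150),
    "rural women": (30, 150),
}

placed = sum(p for p, _ in results.values())
served = sum(n for _, n in results.values())
print(f"Overall placement rate: {placed / served:.0%}")  # 60% - looks fine

for group, (p, n) in results.items():
    print(f"  {group}: {p / n:.0%}")  # rural women: 20% - the hidden story
```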
26 Using indicators appropriately – reporting
More vs. less information in reports
Performance story vs. list of numbers
Identify limitations – provide qualifications
Combine with other information
Request/provide feedback
28 A strategic approach to evaluation
Raison d’être of evaluation:
  Social betterment
  Sensemaking
More generally, the raison d’être of evaluation: to be used!
  Improved policies, programmes, projects, services, thinking
29 Monitoring and Evaluation
Monitoring:
  Periodic, using data routinely gathered or readily obtainable
  Assumes appropriateness of programme, activities, objectives, indicators
  Tracks progress against a small number of targets/indicators (one at a time)
  Usually quantitative
  Cannot indicate causality
  Difficult to use for impact assessment
Evaluation:
  Generally episodic
  Can question the rationale and relevance of the programme and its objectives
  Can identify unintended as well as planned impacts and effects
  Can provide guidance for future directions
  Can address “how” and “why” questions
  Can use data from different sources and from a wide variety of methods
30 Future orientation – dilemma
“The greatest dilemma of mankind is that all knowledge is about past events and all decisions about the future. The objective of this planning, long-term and imperfect as it may be, is to make reasonably sure that, in the future, we may end up approximately right instead of exactly wrong.”
31 Questions for evaluation
Start with the questions; the choice of methods follows
How to identify questions:
  Who can use evaluation information?
  What information can be used? How?
  Different stakeholders – different questions
  Consider responses to hypothetical findings
  Develop the theory of change (logic model)
32 The three key evaluation questions
What’s happening? (planned and unplanned, little or big, at any level)
Why?
So what?
33 Some uses for evaluation
Programme improvement
Identify new policies, programme directions, strategies
Programme formation
Decision making at all levels
Accountability
Learning
Identification of needs
Advocacy
Instilling an evaluative/questioning culture
34 Different types of evaluation
Ex-ante vs. ex-post
Process vs. outcome
Formative vs. summative
Descriptive vs. judgemental
Accountability vs. learning (vs. advocacy vs. pro-forma)
Short-term actions vs. long-term thinking
Etc.
42 Making evaluation useful – 1
Be strategic:
  E.g. start with the big picture – identify questions arising
  Focus on priority questions and information requirements
  Consider the needs and preferences of key evaluation users
Don’t be limited to stated/intended effects
Don’t try to do everything in one evaluation
43 Making evaluation useful – 2
Primary focus: how evaluation can be relevant and useful
Bear the beneficiaries in mind
Take into account diversity, including differing world views, logics, and values
Be an (appropriate) advocate
Don’t be too broad; don’t be too narrow
44 How else can one practice evaluation so that it is useful?
Follow the Golden Rule: “There are no golden rules.” (European Commission)
Art as much as science
Be future oriented
Involve stakeholders
Use multiple and complementary methods, qualitative and quantitative
Recognize differences between monitoring and evaluation
45 To think about …
Constructive approach, emphasis on learning vs. punishment
Good practices (not just problems)
Take into account complexity theory, systems approach, chaos theory
Synthesis, knowledge management
Establishing how/if the intervention in fact is responsible for results (attribution or cause)
46 Impact evaluation/assessment: what does this mean?
OECD/DAC definition of impact: positive and negative, primary and secondary long-term effects produced by a development intervention, directly or indirectly, intended or unintended.
Development objective: intended impact contributing to physical, financial, institutional, social, environmental, or other benefits to a society, community, or group of people via one or more development interventions.
But beware! ‘Impact’ and ‘impact assessment’ are frequently used in very different ways.
47 Determining attribution – some alternative approaches
Experimental/quasi-experimental designs (randomisation) – see the sketch below
Eliminate rival plausible hypotheses
Physical (qualitative) causality
Theory of change approach – “reasonable attribution”
“Contribution” vs. “cause” – contribution analysis
Use the simplest approach that provides the needed confidence
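A minimal sketch of the comparison-group logic behind experimental and quasi-experimental designs, with entirely hypothetical outcome data: the comparison group stands in for the counterfactual, i.e. what would have happened without the intervention.

```python
# Entirely hypothetical outcome scores (e.g. a household income index)
# for a group that received the intervention and a comparison group
# that did not.
treatment = [62, 71, 58, 65, 74, 69, 61, 70]
comparison = [55, 60, 52, 58, 63, 57, 54, 59]

def mean(xs):
    return sum(xs) / len(xs)

# With randomised assignment this difference estimates the average
# effect; in a quasi-experimental design the arithmetic is the same,
# but rival explanations for the gap must still be ruled out.
effect = mean(treatment) - mean(comparison)
print(f"Treatment mean:   {mean(treatment):.1f}")
print(f"Comparison mean:  {mean(comparison):.1f}")
print(f"Estimated effect: {effect:.1f}")
```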
48 Some considerations for meaningful impact evaluation
Need information about inputs and activities as well as about outcomes
Check, don’t assume, that what is mandated in (Western) capitals is what actually takes place on the ground
Check: are data sources really accurate?
Dealing with responsiveness – a problem or a strength?
Internal vs. external validity
49 Some questions about impact evaluation
What is possible with multiple interventions?
Changing situation
Strategies/policies vs. projects
Time frame?
51 How Monitoring and Evaluation can be complementary
Ongoing monitoring:
  Can identify questions and issues for (in-depth) evaluation
  Can provide data for evaluation
Evaluation:
  Can identify what should be monitored in the future
52 Monitoring vs. Evaluation
Start with the purpose and question(s), e.g. control vs. learning/improvement
Identify information requirements (for whom, how the information would be used …)
Articulate the theory of change
Use the most appropriate method(s) given the above (a toy decision aid follows below):
  Some form of monitoring approach? and/or
  Some form of evaluation?
Do not use monitoring when evaluation is most appropriate – and vice versa
Consider costs (financial, staff time) and timeliness
  Monitoring is usually – but not always! – less costly and quicker
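The sequence on this slide can be caricatured as a decision aid. The function below is a deliberately crude, hypothetical paraphrase – real choices depend on purpose, users, and context rather than a lookup – but it captures the slide's ordering: questions first, then method:

```python
# A deliberately crude, hypothetical decision aid paraphrasing the
# slide's sequence. Real choices depend on purpose, users and context,
# not on a lookup function.
def suggest_approach(needs_why_or_so_what: bool,
                     purpose: str,
                     routine_data_suffice: bool) -> str:
    if needs_why_or_so_what:
        # "Why" and "so what" questions are evaluation territory
        return "evaluation (possibly drawing on monitoring data)"
    if purpose == "control" and routine_data_suffice:
        # Tracking progress against known targets with routine data
        return "monitoring"
    return "monitoring and evaluation in combination"

print(suggest_approach(True, "learning", False))   # -> evaluation (...)
print(suggest_approach(False, "control", True))    # -> monitoring
```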
53 Monitoring and Evaluation in combination
A multi-method approach to evaluation is usually most appropriate – and can include monitoring
Generally, monitoring is most appropriate as part of an overall evaluation approach
E.g. use evaluation to expand upon the “what” information from monitoring, and to address the “why” and “so what” questions
Strategic questions → strategic methods
Seek the minimum amount of information that addresses the right questions and that will actually be used
Tell the performance story
Take a contribution analysis approach
54 Contribution Analysis (Mayne: using performance measures sensibly)
1. Develop the results chain
2. Assess the existing evidence on results
3. Assess the alternative explanations
4. Assemble the performance story
5. Seek out additional evidence
6. Revise and strengthen the performance story
(A schematic sketch of the cycle follows.)
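A schematic, hypothetical rendering of Mayne's six steps as an iterative loop. The data structures and the stopping rule are invented simplifications, not Mayne's method itself; the point is only that the performance story is revised as alternative explanations are examined against new evidence:

```python
# Schematic and hypothetical: Mayne's six steps as an iterative loop.
def contribution_analysis(results_chain, evidence, alternatives, gather_more):
    # Steps 1-4: chain, existing evidence, alternatives, first draft story
    story = {"chain": results_chain,
             "evidence": list(evidence),
             "unresolved": list(alternatives)}
    while story["unresolved"]:
        extra = gather_more(story["unresolved"])  # step 5: seek more evidence
        if not extra:
            break  # no further evidence obtainable; story stays qualified
        story["evidence"].extend(extra)
        # Step 6: revise - drop explanations the new evidence rules out
        ruled_out = {e["rules_out"] for e in extra if "rules_out" in e}
        story["unresolved"] = [a for a in story["unresolved"]
                               if a not in ruled_out]
    return story

# Hypothetical usage
chain = ["inputs", "outputs", "outcomes", "impact"]
evidence = [{"finding": "school enrolment rose in project areas"}]
alternatives = ["general economic upturn", "parallel NGO programme"]

def gather_more(unresolved):
    # Pretend fieldwork rules out the economic-upturn explanation
    if "general economic upturn" in unresolved:
        return [{"finding": "enrolment flat in comparison areas",
                 "rules_out": "general economic upturn"}]
    return []

story = contribution_analysis(chain, evidence, alternatives, gather_more)
print("Remaining rival explanations:", story["unresolved"])
```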
55 Conclusion
Go forward, monitor and evaluate – and help to make a difference.
Thank you / Merci pour votre participation (thank you for your participation).
Burt Perrin