Monitoring? Evaluation? Impact Evaluation? Appreciating and Taking Advantage of the Differences
Workshop at the Cairo Conference on Impact Evaluation, 29 March 2009
Burt Perrin, La Masque, 30770 Vissec, FRANCE
Burt@BurtPerrin.com, +33 4 67 81 50 11
Alternative title: Putting the “and” back in MandE
Plan for the workshop
- Participative approach – small-group exercises, your real-world examples, general discussion
- Consider differences between monitoring and evaluation
- Strengths and limitations of each
- Use and misuse of performance indicators
- How to use monitoring and evaluation approaches appropriately and in a complementary fashion
- What is "impact evaluation" and where does it fit in?
What do we mean by Monitoring, and by Evaluation?
Monitoring – the concept and common definitions
- Tracking progress in accordance with previously identified objectives, indicators, or targets (plan vs. reality)
- RBM, performance measurement, performance indicators …
- In French: "suivi" (following up) vs. "contrôle" (checking, control)
- Some other uses of the term: any ongoing activity involving data collection on performance (usually internal, sometimes seen as self-evaluation)
Evaluation – some initial aspects
- Systematic, data-based
- Often can use data from monitoring as one source of information
- Can consider any aspect of a policy, programme, or project
- Major focus on assessing the impact of the intervention (i.e. attribution, cause)
- E-valua-tion: the word itself highlights "value"
Frequent status of M&E
- Often run together as one word: "monitoringandevaluation"
- In practice, usually either RBM (monitoring) or evaluation alone – rather than monitoring and evaluation together
Ideal situation – Monitoring and Evaluation complementary
[diagram: overlapping "Monitoring" and "Evaluation" circles]
Monitoring and Evaluation

Monitoring:
- Periodic, using data routinely gathered or readily obtainable; generally internal
- Assumes appropriateness of programme, activities, objectives, indicators
- Tracks progress against a small number of targets/indicators (one at a time)
- Usually quantitative
- Cannot indicate causality
- Difficult to use for impact assessment

Evaluation:
- Generally episodic, often external
- Can question the rationale and relevance of the programme and its objectives
- Can identify unintended as well as planned impacts and effects
- Can address "how" and "why" questions
- Can provide guidance for future directions
- Can use data from different sources and from a wide variety of methods
MONITORING, EVALUATION AND IMPACT ASSESSMENT
Results chain: Inputs → Outputs → Outcomes → Impact
- Inputs: investments (resources, staff …) and activities
- Outputs: products
- Outcomes: immediate achievements of the project
- Impact: long-term, sustainable changes
Monitoring: what has been invested, done and produced, and how are we progressing towards the achievement of the objectives?
Evaluation: what occurred and what has been achieved as a result of the project?
Impact assessment: what long-term, sustainable changes have been produced (e.g. the contribution towards the elimination of child labour)?
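The results chain above can be sketched as a simple lookup structure. This is purely illustrative: the stage descriptions and questions paraphrase the slide, but the code structure and names are assumptions, not anything prescribed by the M&E literature.

```python
# Illustrative sketch of the results chain and the question each
# M&E activity asks. Structure and names are assumptions; the
# content paraphrases the slide above.
RESULTS_CHAIN = [
    ("Inputs",   "Investments (resources, staff ...) and activities"),
    ("Outputs",  "Products"),
    ("Outcomes", "Immediate achievements of the project"),
    ("Impact",   "Long-term, sustainable changes"),
]

QUESTIONS = {
    "Monitoring": "What has been invested, done and produced, "
                  "and how are we progressing towards the objectives?",
    "Evaluation": "What occurred and what has been achieved "
                  "as a result of the project?",
    "Impact assessment": "What long-term, sustainable changes "
                         "have been produced?",
}

def describe_chain():
    """Return the chain as an arrow-joined string, e.g. for a report header."""
    return " -> ".join(stage for stage, _ in RESULTS_CHAIN)

print(describe_chain())  # Inputs -> Outputs -> Outcomes -> Impact
```

Keeping the chain and the questions in one place makes the key distinction explicit: monitoring sits at the inputs/outputs end, evaluation and impact assessment at the outcomes/impact end.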
Evaluation vs. Research
- Research – primary objective: knowledge generation
- Evaluation – refers to a particular situation; utilisation in some form is an essential component
- But: evaluation makes use of research methodologies
Monitoring data: quantitative only, or also qualitative?
- Some/most guidelines specify quantitative only
- Some nominally allow qualitative information, but:
[table: typical reporting template, one row per indicator, columns Q1, Q2, Q3, Q4, Year]
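The slide's point – that quarterly reporting templates leave room only for numbers – can be illustrated with a minimal record that carries both a numeric value and a qualitative commentary field. The structure and field names here are hypothetical, not from any real reporting standard.

```python
from dataclasses import dataclass

@dataclass
class IndicatorRecord:
    """One indicator for one reporting period. Hypothetical structure:
    most templates stop at `value`; the `commentary` field holds the
    qualitative context that the slide argues is usually squeezed out."""
    indicator: str
    period: str           # e.g. "Q1", "Q2", "Q3", "Q4", or "Yr"
    value: float
    commentary: str = ""  # qualitative context, caveats, interpretation

rec = IndicatorRecord(
    indicator="Children reached",
    period="Q2",
    value=1250,
    commentary="Figure boosted by a one-off registration drive; "
               "underlying trend is flat.",
)
```

Without the commentary, the number 1250 would read as straightforward progress; with it, the reader learns the opposite – which is exactly why purely quantitative templates can mislead.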
Performance Indicators: a consideration of their limitations and potential for misuse
See, for example:
- Burt Perrin, "Effective Use and Misuse of Performance Measurement", American Journal of Evaluation, Vol. 19, No. 3, pp. 367-369, 1998.
- Burt Perrin, "Performance Measurement: Does the Reality Match the Rhetoric?", American Journal of Evaluation, Vol. 20, No. 1, pp. 101-114, 1999.
Common flaws, limitations, and misuse of performance indicators – 1
- Goal displacement
- Terms and measures interpreted differently
- Distorted or inaccurate data
- Meaningless and irrelevant data
- Cost shifting vs. cost savings
- Critical subgroup differences hidden
Common flaws, limitations, and misuse of performance indicators – 2
- Do not take into account the larger context/complexities
- Limitations of objective-based approaches to evaluation
- Useless for decision making and resource allocations
- Can result in less focus on innovation, improvement and outcomes
The process of developing indicators should include:
- Involvement of stakeholders in the development, interpretation and revision of indicators
- Allocation of time and resources to the development of indicators
- Provision of training and expertise
- Thinking about potential forms of misuse in advance
- Pretesting, testing, review and revision
Using indicators appropriately – some basic strategic considerations
- First, do no harm
- Meaningful and useful at the grassroots – the programme, staff, local stakeholders
- NOT linked to budget allocations or managerial rewards
- Use only when it makes sense (e.g. Mintzberg; Pollitt/OECD):
  - Standardised programmes – recurrent products/services
  - Established programmes with a basis for identifying meaningful indicators and targets
  - NOT for tangible individual services
  - NOT for non-tangible ideal services
Using indicators appropriately – strategic considerations – 2
- Use indicators as indicators: at best a window on reality, not reality itself
- To raise questions rather than to provide the "answer"
- Different levels (e.g. inputs, activities, outputs, outcomes – where it makes sense)
Using indicators appropriately – strategic considerations – 3
- Focus on results vs. busy-ness
- Performance information vs. performance data
- Descriptive vs. numerical indicators
- Performance MANAGEment vs. MEASUREment (original intent diverted from management to control)
- Periodically review the overall picture – ask if the "data" make sense, identify questions arising
- Indicators as part of a broad evaluation strategy
Using indicators appropriately – operational considerations
- Look at subgroup differences
- Indicators/targets for indicating direction vs. assessing performance; if the latter, don't set the programme up for failure
- Dynamic vs. static: never right the first time, so constantly reassess validity and meaningfulness
- Pre-test, pre-test, pre-test
- Update and revise
- Provide feedback – and assistance as needed
Using indicators appropriately – reporting
- More vs. less information in reports
- Performance story vs. list of numbers
- Identify limitations – provide qualifications
- Combine with other information
- Request/provide feedback
A strategic approach to evaluation
Raison d'être of evaluation:
- Social betterment
- Sensemaking
- More generally: to be used! Improved policies, programmes, projects, services, thinking
Monitoring and Evaluation (recap)

Monitoring:
- Periodic, using data routinely gathered or readily obtainable
- Assumes appropriateness of programme, activities, objectives, indicators
- Tracks progress against a small number of targets/indicators (one at a time)
- Usually quantitative
- Cannot indicate causality
- Difficult to use for impact assessment

Evaluation:
- Generally episodic
- Can question the rationale and relevance of the programme and its objectives
- Can identify unintended as well as planned impacts and effects
- Can provide guidance for future directions
- Can address "how" and "why" questions
- Can use data from different sources and from a wide variety of methods
Future orientation - Dilemma “The greatest dilemma of mankind is that all knowledge is about past events and all decisions about the future. The objective of this planning, long-term and imperfect as it may be, is to make reasonably sure that, in the future, we may end up approximately right instead of exactly wrong.”
Questions for evaluation
- Start with the questions; choice of methods follows
- How to identify questions: Who can use evaluation information? What information can be used? How?
- Different stakeholders – different questions
- Consider responses to hypothetical findings
- Develop the theory of change (logic model)
The three key evaluation questions What’s happening? (planned and unplanned, little or big at any level) Why? So what?
Some uses for evaluation
- Programme improvement
- Identifying new policies, programme directions, strategies
- Programme formation
- Decision making at all levels
- Accountability
- Learning
- Identification of needs
- Advocacy
- Instilling an evaluative/questioning culture
Different types of evaluation
- Ex-ante vs. ex-post
- Process vs. outcome
- Formative vs. summative
- Descriptive vs. judgemental
- Accountability vs. learning (vs. advocacy vs. pro-forma)
- Short-term actions vs. long-term thinking
- Etc.
Making evaluation useful – 1
- Be strategic, e.g. start with the big picture – identify questions arising
- Focus on priority questions and information requirements
- Consider the needs and preferences of key evaluation users
- Don't be limited to stated/intended effects
- Don't try to do everything in one evaluation
Making evaluation useful – 2
- Primary focus: how evaluation can be relevant and useful
- Bear the beneficiaries in mind
- Take into account diversity, including differing world views, logics, and values
- Be an (appropriate) advocate
- Don't be too broad
- Don't be too narrow
How else can one practice evaluation so that it is useful?
- Follow the Golden Rule: "There are no golden rules." (European Commission)
- Art as much as science
- Be future oriented
- Involve stakeholders
- Use multiple and complementary methods, qualitative and quantitative
- Recognize differences between monitoring and evaluation
To think about …
- Constructive approach: emphasis on learning vs. punishment, good practices (not just problems)
- Take into account complexity theory, systems approaches, chaos theory
- Synthesis, knowledge management
- Establishing how/if the intervention in fact is responsible for results (attribution or cause)
Impact evaluation/assessment: what does this mean?
OECD/DAC definition of impact: "Positive and negative, primary and secondary long-term effects produced by a development intervention, directly or indirectly, intended or unintended."
Development objective: "Intended impact contributing to physical, financial, institutional, social, environmental, or other benefits to a society, community, or group of people via one or more development interventions."
But beware! "Impact" and "impact assessment" are frequently used in very different ways.
Determining attribution – some alternative approaches
- Experimental/quasi-experimental designs (randomisation)
- Eliminating rival plausible hypotheses
- Physical (qualitative) causality
- Theory of change approach: "reasonable attribution"
- "Contribution" vs. "cause"
- Contribution analysis (simplest approach – at the needed level of confidence)
Some considerations for meaningful impact evaluation
- Need information about inputs and activities as well as about outcomes
- Check, don't assume, that what is mandated in (Western) capitals is what actually takes place on the ground
- Check: are data sources really accurate?
- Dealing with responsiveness – a problem or a strength?
- Internal vs. external validity
Some questions about impact evaluation
- What is possible with multiple interventions?
- Changing situations
- Strategies/policies vs. projects
- Time frame?
How Monitoring and Evaluation can be complementary
- Ongoing monitoring can identify questions and issues for (in-depth) evaluation, and can provide data for evaluation
- Evaluation can identify what should be monitored in the future
Monitoring vs. Evaluation
- Start with the purpose and question(s), e.g. control vs. learning/improvement
- Identify information requirements (for whom, how they would be used …)
- Articulate the theory of change
- Use the most appropriate method(s) given the above: some form of monitoring approach, and/or some form of evaluation?
- Do not use monitoring when evaluation is most appropriate – and vice versa
- Consider costs (financial, staff time) and timeliness: monitoring is usually – but not always! – less costly and quicker
Monitoring and Evaluation in combination
- A multi-method approach to evaluation is usually most appropriate – and can include monitoring
- Generally, monitoring is most appropriate as part of an overall evaluation approach, e.g. use evaluation to expand upon the "what" information from monitoring, and to address "why" and "so what" questions
- Strategic questions → strategic methods
- Seek the minimum amount of information that addresses the right questions and that will actually be used
- Tell the performance story
- Take a contribution analysis approach
Contribution Analysis (Mayne, "Using performance measures sensibly")
1. Develop the results chain
2. Assess the existing evidence on results
3. Assess the alternative explanations
4. Assemble the performance story
5. Seek out additional evidence
6. Revise and strengthen the performance story
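Mayne's six steps are iterative: steps 5 and 6 loop back to strengthen the performance story until the evidence is judged sufficient. A minimal sketch of that loop (entirely illustrative – the function and variable names are assumptions, and the "sufficiency" check stands in for the evaluator's judgement):

```python
# Illustrative sketch of contribution analysis as an iterative loop:
# build an initial performance story, then strengthen it until the
# evidence is judged sufficient. All names are assumptions.
STEPS = [
    "Develop the results chain",
    "Assess the existing evidence on results",
    "Assess the alternative explanations",
    "Assemble the performance story",
    "Seek out additional evidence",
    "Revise and strengthen the performance story",
]

def contribution_analysis(evidence_is_sufficient, max_rounds=3):
    """Run steps 1-4 once, then repeat steps 5-6 until the story is
    judged credible (or give up after max_rounds strengthening rounds)."""
    log = STEPS[:4]                # steps 1-4: initial performance story
    for _ in range(max_rounds):
        if evidence_is_sufficient(log):
            break                  # story is credible enough; stop
        log += STEPS[4:6]          # steps 5-6: gather evidence, revise story
    return log

# Example: accept the story once at least one strengthening round is done.
story = contribution_analysis(lambda log: len(log) >= 6)
```

The point of the sketch is the shape, not the code: contribution analysis does not claim proof of cause, only a performance story made progressively more credible by additional evidence – a "contribution" rather than a definitive attribution, matching the distinction drawn earlier.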
Conclusion Go forward, monitor and evaluate – and help to make a difference. Thank you / Merci pour votre participation. Burt Perrin Burt@BurtPerrin.com