1 Capacity development & learning in evaluation
Uganda Evaluation Week, 19th to 23rd May 2014

2 Contents
What do we mean by evaluation capacity?
Links to managing for results
Evidence from organisations that have introduced results management systems
The challenge of evaluation culture
The experience of DFID
Results system or results culture?
Conclusions

3 What do we mean by evaluation capacity?
Capacity empowers stakeholders to promote accountability, transparency and learning, and to question policies and practices: a process of good governance?
Building blocks? Individual, organisation, enabling environment and systems.
A process? One that leads to demand for, supply of and use of evaluation.
Or both?

4 Individual level
Individuals’ knowledge, skills and competences.
Senior management capacity for strategy and planning.
At mid-management level, understanding of the role of evaluation as a tool for effectively achieving development results.
Behavioural independence and professional competences of those who manage and/or conduct evaluations.
Source: Adapted from Segone (2010), Moving from policies to results by developing national capacities for country-led monitoring and evaluation systems.

5 Institutional level: the institutional framework
An evaluation policy exists and is implemented.
An evaluation unit exists with clearly defined roles and responsibilities.
A functional quality assurance system is in place.
Funding for evaluations is independent.
A system exists to plan, undertake and report evaluation findings in an independent, credible and useful way.
Evaluation results are openly disseminated.
Knowledge management systems in support of the evaluation function exist and are used.
Source: Adapted from Segone (2010), Moving from policies to results by developing national capacities for country-led monitoring and evaluation systems.

6 Enabling environment
A context that fosters the performance and results of individuals and organizations:
A functioning national voluntary organisation for professional evaluation.
A national policy on evaluation.
A strong evaluation culture.
A public administration committed to transparency, managing for results and accountability.
Political will to institutionalize evaluation.
Adequate information and statistical systems.
Legislation and/or policies to institutionalize monitoring and evaluation systems.
Source: Adapted from Segone (2010), Moving from policies to results by developing national capacities for country-led monitoring and evaluation systems.

7 All or nothing?
Neglect of the individual means that skills are lacking at all levels and demand does not emerge.
Neglect of the organisation means that structures and processes are lacking and individuals’ skills are not applied.
Neglect of the environment means that legislation and supportive policies are absent and roles and responsibilities lack clarity.

8 Results management framework
Strategic results framework: objectives, indicators and strategy; roles and responsibilities.
Programme results framework: results chain and theory of change; alignment with the strategic framework; performance indicators.
Credible performance reporting: relevant, timely and reliable reporting.
Use results to improve performance: adjust the programme; develop lessons and good practices.
Credible measurement and analysis: measure and assess results; assess contribution to strategic objectives.
Source: Itad Ltd, adapted from the ‘Managing for Development Results Handbook’.

9 Expectations for managers
Planning: understanding the theory of change; setting out performance expectations.
Implementation: measuring and analysing results and assessing contribution.
Decision-making and learning: deliberately learning from evidence and analysis.
Accountability: reporting on performance achieved against expectations.
Source: Mayne, John (2008) Building an evaluative culture for effective evaluation and results management. Institutional Learning and Change (ILAC) Initiative Brief 20. CGIAR.

10 Evaluations of RBM
UNDP (2007):
Significant progress was made on sensitising staff to results and on creating the tools to enable a fast and efficient flow of information.
Managing for results has proved harder to achieve. A stronger emphasis on resource mobilisation and delivery, a culture fostering a low level of risk-taking, weak information systems at country level, the lack of clear lines of accountability and the lack of a staff incentive structure all work against building a strong culture of results.
Finland (2010):
Tools and procedures are comprehensive and well established, but good standards of project design are not consistently applied.
Managers give low priority to monitoring, reporting and evaluation. Most monitoring reports were activity-based or financial, with little reporting against logframes.
Managing for results depends not only on technical methodology, but also on the way the development cooperation programme is organised and managed. Finland’s approach is characterised as risk-averse, with few examples of results being used to inform policy.

11 ‘Can we demonstrate the difference that Norwegian aid makes?’
Overall conclusion: although there are some elements of good foundations for better results measurement, current arrangements lack the strength of leadership, depth of guidance and coherence of procedures necessary for effective evaluation of Norwegian aid. As a result of a lack of incentives, poor processes for planning and monitoring grants, and weaknesses in the procedures for evaluations, the difference that Norwegian aid makes cannot be demonstrated.
Source: ITAD Ltd (2014) Can we demonstrate the difference that Norwegian aid makes? Evaluation of results measurement and how this can be improved.

12 What is an evaluative culture?
An organization with a strong evaluative culture:
engages in self-reflection and self-examination: it deliberately seeks evidence on what it is achieving, such as through monitoring and evaluation; uses results information to challenge and support what it is doing; and values candour, challenge and genuine dialogue;
engages in evidence-based learning: it makes time to learn in a structured fashion, learns from mistakes and weak performance, and encourages knowledge sharing;
encourages experimentation and change: it supports deliberate risk-taking and seeks out new ways of doing business.
Source: Mayne, John (2008) Building an evaluative culture for effective evaluation and results management. Institutional Learning and Change (ILAC) Initiative Brief 20. CGIAR.

13 UK Department for International Development
A 2009 study of DFID’s evaluation reports found:
Weaknesses were systemic in nature and linked to top management, requiring a significant change in culture.
A key overarching problem was an unduly defensive attitude to the findings from evaluation.
Other detailed recommendations called for evaluability issues to be considered at the planning stage, for training of staff, for strengthening the evidence base that underpins evaluations, and for requiring managers to make a formal response to evaluations.
Source: Roger C. Riddell (2009) The Quality of DFID’s Evaluation Reports and Assurance Systems. IACDI (Independent Advisory Committee on Development Impact).

14 DFID – Highly rated for evaluability
Five features of DFID’s approach combine to justify high ratings:
Continuity of guidance from planning a project Business Case, quality assurance arrangements, evaluation policy and evaluation training materials, with some cross-referencing.
Recognition that a clear logic model and results based on prior evidence strengthen the quality of project design, rather than being a formality to complete a project proposal.
Evaluability is assessed from several perspectives: expected impact and outcomes; strength of the evidence base; theory of change; and what arrangements are needed to measure, monitor and evaluate progress and results.
Documentation includes detailed descriptions, training or self-briefing materials and examples for staff to follow.
There is consistency of message across planning guidance, appraisal and approval, with a detailed checklist for quality assurance.
Source: ITAD Ltd (2014) Can we demonstrate the difference that Norwegian aid makes? An evaluation of results measurement and how this can be improved, Annex 5.

15 DFID – Embedding: Business Cases and evaluation advisers
Since 2011: 37 advisers in an evaluation role, 150 staff accredited in evaluation and 700 people given basic training.
The number of evaluations commissioned has increased significantly, from 12 per year before 2011 to an estimated 40 completed evaluations in 2013/14.
The embedding process has increased actual and potential demand. The decision to evaluate is now made during the preparation of Business Cases. This is good for programme performance, but a broader strategic focus is lacking.
The depth of this capacity is less than required, with 81% of those accredited to date only at the foundation or competent level.
Gaps in capacity relate to: understanding why and when to commission evaluation; enhancing the contexts of evaluations and engaging stakeholders appropriately; selecting and implementing appropriate evaluation approaches while ensuring reliability of data and validity of analysis; and reporting and presenting information in a useful and timely manner.
There is a need to strengthen evaluation governance and to develop a DFID evaluation strategy.
Source: DFID (2014) Rapid Review of Embedding Evaluation in UK DFID.

16 Core quality model
Quality assurance: standards; performance.
Technical guidance: how to do it; training.
Programme: procedures, roles and responsibilities; results and evaluability.
Comment: quality assurance can lead to risk-averse reactions.
References for the World Bank and DFID are both internal reviews: for DFID, the ‘Quality Assurance Unit Annual Report 2011/12’ and ‘Better Programme Management: Update on the End-to-End Review of the Programme Cycle, May 2013’; for the World Bank, ‘Delivering results by enhancing our focus on quality’, OPCS (2012), para 41.

17 DFID Learning
DFID is the highest-performing main civil service department for ‘learning and development’ (Cabinet Office survey).
Evaluations are a key source of knowledge: 40 evaluations completed, and 425 either underway or planned, as at July 2013.
Annual, mid-term and project completion reviews are an under-utilised resource; staff find it hard to identify what is important and what is irrelevant.
DFID’s ability to influence has been strengthened by its investment in knowledge.
Issues:
Workload pressures restrict making time to learn.
Staff often feel under pressure to be positive when assessing both current and future project performance.
Knowledge is sometimes used selectively to support decision-making.
A positive bias links to a culture in which staff have often felt afraid to discuss failure.
Many evaluations are not sufficiently concise or timely to affect decision-making.
Source: Independent Commission for Aid Impact (2014) How DFID Learns.

18 UK – National Audit Office
£44m spent on government evaluation; an estimated 102 FTE staff working on evaluation across government.
Findings: significant spend; incomplete coverage; the rationale for what the government evaluates is unclear; evaluations are often not robust enough to reliably identify impact; learning is not used to improve impact and cost-effectiveness.
Recommendations: plan evaluation when designing all new policies; design policy implementation to facilitate robust evaluation; departments should make data available to independent evaluators for research purposes.
Source: NAO (2014) Evaluation in Government.

19 Results system or results culture?
Many organizations have systems of results:
a results-based planning system, with results frameworks for programmes;
results monitoring systems in place, generating results data;
evaluations undertaken by an evaluation unit to assess the results achieved;
reporting systems in place, providing data on the results achieved.
But these should not be mistaken for an evaluative culture. Indeed, on their own, they can become a burdensome system that does not help management at all.
Source: Mayne, John (2008) Building an evaluative culture for effective evaluation and results management. Institutional Learning and Change (ILAC) Initiative Brief 20. CGIAR.

20 Measures to foster an evaluative culture
Leadership: demonstrated senior management leadership and commitment; regular, informed demand for results information; building capacity for results measurement and results management; establishing and communicating clear roles and responsibilities for results management.
Organisational support structures: supportive organizational incentives; supportive organizational systems, practices and procedures; an outcome-oriented and supportive accountability regime.
Learning-focused evaluation and monitoring: a learning focus; building in learning; tolerating and learning from mistakes.
Source: Mayne, John (2008) Building an evaluative culture for effective evaluation and results management. Institutional Learning and Change (ILAC) Initiative Brief 20. CGIAR.

21 Conclusions – taking a positive view
Evaluation is only one source of information, alongside research and implementation experience. ECD needs to inform how these work together.
Quality evaluation is built on quality planning. ECD needs to be linked to better planning systems.
Technical skills are necessary but not sufficient. Effective evaluation will be determined by the culture and incentives in the organisation.
ECD is a journey, not a destination. Systems are not static; they need continual review, learning and revision. There is no simple solution: systems need to be introduced, used, tested, reviewed and then updated in a rolling cycle.

22 End

