Strengthening organisational capacities for evaluation of humanitarian action
London, 28th September 2010
Introduction
Evaluation revolution over the last decade:
1. ALNAP
2. Evaluation guides
3. Evaluation research
4. Evaluation database
5. Meta-evaluations
But…
– Full benefit not being realised
– Not part of the culture of organisations
– Disconnected
– Growing scepticism: "Nothing changes", e.g. Haiti
Efforts underway to address this
– UNEG
– OECD-DAC
– Individual agencies: CARE, DFID
Not a new issue in the wider evaluation world, e.g. Patton: Utilisation-Focused Evaluation
Action research
Followed other work: Sandison, Peta (2007) "The Utilisation of Evaluations" (London: ALNAP)
– Literature review
– Interviews
– Draft paper and workshop
– Further interviews and workshops
Limitations
– Work in progress
– Mostly only spoken to evaluators
Aims for today:
– Endorse/reject/alter the framework
– Swap experience
– Plan the way forward
Proposed framework
– Capacity Area 1: Leadership, culture and structure
– Capacity Area 2: Evaluation purpose and policy
– Capacity Area 3: Evaluation processes and systems
– Capacity Area 4: Supporting processes and mechanisms
Capacity Area 1: Leadership, culture and structure
Ensure leadership is supportive of evaluation and monitoring
Leadership is key (interviews and literature):
– Recruit?
– Motivate?
– Evaluation champions?
– Demonstrate the benefit of evaluation
Leadership
Leadership values evidence from evaluations:
– Agree 59%, strongly agree 32% (91% combined)
– Disagree 5%
– Strongly disagree 5%
Total responses: 22
Operational settings
Still positive, but less so for operational settings (Red Cross and NGO respondents more negative):
– Agree 64%
– Strongly agree 9%
– Disagree 27%
Evaluation culture
– Needs to be tackled early on
– Virtuous circle?
– Each organisation unique: strategy needed; learn from the experience of others; cultural web
Evidence and data are actively sought to help decision-making at all levels of the organisation:
– Agree 60%
– Disagree 40%
The ‘personal factor’
– Go from abstract to real and specific
– Identify actual primary intended users
– Exactly what do they want, and when?
– Build ownership: not a distant, independent judge!
– Evaluators need people skills!
All evaluations include a stakeholder analysis of intended users:
– Disagree 60%
– Agree 40%
– NGO and Red Cross respondents less positive
Organisational structure
– Central unit?
– Decentralised evaluations?
– Reporting to the board?
The structural position of the evaluation unit has a tangible impact on the emphasis placed on evaluations (e.g. accountability versus learning):
– Agree 41%
– Strongly agree 27%
– Disagree 32%
Capacity Area 2: Evaluation purpose and policy
Why evaluate? Lesson-learning or accountability?
– The key question, and the most controversial!
– Mandated evaluations undercut utility…
– Separate out these functions?
My organisation recognises and actively works to resolve the tensions between accountability and learning:
– Disagree 50%
– Agree 36%
– Strongly agree 14%
– No clear-cut differences across organisational types
There is separation of the accountability and learning functions, by the department/individuals involved:
– Agree 55%
– Strongly agree 14%
– Disagree 32% (UN)
Policy
– Does it exist? Is it tailored for humanitarian action?
– Do evaluators see it?
– Does it include reference to utilisation? OECD-DAC criteria?
– Is more flexibility needed?
– Policy does not mean practice…
Within my organisation, there is a formal policy relating to the evaluation of humanitarian aid:
– Agree 41%
– Strongly agree 36%
– Disagree 14%
– Strongly disagree 9%
This policy is distinct from the evaluation policy for development aid:
– Agree 36%
– Strongly agree 18%
– Disagree 23%
– Strongly disagree 23%
The policy makes reference to utilisation and follow-up of evaluations:
– Agree 41%
– Strongly agree 32%
– Disagree 18%
– Strongly disagree 9%
Timeliness
– Depends on why the evaluation is being done
– For learning: findings are needed in time to change programmes
– Real-time evaluations (RTEs) are a response to this
– Need integrating into the programme cycle
Evaluations are specifically timed to meet programme management requirements:
– Agree 55%
– Strongly agree 18%
– Disagree 28%
Quality, not quantity
– Finite capacity for using evaluations
– Reflection takes time
– Unused evaluations lower morale and credibility
There is capacity within my organisation to reflect on, absorb and act upon the findings of evaluations:
– Agree 60%
– Disagree 32%
– Strongly disagree 9%
Capacity Area 3: Evaluation processes and systems
Develop a strategic approach
– Where are the problem areas?
– Not necessarily appropriate to cover everything (SIDA example)
– Where is change most likely?
– Where is change most needed?
My organisation has developed and implemented a strategic approach to selecting what should be evaluated:
– Disagree 55%
– Agree 36%
– Strongly agree 9%
Involve key stakeholders
– Evaluations are political
– Need buy-in early in the process
– Think downward accountability as well as upward
– Don’t have too many stakeholders!
– Reference groups can help
There is a mechanism for involving key stakeholders in the evaluation process:
At the outset of the evaluation:
– Agree 50%
– Strongly agree 18%
– Disagree 32%
In drawing up the ToR:
– Agree 41%
– Strongly agree 18%
– Disagree 41%
In commenting upon final lessons and recommendations:
– Agree 59%
– Strongly agree 18%
– Disagree 23%
Develop a range of evaluation tools
– An unnecessary framework category?
– Timeliness already covered
– Brevity linked to dissemination
– Focus on key issues linked to working out why you are doing the evaluation in the first place
One of the most supported issues!
More evaluation tools are needed within my organisation:
– Agree 64%
– Strongly agree 18%
– Disagree 19%
Mix internal and external staff
– Outsiders learn the most! (process use)
– Expensive, and take the learning with them
– Don’t understand the nuances
Mixed teams/insider teams offer advantages compared with teams comprised solely of external evaluators:
– Agree 73%
– Strongly agree 27%
Technical quality
– For a good practice review
Dissemination
– This is a key strategic issue
– Think about this when commissioning the evaluation
– Can include targeted one-to-one briefings and a range of products: TV documentaries, themed reports
My organisation has a dissemination strategy for evaluation findings:
– Agree 45%
– Strongly agree 18%
– Disagree 37%
Ensure a management response
– UNDP Evaluation Resource Centre
– FAO: two-year reviews of the implementation of findings
There is a formal system within my organisation in which managers respond to evaluation findings and recommendations:
– Agree 45%
– Strongly agree 18%
– Disagree 36% (NGOs and Red Cross)
There is follow-up of this response over time to see whether progress in implementing recommendations has been made:
– Disagree 50%
– Strongly disagree 9%
– Agree 32%
– Strongly agree 9%
Meta-evaluations
– Very high demand for themed work
– Identify key benchmarks and run them through a series of evaluations (e.g. HR in emergencies)
– Corroborate findings across different evaluations
Meta-evaluations are carried out within my organisation:
– Strongly disagree 9%
– Disagree 41%
– Agree 36%
– Strongly agree 14%
Capacity Area 4: Supporting processes and mechanisms
Improve monitoring
– Evaluations should not be a substitute for a lack of monitoring
– Evaluations are weakened by a lack of data
– Should monitoring be standardised? SPHERE?
Monitoring of humanitarian programmes could be improved within my organisation:
– Agree 50%
– Strongly agree 45%
– Disagree 5%
Standardised monitoring would improve the quality of evaluations:
– Agree 59%
– Strongly agree 14%
– Disagree 27%
Involve beneficiaries
– Evaluations driven by upward accountability demands
– HAP report: more needs to be done!
– Remains a challenge
All humanitarian evaluations involve a beneficiary survey / recipient feedback mechanism of some sort:
– Disagree 41%
– Strongly disagree 9%
– Agree 32%
– Strongly agree 18%
All humanitarian evaluations should involve a beneficiary survey of some sort:
– Agree 41%
– Strongly agree 41%
– Disagree 18% (UN)
Incentives
– Career incentives for evaluators
– Informal incentives (culture again…)
– Who demands evidence?
– DFID study: careers advanced by original ideas, not by learning lessons
– Formal incentives also important: training
Working within the evaluation department restricts career opportunities within my organisation:
– Disagree 64%
– Strongly disagree 9%
– Agree 28%
Finances
– Improving programmes through evaluation can be cost-effective, provided evaluations are used!
– Take a strategic approach to allocating evaluation resources
A rational approach is taken to assigning resources to evaluation within my organisation:
– Agree 50%
– Strongly agree 14%
– Disagree 36%
Peer networks
– ALNAP
– UNEG
– Customised peer groups
Evaluation networks have proven useful in developing evaluation policy and practice:
– Agree 59%
– Strongly agree 36%
– Disagree 5%
Media
– Saints or sinners?
– Fear of criticism
– Do we need to engage with the media as a group?
There is pressure within my organisation to keep critical reports out of the public domain:
– Disagree 59%
– Strongly disagree 9%
– Agree 18%
– Strongly agree 14%
The anticipated media response affects the evaluation culture in my organisation:
– Agree 41%
– Strongly agree 9%
– Disagree 41%
– Strongly disagree 9%
Donors
– Drive much evaluation
– Can be an impediment to learning: lack of ownership, box-ticking
– Donor capacity also limited
Donor / external agency demands are the key factor influencing evaluation practice within my organisation:
– Agree 36%
– Strongly agree 23%
– Disagree 27%
– Strongly disagree 14%
Conclusions
– Complex
– Needs a significant change to make evaluations demand-led
– Work out why we are doing them first! The rest then follows…
Way forward
– Comments on the paper: general and by peer group
– Issues raised today
– Survey results
– In-depth analysis of agencies?
– Self-assessment tool?
– ALNAP committed to supporting this
– Expert change advisors?