Designing Influential Evaluations
Session 2: Topics & Timing
Uganda Evaluation Week - Pre-Conference Workshop, 19th and 20th May 2014
Topics

Who decides what to evaluate, and how are topics prioritised?
◦ Discuss in small groups with your neighbours what factors might need to be taken into account when considering what to evaluate.
◦ Prepare to share your ideas in plenary.
Selecting programmes

High stakeholder interest
Strong evidence base
Innovative or pilot programme
Cross-cutting concerns, for example anti-corruption or value for money
Level of contentiousness and risk
Financial value
Strategic importance to government objectives, or a particular policy priority
Evaluability (whether it is possible to realistically evaluate a programme)
Evaluability assessment

Key questions:
Is the programme significant and relevant enough to merit evaluation?
Are programme objectives well and clearly defined, plausible (realistic) and measurable?
Can the evaluation be done in time to be useful and used?
Can the results of the evaluation influence decisions about the programme?
Is the cost of the evaluation offset by the likely benefits it can bring to the improvement of the programme or policy?
The political context is critical …

Programmes are political creatures:
◦ Identified, designed, debated, endorsed and funded through political processes
◦ Values, interests and policy horizons vary
◦ Survival is a potent political force

Evidence-based policy vs. policy-based evidence:
◦ Selective evidence, data mining, etc.

Evaluation is politics:
◦ Evaluation governance
◦ Decisions to evaluate or not to evaluate
◦ Choice of evaluation methods
The use of evidence to inform policy making is desirable

To increase transparency, as citizens have the right to know how and why governments have made decisions that will affect them.
To increase accountability to the electorate by making information about government decisions available.
To improve policy making through better government decisions and improved effectiveness in their implementation.
To assist in resource allocation through better government decisions on the choice of policies and programmes to implement.
To provide feedback to influence future policies and programmes.
Historically, evaluations have not been well used

Many reasons have been given for the low rate of evaluation utilisation, including:
◦ poor timing;
◦ lack of consultation with the evaluation clients and a failure to understand their information needs;
◦ findings not disseminated to potential users in a timely way;
◦ information not presented in a way which makes it easy to use.

Evaluators are often too concerned with questions of methodology and pay too little attention to why the evaluation is being done or how it will be used. The culture of many organisations makes it difficult to accept the kinds of criticism which evaluations inevitably present, and a common defensive reaction in the face of criticism is to say the evaluation was not useful.
Case study – Uganda PETS

Working in small groups, read the short example of an influential evaluation from Uganda: the Public Expenditure Tracking Survey (PETS).
What general lessons emerge from this case?
Discuss the implications for selecting evaluation topics in plenary.
Lessons learned on the design of useful evaluations
Summary

Topic selection needs care and cunning.
It's worth spending time to determine evaluability before finalising the choice of topics.
Political context & timing are critical.
User needs are also important.
END