Evaluation use and usability: theoretical foundations and current state of international thinking
Murray Saunders, IOCE / Lancaster University
Joint Meeting of the DG REGIO Evaluation Network and the ESF Evaluation Partnership, Gdansk, 8 July 2011
Mapping the territory
Drawing on international sources (IOCE, World Bank, SE Asia CoE, DG REGIO):
1. Critical distinctions in evaluation use
2. The use of evaluation outputs: what counts as use; what counts as usability
3. Some difficult questions: designing evaluations, factors influencing use, and questions produced through RUFDATA
Why a re-emphasis on the uses and impact of evaluation as a resource for policy learning and development?
- The urge to sense-make in increasingly complex environments: do evaluations tell us what is going on and what works?
- Social and political imperatives (issues of transparency, resources, legitimacy and equity): do evaluations contribute to public debate?
- Methodological debate, with difficulties and uncertainties in addressing end points (attribution, causality, alignment and design): do evaluations provide authoritative evidence for policy and developmental strategy?
- Evaluations cost time and money: can we move away from evaluations as expensive ritual or compliance, toward evaluations as use objects?
The claim is that, used effectively, evaluation offers a range of specific benefits or outcomes:
- stronger public sector planning
- more efficient deployment of resources
- improved management and implementation of programmes or other policy interventions
- stronger ownership and partnership amongst actors with a stake in programmes
- greater understanding of the factors determining the success of programmes
- broader scope to assess the value and costs of interventions
(Martin Ferry and Karol Olejniczak, The Use of Evaluation in the Management of EU Programmes in Poland, Ernst & Young, Warsaw, 2008)
Some distinctions of use and misuse
- Instrumental: decision makers use the evaluation findings to modify the object of the evaluation in some way.
- Conceptual: the evaluation findings help program staff understand the program in a new way.
- Enlightenment: the evaluation findings add knowledge to the field and thus may be used by anyone, not just those involved with the program or its evaluation.
- Process use: cognitive, behavioural, program and organizational changes resulting from engagement in the evaluation process and learning to think evaluatively.
- Persuasive or symbolic: persuading important stakeholders that the program or organization values accountability, or hiring an evaluator to legitimize a decision already made before the evaluation was commissioned.
(Dreolin N. Fleischer and Christina A. Christie, American Journal of Evaluation: 158)
The idea of process use: effects of undertaking evaluations
Refers to the unintended or intended effects of the process of carrying out an evaluation: formative evaluation in action (Patton now talks about developmental evaluation, i.e. continuous adaptation).
- Foregrounding new issues for managing an intervention
- Drawing attention to hot spots or problem areas
- Forcing attention on difficult areas
- Providing a voice for the powerless
- Drawing attention to timelines
- Making participants think about audience and users
- A policing role
Turning to the uses and usability of the outputs from an evaluation
What forms do evaluation outputs take? A report containing:
- Data and analyses (narrative analysis, statistical analysis, comparisons, modelling)
- Evidence of process, outputs, results, effects, outcomes
- Descriptive analysis (what)
- Diagnostic analysis (how)
- Prescriptive analysis (what should be)
- Cases of good practice
- Scenarios
- Recipients' experience
By their use we are referring to the capacity of outputs to contribute to development (policy, practice, strategic management). This involves enabling:
- Changes in policies
- Changes in practices
- Changes in systems and protocols
- Changes in thinking
- Changes in culture
In order to have developmental impact, an evaluation output must contribute to decisions about sustainable changes/improvement.
Stages of use of an evaluation output: a use audit for stakeholders
0. No awareness
1. Awareness: little concern or understanding of implications
2. Informational: awareness plus interest in knowing more about implications for policy or practice
3. Personal: beginning to analyse potential implications for policy and practice and impacts on planning (contribution)
4. Management: attention on difficulties in the processes and tasks involved in developing new practices/policies on the basis of evaluation outputs
5. Consequence: attention on the impact on stakeholders of new practices/policies, their relevance, evaluation and implied changes derived from evaluation outputs
6. Collaboration: co-ordinating and co-operating with others in using new practices/implementing policies
7. Refocusing: attention now on adaptation, major changes, alternatives to original ideas, creativity
Use as engagement: engagement practices sit on a continuum from less to more engagement:
- Distributive or dissemination practice (less engagement): report, executive summary, article
- Presentational practice: seminars, presentations, active workshops, embodiments
- Interactional practice (more engagement): working alongside colleagues; analysis of situational enabling and constraining factors for change (with decision makers/users)
Differences in meaning and practice between use and usability
- Usability refers to the design of an evaluation: I suggest that there are seven design decisions that can critically affect its usability.
- Use refers to characteristics of the organisational context and its capacity to respond to evaluation outputs.
- Both dimensions are important in explaining high- or low-use environments.
Towards a framework: what counts as use?
Use refers to the way in which the outputs of an evaluation act as a resource for onward practice, policy or decision making.
This requires bridging or boundary-crossing practices. The evaluation output acts as a bridging artefact or tool:
- Providing examples of interesting practice
- Suggesting ways of moving from A to B (theories of change or engagement strategies)
- Connecting with existing practices (en-grooved practice and how to un-block it)
- Designing evocative resources for change management
- Moving to decisions on continuation, funding or new policy
- The validity and authenticity of the evidence is foregrounded
Towards a strategy to maximise use
- Embed the output in decision-making cycles (clear knowledge of when decisions take place and who makes them)
- Build a clear understanding of organisational memory (how evaluations might accumulate and how this output connects)
- Analyse the capacity of an organisation to respond
- Use systemic processes (feeding into structures that are able to identify and act on implications)
- Note that organisations that are lightly bureaucratised (complex adaptive systems) are better placed to respond to tricky or awkward evaluations
- Strongly connect evaluation to power structures
- Keep evaluations congruent: recommendations need to build on what is already in place; avoid suggestions which require total change unless there are resources to back them up
Towards a utilisation framework: what counts as usability?
Usability refers to the way an evaluation design shapes the extent to which its outputs can be used.
Designing evaluations for usability and use: critical questions produced through RUFDATA
- Reasons and purposes [planning, managing, learning, developing, accountability]: potential users know why an evaluation is taking place and have an intention to use it.
- Uses [providing and learning from examples of good practice, staff development, strategic planning, PR, provision of data for management control, planning and milestones]: rehearsing use environments in real time with real people by identifying a list of specific practices, for example:
  - tabling the report at a meeting to assess its implications
  - deciding on what those implications might be and acting on them
  - doing so within an agreed timeline
  - undertaking staff development activities on the basis of the findings
  - publicising and disseminating more widely, etc.
Designing evaluations for usability and use: critical questions produced through RUFDATA (continued)
- Foci [activities, aspects, emphases to be evaluated; should connect to the priority areas for evaluation]: create a need to know with key stakeholders through careful selection of foci (co-construction?).
- Data and evidence [numerical, qualitative, observational, case accounts]: render evidence and data sets in ways that non-technical stakeholders can read.
- Audience [community of practice, commissioners, yourselves]: discriminate between different audiences through the style, form and content of the output.
- Timing [coincidence with decision-making cycles, life cycle of projects]: make sure the evaluation output and deliverable deadlines within a proposal coincide with other decision-making cycles.
- Agency [yourselves, external evaluators, a combination]: involve as wide a group as possible in the design.
Factors affecting evaluation use
- Evaluation process: external/internal; inclusive/hierarchic; qualitative/quantitative; methodology used
- Product: quality; relevance; context-specific; reality reflected; communication/engagement strategy
- Evaluative culture and organisational context: findings attached to further funding; value given to evaluation; organisational insertion; external influence (pressure/independence)
- User: attitude; forward-looking to improvement; involvement