EVAL 6000: Foundations of Evaluation

1 EVAL 6000: Foundations of Evaluation
Final lecture!

2 (Semi) In-Depth Examination of Five Evaluation Approaches
Utilization-focused evaluation
Participatory evaluation
Theory-driven/theory-based evaluation
CIPP model for evaluation
Consumer-oriented evaluation (Scriven’s Key Evaluation Checklist approach)

3 To facilitate a clearer understanding of these evaluation approaches, we will use Heifer Project International (HPI) as a case example, providing context for discussing how each approach might be applied in practice

4 Heifer Project International (HPI)
Aim is to reduce poverty, hunger, and social inequities through strategies aimed at creating self-reliance rather than providing short-term relief
“Passing on the gift” is one of the unique attributes that sets Heifer apart from other international development initiatives

5 Goals, Values, Cornerstones, and Indicators
Goals: Food & Income Security; Resource Sharing (POG); Environmental Protection; Education & Empowerment; Policy, Practice, & System Change; Relationships Fostering
Values: Basic Needs; Livestock Care & Management; Environment Care & Management; Education; Empowerment; System & Policy Improvement
Cornerstones: Passing on the Gift; Accountability; Sharing & Caring; Sustainability & Self-Reliance; Improved Animal Management; Nutrition & Income; Gender & Family Focus; Genuine Need & Justice; Improved Environment; Full Participation; Training & Education; Spirituality
Indicators: Food Security; Income; Gender Equity; Organizing and Action for Social Change; Strengthening Communities; Policy Change

6 Utilization-Focused Evaluation (UFE)
Evaluation done for and with specific intended primary users for specific, intended uses
Premised on the assertion that evaluations should be judged by their utility and actual use

7 Utilization-Focused Evaluation (UFE)
Evaluator is charged with giving careful consideration to how everything that is done, from beginning to end, will affect use
Is personal and situational, with strong emphasis on the “personal factor”

8 Utilization-Focused Evaluation (UFE)
Does not give primacy to any specific method, model, approach, or ideological orientation (with the exception of an emphasis on use)
Does emphasize The Program Evaluation Standards as a basis for accountability and quality assurance

9 Utilization-Focused Evaluation (UFE)
Advance organizers
What decisions, if any, are the evaluation findings expected to influence?
When will decisions be made? By whom?
When, then, must the evaluation findings be presented to be timely and influential?
What is at stake in the decisions? For whom?
What controversies or issues surround the decision?
What is the history and context of the decision-making process?
What other factors (values, politics, personalities, promises already made) will affect the decision making?

10 Utilization-Focused Evaluation (UFE)
Advance organizers, continued
How much influence do you expect the evaluation to have, realistically?
To what extent has the outcome of the decision already been determined?
What data and findings are needed to support decision making?
What needs to be done to achieve that level of influence?
How will we know afterward if the evaluation was used as intended?
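
Because UFE organizes planning around a fixed set of framing questions, the advance organizers can be handled as a simple checklist. Below is a minimal Python sketch of that idea; the unanswered() helper and the example answer are hypothetical illustrations, not part of the UFE literature:

    # Hypothetical sketch: tracking answers to the UFE advance organizers
    # for a given evaluation. Question wording follows the two slides above.
    ADVANCE_ORGANIZERS = [
        "What decisions, if any, are the evaluation findings expected to influence?",
        "When will decisions be made? By whom?",
        "When must the findings be presented to be timely and influential?",
        "What is at stake in the decisions? For whom?",
        "What controversies or issues surround the decision?",
        "What is the history and context of the decision-making process?",
        "What other factors will affect the decision making?",
        "How much influence do you expect the evaluation to have, realistically?",
        "To what extent has the outcome of the decision already been determined?",
        "What data and findings are needed to support decision making?",
        "What needs to be done to achieve that level of influence?",
        "How will we know afterward if the evaluation was used as intended?",
    ]

    def unanswered(answers):
        """Return the organizers that still lack a documented answer."""
        return [q for q in ADVANCE_ORGANIZERS if not answers.get(q, "").strip()]

    # Example: a partially completed planning record for the HPI case
    # (the answer text is invented for illustration).
    answers = {ADVANCE_ORGANIZERS[0]: "Whether to expand POG to new regions"}
    print(len(unanswered(answers)), "of", len(ADVANCE_ORGANIZERS), "organizers still open")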

11 Participatory Evaluation
An extension of the more restrictive stakeholder-based approach (with elements of UFE)
Emphasis on increasing use through participation
Includes aspects of organizational learning and capacity building through stakeholder participation

12 Participatory Evaluation
Evaluator serves as a coordinator and is responsible for technical support, training, and quality control
Ultimately, the evaluator works collaboratively/in partnership with a select group of intended users

13 Participatory Evaluation
Two primary forms
Practical participatory evaluation (PPE): utilization-oriented (with an emphasis on formative evaluation)
Transformative participatory evaluation (TPE): democratic, emancipatory, empowerment-oriented

14 Participatory Evaluation
Who controls? Technical decision making (evaluator vs. stakeholder)
Who is selected for participation? Stakeholders selected for participation (diverse vs. limited)
How deep? Stakeholder participation (involved in all aspects of inquiry vs. involved as a source for consultation)
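
These dimensions are continua rather than categories, so a given evaluation can be located along each. A minimal sketch of that idea follows; the 0-1 scale and the example values are assumptions for illustration, not drawn from the PPE literature:

    # Hypothetical sketch: locating an evaluation on the three PPE
    # dimensions, each scored 0.0-1.0. Endpoint labels follow the slide.
    from dataclasses import dataclass

    @dataclass
    class PPEProfile:
        control: float    # 0.0 = evaluator controls technical decisions, 1.0 = stakeholders do
        selection: float  # 0.0 = limited stakeholder selection, 1.0 = diverse
        depth: float      # 0.0 = consulted as a source only, 1.0 = involved in all aspects

        def describe(self):
            parts = []
            for name in ("control", "selection", "depth"):
                value = getattr(self, name)
                parts.append("%s: %s (%.1f)" % (name, "high" if value >= 0.5 else "low", value))
            return "; ".join(parts)

    # Entirely illustrative placement of a hypothetical HPI evaluation.
    print(PPEProfile(control=0.6, selection=0.8, depth=0.7).describe())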

15 Original dimensions of PPE

16 Modified dimensions of PPE (Cullen, 2010)

17 Theory-Driven/Based Evaluation
Any evaluation strategy or approach that explicitly integrates and uses stakeholder theory, social science theory, other types of theory, or some combination of these in conceptualizing, designing, conducting, interpreting, and applying an evaluation

18 Theory-Driven/Based Evaluation
Sometimes referred to as program-theory evaluation, theory-based evaluation, theory-guided evaluation, theory-of-action, theory-of-change, program logic, logical frameworks, outcomes hierarchies, realist or realistic evaluation, and program theory-driven evaluation science

19 Theory-Driven/Based Evaluation
All, in some form or another, aim to determine how, why, when, and for whom a program works and under what conditions (i.e., causal explanation)

22 Core Principles and Subprinciples of Theory-Driven Evaluation
1. Theory-driven evaluations/evaluators should formulate a plausible program theory
   a. Formulate program theory from existing theory and research (e.g., social science theory)
   b. Formulate program theory from implicit theory (e.g., stakeholder theory)
   c. Formulate program theory from observation of the program in operation/exploratory research (e.g., emergent theory)
   d. Formulate program theory from a combination of any of the above (i.e., mixed/integrated theory)
2. Theory-driven evaluations/evaluators should formulate and prioritize evaluation questions around a program theory
   a. Formulate evaluation questions around program theory
   b. Prioritize evaluation questions
3. Program theory should be used to guide planning, design, and execution of the evaluation under consideration of relevant contingencies
   a. Design, plan, and conduct evaluation around a plausible program theory
   b. Design, plan, and conduct evaluation considering relevant contingencies (e.g., time, budget, use)
   c. Determine whether evaluation is to be tailored (i.e., only part of the program theory) or comprehensive
4. Theory-driven evaluations/evaluators should measure constructs postulated in program theory
   a. Measure process constructs postulated in program theory
   b. Measure outcome constructs postulated in program theory
   c. Measure contextual constructs postulated in program theory
5. Theory-driven evaluations/evaluators should identify breakdowns and side effects, determine program effectiveness (or efficacy), and explain cause-and-effect associations between theoretical constructs
   a. Identify breakdowns, if they exist (e.g., poor implementation, unsuitable context, theory failure)
   b. Identify anticipated and unanticipated, unintended outcomes (both positive and negative) not postulated by program theory
   c. Describe cause-and-effect associations between theoretical constructs (i.e., causal description)
   d. Explain cause-and-effect associations between theoretical constructs (i.e., causal explanation)
      i. Explain differences in direction and/or strength of relationship between program and outcomes attributable to moderating factors/variables
      ii. Explain the extent to which one construct (e.g., intermediate outcome) accounts for/mediates the relationship between other constructs
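
Principle 4, measuring every construct postulated in the program theory, lends itself to a mechanical consistency check. A minimal sketch follows, assuming a hypothetical slice of an HPI-style program theory; the construct names, links, and measurement plan are illustrative only:

    # Hypothetical sketch: a program theory as typed constructs plus
    # directed cause-and-effect links, checked against a measurement plan
    # (principle 4). All names are invented for illustration.
    constructs = {
        "livestock training": "process",
        "animal management practices": "intermediate outcome",
        "household income": "outcome",
        "community norms": "context",  # a possible moderator
    }

    links = [  # (cause, effect) pairs postulated by the theory
        ("livestock training", "animal management practices"),
        ("animal management practices", "household income"),
    ]

    measurement_plan = {  # construct -> planned measure
        "livestock training": "attendance records",
        "household income": "household survey",
    }

    # Every link endpoint must be a postulated construct.
    for cause, effect in links:
        assert cause in constructs and effect in constructs

    # Principle 4 check: flag postulated constructs with no planned measure.
    unmeasured = [c for c in constructs if c not in measurement_plan]
    print("Unmeasured constructs:", unmeasured)
    # prints: Unmeasured constructs: ['animal management practices', 'community norms']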

23 CIPP Model for Evaluation
The model’s core concepts are denoted by the acronym CIPP, which stands for evaluations of an entity’s context, inputs, processes, and products
Generally targeted toward program managers and other decision makers

24 CIPP Model for Evaluation
Context evaluations are applied to assess needs, problems, assets, and opportunities, plus relevant contextual conditions and dynamics, to help decision makers define goals and priorities and to help the broader group of users judge goals, priorities, and outcomes
Input evaluations serve program planning by helping identify and then assess alternative approaches, competing action plans, staffing plans, and budgets for their feasibility and potential cost-effectiveness to meet targeted needs and achieve defined goals

25 CIPP Model for Evaluation
Process evaluations are used to assess the implementation of plans, to help staff carry out activities and later to help the broad group of users judge program implementation and expenditures and interpret outcomes
Product evaluations are used to identify and assess costs and outcomes (intended and unintended, short-term and long-term) and may be divided into assessments of impact, effectiveness, sustainability, and transportability

27 The Relevance of Four Evaluation Types to Formative and Summative Evaluation Roles
Formative evaluation (prospective application of CIPP information to assist decision making and quality assurance):
Context: Guidance for determining areas for improvement and for choosing and ranking goals (based on assessing needs, problems, assets, and opportunities, plus contextual dynamics)
Input: Guidance for choosing a program strategy (based on identifying and assessing alternative strategies and resource allocation plans); examination of the work plan
Process: Guidance for implementing the operational plan (based on monitoring and judging activities and delivering periodic evaluative feedback)
Product: Guidance for continuing, modifying, adopting, or terminating the effort (based on assessing outcomes and side effects)
Summative evaluation (retrospective use of CIPP information to sum up the effort’s merit, worth, probity, equity, feasibility, efficiency, safety, cost, and significance):
Context: Comparison of goals and priorities to assessed needs, problems, assets, opportunities, and relevant contextual dynamics
Input: Comparison of the program’s strategy, design, and budget to those of critical competitors and to goals and targeted needs of beneficiaries
Process: Full description of the actual process and record of costs; comparison of the designed and actual processes and costs
Product: Comparison of outcomes and side effects to goals and targeted needs and, as feasible, to results of competitive programs; interpretation of results against the effort’s assessed context, inputs, and processes
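
The crosswalk above is effectively a two-by-four lookup table (evaluation role by CIPP type). A minimal sketch that encodes it for quick reference; the cell text is condensed from the slide and the cipp_focus helper is our own name:

    # The formative/summative-by-CIPP crosswalk as a lookup table,
    # condensed from the slide above.
    CIPP_ROLES = {
        ("formative", "context"): "guidance for determining improvements and choosing/ranking goals",
        ("formative", "input"): "guidance for choosing a program strategy; examination of the work plan",
        ("formative", "process"): "guidance for implementing the operational plan",
        ("formative", "product"): "guidance for continuing, modifying, adopting, or terminating the effort",
        ("summative", "context"): "comparison of goals and priorities to assessed needs and context",
        ("summative", "input"): "comparison of strategy, design, and budget to competitors and targeted needs",
        ("summative", "process"): "comparison of designed and actual processes and costs",
        ("summative", "product"): "comparison of outcomes and side effects to goals, needs, and competing programs",
    }

    def cipp_focus(role, etype):
        """Look up the focus for a role ('formative'/'summative') and a
        CIPP evaluation type ('context', 'input', 'process', 'product')."""
        return CIPP_ROLES[(role.lower(), etype.lower())]

    print(cipp_focus("summative", "product"))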

29 Consumer-Oriented Evaluation
Predicated on “values” and “valuing”
Values (a.k.a. criteria and standards) brought to bear are derived from multiple sources (e.g., definitional, needs of impacted population, legal, ethical, functional/logical)
Targeted toward those affected by programs (i.e., consumers)

30 Consumer-Oriented Evaluation
Requires evaluators to investigate values in terms of process, outcomes, costs, comparisons, and generalizability under the “Subevaluations” checkpoints in the Key Evaluation Checklist (KEC)
Explicit integration of empirical “facts” with values (i.e., the fact-value synthesis), as well as the integration of multiple values (i.e., the value synthesis)

31 Consumer-Oriented Evaluation
Organized around 15 checkpoints
Preliminaries
  Executive summary
  Preface
  Methodology
Foundations
  Background and context
  Descriptions and definitions
  Consumers (impactees)
  Resources (a.k.a. “strengths assessment”)
  Values

32 Consumer-Oriented Evaluation
Checkpoints, continued
Subevaluations
  Process
  Outcomes
  Costs
  Comparisons
  Generalizability
Conclusions & Implications
  Synthesis
  Recommendations & Explanations (possible)
  Responsibility & Justification (possible)
  Report & Support
  Metaevaluation
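
Since the KEC is literally a checklist, its structure can be captured directly and used to track which checkpoints an evaluation has addressed. A minimal sketch following the grouping on the two slides above; the remaining() helper and the example completion record are our own additions:

    # The KEC checkpoints as a nested structure, grouped as on the two
    # slides above.
    KEC = {
        "Preliminaries": ["Executive summary", "Preface", "Methodology"],
        "Foundations": ["Background and context", "Descriptions and definitions",
                        "Consumers (impactees)", "Resources", "Values"],
        "Subevaluations": ["Process", "Outcomes", "Costs", "Comparisons",
                           "Generalizability"],
        "Conclusions & Implications": ["Synthesis",
                                       "Recommendations & explanations (possible)",
                                       "Responsibility & justification (possible)",
                                       "Report & support", "Metaevaluation"],
    }

    def remaining(done):
        """Map each section to the checkpoints not yet addressed."""
        return {section: [c for c in items if c not in done]
                for section, items in KEC.items()}

    # Example: an evaluation that has drafted three checkpoints so far.
    done = {"Executive summary", "Background and context", "Costs"}
    for section, items in remaining(done).items():
        print(section, "-", len(items), "checkpoint(s) still open")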

