Measuring and Monitoring Program Outcomes



2 Measuring and Monitoring Program Outcomes
Evaluation: A Systematic Approach, Rossi et al., Chapter 7
Ian Malcolm

3 Definition of Outcome
“The state of the target population or the social conditions that a program is expected to have changed.” (Rossi et al., 2004, p. 204)
By this definition, program targets need not actually have changed or been affected by the program; the outcome is simply the observed state of those targets or conditions.

4 Key Terms
Outcome Level – the status of an outcome at some point in time.
Outcome Change – the difference between outcome levels at different points in time.
Program Effect – the portion of an outcome change that can be attributed uniquely to the program, as opposed to the influence of other factors (a numeric sketch follows below).
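A minimal numeric sketch of how these three terms relate (all figures are invented, not from Rossi et al.): the program effect is the part of the outcome change left over after subtracting change attributable to other factors.

```python
# Hypothetical illustration of outcome level, outcome change, and program effect.
# All numbers are invented for the example (e.g., % of firms in compliance).

baseline_level = 40.0   # outcome level before the program
followup_level = 65.0   # outcome level after the program

outcome_change = followup_level - baseline_level   # 25 points

# Estimated change attributable to factors other than the program
# (e.g., a secular trend observed in a comparison group) -- an assumption here.
change_from_other_factors = 10.0

program_effect = outcome_change - change_from_other_factors   # 15 points
print(f"Outcome change: {outcome_change}, program effect: {program_effect}")
```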

5 Identifying Relevant Outcomes
The determination of relevant outcomes can be affected by the following:
Stakeholder Perspectives
Program Impact Theory
Prior Research
Unintended Outcomes

6 Stakeholder Perspectives
Stakeholders’ values are often articulated in the stated program objectives, goals, and mission. Often, the evaluator must translate input from stakeholders into practical form.

7 Program Impact Theory
Expressed through logic modeling. Outcomes are identified at two levels:
Proximal outcomes – the outcomes that program services are expected to affect most directly and immediately. The program has the greatest capability to affect proximal outcomes, and changes in them are easiest to attribute to the program.
Distal outcomes – outcomes that occur more indirectly and over longer periods of time. Attributing change in them to the program is much more difficult, partly because factors other than the program also affect them.

8 Program → Proximal Effects → Distal Effects
Example: Metal finishers attend environmental workshops → increased compliance with environmental regulations → decreased toxic waste discharge → better water quality

9 Prior Research Referring to prior research can call attention to relevant outcomes that might have been overlooked. Research may turn up standard definitions and measures that have policy significance.

10 Unintended Outcomes
Outcomes not identified in the program’s impact theory.
Prior research may shed light on potential unintended outcomes.
Program personnel may provide insight into unintended outcomes that arise during formative evaluation, so it is important for the evaluator to stay in close contact with the program during implementation (e.g., the Navajo schools case, in which certain key information revealed an important unintended outcome).

11 Measuring Outcomes
Selection of important outcomes for measurement must be done carefully:
Some are not essential.
Some (especially distal outcomes) may not be feasible to address because of difficulty and/or cost.

12 Measurable outcomes must be observables that vary systematically.
Some are one-dimensional and simple to assess (e.g., whether students from the XYZ CC automotive program are able to diagnose electronic problems).
Some are multidimensional (e.g., chargeable juvenile offenses, p. 210) and require the assessment of multiple facets to evaluate program effectiveness.

13 Examples of Multidimensional Outcomes
Juvenile delinquency:
Number of chargeable offenses in a given period
Severity of offenses
Type of offense: violent, property crime, drug offenses, other
Time to first offense from an index date
Official response to offense: police contact or arrest; court adjudication, conviction, or disposition
Toxic waste discharge:
Type of waste: chemical, biological; presence of specific toxins
Toxicity or harmfulness of waste substances
Amount of waste discharged during a given period
Frequency of discharge
Proximity of discharge to populated areas
Rate of dispersion of toxins through aquifers, the atmosphere, food chains, etc.

14 Since many outcomes are multidimensional, it will often be necessary to use multiple measures so that the program’s overall impact can be assessed accurately and is not underrepresented because a single, poorly performing measure was relied on. It may be possible to statistically combine related measures into a composite measure with greater validity (see p. 217).
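One common way to build such a composite, shown here as a sketch rather than the book’s prescribed procedure (the data and the equal-weighting choice are assumptions): standardize each related measure and average the z-scores so that no single scale dominates.

```python
import numpy as np

# Invented scores for five program participants on three related outcome measures
# (e.g., different facets of delinquency).
measures = np.array([
    [2, 14, 1],
    [0,  3, 0],
    [5, 22, 2],
    [1,  8, 0],
    [3, 17, 1],
], dtype=float)

# Standardize each column (measure) to mean 0, SD 1, then average across measures.
z = (measures - measures.mean(axis=0)) / measures.std(axis=0)
composite = z.mean(axis=1)
print(composite)  # one composite outcome score per participant
```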

15 Measurement Procedures
Must be operationalized and systematized.
Procedures are often relatively standard for a given area of study.
Ready-made procedures may be available.

16 Key Properties of Measurement Procedures
Reliability – the extent to which consistent results are obtained when measuring the same thing.
Validity – the extent to which the procedure measures what it is intended to measure (may be assessed partly through comparison with alternative measures).
Sensitivity – the extent to which the values on the measure change when there is a change or difference in the thing being measured (reliability and sensitivity are sketched with toy data below).
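A rough sketch of how two of these properties might be checked with data (the test-retest design and all numbers are assumptions for illustration): reliability via the correlation between repeated measurements of the same targets, and sensitivity via the separation between groups known to differ on the outcome.

```python
import numpy as np

# Reliability (test-retest): invented scores for the same six targets measured twice.
time1 = np.array([10, 14,  9, 20, 16, 12], dtype=float)
time2 = np.array([11, 13, 10, 19, 17, 12], dtype=float)
reliability = np.corrcoef(time1, time2)[0, 1]
print(f"test-retest correlation: {reliability:.2f}")

# Sensitivity: a sensitive measure should separate groups known to differ on the outcome.
known_low  = np.array([ 9, 10, 11,  8], dtype=float)
known_high = np.array([18, 20, 17, 19], dtype=float)
print(f"mean difference between known groups: {known_high.mean() - known_low.mean():.1f}")
```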

17 Outcome Monitoring
Continual measurement and reporting of indicators of the status of the social conditions the program is accountable for improving.
Key outcome indicators must be selected and monitored. They should be highly responsive to program effects and should be things that only the program is likely to affect appreciably (a monitoring sketch follows below).
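A small sketch of what ongoing outcome monitoring can look like in practice (the indicator name, values, and the rule that an increase counts as worsening are all assumptions): record a key indicator at regular intervals and flag periods where it moves in the wrong direction.

```python
# Hypothetical quarterly values of a key outcome indicator
# (e.g., toxic-discharge violations per 100 inspections; lower is better).
indicator = {"2023Q1": 14, "2023Q2": 12, "2023Q3": 13, "2023Q4": 9, "2024Q1": 10}

previous = None
for quarter, value in indicator.items():
    # Flag quarters where the indicator worsens (here, any increase is a worsening).
    flag = "  <- worsened vs. prior quarter" if previous is not None and value > previous else ""
    print(f"{quarter}: {value}{flag}")
    previous = value
```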

18 Pitfalls in Monitoring
Indicators may be inappropriate or fail to address important outcomes.
“Corruptibility of indicators” – indicators can be manipulated or the books cooked to make performance look better than it is.
Interpretation problems – indicators must be interpreted in context, and other potentially influential factors must be considered.


20 Outcome Data Interpretation
Changes in key variables during program implementation must be taken into account (this highlights the need for monitoring).
Process and service-utilization information is important, especially when comparing sites.
Standards are needed for judging the quality of outcomes within the limitations of the data; pre-post comparisons may be helpful (sketched below), though confounding effects may exist.
Generally, outcomes are judged by administrators, stakeholders, and experts in relation to expectations of performance quality. These judgments are easy at the extremes but harder when data fall in the mid-range.
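A minimal sketch of the pre-post comparison mentioned above (participant scores are invented; as the slide notes, such a comparison shows outcome change, not necessarily program effect, because confounding factors may also be at work).

```python
import numpy as np

# Invented pre- and post-program scores for the same eight participants.
pre  = np.array([52, 47, 60, 55, 49, 58, 51, 62], dtype=float)
post = np.array([58, 50, 63, 61, 50, 64, 55, 66], dtype=float)

diff = post - pre
print(f"mean change: {diff.mean():.1f} (SD {diff.std(ddof=1):.1f}, n={len(diff)})")
# The mean change here is an outcome change; attributing it to the program
# would require ruling out other influences (see the key terms on slide 4).
```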

21 Source
Rossi, P. H., Lipsey, M. W., & Freeman, H. E. (2004). Evaluation: A Systematic Approach (7th ed.). Thousand Oaks, CA: Sage Publications.

