PPA 502 – Program Evaluation Lecture 10 – Maximizing the Use of Evaluation Results.

2 PPA 502 – Program Evaluation Lecture 10 – Maximizing the Use of Evaluation Results

3 Ad Hoc Evaluations
Most evaluations do not have an immediate, concrete, and observable effect on specific decisions and program practices.

4 Ad Hoc Evaluations
The most typical impact is one in which the evaluation findings provide additional pieces of information in the difficult puzzle of program action, permitting some reduction in uncertainty. Evaluation results are often absorbed quickly into an organization's culture, changing program reality but usually not the operation of the program.

5 Ad Hoc Evaluations
Five ways to increase the use of evaluation results:
– Develop realistic recommendations that focus on program improvement.
– Explore multiple uses of study data.
– Constantly remind decision makers of findings and recommendations.
– Share findings and recommendations with broad audiences.
– Assign evaluation staff to assist in implementing recommendations.

6 Ad Hoc Evaluations
Develop realistic recommendations that focus on program improvement.
– Decision makers want to improve programs. They appreciate focused evaluations that provide realistic options based on a systematic and independent assessment of programs.
– One approach: complete a thorough analysis of the first 10 percent of the findings. This analysis often provides a valid basis for developing a set of potential recommendations for program change, which can then be reviewed informally by as many stakeholders as possible.

7 Ad Hoc Evaluations
Explore multiple uses of study data.
– Evaluators often limit the use of evaluation data to the questions or hypotheses under investigation.
– Backup data should be made available in summary tables at the end of the report so that different audiences can use them for different purposes.
– The act of sharing data creates goodwill and maximizes the use of the evaluation effort.
– Example: the Michigan Department of Social Services produced demographic program profiles of clients broken down by county.
– Sharing data also creates the potential for unintended program changes.

8 Ad Hoc Evaluations
Constantly remind decision makers of findings and recommendations.
– Write agency newsletter articles describing the findings, the recommendations, and successful implementation of changes to educate the members of the organization.
– Prepare presentations for the agency director to ensure that the findings are incorporated into the director's thinking and public statements.

9 Ad Hoc Evaluations
Constantly remind decision makers of findings and recommendations (contd.).
– Make recommendations for similar changes in other program areas so the organization's members begin to think beyond the one-time implications of a single evaluation.
– Remind other managers of evaluation results during critical executive committee meetings or in informal sessions.

10 Ad Hoc Evaluations
Share findings and recommendations with broad audiences.
– Stakeholders are rarely interested in the methodology, but they are greatly interested in how the program should change as a result of the new information from the study.
– Evaluation findings and recommendations should be presented in a concise, factual executive summary, with a technical appendix or report available for a complete understanding of the methodology and statistics used.
– Identify the limitations of the design and findings, but highlight the recommended options available to decision makers.

11 Ad Hoc Evaluations
Share findings and recommendations with broad audiences (contd.).
– Sharing results with broad audiences:
  General public: interviews.
  Oversight organizations: briefings.
  Universities: lists of completed studies, copies of studies, and implementation information.
  Other professional staff: share results and publish them in professional journals.

12 Ad Hoc Evaluations
Assign evaluation staff to assist in implementing recommendations.
– Evaluation staff can gain valuable program experience and future credibility by being assigned to assist program staff in implementing recommendations from evaluation findings.

13 Ongoing Performance (Outcome) Monitoring
The key problem with ad hoc evaluation is that cross-sectional information has a limited window of usefulness.

14 Ongoing Performance (Outcome) Monitoring
One way to provide continuous evaluation data is to design a strategy for ongoing collection of client outcome information.
– With such data the organization can regularly develop ways to improve the program by assessing current outcome impact.
– Public organizations need to institutionalize the collection of outcome information so it is regularly assessed and released in management reports.

15 Ongoing Performance (Outcome) Monitoring
Uses for outcome information:
– Orient advisory board members on the impact of programs.
– Provide outcome data to demonstrate the effectiveness of programs.
– Use outcome information to market programs.
– Use outcome information to identify weaknesses and improve program performance.
– Demonstrate accountability.

16 Ongoing Performance (Outcome) Monitoring
Uses for outcome information (contd.):
– Support resource allocation requests.
– Justify budget requests.
– Bolster employee motivation.
– Support performance contracting.
– Implement quality control checks on efficiency measurement.
– Enhance management control.
– Improve communication between citizens and government officials.
– Improve services to customers.

17 Ongoing Performance (Outcome) Monitoring
Elements to encourage greater use of outcome monitoring data:
– Timely data: results should be available within six months.
– Detailed breakouts: disaggregate results by important program characteristics (a minimal sketch of one such breakout follows this slide).
– Worker participation: program staff should participate actively in both the selection of the outcome measures and the information collection.
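The "detailed breakouts" element above is, in practice, a small data-tabulation task. The following is a minimal sketch in Python of one way to compute such a breakout, assuming a hypothetical client_outcomes.csv file with county, service_type, and outcome columns; the file name, column names, and grouping fields are illustrative, not part of the lecture.

    import csv
    from collections import defaultdict

    # Tally successful outcomes and total clients for each combination of
    # the grouping fields (here, hypothetical county and service_type columns).
    def breakout(path, group_fields=("county", "service_type")):
        totals = defaultdict(lambda: [0, 0])   # group -> [successes, clients]
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                key = tuple(row[field] for field in group_fields)
                totals[key][0] += int(row["outcome"])  # 1 = success, 0 = not
                totals[key][1] += 1
        # Print the success rate for each breakout group.
        for key, (successes, clients) in sorted(totals.items()):
            print(", ".join(key), f"- {successes / clients:.0%} of {clients} clients")

    breakout("client_outcomes.csv")

The same grouped counts can feed the summary tables recommended earlier for sharing backup data with different audiences.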

18 Ongoing Performance (Outcome) Monitoring
Elements to encourage greater use of outcome monitoring data (contd.):
– Perception that data are valid:
  Check accuracy.
  Publish the results of validity checks.
  Cross-check outcomes.
  Obtain a high response rate.
  Ensure high face validity.

19 Ongoing Performance (Outcome) Monitoring
Elements to encourage greater use of outcome monitoring data (contd.):
– Demonstrate the usefulness of the outcome data:
  Encourage and train managers to seek information about which program characteristics affect or explain changes in the outcomes.
  Encourage program managers to provide explanatory information along with the performance reports they send to higher levels, particularly if reported outcomes differ from expected outcomes.
  Publicize the outcomes for work units internally to create constructive competition between work groups to improve outcome results.
  Require program managers to estimate the impact of their budget requests on subsequent outcome levels.
  Encourage program managers to use outcome information in their speeches, daily discussions, meetings, and press interviews.

20 Ongoing Performance (Outcome) Monitoring
Elements to encourage greater use of outcome monitoring data (contd.):
– Repeat the measurements. Outcome measurements should be collected at least quarterly, so that changes over time can be identified, the agency gains more confidence in consistent data, and a historical database is developed.
– Mandate outcome reporting. There is a trend at the federal and local levels to require the collection of outcome information; program outcome reporting should be required as part of budget justification.

21 Ongoing Performance (Outcome) Monitoring
Elements to encourage greater use of outcome monitoring data (contd.):
– Develop appropriate information systems. Some outcome information has unique characteristics that require adapting current program data collection systems to track clients over time, especially after they leave the program (a minimal sketch of such a follow-up record appears after this slide).
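Tracking clients after they leave the program, as slide 21 recommends, usually means adding a follow-up record keyed to the client and a follow-up wave. The sketch below is one hypothetical way to model that in Python; the record fields, wave numbering, and three-month interval are assumptions for illustration, not requirements from the lecture.

    from dataclasses import dataclass
    from datetime import date, timedelta

    @dataclass
    class Exit:                 # one row per client who leaves the program
        client_id: str
        exit_date: date

    @dataclass
    class FollowUp:             # one row per post-exit contact attempt
        client_id: str
        wave: int               # e.g., 1 = 3-month follow-up, 2 = 12-month
        contact_date: date
        outcome: str            # e.g., "employed", "re-enrolled", "not reached"

    def due_for_followup(exits, followups, wave=1, months_after=3, today=None):
        """Return client IDs whose follow-up for this wave is now due."""
        today = today or date.today()
        done = {(f.client_id, f.wave) for f in followups}
        return [
            e.client_id
            for e in exits
            if today >= e.exit_date + timedelta(days=30 * months_after)
            and (e.client_id, wave) not in done
        ]

A list like this can feed the quarterly measurement cycle described on slide 20, so that post-exit outcomes enter the same historical database as in-program outcomes.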

22 Conclusions
Ad hoc evaluations:
– Develop realistic recommendations that focus on program improvement.
– Explore multiple uses of study data.
– Constantly remind decision makers of findings and recommendations.
– Share findings and recommendations with broad audiences.
– Assign evaluation staff to assist in implementing recommendations.

23 Conclusions
Ongoing performance (outcome) monitoring:
– Timely reports should be provided.
– Reports should include detailed breakdowns by program, client, and worker characteristics.
– Program staff should actively participate in defining outcome measures and in the data collection process.
– Outcome data should have high face validity.
– The usefulness of outcome information should be demonstrated.
– Outcome measurements should be repeated on a regular basis.

24 Conclusions
Ongoing performance (outcome) monitoring (contd.):
– Performance monitoring can be mandated to ensure the collection of data over time and across different political administrations.
– Outcome information systems may need to be modified to reflect the unique needs of tracking clients over time.

