PPA 502 – Program Evaluation

Presentation transcript:

PPA 502 – Program Evaluation Lecture 3a – Expressing and Assessing Program Theory

Introduction The focus in this chapter is on the second of the five evaluation domains in the RFL (Rossi, Freeman, and Lipsey) evaluation framework: needs assessment, program theory assessment, implementation assessment, impact assessment, and efficiency assessment.

Introduction You must figure out exactly what the program theory is; that is, you must articulate the program theory and make it explicit so that it can be examined and evaluated. Typically, a program theory will be implicit at best, which is why the evaluator must work to understand how the program is intended to operate. Program theory assessment (like needs assessment) is very important because it is foundational for the later types of evaluation in the RFL model (i.e., implementation assessment, impact assessment, and efficiency assessment).

The Evaluability Assessment Perspective Two working definitions: (1) assessing whether the program is ready to be managed for results, what changes are needed to do so, and whether the evaluation would contribute to improved program performance (from Foundations of Program Evaluation, 1991, Sage Publications); and (2) negotiation and investigation undertaken jointly by the evaluator, the evaluation sponsor, and possibly other stakeholders to determine whether a program meets the preconditions for evaluation and, if so, how the evaluation should be designed to ensure maximum utility (from RFL, p. 154).

The Evaluability Assessment Perspective An evaluability assessment explores the objectives, expectations, and information needs of program managers and policy-makers; explores program reality; assesses the likelihood that program activities will achieve measurable progress toward program objectives; and assesses the extent to which evaluation information is likely to be used by program management. The products of an evaluability assessment are (1) a set of agreed-on program objectives, important side effects, and performance indicators on which the program can realistically be held accountable, and (2) a set of evaluation/management options that represent ways in which management can change program activities, objectives, or uses of information so as to improve program performance.
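To make the two products concrete, here is a minimal sketch that records them as plain Python data structures. All class and field names are illustrative conveniences, not terminology from RFL or Wholey:

```python
# A minimal sketch (not from RFL or Wholey) of the two evaluability-
# assessment products as plain records; all names are illustrative.
from dataclasses import dataclass, field

@dataclass
class AgreedProgramModel:
    """Product 1: what the program can realistically be held accountable for."""
    objectives: list[str]
    side_effects: list[str]            # important side effects to track
    performance_indicators: list[str]  # agreed-on measurable indicators

@dataclass
class ManagementOption:
    """Product 2 (one entry): a way management could change the program."""
    description: str
    changes_activities: bool = False
    changes_objectives: bool = False
    changes_information_use: bool = False

@dataclass
class EvaluabilityAssessment:
    agreed_model: AgreedProgramModel
    options: list[ManagementOption] = field(default_factory=list)
```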

The Evaluability Assessment Rules 1. Work with management or others who are likely to use the evaluation results, and determine their information needs.

The Evaluability Assessment Rules 2. Do not evaluate a program that is not ready to be evaluated. Are the objectives realistic given the program's conceptualization and resources? If not, tell your client the program is not ready to be evaluated. Conduct an evaluation only when someone is going to use the results (especially managers who have influence over the development and operation of the program). 3. Articulate the program theory. When the intended program theory is carefully examined and combined with needs assessment data, you will understand whether the program has a chance of working. If a program is based on a faulty conceptualization, it will fail no matter how vigorously it is implemented. In an earlier chapter, we called this problem theory failure.

The Evaluability Assessment Rules 4. Make sure that measurable performance indicators can be obtained if you are going to recommend that a program be evaluated.

The Evaluability Assessment Rules 5. Different program circumstances call for different levels or types of evaluation; Wholey calls this the sequential purchase of information. He discusses three key types of evaluation (not covered in RFL): (1) rapid feedback evaluation, a quick assessment of program performance in terms of agreed-upon objectives and indicators that also provides designs for a more valid, reliable, full-scale evaluation; (2) performance monitoring, the establishment of an ongoing process and outcome monitoring system; and (3) intensive evaluation, a rigorous experimental evaluation that tests the validity of the causal assumptions linking program activities to outcomes. 6. During the evaluability assessment, determine whether there should be (a) no program evaluation, (b) a rapid feedback evaluation, (c) performance monitoring, or (d) an intensive evaluation.
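As an illustration of Rule 6, the choice among the four options can be thought of as a small decision rule. The ordering of the checks below is an assumption made for teaching purposes, not a procedure prescribed by Wholey or RFL:

```python
# Hypothetical decision helper for Rule 6; the ordering of checks is an
# illustrative assumption, not a rule prescribed by Wholey or RFL.
def recommend_evaluation(ready: bool,
                         results_will_be_used: bool,
                         need_quick_answer: bool,
                         need_causal_test: bool) -> str:
    if not (ready and results_will_be_used):
        return "no program evaluation"       # Rules 1-2: not ready, or no user
    if need_quick_answer:
        return "rapid feedback evaluation"   # quick assessment + fuller designs
    if need_causal_test:
        return "intensive evaluation"        # rigorous test of causal assumptions
    return "performance monitoring"          # ongoing process/outcome tracking

# Example: a ready program with a committed user that must test its
# cause-and-effect assumptions -> "intensive evaluation".
print(recommend_evaluation(True, True, False, True))
```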

Eliciting and Expressing Program Theory An articulated program theory is an explicitly stated version of program theory that is spelled out in some detail as part of a program’s documentation and identity or as a result of efforts by the evaluator and stakeholders to formulate the theory (RFL’s definition).

Eliciting and Expressing Program Theory Typically, a program theory will not be fully articulated. What will be present instead is an implicit program theory, a largely unstated theory embedded in the program's assumptions and operations. RFL define implicit program theory as the set of "assumptions and expectations inherent in a program's services and practices that have not been fully articulated and recorded."

Eliciting and Expressing Program Theory First, you need to come up with a clear definition of the program, its objectives, and its boundaries that is negotiated and agreed upon by the evaluator and the primary stakeholders.

Eliciting and Expressing Program Theory Next, you can use some or all of the following procedures to articulate, explicate, or describe the program theory: review program documents and other secondary or extant data; interview program personnel and other stakeholders; and make site visits to observe the program.

Eliciting and Expressing Program Theory Topics to explore: determine the program goals and objectives; determine the program functions, components, and activities; determine the logic or sequence that links those functions, activities, and components; and corroborate the resulting theory description with the primary stakeholders.
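One way to keep the elicited material organized is to record it in a single structure. This is a hypothetical sketch; the field names mirror the topics above but are not RFL terminology:

```python
# Hypothetical record for an elicited program theory; field names mirror
# the topics on this slide but are illustrative, not RFL terminology.
from dataclasses import dataclass, field

@dataclass
class TheoryDescription:
    goals: list[str] = field(default_factory=list)
    objectives: list[str] = field(default_factory=list)
    components: list[str] = field(default_factory=list)  # functions/activities
    # (activity, intended result) pairs capturing the program's logic/sequence
    causal_links: list[tuple[str, str]] = field(default_factory=list)
    corroborated: bool = False  # confirmed with the primary stakeholders?

# Example entries for a hypothetical tutoring program:
theory = TheoryDescription(
    goals=["Improve reading achievement"],
    components=["Weekly one-on-one tutoring"],
    causal_links=[("Weekly one-on-one tutoring", "Improved reading scores")],
)
```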

Assessing Program Theory First, assess the program theory in relation to the identified problem and social needs. In assessing the impact theory (i.e., the causal theory), compare the specifics of the impact theory to what the needs assessment data indicate is required to improve local conditions and problems. In assessing the process theory (i.e., the theory of how the program is maintained and implemented), compare the assumptions in the service utilization and organizational plans with the needs assessment data on the target population's opportunities to obtain services and on how the population is likely to react to the program.

Assessing Program Theory Second, assess the program theory by using logic: determine the plausibility of the theory's assumptions, the logic of the various theory components, and the theory's overall logic. Is the program theory well defined? Is the program theory reasonable?
Question 1. Are the program goals and objectives well defined, and are they measurable?
Question 2. Are the program goals and objectives feasible, and is it realistic to assume that they can actually be attained as a result of program action?
Question 3. Is the change process presumed in the program theory (especially the impact theory) plausible? For example, do you agree with the cause-and-effect chain?
Question 4. Are the program procedures for identifying members of the target population, delivering service to them, and sustaining that service through completion well defined and sufficient?
Question 5. Are the components, activities, and functions of the program well defined and sufficient to attain the intended program goals and objectives?
Question 6. Are the resources allocated to the program and its various components and activities adequate?
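The six questions can be applied as a simple checklist during a review session. The encoding below is only an illustrative convenience; the questions themselves come from the slide, while the review loop is an assumption, not a scoring method prescribed by RFL:

```python
# The six questions from this slide as a checklist; the review loop is an
# illustrative convention, not a scoring method prescribed by RFL.
THEORY_QUESTIONS = [
    "Are the goals and objectives well defined and measurable?",
    "Are the goals and objectives feasible and realistically attainable?",
    "Is the presumed change process (impact theory) plausible?",
    "Are procedures for identifying, serving, and retaining the target"
    " population well defined and sufficient?",
    "Are the components, activities, and functions well defined and"
    " sufficient for the intended goals and objectives?",
    "Are the allocated resources adequate?",
]

def flag_weak_points(answers: list[bool]) -> list[str]:
    """Return the questions answered 'no'; each flags a part of the program
    theory that needs revision before further evaluation."""
    return [q for q, ok in zip(THEORY_QUESTIONS, answers) if not ok]

# Example: everything holds except resource adequacy (question 6).
print(flag_weak_points([True, True, True, True, True, False]))
```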

Assessing Program Theory Third, assess the program theory by comparing it with evidence from research and practice. Is the program theory compatible with research evidence (both applied research and basic research) and practical experience of others?

Assessing Program Theory Fourth, assess the program theory by comparing it with what you observe when you examine the program in operation.

Conclusion The outcome of the process described in this chapter is a description of the program theory and an evaluation (i.e., a judgment of the merit and worth) of that program theory. The evaluator can also provide formative evaluative information by pointing out which components, parts, or activities of the program need to be revised or reconceptualized. It is important that the stakeholders participate in each step of the process outlined in this chapter: they must buy into the program theory, they must implement it, and they are the ones you hope will use the results of your theory assessment.