Evaluation of Health Programs A Postgraduate Overview Course
MODULE 1: Evaluation as a Strategic Tool for Public Programs and Policy Developed by the GEMNet-Health Task Group on Curriculum for Postgraduate Evaluation Courses
Module 1 Learning objectives and sessions
At the end of the module, students will be able to: Identify the role of evaluation within the public policy cycle Illustrate use of evaluation to inform evidence-based policies and programs Sessions: Session 1: Monitoring and Evaluation Within the Public Policy Cycle (2 hours) Session 1A: Background: Overview of Monitoring and Evaluation Session 1B: Policies, Policy Agendas, and Programs Session 1C: Evaluation and Public Policies Session 2: Evaluation to Inform Evidence-Based Policies and Programs (2 hours) This module covers evaluation as a strategic tool for public programs and policy. At the end of the module, students will be able to: Identify the role of evaluation within the public policy cycle Illustrate use of evaluation to inform evidence-based policies and programs This module is divided into two sessions: monitoring and evaluation within the public policy cycle, and evaluation to inform evidence-based policies and programs. The first session is further divided into three subsessions: Background: Overview of Monitoring and Evaluation Policies, Policy Agendas, and Programs (includes the process of making public policies) Evaluation and Public Policies
Module 1, Session 1 Monitoring and evaluation within the public policy cycle
Module 1, Session 1A Background: Overview of monitoring and evaluation
While this module’s focus is on policy, this session, Session 1A, provides necessary background information on monitoring and evaluation. We will discuss the definitions of M&E and the differences between the two, program components, program logic models, program theories of change, and the types of evaluation. At the end of the session we will complete a group activity looking at the types of evaluation. Objective and Outline Background: Overview of monitoring and evaluation (M&E) Objective Define key concepts used in M&E Outline Definition of M&E Differences between M&E Definition of program components Program logic model Program theory of change Types of evaluation
Monitoring and Evaluation
Many M&E experts maintain a strict distinction between monitoring and evaluation. One of our first tasks is to define these basic concepts so that we can all speak the same language. Divide the class into five groups and ask each group to come up with a list of what M&E is and what it is not. Give them five minutes to discuss. Lead the group in a full discussion on M&E. Do this by quickly going from group to group and asking them to shout out one thing at a time while you write it down on the flip chart. Organize the ideas into two columns: “What M&E is” and “What M&E is not.” Then, ask each table to come up with a definition for “monitoring” and a definition for “evaluation.” Next, facilitate a discussion about monitoring and evaluation: How are they different? How do they fit together? Class activity responses: Monitoring is the routine process of data collection and measurement of progress toward program objectives. Monitoring involves counting what we are doing. Monitoring involves routinely looking at the quality of our services. Fill in the discussion with the following points: Evaluation is the use of social research methods to systematically investigate a program’s effectiveness. Evaluation requires a special study design. Evaluation sometimes requires a control or comparison group. Evaluation involves measurements over time. **Make sure to highlight differences between monitoring and evaluation throughout the discussion.** Background/additional information for definitions: Monitoring is the routine process of data collection and measurement of progress toward program objectives. Evaluation is the use of specific study designs and special studies to measure the extent to which changes in desired health outcomes are attributable to a program’s interventions. (From M&E Fundamentals: A Self-Guided Minicourse) Monitoring of a program or intervention involves the collection of routine data that measure progress toward achieving program objectives.
It is used to track changes in program performance over time. Its purpose is to permit stakeholders to make informed decisions regarding the effectiveness of programs and the efficient use of resources. Monitoring is sometimes referred to as process evaluation, because it focuses on the implementation process and asks key questions: How much does implementation vary from site to site? How well has the program been implemented? Did the program benefit the intended people? At what cost? Monitoring: Requires the collection of data at multiple points throughout the program cycle, including at the beginning, to provide a baseline Is an ongoing, continuous process Can be used to determine if activities need adjustment during the intervention to improve desired outcomes Example data sources often used for monitoring include: program data, DHIS 2, HMIS/RHIS data Evaluation measures how well the program activities have met expected objectives and/or the extent to which changes in outcomes can be attributed to the program or intervention. The difference in the outcome of interest between having or not having the program or intervention is known as its “impact,” and measuring that is commonly referred to as “impact evaluation.” Evaluations require: A control or comparison group, in order to measure whether the changes in outcomes can be attributed to the program Data collection at the start of a program (to provide a baseline) and again at the end, rather than at repeated intervals during program implementation A well-planned study design Example data sources often used for evaluation include periodic surveys, such as the demographic and health surveys Monitoring and Evaluation What is the difference? The purpose of M&E is to measure program effectiveness. Monitoring: Tracking changes in program performance over time Evaluation: Attributing program outcomes to their causes
Illustration of Monitoring
[Figure: bar chart of a program indicator (vertical axis) measured periodically over time, from program start to program end.] Explain the elements of the graph shown: the vertical axis can be any program indicator, the horizontal axis is the time over which a program runs, and each bar represents a periodic measurement of the indicator over the lifetime of the program. As this graph illustrates, monitoring requires data to construct indicators for your outcomes of interest at several points. At a minimum, the program must have all data necessary to calculate your indicator’s value before or near the start of the related intervention, and at or after the end of the intervention. Ideally, monitoring will measure the indicator at periodic intervals while the program is ongoing, both to track incremental program performance and to discover whether activities or other factors need adjustment during the intervention in order to improve the ultimate outcome. For instance, if recurrent stock-outs occur, increasing supply levels or adding supply deliveries could lead to higher measured use attributable to the program in the final evaluation. Additional background: Note that monitoring does not involve determining or attributing the cause of a change in the measured indicator. Even cumulative data can be used to monitor performance—the rate of change is not investigated, but rather notice is taken of the overall change in the measured level of a relevant outcome over a period of time. Methodological issues in monitoring are less complex than those in evaluation, but they still need to be taken into consideration.
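The kind of periodic indicator tracking described above can be sketched in a few lines of Python. The reporting periods and indicator values here are purely hypothetical, chosen only to show the mechanics:

```python
# Monitoring sketch (hypothetical data): track a program indicator,
# e.g., % of facilities reporting no stock-outs, at periodic intervals.
periods = ["baseline", "Q1", "Q2", "Q3", "endline"]
indicator = [42.0, 48.5, 47.0, 55.0, 61.5]  # hypothetical values

# Monitoring reports the level at each point and the overall change;
# it does not attribute that change to the program.
for period, value in zip(periods, indicator):
    print(f"{period:>8}: {value:.1f}%")

overall_change = indicator[-1] - indicator[0]
print(f"Change from baseline to endline: {overall_change:+.1f} points")
```

Note that nothing in this calculation establishes why the indicator moved; that attribution question is what distinguishes evaluation from monitoring.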
Illustration of Evaluation
[Figure: outcome trends from program start to program end in areas with and without the program; the endline gap between the two trends is the program’s impact.] To measure program impact, an evaluation is typically conducted at the start of the program and again at the end of the program, rather than at repeated intervals while the program is being implemented. At the same time, these baseline and follow-up measurements are made in areas without the program. Attributing changes in outcomes to a particular program/intervention requires one to rule out all other possible explanations. We need to control for all external or confounding factors that may account for the results. Therefore, extensive knowledge of sampling and statistical analysis is sometimes required for measuring program impact. If the study design does not involve a randomly assigned control group, the difference in outcome between areas with the program and areas without the program is an analytical exercise rather than a direct measurement. There are some similarities between monitoring and evaluation. Both require knowledge of baseline values and final values, often with an interim measurement during the project. However, evaluation differs crucially from monitoring in that the goal of evaluation is to determine how much of the change in outcome is due to the program or intervention. In other words, pure numbers cannot tell the evaluation tale; evaluation is fundamentally an analytical exercise to help decision-makers understand when, how, and to what extent the program is responsible for a particular measured impact. However, relatively few programs go as far as establishing cause and effect between the program and the change.
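The with/without-program comparison in the diagram can be illustrated with a simple difference-in-differences calculation. This is only a sketch with made-up numbers, and the estimate is valid only under assumptions (such as parallel trends between the two groups) that a real evaluation design must defend:

```python
# Hypothetical baseline and endline values of an outcome indicator
# in program ("with") and comparison ("without") areas.
with_program = {"baseline": 40.0, "endline": 65.0}
without_program = {"baseline": 41.0, "endline": 50.0}

# Change observed in each group over the program period.
change_with = with_program["endline"] - with_program["baseline"]
change_without = without_program["endline"] - without_program["baseline"]

# Difference-in-differences: the change in program areas beyond the
# trend seen in comparison areas -- a simple estimate of impact.
impact = change_with - change_without
print(f"Estimated impact: {impact:+.1f} percentage points")
```

The comparison group supplies the "without program" counterfactual trend; subtracting its change from the program areas' change is what separates program impact from changes that would have happened anyway.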
Activity 1: Differences between monitoring and evaluation
Determine whether each of the following questions should be answered with monitoring or evaluation. How many condoms have been distributed? Does a condom distribution program reduce risky sexual behavior among community sex workers? Use examples based on your work and areas of interest to the participants (e.g., HIV, family planning, nutrition). Adapted from an exercise from PHFI. Answers for the slide examples: This is monitoring, because it is concerned with counting the number of something (condoms distributed) This is evaluation, because it is concerned with the impact of a particular program Optional questions include: The Ministry of Health (MOH) wants to know if the decentralization of MDR treatment in province H has improved treatment outcomes. Response: Evaluation The government of the republic wants to know how many fixed-dose combinations of antiretrovirals (ARVs) were dispensed in each health district for a year. Response: Monitoring The president wants to know if the programs being carried out in region Z are increasing insecticide-treated mosquito net (ITN) use among pregnant women and children under five in that region. Response: Evaluation
Programs
Group discussion: What are programs?
Levels (examples): national/subnational; sector-wide; facility
Health program: an organized response to eliminate or reduce one or more problems, where the response includes one or more objectives, performance of one or more activities, and the expenditure of resources*
*Dictionary of Epidemiology
Program Components All programs/projects have (implicit or explicit):
Objectives Key outcomes and targets Target areas and target population The intervention(s)—the delivery mechanism(s) Participant eligibility criteria and process for selecting participants A conceptual framework or program theory presenting the causal chain to be induced by the program to change the outcome Implementation plan: start date, duration, deployment plan Monitoring and evaluation are always undertaken for a specific project or program, not for an activity in isolation. This requires a thorough understanding of the project or program of interest for the M&E activity. There are three useful approaches or tools to help understand a program: the systems approach; the logic model (logical framework) that results from it; and a review of documentation on program objectives, outcomes, target population, interventions, beneficiaries, conceptual framework, and implementation plan.
Program Logic Model Example Inputs Processes Outputs Outcome Impact
Inputs: program staff, space, training materials → Processes: conduct training events for providers → Outputs: providers trained in new clinical techniques → Outcome: increase in # of clients served by (newly) trained providers → Impact: declining morbidity levels in target population. (From M&E Fundamentals: A Self-Guided Minicourse) A logic model can be an effective way to organize program components. A logic model can serve as a tool for planning, monitoring, and evaluation, as well as for advocacy and as an educational tool for stakeholders. Logic models have five essential components: Inputs: The resources invested in a program—for example, technical assistance, computers, condoms, or training Processes: The activities carried out to achieve the program’s objectives Outputs: The immediate results achieved at the program level through the execution of activities Outcomes: The set of short-term or intermediate results at the population level achieved by the program through the execution of activities Impacts: The long-term effects, or end results, of the program—for example, changes in health status. In this context, the term “impact” refers to the health status or conditions that the program is intended ultimately to influence (mortality, morbidity, fertility, etc.), as measured by appropriate indicators. Measuring “impact” in this way, however, should be distinguished from impact evaluation, which is a specific type of evaluation activity that focuses on examining how much of an observed change in outcomes or “impact” can be attributed to the program. In other words, inputs (or resources) are used in processes (or activities) that produce immediate results (or outputs), ultimately leading to longer-term or broader results (or outcomes) and impacts. Have the group generate categories and specific examples for each component. Record these on the board or slide (alternatively, add examples to the slide before the presentation).
This slide shows a logic model with example inputs, processes, outputs, outcomes, and impact listed. This example presents a straightforward view of a project designed to reduce population morbidity by increasing the number of clients served by trained health care providers. As you can see, it does not try to account for all factors that may be influencing operations or results, as a conceptual framework would (M&E Fundamentals: A Self-Guided Mini Course, pg. 19); instead, it focuses specifically on the project’s activities and impacts. This narrow focus helps program managers and M&E planners as they clarify the direct relationships between elements of particular interest within a particular program effort.
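As one illustration, the slide’s example logic model could also be written down as a small data structure, which can be handy when M&E plans are managed in software. This is just one possible way to organize the components, not a prescribed format:

```python
# The slide's example logic model, one component per key
# (Python 3.7+ dicts preserve insertion order).
logic_model = {
    "inputs": ["program staff", "space", "training materials"],
    "processes": ["conduct training events for providers"],
    "outputs": ["providers trained in new clinical techniques"],
    "outcomes": ["increase in # of clients served by trained providers"],
    "impacts": ["declining morbidity levels in target population"],
}

# The causal chain reads from resources invested to long-term results.
chain = " -> ".join(logic_model)
print(chain)  # inputs -> processes -> outputs -> outcomes -> impacts
```

Keeping the components ordered makes the causal chain explicit: each key’s entries are expected to lead to the entries under the next key.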
Program Theory of Change
A theory of change describes the desired outcomes, arranged graphically in a causal framework, and the types of interventions that bring about the outcomes depicted in that outcomes framework map. Each intervention is tied to an outcome in the causal framework, revealing the often complex web of activity required to bring about change. The framework provides a working model against which to test hypotheses and assumptions about what actions will best produce the outcomes in the model. It provides clarity on the desired long-term outcomes of the program and the preconditions and supporting preconditions, and it makes explicit the assumptions, interventions, and indicators.
Types of Evaluation
Type of information → Type of evaluation
About health problems and health conditions → Diagnostic evaluation (needs assessment)
About the program and its design → Program design evaluation
About the functioning of the program → Process evaluation
About program results and program impact → Evaluation of results/outcome evaluation; impact evaluation
About costs, efficiency, and cost-effectiveness → Economic evaluation
Evaluation is a collection of activities. The type of evaluation is determined by the evaluation question or the information needed for decision making. An evaluation of a program could combine one or more types. Types of monitoring are described by the program component they focus on (e.g., input/output monitoring, outcome monitoring, impact monitoring). (Do not project the next slide yet: Allow participants to work in groups and answer the questions in Exercise 2.)
Objective and Outline Background: Overview of M&E Objective Outline
In this session we covered the definitions of M&E and the differences between the two, program components, program logic models, program theories of change, and the types of evaluation. This information will be important as a foundation for our discussion of M&E, and especially evaluation, in the context of policy. Do you have any questions about what we discussed? Objective and Outline Background: Overview of M&E Objective Define key concepts used in M&E Outline Definition of M&E Differences between M&E Definition of program components Program logic model Program theory of change Types of evaluation
Module 1, Session 1B Policies, policy agendas, and programs
Objectives and Outline
Policies, policy agendas, and programs Objectives Define key concepts used in the policy formulation environment Describe how public policies (including health policies) are made Outline Define policy Types of policies—public policy and health policy Policy cycle and role of evidence In this session, we will define key concepts related to policy. This will include discussing policy, types of policies (including public policy and health policy), the policy cycle, and programs. These concepts are important for us to understand before we move on to Session 1C, where we will look at how policy is made. While “policy” remains our focus, national programs are the vehicle through which public policies are implemented and fulfilled.
Defining Policy What is a policy?
A formal statement or official document by a government or institution of: Perceived problems regarding an issue Proposed solutions Desired goals and specific objectives A plan for implementation Plan or strategy—document providing a medium or long-term plan defining: Program objectives Strategies and activities to attain objectives Resources to implement strategies Appropriate organizational structure Policies are anchored in beliefs, goals, and values Policies may also be influenced by research and data (Pose question to participants, then click and the definition will appear on the slide.) A policy can be defined as a formal statement or official document from a government or institution. A formal statement from a government or institution may cover problems surrounding an issue, proposed solutions, and desired goals and objectives. The policy should also include a plan for implementation. The plan for implementation may include program objectives, strategies, and activities to meet those objectives, the necessary resources, and an appropriate organizational structure.
Levels and Types of Policies
Examples of policies Public and private policies Internationally ratified agreements National constitutions and laws Operational policies; government rules, guidelines, and procedures Local policy and plans Policies can be issued by different types of institutions. What are examples of specific policies of each type? (Pose question to participants—they can write their responses on a flip chart.)
Public Policy
Public policies are policies formulated at any level of government; they may be universal or targeted.
Examples: economic policy, agricultural policy, health policy.
Health policy covers courses of action and inaction that affect the set of institutions, organizations, services, and funding arrangements of the health care system.
Now, we will look more closely at public policy, with a focus on health policy.
Why Health Policy is Important
Create a common vision Establish authority to act Legitimize programs Guide action plans Provide a basis for resource mobilization Provide a framework for governance and monitoring Set a framework for program evaluation Provide a basis for further action (Have the participants generate a list of why health policy is important. This could be done as a whole group with the facilitator taking notes on a flip chart, or in small groups, and then shared with the whole group. Then, click to reveal the rest of the slide and discuss.)
Policy and Public Health
Policies are the underpinnings that affect every aspect of the implementation of public health services (programs). They determine: Which services will be available? How will implementation be monitored? What governance is necessary for sustainability? How many resources/how much public funding will be available? When, where, and under what conditions will services be available? Who can provide them? Who is eligible to receive them? How will private sector services be regulated? (Have the participants generate a list of how policy relates to public health services. This could be done as a whole group with the facilitator taking notes on a flip chart, or in small groups, and then shared with the whole group. Then, click to reveal the rest of the slide and discuss.)
Policy Cycle: Agenda setting → Policy formulation → Implementation → Evaluation
The policy cycle is a summary of the process in which policies are initiated, developed, negotiated, communicated, implemented, and evaluated. The first step in the policy cycle is agenda setting. A policy agenda is a list of issues to which an organization or government is giving serious attention at any one time, with a view to taking some sort of action. The next step is policy formulation. As mentioned previously, policy is based on beliefs, goals, or values. Policy formulation also, hopefully, takes into account research and data. We’ll talk more about setting the agenda and policy formulation in the next session. After a policy is created, it is implemented (through programs) and then evaluated. This course will discuss the role evaluation plays in the policy cycle: evaluation results provide the data needed to inform agenda setting and policy formulation, provide accountability, and inform future policy and programs.
Evidence What is evidence? What is the role of evidence in policy?
Evidence is any form of knowledge, including but not confined to research of sufficient quality to be used to inform decisions. (Ask participants: Where does evidence fit in the policy cycle? Example responses are: informing policy formulation; researchers as actors—at the table for decision-making; speaking truth to power.) Session 2 will look more at the important role evidence plays in the policy cycle.
Phases in the Policy Cycle
Policy formulation: Problem identification Design—developing solutions and setting goals Dialogue/negotiations Transparent or non-transparent Incentives for actors: patronage, public interest, transcendence of management Policy agenda: The list of issues to which an organization is giving serious attention at any one time with a view to taking some sort of action. Agenda setting: The process by which certain issues come onto the policy agenda from the larger number of issues potentially worthy of attention by policymakers. Hall model: Legitimacy Feasibility Support We’re now going to look more closely at the policy cycle introduced in the last session, (click) focusing on agenda setting and policy formulation. (click: agenda box appears) As noted previously, a policy agenda is the list of issues to which an organization is giving serious attention. Agenda setting is the process by which certain issues come onto the policy agenda from the larger number of issues. One theoretical model for understanding agenda setting is the Hall model. This model looks at the legitimacy, feasibility, and support for the agenda. (click: policy formulation box appears) The first steps in policy formulation are: (1) problem identification; (2) designing/developing solutions; and (3) setting goals. This process includes negotiation between actors; stronger policies are made when there is effective coordination between actors. These can include both formal and informal actors. (Ask group for examples of both: [formal: executive, congress, courts, bureaucracy, political parties; informal: social movements, industry, media].) The process of defining public policy can be either transparent or nontransparent. Different actors will also have different levels of influence based on the context. Incentives for actors include patronage systems, public interest, and transcendence of management. Each of the steps involves a number of actors.
Each step may be characterized by dialogues, negotiations, and power plays in reaching the decision for action. A decision has to be made on the criteria for decision making. Evidence-based decisions then gain legitimacy and become the norm and practice in each of the four steps. [Diagram: policy cycle—agenda setting → policy formulation → implementation → evaluation]
Objectives and Outline
Policies, policy agendas, and programs Objectives Define key concepts used in the policy formulation environment Describe how public policies (including health policies) are made Outline Define policy Types of policies: public policy and health policy Policy cycle and role of evidence In Session 1B, we defined key concepts related to policy: types of policy—including public policy and health policy—the policy cycle, and programs. These concepts are important for us to understand before we move on to Session 1C, where we will look at how policy is made. While “policy” remains our focus, national programs are the vehicle through which public policies are implemented and fulfilled.
Module 1, Session 1C Evaluation and public policies
Objective and Outline Evaluation and public policies Objectives
Discuss how evaluations contribute to public policies Review and discuss examples of best practices in evaluation of policy Analyze (critique) the policy process Outline Characteristics of excellence in public policymaking Case study 1 Case study 2 Case study 3 In this session, we will look at the characteristics of excellence in public policy and use them to critically assess the policymaking process from three case studies.
Evaluation’s Role Throughout
[Diagram: the policy cycle—agenda setting → policy formulation → implementation → evaluation—with evaluation informing every stage] We will talk more about the role of evidence and evaluations in Session 2.
Characteristics of Public Policies of Excellence
Wide rationale Includes costs and financing perspective Cost-benefit analysis Internal consistency and also consistency with the overall framework Includes stakeholder analysis Seek political opportunity Sequencing to increase take-up Clarity in its objectives and means Instrumental functionality Indicators and evaluation What are the main features of what we could describe as public policies of excellence? Being evidence-based is among their main attributes—in general, recognizing the relevance of broadly using existing knowledge to support their approach Including, from the beginning, a perspective that takes costs into consideration, as well as the mechanisms that would provide the resources to implement them Including an explicit comparison of the costs versus the expected benefits Having an explicit design that is consistent within itself and with the broader policy context and constraints Including the participation of all relevant stakeholders, with an explicit analysis of their relevance, position, and leverage Timing their promotion appropriately in terms of public priorities, resources, and overall context Having a clear process to implement, with each phase described Having tools for implementation that are well established and operative And, very relevant: having defined indicators of success and an evaluation plan
Successful Public Policy
Begins at inception. To create strong policy, you must first:
1 Define the problem
2 Gather evidence
3 Identify causes
4 Evaluate policy
5 Develop solutions
6 Select best solution
7 Evaluate benefits and costs
8 Utilize the Prince system
9 Develop political strategies
Source:
Analyze and critique Public policy examples
Using the following case studies and our knowledge of the public policy cycle and the characteristics of excellence for policies, we will practice analyzing and critiquing public policies.
Early Childhood Development
South Africa – Case study 1 Wide rationale Cost and financing perspective Cost benefit analysis Internal consistency and with overall framework Stakeholder analysis Political opportunity Sequencing Clarity Instrumental functionality Indicators and evaluation (Have everyone read the case study BEFORE the session. During the session explain that you will use the Characteristics of Excellence, shown on the slide, to guide discussion of the case study. Example questions are below.) Example questions around the Characteristics of Excellence: Use these questions to guide discussion. Who were the main stakeholders involved? What role did each of them play in the process of policy development? Were there stakeholders that weren’t mentioned who might have shared a different perspective or benefited from involvement in the policy development process?
Activity 2: Process of public policies In your groups, brainstorm:
Potential obstacles to research being accepted and used by policymakers; and Strategies to overcome these obstacles (Break the class up into small groups. Have each group spend time brainstorming around points 1 and 2 from the slide. Let them know when to shift focus [e.g., 10 minutes for each point]. Have reporters from each group take notes and report to the group. During the reporting out session take notes of all potential obstacles and then strategies. Have each subsequent group add any additional ideas to both lists.)
Wrap Up Evaluation and public policies Objectives Outline
Discuss how evaluations contribute to public policies and programs Review and discuss examples of best practices in evaluation of policy Analyze (critique) the policy process Outline Characteristics of excellence in public policymaking Case study 1 Case study 2 Case study 3 In this session, we critically assessed the policymaking process through three case studies. (Summarize discussion.) In the next module, we will take a deeper look at the role evidence plays in policymaking.
This presentation was produced with the support of the United States Agency for International Development (USAID) under the terms of MEASURE Evaluation cooperative agreement AID-OAA-L MEASURE Evaluation is implemented by the Carolina Population Center, University of North Carolina at Chapel Hill in partnership with ICF International; John Snow, Inc.; Management Sciences for Health; Palladium; and Tulane University. Views expressed are not necessarily those of USAID or the United States government.