Session 7: Planning for Evaluation

Session Overview
- Key definitions: monitoring, evaluation
- Process monitoring and process evaluation
- Outcome monitoring and outcome evaluation
- Impact monitoring and impact evaluation
- Evaluation study designs

Session Learning Objectives
By the end of the session, the participant will be able to:
- plan for program-level monitoring and evaluation activities;
- understand basic principles of evaluation study design; and
- link evaluation design to the types of decisions that need to be made.

Components of an Avian Influenza Program
Program level:
- Inputs (resources): staff, drugs, FP supplies, equipment
- Processes (functions, activities): training, logistics, IEC, services
- Outputs: number of trained staff, % of facilities providing quality infection control, % of backyard farmers receiving AI awareness materials
Population level:
- Outcomes (utilization): % of commercial farmers vaccinating flocks
- Outcomes (intermediate): increased knowledge and behavior of ways to protect poultry from AI
- Outcomes (long-term): infection rate, mortality

Monitoring
Monitoring involves:
- routine tracking of information about a program and its intended outputs, outcomes, and impacts;
- measurement of progress toward achieving program objectives;
- tracking cost and program/project functioning; and
- providing a basis for program evaluation, when linked to a specific program.

Evaluation
Evaluation:
- involves activities designed to determine the value or worth of a specific program;
- uses epidemiological or social research methods to systematically investigate program effectiveness;
- may include the examination of performance against defined standards, an assessment of actual and expected results, and/or the identification of relevant lessons; and
- should be planned from the beginning of a project.

Comparing M&E
Monitoring: What are we doing? Tracking inputs and outputs to assess whether programs are performing according to plan (e.g., people trained, outbreaks investigated).
Evaluation: What have we achieved? Assessing the impact of the program on behavior or health outcomes (e.g., reporting sick poultry, AI case fatality ratio).
Both monitoring and evaluation can occur at the process, outcome, and impact levels, but they serve different purposes.

Process Monitoring
Every program should engage in process monitoring. Process monitoring:
- collects and analyzes data on inputs, processes, and outputs;
- answers such questions as what staffing/resources are being used, what services are being delivered, and what populations are being served; and
- provides information for program planning and management.
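A minimal sketch, assuming hypothetical routine records and invented values, of how process-monitoring data might be tallied into the kinds of output indicators mentioned above:

```python
# Hypothetical sketch of process monitoring: tallying routine program records
# into simple output indicators. Record structure and values are invented.

training_records = [
    {"district": "North", "staff_trained": 12},
    {"district": "South", "staff_trained": 8},
    {"district": "East", "staff_trained": 15},
]
facility_assessments = [
    {"facility": "A", "quality_infection_control": True},
    {"facility": "B", "quality_infection_control": False},
    {"facility": "C", "quality_infection_control": True},
]

total_trained = sum(r["staff_trained"] for r in training_records)
pct_quality = 100 * sum(f["quality_infection_control"] for f in facility_assessments) / len(facility_assessments)

print(f"Number of trained staff: {total_trained}")
print(f"% facilities providing quality infection control: {pct_quality:.0f}%")
```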

Process Evaluation
Most programs, even small ones, should engage in process evaluation. Process evaluation:
- occurs at specific points in time during the life of the program;
- provides insights into the operations of the program, barriers, and lessons learned; and
- can help inform decisions as to whether conducting an outcome evaluation would be worthwhile.

Outcome Monitoring
Outcome monitoring is basic tracking of indicators related to desired program/project outcomes. It answers the question:
- Did the expected outcomes occur; e.g., was the expected knowledge gained, did the expected change in behavior occur, did clients use services as expected?
Example outcome indicators for AI programs include:
- knowledge of appropriate poultry handling procedures
- quality of infection control in hospitals
- rapid response team coverage
- biosecurity levels at wet markets
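A minimal sketch of outcome monitoring in practice, with a hypothetical indicator and invented targets and survey values: each period's measured value is compared against its planned milestone.

```python
# Hypothetical sketch of outcome monitoring: comparing measured values of an
# outcome indicator against planned milestones. All figures are invented.

indicator = "% of commercial farmers vaccinating flocks"
annual_targets = {"2010": 30, "2011": 50, "2012": 70}   # planned milestones
observed = {"2010": 28, "2011": 54, "2012": 61}         # survey results

for year, target in annual_targets.items():
    value = observed[year]
    status = "on track" if value >= target else "behind target"
    print(f"{year}: {indicator} = {value}% (target {target}%) -> {status}")
```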

Outcome Evaluation
Some programs, particularly larger ones, conduct outcome evaluations. Outcome evaluations collect and analyze data used to determine whether, and by how much, an intervention achieved its intended outcomes. They answer the question:
- Did the intervention cause the expected outcomes?

Illustration: Outcome Evaluation
[Figure: a program outcome indicator plotted against time, from program start to program end; the "with program" line rises above the "without program" line, and the gap between them at program end is labeled as the program impact.]
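A minimal sketch of the logic behind this illustration, assuming invented numbers: the "without program" level is projected from the pre-program trend, and the estimated effect is the gap between it and the level observed with the program.

```python
# Hypothetical sketch of the logic in the figure above: the estimated program
# effect is the gap between the outcome observed with the program and the
# outcome projected without it. All numbers are invented for illustration.

def project_without_program(baseline, yearly_trend, years):
    """Extrapolate the pre-program trend to estimate the 'without program' level."""
    return baseline + yearly_trend * years

baseline_level = 0.40         # outcome indicator at program start
pre_program_trend = 0.02      # yearly change observed before the program
observed_with_program = 0.58  # outcome indicator measured at program end
program_years = 3

counterfactual = project_without_program(baseline_level, pre_program_trend, program_years)
estimated_effect = observed_with_program - counterfactual

print(f"Projected without program: {counterfactual:.2f}")
print(f"Observed with program:     {observed_with_program:.2f}")
print(f"Estimated program effect:  {estimated_effect:.2f}")
```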

Impact Monitoring
Impact monitoring, a variation on outcome monitoring, focuses on long-term outcomes, including AI infection and mortality rates. Impact monitoring answers the question:
- What long-term changes in AI infection and mortality can we observe?
Impact monitoring is usually conducted by national authorities, such as the ministries of health or agriculture.

Impact Evaluation
Impact evaluation is a variation on outcome evaluation. It focuses on the increase or decrease in disease incidence as a result of a program. Impact evaluation usually addresses changes in an entire population; therefore, it is appropriate for large, well-established programs, such as the national AI program. Very few projects/programs conduct impact evaluations.

Monitoring and Evaluation Pipeline

Exercise: Monitoring and Evaluation
Can monitoring or evaluation answer these questions? If so, what type of M&E is involved (process, outcome, impact)?
- How many information pamphlets were distributed this year?
- Did the information campaign increase the number of people washing hands after touching poultry?
- Did training officials in quarantine procedures result in fewer cross-border outbreaks? If so, did all provinces benefit equally?
- Which provinces received training in quarantine procedures?
- Has the number of fatalities from Influenza A/H5N1 decreased?

Evaluation Designs

Design Types
- Experimental: partial coverage/new programs; control group; strongest design; most expensive.
- Quasi-experimental: partial coverage/new programs; comparison group; weaker than experimental design; moderately expensive.
- Non-experimental: full coverage programs; no control or comparison group; weakest design; least expensive.

Experimental Design: Pre- and Post-Program with Control Group
1. Randomly assign people from the same target population to Target Group A or Control Group B.
2. Assess Target Group A (pre-test) and Control Group B (pre-test).
3. Implement the program with Target Group A.
4. Assess Target Group A (post-test) and Control Group B (post-test).
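A minimal sketch of this design in code, assuming a hypothetical population and simulated assessment scores: participants are randomly assigned to the target or control group, both groups are assessed before and after, and the program effect is estimated as the difference between the two groups' changes.

```python
import random

# Hypothetical sketch of the experimental design above: randomly assign people
# from the same target population to Target Group A or Control Group B, assess
# both groups before and after, and compare the changes. The population,
# scores, and effect sizes below are all simulated/invented.

random.seed(42)
population = [f"person_{i}" for i in range(200)]
random.shuffle(population)
target_group = population[:100]    # Group A: receives the program
control_group = population[100:]   # Group B: does not receive the program

def assess(group, added_effect=0.0):
    """Simulate a knowledge score (0-100); 'added_effect' stands in for change
    produced by the program or by outside factors."""
    return {person: min(100.0, random.gauss(55, 10) + added_effect) for person in group}

pre_a, pre_b = assess(target_group), assess(control_group)
# ... implement the program with Target Group A only ...
post_a = assess(target_group, added_effect=15)   # assumed program effect
post_b = assess(control_group, added_effect=2)   # secular change only

mean = lambda scores: sum(scores.values()) / len(scores)
change_a = mean(post_a) - mean(pre_a)
change_b = mean(post_b) - mean(pre_b)
print(f"Estimated program effect: {change_a - change_b:.1f} points")
```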

Quasi-Experimental Design: Pre- and Post-Program with Comparison Group
1. Identify the target group and the comparison group.
2. Assess Target Group A (pre-test) and Comparison Group B (pre-test).
3. Implement the program with Target Group A.
4. Assess Target Group A (post-test) and Comparison Group B (post-test).
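A minimal sketch of how results from this design might be analyzed, assuming invented pre- and post-test means: because the comparison group is not randomly assigned, the changes in the two groups are compared rather than their levels (a simple difference-in-differences).

```python
# Hypothetical sketch of analyzing the quasi-experimental design above: the
# comparison group is not randomly assigned, so we compare *changes* rather
# than levels. All figures are invented.

target_pre, target_post = 48.0, 67.0          # mean score, target group
comparison_pre, comparison_post = 55.0, 59.0  # mean score, comparison group

target_change = target_post - target_pre              # 19.0
comparison_change = comparison_post - comparison_pre  # 4.0
estimated_effect = target_change - comparison_change  # 15.0

print(f"Change in target group:     {target_change:.1f}")
print(f"Change in comparison group: {comparison_change:.1f}")
print(f"Estimated program effect:   {estimated_effect:.1f}")
```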

Non-experimental Designs
1. One program group, post-test only (weakest design): implement the program, then assess the target group after the program.
2. One group, pre- and post-test (better): assess the target group before the program, implement the program, then assess the target group after the program.

Non-experimental Designs
3. One group, multiple observations (even better): assess the target group before the program (multiple times), implement the program, then assess the target group after the program (multiple times).
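A minimal sketch of why multiple observations help, assuming invented data points: the pre-program rounds establish a trend, so the post-program change can be adjusted for where the indicator was already heading.

```python
# Hypothetical sketch of the "one group, multiple observations" design:
# repeated measurements before and after the program help separate a real
# shift from the pre-existing trend. Data points are invented.

before = [42, 44, 45, 47]   # outcome indicator, four rounds before the program
after = [58, 60, 61, 63]    # outcome indicator, four rounds after the program

mean = lambda values: sum(values) / len(values)
trend_per_round = (before[-1] - before[0]) / (len(before) - 1)

# Naive pre/post change vs. a change adjusted for the pre-program trend
naive_change = mean(after) - mean(before)
expected_without_program = before[-1] + trend_per_round * (len(after) + 1) / 2
trend_adjusted_change = mean(after) - expected_without_program

print(f"Naive pre/post change: {naive_change:.1f}")
print(f"Trend-adjusted change: {trend_adjusted_change:.1f}")
```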

Choosing the Right Design for You
- Non-experimental designs will allow you to link program outcomes to program activities.
- If you want to have more certainty, try to use a pre- and post-test design.
- If you want to be more confident, try to use a comparison group or control group, though it will be more costly.
- If you have the resources, hire an evaluator to help determine which design will maximize your program's resources and answer your team's evaluation questions with the greatest degree of certainty.
- Disseminate your findings and share lessons learned!

Exercise: Evaluation Plan
In your groups, using your M&E plan project, begin thinking about the evaluation plan for your program:
- Identify one or two program evaluation questions for each level of evaluation (process, outcome, impact).
- Suggest the type of design you would like to use to answer those questions, considering the type of data you'll need to collect and the resources/timeline of the project.