Program Evaluation
Regional Workshop on the Monitoring and Evaluation of HIV/AIDS Programs
February 14 – 24, 2011, New Delhi, India

Objectives of the Session
By the end of this session, participants will be able to:
 Understand the purpose and role of program evaluation
 Distinguish between different evaluation types and approaches
 Link evaluation designs to the types of decisions that need to be made

Why Evaluate HIV/AIDS Programs?
 To improve the design and implementation of a program
 To reach informed decisions on the allocation of existing limited resources, thereby increasing program performance and effectiveness
 To identify factors that influence health and social outcomes
 To generate knowledge: to know what works and what does not
 Good evaluations are public goods

Current Challenges in Evaluating HIV Prevention Programmes
 HIV prevention programmes are increasingly complex, multi-component and context-specific
 The underlying behavioural theories leading to multiple behaviour changes and, ultimately, impact are difficult to assess
 Many projects/interventions/services aim to affect HIV risk factors and/or vulnerabilities rather than averting HIV infections directly
Source: Strategic Guidance for Evaluating HIV Prevention Programmes. UNAIDS, 2010

All Programs/Projects have (implicitly or explicitly):
 Objectives
 Expected outcomes
 A target population
 Mechanism(s) to deliver services (the intervention)
 Criteria for participating in the program
 A conceptual framework that provides the rationale for the program's existence (sometimes called the Development Hypothesis)

Monitoring vs. Evaluation
Objectives of Monitoring:
 To provide information on the functioning of the program:
   a) Is it progressing according to plan?
   b) Identify problems for correction
 To track key program elements over time (to assess changes)
Characteristics of Monitoring:
 Mostly tracks key quantifiable indicators of key program elements: inputs, processes, outputs, and outcomes
 Often done on a routine basis
 Key issue: good measurement using relevant indicators
 No assessment of what caused the change in the indicators
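
As a concrete illustration of routine indicator tracking, a minimal sketch follows; the indicator name, targets, and quarterly figures are hypothetical and not taken from the workshop materials. It computes achievement against plan for one output indicator and flags quarters that fall behind:

```python
# Minimal sketch: routine monitoring of one quantifiable output indicator over time.
# All indicator names, targets, and figures are hypothetical.

quarterly_data = [
    # (quarter, people reached with HIV testing, planned target)
    ("2010-Q1", 1200, 1500),
    ("2010-Q2", 1450, 1500),
    ("2010-Q3", 1600, 1500),
    ("2010-Q4", 1300, 1500),
]

for quarter, reached, target in quarterly_data:
    achievement = reached / target  # output achieved vs. plan
    status = "on track" if achievement >= 1.0 else "behind plan"
    print(f"{quarter}: reached {reached}/{target} ({achievement:.0%}) - {status}")

# Monitoring stops here: it shows whether the indicator changed as planned,
# but says nothing about whether the program caused the change.
```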

Monitoring vs. Evaluation (continued)
Objectives of Evaluation:
 To determine whether a program achieved its objectives
 To determine the impact of the program on the outcome intended by the program
 To determine how much of the observed change in the outcome can be attributed to the program and not to other factors
Characteristics of Evaluation:
 Key issues: causality, quantification of the program effect
 Use of evaluation designs to examine whether an observed change in the outcome can be attributed to the program
Note: Monitoring tells you that a change occurred; impact evaluation tells you whether it was due to the program.

[Slide figure] Source: Strategic Guidance for Evaluating HIV Prevention Programmes. UNAIDS, 2010

Deciding Upon an Appropriate Evaluation Design
 Indicators: What do you want to measure?
   - Provision
   - Utilization
   - Coverage
   - Impact
 Type of inference: How sure do you want to be?
   - Adequacy
   - Plausibility
   - Probability
 Other factors
Source: Habicht, Victora and Vaughan (1999)

Clarification of Terms
 Provision: Are the services available? Are they accessible? Is their quality adequate?
 Utilization: Are the services being used?
 Coverage: Is the target population being reached?
 Impact: Were there improvements in disease patterns or health-related behaviors?

Clarification of Terms
 Adequacy assessment: Did the expected changes occur? Are objectives being met?
 Plausibility assessment: Did the program seem to have an effect above and beyond other external influences?
 Probability assessment: Did the program have an effect (P < x%)?
Source: Habicht, Victora and Vaughan (1999)

Adequacy Assessment Inferences
 Are objectives being met?
 Compares program performance with previously established adequacy criteria, e.g. an 80% ORT use rate
 No control group
 2+ measurements to assess adequacy of change over time
 Provision, utilization, coverage
   - Are activities being performed as planned?
 Impact
   - Are observed changes in health or behavior of the expected direction and magnitude?
 Cross-sectional or longitudinal
Source: Habicht, Victora and Vaughan (1999)
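
As a minimal illustration of an adequacy inference, the sketch below compares two successive coverage measurements against a pre-set criterion, with no control group; the 80% threshold echoes the ORT example above, and all figures and the indicator itself are hypothetical:

```python
# Minimal sketch of an adequacy assessment: compare observed performance
# against a previously established criterion. No control group is used,
# so nothing here attributes the change to the program.
# All numbers are hypothetical.

ADEQUACY_CRITERION = 0.80  # e.g., target: 80% of the target population covered

measurements = {
    "baseline (2009)": 0.55,
    "follow-up (2011)": 0.72,
}

for label, coverage in measurements.items():
    verdict = "meets criterion" if coverage >= ADEQUACY_CRITERION else "below criterion"
    print(f"{label}: coverage {coverage:.0%} - {verdict}")

change = measurements["follow-up (2011)"] - measurements["baseline (2009)"]
print(f"Change over time: {change:+.0%} (adequacy of change, not attribution)")
```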

Plausibility Assessment Inferences (I)
 The program appears to have an effect above and beyond the impact of non-program influences
 Includes a control group:
   - Historical control group: compares changes in the community before and after the program and attempts to rule out external factors
   - Internal control group: compares 3+ groups/individuals with different intensities of exposure to the program (dose-response); or compares previous exposure to the program between individuals with and without the disease (case-control)
   - External control group: compares communities/geographic areas with and without the program
Source: Habicht, Victora and Vaughan (1999)

Plausibility Assessment Inferences (II)
 Provision, utilization, coverage
   - Intervention group appears to have better performance than the control group
   - Cross-sectional, longitudinal, longitudinal-control
 Impact
   - Changes in health/behavior appear to be more beneficial in the intervention group than the control group
   - Cross-sectional, longitudinal, longitudinal-control, case-control
Source: Habicht, Victora and Vaughan (1999)
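
To make the dose-response idea concrete, a minimal sketch follows; the group names and prevalence figures are hypothetical. It compares an outcome across groups with different intensities of program exposure; a consistent gradient strengthens a plausibility inference, although non-program influences are not ruled out:

```python
# Minimal sketch of a plausibility (dose-response) comparison:
# groups with higher program exposure are expected to show better outcomes.
# All group names and figures are hypothetical.

# (exposure intensity, proportion reporting condom use at last sex)
groups = [
    ("no exposure", 0.31),
    ("low exposure (1-2 contacts)", 0.38),
    ("high exposure (3+ contacts)", 0.49),
]

for name, prevalence in groups:
    print(f"{name}: {prevalence:.0%}")

rates = [p for _, p in groups]
if all(a < b for a, b in zip(rates, rates[1:])):
    print("Outcome improves with exposure intensity: consistent with a program effect,")
    print("but confounding by non-program influences is not excluded.")
else:
    print("No consistent dose-response gradient.")
```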

Probability Assessment Inferences
 There is only a small probability that the differences between program and control areas were due to chance (P < .05)
 Requires a control group
 Requires randomization
 Often not feasible for assessing program effectiveness:
   - Randomization is needed before the program starts
   - Political factors
   - Scale-up
   - Inability to generalize results
   - Known efficacy of the intervention
Source: Habicht, Victora and Vaughan (1999)
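
For a sense of what a probability statement involves, the sketch below runs a two-sided two-proportion z-test comparing an outcome between randomized program and control areas and checks whether P < .05; the outcome and the counts are made up for illustration:

```python
# Minimal sketch of a probability inference: a two-proportion z-test comparing
# an outcome between randomized program and control areas. Counts are hypothetical.
import math

def two_proportion_z_test(success_a, n_a, success_b, n_b):
    """Return (z, two-sided p-value) for H0: the two proportions are equal."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided tail probability
    return z, p_value

# e.g., condom use at last sex: 245/500 in program areas vs. 200/500 in control areas
z, p = two_proportion_z_test(245, 500, 200, 500)
print(f"z = {z:.2f}, two-sided P = {p:.3f}")
print("Difference unlikely to be due to chance (P < .05)" if p < 0.05
      else "Difference could plausibly be due to chance")
```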

Evaluation Flow from Simpler to More Complex Designs

Type of evaluation | Provision | Utilization | Coverage | Impact
Adequacy           | 1st       | 2nd         | 3rd      | 4th (b)
Plausibility       |           |             | 4th (a)  | 5th
Probability        |           |             |          |

Source: Habicht, Victora and Vaughan (1999)

Possible Areas of Concern to Different Decision Makers
 Adequacy assessments: health center managers, district health managers, international agencies
 Plausibility assessments: international agencies, donor agencies, scientists
 Probability assessments: donor agencies and scientists
Source: Habicht, Victora and Vaughan (1999)

Process Evaluations
 Assess whether the program was implemented as intended
 May look at:
   - Access to services
   - Reach and coverage of services
   - Quality of services
   - Client satisfaction
 May also provide an understanding of the cultural, socio-political, legal and economic contexts that affect implementation of the programme

Outcome/Impact Evaluations
 Assess whether changes in outcomes/impacts are due to the program
 May look at:
   - Outcomes such as HIV-related behaviors
   - Health impacts such as HIV status and life expectancy

Evaluating Program Impact
[Figure: an outcome plotted against time, from program start to program midpoint or end. The evaluation question: how much of this change is due to the program?]

Evaluating Program Impact
[Figure: the outcome over time under two scenarios, "with program" and "without program", from program start to program midpoint or end; the gap between the two trajectories is the net program impact.]
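
To show what "net program impact" means in practice, a minimal sketch follows; the figures are made up, and the simple difference-in-differences used here is only one way to approximate the without-program counterfactual:

```python
# Minimal sketch: net program impact as the observed change minus the change
# expected without the program, here approximated by a comparison group
# (a simple difference-in-differences). All figures are hypothetical.

program = {"baseline": 0.40, "endline": 0.62}     # outcome in program areas
comparison = {"baseline": 0.42, "endline": 0.50}  # outcome in comparison areas

change_program = program["endline"] - program["baseline"]
change_comparison = comparison["endline"] - comparison["baseline"]
net_impact = change_program - change_comparison

print(f"Change in program areas:      {change_program:+.0%}")
print(f"Change in comparison areas:   {change_comparison:+.0%}")
print(f"Estimated net program impact: {net_impact:+.0%}")
```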

Features of Different Study Designs
 True experiment: partial coverage/new programs; control group; strongest design; most expensive
 Quasi-experiment: partial coverage/new programs; comparison group; weaker than experimental design; less expensive
 Non-experimental: full coverage programs; no control or comparison group; weakest design; least expensive

Readiness Criteria for Outcome and Impact Evaluation
The program:
 is implemented with sufficient quality
 has achieved adequate coverage
 is of long enough duration that the expected change in the specified outcomes for the evaluation has had time to occur

When to Use an Experimental or Quasi-experimental Design
 The program has unknown effectiveness
 There is the potential for negative effects
 The program is politically or otherwise risky
Source: Strategic Guidance for Evaluating HIV Prevention Programmes. UNAIDS, 2010

Who Should Plan for Evaluation?
 All programs should conduct basic monitoring
 Most programs should conduct process evaluations:
   - Implementation assessments
   - Quality assessments
   - Coverage assessments
 Some programs should conduct outcome evaluation, when evidence is needed as to whether the program is effective

References  Adamchak S et al. (2000). A Guide to Monitoring and Evaluating Adolescent Reproductive Health Programs. Focus on Young Adults, Tool Series 5. Washington D.C.: Focus on Young Adults.  Fisher A et al. (2002). Designing HIV/AIDS Intervention Studies. An Operations Research Handbook. New York: The Population Council.  Habicht JP et al. (1999) Evaluation designs for adequacy, plausibility and probability of public health programme performance and impact. International Journal of Epidemiology, 28:  Rossi P et al. (1999). Evaluation. A systematic Approach. Thousand Oaks: Sage Publications.  UNAIDS (2010). Strategic Guidance for Evaluating HIV Prevention Programmes.

MEASURE Evaluation is a MEASURE project funded by the U.S. Agency for International Development and implemented by the Carolina Population Center at the University of North Carolina at Chapel Hill in partnership with Futures Group International, ICF Macro, John Snow, Inc., Management Sciences for Health, and Tulane University. Views expressed in this presentation do not necessarily reflect the views of USAID or the U.S. Government. MEASURE Evaluation is the USAID Global Health Bureau's primary vehicle for supporting improvements in monitoring and evaluation in population, health and nutrition worldwide.