
1 Program Evaluation Regional Workshop on the Monitoring and Evaluation of HIV/AIDS Programs February 14 – 24, 2011 New Delhi, India

2 Objectives of the Session By the end of this session, participants will be able to: • Understand the purpose and role of program evaluation • Distinguish between different evaluation types and approaches • Link evaluation designs to the types of decisions that need to be made

3 Why Evaluate HIV/AIDS Programs? • To improve the design and implementation of a program • To reach informed decisions on the allocation of existing limited resources, thereby increasing program performance and effectiveness • To identify factors that influence health and social outcomes • To generate knowledge: to learn what works and what does not • Good evaluations are public goods

4 Current Challenges in Evaluating HIV Prevention Programmes • HIV prevention programmes are increasingly complex, multi-component, and context-specific • The underlying behavioural theories leading to multiple behaviour changes, and ultimately to impact, are difficult to assess • Many projects/interventions/services aim to affect HIV risk factors and/or vulnerabilities rather than averting HIV infections directly. Source: Strategic Guidance for Evaluating HIV Prevention Programmes. UNAIDS 2010

5 All Programs/Projects have (implicitly or explicitly): • Objectives • Expected outcomes • A target population • Mechanism(s) to deliver services (the intervention) • Criteria for participating in the program • A conceptual framework that provides the rationale for the program's existence (sometimes called the Development Hypothesis)

6 Monitoring vs. Evaluation Objectives of monitoring: • To provide information on the functioning of the program: (a) Is it progressing according to plan? (b) What problems need correction? • To track key program elements over time (to assess changes) Characteristics of monitoring: • Mostly tracks quantifiable indicators of key program elements: inputs, processes, outputs, and outcomes • Often done on a routine basis • Key issue: good measurement using relevant indicators • Does not assess the cause of changes in the indicators

7 Monitoring vs. Evaluation Objectives of evaluation: • To determine whether a program achieved its objectives • To determine the impact of the program on its intended outcomes: how much of the observed change in the outcome can be attributed to the program and not to other factors? Characteristics of evaluation: • Key issues: causality and quantification of program effect • Use of evaluation designs to examine whether an observed change in an outcome can be attributed to the program Note: Monitoring tells you that a change occurred; impact evaluation tells you whether it was due to the program

8 Source: Strategic Guidance for Evaluating HIV Prevention Programmes. UNAIDS 2010

9 Deciding Upon an Appropriate Evaluation Design • Indicators: What do you want to measure? Provision, utilization, coverage, impact • Type of inference: How sure do you want to be? Adequacy, plausibility, probability • Other factors Source: Habicht, Victora and Vaughan (1999)

10 Clarification of Terms
Provision: Are the services available? Are they accessible? Is their quality adequate?
Utilization: Are the services being used?
Coverage: Is the target population being reached?
Impact: Were there improvements in disease patterns or health-related behaviors?

11 Clarification of Terms
Adequacy assessment: Did the expected changes occur? Are objectives being met?
Plausibility assessment: Did the program seem to have an effect above and beyond other external influences?
Probability assessment: Did the program have an effect (P < x%)?
Source: Habicht, Victora and Vaughan (1999)

12 Adequacy Assessment Inferences • Are objectives being met? • Compares program performance with previously established adequacy criteria (e.g., an 80% ORT use rate) • No control group • Two or more measurements to assess adequacy of change over time • Provision, utilization, coverage: Are activities being performed as planned? • Impact: Are observed changes in health or behavior of the expected direction and magnitude? • Cross-sectional or longitudinal Source: Habicht, Victora and Vaughan (1999)
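To make the adequacy inference concrete, the following is a minimal sketch (Python) of checking a measured indicator against a previously established adequacy criterion. The 80% threshold echoes the ORT example above; the survey counts and function name are hypothetical, not part of the source.

```python
# Minimal sketch of an adequacy check: compare a measured indicator
# against a previously established adequacy criterion.
# The 80% criterion and the survey counts below are hypothetical examples.

ADEQUACY_CRITERION = 0.80  # e.g., an "80% ORT use rate" agreed before the evaluation

def adequacy_met(events: int, sample_size: int,
                 criterion: float = ADEQUACY_CRITERION) -> bool:
    """Return True if the observed rate meets or exceeds the criterion."""
    observed_rate = events / sample_size
    return observed_rate >= criterion

# Example: 412 of 500 surveyed caretakers report ORT use -> 82.4%, criterion met
print(adequacy_met(events=412, sample_size=500))  # True
```

Note that this only tells you whether the expected change occurred; it says nothing about whether the program caused it.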

13 Plausibility Assessment Inferences (I) • The program appears to have an effect above and beyond the impact of non-program influences • Includes a control group: – Historical control group: compares changes in the community before and after the program and attempts to rule out external factors – Internal control group: compares 3+ groups/individuals with different intensities of exposure to the program (dose-response), or compares previous exposure to the program between individuals with and without the disease (case-control) – External control group: compares communities/geographic areas with and without the program Source: Habicht, Victora and Vaughan (1999)

14 Plausibility Assessment Inferences (II) • Provision, utilization, coverage: the intervention group appears to have better performance than the control group • Cross-sectional, longitudinal, longitudinal-control • Impact: changes in health/behavior appear to be more beneficial in the intervention group than in the control group • Cross-sectional, longitudinal, longitudinal-control, case-control Source: Habicht, Victora and Vaughan (1999)

15 Probability Assessment Inferences • There is only a small probability that the differences between program and control areas were due to chance (P < .05) • Requires a control group • Requires randomization • Often not feasible for assessing program effectiveness: randomization is needed before the program starts; political factors; scale-up; inability to generalize results; known efficacy of the intervention Source: Habicht, Victora and Vaughan (1999)
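As an illustration of the probability inference, here is a minimal sketch (Python) of a two-proportion z-test asking whether the difference in an outcome between program and control areas is unlikely to be due to chance at P < .05. All counts are hypothetical, and the z-test is used here only as a simple example of a significance test; the source does not prescribe a specific test.

```python
# Minimal sketch of a probability (statistical) inference: a two-proportion
# z-test for the difference in an outcome between program and control areas.
# All counts below are hypothetical.
from math import sqrt, erf

def two_proportion_p_value(x1: int, n1: int, x2: int, n2: int) -> float:
    """Two-sided p-value for the difference between two proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal distribution
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Example: 40/400 new infections in program areas vs 60/400 in control areas
p = two_proportion_p_value(40, 400, 60, 400)
print(f"p-value = {p:.4f}, significant at 0.05: {p < 0.05}")
```

In practice such a test is only meaningful when randomization and an adequate control group are in place, as the slide notes.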

16 Evaluation Flow from Simpler to More Complex Designs
Type of evaluation | Provision | Utilization | Coverage | Impact
Adequacy | 1st | 2nd | 3rd | 4th (b)
Plausibility | – | – | 4th (a) | 5th
Probability | – | – | – | –
Source: Habicht, Victora and Vaughan (1999)

17 Possible Areas of Concern to Different Decision Makers
Type of evaluation | Provision | Utilization | Coverage | Impact
Adequacy | Health center manager | International agencies | District health managers | International agencies
Plausibility | – | – | International agencies | Donor agencies, scientists
Probability | – | – | – | Donor agencies & scientists
Source: Habicht, Victora and Vaughan (1999)

18 Process Evaluations • Assess whether the program was implemented as intended • May look at: access to services; reach and coverage of services; quality of services; client satisfaction • May also provide an understanding of the cultural, socio-political, legal, and economic contexts that affect implementation of the programme.

19 Outcome/Impact Evaluations • Assess whether changes in outcomes/impacts are due to the program • May look at: outcomes such as HIV-related behaviors; health impacts such as HIV status and life expectancy

20 Evaluating Program Impact (figure: an outcome plotted over time, from program start to program midpoint or end). The evaluation question: How much of this change is due to the program?

21 Evaluating Program Impact (figure: the outcome over time under the "with program" and "without program" scenarios; the gap between the two curves at program midpoint or end is the net program impact).
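To illustrate the figure, here is a minimal sketch (Python) of estimating net program impact as the difference between the change observed with the program and the change estimated for the "without program" counterfactual. The difference-in-differences calculation and all numbers are illustrative assumptions, not a method prescribed by the source; it is one common way to approximate the counterfactual using a comparison group.

```python
# Minimal sketch of net program impact: the gap between the outcome observed
# with the program and an estimated counterfactual without it.
# Difference-in-differences is used here as one common approximation, with a
# comparison group standing in for the "without program" trend.
# All numbers are hypothetical.

def net_impact_did(program_before: float, program_after: float,
                   comparison_before: float, comparison_after: float) -> float:
    """Difference-in-differences estimate of net program impact."""
    change_with_program = program_after - program_before
    change_without_program = comparison_after - comparison_before  # counterfactual trend
    return change_with_program - change_without_program

# Example: condom use at last sex (%) in program vs comparison districts
impact = net_impact_did(program_before=35.0, program_after=55.0,
                        comparison_before=34.0, comparison_after=42.0)
print(f"Net program impact: {impact:.1f} percentage points")  # 12.0
```

The credibility of such an estimate depends on the study design, which is the subject of the next slide.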

22 Features of Different Study Designs
True experiment: partial coverage/new programs; control group; strongest design; most expensive
Quasi-experiment: partial coverage/new programs; comparison group; weaker than experimental design; less expensive
Non-experimental: full coverage programs; no control or comparison group; weakest design; least expensive

23 Readiness Criteria for Outcome & Impact Evaluation The program: • is implemented with sufficient quality • has achieved adequate coverage • is of long enough duration that the expected change in the specified outcomes for the evaluation has had time to occur

24 When to Use an Experimental or Quasi-Experimental Design • The program has unknown effectiveness • There is the potential for negative effects • The program is politically or otherwise risky Source: Strategic Guidance for Evaluating HIV Prevention Programmes. UNAIDS 2010

25 Who Should Plan for Evaluation? • All programs should conduct basic monitoring • Most programs should conduct process evaluations: implementation assessments, quality assessments, coverage assessments • Some programs should conduct outcome evaluations when evidence is needed as to whether the program is effective

26 References • Adamchak S et al. (2000). A Guide to Monitoring and Evaluating Adolescent Reproductive Health Programs. Focus on Young Adults, Tool Series 5. Washington, D.C.: Focus on Young Adults. • Fisher A et al. (2002). Designing HIV/AIDS Intervention Studies: An Operations Research Handbook. New York: The Population Council. • Habicht JP et al. (1999). Evaluation designs for adequacy, plausibility and probability of public health programme performance and impact. International Journal of Epidemiology, 28: 10-18. • Rossi P et al. (1999). Evaluation: A Systematic Approach. Thousand Oaks: Sage Publications. • UNAIDS (2010). Strategic Guidance for Evaluating HIV Prevention Programmes.

27 MEASURE Evaluation is a MEASURE project funded by the U.S. Agency for International Development and implemented by the Carolina Population Center at the University of North Carolina at Chapel Hill in partnership with Futures Group International, ICF Macro, John Snow, Inc., Management Sciences for Health, and Tulane University. Views expressed in this presentation do not necessarily reflect the views of USAID or the U.S. Government. MEASURE Evaluation is the USAID Global Health Bureau's primary vehicle for supporting improvements in monitoring and evaluation in population, health and nutrition worldwide.

