TM Introduction to Program Evaluation Victor Balaban, PhD Program Evaluation Team (PET) Field Services and Evaluation Branch (FSEB) Division of Tuberculosis Elimination (DTBE) NCHHSTP/CDC

TM Disclaimer The contents and conclusions in this presentation have not been formally disseminated by CDC and should not be construed to represent any agency determination or policy.

TM What is Evaluation?

TM Evaluation Evaluation is the systematic investigation of the merit, worth, or significance of an object. Assigning "value" to a program's efforts therefore means addressing three inter-related domains: Merit (or quality), Worth (or value, i.e., cost-effectiveness), and Significance (or importance). Source: CDC Framework for Program Evaluation in Public Health

TM Evaluation Evaluation is: an activity that assists in planning and measuring programs; a way of managing, improving, and being accountable for resources, activities, and results. Evaluation answers the question: "Is the program doing what we intend it to do?"

TM What Can Be Evaluated? Direct service interventions Community mobilization efforts Research initiatives Surveillance systems Policy development activities Outbreak investigations Laboratory diagnostics Communication campaigns Infrastructure-building projects Training and educational services Administrative systems Source: MMWR, 1999, Framework for Program Evaluation in Public Health

TM Why Do We Evaluate?  Effectiveness - to determine if a program achieved its objectives  Impact - to assess how well program(s) are working  Improvement - to modify programs that are not working according to plan or take advantage of something that is working exceptionally well  Accountability - to report to stakeholders  To help develop new efforts

TM How Does Evaluation Differ from Surveillance? Surveillance is the routine tracking of disease status or behavior over time. Surveillance is not necessarily conducted in relation to any specific program or intervention. Evaluation is conducted in relation to specific program(s) or intervention(s).

TM How Does Evaluation Differ from Research?  The purpose of research is to produce knowledge about how the world works.  Evaluation studies are used to improve programs and inform decisions about future resource allocations.  The standards for evidence are higher in research, and the timelines for generating knowledge can be longer than for evaluation. Adapted from: Michael Patton as interviewed by Lisa Waldick (IDRC)

TM Why is Evaluation Important? Improve knowledge of program design Improve program implementation Reporting Ensure that a program reaches those who need it most Give visibility to work Demonstrate accountability Share information Enhance understanding of what works best and what does not work – and why

TM Example: TB – Completion of Treatment In a State, an organization received funds for a TB program. The program’s goal was to increase the proportion of newly diagnosed TB patients who complete treatment within 12 months to 93.0%. Records showed that in the three years since the program was funded, 85.0% of patients completed treatment within 12 months. Was the program successful?

TM Example: TB – Completion of Treatment The State felt that the target of 93.0% treatment completion within 12 months was not reached and therefore the program had failed. The program staff, however, were confident that the program was a success because only 74.0% of patients had completed treatment within 12 months in the three years before the program was funded. Who is correct?

TM Example: TB – Completion of Treatment Was the program a success or a failure? What program management issues does this example present? What information is needed to make management decisions for the way forward? How could evaluation have helped in this case?
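To make the disagreement concrete, here is a minimal sketch of the arithmetic both sides rely on, using only the percentages quoted on the slides (the code itself is illustrative and not part of the original deck):

```python
# Figures quoted in the slides above.
baseline = 74.0   # % completing treatment within 12 months, before the program was funded
achieved = 85.0   # % completing treatment within 12 months, three years after funding
target = 93.0     # % target set by the program

improvement = achieved - baseline   # percentage points gained over baseline
shortfall = target - achieved       # percentage points still short of the target

print(f"Improvement over baseline: {improvement:.1f} percentage points")
print(f"Shortfall against target:  {shortfall:.1f} percentage points")
```

Both statements are true at once: the program improved completion by 11 percentage points, yet remains 8 percentage points below its target, which is why the evaluation question you ask determines the answer you get.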

TM Remember The apparent success or failure of a program or activity must always be closely examined What you measure will determine what you are able to find out Evaluation can help us to do things differently and better understand the why and how of program/activity success

TM Summary Evaluation is an activity that helps in program management Evaluation involves assessing a program or activity to find out: What has been achieved What progress has been made What the successes and challenges are What difference has been made by the program

TM Types of Evaluation

TM Types of Evaluation Formative Evaluation - planning an effective activity; Process Evaluation - determining if the activity was implemented as intended; Outcome Evaluation - determining if the activity caused the intended outcomes; Impact Evaluation - determining broader impacts.

TM When to Evaluate? Evaluation is relevant across the life of a public health program, from conception to completion: planning a NEW program; assessing a DEVELOPING program; assessing a STABLE, MATURE program; assessing a program after it has ENDED.

TM Formative Evaluation Collects data describing the needs of a system or population, including the needs to be addressed by a program or activity. Answers questions such as: How should the activity be designed or modified to address participants' needs? What can we learn from pilot-testing our approach? Are the materials we are going to use appropriate?

TM Process Evaluation Collects more detailed data about the quality of the activity, factors that affected quality, and differences between intended and actual delivery of the activity. Answers questions such as: Was the activity implemented as intended? Did the activity reach the intended audience? Why were there differences between intended and actual delivery?

TM Outcome Evaluation Collects data to determine if, and by how much, program activities or services achieved their intended outcomes among the targeted population (often with a comparison or control group). Answers questions such as: Did the activity result in the expected outcomes? Can we attribute observed changes among the targeted population to the activity? Can we indicate what might happen in the absence of the activity?

TM Impact Evaluation Collects data about a population or region over time to establish a causal association between programs and what they aimed to achieve beyond the outcomes on individuals targeted by the program(s). Answers questions such as: What long-term effect does the activity, combined with other initiatives, have?

TM Selecting an Appropriate Evaluation Method

TM Criteria for Selecting Evaluation Method What evaluation question needs to be answered? Who needs the data? What resources are available for evaluation?

TM What Information Is Needed? Different stakeholders or users have different information needs based on how they will use the information. Information needs also vary at the different stages of a program and with the type of evaluation being conducted. Input (Resources): staff, funds, materials, facilities, supplies. Activities (Interventions, Services): trainings, services, education, treatments, interventions. Output (Immediate Effects): # staff trained, # condoms distributed, # test kits distributed, # clients served, # tests conducted. Outcomes (Intermediate Effects): provider behavior, risk behavior, service use, clinical outcomes, quality of life, social norms. Impact (Long-term Effects): TB incidence/prevalence, STI incidence/prevalence, AIDS morbidity/mortality, economic impact. Process evaluation covers inputs, activities, and outputs; outcome evaluation covers intermediate outcomes; impact evaluation covers long-term impact.
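As a purely illustrative sketch (not part of the original deck), the stages and example items from this slide can be written as a small data structure; the grouping of outcome versus impact items follows the reconstruction above and is an assumption.

```python
# Illustrative representation of the logic model shown on the slide.
logic_model = {
    "Input (Resources)": ["Staff", "Funds", "Materials", "Facilities", "Supplies"],
    "Activities (Interventions, Services)": ["Trainings", "Services", "Education",
                                             "Treatments", "Interventions"],
    "Output (Immediate Effects)": ["# staff trained", "# condoms distributed",
                                   "# test kits distributed", "# clients served",
                                   "# tests conducted"],
    "Outcomes (Intermediate Effects)": ["Provider behavior", "Risk behavior", "Service use",
                                        "Clinical outcomes", "Quality of life", "Social norms"],
    "Impact (Long-term Effects)": ["TB incidence/prevalence", "STI incidence/prevalence",
                                   "AIDS morbidity/mortality", "Economic impact"],
}

# Which type of evaluation examines which stage, per the slide.
evaluation_focus = {
    "Process Evaluation": ["Input (Resources)", "Activities (Interventions, Services)",
                           "Output (Immediate Effects)"],
    "Outcome Evaluation": ["Outcomes (Intermediate Effects)"],
    "Impact Evaluation": ["Impact (Long-term Effects)"],
}

for stage, examples in logic_model.items():
    print(f"{stage}: {', '.join(examples)}")
```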

TM Monitoring and Evaluation Pipeline (adaptation of the Rehle/Rugg M&E Pipeline Model, FHI 2001): the number of programs expected to undertake each level of evaluation decreases as the level of effort increases. All programs: input/output monitoring. Most: process evaluation. Some: outcome evaluation. Few*: impact evaluation.

TM What Information Is Most Important? How do you prioritize your evaluation questions? Identify the use for the information Consider the feasibility of answering questions given the available resources Determine what you “need to know” vs. what is “nice to know”

TM Three Types of Questions Descriptive questions - "What is": describe a program/process. Normative questions - compare "what is" to "what should be": measure against a stated standard. Cause-and-effect questions - "effect": measure before and after, with and without comparisons.

TM Evaluation Planning Matrix Column headings: Main Evaluation Question/Issue; Questions; Sub-Questions; Type of (Sub-)Questions; Measures or Indicators; Target or Standard; Baseline Data.
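To show how the planning matrix might be filled in, here is a hypothetical row using the treatment-completion example from earlier slides; the field names mirror the column headings, and the values are illustrative rather than taken from the original deck.

```python
# Hypothetical filled-in row of the evaluation planning matrix.
matrix_row = {
    "Main evaluation question/issue": "Is the program improving completion of TB treatment?",
    "Sub-question": "What proportion of newly diagnosed TB patients complete treatment within 12 months?",
    "Type of (sub)question": "Normative (compare 'what is' to 'what should be')",
    "Measure or indicator": "Proportion completing treatment within 12 months",
    "Target or standard": "93.0%",
    "Baseline data": "74.0% in the three years before program funding",
}

for column, value in matrix_row.items():
    print(f"{column}: {value}")
```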

TM Indicators A measurable piece of information that helps answer your evaluation question. Indicators are signposts, markers, or clues of change; they are intended to indicate whether objectives are being achieved. They provide a reference point for program or project planning, management, and reporting. They relate to the objectives of your evaluation and can be related to processes or outcomes.

TM Indicators Indicators are also referred to as performance measures in the NTIP. You can use existing indicators or develop ones tailored to a particular question. Indicators allow you to assess trends and identify problems, and can act as early warning signals for corrective action. An indicator is not the actual result, nor the data collection method or tool.

TM Measures vs. Indicators Measures are descriptions of program functioning, while indicators measure one aspect of a program or project that is usually directly related to particular objectives. Measures alone do not necessarily provide enough information to indicate how effective a program is in reaching its intended results. Anything can be measured; however, not every measure is an indicator of program functioning.

TM Example You are buying a used car and want to know what condition the car is in: You can measure many things when you inspect the car: Tire tread How clean the oil is Wear on brake pads Rust on body of car OR You can examine the number of miles the car has been driven

TM Example You are developing indicators to measure HIV testing within your TB program: You can measure many things # of people tested # of people diagnosed # of test kits purchased OR You can examine the percent of program participants aged 15–49 receiving HIV test results in the past 12 months
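As a minimal sketch of how such an indicator is computed (the counts below are hypothetical, not data from the deck), the percentage reduces to a simple numerator/denominator calculation:

```python
# Hypothetical counts, for illustration only.
participants_15_49 = 420           # program participants aged 15-49 in the past 12 months
received_results_15_49 = 260       # of those, the number who received HIV test results

indicator = 100.0 * received_results_15_49 / participants_15_49
print(f"Percent of participants aged 15-49 receiving HIV test results: {indicator:.1f}%")
```

The individual counts (people tested, kits purchased) are measures; only the percentage, tied to a defined population and time period, works as an indicator of program functioning.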

TM Key Elements of a Good Indicator (SMART) Specific: an indicator must be related to the conditions that the program/project wishes to change. Measurable: an indicator must be quantifiable and allow for analysis of the data. Appropriate: an indicator must be necessary and relevant to the information needs of the persons who will use it. Realistic: an indicator must be attainable at a reasonable cost using appropriate collection methods. Time-based: an indicator must have a clearly stated time period for collection.

TM Examples of Indicators (from NTIP) Proportion of patients, with newly diagnosed TB for whom 12 months or less of treatment is indicated, who complete treatment within 12 months. Proportion of contacts to sputum AFB smear-positive TB patients with newly diagnosed latent TB infection (LTBI) who start treatment. Percent of cooperative agreement recipients that have a TB training focal point.

TM Targets Reasonable expectations about what “success” means Should create one for each indicator Based on the current status of an activity Consider program requirements
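A minimal sketch, under assumed values, of creating a target for each indicator and checking progress against it; the treatment-completion figures echo the earlier example, while the LTBI numbers are purely hypothetical.

```python
# Hypothetical indicator values and targets, for illustration only.
indicators = {
    "Treatment completion within 12 months": {"value": 85.0, "target": 93.0},
    "Contacts with LTBI starting treatment": {"value": 71.0, "target": 79.0},
}

for name, d in indicators.items():
    status = "target met" if d["value"] >= d["target"] else "below target"
    print(f"{name}: {d['value']:.1f}% vs target {d['target']:.1f}% -> {status}")
```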

TM Collecting Evaluation Data

TM Why Use Data? Data can help your program evaluate its effectiveness and keep the focus on program outcomes Data can provide feedback to stakeholders about what is working, what needs to continue, and what can be reduced Data can convince stakeholders of the need to change Data can uncover problems that might otherwise remain invisible

TM Types of Data  Quantitative Data  Numbers  More objective  Epidemiological data  Qualitative Data  Words and/or concepts  More subjective  Observations  Both can be used in evaluation

TM Data Collection Methods  Quantitative Data Collection  Surveys/Questionnaire  Secondary data  Surveillance data  Epidemiological data  Qualitative Data Collection  Focus groups  Interviews/Case study  Observations  Mixed Methods

TM Comparison of Data Collection Methods
Surveys - Advantages: anonymity possible; can administer to groups; efficient and cost-effective. Disadvantages: forced choices limit responses; wording may bias responses; impersonal.
Individual interviews - Advantages: can build rapport; can probe for more information; can get breadth and depth of information. Disadvantages: time consuming; expensive; interview style may bias responses.
Focus groups - Advantages: can get breadth and depth of information in a short time frame; can convey key information about the program. Disadvantages: need a trained facilitator; time consuming to analyze responses.
Observation - Advantages: can assess fidelity as activities occur. Disadvantages: interpretation of behavior is difficult; expensive and time consuming.
Document review - Advantages: information already exists; doesn't disrupt the program. Disadvantages: depends on the quality of the information; time consuming.

TM Data Sources

TM Data Sources Two options: 1. Collect information from existing sources: surveys, program records, databases, documents, etc. 2. Collect new data

TM Data Sources Where or from whom will you get data for each of your indicators to answer your evaluation questions? Documents: medical records, meeting minutes, surveillance reports, interview records. Individuals: staff, providers, partnership members. Observations: data obtained from observations of staff, environment (reception area), office flow, activities, etc.

TM Existing vs. New Data Be aware that gathering and analyzing new data can be expensive and time consuming. Before making any plans to gather new data, check whether existing data sources have the information you need. If no existing data sources provide the information you need, then you may need to consider collecting new data.

TM Data Needs and Sources Needs: What data do we need to achieve our objectives? For whom do we need to use it? Does the system do what it is supposed to do? What is the timeframe for data use?

TM Good Data Sources Provide the necessary information to answer your evaluation questions Are feasible to access given the available resources Offer confidence in the quality of information gathered Are relevant to the time period you are interested in

TM Existing TB Data Sources Routinely collected data: Record forms at the health facility Record and report forms at the city/county/state level Record and report laboratory forms Census / Vital statistics Surveillance / BRFSS NHANES / NHIS

TM Existing TB Data Sources Other data sources at various levels: Work plans and budgets Annual reports Audits Meeting reports Planning documents Procurement records Storage facility stock cards

TM Conducting an Evaluation

TM Essential Steps to Evaluation 1. Identify program goals and objectives 2. Define the scope of the evaluation 3. Define evaluation questions & indicators 4. Define methods 5. Design instruments and tools 6. Carry out the evaluation 7. Analyze data and write a report 8. Disseminate and use data Source: FHI, Impact, USAID manual

TM Evaluation Process Capacity building; involvement of stakeholders; developing indicators; collecting and analyzing data; defining what to evaluate.

TM Gathering the Information You Need 1) Determine your evaluation question 2) Identify the type of data you need to answer your questions 3) Identify sources where you can find the information you need 4) Determine the methods you will use to review existing information or collect new data 5) Identify the tools you will use to collect new data

TM Evaluation Barriers

TM Evaluation Barriers  Unrealistic targets/goals  Objectives not linked to program  Not meaningful to the program activities  Measures poorly defined – not useful

TM Overcoming Barriers  Include evaluation during planning phase  Involve key stakeholders from outset  Establish realistic goals/objectives with time frames  Establish appropriate, well-defined evaluation measures  Provide training and/or technical assistance  Build in feedback loops to program (quality improvement)  Establish baseline  Build into existing work processes

TM If We Remember Nothing Else ….. Evaluation is not surveillance or research Evaluation is an activity to help us make decisions about a program and to document its improvements

TM Thank you

TM Acknowledgements DTBE/FSEB/Program Evaluation Team (PET) Awal Khan Christina Dahlstrom Judy Gibson Lakshmy Menon Brandy Peterson Lauren Polansky DTBE/FSEB/Field Services Teams I & II Greg Andrews Dan Ruggiero Bruce Bradley Gail Burns-Grant Alstead Forbes Regina Gore Andy Heetderks Mark Miner Vic Tomlinson Dawn Tuckey