Monitoring and Evaluation
Donald T. Simeon, Caribbean Health Research Council
Healthy Caribbean Conference, Barbados, October 16-18, 2008

What is Monitoring?
Monitoring is the routine process of collecting data to measure progress toward programme objectives.
It involves routinely looking at the way we implement programmes, deliver services, etc.
Monitoring examines efficiency.

What is Evaluation?
Evaluation is the use of research methods to systematically investigate a programme's effectiveness.
It involves measurements over time, hence the need for a baseline, and sometimes requires a control or comparison group.
Evaluation involves special research studies.
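To make the baseline and comparison-group idea concrete, here is a minimal sketch in Python with entirely hypothetical figures (none are from the presentation): it estimates a programme effect as the change from baseline in an intervention group minus the change in a comparison group.

```python
def change(baseline: float, followup: float) -> float:
    """Absolute change from baseline to follow-up measurement."""
    return followup - baseline

# Hypothetical outcome: % of diabetic patients with controlled blood glucose
intervention = {"baseline": 30.0, "followup": 45.0}  # clinics receiving the programme
comparison = {"baseline": 31.0, "followup": 34.0}    # comparison clinics

# Programme effect = change in intervention group minus change in comparison group
effect = change(**intervention) - change(**comparison)
print(f"Estimated programme effect: {effect:.1f} percentage points")
# Estimated programme effect: 12.0 percentage points
```

Without the baseline and the comparison group, the intervention clinics' improvement (15 points) could not be separated from background change (3 points), which is why both are emphasised above.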

Why Monitor and Evaluate Programmes?
To ensure that programmes are being implemented as designed (fidelity of programme implementation).
To ensure the delivery of quality services (continuous quality improvement - CQI).
To ensure that the programmes are making a difference (outcomes).
To ensure that programmes and funds are used appropriately and efficiently (accountability).

Fidelity of Programme Implementation
Are projects and components of projects (i.e., specific activities) being conducted as planned and on schedule?
Done primarily through programme monitoring: examine the implementation of activities relative to a planned schedule.
Programme monitoring ensures that programmes are administered and services are delivered in the way they were designed.

Continuous Quality Improvement
Use information to modify/improve the configuration and implementation of programmes.
What was learned from implementing the programme that can be improved upon?
What went wrong and how can it be corrected next time?
What worked especially well, and how can those lessons be incorporated into future activities?
Did the intervention work? Were the outcomes as expected?

Programme Outcomes
Are the projects/interventions having the desired effect on the target populations? For example, are health care providers using clinical guidelines as recommended?
Done primarily through programme evaluation: determine whether the programme/project made a difference (e.g., does the use of guidelines result in decreased rates of complications in diabetic patients?).

Programme Outcomes
Usually examined through studies designed to collect data on logical outcomes from the project/intervention (e.g., periodic surveys of target groups).
Did the programmes have the expected/desired outcomes? If not, why? Was it a function of implementation challenges, poor project design, or a study design that failed to capture outcomes?
What are the implications for future interventions? Should they be the same, or can they be improved in some way?

Accountability
Taxpayers, donor agencies and lenders need to know that funds were used as intended and that programmes made a difference.
Evaluation findings document achievements as well as what remains to be done.
Findings can be used to demonstrate unmet needs and facilitate requests for additional funds.

Best Practices for M&E Systems
M&E funding should be proportional to programme resources (ideally about 7% of the programme budget).
M&E is needed at all levels and is most useful if performed in a logical sequence: first assessing input/process/output data (monitoring/process evaluation), then examining behavioural or immediate outcomes, and finally assessing disease and social level impacts.
To minimize the data collection burden and maximize limited resources, M&E activities should be well coordinated and utilize ongoing data collection and analysis as much as possible, in preference to designing new instruments or stand-alone systems.
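As a small illustration of the budget-share guidance (this slide cites about 7%; the following slide cites about 10%), the sketch below applies both shares to a hypothetical programme budget; the figures are illustrative only, not from the presentation.

```python
def me_allocation(programme_budget: float, share: float) -> float:
    """Amount to reserve for monitoring and evaluation at a given budget share."""
    return programme_budget * share

budget = 2_000_000  # hypothetical programme budget
for share in (0.07, 0.10):
    print(f"M&E at {share:.0%} of budget: {me_allocation(budget, share):,.0f}")
# M&E at 7% of budget: 140,000
# M&E at 10% of budget: 200,000
```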

Best Practices for M&E Systems
To increase the utilization of evaluation results, M&E design, planning, analysis, and reporting should actively involve key stakeholders: district and national programme managers, policy makers, community members, and programme participants.
M&E indicators should be comprehensive: they should measure population-based biological, behavioural, and social data to determine the "collective effectiveness" of consolidated programmes, and these survey efforts should be supplemented with good qualitative data.
Indicators and instruments for data collection and analysis build upon wide experience and recent developments, but can be adapted locally.
M&E activities should be proportional to programme resources (ideally about 10% of the programmatic budget).

Developing/Selecting Indicators

Indicators
Specific measures that reflect a larger set of circumstances.
With greater emphasis on transparency globally, people want instant summary information and instant feedback; indicators respond to this need.

Indicators – things to know
They only indicate: they will never capture the richness and complexity of a system.
They are designed to give 'slices' of reality.
They encourage explicitness: they force us to be clear and explicit.
They usually rely on numbers and numerical techniques (rates, ratios, comparisons).
They have specific measurement protocols which must be respected.

Sources of Data
Primary data sources: quantitative programme data (e.g. from coverage of services); surveys (demographic and health surveys, epidemiological, behavioural and other studies); research and impact evaluations; qualitative data from programme staff, key informants and direct observation.
Secondary data sources: national response documentation, expenditure reports and programme review reports; surveillance reports; routine statistics (e.g. mortality, hospital admissions).

Relating Program Objectives to Indicators
Program goals and objectives may be vague or overly broad, making indicator selection difficult.
Indicators should be clearly related to program goals and objectives; a program objective may have multiple indicators.
Indicators are used at all levels of the programme implementation process: process indicators, outcome indicators, and impact indicators.

Types of Indicators
Impact: indicators used for national and global reporting, e.g. mortality rates.
Outcomes: programme indicators used for reporting to national authorities and donors; changes at the end of the intervention/programme period, e.g. rate of HBP control among targeted patients, hospital admissions, etc.
Outputs: indicators for selected interventions (such as approval of a policy or health care professionals trained), used for programmatic decision making.
Inputs: resource allocation indicators may be included, covering financial, human, material, and technical resources.
Performance indicators - what are they? Performance indicators are measures of inputs, processes, outputs, outcomes, and impacts for development projects, programmes, or strategies. When supported with sound data collection (perhaps involving formal surveys), analysis and reporting, indicators enable managers to track progress, demonstrate results, and take corrective action to improve service delivery. Participation of key stakeholders in defining indicators is important because they are then more likely to understand and use indicators for management decision-making. (Independent Evaluation Group, The World Bank, 2xxx)
Note: handout of indicators.
Indicators form the basis for: gathering baseline information before beginning implementation; selecting the desired level of improvement; setting targets for assessing performance during implementation and adjusting as needed; evaluating and reporting results to stakeholders and constituents.
Note: project indicators contribute to strategic programmes; they can be programmatic or intervention related, as per the goals of specific projects.
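One way to make the levels and the baseline/target idea concrete is a small data structure in which each indicator carries its level, a baseline, a target, and a current value, so progress toward the target can be tracked. This is a minimal sketch; the names and numbers are hypothetical, not drawn from the presentation.

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    name: str
    level: str        # "input", "output", "outcome" or "impact"
    baseline: float
    target: float
    current: float

    def progress(self) -> float:
        """Share of the baseline-to-target distance achieved so far."""
        return (self.current - self.baseline) / (self.target - self.baseline)

# Hypothetical indicators at different levels of the results chain
indicators = [
    Indicator("Care providers trained on clinical guidelines", "output", 0, 200, 150),
    Indicator("% targeted patients with controlled blood pressure", "outcome", 30, 50, 42),
    Indicator("Diabetes-related A&E admissions per 1,000 patients", "impact", 18, 12, 16),
]

for ind in indicators:
    print(f"[{ind.level}] {ind.name}: {ind.progress():.0%} of the way to target")
```

Because progress is computed as a share of the baseline-to-target distance, the same calculation works whether the target is an increase (training, control rates) or a decrease (admissions).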

Some considerations
How can the main focus of the objective best be measured?
What practical constraints are there to measuring the indicator?
Are there alternative or complementary measures that should be considered?
What resources (human and financial) does the indicator require?
Do standard (validated, internationally recognized) indicators exist?
How will the results not captured by the selected indicator be measured? (Indicators are imperfect.)

General Criteria of Good Indicators
Indicators should be expressed in terms of: quantity, quality, population, and time.
For example, an indicator written for the programme objective of "improving glycaemic control in diabetic patients" might specify: "Increase from 30% to 50% (quantity) the rate of glycaemic control (quality) among diabetic patients (population) by October 2009 (time)."

Examples of Indicators
# care providers trained to use clinical guidelines in the past year
% patients with controlled diabetes in health centres
% A&E admissions for diabetes-related complications
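The second example is simply a proportion computed from routine clinic records. A minimal sketch with hypothetical counts (not from the presentation):

```python
def percent_controlled(controlled: int, total: int) -> float:
    """Percentage of registered diabetic patients whose condition is controlled."""
    if total == 0:
        raise ValueError("no registered patients: indicator undefined")
    return 100 * controlled / total

# Hypothetical registers: (patients with controlled diabetes, total diabetic patients)
clinics = {"Health Centre A": (120, 300), "Health Centre B": (80, 160)}

for name, (controlled, total) in clinics.items():
    print(f"{name}: {percent_controlled(controlled, total):.1f}% controlled")
# Health Centre A: 40.0% controlled
# Health Centre B: 50.0% controlled
```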

General Criteria of Good Indicators
Simple, clear and understandable: the indicator should be immediately understood as a measure of project effectiveness.
Useful: the indicator should be functional and practical.
Valid: the indicator should actually measure what it is intended to measure.
Specific: the indicator should measure only the condition or event under observation and nothing else; there should be one indicator related explicitly to each activity.
Reliable: the indicator should produce the same result when used more than once to measure the same event, regardless of who completes the assessment, when, and under what conditions.
Replicable: the indicator is not unique to one project or time frame.

General Criteria of Good Indicators
Relevant: related to your work; decisions will be made based on the data collected, and the proposed indicators and associated data must be appropriate to the project rationale.
Sensitive: reflects changes over time in the state of the condition or event under observation.
Operational: measurable or quantifiable using tested definitions and reference standards.
Affordable: imposes reasonable measurement costs.
Feasible: can be carried out using the existing or proposed data collection system.

Summary
M&E ensures that programmes are being implemented as designed and that funds are used appropriately and efficiently.
M&E ensures that programmes are making a difference.
The selection of appropriate indicators (relative to programme objectives) is critical to the success of M&E.

Thank you