Monitoring and Evaluation of Health Services

Monitoring and Evaluation of Health Services
Dr. Rasha Salama, PhD Public Health
Faculty of Medicine, Suez Canal University, Egypt

Presentation Outline
- Monitoring and Evaluation of health services
- Evaluation: definition and concept
- Monitoring: definition and concept
- Monitoring versus evaluation
- Evaluation types, designs and methods
- The evaluation process: five phases
- Evaluation challenges

Monitoring and Evaluation (M&E)
Monitoring progress and evaluating results are key functions for improving the performance of those responsible for implementing health services. M&E shows whether a service or program is accomplishing its goals. It identifies program weaknesses and strengths, areas of the program that need revision, and areas that meet or exceed expectations. To do this, analysis of any or all of a program's domains is required.

Where does M&E fit?

Monitoring versus Evaluation
Monitoring: a planned, systematic process of observation that closely follows a course of activities and compares what is happening with what is expected to happen.
Evaluation: a process that assesses achievement against preset criteria. It has a variety of purposes and follows distinct methodologies (process, outcome, performance, etc.).

Evaluation and Monitoring
Evaluation: a systematic process to determine the extent to which service needs and results have been or are being achieved, and to analyse the reasons for any discrepancy. It attempts to measure the service's relevance, efficiency and effectiveness, and it measures whether, and to what extent, the programme's inputs and services are improving the quality of people's lives.
Monitoring: the periodic collection and review of information on programme implementation, coverage and use, for comparison with implementation plans. It is open to modifying original plans during implementation, identifies shortcomings before it is too late, and provides elements of analysis as to why progress fell short of expectations.

Comparison between Monitoring and Evaluation

Evaluation

Evaluation can focus on:
- Projects: normally a set of activities undertaken to achieve specific objectives within a given budget and time period.
- Programs: organized sets of projects or services concerned with a particular sector or geographic region.
- Services: based on a permanent structure and having the goal of becoming national in coverage (e.g. health services), whereas programmes are usually limited in time or area.
- Processes: organizational operations of a continuous and supporting nature (e.g. personnel procedures, administrative support for projects, distribution systems, information systems, management operations).
- Conditions: particular characteristics or states of being of persons or things (e.g. disease, nutritional status, literacy, income level).

Evaluation may focus on different aspects of a service or program:
- Inputs: resources provided for an activity, including cash, supplies, personnel, equipment and training.
- Processes: transform inputs into outputs.
- Outputs: the specific products or services that an activity is expected to deliver as a result of receiving the inputs. A service is effective if it "works", i.e. it delivers outputs in accordance with its objectives. A service is efficient or cost-effective if effectiveness is achieved at the lowest practical cost.
- Outcomes: people's responses to a programme and how they are doing things differently as a result of it. They are short-term effects related to objectives.
- Impacts: the effects of the service on the people and their surroundings. These may be economic, social, organizational, health, environmental, or other intended or unintended results of the programme. Impacts are long-term effects.

So what do you think? When is evaluation desirable?

When Is Evaluation Desirable? Program evaluation is often used when programs have been functioning for some time; this is called retrospective evaluation. However, evaluation should also be conducted when a new program within a service is being introduced; these are called prospective evaluations. A prospective evaluation identifies ways to increase the impact of a program on clients, examines and describes a program's attributes, and identifies how to improve delivery mechanisms to be more effective.

Prospective versus Retrospective Evaluation
Prospective evaluation determines what ought to happen (and why).
Retrospective evaluation determines what actually happened (and why).

Evaluation Matrix
The broadest and most common classification identifies two kinds of evaluation:
- Formative evaluation: evaluation of components and activities of a program other than their outcomes (Structure and Process Evaluation).
- Summative evaluation: evaluation of the degree to which a program has achieved its desired outcomes, and the degree to which any other outcomes (positive or negative) have resulted from the program.

Evaluation Matrix

Components of Comprehensive Evaluation

Evaluation Designs
- Ongoing service/program evaluation
- End-of-program evaluation
- Impact evaluation
- Spot-check evaluation
- Desk evaluation

Who conducts evaluation?

Who conducts evaluation?
- Internal evaluation (self-evaluation): people within a program sponsor, conduct and control the evaluation.
- External evaluation: someone from beyond the program acts as the evaluator and controls the evaluation.

Tradeoffs between External and Internal Evaluation

Tradeoffs between External and Internal Evaluation Source: Adapted from UNICEF Guide for Monitoring and Evaluation, 1991.

Guidelines for Evaluation (five phases)
A: Planning the Evaluation
B: Selecting Appropriate Evaluation Methods
C: Collecting and Analysing Information
D: Reporting Findings
E: Implementing Evaluation Recommendations

Phase A: Planning the Evaluation
- Determine the purpose of the evaluation.
- Decide on the type of evaluation.
- Decide who conducts the evaluation (the evaluation team).
- Review existing information in programme documents, including monitoring information.
- List the relevant information sources.
- Describe the programme.*
- Assess your own strengths and limitations.
*Provide background information on the history and current status of the programme being evaluated, including how it works (its objectives, strategies and management process), the policy environment, economic and financial feasibility, institutional capacity, socio-cultural aspects, participation and ownership, environment, and technology.

Phase B: Selecting Appropriate Evaluation Methods
- Identify evaluation goals and objectives (SMART).
- Formulate evaluation questions and sub-questions.
- Decide on the appropriate evaluation design.
- Identify measurement standards.
- Identify measurement indicators.
- Develop an evaluation schedule.
- Develop a budget for the evaluation.

Sample evaluation questions: What might stakeholders want to know?
- Program clients: Does this program provide us with high quality service? Are some clients provided with better services than other clients? If so, why?
- Program staff: Does this program provide our clients with high quality service? Should staff make any changes in how they perform their work, as individuals and as a team, to improve program processes and outcomes?
- Program managers: Does this program provide our clients with high quality service? Are there ways managers can improve or change their activities to improve program processes and outcomes?
- Funding bodies: Does this program provide its clients with high quality service? Is the program cost-effective? Should we make changes in how we fund this program or in the level of funding to the program?

Indicators: What Are They?
An indicator is a standardized, objective measure that allows:
- a comparison among health facilities
- a comparison among countries
- a comparison between different time periods
- a measure of the progress toward achieving program goals

Characteristics of Indicators
- Clarity: easily understandable by everybody
- Useful: represent all the important dimensions of performance
- Measurable:
  - Quantitative: rates, proportions, percentages with a common denominator (e.g., population)
  - Qualitative: "yes" or "no"
- Reliability: can be collected consistently by different data collectors
- Validity: measure what we mean to measure

Which Indicators? The following questions can help determine measurable indicators:
- How will I know if an objective has been accomplished?
- What would be considered effective?
- What would be a success?
- What change is expected?

Evaluation Areas, Questions and Example Indicators (Formative Assessment)
- Staff Supply. Evaluation question: Is staff supply sufficient? Example indicators: staff-to-client ratios.
- Service Utilization. Evaluation question: What are the program's usage levels? Example indicators: percentage of utilization.
- Accessibility of Services. Evaluation question: How do members of the target population perceive service availability? Example indicators: percentage of the target population who are aware of the program in their area; percentage of the "aware" target population who know how to access the service.
- Client Satisfaction. Evaluation question: How satisfied are clients? Example indicators: percentage of clients who report being satisfied with the service received.
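As a rough illustration of how such formative indicators could be computed from routine programme records, here is a minimal Python sketch; all counts, names and figures are hypothetical and not taken from the slides.

```python
# Hypothetical counts that would normally come from registers, attendance
# sheets or an administrative data extract.
staff_count        = 12
registered_clients = 480
visits_this_month  = 350
planned_capacity   = 500   # visits the service was resourced to deliver
satisfied_clients  = 290   # clients reporting satisfaction in an exit survey
surveyed_clients   = 340

staff_to_client_ratio = registered_clients / staff_count
utilization_pct       = 100 * visits_this_month / planned_capacity
satisfaction_pct      = 100 * satisfied_clients / surveyed_clients

print(f"Staff-to-client ratio: 1:{staff_to_client_ratio:.0f}")
print(f"Service utilization:   {utilization_pct:.1f}%")
print(f"Client satisfaction:   {satisfaction_pct:.1f}%")
```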

Evaluation Areas, Questions and Example Indicators (Summative Assessment)
- Changes in Behaviour. Evaluation question: Have risk factors for cardiac disease changed? Example indicators: compare the proportion of respondents who report increased physical activity.
- Morbidity/Mortality. Evaluation questions: Has lung cancer mortality decreased by 10%? Has there been a reduction in the rate of low-birth-weight babies? Example indicators: age-standardized lung cancer mortality rates for males and females; compare annual rates of low-birth-weight babies over a five-year period.
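Where a summative indicator calls for an age-standardized rate, direct standardization (a weighted average of age-specific rates using a standard population) can be computed as in the sketch below; the death counts, person-years and standard-population weights are invented purely for illustration.

```python
# Hypothetical age-specific lung cancer deaths and person-years for a programme
# area, plus a standard population used as weights (direct standardization).
age_groups  = ["40-49", "50-59", "60-69", "70+"]
deaths      = [12,      45,      110,     160]
population  = [80_000,  60_000,  40_000,  25_000]   # person-years at risk
std_weights = [100_000, 90_000,  70_000,  50_000]   # standard population

# Age-specific rates (deaths per person-year)
rates = [d / p for d, p in zip(deaths, population)]

# Weighted average of age-specific rates, reported per 100,000 population
asr = sum(r * w for r, w in zip(rates, std_weights)) / sum(std_weights) * 100_000
print(f"Age-standardized mortality rate: {asr:.1f} per 100,000")
```

Comparing the rate computed this way at baseline and at the end of the programme is what would show whether a target such as a 10% reduction was met.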

So what will we do? Use an importance-feasibility matrix.
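One simple way to apply such a matrix is to score each candidate indicator on importance and feasibility and keep those that score high on both; the candidate names, scores and cut-off in this sketch are hypothetical.

```python
# Hypothetical stakeholder scores (1-5) for candidate indicators.
candidates = {
    "Staff-to-client ratio":           {"importance": 4, "feasibility": 5},
    "Percentage of clients satisfied": {"importance": 5, "feasibility": 4},
    "Age-standardized mortality rate": {"importance": 5, "feasibility": 2},
    "Number of posters printed":       {"importance": 2, "feasibility": 5},
}

threshold = 4  # keep indicators scoring high on both axes
selected = [name for name, score in candidates.items()
            if score["importance"] >= threshold and score["feasibility"] >= threshold]
print("Indicators to carry into the evaluation:", selected)
```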

Face reality! Assess your strengths and weaknesses.

Eventually......

Phase C: Collecting and Analysing Information
- Develop data collection instruments.
- Pre-test data collection instruments.
- Undertake data collection activities.
- Analyse the data.
- Interpret the data.

Development of a program logic model
A program logic model provides a framework for an evaluation. It is a flow chart that shows the program's components, the relationships between components, and the sequencing of events.

Use of IF-THEN Logic Model Statements To support logic model development, a set of “IF-THEN” statements helps determine if the rationale linking program inputs, outputs and objectives/outcomes is plausible, filling in links in the chain of reasoning
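One way to make that chain of reasoning explicit is to write each link down and read it back as an IF-THEN statement; the programme and links in this sketch are hypothetical, used only to show the pattern.

```python
# A hypothetical logic-model chain for an immunization outreach programme.
# Reading each link as "IF ... THEN ..." is a quick plausibility check.
chain = [
    ("trained outreach nurses and vaccine stock are available (inputs)",
     "monthly outreach sessions are held in remote villages (activities)"),
    ("monthly outreach sessions are held (activities)",
     "more children complete the full vaccination schedule (outputs)"),
    ("more children complete the full vaccination schedule (outputs)",
     "incidence of vaccine-preventable disease falls (outcome)"),
]

for condition, result in chain:
    print(f"IF {condition}")
    print(f"  THEN {result}")
    print()
```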

CAT SOLO mnemonic Next, the CAT Elements (Components, Activities and Target Groups) of a logic model can be examined

Gathering of Qualitative and Quantitative Information: Instruments
Qualitative tools: There are five frequently used data collection processes in qualitative evaluation (more than one method can be used):
1. Unobtrusive seeing, involving an observer who is not seen by those who are observed;
2. Participant observation, involving an observer who does not take part in an activity but is seen by the activity's participants;
3. Interviewing, involving a more active role for the evaluator because he/she poses questions to the respondent, usually on a one-on-one basis;
4. Group-based data collection processes, such as focus groups; and
5. Content analysis, which involves reviewing documents and transcripts to identify patterns within the material.

Quantitative tools: "Quantitative, or numeric information, is obtained from various databases and can be expressed using statistics."
- Surveys/questionnaires
- Registries
- Activity logs
- Administrative records
- Patient/client charts
- Registration forms
- Case studies
- Attendance sheets
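As a small illustration of turning such raw records into summary statistics, the sketch below counts visits per month from a hypothetical activity log; the dates are invented.

```python
from collections import Counter
from datetime import date

# Hypothetical activity-log entries (in practice extracted from attendance
# sheets, registries or administrative records).
visits = [
    date(2011, 1, 14), date(2011, 1, 27), date(2011, 2, 3),
    date(2011, 2, 18), date(2011, 2, 25), date(2011, 3, 9),
]

visits_per_month = Counter(d.strftime("%Y-%m") for d in visits)
for month, count in sorted(visits_per_month.items()):
    print(month, count, "visits")
```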

Pretesting or piloting......

Other monitoring and evaluation methods:
- Biophysical measurements
- Cost-benefit analysis
- Sketch mapping
- GIS mapping
- Transects
- Seasonal calendars
- Most significant change method
- Impact flow diagram (cause-effect diagram)
- Institutional linkage diagram (Venn/Chapati diagram)
- Problem and objectives tree
- Systems (inputs-outputs) diagram
- Monitoring and evaluation wheel (spider web)

Spider Web Method: This method is a visual index developed to identify the kind of indicators/criteria that can be used to monitor change over the program period. This would present a ‘before’ and ‘after’ program/project situation. It is commonly used in participatory evaluation.
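A monitoring and evaluation wheel of this kind can be drawn as a radar (polar) chart; the sketch below plots hypothetical before/after scores for a handful of criteria and assumes matplotlib is available.

```python
import math
import matplotlib.pyplot as plt

# Hypothetical criteria and 0-5 scores agreed with stakeholders at baseline
# ("before") and at the end of the programme ("after").
criteria = ["Staff capacity", "Community participation", "Service coverage",
            "Data quality", "Client satisfaction"]
before = [2, 1, 3, 2, 2]
after  = [4, 3, 4, 3, 4]

n = len(criteria)
angles = [2 * math.pi * i / n for i in range(n)]
angles += angles[:1]                      # repeat the first angle to close the polygon

fig, ax = plt.subplots(subplot_kw={"polar": True})
for label, scores in (("Before", before), ("After", after)):
    values = scores + scores[:1]
    ax.plot(angles, values, label=label)
    ax.fill(angles, values, alpha=0.1)

ax.set_xticks(angles[:-1])
ax.set_xticklabels(criteria)
ax.set_ylim(0, 5)
ax.legend(loc="upper right")
plt.show()
```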

Phase D: Reporting Findings
- Write the evaluation report.
- Decide on the method of sharing the evaluation results and on communication strategies.
- Share the draft report with stakeholders, revise as needed, and follow up.
- Disseminate the evaluation report.

Example of suggested outline for an evaluation report

Phase E: Implementing Evaluation Recommendations
- Develop a new or revised implementation plan in partnership with stakeholders.
- Monitor the implementation of evaluation recommendations and report regularly on implementation progress.
- Plan the next evaluation.

Challenges to Evaluation

References
WHO: UNFPA. Programme Manager's Planning, Monitoring & Evaluation Toolkit. Division for Oversight Services, August 2004.
Ontario Ministry of Health and Long-Term Care, Public Health Branch. In: The Health Communication Unit at the Centre for Health Promotion. Introduction to Evaluating Health Promotion Programs. November 23-24, 2007.
Donaldson SI, Gooler LE, Scriven M. (2002). Strategies for managing evaluation anxiety: Toward a psychology of program evaluation. American Journal of Evaluation, 23(3), 261-272.
CIDA. "CIDA Evaluation Guide", Performance Review Branch, 2000.
OECD. "Improving Evaluation Practices: Best Practice Guidelines for Evaluation and Background Paper", 1999.
UNDP. "Results-Oriented Monitoring and Evaluation: A Handbook for Programme Managers", Office of Evaluation and Strategic Planning, New York, 1997.
UNICEF. "A UNICEF Guide for Monitoring and Evaluation: Making a Difference?", Evaluation Office, New York, 1991.

References (cont.)
UNICEF. "Evaluation Reports Standards", 2004.
USAID. "Performance Monitoring and Evaluation - TIPS #3: Preparing an Evaluation Scope of Work", 1996, and "TIPS #11: The Role of Evaluation in USAID", 1997, Center for Development Information and Evaluation. Available at http://www.dec.org/usaid_eval/#004
U.S. Centers for Disease Control and Prevention (CDC). "Framework for Program Evaluation in Public Health", 1999. Available in English at http://www.cdc.gov/eval/over.htm
U.S. Department of Health and Human Services, Administration on Children, Youth, and Families (ACYF). "The Program Manager's Guide to Evaluation", 1997.

Thank You