Presentation on theme: "Donald T. Simeon, Caribbean Health Research Council" — Presentation transcript:
1 Donald T. Simeon, Caribbean Health Research Council
Monitoring and Evaluation
Healthy Caribbean Conference, Barbados, October 16-18, 2008
2 What is Monitoring?
Monitoring is the routine process of collecting data to measure progress toward programme objectives.
Monitoring involves routinely looking at the way we implement programmes, deliver services, etc.
It examines efficiency.
3 What is Evaluation?
Evaluation is the use of research methods to systematically investigate a programme's effectiveness.
Evaluation involves measurements over time, hence the need for a baseline.
It sometimes requires a control or comparison group.
Evaluation involves special research studies.
4 Why Monitor and Evaluate Programmes?
To ensure that programmes are being implemented as designed (fidelity of programme implementation)
To ensure the delivery of quality services (continuous quality improvement, CQI)
To ensure that the programmes are making a difference (outcomes)
To ensure that programmes and funds are used appropriately and efficiently (accountability)
5 Fidelity of Programme Implementation
Are projects and components of projects (i.e., specific activities) being conducted as planned and on schedule?
This is done primarily through programme monitoring.
Examine the implementation of activities relative to the planned schedule.
Programme monitoring ensures that programmes are administered and services are delivered in the way they were designed.
6 Continuous Quality Improvement
Use information to modify and improve the configuration and implementation of programmes.
What was learned from implementing the programme that can be improved upon?
What went wrong, and how can it be corrected next time?
What worked especially well, and how can those lessons be incorporated into future activities?
Did the intervention work? Were the outcomes as expected?
7 Programme Outcomes
Are the projects/interventions having the desired effect on the target populations?
For example, are health care providers using clinical guidelines as recommended?
This is done primarily through programme evaluation.
Determine whether the programme/project made a difference (e.g., does the use of guidelines result in decreased rates of complications in diabetic patients?).
8 Programme Outcomes
Usually examined through studies designed to collect data on the logical outcomes of the project/intervention (e.g., periodic surveys of target groups).
Did the programmes have the expected/desired outcomes? If not, why? Was it a function of implementation challenges, poor project design, or a study design that failed to capture outcomes?
What are the implications for future interventions? Should they be the same, or can they be improved in some way?
9 Accountability
Taxpayers, donor agencies and lenders need to know that:
funds were used as intended
programmes made a difference
Evaluation findings document achievements as well as what remains to be done.
Findings can be used to demonstrate unmet needs and to facilitate requests for additional funds.
10 Best Practices for M&E Systems
M&E funding should be proportional to programme resources (ideally about 10% of the programme budget).
M&E is needed at all levels and is most useful if performed in a logical sequence:
first assessing input/process/output data (monitoring/process evaluation),
then examining behavioural or immediate outcomes,
and finally assessing disease and social-level impacts.
To minimize the data collection burden and maximize limited resources, M&E activities should be well coordinated and should utilize ongoing data collection and analysis as much as possible, in preference to designing new instruments or stand-alone systems.
11 Best Practices for M&E Systems
To increase the utilization of evaluation results, M&E design, planning, analysis, and reporting should actively involve key stakeholders:
district and national programme managers, policy makers, community members, and programme participants.
M&E indicators should be comprehensive:
they should measure population-based biological, behavioural, and social data to determine the "collective effectiveness" of consolidated programmes.
These survey efforts should be supplemented with good qualitative data.
Indicators and instruments for data collection and analysis build upon wide experience and recent developments, but can be adapted locally.
13 Indicators
Specific measures that reflect a larger set of circumstances.
There is greater emphasis on transparency globally; people want instant summary information and instant feedback.
Indicators respond to this need.
14 Indicators – things to know
Indicators only indicate – they will never capture the richness and complexity of a system.
They are designed to give 'slices' of reality.
They encourage explicitness: they force us to be clear and explicit.
They usually rely on numbers and numerical techniques (rates, ratios, comparisons), as in the sketch below.
They have specific measurement protocols which must be respected.
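To make the numerical nature of indicators concrete, here is a minimal Python sketch of a rate-based indicator whose measurement protocol is fixed in code. The patient fields and the 140/90 mmHg control threshold are assumptions chosen for the example, not taken from the presentation.

```python
# Illustrative sketch only: computing a simple rate-based indicator.
# Field names and the 140/90 mmHg threshold are hypothetical.

def hbp_control_rate(patients):
    """% of hypertensive patients whose last reading is below 140/90 mmHg.

    The measurement protocol (which reading counts, which threshold
    applies) must be fixed in advance and respected on every use,
    otherwise results are not comparable over time.
    """
    eligible = [p for p in patients if p["hypertensive"]]
    if not eligible:
        return None  # indicator is undefined when the denominator is empty
    controlled = [p for p in eligible
                  if p["last_systolic"] < 140 and p["last_diastolic"] < 90]
    return 100.0 * len(controlled) / len(eligible)

patients = [
    {"hypertensive": True, "last_systolic": 132, "last_diastolic": 84},
    {"hypertensive": True, "last_systolic": 150, "last_diastolic": 95},
    {"hypertensive": False, "last_systolic": 118, "last_diastolic": 76},
]
print(hbp_control_rate(patients))  # 50.0 – one of two eligible patients controlled
```

Note how the non-hypertensive patient is excluded from the denominator: the indicator 'slices' reality to exactly one condition and nothing else.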
15 Sources of Data
Primary data sources:
Quantitative programme data, e.g. from coverage of services
Surveys: demographic and health surveys; epidemiological, behavioural and other studies
Research and impact evaluations
Qualitative data from programme staff, key informants and direct observation
Secondary data sources:
National response documentation, expenditure reports and programme review reports
Surveillance reports
Routine statistics, e.g. mortality, hospital admissions
16 Relating Programme Objectives to Indicators
Programme goals and objectives may be vague or overly broad, making indicator selection difficult.
Indicators should be clearly related to programme goals and objectives.
A programme objective may have multiple indicators.
Indicators are used at all levels of the programme implementation process:
process indicators
outcome indicators
impact indicators
17 Types of Indicators
Impact – indicators used for national and global reporting, e.g. mortality rates.
Outcomes – programme indicators used for reporting to national authorities and donors; changes at the end of the intervention/programme period, e.g. rate of HBP control among targeted patients, hospital admissions, etc.
Outputs – selected intervention indicators (such as approval of a policy, or health care professionals trained) used for programmatic decision making.
Inputs – resource allocation indicators may be included: financial, human, material, and technical resources.
What are performance indicators? Performance indicators are measures of inputs, processes, outputs, outcomes, and impacts for development projects, programmes, or strategies. When supported with sound data collection (perhaps involving formal surveys), analysis, and reporting, indicators enable managers to track progress, demonstrate results, and take corrective action to improve service delivery. Participation of key stakeholders in defining indicators is important because they are then more likely to understand and use indicators for management decision-making. (Independent Evaluation Group, The World Bank)
Indicators form the basis for:
gathering baseline information before beginning implementation
selecting the desired level of improvement
setting targets for assessing performance during implementation, and adjusting as needed
evaluating and reporting results to stakeholders and constituents
Note: Project indicators can be programmatic or intervention-related, as per the goals of specific projects, and should contribute to strategic programmes. A sketch of this structure follows.
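As a sketch of the structure described above (not a prescribed implementation), an indicator record can carry its level in the input/output/outcome/impact hierarchy together with a baseline and a target. All names and numbers here are hypothetical.

```python
# Illustrative sketch only: a minimal data structure for performance
# indicators at different levels, with a baseline gathered before
# implementation and a target set for assessing performance.
from dataclasses import dataclass

@dataclass
class Indicator:
    name: str
    level: str        # "input" | "output" | "outcome" | "impact"
    baseline: float   # value gathered before implementation begins
    target: float     # desired level of improvement
    current: float    # latest measured value

    def progress(self) -> float:
        """Fraction of the baseline-to-target distance achieved so far."""
        span = self.target - self.baseline
        if span == 0:
            return 1.0
        return (self.current - self.baseline) / span

guideline_training = Indicator(
    name="% of care providers trained in clinical guidelines",
    level="output", baseline=20.0, target=80.0, current=50.0,
)
print(f"{guideline_training.progress():.0%} of the way to target")  # 50%
```

Tracking `current` against `baseline` and `target` is what lets managers take corrective action during implementation rather than only at the end.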
18 Some considerations
How can the main focus of the objective best be measured?
What practical constraints are there to measuring the indicator?
Are there alternative or complementary measures that should be considered?
What resources (human and financial) does the indicator require?
Do standard (validated, internationally recognized) indicators exist?
How will results not captured by the selected indicator be measured? (Indicators are imperfect.)
19 General Criteria of Good Indicators
Indicators should be expressed in terms of:
quantity
quality
population
time
For example, an indicator written for the programme objective of "improving glycaemic control in diabetic patients" might specify:
"Increase from 30% to 50% (quantity) the glycaemic control rate (quality) among diabetic patients (population) by October 2009 (time)."
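To show how the four components make an indicator checkable, here is a small sketch that expresses the slide's example as structured data. The quantities and the deadline mirror the slide's example; the exact cut-off date within October 2009 is an assumption.

```python
# Illustrative sketch only: the slide's example objective broken into
# its four explicit components. The end-of-October deadline is assumed.
from datetime import date

objective = {
    "quantity": (30.0, 50.0),          # increase from 30% to 50%
    "quality": "glycaemic control rate",
    "population": "diabetic patients",
    "time": date(2009, 10, 31),        # "by October 2009"
}

def target_met(measured_pct: float, on: date) -> bool:
    """True if the measured rate reaches the target level by the deadline."""
    return measured_pct >= objective["quantity"][1] and on <= objective["time"]

print(target_met(52.0, date(2009, 9, 1)))   # True: 52% >= 50% before the deadline
print(target_met(45.0, date(2009, 9, 1)))   # False: target level not yet reached
```

Writing the objective this way forces the explicitness that slide 14 calls for: none of the four components can be left vague.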
20 Examples of Indicators
# of care providers trained to use clinical guidelines in the past year
% of patients with controlled diabetes in health centres
% of A&E admissions for diabetes-related complications
21 General Criteria of Good Indicators
Simple, clear and understandable – the indicator should be immediately understood as a measure of project effectiveness.
Useful – the indicator should be functional and practical.
Valid – it should actually measure the phenomenon it is intended to measure.
Specific – it should measure only the condition or event under observation and nothing else; there should be one indicator related explicitly to each activity.
Reliable – it should produce the same result regardless of who performs the measurement, when, and under what conditions.
Replicable – the indicator is not unique to one project or time frame.
22 General Criteria of Good Indicators
Relevant – related to your work; decisions will be made based on the data collected, so the indicators and the data associated with them must be appropriate to the project rationale.
Sensitive – should reflect changes over time in the condition or event under observation.
Operational – should be measurable or quantifiable using tested definitions and reference standards.
Affordable – should impose reasonable measurement costs.
Feasible – should be possible to carry out using the existing data collection system.
23 Summary
M&E ensures that programmes are being implemented as designed and that funds are used appropriately and efficiently.
M&E ensures that programmes are making a difference.
The selection of appropriate indicators (relative to programme objectives) is critical to the success of an M&E system.