1 Monitoring and Evaluation
Donald T. Simeon, Caribbean Health Research Council
Healthy Caribbean Conference, Barbados, October 16-18, 2008

2 What is Monitoring?
Monitoring is the routine process of collecting data to measure progress toward programme objectives
It involves routinely looking at the way we implement programmes and deliver services
It examines efficiency

3 What is Evaluation?
Evaluation is the use of research methods to systematically investigate a programme's effectiveness
It involves measurements over time, hence the need for a baseline
It sometimes requires a control or comparison group
It involves special research studies

4 Why Monitor and Evaluate Programmes?
To ensure that programmes are being implemented as designed (fidelity of programme implementation)
To ensure the delivery of quality services (continuous quality improvement, CQI)
To ensure that programmes are making a difference (outcomes)
To ensure that programmes and funds are used appropriately and efficiently (accountability)

5 Fidelity of Programme Implementation
Are projects and their components (i.e., specific activities) being conducted as planned and on schedule?
This is assessed primarily through programme monitoring
Examine the implementation of activities relative to the planned schedule
Programme monitoring ensures that programmes are administered and services are delivered in the way they were designed

6 Continuous Quality Improvement
Use information to modify and improve the configuration and implementation of programmes
What was learned from implementing the programme that can be improved upon?
What went wrong, and how can it be corrected next time?
What worked especially well, and how can those lessons be incorporated into future activities?
Did the intervention work? Were the outcomes as expected?

7 Programme Outcomes
Are the projects/interventions having the desired effect on the target populations? For example, are health care providers using clinical guidelines as recommended?
Outcomes are assessed primarily through programme evaluation
Determine whether the programme/project made a difference (e.g., does the use of guidelines result in decreased rates of complications in diabetic patients?)

8 Programme Outcomes
Usually examined through studies designed to collect data on the logical outcomes of the project/intervention (e.g., periodic surveys of target groups)
Did the programmes have the expected/desired outcomes? If not, why? Was it a function of implementation challenges, poor project design, or a study design that failed to capture outcomes?
What are the implications for future interventions? Should they be the same, or can they be improved in some way?

9 Accountability
Taxpayers, donor agencies and lenders need to know that funds were used as intended and that programmes made a difference
Evaluation findings document achievements as well as what remains to be done
Findings can be used to demonstrate unmet needs and facilitate requests for additional funds

10 Best Practices for M&E Systems
M&E activities should be proportional to programme resources (ideally about 10% of the programme budget; a small worked example follows below)
M&E is needed at all levels and is most useful when performed in a logical sequence: first assessing input/process/output data (monitoring and process evaluation), then examining behavioural or immediate outcomes, and finally assessing disease-level and social-level impacts
To minimize the data collection burden and maximize limited resources, M&E activities should be well coordinated and should utilize ongoing data collection and analysis wherever appropriate, in preference to designing new instruments or stand-alone systems
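As a rough illustration of the budgeting guideline above, the Python sketch below sizes an M&E allocation as a share of the programme budget. The budget figure is invented and the 10% share simply restates the guideline; this is a back-of-envelope aid, not part of the presentation.

    # Hypothetical illustration: size an M&E budget as a share of programme resources.
    PROGRAMME_BUDGET = 2_000_000  # total programme budget (invented figure, e.g. USD)
    ME_SHARE = 0.10               # guideline above: roughly 10% of the budget

    me_budget = PROGRAMME_BUDGET * ME_SHARE
    print(f"Suggested M&E allocation: {me_budget:,.0f} of {PROGRAMME_BUDGET:,}")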

11 Best Practices for M&E Systems
To increase the utilization of evaluation results, M&E design, planning, analysis, and reporting should actively involve key stakeholders: district and national programme managers, policy makers, community members, and programme participants
M&E indicators should be comprehensive: they should measure population-based biological, behavioural, and social data to determine the "collective effectiveness" of consolidated programmes
Indicators and instruments for data collection and analysis build upon wide experience and recent developments, but can be adapted locally
Survey efforts should be supplemented with good qualitative data

12 Developing/Selecting Indicators

13 Indicators
Specific measures that reflect a larger set of circumstances
There is greater emphasis on transparency globally; people want instant summary information and instant feedback
Indicators respond to this need

14 Indicators – things to know
Indicators only indicate – they will never capture the full richness and complexity of a system
They are designed to give 'slices' of reality
They encourage explicitness: they force us to be clear and explicit
They usually rely on numbers and numerical techniques (rates, ratios, comparisons) – see the sketch after this list
They have specific measurement protocols, which must be respected
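Because indicators typically boil down to a rate or ratio computed under a fixed measurement protocol, a short sketch may help. The following Python fragment is a hypothetical illustration only: the protocol, the counts, and the per-1,000 convention are assumptions, not figures from the presentation.

    # Hypothetical sketch: a rate indicator with an explicit measurement protocol.
    # Assumed protocol: numerator = diabetes-related admissions in the period;
    # denominator = mid-period catchment population; expressed per 1,000 people.
    def rate_per_1000(events: int, population: int) -> float:
        """Compute a rate per 1,000 following the stated protocol."""
        if population <= 0:
            raise ValueError("population must be positive")
        return 1000 * events / population

    print(rate_per_1000(events=120, population=48_000))  # -> 2.5 per 1,000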

15 Sources of Data
Primary data sources:
Quantitative programme data, e.g. from coverage of services
Surveys: demographic and health surveys; epidemiological, behavioural and other studies
Research and impact evaluations
Qualitative data from programme staff, key informants and direct observation
Secondary data sources:
National response documentation, expenditure reports and programme review reports
Surveillance reports
Routine statistics, e.g. mortality, hospital admissions

16 Relating Programme Objectives to Indicators
Programme goals and objectives may be vague or overly broad, making indicator selection difficult
Indicators should be clearly related to programme goals and objectives
Programme objectives may have multiple indicators
Indicators are used at all levels of the programme implementation process:
Process indicators
Outcome indicators
Impact indicators

17 Types of Indicators
Impact: indicators used for national and global reporting, e.g. mortality rates
Outcomes: programme indicators used for reporting to national authorities and donors; changes at the end of the intervention/programme period, e.g. rate of high blood pressure (HBP) control among targeted patients, hospital admissions, etc.
Outputs: selected intervention indicators (such as approval of a policy, or health care professionals trained) used for programmatic decision making
Inputs: financial, human, material, and technical resources; resource allocation indicators may be included
What are performance indicators? Performance indicators are measures of inputs, processes, outputs, outcomes, and impacts for development projects, programmes, or strategies. When supported with sound data collection (perhaps involving formal surveys), analysis and reporting, indicators enable managers to track progress, demonstrate results, and take corrective action to improve service delivery. Participation of key stakeholders in defining indicators is important because they are then more likely to understand and use indicators for management decision-making. (Independent Evaluation Group, The World Bank, 2xxx)
Indicators form the basis for:
Gathering baseline information before beginning implementation
Selecting the desired level of improvement and setting targets for assessing performance during implementation, adjusting as needed (a minimal sketch of this follows below)
Evaluating and reporting results to stakeholders and constituents
Note: Project indicators contribute to strategic programmes; they can be programmatic or intervention-related, as per the goals of specific projects.
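To make the baseline/target bookkeeping concrete, here is a minimal Python sketch of an indicator record that tracks progress from a baseline toward a target. The class, field names and sample values are hypothetical illustrations, not anything defined in the presentation or in the World Bank source quoted above.

    # Minimal sketch: a performance indicator with baseline, target and current value.
    from dataclasses import dataclass

    @dataclass
    class Indicator:
        name: str
        baseline: float   # value gathered before implementation begins
        target: float     # desired level of improvement
        current: float    # latest measured value

        def progress(self) -> float:
            """Fraction of the baseline-to-target distance achieved so far."""
            return (self.current - self.baseline) / (self.target - self.baseline)

    # Hypothetical example: glycaemic control rising from 30% toward 50%.
    glycaemic = Indicator("glycaemic control rate", baseline=0.30, target=0.50, current=0.42)
    print(f"{glycaemic.progress():.0%} of the way from baseline to target")  # -> 60%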

18 Some considerations
How can the main focus of the objective best be measured?
What practical constraints are there to measuring the indicator?
Are there alternative or complementary measures that should be considered?
What resources (human and financial) does the indicator require?
Do standard (validated, internationally recognized) indicators exist?
How will results not captured by the selected indicator be measured? (Indicators are imperfect.)

19 General Criteria of Good Indicators
Indicators should be expressed in terms of:
Quantity
Quality
Population
Time
For example, an indicator written for the programme objective of "improving glycaemic control in diabetic patients" might specify: "Increase from 30% to 50% (quantity) the rate of glycaemic control (quality) among diabetic patients (population) by October 2009 (time)."

20 Examples of Indicators
# care providers trained to use clinical guidelines in the past year
% patients with controlled diabetes in health centres (a sketch computing this indicator appears below)
% A&E admissions for diabetes-related complications
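As a hedged illustration of how the second example indicator might be computed from routine records, the Python sketch below uses an invented register layout; the fields and values are assumptions for illustration only.

    # Hypothetical health centre register: one record per patient.
    records = [
        {"patient_id": 1, "diabetic": True,  "controlled": True},
        {"patient_id": 2, "diabetic": True,  "controlled": False},
        {"patient_id": 3, "diabetic": False, "controlled": False},
        {"patient_id": 4, "diabetic": True,  "controlled": True},
    ]

    diabetics = [r for r in records if r["diabetic"]]
    controlled = sum(r["controlled"] for r in diabetics)
    pct = 100 * controlled / len(diabetics)
    print(f"% patients with controlled diabetes: {pct:.0f}%")  # -> 67%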

21 General Criteria of Good Indicators
Simple, clear and understandable – the indicator should be immediately understood as a measure of programme effectiveness
Useful – the indicator should be functional and practical
Valid – it should actually measure the phenomenon it is intended to measure
Specific – it should measure only the condition or event under observation and nothing else; there should be one indicator related explicitly to each activity
Reliable – it should produce the same result when used more than once to measure the same event, regardless of who completed the assessment, when, and under what conditions
Replicable – the indicator is not unique to one project or time frame

22 General Criteria of Good Indicators
Relevant – related to your work; decisions will be made based on the data collected, so the proposed indicators and associated data must be appropriate to the project rationale
Sensitive – it should reflect changes in the condition or event under observation over time
Operational – it should be measurable or quantifiable using tested definitions and reference standards
Affordable – it should impose reasonable measurement costs
Feasible – it should be possible to carry out using the existing or proposed data collection system

23 Summary
M&E ensures that programmes are implemented as designed and that funds are used appropriately and efficiently
M&E ensures that programmes are making a difference
The selection of appropriate indicators (relative to programme objectives) is critical to the success of M&E

24 Thank you

