
Monitoring & Evaluation in NIE Module 20




1 Monitoring & Evaluation in NIE Module 20

2 Learning objectives
Be familiar with the basic concepts and main characteristics of monitoring and evaluation
Understand the differences between various kinds of evaluations
Explain the different kinds of indicators
Describe the very basics of a log frame
Optional: Be familiar with the monitoring and evaluation of CMAM interventions

3

4 The project cycle (diagram): Disaster → Assessment → Programme design → Implementation (with ongoing monitoring) → Evaluation

5 Monitoring & Evaluation

6 M&E: a wasp nest? Outputs, effectiveness, performance, impact, outcomes, assessment, target, accountability, qualitative indicators, do no harm, inputs, timeliness, logframes, quantitative indicators, connectedness, efficiency, coverage, appropriateness

7 Definition: Monitoring
The systematic and continuous assessment of the progress of a piece of work over time.
To continuously measure progress against programme objectives and check on the relevance of the programme.
It involves collecting and analysing data/information.
It is NOT only about PROCESS.

8 Purpose of monitoring
To document progress and results of the project
To provide the necessary information to management for timely decision-making and corrective action (if necessary)
To promote accountability to all stakeholders of a project (beneficiaries, donors, etc.)

9 Information collected for monitoring must be:
Useful and relevant
Accurate
Regular
Acted upon
Shared
Timely

10 Monitoring is an implicit part of an evaluation. It is often done badly:
– Routine data collection not done routinely!
– Data collection done poorly
– Information not processed/used in a timely manner
– Focus only on process indicators, neglecting (preliminary) impact

11 Can you give examples of monitoring in your current work? For example:
- From a CMAM programme?
- From a micronutrient programme?
- From a general food distribution?
- From a health programme?
- From a livelihoods programme?

12 Monitoring
Monitoring compares intentions with results.
It guides project revisions, verifies targeting criteria and whether assistance is reaching the people intended.
It checks the relevance of the project to the needs.
It integrates and responds to community feedback.
It enhances transparency and accountability.

13 Difference between monitoring of process/activities and monitoring of impact/results

14 The project cycle (diagram, repeated): Disaster → Assessment → Programme design → Implementation (with ongoing monitoring) → Evaluation

15 Why would you do an evaluation of a programme?

16 Definition: Evaluation
The aim is to determine the relevance and fulfilment of objectives, as well as the efficiency, effectiveness, impact and sustainability of a project.
It involves the objective assessment of an ongoing or completed project/programme, its design, implementation and results.

17 There has been an increased focus on evaluation of humanitarian action as part of efforts to improve quality and standards

18 Evaluation
It aims to:
– Improve policy and practice
– Enhance accountability

19 Evaluations are done when / because:
– Monitoring highlights unexpected results
– More information is needed for decision-making
– Implementation problems or unmet needs are identified
– Issues of sustainability, cost-effectiveness or relevance arise
– Recommendations for actions to improve performance are needed
– Lessons learned are needed for future activities

20 Evaluations
Evaluation involves the same skills as assessment and analysis.
Evaluation should be done impartially and ideally by external staff.
Evaluation can also occur during (e.g. mid-term) and after implementation of the project.
One of the most important sources of information for evaluations is the data used for monitoring.

21 The OECD-DAC criteria
The Development Assistance Committee (DAC) of the Organisation for Economic Co-operation and Development (OECD) defines evaluation criteria that are currently at the heart of the evaluation of humanitarian action. The DAC criteria are designed to improve evaluation of humanitarian action.

22 Evaluation looks at:
Relevance/Appropriateness: Doing the right thing in the right way at the right time.
Connectedness (and coordination): Was there any replication, or were gaps left in programming due to a lack of coordination?
Coherence: Did the intervention make sense in the context of the emergency and the mandate of the implementing agency? Are there detrimental effects of the intervention in the long run?
Coverage: Who has been reached by the intervention, and where? (linked to effectiveness)
Efficiency: Were the results delivered in the least costly manner possible?
Effectiveness: To what extent has the intervention achieved its objectives?
Impact: Doing the right thing, changing the situation more profoundly and in the longer term.

24 Example: General Food Distribution
Relevance/Appropriateness: Doing the right thing in the right way at the right time. Was food aid the right thing to do, rather than cash?
Connectedness: Are there detrimental effects of the intervention in the long run? Did food aid lower food prices? Did local farmers suffer from that?

25 Coverage: Who has been reached by the intervention, and where? (linked to effectiveness) Were those that needed food aid indeed reached?
Efficiency: Were the results delivered in the least costly manner possible? Was it right to import the food, or should it have been purchased locally? Could the results have been achieved with fewer (financial) resources? Food aid was provided; would cash have been more cost-effective?

26 Effectiveness: To what extent has the intervention achieved its objectives? Did food aid prevent undernutrition? (assuming that was an objective)
Impact: Doing the right thing, changing the situation more profoundly and in the longer term. Did the food aid prevent people from becoming displaced? Did people become dependent on food aid?

27 Impact:
- Very much related to the general goal of the project
- Measures both positive and negative long-term effects, as well as intended and unintended effects. GFD: did it lower general food prices, with long-term economic consequences for certain groups? Were people that received food aid attacked because of the ration (and therefore more deaths)?
- There is a need for baseline information to measure results against!

28 To evaluate projects well is a real skill! And you often need a team…

29 M&E in emergencies? Any project without Monitoring and/or Evaluation is a BAD project

30 Help!

31 The M and the E…
Primary use of the data: Monitoring – project management; Evaluation – accountability, planning (future projects)
Frequency of data collection: Monitoring – ongoing; Evaluation – periodic
Type of data collected: Monitoring – info on process and effects; Evaluation – info on effects
Who collects the data: Monitoring – project staff; Evaluation – external evaluators

32 Evaluations in humanitarian contexts
Single-agency evaluation (during/after a project). There is an increasing move towards:
– Inter-agency evaluations: the objective is to evaluate responses as a whole and the links between interventions
– Real-time evaluations: carried out 8 to 12 weeks after the onset of an emergency and processed within one month of data collection

33 Real-time evaluations (1)
WHY? They arose from concern that evaluations came too late to affect the operations they were assessing.
Various groups of organizations aim to undertake real-time evaluations.
They have the same purpose as any other evaluation.
Common characteristics:
– Take place during the course of implementation
– In a short time frame

34 Real-time evaluations (2)
It is an improvement-oriented review; it can be regarded more as an internal function than an external process.
It helps to bring about changes in the programme, rather than just reflecting on its quality after the event.
A real-time evaluator is a facilitator, working with staff to find creative solutions to any difficulties they encounter.
It helps to get closer to the people affected by crisis, and this helps to improve accountability to beneficiaries.

35 Monitoring & Evaluation systems
Main components of M&E systems:
– M&E work plan for data collection and analysis, covering baseline and ongoing M&E
– Logical framework, including indicators and means/sources of verification
– Reporting flows and formats
– Feedback and review plan
– Capacity building design
– Implementation schedule
– Human resources and budget

36 Examples of data collection methods for M&E
Quantitative methods | Qualitative methods
Administering structured oral or written interviews with closed questions | Semi-structured interviews, e.g. key informant
Population-based surveys | Focus group discussion
Reviewing medical and financial records | Observing
Completing forms and tally sheets | Case studies
Direct measurement (anthropometry, biochemical analysis, clinical signs) | Mapping, ranking, scoring
Lot quality assessment | Problem sorting, ranking

37 Focus on INDICATORS

38 Indicators
An indicator is a measure that is used to show change in a situation, or the progress in/results of an activity, project, or programme.
Indicators:
– enable us to be watchdogs;
– are essential instruments for monitoring and evaluation;
– are objectively verifiable measurements.

39 What are the qualities of a good indicator?
Specific, Measurable, Achievable, Relevant, Time-bound
And there is also the SMART initiative: the Standardised Monitoring and Assessment in Relief and Transition initiative, an interagency initiative to improve the M&E of humanitarian assistance.
The Sphere Project provides the most accepted indicators for nutrition and food security interventions in emergencies: see Module 21.

40 Types of indicators
Indicators exist in many different forms: direct, global/standardised, locally developed, indirect/proxy, qualitative, quantitative.
Direct indicators correspond precisely to results at any performance level. Indirect or "proxy" indicators demonstrate the change or results if direct measures are not feasible.
Indicators are usually quantitative measures, expressed as a percentage or share, as a rate, etc. Indicators may also be qualitative observations.
Standardised global indicators are comparable in all settings. Other indicators tend to be context specific and must be developed locally.

41 (Diagram) Input – Output – Outcome – Impact

42 (Diagram) Input: related to activities/resources. Output: related to outputs. Outcome: related to objectives (or purposes). Impact: related to the goal.

43 (Diagram, with an example)
Input: related to activities/resources – nutritional education to mothers on complementary food
Output: related to outputs – X number of mothers know about good complementary food and how to prepare it
Outcome: related to objectives (or purposes) – % of young children getting appropriate complementary food
Impact: related to the goal – malnutrition rates amongst young children reduced

44

45 What is a log frame?
The logical framework or logframe is an analytical tool used to plan, monitor, and evaluate projects.
Victim of a log frame?

46 Log frames (diagram): Inputs, Outcome, Impact

47 Log frame hierarchy (diagram): Inputs, Output, Outcome, Impact

48 Other terms that can be found in a logframe:
The means of verification of progress towards achieving the indicators highlight the sources from which data is collected. The process of identifying the means of verification at this stage is useful, as discussions on where to find information or how to collect it often lead to reformulation of the indicator.
Assumptions are external factors or conditions that have the potential to influence the success of a programme. They may be factors outside the control of the programme. The achievement of a programme's aims depends on whether or not assumptions hold true or anticipated risks do not materialise.

49 Logical framework for M&E
Columns: Project description | Indicators | Source / means of verification | Assumptions / risks
Rows: Goal | Objectives / outcomes | Deliverable outputs | Activities
If adequate RESOURCES/INPUTS are provided, then ACTIVITIES can be conducted.
If adequate ACTIVITIES are conducted, then OUTPUTS/RESULTS can be produced.
If OUTPUTS/RESULTS are produced, then the OBJECTIVES are accomplished.
If the OBJECTIVES are accomplished, then this should contribute to the overall GOAL.
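
To make the matrix concrete, here is a minimal sketch in Python of a filled-in logframe for a hypothetical complementary-feeding education project (echoing the example on slide 43); every description, indicator and source in it is invented for illustration, not taken from the module.

    # Illustrative only: a hypothetical logframe for a complementary-feeding
    # education project, expressed as a simple data structure. The rows mirror
    # the matrix above: goal, objective/outcome, output, activities.
    logframe = {
        "goal": {
            "description": "Malnutrition rates amongst young children reduced",
            "indicator": "Prevalence of wasting in children 6-23 months",
            "means_of_verification": "Repeated anthropometric surveys",
            "assumptions": "No new shock disrupts food access",
        },
        "objective": {
            "description": "Young children receive appropriate complementary food",
            "indicator": "% of children 6-23 months fed appropriate complementary food",
            "means_of_verification": "Household survey",
            "assumptions": "Caregivers can obtain the recommended foods",
        },
        "output": {
            "description": "Mothers know about good complementary food and its preparation",
            "indicator": "Number of mothers trained who pass a knowledge check",
            "means_of_verification": "Attendance lists and post-session quiz",
            "assumptions": "Trained mothers apply what they learned",
        },
        "activities": {
            "description": "Nutrition education sessions for mothers",
            "indicator": "Number of sessions held per site per month",
            "means_of_verification": "Session reports",
            "assumptions": "Staff and materials are available on time",
        },
    }

    # Reading the rows bottom-up reproduces the if-then logic of the matrix.
    for level in ("activities", "output", "objective", "goal"):
        row = logframe[level]
        print(f"{level.upper()}: {row['description']} | indicator: {row['indicator']}")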

50 Activities versus results
Completed activities are not results: e.g. a hospital was built, but that does not mean that injured and sick people can be treated in the hospital; maybe the hospital has no water and the beds have not been delivered.
Results are the actual benefits or effects of completed activities: e.g. injured and sick people have access to a fully functional health facility.

51 Log frames

52 Example

53 Another Example…

54

55 Key messages
The monitoring of nutrition interventions in emergencies is an integral part of saving lives and maintaining the nutrition status of the affected population. Successful monitoring systems allow for improvements in interventions in real time.
Evaluations are important tools for learning, assessing interventions, and comparing the costs of interventions and their impact. Essential evaluation parameters are: effectiveness, efficiency, relevance/appropriateness, impact and coverage.
Involving communities in M&E places the affected population at the heart of the response, provides the opportunity for their views and perceptions to be incorporated into programme decisions, and increases accountability towards them.
A common mistake in designing M&E systems is creating a framework which is overly complex. Always make an M&E system practical and doable.
The logical framework or logframe is an analytical tool used to plan, monitor, and evaluate projects.

56 Monitoring for CMAM interventions
Types of monitoring, e.g.
– Individual case monitoring
– Programme / activities monitoring

57 Individual monitoring for CMAM
It is the basic follow-up of cases in SFP / OTP / SC services:
– Anthropometric / clinical assessment
Tools for individual case follow-up include:
– Medical / nutrition and action protocols
– Individual follow-up card
– Referral forms
– …

58 Objectives of monitoring CMAM activities
Assess service performance / outcomes
Identify further needs
– Support decision-making for quality improvement (staffing, training, resources, site location, …)
Contribute to the analysis of the general situation
– Assessing the nutrition trends in the area

59 Methods and tools for monitoring CMAM interventions
Monthly / weekly reporting: reporting needs to be done per site (service unit) and compiled per area (district, …) up to the national level
Routine supervision
External evaluations
Coverage surveys are one of the most important tools for the evaluation of CMAM interventions

60 Routine data collection for monitoring CMAM interventions
Routine data are collected for specified time periods:
– Number of new admissions
– Number of discharges (total and by category: cured, died, defaulted, non-recovered)
– Number of cases in treatment (number of beneficiaries registered at the end of the reporting period)
Data on admissions should be disaggregated by gender.
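
As a sketch of how these routine site figures can be compiled per area, as described on slide 59, the following Python snippet sums hypothetical monthly OTP site reports into a district report; the site names and counts are invented.

    from collections import Counter

    # Hypothetical monthly OTP site reports for one district (all numbers invented).
    site_reports = [
        {"site": "Site A", "new_admissions": 42, "cured": 30, "died": 1,
         "defaulted": 4, "non_recovered": 2, "in_treatment_end_of_month": 55},
        {"site": "Site B", "new_admissions": 27, "cured": 18, "died": 0,
         "defaulted": 6, "non_recovered": 1, "in_treatment_end_of_month": 34},
    ]

    # Compile the district report by summing each routine field across sites;
    # the same step repeats from district up to national level.
    district_report = Counter()
    for report in site_reports:
        for field, value in report.items():
            if field != "site":
                district_report[field] += value

    print(dict(district_report))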

61 Admission categories and criteria (children 6–59 months)
New admissions (children 6–59 months, or > 60 months but < 130 cm height): MUAC < 11.5 cm, or W/H < -3 Z-scores (WHO) or < 70% of median (NCHS), or bilateral pitting oedema grade + or ++, and the child is alert, has appetite, and is clinically well
Other new admissions: carer refuses inpatient care despite advice
Returned defaulter: child has previously defaulted and has returned to OTP (the child must meet admission criteria to be re-admitted)
Readmissions/relapses: a child treated in OTP until discharge after meeting discharge criteria, but who relapses and therefore needs readmission
Transfer from inpatient care (SC): from inpatient care after stabilisation treatment
Transfer from OTP: patients moved in from another OTP site

62 Discharge categories and criteria (children 6–59 months)
Cured: MUAC > 12.5 cm and WFH > -2 Z-scores and no oedema for two consecutive visits, and the child is clinically well
Defaulted: absent for 3 consecutive visits
Died: died during the time registered in OTP
Non-cured: has not reached discharge criteria within four months of treatment; link the child to other programmes, e.g. SFP, IYCF, GMP, targeted food distributions
Transferred to SC: condition has deteriorated and requires inpatient care
Transfer to other OTP: child has been transferred to another OTP site

63 Monitoring of CMAM interventions: key indicators for SAM (Sphere)
The proportion of discharges from therapeutic care should be:
– Recovered > 75 %
– Deaths < 10 %
– Defaulters < 15 %
They are primarily applicable to the 6–59 month age group, although others may be part of the programme.
Distance: > 90 % of the target population is within less than one day's return walk (including time for treatment) of the service / site.
Coverage is > 50 % in rural areas, > 70 % in urban areas and > 90 % in camp situations.

64 Monitoring of CMAM interventions: key indicators for MAM (Sphere)
The proportion of discharges from targeted SFP should be:
– Recovered > 75 %
– Deaths < 3 %
– Defaulters < 15 %
They are primarily applicable to the 6–59 month age group, although others may be part of the programme.
Distance: > 90 % of the target population is within less than one day's return walk (including time for treatment) of the programme site for dry-ration SFP, and no more than one hour's walk for on-site wet SFP.
Coverage is > 50 % in rural areas, > 70 % in urban areas and > 90 % in camp situations.
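
A minimal Python sketch, using invented exit counts, of how the discharge proportions above are typically derived from routine exit data and compared with the Sphere reference values for SAM quoted on slide 63:

    # Invented OTP exit counts for one reporting period, checked against the
    # Sphere reference values for SAM quoted above
    # (recovered > 75 %, deaths < 10 %, defaulters < 15 %).
    exits = {"cured": 180, "died": 6, "defaulted": 20, "non_recovered": 10}
    total_exits = sum(exits.values())

    cure_rate = exits["cured"] / total_exits * 100
    death_rate = exits["died"] / total_exits * 100
    defaulter_rate = exits["defaulted"] / total_exits * 100

    print(f"Recovered: {cure_rate:.1f} % (Sphere: > 75 %)")
    print(f"Deaths: {death_rate:.1f} % (Sphere: < 10 %)")
    print(f"Defaulters: {defaulter_rate:.1f} % (Sphere: < 15 %)")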

65 Additional data for monitoring CMAM interventions
Derived from routine monitoring and other sources:
– Average length of stay
– Average weight gain
– Relapse rate
– Distribution of admissions per type, per age, per origin, …
– Causes of death
– Reasons for defaulting
– Investigation of non-recovery cases
Sources of data: registration books, individual follow-up charts, interviews and focus group discussions, observation, home visits, …
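
As an illustration of two of these derived indicators, the sketch below (Python, with invented follow-up records) computes average length of stay and average weight gain; note that protocols differ on whether weight gain in g/kg/day is calculated from the admission weight or from the lowest recorded weight, so the convention used here is an assumption.

    # Invented discharge records taken from individual follow-up cards.
    records = [
        {"admission_weight_kg": 6.0, "discharge_weight_kg": 7.1, "days_in_treatment": 49},
        {"admission_weight_kg": 5.4, "discharge_weight_kg": 6.6, "days_in_treatment": 56},
        {"admission_weight_kg": 6.8, "discharge_weight_kg": 7.9, "days_in_treatment": 42},
    ]

    # Average length of stay: mean number of days from admission to discharge.
    avg_length_of_stay = sum(r["days_in_treatment"] for r in records) / len(records)

    # Average weight gain in g/kg/day, computed here from the admission weight
    # (an assumption: some protocols use the lowest recorded weight instead).
    gains = [
        (r["discharge_weight_kg"] - r["admission_weight_kg"]) * 1000
        / (r["admission_weight_kg"] * r["days_in_treatment"])
        for r in records
    ]
    avg_weight_gain = sum(gains) / len(gains)

    print(f"Average length of stay: {avg_length_of_stay:.0f} days")
    print(f"Average weight gain: {avg_weight_gain:.1f} g/kg/day")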

66 M&E for CMAM interventions: supervision
Supportive supervision visits to sites are designed to ensure / improve the quality of care offered by:
– Identifying weaknesses in the performance of activities, taking immediate action and applying shared corrective solutions
– Strengthening the technical capacity of health workers and motivating staff through encouragement of good practices
Supervisors and managers ensure that the performance of activities and the organization of the services meet quality standards.

67 Evaluation of SAM management interventions
Effectiveness: programme performance, with a strong focus on coverage
Appropriateness: e.g. distribution and opening times of treatment sites
Connectedness: relates to the links with the health system and shows levels of possible integration
Cost-effectiveness has also been measured with various methods, showing large differences between contexts and different approaches.

68 M&E of CMAM interventions: population-level assessments
Community-level assessment can be done through:
– Repeated anthropometric surveys
– Programme coverage

69 Evaluation of coverage for CMAM
Coverage is one of the most important elements behind the success of the CMAM approach.
– It is measured through studies using two main approaches:
Centric systematic area sampling (CSAS)
Semi-Quantitative Evaluation of Access and Coverage (SQUEAC)
Coverage should reach at least 90% of severe cases in camp situations, 70% in urban settings, and 50% in rural settings (Sphere standards).
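
A simplified Python sketch of the point-coverage ratio that such surveys estimate, with invented case counts; real CSAS and SQUEAC analyses involve far more than this single calculation:

    # Simplified point coverage: of the SAM cases found during a survey, what
    # share is already enrolled in the programme? (Counts are invented; CSAS and
    # SQUEAC use more elaborate sampling and analysis than this.)
    cases_in_programme = 36
    cases_not_in_programme = 24
    coverage = cases_in_programme / (cases_in_programme + cases_not_in_programme) * 100
    print(f"Point coverage: {coverage:.0f} %")

    # Sphere reference points quoted above: > 50 % rural, > 70 % urban, > 90 % camps.
    sphere_minimum = {"rural": 50, "urban": 70, "camp": 90}
    for setting, minimum in sphere_minimum.items():
        status = "meets" if coverage > minimum else "is below"
        print(f"In a {setting} setting this {status} the {minimum} % reference.")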

70 Evaluation of the management of MAM interventions
Same criteria as for all other interventions (relevance, efficiency, etc.)
SFP evaluations are rarely shared, but the evidence shows that defaulting and non-response are very common.
There is a need to evaluate the use of Ready-to-Use Supplementary Food products in terms of efficiency: weight gain, effect on defaulting, ease of use for beneficiaries, etc.

