Monitoring & Evaluation in NIE

Presentation transcript:

Monitoring & Evaluation in NIE, Module 20. This session is designed for a duration of approx. 90 minutes. Note: this presentation focuses on the M&E of programmes, NOT of situations (early warning systems, etc.). 27-Mar-17

Learning objectives
- Be familiar with the basic concepts and main characteristics of monitoring and evaluation
- Understand the differences between the various kinds of evaluations
- Explain the different kinds of indicators
- Describe the very basics of a 'log frame'
- Optional: be familiar with the monitoring and evaluation of CMAM interventions

Has anyone been involved in Monitoring & Evaluation? How?

The project cycle (diagram): Disaster → Assessment → Programme design → Implementation, with ongoing Monitoring, then Evaluation feeding back into Assessment. Source: IFRC 3Ps/Sphere.

Monitoring & Evaluation: what is M&E? Source: UCL + Makerere University School of Public Health, Uganda. Through group debate the first key words related to M&E will appear; later the formal definitions will be presented. The following slide might capture some of the key words that participants mention.

M&E: a wasp nest of terms? Inputs, outputs, outcomes, impact, assessment, performance, efficiency, effectiveness, appropriateness, coverage, connectedness, accountability, timeliness, targets, logframes, quantitative indicators, qualitative indicators, do no harm. Source: MTV for UCL + Makerere University School of Public Health, Uganda. A lot of difficult terms are related to M&E; this session tries to deal with them and aims to make M&E more understandable.

Definition: Monitoring. 'The systematic and continuous assessment of the progress of a piece of work over time…'; 'to continuously measure progress against programme objectives and check on the relevance of the programme'. It involves collecting and analysing data/information. It is NOT only about PROCESS. There are various definitions, but they all aim at the same thing: we measure progress, and changes need to be made if progress is not being made. Obviously 'progress' has everything to do with what the programme originally aimed for, i.e. its objectives. Are we in the process of reaching them? If not, we need to make adjustments to ensure we reach these objectives eventually. 'The systematic and continuous assessment of the progress of a piece of work over time… It is a basic and universal management tool for identifying the strengths and weaknesses in a programme. Its purpose is to help all the people involved make appropriate and timely decisions that will improve the quality of the work.' (Gosling and Edwards, 1995, cited in the ALNAP Review of Humanitarian Action 2003)

Purpose of monitoring:
- to document the progress and results of a project
- to provide the necessary information to management for timely decision-taking and corrective action (if necessary)
- to promote accountability* to all stakeholders of a project (beneficiaries, donors, etc.)
Source: MTV for UCL + Makerere University School of Public Health, Uganda. *accountable = being responsible to someone for some action. Monitoring and evaluation, though two distinct activities, are very closely linked. Monitoring is a routine activity, with data collected on a regular (e.g. daily or monthly) basis; its basic purpose is to keep track of programme activities and improve the efficiency of interventions. Evaluation tends to be episodic, undertaken at critical points in a project cycle, and its basic purpose is more to do with improving effectiveness and informing future programming. Monitoring data provides essential inputs into more episodic evaluation. Monitoring data may highlight specific issues in the programme's implementation that require deeper investigation through evaluation to be resolved. In turn, evaluation can help to identify what needs to be monitored in the future. In a well-designed M&E system, data routinely collected through monitoring activities can contribute greatly towards evaluation.

Information collected for monitoring must be: useful and relevant; accurate; regular; acted upon; shared; timely. Source: MTV for UCL + Makerere University School of Public Health, Uganda

Monitoring is an implicit part of an evaluation. It is often done badly:
- routine data collection not done routinely!
- data collection done poorly
- information not processed/used in a timely manner
- focus only on process indicators, neglecting (lack of) preliminary impact
Source: MTV for UCL + Makerere University School of Public Health, Uganda

Can you give examples of monitoring in your current work? For example, from a CMAM programme? A micronutrient programme? A general food distribution? A health programme? A livelihoods programme?

Monitoring compares intentions with results. It guides project revisions, verifies targeting criteria and whether assistance is reaching the people intended. It checks the relevance of the project to the needs. It integrates and responds to community feedback. It enhances transparency and accountability.

Monitoring of process/activities versus monitoring of impact/results: both are important and fulfil their own specific role. Monitoring of process/activities looks at whether programme implementation is on track and whether the planned activities are taking place; monitoring of results looks at whether the intended impact is on track to be reached.

The project cycle (diagram, repeated): Disaster → Assessment → Programme design → Implementation, with ongoing Monitoring, then Evaluation. Source: IFRC 3Ps/Sphere.

Why would you do an evaluation of a programme?

Definition: Evaluation. It involves the objective assessment of an ongoing or completed project/programme: its design, implementation and results. The aim is to determine the relevance and fulfilment of objectives, as well as the efficiency, effectiveness, impact and sustainability of the project. These are all terms we will look at in more detail. Evaluations are also very important to identify LESSONS LEARNT.

There has been an increased focus on the evaluation of humanitarian action as part of efforts to improve quality and standards. Quality and standards will be presented in more detail in Module 21.

Evaluation aims to: improve policy and practice; enhance accountability. Source: MTV for UCL + Makerere University School of Public Health, Uganda

Evaluations are done when / because:
- monitoring highlights unexpected results
- more information is needed for decision making
- implementation problems or unmet needs are identified
- issues of sustainability, cost-effectiveness or relevance arise
- recommendations for actions to improve performance are needed
- lesson learning is necessary for future activities
Additionally, the donor (often one or various governments) that funds the project is accountable to its taxpayers: the donor wants to know what exactly is done with the money and whether the objectives were reached, and if not, why not.

Evaluations. Evaluation involves the same skills as assessment and analysis. Evaluation should be done impartially and ideally by external staff. Evaluation can also occur during (e.g. mid-term) as well as after implementation of the project. Why impartially and by external staff? People that have been involved in or are responsible for the project might not be objective towards the data collected, the results, etc. They might have a stake and therefore lose their objectivity: some might defend negative findings, or, the opposite, be extra negative towards their project because they did not agree with certain decisions in the past. This does not enable a neutral and objective process of learning. External people have a fresh eye on the findings and are likely to be more impartial. One of the most important sources of information for evaluations is the data used for monitoring.

The OECD-DAC criteria. The evaluation criteria of the Development Assistance Committee (DAC) of the Organisation for Economic Co-operation and Development (OECD) are currently at the heart of the evaluation of humanitarian action, and are designed to improve it. Source: MTV for UCL + Makerere University School of Public Health, Uganda

Evaluation looks at:
- Relevance/appropriateness: doing the right thing in the right way at the right time.
- Connectedness (and coordination): was there any replication, or were gaps left in programming, due to a lack of coordination? Are there detrimental effects of the intervention in the long run?
- Coherence: did the intervention make sense in the context of the emergency and the mandate of the implementing agency?
- Coverage: who has been reached by the intervention, and where? (linked to effectiveness)
- Efficiency: were the results delivered in the least costly manner possible?
- Effectiveness: to what extent has the intervention achieved its objectives?
- Impact: doing the right thing, changing the situation more profoundly and in the longer term.
Source: MTV for UCL + Makerere University School of Public Health, Uganda


Example: general food distribution.
- Relevance/appropriateness: doing the right thing in the right way at the right time. Was food aid the right thing to do, rather than cash?
- Connectedness: are there detrimental effects of the intervention in the long run? Did food aid lower food prices, and did local farmers suffer from that?
Source: UCL + Makerere University School of Public Health, Uganda. Relevance/appropriateness means assessing whether the project is in line with local needs and priorities, and tailoring activities to local needs, increasing ownership, accountability and cost-effectiveness. It assesses whether the type of intervention was the 'best fit' for the situation.

- Coverage: who has been reached by the intervention, and where? (linked to effectiveness) Were those that needed food aid indeed reached?
- Efficiency: were the results delivered in the least costly manner possible? Was it right to import the food, or should it have been purchased locally? Could the results have been achieved with fewer (financial) resources? Food aid was provided; would cash have been more cost-effective?
Source: UCL + Makerere University School of Public Health, Uganda. Coverage means reaching major population groups facing life-threatening suffering wherever they are. It can target a geographical area (a flooded area) or specific groups (malnourished children); in all cases, baseline information is required on the area or population targeted, and estimates are often used to compensate for a lack of baseline. Efficiency concerns the outputs achieved by maximizing resources. There are discussions and debates around efficiency, as decisions taken cannot always aim at efficiency, mainly in the early stages of an emergency. It looks at the method employed for achieving a result: cash or food aid for the prevention of malnutrition? BP5 or local food? The need for interventions to be rapid often limits the possibility of focusing on efficiency in the early stages of emergencies.

- Effectiveness: to what extent has the intervention achieved its objectives? Did food aid avoid undernutrition (assuming that was an objective)?
- Impact: doing the right thing, changing the situation more profoundly and in the longer term. Did the food aid prevent people from becoming displaced? Did the people become dependent on food aid?
Source: UCL + Makerere University School of Public Health, Uganda. Effectiveness is the extent to which an activity achieves its purpose. For supplementary feeding programmes (for example), effectiveness is measured through exit indicators such as cure, death and defaulter rates. Impact evaluation in emergency situations has numerous difficulties, as many factors can interfere with the 'normal' running of any intervention, and the cause-and-effect chain varies for different types of intervention; the same interventions can have very different impacts in different settings or contexts. Measuring impact requires adequate baseline information (often absent), but even where available, it is necessary to understand which causal factors have influenced the measured impact.

Impact: very much related to the general goal of the project. It measures both positive and negative long-term effects, as well as intended and unintended effects. GFD: did it lower general food prices, with long-term economic consequences for certain groups? Were people that received food aid attacked because of the ration (and therefore more deaths)? There is a need for baseline information to measure results against!

To evaluate projects well is a real skill! And you often need a team… Source: UCL + Makerere University School of Public Health, Uganda

M&E in emergencies? YES. Any project without monitoring and/or evaluation is a BAD project. Source: MTV for UCL + Makerere University School of Public Health, Uganda

Help! Source: MTV for UCL + Makerere University School of Public Health, Uganda. This slide aims to 'monitor' how the participants are doing on M&E. Do they feel they belong to the category on the left, i.e. they understand most of the issues dealt with so far? Or are many people feeling lost in the terminology? If so, try to find out what is difficult and refer back to previous slides.

The "M" and the "E"…

                               Monitoring                    Evaluation
Primary use of the data        Project management            Accountability; planning (future projects)
Frequency of data collection   Ongoing                       Periodic
Type of data collected         Info on process and effects   Info on effects
Who collects the data          Project staff                 External evaluators

Source: UCL + Makerere University School of Public Health, Uganda

Evaluations in a humanitarian context. Many evaluations are of a single agency's work (during/after a project). There is an increasing move towards:
- inter-agency evaluations: the objective is to evaluate responses as a whole and the links between interventions
- real-time evaluations: carried out 8 to 12 weeks after the onset of an emergency and processed within one month of data collection
Source: MTV for UCL + Makerere University School of Public Health, Uganda. Many evaluations are done on work conducted by a single agency, but more and more one also sees evaluations that cover interventions by various agencies at the same time. In this way the whole response can be evaluated, rather than just one agency's work.

Real-time evaluations (1). WHY? They arose from the concern that evaluations came too late to affect the operations they were assessing. Various groups of organizations aim to undertake real-time evaluations. They have the same purpose as any other evaluation. Common characteristics: they take place during the course of implementation, in a short time frame. Source: MTV for UCL + Makerere University School of Public Health, Uganda

Real-time evaluations (2). A real-time evaluation is an improvement-oriented review; it can be regarded more as an internal function than an external process. It helps to bring about changes in the programme, rather than just reflecting on its quality after the event. A real-time 'evaluator' is a 'facilitator', working with staff to find creative solutions to any difficulties they encounter. It helps to get closer to the people affected by crisis, which makes it possible to improve accountability to 'beneficiaries'. Source: MTV for UCL + Makerere University School of Public Health, Uganda

Monitoring & Evaluation systems. The main components of an M&E system are:
- an M&E work plan for data collection and analysis, covering the baseline and ongoing M&E
- a logical framework, including indicators and means/sources of verification
- reporting flows and formats
- a feedback and review plan
- a capacity-building design
- an implementation schedule
- human resources and budget

Examples of data collection methods for M&E

Quantitative methods:
- administering structured oral or written interviews with closed questions
- population-based surveys
- reviewing medical and financial records
- completing forms and tally sheets
- direct measurement (anthropometry, biochemical analysis, clinical signs)
- lot quality assessment

Qualitative methods:
- semi-structured interviews, e.g. with key informants
- focus group discussions
- observation
- case studies
- mapping, ranking, scoring
- problem sorting, ranking

Focus on INDICATORS Source: UCL + Makerere University School of Public Health, Uganda

Indicators. An indicator is a measure that is used to show change in a situation, or the progress in / results of an activity, project or programme. Indicators: enable us to be "watchdogs"; are essential instruments for monitoring and evaluation; are objectively verifiable measurements. Source: UCL + Makerere University School of Public Health, Uganda

What are the qualities of a good indicator? Specific, Measurable, Achievable, Relevant, Time-bound. The Sphere Project provides the most accepted indicators for nutrition and food security interventions in emergencies: see Module 21. Source: MTV for UCL + Makerere University School of Public Health, Uganda. A good indicator is SMART:
- S for Specific: it specifies the magnitude of the attribute we are measuring, in a particular time frame and for a particular population. In other words, it measures what it is supposed to measure: blood retinol measures vitamin A status (not iron status, not any other vitamin). Can you think of another word starting with "S" that describes a good indicator? Good: SIMPLE, clearly and precisely defined.
- M for Measurable: measurable indicators are objective; we are able to quantify them. You can measure temperature, count how many correct answers students get on a test, or observe whether someone washes their hands before handling food. Can you think of other examples of indicators and how to measure them?
- A for Achievable: achievable indicators are ones we can actually obtain: you need to have the resources and capacities to collect, store and process the information for that indicator.
- R for Relevant: relevant to your project and your situation.
- T for Time-bound: always state the time period in which the indicator was measured. The beginning and end of projects are commonly used time-marks; seasons and seasonality too: the amount of grain produced in the month of July, total sales in the pre-holidays season, malnutrition rates in the lean (or hungry) season preceding the harvest. Can you think of other examples of indicators that are time-bound?
SMART Initiative[1]. The Standardised Monitoring and Assessment in Relief and Transition (SMART) Initiative is an interagency initiative, begun in 2002, to improve the M&E of humanitarian assistance interventions through:
- the development of standardised methodologies for determining comparative needs based on nutritional status, mortality rate and food security
- establishing comprehensive, collaborative systems to ensure reliable data is used for decision-making and reporting
A Standardised Training Package (STP) for the SMART methodology has recently been released. [1] More information available at http://www.smartmethodology.org/ (Note: besides SMART indicators there is also the SMART initiative, an interagency initiative to improve the M&E of humanitarian assistance.)

Types of indicators. Indicators exist in many different forms. Examples?
- Direct: direct indicators correspond precisely to results at any performance level (e.g. the number of children with acute malnutrition).
- Indirect or "proxy": proxy indicators demonstrate the change or results when direct measures are not feasible (e.g. diet composition as a proxy for nutritional status).
- Quantitative: indicators are usually quantitative measures, expressed as a percentage or share, as a rate, etc.
- Qualitative: indicators may also be qualitative observations.
- Global/standardised versus locally developed: standardised global indicators are comparable in all settings; other indicators tend to be context-specific and must be developed locally.
Source: UCL + Makerere University School of Public Health, Uganda

The indicator hierarchy (diagram): Input → Output → Outcome → Impact. Source: UCL + Makerere University School of Public Health, Uganda

- Impact: related to the Goal
- Outcome: related to the Objectives (or Purposes)
- Output: related to the Outputs
- Input: related to the Activities/Resources
Source: UCL + Makerere University School of Public Health, Uganda

Example:
- Impact (related to the Goal): malnutrition rates amongst young children reduced
- Outcome (related to the Objectives/Purposes): % of young children getting appropriate complementary food
- Output (related to the Outputs): X number of mothers know about good complementary food and how to prepare it
- Input (related to the Activities/Resources): nutritional education to mothers on complementary food
Source: UCL + Makerere University School of Public Health, Uganda

All the previous terms, such as impact, outcome, etc can be put in this ‘tree’. It is just a different way of putting them together.

What is a log frame? The logical framework or logframe is an analytical tool used to plan, monitor and evaluate projects. It derives its name from the logical linkages set out by the planner(s) to connect a project's means with its ends. (Cartoon: victim of a log frame?)

Log frames (diagram). Log frames appear in different shapes and sometimes under different names, but overall they follow the same structure: inputs leading to outputs, outcomes and impact.

…and they can be put in a 'tree' like this: one Impact at the top, supported by one or more Outcomes, each produced by Outputs, which in turn require Inputs. Source: UCL + Makerere University School of Public Health, Uganda

Other terms that can be found in a logframe: The means of verification of progress towards achieving the indicators highlight the sources from which data is collected. Identifying the means of verification at this stage is useful, as discussions on where to find information or how to collect it often lead to reformulation of the indicator. Assumptions are external factors or conditions that have the potential to influence the success of a programme; they may be factors outside the control of the programme. The achievement of a programme's aims depends on whether or not the assumptions hold true and anticipated risks do not materialise. This slide is optional and should only be shown if the main issues on inputs, outputs, outcomes and impact are well understood.

Logical framework for M&E. The logframe matrix has four columns (project description, indicators, source/means of verification, assumptions/risks) and four rows: goal, objectives/outcomes, deliverable outputs, activities. The logic reads upwards:
- if adequate RESOURCES/INPUTS are provided, then activities can be conducted
- if adequate ACTIVITIES are conducted, then OUTPUTS/RESULTS can be produced
- if OUTPUTS/RESULTS are produced, then the OBJECTIVES are accomplished
- if the OBJECTIVES are achieved, then this should contribute to the overall GOAL
This is another way of showing a log frame.
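As an illustration only, the matrix above can be sketched as a simple data structure, one record per row. The field names and example entries here are assumptions of mine, not from the source or from any standard logframe format:

```python
# Illustrative logframe representation; column names are hypothetical, not a standard
COLUMNS = ("description", "indicators", "verification", "assumptions")

def logframe_row(description, indicators, verification, assumptions):
    """Build one row of the logframe matrix (goal, objectives, outputs or activities)."""
    return dict(zip(COLUMNS, (description, indicators, verification, assumptions)))

logframe = {
    "goal": logframe_row(
        "Malnutrition rates amongst young children reduced",
        ["prevalence of acute malnutrition"], ["nutrition survey"], []),
    "objectives": logframe_row(
        "Young children receive appropriate complementary food",
        ["% of children with an adequate diet"], ["household survey"],
        ["food prices remain stable"]),
    "outputs": logframe_row(
        "Mothers know about good complementary food",
        ["number of mothers trained"], ["training records"],
        ["mothers can attend sessions"]),
    "activities": logframe_row(
        "Nutritional education sessions for mothers",
        ["sessions held per month"], ["activity reports"],
        ["staff and materials available"]),
}
```

Reading the rows bottom-up reproduces the if-then logic of the slide: activities produce outputs, outputs achieve the objectives, and the objectives contribute to the goal.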

Activities versus results. Completed activities are not results: e.g. "a hospital was built" does not mean that injured and sick people can be treated in the hospital; maybe the hospital has no water and the beds have not been delivered. Results are the actual benefits or effects of completed activities: e.g. injured and sick people have access to a fully functional health facility.

Log frames

Example. More examples to illustrate the aforementioned theory: this one shows vitamin A supplementation, but in reversed order, starting with input and then working its way down to impact.

Another example…

Source: UCL + Makerere University School of Public Health, Uganda Time for questions, discussion, recap, etc

Key messages
- The monitoring of nutrition interventions in emergencies is an integral part of saving lives and maintaining the nutrition status of the affected population. Successful monitoring systems allow interventions to be improved in 'real time'.
- Evaluations are important tools for learning, assessing interventions, and comparing the costs of interventions and their impact. Essential evaluation parameters are: effectiveness, efficiency, relevance/appropriateness, impact and coverage.
- Involving communities in M&E places the affected population at the heart of the response, provides the opportunity for their views and perceptions to be incorporated into programme decisions, and increases accountability towards them.
- A common mistake in designing M&E systems is creating a framework that is overly complex. Always make an M&E system practical and doable.
- The logical framework or logframe is an analytical tool used to plan, monitor and evaluate projects.

Monitoring for CMAM interventions
Types of monitoring, e.g. individual case monitoring and programme / activity monitoring. The following slides give more information on M&E in CMAM programmes. If an extended session on this topic is requested, this part can be used. 27-Mar-17

Individual monitoring for CMAM
This is the basic follow-up of cases in SFP / OTP / SC services: anthropometric / clinical assessment. Tools for individual case follow-up include: medical / nutrition and action protocols; individual follow-up cards; referral forms; …

Objectives of monitoring CMAM activities
Assess service performance / outcomes. Identify further needs. Support decision-making for quality improvement (staffing, training, resources, site location, …). Contribute to the analysis of the general situation by assessing nutrition trends in the area.

Methods and tools for monitoring CMAM interventions
Monthly / weekly reporting: reporting is done per site (service unit) and compiled per area (district, …) up to the national level. Routine supervision. External evaluations. Coverage surveys are one of the most important tools for evaluating CMAM interventions.

Routine data collection for monitoring CMAM interventions
Routine data are collected for specified time periods: number of new admissions; number of discharges (total and by category: cured, died, defaulted, non-recovered); number of cases in treatment (beneficiaries registered at the end of the reporting period). Data on admissions should be disaggregated by gender.
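The routine counts above are linked by a simple balance that can be used as a consistency check when compiling site reports: cases in treatment at the end of the period should equal cases at the start, plus new admissions, minus total discharges. A minimal sketch (the field names and figures are illustrative, not taken from any specific reporting form):

```python
# Consistency check for a site's monthly CMAM report.

def check_balance(start, admissions, discharges_by_category, end_of_period):
    """discharges_by_category: counts of cured / died / defaulted / non-recovered."""
    total_discharges = sum(discharges_by_category.values())
    expected_end = start + admissions - total_discharges
    return expected_end == end_of_period

report = {
    "start": 120,
    "admissions": 45,
    "discharges": {"cured": 30, "died": 2, "defaulted": 5, "non_recovered": 3},
    "end": 125,
}
ok = check_balance(report["start"], report["admissions"],
                   report["discharges"], report["end"])
print(ok)  # True: 120 + 45 - 40 = 125
```

A compiled district report that fails this check usually points to missing site reports or miscounted discharge categories.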

Admission categories and criteria (children 6–59 months)
- New admissions (children 6–59 months, or > 60 months but < 130 cm height): MUAC < 11.5 cm, or W/H < -3 z-scores (WHO) or < 70 % of the median (NCHS), or bilateral pitting oedema grade + or ++; and the child is alert, has appetite and is clinically well.
- Other new admissions: carer refuses inpatient care despite advice.
- Returned defaulter: the child has previously defaulted and has returned to OTP (the child must meet admission criteria to be re-admitted).
- Readmission / relapse: the child was treated in OTP until discharge after meeting discharge criteria, but has relapsed and needs readmission.
- Transfer from inpatient care (SC): from inpatient care after stabilisation treatment.
- Transfer from OTP: patient moved in from another OTP site.
Ensure that the definitions are clear to everybody when monitoring is done.

Discharge categories and criteria (children 6–59 months)
- Cured: MUAC > 12.5 cm and WFH > -2 z-scores and no oedema for two consecutive visits; and the child is clinically well.
- Defaulted: absent for 3 consecutive visits.
- Died: died during the time registered in OTP.
- Non-cured: has not reached discharge criteria within four months of treatment. Link the child to other programmes, e.g. SFP, IYCF, GMP, targeted food distributions.
- Transferred to SC: condition has deteriorated and requires inpatient care.
- Transfer to other OTP: the child has been transferred to another OTP site.

Monitoring of CMAM interventions: key indicators for SAM (Sphere)
The proportions of discharges from therapeutic care should be: recovered > 75 %; deaths < 10 %; defaulters < 15 %. These indicators apply primarily to the 6–59 month age group, although other groups may be part of the programme. Distance: > 90 % of the target population is within less than one day's return walk (including time for treatment) of the service / site. Coverage is > 50 % in rural areas, > 70 % in urban areas and > 90 % in camp situations.

Monitoring of CMAM interventions: key indicators for MAM (Sphere)
The proportions of discharges from targeted SFP should be: recovered > 75 %; deaths < 3 %; defaulters < 15 %. These indicators apply primarily to the 6–59 month age group, although other groups may be part of the programme. Distance: > 90 % of the target population is within less than one day's return walk (including time for treatment) of the programme site for dry-ration SFP, and no more than one hour's walk for on-site wet SFP. Coverage is > 50 % in rural areas, > 70 % in urban areas and > 90 % in camp situations.
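The Sphere performance checks above reduce to simple proportions of the discharge categories. A minimal sketch, assuming proportions are calculated over the four discharge categories (SAM and MAM thresholds differ only in the death rate):

```python
# Compare discharge outcomes with the Sphere minimum standards quoted above.
SPHERE = {
    "SAM": {"recovered_min": 0.75, "died_max": 0.10, "defaulted_max": 0.15},
    "MAM": {"recovered_min": 0.75, "died_max": 0.03, "defaulted_max": 0.15},
}

def sphere_check(programme, cured, died, defaulted, non_recovered):
    total = cured + died + defaulted + non_recovered
    t = SPHERE[programme]
    return {
        "recovered_ok": cured / total > t["recovered_min"],
        "deaths_ok":    died / total < t["died_max"],
        "defaulter_ok": defaulted / total < t["defaulted_max"],
    }

print(sphere_check("SAM", cured=80, died=4, defaulted=10, non_recovered=6))
# 80 % recovered > 75 %, 4 % deaths < 10 %, 10 % defaulters < 15 % -> all True
```

Note that the same outcomes can pass the SAM standard but fail the MAM one: a 4 % death rate exceeds the 3 % ceiling for targeted SFP.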

Additional data for monitoring CMAM interventions
Derived from routine monitoring and other sources.
Indicators: average length of stay; average weight gain; relapse rate; distribution of admissions by type, age, origin, …; causes of death; reasons for defaulting; investigation of non-recovered cases.
Sources of data: registration books; individual follow-up charts; interviews and focus group discussions; observation, home visits, …
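Two of the derived indicators above are straightforward calculations from the individual follow-up chart. A hedged sketch: the weight-gain formula shown (grams gained per kg of lowest recorded weight per day) is one commonly used convention, and protocols differ in the exact definition:

```python
from datetime import date

def length_of_stay(admission: date, discharge: date) -> int:
    """Days between admission and discharge for one case."""
    return (discharge - admission).days

def weight_gain_g_kg_day(min_weight_kg, discharge_weight_kg, days):
    """Average weight gain in g per kg of minimum weight per day."""
    gain_g = (discharge_weight_kg - min_weight_kg) * 1000
    return gain_g / (min_weight_kg * days)

los = length_of_stay(date(2017, 1, 4), date(2017, 2, 15))          # 42 days
g = weight_gain_g_kg_day(min_weight_kg=5.0,
                         discharge_weight_kg=6.05, days=42)        # 5.0 g/kg/day
print(los, round(g, 1))
```

Averaging these per-case values across all cured discharges gives the programme-level indicators.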

M&E for CMAM interventions: supervision
Supportive supervision visits to sites are designed to ensure and improve the quality of care by: identifying weaknesses in the performance of activities, taking immediate action and applying shared corrective solutions; and strengthening the technical capacity of health workers and motivating staff through encouragement of good practices. Supervisors and managers ensure that the performance of activities and the organisation of services meet quality standards.

Evaluation of SAM management interventions
Effectiveness: programme performance, with a strong focus on coverage. Appropriateness: e.g. the distribution and opening hours of treatment sites. Connectedness: the links with the health system, showing the level of possible integration. Cost-effectiveness has also been measured with various methods, showing large differences between contexts and approaches.

M&E of CMAM interventions: population-level assessments
Community-level assessment can be done through: repeated anthropometric surveys; programme coverage assessments.

Evaluation of coverage for CMAM
Coverage is one of the most important elements behind the success of the CMAM approach. It is measured through studies using two main approaches: Centric Systematic Area Sampling (CSAS) and the Semi-Quantitative Evaluation of Access and Coverage (SQUEAC). Coverage should reach at least 90 % of severe cases in camp situations, 70 % in urban settings and 50 % in rural settings (Sphere standards).
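The setting-specific thresholds above can be applied directly to a point coverage estimate from a CSAS or SQUEAC survey. A minimal sketch (the function name is illustrative; a real assessment would also report the confidence interval around the estimate):

```python
# Sphere coverage thresholds by setting, as quoted above.
SPHERE_COVERAGE = {"rural": 0.50, "urban": 0.70, "camp": 0.90}

def coverage_meets_standard(setting: str, coverage: float) -> bool:
    """True if the estimated coverage exceeds the Sphere threshold for the setting."""
    return coverage > SPHERE_COVERAGE[setting]

print(coverage_meets_standard("rural", 0.62))  # True:  62 % > 50 %
print(coverage_meets_standard("camp", 0.62))   # False: 62 % < 90 %
```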

Evaluation of MAM management interventions
Same criteria as for all other interventions (relevance, efficiency, etc.). SFP evaluations are rarely shared, but the available evidence shows that defaulting and non-response are very common. There is a need to evaluate the use of ready-to-use supplementary food products in terms of efficiency: weight gain, effect on defaulting, ease of use for beneficiaries, etc.