1
Monitoring and Evaluation (M&E) in the context of Managing for Development Results (MfDR)
PAEPARD Training on "Processes, Procedures, Partnerships and Monitoring and Evaluation" during the RUFORUM Conference, Century City Conference Centre, Cape Town, South Africa, October 2016
2
Goals for this Training Workshop
To introduce you to some concepts for planning, designing, and implementing a results-based monitoring and evaluation system
To demonstrate how an M&E system is a valuable tool to support good public management
3
Ten Steps to Designing, Building and Sustaining a Results-Based M&E System
1. Conducting a Readiness Assessment
2. Agreeing on Outcomes to Monitor and Evaluate
3. Selecting Key Indicators to Monitor Outcomes
4. Baseline Data on Indicators – Where Are We Today?
5. Planning for Improvement – Selecting Results Targets
6. Monitoring for Results
7. The Role of Evaluations
8. Reporting Evaluation Findings
9. Using Evaluation Findings
10. Sustaining the M&E System
4
Part 1: Introduction to Managing for Development Results
MfDR tools, concepts and principles
Implementing MfDR: Results-Based Management
5
The Power of Measuring Results
If you do not measure results, you cannot tell success from failure
If you cannot see success, you cannot reward it
If you cannot reward success, you are probably rewarding failure
If you cannot see success, you cannot learn from it
If you cannot recognize failure, you cannot correct it
If you can demonstrate results, you can win public support
6
MfDR: Definition
Managing for Development Results (MfDR) is a management strategy that focuses on development performance and on sustainable improvements in country outcomes (OECD Policy Brief, March 2009).
MfDR = a framework for development effectiveness (i.e. performance information aimed at better decision-making) which includes practical tools for:
strategic planning and budgeting
risk management
monitoring progress
outcome evaluation
7
MfDR: General Concepts
Harmonisation and Alignment: Development partners must harmonise their operational procedures and align their support with country priorities/strategies.
Country Ownership: Countries must own the goals and objectives of all development programmes and processes.
Results-Based Management: As a change management process, this is an important aspect and a prerequisite for enhancing aid effectiveness.
Partnership: The best avenue for improving development efficiency and effectiveness.
Capacity Building: The need to invest in individual and institutional capacity building.
Learning and Decision-Making: Learning and feedback are powerful management tools with the potential to improve the performance of public interventions for development.
8
MfDR: Key Tools
Results-Based Management: RBM provides a management framework and tools for strategic planning, risk management, and performance monitoring and evaluation.
Logical Models: A logical model is a technical tool which summarises all the relevant information about a development programme/project. It is usually presented in a matrix, such as a Logical Framework Matrix.
Results-Based Monitoring and Evaluation (M&E): M&E = the systematic collection of performance information on progress towards results, which can then inform management decisions. It is an indispensable tool for increasing development effectiveness.
9
MfDR Principles
Principle 1 – At each stage of the process – from strategic planning to execution, and even after – focus the dialogue on results for country partners, development organisations and other stakeholders.
Principle 2 – Align actual programming, monitoring and evaluation activities with the agreed expected results.
Principle 3 – Make sure that the results reporting system is as simple, beneficial and user-friendly as possible.
Principle 4 – Focus management on achieving results rather than on inputs and procedures, and assign the resources needed to attain the anticipated results.
Principle 5 – Use information on results for learning and management decision-making, and also for reporting and accountability.
ATTENTION: Performance information can fuel resistance and risk-averse behaviours on the part of certain actors.
10
Part 2: Overall Results-Based Management (RBM) Concepts
Definition of RBM
Results-Based Planning (RBP)
MfDR and the "results chain"
Results Chain: some definitions
11
RBM: How is it defined?
RBM is a management strategy which allows an organisation to track how its operations contribute to the attainment of clearly defined "results". It is thus a resource management approach geared toward attaining previously set objectives through the achievement of desired results (changes) within a set strategy.
The attainment of results is systematically and continuously monitored and evaluated (a performance measurement system) in order to meet the set objectives and manage risks.
Finally, the information procured through monitoring and evaluation is communicated to the relevant decision-makers to inform future decisions (corrective measures and planning).
12
Results-Based Planning
Planning can be defined as a process which helps define objectives, develop strategies, sketch the broad outlines of implementation arrangements and allocate the resources necessary for achieving those objectives.
Good planning, combined with effective monitoring and evaluation, can play a fundamental role in improving the effectiveness of development interventions.
Good planning keeps the focus on the results that matter, so that development interventions contribute to improving the socioeconomic conditions of populations.
Results-based planning (RBP), as part of MfDR, rests on the importance of planning – for every development intervention – the expected results before developing the strategy for attaining those results.
13
5 Key stages in the RBP process
1. Start by identifying an obstacle (a shortcoming or specific problem) that is important and needs to be overcome. This is the project idea.
2. Identify the key stakeholders of the identified obstacle or problem (stakeholder analysis).
3. Evaluate the development issues from the perspective of potential beneficiaries (problem analysis).
4. Examine possible solutions and determine the best solution for each problem (objective analysis).
5. Define realistic results, taking into consideration the country situation, partner capacity and available resources (alternative analysis).
14
RBM & the "Results Chain"/Objectives
15
Standard Results Chain Concept
Inputs: Human, financial and material resources – facilities, trainers, materials.
Activities: Actions to convert inputs into outputs – training courses, studies, etc.
Outputs: Products or concrete deliverables – numbers completing training, buildings, technology.
Outcomes (short-term and intermediate): Use of outputs by target people – increased skills, employment opportunities.
Impacts: Longer-term goal or objective, without attribution – jobs, growth, income, access.
[Diagram labels: Inputs, activities and outputs lie in the area of control, internal to the organisation or programme (efficiency; planning/formulation and implementation/supply). Outputs reach direct beneficiaries. Outcomes and impacts lie in the area of influence, external to the organisation/programme (effectiveness; results/demand; quality and relevance of goods and services).]
16
Results Chain: Some definitions (1/2)
Results Chain: Maps the cause-and-effect relationships which lead to the attainment of objectives in a development intervention. The results chain begins with resource allocation, followed by activities and their outputs (or products); it leads on to outcomes (or results) and impacts, and back to feedback.
Inputs: Financial, human and material resources used for the development intervention.
Activities: Actions or work undertaken with the aim of producing specific outputs (products). An activity mobilises resources such as funds, technical assistance and other means.
17
Results Chain: Some definitions (2/2)
Outputs: Goods, equipment or services resulting from a development intervention. The term can also apply to changes resulting from an intervention that can lead to direct results. Outputs are generated by the inputs and activities of the development intervention.
Outcomes: The direct results or expected behavioural changes that follow from the uptake and use of programme outputs. They are dependent on targeted beneficiaries actually using the outputs generated through the intervention.
Impacts: Long-term effects, positive and negative, primary and secondary, generated by a development intervention, intentionally or not. It is widely accepted that a development intervention is not solely accountable for achieved impacts, since it is not the only contributor to their occurrence (attribution vs. contribution).
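To make the chain concrete for readers who work with data, here is a minimal sketch of a results chain as a data structure. This is our own Python illustration, not part of the training material; the class name and example entries (which echo the training/road/water examples from these slides) are invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class ResultsChain:
    """One field per link in the chain: inputs -> activities -> outputs -> outcomes -> impacts."""
    inputs: list = field(default_factory=list)      # resources used by the intervention
    activities: list = field(default_factory=list)  # actions converting inputs into outputs
    outputs: list = field(default_factory=list)     # goods/services delivered
    outcomes: list = field(default_factory=list)    # effects of beneficiaries using the outputs
    impacts: list = field(default_factory=list)     # long-term societal effects

chain = ResultsChain(
    inputs=["funds", "trainers", "training materials"],
    activities=["organise professional training courses"],
    outputs=["interns trained"],
    outcomes=["number of interns recruited increased"],
    impacts=["household incomes increased"],
)
print(chain.outcomes)  # -> ['number of interns recruited increased']
```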
18
Part 3: The Logical Framework Matrix (Logframe, Results Framework)
Importance of the Results Chain in the LFM
Performance Indicators/Objectively Verifiable Indicators (OVIs)
Distinction between outcome/output indicators and effect indicators
Indicator design: 7 stages
19
Logical Framework Matrix
The matrix has four columns – Column 1: Results Chain; Column 2: Indicators; Column 3: MoVs; Column 4: Assumptions – and four rows: Impact/Goal, Outcome/Purpose, Outputs, Activities.
Notes on the Outcome row: reformulate the concerns identified by stakeholders into positive, desirable outcomes/benefits. Outcomes are usually not directly measured, only reported on; they must be translated into a set of key indicators. An outcome indicator identifies a specific numerical measurement that tracks progress (or not) toward achieving an outcome.
20
Logical Framework Matrix
The LFM summarises on one page:
Why is the programme being executed? (Goal/Impacts)
What are the programme's anticipated effects? (Programme Objective/Effects)
How will the planned results be achieved? (Activities)
What are the external conditions influencing the programme's success? (Major Assumptions)
How will the programme's success be measured? (Indicators)
Where will the information needed to measure the programme's success be found? (Verification Sources)
What is the project cost, and what resources are necessary for implementing the programme? (Resources/Inputs)
21
Logical Framework Matrix
Logical Framework = a management tool aimed at improving the design of interventions. It involves identifying strategic elements (resources, outputs, achievements, impacts) and their causal relationships, indicators, and the external factors (risks) which can influence success or failure.
The columns are completed in this order:
1. Hierarchy of Objectives (Results Chain)
2. Assumptions
3. Performance Indicators
4. Data Sources (Means of Verification of indicators)
22
The Results Chain: Of what use is it?
In Results-Based Management (RBM), the results chain helps answer 3 key questions:
What objectives/results does the programme target?
How will the programme achieve the objectives/results it has been tasked with?
How do you determine whether the programme has actually achieved the objectives/results it has been charged with?
The results chain is the logical relationship linking the activities and outputs of a development intervention to the results they are supposed to produce.
23
Results and Activities
24
Distinction between outcomes/outputs and effects
Outputs → Effects:
Professional training courses organised → Number of interns recruited increased
Rural road fixed → Transport costs reduced
Quality drinking water produced and delivered → Incidence of water-borne diseases reduced
Agricultural inputs distributed → Yield per hectare increased
25
Assumptions, Hypotheses and Risks
Factors and external conditions that are important for the success of a development action but are not directly influenced by it (what is called an "enabling" environment).
A Risk is a potential event or occurrence that could adversely affect the achievement of the desired results.
An Assumption is a necessary condition for the achievement of results at the different levels. Assumptions are formulated like objectives/results, except that they do not fall under the direct responsibility of the programme.
26
Assumptions, Hypotheses and Risks
How to formulate assumptions:
They can be deduced from the hierarchy of objectives.
They are stated in the positive form, like objectives/results.
They are evaluated according to their importance to the success of the development action and their probability of being fulfilled (or not).
Examples of formulating a risk and an assumption follow.
27
Vertical Logic of the Theory of Change/Impact Pathway
The "Results Chain" articulates the "Impact Pathway" or "Theory of Change" that underlies the project design. In keeping with the vertical logic of the LFM, the achievement of each result is conditioned on the assumption written opposite that result holding true.
28
3 Categories of Risk
Preventable Risks, arising from within the programme, are monitored and controlled through rules, values, and standard compliance tools.
Strategy Risks and External Risks require distinct processes that encourage managers to openly discuss risks and find cost-effective ways to reduce the likelihood of risk events or mitigate their consequences.
An assumption with a low probability of being fulfilled (i.e. a high probability of not holding) is synonymous with high risk for a programme.
29
Assumption & Risk Evaluation Matrix
30
Horizontal Logic of the Theory of Change/Impact Pathway
The same four columns – Column 1: Results Chain; Column 2: Indicators; Column 3: MoVs; Column 4: Assumptions – are read across each row: Impact/Goal, Outcome/Purpose, Outputs, Activities, together with a framework of estimated costs at the Activities level.
31
What is an RBM indicator?
It is a quantitative or qualitative factor or variable which measures the achievement of a result and provides information on changes linked to the development intervention, or helps evaluate the performance of a development actor. Its specification must be collectively accepted by the partners and stakeholders of the development intervention.
If it can be measured, it can be managed.
All indicators must be expressed in terms of quantity, quality and time (QQT). The result only highlights the change; the indicator gives evidence of the scale of the change.
32
Why are indicators necessary?
They describe the successful results of the programme through the establishment of targets.
They help outline each result (outcomes/outputs, effects and goal/impact) by making statements clearer and more precise.
They help verify outcomes in an "objective" manner, so as to reach agreement on progress as demonstrated by the evidence.
They constitute the foundation for programme monitoring and evaluation. Progress towards anticipated results can be measured at the output and effect levels, at which the programme is most "accountable".
They constitute the foundation for making informed decisions on corrective measures.
33
Characteristics of a good indicator
It must be targeted and satisfy 5 dimensions:
Quantity (what quantity will be attained?)
Quality (with what quality?)
Time (when? or between when and when?)
Target group (who will improve/change?)
Place (where will the changes be measured?)
It must be objectively verifiable (measured in the same way by everyone, and linked to the verification sources column).
It must be practical (measure what is important, and ensure that the target values are attainable).
For outcome indicators, it is important to have baseline data in order to set target values and measure change.
34
Types of indicators
Quantitative indicators: measure change in numeric terms such as number, percentage, frequency, ratio, proportion, etc. Examples: percentage of girls and boys attending primary school; female and male unemployment rates in rural areas.
Qualitative indicators: give information on judgements, opinions, perceptions, or attitudes of people and groups, expressed in terms of satisfaction, perception of change, applicability, etc. Examples: satisfaction of water service beneficiaries; community interest in project activities.
35
Indicator: a sine qua non condition
There cannot be an "indicator" for monitoring or evaluating a development intervention if there isn’t a system (existing or planned) which enables the collection of information corresponding to that indicator, in a regular manner and in real time.
36
Output indicators
They measure the most immediate or delivered outputs of a development intervention (during and at the end of implementation).
They highlight the physical quantities of goods produced or services supplied through the development intervention.
They highlight the number of beneficiaries having had access to – or received – these goods or services.
They must also include a qualitative dimension and a time frame (QQT).
37
Outcome indicators
They help measure the short- and medium-term effects (or outcomes) resulting from beneficiary use of the outputs produced through the development intervention.
Generally, they highlight a change in behaviour, attitude or practices, or an improvement in skills and capacities.
They can also be used to measure a change in preferences or the satisfaction of beneficiaries with regard to the quality of received goods and services.
38
Designing indicators: 7 stages
Objective of the intervention: Agricultural food production increased.
1. Identify the indicator: Small farmers increase their wheat production.
2. Set the quantity: Increase in production by 50%.
3. Specify the target group: 10,000 small farmers (owning 3 ha or less).
4. Specify the quality: Use of new varieties of bread-making quality.
5. Set the time frame: Between October 2010 and October 2012.
6. Specify the location: Province X.
7. Construct the Objectively Verifiable Indicator: 10,000 small farmers (owning 3 ha or less) of province X increased their wheat production by 50% between October 2010 and October 2012, using new varieties of bread-making quality.
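As a small illustration (our own sketch, not part of the training material; the class, function and field names are invented for the example), the seven stages can be read as assembling an OVI from its QQT parts – here reproducing the wheat example from this slide:

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    target_group: str  # who will improve/change?
    quality: str       # with what quality?
    quantity: str      # what quantity will be attained?
    timeframe: str     # between when and when?
    place: str         # where will the changes be measured?
    change: str        # what changes?

def to_ovi(ind: Indicator) -> str:
    """Assemble an Objectively Verifiable Indicator statement from its parts."""
    return (f"{ind.target_group} of {ind.place} {ind.change} by {ind.quantity} "
            f"{ind.timeframe}, {ind.quality}.")

wheat = Indicator(
    target_group="10,000 small farmers (owning 3 ha or less)",
    quality="using new varieties of bread-making quality",
    quantity="50%",
    timeframe="between October 2010 and October 2012",
    place="province X",
    change="increased their wheat production",
)
print(to_ovi(wheat))
```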
39
Means (Sources) of Verification
The verification sources help locate the data necessary for verifying progress towards the achievement of an indicator, and so provide evidence of the achievement of a result.
VERY IMPORTANT REMINDER – There cannot be something called an "indicator" for monitoring or evaluating a development intervention if there isn't a system (existing or planned) which allows the regular and real-time collection of the data which matches the indicator.
Indicators and verification sources must:
Be practical and economical (obtainable at least cost); and
Create a foundation for the monitoring and evaluation of the development intervention.
40
Means (Sources) of Verification
The verification sources must specify (the 3 Fs):
The information provider (the service accountable to the programme, the National Institute of Statistics, etc.)
The format in which the information will be available (progress reports, official statistics, account books, etc.)
The information supply or collection frequency (or cycle) (monthly, quarterly, annual, etc.)
Too much information is noise: it's not the quantity that counts, but the quality of the information obtained!
A source is not necessarily dependable.
Only collect what can actually be processed.
Where a source is unavailable, change the indicator.
41
Part 4: Monitoring, Evaluation and Information System
Monitoring and Evaluation Concepts in RBM
Differences and Similarities between monitoring and evaluation
Monitoring
General evaluation concepts
Information systems in support of M&E
42
Link between Logframe and M&E
The Logical Framework can be used as the foundation of a programme's or project's Monitoring & Evaluation (M&E) system. The 2nd and 3rd columns of the LFM constitute the basic elements of an M&E system: they define the performance indicators, set the targets to be achieved, and describe the system's information sources.
43
What is Monitoring?
Monitoring = a systematic process of verifying the effectiveness [effects] and efficiency [outputs] of a development intervention's implementation (programme, project) in order to:
Assess progress towards results and identify shortfalls (or gaps); and
Recommend corrective measures for optimising the desired results.
In the life cycle of a programme, monitoring does not take place before the implementation phase; it is based on the indicators defined during the design phase of the programme.
44
Distinction of Monitoring in RBM
Results-Based Management (RBM) = a management strategy for a project/programme focused on performance: the attainment of outputs and the accomplishment of direct effects.
Under an RBM approach, "good monitoring" means:
Continuous and systematic monitoring;
Participation of the key stakeholders in the development intervention; and
Particular attention to the achievement of anticipated results.
Depending on the programme, key stakeholders may include beneficiaries, the executing agency, programme staff, the public administration, etc.
45
Difference between Monitoring and Evaluation
Monitoring = the continuous process of systematically collecting information on chosen indicators for an ongoing development intervention.
Evaluation = the systematic assessment of the design, execution, efficiency, effectiveness, process and results of an ongoing or completed programme/project.
Monitoring is continuous; evaluation is occasional or periodic (undertaken at a specific time).
Evaluation can take place at different stages of the programme cycle and often draws on external specialists who were not involved in the execution of the programme being evaluated.
46
Why Monitor?
To improve programme performance and the quality of achievements.
To learn from experiences on the ground.
To develop clear corrective measures and take good decisions.
Finally, to ensure the achievement of anticipated results and to plan while executing.
Monitoring is costly, BUT not monitoring is more costly. Monitoring is of no use if it does not allow for improvements in programme or organisational performance by informing good decisions.
47
Where and how does monitoring intervene?
At all sites where the programme takes place.
By involving communities and beneficiaries.
With data collection tools.
According to the indicators defined when the programme was designed.
By assessing quantitatively, qualitatively and in real time all programme achievements…
48
Where and how does monitoring intervene?
Good monitoring must cover all areas where a programme intervenes and all programme achievements.
Be careful of resistance!!! Do not assume that good monitoring will be accepted by everyone. There are always people who have a negative perception of M&E and therefore resist it.
49
How to do successful monitoring?
Think about and organise for M&E right from the idea stage (1st stage of the project cycle) and throughout implementation.
Involve key stakeholders in developing the programme's M&E plan (to promote agreement on anticipated results and the required performance, strengthen engagement and trust, etc.).
Exhibit firmness and rigour in executing the programme's M&E plan.
50
Who participates in Monitoring?
It is strongly recommended to clarify who the stakeholders in monitoring are and to specify their roles and responsibilities ("Who does what, and when?").
It is strongly recommended to include the identified stakeholders right from the start when putting together the programme's M&E plan.
It is necessary to train these stakeholders in M&E concepts, according to their assigned roles and responsibilities.
51
Programme Evaluation
Evaluation = the systematic and objective assessment of the design, execution and results of an ongoing or completed project, programme or policy, in order to determine its relevance and the attainment of its objectives, its development efficiency, its effectiveness, impact and sustainability.
Evaluation helps answer questions such as:
What are the programme's effects and impacts?
Is the programme evolving as anticipated?
Were the activities executed as planned (quantity, quality, duration)?
What contributed to the changes identified through monitoring?
Are the differences identified between the various programme sites due to the way the programme was operated?
Who really benefits from the programme and its ripple effects?
52
Evaluation: 3 fundamental questions
Descriptive questions: show what is happening (describe the process, prevailing conditions, organisational relationships and the points of view of the programme's various stakeholders).
Normative questions: compare what is happening with what was planned (activities, achievements, fulfilled or unfulfilled objectives). They can relate to resources/inputs, activities, and outcomes/outputs.
Cause-and-effect questions: focus on results and try to determine to what extent the programme is driving the observed change.
53
Why must an intervention be evaluated?
As a general rule, an evaluation becomes necessary when periodic monitoring data show that ongoing performance is clearly and significantly different from what was planned.
The need for evidence on what works (poor performance and budgetary restrictions can be damaging!).
The need to improve programme execution and the performance of public organisations (for example, to improve the design of social programmes and the methods for targeting beneficiaries).
The need for reliable information on the sustainability of results obtained by a programme (does the programme lead to sustainable solutions by addressing the causes of problems?).
54
Evaluation types and the programme cycle
Prospective Evaluation: assesses the potential results and objectives of a programme, and their probability of being achieved, before its launch. Conducted before the launch, it is also known as an ex-ante evaluation. Example: cost-benefit analysis.
Formative Evaluation: seeks to improve performance; usually undertaken during programme implementation. Sometimes called a process evaluation when it studies the internal dynamics of organisations. Example: mid-term evaluation.
Summative Evaluation: conducted at the end of the programme (or at the end of a programme phase); seeks to determine the level of achievement of anticipated results. Sometimes called an ex-post evaluation. Example: impact evaluation.
55
What to evaluate: the 5 main OECD criteria
Criterion – Definition
Relevance – The extent to which the programme's objectives correspond to beneficiary expectations, country needs, global priorities, and the policies of partners and donors.
Effectiveness – The measure of the results a programme has achieved, or is in the process of achieving, bearing in mind their relative importance.
Efficiency – The measure of how economically the programme's resources have been transformed into outputs (at least cost). Sometimes requires an economic analysis of the different alternatives.
Impact – The assessment of the long-term effects, positive and negative, primary and secondary, resulting from a programme, directly or indirectly, intentionally or otherwise.
Sustainability – The assessment of the durability of the benefits resulting from a development intervention after the programme's completion; the probability of long-term benefits continuing.
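As a tiny numeric illustration of the efficiency criterion (all numbers below are invented for the example, not from the training material): if two delivery options produce the same output, the one with the lower cost per unit of output is the more efficient alternative.

```python
# Illustrative unit-cost comparison for the efficiency criterion.
# All figures are hypothetical.
options = {
    "option_a": {"cost": 100_000, "farmers_trained": 500},
    "option_b": {"cost": 80_000, "farmers_trained": 500},
}
for name, o in options.items():
    print(name, "cost per farmer trained:", o["cost"] / o["farmers_trained"])
# option_a: 200.0, option_b: 160.0 -> option_b is the more efficient alternative
```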
56
Evaluation from an RBM Perspective
Goal (Impacts): Long-term results – widespread improvement in society (e.g. a reduced number of people living in poverty as a consequence of an agricultural programme).
Outcomes: Mid-term results – intermediate effects of outputs on clients (what beneficiaries achieve thanks to new access to services, etc.), e.g. greater agricultural yields.
Outputs: Short-term results – products and services produced (what project managers and those responsible deliver), e.g. access to services, an awareness campaign.
Activities: Tasks personnel plan and undertake to transform inputs into outputs, e.g. meetings, training events.
Inputs: What project managers and development partners put in – financial, human and material resources, e.g. agricultural inputs.
[In the diagram, Goal and Outcomes sit under "Results"; Outputs, Activities and Inputs sit under "Implementation".]
57
Theory of Change and Evaluation
A theory of change describes a plan for social change, from the formulation of hypotheses before design through to the definition of long-term objectives. The theory is often presented in the form of a diagram (logic model) which analyses the links between resources and results, or in the form of a table outlining the stages, data and resources leading up to the achievement of the objective envisioned by the programme (logical framework).
Building a theory of change allows an evaluator to:
Understand the philosophy upon which a programme is based.
Examine existing evidence through a research synthesis.
View a complex programme as a chain of interventions aimed at behavioural changes.
58
Theory of Change as basis for M&E
Theory of Change: the impact we are seeking; the outcomes that must change in order to achieve that impact; the strategies partners will use to bring about the outcomes we desire; and the processes that will create the conditions and the capacity of the system to put these strategies in place.
Results Framework: a logical, linear representation of how the various inputs and activities translate into outputs, outcomes and impact (the assumed "results chain"/impact pathway).
Theory of Action: the connection between the actions undertaken and the effect(s) those actions are meant to produce. It is critical to establish a rational, well-researched basis for believing that the cause is strongly related to the intended effect; there is no point in merely hoping that doing something will produce the desired effect.
Theory of Reach: the coverage necessary and sufficient to produce a credible claim to observed change.
59
Theory of Change (Model)
[Diagram: a theory of change model – a chain of boxes linked by arrows, with the assumptions underlying each link stated at every step.]
60
Evaluation data collection methods
Quantitative methods: numerically assess certain aspects of the object of evaluation. More suitable for formulating statistical, generalisable conclusions. Example: a survey. Limitation: sampling (the question of external validity).
Qualitative methods: often used to probe the depth of qualitative aspects of the object of evaluation. Valued for their flexibility and ease of use. Example: a focus group. Limitation: the evaluator plays the role of a facilitator.
Mixed methods: a complementary combination of quantitative and qualitative methods, used to collect quantifiable data and qualitative assessments. Example: direct observation during a survey interview. Limitation: achieving a good methodological combination and managing the demands of triangulation.
61
Evaluation data collection methods
From informal/less structured methods to more structured/formal methods:
Conversations/interviews with concerned individuals/stakeholders
Key informant interviews
Field/site visits
Participant(ory) observation
Direct observation
Community interviews/fora
Focus group interviews
Reviews of official records (MIS, GIS and administrative data)
Questionnaires
One-time surveys
Panel surveys
Census/inventory
Field experiments
62
Evaluation conclusions and recommendations
Evaluation conclusions
Provide clear, precise responses to the evaluation questions posed in the TORs (show causal relationships).
Very often involve value judgements (potential conflicts). Ethically, a conclusion must be linked to the data and analysis.
All questions must be answered in the conclusions. Otherwise…
State methodological limits and context: highlight how robust the link between data and conclusions is, and whether the analysis can be generalised.
Evaluation recommendations
Represent suggestions for improving, reforming or renewing the programme.
Each should draw on one or two conclusions vis-à-vis the problems.
Should be prioritised and ranked, with specific recipients.
63
Sharing/use of evaluation results
Disseminating evaluation results
A stage of the evaluation process in its own right, coming after the production and validation of the evaluation report but planned from the start.
An indispensable stage if potential users are to utilise the evaluation (transparency is essential).
Different communication channels are employed for different users.
Using evaluation results
For taking decisions, forming judgements, and knowing the programme's effects.
Can be used differently by different users.
Must be anticipated from the beginning and guide the launch of the evaluation.
64
Part 5: Management Information System, Data Quality, Collection and Analysis
MIS in support of M&E and reporting: the importance of an information system
M&E and the use of reliable data
65
After the Logframe, the M&E Plan…
After finalising the programme's logical framework, develop a monitoring and evaluation plan for the programme, including:
Definition of the data collection methodology for monitoring (sources, frequency, transmission mode, etc.);
Definition of the assessment and analysis methodology for collected data;
Designation of support mechanisms for disseminating monitoring information;
Definition of the methodology for the programme's different evaluations, and creation of their terms of reference;
Definition of the methodology for undertaking different audits (institutional and technical), if necessary, and development of the corresponding terms of reference…
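To illustrate what one entry of such a plan can look like as structured data (a hypothetical sketch of ours; the field names and values are invented, not PAEPARD prescriptions), note that it also captures the 3 Fs from the Means of Verification slides – provider, format, frequency:

```python
# One illustrative M&E plan entry; every value here is a made-up example.
me_plan_entry = {
    "indicator": "Yield per hectare increased",
    "baseline": "2.0 t/ha (2010 survey)",
    "target": "3.0 t/ha by 2012",
    "provider": "annual farm household survey",   # who supplies the information
    "format": "survey report",                    # how the information arrives
    "frequency": "annual",                        # collection cycle
    "responsible": "programme M&E officer",
    "analysis": "compare annual mean yield against baseline and target",
}
print(me_plan_entry["indicator"], "-", me_plan_entry["frequency"])
```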
66
After the M&E Plan, the Information System…
After developing the programme's monitoring and evaluation plan, it's time to design and implement a Management Information System in support of the programme's monitoring and evaluation:
Create an inventory of the existing information system (infrastructure, protocols, contacts, etc.);
Analyse the existing information system, conceptually and functionally, and identify gaps based on the monitoring and evaluation needs of the programme in question;
Create an M&E database;
Schedule implementation ahead of managing the M&E database;
Finalise the establishment of the information system;
Train users of the information system.
67
Information System: importance of the database
Definition of a database
In general, a database is a set of organised documents, generally structured in columns and in table form. For electronic databases, computer science speaks of a dataset structured and organised in such a way that a computer application can quickly select the desired elements from it.
The most common type of database in the world is the relational database. In a relational database, the data is not held in a single table but in different tables with links between them.
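As a minimal illustration of the relational idea (our sketch using Python's built-in sqlite3 module; the table and column names, and the sample values, are invented for the example), indicators and their periodic measurements live in separate tables linked by a key:

```python
import sqlite3

# Two-table relational layout: indicators in one table, periodic
# measurements in another, linked by indicator_id.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE indicator (
    id     INTEGER PRIMARY KEY,
    name   TEXT NOT NULL,
    unit   TEXT,
    target REAL
);
CREATE TABLE measurement (
    id           INTEGER PRIMARY KEY,
    indicator_id INTEGER NOT NULL REFERENCES indicator(id),
    period       TEXT NOT NULL,   -- e.g. '2011-Q3'
    value        REAL NOT NULL
);
""")
con.execute("INSERT INTO indicator VALUES (1, 'Wheat production increase', '%', 50.0)")
con.execute("INSERT INTO measurement VALUES (1, 1, '2011-Q3', 22.5)")

# Join the two tables: each measurement alongside its indicator's target.
for row in con.execute("""
    SELECT i.name, m.period, m.value, i.target
    FROM measurement m JOIN indicator i ON i.id = m.indicator_id
"""):
    print(row)
```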
68
Advantages of a M&E Database
Makes data immediately available when the need arises;
Ensures the availability of data in a format which allows for different analyses without manual calculations;
Brings more effectiveness and precision to data management and use;
Allows different data elements to be compared;
Enables quick and precise handling of large data sets;
Shortens data analysis processes and the time spent managing data;
Transforms disparate data into consolidated information;
Improves the quality, speed, and understanding of information;
Supports spatial analysis with the help of a geographical information system (GIS) and the presentation of data on maps for easy comprehension by decision-makers…
69
Using databases: To remember!!!
Never expect technology to "have all the answers" when it comes to M&E.
Take into consideration government policy on Information and Communication Technology when handling the databases of public agencies.
Keep daily track of the functionality and security of the database in order to ensure data integrity, availability and quality.
Identify the data which must be included in the database.
Determine what software or application will be used for analysis; bear in mind that having spatial analysis software is helpful.
Take all necessary measures for merging a new database with existing ones (data transfer).
Identify capacity-building needs in database design and management from the start, in order to improve database usage and information access.
70
M&E (Information) System: a question of and need for data…
An M&E system = the combination of planning, collection, analysis and use of data, and the dissemination of M&E results. It draws on sufficient resources and the required expertise to implement M&E, with the aim of improving management decision-making, programme implementation, performance, and experience building.
Once all the monitoring and evaluation instruments of a programme have been prepared (logical framework, M&E plan, information system, etc.), it is time to supply the system with quality data.
Supplying the system is not limited to data collection; there are several stages beyond that: (1) data input and formatting; (2) storage; (3) processing and analysis; (4) dissemination of M&E information; and (5) use of M&E information.
71
M&E (Information) System: a question of and need for data…
Data quality is an integral part of managing M&E data, and the manager of a development intervention must pay constant attention to it throughout the process of producing M&E information.
Quality is defined as the totality of characteristics and properties of a product or service which gives it the ability to satisfy expressed and implicit needs.
Poor data quality is due primarily to recording errors at the moment of data collection, or to input errors when data are added to a database. Spelling mistakes, erroneous codes, incorrect abbreviations and entries in the wrong column are further sources of reduced data quality, and they can have harmful consequences during the later stages of information processing, analysis and dissemination.
The risk of error multiplies with the number of aggregation levels in the M&E system, and with manual transcription or discontinuous computerisation (multiple manual inputs).
72
Data Collection Problems
Lack of clarity: a major source of errors in questionnaires, very often worsened by the diversity of interviewers. Ask clear, concise, simple, unequivocal questions which have the same meaning for everyone.
Use of jargon: technical terms and bureaucratic jargon are not used or understood by everyone. Limit their use where possible.
Suggestive (leading) questions: can steer respondents toward a specific response (manipulating the study). Always avoid these questions, as they generate unreliable data.
Words or phrases with a negative or positive connotation: two near-identical words can carry different connotations. Always study the meaning of the words to be used.
Embarrassing questions: can put the people being questioned in an uncomfortable situation. Avoid them, as they can close off access to a potentially useful information source.
Hypothetical questions: based on conjecture (for example, "If you were chief of police, what would you do to control crime?"). To be avoided, because they do not permit the collection of reliable data representative of real opinions.
Preference for prestige: some informants may tend to respond in a staged or misleading manner. In this case, triangulation may be necessary.
73
Data Collection Problems
"Better to have an approximate answer to the right question than an exact answer to the wrong question." (paraphrased from the statistician John W. Tukey)
"Better to be approximately correct than precisely wrong." (paraphrased from Bertrand Russell)
Informants often give "normative responses" (what they are expected to do) instead of real responses (what they actually do). Make sure they have really understood the objective of the data collection, so that they provide what is needed.
74
Data Input: Errors and Quality Control
Transposition errors: 39 entered instead of 93; these often give rise to other mistakes.
Input errors: 1 entered as 7, 0 (zero) entered as the letter "O", or I entered as 1.
Coding errors: entering an incorrect code (the interviewer circles 1 = Yes, and the input agent records 2 = No).
Sorting errors: occur when a person filling in a questionnaire puts numbers in an incorrect place or order.
Consistency errors: contradictory responses on the same questionnaire (e.g. birth date and age).
Range errors: a response falls outside the range of possible or probable values.
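Several of these error types can be caught automatically at input time. Below is a sketch of such checks (our own illustration; the field names, codes and bounds are invented assumptions, not part of the training material):

```python
def validate_record(rec: dict) -> list:
    """Flag some of the error types listed above: range, consistency and coding errors.
    Field names and limits are illustrative assumptions."""
    problems = []
    # Range error: value outside plausible bounds.
    if not (0 <= rec.get("age", -1) <= 120):
        problems.append("range error: age out of bounds")
    # Consistency error: contradictory responses on the same questionnaire.
    if rec.get("birth_year") is not None and rec.get("survey_year") is not None:
        implied_age = rec["survey_year"] - rec["birth_year"]
        if abs(implied_age - rec.get("age", implied_age)) > 1:
            problems.append("consistency error: age contradicts birth year")
    # Coding error: code outside the codebook (here 1 = Yes, 2 = No).
    if rec.get("employed") not in (1, 2, None):
        problems.append("coding error: unknown code for 'employed'")
    return problems

print(validate_record({"age": 93, "birth_year": 1990, "survey_year": 2016, "employed": 3}))
# -> flags a consistency error (perhaps a 39/93 transposition) and a coding error
```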
75
Data Quality Criteria
Reliability: the extent to which the data collection approach is stable and consistent across time and space.
Validity: the extent to which the data clearly and directly measure the performance we intend to measure.
Timeliness: frequency (how often are data collected?); currency (how recently have data been collected?); relevance (data need to be available often enough to support management decisions).
76
Analysis and Interpretation of M&E data
Data analysis = verifying the achievement of programme objectives and summarising the data. It does not necessarily require sophisticated software; rather, it is the assessment of the collected data against the questions asked (to know whether the programme is operating as expected; to compare its objectives with its actual performance).
Interpretation = discovering, from the M&E evidence, the causes of the observed performance.
The analysis and interpretation of M&E data generate information which supports decision-making on a programme. This is why quality data is essential.
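For instance (an illustrative sketch of ours, not a prescribed method; the formula is one common convention and the numbers are invented), comparing an indicator's actual value against its baseline and target reduces to simple arithmetic:

```python
def achievement_rate(baseline: float, target: float, actual: float) -> float:
    """Share of the planned baseline-to-target change actually achieved."""
    planned_change = target - baseline
    if planned_change == 0:
        raise ValueError("target equals baseline; rate undefined")
    return (actual - baseline) / planned_change

# Hypothetical numbers: yield rose from 2.0 to 2.6 t/ha against a 3.0 t/ha target.
print(f"{achievement_rate(2.0, 3.0, 2.6):.0%}")  # -> 60%
```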
77
Conclusion: Results-Based Management
Shifts from an input-activity-output focus to a focus on the outcomes of actions and initiatives.
Responds to demands for accountability.
Stresses knowledge and learning through continuous improvement.
Emphasizes effective resource allocation.
Provides information to help answer the "so what" question about intended actions.
Also helps in assessing whether "scarce" resources are being used most appropriately.
But recognizes that this is a political process with technical dimensions – not vice versa.
78
Conclusion: Results-Based Management
Emphasizes both implementation and results.
Results-based management involves the regular collection of information on how effectively (outcomes and impact) the organization is performing.
Results-based management demonstrates whether a project, program, or policy is achieving its stated goals, or outcomes.
79
THANK YOU