Slide 1: Monitoring and Evaluation – Use of Statistics (Module 5, Session 9)

Slide 2: Use of Statistics in Monitoring and Evaluation
1. Why consider statistics in M&E?
2. Where are the entry points for using statistics?
3. Improving the quality of indicators
4. Enhancing monitoring of indicators
5. Informing evaluations

Slide 3: 1. Why use statistics in M&E?
"When you can measure what you are speaking about and express it in numbers, you know something about it; but when you cannot measure it, when you cannot express it in numbers, your knowledge is of a meagre and unsatisfactory kind." – Lord Kelvin (British physicist)

Slide 4: 2. Entry points
Planning – informing the selection of indicators
Monitoring – measuring progress against quantitative indicator targets
Evaluation – using statistics from different sources to assess achievement in context

Slide 5: 3. Planning – enhancing the quality of indicators
Definition: Indicators are signposts of change along the path to development. Indicators are what we observe in order to verify whether – or to what extent – progress is being made towards our goals, which define what we want to achieve.
Indicators make it possible to demonstrate results. They can also help to produce results by providing a reference point for monitoring, decision-making, stakeholder consultations and evaluation. In particular, indicators can help to:
Measure progress and achievements;
Clarify consistency between activities, outputs, outcomes and goals;
Ensure legitimacy and accountability to all stakeholders by demonstrating progress;
Assess project and staff performance.

Slide 6: Indicator qualities
Good indicators have the following five characteristics:
Numeric. While not always more objective, numerical precision lends itself better to agreement over the future interpretation of data. Factual indicators, on the other hand, provide only a very crude measurement because of their limited scale (mostly yes/no). For a set of factual indicators, no monitoring system is needed, since the status is usually already known to stakeholders (e.g. law passed by parliament: yes or no). If factual indicators cannot be avoided because of the nature of the project, they should at least be supplemented by numeric indicators in a comprehensive set.
Objective. An indicator that relies on somebody's subjective judgement is not objective. For a good indicator, there has to be general agreement over the interpretation of the data.

Slide 7: Indicator qualities (2)
Specific. The indicator needs to be as specific as possible in terms of quantity, quality, time, location, target groups, baseline, targets, etc.
Relevant. The indicator needs to relate directly to the respective output, outcome or impact. In other words, a good indicator is a relevant measure of the objective.
Feasible. Even if an indicator fulfils all the other criteria, it is not useful if data collection for it is not feasible. Above all, data for the indicator need to be readily available.

Slide 8: Examples

Slide 9: Description of indicator
Precise definition: The definition must be detailed enough to ensure that different people, given the task of collecting data for the indicator at different times, would collect identical types of data. Potentially ambiguous terms (for example: small farmers, poor households, disadvantaged groups) need to be clearly defined (for example: farmers with < 1 hectare of land, households below the national poverty line).
Unit of measure: Define the precise parameter used to describe the magnitude or size of the indicator (for example: number of individuals, percentage, shillings, hectares, cumulative, average, etc.).
Disaggregated by: Identify how data will be separated to improve the breadth of understanding of results reported (for example: gender, district, urban/rural, etc.).
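
To make the three elements above concrete, the sketch below shows one way an indicator description could be recorded as a structured object; the field names and the maize yield example are illustrative assumptions, not part of the session materials.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class IndicatorDescription:
    """Structured record of an indicator description (illustrative sketch)."""
    name: str                  # short label for the indicator
    precise_definition: str    # unambiguous definition of key terms and measurement
    unit_of_measure: str       # e.g. "percentage", "hectares", "kg per hectare"
    disaggregated_by: List[str] = field(default_factory=list)  # e.g. gender, district

# Hypothetical example: "small farmers" defined explicitly to avoid ambiguity.
maize_yield = IndicatorDescription(
    name="Average maize yield of small farmers",
    precise_definition="Mean maize yield (kg/ha) of farmers cultivating less than 1 hectare",
    unit_of_measure="kg per hectare",
    disaggregated_by=["gender of household head", "district"],
)
print(maize_yield)
```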

Slide 10: Baseline and targets
Baseline: The baseline is the value of the indicator prior to an action. The baseline value establishes the starting point from which change can be measured.
Benchmarks: Benchmarks are values of the indicator while an action is still ongoing. Benchmarks are therefore intermediate targets.
Target: The target is the expected value of the indicator after an action.
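
A baseline and target together allow progress to be expressed as the share of the planned change already achieved. The sketch below illustrates the arithmetic; the function name and figures are invented for illustration only.

```python
def progress_towards_target(baseline: float, current: float, target: float) -> float:
    """Share of the planned change (baseline -> target) achieved so far, in percent."""
    if target == baseline:
        raise ValueError("target must differ from baseline")
    return 100.0 * (current - baseline) / (target - baseline)

# Invented figures: poverty headcount of 38% at baseline, target of 25%,
# latest measured value 31% -> roughly 54% of the planned reduction achieved.
print(progress_towards_target(baseline=38.0, current=31.0, target=25.0))
```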

Slide 11: 4. Thinking about monitoring
Planning for data acquisition
Attributes of good data
Timing of relevant data collection exercises

Slide 12: Planning for data acquisition
Data collection method and timing: Describe exactly and in detail how and when you will collect the data. Identify what methods and instruments you will use. Note any tool or survey required to collect the data, and attach data forms where necessary. Examples of data collection methods are secondary data, surveys, expert judgements, etc.
Data source: The data source is the entity from which the data are obtained (e.g. a government department, an NGO, other donors, etc.).
Estimated cost of data acquisition: Provide a rough estimate of what it will cost to collect and analyse the data.
Individual responsible and location of data storage: Describe who will take the lead in collecting data for this indicator. Describe how the data will be stored over time and in what format.
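
The fields above can be recorded alongside each indicator so that every indicator in the monitoring plan carries its own documented acquisition plan. The following sketch is illustrative only; all field names and values are assumptions.

```python
# Illustrative data acquisition plan for a single indicator; all values are invented.
acquisition_plan = {
    "indicator": "Average maize yield of small farmers",
    "collection_method": "annual household survey (structured questionnaire)",
    "timing": "every October, after the main harvest",
    "data_source": "district agricultural office",
    "estimated_cost": "USD 4,000 per survey round",
    "responsible": "project M&E officer",
    "storage": "central spreadsheet, backed up monthly",
}

# Print the plan as a simple checklist.
for item, value in acquisition_plan.items():
    print(f"{item}: {value}")
```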

Slide 13: Data quality issues
Known data limitations and significance: Identify where data may be weak or limited.
Actions to address data limitations: Identify actions taken or planned to address those limitations.

Slide 14: Attributes of good data
The source of the data should be known. Typically, someone at the agency that collected the data should be available to clarify and explain details.
The reason for which quantitative data were collected should be known and documented. This is important because it helps in understanding whether, for example, there were any response biases, or whether interviewer/response errors need to be accounted for.
Codified responses should be carefully documented somewhere; frequently responses are ranks or ordinal responses.

Slide 15: Attributes of good data (2)
The date of collection, scale, frequency, sampling unit, enumeration units, selection process and coverage of the data should be known. Thus the number (scale) of households (sampling unit) covered on a certain date should be known, as should the number of times the data are collected (frequency) and the region of the country covered (coverage). If data are collected on livestock, then the units in which they are recorded (thousands or millions) should be known. If the data do not include all the observations initially planned, it is good to know the reason.
The method used to identify respondents should be known. There are many methods for collecting data and identifying respondents, such as a random sample over a country or a village, or variations thereof. For example, whether data were collected randomly, to reflect proportionate representation of ethnic groups, in an interview format, or via a rapid rural appraisal should all be known. The method for identifying respondents helps determine the extent to which data can be scaled down or disaggregated.

Slide 16: Attributes of good data (3)
Spatially explicit data, whether derived from satellite images or collected on the ground, are another type of data. Documentation for these data is also extremely important, and additional knowledge about the scale and format of the data is critical.
Errors or faults in data collection (there are always some) should be well documented.
People working with the data should be very familiar with the software they use to extract, transform and construct data. Many errors in analysis occur because researchers are not familiar with the way their software handles data.
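
One way to keep the attributes listed in the last three slides from being lost is to store them as a metadata record next to the data set itself. The sketch below is a minimal illustration; the field names, values and completeness check are assumptions, not part of the session materials.

```python
# Illustrative metadata record kept alongside a survey data set; all values are invented.
dataset_metadata = {
    "source": "district agricultural office",
    "contact": "survey coordinator (available to clarify details)",
    "purpose": "baseline measurement for the maize yield indicator",
    "collection_date": "2010-10",
    "frequency": "annual",
    "sampling_unit": "household",
    "sample_size": 1200,
    "coverage": "three districts in the northern region",
    "units": "yield recorded in kg per hectare",
    "selection_method": "random sample, stratified by district",
    "known_limitations": "two villages inaccessible during the survey round",
}

# Simple completeness check before the data are used in analysis.
required = ["source", "collection_date", "sampling_unit", "coverage", "units"]
missing = [key for key in required if not dataset_metadata.get(key)]
if missing:
    raise ValueError(f"Data set documentation incomplete, missing: {missing}")
```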

Slide 17: Timing of data collection exercises
Monitoring indicators require updating quarterly or biannually.
Identify which reliable instruments may provide useful data for district-level monitoring (e.g. consumer price index in urban centres, biomass data, migration statistics, etc.).

Slide 18: 5. Informing evaluations
What type of data do evaluations require?
What sources of data exist?
How to access and manage that data?

Slide 19: Evaluation data requirements
Evaluations ask not only what has and has not been achieved, but also why it has or has not been achieved. The "what" question usually requires answers against quantitative indicators. For example, the PEAP evaluation needs to address whether the poverty headcount has gone down as per the target. Equally, an agricultural project evaluation might ask what has changed against indicators such as:
Increased yield per acre
Expansion of acreage
Share of high-value crops in total production
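
For the "what" question, the arithmetic is usually a simple comparison of indicator values against the baseline. The sketch below illustrates this for the three example indicators above, using invented before/after figures.

```python
# Invented before/after figures for the three example indicators above.
before = {"yield_per_acre": 850.0, "acreage": 420.0, "high_value_crop_share": 0.18}
after = {"yield_per_acre": 990.0, "acreage": 505.0, "high_value_crop_share": 0.26}

# Percentage change of each indicator relative to its baseline value.
for indicator, baseline in before.items():
    change = 100.0 * (after[indicator] - baseline) / baseline
    print(f"{indicator}: {change:+.1f}% since baseline")
```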

Slide 20: Evaluation data requirements (2)
The "why" question of evaluation requires a broader investigation into data:
Explanatory data: what else has been occurring that may have affected the intervention (e.g. a reduction in international coffee prices, heavy floods in the area, etc.)
Trend data: what has occurred over time in key indicators
Mixing quantitative and qualitative data: using qualitative approaches (e.g. focus groups) to better understand possible cause-and-effect relationships

Slide 21: What sources of data exist?
The best sources of quantitative data are:
Surveys (representative at regional level):
  Demographic and health survey
  Household budget survey
  National household survey
  Sero-behavioural survey
  National service delivery survey
  Informal cross-border survey
Other evaluations and reviews
Censuses (population, agriculture, etc.)

Slide 22: "Without data, all you are is just another person with an opinion." – Unknown
Source: Statistical Literacy Project, Bureau of Development Policy, UNDP

Slide 23: How to access that information
Survey data sets are available in reports, and in raw form at UBOS.
Partner evaluation reports are available online, e.g. from the World Bank.
N.B. It is often necessary to visit key offices to identify and access key data sets.
