
1 A Workshop for Government Officials and Their Development Partners
Designing and Building a Results-Based Monitoring and Evaluation System: A Tool for Public Sector Management A Workshop for Government Officials and Their Development Partners ©2000 The International Bank for Reconstruction and Development / THE WORLD BANK 1818 H Street N.W. Washington, DC 20433 All rights reserved Manufactured in the United States of America First printing October 2000

2 Introduction to the Workshop

3 Designing and Building a Results-Based Monitoring and Evaluation System
A Tool for Public Sector Management
Table of Contents:
1. Introduction to Workshop
2. Introduction to Monitoring and Evaluation
3. Step 1 – Conducting a “Readiness Assessment”
4. Step 2 – Agreeing on Outcomes to Monitor and Evaluate
5. Step 3 – Selecting Key Indicators to Monitor Outcomes

4 Designing and Building a Results-Based Monitoring and Evaluation System (Cont.)
Table of Contents (cont.):
6. Step 4 – Baseline Data on Indicators—Where Are We Today?
7. Step 5 – Planning for Improvement—Setting Results Targets
8. Step 6 – Monitoring for Results
9. Step 7 – The Role of Evaluations
10. Step 8 – Reporting Your Findings
11. Step 9 – Using Your Findings
12. Step 10 – Sustaining the Monitoring and Evaluation System within Your Organization

5 Goals for This Workshop
To prepare you to plan, design, and implement a results-based monitoring and evaluation system within your organization To demonstrate how an M&E system is a valuable tool to support good public management

6 Workshop Overview
This workshop focuses on ten steps that describe how results-based monitoring and evaluation systems are designed and built. These steps begin with conducting a “Readiness Assessment” and continue through designing and managing your monitoring and evaluation system. We will be discussing these steps, the tasks needed to complete them, and the tools available to help along the way.

7 Ten Steps to Designing, Building and Sustaining a Results-Based Monitoring and Evaluation System
1. Conducting a Readiness Assessment
2. Agreeing on Outcomes to Monitor and Evaluate
3. Selecting Key Indicators to Monitor Outcomes
4. Baseline Data on Indicators—Where Are We Today?
5. Planning for Improvement—Selecting Results Targets
6. Monitoring for Results
7. The Role of Evaluations
8. Reporting Your Findings
9. Using Your Findings
10. Sustaining the M&E System Within Your Organization

8 Introduction to Results-Based Monitoring and Evaluation

9 The Power of Measuring Results
If you do not measure results, you cannot tell success from failure
If you cannot see success, you cannot reward it
If you cannot reward success, you are probably rewarding failure
If you cannot see success, you cannot learn from it
If you cannot recognize failure, you cannot correct it
If you can demonstrate results, you can win public support
Adapted from Osborne & Gaebler, 1992

10 Introduction to Results-Based Monitoring and Evaluation
What Are We Talking About? Results-based monitoring and evaluation measures how well governments are performing Results-based monitoring and evaluation is a management tool! Results-based monitoring and evaluation emphasizes assessing how outcomes are being achieved over time

11 Who Are Stakeholders That Care About Government Performance?
Government officials/Parliament
Program managers and staff
Civil society (citizens, NGOs, media, private sector, etc.)
Donors

12 Remember
Monitoring and evaluation are two separate but interrelated strategies to collect data and report the findings on how well (or not) the public sector is performing. During this workshop, we will be discussing:
Monitoring as a tool
Evaluation as a tool
How the two interrelate to support good public management
The ten steps to build a results-based monitoring and evaluation system to measure government performance

13 Reasons to Do Results-Based M&E
Provides crucial information about public sector performance
Provides a view over time on the status of a project, program, or policy
Promotes credibility and public confidence by reporting on the results of programs
Helps formulate and justify budget requests
Identifies potentially promising programs or practices

14 Reasons to Do Results-Based M&E (cont.)
Focuses attention on achieving outcomes important to the organization and its stakeholders
Provides timely, frequent information to staff
Helps establish key goals and objectives
Permits managers to identify and take action to correct weaknesses
Supports a development agenda that is shifting towards greater accountability for aid lending

15 Important…
It takes leadership commitment to achieve a better-performing organization
Plus redeployment of resources to build monitoring and evaluation systems
Plus individuals committed to improving public sector performance
So… it comes down to a combination of institutional capacity and political will.

16 Definition Results-Based Monitoring (what we will call “monitoring”) is a continuous process of collecting and analyzing information to compare how well a project, program or policy is performing against expected results

17 Major Activities Where Results Monitoring Is Needed
Setting goals and objectives
Reporting to Parliament and other stakeholders
Managing projects, programs, and policies
Reporting to donors
Allocating resources

18 A New Emphasis on Both Implementation and Results-Based Monitoring
Traditional monitoring focuses on implementation monitoring This involves tracking inputs ($$, resources, strategies), activities (what actually took place) and outputs (the products or services produced) This approach focuses on monitoring how well a project, program or policy is being implemented Often used to assess compliance with workplans and budget

19 A New Emphasis on Both Implementation and Results-Based Monitoring
Results-based monitoring involves the regular collection of information on how effectively government (or any organization) is performing Results-based monitoring demonstrates whether a project, program, or policy is achieving its stated goals

20 Results-Based Monitoring
Results:
Goal (Impacts): long-term, widespread improvement in society
Outcomes: intermediate effects of outputs on clients
Implementation:
Outputs: products and services produced
Activities: tasks personnel undertake to transform inputs into outputs
Inputs: financial, human, and material resources
Binnendijk, 2000

21 Results-Based Monitoring: Oral Re-hydration Therapy
Goal (Impacts): child mortality and morbidity reduced
Outcomes: improved use of ORT in management of childhood diarrhea
Outputs: increased maternal knowledge of and access to ORT services
Activities: media campaigns to educate mothers, health personnel trained in ORT, etc.
Inputs: funds, ORT supplies, trainers, etc.
Binnendijk, 2000

22 Results-Based Monitoring: Adult Literacy
Goal (Impacts): higher income levels; increased access to higher-skill jobs
Outcomes: increased literacy skill; more employment opportunities
Outputs: number of adults completing literacy courses
Activities: literacy training courses
Inputs: facilities, trainers, materials
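The results chain maps naturally onto a small record type. Below is a minimal Python sketch restating the adult-literacy chain; the ResultsChain class and its field names are our own invention for illustration, not part of any standard M&E toolkit.

```python
from dataclasses import dataclass

@dataclass
class ResultsChain:
    """One intervention's logic model, from implementation to results."""
    inputs: list       # financial, human, and material resources
    activities: list   # tasks that transform inputs into outputs
    outputs: list      # products and services produced
    outcomes: list     # intermediate effects of outputs on clients
    goal: str          # long-term, widespread improvement in society

adult_literacy = ResultsChain(
    inputs=["facilities", "trainers", "materials"],
    activities=["literacy training courses"],
    outputs=["number of adults completing literacy courses"],
    outcomes=["increased literacy skill", "more employment opportunities"],
    goal="higher income levels; increased access to higher-skill jobs",
)
```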

23 Exercise: Identify the Sequence of Inputs, Activities, Outputs and Outcomes
Goal: Ensure Healthier Children in Rural Communities
Information is made available for parents about the importance of sterilizing water before making formula
Fewer children are going to hospital to be treated for diarrheal diseases
Increased numbers of babies drink formula that has been made from sterilized water
Child morbidity rates decrease in the local community
New funds available to introduce an information campaign on sterilizing water in making baby formula
Knowledge among parents grows about the importance of boiling water before making infant formula

24 Exercise: Identify the Sequence of Inputs, Activities, Outputs and Outcomes
Goal: Create economically viable women-owned micro-enterprises
Government makes available funds for micro-enterprise loans
Government approves 61 applications from program graduates
90% of successful applicants begin operating new businesses after government approves application
15 qualified course trainers available
72 women complete training
Income of graduates increases 25% in first year after course completion
100 women attend training in micro-enterprise business management

25 Some Examples of Results Monitoring
Infant Health:
Policy monitoring: decreasing infant mortality rates
Program monitoring: clinic-based pre-natal care is being used by pregnant women
Project monitoring: information on good pre-natal care provided in 6 targeted villages
Girls’ Education:
Policy monitoring: increasing girls’ educational attainment
Program monitoring: # of girls in secondary schools completing math and science courses
Project monitoring: # of girls in four urban neighborhoods completing primary education

26 Definition Results-Based Evaluation An assessment of a planned, ongoing, or completed intervention to determine its relevance, efficiency, effectiveness, impact and sustainability. The intent is to incorporate lessons learned into the decision-making process.

27 Evaluation Addresses…
“Why” questions: What caused the changes we are monitoring?
“How” questions: What was the sequence or process that led to successful (or unsuccessful) outcomes?
Compliance/accountability questions: Did the promised activities actually take place, and as they were planned?
Process/implementation questions: Was the implementation process followed as anticipated, and with what consequences?

28 Designing Good Evaluations
Getting the questions right is critical Answering the questions is critical Supporting public sector decision-making with credible and useful information is critical

29 Designing Good Evaluations
“Better to have an approximate answer to the right question, than an exact answer to the wrong question.” Paraphrased from statistician John W. Tukey

30 Designing Good Evaluations
“Better to be approximately correct than precisely wrong.” Paraphrased from Bertrand Russell

31 Some Examples of Evaluation
Privatizing Water Systems:
Policy evaluations: comparing model approaches to privatizing public water supplies
Program evaluations: assessing fiscal management of government systems
Project evaluations: assessing the improvement in water fee collection rates in 2 provinces
Resettlement:
Policy evaluations: comparing strategies used for resettlement of rural villages to new areas
Program evaluations: assessing the degree to which resettled village farmers maintain previous livelihood
Project evaluations: assessing the farming practices of resettled farmers in one province

32 Complementary Roles of Results-Based Monitoring and Evaluation
Monitoring:
Clarifies program objectives
Links activities and their resources to objectives
Translates objectives into performance indicators and sets targets
Routinely collects data on these indicators, compares actual results with targets
Reports progress to managers and alerts them to problems
Evaluation:
Analyzes why intended results were or were not achieved
Assesses specific causal contributions of activities to results
Examines implementation process
Explores unintended results
Provides lessons, highlights significant accomplishment or program potential, and offers recommendations for improvement

33 Summary Results-based monitoring and evaluation are generally viewed as distinct but complementary functions Each provides a different type of performance information Both are needed to be able to better manage policy, program, and project implementation

34 Summary Implementing results-based monitoring and evaluation systems can strengthen public sector management Implementing results-based monitoring and evaluation systems requires commitment by leadership and staff alike We are discussing a political process with technical dimensions – not the reverse

35 Ten Steps to Designing, Building and Sustaining a Results-Based Monitoring and Evaluation System
1. Conducting a Readiness Assessment
2. Agreeing on Outcomes to Monitor and Evaluate
3. Selecting Key Indicators to Monitor Outcomes
4. Baseline Data on Indicators—Where Are We Today?
5. Planning for Improvement—Selecting Results Targets
6. Monitoring for Results
7. The Role of Evaluations
8. Reporting Your Findings
9. Using Your Findings
10. Sustaining the M&E System Within Your Organization

36 Step 1 Conducting a “Readiness Assessment”

37 Conducting a Readiness Assessment
(The ten-step diagram, with Step 1, Conducting a Readiness Assessment, highlighted.)

38 What is a Readiness Assessment?
An analytical framework to assess a country’s ability to monitor and evaluate its development goals.

39 Why Do a Readiness Assessment?
1. To understand what incentives (or lack thereof) exist to effectively monitor and evaluate development goals
2. To understand the roles and responsibilities of the organizations and individuals involved in monitoring and evaluating government policies, programs, and projects, e.g.:
Supreme Audit Office
Ministry of Finance
Parliament
Ministry of Planning
3. To identify issues related to the capacity (or lack of it) to monitor and evaluate government programs

40 Incentives Help Drive The Need For A Results System
First examine whether incentives exist in any of these four areas to begin designing and building an M&E system:
Political (citizen demand)
Institutional (legislative/legal framework)
Personal (desire to improve government: champions)
Economic (donor requirement)

41 Champions Can Help Drive A Results System
Who are the champion(s) and what is motivating them? Government (social reforms) Parliament (effective expenditures) Civil society (holding government accountable) Donors (PRSP) Others Note: who will not benefit?

42 Roles and Responsibilities
Assess the roles and responsibilities and existing structures to monitor and evaluate development goals What is the role of central and line ministries? What is the role of Parliament? What is the role of the Supreme Audit Agency? What is the role of civil society? What is the role of statistical groups/agencies?

43 Roles and Responsibilities
Who in the country produces data?
National government:
Central ministries (MOF, MOP)
Line ministries
Specialized units/offices (National Audit Office)
Census Bureau
National Statistics Office

44 Roles and Responsibilities (Cont.)
Who in the country produces data?
Sub-national/regional government:
Provincial central ministries
Provincial line ministries
Other?
Local government
NGOs
Donors
Others

45 Roles and Responsibilities (Cont.)
Where in the government are data used?
Preparing the budget
Resource allocation
Program policy making
Parliament/legislation and accountability
Planning
Fiscal management
Evaluation and oversight

46 Capacity Assess current capacity to monitor and evaluate:
Technical skills
Managerial skills
Existing data systems and their quality
Technology available
Fiscal resources available
Institutional experience

47 Barriers
Do any of these immediate barriers now exist to getting started in building an M&E system?
Lack of fiscal resources
Lack of political will
Lack of a champion
Lack of expertise and knowledge
Lack of strategy
Lack of prior experience

48 Key Elements of Success
Assess the Country’s Capacity Against the Following:
Does a clear mandate exist for M&E? (PRSP? Law? Civil society? Other?)
Is strong leadership present at the most senior levels of government?
Are resource and policy decisions linked to the budget?
How reliable is the information that may be used for policy and management decision making?
How involved is civil society as a partner with, or a voice toward, government?
Are there pockets of innovation that can serve as beginning practices or pilot programs?

49 Step 2 Choosing Outcomes to Monitor & Evaluate

50 Agreeing on Outcomes to Monitor and Evaluate
(The ten-step diagram, with Step 2, Agreeing on Outcomes to Monitor and Evaluate, highlighted.)

51 Why an Emphasis on Outcomes?
Makes explicit the intended objectives of government action (“Know where you are going before you get moving”) Outcomes are what produce benefits They tell you when you have been successful or not

52 Why Is It Important to Choose a Set of Key Goals or Outcomes?
“If you don’t know where you’re going, any road will get you there.” Paraphrased from Lewis Carroll’s Alice in Wonderland

53 Issues to Consider in Choosing Outcomes to Monitor and Evaluate
Are there stated national/sectoral goals? Have political promises been made that specify improved performance of the government? Do citizen polling data indicate specific concerns? Is authorizing legislation present? Other? (Millennium Development Goals) Is aid lending linked with specific goals?

54 Note: When Choosing Outcomes, Remember – “Do Not Go It Alone!”
Develop a participative approach that includes the views and ideas of key stakeholder groups

55 Choosing Outcomes—Who Needs to Be at the Table?

Who: Government, civil society, donors
Why: To build consensus for the process

56 Why Building Consensus Is Important
“The new realities of governance, globalization, aid lending, and citizen expectations require an approach that is consultative, cooperative and committed to consensus building.”

57 Developing Outcome Statements
Reformulate the concerns identified by stakeholders into positive, desirable outcomes
From: Rural crops are spoiling before getting to the market → To: Improve farmers’ access to markets
From: Children are dropping out of school → To: Create incentives for families to keep kids in school
From: It is no longer safe to go out after dark → To: Improve crime prevention programs

58 Outcomes Statements Need Disaggregation
Outcome: Increase the percentage of employed people
In order to know when we will be successful in achieving this outcome, we need to disaggregate the outcome to answer the following: For whom? Where? How much? By when?
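One way to enforce this disaggregation is to make each question a required field, so an outcome statement cannot be recorded until all four are answered. A hypothetical sketch; the class, the field names, and every example answer below are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class OutcomeStatement:
    """An outcome is only monitorable once all four questions are answered."""
    outcome: str   # what should improve
    for_whom: str  # target population
    where: str     # geographic scope
    how_much: str  # magnitude of change
    by_when: int   # deadline year

employment = OutcomeStatement(
    outcome="increase the percentage of employed people",
    for_whom="working-age adults",   # hypothetical answer
    where="three pilot provinces",   # hypothetical answer
    how_much="from 61% to 70%",      # hypothetical answer
    by_when=2008,                    # hypothetical answer
)
```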

59 Outcome Statements Are Derived from Identified Problems or Issues
Policy Area: Education
From: School buildings are not maintained and are made from poor materials → To: School structures are improved to meet standards
From: Many children of rural families are unable to travel long distances to school → To: Rural children gain equal access to educational services
From: Schools are not teaching our youth the content they need for the market economy → To: Improved curricula meet market-based economy standards
From: The poor and vulnerable are falling behind and not getting a decent education → To: Children most in need are receiving educational assistance

60 Outcome Statements Should Capture Only One Objective
Why? Consider this outcome statement: “Students in rural areas improve learning and gain better quality of life.” What are the measurement issues?

61 Developing Outcomes for One Policy Area:
Example: Education

62 In Summary: Why an Emphasis on Outcomes?
Makes explicit the intended objectives of government action (“Know where you are going before you get moving”) Outcomes are the results governments hope to achieve Clear setting of outcomes is key to results-based M&E system Note: Budget to outputs, manage to outcomes!

63 Outcomes Summary Continued
Outcomes are usually not directly measured—only reported on. Outcomes must be translated into a set of key indicators.

64 Step 3 Selecting Key Indicators to Monitor Outcomes

65 Selecting Key Indicators to Monitor Outcomes
(The ten-step diagram, with Step 3, Selecting Key Indicators to Monitor Outcomes, highlighted.)

66 Selecting Key Performance Indicators to Monitor Outcomes
Outcome indicators are not the same as outcomes. Each outcome needs to be translated into one or more indicators. An outcome indicator identifies a specific numerical measurement that tracks progress (or not) toward achieving an outcome. Urban Institute, 1999

67 “How will we know success when we see it?”
An Outcome Indicator Answers the question: “How will we know success when we see it?”

68 Selecting Outcome Indicators
The “CREAM” of Good Performance
A good performance indicator must be:
Clear (precise and unambiguous)
Relevant (appropriate to the subject at hand)
Economic (available at reasonable cost)
Adequate (must provide a sufficient basis to assess performance)
Monitorable (must be amenable to independent validation)
Salvatore Schiavo-Campo, 2000
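The CREAM criteria can double as a simple screening checklist when a team proposes indicators. A minimal sketch, assuming a yes/no judgment on each criterion; the function and the example verdicts are illustrative only.

```python
# The five CREAM criteria as a screening checklist.
CREAM = {
    "Clear": "precise and unambiguous",
    "Relevant": "appropriate to the subject at hand",
    "Economic": "available at reasonable cost",
    "Adequate": "a sufficient basis to assess performance",
    "Monitorable": "amenable to independent validation",
}

def screen_indicator(name: str, meets: dict) -> bool:
    """Pass an indicator only if every CREAM criterion is met."""
    failed = [c for c in CREAM if not meets.get(c, False)]
    if failed:
        print(f"'{name}' fails on: {', '.join(failed)}")
    return not failed

# Hypothetical review: the indicator is judged too costly to collect.
screen_indicator(
    "% change in annual revenue of farmers",
    {"Clear": True, "Relevant": True, "Economic": False,
     "Adequate": True, "Monitorable": True},
)
```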

69 When Selecting Your Project, Program, or Policy Indicators
Select several for any one outcome
Make sure the interests of multiple stakeholders are considered
Know that over time, it is OK (and expected) to add new indicators and drop old ones
Have at least three points of measurement before you consider changing your indicator

70 How Many Indicators Are Enough?
The minimum number that answers the question: “Has the outcome been achieved?”

71 Why Use Proxy Indicators?
Only use indirect measures (proxies) when data for direct indicators are not available or feasible to collect at regular intervals Example… Number of new tin roofs or televisions as a proxy measure of increased household income

72 Outcome: Increased Access of Farmers to Markets
An example. Indicators: outcome or not?
% change in annual revenue of farmers
% change in amount of spoiled crops
% change in crop pricing due to competition
% change in agricultural employment
% change in rural-to-urban migration
% change in types of crops being cultivated

73 Outcome: Reduction in Childhood Morbidity
An example. Indicators: outcome or not?
% change in missed school days due to illness
% reduction in hospital admissions due to illness
More medical doctors hired
% change in prevalence of communicable diseases
Number of children immunized
% of working days missed by parents
% change in childhood gastrointestinal diseases

74 Developing A Set of Outcomes Indicators for a Policy Area:
Example: Education
Outcome 1: Nation’s children have improved access to pre-school programs
Indicator: % of eligible urban children enrolled in pre-school education
Indicator: % of eligible rural children enrolled in pre-school education
Outcome 2: Primary school learning outcomes for children are improved
Indicator: % of Grade 6 students scoring 70% or better on standardized math and science tests
(Baselines and targets are added in Steps 4 and 5.)

75 Checklist for Assessing Proposed Indicators
Outcome to be measured: ______________________________
Indicator selected: ____________________________________
Is the indicator…
1. As direct as possible a reflection of the outcome itself?
2. Sufficiently precise to ensure objective measurement?
3. Calling for the most practical, cost-effective collection of data?
4. Sensitive to change in the outcome, but relatively unaffected by other changes?
5. Disaggregated as needed when reporting on the outcome?
United Way of America

76 Using Pre-Designed Indicators *
A number of development agencies have created indicators to track development goals, including:
Millennium Development Goals (MDGs)
UNDP – Sustainable Human Development
World Bank – Rural Development Handbook
IMF – Macroeconomic indicators
* Pre-designed indicators are those established independent of the context of any individual country or organization

77 Using Pre-Designed Indicators: Pros and Cons
Pros – Can be aggregated across similar types of projects/programs/policies
Reduces the costs of building multiple unique measurement systems
Creates greater harmonization of donor requirements
Cons – Often do not address country-specific goals
Often viewed as imposed—coming from the top down
Do not promote key stakeholder participation and ownership
Multiple competing indicators

78 In Summary: Developing Indicators
You will need to develop your own indicators to meet your own needs. Developing good indicators often takes more than one try! Arriving at the final indicators you will use will take time! Pilot, Pilot, Pilot!

79 Exercise: Select Key Performance Indicators for the Following Outcomes
Outcome #1: Improved delivery of health care to citizens living in rural areas
Outcome #2: Improved quality of agricultural export products
Outcome #3: Safe urban communities

80 Step 4 Baseline Data on Indicators – Where Are We Today?

81 Baseline Data on Indicators – Where Are We Today?
(The ten-step diagram, with Step 4, Baseline Data on Indicators, highlighted.)

82 “If you do not know where you are, you will have difficulty determining where you need to go.”
Harry Hatry Urban Institute, 1999

83 Establishing Baseline Data on Indicators
A performance baseline is… Information (quantitative or qualitative) that provides data at the beginning of, or just prior to, the monitoring period. The baseline is used to: Learn about recent levels and patterns of performance on the indicator; and to Gauge subsequent policy, program, or project performance

84 The challenge now is to think about how to obtain baseline information for results indicators selected for each outcome

85 Identify Data Sources for Your Indicators
Sources are who or what provide data – not the method of collecting data What types of data sources can you think of for performance indicators in Highway Transportation Safety?

86 Building Baseline Information

87 Data Sources May Be Primary or Secondary
PRIMARY data are collected directly by your organization, for example, through surveys, direct observation, and interviews. SECONDARY data have been collected by someone else, initially for a purpose other than yours. Examples include survey data collected by another agency, a Demographic Health Survey, or data from a financial market. Secondary data often can save you money in acquiring data you need, but be careful!

88 Sources of Data
Written records (paper and electronic)
Individuals involved with the program
General public
Trained observers
Mechanical measurements and tests

89 Design Data Collection Methods
1. Decide how to obtain the data you need from each source 2. Prepare data collection instruments 3. Develop procedures for use of the data collection instruments

90 Data Collection Methods
Data collection methods range along a continuum from informal/less structured to more structured/formal:
Conversations with concerned individuals
Community interviews
Field visits
Reviews of official records (MIS and admin data)
Key informant interviews
Participant observation
Focus group interviews
Direct observation
Questionnaires
One-time surveys
Panel surveys
Censuses
Field experiments

91 Practicality Are the data associated with the indicator practical?
Ask whether… Quality data are currently available The data can be procured on a regular and timely basis Primary data collection, when necessary, is feasible and cost-effective

92 Comparison of Major Data Collection Methods
Review of program records: low cost; some training required for data collectors; completion time depends on the amount of data needed; response rate high, if records contain the needed data
Self-administered questionnaire: moderate cost; none to some training required; short to moderate completion time; response rate depends on how it is distributed
Interview: moderate to high cost; response rate generally moderate to good
Rating by trained observer: cost depends on the availability of low-cost observers; response rate high
United Way of America

93 Developing Baseline Data for One Policy Area:
Example: Education
Outcome 1: Nation’s children have improved access to pre-school programs
Indicator: % of eligible urban children enrolled in pre-school education. Baseline: 75% of urban children ages 3-5 in 1999
Indicator: % of eligible rural children enrolled in pre-school education. Baseline: 40% of rural children ages 3-5 in 2000
Outcome 2: Primary school learning outcomes for children are improved
Indicator: % of Grade 6 students scoring 70% or better on standardized math and science tests. Baseline: 75% in 2002 scored 70% or better in math; 61% in 2002 scored 70% or better in science
(Targets are added in Step 5.)

94 Establishing Baseline Data on Indicators
In Summary: Establishing Baseline Data on Indicators A baseline is… Information (quantitative or qualitative) that provides data at the beginning of, or just prior to, the monitoring period. The baseline is used to: Learn about recent levels and patterns of performance on the indicator; and to Gauge subsequent policy, program, or project performance

95 Step 5 Planning for Improvement – Selecting Results Targets

96 Planning for Improvement – Selecting Results Targets
(The ten-step diagram, with Step 5, Planning for Improvement – Selecting Results Targets, highlighted.)

97 Definition
Targets are the quantifiable levels of the indicators that a country or organization wants to achieve at a given point in time. For example: agricultural exports will increase by 20% over the baseline in the next three years.

98 Identifying the Expected or Desired Level of Project, Program, or Policy Results Requires Selecting Performance Targets
Baseline indicator level + desired level of improvement (assuming a finite and expected level of inputs, activities, and outputs) = target performance (the desired level of performance to be reached within a specific time)
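Stated as arithmetic, the relationship on this slide is baseline plus desired improvement. A small sketch using the 20% agricultural-exports example from the definition above; the baseline figure is invented for illustration.

```python
def set_target(baseline: float, improvement_pct: float) -> float:
    """Target performance = baseline indicator level + desired improvement."""
    return baseline * (1 + improvement_pct / 100)

# Hypothetical baseline: agricultural exports of 150 (millions of USD).
baseline_exports = 150.0
target = set_target(baseline_exports, 20)  # a 20% increase over three years
print(f"Three-year target: {target:.0f}")  # 180
```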

99 Examples of Targets Related to Development
1. Goal: Economic well-being. Outcome target: Reduce by 20% the proportion of people living in extreme poverty by 2008, against the baseline
2. Goal: Social development. Outcome target: Improve by 30% the primary education enrollment rates in the Kyrgyz Republic by 2008, against the baseline. Outcome target: Reduce by 20% the incidence of hepatitis in infants by 2006, against the baseline
3. Goal: Environmental sustainability. Outcome target: Implement a national strategy for sustainable forest management by 2005

100 Factors to Consider When Selecting Indicator Targets
Clear understanding of the baseline starting point, e.g., average of the last 3 years, last year, or the average trend (see the sketch below)
Funding and the level of personnel resources expected throughout the target period
Amount of outside resources expected to supplement the program’s resources
Political concerns
Institutional capacity
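The first consideration, a clear understanding of the baseline starting point, can be computed several ways. A sketch of three common choices, assuming annual observations of the indicator; the values are invented.

```python
# Hypothetical indicator values for the last three years.
values = [52.0, 55.0, 57.0]

last_year = values[-1]                       # most recent level: 57.0
three_year_avg = sum(values) / len(values)   # average of last 3 years: ~54.7

# Average trend: mean year-on-year change, projected one year forward.
changes = [b - a for a, b in zip(values, values[1:])]
trend_projected = values[-1] + sum(changes) / len(changes)  # 59.5

print(last_year, round(three_year_avg, 1), trend_projected)
```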

101 Additional Considerations in Setting Indicator Targets
Only one target is desirable for each indicator
If the indicator is new (not previously used), be careful about setting firm targets (use a range)
Most targets are set yearly, but some could be set quarterly; others are set for longer periods (not more than 5 years)
It takes time to observe the effects of improvements, so be realistic when setting targets
Adapted from the Urban Institute, 1999

102 Additional Considerations When Setting Indicator Targets
A target does not have to be a single numerical value; it can be a range
Consider previous performance
Take your baseline seriously
Targets should be feasible, given all the resource (input) considerations
Adapted from the Urban Institute, 1999

103 “Games Sometimes Played When Setting Targets”
Set targets so modest (easy) that they will surely be met Move the target (as needed) to fit performance Pick targets that are not politically sensitive

104

105 Targets Support Public Accountability
“Whether they concern the time someone waits for treatment for cancer or the number of police officers on the beat, targets can help ensure that attention is focused and energy concentrated in the right directions. Targets challenge low expectations and give the public a clear benchmark against which they can measure progress.” David Miliband Financial Times (October 9, 2003)

106 Developing Targets for One Policy Area:
Education
Outcome 1: Nation’s children have improved access to pre-school programs
Indicator: % of eligible urban children enrolled in pre-school education. Baseline: 75% of urban children ages 3-5 in 1999. Target: 85% of urban children ages 3-5 by 2006
Indicator: % of eligible rural children enrolled in pre-school education. Baseline: 40% of rural children ages 3-5 in 2000. Target: 60% of rural children ages 3-5 by 2006
Outcome 2: Primary school learning outcomes for children are improved
Indicator: % of Grade 6 students scoring 70% or better on standardized math and science tests. Baseline: 75% scored 70% or better in math in 2002; 61% scored 70% or better in science in 2002. Target: 80% scoring 70% or better in math by 2006; 67% scoring 70% or better in science by 2006

107 Now We Have A Results Framework
Note: This completed matrix becomes your results framework! It defines your outcomes and gives you a plan for how you will know if you have been successful (or not) in achieving these outcomes

108 In Summary…
Baseline indicator level + desired level of improvement (assuming a finite and expected level of inputs, activities, and outputs) = target performance (the desired level of performance to be reached within a specific time)

109 Step 6 Monitoring For Results

110 Building a Monitoring System
(The ten-step diagram, with Step 6, Monitoring for Results, highlighted.)

111 Monitoring for Results
A results-based monitoring system tracks both implementation (inputs, activities, outputs) and results (outcomes and goals) Implementation monitoring is supported through the use of management tools – budget, staffing plans, and activity planning

112 Monitoring for Results (cont.)
Implementation monitoring tracks the means and strategies used by the organization Means and strategies are found in annual and multi-year workplans Do not forget: Results framework is not the same as a work plan Do not forget: Budget to outputs, manage to outcomes

113 Developing A Results Plan
Once a set of outcomes is identified, it is time to develop a plan to assess how the organization will begin to achieve these outcomes. In the traditional approach to developing a plan, the first thing a manager usually did was identify activities and assign responsibilities. But the shortcoming of this approach is that completing all the activities is not the same as achieving the outcome.

114 Key Types of Monitoring
Results monitoring: impacts and outcomes
Implementation monitoring (means and strategies): outputs, activities, and inputs

115 Translating Outcomes to Action
Note: Activities are crucial! They are the actions you take to manage and implement your programs, use your resources, and deliver the services of government. But the sum of these activities may or may not mean you have achieved your outcomes. The question is: How will you know when you have been successful?

116 Implementation Monitoring Links to Results Monitoring
(Diagram: an outcome is broken into Targets 1, 2, and 3; each target is supported by its own means and strategies, drawn from multi-year and annual work plans.)

117 Linking Implementation Monitoring to Results Monitoring
Goal: children’s mortality reduced
Outcome: children’s morbidity reduced
Target: reduce incidence of childhood gastrointestinal disease by 20% over 3 years
Means and strategies: improve cholera prevention programs; provision of vitamin A supplements; use of oral re-hydration therapy

118 Achieving Results Through Partnership
(Diagram: goal and outcome sit above Targets 1 and 2; each target’s means and strategies are delivered jointly by Partners 1, 2, and 3.)

119 Building a Monitoring System: A Group Exercise
Take this chart and complete the information requirements for Year 1 and Year 2:
Impact: Increase educational opportunities for children
Outcome: Increase availability of pre-school education for poor children
Target: Increase by 25% the number of poor children ages 2-5 attending pre-school by 2005
Means and strategies: Year 1: ___ Year 2: ___

120 Key Principles in Building a Monitoring System
1. There are results information needs at the project, program, and policy levels 2. Results information needs to move both horizontally and vertically in the organization 3. Demand for results information at each level needs to be identified

121 Key Principles in Building a Monitoring System (cont.)
4. Responsibility at each level needs to be clear for:
What data are collected (source)
When data are collected (frequency)
How data are collected (methodology)
Who collects the data
Who analyzes the data
For whom the data are collected
Who reports the data

122 Every Monitoring System Needs:
Ownership Management Maintenance Credibility

123 Managing for Results Calls for Analysis of Performance Data…
(Cartoon published in The New Yorker, 5/16/1994: a bird in a suit studies charts comparing “hour of rising” with “worm acquisition,” playing on the saying “the early bird catches the worm.”)

124 Performance Monitoring System Framework
For each outcome/goal you need:
Data collection strategy
Indicator
Baseline
Target
Data analysis
Reporting plan

125 Monitoring System Strategy Should Include a Data Collection and Analysis Plan
The plan should cover:
Units of analysis
Sampling procedures
Data collection instruments to be used
Frequency of data collection
Expected methods of data analysis
Who collects the data
For whom the data are being collected
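Such a plan is often captured as one structured entry per indicator. A hypothetical sketch in Python rather than a form, reusing the rural pre-school indicator from the education example; every field value is illustrative.

```python
# One indicator's entry in a data collection and analysis plan.
collection_plan = {
    "indicator": "% of eligible rural children enrolled in pre-school",
    "unit_of_analysis": "household",
    "sampling": "stratified random sample of rural districts",
    "instrument": "household survey questionnaire",
    "frequency": "annual",
    "analysis": "enrollment rate by district, compared against baseline",
    "collected_by": "National Statistics Office",
    "collected_for": "Ministry of Education",
}
```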

126 Key Criteria for Collecting Quality Performance Data
Reliability Validity Timeliness

127 The Data Quality Triangle
Reliability The extent to which the data collection approach is stable and consistent across time and space

128 The Data Quality Triangle
Validity Extent to which data clearly and directly measure the performance we intend to measure

129 The Data Quality Triangle
Timeliness
Frequency (how often are data collected?)
Currency (how recently have data been collected?)
Relevance (are data available frequently enough to support management decisions?)
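Timeliness is the most mechanical of the three criteria to check: is the newest observation recent enough, and are observations frequent enough, to support decisions? A minimal sketch, assuming dated observations; the dates and both thresholds are illustrative.

```python
from datetime import date

def is_timely(dates: list, as_of: date, max_age_days: int, max_gap_days: int) -> bool:
    """Check currency (newest point is recent) and frequency (no long gaps)."""
    dates = sorted(dates)
    current = (as_of - dates[-1]).days <= max_age_days
    gaps = [(b - a).days for a, b in zip(dates, dates[1:])]
    frequent = all(g <= max_gap_days for g in gaps)
    return current and frequent

# Hypothetical quarterly observations, judged against 120-day thresholds.
obs = [date(2003, 1, 15), date(2003, 4, 20), date(2003, 7, 18)]
print(is_timely(obs, as_of=date(2003, 10, 1), max_age_days=120, max_gap_days=120))
```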

130 Quality Assurance Challenges
What will be collected, and by what methods, are tempered by what is practical and realistic in the country and program context How much existing data relevant to our project, program, or policy are already available? How much of the available data are good enough to meet your organization’s needs?

131 Pretest Your Data Collection Instruments and Procedures
You will never really know how good your data collection approach is until you test it. Pretesting is learning how to improve your instruments or procedures before your data collection is fully under way. Skipping pretesting will almost certainly result in mistakes, and those mistakes could cost your organization a lot of wasted time and money, and maybe its valued reputation with the public.

132 Data Collection Strategy
In Summary… For each outcome/goal you need:
Data collection strategy
Indicator
Baseline
Target
Data analysis
Reporting plan

133 Step 7 The Role of Evaluations

134 The Role of Evaluations
(The ten-step diagram, with Step 7, The Role of Evaluations, highlighted.)

135 Definition Evaluation An assessment of planned, ongoing or completed intervention to determine its relevance, efficiency, effectiveness, impact and sustainability. The intent is to incorporate lessons learned into the decision-making process.

136 Uses of Evaluation
To make resource decisions
To re-think the causes of a problem
To identify issues around an emerging problem, e.g., children dropping out of school
To decide among the best alternatives
To support public sector reform and innovation
To help build consensus among stakeholders on how to respond to a problem

137 Evaluation Means Information on:
Strategy: Are we doing the right things? (Rationale/justification; a clear theory of change)
Operation: Are we doing things right? (Effectiveness in achieving expected outcomes; efficiency in optimizing resources; client satisfaction)
Learning: Are there better ways of doing it? (Alternatives; best practices; lessons learned)

138 Characteristics of Quality Evaluations
Impartiality
Usefulness
Technical adequacy
Stakeholder involvement
Feedback/dissemination
Value for money

139 Eight Types of Questions Answered by Evaluation
Descriptive: Describe the content of the information campaign in country X for HIV/AIDS prevention.
Normative/compliance: How many days during the year were national drinking water standards met? (Looks at how a project, program, or policy met stated criteria.)
Correlational: What is the relation between the literacy rate and the number of trained teachers in a locality? (Shows the link between two situations or conditions, but does not specify causality.)

140 Eight Types of Questions Answered by Evaluation
Cause and effect: Has the introduction of a new hybrid seed caused increased crop yield? (Establishes a causal relation between two situations or conditions.)
Program logic: Is the sequence/strategy of planned activities likely to increase the number of years girls stay in school? (Used to assess whether the design has the correct causal sequence.)
Implementation/process: Was a project, program, or policy to improve the quality of water supplies in an urban area implemented as intended? (Establishes whether proposed activities were conducted.)

141 Eight Types of Questions Answered by Evaluation
Performance: Are the planned outcomes and impacts from a policy being achieved? (Establishes links between inputs, activities, outputs, outcomes, and impacts.)
Appropriate use of policy tools: Has the government made use of the right policy tool in providing subsidies to indigenous villagers who need to be resettled due to the construction of a new dam? (Establishes whether the government selected the appropriate instrument to achieve its aims.)

142 When Is It Time to Make Use of Evaluation?
When regular results measurement suggests that actual performance diverges sharply from planned performance. (Chart: planned and actual trend lines diverging over time.)
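That trigger can be automated: flag an indicator for evaluation when actual values drift outside a tolerance band around the plan. A minimal sketch; the 10% tolerance and both trajectories are illustrative choices.

```python
def needs_evaluation(planned: list, actual: list, tolerance: float = 0.10) -> bool:
    """Flag when any actual value diverges from plan by more than the tolerance."""
    return any(
        abs(a - p) / abs(p) > tolerance
        for p, a in zip(planned, actual)
        if p != 0
    )

planned = [50, 55, 60, 65]  # hypothetical planned trajectory
actual = [50, 53, 54, 52]   # hypothetical actual results
print(needs_evaluation(planned, actual))  # True: 52 is 20% below the planned 65
```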

143 When Is it Time to Make Use of Evaluation?
When you want to determine the roles of both design and implementation in project, program, or policy outcomes. (2×2 matrix: strength of design, high/low, crossed with strength of implementation, high/low, yielding four cases.)

144 When Is it Time to Make Use of Evaluation? (cont.)
Resource and budget allocations are being made across projects, programs, or policies
A decision is being made whether or not to expand a pilot
There is a long period with no evidence of improvement in the problem situation
Similar projects, programs, or policies are reporting divergent outcomes
There are conflicting political pressures on decision-making in ministries or parliament
There is public outcry over a governance issue
To identify issues around an emerging problem, e.g., children dropping out of school

145 Six Types Of Evaluation
1. Performance logic chain assessment
2. Pre-implementation assessment
3. Process implementation evaluation
4. Case study
5. Impact evaluation
6. Meta-evaluation

146 1) Performance Logic Chain Assessment
Asks questions about the basic causal logic of the project, program, or policy (cause and effect assumptions) Asks about the rationale for the sequence of activities of the project, program, or policy Asks about the plausibility of achieving intended effects based on research and prior experience

147 2) Pre-Implementation Assessment
Preliminary evaluation of a project, program, or policy’s implementation strategy to assure that three standards are met: Objectives are well defined Implementation plans are plausible Intended uses of resources are well defined and appropriate to achievement of objectives

148 3) Process Implementation Evaluation
Provides detailed information on whether the program is operating as it ought (are we doing things right?)
Provides detailed information on program functioning to those interested in replicating or scaling up a pilot
Provides continuous feedback loops to assist managers

149 4) Case Study A case study is a method for learning about a complex situation and is based on a comprehensive understanding of that situation.

150 Six Basic Types of Case Study
Program effects
Critical instance
Illustrative
Cumulative
Program implementation
Exploratory

151 5) Impact Evaluation Provides information on how and why intended (and unintended) project, program, or policy outcomes and impacts were achieved (or not)

152 6) Meta-Evaluation
Pulls together known studies on a topic to gain greater confidence in findings and generalizability
Addresses whether there are credible, supportable evaluation findings on a topic
Compares different studies with disparate findings about a topic against a common set of criteria

153 Evaluation Means Information on
In Summary: Evaluation Means Information on
Strategy: Are we doing the right things? (Rationale/justification; a clear theory of change)
Operation: Are we doing things right? (Effectiveness in achieving expected outcomes; efficiency in optimizing resources; client satisfaction)
Learning: Are there better ways of doing it? (Alternatives; best practices; lessons learned)

154 Reporting Your Findings
(The ten-step diagram, with Step 8, Reporting Your Findings, highlighted.)

155 “If You Do Not Measure Results, You Cannot Tell Success From Failure”
Analyzing and Reporting Data: Gives information on the status of projects, programs, and policies Provides clues to problems Creates opportunities to consider improvements in the (projects, programs, or policy) implementation strategies Provides important information over time on trends and directions Helps confirm or challenge theory of change

156 Analyzing Your Results Data
Examine changes over time. Compare present to past data to look for trends and other changes. The more data points you have, the more certain you are of your trends. (Charts: “Improving access to rural markets,” access plotted against time; with only a few data points the trend is uncertain, with more points it becomes clear.)
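The point about data points and certainty can be made concrete with a least-squares slope. A minimal sketch with no external libraries; the access figures are invented.

```python
def trend_slope(values: list) -> float:
    """Least-squares slope of the values against their index (time)."""
    n = len(values)
    mean_x = (n - 1) / 2
    mean_y = sum(values) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(values))
    var = sum((x - mean_x) ** 2 for x in range(n))
    return cov / var

# Hypothetical "% of farmers with market access" over six reporting periods.
access = [40, 42, 41, 45, 47, 49]
print(f"Trend: {trend_slope(access):+.2f} points per period")  # about +1.83
```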

157 Reporting Your Results Data
Report results data in comparison to earlier data and to your baseline. (Remember: comparisons over time are critical!) You can report your data by:
Expenditure/income
Organizational units
Raw numbers
Geographical locations
Percentages
Demographics
Statistical tests
Client satisfaction scales (high, medium, low)

158 Present Your Data in Clear and Understandable Form
Present most important data only Use an appendix or a separate report to convey detailed data Use visual presentations (charts, graphs, maps) to highlight key points Avoid “data dumps”

159 When Reporting Your Finding Use Explanatory Notes
Suggestions:
Combine qualitative information along with quantitative
When comparisons show unexpected trends or values, provide explanations, if known
Report internal explanatory notes, e.g., loss of program personnel or other resources
Report external explanatory notes, e.g., an unexpected natural disaster or political changes
Summarize important findings
The Urban Institute, 1999

160 What Happens If the Results News Is Bad?
A good results measurement system is intended to surface problems (early warning system) Reports on performance should include explanations about poor outcomes and identify steps taken or planned to correct problems Protect the messenger Adapted from The Urban Institute, 1999

161 Outcomes Reporting Format
Actual Outcomes Versus Targets
Outcome indicator | Baseline (%) | Current | Target | Difference
Rates of hepatitis (N=6,000) | 30 | 25 | 20 | -5
Percentage of children with improved overall health status (N=9,000) | 24 | | | -4
Percentage of children who show 4 out of 5 positive scores on physical exams (N=3,500) | 50 | | 65 |
Percentage of children with improved nutritional status (N=14,000) | 80 | 85 | 83 | +2
Source: Made-up data, 2003
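The Difference column is mechanical once current values and targets are recorded, though the sign convention has to respect the indicator's direction (for hepatitis, lower is better). A sketch inferred from the made-up rows above.

```python
def difference(current: float, target: float, higher_is_better: bool) -> float:
    """Positive means the target was met or exceeded; negative means a shortfall."""
    return (current - target) if higher_is_better else (target - current)

# Rows from the (made-up) table above.
print(difference(current=25, target=20, higher_is_better=False))  # -5, hepatitis
print(difference(current=85, target=83, higher_is_better=True))   # +2, nutrition
```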

162 Analyzing and Reporting Data:
In Summary: Analyzing and Reporting Data: Gives information on the status of projects, programs, and policies Provides clues to problems Creates opportunities to consider improvements in the (projects, programs, or policy) implementation strategies Provides important information over time on trends and directions

163 Step 9 Using Your Findings

164 Using Your Findings
(The ten-step diagram, with Step 9, Using Your Findings, highlighted.)

165 Using Your Findings 10 Uses of Results Findings
1. Responds to elected officials’ and the public’s demands for accountability
2. Helps formulate and justify budget requests
3. Helps in making operational resource allocation decisions
4. Triggers in-depth examinations of what performance problems exist and what corrections are needed

166 Using Your Findings (cont.)
10 Uses of Results Findings (cont.)
5. Helps motivate personnel to continue making program improvements
6. Monitors the performance of contractors and grantees
7. Provides data for special, in-depth program evaluations
8. Helps provide services more efficiently
9. Supports strategic and other long-term planning efforts (by providing baseline information and later tracking progress)
10. Communicates better with the public to build public trust

167 Nine Strategies for Sharing Information
Empower the media
Enact “Freedom of Information” legislation
Institute e-government
Add information on internal and external internet sites
Publish annual budget reports
Engage civil society and citizen groups
Strengthen parliamentary oversight
Strengthen the Office of the Auditor General
Share and compare results findings with development partners

168 Credible Information Strengthens Public Accountability
“In the National Health Service it is not always clear that the board asks the right questions,” because “inadequate information reduces the clarity behind decision-making that is necessary to achieve effective accountability.” Nicholas Timmins, Financial Times (October 14, 2003)

169 Step 10 Sustaining the M&E System Within Your Organization

170 Sustaining the M&E System Within Your Organization
(The ten-step diagram, with Step 10, Sustaining the M&E System Within Your Organization, highlighted.)

171 6 Critical Components of Sustaining Monitoring & Evaluation Systems
Demand
Clear roles and responsibilities
Trustworthy and credible information
Accountability
Capacity
Incentives

172 Critical Component One: Demand
Structured requirements for reporting on results, e.g., European Union accession requirements or national legislation
The results from the M&E system are sought by, and available to, the government, civil society, and donors
Officials want evidence on their own performance
Organizations seek better accountability

173 Critical Component Two: Clear Roles and Responsibilities
Establish formal organizational lines of authority (that are clear) for collecting, analyzing, and reporting of performance information Build a system that links the central planning and finance ministries to line/sector ministries (internal coordination) Issue clear guidance on who is responsible for which components of the M&E system and procedures

174 Critical Component Two: Clear Roles and Responsibilities (cont.)
Build a system that goes beyond national government to other levels of government for data collection and analysis Build a system that has demand for results information at every level where information is collected and analyzed, i.e. there is no level in the system that is only a “pass through” of the information

175 Critical Component Three: Trustworthy and Credible Information
The system has to be able to produce results information that brings both good and bad news The producers of results information need protection from political reprisals The information produced by the M&E system should be transparent and subject to independent verification The data collection and analysis procedures should be subject to review by national audit office and/or Parliament

176 The Blame Game “Stop whimpering and spin the wheel of blame, Lipton!”
Cartoon by Scott Arthur Masear, Harvard Business Review, November 2003.

177 Critical Component Four: Accountability
Civil society organizations play a role by encouraging transparency of the information
The media, private sector, and Parliament all have roles in ensuring that the information is timely, accurate, and accessible
Failure is not rewarded
Problems are acknowledged and addressed

178 Critical Component Five: Capacity
Sound technical skills in data collection and analysis Managerial skills in strategic goal setting and organizational development Existing data collection and retrieval systems Ongoing availability of financial resources Institutional experience

179 Critical Component Six: Incentives
Incentives need to be introduced to encourage use of performance information: Success is acknowledged and rewarded Problems are addressed Messengers are not punished Organizational learning is valued Budget savings are shared Others?

180 Last Reminders!
The demand for capacity building never ends! The only way an organization can coast is downhill…
Keep your champions on your side and help them!
Establish the understanding with the Ministry of Finance and the Parliament that an M&E system needs sustained resources.
Look for every opportunity to link results information to budget and resource allocation decisions.
Begin with pilot efforts to demonstrate effective results-based monitoring: begin with an enclave strategy (e.g., islands of innovation) as opposed to a whole-of-government approach.
Monitor both implementation progress and results achievements.
Complement performance monitoring with evaluations to ensure a better understanding of public sector results.

