3 Designing and Building a Results-Based Monitoring and Evaluation System: A Tool for Public Sector Management

Table of Contents
1. Introduction to Workshop
2. Introduction to Monitoring and Evaluation
3. Step 1 – Conducting a "Readiness Assessment"
4. Step 2 – Agreeing on Outcomes to Monitor and Evaluate
5. Step 3 – Selecting Key Indicators to Monitor Outcomes
4 Designing and Building a Results-Based Monitoring and Evaluation System (cont.)

Table of Contents (cont.)
6. Step 4 – Baseline Data on Indicators: Where Are We Today?
7. Step 5 – Planning for Improvement: Setting Results Targets
8. Step 6 – Monitoring for Results
9. Step 7 – The Role of Evaluations
10. Step 8 – Reporting Your Findings
11. Step 9 – Using Your Findings
12. Step 10 – Sustaining the Monitoring and Evaluation System within Your Organization
5 Goals for This Workshop
- To prepare you to plan, design, and implement a results-based monitoring and evaluation system within your organization
- To demonstrate how an M&E system is a valuable tool to support good public management
6 Workshop Overview
- This workshop focuses on ten steps that describe how results-based monitoring and evaluation systems are designed and built
- These steps begin with conducting a "Readiness Assessment" and continue through designing and managing your monitoring and evaluation system
- We will discuss these steps, the tasks needed to complete them, and the tools available to help along the way
7 Ten Steps to Designing, Building and Sustaining a Results-Based Monitoring and Evaluation System
1. Conducting a Readiness Assessment
2. Agreeing on Outcomes to Monitor and Evaluate
3. Selecting Key Indicators to Monitor Outcomes
4. Baseline Data on Indicators: Where Are We Today?
5. Planning for Improvement: Selecting Results Targets
6. Monitoring for Results
7. The Role of Evaluations
8. Reporting Your Findings
9. Using Your Findings
10. Sustaining the M&E System Within Your Organization
8 Introduction to Results-Based Monitoring and Evaluation
9 The Power of Measuring Results
- If you do not measure results, you cannot tell success from failure
- If you cannot see success, you cannot reward it
- If you cannot reward success, you are probably rewarding failure
- If you cannot see success, you cannot learn from it
- If you cannot recognize failure, you cannot correct it
- If you can demonstrate results, you can win public support
Adapted from Osborne & Gaebler, 1992
10 Introduction to Results-Based Monitoring and Evaluation: What Are We Talking About?
- Results-based monitoring and evaluation measures how well governments are performing
- Results-based monitoring and evaluation is a management tool!
- Results-based monitoring and evaluation emphasizes assessing how outcomes are being achieved over time
11 Who Are the Stakeholders That Care About Government Performance?
- Government officials/Parliament
- Program managers and staff
- Civil society (citizens, NGOs, media, private sector, etc.)
- Donors
12 Remember
Monitoring and evaluation are two separate but interrelated strategies to collect data and report findings on how well (or not) the public sector is performing.
During this workshop, we will discuss:
- Monitoring as a tool
- Evaluation as a tool
- How the two interrelate to support good public management
- The ten steps to build a results-based monitoring and evaluation system to measure government performance
13 Reasons to Do Results-Based M&E
- Provides crucial information about public sector performance
- Provides a view over time of the status of a project, program, or policy
- Promotes credibility and public confidence by reporting on the results of programs
- Helps formulate and justify budget requests
- Identifies potentially promising programs or practices
14 Reasons to Do Results-Based M&E (cont.)
- Focuses attention on achieving outcomes important to the organization and its stakeholders
- Provides timely, frequent information to staff
- Helps establish key goals and objectives
- Permits managers to identify and take action to correct weaknesses
- Supports a development agenda that is shifting toward greater accountability for aid lending
15 Important…
- It takes leadership commitment to achieve a better-performing organization
- Plus redeployment of resources to build monitoring and evaluation systems
- Plus individuals committed to improving public sector performance
So… it comes down to a combination of institutional capacity and political will.
16 Definition
Results-Based Monitoring (what we will call "monitoring") is a continuous process of collecting and analyzing information to compare how well a project, program, or policy is performing against expected results.
17 Major Activities Where Results Monitoring Is Needed
- Setting goals and objectives
- Reporting to Parliament and other stakeholders
- Managing projects, programs, and policies
- Reporting to donors
- Allocating resources
18 A New Emphasis on Both Implementation and Results-Based Monitoring
- Traditional monitoring focuses on implementation monitoring
- This involves tracking inputs (funds, resources, strategies), activities (what actually took place), and outputs (the products or services produced)
- This approach focuses on monitoring how well a project, program, or policy is being implemented
- It is often used to assess compliance with workplans and budgets
19 A New Emphasis on Both Implementation and Results-Based Monitoring (cont.)
- Results-based monitoring involves the regular collection of information on how effectively government (or any organization) is performing
- Results-based monitoring demonstrates whether a project, program, or policy is achieving its stated goals
20 Results-Based Monitoring: The Results Chain
Results:
- Goal (Impacts): long-term, widespread improvement in society
- Outcomes: intermediate effects of outputs on clients
Implementation:
- Outputs: products and services produced
- Activities: tasks personnel undertake to transform inputs into outputs
- Inputs: financial, human, and material resources
Binnendijk, 2000
21 Results-Based Monitoring: Oral Rehydration Therapy
- Goal (Impacts): Child mortality and morbidity reduced
- Outcomes: Improved use of ORT in management of childhood diarrhea
- Outputs: Increased maternal knowledge of and access to ORT services
- Activities: Media campaigns to educate mothers, health personnel trained in ORT, etc.
- Inputs: Funds, ORT supplies, trainers, etc.
Binnendijk, 2000
22 Results-Based Monitoring: Adult Literacy
- Goal (Impacts): Higher income levels; increased access to higher-skill jobs
- Outcomes: Increased literacy skills; more employment opportunities
- Outputs: Number of adults completing literacy courses
- Activities: Literacy training courses
- Inputs: Facilities, trainers, materials
23 Exercise: Identify the Sequence of Inputs, Activities, Outputs and Outcomes
Goal: Ensure healthier children in rural communities
- Information is made available to parents about the importance of sterilizing water before making formula
- Fewer children are going to hospital to be treated for diarrheal diseases
- Increased numbers of babies drink formula that has been made from sterilized water
- Child morbidity rates decrease in the local community
- New funds are available to introduce an information campaign on sterilizing water when making baby formula
- Knowledge among parents grows about the importance of boiling water before making infant formula
24 Exercise: Identify the Sequence of Inputs, Activities, Outputs and Outcomes
Goal: Create economically viable women-owned micro-enterprises
- Government makes funds available for micro-enterprise loans
- Government approves 61 applications from program graduates
- 90% of successful applicants begin operating new businesses after government approves their applications
- 15 qualified course trainers are available
- 72 women complete training
- Income of graduates increases 25% in the first year after course completion
- 100 women attend training in micro-enterprise business management
25 Some Examples of Results Monitoring

Level | Infant Health | Girls' Education
Policy monitoring | Decreasing infant mortality rates | Increasing girls' educational attainment
Program monitoring | Clinic-based pre-natal care is being used by pregnant women | Number of girls in secondary schools completing math and science courses
Project monitoring | Information on good pre-natal care provided in 6 targeted villages | Number of girls in four urban neighborhoods completing primary education
26 Definition
Results-Based Evaluation: An assessment of a planned, ongoing, or completed intervention to determine its relevance, efficiency, effectiveness, impact, and sustainability. The intent is to incorporate lessons learned into the decision-making process.
27 Evaluation Addresses
- "Why" questions: What caused the changes we are monitoring?
- "How" questions: What was the sequence or process that led to successful (or unsuccessful) outcomes?
- Compliance/accountability questions: Did the promised activities actually take place, and as they were planned?
- Process/implementation questions: Was the implementation process followed as anticipated, and with what consequences?
28 Designing Good Evaluations
- Getting the questions right is critical
- Answering the questions is critical
- Supporting public sector decision-making with credible and useful information is critical
29 Designing Good Evaluations “Better to have an approximate answer to the right question, than an exact answer to the wrong question.” Paraphrased from statistician John W. Tukey
30 Designing Good Evaluations “Better to be approximately correct than precisely wrong.” Paraphrased from Bertrand Russell
31 Some Examples of Evaluation

Level | Privatizing Water Systems | Resettlement
Policy evaluations | Comparing model approaches to privatizing public water supplies | Comparing strategies used for resettlement of rural villages to new areas
Program evaluations | Assessing fiscal management of government systems | Assessing the degree to which resettled village farmers maintain previous livelihoods
Project evaluations | Assessing the improvement in water fee collection rates in 2 provinces | Assessing the farming practices of resettled farmers in one province
32 Complementary Roles of Results-Based Monitoring and Evaluation

Monitoring | Evaluation
Clarifies program objectives | Analyzes why intended results were or were not achieved
Links activities and their resources to objectives | Assesses specific causal contributions of activities to results
Translates objectives into performance indicators and sets targets | Examines implementation process
Routinely collects data on these indicators, compares actual results with targets | Explores unintended results
Reports progress to managers and alerts them to problems | Provides lessons, highlights significant accomplishments or program potential, and offers recommendations for improvement
33 Summary
- Results-based monitoring and evaluation are generally viewed as distinct but complementary functions
- Each provides a different type of performance information
- Both are needed to better manage policy, program, and project implementation
34 Summary
- Implementing results-based monitoring and evaluation systems can strengthen public sector management
- Implementing results-based monitoring and evaluation systems requires commitment by leadership and staff alike
- We are discussing a political process with technical dimensions, not the reverse
35 Ten Steps to Designing, Building and Sustaining a Results-Based Monitoring and Evaluation System
(The ten-step diagram, repeated as a section divider)
37 Step One: Conducting a Readiness Assessment
(Step 1 highlighted on the ten-step diagram)
38 What Is a Readiness Assessment?
An analytical framework to assess a country's ability to monitor and evaluate its development goals.
39 Why Do a Readiness Assessment?
1. To understand what incentives exist (or do not) to effectively monitor and evaluate development goals
2. To understand the roles and responsibilities of the organizations and individuals involved in monitoring and evaluating government policies, programs, and projects, e.g.:
   - Supreme Audit Office
   - Ministry of Finance
   - Parliament
   - Ministry of Planning
3. To identify issues related to the capacity (or lack thereof) to monitor and evaluate government programs
40 Incentives Help Drive the Need for a Results System
First examine whether incentives exist in any of these four areas to begin designing and building an M&E system:
- Political (citizen demand)
- Institutional (legislative/legal framework)
- Personal (desire to improve government = champions)
- Economic (donor requirement)
41 Champions Can Help Drive a Results System
Who are the champion(s), and what is motivating them?
- Government (social reforms)
- Parliament (effective expenditures)
- Civil society (holding government accountable)
- Donors (PRSP)
- Others
Note: who will not benefit?
42 Roles and Responsibilities
Assess the roles, responsibilities, and existing structures for monitoring and evaluating development goals:
- What is the role of central and line ministries?
- What is the role of Parliament?
- What is the role of the Supreme Audit Agency?
- What is the role of civil society?
- What is the role of statistical groups/agencies?
43 Roles and Responsibilities
Who in the country produces data?
National government:
- Central ministries (MOF, MOP)
- Line ministries
- Specialized units/offices (National Audit Office)
- Census Bureau
- National Statistics Office
44 Roles and Responsibilities (cont.)
Who in the country produces data?
Sub-national/regional government:
- Provincial central ministries
- Provincial line ministries
- Other?
Also: local government, NGOs, donors, others
45 Roles and Responsibilities (cont.)
Where in the government are data used?
- Preparing the budget
- Resource allocation
- Program policy making
- Parliament/legislation and accountability
- Planning
- Fiscal management
- Evaluation and oversight
46 Capacity
Assess current capacity to monitor and evaluate:
- Technical skills
- Managerial skills
- Existing data systems and their quality
- Technology available
- Fiscal resources available
- Institutional experience
47 Barriers
Do any of these immediate barriers now exist to getting started in building an M&E system?
- Lack of fiscal resources
- Lack of political will
- Lack of a champion
- Lack of expertise and knowledge
- Lack of strategy
- Lack of prior experience
48 Key Elements of Success
Assess the country's capacity against the following:
- Does a clear mandate exist for M&E? (PRSP? Law? Civil society? Other?)
- Is there strong leadership at the most senior levels of government?
- Are resource and policy decisions linked to the budget?
- How reliable is the information that may be used for policy and management decision making?
- How involved is civil society as a partner with, or voice to, government?
- Are there pockets of innovation that can serve as beginning practices or pilot programs?
50 Step Two: Agreeing on Outcomes to Monitor and Evaluate
(Step 2 highlighted on the ten-step diagram)
51 Why an Emphasis on Outcomes?
- Makes explicit the intended objectives of government action ("Know where you are going before you get moving")
- Outcomes are what produce benefits
- They tell you whether you have been successful or not
52 Why Is It Important to Choose a Set of Key Goals or Outcomes?
"If you don't know where you're going, any road will get you there."
Paraphrased from Lewis Carroll's Alice in Wonderland
53 Issues to Consider in Choosing Outcomes to Monitor and Evaluate
- Are there stated national/sectoral goals?
- Have political promises been made that specify improved government performance?
- Do citizen polling data indicate specific concerns?
- Is authorizing legislation present?
- Is aid lending linked with specific goals?
- Other? (e.g., the Millennium Development Goals)
54 Note: When Choosing Outcomes, Remember: "Do Not Go It Alone!"
Develop a participative approach that includes the views and ideas of key stakeholder groups.
55 Choosing Outcomes: Who Needs to Be at the Table?
- Who: government, civil society, donors
- Why: to build consensus for the process
56 Why Building Consensus Is Important “The new realities of governance, globalization, aid lending, and citizen expectations require an approach that is consultative, cooperative and committed to consensus building.”
57 Developing Outcome Statements
Reformulate the concerns identified by stakeholders into positive, desirable outcomes:

From | To
Rural crops are spoiling before getting to market | Improve farmers' access to markets
Children are dropping out of school | Create incentives for families to keep kids in school
It is no longer safe to go out after dark | Improve crime prevention programs
58 Outcome Statements Need Disaggregation
Outcome: Increase the percentage of employed people
To know when we will be successful in achieving this outcome, we need to disaggregate it to answer the following:
- For whom?
- Where?
- How much?
- By when?
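As an illustrative sketch only (every name and value below is hypothetical, invented to show the idea), the four disaggregation questions can be captured as structured data, making it easy to check whether an outcome statement is measurable:

```python
# Hypothetical disaggregation of the employment outcome from the slide.
outcome = {
    "statement": "Increase the percentage of employed people",
    "for_whom": "unemployed adults aged 18-35",   # assumption, for illustration
    "where": "three pilot rural districts",        # assumption
    "how_much": "from 52% to 60% employed",        # assumption
    "by_when": "end of 2008",                      # assumption
}

REQUIRED = ("for_whom", "where", "how_much", "by_when")

def is_disaggregated(o):
    """True only when all four questions have an answer."""
    return all(o.get(key) for key in REQUIRED)

print(is_disaggregated(outcome))                                # True
print(is_disaggregated({"statement": "Increase employment"}))   # False
```

A statement that answers only some of the questions fails the check, which mirrors the slide's point: until all four dimensions are specified, you cannot know when you have succeeded.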
59 Outcome Statements Are Derived from Identified Problems or Issues
Policy area: Education

From | To
School buildings are not maintained and are made from poor materials | Improve school structures to meet standards of the market economy
Many children of rural families are unable to travel the distances to school | Rural children gain equal access to educational services
Schools are not teaching our youth the content they need for the market economy | Improved curricula meet market-based economy standards
The poor and vulnerable are falling behind and not getting a decent education | Children most in need are receiving educational assistance
60 Outcome Statements Should Capture Only One Objective
Why? Consider this outcome statement:
"Students in rural areas improve learning and gain better quality of life."
What are the measurement issues?
61 Developing Outcomes for One Policy Area: Example: Education
62 In Summary: Why an Emphasis on Outcomes?
- Makes explicit the intended objectives of government action ("Know where you are going before you get moving")
- Outcomes are the results governments hope to achieve
- Clear setting of outcomes is key to a results-based M&E system
Note: Budget to outputs, manage to outcomes!
63 Outcomes Summary (cont.)
- Outcomes are usually not directly measured, only reported on
- Outcomes must be translated into a set of key indicators
64 Step 3: Selecting Key Indicators to Monitor Outcomes
65 Step Three: Selecting Key Performance Indicators to Monitor Outcomes
(Step 3 highlighted on the ten-step diagram)
66 Selecting Key Performance Indicators to Monitor Outcomes
- Outcome indicators are not the same as outcomes
- Each outcome needs to be translated into one or more indicators
- An outcome indicator identifies a specific numerical measurement that tracks progress (or the lack of it) toward achieving an outcome
Urban Institute, 1999
67 "How Will We Know Success When We See It?"
An outcome indicator answers the question: "How will we know success when we see it?"
68 Selecting Outcome Indicators: The "CREAM" of Good Performance
A good performance indicator must be:
- Clear (precise and unambiguous)
- Relevant (appropriate to the subject at hand)
- Economic (available at reasonable cost)
- Adequate (provides a sufficient basis to assess performance)
- Monitorable (amenable to independent validation)
Salvatore Schiavo-Campo, 2000
69 When Selecting Your Project, Program, or Policy Indicators
- Select several for any one outcome
- Make sure the interests of multiple stakeholders are considered
- Know that over time, it is OK (and expected) to add new ones and drop old ones
- Have at least three points of measurement before you consider changing an indicator
70 How Many Indicators Are Enough?
The minimum number that answers the question: "Has the outcome been achieved?"
71 Why Use Proxy Indicators?
Only use indirect measures (proxies) when data for direct indicators are not available or not feasible to collect at regular intervals.
Example: the number of new tin roofs or televisions as a proxy measure of increased household income
72 An Example. Outcome: Increased Access of Farmers to Markets
Indicators: outcome or not?
- % change in annual revenue of farmers
- % change in amount of spoiled crops
- % change in crop pricing due to competition
- % change in agricultural employment
- % change in rural-to-urban migration
- % change in types of crops being cultivated
73 An Example. Outcome: Reduction in Childhood Morbidity
Indicators: outcome or not?
- % of missed school days due to illness
- % reduction in hospital admissions due to illness
- More medical doctors hired
- % change in prevalence of communicable diseases
- Number of children immunized
- % of working days missed by parents
- % change in childhood gastrointestinal diseases
74 Developing a Set of Outcome Indicators for a Policy Area: Education

Outcomes | Indicators | Baselines | Targets
1. Nation's children have improved access to pre-school programs | % of eligible urban children enrolled in pre-school education | |
 | % of eligible rural children enrolled in pre-school education | |
2. Primary school learning outcomes for children are improved | % of Grade 6 students scoring 70% or better on standardized math and science tests | |
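As a sketch of how this matrix could be held in software (the field names are our own, not from the source), each row pairs an outcome with one indicator, with the baseline and target columns deliberately left empty until Steps 4 and 5:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FrameworkRow:
    """One row of the outcomes/indicators/baselines/targets matrix."""
    outcome: str
    indicator: str
    baseline: Optional[str] = None  # filled in at Step 4
    target: Optional[str] = None    # filled in at Step 5

framework = [
    FrameworkRow("Nation's children have improved access to pre-school programs",
                 "% of eligible urban children enrolled in pre-school education"),
    FrameworkRow("Nation's children have improved access to pre-school programs",
                 "% of eligible rural children enrolled in pre-school education"),
    FrameworkRow("Primary school learning outcomes for children are improved",
                 "% of Grade 6 students scoring 70% or better on standardized "
                 "math and science tests"),
]

# At Step 3, every row still awaits its baseline and target.
incomplete = [r.indicator for r in framework if r.baseline is None or r.target is None]
print(len(incomplete))  # 3
```

Representing the matrix this way makes the later steps mechanical: Step 4 fills `baseline`, Step 5 fills `target`, and any row still missing either value is visibly unfinished.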
75 Checklist for Assessing Proposed Indicators
Outcome to be measured: ______________________________
Indicator selected: ____________________________________
Is the indicator…
1. As direct as possible a reflection of the outcome itself?
2. Sufficiently precise to ensure objective measurement?
3. Calling for the most practical, cost-effective collection of data?
4. Sensitive to change in the outcome, but relatively unaffected by other changes?
5. Disaggregated as needed when reporting on the outcome?
United Way of America
76 Using Pre-Designed Indicators*
A number of development agencies have created indicators to track development goals, including:
- Millennium Development Goals (MDGs)
- UNDP: Sustainable Human Development
- World Bank: Rural Development Handbook
- IMF: macroeconomic indicators
* Pre-designed indicators are those established independent of the context of any individual country or organization.
77 Using Pre-Designed Indicators: Pros and Cons
Pros:
- Can be aggregated across similar types of projects/programs/policies
- Reduce the costs of building multiple unique measurement systems
- Create greater harmonization of donor requirements
Cons:
- Often do not address country-specific goals
- Often viewed as imposed, coming from the top down
- Do not promote key stakeholder participation and ownership
- Multiple competing indicators
78 In Summary: Developing Indicators
- You will need to develop your own indicators to meet your own needs
- Developing good indicators often takes more than one try!
- Arriving at the final indicators you will use takes time!
- Pilot, pilot, pilot!
79 Exercise: Select Key Performance Indicators for the Following Outcomes
- Outcome #1: Improved delivery of health care to citizens living in rural areas
- Outcome #2: Improved quality of agricultural export products
- Outcome #3: Safe urban communities
80 Step 4: Baseline Data on Indicators – Where Are We Today?
81 Step Four: Baseline Data on Indicators – Where Are We Today?
(Step 4 highlighted on the ten-step diagram)
82 “If you do not know where you are, you will have difficulty determining where you need to go.” Harry Hatry Urban Institute, 1999
83 Establishing Baseline Data on Indicators
A performance baseline is information (quantitative or qualitative) that provides data at the beginning of, or just prior to, the monitoring period. The baseline is used to:
- Learn about recent levels and patterns of performance on the indicator
- Gauge subsequent policy, program, or project performance
84 The challenge now is to think about how to obtain baseline information for results indicators selected for each outcome
85 Identify Data Sources for Your Indicators
- Sources are who or what provides the data, not the method of collecting it
- What types of data sources can you think of for performance indicators in highway transportation safety?
87 Data Sources May Be Primary or Secondary
- Primary data are collected directly by your organization, for example through surveys, direct observation, and interviews.
- Secondary data have been collected by someone else, initially for a purpose other than yours. Examples include survey data collected by another agency, a Demographic and Health Survey, or data from a financial market.
- Secondary data can often save you money in acquiring the data you need, but be careful!
88 Sources of Data
- Written records (paper and electronic)
- Individuals involved with the program
- General public
- Trained observers
- Mechanical measurements and tests
89 Design Data Collection Methods
1. Decide how to obtain the data you need from each source
2. Prepare data collection instruments
3. Develop procedures for using the data collection instruments
90 Data Collection Methods (from informal/less structured to more structured/formal)
- Conversations with concerned individuals
- Community interviews
- Field visits
- Reviews of official records (MIS and administrative data)
- Key informant interviews
- Participant observation
- Focus group interviews
- Direct observation
- Questionnaires
- One-time surveys
- Panel surveys
- Census
- Field experiments
91 Practicality
Are the data associated with the indicator practical? Ask whether:
- Quality data are currently available
- The data can be procured on a regular and timely basis
- Primary data collection, when necessary, is feasible and cost-effective
92 Comparison of Major Data Collection Methods

Characteristic | Review of Program Records | Self-Administered Questionnaire | Interview | Rating by Trained Observer
Cost | Low | Moderate | Moderate to High | Depends on availability of low-cost observers
Training required for data collectors | Some | None to Some | — | —
Completion time | Depends on amount of data needed | Short to Moderate | — | —
Response rate | High, if records contain needed data | Depends on how distributed | Generally Moderate to Good | High
United Way of America
93 Developing Baseline Data for One Policy Area: Education

Outcomes | Indicators | Baselines | Targets
1. Nation's children have improved access to pre-school programs | % of eligible urban children enrolled in pre-school education | 75% of urban children ages 3-5 in 1999 |
 | % of eligible rural children enrolled in pre-school education | 40% of rural children ages 3-5 in 2000 |
2. Primary school learning outcomes for children are improved | % of Grade 6 students scoring 70% or better on standardized math and science tests | 75% scored 70% or better in math in 2002; 61% scored 70% or better in science in 2002 |
94 In Summary: Establishing Baseline Data on Indicators
A baseline is information (quantitative or qualitative) that provides data at the beginning of, or just prior to, the monitoring period. The baseline is used to:
- Learn about recent levels and patterns of performance on the indicator
- Gauge subsequent policy, program, or project performance
95 Step 5: Planning for Improvement – Selecting Results Targets
96 Step Five: Planning for Improvement – Selecting Results Targets
(Step 5 highlighted on the ten-step diagram)
97 Definition
Targets are the quantifiable levels of the indicators that a country or organization wants to achieve at a given point in time.
For example: agricultural exports will increase by 20% over the baseline in the next three years.
98 Identifying the Expected or Desired Level of Project, Program, or Policy Results Requires Selecting Performance Targets

Baseline indicator level + desired level of improvement = target performance
- The desired level of improvement assumes a finite and expected level of inputs, activities, and outputs
- Target performance is the desired level of performance to be reached within a specific time
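The formula on this slide is simple arithmetic, and a minimal sketch makes it concrete (the function name and the numbers are ours, for illustration only):

```python
def target_performance(baseline, desired_improvement):
    """Target performance = baseline indicator level + desired level of improvement,
    assuming a finite and expected level of inputs, activities, and outputs."""
    return baseline + desired_improvement

# Hypothetical: an enrollment indicator with a baseline of 40%,
# and a desired improvement of 20 percentage points.
print(target_performance(40, 20))  # 60

# For a new indicator, Step 5 advises a range rather than one firm number:
target_range = (target_performance(40, 15), target_performance(40, 25))
print(target_range)  # (55, 65)
```

The range variant reflects the later guidance that a target does not have to be a single numerical value.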
99 Examples of Targets Related to Development
1. Goal: Economic well-being
   - Outcome target: Reduce by 20% the proportion of people living in extreme poverty by 2008, against the baseline
2. Goal: Social development
   - Outcome target: Improve primary education enrollment rates in the Kyrgyz Republic by 30% by 2008, against the baseline
   - Outcome target: Reduce the incidence of hepatitis in infants by 20% by 2006, against the baseline
3. Goal: Environmental sustainability
   - Outcome target: Implement a national strategy for sustainable forest management by 2005
100 Factors to Consider When Selecting Indicator Targets
- A clear understanding of the baseline starting point (e.g., average of the last 3 years, last year, average trend)
- Funding and level of personnel resources expected throughout the target period
- Amount of outside resources expected to supplement the program's resources
- Political concerns
- Institutional capacity
101 Additional Considerations in Setting Indicator Targets
- Only one target is desirable for each indicator
- If the indicator is new (not previously used), be careful about setting firm targets (use a range)
- Most targets are set yearly, but some could be set quarterly; others are set for longer periods (not more than 5 years)
- It takes time to observe the effects of improvements, so be realistic when setting targets
Adapted from the Urban Institute, 1999
102 Additional Considerations When Setting Indicator Targets
- A target does not have to be a single numerical value; it can be a range
- Consider previous performance
- Take your baseline seriously
- Targets should be feasible, given all the resource (input) considerations
Adapted from the Urban Institute, 1999
103 "Games Sometimes Played When Setting Targets"
- Set targets so modest (easy) that they will surely be met
- Move the target (as needed) to fit performance
- Pick targets that are not politically sensitive
105 Targets Support Public Accountability “Whether they concern the time someone waits for treatment for cancer or the number of police officers on the beat, targets can help ensure that attention is focused and energy concentrated in the right directions. Targets challenge low expectations and give the public a clear benchmark against which they can measure progress.”David MilibandFinancial Times (October 9, 2003)
106 Developing Targets for One Policy Area: Education

Outcomes | Indicators | Baselines | Targets
1. Nation's children have improved access to pre-school programs | % of eligible urban children enrolled in pre-school education | 75% of urban children ages 3-5 in 1999 | 85% of urban children ages 3-5 by 2006
 | % of eligible rural children enrolled in pre-school education | 40% of rural children ages 3-5 in 2000 | 60% of rural children ages 3-5 by 2006
2. Primary school learning outcomes for children are improved | % of Grade 6 students scoring 70% or better on standardized math and science tests | 75% scored 70% or better in math in 2002; 61% scored 70% or better in science in 2002 | 80% scoring 70% or better in math by 2006; 67% scoring 70% or better in science by 2006
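Once baselines and targets are both in place, simple arithmetic can check what each target implies and, later, express a monitoring reading as progress. This sketch uses the education values from the slide above; the data structure and function names are our own:

```python
# (indicator, baseline %, target %) -- values copied from the education example
framework = [
    ("% eligible urban children enrolled in pre-school", 75.0, 85.0),
    ("% eligible rural children enrolled in pre-school", 40.0, 60.0),
    ("% Grade 6 scoring 70% or better in math",          75.0, 80.0),
    ("% Grade 6 scoring 70% or better in science",       61.0, 67.0),
]

for name, baseline, target in framework:
    improvement = target - baseline  # desired improvement, in percentage points
    print(f"{name}: +{improvement:g} points by 2006")

def share_of_target(baseline, target, current):
    """Share of the planned improvement achieved so far:
    0.0 = still at baseline, 1.0 = target fully met."""
    return (current - baseline) / (target - baseline)

# Hypothetical mid-period reading: rural enrollment has reached 50%.
print(share_of_target(40.0, 60.0, 50.0))  # 0.5
```

Framing a reading as a share of the planned improvement makes monitoring reports comparable across indicators with very different baselines.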
107 Now We Have a Results Framework
- Note: this completed matrix becomes your results framework!
- It defines your outcomes and gives you a plan for how you will know if you have been successful (or not) in achieving them
108 In Summary…
Baseline Indicator Level + Desired Level of Improvement = Target Performance
- The desired level of improvement assumes a finite and expected level of inputs, activities, and outputs
- Target performance: the desired level of performance to be reached within a specific time
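The target-setting arithmetic on this slide can be sketched in a few lines of code. This is a minimal illustration; the function name and the enrollment numbers are assumptions, not from the workshop materials.

```python
def target_performance(baseline: float, desired_improvement: float) -> float:
    """Target performance = baseline indicator level + desired level of
    improvement, assuming a finite and expected level of inputs,
    activities, and outputs."""
    return baseline + desired_improvement

# Illustrative: a 75% enrollment baseline plus a desired 10-point improvement
print(target_performance(75.0, 10.0))  # 85.0
```

The point of spelling it out is that a target is never just a number: it is the baseline plus an improvement you believe your inputs, activities, and outputs can realistically produce within a specific time.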
110 Building a Monitoring System
(Ten-step diagram from the workshop overview, with Step 6, Monitoring for Results, highlighted)
111 Monitoring for Results
- A results-based monitoring system tracks both implementation (inputs, activities, outputs) and results (outcomes and goals)
- Implementation monitoring is supported through the use of management tools: budget, staffing plans, and activity planning
112 Monitoring for Results (cont.)
- Implementation monitoring tracks the means and strategies used by the organization
- Means and strategies are found in annual and multi-year workplans
- Do not forget: a results framework is not the same as a work plan
- Do not forget: budget to outputs, manage to outcomes
113 Developing a Results Plan
- Once a set of outcomes is identified, it is time to develop a plan for how the organization will begin to achieve these outcomes
- In the traditional approach to developing a plan, the first thing a manager usually did was identify activities and assign responsibilities
- The shortcoming of this approach is that completing all the activities is not the same as reaching the outcome goal
114 Key Types of Monitoring
Results monitoring covers results: impact and outcome
Implementation monitoring (means and strategies) covers implementation: output, activity, and input
115 Translating Outcomes to Action
Note: Activities are crucial! They are the actions you take to manage and implement your programs, use your resources, and deliver the services of government
But the sum of these activities may or may not mean you have achieved your outcomes
The question is: How will you know when you have been successful?
116 Implementation Monitoring Links to Results Monitoring
(Diagram: an outcome with Targets 1, 2, and 3; each target is linked to its own means and strategies, drawn from multi-year and annual work plans)
117 Linking Implementation Monitoring to Results Monitoring
Goal: Children’s mortality reduced
Outcome: Children’s morbidity reduced
Target: Reduce incidence of childhood gastrointestinal disease by 20% over 3 years
Means and Strategies:
- Improve cholera prevention programs
- Provision of vitamin A supplements
- Use of oral re-hydration therapy
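One hedged way to see how such a chain could live inside a monitoring system is as a small data structure linking the results side (goal, outcome, target) to the implementation side (means and strategies). The class and field names below are illustrative assumptions, not part of the workshop's method.

```python
from dataclasses import dataclass, field

@dataclass
class ResultsChain:
    """Links results monitoring (goal, outcome, target) to implementation
    monitoring (the means and strategies found in work plans)."""
    goal: str
    outcome: str
    target: str
    means_and_strategies: list = field(default_factory=list)

# The child-health example from the slide
chain = ResultsChain(
    goal="Children's mortality reduced",
    outcome="Children's morbidity reduced",
    target="Reduce incidence of childhood gastrointestinal disease "
           "by 20% over 3 years",
    means_and_strategies=[
        "Improve cholera prevention programs",
        "Provision of vitamin A supplements",
        "Use of oral re-hydration therapy",
    ],
)
print(len(chain.means_and_strategies))  # 3
```

Keeping the two sides in one record makes the slide's warning concrete: you can complete every item in `means_and_strategies` and still miss the target, so both halves have to be tracked.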
119 Building a Monitoring System: A Group Exercise
Take this chart and complete the information requirements for Year 1 and Year 2:
Impact: Increase educational opportunities for children
Outcome: Increase availability of pre-school education for poor children
Target: Increase by 25% the number of poor children ages 2-5 attending pre-school by 2005
Means and Strategies, Year 1: (to be completed)
Means and Strategies, Year 2: (to be completed)
120 Key Principles in Building a Monitoring System
1. There are results information needs at the project, program, and policy levels
2. Results information needs to move both horizontally and vertically in the organization
3. Demand for results information at each level needs to be identified
121 Key Principles in Building a Monitoring System (cont.)
4. Responsibility at each level needs to be clear for:
- What data are collected (source)
- When data are collected (frequency)
- How data are collected (methodology)
- Who collects the data
- Who analyzes the data
- For whom the data are collected
- Who reports the data
122 Every Monitoring System Needs:
- Ownership
- Management
- Maintenance
- Credibility
123 Managing for Results Calls for Analysis of Performance Data…
(Cartoon published in The New Yorker, 5/16/1994: a bird in a suit studies charts comparing “hour of rising” with “worm acquisition,” a reference to the saying “The early bird catches the worm.”)
124 Performance Monitoring System Framework
For each outcome/goal you need:
- Indicator
- Baseline
- Target
- Data Collection Strategy
- Data Analysis
- Reporting Plan
125 Monitoring System Strategy Should Include a Data Collection and Analysis Plan
The plan should cover:
- Units of analysis
- Sampling procedures
- Data collection instruments to be used
- Frequency of data collection
- Expected methods of data analysis
- Who collects the data
- For whom the data are being collected
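The elements above can be treated as a checklist against which a draft plan is reviewed. As a rough sketch (the key names and the draft contents are assumptions made for illustration), a plan review could look like this:

```python
# The elements a data collection and analysis plan should cover,
# expressed as a checklist.
REQUIRED_PLAN_ELEMENTS = [
    "units_of_analysis",
    "sampling_procedures",
    "data_collection_instruments",
    "frequency_of_collection",
    "methods_of_analysis",
    "who_collects",
    "for_whom_collected",
]

def missing_plan_elements(plan: dict) -> list:
    """Return the required elements the draft plan has not yet specified."""
    return [e for e in REQUIRED_PLAN_ELEMENTS if not plan.get(e)]

# A hypothetical draft plan that has only settled two of the elements
draft = {"units_of_analysis": "households",
         "frequency_of_collection": "quarterly"}
print(missing_plan_elements(draft))
```

Reviewing a draft plan against an explicit list like this is one way to make responsibility concrete before data collection begins, rather than discovering a gap (for example, nobody assigned to analysis) once data start arriving.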
126 Key Criteria for Collecting Quality Performance Data
- Reliability
- Validity
- Timeliness
127 The Data Quality Triangle: Reliability
The extent to which the data collection approach is stable and consistent across time and space
128 The Data Quality Triangle: Validity
The extent to which data clearly and directly measure the performance we intend to measure
129 The Data Quality Triangle: Timeliness
- Frequency (how often are data collected?)
- Currency (how recently have data been collected?)
- Relevance (data need to be available often enough to support management decisions)
130 Quality Assurance Challenges
- What will be collected, and by what methods, are tempered by what is practical and realistic in the country and program context
- How much existing data relevant to our project, program, or policy are already available?
- How much of the available data are good enough to meet your organization’s needs?
131 Pretest Your Data Collection Instruments and Procedures
- You will never really know how good your data collection approach is until you test it
- Pretesting is how you learn to improve your instruments and procedures before data collection is fully under way
- Skipping pretesting will probably result in mistakes, and those mistakes could cost your organization a lot of wasted time and money, and perhaps its valued reputation with the public
132 In Summary… Data Collection Strategy
For each outcome/goal you need:
- Indicator
- Baseline
- Target
- Data Collection Strategy
- Data Analysis
- Reporting Plan
134 The Role of Evaluations
(Ten-step diagram from the workshop overview, with Step 7, The Role of Evaluations, highlighted)
135 Definition: Evaluation
An assessment of a planned, ongoing, or completed intervention to determine its relevance, efficiency, effectiveness, impact, and sustainability. The intent is to incorporate lessons learned into the decision-making process.
136 Uses of Evaluation
- To make resource decisions
- To re-think the causes of a problem
- To identify issues around an emerging problem (e.g., children dropping out of school)
- To decide among the best alternatives
- To support public sector reform and innovation
- To help build consensus among stakeholders on how to respond to a problem
137 Evaluation Means Information on:
Strategy: Whether we are doing the right things
- Rationale/justification
- Clear theory of change
Operation: Whether we are doing things right
- Effectiveness in achieving expected outcomes
- Efficiency in optimizing resources
- Client satisfaction
Learning: Whether there are better ways of doing it
- Alternatives
- Best practices
- Lessons learned
138 Characteristics of Quality Evaluations
- Impartiality
- Usefulness
- Technical adequacy
- Stakeholder involvement
- Feedback/dissemination
- Value for money
139 Eight Types of Questions Answered by Evaluation
- Descriptive: Describe the content of the information campaign in country X for HIV/AIDS prevention
- Normative/compliance: How many days during the year were national drinking water standards met? (looks at how a project, program, or policy met stated criteria)
- Correlational: What is the relation between the literacy rate and the number of trained teachers in a locality? (shows the link between two situations or conditions, but does not specify causality)
140 Eight Types of Questions Answered by Evaluation (cont.)
- Cause and effect: Has the introduction of a new hybrid seed caused increased crop yield? (establishes a causal relation between two situations or conditions)
- Program logic: Is the sequence/strategy of planned activities likely to increase the number of years girls stay in school? (used to assess whether the design has the correct causal sequence)
- Implementation/process: Was a project, program, or policy to improve the quality of water supplies in an urban area implemented as intended? (establishes whether proposed activities were conducted)
141 Eight Types of Questions Answered by Evaluation (cont.)
- Performance: Are the planned outcomes and impacts from a policy being achieved? (establishes links between inputs, activities, outputs, outcomes, and impacts)
- Appropriate use of policy tools: Has the government made use of the right policy tool in providing subsidies to indigenous villagers who need to be resettled due to the construction of a new dam? (establishes whether the government selected the appropriate instrument to achieve its aims)
142 When Is It Time to Make Use of Evaluation?
When regular results measurement suggests actual performance diverges sharply from planned performance
(Chart: planned and actual performance lines diverging over time)
143 When Is It Time to Make Use of Evaluation? (cont.)
When you want to determine the roles of both design and implementation in project, program, or policy outcomes
(2x2 matrix: strength of design, high/low, against strength of implementation, high/low)
144 When Is It Time to Make Use of Evaluation? (cont.)
- Resource and budget allocations are being made across projects, programs, or policies
- A decision is being made whether or not to expand a pilot
- There is a long period with no evidence of improvement in the problem situation
- Similar projects, programs, or policies are reporting divergent outcomes
- There are conflicting political pressures on decision-making in ministries or parliament
- There is public outcry over a governance issue
- Issues need to be identified around an emerging problem (e.g., children dropping out of school)
145 Six Types of Evaluation
1. Performance Logic-Chain Assessment
2. Pre-Implementation Assessment
3. Process Implementation Evaluation
4. Case Study
5. Impact Evaluation
6. Meta-Evaluation
146 1) Performance Logic-Chain Assessment
- Asks questions about the basic causal logic of the project, program, or policy (cause and effect assumptions)
- Asks about the rationale for the sequence of activities of the project, program, or policy
- Asks about the plausibility of achieving intended effects, based on research and prior experience
147 2) Pre-Implementation Assessment
Preliminary evaluation of a project, program, or policy’s implementation strategy to assure that three standards are met:
- Objectives are well defined
- Implementation plans are plausible
- Intended uses of resources are well defined and appropriate to achievement of objectives
148 3) Process Implementation Evaluation
- Provides detailed information on whether the program is operating as it ought (are we doing things right?)
- Provides detailed information on program functioning to those interested in replicating or scaling up a pilot
- Provides continuous feedback loops to assist managers
149 4) Case Study
A case study is a method for learning about a complex situation, based on a comprehensive understanding of that situation.
150 Six Basic Types of Case Study
- Program effects
- Critical instance
- Illustrative
- Cumulative
- Program implementation
- Exploratory
151 5) Impact Evaluation
Provides information on how and why intended (and unintended) project, program, or policy outcomes and impacts were achieved (or not)
152 6) Meta-Evaluation
- Pulls together known studies on a topic to gain greater confidence in findings and generalizability
- Addresses whether there are credible, supportable evaluation findings on a topic
- Compares different studies with disparate findings on a topic against a common set of criteria
153 In Summary: Evaluation Means Information on
Strategy: Whether we are doing the right things
- Rationale/justification
- Clear theory of change
Operation: Whether we are doing things right
- Effectiveness in achieving expected outcomes
- Efficiency in optimizing resources
- Client satisfaction
Learning: Whether there are better ways of doing it
- Alternatives
- Best practices
- Lessons learned
154 Reporting Your Findings
(Ten-step diagram from the workshop overview, with Step 8, Reporting Your Findings, highlighted)
155 “If You Do Not Measure Results, You Can Not Tell Success From Failure”
Analyzing and reporting data:
- Gives information on the status of projects, programs, and policies
- Provides clues to problems
- Creates opportunities to consider improvements in project, program, or policy implementation strategies
- Provides important information over time on trends and directions
- Helps confirm or challenge the theory of change
156 Analyzing Your Results Data
- Examine changes over time
- Compare present to past data to look for trends and other changes
- The more data points you have, the more certain you are of your trends
(Charts: “Improving access to rural markets” plotted over time; with few data points the trend is uncertain, with more data points the trend is clear)
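The point about data points and trend certainty can be sketched with a simple least-squares slope over evenly spaced observations. The access figures below are made-up illustrations, not workshop data.

```python
def slope(series):
    """Least-squares slope of evenly spaced observations: a rough
    indicator of trend direction and steepness."""
    n = len(series)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(series) / n
    num = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, series))
    den = sum((x - x_mean) ** 2 for x in xs)
    return num / den

# Hypothetical: % of villages with access to rural markets, by year.
# With only two points any line fits; more points make the trend credible.
access = [40, 42, 45, 44, 48, 51]
print(round(slope(access), 2))  # 2.06 (roughly +2 percentage points/year)
```

A positive slope over several periods is far stronger evidence of improvement than a single year-on-year comparison, which is exactly why the slide stresses accumulating data points before declaring a trend.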
157 Reporting Your Results Data
Report results data in comparison to earlier data and to your baseline (remember: comparisons over time are critical!)
You can report your data by:
- Expenditure/income
- Organizational units
- Raw numbers
- Geographical locations
- Percentages
- Demographics
- Statistical tests
- Client satisfaction scales (high, medium, low)
158 Present Your Data in Clear and Understandable Form
- Present the most important data only
- Use an appendix or a separate report to convey detailed data
- Use visual presentations (charts, graphs, maps) to highlight key points
- Avoid “data dumps”
159 When Reporting Your Findings, Use Explanatory Notes
Suggestions:
- Combine qualitative information with quantitative
- When comparisons show unexpected trends or values, provide explanations, if known
- Report internal explanatory notes (e.g., loss of program personnel or other resources)
- Report external explanatory notes (e.g., an unexpected natural disaster, or political changes)
- Summarize important findings
The Urban Institute, 1999
160 What Happens If the Results News Is Bad?
- A good results measurement system is intended to surface problems (an early warning system)
- Reports on performance should include explanations for poor outcomes and identify steps taken or planned to correct problems
- Protect the messenger
Adapted from The Urban Institute, 1999
161 Outcomes Reporting Format: Actual Outcomes Versus Targets

Outcome Indicator                                            Baseline (%)  Current  Target  Difference
Rates of hepatitis (N=6,000)                                 30            25       20      -5
Percentage of children with improved overall
  health status (N=9,000)                                                  24               -4
Percentage of children who show 4 out of 5 positive
  scores on physical exams (N=3,500)                         50                     65
Percentage of children with improved nutritional
  status (N=14,000)                                          80            85       83      +2

Source: Made-up data, 2003
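The Difference column in a report like this can be computed mechanically, as long as you track whether higher values are better (nutritional status) or worse (hepatitis rates) for each indicator. This sketch uses the slide's made-up 2003 figures; the function name and the `higher_is_better` flag are illustrative assumptions.

```python
def gap_vs_target(current: float, target: float,
                  higher_is_better: bool = True) -> float:
    """Signed difference between current performance and target:
    positive means ahead of target, negative means behind, in whichever
    direction counts as 'better' for this indicator."""
    return current - target if higher_is_better else target - current

# Rows from the slide's made-up data:
# hepatitis rate (lower is better): current 25 vs target 20
print(gap_vs_target(25, 20, higher_is_better=False))  # -5 (behind target)
# improved nutritional status (higher is better): current 85 vs target 83
print(gap_vs_target(85, 83))                          # 2 (ahead of target)
```

Encoding the direction of "better" explicitly avoids the classic reporting mistake of presenting a falling disease rate as a negative result.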
162 In Summary: Analyzing and Reporting Data
- Gives information on the status of projects, programs, and policies
- Provides clues to problems
- Creates opportunities to consider improvements in project, program, or policy implementation strategies
- Provides important information over time on trends and directions
164 Using Your Findings
(Ten-step diagram from the workshop overview, with Step 9, Using Your Findings, highlighted)
165 Using Your Findings: 10 Uses of Results Findings
1. Responds to elected officials’ and the public’s demands for accountability
2. Helps formulate and justify budget requests
3. Helps in making operational resource allocation decisions
4. Triggers in-depth examinations of what performance problems exist and what corrections are needed
166 Using Your Findings (cont.): 10 Uses of Results Findings
5. Helps motivate personnel to continue making program improvements
6. Monitors the performance of contractors and grantees
7. Provides data for special, in-depth program evaluations
8. Helps provide services more efficiently
9. Supports strategic and other long-term planning efforts (by providing baseline information and later tracking progress)
10. Communicates better with the public to build public trust
167 Nine Strategies for Sharing Information
- Empower the media
- Enact “freedom of information” legislation
- Institute e-government
- Add information on internal and external Internet sites
- Publish annual budget reports
- Engage civil society and citizen groups
- Strengthen parliamentary oversight
- Strengthen the Office of the Auditor General
- Share and compare results findings with development partners
168 Credible Information Strengthens Public Accountability
“In the National Health Service it is not always clear that the board asks the right questions,” because “inadequate information reduces the clarity behind decision-making that is necessary to achieve effective accountability.”
Nicholas Timmins, Financial Times (October 14, 2003)
169 Step 10: Sustaining the M&E System Within Your Organization
170 Sustaining the M&E System Within Your Organization
(Ten-step diagram from the workshop overview, with Step 10, Sustaining the M&E System Within Your Organization, highlighted)
171 Six Critical Components of Sustaining Monitoring & Evaluation Systems
- Demand
- Clear roles and responsibilities
- Trustworthy and credible information
- Accountability
- Capacity
- Incentives
172 Critical Component One: Demand
- Structured requirements for reporting on results (e.g., European Union accession or national legislation)
- The results from the M&E system are sought by, and available to, the government, civil society, and donors
- Officials want evidence on their own performance
- Organizations seek better accountability
173 Critical Component Two: Clear Roles and Responsibilities
- Establish clear, formal organizational lines of authority for collecting, analyzing, and reporting performance information
- Build a system that links the central planning and finance ministries to line/sector ministries (internal coordination)
- Issue clear guidance on who is responsible for which components of the M&E system and procedures
174 Critical Component Two: Clear Roles and Responsibilities (cont.)
- Build a system that goes beyond national government to other levels of government for data collection and analysis
- Build a system with demand for results information at every level where information is collected and analyzed, i.e., no level in the system is only a “pass through” for the information
175 Critical Component Three: Trustworthy and Credible Information
- The system has to be able to produce results information that brings both good and bad news
- The producers of results information need protection from political reprisals
- The information produced by the M&E system should be transparent and subject to independent verification
- The data collection and analysis procedures should be subject to review by the national audit office and/or Parliament
176 The Blame Game “Stop whimpering and spin the wheel of blame, Lipton!” Cartoon by Scott Arthur Masear, Harvard Business Review, November 2003.
177 Critical Component Four: Accountability
- Civil society organizations play a role by encouraging transparency of the information
- The media, private sector, and Parliament all have roles in ensuring that the information is timely, accurate, and accessible
- Failure is not rewarded
- Problems are acknowledged and addressed
178 Critical Component Five: Capacity
- Sound technical skills in data collection and analysis
- Managerial skills in strategic goal setting and organizational development
- Existing data collection and retrieval systems
- Ongoing availability of financial resources
- Institutional experience
179 Critical Component Six: Incentives
Incentives need to be introduced to encourage use of performance information:
- Success is acknowledged and rewarded
- Problems are addressed
- Messengers are not punished
- Organizational learning is valued
- Budget savings are shared
- Others?
180 Last Reminders!
- The demand for capacity building never ends! The only way an organization can coast is downhill…
- Keep your champions on your side and help them!
- Establish the understanding with the Ministry of Finance and the Parliament that an M&E system needs sustained resources
- Look for every opportunity to link results information to budget and resource allocation decisions
- Begin with pilot efforts to demonstrate effective results-based monitoring: start with an enclave strategy (e.g., islands of innovation) rather than a whole-of-government approach
- Monitor both implementation progress and results achievements
- Complement performance monitoring with evaluations to ensure better understanding of public sector results