Impact Evaluation in Education
Introduction to Monitoring and Evaluation
Andrew Jenkins, 23/03/14
"I am cutting rocks" / "I am building a temple" – the essence of theory of change: linking activities to intended outcomes.
Theory of change: "the process through which it is expected that inputs will be converted to expected outputs, outcome and impact" (DfID, Further Business Case Guidance: "Theory of Change")
Theory of change
Start with a RESULTS CHAIN
The results chain: tips

Activities                Outputs                       Outcomes
We produce                We influence                  We contribute to
We control                Clients                       Clients control
We are accountable for    We expect                     Should occur
100% attribution          Some attribution              Partial attribution
Readily changed           Less flexibility to change    Long term
Delivered annually        By end of program             Long-term
Monitoring – activities and outputs
Personal Monitoring Tools
No monitoring – blind and deaf
Monitoring and Evaluation

Monitoring – efficiency: measures how productively inputs (money, time, personnel, equipment) are being used in the creation of outputs (products, results). An efficient organisation is one that achieves its objectives with the least expenditure of resources.

Evaluation – effectiveness: measures the degree to which results/objectives have been achieved. An effective organisation is one that achieves its results and objectives.
MONITORING focuses on the project process (per individual project); EVALUATION focuses on the effectiveness of that process (across many projects).

[Diagram: Inputs (all) → Outputs (most) → Outcomes (some)]
- Inputs: resources – staff, funds, facilities, supplies, training
- Outputs: project deliverables achieved; "count" (quantify) what has been done
- Outcomes: short and intermediate effects; long-term effects and changes
Resist temptation – there must be a better way!
- Clear objectives
- A few key indicators
- Quick, simple methods
- Existing data sources
- Participatory methods
- Short feedback loops
- Act on results!
Monitoring/evaluation objectives must be SMART:
- Specific
- Measurable
- Achievable
- Realistic
- Timed
(see 10 Easy Mistakes, page 5)
Evaluation: who evaluates whom? The value of a joint approach
The Logical Chain
1. Define Objectives (and Methodology)
2. Supply Inputs
3. Achieve Outputs
4. Generate Outcomes
5. Identify and Measure Indicators
6. Evaluate by comparing Objectives with Indicators
7. Redefine Objectives (and Methodology)
Impact Evaluation
An assessment of the causal effect of a project, program or policy on beneficiaries. It uses a counterfactual:

Impact = outcomes − what would have happened anyway
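One common way to write this definition is in potential-outcomes notation (the notation here is mine, not the presentation's):

```latex
% Impact as outcome minus counterfactual, in standard
% potential-outcomes notation (Y_1: outcome with the program,
% Y_0: outcome without it; T = 1 marks beneficiaries)
\mathrm{Impact} = \mathbb{E}[\,Y_1 \mid T = 1\,] - \mathbb{E}[\,Y_0 \mid T = 1\,]
% The second term is never observed for beneficiaries; each
% evaluation design below is a way of estimating it.
```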
When to use Impact Evaluation?
Evaluate impact when:
- the project is innovative
- the project is replicable/scalable
- the project is strategically relevant for reducing poverty
- the evaluation will fill a knowledge gap
- there is potential for substantial policy impact
Use evaluation within a program to test alternatives and improve programs.
Impact Evaluation Answers
- What was the effect of the program on outcomes?
- How much better off are beneficiaries because of the program?
- How would outcomes change under a different program design?
- Is the program cost-effective?
Different Methods to Measure Impact
- Randomised assignment (experimental)
- Non-experimental:
  - Matching
  - Difference-in-difference
  - Regression discontinuity design
  - Instrumental variable / random promotion
Randomization
- The "gold standard" for evaluating the effects of interventions
- Lets us form "treatment" and "control" groups with identical characteristics (on average) that differ only in the intervention
- Counterfactual: the randomized-out group (see the sketch below)
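A minimal sketch of this logic in Python (illustrative only – the data, effect size, and variable names are made up, not from the presentation):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1000

# Random assignment: each unit has a 50% chance of treatment,
# so the two groups have identical characteristics on average.
treated = rng.random(n) < 0.5

# Hypothetical outcomes: noise around a baseline of 10, plus a
# true program effect of 2.0 for treated units.
outcome = rng.normal(10.0, 3.0, n) + 2.0 * treated

# The randomized-out group is the counterfactual, so the simple
# difference in mean outcomes estimates the program's impact.
impact = outcome[treated].mean() - outcome[~treated].mean()
print(f"Estimated impact: {impact:.2f}")  # close to the true 2.0
```

Because assignment is random, any systematic difference between the two groups' mean outcomes can be attributed to the intervention.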
Matching
- Uses large data sets and heavy statistical techniques to construct the best possible artificial comparison group for a given treatment group
- Comparison units are selected on the basis of similarities in observed characteristics
- Assumes no selection bias based on unobservable characteristics
- Counterfactual: the matched comparison group (see the sketch below)
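A toy sketch of nearest-neighbour matching on a single observed covariate (illustrative Python; the data and variable names are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: one observed covariate x (say, baseline income)
# and an outcome y. The true program effect is 2.0.
x_treat = rng.normal(5.0, 1.0, 200)    # 200 treated units
x_pool  = rng.normal(4.5, 1.2, 2000)   # large pool of untreated units
y_treat = 2.0 + 1.5 * x_treat + rng.normal(0.0, 1.0, 200)
y_pool  = 1.5 * x_pool + rng.normal(0.0, 1.0, 2000)

# For each treated unit, pick the untreated unit with the closest
# covariate value: these matches form the artificial comparison group.
match_idx = np.abs(x_pool[None, :] - x_treat[:, None]).argmin(axis=1)

# Counterfactual outcome = outcome of the matched comparison unit.
impact = (y_treat - y_pool[match_idx]).mean()
print(f"Matched estimate of impact: {impact:.2f}")  # close to the true 2.0
```

Real applications match on many covariates, often via a propensity score, and the estimate is only as good as the assumption that nothing unobserved drives selection.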
Difference-in-difference
- Compares the change in outcomes over time between the treatment group and the comparison group
- Controls for factors that are constant over time in both groups
- Assumes "parallel trends" in the two groups in the absence of the program
- Counterfactual: the change over time among non-participants (see the worked numbers below)
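The arithmetic is simple enough to show directly (illustrative numbers, not from the presentation):

```python
# Four group means (made-up numbers): outcomes before and after the
# program for the treatment and comparison groups.
treat_before, treat_after = 10.0, 16.0
comp_before,  comp_after  = 11.0, 14.0

change_treat = treat_after - treat_before  # 6.0 = time trend + program effect
change_comp  = comp_after - comp_before    # 3.0 = time trend alone

# Under "parallel trends", the comparison group's change is the
# counterfactual change for the treatment group.
impact = change_treat - change_comp
print(f"Diff-in-diff estimate: {impact:.1f}")  # 3.0
```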
Uses of Different Designs

Design                     When to use
Randomization              Whenever possible; when an intervention will not be universally implemented
Random promotion           When an intervention is universally implemented
Regression discontinuity   If an intervention is assigned based on rank
Diff-in-diff               If two groups are growing at similar rates
Matching                   When other methods are not possible; matching at baseline can be very useful
Qualitative and Quantitative Methods
Qualitative methods focus on how results were achieved (or not). They can be very helpful for process evaluation. It is often very useful to conduct a quick qualitative study before planning an experimental (RCT) study.
Thank you!