Learning vs. accountability: what is (are) the purpose(s) of evaluation?
Alberto Martini
Learning how to spend effectively is a different task from being accountable for the money spent.
Being accountable for how the money was spent:
- looks at the past
- is rarely used for future decisions
- does not cumulate over time
- lots of data, little to say
- makes people feel good
Learning how to spend effectively:
- is much more difficult
- looks at the past but is used for the future
- does cumulate over time
- makes people feel bad most of the time, especially politicians
So: learn what works, for whom, and why. The European Commission does not yet emphasize this very much, but impact evaluation receives more attention now than just three years ago. Impact evaluation certainly involves fundamentally different cognitive tasks than monitoring for accountability.
An example of a widely used policy: giving grants to private enterprises to invest or to innovate. Is this an effective use of the money?
Is it enough to compare firms that get the subsidy with those that do not even apply for it? No.
Is it enough to compare firms before they get the subsidy with where they are two years later? No.
[Figure: observed pre-post change in average R&D expenditures among the firms receiving grants. Is this the true average impact of the grant?]
Things change over time by "natural dynamics". How do we disentangle the change due to the policy from the myriad changes that would have occurred anyway?
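The pre-post problem can be sketched numerically. All numbers below are invented for illustration; in practice the "natural dynamics" part is exactly what we cannot observe:

```python
# Pre-post comparison for firms receiving grants (illustrative numbers).
pre_spending = 100.0    # average R&D spending before the grant
post_spending = 130.0   # average R&D spending two years later

observed_change = post_spending - pre_spending  # 30.0

# The observed change mixes the policy's effect with "natural dynamics":
# the change that would have occurred anyway. That part is unobserved;
# here we plug in a hypothetical value just to show the decomposition.
natural_dynamics = 20.0
true_impact = observed_change - natural_dynamics  # 10.0
print(observed_change, true_impact)
```

The point of the sketch: without some way to estimate the counterfactual change, the observed change alone tells us nothing about the impact.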
[Figure: average outcomes for treated and non-treated firms at two points in time. Is the treated vs. non-treated difference the true impact of the policy?]
WITH-WITHOUT
We cannot use experiments with firms, for political and practical reasons. There are many non-experimental counterfactual methods.
THE PROBLEM HERE IS THAT treated and non-treated firms are different after the policy, but they were already different before.
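The with-without problem can be made concrete with hypothetical group means (all numbers invented):

```python
# With-without comparison (illustrative numbers).
treated_post = 130.0   # average outcome of subsidized firms, after
control_post = 95.0    # average outcome of non-applicant firms, after
treated_pre = 100.0    # same groups, before the policy
control_pre = 80.0

naive_difference = treated_post - control_post  # 35.0
pre_existing_gap = treated_pre - control_pre    # 20.0

# Part of the "after" difference merely reflects a gap that was
# already there before the policy: the naive comparison is biased.
print(naive_difference, pre_existing_gap)
```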
[Figure: post difference and pre difference between treated and non-treated firms]
Impact = POST DIFFERENCE - PRE DIFFERENCE
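A minimal sketch of this difference-in-differences calculation, using invented group means:

```python
# Difference-in-differences from four observed group means
# (illustrative numbers only).
treated_pre, treated_post = 100.0, 130.0
control_pre, control_post = 80.0, 95.0

post_difference = treated_post - control_post  # 35.0
pre_difference = treated_pre - control_pre     # 20.0

# Under the parallelism (parallel-trends) assumption, the impact is
# the post difference net of the pre-existing difference:
impact = post_difference - pre_difference      # 15.0
print(impact)
```

Subtracting the pre difference removes the part of the post difference that merely reflects how the two groups already differed before the policy.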
HOW DO WE KNOW THE PARALLELISM ASSUMPTION IS TRUE? With only four observed means, we cannot. Parallelism becomes testable if we have two additional pre-intervention data points (PRE-PRE).
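With the extra PRE-PRE observations, one can check whether the two groups were already moving in parallel before the policy; a sketch with invented numbers:

```python
# Plausibility check for parallel trends using an extra
# pre-intervention observation (illustrative numbers).
treated_pre_pre, treated_pre = 90.0, 100.0
control_pre_pre, control_pre = 71.0, 80.0

treated_trend = treated_pre - treated_pre_pre  # 10.0
control_trend = control_pre - control_pre_pre  # 9.0

# A small gap between pre-period trends makes parallelism more
# credible; it does not prove the assumption holds after the policy.
trend_gap = treated_trend - control_trend      # 1.0
print(trend_gap)
```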
An Italian case: Law 488, grants to support investment.
- 3 billion euro, 6,000 firms, located in distressed areas
- 500,000 euro average grant value
- Which employment effect? About 2 jobs created per firm
- AVERAGE COST PER JOB CREATED: 250,000 EURO
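The cost-per-job arithmetic on the slide can be reproduced directly from its own figures:

```python
# Law 488 cost-per-job arithmetic (figures from the slide).
total_grants_eur = 3_000_000_000  # 3 billion euro
n_firms = 6_000
jobs_created_per_firm = 2         # estimated average impact

avg_grant = total_grants_eur / n_firms            # 500,000 euro per firm
cost_per_job = avg_grant / jobs_created_per_firm  # 250,000 euro
print(avg_grant, cost_per_job)
```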
DOES IT VARY BY SIZE OF THE GRANT?
What have we learned?
- It takes time and effort to learn; the effort pays off in the long run.
- Overall average impacts are not terribly informative.
- If impacts vary across beneficiaries, subgroup analysis can provide useful information for targeting.
- IN EVERY CASE, RATIONING IS IMPORTANT TO HAVE.