Counterfactual impact evaluation: what it can (and cannot) do for cohesion policy Alberto Martini Progetto Valutazione Torino, Italy firstname.lastname@example.org
ALL I REALLY NEED TO KNOW I LEARNED IN KINDERGARTEN by Robert Fulghum Share. Play fair. Don't hit people. Clean up your own mess. Wash your hands before you eat. Flush.
ALL THAT REALLY MATTERS IN IMPACT EVALUATION COMES FROM COMMON SENSE It's nice to have an impact. Not all we obtain is due to our actions. Some things happen without our help. To improve things we must understand them. We must separate what we caused from what would have happened anyway. Flush.
Do we need counterfactuals? The answer is simple: it depends on what we need (can, want) to know and for which purpose. I'll follow the COSCE approach (Common Sensical Counterfactual Evaluation) [COSCE = Conference On Security and Cooperation in Europe] What would have happened anyway = counterfactual
COSCE rule n. 1 If your purpose is to be accountable, don't worry too much about counterfactuals. Your main worry is to show that the money was spent. Maybe you want to show how well it was spent. Maybe you want to show for whom it was spent. You might go further by showing your contribution to objectives, e.g. to the Lisbon strategy. To impress DG-Regio, use a macro-model.
COSCE rule n. 2 If your purpose is to improve policy, macro models will not do. If your purpose is to improve policy, probably indicators will not do. If your purpose is to improve policy, you need to learn: what works and, if it does, why it works; what does not work and, if it doesn't, why it doesn't work.
COSCE rule n. 3 Learning what works logically precedes learning why it works. Otherwise we do not know what to explain. Learning why it works (or doesn't) is more important, more interesting, and more difficult than learning what works. This is why it should be done later.
COSCE rule n. 4 Counterfactual Impact Evaluation tries to learn something about what works on average (not very interesting) and for whom it works (data permitting). It produces numbers. It requires good data and large samples. It imposes non-testable assumptions. Its results are NOT the truth, are NOT universal laws, are NOT scientific. It is (should be) a fallible, improvable, intellectually honest human enterprise.
COSCE rule n. 5 Theory-based Impact Evaluation tries to learn something about why it works, identifying the mechanisms that make a policy produce its effects (or fail to do so). It produces narratives and insights. It collects its data through qualitative methods and doesn't need large samples. It develops a theory of change and then observes policies as they are implemented, to learn which elements of the theory are verified.
COSCE rule n. 6 To learn something about what works one needs to clarify: effects (impacts) on what? Which outcomes Y. Effects (impacts) of what? Which treatment T. COSCE curse n. 1 Effects and impacts are the same thing, the best example of a distinction without a difference.
COSCE rule n. 7 The heart of CIE is to answer the question: what is the direction, size and significance of the effect of treatment T on outcome Y? AN EXAMPLE A program providing subsidies to increase R&D expenditures among small and medium enterprises (subsidizing SMEs to do more R&D).
A MULTIPLE CHOICE TEST What is the effect of the subsidies?
the number of R&D projects funded and completed
the take-up rate of the subsidy among eligible SMEs
the increase in R&D expenditures among subsidized SMEs
the difference in R&D expenditure between subsidized and non-subsidized SMEs
none of the above
the number of R&D projects funded and completed / the take-up rate of the subsidy among eligible SMEs: the number can be very high, the take-up rate can be 100%, and the effect can still be zero. COSCE curse n. 2 The number of R&D projects is not a gross impact. It is not an impact at all. It's a measure of activity. There is no such thing as a gross impact.
the increase in R&D expenditures among subsidized SMEs: the increase is not an effect; the subsidies might have gone to firms with already-growing R&D expenditures. COSCE curse n. 3 The deadweight (DW) is nothing else than the counterfactual. The only special thing about it is that it is used when money is clearly wasted. Demonstrable Waste (DW) is a better name for it.
the difference in R&D expenditure between subsidized and non-subsidized SMEs: the post-treatment difference in outcomes does not identify any effect; the difference might be entirely due to initial differences (selection bias). COSCE curse n. 4 The Commission is stuck on the decomposition gross impact = net effect + deadweight. The world literature focuses on the decomposition observed difference = effect + selection bias.
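The decomposition observed difference = effect + selection bias can be made concrete with a small simulation (a hypothetical sketch in Python with NumPy, not from the slides: the sample size, effect size and variable names are invented). Firms with higher baseline R&D self-select into the subsidy, so the naive treated-vs-untreated comparison overstates the true effect:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Baseline R&D propensity differs across firms (selection happens on this).
baseline = rng.normal(100, 20, n)

# Firms with high baseline R&D are more likely to take up the subsidy.
treated = baseline + rng.normal(0, 10, n) > 110

true_effect = 5.0  # assumed effect: the subsidy raises R&D spending by 5 units

outcome = baseline + true_effect * treated + rng.normal(0, 5, n)

# Naive post-treatment comparison and the bias hiding inside it.
naive_diff = outcome[treated].mean() - outcome[~treated].mean()
selection_bias = baseline[treated].mean() - baseline[~treated].mean()

# observed difference ≈ effect + selection bias
print(f"naive difference: {naive_diff:.1f}")
print(f"true effect:      {true_effect:.1f}")
print(f"selection bias:   {selection_bias:.1f}")
```

In this sketch the naive difference is several times the true effect, because most of it is the pre-existing difference between firms that did and did not take the subsidy.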
The world-wide social science literature has made substantial advances to reduce, prevent or eliminate selection bias. It estimates effects by comparing treated and non-treated units, exploiting random assignment when feasible and a variety of (ever-developing) non-experimental methods: matching, double difference (difference-in-differences), regression discontinuity, and instrumental variables.
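Of the methods just listed, the double difference is the easiest to sketch. The example below (a hypothetical Python/NumPy sketch; data and names are invented, continuing the R&D-subsidy example) assumes selection depends on a fixed firm characteristic, so subtracting each group's pre-treatment level cancels the bias that contaminates the naive comparison:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

baseline = rng.normal(100, 20, n)                 # fixed firm characteristic
treated = baseline + rng.normal(0, 10, n) > 110   # self-selection on baseline
true_effect = 5.0                                 # assumed effect of the subsidy

y_pre = baseline + rng.normal(0, 5, n)                           # R&D before
y_post = baseline + true_effect * treated + rng.normal(0, 5, n)  # R&D after

# Naive post-treatment comparison: contaminated by selection bias.
naive = y_post[treated].mean() - y_post[~treated].mean()

# Double difference: change among treated minus change among non-treated.
did = (y_post[treated] - y_pre[treated]).mean() \
    - (y_post[~treated] - y_pre[~treated]).mean()

print(f"naive post-treatment difference: {naive:.1f}")
print(f"double difference estimate:      {did:.1f}")
```

The double difference recovers the effect only under a "parallel trends" assumption: without the subsidy, treated and non-treated firms would have changed by the same amount. That assumption is exactly the kind of non-testable assumption COSCE rule n. 4 warns about.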
What does COSCE have to say about the limitations of counterfactual impact evaluation? In some quarters, CIE is seen as a universal approach, able to solve all inferential problems through the use of ever more sophisticated methods. COSCE disagrees and views CIE as an important contribution, with important limitations in its applicability to Structural Funds, both in terms of relevance and compatibility.
Relevance and compatibility of CIE for different types of cohesion policies [slide shows a table scoring six policy types — support for R&D projects, transport infrastructure, human capital investment, urban renewal, renewable energy, investment support — against four criteria: behavioral (vs. redistributive) motive, replicable (vs. idiosyncratic) nature, homogeneous (vs. composite) treatment, and large numbers of eligible units; overall ratings range from HIGH through MIXED to LOW]
What timing for counterfactual impact evaluation? When it is prospective, i.e. designed together with the intervention, impact evaluation can have a strong disciplinary effect. First, it can help focus the attention of both policy-makers and beneficiaries on objectives. Secondly, it creates an incentive to assemble the information necessary to assess results. Thirdly, it brings to light the criteria by which beneficiaries are selected. BARCA DIXIT
Above timing, above relevance, above compatibility, the most important determinant of the diffusion of counterfactual impact evaluation is the interest and willingness, on the part of some influential stakeholder, to truly learn about what works and why.