The Role of Pilots Alex Bryson Policy Studies Institute ESRC Methods Festival, St Catherine's College, Oxford University, 1st July 2004
Assumptions underpinning the main objective of piloting Main objective: to test the impacts of policies before national roll-out. Two key assumptions: –we can identify an unbiased estimate of the impact –this impact, identified for a pilot, will tell us something useful about the policy if applied more generally.
Potential for misleading results because… 1. Wrong methodology for the policy issue at hand, e.g. estimating the effect of treatment on the treated (TT) instead of the average treatment effect (ATE) when looking to extend a programme nationally. 2. Correct methodology but implemented poorly – either on the ground (e.g. securing random assignment) or poor data (e.g. early on in EMA). 3. Impacts shift in the longer run. Few studies address long-term impacts, but it is clear that they are often very different from shorter-term impacts – often reversing earlier results. E.g. GAIN – 'work first' effects fading relative to human capital investment, as you might expect. 4. When general equilibrium effects are a big deal – that is, when the programme has big effects on non-participants, e.g. where helping programme participants proves to be at the expense of other disadvantaged groups beyond the programme. Depends on the size of the programme, how effective it is in benefiting participants, and the extent to which participants are capable of taking the jobs of non-participants. 5. Blind empiricism: trusting the numbers too much. We need THEORY as well as numbers.
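The TT-versus-ATE point in step 1 can be made concrete with a minimal simulation. This is a hypothetical sketch, not an analysis from the talk: it assumes people whose individual gain from the programme is larger are more likely to enter the pilot, so the pilot's TT estimate overstates the ATE that a national roll-out would deliver.

```python
import numpy as np

rng = np.random.default_rng(0)  # hypothetical illustration, not real pilot data

n = 100_000
# Each individual's gain from treatment varies across the population.
gain = rng.normal(loc=1.0, scale=2.0, size=n)

# Assumed self-selection: the probability of joining the pilot
# rises with the individual's own gain from the programme.
p_participate = 1 / (1 + np.exp(-gain))
treated = rng.random(n) < p_participate

ate = gain.mean()          # average effect if rolled out to everyone
tt = gain[treated].mean()  # average effect on pilot participants (TT)

print(f"ATE = {ate:.2f}, TT = {tt:.2f}")
```

Under these assumptions TT comes out well above the ATE, which is exactly the misleading result: the right parameter for a national extension is the ATE, but the pilot identifies TT.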
The big issue… Pilots can never 'prove' that policies work (page 3 of report), because pilots can NEVER give a once-and-for-all answer. Effects differ: –with programme ageing –with the size/composition of entry cohorts –with changes in the external environment, e.g. the business cycle –with interactions with other policies Therefore, always ask the big questions: –A priori, why do we think policies are going to have a particular impact? –What happens when similar policies, evaluated at different points in time or across regions/countries, produce different results? –How can we learn from these differences? How can we understand what generates them?
Practical steps... 1. Increasing evaluator knowledge of evaluation processes and mechanisms – particularly important in understanding which treatment parameter is appropriate for the policy at hand. 2. Getting government and evaluators to understand what data and practical measures are needed in advance to secure the appropriate evaluation methodology. 3. The importance of replication studies: –same data, same methods –same data, alternative methods –extensions to the data, same methods –extensions to the data, alternative methods 4. More meta-analyses. 5. Going beyond the pilot: post-roll-out, how have things changed and what are the implications? E.g. the Low Pay Commission's remit for the SNMW. Reason: long-term commitment to a policy with potentially far-reaching consequences for the economy.
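The call for meta-analyses in step 4 can be sketched with a minimal fixed-effect (inverse-variance) pooling of impact estimates. The four estimates and standard errors below are hypothetical, standing in for replication studies of the same programme:

```python
import numpy as np

# Hypothetical impact estimates (e.g. percentage-point employment effects)
# and their standard errors from four replication studies; not real data.
estimates = np.array([2.0, 3.5, 1.0, 2.8])
std_errors = np.array([0.8, 1.2, 0.5, 1.0])

# Fixed-effect meta-analysis: weight each study by the inverse of its variance,
# so more precise studies count for more in the pooled estimate.
weights = 1 / std_errors**2
pooled = np.sum(weights * estimates) / np.sum(weights)
pooled_se = np.sqrt(1 / np.sum(weights))

print(f"pooled impact = {pooled:.2f} (SE {pooled_se:.2f})")
# → pooled impact = 1.70 (SE 0.37)
```

A fixed-effect model assumes one common true impact across studies; the talk's point that effects shift with cohorts, context and time is precisely why a random-effects variant is often the more defensible choice in practice.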
A Footnote: What Does 'It Worked' Mean? 1. 'What works' economically competes with 'what works' politically – clearly, signalling to the electorate (selling welfare) is key. 2. Economic outcomes (poverty reduction, increasing employment and the quality of employment) are largely uncontested. But these goods – which are both private and public goods – come at a cost. 3. The issue here is: benefits to whom, and at what cost (a) to the Exchequer, (b) to others (substitution etc.), and (c) what are the opportunity costs of not spending the money on other potential interventions? 4. Finally, it is inherently more difficult to get at distributional outcomes than at mean outcomes (a point worth mentioning for a government interested in the distribution of outcomes).