1 First steps in practice Daniel Mouqué Evaluation Unit DG REGIO

2 The story so far…
Indicators useful for management, accountability, but do not give impacts
For impacts, need to estimate a counterfactual

3 Notice that « classic » methods often imply counterfactuals
Indicators – before vs after
Indicators – with « treatment » vs without
Qualitative methods – expert opinion
Beneficiary surveys – beneficiary opinion
Macromodels – model includes a baseline
But all of these have strong assumptions, often implicit
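To make the « before vs after » case concrete, here is a tiny simulation (not from the presentation; all numbers are invented): if employment would have recovered anyway, a pre-post comparison mixes that background trend into the estimated impact.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 5_000
true_effect = 0.10        # assumed effect of the programme on employment probability
background_trend = 0.05   # assumed recovery that would have happened anyway
baseline = 0.30           # assumed employment rate before the programme

employed_before = rng.random(n) < baseline
employed_after = rng.random(n) < baseline + background_trend + true_effect

# The naive before/after comparison attributes the whole change to the programme
naive_estimate = employed_after.mean() - employed_before.mean()
print(f"Before/after estimate: {naive_estimate:.3f}")   # ~0.15, overstates
print(f"True programme effect: {true_effect:.3f}")      # 0.10
```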

4 How to weaken the assumptions…
… and improve the estimation of impacts
Comparison of similar assisted and non-assisted units (finding « twins »)
There are various ways to do this – let's start with a simple example

5 Training for the long-term unemployed
Innovative training for those who have been out of work for >12 months
« Classic » evaluation: for those trained, a pre-post comparison of employment status and income
What's wrong with this?
So we combine it with a beneficiary survey
Is this much better?

6 A simple counterfactual (random assignment)
10,000 candidates for the training: randomly assign 5,000 to the training and 5,000 to traditional support
Compare employment status and earnings one year after training
What's useful about this? Can you see any potential problems?
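A minimal sketch of this random-assignment comparison (outcome numbers and variable names are assumptions, purely for illustration): because assignment is random, a simple difference in mean outcomes between the two groups estimates the impact.

```python
import numpy as np

rng = np.random.default_rng(1)

n = 10_000
# Randomly assign 5,000 candidates to the new training, 5,000 to traditional support
new_training = rng.permutation(n) < n // 2

# Simulated outcomes one year after training (assumed numbers)
employed = rng.random(n) < np.where(new_training, 0.45, 0.35)
earnings = rng.normal(np.where(new_training, 14_000, 12_500), 3_000)

# With random assignment the two groups are comparable, so the differences
# in mean outcomes estimate the impact of the new training
print("Employment impact:", employed[new_training].mean() - employed[~new_training].mean())
print("Earnings impact:  ", earnings[new_training].mean() - earnings[~new_training].mean())
```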

7 Let's try again (« discontinuity design »)
Offer the training to all
For evaluation, compare a subset of these with a similar, but non-eligible group:
– Unemployed for just over 12 months (eligible)
– Unemployed for 9-12 months (not eligible)
What's better about this than the previous evaluation example? What's worse?
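A sketch of the discontinuity idea under assumed numbers (the 12-month cutoff comes from the example above; the effect size, comparison bands and duration trend are invented): compare people just over the eligibility threshold with people just under it.

```python
import numpy as np

rng = np.random.default_rng(2)

n = 20_000
months_unemployed = rng.uniform(3, 24, n)
eligible = months_unemployed > 12   # programme rule: out of work for >12 months

# Simulated employment one year later: chances decline with unemployment duration,
# plus an assumed +0.08 effect of the training for eligible people
p_employed = 0.60 - 0.015 * months_unemployed + 0.08 * eligible
employed = rng.random(n) < p_employed

# Compare narrow bands either side of the cutoff, where people should be very similar
just_above = (months_unemployed > 12) & (months_unemployed <= 15)   # eligible
just_below = (months_unemployed > 9) & (months_unemployed <= 12)    # not eligible
estimate = employed[just_above].mean() - employed[just_below].mean()
print(f"Discontinuity estimate: {estimate:.3f}")
# Slightly below the true 0.08, because the two bands still differ a little in
# unemployment duration; this is one of the weaknesses the slide asks about
```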

8 Third time lucky (« pipeline »)
This time we stagger the training over 2 years
5,000 are randomly chosen to take the training this year, 5,000 next year
Next year's treatment group is this year's control group
What's good about this? What limitations can you see?
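A sketch of the pipeline design (cohort sizes from the slide; outcome numbers are assumptions): everyone is eventually trained, but the randomly chosen second-year cohort serves as this year's control group.

```python
import numpy as np

rng = np.random.default_rng(3)

n = 10_000
# Randomly stagger the training: 5,000 this year, 5,000 next year
cohort_year1 = rng.permutation(n) < n // 2

# Simulated employment measured at the end of year 1 (assumed numbers):
# only the year-1 cohort has been trained so far
employed_end_year1 = rng.random(n) < np.where(cohort_year1, 0.45, 0.35)

impact = (employed_end_year1[cohort_year1].mean()
          - employed_end_year1[~cohort_year1].mean())
print(f"Pipeline estimate of the year-1 impact: {impact:.3f}")
# Limitation: once the second cohort is trained there is no untreated group left,
# so longer-term impacts cannot be estimated this way
```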

9 Some observations
Notice: this is not just one method, but a family of methods
Two families in fact – we'll come back to this
Different possibilities have different strengths & weaknesses, and therefore different applications
They vary from simple to very complicated
We'll look at common features and requirements now (with Kai)

10 What do we need? Kai Stryczynski Evaluation Unit DG REGIO

11 The methods require
Large « n », i.e. a large number of similar units (to avoid random differences)
Good data for treated and non-treated units:
– Basic data (who are the beneficiaries?)
– Target variables (what is the policy trying to change?)
– Descriptive variables (e.g. to help us find matches)
– Ability to match the various datasets (a sketch of this follows)
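Linking the datasets is often the practical bottleneck. A hypothetical pandas sketch (firm identifiers, fields and values are all invented) of joining a beneficiary list to an administrative register so that non-assisted firms become the pool of potential « twins »:

```python
import pandas as pd

# Hypothetical beneficiary list: who received support, and how much
beneficiaries = pd.DataFrame({
    "firm_id": [101, 102, 103],
    "grant_keur": [250, 80, 400],
})

# Hypothetical administrative register with descriptive and target variables
register = pd.DataFrame({
    "firm_id": [101, 102, 103, 104, 105, 106],
    "sector": ["mfg", "mfg", "ict", "mfg", "ict", "mfg"],
    "employment": [40, 12, 55, 38, 60, 11],
    "investment_keur": [900, 150, 1200, 700, 1100, 90],
})

# Link on the common identifier; firms without a grant form the comparison pool
data = register.merge(beneficiaries, on="firm_id", how="left")
data["treated"] = data["grant_keur"].notna()
print(data)
```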

12 Sectoral applicability
Good candidates (large « n ») include:
– Enterprise support (including R&D)
– Labour market and training measures
– Other support to individuals (e.g. social measures)
But… only where good data exist
Bad candidates (small « n ») include:
– Large infrastructure (transport, waste water, etc.)
– Networks (e.g. regional innovation systems)

13 Rule of thumb
< 50% of cases are applicable, of which < 50% have enough data
And even then, be selective. It's a powerful learning tool, but can be hard work and expensive.

14 A pragmatic strategy
A two-pronged approach (monitor all, evaluate a selection)
CIE where we can, classic methods where we can't (a survey is better than nothing)
Mix methods (triangulate; use qualitative work to explain CIE results)
Be honest and humble about what we know… and don't know
Use working hypotheses, build up the picture over time

15 Let's get started Daniel Mouqué Evaluation Unit DG REGIO

16 The options
There are many options…
… but two broad families of counterfactual impact evaluation

17
Experimental methods | Quasi-experimental methods
E.g. random assignment, pipeline | E.g. Alberto's difference-in-difference, my discontinuity design
From the outset, some form of random assignment – evaluation drives the selection process | Selection process as normal – does not interfere with the policy process
Must be installed from the outset of the measure | Can be conducted ex post (although earlier is better, for data collection)
Weakest assumptions, best estimate of impact | Relatively weak assumptions, can usually be considered a good estimate of impact

18 A « rule of thumb »
Randomised/experimental methods are most likely to be useful for:
– Pilot projects
– Different treatment options (especially genuine policy choices, such as grants vs loans)
Quasi-experimental methods – more generally applicable
However, randomised methods are simpler, so a good introduction (quasi-experimental methods in depth tomorrow)

19 New friends, part 1: experimental (« randomised ») methods
Some experimental/randomised options for your exercises in the group work:
– Random assignment
– Pipeline (delaying treatment for some)
– Random encouragement
Tip: the most costly (they interfere with the selection process), but the most reliable

20 New friends, part 2: quasi-experimental methods
You don't need to know all of these yet (tomorrow will treat them in depth)
Intuition: treat as usual, then compare treated units with similar, but not quite comparable, non-treated units
– Difference-in-difference
– Discontinuity design (comparing « just qualified for treatment » with « just missed it »)
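Tomorrow treats these in depth; as a preview, a minimal difference-in-difference sketch (all numbers are assumptions): compare the change over time in treated units with the change in similar non-treated units, which nets out a trend common to both groups.

```python
import numpy as np

rng = np.random.default_rng(4)

n = 2_000
treated = rng.random(n) < 0.5

# Assumed data-generating process: a common trend of +5 jobs per firm,
# plus a true effect of +3 jobs for treated firms
jobs_before = rng.normal(50, 10, n)
jobs_after = jobs_before + 5 + 3 * treated + rng.normal(0, 2, n)

change_treated = (jobs_after - jobs_before)[treated].mean()
change_control = (jobs_after - jobs_before)[~treated].mean()
print(f"Difference-in-difference estimate: {change_treated - change_control:.2f}")  # ~3.0
```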

21 In your group work, we want you to start thinking:
– What are the policy/impact questions in my field(s)?
– Can I randomise from the beginning, to get an insight into these results?
– Random or not (and often the answer will be not!), can I get outcome data for similar non-treated units?

22 To clarify, a real example (from enterprise support)

23 The set-up
Eastern Germany: investment and R&D grants to firms
Did they really increase investment and employment?
Could not randomise (too late, too political)
Clever matching procedures (we'll tell you more later in the course) to compare similar assisted/non-assisted firms
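A rough illustration of the matching idea, not the study's actual procedure (which was more sophisticated); here each assisted firm is paired with its nearest non-assisted « twin » on a couple of invented pre-support characteristics.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical pre-support characteristics (employment, past investment in k)
n_treated, n_pool = 300, 3_000
treated_X = np.column_stack([rng.normal(30, 10, n_treated),
                             rng.normal(500, 200, n_treated)])
pool_X = np.column_stack([rng.normal(25, 12, n_pool),
                          rng.normal(450, 220, n_pool)])

# Standardise so both characteristics count equally, then find each nearest twin
scale = pool_X.std(axis=0)
dists = np.linalg.norm((treated_X[:, None, :] - pool_X[None, :, :]) / scale, axis=2)
twin = dists.argmin(axis=1)

# Impacts would then be estimated by comparing post-support outcomes of each
# assisted firm with those of its matched twin (outcomes omitted here)
print("First five matched pairs (assisted firm -> comparison firm):",
      list(zip(range(5), twin[:5])))
```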

24 The results
Investment grants of 8k/employee led to estimated extra investment of 11-12k
The same grants led to an estimated 25,000-30,000 extra jobs
R&D grants of 8k/employee led to 8k of extra investment

25 What does this tell us?
It gives comfort to the views that:
– Enterprise and R&D grants work in lagging regions (at the very least, they generate private investment)
– Grants have a bigger effect on productivity than on jobs
– Gross jobs – especially jobs safeguarded – overstate the case

26 What does this not tell us?
We still do not know for certain:
– If the same pattern would hold outside Eastern Germany (specific situation, specific selection process)
– If the investment will translate into long-term growth and R&D (but we have weakened the assumption)
– If other instruments are better than grants
– Whether there is crowding out in other enterprises
– How to cure cancer (astonishingly, one study did not crack all the secrets of the universe)
But we know more than before, and this is not the last evaluation we will ever do

27 Potential benefits – motivation for the coming days
– Learning what works, and by how much (typically)
– Learning which instrument is appropriate in a given situation (e.g. grants or advice to enterprises)
– Learning on whom to target assistance (« stratification », e.g. targeting a training measure on the group most likely to benefit)
– Building up a picture over time