Evaluation of Labour Market Policies: The Use of Data-Driven Analyses in Ireland
Elish Kelly, Economic and Social Research Institute
National Development Agency: 4th International Evaluation Conference (26-27 September 2013)

Outline
Overview:
- Why conduct evaluations?
- Barriers to conducting effective evaluations
- Most common forms of labour market evaluations
How to evaluate a programme's effectiveness, and issues that need to be considered during this process
Practical example: an evaluation of Ireland's activation strategy, the National Employment Action Plan (NEAP)
Conclusions: implications for labour market policy in Ireland from the findings of the evaluation

Why is Evaluation Necessary?
1. It assesses the extent to which policy initiatives are achieving their expected targets and objectives. Drawing on this, the evaluator, and consequently policy-makers, can identify the nature of any shortfalls in either programme delivery or the stated objectives.
2. It promotes the effective use of public resources, which is particularly important in the current economic environment.
3. Overall, evaluations help to ensure that policy is evidence-based and that ineffective programmes are modified or closed.

Barriers to Effective Evaluations
Lack of an evaluation culture among policy-makers. Why?
- Policy-makers may view evaluation as a threat and actively seek a less rigorous form of assessment.
- There may be a lack of complex evaluation expertise and of the competencies required to use large administrative datasets.
Lack of independence: the organisation being evaluated has the power to set the terms of reference and is involved in choosing the evaluating body.
Often little consideration is given to programme evaluation at the programme design and implementation stages (and consequently there is no viable control group with which to assess the counterfactual).
Data constraints: a lack of available and "linkable" administrative datasets makes proper evaluation difficult.

Most Common Forms of Labour Market Evaluations
Generally, labour economists tend to focus on impact evaluation, i.e. is the programme achieving its desired impacts (e.g. is a training programme for the unemployed leading to employment)?
Process evaluation, i.e. is the programme being delivered as intended, is less common. However, in practice most impact evaluations will also consider the efficiency of programme delivery and implementation.
Overall, the bulk of impact evaluations focus on labour market programmes that are designed to improve outcomes related to employment, earnings and labour market participation.

How do we Evaluate a Labour Market Programme's Impact?
This is not straightforward, but it is possible if the correct steps are taken at i) the design stage of a programme, ii) its implementation stage and iii) the evaluation stage, where the correct mechanisms (e.g. data and methodologies) must be utilised.
With labour market programme evaluations, we want to know what would have happened to individuals had the programme not been in place (e.g. had the unemployed person not received training), i.e. we attempt to measure the counterfactual.
There are various methods for estimating the counterfactual; however, they all generally rely on measuring the difference in outcomes between people participating in the programme (the treatment group) and those who were eligible to participate but did not (the control group).
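In formal terms (a standard formulation, not taken from the slides), the quantity being estimated is the average treatment effect on the treated (ATT):

```latex
\text{ATT} = \mathbb{E}[Y_1 \mid D = 1] - \mathbb{E}[Y_0 \mid D = 1]
```

Here Y_1 is the outcome (e.g. employment) with the programme, Y_0 the outcome without it, and D = 1 indicates participation. The second term is the counterfactual: it is never observed for participants and must be approximated by the outcomes of a suitable control group.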

How do we Ensure we have a Counterfactual to Evaluate a Programme's Effectiveness?
In other words, how do we ensure that evaluators have a control group? This needs to be considered at the programme design stage and built in during the implementation stage.
One way is to pilot the programme, i.e. roll out the programme to different areas at different times.
- Evaluators then need access to administrative data on the targeted population (e.g. unemployment register data).
- At the same time, records need to be kept of unsuccessful applicants to the programme in instances where the demand for programme places exceeds supply (these individuals provide the counterfactual).

Issues: The Selection Problem
Comparison of a treatment and control group is not straightforward:
- Substantial differences may exist between the two groups that must be factored out, as assignment to either is rarely random.
- Such differences can also arise as a consequence of ineffective control group construction.
Non-random selection refers to the possibility that: i) programme administrators engaged in "picking winners" in order to ensure the programme's success, or ii) more capable individuals were more likely to put themselves forward for the intervention.
Failure to account for the selection problem will result in a biased estimate of the programme's effectiveness, as the decomposition below shows.
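The problem can be stated with a standard decomposition (again, a textbook identity rather than something from the slides): a naive comparison of mean outcomes equals the true effect plus a selection bias term,

```latex
\underbrace{\mathbb{E}[Y \mid D = 1] - \mathbb{E}[Y \mid D = 0]}_{\text{naive comparison}}
= \text{ATT} + \underbrace{\mathbb{E}[Y_0 \mid D = 1] - \mathbb{E}[Y_0 \mid D = 0]}_{\text{selection bias}}
```

If administrators pick winners, participants would have done better than non-participants even without the programme, the bias term is positive, and the programme looks more effective than it really is.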

Other Issues to Consider
Dynamic bias: how do we account for the possibility that control group members will be activated at some point in the future (or are expecting to be activated and behave accordingly)?
Unobserved heterogeneity: are there unobserved differences between the control and treatment groups (such as ability) that have the potential to bias our estimates?
Some of these issues need to be considered at the design stage of a programme and its subsequent roll-out, while others have to be addressed at the evaluation stage.

Activation in Ireland: An Evaluation of the National Employment Action Plan (NEAP)
Commissioned by the Department of Social Protection. Research conducted by the Economic and Social Research Institute (2011).

Overview of Ireland's Activation Strategy
The NEAP is Ireland's principal tool for activating unemployed individuals back into the labour market. The NEAP is currently being revamped, but at the time of the evaluation the activation strategy operated as follows:
1. Individuals registering for unemployment benefit were "automatically" referred by the Department of Social Protection (DSP) to FÁS, formerly Ireland's national employment and training authority, for an activation interview after 3 months on the unemployment benefit system.
2. During the activation interview, clients could be provided with Job Search Assistance (JSA) and/or referred to employment or training opportunities.
Individuals with previous exposure to the NEAP, i.e. those with a previous history of unemployment, are excluded and will not be referred to FÁS a second time.
At the time of the evaluation, the NEAP was quite distinct in an international sense in that it was characterised by an almost complete absence of monitoring and sanctions, and it did not apply the 'mutual obligation' principle.

The NEAP Evaluation Objectives
The study examined the effectiveness of two key components of the NEAP strategy using data for the period 2006 to 2008:
1. The impact of the NEAP referral and interview process (i.e. JSA) on NEAP programme participants' (the treatment group's) likelihood of exiting unemployment to employment, relative to non-NEAP participants (the control group).
2. The extent to which individuals in receipt of both a referral interview and training had enhanced employment prospects relative to those in receipt of an interview only (i.e. the impact of training).
Today's presentation focuses on how we went about evaluating the effectiveness of the referral and interview component of the NEAP (objective 1).

First Issue Encountered: No Control Group?
Selection under the NEAP is automated and universal: if all claimants are automatically sent for interview at 3 months into their claim, how can we construct a counterfactual?
- Remember, the counterfactual assesses what happens to individuals in the absence of the programme.
The only eligible people not exposed to the programme are those already in employment by the 3-month point.
This problem illustrates that evaluation of the NEAP was not considered at the programme's design or implementation stages.

What Did We Do?
Our only option was to exploit the fact that individuals with previous exposure to the NEAP cannot access it again (as an aside, this could be viewed as a counter-intuitive rule, as those most in need of support are excluded from receiving assistance again).
We took an initial control group of individuals whose previous exposure to the NEAP occurred more than two years prior to the study and whose contact was limited to a FÁS interview.
- Given the time lapse, and changing macroeconomic conditions, any advice received by the control group should have declined in relevance, thereby allowing some assessment of the impact of the JSA component of the NEAP.
- However, even if the above held, we were still left with a selection problem: prior to the study, all of the control group had had a previous unemployment spell of at least 13 weeks, whereas none of the treatment group had. This difference cannot be eradicated by matching, and consequently our estimates of the programme's effectiveness were unlikely to be free of bias.

Next Step: Construction of the Evaluation Dataset
[Diagram: data sources feeding the NEAP evaluation dataset — the weekly population of Live Register claimants; the weekly population of Live Register claimant closure files; profiling questionnaire information for the claimant population, issued June to September 2006; the Live Register claimant population (September 2006 to June 2008); and FÁS event histories.]
The dataset was constructed using a combination of i) administrative data from the Live Register, ii) survey data from the DSP's Profiling Database and iii) FÁS's client history administrative data.
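A minimal sketch of the kind of record linkage involved, using pandas. The file names, column names (e.g. the claimant identifier `ppsn`) and date fields are illustrative assumptions, not the actual DSP/FÁS schemas:

```python
import pandas as pd

# Illustrative file and column names -- the real DSP/FAS schemas differ.
claims = pd.read_csv("live_register_claims.csv", parse_dates=["claim_start"])
closures = pd.read_csv("claim_closures.csv", parse_dates=["claim_end"])
profiling = pd.read_csv("profiling_questionnaire.csv")   # survey data, June-Sept 2006
fas_events = pd.read_csv("fas_event_histories.csv")      # referrals, interviews, training

# Link administrative and survey records on the claimant identifier.
df = (claims
      .merge(closures, on=["ppsn", "claim_id"], how="left")   # open + closed spells
      .merge(profiling, on="ppsn", how="inner")               # keep profiled claimants only
      .merge(fas_events, on="ppsn", how="left"))              # NEAP referral/interview history

# Restrict to the evaluation window used in the study (Sept 2006 - June 2008).
df = df[df["claim_start"].between("2006-09-01", "2008-06-30")]
```

The left joins preserve claimants with no closure or FÁS record; it was exactly this kind of non-match that revealed the unreferred group described on the next slide.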

After Dataset Construction, a New Control Group was Found…
On linking the data, we found that approximately 25% of new claimants had not been referred by the DSP to FÁS after 3 months' unemployment duration, despite these individuals having no previous exposure to the NEAP.
Before using this group as a counterfactual, we needed to establish what was going on: i) were we missing something in terms of the referral process? ii) if not, what factors drove the omission of this group of individuals, and were they random?
A list containing the PPS numbers of our potential new control group was sent to the DSP for validation.

Validation Checks
The DSP confirmed that these individuals had fallen through the net. No concrete explanation was found: most likely, these individuals were not referred when the number of referrals in DSP offices exceeded the slots in local FÁS offices, and they were subsequently overlooked when slots became available.
Even before we had begun our evaluation of the NEAP, we had uncovered two major problems with the programme's processes: i) 25% of potential claimants excluded and ii) a further 25% missed.
This is a clear example of how 'process evaluation' can become a component of an 'impact evaluation'.

The Final Treatment and Control Groups

Two control groups for the evaluation, but how random are they, i.e. is there a selection problem? An initial step in addressing this issue is to compare the characteristics of the treatment and control groups: you want their characteristics to be well matched. A minimal balance check is sketched below.
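One common way to run such a balance check (a generic illustration, not the study's actual procedure) is to compute standardized mean differences between the groups; the covariate and column names below are hypothetical, and a common rule of thumb flags |SMD| above roughly 0.1 as imbalance:

```python
import numpy as np
import pandas as pd

def standardized_mean_diff(df: pd.DataFrame, treat_col: str, covariates: list) -> pd.Series:
    """Standardized mean difference per covariate: (mean_T - mean_C) / pooled SD."""
    t = df[df[treat_col] == 1]
    c = df[df[treat_col] == 0]
    smd = {}
    for x in covariates:
        pooled_sd = np.sqrt((t[x].var() + c[x].var()) / 2)
        smd[x] = (t[x].mean() - c[x].mean()) / pooled_sd
    return pd.Series(smd)

# Hypothetical covariates drawn from the profiling data.
balance = standardized_mean_diff(df, "treated", ["age", "education_years", "prior_ue_weeks"])
print(balance[balance.abs() > 0.1])   # covariates flagged as imbalanced
```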

But ultimately, evaluators need to utilise econometric techniques to deal with the selection bias issue. In this evaluation, we employed matching estimators (propensity score matching, PSM); duration models and difference-in-difference estimators are other techniques that can be used. A PSM sketch follows below.
Various sensitivity tests were conducted to address the dynamic bias issue:
- The minimum unemployment duration threshold was changed from 20 weeks to 25 and then 30 weeks;
- The models were also estimated for various exit points: 12, 15 and 18 months.
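A minimal sketch of nearest-neighbour PSM using scikit-learn; this is a generic illustration of the technique, not the study's actual specification, and the covariate and outcome column names are assumptions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

covariates = ["age", "education_years", "prior_ue_weeks"]  # hypothetical
X = df[covariates].to_numpy()
d = df["treated"].to_numpy()        # 1 = received NEAP interview, 0 = control
y = df["employed_12m"].to_numpy()   # hypothetical outcome: employed at 12 months

# Step 1: estimate propensity scores P(D = 1 | X).
pscore = LogisticRegression(max_iter=1000).fit(X, d).predict_proba(X)[:, 1]

# Step 2: match each treated unit to its nearest control on the propensity score.
treated, control = np.where(d == 1)[0], np.where(d == 0)[0]
nn = NearestNeighbors(n_neighbors=1).fit(pscore[control].reshape(-1, 1))
_, idx = nn.kneighbors(pscore[treated].reshape(-1, 1))
matched_control = control[idx.ravel()]

# Step 3: the ATT estimate is the mean outcome gap over matched pairs.
att = (y[treated] - y[matched_control]).mean()
print(f"Estimated ATT: {att:.3f}")
```

Matching on the propensity score balances the observed covariates between the groups, but, as noted below, it cannot remove bias from unobserved differences.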

Given this, what were the findings on the effectiveness of the NEAP?
Comparing the employment prospects of those who received JSA under the NEAP (the treatment group) with those who were not referred (Control Group I), this component of the NEAP was found to have a negative impact: their chances of entering employment were reduced by about 15 per cent.
When the treatment group's employment prospects were compared with those who had participated in a NEAP interview in the past (Control Group II), the current NEAP treatment group did no better than this control group.
Thus, the JSA component of the NEAP was found to be an ineffective route to employment. Why? The results held after various sensitivity checks.

How Reliable are our Results?
We controlled for a wide range of observables, implying that unobserved factors should be less of a concern; the sensitivity tests seemed to confirm this. We also had a highly representative control group.
Still, while our matching estimator framework allows us to test the sensitivity of our estimates to unobserved bias, it does not eradicate it completely. In this regard, we are seeing the increased use of combined PSM and difference-in-difference methods to ensure that evaluation estimates are free from both selection bias (on observables) and unobserved bias (picking winners, etc.), as sketched below.
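A sketch of how the combined estimator works (a standard formulation, not the study's own): after matching each treated unit i to a control m(i), difference-in-differences compares the before/after change in outcomes across matched pairs,

```latex
\hat{\tau}_{\text{PSM-DiD}} = \frac{1}{N_T} \sum_{i \in T}
\Big[ (Y_{i,\text{post}} - Y_{i,\text{pre}}) - (Y_{m(i),\text{post}} - Y_{m(i),\text{pre}}) \Big]
```

Any time-invariant unobserved difference between a treated unit and its match (e.g. ability) cancels in the differencing, which is why the combination is more robust than matching alone.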

Implications of Findings
The findings suggested the need for an overhaul of NEAP eligibility and administration as they existed at the time of the evaluation; the system is currently being revamped.
They also pointed to the need for more intensive job search assistance, which is not feasible at present due to budget, competency and resource constraints within the DSP.
Finally, the findings suggested that Ireland should follow international best practice by developing a fully compulsory activation system with effective monitoring and sanction mechanisms. The principle of mutual obligation with sanctions is now being applied, but again resource constraints are preventing the full implementation of regular and effective monitoring of clients' job-search intensity.

For further information: Report and Papers available at:

Thank you