The Evaluation Problem
Alexander Spermann, University of Freiburg, 2007/2008
1. The Fundamental Evaluation Problem and its Solution


Outline:
1. Evaluation Problem
2. Treatment Effects
3. Selection Bias
4. Solution Approaches

Reference: Hagen, Tobias and Bernd Fitzenberger (2004), Mikroökonometrische Methoden zur Ex-post-Evaluation, in: Hagen, Tobias and Alexander Spermann (eds.), Hartz-Gesetze – Methodische Ansätze zu einer Evaluierung, ZEW-Wirtschaftsanalysen, 74, pp. 45-72.

Example: evaluation of programs of active labour market policy (e.g. job creation measures) → investigation of a program's effect on a certain outcome variable (e.g. the employment probability).

Measuring the program's success by the share of individuals entering employment during a certain period of time is not sufficient. Problem: the causal effect of the program is not measured – employment take-up could also have happened without program participation.

Causal effect: the employment probability of participants versus the employment probability of the same participants had they not participated. Problem: the "counterfactual situation" – participants cannot simultaneously be non-participants!

Solution: estimate the hypothetical employment probability of participants in case of non-participation by using the employment probability of non-participants. A "comparison or control group" is used to be able to estimate the success of participation.

If participants and the control group differ with respect to observable or unobservable characteristics that influence the outcome variable → selection bias.

Question: What is the effect of a program on the outcome variable y?
y_1: outcome variable in case of participation
y_0: outcome variable in case of non-participation
C: dummy variable, equal to 1 in case of participation

The actually observed outcome variable for an individual i results from:

y_i = C_i · y_1i + (1 − C_i) · y_0i

The program effect (individual causal effect) is:

Δ_i = y_1i − y_0i

Problem: an individual causal effect cannot be calculated, because no individual can be in two different states of participation at the same point in time.

However, it is possible to estimate:
The mean effect of participation on the group of participants ("Average Effect of Treatment on the Treated", ATT): ATT = E[y_1 − y_0 | C = 1]
The mean effect of participation expected for an individual drawn randomly from the population of participants and non-participants ("Average Treatment Effect", ATE): ATE = E[y_1 − y_0]
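
The ATT/ATE distinction can be made concrete with a small simulation. The sketch below (illustrative numbers, not from the lecture) constructs both potential outcomes for an artificial population, lets participation be more likely for people with a large individual effect, and computes both parameters; because of this self-selection, ATT exceeds ATE:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical potential outcomes: y0 without treatment, y1 with treatment.
# The individual effect y1 - y0 varies across people (heterogeneous effect).
y0 = rng.normal(0.0, 1.0, n)
effect = rng.normal(2.0, 0.5, n)
y1 = y0 + effect

# Participation dummy C: people with an above-average effect are more
# likely to participate, so ATT and ATE differ by construction.
c = (rng.random(n) < 0.3 + 0.2 * (effect > 2.0)).astype(int)

ate = (y1 - y0).mean()            # E[y1 - y0]
att = (y1 - y0)[c == 1].mean()    # E[y1 - y0 | C = 1]

print(f"ATE = {ate:.3f}, ATT = {att:.3f}")
```

In real data only one of y_0 and y_1 is ever observed per person; the simulation can compute both only because it generates them itself.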

E[y_1] is only observable for participants, and E[y_0] is only observable for non-participants.

Case differentiation:
1. Participants and non-participants ("control group") differ neither with respect to observed nor to unobserved characteristics → consistent estimation of the expected value of the outcome variable using the sample mean:

Ê[y_1] = (1/T) Σ_{i ∈ participants} y_i,   Ê[y_0] = (1/NT) Σ_{j ∈ non-participants} y_j

where T: number of participants, NT: number of non-participants.
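
Under random assignment, the difference of these two sample means is a consistent estimator of the treatment effect. A minimal numerical sketch with an assumed true effect of 1.5:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

y0 = rng.normal(0.0, 1.0, n)
y1 = y0 + 1.5                      # homogeneous treatment effect of 1.5

c = rng.integers(0, 2, n)          # random assignment: no selection bias
y_obs = np.where(c == 1, y1, y0)   # only one potential outcome is observed

# Difference of sample means: mean over participants minus mean over
# non-participants, as in the estimator above.
estimate = y_obs[c == 1].mean() - y_obs[c == 0].mean()
print(f"difference-in-means = {estimate:.3f}")
```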

2. Participants and non-participants differ with regard to observed and unobserved characteristics → selection bias → the difference of sample means does not yield a consistent estimator.

X-heterogeneity: heterogeneity of the treatment effect that can be explained by differences in observed variables.
U-heterogeneity: heterogeneity of the treatment effect that can be explained by differences in unobserved variables.

Definition of a homogeneous treatment effect: the treatment has the same effect on individuals with different observed attributes (no X-heterogeneity) and on individuals with different unobserved attributes (no U-heterogeneity). The treatment effect is then identical for all individuals, and ATT = ATE.

X-heterogeneity / U-heterogeneity → heterogeneous treatment effect → selection bias:
1. Selection bias due to observed variables
2. Selection bias due to unobserved variables
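
The consequence of such selection can be shown numerically. In this sketch (an assumed data-generating process, chosen only for illustration) the true effect is 1.0 for everyone, but people with a high y0 select into the program, so the naive difference of sample means overstates the effect:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50_000

y0 = rng.normal(0.0, 1.0, n)
y1 = y0 + 1.0                          # true effect is 1.0 for everyone

# Self-selection: individuals with a high y0 are more likely to participate,
# so participants and non-participants differ in characteristics that
# also drive the outcome.
c = (y0 + rng.normal(0.0, 1.0, n) > 0).astype(int)
y_obs = np.where(c == 1, y1, y0)

naive = y_obs[c == 1].mean() - y_obs[c == 0].mean()
print(f"naive difference-in-means = {naive:.3f}")  # well above the true 1.0
```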

"Selection on observables": regression methods, propensity score matching.
"Selection on unobservables": difference-in-differences estimators (DiD), selection models, instrumental variable approaches (IV).
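
The idea behind matching under selection on observables can be sketched as follows. This is a toy example, not the propensity score procedure of the papers cited later: participation depends only on an observed covariate x, the naive comparison is biased, and nearest-neighbour matching on x recovers the ATT:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5_000

x = rng.uniform(0.0, 1.0, n)             # observed characteristic
y0 = 2.0 * x + rng.normal(0.0, 0.1, n)   # outcome depends on x
y1 = y0 + 1.0                            # true effect 1.0

c = (rng.random(n) < x).astype(int)      # selection on the OBSERVED x only
y_obs = np.where(c == 1, y1, y0)

# Naive comparison is biased: participants have higher x on average.
naive = y_obs[c == 1].mean() - y_obs[c == 0].mean()

# Matching: for each participant, find the non-participant with the
# closest x and compare outcomes pairwise (nearest neighbour with
# replacement, via a sorted control sample).
xt, yt = x[c == 1], y_obs[c == 1]
xc, yc = x[c == 0], y_obs[c == 0]
order = np.argsort(xc)
idx = np.clip(np.searchsorted(xc[order], xt), 0, len(xc) - 1)
att_hat = (yt - yc[order][idx]).mean()

print(f"naive = {naive:.3f}, matched ATT = {att_hat:.3f}")
```

Propensity score matching follows the same logic but matches on the estimated participation probability instead of x itself, which is what makes matching feasible with many covariates.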

1. Social experiments
2. Natural experiments
3. Quasi experiments
4. Non-experimental statistical / econometric methods

Social experiments. Example: introduction of a care budget. Basic information (Arntz/Spermann, ZEW DP 04-84):
7 sites in East and West Germany
goal: 2,000 participants, assigned randomly: 1,000 to the program group and 1,000 to the control group
duration:

Treatment: matching transfer plus case management.
Outcomes: duration in home care, life satisfaction, quality of care, home care arrangements.

Discussion of biases: randomization bias, treatment dropout bias, control group substitution bias, attrition bias, general equilibrium effects.
Preliminary results: Swiss Journal 2006.

I) No randomization bias: no structural change of participants and non-participants due to the fact that they take part in a social experiment.

II) No treatment group dropout bias: R=1 → T=1
R=0: indicator variable for the control group
R=1: indicator variable for the program group
T=0: indicator variable for non-receipt of the program
T=1: indicator variable for program receipt
Persons who were assigned to the program group must actually receive the program.

III) No control group substitution bias: R=0 → T=0
Persons in the control group do not participate in comparable programs.

IV) No general equilibrium effects: no indirect effect of the program that could change its direct effect.

V) No attrition bias: program and control group members must not be lost during the experiment.

Natural experiments: see the ZEW discussion paper (Boockmann/Zwick/Ammermüller/Maier) on integration subsidies for older employees ("Eingliederungszuschüsse an ältere Arbeitnehmer", Hartz I-III evaluation). See the later presentation on difference-in-differences estimators.
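
The difference-in-differences estimator referred to here compares the before/after change in a treated group with the before/after change in a comparison group, so that a common time trend cancels out. A minimal 2x2 computation on simulated data with an assumed true effect of 0.8:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 20_000

# Two groups (treated / comparison) observed in two periods (before / after).
group = rng.integers(0, 2, n)          # 1 = treated group
period = rng.integers(0, 2, n)         # 1 = after the reform

true_effect = 0.8
# Outcome: group-specific level + common time trend + treatment effect + noise
y = (0.5 * group + 0.3 * period
     + true_effect * group * period
     + rng.normal(0.0, 1.0, n))

def cell_mean(g, t):
    return y[(group == g) & (period == t)].mean()

# DiD: (change in treated group) minus (change in comparison group)
did = (cell_mean(1, 1) - cell_mean(1, 0)) - (cell_mean(0, 1) - cell_mean(0, 0))
print(f"DiD estimate = {did:.3f}")
```

The group-level difference (0.5) and the common trend (0.3) both drop out; only the interaction term survives the double differencing.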

Quasi experiment: see Spermann/Strotmann (ZEW DP 05-68) on the "Targeted Negative Income Tax (TNIT)" experiment in Mannheim. Main features:
1. Target group: means-tested unemployed
2. Employee subsidy (earnings supplement)
3. Time restriction that varies between household types

Example of a quasi experiment: evaluation design of the TNIT study. Site-randomized control group in the same local labour market: program district in the northern part of Mannheim, comparison district in a comparable southern part of Mannheim. Program and comparison group are comparable according to t-tests on important observables.

Example of a quasi experiment: treatment, outcome & identification.
Treatment: information about the potential earnings supplement in case of participation in the private labour market. Check by survey: did the program group understand the program (proxy for receipt of treatment)? Result: the program group understood the basic idea.

Outcome:
Participation: available
Income: available
Hours of work: not available
Duration of jobs after the time limit: not available

Identification: is the selection-on-observables assumption plausible? HIT (1997) and HIST (1998) set up criteria for comparison group data quality:
1. Same data source for program and comparison group: fulfilled
2. Program and comparison group reside in the same local labour market: fulfilled
3. Data contain a rich set of covariates: only partly fulfilled; individual employment history and pre-program data are not observed
Quality checks for matching are therefore not feasible. Solution: we restrict ourselves to Probit and Tobit models; propensity score matching confirmed the results.

If one accepts the plausibility of the selection-on-observables assumption, then the average marginal effect of the program dummy in a Probit estimation can be interpreted as the ATE.
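
How such an average marginal effect is computed can be sketched as follows. The coefficients below are hypothetical placeholders, not the estimates of the TNIT study: the AME of the program dummy averages, over all individuals, the change in the predicted probability Φ(xb) when C switches from 0 to 1:

```python
import math
import numpy as np

def norm_cdf(z):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

rng = np.random.default_rng(5)
n = 10_000

age = rng.uniform(25.0, 55.0, n)       # one illustrative covariate

# Hypothetical probit coefficients (illustration only)
b0, b_c, b_age = -1.0, 0.4, 0.01

# AME of the dummy C: mean of Phi(xb with C=1) - Phi(xb with C=0)
p1 = np.array([norm_cdf(b0 + b_c + b_age * a) for a in age])
p0 = np.array([norm_cdf(b0 + b_age * a) for a in age])
ame = (p1 - p0).mean()

print(f"average marginal effect of the program dummy = {ame:.3f}")
```

Unlike in a linear model, the effect of the dummy varies with the other covariates, which is why the marginal effects are averaged over the sample.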

Results: descriptive statistics.

Probit models for overall employment (average marginal effects).

Notes: administrative data, Mannheim 2000; p-values in parentheses. ***/**/* indicate statistical significance at the 1, 5 and 10 percent level, respectively.

Non-experimental methods: matching, instrumental variables, panel data methods, etc.