ECON 3039 Labor Economics 2015-16, by Elliott Fan, Economics, NTU. Fall 2015, Lecture 2.



Fruitless comparisons (1)
KMT's vs DPP's performance in economic growth. Can you infer from this table that the KMT outperformed the DPP? If not, why not?

        Start            End              Average annual growth
DPP     14,519 (2001)    18,131 (2008)    3.11%
KMT     18,131 (2008)    23,374 (2015)    3.86%
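As a sketch of how such growth figures are computed, here is the compound annual growth rate implied by two levels and a span of years. Note this is an illustration only: the slide's percentages are likely averages of year-by-year growth rates from an underlying series not shown here, so they need not match the compound formula exactly.

```python
def cagr(start, end, years):
    """Compound annual growth rate between two levels."""
    return (end / start) ** (1.0 / years) - 1.0

# Levels from the slide's table (start year to end year).
dpp = cagr(14_519, 18_131, 2008 - 2001)
kmt = cagr(18_131, 23_374, 2015 - 2008)
print(f"DPP era: {dpp:.2%}, KMT era: {kmt:.2%}")
```

Either way, the slide's point stands: a raw before/after comparison across two administrations says nothing causal, because global conditions, base effects, and everything else also changed between the two periods.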

Fruitless comparisons (2)
The textbook's example:

Fruitless comparisons (2)
Treatment effect on Khuzdar: Y1,Khuzdar - Y0,Khuzdar. Treatment effect on Maria: Y1,Maria - Y0,Maria. Since we can only observe one of the two outcomes for each person, we have no choice but to use one person's observed outcome as the other's counterfactual.


A better comparison

A better comparison
Question: does attending a private university in the US raise income, relative to attending a public one? A simple comparison of the average income of the five individuals attending private schools ($92,000) and the four attending public schools ($72,500) is potentially biased.

A better comparison
We categorize the students into four groups defined by the set of schools to which they applied and were admitted. (I) A comparison of A1+A2 and A3 shows that the return to attending private school is -5,000. (II) A comparison of B1 and B2 suggests that the return to attending private school is 30,000. The unweighted average of (I) and (II) is 12,500; the average weighted by group size is 9,000.
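The slide's arithmetic can be checked directly. The group sizes used here (three students in group A, two in group B) are an assumption matching the textbook version of this example; only these two groups contain both private and public attendees, so only they contribute to the estimate.

```python
# Within-group estimates of the return to private school, from the slide:
# group A gives -5,000 and group B gives +30,000.
group_effects = {"A": -5_000, "B": 30_000}
group_sizes = {"A": 3, "B": 2}   # assumed sizes, as in the textbook example

unweighted = sum(group_effects.values()) / len(group_effects)
total = sum(group_sizes.values())
weighted = sum(group_effects[g] * group_sizes[g] / total for g in group_effects)

print(unweighted)  # 12500.0
print(weighted)    # 9000.0
```

The weighted figure (9,000) matches the slide, which is why (II) must be +30,000 rather than -30,000: with -30,000 neither the unweighted nor the weighted average on the slide would be reproducible.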

A better comparison
What can we learn from the two different comparisons? Apples-to-apples and oranges-to-oranges comparisons are what we need.

From comparison to regressions
Consider a regression function:
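The regression itself did not survive the transcript. In the notation of the private/public example above, it is presumably of the following form (this reconstruction is an assumption, not verbatim from the slide):

```latex
Y_i = \alpha + \beta P_i + \gamma A_i + e_i
```

where $Y_i$ is earnings, $P_i$ indicates attending a private school, and $A_i$ indicates membership in applicant group A. Conditioning on the applicant group is what turns the naive comparison into the apples-to-apples comparison of the previous slide.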

From a small sample to a large one
Consider a regression function:


Rubin Causal Model (RCM)
This is called Rubin's "potential outcomes" framework.

Rubin Causal Model (RCM)
The evaluation problem / fundamental problem of causal inference: it is impossible to observe both Y1 and Y0 for the same individual. Holland (1986) called this the "fundamental problem of causal inference"; others call it "the evaluation problem". Implication: we must always make assumptions in order to draw causal inferences.
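A minimal simulation (illustrative, not from the slides) makes the missing-data nature of the problem concrete: in the simulation every unit has both potential outcomes, but the analyst observes only one of them per unit.

```python
import random

random.seed(0)

# Potential outcomes for 1,000 individuals: Y0 without treatment,
# Y1 with treatment (here the true effect is +2 for everyone).
n = 1_000
y0 = [random.gauss(10, 1) for _ in range(n)]
y1 = [y + 2 for y in y0]

# Each individual is either treated or not, so we observe only one
# potential outcome per person; the other is the missing counterfactual.
d = [random.random() < 0.5 for _ in range(n)]
observed = [y1[i] if d[i] else y0[i] for i in range(n)]

# The individual effect y1[i] - y0[i] is never computable from
# `observed` alone -- that is the fundamental problem.
```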

Rubin Causal Model (RCM)
Imbens and Wooldridge's (2009) assessment of this framework (see their JEL paper, p. 10) offers five advantages:
1. It allows us to define causal effects before specifying the assignment mechanism, and without making functional-form or distributional assumptions.
2. It links the analysis of causal effects to explicit manipulations.
3. It separates the modeling of the potential outcomes from that of the assignment mechanism.
4. It allows us to formulate probabilistic assumptions in terms of potentially observable variables, rather than in terms of unobserved components.
5. It clarifies where the uncertainty in the estimators comes from.

Introducing selection bias
Actually, we can rearrange the equation: selection bias arises because we employed an outcome that deviates from the counterfactual to make the comparison.

Average treatment effect on the treated (ATT)
Let's formalize this using group means. A naive comparison of those insured and those uninsured: Q: What do we observe? What don't we observe?
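The algebra the slide refers to, reconstructed in the standard potential-outcomes notation used above (D = 1 for the insured):

```latex
\underbrace{E[Y \mid D=1] - E[Y \mid D=0]}_{\text{naive comparison}}
  = \underbrace{E[Y_1 - Y_0 \mid D=1]}_{\text{ATT}}
  + \underbrace{E[Y_0 \mid D=1] - E[Y_0 \mid D=0]}_{\text{selection bias}}
```

We observe $E[Y_1 \mid D=1]$ and $E[Y_0 \mid D=0]$; we never observe $E[Y_0 \mid D=1]$, the counterfactual mean for the treated, which is exactly what the selection-bias term involves.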

Selection bias
Examples:
1. Education on earnings
2. Health insurance on infant health
3. Any other examples?

Selection bias
Regression expression of selection bias: Q: What's the assumption needed to eliminate the selection bias?
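One standard way to write the regression version (a reconstruction, not verbatim from the slide): with treatment dummy $D_i$,

```latex
Y_i = \alpha + \rho D_i + \eta_i, \qquad
E[Y_i \mid D_i=1] - E[Y_i \mid D_i=0]
  = \rho + \underbrace{E[\eta_i \mid D_i=1] - E[\eta_i \mid D_i=0]}_{\text{selection bias}}
```

so the naive comparison recovers $\rho$ only when the error term has the same mean in both groups, i.e. when $D_i$ is mean-independent of $\eta_i$.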

Selection bias and randomized trials
A randomized trial, or randomized controlled trial (RCT), is a type of scientific experiment in which the people being studied are randomly allocated to the treatment and control groups. Intuition: random assignment implies that the treatment and control groups share the same observables and unobservables on average. This implies E[Y0 | D = 1] = E[Y0 | D = 0], so the selection-bias term is zero. Thus, selection bias is removed.
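A small simulation (hypothetical numbers) contrasting self-selection with random assignment: under selection the naive comparison overstates the true effect of 2, while under randomization it recovers it on average.

```python
import random

random.seed(1)

n = 10_000
true_effect = 2.0
y0 = [random.gauss(10, 1) for _ in range(n)]
y1 = [y + true_effect for y in y0]

def naive_diff(d):
    """Difference in observed group means, given an assignment vector."""
    treated = [y1[i] for i in range(n) if d[i]]
    control = [y0[i] for i in range(n) if not d[i]]
    return sum(treated) / len(treated) - sum(control) / len(control)

# Self-selection: people with high Y0 opt into treatment.
d_selected = [y0[i] > 10 for i in range(n)]
# Randomization: a coin flip, independent of the potential outcomes.
d_random = [random.random() < 0.5 for _ in range(n)]

print(naive_diff(d_selected))  # biased well above 2
print(naive_diff(d_random))    # close to 2
```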

Selection bias and randomized trials
How does a randomized trial eliminate the selection bias? In terms of the RCM:

Selection bias

Experiments: important examples