ECON 3039 Labor Economics, 2015 Fall, Lecture 3. By Elliott Fan, Economics, NTU.

Card, DellaVigna, Malmendier (2011)

Four advantages of experiments
Advocates of RCTs list advantages such as:
1. Has the potential for overcoming selection bias
2. Can be policy relevant
3. Transparency; easy to explain
4. Allows you to test new ideas

Disadvantages of experiments
Critics of RCTs point out problems such as:
- Context specificity
- Cost
- Human subject / ethical concerns
- Partial vs. general equilibrium effects
- Duration of evaluation: can we wait?
- Attrition / compliance
- Is the mean enough?
- Externalities

Heckman and Smith (1995, JEP) highlight two further problems:
- Randomization bias: the randomization alters the process of selection into the treatment, so that those who participate during an experiment differ from those who would have participated in the absence of an experiment.
- Substitution bias: members of the experimental control group obtain close substitutes for the treatment elsewhere, so the control group no longer represents the untreated state.

Examples of randomized trials: Krueger (QJE, 1999) on the STAR program
Project STAR was a longitudinal study in which kindergarten students and their teachers were randomly assigned to one of three groups beginning in the 1985-1986 school year: small classes (13-17 students per teacher), regular-size classes (22-25 students), and regular/aide classes (22-25 students plus a full-time teacher's aide).
Random assignment deals with two reactive effects: 'Hawthorne effects' and 'John Henry effects'.

Examples of randomized trials: Krueger (QJE, 1999) on the STAR program
No randomized trial is ideal in practice. However, the quality of the randomization can be checked, for example by testing whether pre-treatment characteristics are balanced across the assigned groups.
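As a minimal illustration of such a balance test (not taken from Krueger's paper; the data file and column names below are hypothetical), one can regress a pre-treatment characteristic on the assignment dummies and check that the coefficients are close to zero:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical student-level data: one row per kindergarten student, with the
# assigned class type and pre-treatment characteristics.
df = pd.read_csv("star_kindergarten.csv")

# Balance test: regress a pre-treatment characteristic (free-lunch status) on
# assignment dummies and school fixed effects (randomization was within schools).
# Under successful randomization the assignment coefficients should be near zero.
model = smf.ols("free_lunch ~ small + regular_aide + C(school_id)", data=df)
res = model.fit(cov_type="cluster", cov_kwds={"groups": df["class_id"]})
print(res.summary())
```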

Examples of randomized trials: Krueger (QJE, 1999) on the STAR program
Identification strategy (OLS)
Identification strategy (2SLS)
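The estimating equations appear on the slides only as images. As a sketch (a reconstruction, with illustrative variable names, not copied from the paper), Krueger's specifications are roughly of the following form, where Y_ics is the test score of student i in class c at school s, SMALL and REGAIDE are class-type dummies, X_ics are student and teacher controls, and alpha_s are school fixed effects:

```latex
% OLS: test scores on actual class-type dummies, with school fixed effects
% \alpha_s (random assignment was carried out within schools)
Y_{ics} = \beta_0 + \beta_1\,\mathrm{SMALL}_{cs} + \beta_2\,\mathrm{REGAIDE}_{cs}
        + X_{ics}'\beta_3 + \alpha_s + \varepsilon_{ics}

% 2SLS: the same equation, with actual class type instrumented by the class type
% initially assigned, to handle non-random switching in later grades
```

The 2SLS version matters because, as the deviations slide below notes, roughly 10 percent of students switched between small and regular classes after the initial assignment.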

Graphical presentation

Results

Examples of randomized trials: Krueger (QJE, 1999) on the STAR program
Deviations from an ideal experiment:
- Students in regular-size classes were randomly assigned again between classes with and without full-time aides at the beginning of first grade.
- Approximately 10 percent of students switched between small and regular classes between grades, primarily because of behavioral problems or parental complaints.
- Because some students and their families naturally relocate during the school year, actual class size varied more than intended: from 11 to 20 in small classes and from 15 to 30 in regular classes.
- Attrition: around half of the students who were present in kindergarten were missing in at least one subsequent year.

Examples of randomized trials: Duflo, Dupas, and Kremer (AER, 2011) on tracking and peer effects in Kenya
- To the extent that students benefit from high-achieving peers, tracking will help strong students and hurt weak ones.
- However, all students may benefit if tracking allows teachers to better tailor their instruction level.
- Lower-achieving pupils are particularly likely to benefit from tracking when teachers have incentives to teach to the top of the distribution.

Examples of randomized trials: Duflo, Dupas, and Kremer (AER, 2011) on tracking and peer effects in Kenya
Literature:
- There is a rough consensus that tracking helps high-achieving students; the consensus is weaker for low-achieving students.
- Selection bias is serious in studies that rely on inappropriate comparison groups.
- Attrition constitutes another difficulty.

Examples of randomized trials: Duflo, Dupas, and Kremer (AER, 2011) on tracking and peer effects in Kenya
The experimental design:
- In 2005, 140 primary schools in western Kenya received funds to hire an extra grade-one teacher.
- Of these schools, 121 had a single first-grade class, which they split into two sections, with one section taught by the new teacher.
- In 60 randomly selected schools, students were assigned to sections based on initial achievement (tracking).
- In the remaining 61 schools, students were randomly assigned to one of the two sections.
- They find that tracking students by prior achievement raised scores for all students, even those assigned to lower-achieving peers.

Examples of randomized trials: Duflo, Dupas, and Kremer (AER, 2011) on tracking and peer effects in Kenya

Examples of randomized trials: Duflo, Dupas, and Kremer (AER, 2011) on tracking and peer effects in Kenya
Identification strategy (OLS) for estimating the tracking effect: y_ij is the endline test score of student i in school j; T_j is a dummy equal to 1 if school j was tracking; and X_ij is a vector including a constant and child and school control variables.
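The regression itself is shown only as an image on the slide; given the definitions above, it is presumably of the form (a reconstruction, not copied from the paper):

```latex
% Tracking effect: endline score on the school-level tracking dummy and controls
% (X_{ij} already includes the constant); beta is the coefficient of interest.
y_{ij} = \beta\, T_j + X_{ij}'\gamma + \varepsilon_{ij}
```

Since treatment varies at the school level, standard errors would typically be clustered by school, the unit of randomization.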

Examples of randomized trials: Duflo, Dupas, and Kremer (AER, 2011) on tracking and peer effects in Kenya
To identify potential differential effects for children assigned to the lower and upper sections, they use a specification in which B_ij is a dummy variable indicating whether the child was in the bottom half of the baseline score distribution in her school.
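Again the equation appears as an image in the slides; a plausible reconstruction, interacting the tracking dummy with the bottom-half indicator, is:

```latex
% Differential effects: the interaction coefficient beta_3 captures how the
% tracking effect differs for children in the bottom half of the baseline
% score distribution relative to the top half.
y_{ij} = \beta_1\, T_j + \beta_2\, B_{ij} + \beta_3\, (T_j \times B_{ij})
       + X_{ij}'\gamma + \varepsilon_{ij}
```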

Overall effect of tracking (short run)

Overall effect of tracking (long run)

Examples of randomized trials

Peer effect