Human Capital Policies in Education: Further Research on Teachers and Principals. 5th Annual CALDER Conference, January 27, 2012.


Where You Come From or Where You Go? Distinguishing Between School Quality and the Effectiveness of Teacher Preparation Programs
January 27, 2012
Kata Mihaly, RAND
J. R. Lockwood, RAND
Daniel McCaffrey, RAND
Tim Sass, Georgia State University

Introduction
- Improving teacher effectiveness is one of four pillars of education reform under the Obama Administration
- States are using evidence-based techniques to evaluate teacher preparation program effectiveness
- One technique links student achievement to the preparation program where the student's teacher was trained and certified
- Among the many concerns is that school context could affect preparation program estimates

Table 1 - Characteristics of Schools Where Graduates from a Sample Program in Florida Were Hired (N=22)
Statistics reported: Mean, Std. Dev., Min, Max
Rows: Black; Hispanic; Female; Parent no English; LEP; Free Lunch; Math gain score (norm); New Teachers; Number of Prep Programs

Introduction
- There are many potential problems with linking preparation programs to student achievement:
  - selection of teachers into and out of programs
  - selection of program graduates into teaching positions
  - how teacher performance is measured
- Here we consider the problem of distinguishing preparation program effects from the environment of the schools where graduates teach
- We estimate preparation program effectiveness using a value added model of student achievement with data from Florida

Overview of Research Questions
1. Can school fixed effects be included in the value added model?
2. If yes, does the inclusion of school fixed effects change preparation program estimates?
3. What are the implications of including school fixed effects for the precision of estimates?
4. Are fixed effects suitable in this setting?
5. What is the impact of the sample restrictions?
   - Years of data
   - Inexperienced teachers

Prior Research Comparing Value-Added of Teachers Across Preparation Programs
Models with program and school fixed effects:
- New York City: Boyd et al. (2008)
- Florida: Sass (2008)
- Kentucky: Kukla-Acevedo et al. (2009)
HLM models:
- Texas: Mellor et al. (2010)
- Louisiana: Noell et al. (2009)

Data
- Analyze recent graduates (< 3 years of experience) from Florida elementary education teacher preparation programs
- Teaching in grades 4 and 5 during the 2000/01-2004/05 period
- 33 preparation programs
- 1 to 496 teacher graduates in tested grades/subjects
- Graduates from a single program working in 1 to 271 schools
- Over 550,000 students
- Sample also includes experienced teachers and those certified out of state or in Florida through alternative pathways

Fixed Effects Identification
- School fixed effect estimation is only possible if all preparation programs are linked to one another through the schools where their graduates teach
- Preparation programs do not need to be linked directly, as long as there are some new teachers in the school who graduated from other programs
- Regional clustering could lead to stratification
- Work by Boyd et al. (2005) on the "draw of home" suggests graduates tend to teach in schools close to where they grew up or where they went to college
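
The linkage condition above is just graph connectivity, and it can be checked mechanically. Here is a minimal sketch in Python (not the authors' code; the program and school IDs are hypothetical): treat programs and schools as nodes in a graph, join each program to every school where its graduates teach, and test whether all programs end up in a single connected component.

```python
def programs_connected(placements):
    """placements: list of (program_id, school_id) pairs.
    Returns True if every program is linked to every other
    through shared schools (one connected component)."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    programs = set()
    for program, school in placements:
        programs.add(("p", program))
        # Linking a program to its school transitively links all
        # programs that placed a graduate in that school.
        union(("p", program), ("s", school))

    roots = {find(p) for p in programs}
    return len(roots) <= 1

# Hypothetical example: programs A and B share school s1,
# B and C share school s2, so all three are connected.
print(programs_connected([("A", "s1"), ("B", "s1"), ("B", "s2"), ("C", "s2")]))  # True
print(programs_connected([("A", "s1"), ("B", "s2")]))  # False
```

The same check, run on placements restricted to a shorter data window, is what determines whether programs become disconnected as the window shrinks.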

Figure 1 - Estimated Probability of a Preparation Program Graduate Teaching at a School with at Least One Graduate from Another Program, as a Function of Distance from Program to School
The negative relationship indicates graduates are more likely to teach in schools closer to where they graduated

Figure 2 – Preparation Program and School Connections
The shade of each line represents the strength of the connection: the number of graduates from a program going to that school

Figure 3 – Preparation Program Network Using a 5-Year Data Window All preparation programs are connected, so school fixed effect estimation is possible

Model – Preparation Program Effectiveness
- We estimate a model of student achievement gains as a function of student characteristics, teacher experience, grade and year indicators, and program fixed effects
- Program effects are estimated relative to the average preparation program in Florida using the Stata felsdvregdm command
- We rank programs on effectiveness and divide the rankings into quartiles
- We compare the rankings and ranking quartiles with and without school fixed effects
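
The "relative to the average program" normalization can be illustrated with sum-to-zero (deviation) coding of the program dummies, which is one common way to implement it. The sketch below is a simplified Python analogue of this idea, not the paper's model: it omits student characteristics, teacher experience, grade/year indicators, and school fixed effects, and the gain scores are invented.

```python
import numpy as np

# Hypothetical data: student gain scores for teachers from 3 programs.
programs = np.array([0, 0, 1, 1, 1, 2, 2, 2, 2, 2])
gains = np.array([1.0, 1.2, 0.4, 0.5, 0.6, 0.9, 1.1, 1.0, 0.8, 1.2])

k = programs.max() + 1
# Deviation coding: the last program's dummy enters as -1 in every
# column, so the coefficients are deviations from the (unweighted)
# average program rather than from an omitted reference program.
X = np.zeros((len(gains), k - 1))
for j in range(k - 1):
    X[:, j] = (programs == j).astype(float) - (programs == k - 1).astype(float)
X = np.column_stack([np.ones(len(gains)), X])

beta, *_ = np.linalg.lstsq(X, gains, rcond=None)
# Recover the last program's effect from the sum-to-zero constraint.
effects = np.append(beta[1:], -beta[1:].sum())
print(np.round(effects, 3))  # deviations, roughly [0.233, -0.367, 0.133]
```

Because the program effects sum to zero, each estimate is read directly as a deviation from the average program, so no single program has to serve as the omitted reference category.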

Table 2 – Top Tier Preparation Programs
Columns: Program ID; Rank and Rank Quartile (no school FE); Rank (with school FE)

Table 3 – Bottom Tier Preparation Programs
Columns: Program ID; Rank and Rank Quartile (no school FE); Rank (with school FE)

Results – Preparation Program Rankings
- Rankings are significantly affected by the inclusion of school fixed effects in the value added model
- Of the 12 programs in the top quartile of rankings in either specification, 8 are in a different quartile in the other specification
- The bottom quartile of program rankings is more stable, with 6 programs in this quartile for both specifications
- 2 programs switch from the bottom quartile of rankings in one specification to the top in the other
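
Quartile comparisons like these are easy to reproduce for any pair of specifications. In this small illustration (the effect estimates are invented, not the paper's), programs are ranked from best to worst under each specification, assigned quartiles, and counted when they land in a different quartile.

```python
import numpy as np

def rank_quartiles(effects):
    """Rank programs from best (rank 1) to worst and split the
    ranks into quartiles 1 (top) through 4 (bottom)."""
    order = np.argsort(-np.asarray(effects))  # descending: best first
    ranks = np.empty(len(effects), dtype=int)
    ranks[order] = np.arange(1, len(effects) + 1)
    return np.ceil(4 * ranks / len(effects)).astype(int)

# Hypothetical effect estimates for 8 programs under two specifications.
no_school_fe = [0.9, 0.7, 0.5, 0.3, 0.1, -0.1, -0.3, -0.5]
with_school_fe = [0.1, 0.8, -0.4, 0.6, 0.2, -0.2, 0.4, -0.6]

q_no = rank_quartiles(no_school_fe)
q_fe = rank_quartiles(with_school_fe)
changed = int((q_no != q_fe).sum())
print(q_no, q_fe, changed)  # here 5 of 8 programs change quartile
```

The same counting applied to the 33 Florida programs is what produces the instability results reported above.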

Results – Variance Inflation
- Schools where all new teachers came from a single program do not help identify preparation program effects in the school fixed effect model: ~32% of our sample of teachers
- Losing these teachers can greatly inflate the standard errors of the estimated program effects for some programs
- The standard errors of the preparation program estimates increase by an average of 16.5% after including school fixed effects
- The variance inflation is severe for 10 of the 33 programs, with standard errors increasing over 20%

Homogeneity Assumption
- School fixed effects can only yield consistent estimates of program effectiveness if the teachers and schools that create the connections among programs do not differ systematically from other teachers and schools in the state
- Three tests of the homogeneity assumption:
  1. Are schools with graduates from 1 program different from schools with graduates from more than 4 programs?
  2. Are teachers who teach in schools with graduates from 1 program different from teachers who teach in schools with graduates from more than 4 programs?
  3. Are central schools, those that help the most in connecting preparation programs, different from other schools in the state?

Figure 4 – Central School Locations
Central schools have a disproportionately high influence in identifying program effects

Results – Homogeneity Assumption
Three tests of the homogeneity assumption:
1. Schools different? YES: schools with new teachers from 4+ preparation programs are larger, with higher % black and Hispanic students, lower test scores, and higher % free lunch
2. Teachers different? YES: on average, teachers in schools with graduates from 4+ preparation programs are more likely to be black or Hispanic and have lower test scores and lower SAT scores
3. Central schools different? YES: larger schools, with higher % Hispanic, immigrant, and LEP students
Homogeneity assumption likely violated

Years of Data
- The data window length affects program connections: programs will have a tie through a school if they both have a graduate teaching in the school sometime during the window
- As we lengthen the window:
  - more programs will have ties, making estimation possible
  - however, this requires the assumption that both school and program effects are constant over the entire window
- Other implications: variance inflation; the homogeneity assumption

Results – Years of Data
- Identification is robust to shorter window lengths
  - Even with 3 years of data the school fixed effects can be identified
  - Restricting to 2 years of data results in 3 preparation programs being disconnected
- Variance inflation is worse, due to an increase in the proportion of teachers in schools with graduates from a single program, who are not used to estimate program effects
- Characteristics of schools that contribute to program estimates with school fixed effects are very similar: somewhat larger schools, with higher % Hispanic

Sample of Teachers
- In the results reported so far, only inexperienced teachers (< 3 years of experience) were included in the analysis
- This restriction is warranted if the impact of the preparation program dissipates as the teacher gains experience on the job
- We can include experienced teachers in the sample while restricting program effects to exclude these teachers
  - but experienced teachers change the connections between schools and preparation programs
- Experienced teachers will help identify the school fixed effects
  - This can reduce the variance of program effects
  - But our ability to compare programs could then rely on differences between schools in their experienced teachers

Results – Sample of Teachers
- Preparation program ranking quartiles are unaffected by the inclusion of experienced teachers in models without school fixed effects
- Ranking quartiles in models with school fixed effects change for 8 of 33 programs
  - 3 of these 8 changes are of more than 2 quartiles
- Including experienced teachers lowers program effect variances in models with school fixed effects by 13% on average

Summary and Conclusions
Good news for school fixed effects models:
- Despite regional clustering, Florida preparation programs are connected, so use of school fixed effects is feasible
- There is significant variation in school characteristics for graduates of any preparation program, so use of school fixed effects is desirable
Bad news for models with school fixed effects:
- Including school fixed effects inflates the variance of estimated program effects
- The homogeneity assumption is likely violated
- Preparation program effectiveness rankings differ significantly with and without school fixed effects

Summary and Conclusions
- A 3-year data window with school fixed effects may provide a reasonable compromise between bias and variance inflation
- However, there is no clean empirical method to identify a model with no bias, or a model that yields program effect estimates with the smallest MSE
- States will need to make a choice knowing that the choice may affect preparation program rankings and might yield biased estimates unless untestable assumptions hold
- States may need to consider whether value added modeling alone can provide useful information about preparation programs