
The Evaluation of Charter School Impacts June 30, 2010 Presentation at the 2010 IES Research Conference Philip Gleason ● Melissa Clark ● Christina Clark Tuttle ● Emily Dwoyer

Study Questions

- What are the impacts of charter schools on student achievement and other outcomes?
- What characteristics of charter schools and their environments are related to charter schools' impacts?

Experimental Design Based on Admissions Lotteries

- Careful monitoring of admissions lotteries at 36 charter middle schools in 15 states
- Sample: 2,330 applicants to charter schools in study
- Data
  – State assessments in reading and math
  – Other outcomes from school records
  – Surveys of students, parents, and principals

Characteristics of Students in the Sample

(Table: average baseline characteristics of treatment and control students, including reading and math test scores, number of absences, proportions white, black, and Hispanic, age in years, proportion with an IEP, and proportion receiving free or reduced-price meals.)
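A minimal sketch of how such a treatment-control baseline comparison can be checked, assuming a student-level data frame with an `offer` indicator and illustrative characteristic column names that are not the study's actual variables:

```python
# Hypothetical balance check between lottery winners and losers; column names
# and the list of characteristics are illustrative, not the study's data.
import pandas as pd
from scipy.stats import ttest_ind

def baseline_balance(df: pd.DataFrame, characteristics: list[str]) -> pd.DataFrame:
    """Compare treatment and control means for each baseline characteristic."""
    rows = []
    for col in characteristics:
        treat = df.loc[df["offer"] == 1, col].dropna()
        control = df.loc[df["offer"] == 0, col].dropna()
        t_stat, p_value = ttest_ind(treat, control, equal_var=False)
        rows.append({
            "characteristic": col,
            "treatment_mean": treat.mean(),
            "control_mean": control.mean(),
            "p_value": p_value,
        })
    return pd.DataFrame(rows)

# Example call (data frame and column names are assumed):
# balance = baseline_balance(df, ["reading_score", "math_score", "absences"])
```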

Type of School Attended in Year 1

Estimating Charter School Impacts

- Each charter school is a mini-experiment
  – Calculate difference in average outcomes between treatment and control groups
  – Control for baseline achievement and other characteristics
- Average impacts across charter school sites
- Calculate both:
  – Impact of being admitted to a study charter school (ITT)
  – Impact of attending a charter school (TOT)
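A minimal sketch of this site-by-site approach, assuming a data frame with hypothetical columns `site`, `offer` (lottery winner), `attended` (enrolled in the charter school), and `score`; the study's baseline-covariate adjustment and site weighting are omitted, and the TOT here uses a simple Bloom-style rescaling of the ITT:

```python
import pandas as pd

def site_impacts(df: pd.DataFrame) -> pd.DataFrame:
    """Estimate ITT and TOT impacts separately for each charter school site."""
    rows = []
    for site, g in df.groupby("site"):
        winners = g[g["offer"] == 1]
        losers = g[g["offer"] == 0]
        # ITT: difference in mean outcomes between lottery winners and losers
        itt = winners["score"].mean() - losers["score"].mean()
        # TOT (Bloom adjustment): rescale the ITT by the winner-loser gap in
        # charter school attendance rates
        takeup_gap = winners["attended"].mean() - losers["attended"].mean()
        tot = itt / takeup_gap if takeup_gap > 0 else float("nan")
        rows.append({"site": site, "n": len(g), "itt": itt, "tot": tot})
    return pd.DataFrame(rows)

def average_impact(site_estimates: pd.DataFrame, column: str = "itt") -> float:
    """Average site-level impacts (equal weights here; the study's weighting differs)."""
    return site_estimates[column].mean()
```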

Summary of Impacts on Key Outcomes

Category of outcomes: Significant difference between treatment and control students?
- Student achievement/proficiency: No
- Other measures of academic progress: No
- Homework completion: No
- Behavior in and out of school: No
- Parent/student satisfaction with school: Yes (+)
- Parental involvement in child's education: Mixed

Impacts on Average Test Scores, Year 2

* Difference is statistically significant at the 0.05 level after adjusting for multiple hypothesis testing.
** Difference is statistically significant at the 0.01 level after adjusting for multiple hypothesis testing.
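The slide does not identify the specific multiple-comparison procedure, so the sketch below is only an illustration of the general idea, applying a standard Benjamini-Hochberg correction to a set of hypothetical p-values:

```python
from statsmodels.stats.multitest import multipletests

# Hypothetical p-values from several outcome tests (not the study's results)
p_values = [0.012, 0.048, 0.003, 0.20, 0.65]

# Adjust for multiple hypothesis testing (Benjamini-Hochberg false discovery rate)
reject, p_adjusted, _, _ = multipletests(p_values, alpha=0.05, method="fdr_bh")

for raw, adj, significant in zip(p_values, p_adjusted, reject):
    print(f"raw p = {raw:.3f}, adjusted p = {adj:.3f}, significant: {significant}")
```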

Impacts on Attendance

* Difference is statistically significant at the 0.05 level.
** Difference is statistically significant at the 0.01 level.

Impacts on Student Behavior

* Difference is statistically significant at the 0.05 level.
** Difference is statistically significant at the 0.01 level.

Impacts on Satisfaction with School

* Difference is statistically significant at the 0.05 level.
** Difference is statistically significant at the 0.01 level.

Significant Variation in Site-Level Impact Estimates

Variation in impacts is statistically significant at the 0.01 level, two-tailed test. Colored bars are statistically significant impacts at the 0.05 level, two-tailed test.
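One standard way to test whether site-level impacts vary more than chance would predict is a heterogeneity (Cochran's Q) test; the sketch below assumes arrays of site impact estimates and their standard errors, and is not necessarily the exact test the study used:

```python
import numpy as np
from scipy.stats import chi2

def heterogeneity_test(impacts: np.ndarray, std_errors: np.ndarray):
    """Chi-square test of whether site impacts vary beyond sampling error."""
    weights = 1.0 / std_errors ** 2
    pooled = np.sum(weights * impacts) / np.sum(weights)   # precision-weighted mean
    q_stat = np.sum(weights * (impacts - pooled) ** 2)     # Cochran's Q statistic
    dof = len(impacts) - 1
    p_value = chi2.sf(q_stat, dof)
    return q_stat, dof, p_value
```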

Impacts on Year 2 Test Scores, by Percent Eligible for Free or Reduced-Price Meals

Impact statistically significant at the 0.05 (†) or 0.01 (††) level. Difference between subgroups significant at the 0.05 (#) or 0.01 (##) level.

Impacts on Year 2 Test Scores, by Baseline Achievement in Site

Impact statistically significant at the 0.05 (†) or 0.01 (††) level. Difference between subgroups significant at the 0.05 (#) or 0.01 (##) level.

Impacts on Year 2 Test Scores, by Urbanicity

Impact statistically significant at the 0.05 (†) or 0.01 (††) level. Difference between subgroups significant at the 0.05 (#) or 0.01 (##) level.

Summary of Key Findings

- No significant impacts on student achievement
  – Positive impacts on student/parent satisfaction with school
- Impacts vary significantly across sites
- The most successful schools were those serving disadvantaged students in large urban areas

Contribution to Literature

- First study to provide experimental estimates for a national sample of charter schools
- Existing experimental studies limited to large urban areas (Boston, NYC)
  – They find positive impacts, consistent with our results for large urban areas
- Existing national studies are nonexperimental
  – They find insignificant or slightly negative results, consistent with our overall impact estimates

For More Information

- Report available at –
- Please contact:
  – Philip Gleason, Project Director
  – Christina Clark Tuttle
  – Melissa Clark

Supplemental Slides

School Selection Process

(Flowchart: screening of all charter schools down to the 36 participating schools. Screens, in order: Is the entry grade between 4 and 7? Is the school at least 2 years old? Does an initial screen indicate potential for oversubscription? Does the recruiting effort confirm eligibility? Does the school maintain eligibility through the admissions period? Is the school willing to participate? Schools failing a screen were ineligible, classified as not a charter middle school, not enough experience, or not oversubscribed, or were counted as refusals.)

Student Sample Selection Process

(Flowchart: applicants to a study charter school who gave consent and were not exempt from the lottery participated in the admissions lottery; students exempt from the lottery and admitted outright were out of sample. Lottery winners were offered admission; the remaining students were placed on the wait list, and some were high enough on the wait list to be offered admission anyway. Students offered admission form the treatment group (60%); the remaining lottery participants form the control group (40%).)

Data Collection Timeline

Instrument: Cohort 1 / Cohort 2
- Baseline survey: Spring/Summer 2005 / Spring/Summer 2006
- School records
  – Baseline year
  – 1st follow-up year
  – 2nd follow-up year
- Student/parent surveys
  – Student survey: Spring 2006 / Spring 2007
  – Parent survey: Spring 2006 / Spring 2007
- Principal surveys
  – Study schools: Fall 2006 / Fall 2007
  – Non-study charter schools: Fall 2007

Impacts on Student Subgroups

Subgroup categories: Significant difference in impacts?
- Certification for free or reduced price lunch: Yes
- Race (white vs. nonwhite and Hispanic): No
- Gender: No
- Baseline reading/math achievement: Yes
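A common way to test for a subgroup difference in impacts is to interact the lottery offer with a subgroup indicator; the sketch below uses statsmodels with hypothetical variable names (`score`, `offer`, `baseline_score`, and a 0/1 subgroup column), not the study's actual specification:

```python
import statsmodels.formula.api as smf

def subgroup_difference(df, subgroup_col: str):
    """Fit score ~ offer * subgroup + baseline_score; the offer:subgroup
    coefficient measures how the impact differs between subgroups.
    Assumes the subgroup column is coded 0/1."""
    model = smf.ols(
        f"score ~ offer * {subgroup_col} + baseline_score", data=df
    ).fit()
    term = f"offer:{subgroup_col}"
    return model.params[term], model.pvalues[term]
```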

Impacts on Year 2 Test Scores, by Race

Statistically significant at the 0.05 (*) or 0.01 (**) level after multiple comparison adjustment. Difference between subgroups significant at the 0.05 (^) or 0.01 (^^) level after multiple comparison adjustment.

Impacts on Year 2 Test Scores, by Gender

Statistically significant at the 0.05 (*) or 0.01 (**) level after multiple comparison adjustment. Difference between subgroups significant at the 0.05 (^) or 0.01 (^^) level after multiple comparison adjustment.

Impacts on Year 2 Test Scores, by Baseline Reading Achievement

Statistically significant at the 0.05 (*) or 0.01 (**) level after multiple comparison adjustment. Difference between subgroups significant at the 0.05 (^) or 0.01 (^^) level after multiple comparison adjustment.

Impacts on Year 2 Test Scores, by Baseline Math Achievement

Statistically significant at the 0.05 (*) or 0.01 (**) level after multiple comparison adjustment. Difference between subgroups significant at the 0.05 (^) or 0.01 (^^) level after multiple comparison adjustment.