Rochester City School District 2010 Symposium
Improving Student Achievement While Overcoming Adversity
Kent Gardner, PhD, President, Center for Governmental Research (CGR)

Practical Educational Program Evaluation
- Challenges & issues
- Examples:
  - 2001 WIN Schools Evaluation
  - 2005 Rochester Charter Schools
  - Harvard NYC Charter
  - Stanford National Charter
  - Middle College
  - Hillside Work-Scholarship Connection

What's the goal?
- Middle College: college prep
- Hillside Work-Scholarship Connection (HWSC): "Graduation is the Goal"
- Who decides? What if the endeavor has multiple goals?
- Can you monitor progress by measuring intermediate or process goals?

What does success look like?
- Does the goal have a measurable outcome?
  - Graduation is relatively easy to measure
  - How do you measure college readiness?
- Are there intermediate outcomes that are measurable?
  - Attendance
  - Credits accumulated
- Which intermediate outcomes contribute most powerfully to the final outcome? (see the sketch below)
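One way to probe that last question is to regress the final outcome on the intermediate measures and compare their weights. A minimal sketch on fabricated data; the variable names, coefficients, and sample are invented for illustration:

```python
# Which intermediate outcomes best predict graduation?
# Logistic regression on fabricated data: attendance and credits are the
# intermediate measures, on-time graduation the final outcome.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
attendance = rng.uniform(0.5, 1.0, n)   # share of days attended
credits = rng.integers(0, 12, n)        # credits accumulated to date
# Fabricated "true" relationship, for illustration only
p = 1 / (1 + np.exp(-(-6 + 5 * attendance + 0.3 * credits)))
graduated = (rng.random(n) < p).astype(int)

X = sm.add_constant(np.column_stack([attendance, credits]))
fit = sm.Logit(graduated, X).fit(disp=0)
print(fit.summary(xname=["const", "attendance", "credits"]))
```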

Data Pitfalls
- Why were the data collected?
  - Unemployment insurance
  - NYS's checkbook
  - School lunch
- If you intend to adapt data to a new use, are they accurate enough for the new purpose?

Data Pitfalls
- Bias/fraud
  - High-stakes tests: NYSED cut scores
  - Attendance
  - Suspensions
- Consistency
  - Elementary grades across classes, schools
  - Coding across years
  - Coding across data systems: attendance can vary depending on how & when it is measured

Assessing impact
- Consider how the program affects outcomes: ideally we would compare each participant's outcomes with what they would have been without the program
- Instead, we compare outcomes for the "experimental" group (HWSC or Middle College participants, for example) to those of students who did not participate (a simple version is sketched below)
- Challenges
  - What's the comparison group? All others who might have participated?
  - Can you control for all differences?
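Reduced to its simplest form, that comparison is a difference in graduation rates plus a significance test. A hand-rolled two-proportion z-test; the counts are invented:

```python
# Two-proportion z-test: participants' vs. comparison group's graduation
# rates. All counts are fabricated for illustration.
import math
from scipy.stats import norm

grads_p, n_p = 130, 200   # participants: 65% graduated
grads_c, n_c = 110, 200   # comparison group: 55% graduated

p1, p2 = grads_p / n_p, grads_c / n_c
pooled = (grads_p + grads_c) / (n_p + n_c)
se = math.sqrt(pooled * (1 - pooled) * (1 / n_p + 1 / n_c))
z = (p1 - p2) / se
p_value = 2 * (1 - norm.cdf(abs(z)))  # two-sided
print(f"difference = {p1 - p2:.3f}, z = {z:.2f}, p = {p_value:.3f}")
```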

Matched Group Comparison
- Experimental design is the "platinum standard"
  - Random assignment to either control or experimental group
  - "Double blind" to avoid the placebo effect
  - Assignment from a homogeneous population
- Random assignment is:
  - Challenging: how do you find a context in which you can randomly assign?
  - Costly: to be sure of drawing from a homogeneous population, you need a big sample (see the power calculation below)
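How big is "big"? A standard power calculation gives a rough per-group n; the 65% vs. 55% graduation rates, alpha, and power below are assumptions chosen for illustration:

```python
# Sample size per group to detect a 10-point graduation-rate difference
# (65% vs. 55%) at alpha = .05 with 80% power. Rates are assumed.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

effect = proportion_effectsize(0.65, 0.55)   # Cohen's h, about 0.20
n_per_group = NormalIndPower().solve_power(
    effect_size=effect, alpha=0.05, power=0.80)
print(round(n_per_group))   # roughly 190 students per group
```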

Fallback from random assignment
- When random assignment is infeasible or too costly, revert to a "quasi-experimental" design: a "control group" is created by selecting similar students
- Case control: match one to one on common characteristics (sketched below)
- Propensity score matching
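A minimal sketch of one-to-one case-control matching, assuming exact agreement on a handful of categorical characteristics; all records and field names are hypothetical:

```python
# Case-control matching: pair each participant with one unused
# non-participant who shares the same observable profile.
from collections import defaultdict

participants = [
    {"id": 1, "age": 15, "sex": "F", "race": "B", "frpl": True},
    {"id": 2, "age": 16, "sex": "M", "race": "H", "frpl": True},
]
pool = [
    {"id": 101, "age": 15, "sex": "F", "race": "B", "frpl": True},
    {"id": 102, "age": 16, "sex": "M", "race": "H", "frpl": True},
    {"id": 103, "age": 16, "sex": "M", "race": "H", "frpl": True},
]

def profile(student):
    return (student["age"], student["sex"], student["race"], student["frpl"])

# Index the comparison pool by profile, then draw one match per participant.
by_profile = defaultdict(list)
for s in pool:
    by_profile[profile(s)].append(s)

matches = {}
for p in participants:
    candidates = by_profile.get(profile(p), [])
    matches[p["id"]] = candidates.pop()["id"] if candidates else None

print(matches)  # {1: 101, 2: 103}
```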

Propensity Score Matching
- A sophisticated statistical technique: builds a statistical model that predicts group membership from the available characteristics of participants (a two-step sketch follows)
- "Retroactive" selection of the control group:
  - Can employ large data sets (demographic characteristics, test scores prior to program participation, etc.) & guarantee a control group of a predetermined size
  - Students "in program" can be matched to multiple students not in program: 1:1, 1:3, or 1:5 matching is possible depending on the size of the comparison population
- Still can't control for unseen factors (family characteristics, motivation, etc.) that may be consistently different in one group
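The two steps in miniature, on fabricated data: fit a participation model, then match each participant to the nearest non-participants on the predicted score. This toy version matches with replacement; a real study would make more careful choices (calipers, matching without replacement, balance checks):

```python
# Propensity score matching sketch:
#   1) model the probability of participation from observed covariates
#   2) match each participant to the k nearest non-participants on the score
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)
n = 1000
X = rng.normal(size=(n, 3))   # e.g., prior score, attendance, poverty index
treated = rng.random(n) < 1 / (1 + np.exp(-(X @ [0.8, -0.5, 0.3])))

# Step 1: propensity model
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# Step 2: 1:k nearest-neighbor match on the score (here k = 3)
k = 3
controls = np.flatnonzero(~treated)
nn = NearestNeighbors(n_neighbors=k).fit(ps[controls].reshape(-1, 1))
_, idx = nn.kneighbors(ps[treated].reshape(-1, 1))
matched_control_ids = controls[idx]   # one row of k controls per participant
print(matched_control_ids.shape)
```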

'01: Wegman Inner City Voucher (WIN)
- 98% of enrolled students in 6 inner-city Catholic schools were supported by WIN vouchers
- Case-control model matched WIN students with demographically comparable students from RCSD "schools of choice" (15, 20, 57, 58)
  - Intended to acknowledge the motivational difference between Catholic and public school families
  - Matched on age, sex, race, F/RPL, mother's education
  - Poverty was higher at the WIN schools
- Comparisons
  - Compared Iowa Test of Basic Skills (ITBS) trend performance against ITBS national norms
  - The common assessment across schools was 4th-grade ELA & Math scores for both WIN and schools-of-choice students
  - Couldn't adjust for "starting point," as conversion from Stanford 9 to ITBS was unreliable
- Conclusion: WIN students and schools-of-choice students performed about the same on 4th-grade ELA & Math

'05: Rochester Charter Schools
- CGR was engaged by the Gleason Foundation to monitor the performance of newly formed charter schools for their first five years (beginning 2000)
- Expect "selection bias" among charter lottery applicants? Motivation, prior achievement
- Solution: follow students not accepted by lottery
  - RCSD facilitated monitoring of state & local tests for students enrolled in charter schools & for lottery entrants who remained in traditional schools
  - Created "value added" achievement measures using scores from the year prior to enrollment for both groups (a gain-score sketch follows)
- Findings
  - Attrition in both groups made comparisons difficult
  - Yet the findings supported the conclusion that two large charter schools (Edison & National Heritage) underperformed RCSD schools
  - Both schools were closed by the NYS Charter Schools Institute
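The gain-score idea in miniature: compare average growth from each student's pre-enrollment score rather than raw post-enrollment levels, so differing starting points wash out. The scale scores below are invented:

```python
# "Value added" as gain from the pre-enrollment baseline: compare average
# score growth across groups, not average score levels.
import numpy as np

# Fabricated scores: column 0 = year before enrollment, column 1 = year after
charter = np.array([[640, 655], [600, 610], [670, 690]])
district = np.array([[645, 665], [605, 625], [660, 670]])

gain_charter = (charter[:, 1] - charter[:, 0]).mean()
gain_district = (district[:, 1] - district[:, 0]).mean()
print(f"charter gain {gain_charter:.1f} vs. district gain {gain_district:.1f}")
```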

Harvard School of Ed (Caroline Hoxby): New York City Charter Schools
- Adopted the same approach used by CGR in 2000: "lotteried in" v. "lotteried out"
- Lottery participants as a group were more black (64% v. 34%) and more poor (F/RPL 92% v. 72%) than NYC public school students overall
  - Hispanic 29% v. 38%; ELL 4% v. 14%; SPED 11% v. 13%
  - Different in other ways?
- Findings
  - "Lotteried out" students remained on grade level in traditional NYC public schools, outperforming similarly disadvantaged NYC students
  - "Lotteried in" students did better still
- Key point: studying only students who were part of a lottery "controls" for unseen factors like family motivation

Stanford CREDO (Macke Raymond): Multistate study
- Employed state administrative records to create "pairwise comparisons" of individual students in 15 states
- Matched on grade level, gender, race/ethnicity, F/RPL, ELL, SPED, and prior score on state achievement tests
- Profile of students studied:
  - 27% black, 30% Hispanic
  - 7% ELL, 7% SPED
  - 49% F/RPL

Middle College
- RCSD/RIT program aimed at "college readiness" for three Franklin high schools
- Measurement proved problematic: how do you define college readiness? How do you assess it?
- Agreement on goals and objectives varied across RCSD & RIT faculty
- One measurement idea, "before and after" ACCUPLACER scores, proved unrealistic
- CGR's role evolved to be more about process than outcome

Hillside Work-Scholarship Connection
- Focus on the critical output indicator: graduation rates
- Through , CGR studies were based on a one-to-one match of HWSC participants to RCSD students
  - Matching conducted by individuals on the Accountability staff
  - Matched on age, gender, race/ethnicity, F/RPL participation, grade, and prior-year GPA

HWSC: Propensity Score Matching
- New study for students whose "on time" graduation years were 2007, 2008, and 2009
- Relied on a very high level of cooperation with Accountability
- HWSC participants matched to nonparticipants by age, gender, race/ethnicity, poverty status, disability, English language learner status, grade, school quality, prior-year GPA, prior-year attendance, prior-year suspensions, and prior-year state test scores

HWSC: Propensity Score Matching
- Grouped students in two ways:
- By entry grade (8th, 9th, or 10th) & on-time graduation year (2007, 2008, or 2009), for NINE groups or "cohorts"
  - Groups are more homogeneous
  - "Graduation" has a consistent definition
  - BUT the groups are smaller
- By enrollment year (02-03 through 06-07) across all grades, for THREE cohorts
  - HWSC enrollment practices more consistent
  - Groups are larger
  - BUT graduation standards will vary

Propensity score matching complexity
- Considered many variations:
  - Matched 1:1, 1:3, & 1:5 RCSD student(s) to each HWSC student
  - Studied on-time and on-time + 1 year graduation
  - 2 probability distributions: logit v. probit (sketched below)
- 108 model "runs" (12 variants by 9 cohorts)
- 95% confidence interval: the true value will lie within the interval 95% of the time
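The 12 variants per cohort are the cross of those choices: 3 matching ratios × 2 graduation windows × 2 link functions. The logit/probit pair might be run side by side as below; the data are hypothetical, and in practice the two links tend to produce nearly identical score rankings:

```python
# Fit the propensity model under both link functions and compare the scores.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
X = sm.add_constant(rng.normal(size=(800, 4)))   # intercept + 4 covariates
in_program = (rng.random(800) < 0.3).astype(int)

logit_ps = sm.Logit(in_program, X).fit(disp=0).predict(X)
probit_ps = sm.Probit(in_program, X).fit(disp=0).predict(X)
# Scores from the two links are usually nearly rank-identical, so the
# matched sets (and impact estimates) rarely differ much.
print(np.corrcoef(logit_ps, probit_ps)[0, 1])
```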

Final statistical comments
- Statistical significance: how often would this result occur by chance?
  - 95% confidence interval: given the size of the sample and an unbiased sampling procedure, the true "population parameter" will fall within this range 95 times out of 100
  - 99% confidence interval: the true "population parameter" will fall within this range 99 times out of 100
- "Effect size," or the importance of the result (both are computed in the sketch below)
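Both ideas on the invented graduation counts from earlier: a confidence interval for the difference in rates, and a standardized effect size:

```python
# 95% / 99% confidence intervals for a difference in graduation rates,
# plus Cohen's h as an effect size. Counts are fabricated.
import math
from scipy.stats import norm

grads_p, n_p = 130, 200
grads_c, n_c = 110, 200
p1, p2 = grads_p / n_p, grads_c / n_c
diff = p1 - p2
se = math.sqrt(p1 * (1 - p1) / n_p + p2 * (1 - p2) / n_c)

for level in (0.95, 0.99):
    z = norm.ppf(1 - (1 - level) / 2)
    print(f"{level:.0%} CI: {diff - z * se:+.3f} to {diff + z * se:+.3f}")

# Cohen's h for proportions (~0.2 small, ~0.5 medium, ~0.8 large)
h = 2 * math.asin(math.sqrt(p1)) - 2 * math.asin(math.sqrt(p2))
print(f"Cohen's h = {h:.2f}")
```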

Questions?