
Gary W. Phillips Vice President and Chief Scientist American Institutes for Research August 3, 2015.




1 Gary W. Phillips Vice President and Chief Scientist American Institutes for Research August 3, 2015

2
 States are busy developing new content standards and new criterion-referenced tests that measure success on those standards.
 This activity is related to the federal No Child Left Behind (NCLB) legislation, which requires states to report annually on whether they are making adequate yearly progress (AYP) toward meeting state standards.
 However, when standard-setting panelists set performance standards, they generally have little knowledge of how their state's performance standards compare with those of other states.
 They also typically have no knowledge of how their state standards compare to national standards such as those used on the National Assessment of Educational Progress (NAEP).

3
 They also have no understanding of how their state performance standards compare with international standards, such as those used in TIMSS, PIRLS, and PISA.
 Consequently, states do not know whether their performance standards are internationally competitive, and they are often surprised to find that they have set low standards (compared with the highest-achieving countries in the world) when they thought they were setting high ones.
 My presentation will discuss why states should use international benchmarking as an integral part of their standard-setting process.

4
 Before I talk about the importance of international benchmarks, let me give you a brief tutorial on the three most commonly discussed international assessments that can be used for international benchmarking.

5
 TIMSS – Trends in International Mathematics and Science Study
 PIRLS – Progress in International Reading Literacy Study
 PISA – Programme for International Student Assessment

6
 TIMSS: 4-year cycle, Grades 4 & 8, Math & Science
◦ 1995 Math, Science
◦ 1999 Math, Science
◦ 2003 Math, Science
◦ 2007 Math, Science
◦ 2011 Math, Science
 PIRLS: 5-year cycle, Grade 4, Reading
◦ 2001 Reading
◦ 2006 Reading
◦ 2011 Reading
 PISA: 3-year cycle, Age 15, Reading, Math, Science (major domain listed first)
◦ 2000 Reading (Math, Science)
◦ 2003 Math (Reading, Science)
◦ 2006 Science (Reading, Math)
◦ 2009 Reading (Math, Science)
◦ 2012 Math (Reading, Science)
◦ 2015 Science (Reading, Math)

7
 Mathematics
◦ Number
◦ Algebra
◦ Geometry
◦ Measurement
◦ Data
 Science
◦ Life Science
◦ Chemistry
◦ Physics
◦ Earth Science
◦ Environmental Science

8
 Reading Comprehension
◦ Literary Experience
◦ Acquiring and Using Information

9
 Reading
◦ Interpreting Texts
◦ Reflection and Evaluation
◦ Retrieving Information
 Mathematics
◦ Quantity
◦ Space and Shape
◦ Change and Relationships
◦ Uncertainty
 Science
◦ Identifying Scientific Issues
◦ Explaining Phenomena Scientifically
◦ Using Scientific Evidence

10
 TIMSS & PIRLS
◦ 2-stage sampling (schools, classrooms)
◦ 150 schools (minimum)
◦ 1 (or 2) classrooms per school
◦ 4,500 students (minimum)
◦ Participation rates: 85% of schools, 80% of students
◦ Population coverage: 95%
 PISA
◦ 2-stage sampling (schools, students)
◦ 150 schools (minimum)
◦ 30 students per school
◦ 4,500 students (minimum)
◦ Participation rates: 85% of schools, 80% of students
◦ Population coverage: 95%
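The two-stage designs above can be sketched in code. This is a minimal illustration of the PISA-style variant (sample schools first, then students within each sampled school) using an invented population and simple random selection at both stages; operational designs actually select schools with probability proportional to size and apply sampling weights, which this sketch omits.

```python
import random

def two_stage_sample(schools, schools_needed=150, students_per_school=30, seed=0):
    """Illustrative two-stage sample: draw schools first, then draw
    students within each sampled school (PISA-style: schools -> students)."""
    rng = random.Random(seed)
    sampled_schools = rng.sample(schools, schools_needed)
    sample = []
    for school in sampled_schools:
        n = min(students_per_school, len(school["students"]))
        sample.extend(rng.sample(school["students"], n))
    return sample

# Hypothetical population: 400 schools of 60 students each.
population = [{"id": i, "students": [f"s{i}_{j}" for j in range(60)]}
              for i in range(400)]
print(len(two_stage_sample(population)))  # 150 schools x 30 students = 4500
```

This makes the arithmetic behind the minimums visible: 150 schools times 30 students per school is exactly the 4,500-student minimum quoted on the slide.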

11
 TIMSS
◦ 28 blocks (14 Math, 14 Science)
◦ 300 items (G4), 400 items (G8)
◦ 6 blocks per booklet
◦ Each student takes both subjects
◦ 70 minutes (G4), 90 minutes (G8) of testing time per student
 PIRLS
◦ 10 passages (5 literary, 5 informational), 120 items
◦ 2 passages per student
◦ 80 minutes of testing time per student
 PISA
◦ 13 clusters (7 Science, 2 Reading, 4 Math)
◦ 108 items (Science), 31 items (Reading), 48 items (Math)
◦ 4 clusters per booklet
◦ Each student takes all three subjects
◦ 120 minutes of testing time per student
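The booklet designs above are a form of matrix sampling: no single student takes all the items, yet every item gets covered. A toy rotation, assuming evenly spaced starting blocks (the operational TIMSS design is a more elaborate balanced-incomplete-block layout), shows how 28 blocks can be spread across 14 six-block booklets with equal coverage:

```python
from collections import Counter

def rotated_booklets(n_blocks=28, blocks_per_booklet=6, n_booklets=14):
    """Toy matrix-sampling design: each booklet holds consecutive blocks
    (wrapping around the end), with starting positions spaced evenly so
    every block lands in the same number of booklets."""
    step = n_blocks // n_booklets  # spacing between booklet start blocks
    return [[(b * step + k) % n_blocks for k in range(blocks_per_booklet)]
            for b in range(n_booklets)]

booklets = rotated_booklets()
counts = Counter(block for booklet in booklets for block in booklet)
# 14 booklets x 6 blocks = 84 slots spread over 28 blocks,
# so every block appears in exactly 3 booklets.
```

The design choice matters for what the slide describes: each student sees only 6 of 28 blocks, but pooled across booklets the full item bank is administered, which is what lets the assessment cover 300–400 items in 70–90 minutes of per-student testing time.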

12–13 [slide images not included in transcript]

14
 Each state develops its own academic content standards.
 Each state develops its own performance standards.
 Each state develops its own test.
 Each state reports adequate yearly progress to the federal government based on
◦ different content standards,
◦ different performance standards,
◦ different tests.

15
 Data from the 50 states are not comparable.
◦ No inference about national progress is possible.
◦ We cannot tell if progress in one state is better than progress in another.
 From a scientific point of view, the system lacks transparency.
◦ Transparency in measurement is the first and most fundamental requirement for progress in science.
◦ Common metrics are needed for transparency.
◦ The ability to derive comparable measures from different measuring devices is the very definition of transparency.

16
 We compare each state achievement standard to a high common international standard.
 This gives us a comparable metric against which state standards from different tests can be compared.
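One simple way to make such a comparison concrete is an equipercentile link: find the percentile rank of the state's cut score on its own test, then read off the score at that same percentile on the international scale. This is a bare-bones sketch with invented score distributions, offered only to illustrate the idea; the operational benchmarking methodology is a more sophisticated statistical moderation and is not shown here.

```python
from bisect import bisect_left

def percentile_of(score, sorted_scores):
    """Fraction of scores falling below `score` (simple percentile rank)."""
    return bisect_left(sorted_scores, score) / len(sorted_scores)

def link_cut_score(state_cut, state_scores, intl_scores):
    """Map a state cut score onto the international scale at the same
    percentile rank (a bare-bones equipercentile link)."""
    state_scores, intl_scores = sorted(state_scores), sorted(intl_scores)
    p = percentile_of(state_cut, state_scores)
    idx = min(int(p * len(intl_scores)), len(intl_scores) - 1)
    return intl_scores[idx]

# Hypothetical data: state test scored 0-99, international scale 400-598.
print(link_cut_score(50, range(100), range(400, 600, 2)))  # 500
```

The result says, in effect, "a student scoring exactly at this state's proficiency cut would score about X on the international scale," which is what lets cut scores from different state tests be placed on one common yardstick.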

17–26 [slide images not included in transcript]

27
 In all three grades and subjects:
◦ The correlation between the performance standard and percent proficient was about -.80.
◦ This means about 64% of the variance in results reported under NCLB is due to how high or low the states set their performance standards.
◦ The Expectation Gap (the difference between the highest and lowest performance standard) was about two standard deviations (twice the size of the Achievement Gap).
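The variance figure follows directly from the correlation: the proportion of variance explained is the squared correlation (the coefficient of determination), so r = -.80 implies r² ≈ .64.

```python
r = -0.80           # correlation between performance standard and percent proficient
r_squared = r ** 2  # proportion of variance explained (coefficient of determination)
print(f"{r_squared:.0%}")  # prints "64%"
```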

28
 The fundamental problem with the existing state testing paradigm is lack of transparency.
◦ Expectations are low in most states.
◦ The public is misled, because high levels of proficiency are often obtained by using low standards.
◦ Students in states with low standards do not have the opportunity to learn challenging content.
 International benchmarking provides a mechanism for calibrating the difficulty of each state performance standard and an external referent for gauging its global competitiveness.

29
 Envisioned by the CCSSO/NGA Common Standards.
 Promoted by the Race to the Top Assessment Program competition.

30–32 [slide images not included in transcript]

33
 When the data were reanalyzed on a level playing field, there was a dramatic drop among the states reporting the highest levels of proficiency. Using Grade 4 mathematics as an example:
◦ Tennessee and Colorado dropped from 90% and 91% proficient to 29% and 40%, respectively.
◦ Massachusetts went from being one of the states with the lowest percent proficient (49%) under NCLB to the highest-achieving state, with 63% proficient.
◦ This is consistent with NAEP results, which indicate that Massachusetts is the highest-achieving state in the nation.

