
1 A District-Initiated Appraisal of a State Assessment's Instructional Sensitivity
HOLDING ACCOUNTABILITY TESTS ACCOUNTABLE
Stephen C. Court
Presented in symposium at the American Educational Research Association (AERA) Annual Meeting, May 2, 2010, Denver, Colorado

2 Accountability
Basic premise: Teaching → Learning → Proficiency
High proficiency rates = Good schools
Low proficiency rates = Bad schools

3 Accountability
Basic assumption: State assessments distinguish well-taught students from not-so-well-taught students with enough accuracy to support accountability decisions.

4 Accountability
Q: Is the assumption warranted?
A: Only if the tests are instructionally sensitive. When tests are insensitive, accountability decisions are based on the wrong things, e.g., socioeconomic status (SES).

5 Kansas: SES

6 Kansas: Test Scores

7 Kansas: Exemplary by SES

8 The Situation in Kansas
Basic question: Can the instruction in low-poverty districts truly be that much better than the instruction in high-poverty districts? Or do instructionally irrelevant factors (such as SES) distort or mask the effects of instruction?

9 Multi-district Study
Purpose:
– To compare instructional sensitivity appraisal models and methods
– To appraise the instructional sensitivity of the Kansas state assessments
District-initiated because no state-level study had been undertaken
– Indicator-level analysis
– Loss/gain analysis, because there are no indicator-level cut scores
Based initially on the empirical approach recommended by Popham (2008)

10 Tactical Variations
A variety of practical constraints and preliminary findings raised several conceptual and methodological issues, and the original design underwent several revisions. The result was several tactical variations involving:
– data collection
– data array, analysis, and interpretation

11 Tactical Variations
See the paper for details. It:
– discusses the issues and design revisions
– provides an exegesis of the item-selection criteria and test-construction practices that yield instructional insensitivity
– describes, demonstrates, and compares the tactical variations employed in the collection, array, and analysis of the data, as well as in the interpretation of the results
Due to time constraints, let's focus just on the juiciest jewels…

12 Study Participants
575 teachers responded:
– 320 teachers (grades 3-5, reading and math)
– 129 reading teachers (grades 6-8)
– 126 math teachers (grades 6-8)
14,000 students
Only grade 5 reading is included in this study. To be reported in June at CCSSO in Detroit:
– other reading results (grades 3-8)
– all math results (grades 3-8)

13 A Gold Standard
By recommending that teachers be asked to identify their best-taught indicators, Popham (2008) transformed the instructional sensitivity issue in a fundamental way, both conceptually and operationally: for the first time since instructional sensitivity inquiries began about 40 years ago, there could now be a gold standard independent of the test itself. A huge breakthrough!

14 Old and New Model
Old model: A = Non-Learning; B = Learning; C = Slip; D = Maintain
New model: A = True Fail; B = False Pass (II-E, instructionally irrelevant easiness); C = False Fail (II-D, instructionally irrelevant difficulty); D = True Pass

15 Initial Analysis Scheme
Initial logic: If best-taught students outperform other students, the indicator is sensitive to instruction. If mean differences are small or in the wrong direction, the indicator is insensitive to instruction.
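
A minimal sketch of that initial scheme, with invented data and illustrative names (the paper does not prescribe any software); it simply compares mean indicator scores for best-taught versus other students:

```python
# Hypothetical sketch of the initial means-based logic
# (the approach that slides 16-17 go on to argue against).
from statistics import mean

best_taught_scores = [0.8, 0.7, 0.9, 0.6]  # invented indicator scores
other_scores       = [0.5, 0.6, 0.4, 0.7]

diff = mean(best_taught_scores) - mean(other_scores)
# Under the initial logic: large positive difference -> "sensitive",
# small or negative difference -> "insensitive".
print(f"mean difference = {diff:.2f}")
```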

16 Problem
But significant performance differences between best-taught and other students do not necessarily represent instructional sensitivity:
– Affluent students provided ineffective instruction typically end up in Cell B (false pass).
– Challenged students provided effective instruction typically end up in Cell C (false fail).

17 Problem
Thus: Means-based and DIF-driven (differential item functioning) approaches that evaluate between-group differences are not appropriate for appraising instructional sensitivity.
Instead: Focus on the degree to which indicators accurately distinguish effective from ineffective instruction, without confounding from instructionally irrelevant easiness or difficulty.

18 Conceptually Correct
Rather than comparing group differences in terms of means, let's look instead at the combined proportion of true fails and true passes. That is:
(A + D) / (A + B + C + D)
which can be shortened to:
(A + D) / N = Malta Index
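
A minimal sketch of the computation, assuming each student can be labeled both by whether the indicator was among the teacher's best-taught indicators and by whether the student passed it; the function name and data layout are illustrative, not from the paper:

```python
# Hypothetical sketch: classify each student into the new model's cells
# and compute (A + D) / N.

def malta_index(well_taught, passed):
    """Malta Index for one indicator.

    A = true fail  (not well taught, failed)
    B = false pass (not well taught, passed)  # II-E
    C = false fail (well taught, failed)      # II-D
    D = true pass  (well taught, passed)
    """
    cells = {"A": 0, "B": 0, "C": 0, "D": 0}
    for taught, ok in zip(well_taught, passed):
        if not taught and not ok:
            cells["A"] += 1
        elif not taught and ok:
            cells["B"] += 1
        elif taught and not ok:
            cells["C"] += 1
        else:
            cells["D"] += 1
    return (cells["A"] + cells["D"]) / sum(cells.values())

# The "totally sensitive" case from slide 20: 50 students in Cell A, 50 in Cell D.
print(malta_index([False] * 50 + [True] * 50,
                  [False] * 50 + [True] * 50))  # 1.0
```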

19 Malta Index
(A + D) / N ranges from 0 to 1 (completely insensitive to totally sensitive).
In practice, a value of .50 = chance, equivalent to random guessing.

20 Totally Sensitive
(A + D) / N = (50 + 50) / 100 = 1.0
A perfectly sensitive item or indicator would cluster students into Cell A or Cell D.

21 Totally Insensitive
(A + D) / N = (0 + 0) / 100 = 0.0
A perfectly insensitive test clusters students into Cell B or Cell C.

22 Useless
(A + D) / N = (25 + 25) / 100 = 0.50
0.50 = mere chance. An indicator that cannot distinguish true fails or passes from false fails or passes is totally useless, no better than random guessing.

23 Malta Index Parallels
The Malta Index is similar conceptually to:
– the Mann-Whitney U statistic
– the Wilcoxon rank-sum statistic
– the Area Under the Curve (AUC) in Receiver Operating Characteristic (ROC) curve analysis
But its interpretation is embedded in the context of instructional sensitivity appraisal.
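
One way to see the parallel, sketched with scikit-learn (an assumption; the paper does not prescribe any software) on invented toy data: against the well-taught gold standard, the Malta Index is plain classification accuracy, while AUC on the same binary pass/fail outcome weights the well-taught and not-well-taught groups equally, so the two can diverge when group sizes differ:

```python
# Toy illustration only; the data are invented.
from sklearn.metrics import accuracy_score, roc_auc_score

well_taught = [0, 0, 0, 0, 0, 0, 1, 1]  # gold standard (1 = best taught)
passed      = [0, 1, 0, 0, 0, 0, 1, 0]  # indicator outcome (1 = pass)

print(accuracy_score(well_taught, passed))  # Malta Index (A + D) / N = 0.75
print(roc_auc_score(well_taught, passed))   # AUC ~ 0.67: similar, not identical
```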

24 Malta Index
Compared to these other approaches, the Malta Index is easier to:
– compute
– understand
– interpret
Thus, it is more accessible conceptually to measurement novices, such as:
– teachers
– reporters
– policy-makers

25 ROC Analysis Malta Index values can be depicted graphically as ROC curves.
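
A minimal plotting sketch, assuming a continuous per-student indicator score and the matplotlib/scikit-learn stack (again an assumption, with invented data):

```python
import matplotlib.pyplot as plt
from sklearn.metrics import auc, roc_curve

well_taught = [0, 0, 0, 0, 1, 1, 1, 1]                  # gold standard
scores      = [0.2, 0.6, 0.3, 0.1, 0.8, 0.7, 0.4, 0.9]  # indicator scores

fpr, tpr, _ = roc_curve(well_taught, scores)
plt.plot(fpr, tpr, label=f"indicator (AUC = {auc(fpr, tpr):.2f})")
plt.plot([0, 1], [0, 1], "--", label="chance (0.50)")
plt.xlabel("False pass rate")  # Cell B relative to all not-well-taught students
plt.ylabel("True pass rate")   # Cell D relative to all well-taught students
plt.legend()
plt.show()
```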

26 Informal Evaluation
Malta Index values can be evaluated informally via acceptability criteria (Hosmer & Lemeshow, 2000):
– .90-1.0 = excellent (A)
– .80-.90 = good (B)
– .70-.80 = acceptable (C)
– .60-.70 = poor (D)
– .50-.60 = fail (F)
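
The bands quoted above translate directly into a small helper; a sketch only, and the function name is ours, not the paper's:

```python
def acceptability_grade(value):
    """Map a Malta Index (or AUC) value onto the Hosmer & Lemeshow bands."""
    if value >= 0.90:
        return "A (excellent)"
    if value >= 0.80:
        return "B (good)"
    if value >= 0.70:
        return "C (acceptable)"
    if value >= 0.60:
        return "D (poor)"
    if value >= 0.50:
        return "F (fail)"
    return "below chance"

print(acceptability_grade(0.54))  # F (fail)
```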

27 Indicator-level Results: Malta Index (MI) and AUC

             Teacher Ratings   Prior Data            Prior Data
             (Most vs. Less)   (Best vs. Not Best)   (Best vs. Worst)
Indicator    MI      AUC       MI      AUC           MI      AUC
1            .51     .56       .64
2            .50     .51       .54     .63           .64     .66
3            .50     .54       .56     .59
4            .57     .55       .62     .68
5            .53     .54       .72     .79
6            .52     .50       .61     .69
7            .53     .50       .56     .62           .63
8            .55     .53       .56     .59
9            .52     .54       .57     .64
10           .52     .57       .64
11           .51     .56       .59     .60           .68
12           .52     .50       .57     .63
13           .66     .52       .56     .58
14           .64     .58       .62
Average      .54     .53       .64     .59           .64     .65

28 Summary and Interpretations
AUC and the Malta Index yield very similar, but not identical, results.
The overall conclusions are identical: the grade 5 reading indicators lack instructional sensitivity.
– No indicator was graded better than a C.
– Most were in the poor-to-useless range.
– Averages ranged from poor to useless.

29 Summary and Interpretations
Low instructional sensitivity values for grade 5 reading were disappointing, especially given:
– the local contractor (CETE)
– guidance from the TAC (including Popham and Pellegrino)
– concerns from the KAAC (including Court)
If the Kansas assessments lack instructional sensitivity, what about other states' assessments?

30 Conclusion
Dear U.S. Department of Education: Please make instructional sensitivity…
– an essential component in reviews of Race to the Top (RTTT) funding applications
– a critical element in the approval process for state and consortium accountability plans
When the Department revised its Peer Review Guidance (2007) to include alignment as a critical element of technical quality, states were compelled to conduct alignment studies that they otherwise would not have conducted. Instructional sensitivity deserves similar federal endorsement.

31 Presenter's email: scourt1@cox.net
Questions, comments, or suggestions are welcome.

