
1 Iowa’s Application of Rubrics to Evaluate Screening and Progress Tools John L. Hosp, PhD University of Iowa

2 Overview of this Webinar
Share rubrics for evaluating screening and progress tools
Describe the process the Iowa Department of Education used to apply the rubrics

3 Purpose of the Review
Survey universal screening and progress tools currently being used by LEAs in Iowa
Review these tools for technical adequacy
Incorporate one tool into the new state data system
Provide access to tools for all LEAs in the state

4 Collaborative Effort
The National Center on Response to Intervention

5 Structure of the Review Process
Core Group: IDE staff responsible for administration and coordination of the effort
Vetting Group: Other IDE staff as well as stakeholders from LEAs, AEAs, and IHEs from across the state
Work Group: IDE and AEA staff who conducted the actual reviews

6 Overview of the Review Process
The work group was divided into 3 groups; within each group, members worked in pairs.
Group A: Key elements of tools (name, what it measures, grades it is used with, how it is administered, cost, time to administer)
Group B: Technical features (reliability, validity, classification accuracy, relevance of criterion measure)
Group C: Application features (alignment with CORE, training time, computer system feasibility, turnaround time for data, sample, disaggregated data)

7 Overview of the Review Process
Each pair:
▫ had a copy of the materials needed to conduct the review
▫ reviewed and scored their parts together, then swapped with the other pair in their group
Pairs within each group met only if there were discrepancies in scoring
▫ A lead person from one of the other groups participated to mediate reconciliation
This allowed each tool to be reviewed by every work group member

8 Overview of the Review Process
All reviews will be completed and brought to a full work group meeting
Results will be compiled and shared
Final determinations across groups for each tool will be shared with the vetting group two weeks later
The vetting group will have one month to review the information and provide feedback to the work group

9 Structure and Rationale of Rubrics
Separate rubrics for universal screening and progress monitoring
▫ Many tools reviewed for both
▫ Different considerations
Common header and descriptive information
Different criteria for each group (a, b, c)

10 Universal Screening Rubric
Header on cover page:
Iowa Department of Education Universal Screening Rubric for Reading (Revised 10/24/11)
What is a Universal Screening Tool in Reading? It is a tool administered at school with ALL students to identify which students are at risk for reading failure on an outcome measure. It is NOT a placement screener and would not be used with just one group of students (e.g., a language screening test).
Why use a Universal Screening Tool? It tells you which students are at risk of not performing at the proficient level on an end-of-year outcome measure. These students need something more and/or different to increase their chances of becoming proficient readers.
What feature is most critical? Classification accuracy, because it demonstrates how well a tool predicts who may and may not need something more. It is critical that universal screening tools identify the correct students with the greatest degree of accuracy so that resources are allocated appropriately and students who need additional assistance get it.

11 Group A

12 Group B

13 Judging Criterion Measure
Additional Sheet for Judging the External Criterion Measure (Revised 10/24/11)
Used for (circle all that apply): Screening: Classification Accuracy / Screening: Criterion Validity / Progress Monitoring: Criterion Validity
Name of Criterion Measure: Gates
How Criterion Administered (circle one): Group or Individual
Information relied on to make determinations (circle all that apply): Manual from publisher / NCRtI Tool Chart / Buros Mental Measurements Yearbook / On-line publisher info / Outside resource other than publisher or researcher of measure
1. An appropriate Criterion Measure is:
a) External to the screening or progress monitoring tool
b) A broad skill rather than a specific skill
c) Technically adequate for reliability
d) Technically adequate for validity
e) Validated on a broad sample that would also represent Iowa’s population

14 Judging Criterion Measure (cont)

15

16 Judging Classification Accuracy

17 Judging Classification Accuracy (cont)

18 Sensitivity and Specificity: Considerations and Explanations
Key: + = proficiency/mastery; - = nonproficiency/at-risk; 0 = unknown
Explanations: “True” means “in agreement between screening and outcome,” so a true result can be negative-to-negative in terms of student performance (i.e., negative meaning at-risk or nonproficient). This could be considered either positive or negative prediction, depending on which the developer intends the tool to predict. As an example, a tool whose primary purpose is identifying students at risk for future failure would probably use “true positives” to mean “those students who were accurately predicted to fail the outcome test.”
Sensitivity = true positives / (true positives + false negatives)
Specificity = true negatives / (true negatives + false positives)
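A minimal sketch of these two formulas in Python (the function name, variable names, and cell counts below are illustrative assumptions, not taken from the rubric):

    def sensitivity_specificity(true_pos, false_neg, true_neg, false_pos):
        # Sensitivity: of the students who truly fall in the predicted outcome
        # category, the proportion the screener flagged correctly.
        sensitivity = true_pos / (true_pos + false_neg)
        # Specificity: of the students in the other outcome category,
        # the proportion the screener correctly left unflagged.
        specificity = true_neg / (true_neg + false_pos)
        return sensitivity, specificity

    # Hypothetical 2x2 table, oriented negative-to-negative ("positive" = at-risk):
    # 18 at-risk students correctly flagged, 7 missed,
    # 60 proficient students correctly cleared, 15 flagged unnecessarily.
    sens, spec = sensitivity_specificity(18, 7, 60, 15)
    print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}")  # 0.72, 0.80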

19 Consideration 1: Determine whether the developer is predicting a positive outcome (i.e., proficiency, success, mastery, at or above a criterion or cut score) from positive performance on the screening tool (i.e., at or above a benchmark, criterion, or cut score), or a negative outcome (i.e., failure, nonproficiency, below a criterion or cut score) from negative performance on the screening tool (i.e., below a benchmark, criterion, or cut score). Prediction is almost always positive-to-positive or negative-to-negative; however, in rare cases it might be positive-to-negative or negative-to-positive.
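Continuing the hypothetical sketch above, flipping which performance level counts as “positive” swaps the roles of all four cells, so sensitivity and specificity trade places:

    # Same hypothetical table, opposite orientation ("positive" = proficient).
    # Former true positives become true negatives and vice versa.
    sens2, spec2 = sensitivity_specificity(60, 15, 18, 7)
    print(f"sensitivity = {sens2:.2f}, specificity = {spec2:.2f}")  # 0.80, 0.72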

20

21

22 Group B (cont)

23 Group C

24 Group C (cont)

25

26 Progress Monitoring Rubric
Header on cover page
Descriptive info on each work group’s section

27 Group A

28 Group B

29 Group B (cont)

30

31 Group C

32 Group C (cont)

33

34 Findings
Many of the tools reported are not sufficient (or appropriate) for universal screening or progress monitoring
Some tools are appropriate for both
No tool (so far) is “perfect”
There are alternatives from which to choose

35 Live Chat
Thursday, April 26, :00-3:00 EDT
Go to rti4success.org for more details

