
1 Rate of Improvement Version 2.0: Research-Based Calculation and Decision Making. Caitlin S. Flinn, EdS, NCSP; Andrew E. McCrea, MS, NCSP; Matthew Ferchalk, EdS, NCSP. ASPP Conference 2010

2 Today's Objectives
Explain what RoI is, why it is important, and how to compute it.
Establish that Simple Linear Regression should be the standardized procedure for calculating RoI.
Discuss how to use RoI within a problem solving/school improvement model.

3 RoI Definition
Algebraic term: slope of a line
Vertical change over the horizontal change; rise over run
m = (y2 - y1) / (x2 - x1)
Describes the steepness of a line (Gall & Gall, 2007)
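The rise-over-run formula can be sketched in a few lines of code (Python here, purely illustrative; the week numbers and scores are made up):

```python
def two_point_slope(x1, y1, x2, y2):
    """Slope of the line through (x1, y1) and (x2, y2): rise over run."""
    return (y2 - y1) / (x2 - x1)

# Hypothetical: a student reads 40 wcpm in week 2 and 58 wcpm in week 11
m = two_point_slope(2, 40, 11, 58)  # (58 - 40) / (11 - 2) = 2.0 wcpm per week
```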

4 RoI Definition
Finding a student's RoI = finding the slope of a line
Using two data points on that line, or
Finding the line itself: linear regression (Ordinary Least Squares)
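Finding the line itself, rather than connecting two points, means fitting an ordinary least squares regression. A minimal sketch of the OLS slope calculation (the weekly scores are hypothetical):

```python
def ols_slope(xs, ys):
    """Ordinary least squares slope: cov(x, y) / var(x)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sxx = sum((x - mean_x) ** 2 for x in xs)
    return sxy / sxx

weeks = [1, 2, 3, 4, 5]          # school weeks (hypothetical)
scores = [10, 12, 15, 15, 18]    # wcpm each week (hypothetical)
print(ols_slope(weeks, scores))  # 1.9 wcpm gained per week, on average
```

This is the same calculation Excel performs with its SLOPE function, used later in this presentation.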

5 How Does Rate of Improvement Fit into the Larger Context?

6 School Improvement/Comprehensive School Reform → Response to Intervention → Dual Discrepancy: Level & Growth → Rate of Improvement

7 School Improvement/Comprehensive School Reform
Grade level content expectations (ELA, math, science, social studies, etc.).
Work toward these expectations through classroom instruction.
Understand impact of instruction through assessment.

8 Assessment
Formative Assessments/High Stakes Tests: Does the student have command of the content expectation (standard)?
Universal Screening using CBM: Does the student have basic skills appropriate for age/grade?

9 Assessment
Q: For students who are not proficient on grade-level content standards, do they have the basic reading/writing/math skills necessary?
A: Look at Universal Screening; if above the criterion, gear intervention toward the content standard; if below the criterion, gear intervention toward the basic skill.

10 Progress Monitoring
Frequent measurement of knowledge to inform our understanding of the impact of instruction/intervention.
Measures of basic skills (CBM) have demonstrated reliability & validity (see table at www.rti4success.org).

11 Flowchart: Classroom Instruction (Content Expectations) → Measure Impact (Test) → Proficient! or Non-Proficient. If Non-Proficient, use a diagnostic test to differentiate a Content Need from a Basic Skill Need. Basic Skill Need → Intervention → Progress Monitor with CBM → Rate of Improvement. Content Need → Intervention → Progress Monitor if CBM is an appropriate measure.

12 So…
Rate of Improvement (RoI) is how we understand student growth (learning).
RoI is reliable and valid (psychometrically speaking) for use with CBM data.
RoI is best used when we have CBM data, most often when dealing with basic skills in reading/writing/math.
RoI can be applied to other data (like behavior) with confidence too!
RoI is not yet tested on typical Tier I formative classroom data.

13 RoI is usually applied to…
Tier One students in the early grades at risk for academic failure (low green kids).
Tier Two & Three Intervention Groups.
Special Education Students (and IEP goals).
Students with Behavior Plans.

14 RoI Foundations: Deno, 1985
Curriculum-based measurement; general outcome measures: short, standardized, repeatable, sensitive to change.

15 RoI Foundations: Fuchs & Fuchs, 1998
Hallmark components of Response to Intervention: ongoing formative assessment, identifying non-responding students, treatment fidelity of instruction.
Dual discrepancy model: one standard deviation from typically performing peers in level and rate.

16 RoI Foundations: Ardoin & Christ, 2008
Slope for benchmarks (3x per year): more growth from fall to winter than winter to spring.
Might be helpful to use one RoI for fall to winter and a separate RoI for winter to spring.

17 RoI Foundations: Fuchs, Fuchs, Walz, & Germann, 1993
Typical weekly growth rates.
Needed growth: 1.5 to 2.0 times typical slope to close the gap in a reasonable amount of time.

18 RoI Foundations: Deno, Fuchs, Marston, & Shin, 2001
Slope of frequently non-responsive children approximated the slope of children already identified as having a specific learning disability.

19 RoI & Statistics
Gall & Gall, 2007: 10 data points are a minimum requirement for a reliable trendline.
How does that affect the frequency of administering progress monitoring probes?

20 Importance of Graphs
Vogel, Dickson, & Lehman, 1990: speeches that included visuals, especially in color, improved:
Immediate recall by 8.5%
Delayed recall (3 days) by 10.1%

21 Importance of Graphs
"Seeing is believing." Useful for communicating large amounts of information quickly.
"A picture is worth a thousand words." Transcends language barriers (Karwowski, 2006).
Responsibility for accurate graphical representations of data.

22 Skills Typically Graphed
Reading: Oral Reading Fluency, Word Use Fluency, Reading Comprehension (MAZE), Retell Fluency
Early Literacy Skills: Initial Sound Fluency, Letter Naming Fluency, Letter Sound Fluency, Phoneme Segmentation Fluency, Nonsense Word Fluency
Spelling; Written Expression; Behavior
Math: Math Computation, Math Facts
Early Numeracy: Oral Counting, Missing Number, Number Identification, Quantity Discrimination

23 Importance of RoI
Visual inspection of slope; multiple interpretations; instructional services; need for explicit guidelines.

24 Ongoing Research
RoI for instructional decisions is not a perfect process. Research is currently addressing sources of error:
Christ, 2006: standard error of measurement for slope
Ardoin & Christ, 2009: passage difficulty and variability
Jenkins, Graff, & Miglioretti, 2009: frequency of progress monitoring

25 Future Considerations
Questions yet to be empirically answered:
What parameters of RoI indicate a lack of RtI?
How does standard error of measurement play into using RoI for instructional decision making?
How does RoI vary between standard protocol interventions?
How does this apply to non-English speaking populations?

26 How is RoI Calculated? Which way is best?

27 Multiple Methods for Calculating Growth
Visual Inspection Approaches: "Eye Ball" Approach, Split Middle Approach, Tukey Method
Quantitative Approaches: Last point minus First point Approach, Split Middle & Tukey "plus," Linear Regression Approach

28 The Visual Inspection Approaches

29 Eye Ball Approach

30 Split Middle Approach
Drawing "through the two points obtained from the median data values and the median days when the data are divided into two sections" (Shinn, Good, & Stein, 1989).
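The split-middle line can be quantified by splitting the series in half and connecting the median point of each half. A sketch under that reading of the method (the data are hypothetical; with an even-sized half, the median falls between the two middle values):

```python
import statistics

def split_middle_slope(weeks, scores):
    """Slope of the line through the median point of each half of the series."""
    mid = len(weeks) // 2
    # Median week and median score within each half
    x1, y1 = statistics.median(weeks[:mid]), statistics.median(scores[:mid])
    x2, y2 = statistics.median(weeks[mid:]), statistics.median(scores[mid:])
    return (y2 - y1) / (x2 - x1)

weeks = [1, 2, 3, 4, 5, 6, 7, 8]          # hypothetical
scores = [9, 11, 10, 12, 13, 12, 15, 16]  # hypothetical wcpm
print(split_middle_slope(weeks, scores))  # 0.875 wcpm per week
```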

31 Split Middle [graph: median points marked X(9) and X(14)]

32 Tukey Method
Divide scores into 3 equal groups. Divide the groups with vertical lines. In the 1st and 3rd groups, find the median data point and median week and mark each with an "X." Draw a line between the two "Xs."
(Fuchs et al., 2005, Summer Institute: Student progress monitoring for math. http://www.studentprogress.org/library/training.asp)
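The Tukey steps above can be sketched in code: take the median point of the first and third groups and compute the slope between them. This sketch assumes the series divides into three equal thirds; the data are hypothetical:

```python
import statistics

def tukey_slope(weeks, scores):
    """Slope through the median points of the first and third thirds of the data."""
    third = len(weeks) // 3
    x1 = statistics.median(weeks[:third])    # median week, first group
    y1 = statistics.median(scores[:third])   # median score, first group
    x2 = statistics.median(weeks[-third:])   # median week, third group
    y2 = statistics.median(scores[-third:])  # median score, third group
    return (y2 - y1) / (x2 - x1)

weeks = [1, 2, 3, 4, 5, 6, 7, 8, 9]          # hypothetical
scores = [8, 10, 9, 11, 12, 12, 13, 15, 14]  # hypothetical wcpm
print(tukey_slope(weeks, scores))            # about 0.83 wcpm per week
```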

33 Tukey Method [graph: median points marked X(8) and X(14)]

34 The Quantitative Approaches

35 Last minus First
Iris Center: last probe score minus first probe score, over last administration period minus first administration period.
(Y2 - Y1) / (X2 - X1) = RoI
http://iris.peabody.vanderbilt.edu/resources.html

36 Last minus First

37 Split Middle "Plus" [graph: X(9), X(14)]; (14 - 9) / 8 = 0.63

38 Tukey Method "Plus" [graph: X(8), X(14)]; (14 - 8) / 8 = 0.75

39 Linear Regression

40 RoI Consistency?
Any Method of Visual Inspection: ???
Last minus First: 0.75
Split Middle "Plus": 0.63
Tukey "Plus": 0.75
Linear Regression: 1.10
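The point of this slide, that different calculation methods yield different RoIs from the same data, can be demonstrated directly. A sketch comparing the two quantitative methods on one made-up series (not the slide's data):

```python
def last_minus_first(weeks, scores):
    """Two-point slope: last probe minus first probe over elapsed weeks."""
    return (scores[-1] - scores[0]) / (weeks[-1] - weeks[0])

def ols_slope(weeks, scores):
    """Ordinary least squares slope over all data points."""
    n = len(weeks)
    mx, my = sum(weeks) / n, sum(scores) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(weeks, scores))
    sxx = sum((x - mx) ** 2 for x in weeks)
    return sxy / sxx

weeks = [1, 2, 3, 4, 5, 6, 7, 8]          # hypothetical
scores = [20, 24, 22, 28, 26, 31, 29, 35]  # hypothetical wcpm

print(round(last_minus_first(weeks, scores), 2))  # 2.14
print(round(ols_slope(weeks, scores), 2))         # 1.85
```

The same student, the same scores, two different growth statistics: exactly the consistency problem described on this slide.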

41 RoI Consistency?
If we are not all using the same model to compute RoI, we keep the problems of past models: under one approach a student meets SLD criteria, but under a different approach the same student does not.
Hypothetically, if the RoI cut-off were 0.65 or 0.95, different approaches would come to different conclusions about the same student.

42 RoI Consistency?
Last minus First (Iris Center) and Linear Regression (Shinn, etc.) are the only quantitative methods discussed in the CBM literature.
Study of 37 at-risk 2nd graders; difference in RoI between the LmF and LR methods:
Whole Year: 0.26 WCPM; Fall: 0.31 WCPM; Spring: 0.24 WCPM
(McCrea, 2010, unpublished data)

43 Technical Adequacy
Without a consensus on how to compute RoI, we risk falling short of having technical adequacy within our model.

44 So, Which RoI Method is Best?

45 Literature shows that Linear Regression is Best Practice
"Student's daily test scores…were entered into a computer program…The data analysis program generated slopes of improvement for each level using an Ordinary-Least Squares procedure (Hayes, 1973) and the line of best fit. This procedure has been demonstrated to represent CBM achievement data validly within individual treatment phases (Marston, 1988; Shinn, Good, & Stein, in press; Stein, 1987)."
(Shinn, Gleason, & Tindal, 1989)

46 Growth (RoI) Research using Linear Regression
Christ, T. J. (2006). Short-term estimates of growth using curriculum-based measurement of oral reading fluency: Estimating standard error of the slope to construct confidence intervals. School Psychology Review, 35, 128-133.
Deno, S. L., Fuchs, L. S., Marston, D., & Shin, J. (2001). Using curriculum-based measurement to establish growth standards for students with learning disabilities. School Psychology Review, 30, 507-524.
Good, R. H. (1990). Forecasting accuracy of slope estimates for reading curriculum-based measurement: Empirical evidence. Behavioral Assessment, 12, 179-193.
Fuchs, L. S., Fuchs, D., Hamlett, C. L., Walz, L., & Germann, G. (1993). Formative evaluation of academic progress: How much growth can we expect? School Psychology Review, 22, 27-48.

47 Growth (RoI) Research using Linear Regression
Jenkins, J. R., Graff, J. J., & Miglioretti, D. L. (2009). Estimating reading growth using intermittent CBM progress monitoring. Exceptional Children, 75, 151-163.
Shinn, M. R., Gleason, M. M., & Tindal, G. (1989). Varying the difficulty of testing materials: Implications for curriculum-based measurement. The Journal of Special Education, 23, 223-233.
Shinn, M. R., Good, R. H., & Stein, S. (1989). Summarizing trend in student achievement: A comparison of methods. School Psychology Review, 18, 356-370.

48 So, Why Are There So Many Other RoI Models?
Ease of application.
Focus on Yes/No to goal acquisition, not degree of growth.
How many of us want to calculate OLS Linear Regression formulas (or even remember how)?

49 Pros and Cons of Each Approach
Eye Ball. Pros: easy, understandable. Cons: subjective.
Split Middle & Tukey. Pros: no software needed; compare to aim/goal line. Cons: yes/no to goal acquisition only; no statistic provided, no idea of the degree of growth.

50 Pros and Cons of Each Approach
Last minus First. Pros: provides a growth statistic; easy to compute. Cons: does not consider all data points, only two.
Split Middle & Tukey "Plus". Pros: considers all data points; easy to compute. Cons: no support for the "plus" part of the methodology.
Linear Regression. Pros: uses all data points; best practice. Cons: calculating the statistic.

51 An Easy and Applicable Solution

52 Get Out Your Laptops! Open Microsoft Excel I love ROI

53 Graphing RoI For Individual Students Programming Microsoft Excel to Graph Rate of Improvement: Fall to Winter

54 Setting Up Your Spreadsheet
In cell A1, type 3rd Grade ORF
In cell A2, type First Semester
In cell A3, type School Week
In cell A4, type Benchmark
In cell A5, type the Student's Name (Swiper Example)

55 Labeling School Weeks
Starting with cell B3, type numbers 1 through 18 going across row 3 (horizontal).
Numbers 1 through 18 represent the number of the school week.
You will end with week 18 in cell S3.

56 Labeling Dates
Note: You may choose to enter the date of that school week across row 2 to easily identify the school week.

57 Entering Benchmarks (3rd Grade ORF)
In cell B4, type 77. This is your fall benchmark.
In cell S4, type 92. This is your winter benchmark.

58 Entering Student Data (Sample)
Enter the following numbers across row 5, under the corresponding week numbers:
Week 1: 41; Week 8: 62; Week 9: 63; Week 10: 75; Week 11: 64; Week 12: 80; Week 13: 83; Week 14: 83; Week 15: 56; Week 17: 104; Week 18: 74

59 *CAUTION*
If a student was not assessed during a certain week, leave that cell blank.
Do not enter a score of zero (0); it will be calculated into the trendline and interpreted as the student having read zero words correct per minute during that week.
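This caution matters numerically: Excel's SLOPE ignores blank cells, but a typed 0 becomes a real data point that drags the trendline down. A Python sketch of the difference, using None for a missed week (the scores are hypothetical):

```python
def ols_slope(weeks, scores):
    """Ordinary least squares slope, as Excel's SLOPE computes it."""
    n = len(weeks)
    mx, my = sum(weeks) / n, sum(scores) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(weeks, scores))
    sxx = sum((x - mx) ** 2 for x in weeks)
    return sxy / sxx

weeks = [1, 2, 3, 4, 5]
scores = [10, 12, 13, None, 18]  # student not assessed in week 4 (hypothetical)

# Correct: drop the missing week entirely, as a blank Excel cell would be
paired = [(w, s) for w, s in zip(weeks, scores) if s is not None]
good = ols_slope([w for w, _ in paired], [s for _, s in paired])

# Wrong: a typed 0 looks like the student read zero words that week
bad = ols_slope(weeks, [0 if s is None else s for s in scores])

print(round(good, 2), round(bad, 2))  # 1.97 0.4 -- the zero flattens the slope
```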

60 Graphing the Data
Highlight cells A4 and A5 through S4 and S5.
Follow the Excel 2003 or Excel 2007 directions from here.

61 Graphing the Data
Excel 2003: Across the top of your worksheet, click on "Insert." In that drop-down menu, click on "Chart."
Excel 2007: Click Insert. Find the icon for Line. Click the arrow below Line.

62 Graphing the Data
Excel 2003: A Chart Wizard window will appear.
Excel 2007: 6 graphics appear.

63 Graphing the Data
Excel 2003: Choose "Line," then "Line with markers…"
Excel 2007: Choose "Line with markers."

64 Graphing the Data
Excel 2003: "Data Range" tab; "Columns."
Excel 2007: Your graph appears.

65 Graphing the Data
Excel 2003: "Chart Title"; "School Week" X axis; "WPM" Y axis.
Excel 2007: Change your labels by right clicking on the graph.

66 Graphing the Data
Excel 2003: Choose where you want your graph.
Excel 2007: Your graph was automatically put into your data spreadsheet.

67 Graphing the Trendline
Excel 2003: Right click on any of the student data points.
Excel 2007: (shown on slide)

68 Graphing the Trendline
Excel 2003: Choose "Linear."
Excel 2007: (shown on slide)

69 Graphing the Trendline
Excel 2003: Choose "Custom" and check the box next to "Display equation on chart."
Excel 2007: (shown on slide)

70 Graphing the Trendline
Clicking on the equation highlights a box around it.
Clicking on the box allows you to move it to a place where you can see it better.

71 Graphing the Trendline
You can repeat the same procedure to have a trendline for the benchmark data points.
Suggestion: label the trendline Expected RoI and move this equation under the first.

72 Individual Student Graph

73 The equation indicates the slope, or rate of improvement. The number, or coefficient, before "x" is the average improvement, which in this case is the average number of words per minute per week gained by the student.

74 Individual Student Graph
The rate of improvement, or trendline, is calculated using simple linear regression (ordinary least squares).
To add progress monitoring/benchmark scores once you've already created a graph, enter the additional scores in Row 5 under the corresponding school week.

75 Individual Student Graph
The slope can change depending on which week (where) you put the benchmark scores on your chart.
Enter benchmark scores based on when your school administers its benchmark assessments for the most accurate depiction of expected student progress.

76 Assuming Linear Growth… …Finding Curvilinear Growth. Why Graph Only 18 Weeks at a Time?

77 Non-Educational Example of Curvilinear Growth

78 Academic Example of Curvilinear Growth

79 McCrea, 2010
Looked at Rate of Improvement in a small 2nd grade sample.
Found differences in RoI when computed for fall and spring:
Average RoI for fall: 1.47 WCPM
Average RoI for spring: 1.21 WCPM

80 Ardoin & Christ, 2008
Slope for benchmarks (3x per year): more growth from fall to winter than winter to spring.

81 Christ, Yeo, & Silberglitt, in press
Growth across benchmarks (3x per year): more growth from fall to winter than winter to spring.
Disaggregated special education population.

82 Graney, Missall, & Martinez, 2009
Growth across benchmarks (3x per year): more growth from winter to spring than fall to winter with R-CBM.

83 Fien, Park, Smith, & Baker, 2010
Investigated the relationship between NWF gains and ORF/comprehension.
Found greater NWF gains in fall than in spring.

84 DIBELS (6th ed.) ORF Change in Criteria
Grade: Fall to Winter / Winter to Spring
2nd: 24 / 22
3rd: 15 / 18
4th: 13 / 13
5th: 11 / 9
6th: 11 / 5

85 AIMSweb Norms Based on 50th Percentile
Grade: Fall to Winter / Winter to Spring
1st: 18 / 31
2nd: 25 / 17
3rd: 22 / 15
4th: 16 / 13
5th: 17 / 15
6th: 13 / 12

86 Speculation as to Why RoI Differs within the Year
Relaxed instruction after high-stakes testing in March/April; a PSSA effect.
Depressed BOY benchmark scores due to summer break; a rebound effect (Clemens).
Instructional variables could explain the differences between the Graney et al. (2009), Ardoin & Christ (2008), and Christ et al. (in press) results (Silberglitt).
Variability within progress monitoring probes (Ardoin & Christ, 2008) (Lent).

87 Programming Excel
Calculating Needed RoI
Calculating Actual (Expected) RoI: Benchmark
Calculating Actual RoI: Student

88 Calculating Needed RoI
In cell T3, type Needed RoI.
Click on cell T5.
In the fx line (at top of sheet), type this formula: =((S4-B5)/18)
Then hit enter.
Your result should read approximately 2.83, i.e., (92 - 41) / 18.
This formula subtracts the student's actual beginning-of-year (BOY) score from the expected middle-of-year (MOY) benchmark, then divides by 18 for the first 18 weeks (1st semester).

89 Calculating Actual (Expected) RoI: Benchmark
In cell U3, type Actual RoI.
Click on cell U4.
In the fx line (at top of sheet), type this formula: =SLOPE(B4:S4,B3:S3)
Then hit enter.
With the benchmarks entered above (77 in week 1, 92 in week 18), the result is approximately 0.88.
This formula considers the benchmark data across the 18 weeks and provides an average growth, or change, per week.

90 Calculating Actual RoI: Student
Click on cell U5.
In the fx line (at top of sheet), type this formula: =SLOPE(B5:S5,B3:S3)
Then hit enter.
With the sample scores entered above, the result is approximately 2.51 (blank weeks are ignored by SLOPE).
This formula considers 18 weeks of student data and provides an average growth, or change, per week.

91 RoI as a Decision Tool within a Problem-Solving Model

92 Steps
1. Gather the data
2. Ground the data & set goals
3. Interpret the data
4. Figure out how to fit Best Practice into Public Education

93 Step 1: Gather Data Universal Screening Progress Monitoring

94 Common Screenings in PA
DIBELS, AIMSweb, MBSP, 4Sight, PSSA

95 Validated Progress Monitoring Tools
DIBELS, AIMSweb, MBSP
www.studentprogress.org

96 Step 2: Ground the Data 1) To what will we compare our student growth data? 2) How will we set goals?

97 Multiple Ways to Look at Growth
Needed Growth
Expected Growth & Percent of Expected Growth
Fuchs et al. (1993) Table of Realistic and Ambitious Growth
Growth Toward Individual Goal*
*Best Practices in Setting Progress Monitoring Goals for Academic Skill Improvement (Shapiro, 2008)

98 Needed Growth
Difference between the student's BOY (or MOY) score and the benchmark score at MOY (or EOY).
Example: MOY ORF = 10, EOY benchmark is 40, 18 weeks of instruction: (40 - 10) / 18 = 1.67. The student must gain 1.67 wcpm per week to make the EOY benchmark.

99 Expected Growth
Difference between two benchmarks.
Example: MOY benchmark is 20, EOY benchmark is 40; expected growth is (40 - 20) / 18 weeks of instruction = 1.11 wcpm per week.
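Both calculations are simple arithmetic and can be checked in a couple of lines, using the example numbers from these two slides:

```python
weeks_of_instruction = 18

# Needed growth: student's MOY ORF is 10 wcpm; EOY benchmark is 40 wcpm
needed = (40 - 10) / weeks_of_instruction
print(round(needed, 2))  # 1.67 wcpm per week to reach benchmark

# Expected growth: MOY benchmark is 20 wcpm; EOY benchmark is 40 wcpm
expected = (40 - 20) / weeks_of_instruction
print(round(expected, 2))  # 1.11 wcpm per week
```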

100 Tigard-Tualatin School District (www.ttsd.k12.or.us): Looking at Percent of Expected Growth
Greater than 150%
Between 110% & 150%: Possible LD
Between 95% & 110%: Likely LD
Between 80% & 95%: May Need More; Likely LD
Below 80%: Needs More; Likely LD
(The slide presents these bands in columns for Tier I, Tier II, and Tier III.)
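Percent of expected growth, as used in bands like these, is just the student's actual RoI divided by the expected RoI. A sketch (the band labels follow the slide's percent categories; the student numbers are hypothetical):

```python
def percent_of_expected_growth(actual_roi, expected_roi):
    """Student's actual RoI as a percentage of the expected RoI."""
    return 100 * actual_roi / expected_roi

def growth_band(pct):
    """Map percent of expected growth onto the slide's percent bands."""
    if pct > 150:
        return "Greater than 150%"
    if pct >= 110:
        return "Between 110% & 150%"
    if pct >= 95:
        return "Between 95% & 110%"
    if pct >= 80:
        return "Between 80% & 95%"
    return "Below 80%"

# Hypothetical: student gains 1.0 wcpm/week where 1.11 wcpm/week is expected
pct = percent_of_expected_growth(1.0, 1.11)
print(round(pct), growth_band(pct))  # 90 Between 80% & 95%
```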

101 Fuchs, Fuchs, Hamlett, Walz, & Germann (1993) Oral Reading Fluency Adequate Response Table
Grade: Realistic Growth / Ambitious Growth
1st: 2.0 / 3.0
2nd: 1.5 / 2.0
3rd: 1.0 / 1.5
4th: 0.9 / 1.1
5th: 0.5 / 0.8

102 Fuchs, Fuchs, Hamlett, Walz, & Germann (1993) Digit Fluency Adequate Response Table
Grade: Realistic Growth / Ambitious Growth
1st: 0.3 / 0.5
2nd: 0.3 / 0.5
3rd: 0.3 / 0.5
4th: 0.75 / 1.2
5th: 0.75 / 1.2

103 From Where Should Benchmarks/Criteria Come?
There appears to be a theoretical convergence on the use of local criteria (what scores do our students need to have a high probability of proficiency?) when possible.

104 Test Globally… …Benchmark Locally

105 Objectives
Rationale for developing Local Benchmarks
Fun with Excel!
Fun with Algebra!
Local Benchmarks in Action

106 Rationale for Developing Local Benchmarks Stage & Jacobsen (2001): slope in Oral Reading Fluency reliably predicted performance on the Washington Assessment of Student Learning. McGlinchey & Hixson (2004): results support the use of CBM for determining which students are at risk for reading failure and who will fail state tests. Hintze & Silberglitt (2005): Oral Reading Fluency is highly connected to state test performance and is accurate at predicting those students who are likely to not meet proficiency. Shapiro et al. (2006): results of this study show that CBM can be a valuable source for identifying which students are likely to succeed or fail on state tests. Ask Jason Pedersen!

107 (Stewart & Silberglitt, 2008) Rationale for Developing Local Benchmarks Identify and validate problems Create ideas for instructional grouping, focus, or intensity Goal setting Determine the focus and frequency of progress monitoring Exit students or move students to different levels or tiers of intervention Systems-level resource allocation and evaluation

108 Rationale for Developing Local Benchmarks Silberglitt (2008): Districts should "refrain from simply adopting a set of national target scores, as these scores may or may not be relevant to the high-stakes outcomes for which their students must be adequately prepared." (p. 1871) "By linking local assessments to high-stakes tests, users are able to establish target scores on these local assessments, scores that divide students between those who are likely and those who are unlikely to achieve success on the high-stakes test." (p. 1870)

109 Rationale for Developing Local Benchmarks Discrepancy across states in the percentile ranks on a nationally administered assessment necessary to predict successful state test performance (Kingsbury et al., 2004) "Using cut scores based on the probability of success on an upcoming state-mandated assessment might be a useful alternative to normative data for making these decisions." (Hintze & Silberglitt, 2005) Can be used to separate students into groups in an RtII framework (Silberglitt, 2008)

110 Rationale for Developing Local Benchmarks Useful in calculating discrepancy in level (Burns, 2008) Represent the school population where the students are getting their education (Stewart & Silberglitt, 2008) Teachers often use comparisons between students in their classrooms; this helps to objectify those decisions (Stewart & Silberglitt, 2008)

111 (Ferchalk, Richardson & Cogan-Ferchalk, 2010) Rationale for Developing Local Benchmarks How accurately does it predict proficiency level in Third Grade?

112 (Ferchalk, Richardson & Cogan-Ferchalk, 2010) Rationale for Developing Local Benchmarks Percentage of students in Third Grade predicted to be successful on the PSSA who were actually successful

113 (Ferchalk, Richardson & Cogan-Ferchalk, 2010) Rationale for Developing Local Benchmarks Percentage of Third Grade students predicted to be unsuccessful who actually failed to meet proficiency on the PSSA

114 Getting Started Collect 3 or more years of student CBM and PSSA data Match the data for each student Use the data extract and data farming features offered through the PSSA / DIBELS / AIMSweb websites Download with student ID numbers If you have a data warehouse…then use your special magic…lucky!

115 (Stewart & Silberglitt, 2008) Getting Started Reliable and valid data Linear / highly correlated data Gather data with integrity Do not teach to the test All students should be included in the norm group Be cautious of cohort effects

116 Getting Started PSSA Cut Scores http://www.portal.state.pa.us/portal/server.pt/community/cut_scores/7441 Use the lower-end score for Proficiency Download the data set from: http://sites.google.com/site/rateofimprovement/

117 Wisdom from Teachers (especially from our reading specialists Tina and Kristin!) Children do not equal dots! They are not numbers or data points! Having said that… ≠

118 Fun with Excel!

119 (Burns, 2008) Fun with Algebra! Matt Burns – University of Minnesota X = (Y - a)/b, where Y = proficiency score on the PSSA, a = intercept, b = slope, and X = local benchmark score
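The slides do this in Excel; as a sketch of the same idea, the following fits an ordinary least squares line of PSSA scores on fall ORF scores and then solves X = (Y - a)/b for the local benchmark. The data here are invented for illustration; real use would draw on the matched district CBM/PSSA records described above.

```python
# Burns-style local benchmark: regress PSSA on ORF, then solve X = (Y - a)/b
# for the ORF score that predicts the proficiency cut. Data are hypothetical.

def ols(xs, ys):
    """Return (intercept a, slope b) for y = a + b*x via least squares."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

orf  = [45, 60, 72, 85, 93, 110, 125]               # hypothetical fall ORF
pssa = [1050, 1120, 1170, 1230, 1260, 1320, 1380]   # hypothetical PSSA scores
a, b = ols(orf, pssa)

proficiency_cut = 1235                     # lower-end Proficient score (slide)
local_benchmark = (proficiency_cut - a) / b  # X = (Y - a)/b
print(round(local_benchmark, 1))
```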

120

121

122 More Fun with Algebra! Predict a student's proficiency score by rearranging the equation: X = (Y - a)/b becomes Y = Xb + a, where Y = predicted PSSA score. Use with caution! Student: 93 wcpm in the fall. Data sample: slope = 2.56, intercept = 1108. Y = (93 × 2.56) + 1108 ≈ 1346
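The rearranged prediction is a one-liner; this sketch uses the slide's sample slope and intercept (the function name is ours), and as the slide warns, a prediction from a single CBM probe should be used with caution:

```python
# Predict a PSSA score from a fall ORF score via Y = X*b + a, using the
# slide's sample slope (2.56) and intercept (1108).

def predict_pssa(orf_score, slope=2.56, intercept=1108):
    return orf_score * slope + intercept

print(predict_pssa(93))  # 93 * 2.56 + 1108 = 1346.08
```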

123 Local Benchmark Applications Northern Lebanon School District Local Benchmarks

124 Local Benchmark Applications For those that like the DIBELS Graphs

125 Local Benchmark Applications

126 (Silberglitt, 2008; Hintze & Silberglitt, 2005) Diagnostic Accuracy Sensitivity: of all the students who failed the PSSA, what percentage were accurately predicted to fail based on their ORF score? Specificity: of all the students who passed the PSSA, what percentage were accurately predicted to pass based on their ORF score? Negative Predictive Power: percentage of students predicted to be successful on the PSSA who were actually successful. Positive Predictive Power: percentage of students predicted to be unsuccessful who actually failed to meet proficiency on the PSSA.
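The four metrics above fall straight out of a confusion matrix. As a sketch (with invented counts, and "positive" meaning predicted to fail, i.e., below the local benchmark):

```python
# Diagnostic-accuracy metrics from the slide, computed from confusion-matrix
# counts. The counts below are made up for illustration.

def diagnostic_accuracy(tp, fp, tn, fn):
    """tp: predicted fail & failed; fp: predicted fail & passed;
    tn: predicted pass & passed; fn: predicted pass & failed."""
    return {
        "sensitivity": tp / (tp + fn),  # of all who failed, % flagged
        "specificity": tn / (tn + fp),  # of all who passed, % cleared
        "ppp": tp / (tp + fp),          # of those flagged, % who failed
        "npp": tn / (tn + fn),          # of those cleared, % who passed
    }

m = diagnostic_accuracy(tp=18, fp=6, tn=70, fn=2)
print({k: round(v, 2) for k, v in m.items()})
```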

127

128

129 Local Benchmarks - Method 2 Fun with SPSS! Logistic Regression & ROC Curves More accurate Helps to balance Sensitivity, Specificity, Negative & Positive Predictive Power For more information see Best Practices in Using Technology for Data-Based Decision Making (Silberglitt, 2008)
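The slide does this in SPSS; as a language-agnostic sketch of the ROC idea, the snippet below searches candidate CBM cut scores and keeps the one maximizing sensitivity + specificity - 1 (Youden's J). The data and the helper are ours, purely for illustration:

```python
# Tiny ROC-style cut-score search: flag students scoring below the cut and
# pick the cut that best balances sensitivity and specificity (Youden's J).

def best_cut_score(scores, passed):
    """scores: CBM scores; passed: True if the student passed the state test."""
    best_cut, best_j = None, -1.0
    for cut in sorted(set(scores)):
        tp = sum(s < cut and not p for s, p in zip(scores, passed))
        fn = sum(s >= cut and not p for s, p in zip(scores, passed))
        tn = sum(s >= cut and p for s, p in zip(scores, passed))
        fp = sum(s < cut and p for s, p in zip(scores, passed))
        sens = tp / (tp + fn) if tp + fn else 0.0
        spec = tn / (tn + fp) if tn + fp else 0.0
        j = sens + spec - 1
        if j > best_j:
            best_cut, best_j = cut, j
    return best_cut

scores = [40, 55, 62, 70, 78, 85, 92, 105, 118, 130]   # invented ORF scores
passed = [False, False, False, False, True, True, True, True, True, True]
print(best_cut_score(scores, passed))  # 78: all failers below it, all passers at or above
```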

130 If Local Criteria are Not an Option Use the norms that accompany the measure (DIBELS, AIMSweb, etc.). Use national norms.

131 Making Decisions: Best Practice Research has yet to establish a blueprint for 'grounding' student RoI data. At this point, teams should consider multiple comparisons when planning and making decisions.

132 Making Decisions: Lessons From the Field When tracking on grade level, consider an RoI at 100% of expected growth a minimum requirement, and an RoI at or above the needed growth optimal. So, 100% of expected and on par with needed become the limits of the range within which a student should be achieving.

133 Is there an easy way to do all of this?

134

135

136 Access to Spreadsheet Templates http://sites.google.com/site/rateofimprovement/home Click on Charts and Graphs. Update dates and benchmarks. Enter names and benchmark/progress monitoring data.

137 What about Students not on Grade Level?

138 Determining Instructional Level Independent/Instructional/Frustrational Instructional level is often between the 40th or 50th percentile and the 25th percentile. Frustrational level is below the 25th percentile. AIMSweb: Survey Level Assessment (SLA).

139 Setting Goals off of Grade Level 100% of expected growth is not enough. Needed growth only gets to the instructional-level benchmark, not grade level. Risk of not being ambitious enough. Plenty of ideas, but limited research regarding best practice in goal setting off of grade level.

140 Possible Solution (A) Weekly probe at instructional level; compare to expected and needed growth rates at instructional level. Ambitious goal: 200% of expected RoI

141

142 Possible Solution (B) Weekly probe at instructional level for a sensitive indicator of growth. Monthly probes (give 3, not just 1) at grade level to compute RoI. Goal based on grade-level growth (more than 100% of expected).

143 Step 3: Interpreting Growth

144 What do we do when we do not get the growth we want? When to make a change in instruction and intervention? When to consider SLD?

145 When to make a change in instruction and intervention? Enough data points (6 to 10)? Less than 100% of expected growth. Not on track to make benchmark (needed growth). Not on track to reach individual goal.
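The checklist above can be sketched as a small helper. The thresholds (6 to 10 data points, 100% of expected growth, the needed-growth comparison) come from the slide; the packaging is ours and is no substitute for team judgment:

```python
# Hedged sketch of the change-decision checklist from the slide.

def consider_change(n_points, actual_roi, expected_roi, needed_roi):
    """Return True if the data suggest changing instruction/intervention."""
    if n_points < 6:
        return False  # not enough data points yet to decide
    below_expected = actual_roi < expected_roi  # less than 100% of expected
    off_track = actual_roi < needed_roi         # not on track for benchmark
    return below_expected or off_track

print(consider_change(8, actual_roi=0.9, expected_roi=1.11, needed_roi=1.67))
```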

146 When to consider SLD? Continued inadequate response despite: Fidelity with Tier I instruction and Tier II/III intervention. Multiple attempts at intervention. An individualized problem-solving approach. Evidence of dual discrepancy…

147

148 Three Levels of Examples Whole Class Small Group Individual Student - Academic Data - Behavior Data

149 Whole Class Example

150 3rd Grade Math Whole Class Who's responding? Effective math instruction? Who needs more? N = 19: 4 students > 100% growth, 15 students < 100% growth, 9 with negative growth

151 Small Group Example

152 Intervention Group Intervention working for how many? Can we assume fidelity of intervention based on results? Who needs more?

153 Individual Kid Example

154 Individual Kid Making growth? How much? (65% of expected growth) Atypical growth across the year (last 3 data points). Continue? Make a change? Need more data?

155 RoI and Behavior?

156

157 Step 4: Figure out how to fit Best Practice into Public Education

158 Things to Consider Who is At-Risk and needs progress monitoring? Who will collect, score, enter the data? Who will monitor student growth, when, and how often? What changes should be made to instruction & intervention? What about monitoring off of grade level?

159 Who is At-Risk and needs progress monitoring? Below level on universal screening

Entering 4th Grade Example:

Student     DORF (110)   ISIP TRWM (55)   4Sight (1235)   PSSA (1235)
Student A   115          58               1255            1232
Student B   85           48               1216            1126
Student C   72           35               1056            1048

160 Who will collect, score, and enter the data? Using MBSP for math, teachers can administer probes to the whole class. DORF probes must be administered one-on-one, and creativity pays off (train and use art, music, library, etc. specialists). Schedule progress monitoring of math and reading every other week.

161 [Schedule grid: Reading and Math probes for Grades 1st through 5th are split across Week 1 and Week 2, so each grade is progress monitored in each subject every other week.]

162 Who will monitor student growth, when, and how often? Best Practices in Data-Analysis Teaming (Kovaleski & Pedersen, 2008) Chambersburg Area School District Elementary Response to Intervention Manual (McCrea et al., 2008) Derry Township School District Response to Intervention Model (http://www.hershey.k12.pa.us/56039310111408/lib/56039310111408/_files/Microsoft_Word_-_Response_to_Intervention_Overview_of_Hershey_Elementary_Model.pdf)

163 What changes should be made to instruction & intervention? Ensure treatment fidelity! Increase instructional time (active and engaged) Decrease group size Gather additional diagnostic information Change the intervention

164 Final Exam… Student Data: 27, 29, 26, 34, 27, 32, 39, 45, 43, 49, 51, --, --, 56, 51, 52, --, 57. Benchmark Data: BOY = 40, MOY = 68. What is the student's RoI? How does the RoI compare to the expected and needed RoIs? What steps would your team take next? What if the benchmarks were 68 and 90 instead?
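One way to work the first exam questions, as a sketch: fit simple linear regression (ordinary least squares, as the presentation recommends) to the available scores, skipping the missed weeks rather than imputing them. The assumption of one probe per week starting at week 1 is ours:

```python
# Final-exam data: OLS slope (RoI) over 18 weeks, with "--" treated as a
# missed probe and dropped from the fit.

raw = [27, 29, 26, 34, 27, 32, 39, 45, 43, 49, 51, None, None, 56, 51, 52,
       None, 57]
pts = [(week, score) for week, score in enumerate(raw, start=1)
       if score is not None]

xs, ys = zip(*pts)
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
roi = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
      sum((x - mx) ** 2 for x in xs)

expected = (68 - 40) / 18   # MOY benchmark minus BOY benchmark
needed   = (68 - 27) / 18   # MOY benchmark minus the student's first score

print(round(roi, 2), round(expected, 2), round(needed, 2))
# RoI is about 1.98 wcpm/week: above expected (1.56) but below needed (2.28)
```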

165 Questions? & Comments!

166 The RoI Web Site http://sites.google.com/site/rateofimprovement/ Download powerpoints, handouts, Excel graphs, charts, articles, etc. Caitlin Flinn CaitlinFlinn@hotmail.com Andy McCrea andymccrea70@gmail.com Matt Ferchalk mferchalk@norleb.k12.pa.us

167 Resources www.interventioncentral.com www.aimsweb.com http://dibels.uoregon.edu www.nasponline.org

168 Resources www.fcrr.org (Florida Center for Reading Research) http://ies.ed.gov/ncee/wwc// (What Works Clearinghouse) http://www.rti4success.org (National Center on RtI)

169 References Ardoin, S. P., & Christ, T. J. (2009). Curriculum-based measurement of oral reading: Standard errors associated with progress monitoring outcomes from DIBELS, AIMSweb, and an experimental passage set. School Psychology Review, 38(2), 266-283. Ardoin, S. P., & Christ, T. J. (2008). Evaluating curriculum-based measurement slope estimates using triannual universal screenings. School Psychology Review, 37(1), 109-125.

170 References Christ, T. J. (2006). Short-term estimates of growth using curriculum-based measurement of oral reading fluency: Estimating standard error of the slope to construct confidence intervals. School Psychology Review, 35(1), 128-133. Deno, S. L. (1985). Curriculum-based measurement: The emerging alternative. Exceptional Children, 52, 219-232.

171 References Deno, S. L., Fuchs, L.S., Marston, D., & Shin, J. (2001). Using curriculum-based measurement to establish growth standards for students with learning disabilities. School Psychology Review, 30, 507-524. Flinn, C. S. (2008). Graphing rate of improvement for individual students. InSight, 28(3), 10-12.

172 References Fuchs, L. S., & Fuchs, D. (1998). Treatment validity: A unifying concept for reconceptualizing the identification of learning disabilities. Learning Disabilities Research and Practice, 13, 204-219. Fuchs, L. S., Fuchs, D., Hamlett, C. L., Walz, L., & Germann, G. (1993). Formative evaluation of academic progress: How much growth can we expect? School Psychology Review, 22, 27-48.

173 References Gall, M.D., & Gall, J.P. (2007). Educational research: An introduction (8th ed.). New York: Pearson. Jenkins, J. R., Graff, J. J., & Miglioretti, D.L. (2009). Estimating reading growth using intermittent CBM progress monitoring. Exceptional Children, 75, 151-163.

174 References Karwowski, W. (2006). International encyclopedia of ergonomics and human factors. Boca Raton, FL: Taylor & Francis Group, LLC. Shapiro, E. S. (2008). Best practices in setting progress monitoring goals for academic skill improvement. In A. Thomas and J. Grimes (Eds.), Best practices in school psychology V (Vol. 2, pp. 141-157). Bethesda, MD: National Association of School Psychologists.

175 References Vogel, D. R., Dickson, G. W., & Lehman, J. A. (1990). Persuasion and the role of visual presentation support. The UM/3M study. In M. Antonoff (Ed.), Presentations that persuade. Personal Computing, 14.

176 References Burns, M. (2008, October). Data-based problem analysis and interventions within RTI: Isn't that what school psychology is all about? Paper presented at the Association of School Psychologists of Pennsylvania Annual Conference, State College, PA. Ferchalk, M. R., Richardson, F., & Cogan-Ferchalk, J. R. (2010, October). Using oral reading fluency data to create an accurate prediction model for PSSA performance. Poster session presented at the Association of School Psychologists of Pennsylvania Annual Conference, State College, PA. Hintze, J., & Silberglitt, B. (2005). A longitudinal examination of the diagnostic accuracy and predictive validity of R-CBM and high-stakes testing. School Psychology Review, 34(3), 372-386. McGlinchey, M., & Hixson, M. (2004). Using curriculum-based measurement to predict performance on state assessments in reading. School Psychology Review, 33(2), 193-203. Shapiro, E., Keller, M., Lutz, J., Santoro, L., & Hintze, J. (2006). Curriculum-based measures and performance on state assessment and standardized tests: Reading and math performance in Pennsylvania. Journal of Psychoeducational Assessment, 24(1), 19-35.

177 References Silberglitt, B. (2008). Best practices in using technology for data-based decision making. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology V. Bethesda, MD: National Association of School Psychologists. Silberglitt, B., Burns, M., Madyun, N., & Lail, K. (2006). Relationship of reading fluency assessment data with state accountability test scores: A longitudinal comparison of grade levels. Psychology in the Schools, 43(5), 527-535. Stage, S., & Jacobsen, M. (2001). Predicting student success on a state-mandated performance-based assessment using oral reading fluency. School Psychology Review, 30(3), 407. Stewart, L. H., & Silberglitt, B. (2008). Best practices in developing academic local norms. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology V. Bethesda, MD: National Association of School Psychologists.

