Response to Intervention (RtI) practices in educational settings aim to identify students who are at risk for academic struggles. Typically, RtI systems aim to identify about 20% of students as at risk (Good, Simmons, & Smith, 2001). It is crucial to provide at-risk students with early interventions to prevent student failure (Hughes & Dexter, 2014). Standardized screening assessments are used to identify students who are at risk for failure; to do so successfully, screeners must be reliable, valid, accurate, and inexpensive (Graney, Martinez, Missall, & Aricak, 2010).

Wisconsin has mandated the Phonological Awareness Literacy Screening--Kindergarten (PALS-K; University of Virginia) to assess reading-related skills during students' kindergarten year. Although PALS-K is mandated, schools use other assessments as well. One early literacy screener commonly used in schools prior to the PALS mandate is the AIMSweb Tests of Early Literacy (TEL; Edformation). Both assessments claim to assess important early literacy skills that are predictive of future reading performance. Gaither (2008) compared PALS and AIMSweb to determine which screener correctly identified at-risk students: the two assessments were strongly correlated with each other, and students identified as at risk by PALS were also identified as at risk by AIMSweb. These results favored AIMSweb as the more practical assessment because it requires less time and money to administer. More research is needed to examine the usefulness of the PALS screener in an RtI system. Thus, our study addressed two questions:

1. How well does each screener predict which children will perform below expectation (i.e., "at-risk" children) on Grade 1 reading measures?
2. Which kindergarten subtests are most predictive of first-grade reading performance?

Participants. 66 school-aged children (39 girls, 27 boys) from a school district in Wisconsin.
Specific demographic data were not available to describe our sample. However, according to the Wisconsin Department of Public Instruction, the school district as a whole reported a student body that is 1.1% Asian, 2.5% Black, and 2.0% Hispanic.

Procedure. We accessed existing assessment data archivally. Students had completed the early literacy screeners (PALS-K and AIMSweb TEL) in the fall of kindergarten. The same students completed the AIMSweb reading measure (R-CBM) in the winter of their first-grade year. We used children's assessment scores to conduct analyses answering the following questions:

Question 1: Classification Accuracy. Using each assessment tool's criterion scores for determining risk status (cut-scores described below), we conducted crosstabulations comparing students' benchmark status on the kindergarten assessments to their first-grade reading performance. These analyses allowed us to determine the predictive accuracy (sensitivity and specificity) of AIMSweb TEL and PALS-K for first-grade reading outcomes (R-CBM).

Sensitivity answers the question of whether the assessment flags the right students as failing to meet benchmark:
Sensitivity = true positives / (true positives + false negatives)

Specificity answers the question of whether the assessment flags only those students who fail to meet benchmark:
Specificity = true negatives / (true negatives + false positives)

Question 2: Predictive Validity. We conducted a regression analysis to determine the extent to which the screening scores predicted winter first-grade reading outcomes. We considered the regression coefficient of each screener to determine the relative contribution of each screening score to first-grade reading outcomes.

Measures. Predictor variables. AIMSweb TEL is used as a screening tool to determine which students may be at risk for future reading difficulties.
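The crosstabulation analysis above reduces to counting the four cells of a 2x2 table and applying the two formulas. A minimal sketch (the toy data below are illustrative, not the study's actual scores):

```python
# Sketch of the classification-accuracy analysis described above.
# Each list has one boolean per student:
#   True in `screener_at_risk`       = flagged at risk in fall of kindergarten
#   True in `outcome_below_benchmark` = below benchmark on Grade 1 R-CBM

def classification_accuracy(screener_at_risk, outcome_below_benchmark):
    pairs = list(zip(screener_at_risk, outcome_below_benchmark))
    tp = sum(s and o for s, o in pairs)             # flagged and truly struggling
    fn = sum((not s) and o for s, o in pairs)       # missed a struggling reader
    fp = sum(s and (not o) for s, o in pairs)       # flagged an adequate reader
    tn = sum((not s) and (not o) for s, o in pairs) # correctly cleared

    sensitivity = tp / (tp + fn)                # of struggling readers, how many flagged?
    specificity = tn / (tn + fp)                # of adequate readers, how many cleared?
    hit_rate = (tp + tn) / len(pairs)           # overall agreement with the outcome
    return sensitivity, specificity, hit_rate

# Toy example: 10 students, 4 truly below benchmark, screener catches 2 of them.
flags = [True, True, False, False, False, False, False, False, False, False]
truth = [True, True, True,  True,  False, False, False, False, False, False]
sens, spec, hit = classification_accuracy(flags, truth)  # 0.5, 1.0, 0.8
```

Note how the toy screener mirrors the pattern reported in the Results: perfect specificity but low sensitivity, because it misses half of the truly struggling readers.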
Only one measure is administered in the fall of kindergarten: Letter Naming Fluency (LNF), in which students name visually presented letters for one minute. The cut-score AIMSweb defines as identifying the students most at risk is 3, set at the 15th percentile relative to kindergarteners across the nation.

PALS-K consists of Phonological Awareness tasks (rhyme and beginning sound awareness) and Literacy tasks (alphabet knowledge, letter-sound awareness, concept of word, spelling, and word recognition). The test takes approximately 30 minutes to administer.

Rhyme and Beginning Sound Awareness: students identify rhyming words and beginning sounds in orally presented items. Alphabet Awareness: students name the letters of the alphabet in both uppercase and lowercase print. Letter-Sound Awareness: students touch each letter and say the sound it represents. Concept of Word: students point to the words as a teacher reads them aloud. Spelling: students spell consonant-vowel-consonant words. Word Recognition: students read words from a word list.

The cut-score PALS-K defines as identifying the students most at risk is 28, a score based on the rhyme awareness, spelling, concept of word (word list), beginning sound awareness, alphabet awareness, and letter-sound awareness tasks.

Outcome variables. The outcome measure was R-CBM, administered in the fall and winter of first grade. R-CBM is a brief, individually administered test of oral reading fluency; students are scored on the total number of words read correctly in 1 minute.

Our research suggests that PALS, the assessment currently mandated in Wisconsin schools, may not be the most pragmatic assessment available, as AIMSweb costs less time and money to administer. In a multiple-gating strategy, AIMSweb could be used as a quick "first-gate" assessment to indicate the children who need the most intensive intervention; the next "gate" could be a more comprehensive assessment, like PALS.
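The multiple-gating idea above can be sketched as a simple decision rule using the published cut-scores from the text (AIMSweb LNF = 3, PALS-K = 28). One assumption here: the text does not say whether "at risk" means strictly below or at-or-below the cut, so this sketch flags scores at or below it.

```python
# Sketch of a two-gate screening decision, assuming "at risk" means a score
# at or below the published cut-score (an assumption; see lead-in).

AIMSWEB_LNF_CUT = 3    # 15th percentile nationally, per the text
PALS_K_CUT = 28        # PALS-K criterion score, per the text

def first_gate(lnf_score):
    """Quick gate: one-minute AIMSweb Letter Naming Fluency."""
    return lnf_score <= AIMSWEB_LNF_CUT

def second_gate(pals_k_score):
    """Comprehensive gate: PALS-K (roughly 30 minutes to administer)."""
    return pals_k_score <= PALS_K_CUT

def risk_status(lnf_score, pals_k_score=None):
    """Only students flagged by the quick first gate receive the longer PALS-K."""
    if not first_gate(lnf_score):
        return "not at risk"
    if pals_k_score is None:
        return "administer PALS-K"
    return "at risk" if second_gate(pals_k_score) else "not at risk"
```

The design saves testing time: most students exit after the one-minute LNF probe, and the 30-minute PALS-K is reserved for the small group the first gate flags.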
Implications for Question 1. Schools may need to consider an appropriate threshold of performance to define risk status in individual schools. Because schools may differ in score variability and students differ in skills, adjusting cut-scores may help correctly identify at-risk students. The poor sensitivity shown by both assessments indicates that, with the current cut-scores, both are under-identifying at-risk students in our sample. Future research could analyze which level of performance on each assessment is most predictive of the future outcome; instead of using published cut-scores, schools may want to consider how well screeners identify risk status and determine cut-scores locally.

Implications for Question 2. Overall, AIMSweb and PALS-K accounted for 45% of the total variance in R-CBM (F = 21.9, p < .001). Beta weights indicate how much each predictor contributes to R-CBM once the shared variance is removed. Although the cut-score analyses suggest PALS is more accurate, considering the two predictors together indicates that AIMSweb is more closely associated with R-CBM: AIMSweb added more to the prediction and was significant (β = .42), whereas PALS was not (β = .30). The semipartial correlation coefficients denote how much each screener adds to the prediction of R-CBM beyond the other; AIMSweb adds 8% unique variance, whereas PALS adds only 3.9%. Thus AIMSweb appears to be the better predictor of R-CBM, although its cut-score may be too low, and schools may need to adjust cut-scores based on student performance.

Limitations. Potential limitations for this project include our relatively small sample size and incomplete data. Our sample consisted of 66 children, which may be insufficient for generalizing to school-age populations in general. Also, because demographic data from the school district were lacking, we do not know how well our sample generalizes to Wisconsin schools as a whole.
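The beta weights and semipartial correlations discussed above have closed-form expressions in the two-predictor case. A minimal sketch, using the standard textbook formulas; the three input correlations are illustrative, not the study's actual values:

```python
# Sketch of a two-predictor standardized regression (y ~ x1 + x2), computing
# beta weights, semipartial correlations, and R^2 from the three pairwise
# correlations. Input values are illustrative placeholders.
import math

def two_predictor_regression(r_y1, r_y2, r_12):
    denom = 1 - r_12 ** 2
    # Standardized beta weights: each predictor's contribution with the
    # shared variance removed (the β = .42 vs β = .30 comparison above).
    beta1 = (r_y1 - r_y2 * r_12) / denom
    beta2 = (r_y2 - r_y1 * r_12) / denom
    # Semipartial r: squared, it is the unique variance a predictor adds
    # beyond the other (the 8% vs 3.9% figures above).
    sr1 = (r_y1 - r_y2 * r_12) / math.sqrt(denom)
    sr2 = (r_y2 - r_y1 * r_12) / math.sqrt(denom)
    r_squared = beta1 * r_y1 + beta2 * r_y2
    return beta1, beta2, sr1, sr2, r_squared

# Illustrative correlations: each screener with R-CBM (r_y1, r_y2) and the
# two screeners with each other (r_12).
b1, b2, sr1, sr2, r2 = two_predictor_regression(r_y1=0.62, r_y2=0.55, r_12=0.60)
```

A useful check on the arithmetic: the total R-squared decomposes as one predictor's zero-order r-squared plus the other predictor's squared semipartial, which is exactly why the unique-variance percentages reported above do not sum to the total variance explained.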
In future research, we could ask similar questions using a larger, representative archival dataset.

Question 1: Classification Accuracy.
PALS-K and R-CBM (Winter Grade 1): Sensitivity = .27, Specificity = .98, Hit Rate = .79
AIMSweb and R-CBM (Winter Grade 1): Sensitivity = .07, Specificity = 1.0, Hit Rate = .77

Summary of Classification Accuracy. PALS-K identifies at-risk children more accurately than AIMSweb TEL. PALS-K shows higher sensitivity (.27) than AIMSweb TEL (.07), meaning PALS-K correctly flags more of the children who go on to struggle. AIMSweb TEL shows slightly higher specificity (1.0) than PALS-K (.98), meaning AIMSweb flags only those students who are most at risk.

Question 2: Predictive Validity. Simultaneous regression analyses revealed a significant beta weight for AIMSweb LNF (β = .42), meaning AIMSweb TEL contributes more to R-CBM than PALS-K. Semipartial correlation coefficients revealed that AIMSweb LNF accounted for 8% of the variance in R-CBM when added to PALS-K, whereas PALS-K added only 4% of the variance when added to AIMSweb TEL.

References.
Gaither, A. C. (2008). A comparison of two early education literacy benchmark assessments.
Good, R., Simmons, D., & Smith, S. (2001). Reading research anthology: The why of reading instruction. Novato, CA: Arena P
Graney, S., Martinez, R. S., Missall, K. N., & Aricak, O. (2010). Universal screening of reading in late elementary school: R-CBM versus CBM maze. Remedial and Special Education, 31.
Hughes, C., & Dexter, D. D. (2014). Universal screening within a response-to-intervention model. Retrieved April 17, 2014.

Funding for this project was provided by the Office of Research and Sponsored Programs.