Assessment – An Integral Part of RTI


1 Assessment – An Integral Part of RTI
Weighing cows won’t make ‘em fatter … just as assessing children will not increase student learning! Sandy’s slide. Introduction: Welcome. The purpose of this module is to share information about the regulations and the role of assessment in a 3-tier model. While assessment won’t increase student learning, it can inform us about students’ instructional needs and help teachers target instruction to those needs. Now we will begin with a review of Module 1.

2 Module 1 Model Tier 1 Framework Review
Lori and Julia

3 Purpose of Training Modules
RTI is not a program; RTI is a process. State provides a framework and process: through regulations, through professional development, through technical assistance. District/School develops unique implementation: RTI implementation must take into consideration unique characteristics of local culture, and RTI should build on existing systems/initiatives. District Leadership: diversity in contribution/input/skills; distribution of workload; everyone is knowledgeable and supportive; share framework and provide guidance for school implementation. Lori and Julia. District Leadership Team – regular meetings and assignments.

4 Tier 1: All Students, All Staff, All Settings, All Year
Lori and Julia. Every student always receives Tier I instruction. Tier II and Tier III instruction are given in addition to Tier I instruction. (Be sure to explain that students do not leave Tier 1 when they move to Tier 2 or 3.) Tier 1 is the core, standards-aligned instruction that ALL students receive. It is a common misconception that Tier 2 and 3 students physically “move” out of Tier 1; they do not. Address questions. Refer to handout of Tier 1 Components and Quality Instruction Example and Non-example.

5 Most IMPORTANT RTI Component
Why focus on Tier 1? Federal law and state regulations: we must ensure the student has received quality instruction and that the need for intervention is not due to poor, inconsistent, or absent core instruction. Research and previous RTI implementation: spend sufficient time on Tier 1 – quality instruction – before implementing interventions. Tier 1 is the most important RTI component. Lori and Julia.

6 RTI Framework Where have we been? Where are we going?
Where have we been? Established District Leadership Team; overview of RTI; needs assessment of district- and school-level implementation; Tier 1 framework. Where are we going? Assessment framework: Assessments (December), Data Management (December), Data Analysis (February), Team Problem Solving (February), Interventions (April), SLD Determination (TBD), Secondary Focus ( ). Lori and Julia.

7 Assessment In The 3 Tier Model
RTI Module II, Delaware Department of Education. Sandy’s slide. Actual beginning of the assessment section.

8 Purposes of Assessment in RTI
To inform instruction. To provide early intervention. To monitor progress at the student, class, school, and district levels. To evaluate instructional programs/strategies. Sandy’s slide.

9 Objectives for the Day To identify ways to implement a comprehensive assessment plan. To develop an understanding of screening, progress monitoring, diagnostic, and outcome assessments. To identify the critical components of reading, math, early childhood, and behavior as related to assessment. Discuss the objectives. Speak to the goal: districts/schools will leave with an understanding of a comprehensive assessment plan and determine where they are in the process of establishing such a plan – what they have in place and what they still need to do. Use the Anticipation Guide to assess background knowledge and promote table discussion.

10 Objectives for the Day (Continued)
To evaluate current assessment tools. To review available assessment tools. To develop an understanding of a data management system. Sandy’s Slide.

11 Leadership is Key Sandy’s Slide

12 Questions? Sandy’s Slide
We will have a parking lot for questions that you have today. Please write questions on an index card on your table. The coordinators will collect them when they see you hold them in the air. We will answer as many questions as we can. If we feel a question is complex, we will take it back to DOE for additional clarification and send you a response in the near future.

13 It is useless if we do not use it to guide our actions.
Assessment is the collection of data to make decisions (Salvia & Ysseldyke, 1997). It is useless if we do not use it to guide our actions. The terms assessment and evaluation are used interchangeably, but in navigating the implementation of RTI it helps to think about how they differ. If you think of assessment as the process of collecting information, it becomes easier to convey to teachers the need for standardization, reliability, validity, and using different assessments for different purposes. Evaluation, then, is the process of using the results of that data (i.e., information collected through assessment) to make decisions. We use assessments to evaluate students’ progress, the effectiveness of instruction, and the effectiveness of an intervention. Assessment data helps us modify instruction, make curriculum changes, assist in daily planning, and make grouping decisions, and lets us evaluate students’ progress as they move through the tiers. Again, this isn’t just about assessment (or giving tests); it is about USING the data to make educationally sound decisions.

14 Three Tiered Model Increasing Support ~5% ~15% ~80% of Students
Three Tiered Model. Tier I: Classroom/all students – core class instruction, possible special services (~80% of students). Tier II: Students not responding to Tier I efforts – group interventions, specialized research-based interventions (~15%). Tier III: Students not responding to Tier I or II interventions – sustained, intensive small-group and individual interventions; possible special education identification for non-responders to Tier III interventions (~5%). Support increases as you move up the tiers. **“Remember, this is what instruction looks like in the 3-tier model.”** Quickly review this slide with participants. If the core is effectively taught with fidelity and the curriculum is differentiated with academic and behavioral supports for all students, 80% or more of all students will be successful with the strong instruction provided at the Tier I level. Now we are going to look at curriculum and assessment because not only do they work together, but they are essential to ensure that all children learn.

15 Assessment in a 3 Tier RTI Model
Tier I/Core Instruction: Universal Screening, Progress Monitoring, Diagnostic, Outcome. We want to emphasize once again that Tier 1 is for all students all the time, for both instruction and assessment. Tier I includes the following assessments: Universal screening assessments provide an initial indication of which students are entering the school year “at risk.” Additional assessments after the universal screening in Tier I will depend on the child’s score on the screening assessment. Progress monitoring assessments are given periodically to determine whether students are making adequate progress. Diagnostic information includes the gathering of information that can help guide interventions for students who are experiencing difficulty learning core content, for example learning to read. Outcome assessments are given at the end of the year to assess what students learned in the core content area.

16 Assessment in 3 Tier RTI Model
According to the RTI regulations, when a student is in Tier II, progress monitoring occurs every week, and a formal diagnostic assessment may be given to gather more specific information. Assessments in Tier III are primarily the same as those in Tier II, except we must dig deeper. Remember, these are students who have not responded sufficiently to the instruction we are providing. This will probably require a more formal diagnostic assessment, such as the DAR (Diagnostic Assessments of Reading).

17 Comprehensive Assessment Plan
Universal Screening Progress Monitoring Diagnostic Testing Outcome Testing Data Management System 1. A comprehensive assessment plan is a critical element of an effective school-level plan for preventing instructional difficulties across core content areas. 2. These components should be included in a comprehensive assessment plan: universal screening, progress monitoring, diagnostic testing, outcome testing, along with a data management system. 3. For interventions to work, we must have a coherent assessment system, one that employs the various types of assessment in concert. 4. Let’s review the Purposes of Assessment: Screen for students who will need additional instructional support Diagnose students’ instructional needs Monitor progress of students over time Evaluate outcomes at key points in time Keep in mind that you must always use multiple sources for assessment in making decisions. This will give us a better understanding of a child’s progress or lack of progress.

18 Where Should the Assessment Go?
Universal Screening, Progress Monitoring, Diagnostic, Outcome. Our goal for this activity is for districts to work in their groups to place their assessments in the most appropriate category on the sorting chart on card stock located on their tables. Directions: Participants will use the matrix of assessments in their folders to write the names of the assessments they presently use on Post-it notes. Then the group will decide which category is appropriate for each assessment. There should be discussion at each table regarding the reasons each assessment should be placed in a particular category. This is really a pre-assessment for you. After we have described each of the four assessment types, we will ask you to re-evaluate your sort based on the new information we have provided regarding the characteristics and purposes of each of the four assessments.

19 Which Assessment Term? The assessment should measure the critical skills/components of the subject area it is assessing. The assessment should yield similar scores if students were tested on a different day, by a different tester, or on a minimally different set of items. WHITE BOARD ACTIVITY – Please read the definition on the screen. At your table, discuss whether this is the definition of validity or reliability. Now write the term you feel is defined on your white board and hold it up. This is the definition of validity. Why do you need to understand these terms? To be most useful in school settings, assessments of any kind must meet certain criteria, and validity and reliability are two of the most essential considerations in the selection of assessments. Let’s address validity first. Of utmost importance is that assessments measure the critical skills in each subject area, and this concern addresses validity: to be valid, assessments must measure what they are intended to measure. In the case of reading, that means the five major components identified by the National Reading Panel: phonemic awareness, phonics, fluency, vocabulary, and comprehension. Predictive validity is another very important type of validity. Tests with predictive validity are designed to predict future performance or success; DIBELS, in reading, has predictive validity. This is an essential feature because we want to know whether students are on track to reading success. Pull in the definition of reliability. Note the definition of reliability: reliability means that an assessment yields similar, consistent scores each time it is administered. Obviously this is an important feature. To be reliable, we would expect similar scores if students were tested on a different day, by a different tester, or on a minimally different set of items. Assessments are truly valuable when they produce a reliable score.
A reliability coefficient should be as close to 1 as possible; a coefficient of .9 is considered reliable. You are more apt to get this with an individually administered test; group-administered tests generally have lower reliability coefficients. Turn to a partner and discuss why this may be so. (Individual: more engaging for the child, the tester can identify more problems, the child produces something. Group: efficient.) Remember that a good assessment is BOTH valid and reliable. When we think of validity and reliability, most often we are thinking of published assessments. However, included in your packet is a handout created by Kristin Ritchey of the U. of DE that provides the procedures necessary for developing your own assessment tool. It is a complex process that includes designing validity and reliability studies as well as determining norms and setting benchmark goals.
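The reliability coefficient discussed above is, in the simplest test-retest case, just the correlation between two administrations of the same assessment. A minimal sketch of that calculation, with invented student scores standing in for real data:

```python
# Hypothetical sketch: estimating a test-retest reliability coefficient
# as the Pearson correlation between two administrations of the same
# assessment. All scores below are invented for illustration.
from math import sqrt

def reliability_coefficient(first, second):
    """Pearson correlation between two score lists (test-retest reliability)."""
    n = len(first)
    mean_a = sum(first) / n
    mean_b = sum(second) / n
    cov = sum((a - mean_a) * (b - mean_b) for a, b in zip(first, second))
    var_a = sum((a - mean_a) ** 2 for a in first)
    var_b = sum((b - mean_b) ** 2 for b in second)
    return cov / sqrt(var_a * var_b)

# Two administrations of the same probe to five students (invented data):
day_one = [42, 55, 38, 61, 47]
day_two = [44, 53, 40, 63, 45]
r = reliability_coefficient(day_one, day_two)
print(round(r, 2))  # 0.97 – close to 1, i.e., scores are consistent
```

A coefficient near 1, as here, is what the .9 rule of thumb on the slide is asking for; scattered, inconsistent scores would pull the value toward 0.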

20 Which Assessment Term? The appraisal of student progress by using materials and procedures directly from the curriculum taught. A simple set of procedures for repeated measurement of student growth toward long-range instructional goals. WHITE BOARD ACTIVITY – Let’s use the white boards again. Is this the definition of curriculum-based assessment or curriculum-based measurement? Talk with your friends and write your answer. This is the definition of curriculum-based assessment. The key here is the phrase “directly from the curriculum taught.” In curriculum-based assessments, probes are developed from the books or material that make up the child’s curriculum. Therefore, it is a great way to see how well a child is performing on the materials being used for instruction. Often the student is assessed across several levels of the curriculum, and performance criteria are established to determine acceptable levels of student mastery. Turn to your partner and try to think of some examples of curriculum-based assessments. (Fluency probes from a specific core program; end-of-level or theme anthology tests; running records; word recognition tests from the core.) PULL IN THE NEXT DEFINITION: This is the definition of curriculum-based measurement. In contrast, this is not based on the curriculum per se, but uses progress monitoring content that is constant from one measurement period to the next, AND the assessment is based on the critical skills of the content taught. In reading, the essential components are assessed. Examples of CBMs are DIBELS and AIMSweb. CBMs can be used for screening and progress monitoring, and may provide some diagnostic information as well. You may wonder why you need to know these terms. It is important because you are using CBAs right now; CBMs, however, may be new to some of you, and curriculum-based measurement has become a general umbrella term frequently used for progress monitoring tests.

21 Which Assessment Term? A specific criterion level of skills specified as an indication of acceptable proficiency or mastery. A test used to determine the overall developmental level of a child with respect to other students. White Board Activity – Is this the definition of a criterion-referenced or a norm-referenced assessment? Let’s discuss and use our white boards again. This defines a criterion-referenced assessment, and the give-away is the term “criterion.” In a criterion-referenced test, results are compared against an established standard; it is a goal to achieve. Let’s brainstorm some examples of criterion-referenced tests at your table. (DSTP, DIBELS, DE driving test.) NOW SHOW THE NEXT DEFINITION: This is the definition of a norm-referenced assessment. Norm-referenced tests compare one child’s performance with what might normally be expected of other children, hence the word “norm.” Examples are the Iowa Test of Basic Skills, the Gates-MacGinitie, and the NAEP. These are tests that give us percentile ranks, stanines, scaled scores, and normal curve equivalent information. Now let’s mix it up a little. PULL IN THE TRUE/FALSE QUESTION. Is this a true or false statement? Use your whiteboards. Yes, it is true: a criterion-referenced test may also be norm-referenced. An example of this is DIBELS.
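The percentile ranks that norm-referenced reports provide can be sketched as the percentage of a norm group scoring at or below a child's raw score (this is one common convention; published tests document their exact method). The norm sample below is invented for illustration:

```python
# Hypothetical sketch: a percentile rank compares one child's raw score
# with a norm group. The norm sample here is invented; real tests use
# large standardization samples and published conversion tables.

def percentile_rank(score, norm_scores):
    """Percent of the norm group scoring at or below the given score."""
    at_or_below = sum(1 for s in norm_scores if s <= score)
    return 100 * at_or_below / len(norm_scores)

norm_sample = [12, 18, 23, 27, 31, 34, 38, 41, 45, 52]  # invented norms
print(percentile_rank(31, norm_sample))  # 50.0 – half the group at or below
```

This is also the kind of comparison behind regulatory cut points such as “below the 40th percentile.”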

22 Which Assessment Term? An assessment that provides information about student progress in order to make mid-course corrections or improvements to instruction. The final assessment, usually quantitative in practice, of the degree to which the goals and objectives of a program have been attained. White Board Activity: Is this the definition of formative or summative assessment? Use your white boards to answer. Yes, it is the definition of formative: you use it to inform instructional decisions. What are some examples of formative assessments? Discuss this at your table. (Examples: theme tests, weekly spelling tests, phonics inventories, spelling inventories.) BRING IN THE SECOND DEFINITION: In contrast to formative, this is the definition of a summative or outcome assessment. It “sums up” the degree to which goals have been attained by a student. Let’s discuss some examples. (DSTP, SATs, final examinations.) Look at your assessments on the category card and see if you can distinguish between the summative and formative assessments at your table. Why do you need to understand these terms? When choosing assessments, you need to know your PURPOSE: is this assessment going to provide me with information to help me teach my needs-based group, or is it an achievement test that tells me what a child has or has not learned but gives me little information on how to proceed with instruction? All of these terms are defined for you in your glossary. Familiarity with these terms and what they mean will be helpful as you develop your assessment plan.

23 Tier I: Universal Screening (NASDSE, 2005)
Universal screening (of ALL students) occurs at least three times per year: beginning, middle, and end. Procedures must identify which students are proficient in the target skill, developing the target skill, or deficient in the target skill. Basic question to be answered: should the student be judged “at risk”? Information on this slide was taken from a report by NASDSE, the National Association of State Directors of Special Education. Regulations: universal Tier I screening for reading and mathematics shall be conducted at least 3 times each regular school year at routinely and fairly spaced intervals. The first screening should be conducted within 2 weeks of the beginning of the school year, or within 2 weeks of the child’s entry into school, because some students will grow over the summer, some will lose ground, and some children are new to the school. Universal screening is given to all children so that struggling students can be identified early, both academically and behaviorally. The test must have predictive validity: it must be predictive of reading acquisition and later reading achievement. In reading we know that reading trajectories are established early, and students on a low trajectory tend to stay on that trajectory and fall further and further behind. The basic question for a screening measure is whether or not the student should be judged “at risk.” The later children are identified as needing support, the more difficult it is to catch up! Screening measures may be either criterion-referenced, such as DIBELS, which provides instructional levels, or based on a normative comparison, such as “below the 40th percentile” in our regulations. Criterion-referenced is preferred because it addresses the critical skills. Universal screening is considered formative data.
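The three-way screening decision above (proficient / developing / deficient) amounts to comparing a score with two cut points. A minimal sketch, with invented cut scores standing in for a screening tool's published benchmarks:

```python
# Hypothetical sketch of the three-way screening decision. The cut
# scores below are invented; real benchmarks come from the screening
# tool's published norms.

def screening_status(score, deficient_cut, proficient_cut):
    """Classify a screening score as deficient, developing, or proficient."""
    if score < deficient_cut:
        return "deficient"
    if score < proficient_cut:
        return "developing"
    return "proficient"

# Invented winter oral-reading-fluency cut scores:
for score in (18, 40, 75):
    print(screening_status(score, deficient_cut=26, proficient_cut=52))
# deficient
# developing
# proficient
```

Students classified as developing or deficient are the ones flagged for progress monitoring or diagnostic follow-up in the tiers that follow.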

24 Tier I Universal Screening Criteria
Efficient – brief, accurate, inexpensive. Generally administered individually. Multiple probes. Clearly defined procedures for administering and scoring. Broad index – measures the Big Ideas. School-wide. Valid and reliable. Screening is the first step in identifying children who may be at risk for difficulties and may need additional support. These students are then considered for more in-depth assessment, such as monitoring their progress during the next six weeks and/or collecting diagnostic information. For a screening measure to be useful, it should satisfy three criteria (Jenkins, 2003): 1. It needs to identify students who require further assessment by measuring the Big Ideas. Big Ideas are predictive of reading acquisition and later reading achievement; they are something we can do something about, something we can teach, and something that improves the outcome if taught. It is better to err on the side of over-identifying than under-identifying at-risk students. 2. It needs to be practical – efficient, easy to score, easy to understand, and quick to administer. 3. It needs to generate positive outcomes (accurately identifying students without consuming resources that could be put to better use). Criterion measures are preferred because they give more accurate information about performance on relevant skills.

25 Take Away Window: Universal Screening
Universal Screening, Progress Monitoring, Diagnostic Information, Outcome/Summative, Data Management System. Please take out your Take Away window. The universal screening pane is in the top left-hand corner. Please take a moment to reflect on the purpose of universal screening and at least 3 insights you have gained into this assessment. After you have done your individual reflection, please turn to your partner and discuss what you have written. Now as a district, please take a moment to reflect on your current assessments under universal screening. Relative to the information you have just heard, what is your thinking on the universal screening tool you are currently using? Any new insights? Does it meet the criteria just described?

26 Progress Monitoring Criteria
Measures rate of growth toward an observable, measurable, targeted goal. Measures small increments of growth. Has multiple forms. Progress monitoring determines, through frequent measurement, whether students are making adequate progress or need more intensive support to accelerate learning. The regulations state that in Tier I, children who score above the 25th percentile but below the 40th percentile shall have progress monitoring toward end-of-year benchmarks at least once every 2 weeks, until progress monitoring consistently demonstrates that the child is on a trajectory to meet end-of-year benchmarks. So as you consider available tools for progress monitoring, keep the features on the slide in mind. The tool must measure growth in small increments over time toward a targeted benchmark goal. It is generally a curriculum-based measurement: a CBM measures the critical components identified as necessary to achieve proficiency in a content area and covered across the school year. Alternate forms or probes, which measure the same skills and are comparable in difficulty, must be available so that we can administer them repeatedly.

27 Progress Monitoring Criteria (Continued)
Is efficient. Is individually administered. Is graphed and viewed regularly. Is comparable across students.

28 Grade-Level Progress Monitoring Criteria
Kindergarten – letter recognition, phonemic awareness, alphabetic principle. First grade – alphabetic principle, oral reading fluency. Second grade – alphabetic principle, oral reading fluency. Third grade – oral reading fluency. In Reading First we use DIBELS as our progress monitoring tool, and it measures the five components identified by the National Reading Panel: phonemic awareness, phonics, fluency, vocabulary, and comprehension. The components monitored change as the grade levels change. In kindergarten, benchmarks are set for phonemic awareness skills: initial sound fluency is targeted in the fall and winter and phoneme segmentation in the spring; letter naming fluency is also addressed, as is word use fluency. In first grade, phoneme segmentation and letter naming continue to be assessed; however, the targeted skill in fall and winter is nonsense word fluency, the phonetic component, and in the spring oral reading fluency becomes the targeted skill. Word use fluency is also assessed. In second and third grade, the targeted component is oral reading fluency, even though NWF continues to be assessed in the fall of second grade; and again, word use fluency is assessed. So DIBELS measures the appropriate targeted skills along a developmental continuum to help keep students on pace for acquiring proficiency in reading comprehension.

29 Trend Line or Aim Line? We would like to share a DIBELS report, but first we need to think about these terms. Please use your white boards and markers again. As a table, decide which of these terms is described by the following definition: a line on a graph which connects the student’s initial data point with the benchmark or goal point of a targeted skill for a specific student. (Answer: aim line.) Student progress is monitored in relationship to the aim line. (Answer: trend line.) The trend line indicates the student’s actual progress. Again, show the trend line in relationship to the aim line: is the student progressing on the same trajectory as the aim line, above the aim line, or below the aim line?
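The two lines can be made concrete with a little arithmetic: the aim line is fixed by the initial data point and the end-of-year goal, while the trend line is a least-squares fit through the observed scores. A hypothetical sketch, with invented oral-reading-fluency data:

```python
# Hypothetical sketch of the two lines on a progress-monitoring graph.
# The aim line connects the initial score to the goal; the trend line
# is a least-squares fit through observed scores. Data are invented.

def aim_line(start_week, start_score, goal_week, goal_score):
    """Return (slope, intercept) of the line from start point to goal point."""
    slope = (goal_score - start_score) / (goal_week - start_week)
    return slope, start_score - slope * start_week

def trend_line(weeks, scores):
    """Return (slope, intercept) of the least-squares line through the data."""
    n = len(weeks)
    mean_w = sum(weeks) / n
    mean_s = sum(scores) / n
    slope = (sum(w * s for w, s in zip(weeks, scores)) - n * mean_w * mean_s) / \
            (sum(w * w for w in weeks) - n * mean_w ** 2)
    return slope, mean_s - slope * mean_w

# Initial score of 20 words correct per minute, goal of 60 by week 30:
aim_slope, _ = aim_line(1, 20, 30, 60)
# Biweekly progress-monitoring scores (invented):
weeks = [1, 3, 5, 7, 9]
scores = [20, 22, 25, 29, 31]
trend_slope, _ = trend_line(weeks, scores)
# Comparing slopes shows whether the student's actual growth rate
# keeps pace with the growth rate the aim line requires.
print(round(aim_slope, 2), round(trend_slope, 2))  # 1.38 1.45
```

Here the trend slope (about 1.45 words per week) exceeds the aim-line slope (about 1.38), so this invented student is on a trajectory to reach the goal.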

30 DIBELS REPORTS Using a graph for an individual student, the trajectory, aim line, and trend line will be discussed, along with the slope of the line. Describe the end points of the aim line. Describe the green circle and purple square. Mention that the child is doing well: this child is on a trajectory to meet end-of-year grade-level benchmarks. We assess benchmark students 3 times a year.

31 DIBELS REPORTS The Student Summary report displays Karen’s performance on benchmark skills across grades and time periods, and provides item-level details, all in a single view. As we can see here, Karen has performed below benchmark on three occasions; her progress can be seen along a trajectory toward her goal of benchmark. Three consecutive data points below the aim line indicate a need for a change in instruction.
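The decision rule mentioned on this slide – three consecutive data points below the aim line signal a need for an instructional change – can be sketched directly. The scores and aim-line values below are invented:

```python
# Hypothetical sketch of the "three consecutive points below the aim
# line" decision rule. Scores and aim-line values are invented.

def needs_instructional_change(scores, aim_values, run_length=3):
    """True if `run_length` consecutive scores fall below the aim line."""
    run = 0
    for score, aim in zip(scores, aim_values):
        run = run + 1 if score < aim else 0
        if run >= run_length:
            return True
    return False

# Aim line expecting roughly +2 words per week (invented values):
aim = [20, 22, 24, 26, 28, 30]
on_track = [21, 22, 25, 25, 29, 31]  # occasional dips, never 3 in a row
falling = [21, 21, 23, 24, 27, 29]   # three straight points below the line
print(needs_instructional_change(on_track, aim),
      needs_instructional_change(falling, aim))  # False True
```

Note the rule looks for a *run* of points below the line, not just a count; a single bad week does not trigger a change.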

32 Take Away Window: Progress Monitoring
Universal Screening, Progress Monitoring, Diagnostic Information, Outcome/Summative, Data Management System. Please take out your Take Away window. The progress monitoring pane is in the top right-hand corner. Please take a moment to reflect on the purpose of progress monitoring and at least 3 insights you have gained into this assessment. After you have done your individual reflection, please leave it on the table. I will ask you to stand and move two to three seats to your right, read what your table mate has written, star any thoughts with which you agree, and add one or more of your own thoughts to that person’s window – something that you heard but perhaps your table mate did not. Now as a district, please take a moment to reflect on your current progress monitoring assessments. Relative to the information you have just heard, what is your thinking on the progress monitoring tool you are currently using? Any new insights? Return to your seats.

33 Diagnostic Information
Knowledge about a child’s skills and abilities that is useful in planning instruction. Can be derived from student work, teacher observations, or other tests, as well as diagnostic tests. According to the regulations, children in Tier 1 who score above the 25th percentile, but not at benchmark, on any instructional screening must receive differentiated, needs-based instruction. This means students must receive targeted instruction in the critical skills at the child’s instructional level; this instruction is very specific, explicit, and systematic. The purpose of diagnostic information is to direct the targeted instruction in the needs-based groups. We cannot assume that a child has comprehension problems because he does not score well on a comprehension test; it could be a decoding problem, a language problem, or a sight-word problem. Any information gathered about the child’s knowledge and skill in the components of the subject area is diagnostic information. In reading, this diagnostic information may be gleaned from informal assessments such as phonics screeners, spelling inventories, or high-frequency word surveys. In contrast to diagnostic information, diagnostic tests are generally lengthier, but they also provide information to drive needs-based instruction. Remember, important instructional information can come from sources other than formal diagnostic tests. Therefore, it is important to consider whether administering a diagnostic test will provide additional information for instruction. Instructional time is more valuable than the time invested in administering a diagnostic test that simply duplicates information we already know or can get from an efficient informal test. There must always be a purpose for assessment.

34 Diagnostic Information
Universal Screening, Progress Monitoring, Diagnostic Information, Outcome/Summative, Data Management System. Please take out your Take Away window. The diagnostic information/assessment pane is in the lower left-hand corner. Please take a moment to reflect on the purpose of diagnostic information/assessment and at least 3 insights you have gained into this assessment. After you have done your individual reflection, please leave it on the table. I will ask you to stand and move 2 (3) seats to your left, read what your table mate has written, star any thoughts with which you agree, and add one of your own thoughts to that person’s window – something that you heard but perhaps your table mate did not. Now as a district, please take a moment to reflect on your current assessments. Relative to the information you have just heard, what is your current thinking on the diagnostic information tool you are currently using? Any new insights? RETURN TO YOUR SEATS.

35 Outcome Assessment
Outcome assessments are important because they give school leaders and teachers feedback about the overall effectiveness of their curriculum. Frequently group-administered. Provides summative data – gives end-of-year information on the child’s mastery of critical skills. Does not provide information for ongoing instructional purposes. These are often “high-stakes” tests (AYP ratings, accountability). An outcome test or a year-end achievement test might be used to measure growth in a broad area. Summative assessment provides an evaluation of mastery of standards for the purpose of reporting or accountability. As part of a comprehensive plan, outcome assessments should be administered every year from kindergarten through third grade. Longitudinal studies of reading have shown that students are much more likely to meet grade-level standards in reading at the end of third grade if they have met those standards in each preceding year. Outcome tests at the end of grades K-2 are useful to school leaders to ensure that instruction in each grade is sufficiently powerful to keep most students on track for successful performance when they take important reading accountability measures at the end of third grade (DSTP, achievement tests, Iowa Test of Basic Skills).

36 Diagnostic Information
Universal Screening Progress Monitoring Diagnostic Information Outcome / Summative Data Management System

37 Tier II Weekly Progress Monitoring
Diagnostic assessments may need to be considered. In Tier II, the regulations state that interventions shall be delivered and progress shall be monitored weekly against established benchmarks, so the frequency of progress monitoring increases. The regulations also state that if, after 6 weeks of Tier II intervention, a child has not made progress toward benchmarks, or has made progress but is not on a trajectory to meet end-of-year benchmarks, an IST shall meet to review the child's program. The team may decide that the child's instruction is being adequately modified, that the trend line is closing the gap on the aim line in the progress monitoring report, and that the child needs another 6 weeks of instruction; or the team may decide that the instruction is not meeting the child's needs because the trend line is not closing the gap on the aim line. The main purpose of progress monitoring in Tier II is to determine whether the intervention is successfully helping students learn at an appropriate rate. We may also find that we need more information and might consider off-grade-level progress monitoring, or more in-depth information on a broader scale: something beyond the spelling inventories, phonics screeners, or word recognition surveys mentioned earlier, which should already have been given. You might need to give a diagnostic assessment.
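The trend-line versus aim-line decision described in these notes can be sketched as a small calculation. This is an illustrative sketch only; the weekly scores, baseline, and end-of-year goal below are invented, not drawn from any regulation or published instrument:

```python
# Hypothetical sketch: is a Tier II student's trend line keeping pace with
# the aim line after 6 weeks of weekly progress monitoring?
# All numbers are invented for illustration.

def slope(values):
    """Least-squares slope of weekly scores (x = 0, 1, 2, ...)."""
    n = len(values)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(values) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, values))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

def closing_gap(scores, baseline, end_of_year_goal, weeks_to_goal):
    """Compare the student's trend slope to the aim-line slope."""
    aim_slope = (end_of_year_goal - baseline) / weeks_to_goal
    return slope(scores) >= aim_slope

# Six weeks of hypothetical oral-reading-fluency scores:
scores = [22, 25, 24, 28, 31, 33]
print(closing_gap(scores, baseline=22, end_of_year_goal=60, weeks_to_goal=20))
```

If the trend slope keeps pace with the aim-line slope, the team might continue the current intervention for another 6 weeks; if not, the intervention likely needs to change.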

38 Tier II Diagnostic Assessments
Provide in-depth, reliable assessment of component skills. Are relatively lengthy. Are given when there is a clear expectation that they will provide new, reliable information about a child's difficulties to inform more powerful instruction. Formal diagnostic assessments, as we have mentioned, are generally lengthy and assess component skills in depth. They are used when more informal diagnostic information, such as spelling inventories, phonics screeners, or HFW surveys, does not provide enough information to address the child's needs. Diagnostic assessments are not meant to be given to every student, but only when truly needed, since they take longer to give and therefore remove the student from valuable instruction for a longer period of time. Informal diagnostic information, such as specific sound recognition or letter naming, may be teacher-created or come from a specific curriculum. Formal diagnostic assessments are generally published, valid, and reliable instruments. Many reading diagnostic assessments are available: DAR (Diagnostic Assessment of Reading), ERDA (Early Reading Diagnostic Assessment), PPVT (Peabody Picture Vocabulary Test), and Fox in a Box, to name a few. Remember, we are not recommending specific tests to be purchased. Keep in mind that if quality instruction is occurring in Tier I and informal diagnostic information is being used to inform instruction, the need for a formal diagnostic assessment should lessen. In choosing a diagnostic tool, make sure you are getting additional information to inform instruction. Does the tool answer the following question: What information does this diagnostic give me that can be used to inform instruction tomorrow in my small needs-based groups?

39 Progress Monitoring Diagnostic Testing Tier III
According to the regulations, progress is monitored weekly against established benchmarks in Tier 3, just as in Tier 2. In addition, the RTI procedures are designed so that the child receives the appropriate level of targeted instruction on top of core instruction in the general classroom setting. Students receive more targeted and intensive interventions, in terms of frequency and duration, based on the child's progress against the benchmark as measured through weekly progress monitoring. Further diagnostic assessment is more likely to be necessary at this level, but only if additional information to target instruction can be gleaned from the diagnostic.

40 A Comprehensive Plan Universal Screening Progress Monitoring
Diagnostic Information and Assessments Outcome Assessment Data Management System So let's summarize the characteristics of a comprehensive assessment plan. For interventions to be successful, a coherent assessment system must be in place; the various types of assessments work together. The results of the assessments are used to inform planning, and this is called "assessment-driven" instruction. If the universal screening does not reveal a problem, core instruction continues. Remember that differentiation of instruction is a core component of Tier I and of the quality instruction that needs to routinely occur for all students. If, despite quality instruction and differentiation, the student is not progressing, a diagnostic is administered to target needs-based instruction. These assessments give us information to identify the specific needs to be addressed in needs-based group instruction. As the differentiated instruction proceeds, progress monitoring continues and a decision is made as to whether additional instruction should continue. Outcome assessments are usually administered to measure important reading outcomes, such as reading comprehension, and provide important information to teachers and school leaders about the overall effectiveness of the reading program. They also provide an evaluation of the child's mastery of standards for the purpose of reporting or accountability. A data management system allows you to organize the assessment data and is necessary to effectively use that data in planning and modifying instruction. With this in mind, we would like you to revisit your sorting activity. Think about changes you might want to make. Do you think you placed your assessments in the correct categories, now that you are aware of their individual purposes? Are they appropriate for the purpose?

41 Taking Stock Take out the Taking Stock of Assessments handout from your folder. Refer to the sorting chart that we did before, and list your reading assessments on the handout. The purpose of today was to show you the components of a good assessment plan; we have shared with you the important characteristics of each of the 4 different types of assessments. At this point we would like you to actually take stock of the assessments you presently use. Perhaps many of the necessary features are already in place. Let's go through an example using something familiar to many of us: DIBELS. Note that DIBELS is available for grades K-5. Can we circle screening? Yes, because we use it three times a year to identify students who are "at risk" while addressing the critical reading skills at each benchmark for each grade level. What else can we circle in this column? Answers: progress monitoring, because it aligns with the screening, has multiple probes, and is sensitive to small changes in growth. We could also circle diagnostic, because we can get some diagnostic information, especially in grades K-1. In Reading First schools, the federal program allows us to use it as an outcome measure in addition to the DSTP. Validity and reliability information is available at the University of Oregon website by way of a technical report. A data management system for DIBELS can be found at the University of Oregon website and through Wireless Generation. We have prepared some guiding questions to help you evaluate your assessments and determine your assessment needs so that you have a coherent, integrated plan.
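The three-times-a-year screening decision described above can be illustrated with a toy benchmark lookup. The cut scores here are invented placeholders, not actual DIBELS benchmarks:

```python
# Hypothetical sketch of how universal-screening scores might be sorted
# into risk categories at each benchmark window. Cut scores are invented,
# not actual DIBELS benchmark goals.

CUTS = {"fall": (26, 44), "winter": (52, 68), "spring": (70, 90)}  # (at-risk, low-risk)

def risk_level(score, window):
    at_risk_below, low_risk_at = CUTS[window]
    if score < at_risk_below:
        return "at risk"      # candidate for Tier II/III intervention
    if score < low_risk_at:
        return "some risk"    # monitor more closely
    return "low risk"         # core instruction continues

print(risk_level(30, "fall"))  # some risk
```

The same kind of lookup, run three times a year, is what lets a school flag "at risk" students early rather than waiting for the end-of-year outcome measure.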

42 Math

43 RTI Transforming Our Vision of How to Increase the Mathematics Proficiency of All Our Children

44 Effective Math Assessment Tools
How do the characteristics of mathematical proficiency shape the design of effective screening and progress monitoring tools? What are the elements of effective math assessment?

45

46 Mathematical Reasoning Proficiency
Adaptive reasoning (Reasoning): capacity for logical thought, reflection, explanation, and justification. Analyze Compare/Contrast Make an Inference Evaluate Classify * In order to assess reasoning, the item must be "novel"

47 Pilots for Progress Monitoring
America's Choice mCLASS:Math KeyMath-3

48 Effective Mathematics Assessment
Although there is a purpose for assessments that compare the mathematical skills of a student or group of students to others in the district, state, nation, or world, these assessments are not created to facilitate an individual student’s learning.

49 Working Inside the Black Box: Assessment for Learning in the Classroom
A follow-up study to the original "Inside the Black Box" article by Black and Wiliam

50 Effective use of formative assessment has shown a direct correlation to student learning and includes the following characteristics: The learning targets are shared clearly with students from the beginning of the learning Classroom assessments accurately measure achievement of the important learning targets Students are given continuous, descriptive feedback that includes evidence about what they currently do understand and what they still need to work on Students are involved in the assessment, record keeping, and communication of learning Students understand how to close the gap between the goal and where they currently "are" (Black & Wiliam, 1998; Stiggins in DuFour et al., 2005)

51 Can a Universal Screening Tool be formative in design yet still set benchmarks that help to identify students in need of intervention?

52 Vision for the Future Statewide Universal Screening Tool with Embedded Formative Assessments Identifies and Informs Suggests Interventions Continuous Fine Tuning of our Collective Knowledge

53 Initial Data Collection (flowchart elements):
Curriculum-Based Formative Assessment Tasks; Teacher Involvement; Selection of Embedded Assessment Tasks; Identify the Focal Points for Each Grade; Benchmarks of Student Understanding and Mathematical Proficiency; Student Work Samples with Suggested Interventions; Unpacking of Embedded Assessments to Create a Universal Screening Tool; Identification of Students Below Benchmarks

54 Vision for the Future Phase One Current Assessment Practice Data
Quarterly Assessments Transfer Tasks Lead Teacher Teams Grade Level Representation (Initial Focus Grade K-5) Nomination Process Begin identification of key tasks and student work data State/District Support

55 Early Childhood

56 Early Childhood Assessment and Progress Monitoring
Verna Thompson

57 What happens at Tier 1 ALL young children have access to:
Evidence-based curriculum for ALL areas of development Effective teaching strategies & learning opportunities for ALL children Universal “probing” of key skills in ALL areas Assessing acquisition of key skills in ALL areas Aligned to:

58 What Happens at Tier 1 Results of progress monitoring of development show: If most children are making progress: curriculum planning is effective and learning opportunities are meaningful. If most children are not making sufficient progress: modify teaching strategies for the whole class. If some children are not making sufficient progress: "early intervening" for individual children.

59 Recognition and Response Guidelines for Progress Monitoring and Probing
Where: in a variety of natural settings and routines. Who: by informed caregivers (teachers, parents, teams). What: collection of data from multiple sources in ALL areas: curriculum-based assessment aligned with the Early Learning Foundations, observations, parent information, work samples, checklists. When: ongoing assessment of skills in ALL areas, with periodic "probing" of ALL key skills. Ideally the same tool is used for monitoring and probing. Results inform curriculum, teaching strategies, and planned learning opportunities for children.

60 Guidelines for Selecting Authentic Assessment Tools
Authentic Assessment measures should have the following characteristics: Curriculum-based Designed to be used multiple times Easy to score Sensitive to individual differences Provide information on both level and rate of growth in key areas of learning Related to long term learning goals in curriculum Aligned to Delaware Early Learning Foundations Ideally, single measure should be used for authentic assessment and probing for key skills

61 Authentic Assessment Tools for Early Childhood
Curriculum-based tools (aligned to the Early Learning Foundations): Creative Curriculum Developmental Assessment, High Scope COR, Carolina Curriculum. Checklists linked to the Early Learning Foundations (collected from multiple sources of information): RORS, Work Sampling, Developmental Checklist Birth to Five

62 Early Childhood in Delaware
Head Start and Early Childhood Assistance Programs use authentic assessment. The Early Learning Foundations are being revised and will include a format for assessing children. Child Care Licensing Regulations have been revised; centers are required to assess children. Training on authentic assessment is being planned. ABCD Grant: a Public Health screening initiative.

63 ABCD Grant Assuring Better Child Health and Development
Collaboration with Medicaid, DPH, DE-AAP, and the Autism Society of DE. Developmental screening for all children at well visits with a standardized screening tool (Ages & Stages, PEDS) at 9, 12, 18, 24, and 36 months. Being piloted in two community practices in DE; will be replicated statewide through Medicaid policy. Plans include training early childhood providers. An effort by the National Academy for State Health Policy.

64 Recognition and Response Pilot
Pilot States presently implementing R&R: Connecticut Arizona Maryland Florida

65 What Can Delaware School Districts Do Now?
Advocate for QUALITY Early Childhood programs Develop RELATIONSHIPS with families and programs in community Develop PARTNERSHIPS with community programs Communicate with programs in community Suggest checklists aligned to Early Learning Foundations Provide training on using assessment checklists Use AUTHENTIC ASSESSMENT to monitor children’s progress NOT responsible for screening all children from birth in community

66 Using Recognition and Response in Kindergarten
Kindergarten Screening: Consider pre-kindergarten experiences. Was the child in an environment that provided learning opportunities, or is kindergarten the child's first learning opportunity? Analyze screening results.

67 Activity At your table: 1. Select one area of learning from the
Delaware Early Learning Foundations. 2. Observe children in the video clip. 3. Document learning opportunities you observed in that area of learning. 4. Discuss what you observed with the group.

68 Behavior

69 RTI and Behavior/ Mental Health
Do we have to "do RTI" for behavior and social-emotional issues? Does it make sense to "do RTI" for behavior and social-emotional issues? NOTE: on this slide, emphasize the "does it make sense" portion; whether we have to do it is secondary. Do we have to "do RTI"? Short answer: No. With respect to the regulations for special education, there is no requirement for the three-tier model to be in place for behavior. So, to be clear, schools are NOT REQUIRED to do anything at present with respect to identification of behavior issues. Of course, children with behavior and social-emotional difficulties will be part of the schoolwide academic screenings and routine monitoring you will be doing, so you may turn up some kids in need of behavior and social-emotional interventions as you do your academic screenings (e.g., those who find it hard to participate in the routine screenings). Also remember that for an LD determination, behavior needs to be ruled out as the primary cause. Does it make sense? Short answer: Yes! As you develop your three-tiered model for academics, it makes sense to consider how behavior fits into your model. Many students who struggle with behavior will improve as you attend to their academic difficulties through differentiated instruction. Others may need specific attention to behavior issues in addition to their academic concerns. Still others may be fine academically but have behavioral or social-emotional problems. All of these situations can be addressed through a three-tiered approach to behavior, fully integrated with your three-tiered model for academics. No! Yes!

70 RTI and Behavior/ Mental Health
Are we already “doing RTI” for behavior and social emotional issues? Short answer: quite possibly! At least in some aspects…. Many of you in the audience are from districts who have PBS in your schools. Some are using Responsive Classrooms, and others are using a model (whether it be formal or informal) that attends specifically to teaching children behavioral expectations and social-emotional competencies in a systematic, school-wide manner. Quite Possibly!

71 RTI and Behavior/Mental Health
Tier 1: Schoolwide teaching of behavioral expectations and social-emotional competencies; using schoolwide data to identify and address problems. If your school is already making sure all students know what is expected, the expectations are actively taught, and kids' efforts to meet the expectations are acknowledged (not necessarily with tangible reinforcers), you are already accomplishing some of the goals of the universal, or Tier 1, level. The focus of this slide is the next part: if you go a step further and systematically examine how successful the program is in accomplishing the goal of a safe, supportive school environment in which kids know how to get along with one another, and identify where the program is less than optimally effective (e.g., in certain locations of the school, at specific times of day or activities, with specific students), then you are already accomplishing the remainder of the Tier 1 goals. You use this information to continue to improve the schoolwide program, provide assistance to teachers who want it, and identify kids who may need more attention.

72 Universal Screening and Behavior/ Mental Health
What is already happening? Monitoring office discipline referrals Teacher referral to school-based problem solving team There are some differences between the academic and behavioral programs that you will be developing. With behavior, you may or may not choose to do routine screening for difficulties (with academics, of course, this is required). But, again, let’s look at what you may already be doing that will help you move quickly to assist kids in need. Office Discipline Referrals (ODRs) can be a source of identifying kids who may be in need of additional supports. As a school you can choose how low or high to set your standard for when a child would be judged in need of more specialized intervention Teacher referral: most schools have some kind of school-based problem solving teams who receive requests for assistance regarding behavior. While we don’t usually think about this as a universal screening, the “teacher as test” model is a good one! Teachers know which kids are demonstrating atypical behavior. However, this method may not be systematic enough to meet your needs. As you are setting up your three tiered model for academics, it is important to create a unified system of problem solving. DON’T have separate teams for behavior and academic problem-solving. If your school is large, several grade level teams as the initial problem solving step for BOTH academics and behavior will make more sense.

73 Universal Screening and Behavior/ Mental Health
Are we ready to increase our attention to universal screening for behavior and social emotional issues? Turning to things you probably are not doing…. The methods discussed next are recommended for consideration ONLY once your schoolwide system (tier one) is firmly in place and functioning well. In addition, you probably will want to wait until your Tier 2 and 3 systems are functioning well for both behavior and academics. Why? Because these methods are about identifying kids who might otherwise “slip through the cracks” (especially kids with internalizing difficulties like depression and anxiety). You don’t want to identify kids before you are ready to provide effective services to them once identified. An exception: if you have a well-developed set of intervention options (perhaps through a wellness center of community-based counseling center), you may want to begin sooner. Maybe!

74 Universal Screening and Behavior/ Mental Health
Purpose: to identify youth who have high risk for developing behavioral or mental health problems Conducted on a schoolwide basis Typically involves several levels of assessment to avoid over- or under-identification of students Time will not permit a detailed discussion of methods today. The Positive Behavior Supports initiative will be conducting a day long workshop in the spring for those interested in learning more. “Multiple Gating Procedures” are common.

75 Universal Screening and Behavior/ Mental Health
Multiple Gating Procedures Gate 1: teacher nomination procedure Gate 2: teacher rating scale procedure Gate 3: observation and/or more detailed rating scales Identification of students most at-risk Intervention Planning In this kind of approach, at Gate 1, teachers receive information about particular risk factors. Often, these are defined as "externalizing" (e.g., conduct problems, hyperactivity) and "internalizing" (e.g., depression, anxiety). Teachers then identify up to 3 students in their classes who exhibit each set of risk factors just defined; this is Gate 1. You can expect over-identification of at-risk children through this procedure (six students per classroom is A LOT, but at this stage you are trying not to miss anyone). Immediately following, teachers complete a BRIEF rating scale on the students they just nominated. These results are compared to normative data (national or local norms) and students scoring in the atypical range are identified; this is Gate 2. The first two gates are designed to be accomplished in a single schoolwide meeting in about an hour. Again, there may be children identified who do not need interventions (although fewer than at Gate 1), so you move to Gate 3. Gate 3 can take a variety of forms but often involves systematic observation and more detailed rating scales (some of which will be familiar to you: Achenbach, BASC, etc.). This procedure is designed to identify the most at-risk kids. As part of Gate 3, you will figure out two things: 1) Does this student appear to need intervention? 2) If yes, what kind of intervention makes sense? (i.e., develop an intervention plan).
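The three-gate flow in these notes can be sketched as a series of filters. All names, ratings, and cutoffs below are hypothetical, and the real procedure uses published rating scales rather than a single number:

```python
# Minimal sketch of a three-gate screening flow. Names, cutoffs, and data
# are illustrative only, not drawn from any published instrument.

def gate1(teacher_nominations):
    """Each teacher nominates up to 3 students (simplified to one list each)."""
    return [s for roster in teacher_nominations for s in roster[:3]]

def gate2(nominated, ratings, cutoff=60):
    """Keep students whose brief rating-scale score falls in the atypical range."""
    return [s for s in nominated if ratings[s] >= cutoff]

def gate3(flagged, observations):
    """Observation and detailed scales confirm who needs an intervention plan."""
    return [s for s in flagged if observations.get(s) == "confirmed"]

nominations = [["Ava", "Ben", "Cy", "Dee"], ["Eli"]]
ratings = {"Ava": 72, "Ben": 40, "Cy": 65, "Eli": 58}
obs = {"Ava": "confirmed", "Cy": "not confirmed"}
print(gate3(gate2(gate1(nominations), ratings), obs))  # ['Ava']
```

Notice how each gate shrinks the pool: over-identification at Gate 1 is deliberate, and the later, more expensive gates are only applied to the students who pass the earlier ones.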

76 Universal Screening and Behavior/ Mental Health
Self-report procedures Gate 1: schoolwide screening Gate 2: follow up interview Gate 3: diagnostic interview At middle and high school levels, especially if you are interested in internalizing behaviors like anxiety and depression, self report is a more accurate identification method than teacher or even parent report. There are a variety of methods available to do this (see handout of resources).

77 Universal Screening and Behavior/ Mental Health
What does progress monitoring look like? The same as for academic progress, plus measures that are more individualized around specific behavioral or social-emotional concerns. Once you have identified kids at risk and are applying interventions, you will be doing progress monitoring specific to those interventions. Keeping our eye on the prize of academic progress, kids with social-emotional and behavioral problems stay within the regular academic progress-monitoring system, so you will know if academics are improving or deteriorating. Kids with both academic and behavioral/social-emotional issues will also be receiving differentiated instruction, with more intensive services within the academic tiers as needed, but in many cases you will also be watching the effectiveness of interventions targeted specifically toward the behavioral issues.

78 Tools for progress monitoring:
Daily Report Cards. Tools for progress monitoring: php/tbrc/tbrc.php See handout. You can create very child-specific behavior ratings that can be used as progress monitors. These can be combined with ChartDog to create graphs.
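A Daily Report Card can feed progress monitoring once its ratings are summarized into a single number that can be graphed week over week. A minimal sketch, with invented target behaviors and ratings:

```python
# Hypothetical sketch: summarizing a child-specific Daily Report Card into
# a weekly percentage that can be plotted as progress-monitoring data.
# Target behaviors, rating scale, and data are invented for illustration.

def weekly_percent(daily_ratings, points_per_day):
    """Percent of possible points earned across the week."""
    earned = sum(sum(day) for day in daily_ratings)
    possible = points_per_day * len(daily_ratings)
    return round(100 * earned / possible, 1)

# Each day: points on 3 target behaviors, each rated 0-2 (max 6 per day).
week = [[2, 1, 2], [1, 1, 2], [2, 2, 2], [0, 1, 1], [2, 2, 1]]
print(weekly_percent(week, points_per_day=6))  # 73.3
```

Plotting these weekly percentages over time gives the same kind of time-series graph used for academic progress monitoring, so the behavioral data can be read with the same trend-line logic.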

79 Universal Screening and Behavior/ Mental Health
Read more about it…. Distribute handout

80 Some Options….. Data Management Systems

81 DATA MANAGEMENT FREEWARE http://www.jimwrightonline.com
RTI: Graph Data for Visual Analysis. Charts and graphs transform progress-monitoring data into visual displays; time-series graphs in particular are widely used, and a positive trend line shows when the student is responding well to intervention. I. Excel Graphs Made Easy: download pre-formatted Excel spreadsheets, enter data, and create time-series graphs for common academic measures. Designed by Dr. Jim McDougal and student colleagues Karrie Clark and Jacklyn Wilson of SUNY College at Oswego. II. Generate Time-Series Graphs On-Line: enter your student data into the on-line application ChartDog, create time-series graphs, and plot trend lines. III. Paper Charts: chart your data by hand using a collection of blank time-series charts that you can download and print.

82 DATA MANAGEMENT COMMERCIAL
Existing Grading Program & Excel Software Web based Handheld Qualitative Data Examples: (demos) (archived web casts)

83 Diagnostic Information
Universal Screening Progress Monitoring Diagnostic Information Outcome / Summative Data Management System

84 A Look at the Real World……
Implementation of a Comprehensive Data Plan Presented by Capital School District Pam Hererra Colleen Rinker Michele Waite

85 A Comprehensive Framework for Reading Instruction
HEAD: making informed decisions based on data. Comprehensive Assessment System. Literacy Support Team (LST). Instructional Support Team (IST). The whole body is Tier 1 instruction: high-quality, effective instruction for all students. Backbone: the assessment system. LST and IST go hand in hand. Heart: students. The clothes are Tier 2 and 3 interventions, and they change based on needs. Grade Level Meetings. Grade Level Planning.

86 Comprehensive Assessment Matrix for Reading
Organization of the assessment system. Purposes for assessments. Timelines and schedules for assessments. Using an Assessment Team. Importance of professional development related to administering assessments. Role of the district in implementation. 1) Show assessment matrices; discuss the importance of using a variety of assessments that give you the most valuable information so informed decisions can be made about students. Discuss why, as we moved to DIBELS, we also kept GATES and GRADE: we made small adjustments to the plan each year, and as we added all schools to DIBELS we kept GATES and GRADE because at those levels they were the only consistent measures. We will continue to re-evaluate whether to keep them. EASE OF IMPLEMENTATION IS THE KEY. Showing the importance of having the data and USING the data to make instructional decisions is INTEGRAL to the success of implementation. WE MUST CHANGE OUR MINDSET: INSTEAD OF VIEWING ASSESSMENT AND INSTRUCTION AS TWO SEPARATE THINGS, WE MUST DEMONSTRATE THAT ASSESSMENT IS INSTRUCTION. That is why the measures we select must be curriculum-based. This is a very hard perception to change! If we don't find out what our students know or don't know, we are teaching "a program," not students, and chances are we will have to re-teach. It just makes more sense to take the time to find out what they know so we can better match instruction to needs. 2) Discuss use of an Assessment Team for benchmarks. 3) Importance of professional development for using and administering the assessments reliably: paper-pencil and Palms; having teachers practice progress monitoring at grade-level meetings.
4) Provide time and support for schools during implementation; build the necessary infrastructure; help with purchase of materials and training for staff; provide additional technology support for Palms and database reporting; Literacy Support Meetings; grade-level meetings; walkthroughs; professional development (ALL of it revolves around our data). Show the book club list; most schools are doing Differentiated Instruction this year plus another book related to data at the school level. CHANGE HAPPENS AT THE CLASSROOM LEVEL

87 School Level Implementation
Literacy Support Team Meetings and connections to IST Use of assessment data Model Lessons Follow up with teachers Data notebook for principal Importance of principal involvement at all levels

88 Implementation at the School Level
Data notebooks for each teacher Additional assessments and purposes Diagnostic assessments (chart) Infrastructure necessary Grade level meetings and planning based on data Professional Development

89 Your Role Before and After the RTI Initiative
Topics to Consider: DE RTI Regulations; Schedules; Professional Development; Accountability; Materials & Resources; Data Collection & Review. Participants will move to another room and discuss their roles in the implementation of an effective comprehensive assessment plan for RTI in their districts. Guiding questions for this activity: How will your role change? How will your role remain the same? What are some of the challenges you might encounter? How will you support your colleagues in the implementation of RTI?

90 Taking Stock Introduce the homework assignment by handing out the Taking Stock of Assessments worksheets (see Kathy's form titled Taking Stock of Assessments)

91 Conclusions and Evaluations

92 Vendors Presenting Data Management Systems
ARE they presenting data management systems or assessment systems?

