Early Childhood Outcomes Center
Do My Data Count? Questions and Methods for Monitoring and Improving Our Accountability Systems
Dale Walker, Sara Gould, Charles Greenwood, and Tina Yang, University of Kansas, Early Childhood Outcomes Center (ECO)
Marguerite Hornback, Kansas Leadership Project, 619 Liaison
Marybeth Wells, Idaho 619 Coordinator
Acknowledgement: Thanks are due to our Kansas colleagues who assisted with the development, administration, and analysis of the COSF survey and team-process videos, and to the Kansas Part C and Kansas and Idaho Part B professionals who participated in the COSF process. Appreciation is also extended to our ECO and Kansas colleagues for always posing the next question.
Purpose of this Presentation
- Explore a range of questions to assist states in establishing the validity of their accountability systems
- Illustrate with state examples how outcome data may be analyzed
- Discuss ways to gather, interpret, and use evidence to improve accountability systems
- Information synthesized from the Guidance Document on Child Outcomes Validation, to be distributed soon
Validity of an Accountability System
An accountability system is valid when the evidence is strong enough to conclude that:
- The system is accomplishing what it was intended to accomplish and is not leading to unintended results
- System components are working together toward accomplishing that purpose
What Is Required to Validate Our Accountability Systems?
- Validity requires answering a number of logical questions demonstrating that the parts of the system are working as planned
- Validity is improved by ensuring the quality and integrity of the parts of the system
- Validity requires continued monitoring, maintenance, and improvement
Some Important Questions for Establishing the Validity of an Accountability System
- Is fidelity of implementation of measures high?
- Are measures sensitive to individual child differences and characteristics?
- Are the outcomes related to the measures?
- What are the differences between entry and exit data?
- Are outcomes sensitive to change over time?
- Are those participating in the process adequately trained?
What Methods Can Be Used to Assess System Fidelity?
- COSF ratings and the rating process (including types of evidence used, e.g., parent input)
- Team characteristics of those determining ratings
- Meeting characteristics or format
- Child characteristics
- Demographics of programs or regions
- Decision-making processes
- Training information
- Comparing ratings over time
Fidelity: Analysis of the Process to Collect Outcomes Data: Video Analysis
- 55 volunteer teams in Kansas submitted team-meeting videos and matching COSF forms for review
- The sample was intended to be representative of the state
- Videos were coded for:
  - Team characteristics
  - Meeting characteristics
  - Evidence used
  - Tools used (e.g., ECO decision tree)
Fidelity: Analysis of the Process to Collect Data Using Surveys
- Staff surveys were presented and completed online using Survey Monkey
- 279 surveys were completed
- Responses were analyzed by research partners
- Results may be summarized using Survey Monkey or another online data system
Fidelity: Analysis of the Process to Collect Data Using State Databases
- Kansas provided Part C and Part B data
- Idaho provided Part B data
- Data included COSF ratings, OSEP categories, and child characteristics
Fidelity: Types of Evidence Used in COSF Rating Meetings (videos only)
- Child strengths (67-73% across outcome ratings)
- Child areas to improve (64-80%)
- Observations by professionals (51-73%)
Fidelity: Types of Evidence Used in COSF Rating Meetings (videos and surveys)
Assessment tools
- Video: 55% used for all 3 ratings
- Survey: 53% used one of Kansas' most common assessments
Parent input incorporated
- Video: 47%
- Survey: 76%
  - 39% contribute prior to the meeting
  - 9% rate separately
  - 22% attend the COSF rating meeting
Fidelity: How Can We Interpret This Information?
Assessment use
- About half are consistently using a formal set of questions to assess child functioning
Parent involvement
- Know how much to emphasize parent involvement in training
- Help teams problem-solve to improve parent involvement
Fidelity: Connection Between the COSF and Discussion (video)
- 67% documented assessment information but did not discuss results during meetings
- 44% discussed observations during meetings but did not document them in paperwork
How Information About the Process Has Informed QA Activities
Used to improve the quality of the process:
- Refine the web-based application fields
- Improve training and technical assistance
- Refine research questions
- Provide valid data for accountability and program improvement
Are Measures Sensitive to Individual and Group Differences and Characteristics?
An essential feature of measurement is sensitivity to individual differences in child performance:
- Child characteristics
- Principal exceptionality
- Gender
- Program or regional differences
Frequency Distribution for One State's Three OSEP Outcomes for Part B Entry
Frequency Distribution for One State's Three OSEP Outcomes for Part C Entry
Interpreting Entry Rating Distributions
If the measure is sensitive to differences in child functioning:
- Children should appear in every rating category
- More children should fall in the middle than at the extremes (1s and 7s)
- 1s should reflect very severe exceptionalities
- 7s are children functioning at age level with no concerns; there should not be many of them receiving services
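The distribution checks above can be sketched as a short script. This is a minimal illustration, not part of the original analysis (which used SPSS); the function name and the sample ratings are hypothetical.

```python
from collections import Counter

def entry_distribution_checks(ratings):
    """Tally COSF entry ratings (1-7) and run the two sanity checks
    described above: every category used, and more children in the
    middle (3-5) than at the extremes (1 and 7)."""
    counts = Counter(ratings)
    every_category_used = all(counts[r] > 0 for r in range(1, 8))
    middle = sum(counts[r] for r in (3, 4, 5))
    extremes = counts[1] + counts[7]
    return {
        "counts": {r: counts[r] for r in range(1, 8)},
        "every_category_used": every_category_used,
        "more_middle_than_extremes": middle > extremes,
    }

# Hypothetical entry ratings, for illustration only
sample = [1, 2, 2, 3, 3, 3, 4, 4, 4, 4, 5, 5, 5, 6, 6, 7]
result = entry_distribution_checks(sample)
```

A state could run the same checks separately for each of the three outcomes and each part (B/C) before looking at any cross-tabulations.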
Social Entry Rating by State
Interpreting Exit Ratings
- If the distribution stays the same as at entry, children are gaining at the same rate as typical peers, but not catching up
- If the distribution moves "up" (numbers get higher), children are closing the gap with typical peers
- If ratings are still sensitive to differences in functioning, there should still be variability across ratings
Interpreting Social Exit Ratings
How Can We Interpret Changes in Ratings Over Time?
- Difference = 0: not gaining on typical peers, but still gaining skills
- Difference > 0: gaining on typical peers
- Difference < 0: falling farther behind typical peers
- If the system is effectively serving children, we would expect to see more of the first two categories than the last
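The three-way classification above (exit minus entry rating) can be sketched as follows. A minimal illustration only; the function names and the (entry, exit) pairs are hypothetical, not state data.

```python
from collections import Counter

def classify_progress(entry_rating, exit_rating):
    """Classify a child's entry-to-exit COSF rating difference using
    the three categories described above."""
    diff = exit_rating - entry_rating
    if diff > 0:
        return "gaining on typical peers"
    if diff == 0:
        return "holding position (still gaining skills)"
    return "falling farther behind typical peers"

def summarize(pairs):
    """Count children per category; pairs is a list of (entry, exit) tuples."""
    return Counter(classify_progress(e, x) for e, x in pairs)

# Hypothetical (entry, exit) pairs, for illustration only
pairs = [(3, 5), (4, 4), (2, 3), (5, 4), (3, 3)]
summary = summarize(pairs)
```

If the last category dominates the summary, that is a signal to re-examine either the services or the measurement process.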
Social Rating Differences by State
Are a State's OSEP Outcome Scores Sensitive to Progress Over Time? Examples from 2 States
Distributions Across the Knowledge and Skills Outcome at Entry and Exit
Distributions Across the Social Outcome at Entry and Exit
Comparison of State Entry Outcome Data from 2007 and 2008
Importance of Looking at Exceptionality Related to Outcome
- Ratings should reflect child exceptionality, because an exceptionality affects functioning
- DD ratings should generally be lower than SL ratings, because DD is a more pervasive exceptionality
Meets Needs by Principal Exceptionality and COSF Rating
Meets Needs by Principal Exceptionality and OSEP Category
Interpreting Exceptionality Results
Different exceptionalities should lead to different OSEP categories:
- More SL in category E (rated higher to start with; less pervasive and easier to achieve gains)
- More DD in category D (gaining, but still some concerns; more pervasive and harder to achieve gains)
Gender Differences
- Ratings should generally be consistent across gender; if not, ratings or criteria might be biased
- Need to ensure that gender differences aren't really exceptionality differences, since some diagnoses are more common in one gender than the other
Entry Outcome Ratings by Gender
Mean Differences and Ranges in the 3 Outcomes by Gender
Gender and Exceptionality

Exceptionality   Male (KS)   Male (ID)   Female (KS)   Female (ID)
DD               50.9%       60.9%       50.7%         62.5%
SL               46.2%       33.7%       46.3%         31.3%
Importance of Exploring Gender Differences by Exceptionality
Because essentially the same percentage of boys and girls are classified as DD and as SL, rating differences are not the result of exceptionality differences.
Program or Regional Differences in the Distribution of Outcome Scores
- If programs in different parts of the state are serving similar children, then ratings should be similar across programs
- If ratings differ across programs with similar children, check assessment tools, training, and meeting/team characteristics
Program or Regional Differences in the Distribution of Outcome Scores
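The program-level check described above (similar children should yield similar ratings across programs) can be sketched as a simple screen that flags programs whose mean entry rating is far from the statewide mean. A hedged illustration only: the function name, the one-point threshold, and the program data are hypothetical, and a flagged program warrants a fidelity review rather than a conclusion.

```python
from statistics import mean

def flag_program_differences(ratings_by_program, threshold=1.0):
    """Flag programs whose mean entry rating differs from the statewide
    mean by more than `threshold` rating points. `ratings_by_program`
    maps program name -> list of COSF ratings (1-7)."""
    statewide = mean(r for rs in ratings_by_program.values() for r in rs)
    flagged = {}
    for program, ratings in ratings_by_program.items():
        program_mean = mean(ratings)
        if abs(program_mean - statewide) > threshold:
            flagged[program] = round(program_mean, 2)
    return statewide, flagged

# Hypothetical program data, for illustration only
data = {
    "Program A": [3, 4, 4, 5, 4],
    "Program B": [3, 4, 4, 3, 4],
    "Program C": [2, 2, 3, 2, 2],  # noticeably lower: worth a fidelity check
}
statewide, flagged = flag_program_differences(data)
```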
Are the 3 Outcomes Related?
We expect to see patterns of relationships across the functional outcomes, as compared with domain scores
Correlations Across Outcomes at Entry

State and Part     ID (B)   KS (B)   KS (C)
Know vs. Meets     .726     .732     .633
Social vs. Meets   .799     .743     .620
Know vs. Social    .782     .774     .758
N children         1003     1280     1108
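A correlation table like the one above can be reproduced with a plain Pearson correlation. The original analyses used SPSS; this is just a minimal sketch, and the two rating lists are hypothetical.

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length
    lists of ratings."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical ratings for two outcomes on the same children,
# for illustration only
know = [2, 3, 4, 4, 5, 6, 7, 3, 5, 6]
social = [3, 3, 4, 5, 5, 6, 6, 2, 5, 7]
r = pearson(know, social)
```

Moderate-to-strong positive correlations (as in the table) are expected, since the three outcomes describe overlapping aspects of functioning; near-zero or negative values would be a red flag for the rating process.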
Correlations Between Assessment Outcomes on the BDI and COSF Ratings
Mean correlations between COSF outcome ratings and BDI domain scores:
- Social vs. Personal-Social = .65
- Knowledge vs. Cognitive = .62
- Meets Needs vs. Adaptive = .61
Outcome Rating Differences by Measure
- Use of different measures may be associated with different ratings, because measures provide different information
- Different measures may also be associated with different exceptionalities
Mean Knowledge and Skills Outcome Differences as a Function of Measure
Interpreting Team and Meeting Characteristics
Team characteristics
- Team size and composition
Meeting characteristics
- How teams meet
- How parents are included
Team Composition
- Video: 93% had 2-4 professionals; 35% included an SLP, 30% an ECE
- Survey: 85% had 2-4 professionals; 95% included an SLP, 70% an ECE
How Do Teams Complete Outcome Information?
Do teams meet to determine ratings? (survey)
- 41% always meet as a team
- 42% sometimes meet as a team
- 22% report that members contribute, but one person rates
- 5% report that one person gathers all information and makes the ratings
How teams meet at least sometimes (survey)
- In person: 92%
- Phone: 35%
- Email: 33%
What Does Team Information Provide That Is Helpful for Quality Assurance?
- The COSF process is intended to involve teams; this happens some of the time
- Teams are creative in how they meet, likely due to logistical constraints
- Checks the fidelity of the system (whether it is being used as planned)
- If we know how teams are meeting, training can be modified to accommodate them
Decision-Making Process Followed by Teams
- Standardized steps
- Consensus reached by teams
- Deferring to a leader
What Steps Did Teams Use to Make Decisions?
Use of crosswalks (survey)
- 59% reported that their team used crosswalks
- 94% reported using them to map assessment items and sections to COSF outcomes
ECO decision tree use
- Video: 95%
  - 6% used it without discussing evidence (yes/no at each step)
  - Others discussed evidence at each step, then rated and documented
  - Others discussed and documented at each step
- Survey: 81%
What Does This Indicate About the Team Decision-Making Process?
Use of the decision tree and crosswalks:
- Indicates teams are using similar processes to determine ratings across the state
- Important because the steps taken will affect results
- Even when using the same tools, we must check that teams are using them correctly; the decision tree is intended to be used WITH evidence of child functioning, not by itself
Did Teams Always Come to a Consensus?
Consensus
- Video: 86%
- Survey: 96% found consensus easy or somewhat easy to reach
Deferral
- In 11%, a team member deferred to another on one rating
Conclusions About How Teams Make Decisions
- Teams are typically making rating decisions as a team, not letting one or two individuals decide the ratings
- This is important because the COSF was intended to be used by a team, not by individuals
Collaboration Between Part C and Part B in Decision-Making
- Video: 56% had at least one professional present at both the Part C and Part B meetings
- Survey: 49% collaborate at least sometimes
- When Part C and Part B teams collaborate, information and effort are shared
- Transition is made easier for families and more effective for children
What Is Reported About Training?
- 68% felt adequately trained to complete the COSF process
Perceived proficiency
- 25% proficient
- 52% somewhat proficient
- 23% would like to feel more proficient
Conclusions About Training
- Most professionals were satisfied with training
- If they had not been, training methods would have needed re-evaluation
- There is still room for improvement
- Maintaining training is a constant battle due to high staff turnover rates
How Do We Apply What We Learned About Training?
Training should address:
- Use of crosswalks
- Use of the ECO decision tree
- Whether professionals feel adequately trained
- If the COSF is not reflecting differences in child functioning, training may need to be modified
Some Additional Questions to Ask
Are OSEP outcomes affected by variable conditions in a state's accountability processes?
- Resources: the ability to establish a standard platform for data collection and analysis; not all states have access to resources, research partners, etc.
- Rates of staff turnover: outcomes depend on informed, well-trained staff with access to training and technical assistance
- Uses of technology to support data collection, training, and management: websites can make information readily available statewide for data entry, analysis, and reporting
Summary and Future Directions
- All states are responsible for establishing the validity of their systems, and thereby the power of the decisions made based on the data
- States can begin building the case for the validity of their accountability systems through analyses of outcome data and internal studies of quality and fidelity of implementation
- Tools used for the data tables, charts, and graphs were the SPSS statistical package, Microsoft Word, and Microsoft Excel
Some of these data are published in:
Greenwood, C. R., Walker, D., Hornback, M., Hebbeler, K., & Spiker, D. (2007). Progress developing the Kansas Early Childhood Special Education Accountability System: Initial findings using the ECO Child Outcome Summary Form (COSF). Topics in Early Childhood Special Education, 27(1), 2-18.
This work was supported by grants from the U.S. Office of Special Education Programs to SRI and collaborating partners (ECO Center: H327L030002; General Supervision Enhancement Grant: H326X040018). We extend our appreciation for this support.
For more information, see: http://www.fpg.unc.edu/~ECO/