Do We Have To Choose Between Accountability and Program Improvement? NECTAC’s Measuring Child and Family Outcomes Conference, 2006. Kristie Pretti-Frontczak.


1 Do We Have To Choose Between Accountability and Program Improvement? NECTAC’s Measuring Child and Family Outcomes Conference. Kristie Pretti-Frontczak, Kent State University; Jennifer Grisham-Brown, University of Kentucky

2 Overview of Session
- Discuss the need for measuring child outcomes as it relates to programming and accountability purposes
- Discuss three issues, with associated recommendations and related research
- Discussion is encouraged throughout
- Time will remain at the end for questions and further discussion of what was presented

3 Introductions and Setting a Context
- Kristie Pretti-Frontczak, Kent State University
- Jennifer Grisham-Brown, University of Kentucky
- Belief/Bias/Recommended Practice: Authentic assessment is critical regardless of purpose

4 CENTRAL QUESTION FOR TODAY’S PRESENTATION
- Can instructional data be used for accountability purposes?
- The short answer: Yes (IF)…

5 Linked System Approach (diagram): Assessment → Goal Development → Instruction → Evaluation. Descriptors from the diagram: authentic; involves families; comprehensive; common; based upon children’s emerging skills; will increase access and participation; systematic; ongoing; guides decision-making; developmentally and individually appropriate; comprehensive and common.

6 If you…
- If you assess young children using a high quality authentic assessment…
- Then you’ll be able to develop high quality individualized plans to meet children’s unique needs…
- If you identify the individual needs of children…

7
- You’ll want to use the information to guide curriculum development…
- If you have a curriculum framework that is designed around the individual needs of the children…
- Then you’ll want to document that children’s needs are being met…

8
- Then you’ll need to monitor children’s performance over time using your authentic assessment…
- And when you have done the authentic assessment for a second or third time, you’ll want to jump for joy because all of the children will have made progress!

9 Three Issues
- Selection
- Implementation
- Interpretation

10 Questions around Selecting an Assessment
- Which tools/processes?
- Which characteristics should be considered?
- What about alignment to state standards or Head Start Outcomes?
- Use a single/common assessment or a list?
- Allow for choice or be prescriptive?
- Who should administer?
- Where should the assessment(s) be administered?

11 Recommendations
- Use an assessment for its intended purpose
- Avoid comparing assessments to one another; rather, compare them to stated/accepted criteria:
  - Alignment to local/state/federal standards
  - Reliable and valid
  - Comprehensive and flexible
  - Link between assessment purposes
  - Link between assessment and intervention

12 Recommendations Continued
- Allow for state/local choice if possible:
  - Increases likelihood of a match
  - Increases fidelity and use
  - Avoids a one-size-fits-all approach
  - If the assessment is flexible and comprehensive, a single assessment might work
- Authentic, authentic, authentic:
  - People who are familiar
  - Settings that are familiar
  - Toys/materials that are familiar

13 Generic Validation Process
- Step 1: Create a Master Alignment Matrix
  - Experts create a master matrix
  - Establish inclusion and exclusion criteria
- Step 2: Create Expert Alignment Matrixes
  - Experts blind to the master matrix create their own alignment matrixes
- Step 3: Validate Master Alignment Matrix
  - Compare master and expert matrixes
  - Ensure that all items that should be considered were placed on the final matrixes
  - Examine the internal consistency of the final matrixes
Allen, Bricker, Macy, & Pretti-Frontczak, 2006; Walker & Pretti-Frontczak, 2005. For more information on crosswalks, visit: … or …
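Step 3’s comparison of master and expert matrixes can be quantified as simple cell-by-cell agreement. The sketch below is illustrative only; the matrix values are invented and are not taken from the cited studies:

```python
# Illustrative sketch (not the authors' exact procedure): compare one expert's
# alignment matrix against the master matrix as cell-by-cell agreement.
# A cell is 1 if assessment item i was aligned with standard j, else 0.

def matrix_agreement(master, expert):
    """Share of cells on which an expert matrix matches the master matrix."""
    cells = [(m, e) for row_m, row_e in zip(master, expert)
                    for m, e in zip(row_m, row_e)]
    matches = sum(1 for m, e in cells if m == e)
    return matches / len(cells)

master = [[1, 0, 0],
          [0, 1, 0],
          [0, 0, 1]]
expert = [[1, 0, 0],
          [0, 1, 1],   # this expert also aligned item 2 with standard 3
          [0, 0, 1]]

print(f"Agreement with master: {matrix_agreement(master, expert):.0%}")  # 89%
```

In practice the comparison would be run for every expert, and cells of disagreement flagged for discussion before finalizing the master matrix.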

14 Concurrent Validity
- Purpose: To examine the concurrent validity between a traditional norm-referenced standardized test (BDI-2) and a curriculum-based assessment (AEPS®)
- Subjects: 31 Head Start children, ranging in age from 48 to 67 months (M = 60.68, SD = 4.65)
- Methods: Six trained graduate students administered the BDI-2 and six trained Head Start teachers administered the AEPS® during a two-week period. Seven bivariate 2-tailed correlations (Pearson’s and Spearman’s) were conducted.
- Results: Five correlations suggested a moderate to good relationship between the BDI-2 and the AEPS; two correlations suggested a fair relationship.
Hallam, Grisham-Brown, & Pretti-Frontczak, 2005
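The bivariate correlations described in the Methods can be computed with a few lines of standard-library Python. This is a hedged sketch: the paired domain scores below are invented for illustration and are not the study’s data.

```python
# Pearson and Spearman correlations between paired domain scores from two
# instruments, standard library only. Spearman's rho is Pearson's r on ranks.
from statistics import mean

def pearson(x, y):
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

def ranks(x):
    # 1-based ranks, with tied values assigned their average rank
    order = sorted(range(len(x)), key=lambda i: x[i])
    r = [0.0] * len(x)
    i = 0
    while i < len(x):
        j = i
        while j + 1 < len(x) and x[order[j + 1]] == x[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    return pearson(ranks(x), ranks(y))

bdi  = [66, 70, 58, 62, 75, 68, 71, 60, 64, 73]   # invented BDI-2 domain scores
aeps = [62, 69, 55, 60, 74, 71, 70, 58, 61, 72]   # invented AEPS domain scores

print(f"Pearson r = {pearson(bdi, aeps):.2f}")
print(f"Spearman rho = {spearman(bdi, aeps):.2f}")
```

The study additionally reported two-tailed p-values; a significance test would require the t-transformation of r or a statistics library, which is omitted here for brevity.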

15 Concurrent Validity Results
- Adaptive: Self Care items from the BDI (M = 66.03, SD = 6.67) were moderately correlated with Adaptive items from the AEPS (M = 62.03, SD = 13.57), r = .57, n = 31, p = .01.
- Social: Personal Social items from the BDI (M = , SD = 22.74) had a fair correlation with Social items from the AEPS (M = 80.06, SD = 16.33), r = .50, n = 31, p = .01.
- Communication: Communication items from the BDI (M = , SD = 16.22) were moderately correlated with Social Communication items from the AEPS (M = 88.61, SD = 14.20), r = .54, n = 31, p = .01.

16 Concurrent Validity Results Continued
- Motor:
  - Gross Motor items from the BDI (M = 82.76, SD = 4.70) had a fair correlation with Gross Motor items from the AEPS (M = 30.10, SD = 6.62), r = .48, n = 31, p = .01.
  - Fine Motor items from the BDI (M = 52.45, SD = 5.30) were moderately correlated with Fine Motor items from the AEPS (M = 26.39, SD = 5.68), r = .58, n = 31, p = .01.
  - Perceptual Motor items from the BDI (M = 27.73, SD = 3.63) were moderately correlated with Fine Motor items from the AEPS (M = 26.39, SD = 5.68), r = .58, n = 31, p = .01.
- Cognitive: Cognitive items from the BDI (M = , SD = 23.44) were moderately correlated with Cognitive items from the AEPS (M = 81.26, SD = 24.26), r = .71, n = 31, p = .01.

17 Project LINK
- Head Start/University Partnership grant (Jennifer Grisham-Brown/Rena Hallam)
- Purpose: To build the capacity of Head Start programs to link child assessment and curriculum to support positive outcomes for preschool children
- Focus on mandated Head Start Child Outcomes: Concepts of Print; Oral Language; Phonological Awareness; Concepts of Number
Grisham-Brown, Hallam, & Brookshire, in press; Hallam, Grisham-Brown, Gao, & Brookshire, in press

18 PRELIMINARY RESULTS FROM PROJECT LINK: Classroom Quality
- No significant differences between control and intervention classrooms on global quality (ECERS-R)
- The quality of the language and literacy environment (ELLCO) was superior in intervention classrooms; significant in pilot classrooms

19 PRELIMINARY RESULTS FROM PROJECT LINK: Child Outcomes
- Change scores in intervention classrooms were significantly higher than in control classrooms on the letter-word recognition subscale of the FACES battery.
- Mean change scores were higher (although not significantly so) on seven additional subscales (of 11 total) of the FACES battery, nearing significance on the PPVT.
- Results would probably have been stronger with a larger sample.
- The study will be replicated this year.

20 Questions Around Training, Implementation, and Use
- Who will implement?
- What level of training and support will staff need?
- What will be the topics of training?
- Who will provide training and support?
- How will you know if staff are reliably collecting data?
- How will you know if staff are collecting data with procedural fidelity?

21 Recommendations
- Training/Follow-up: format; topics
- Classroom and administrative
- Valid and reliable
- Will require training and support
- Will require seeing assessment as a critical part of intervention/curriculum planning

22 What it takes!
- Who?
  - All classroom staff
  - Administrators/consultants
- What?
  - Instrument
  - Methods (e.g., observations, anecdotal records, work samples)
  - Data entry/management
  - Relationship to everything else (i.e., the linked system)

23 What it takes (cont.)
- How?
  - Training that is “chunked”
  - Self-assessment
  - Follow-up, follow-up, follow-up
  - Mentoring
  - On-site technical assistance
  - Access to someone to call!
  - Involvement of administration

24 Can preschool teachers (with appropriate training) collect reliable data with fidelity?
- Reliability study
- Fidelity study
- Accuracy study
Brown, Kowalski, Pretti-Frontczak, Uchida, & Sacks, 2002; Grisham-Brown, Hallam, & Pretti-Frontczak, in preparation

25 Inter-Rater Reliability
- Subjects: 7 Head Start teachers; 7 Head Start teaching assistants
- Method: Practiced scoring AEPS items from video; scored AEPS items and checked against the master score provided by the author
- Results:
  - 7 of 7 teachers reached reliability at 80% or higher (range 85% - 93%)
  - 5 of 7 teaching assistants reached reliability at 80% or higher (range 75% - 90%)
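Checking a rater’s scoring against an author-provided master protocol reduces to agreements divided by total items. A minimal sketch, with invented item scores (AEPS items are scored 0, 1, or 2):

```python
# Point-by-point percent agreement between a rater's item scores and the
# master scores. The scores below are made up for illustration.

def percent_agreement(rater, master):
    """Share of items on which the rater's scores match the master scores."""
    agreements = sum(1 for r, m in zip(rater, master) if r == m)
    return agreements / len(master)

master_scores  = [2, 1, 0, 2, 2, 1, 0, 1, 2, 2]   # author-provided master scoring
teacher_scores = [2, 1, 0, 2, 1, 1, 0, 1, 2, 2]   # one disagreement, on item 5

pct = percent_agreement(teacher_scores, master_scores)
print(f"Agreement: {pct:.0%}")   # 90%, above the 80% criterion used in the study
```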

26 Fidelity Study
- Subjects: Six Head Start teachers/teaching assistants who reached 80% or higher in the inter-rater reliability study
- Method: Used a fidelity measure to check teachers’ implementation of authentic assessment within seven planned activities
- Six authentic assessment variables: set up and preparation; decision making; materials; choice; embedding; and procedure
- Procedures: Observed participants collecting AEPS® data during each of 7 small group activities; observed participants 7 times for up to 10 minutes per activity

27 [Chart] Average Ratings on Six Authentic Assessment Variables across Observations and Activities by Teacher

28 [Chart] Average Ratings on Six Authentic Assessment Variables across Observations for Seven Different Activities

29 Accuracy Study
- Study designed to investigate the accuracy of teachers’ assessments of children’s skills and abilities using observational assessment
- Examined the degree of agreement between assessments of children’s Language and Literacy and Early Math skills made by their teachers using an observational assessment instrument and assessments of the same skills made by researchers using a demand performance instrument
Brown, Kowalski, Pretti-Frontczak, Uchida, & Sacks, 2002

30 Measures
- Observational measure: Galileo System’s scales (Bergan, Bergan, Rattee, & Feld, 2001)
  - Language & Literacy-Revised, Ages 3-5 (n = 68 items, full scale)
  - Early Math-Revised, Ages 3-5 (n = 68 items, full scale)
- Demand performance measure
  - Items that could be readily assessed in individual, one-session, performance-based interviews with children were selected from the Galileo System’s scales and converted into demand performance tasks to create two performance measures: Language & Literacy (n = 21 items) and Early Math (n = 23 items)
  - Items varied in difficulty and knowledge domain assessed
  - Standardized sets of materials for administering tasks were also developed (e.g., index cards with printed objects, books, manipulatives)
  - The performance measures were piloted with preschoolers in two regions of the state and revised accordingly

31 Procedures
- Trained research assistants visited sites across the state:
  - collected the data teachers had entered into the relevant observation scales of the Galileo System; and
  - administered the performance measures.
- To ensure that the most up-to-date information was obtained from the Galileo System, data were collected during the 2 weeks before and after a state-mandated entry date.
- Order of administration of the performance measures was counterbalanced across assessment domains.

32 Participants
- 122 children
  - Ranged in age from 3 to 6 years (M = 4 years, 11 months)
  - 100% in state-funded Head Start programs
- 66 teachers
- Areas in which children are served: 47% urban; 41% suburban/small town; 11% rural
- Representation by use of the Galileo System: 38% first-year users; 32% second-year users; 23% third-year users

33 Conclusions
- Overall, levels of concordance were moderate.
- In the domain in which teachers were most conservative in attributing abilities to children, Language & Literacy, there was the greatest agreement between the data teachers entered into the Galileo System and the performance measure (71%).
- In the domain in which teachers were most generous in attributing abilities to children, Early Math, there was the least agreement between the data teachers entered into the Galileo System and the performance measure (66%).
- Reliability:
  - Teachers using the naturalistic observation instrument (the Galileo System) are not providing inflated estimates of children’s skills and abilities.
  - However, they may be underestimating children’s skills and abilities in the domain of Language & Literacy.

34 Questions Around Interpreting the Evidence
- What is evidence?
- Where should the evidence come from?
- What is considered “performing as same age peers”?
- How should decisions be made?
- Who should interpret the evidence?
- How can the ECO child summary form be used?

35 What is Evidence?
- Information (observations, scores, permanent products) about a child’s performance across the three OSEP outcomes:
  - Positive social-emotional skills (including social relationships)
  - Acquisition and use of knowledge and skills
  - Use of appropriate behaviors to meet their needs
- The amount and type of evidence for each outcome will vary

36 Where should the evidence come from?
- Multiple time periods
- Multiple settings
- Multiple people: parents; providers; those familiar with the child
- Multiple measures (which should be empirically aligned): observations; interviews; direct tests

37 Required Decisions
- Decision for Time 1: Is the child performing as same age peers? (Yes / No)
- Decision for Time 2: Did the child make progress?
  - YES, and performance is as you would expect of same age peers
  - YES, but performance is not as you would expect of same age peers
  - NO progress was made

38 Things to Keep in Mind
- “Typical/performing as same age peers” is NOT average
- “Typical” includes a very broad range of skills/abilities
- A child can be “typical” in one OSEP area and not another
- Progress is any amount of change:
  - Raw score changed by 1 point
  - A single new skill was reached
  - Child needs less assistance at time two
- If using the Child Outcome Summary Form:
  - The child’s rating score does NOT have to change from time 1 to time 2 to demonstrate progress
  - Progress can be continuing to develop at a typical rate (i.e., maintaining typical status)

39 How Should the Required Decisions be Made?
- Some assessments will make the decision:
  - Standard score
  - Residual change scores
  - Goal Attainment Scaling
  - Number of objectives achieved / percent of objectives achieved
  - Rate of growth
  - Item Response Theory (cutoff score)
  - Proportional Change Index
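As an example of one metric on this list, the Proportional Change Index compares a child’s rate of development during the program with the rate implied at entry. The sketch below assumes the common definition (developmental gain per month of intervention, divided by developmental age over chronological age at entry); the ages used are hypothetical, not from any study cited here.

```python
# Proportional Change Index (PCI), as commonly defined: the child's rate of
# development during intervention divided by the rate implied at entry.
# PCI near 1.0 means development continued at the entry rate; above 1.0,
# development accelerated during the program.

def pci(da_pre, da_post, ca_pre, ca_post):
    """All ages in months. da_* = developmental age, ca_* = chronological age."""
    intervention_rate = (da_post - da_pre) / (ca_post - ca_pre)  # gain per month
    entry_rate = da_pre / ca_pre                                 # DA/CA at entry
    return intervention_rate / entry_rate

# Hypothetical child: entered at 48 months with a developmental age of
# 36 months, then gained 10 developmental months over 8 calendar months.
print(f"PCI = {pci(36, 46, 48, 56):.2f}")   # 1.67: faster than the entry rate
```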

40 Making Decisions Continued
- Regardless, team conclusions…
  - should be based on multiple sources
  - should be based on valid and reliable information
  - should be systematic
- Can use the Child Outcome Summary Form
  - Will help with the required decision and provide more information for use at the local or state level

41 Child Outcome Summary Form
- Single rating scale that can be used to systematize information and make decisions
- After reviewing the evidence, rate the child’s performance on each of the 3 outcomes from 1 to 7
- Currently a score of 6 or 7 is considered to be performance that is similar to same age peers
(Scale anchors: Completely, Somewhat, Emerging, Not Yet)

42 Getting from 7 to 3
- The seven point rating scale just summarizes the evidence; the required interpretation is still needed:
  a. % of children who reach or maintain functioning at a level comparable to same-age peers
  b. % of children who improve functioning but are not in “a”
  c. % of children who did not improve functioning
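The a/b/c interpretation above can be expressed as a small decision rule, under the convention stated earlier that a summary rating of 6 or 7 counts as performance comparable to same-age peers; the progress determination is supplied separately by the team. A sketch:

```python
# Map a Time 2 summary rating plus the team's progress determination onto
# the three OSEP reporting categories defined above. The 6-or-7 threshold
# follows the convention stated on the preceding slides.

def osep_category(time2_rating, made_progress):
    comparable_to_peers = time2_rating >= 6
    if comparable_to_peers:
        return "a"   # reached or maintained functioning comparable to peers
    if made_progress:
        return "b"   # improved functioning, but not comparable to peers
    return "c"       # did not improve functioning

# Ratings like those in the worked example that follows (6, 5, 5, all with
# progress) fall into categories a, b, and b respectively.
for outcome, rating in [("Outcome One", 6), ("Outcome Two", 5), ("Outcome Three", 5)]:
    print(outcome, "->", osep_category(rating, made_progress=True))
```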

43 Example
- During a play-based assessment, the IFSP/IEP team administered:
  - a norm-referenced test
  - a curriculum-based assessment
  - an interview with relevant caregivers
- The team then summarized the child’s performance using each method’s internal summary procedures:
  - calculated a standard score
  - derived a cutoff score
  - narratively summarized the interview
- Lastly, the team rated the child’s overall performance using ECO’s Child Outcome Summary Form for each of the 3 OSEP outcomes
- Two years later, as the child was being transitioned out of the program, the results from a comprehensive curriculum-based assessment were reviewed:
  - The child’s performance was rated using ECO’s Child Outcome Summary Form
  - The team made a determination of progress

44 Example Continued
Time One
- Outcome One: Rating = 6; Interpretation = “Typical”
- Outcome Two: Rating = 5; Interpretation = “Not typical”
- Outcome Three: Rating = 3; Interpretation = “Not typical”
Time Two
- Outcome One: Rating = 6; Interpretation = a
- Outcome Two: Rating = 5; Interpretation = b*
- Outcome Three: Rating = 5; Interpretation = b
*Remember: the Child Outcome Summary Form 7 point rating is a summary of performance, not of progress. At time two, teams are also prompted to consider progress.

45 Fact or Fiction
1. Someone has the answers, and if I look long enough I’ll have them too.
2. Everything has to be perfect this first time around.
3. Research doesn’t matter; just get the data submitted.
4. I really do believe that garbage in is garbage out, but at the end of the day I just want the data.

46 Overall Synthesis and Recommendations
- Rigorous implementation of curriculum-based assessments requires extensive professional development and support of instructional staff.
- Findings suggest that CBAs, when implemented with rigor, have the potential to provide meaningful child progress data for program evaluation and accountability purposes.

47 “And that’s our outcomes measurement system. Any questions?”

48 References
Allen, D., Bricker, D., Macy, M., & Pretti-Frontczak, K. (2006, February). Providing accountability data using curriculum-based assessments. Poster presented at the Biannual Conference on Research Innovations in Early Intervention, San Diego, California.
Brown, R. D., Kowalski, K., Pretti-Frontczak, K., Uchida, C., & Sacks, D. (2002, April). The reliability of teachers’ assessment of early cognitive development using a naturalistic observation instrument. Paper presented at the 17th Annual Conference on Human Development, Charlotte, North Carolina.
Grisham-Brown, J., Hallam, R., & Brookshire, R. (in press). Using authentic assessment to evidence children’s progress towards early learning standards. Early Childhood Education Journal.
Grisham-Brown, J., Hallam, R., & Pretti-Frontczak, K. Measuring child outcomes using authentic assessment practices. Journal of Early Intervention (Innovative Practices). Manuscript in preparation.
Hallam, R., Grisham-Brown, J., Gao, X., & Brookshire, R. (in press). The effects of outcomes-driven authentic assessment on classroom quality. Early Childhood Research and Practice.
Hallam, R., Grisham-Brown, J., & Pretti-Frontczak, K. (2005, October). Meeting the demands of accountability through authentic assessment. Paper presented at the International Division of Early Childhood Annual Conference, Portland, OR.
Walker, D., & Pretti-Frontczak, K. (2005, December). Issues in selecting assessments for measuring outcomes for young children. Paper presented at the OSEP National Early Childhood Conference, Washington, D.C. (http://www.nectac.org/~meetings/nationalDec05/mtgPage1.asp?enter=no)

