
1 Critical Issues in Early Childhood Assessment and Accountability
Kathy Hebbeler, ECO at SRI International
Early Childhood Outcomes Meeting, Baltimore, Maryland, August 2007

2 Terminology
- Assessment = single tool
- Assessment = process

3 What Is Assessment?
“Assessment is a generic term that refers to the process of gathering information for decision-making.” (McLean, Wolery, and Bailey, 2004)

4 What Is Assessment?
“Early childhood assessment is a flexible, collaborative decision-making process in which teams of parents and professionals repeatedly revise their judgments and reach consensus....” (Bagnato and Neisworth, 1991; quoted in DEC Recommended Practices, 2005)

5 Possible uses of assessment in EI/ECSE
- Eligibility determination: norm-referenced test, % delay (illustrated below)
- Individual program planning: curriculum-based tools, e.g., LAP, Carolina
- Ongoing individualized progress monitoring: curriculum-based tools?
- Accountability assessment/program improvement: e.g., Head Start reporting system
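
One common way “% delay” is operationalized is the gap between chronological age (CA) and developmental age (DA), expressed as a percentage of CA. A minimal sketch, with a hypothetical 25% eligibility cutoff; actual criteria vary by state and are not specified in the presentation:

```python
def percent_delay(chronological_months: float, developmental_months: float) -> float:
    """Percent delay = (CA - DA) / CA * 100."""
    return (chronological_months - developmental_months) / chronological_months * 100

# Example: a 24-month-old functioning at an 18-month level
delay = percent_delay(24, 18)   # 25.0
eligible = delay >= 25          # True under the hypothetical 25% criterion
print(f"Delay: {delay:.1f}% -> eligible: {eligible}")
```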

6 Differences in these uses
- Individualized vs. aggregate
- Who uses the data
- Who derives benefit
- Who suffers consequences of assessment done poorly

7 Issue #1: Variation in assessment use by practitioners
- We do not know how many EI/ECSE programs routinely use assessment tools for anything other than eligibility (presumed eligibility?)
- In some programs, the only formal assessment is for eligibility
- There appears to be much state-to-state variation

8 Issue #2: Does each purpose require a different tool?
- Can an assessment (process) conducted for eligibility determination also provide information for program planning? For accountability?
- Can the same assessment (process) be used to plan a program and monitor progress?

9 Accountability in particular
- Can assessments already being used by programs for other purposes (whatever they are) be used for accountability purposes?
- Does this apply to all tools or only some categories of tools? Screening tools for accountability?

10 Issue #3: Changing perspective on assessment in the general early childhood community
- Major changes in the last 15 years in how assessment of young children is viewed
- Old position: do not test little kids
- New position: ongoing assessment is part of a high quality early childhood program

11 What changed
- New and different tools became available for general EC
- Curriculum-based assessments were developed, e.g., Creative Curriculum, Work Sampling, etc.
- Tools for 3-5 came first; 0-3 tools are coming now
- Interesting sidebar: curriculum-based assessments for programs serving children 0-5 with disabilities have been around for years

12 What changed
- The purpose of assessment was redefined
- Not about: sorting, labeling, using to deny access
- Now about: getting a rich picture of what children can do and can’t do and using that information to help them acquire new skills (“progress monitoring”)

13 What changed
- Assessment had always been seen as a process with multiple purposes
- Distinctions have been made between good and bad uses of assessment with young children
- Good uses are now promoted
- For more information: NAEYC web site (position statement on Curriculum, Assessment and Evaluation)

14 Position Statement of the National Association for the Education of Young Children and the National Association of Early Childhood Specialists in State Departments of Education (2003)
Policymakers, early childhood professionals, and others have a shared responsibility to “make ethical, appropriate, valid, and reliable assessment a central part of all early childhood programs.”

15 Interesting Irony
- Even though the disability community had developed many curriculum-based assessment tools, currently [many? some?] programs do not practice ongoing assessment
- The push for ongoing assessment to monitor how a child is doing and plan for instruction/intervention is coming from the general education community

16 Issue #4: Limitations of existing assessment tools
“Assessment of young children poses greater challenges than people generally realize….assessment results—in particular, standardized tests, that reflect a given point in time—can easily misrepresent children’s learning…There is widespread dissatisfaction with traditional norm-referenced standardized tests which are based on early 20th century psychological theory.” (National Research Council, 2001)

17 Problem: Nature of the young child
- Not well suited to a standardized testing situation
- Performance varies from day to day, place to place, person to person
- Don’t perform well for strangers or on demand
- Growth is sporadic and uneven

18 Problem: Response capabilities of children with disabilities
- Same issue as with school-age children: assessment assumes a child who can see, hear and understand spoken language, point, etc.
- Few assessments include accommodations, and children with disabilities were rarely included in the norming sample
- Very little data on the validity of accommodations with young children

19 Problem: Impact of disability/delay on development
- Typically developing children tend to develop in multiple areas simultaneously: language, cognition, motor skills march forward more or less together
- Even though development has been divided into domains for assessment and research, much of development is intertwined
- These interconnections present challenges for obtaining a “pure” domain score

20 Problem: Impact of disability/delay on development
- More difficult to accurately portray the development of children developing atypically with available assessments, esp. children with language delays
- Do they understand the directions?
- Is the assessment tapping cognition or language?
- Are other behavioral/attentional factors influencing performance?

21 Problem: Psychometric properties of existing instruments
- Some of the most common instruments are being used with limited or no reliability and validity data
- None have validity or reliability data reported for use in outcomes measurement and accountability

22 Response: New forms of assessment
- Growing recognition that the only way to get a valid picture of what a child can do/does is to look at performance across a variety of settings and people, including what the child does spontaneously with familiar adults and in familiar situations
- Can’t base conclusions about a child’s capabilities on elicited responses alone
- “Authentic assessment”

23 Position Statement: NAEYC and NAECS/SDE
“To assess young children’s strengths, progress, and needs, use assessment methods that are developmentally appropriate, culturally and linguistically responsive, tied to children’s daily activities, supported by professional development, inclusive of families, and connected to specific, beneficial purposes”

24 Response: Use multiple sources of information (best practice)
“A single test, person, or occasion is not a sufficient source of information. This means that we must gather information from several sources, instruments, settings and occasions to produce the most valid description of the child’s status or progress” (DEC Recommended Practices)

25 Issue #5: Strategies for synthesizing multiple sources of information
- And just how is that information supposed to be put together?
- Especially for aggregated data (accountability/program improvement)

26 Issue #6: Validity and Reliability
- Validity and reliability are not characteristics of an assessment per se
- Validity is context dependent: it depends on the use of the results (individual vs. group decisions)
- “Validity, the degree that an assessment measures what it purports to measure, relates to the use of the test, rather than the test itself.” (Score Reliability, p. 113)

27 Issue #6: Validity and Reliability
- Reliability is a characteristic of a set of scores, not of a test (see the sketch below)
- “…reliability refers to the degree of consistency of the information obtained from an information gathering process”
- “…reliability of the scores provided by an instrument or procedure may fluctuate depending on how, when, and to whom the instrument or procedure is administered.” (Joint Committee on Standards for Educational Evaluation, 1994; quoted in Score Reliability, p. 95)
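
To make the point concrete, here is an illustrative sketch (not from the presentation) that computes test-retest reliability as a Pearson correlation between two administrations of the same tool. The coefficient describes this particular set of scores, so a different sample, setting, or examiner could yield a different value; all numbers below are hypothetical:

```python
from statistics import mean

def pearson_r(x: list[float], y: list[float]) -> float:
    """Pearson correlation between two sets of scores."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical scores for the same five children, assessed two weeks apart
time1 = [82, 90, 75, 88, 95]
time2 = [80, 93, 78, 85, 97]
print(f"Test-retest reliability for these scores: {pearson_r(time1, time2):.2f}")
```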

28 Implications for State Outcome Data Collections
- Cannot assume that your state’s scores are valid and reliable just because your state is using a tool/process that has demonstrated validity or reliability
- Validity and reliability need to be established for each use/context

29 Validity in an accountability system
- Validity question: Do the assessment results lead to the “right” decisions? (framework from the Council of Chief State School Officers)
- How does one assess validity in an accountability system?
- How should a state determine the validity of its child outcome data? The data being submitted to OSEP?

30 Issue #7: Validity vs. credibility dilemma for accountability
- Strangers can’t elicit valid data on young children’s performance capabilities in a testing situation
- BUT can data produced by those who know the child, and whose programs are being evaluated, be credible in an accountability system?

31 States have spoken…
- For child outcomes, states are collecting data through those familiar with the child
- Implications: data are subject to credibility challenge
- Need to put safeguards in place so you can defend the credibility of your data

32 Issue #8: How to use current assessment tools to look at functional outcomes
- OSEP outcomes are functional and cut across domains
- Existing assessments provide scores for domains, not for the three outcomes
- Existing assessments vary in the extent to which they assess functioning vs. isolated skills

33 Outcomes Are Functional
Functional outcomes:
- Refer to things that are meaningful to the child in the context of everyday living
- Refer to an integrated series of behaviors or skills that allow the child to achieve the important everyday goals

34 Question to ask
Is the information provided by the assessment really functional?

35 Issue #9: Variation in provider knowledge of assessment (based on ECO work with states)
- Some practitioners are skilled in administering and interpreting multiple assessment tools, some in only one, and some rarely use any
- Many children are served in programs for typically developing children where knowledge and use of assessment are limited
- How will practitioners be trained and supervised?

36 Issue #10: Variation in provider knowledge of the child across settings
- Some practitioners only see children in clinic settings or for a very short period of time
- How can practitioners obtain more comprehensive information about children’s behavior and daily routines?

37 Issue #11: Role of families in the assessment process
- Families provide a unique perspective on the child’s functioning
- Not all assessment tools have good procedures for incorporating the family’s perspective
- Need good tools/procedures for learning about the child from the family

38 Role of families in the assessment process
- Programs vary in how much assessment data they share with families and how they share it, especially with regard to communicating developmental ages or the extent of a child’s delay
- Some providers are “soft-pedaling” the assessment results
- Providers may need training in eliciting information about the child’s day-to-day functioning and in sharing results with families

39 Issue #12: Multiple assessment systems
- Children 0-5 participating in IDEA programs also will be participating in the required OSEP reporting (approx. 1 million children)
- Some of these children also may be participating in other assessment systems

40 Participation in Multiple Accountability Systems??
[Slide graphic: a child may fall under several accountability systems at once: OSEP Reporting, Child Care, Head Start, State Preschool]

41 Bottom Line
- What needs to happen to make sure assessments make a meaningful contribution to improved outcomes and program improvement?
- What can be done to ensure assessment data used for outcomes measurement are: meaningful, valid, reliable, credible?

42 Responsibilities as State Leaders
Ensure that:
- Practitioners understand recommended practices with regard to assessment of young children
- Practitioners have the skills necessary to engage in recommended practices
- Practitioners actively and appropriately involve families in the assessment process

43 More Responsibilities
Ensure that:
- Practitioners have the skills to sensitively and accurately explain assessment results to families
- Practitioners use ongoing assessment to monitor children’s progress AND to make adjustments to the child’s program based on the results

44 And More…
Put mechanisms in place to promote quality assessment for all purposes, including accountability:
- Supervisors, coaches
- Data checks and verification (see the sketch below)
- Create a culture promoting data use, assessment data in particular
- Involve the entire state in using data for program improvement
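
As one illustration of automated data checks, the sketch below flags missing or out-of-range entries in child outcome records before they are aggregated. The field names and the 1-7 rating scale are assumptions for the example, not requirements stated in the presentation:

```python
records = [
    {"child_id": "A101", "entry_rating": 3, "exit_rating": 5},
    {"child_id": "A102", "entry_rating": 9, "exit_rating": 4},    # out of range
    {"child_id": "A103", "entry_rating": None, "exit_rating": 6}, # missing
]

def verify(record: dict) -> list[str]:
    """Return a list of data-quality problems found in one record."""
    problems = []
    for field in ("entry_rating", "exit_rating"):
        value = record[field]
        if value is None:
            problems.append(f"{field} is missing")
        elif not 1 <= value <= 7:
            problems.append(f"{field}={value} is outside the 1-7 scale")
    return problems

for rec in records:
    for problem in verify(rec):
        print(f"Child {rec['child_id']}: {problem}")
```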

45 And More…
Collaborate with:
- Higher education, to ensure new practitioners are entering the field with the necessary knowledge and skills related to assessment
- Other programs serving the same children, to learn their “message” and possible requirements related to assessment (goal: families hear one message)

46 Hats off to you for leading the charge!
[Slide graphic: “I Love Good Outcome Data”]

