Linguistic Demands of Preschool Cognitive Assessments
Glenna Bieno, Megan Eparvier, Anne Kulinski
Faculty Mentor: Mary Beth Tusing

Method

We employed three methodologies to review recently revised preschool tests of cognitive abilities.

Basic Concept Review

The frequency of basic concept words in the test directions of the WPPSI-III, DAS-II, and KABC-II was compared to the standardization data of the Bracken Basic Concept Scale (BBCS) and the Boehm Test of Basic Concepts. If either the BBCS or the Boehm indicated that 75% of children within a given age range did not pass the concept word tested on the respective assessment, the word was counted as a concept word violation; a sketch of this flagging rule is given after the Method section below. The review of test directions included only those concept words meant to guide, direct, or give feedback to the child.

Verbosity and Complexity Review

The methodology introduced by Cormier et al. (2011) was applied to the standard test directions of each assessment. This involved calculating total words, total sentences, average number of words per sentence, and average number of syllables per word using an online readability calculator. The resulting scores were then transformed into z scores to allow a relative comparison across all subtests from all assessments, and Total Verbosity and Total Complexity scores were calculated. Total Verbosity reflects the average of the z scores for each subtest's total words and total sentences. Total Complexity reflects the average of the z scores for each subtest's average syllables per word and average words per sentence. Finally, Total Demand represents the average of the Verbosity and Complexity indices.

Expert Analysis of Linguistic Demand

Using the reviews provided in Ortiz's (2005) culture-language test classifications, we classified the preschool cognitive assessments as high, moderate, or low in linguistic demand.
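The basic concept review described above reduces to a simple flagging rule. The sketch below illustrates that rule as stated in the poster; the concept words, age band, and percentages are hypothetical placeholders, not actual BBCS or Boehm standardization values.

```python
# Minimal sketch of the concept word violation rule described in the Basic
# Concept Review. All concept words, age bands, and percentages below are
# hypothetical placeholders, not actual BBCS or Boehm standardization data.

# Percentage of children in an age band who did NOT pass each concept word,
# keyed by (concept_word, age_band), for each criterion measure.
BBCS_FAIL_PCT = {("behind", "4:0-4:5"): 80.0, ("more", "4:0-4:5"): 10.0}
BOEHM_FAIL_PCT = {("behind", "4:0-4:5"): 72.0, ("more", "4:0-4:5"): 12.0}

def is_concept_word_violation(word, age_band, threshold=75.0):
    """Return True if either norm source indicates that at least `threshold`
    percent of children in the age band did not pass the concept word."""
    for norms in (BBCS_FAIL_PCT, BOEHM_FAIL_PCT):
        if norms.get((word, age_band), 0.0) >= threshold:
            return True
    return False

# Count violations among the concept words used to guide, direct, or give
# feedback to the child in one subtest's directions (words are illustrative).
direction_concept_words = ["behind", "more"]
violations = sum(
    is_concept_word_violation(w, "4:0-4:5") for w in direction_concept_words
)
print(violations)  # 1 with the made-up percentages above
```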
Discussion

Findings from the current study provide a variety of lenses through which school psychologists can examine the potential impact of the linguistic demands of test directions on assessment outcomes. Such working knowledge is critical for practitioners when assessing preschool children from linguistically diverse backgrounds. The findings are also timely in that an updated review of recently revised preschool cognitive assessments had not yet been conducted.

Conclusions regarding the linguistic demands of the three assessment tools appear to vary as a function of analysis type. In several cases, subtests with a higher number of basic concept word violations did not have correspondingly high verbosity or complexity scores. Similarly, several subtests rated as "high" in linguistic demand by Ortiz (2005) did not yield high verbosity or complexity scores; likewise, subtests high in linguistic demand as determined by the readability indices were sometimes rated as "low" in linguistic demand by Ortiz. Cormier et al. (2011) noted similar findings and argued that the linguistic demands of assessment tools are likely multidimensional in nature and not easily categorized unidimensionally as low, moderate, or high.

As a result, practitioners are encouraged to consider the various ways in which a child's linguistic competencies may affect test performance and to select the cognitive assessment tool least likely to be affected by the child's linguistic differences. For example, assessments with high numbers of concept word violations may be particularly problematic for children with language delays. However, assessments with lengthy test directions (i.e., high verbosity) or more complex language may be more problematic for children with limited English proficiency. Likewise, assessments with high expressive language demands, which Ortiz's categorization considers, may be problematic for children with expressive language needs.

Findings from the current study should be interpreted with caution for several reasons. First, the methods employed to review the current assessments are theoretical in orientation; findings should be cross-validated with data on assessment outcomes for students from diverse linguistic backgrounds. Second, the review methods employed are subject to application error, and inter-rater reliability analyses had not yet been completed at the time of this publication.

Introduction

In daily practice, the relevance of norm-referenced assessments for young children from diverse linguistic or socioeconomic backgrounds must be continually evaluated. Flanagan, Mascolo, and Genshaft (2000) referred to such information as the "qualitative knowledge base" practitioners need to make informed decisions about assessment tool selection and interpretation. The linguistic demands of assessments can pose significant challenges to a practitioner's ability to validly estimate a child's cognitive functioning. If the spoken directions of an assessment demand receptive language abilities greater than typical expectations for the child's chronological age, results may underestimate the child's true cognitive functioning. Likewise, the expressive language demands of the assessment can affect the child's ability to demonstrate his or her knowledge.

Bracken (1987) provided the first review of the linguistic demands of preschool cognitive assessments by identifying the incidence of basic concept words in orally presented test directions. After analyzing the number of concept words used and the typical age ranges at which children demonstrate an understanding of those words, Bracken offered recommendations about the appropriateness of each assessment tool for children of different ages.

Flanagan, Mascolo, and Genshaft (2000) reviewed the linguistic demands of preschool assessment tools by using expert analysis to categorize tests as high, moderate, or low in linguistic demand. Review criteria included features such as the length of test directions, the use of gestures, and the child's ability to respond by pointing or completing a nonverbal task.

Cormier, McGrew, and Evans (2011) introduced a new methodology to quantify the linguistic demand of test directions using commonly applied readability formulae. Total words, total sentences, average number of words per sentence, and average syllables per word were categorized into the factors of "verbosity" and "complexity" and analyzed to determine whether subtests had low, medium, or high linguistic demand. Correlational analysis suggested that the verbosity and complexity indices represented different dimensions of the test directions.
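To make the verbosity and complexity arithmetic concrete, the sketch below applies the z-score averaging described in the Method to a handful of subtests. It is a minimal illustration only: the subtest names and the word, sentence, and syllable counts are invented, not values taken from the WPPSI-III, DAS-II, or KABC-II directions.

```python
# Minimal sketch of the Total Verbosity, Total Complexity, and Total Demand
# indices described in the Method. The per-subtest counts below are invented
# for illustration and are not actual WPPSI-III, DAS-II, or KABC-II values.
from statistics import mean, pstdev

# (total_words, total_sentences, avg_words_per_sentence, avg_syllables_per_word)
subtests = {
    "Subtest A": (250, 20, 12.5, 1.30),
    "Subtest B": (400, 25, 16.0, 1.45),
    "Subtest C": (150, 15, 10.0, 1.20),
}

def z_scores(values):
    """Standardize raw scores relative to all subtests under review."""
    m, sd = mean(values), pstdev(values)
    return [(v - m) / sd for v in values]

names = list(subtests)
metrics = list(zip(*subtests.values()))  # one tuple of values per metric
z_words, z_sents, z_wps, z_spw = (z_scores(col) for col in metrics)

for i, name in enumerate(names):
    verbosity = mean([z_words[i], z_sents[i]])   # total words and total sentences
    complexity = mean([z_spw[i], z_wps[i]])      # syllables/word and words/sentence
    demand = mean([verbosity, complexity])       # overall linguistic demand
    print(f"{name}: Verbosity={verbosity:+.2f}  Complexity={complexity:+.2f}  Demand={demand:+.2f}")
```

Because each index is an average of z scores computed across all subtests reviewed, a subtest's Verbosity, Complexity, and Demand values are interpretable only relative to the other subtests in the comparison pool.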
Results

Table. Ortiz review classification, readability indices (Verbosity, Complexity, Demand), and concept word violation counts by test battery and subtest (Ortiz classifications shown below).

Differential Ability Scales II
  Copying: Low
  Matrices: Low
  Pattern Construction: Low
  Picture Similarities: Low
  Verbal Comprehension: Moderate
  Naming Vocabulary: Moderate

Kaufman Assessment Battery for Children II
  Atlantis: Low
  Face Recognition: Low
  Pattern Reasoning: Low
  Triangles: Low
  Conceptual Thinking: Moderate
  Number Recall: Moderate
  Rebus: Moderate
  Word Order: Moderate
  Expressive Vocabulary: High
  Riddles: High

Wechsler Preschool and Primary Scale of Intelligence III
  Matrix Reasoning: Low
  Picture Completion: Low
  Block Design: Moderate
  Coding: Moderate
  Picture Concepts: Moderate
  Symbol Search: Moderate
  Comprehension: High
  Information: High
  Similarities: High
  Vocabulary: High
  Word Reasoning: High

Wechsler Preschool and Primary Scale of Intelligence III (Ages 2-3)
  Object Assembly: Low
  Picture Naming: Moderate
  Block Design: Moderate
  Receptive Vocabulary: Moderate
  Information: High

This poster was supported by the UWEC Differential Tuition Program and the Office of Research and Sponsored Programs.