1
ASSESSMENT WITH THE WISC–V AND WPPSI–IV
2017 JEROME M. SATTLER Copyright © 2016 Jerome M. Sattler, Publisher, Inc.
2
Opening Poem Reflecting Childhood
Put Something In “Draw a crazy picture, Write a nutty poem, Sing a mumble-gumble song, Whistle through your comb. Do a loony-goony dance 'Cross the kitchen floor, Put something silly in the world That ain't been there before.” ― Shel Silverstein
3
Controversy of Intelligence
4
Thoughts about Intelligence
“Intelligence is important in psychology for two reasons. First, it is one of the most scientifically developed corners of the subject, giving the student as complete a view as is possible anywhere of the way scientific method can be applied to psychological problems. Secondly, it is of immense practical importance, educationally, socially, and in regard to physiology and genetics.” — Raymond Cattell
5
Thoughts about Intelligence
“Our purpose is to be able to measure the intellectual capacity of a child who is brought to us in order to know whether he is normal or retarded. ... We do not attempt to establish or prepare a prognosis and we leave unanswered the question of whether this retardation is curable, or even improveable. We shall limit ourselves to ascertaining the truth in regard to his present mental state.” — Alfred Binet
6
Life Outcomes and Intelligence [1](not in text)
Research shows a strong relationship between intelligence test scores and life outcomes such as economic and social competence (see Sattler, 2008, for studies and for the most cited research in this section). Example: Annual income of 32-year-olds in 1993, in U.S. dollars, was $5,000 for individuals with IQs below 75, $20,000 for individuals with IQs of 90 to 110, and $36,000 for individuals with higher IQs (Murray, 1998).
7
Life Outcomes and Intelligence [2](not in text)
Examples (Cont.) Measures of general intelligence predict occupational level and job performance “better than any other ability, trait, or disposition and better than job experience” (Schmidt & Hunter, 2004, p. 162). There is a moderate relationship between IQs obtained in childhood (as early as 3 years of age) and later occupational level and job performance, with an overall correlation of about r = .50 (Schmidt & Hunter, 2004).
8
Life Outcomes and Intelligence [3](not in text)
Examples (Cont.) General intelligence predicts job performance better in more complex jobs (about r = .80) than in less complex jobs (about r = .20; Gottfredson, 2003). Intelligence is related to health and longevity (Gottfredson & Deary, 2004). IQs in childhood predict substantial differences in adult morbidity and mortality, including deaths from cancers and cardiovascular disease (Gottfredson & Deary, 2004).
9
Life Outcomes and Intelligence [4](not in text)
Examples (Cont.) Children obtaining high scores on intelligence tests at ages 7, 9, and 11 (N = 11,103) had fewer adult hospitalizations for unintentional injuries than those who obtained lower scores (Lawlor et al., 2007). Those with higher intelligence test scores probably had more education, which in turn likely increased their ability to process information and assess risks
10
Life Outcomes and Intelligence [5](not in text)
Examples (Cont.) Youth identified before age 13 (N = 320) as having profound mathematical or verbal reasoning abilities (top 1 in 10,000 on the SAT) were tracked for three decades (Kell et al., 2013): At age 38, many held leadership positions in business, health care, law, higher education, science, technology, engineering, and mathematics. Results mirror those of Galton (1869).
11
Life Outcomes and Intelligence [6](not in text)
Examples (Cont.) (Gifted, Kell et al., 2013; Continued): To identify individuals with profound human potential requires assessing multiple cognitive abilities and using atypical measurement procedures. These individuals hold extraordinary potential for enriching society by contributing creative products and competing in global economies
12
Life Outcomes and Intelligence [7](not in text)
Source: Kell, H. J., Lubinski, D., & Benbow, C. P. (2013). Who rises to the top? Early indicators. Psychological Science, 24(5), 648–659.
13
US Department of Education, Office of Civil Rights [1](not in text)
Dear Colleague letter, July 26, 2016 Section 504 of the Rehabilitation Act of 1973 prohibits discrimination on the basis of disability and requires school districts to provide an equal educational opportunity to students with disabilities
14
US Department of Education, Office of Civil Rights [2](not in text)
Dear Colleague letter, July 26, 2016 Deficiencies of Schools: Students are not being referred or identified as needing an evaluation to determine whether they have a disability and need special education or related services; students are not being evaluated in a timely manner once identified as needing an evaluation; school districts are conducting inadequate evaluations of students
15
US Department of Education, Office of Civil Rights [3](not in text)
Dear Colleague letter, July 26, 2016 Responsibilities of Schools School districts must conduct individualized evaluations of students who, because of disability, including ADHD, need or are believed to need special education or related services Must ensure that qualified students with disabilities receive appropriate services that are based on specific needs, not cost
16
US Department of Education, Office of Civil Rights [4](not in text)
Dear Colleague letter, July 26, 2016 Aim of “Dear Colleague” letter Help school districts properly evaluate and provide timely and appropriate services to students with ADHD
17
US Department of Education, Office of Civil Rights [5](not in text)
Dept of Ed Resource Guide ADHD & 504 Evaluation Considerations A school district must evaluate students who are suspected of having a disability in all related or all specific areas of educational need An evaluation must consist of more than IQ tests An evaluation must measure specific areas of educational need, such as speech processing, inability to concentrate, and behavioral concerns
18
US Department of Education, Office of Civil Rights [6](not in text)
Dept of Ed Resource Guide ADHD & 504 Evaluation Considerations (Cont.) Tests must be selected and administered so that the results accurately reflect the student’s aptitude or achievement or other factors being measured Test results should not reflect the student’s disability, except where those are the factors being measured
19
US Department of Education, Office of Civil Rights [7](not in text)
Dept of Ed Resource Guide ADHD & 504 Evaluation Considerations (Cont.) Tests and other evaluation materials are validated for the specific purpose for which they are used Tests are appropriately administered by trained personnel
20
US Department of Education, Office of Civil Rights [8](not in text)
Dept of Ed Resource Guide ADHD & 504 Evaluations Must be Timely Intervention strategies must not deny or delay evaluation of students suspected of having a disability School districts violate Section 504 when they deny or delay conducting an evaluation of a student when a disability, and the resulting need for special education or related services, is suspected
21
US Department of Education, Office of Civil Rights [9](not in text)
Dept of Ed Resource Guide ADHD & 504 Evaluations Must be Timely (Cont.) School districts run afoul of Section 504 when they Rigidly insist on first implementing interventions before conducting an evaluation Insist that each tier of a multi-tiered model of intervention must be implemented first Categorically require that data from an intervention strategy must be collected and incorporated as a necessary element of an evaluation
22
US Department of Education, Office of Civil Rights [10](not in text)
Dept of Ed Resource Guide ADHD & 504 Summary Section 504 requires a school district to identify and conduct an evaluation of any student who needs or is believed to need special education or related services because of a disability
23
US Department of Education, Office of Civil Rights [11](not in text)
Dept of Ed Resource Guide ADHD & 504 Summary (Cont.) A school district must evaluate students who are suspected of having any kind of disability in all specific or all related areas of educational need, even if the students do not fit into one suspected disability category or fit into multiple disability categories
24
US Department of Education, Office of Civil Rights [12](not in text)
Dept of Ed Resource Guide ADHD & 504 Summary (Cont.) Students who achieve satisfactory, or even above-average, academic performance may still have a disability that substantially limits a major life activity and be eligible for special education or related aids and services because the school district is not meeting their needs as adequately as the needs of nondisabled students are met
25
US Department of Education, Office of Civil Rights [13](not in text)
Dept of Ed Resource Guide ADHD & 504 Summary (Cont.) Implementation of intervention strategies, such as interventions contained within a school’s RTI program, must not be used to delay or deny the Section 504 evaluation of a student suspected of having a disability and needing regular or special education and related aids and services as a result of that disability
26
US Department of Education, Office of Civil Rights [14](not in text)
Source: U.S. Department of Education, Office for Civil Rights. (2016). Students with ADHD and Section 504: A Resource Guide. Retrieved from
27
Court Case Showing Need of a Thorough Evaluation [1](not in text)
In Phyllene W. v. Huntsville City (AL) Bd. of Ed. (11th Cir. 2015) the U.S. Court of Appeals for the Eleventh Circuit reversed the decision of a Hearing Officer and of a U. S. District Court and ruled in favor of the parent and child. The Court explained that:
28
Court Case Showing Need of a Thorough Evaluation [2](not in text)
"[T]he Board violated IDEA by failing to evaluate M.W. when faced with evidence that she suffered from a suspected hearing impairment. As a result of its failure to obtain necessary medical information regarding M.W.'s hearing, the Board further failed to provide her with a FAPE.
29
Court Case Showing Need of a Thorough Evaluation [3](not in text)
The lack of medical information rendered the accomplishment of the IDEA's goals impossible because no meaningful IEP was developed, and the IEPs put into place lacked necessary elements with respect to the services that M.W. should have been provided. In short, the Board's failure to evaluate M.W. with respect to her hearing loss deprived M.W. of the opportunity to benefit educationally from an appropriate IEP."
30
Overview of Assessment of Children: WISC–V and WPPSI–IV
Contents: pp. iv to v List of Tables: pp. vi to ix List of Exhibits and Figures: p. x Appendixes A, B, and C: pp. 473 to 517 References, Name Index, and Subject Index: pp. 519 to 529 Tables BC-1, BC-2, BC-3, BC-4: Inside back cover
31
Before you read a chapter
Study Suggestions [1] Before you read a chapter: Read the summary at the end of the chapter. Look at the key terms, concepts, and names at the end of the chapter (note that each of these terms, concepts, and names has a page number). Look at the study questions
32
After you read a chapter
Study Suggestions [2] After you read a chapter: Read the summary at the end of the chapter. Look at the key terms, concepts, and names at the end of the chapter and define each one (note that each of these terms, concepts, and names has a page number). Look at the study questions. If you can’t define a term, concept, or name or answer the study questions, go back and read the material again
33
Role of the Evaluator in the Assessment Process
Chapter 1 Role of the Evaluator in the Assessment Process
34
Chapter 1 Major Heads[1] Evaluator Characteristics
Preparing for the First Meeting Establishing Rapport Observing Children General Suggestions for Administering Tests Administering Tests to Children with Special Needs Computer‑Based Administration, Scoring, and Interpretation
35
Chapter 1 Major Heads[2] Accounting for Poor Test Performance
Strategies for Becoming an Effective Evaluator Confidentiality of Assessment Findings and Records Concluding Comment on the Role of the Evaluator in the Assessment Process Thinking Through the Issues Summary Key Terms, Concepts, and Names Study Questions
36
Wechsler Intelligence Scale for Children–V (WISC–V): Description
Chapter 2 Wechsler Intelligence Scale for Children–V (WISC–V): Description
37
Goals & Objectives (p. 55) Chapter designed to enable you to:
Evaluate psychometric properties of the WISC–V Administer the WISC–V competently and professionally Evaluate and select short forms of the WISC–V Choose between the WISC–V and the WPPSI–IV at the overlapping ages Choose between the WISC–V and the WAIS–IV at the overlapping ages
38
History of the WISC–V (not in text)
Revisions of the WISC*: WISC first published in 1949; WISC–R, the first revision, published in 1974; WISC–III published in 1991; WISC–IV published in 2003; WISC–V, the latest revision, published in 2014. *David Wechsler, the original author, died in 1982.
39
WISC–V Structure For information about the structure of the WISC–V review: Table 2-1 (p. 56) Figs. 2-1 and 2-2 (p. 59) Fig. 2-3 (p. 60) Fig. 2-4 (p. 61)
40
Subtests in the WISC–V [1](pp. 56–58)
Block Design, Similarities, Matrix Reasoning, Digit Span, Coding, Vocabulary, Figure Weights, Visual Puzzles, Picture Span, Symbol Search, Information, Picture Concepts, Letter-Number Sequencing, Cancellation
41
Subtests in the WISC–V [2](pp. 56–58)
Naming Speed Literacy, Naming Speed Quantity, Immediate Symbol Translation, Comprehension, Arithmetic, Delayed Symbol Translation, Recognition Symbol Translation. Exhibit 2-1 (pp. 57 and 58) presents items similar to those on the WISC–V subtests
42
Definition of Cognitive Proficiency Index (not in text)
Definition of the word “Cognitive”: “of or relating to the mental processes of perception, memory, judgment, and reasoning, as contrasted with emotional and volitional processes” (from dictionary.com). Definition of the word “Proficiency”: “a high degree of competence or skill; expertise” (from google.com)
43
Definition of General Ability Index (not in text)
Definition of the term “General Ability”: “a term that is used to describe the measurable ability believed to underlie skill in handling all types of intellectual tasks.” “Our general ability is the skill underlying all tasks.” (From psychologydictionary.org)
44
Diagnostic Utility of GAI and CPI (WISC–IV) [1] (not in text)
Devena and Watkins (2012) reported the following: Study sample: 5 groups of children (hospital sample with ADHD = 78, nondiagnosed hospital sample = 66, school sample with ADHD = 196, school matched comparison sample = 196, simulated standardization sample = 2,200). A discrepancy analysis between the GAI and CPI was found to have “low accuracy in identifying children with attention deficit hyperactivity disorder” (p. 133)
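As an illustration of what a GAI–CPI discrepancy analysis computes, here is a minimal Python sketch; the 15-point critical value and the example scores are hypothetical placeholders, not values from Devena and Watkins (2012) or the Wechsler manuals.

# Illustrative only: the critical value and scores below are hypothetical placeholders.
def gai_cpi_discrepancy(gai: float, cpi: float, critical_value: float = 15.0) -> dict:
    """Return the GAI - CPI difference and whether its size reaches the chosen critical value."""
    difference = gai - cpi
    return {"difference": difference, "meets_critical_value": abs(difference) >= critical_value}

# Example: a child with GAI = 112 and CPI = 94
print(gai_cpi_discrepancy(112, 94))  # {'difference': 18, 'meets_critical_value': True}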
45
Diagnostic Utility of GAI and CPI (WISC–IV) [2] (not in text)
Source: Devena, S. E., & Watkins, M. W. (2012). Diagnostic utility of WISC–IV General Abilities Index and Cognitive Proficiency Index difference scores among children with ADHD. Journal of Applied School Psychology, 28(2), 133–154.
46
Predictive Ability of GAI vs FSIQ (WISC–IV) [1] (not in text)
Rowe, Kingsley, and Thompson (2010) reported the following: Study sample = 88 children tested for gifted programming Both the FSIQ and GAI significantly predicted reading and math scores However, the FSIQ explained more of the variance than the GAI
47
Predictive Ability of GAI vs FSIQ (WISC–IV) [2] (not in text)
Conclusion Working memory and verbal comprehension explained significant, unique variance in reading and math Processing speed and perceptual reasoning did not account for significant amounts of variance over and above working memory and verbal comprehension Working memory in the FSIQ was the main difference between FSIQ and GAI
48
Predictive Ability of GAI vs FSIQ (WISC–IV) [3] (not in text)
Source: Rowe, E. W., Kingsley, J. M., & Thompson, D. F. (2010). Predictive ability of the General Ability Index (GAI) versus the Full Scale IQ among gifted referrals. School Psychology Quarterly, 25(2), 119–128.
49
FSIQ vs GAI in Intellectual Disability (WISC–IV) [1] (not in text)
Koriakin et al. (2013) reported the following: Study sample: 543 males and 290 females. Fewer children were identified as having intellectual disability using the GAI (n = 159) than when using the FSIQ (n = 196). “The use of GAI for intellectual disability diagnostic decision-making may be of limited value” (p. 840)
50
FSIQ vs GAI in Intellectual Disability (WISC–IV) [2] (not in text)
Source: Koriakin, T. A., McCurdy, M. D., Papazoglou, A., Pritchard, A. E., Zabel, T. A., Mahone, E. M., & Jacobson, L. A. (2013). Classification of intellectual disability using the Wechsler Intelligence Scale for Children: Full Scale IQ or General Abilities Index? Developmental Medicine and Child Neurology, 55(9). doi: 10.1111/dmcn.12201
51
Items Similar to Those on the WISC–V (pp. 57–58)
See Exhibit 2-1
52
Same Subtests Used to Derive Several Index Scores (p. 61)
Overlap of subtests means that these ancillary indexes are not independent.
53
Available Manuals and Technical Reports [1] (p. 61)
At present, there are 7 publications related to the WISC–V: 4 WISC–V Manuals and 4 WISC–V Technical Reports. The website for obtaining 3 of the 4 Technical Reports can be found on page 61 of the text.
54
Available Manuals and Technical Reports [2] (not in text)
The reference for the 4th Technical Report is as follows: Raiford, S. E., Zhang, O., Drozdick, L. W., Getz, K., Wahlstrom, D., Gabel, A., Holdnack, J. A., & Daniel, M. (2016). WISC–V Coding and Symbol Search in digital format: Reliability, validity, special group studies, and interpretation. Technical Report #12. Retrieved from
55
Useful Psychometric Tables
Demographic characteristics (Table 2-2; p. 62) Various types of reliability (Table 2-3; pp. 63–71) Criterion validity studies (Table 2-7; pp. 72–73)
56
Concurrent Validity of WISC–V Subtests and KTEA–3 Composite[1]
Academic Skills Battery composite: Similarities .66; Vocabulary .70; Information (value not shown); Comprehension .58; Block Design .52; Visual Puzzles .41; Matrix Reasoning .51; Figure Weights .54
57
Concurrent Validity of WISC–V Subtests and KTEA–3 Composite[2]
Academic Skills Battery composite (Cont.): Picture Concepts .44; Arithmetic .68; Digit Span .59; Picture Span .42; Letter-Number Seq. .55; Coding .23; Symbol Search .34; Cancellation .11
58
Concurrent Validity of WISC–V Subtests and WIAT–3 Composite[1]
Total Achievement composite: Similarities .65; Vocabulary .63; Information .57; Comprehension .52; Block Design .43; Visual Puzzles .37; Matrix Reasoning .35; Figure Weights .33
59
Concurrent Validity of WISC–V Subtests and WIAT–3 Composite[2]
Total Achievement composite (Cont.): Picture Concepts .34; Arithmetic .64; Digit Span .65; Picture Span .45; Letter-Number Seq. .62; Coding (value not shown); Symbol Search .28; Cancellation .05
60
Source for both slides (Concurrent Validity of WISC–V Subtests and KTEA–3 Composite; Concurrent Validity of WISC–V Subtests and WIAT–3 Composite): Wechsler (2014c)
61
Concurrent Validity of WISC–V VCI, VECI, FRI, and EFI [1] (not in text)
Correlations of the VCI, VECI, FRI, and EFI with WIAT–III criteria (column order: VCI, VECI, FRI, EFI; rows with missing cells are reproduced as given on the slide):
Oral Language: .78, .80, .33, .55
Total Reading: .65, .70, .32, .50
Basic Reading: .53, .60, .30, .45
Reading Comprehension and Fluency: .25
Written Expression: (no values shown)
Mathematics: (no values shown)
Math Fluency: .36, --, .31
Total Achievement: .74, .40
62
Concurrent Validity of WISC-V VCI, VECI, FRI, and EFI [2]
Abbreviations: VCI = Verbal Comprehension Index VECI = Verbal Expanded Crystallized Index FRI = Fluid Reasoning Index EFI = Expanded Fluid Index Sources: Raiford, Drozdick, Zhang, & Zhou (2015) Wechsler (2014c)
63
WIAT–III Total Achievement
Relationship of Complementary Indexes and FSIQ to WIAT–III Total Achievement (not in text): Naming Speed Index (NSI) .29; Symbol Translation Index (STI) .39; Storage and Retrieval Index (SRI) .45; FSIQ .81. See Table 5.14 on p. 104 of the Technical and Interpretive Manual
64
Age Equivalents (p. 63) Table A.9 in the Administration and Scoring Manual (pp. 337–340) provides age equivalents for all the subtests and some process scores (see the left column of p. 63 in the text for discussion). No validity data are provided in any of the WISC–V manuals for age equivalents. We recommend that age equivalents be used only in an informal manner
65
Special Group Studies with WISC– V (pp. 75–76)
13 special groups compared across the primary index scales (Table 2-8; p. 75) VCI VSI FRI WMI PSI
66
Standardization of the WISC–V (pp. 61–62)
Standardized on 2,200 children who were selected to represent the school-age population in the United States in 2012 Used a stratified sample based on demographic characteristics of age, sex, ethnicity, geographic region, and parental education (as a measure of socioeconomic status)
67
WISC-V FSIQs for 5 Ethnic Groups (1) [not in text]
European American 103.5; African American 91.9; Hispanic American 94.4; Asian American 108.6; Other 100.4
68
WISC-V FSIQs for 5 Ethnic Groups (2) [not in text]
Note: Adapted from Table 5.3 (p. 157) in Weiss et al. (2016) Source: Weiss, L. G., Locke, V., Pan, T., Harris, J. G., Saklofske, D. H., & Prifitera, A. (2016). WISC–V use in societal context. In L. G. Weiss, D. H. Saklofske, J. A. Holdnack, & A. Prifitera (Eds.), WISC–V assessment and interpretation: Scientist-practitioner perspectives (pp. 123–185). San Diego, CA: Academic Press.
69
Descriptive Statistics for the WISC–V (pp. 62–76)
The WISC–V uses: Standard scores (M = 100, SD = 15) for each of the primary, ancillary, and complementary index scores and for the FSIQ; scaled scores (M = 10, SD = 3) for the 16 primary and secondary subtests; and standard scores (M = 100, SD = 15) for the five complementary subtests (note that the complementary subtests have standard scores, not scaled scores)
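Because index scores and subtest scaled scores sit on different metrics, it can help to put both on a common z-score or percentile footing. A minimal Python sketch, assuming normally distributed scores (these are generic conversions, not the WISC–V norms tables):

from statistics import NormalDist

def standard_to_z(score: float) -> float:
    """Convert an index/standard score (M = 100, SD = 15) to a z score."""
    return (score - 100) / 15

def scaled_to_z(score: float) -> float:
    """Convert a subtest scaled score (M = 10, SD = 3) to a z score."""
    return (score - 10) / 3

def z_to_percentile(z: float) -> float:
    """Approximate percentile rank, assuming a normal distribution."""
    return round(NormalDist().cdf(z) * 100, 1)

# Example: an index score of 115 and a scaled score of 13 both lie about 1 SD above the mean.
print(z_to_percentile(standard_to_z(115)))  # 84.1
print(z_to_percentile(scaled_to_z(13)))     # 84.1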
70
Confidence Intervals [1](p. 71)
Table A-1 (pp. 372–373) shows confidence intervals based on the obtained score and the SEM for the 68%, 85%, 90%, 95%, and 99% levels. Confidence intervals are shown for the VCI, VSI, FRI, WMI, PSI, and the FSIQ
71
Confidence Intervals [2](p. 71)
Table A-2 (pp. 374–375) shows confidence intervals for the 7 ancillary indexes and 3 complementary indexes. These confidence intervals are based on the child’s obtained score, whereas those in the Administration and Scoring Manual are based on the child’s estimated true score
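The tabled intervals are the ones to use in practice; the sketch below only illustrates the general obtained-score method (obtained score plus or minus z times the SEM), with a hypothetical SEM of 3.0 points rather than a value from the WISC–V tables.

from statistics import NormalDist

def obtained_score_ci(score: float, sem: float, confidence: float = 0.95) -> tuple:
    """Confidence interval around an obtained score: score +/- z * SEM."""
    z = NormalDist().inv_cdf(0.5 + confidence / 2)  # about 1.96 for a 95% interval
    half_width = z * sem
    return (round(score - half_width, 1), round(score + half_width, 1))

# Example with a hypothetical SEM of 3.0 points:
print(obtained_score_ci(108, sem=3.0, confidence=0.95))  # (102.1, 113.9)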
72
Description of the Five Factors [1](pp. 76–81; based on Sattler et al., 2016)
Verbal Comprehension Measures verbal knowledge and understanding obtained primarily through both formal and informal education and reflects the application of verbal skills to new situations
73
Description of the Five Factors [2](pp. 76–81; based on Sattler et al., 2016)
Visual Spatial/Fluid Reasoning Measures the ability to interpret and organize visually perceived material, the ability to perform nonverbal inductive reasoning, and the ability to analyze and solve novel problems involving conceptual thinking
74
Description of the Five Factors [3](pp. 76–81; based on Sattler et al., 2016)
Working Memory Measures the ability to hold and manipulate information as well as the ability to pay attention and concentrate on tasks at hand
75
Description of the Five Factors [4](pp. 76–81; based on Sattler et al., 2016)
Processing Speed Measures the ability to process visually perceived nonverbal information quickly, with concentration and rapid eye-hand coordination being important components
76
Description of the Five Factors [5](pp. 76–81; based on Sattler et al., 2016)
Unknown Factor Has only one subtest in the total group with a high loading: Cancellation We advise that this factor not be used in interpreting the WISC–V
77
Measurement of g (p. 81)
Good measures of g: Vocabulary, Information, Similarities, Arithmetic, Digit Span, Letter–Number Sequencing. Fair measures of g: Visual Puzzles, Block Design, Comprehension, Matrix Reasoning, Figure Weights, Picture Span, Picture Concepts. Poor measures of g: Symbol Search, Coding, Cancellation
78
WISC–V Subtests as Measures of g (p. 82)
Table 2-12 Verbal Comprehension and Working Memory subtests (the exception is Picture Span) are good measures of g. Visual Spatial and Fluid Reasoning subtests are fair measures of g. Processing Speed subtests are poor measures of g. (Note: The average g loading for Cancellation is .24—the poorest measure of g in the WISC–V)
79
Amount of Specificity in WISC–V Subtests (p. 83)
Table 2-13 Most subtests have ample or adequate specificity at all ages. The three exceptions, where specificity is inadequate, are Vocabulary at ages 8 and 10, Information at age 11, and Symbol Search at ages 12 and 13
80
WISC–V Factor Structure [1]
Research Studies The Technical and Interpretive Manual (Wechsler, 2014c) reports a confirmatory factor analysis of the 16 WISC–V subtests in the standardization sample, yielding 5 factors: Verbal Comprehension, Visual Spatial, Fluid Reasoning, Working Memory, Processing Speed
81
WISC–V Factor Structure [2]
Research Studies (Cont.) Sattler et al. (2016; p. 76 in text) performed an exploratory factor analysis of the WISC–V standardization sample for the 16 subtests and found a set of 5 factors that differed from those reported by Wechsler (2014c)
82
WISC–V Factor Structure [3]
Research Studies (Cont.) Canivez et al. (2016a) performed an exploratory factor analysis of the WISC–V standardization sample for the 16 subtests and found that g accounts for most of the variance
83
WISC–V Factor Structure [4]
Research Studies (Cont.) However, some minimal support was found for a 4-factor model: Verbal Comprehension: Similarities, Vocabulary, Information, and Comprehension Working Memory: Arithmetic, Digit Span, Picture Span, and Letter–Number Sequencing Perceptual Reasoning: Block Design, Visual Puzzles, Matrix Reasoning, and Figure Weights Processing Speed: Coding, Symbol Search, and Cancellation Picture Concepts did not load on any factor
84
WISC–V Factor Structure [5]
Research Studies (Cont.) Canivez et al. (2016b) also performed a confirmatory factor analysis of the WISC–V standardization sample for the 16 subtests and reported that the g factor was more dominant than any other factor. Dombrowski et al. (2015) performed an exploratory bifactor analysis of the WISC–V standardization sample for the 16 subtests and reported that the g factor accounted for the largest portions of the total and common subtest variance
85
WISC–V Factor Structure [6]
Sources: Canivez, G. L., Watkins, M. W., & Dombrowski, S. C. (2016a). Factor structure of the Wechsler Intelligence Scale for Children–Fifth Edition: Exploratory factor analyses with the 16 primary and secondary subtests. Psychological Assessment, 28(8), 975–986.
86
WISC–V Factor Structure [7]
Sources: (Cont.) Canivez, G. L., Watkins, M. W., & Dombrowski, S. C. (2016b, July 21). Structural validity of the Wechsler Intelligence Scale for Children–Fifth Edition: Confirmatory factor analyses with the 16 primary and secondary subtests. Psychological Assessment. Advance online publication.
87
WISC–V Factor Structure [8]
Sources: (Cont.) Dombrowski, S. C., Canivez, G. L., Watkins, M. W., & Beaujean, A. (2015). Exploratory bifactor analysis of the Wechsler Intelligence Scale for Children–Fifth Edition with the 16 primary and secondary subtests. Intelligence, 53, 194–201.
88
Scaled Score Ranges for WISC–V Subtests [1] (p. 84)
Table 2-14 14 of the 16 subtests have a scaled score range of 1 to 19. Picture Concepts has a range of 1 to 19 at ages 6-4 to 16-11 and 2 to 19 at ages 6-0 to 6-3
89
Scaled Score Ranges for WISC–V Subtests [2] (p. 84)
Table 2-14 (Cont.) Letter-Number Sequencing has a range of 1 to 19 at ages 7-4 to 16-11, 2 to 19 at ages 7-0 to 7-3, 3 to 19 at ages 6-4 to 6-11, and 4 to 19 at ages 6-0 to 6-3. This means that you can’t automatically compare Letter-Number Sequencing scores at ages 6-0 to 7-3 with those of older ages
90
Range of Index Scores (p. 84)
Table 2-15 All primary index scales have a range of 45 to 155. The FSIQ has a range of 40 to 160. Ancillary index scores have ranges of 40 to 160 and 45 to 155. Complementary index scores have a range of 45 to 155
91
Guidelines for Computing Index Scores and FSIQs (pp. 84–85)
Study the guidelines on p. 85 for computing the following index scores: primary index scores, the FSIQ, ancillary index scores, and complementary index scores
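The actual conversions must come from the norms tables cited above; the sketch below only illustrates the general procedure (sum the relevant subtest scaled scores, then look the sum up in a conversion table), using a made-up table for a two-subtest index.

# Hypothetical stand-in table; real conversions require the WISC-V manual's norms tables.
HYPOTHETICAL_CONVERSION_TABLE = {20: 100, 21: 103, 22: 105, 23: 108}  # sum of scaled scores -> index score

def index_score(scaled_scores: list, conversion_table: dict) -> int:
    """Sum the subtest scaled scores and convert the sum to an index score."""
    return conversion_table[sum(scaled_scores)]

# Example: two subtest scaled scores of 11 and 12 sum to 23.
print(index_score([11, 12], HYPOTHETICAL_CONVERSION_TABLE))  # 108 (hypothetical value)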
92
Test Administration Guidelines [1](pp. 85–88)
Use suitable testing location Maintain good rapport Be flexible Be alert to the child’s mood and needs Be professional Follow standardization process Maintain steady pace
93
Test Administration Guidelines [2](pp. 85–88)
Make smooth transitions Be organized Shield your writing Take breaks, as needed between, not during, subtests Praise effort Empathize and encourage Use the exact wording of the directions, questions, and items
94
Test Administration Guidelines [3](pp. 85–88)
Observe the child’s performance carefully throughout the test Record responses correctly using (Q) for queries (P) for prompts (R) for repeated instructions Score each item after the child answers so that you know when to use a reverse procedure and when to discontinue the subtest
95
Supplementary Instructions for Administration (pp. 86–87)
Exhibit 2-2 Study carefully the supplementary instructions for administering the WISC–V The instructions cover the following areas: Preparing to administer the WISC–V Administering the WISC–V Scoring Record Form General guidelines for completing the Record Form Miscellaneous information and suggestions
96
Subtest Sequence (p. 89) The primary subtests that make up the Full Scale are administered in the following order: Block Design, Similarities, Matrix Reasoning, Digit Span, Coding, Vocabulary, Figure Weights
97
Administration Issues [1](pp. 89–94)
Specific guidelines are provided in the WISC–V Administration and Scoring Manual for: Queries Prompts Instructions Repeating items Additional help Waiting time Start point
98
Administration Issues [2](pp. 89–94)
Specific guidelines are provided in the WISC–V Administration and Scoring Manual for: (Cont.) Reverse Sequence rule Start-Point scoring rule Discontinue-Point scoring rule Discontinue criterion Scoring
99
Administration Issues [3](pp. 89–94)
Specific guidelines are provided in the WISC–V Administration and Scoring Manual for: (Cont.) Perfect scores Points for items not administered Spoiled responses Subtest substitution Proration
100
Subtest Substitution in the WISC–V (p. 93)
Only substitute a subtest if absolutely necessary. When you substitute, the psychometric properties of the FSIQ may change: reliabilities, validities, and confidence intervals of the FSIQ may all change. There are no empirical data for substitutions or for the number of substitutions. Follow the subtest substitution guidelines on p. 93
101
Substitution, Proration, and Retest on the WPPSI–IV [1] (not in text)
Zhu et al. (2016), using the standardization data, reported that substituting, prorating, and retesting resulted in: an increase in the FSIQ SEM of .61 to 1.92 points (a 20% to 64% increase), confidence intervals wider by 1.2 to 3.8 IQ points, and misclassification rates as high as 22%. Conclusion: Substitution, proration, or retesting introduces additional measurement error
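The size of the reported CI changes is consistent with the usual relationship between the SEM and a 95% confidence interval (roughly the obtained score plus or minus 1.96 times the SEM). A back-of-the-envelope check, not a recomputation of Zhu et al.'s analysis; the baseline SEM of 3.0 is a hypothetical placeholder:

# How much a 95% CI bound widens when the SEM increases by the amounts quoted above.
def ci_half_width(sem: float, z: float = 1.96) -> float:
    """Half-width of a 95% confidence interval for a given SEM."""
    return z * sem

baseline_sem = 3.0  # hypothetical baseline
for delta in (0.61, 1.92):
    widening = ci_half_width(baseline_sem + delta) - ci_half_width(baseline_sem)
    print(round(widening, 1))  # 1.2, then 3.8 -- matching the 1.2 to 3.8 point range quoted above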
102
Substitution, Proration, and Retest on the WPPSI–IV [2] (not in text)
Source: Zhu, J., Cayton, T. G., & Chen, H. (2016). Substitution, proration, or a retest? The optimal strategy when standard administration of the WPPSI–IV is infeasible. Psychological Assessment. Advance online publication. The original paper was presented at the American Psychological Association convention, July 2013, Honolulu, HI (Zhu & Cayton, 2013; reference in text)
103
Potential Problems in Administering the WISC–V (pp. 94–97) [1]
Potential problems (see Table 2-17, pp. 95–96) include difficulties in: Establishing rapport Administering test items Scoring test items Completing the Record Form
104
Potential Problems in Administering the WISC–V (pp. 94–97) [2]
McDermott et al. (2014) pointed out that: Compromised administration and scoring is not unique to cognitive tests It is endemic to psychological assessment in general and affects a broad collection of measuring devices Characteristics of the examiner, examinee, or examiner–examinee relationship also affect the test results They cite Terman (1918) who said that “there are innumerable sources of error in giving and scoring mental tests of whatever kind” (p. 33)
105
Potential Problems in Administering the WISC–V (pp. 94–97) [3]
Sources: McDermott, P. A., Watkins, M. W., & Rhoad, A. M. (2014). Whose IQ is it? Assessor bias variance in high-stakes psychological assessment. Psychological Assessment, 26(1), 207–214. Terman, L. M. (1918). Errors in scoring Binet tests. Psychological Clinic, 12, 33–39.
106
Using Portfolios to Teach Test-Scoring Skills [1] (not in text)
Egan et al. (2003) reported that students who maintained a portfolio with completed protocols and reviewed them prior to each practice administration made fewer errors than the control group
107
Using Portfolios to Teach Test-Scoring Skills [2] (not in text)
Source: Egan, P., McCabe, P., Semenchuk, D., & Butler, J. (2003). Using portfolios to teach test scoring skills: A preliminary investigation. Teaching of Psychology, 30(3), 233–235.
108
Short Forms of WISC–V (pp. 97–98)
See Table A-5 in Appendix A (pp. 387–388) for short form reliability and validity coefficients See Tables A-7, A-8, A-9, A-10 and A-11 in Appendix A (pp. 391–401) for 2-, 3-, 4-, 5-, and 6-subtest short forms
109
Reliable and Unusual Scaled-Score Ranges (pp. 389–390)
See Table A-6 for reliable and unusual scaled-score ranges for 2-, 3-, 4-, 5-, 6-, 7-, 10-, and 16-subtest combinations. For the FSIQ, a reliable range is 5 points (statistically significant at the .05 level). For the FSIQ, an unusual range is 9 points (occurs in less than 10% of the population)
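A simple scatter check can apply these two FSIQ thresholds to a set of subtest scaled scores. A minimal sketch, assuming "reliable" and "unusual" mean a range at least as large as the tabled value; the example scores are hypothetical:

def scatter_flags(scaled_scores: list) -> dict:
    """Range of the scaled scores and whether it reaches the 5-point (reliable) and 9-point (unusual) FSIQ thresholds."""
    score_range = max(scaled_scores) - min(scaled_scores)
    return {"range": score_range, "reliable": score_range >= 5, "unusual": score_range >= 9}

# Example: seven hypothetical FSIQ subtest scaled scores
print(scatter_flags([12, 9, 14, 7, 10, 13, 8]))  # {'range': 7, 'reliable': True, 'unusual': False}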
110
Choosing Between the WISC–V and the WPPSI–IV or the WAIS–IV (p. 98)
WISC–V or WPPSI–IV The WISC–V and the WPPSI–IV overlap at the ages 6-0 to 7-7 Specific recommendations are provided for choosing which test to use (see page 98 for recommended tests) WISC–V or WAIS–IV The WISC–V also overlaps with the WAIS–IV at ages 16-0 to 16-11
111
Administering the WISC–V to Children with Disabilities (pp. 98–100)
Chapter 1 (pp. 36–39) provides general suggestions for administering tests to children with special needs, while Chapter 2 (pp. 98–100) focuses on the WISC–V Prior to making any modifications in administration procedures Evaluate the sensory-motor abilities of children with disabilities Closely examine how suitable the subtests are for a child with special needs
112
Strengths of WISC–V (p. 100)
Excellent standardization Good overall psychometric properties Useful diagnostic information Good administration procedures Good manuals and interesting test materials Helpful scoring criteria Usefulness for children with some disabilities
113
Limitations of WISC–V [1] (pp. 100–101)
Limited breadth of coverage of the FSIQ Failure to provide conversion tables when substitutions are made Failure to provide a psychometric basis for requiring raw scores of 1 in order to compute FSIQ Limited range of scores for extremely low or high functioning children Limited criterion validity studies Possible difficulties in scoring responses
114
Limitations of WISC–V [2] (pp. 100–101)
Somewhat large practice effects Occasional confusing guidelines Poor quality of some test materials
115
How Am I Going to Score These?
Question: What are 12, 14, and 16? Answer: That’s easy; MTV, Fox, and Cartoon Network. Question: What is celebrated on Thanksgiving Day? Answer: My cousin’s birthday. Question: What is the capital of Greece? Answer: G.
116
How Am I Going to Score These?
Biology question: List three examples of marine life. Answer: Marching, barracks inspection, running the obstacle course. Astronomy question: Where is the Milky Way located? Answer: In the checkout aisle next to the rest of the candy bars.
117
How Am I Going to Score These?
Question: What does imitate mean? Answer: What does imitate mean? Question: What would you do if you were lost in the woods? Answer: I’d use my cell phone, pager, or my global positioning satellite device.
118
How Am I Going to Score These?
Question: What ended in 1945? Answer: 1944 Question: Where was the American Declaration of Independence signed? Answer: At the bottom Question: How do you change centimeters to meters? Answer: Take out centi
119
How Am I Going to Score These?
Question: Explain the phrase “free press.” Answer: When your mom irons trousers for you Question: What is a fibula? Answer: A little lie Question: What is a stand alone computer system? Answer: It does not come with a chair
120
Reflections on Intelligence and Childhood
“Too often we give children answers to remember rather than problems to solve.” —Roger Lewin
121
Chapter 3 WISC–V Subtests
122
Goals & Objectives (p. 107) Chapter designed to enable you to:
Critically evaluate the 21 WISC–V primary, secondary, and complementary subtests Understand the rationales, factor analytic findings, reliability and correlational highlights, administration guidelines, and interpretive suggestions for the 21 WISC–V subtests
123
Skills a Child Needs to be Successful on the WISC–V (p. 108)
Adequate hearing, the ability to pay attention and understand directions, the ability to retain the directions while solving problems, adequate vision, and adequate fine- and gross-motor skills (see p. 108)
124
Scoring WISC–V Items (p. 108)
Important considerations in scoring: Score each item as it is administered. Do not discontinue administering a subtest prematurely; this is particularly important when you are unsure how to score a response immediately. It is better to administer more items in a subtest, even though some may not be counted in the final score; you do not want to short-change the child by discontinuing the subtest too soon
125
Evaluating and Interpreting a Child’s Performance [1](p. 108)
Consider: Child’s scores and responses Quality of child’s responses Child’s response style, motivation, and effort How child handles frustration Child’s problem-solving approach Child’s fine-motor skills Child’s pattern of successes and failures
126
Evaluating and Interpreting a Child’s Performance [2](p. 108)
Consider: (Cont.) How child handles test materials How child handles tasks of each subtest Responding to difficult items Responding to time limits
127
Block Design [1](pp. 109–113) Primary Visual Spatial subtest
Key areas of measurement: Nonverbal reasoning Visual-spatial organization Other areas of measurement: See page 109
128
Block Design [2](pp. 109–113) Other Considerations Fair measure of g
Contributes moderately to the visual spatial/fluid reasoning factor A reliable subtest Somewhat difficult to administer and score
129
Similarities [1](pp. 113–116) Primary Verbal Comprehension subtest
Key area of measurement: Verbal concept formation Other areas of measurement: See page 113
130
Similarities [2](pp. 113–116) Other Considerations Good measure of g
Contributes substantially to the verbal comprehension factor A reliable subtest Relatively easy to administer, but some responses may be difficult to score
131
Matrix Reasoning [1](pp. 116–118)
Primary Fluid Reasoning subtest Key area of measurement: Visual-perceptual analogic reasoning ability without a speed component Other areas of measurement: See page 116
132
Matrix Reasoning [2](pp. 116–118)
Other Considerations Fair measure of g Contributes substantially to the visual spatial/fluid reasoning factor A reliable subtest Relatively easy to administer and score
133
Digit Span [1](pp. 118–122) Primary Working Memory subtest
Key areas of measurement: Auditory short-term memory Auditory sequential processing Other areas of measurement: See page 118
134
Digit Span [2](pp. 118–122) Other Considerations Good measure of g
Contributes substantially to the working memory factor A highly reliable subtest Relatively easy to administer and score
135
Coding [1](pp. 122–125) Primary Processing Speed subtest
Key area of measurement: Ability to learn an unfamiliar task involving speed of mental operation and graphomotor speed Other areas of measurement: See page 122
136
Coding [2](pp. 122–125) Other Considerations Poor measure of g
Contributes substantially to the processing speed factor A reliable subtest Relatively easy to administer and score
137
Vocabulary [1](pp. 125–129) Primary Verbal Comprehension subtest
Key area of measurement: Knowledge of words Other areas of measurement: See page 125
138
Vocabulary [2](pp. 125–129) Other Considerations
Best measure of g in the WISC–V Contributes substantially to the verbal comprehension factor A reliable subtest Relatively easy to administer but some responses may be difficult to score
139
Figure Weights [1](pp. 129–131)
Primary Fluid Reasoning subtest Key area of measurement: Visual-perceptual quantitative reasoning Other areas of measurement: See page 129
140
Figure Weights [2](pp. 129–131)
Other Considerations Fair measure of g Contributes substantially to the visual spatial/fluid reasoning factor A highly reliable subtest Relatively easy to administer and score
141
Visual Puzzles [1](pp. 131–134)
Primary Visual Spatial subtest Key area of measurement: Visual-perceptual reasoning Other areas of measurement: See page 131
142
Visual Puzzles [2](pp. 131–134)
Other Considerations Fair measure of g Contributes substantially to the visual spatial/fluid reasoning factor A reliable subtest Relatively easy to administer and score
143
Picture Span [1](pp. 134–136) Primary Working Memory subtest
Key area of measurement: Short-term memory Other areas of measurement: See page 134
144
Picture Span [2](pp. 134–136) Other Considerations Fair measure of g
Contributes substantially to the working memory factor A reliable subtest Relatively easy to administer and score
145
Symbol Search [1](pp. 136–140) Primary Processing Speed subtest
Key area of measurement: Processing speed Other areas of measurement: See page 136
146
Symbol Search [2](pp. 136–140) Other Considerations Poor measure of g
Contributes substantially to the processing speed factor A reliable subtest Relatively easy to administer and score
147
Information [1](pp. 140–142) Secondary Verbal Comprehension subtest
Key area of measurement: Long-term memory for factual information Other areas of measurement: See page 140
148
Information [2](pp. 140–142) Other Considerations Good measure of g
Contributes substantially to the verbal comprehension factor A reliable subtest Easy to administer and score
149
Picture Concepts [1](pp. 142–145)
Secondary Fluid Reasoning subtest Key area of measurement: Abstract, categorical reasoning based on visual- perceptual recognition process Other areas of measurement: See page 142
150
Picture Concepts [2](pp. 142–145)
Other Considerations Fair measure of g Contributes moderately to the visual spatial/fluid reasoning factor A reliable subtest Relatively easy to administer and score
151
Letter-Number Sequencing [1](pp. 145–147)
Secondary Working Memory subtest Key areas of measurement: Short-term working memory Auditory sequential processing Other areas of measurement: See page 145
152
Letter-Number Sequencing [2](pp. 145–147)
Other Considerations Good measure of g Contributes substantially to the working memory factor A reliable subtest Relatively easy to administer and score
153
Cancellation [1](pp. 147–150) Secondary Processing Speed subtest
Key areas of measurement: Visual-perceptual recognition Speed of visual processing Other areas of measurement: See page 147
154
Cancellation [2](pp. 147–150) Other Considerations
Poorest measure of g Contributes minimally to the processing speed factor A reliable subtest Relatively easy to administer and score
155
Naming Speed Literacy [1](pp. 150–153)
Complementary subtest Key areas of measurement: Processing speed Naming fluency Other areas of measurement: See page 150
156
Naming Speed Literacy [2](pp. 150–153)
Other Considerations Considered to be a measure of Processing Speed Long-Term Storage and Retrieval Combines with Naming Speed Quantity to form the Naming Speed Index A reliable subtest Easy to administer and score
157
Naming Speed Quantity [1](pp. 153–156)
Complementary subtest Key areas of measurement: Processing speed Naming fluency involving quantities Other areas of measurement: See page 153
158
Naming Speed Quantity [2](pp. 153–156)
Other Considerations Considered to be a measure of Processing Speed Long-Term Storage and Retrieval Combines with Naming Speed Literacy to form the Naming Speed Index A reliable subtest Relatively easy to administer and easy to score
159
Immediate Symbol Translation [1](pp. 156–158)
Complementary subtest Key area of measurement: Short-term memory Other areas of measurement: See page 156
160
Immediate Symbol Translation [2](pp. 156–158)
Other Considerations Considered to be a measure of Long-Term Storage and Retrieval Short-Term Memory Visual Processing A reliable subtest Relatively easy to score, but somewhat difficult to administer
161
Comprehension [1](pp. 158–160)
Secondary Verbal Comprehension subtest Key areas of measurement: Practical reasoning Judgment in social situations Other areas of measurement: See page 158
162
Comprehension [2](pp. 158–160)
Other Considerations Fair measure of g Contributes substantially to the verbal comprehension factor A reliable subtest Relatively easy to administer, but somewhat difficult to score
163
Arithmetic [1](pp. 160–163) Secondary Fluid Reasoning subtest
Key area of measurement: Numerical reasoning Other areas of measurement: See page 160
164
Arithmetic [2](pp. 160–163) Other Considerations Good measure of g
Contributes moderately to the working memory factor A highly reliable subtest Relatively easy to administer and score
165
Delayed Symbol Translation [1](pp. 163–165)
Complementary subtest Key area of measurement: Delayed visual recall Other areas of measurement: See pages 163–164
166
Delayed Symbol Translation [2](pp. 163–165)
Other Considerations Considered to be a measure of Long-Term Storage and Retrieval A reliable subtest Relatively easy to administer and score
167
Recognition Symbol Translation [1](pp. 165–167)
Complementary subtest Key area of measurement: Delayed visual recall Other areas of measurement: See page 165
168
Recognition Symbol Translation [2](pp. 165–167)
Other Considerations Considered to be a measure of Long-Term Storage and Retrieval A reliable subtest Relatively easy to administer and score
169
Reflections on Intelligence and Childhood
“Integrity without knowledge is weak and useless, and knowledge without integrity is dangerous and dreadful.” — Samuel Johnson “Intelligence without ambition is a bird without wings.” — Salvador Dali “You might be poor, your shoes might be broken, but your mind is a palace.” —Frank McCourt
170
Interpreting the WISC–V
Chapter 4 Interpreting the WISC–V
171
Goals & Objectives (p. 171) Chapter designed to enable you to:
Describe profile analysis for the WISC–V Analyze and evaluate WISC–V scores from multiple perspectives Develop hypotheses about WISC–V scores and responses Report WISC–V findings to parents and others
172
What does the WISC–IV IQ Represent? [1](not in the text)
McDermott et al. (2014) reported that WISC–IV FSIQs: Are associated with the assessor’s bias (multilevel linear modeling) Sample size: N = 2,783 children evaluated by 448 regional school psychologists for possible special education placements
173
What does the WISC–IV IQ Represent? [2](not in the text)
Chen et al. (2016), in contrast, reported that WISC–IV FSIQs: Are valid measures of children’s intellectual abilities and are not related to the assessor’s bias (hierarchical linear modeling) Sample size: N = 2,200 in the standardization sample The only subtest that showed some assessor bias was Comprehension
174
What does the WISC–IV IQ Represent? [3](not in the text)
Sources: McDermott, P. A., Watkins, M. W., & Rhoad, A. M. (2014). Whose IQ is it? Assessor bias variance in high-stakes psychological assessment. Psychological Assessment, 26(1), 207–214. Chen, H., Pan, T., & Zhu, J. (2016). It is the examinee’s IQ. Psychological Assessment. Advance online publication.
175
Factors to Consider in Interpreting the WISC–V [1](p. 172)
Perform a profile analysis Determine whether the five primary index scores differ significantly from each other Determine whether the subtest scaled scores differ significantly from each other Obtain base rates for differences between the index scores Obtain base rates for differences between some of the subtest scaled scores
176
Factors to Consider in Interpreting the WISC–V [2](p. 172)
Determine base rates for intersubtest scatter Develop hypotheses and interpretations
177
Full Scale IQ [1](p. 172) Includes measures of: Verbal comprehension
Visual spatial reasoning Fluid reasoning Working memory Processing speed
178
Full Scale IQ [2](p. 172) The seven subtests that comprise the Full Scale are: Similarities, Vocabulary, Block Design, Matrix Reasoning, Figure Weights, Digit Span, Coding
179
Verbal Comprehension Index [1](p. 172)
Measures: Verbal comprehension Application of verbal skills and information to the solution of new problems Ability to process verbal information Retrieval of information from long-term memory Crystallized knowledge Conceptual reasoning ability Language development
180
Verbal Comprehension Index [2](p. 172)
The two subtests that comprise the Verbal Comprehension Index are: Similarities Vocabulary
181
Visual Spatial Index [1](pp. 172–173)
Measures: Ability to think in visual images and manipulate them with fluency and speed Ability to interpret or organize visually perceived material quickly Nonverbal reasoning Visual-perceptual discrimination Visual spatial reasoning ability
182
Visual Spatial Index [2](pp. 172–173)
The two subtests that comprise the Visual Spatial Index are: Block Design Visual Puzzles
183
Fluid Reasoning Index [1](p. 173)
Measures: Fluid reasoning ability Visual-perceptual reasoning and organization Ability to think in visual images and manipulate them with fluency and relative speed Ability to interpret or organize visually perceived material quickly Nonverbal reasoning Visual-perceptual discrimination
184
Fluid Reasoning Index [2](p. 173)
The two subtests that comprise the Fluid Reasoning Index are: Matrix Reasoning Figure Weights
185
Working Memory Index [1](p. 173)
Measures: Short-term memory Visual processing Working memory Memory span Visual spatial memory Rote memory Immediate visual memory Attention Concentration
186
Working Memory Index [2](p. 173)
The two subtests that comprise the Working Memory Index are: Digit Span Picture Span
187
Processing Speed Index [1](p. 173)
Measures: Processing speed Psychomotor speed Perceptual speed Short-term visual memory Visual-motor coordination and dexterity Visual-perceptual discrimination Speed of mental operation Attention Concentration Scanning ability
188
Processing Speed Index [2](p. 173)
The two subtests that comprise the Processing Speed Index are: Coding Symbol Search
189
Ancillary Indexes (pp. 173–175)
Seven Ancillary Indexes Quantitative Reasoning Index Auditory Working Memory Index Nonverbal Index General Ability Index Cognitive Proficiency Index Verbal (Expanded Crystallized) Index Expanded Fluid Index
190
Quantitative Reasoning Index (p. 173)
Provides additional information regarding a child’s reasoning skills, specifically those involving numeric information The two subtests that comprise the Quantitative Reasoning Index are: Figure Weights Arithmetic
191
Auditory Working Memory Index (p. 173)
Provides additional information regarding a child’s memory skills. The two subtests that comprise the Auditory Working Memory Index are: Digit Span Letter-Number Sequencing
192
Nonverbal Index (p. 174) Provides additional information about thinking abilities that do not require expressive responses and an estimate of intellectual ability, with reduced demands on verbal comprehension abilities The six subtests that comprise the Nonverbal Index are: Block Design Visual Puzzles Matrix Reasoning Figure Weights Picture Span Coding
193
General Ability Index (p. 174)
May be useful when a means of estimating intellectual ability is needed that places reduced demands on working memory and processing speed The five subtests that comprise the General Ability Index are: Similarities Vocabulary Block Design Matrix Reasoning Figure Weights
194
Cognitive Proficiency Index (p. 174)
May be useful when a means of estimating intellectual ability is needed that places reduced demands on verbal comprehension, visual spatial, or fluid reasoning abilities The four subtests that comprise the Cognitive Proficiency Index are: Digit Span Picture Span Coding Symbol Search
195
Verbal (Expanded Crystallized) Index [1](p. 174)
Measures: Verbal comprehension Fund of information Receptive and expressive language Range of factual knowledge Application of verbal skills and information to the solution of new problems Logical reasoning Cognitive flexibility (including the ability to shift mental operations) Verbal concept formation Ability to self-monitor
196
Verbal (Expanded Crystallized) Index [2](p. 174)
Subtests draw on a child’s accumulated experience. The four subtests that comprise the Verbal (Expanded Crystallized) Index are: Similarities, Vocabulary, Information, Comprehension
197
Expanded Fluid Index [1](pp. 174–175)
Measures: Perceptual reasoning Conceptual thinking Ability to think in terms of visual images and manipulate them with fluency Ability to form abstract concepts and relationships without the use of words Cognitive flexibility (including the ability to shift mental operations) Fluid reasoning Attention Concentration Nonverbal ability Ability to self-monitor Mental computation
198
Expanded Fluid Index [2](pp. 174–175)
Index requires nonverbal problem-solving ability with use of previously acquired skills to solve a novel set of problems The four subtests that comprise the Expanded Fluid Index are: Matrix Reasoning Figure Weights Picture Concepts Arithmetic
199
Complementary Indexes (p. 175)
The three Complementary Indexes are Naming Speed Index Symbol Translation Index Storage and Retrieval Index
200
Naming Speed Index [1](p. 175)
Measures: Processing speed Number sense Long-term storage and retrieval Ability to identify size, color, letters, and numbers Naming facility Automaticity in visual- verbal associations Perceptual speed Rate of test taking Attention Visual-perceptual discrimination Concentration Scanning ability
201
Naming Speed Index [2](p. 175)
The two subtests that comprise the Naming Speed Index are: Naming Speed Literacy Naming Speed Quantity
202
Symbol Translation Index [1] (p. 175)
Measures: Long-term storage and retrieval Visual memory Visual-perceptual discrimination Short-term memory Visual processing Learning ability Associative memory Scanning ability Working memory Recognition memory Visualization Rote learning
203
Symbol Translation Index [2] (p. 175)
The three subtests that comprise the Symbol Translation Index are: Immediate Symbol Translation Delayed Symbol Translation Recognition Symbol Translation
204
Storage and Retrieval Index [1] (p. 175)
Measures: Naming facility Long-term storage and retrieval Processing speed Short-term memory Perceptual speed Working memory Rate of test taking Visual memory Visual processing Visual-perceptual discrimination Visualization Associative memory Learning ability
205
Storage and Retrieval Index [2] (p. 175)
Measures: (Cont.) Scanning ability Retrieval speed Number sense Immediate and delayed visual recall skills Ability to identify size, color, letters, and numbers Paired-associates learning Automaticity of visual- verbal associations Attention and concentration Recognition memory
206
Storage and Retrieval Index [3] (p. 175)
The two index scores that are combined to form the Storage and Retrieval Index are: Naming Speed Index, Symbol Translation Index
207
Rapid Automatized Naming (RAN) [1](not in text)
Review of Literature Norton and Wolf (2012) reviewed the literature on Rapid Automatized Naming (RAN) and reading fluency. They concluded the following: RAN provides an index of one’s abilities to integrate multiple neural processes RAN and phonological awareness are both robust early predictors of reading ability, and one or both are often impaired in people with dyslexia
208
Rapid Automatized Naming (RAN) [2](not in text)
Fluent reading can be conceptualized as a complex ability that depends on automaticity across all levels of cognitive and linguistic processing involved in reading, allowing the individual time and thought to be devoted to comprehension
209
Rapid Automatized Naming (RAN) [3](not in text)
Successful intervention depends on accurate assessment of both accuracy and speed across all levels of reading Best interventions involve multicomponential intervention programs that target phonology and multiple levels of language, including:
210
Rapid Automatized Naming (RAN) [4](not in text)
Best interventions: (Cont.) Orthography—study of letters and spelling of words Morphology—study of how words are formed Syntax—study of how words are ordered to form logical, meaningful sentences Semantics—study of the meaning and interpretation of words
211
Rapid Automatized Naming (RAN) [5](not in text)
Example of Research Willburger et al. (2008) reported the following: Sample size: N = 267 children Children with dyslexia had a deficit in rapid naming of items Children with dyscalculia had a deficit in rapid naming of quantities Children with both dyslexia and dyscalculia had deficits in both rapid naming of items and rapid naming of quantities
212
Rapid Automatized Naming (RAN) [6](not in text)
Sources: Norton, E. S., & Wolf, M. (2012). Rapid automatized naming (RAN) and reading fluency: Implications for understanding and treatment of reading disabilities. Annual Review of Psychology, 63, 427–452.
213
Rapid Automatized Naming (RAN) [7](not in text)
Willburger, E., Fussenegger, B., Moll, K., Wood, G., & Landerl, K. (2008). Naming speed in dyslexia and dyscalculia. Learning and Individual Differences, 18(2), 224–236.
214
Profile Analysis [1](p. 175)
Aims of Profile Analysis To look at a child’s unique ability pattern (including strengths and weaknesses), going beyond the information contained in the FSIQ or the index scores To help in formulating teaching strategies, accommodations, and other types of interventions
215
Profile Analysis [2](p. 175)
Cannot reliably be used to arrive at a clinical or psychoeducational diagnosis Results on any one test should never be used as the sole basis for a clinical or psychoeducational diagnosis
216
Profile Analysis [3](p. 176)
Goal of Profile Analysis To generate hypotheses about a child’s abilities, which then need to be verified using other scores and information about the child
217
Profile Analysis [4](p. 176)
Relatively Large Intersubtest Variability May Indicate Special aptitudes or weaknesses Acquired deficits or disease processes Temporary inefficiencies Motivational difficulties Vision or hearing problems Concentration difficulties Rebelliousness
218
Profile Analysis [5](p. 176)
Relatively Large Intersubtest Variability May Indicate (Cont.) Learning disabilities Particular school or home experiences
219
Profile Analysis [6](p. 176)
Scaled Scores 13 to 19 always indicate a strength (84th to 99th percentile rank) 8 to 12 always indicate average ability (25th to 75th percentile rank) 1 to 7 always indicate a weakness (1st to 16th percentile rank)
220
Profile Analysis [7](p. 178)
Base Rates Determining the frequency with which the differences between scores occurred in the normative sample Base rate approach Probability-of-occurrence approach
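To make the probability-of-occurrence idea concrete, here is a minimal sketch, assuming a made-up set of normative difference scores (the real base rates come from the WISC–V normative tables, not from this code):

```python
# Minimal sketch of a base rate (probability-of-occurrence) calculation.
# The difference scores below are hypothetical, not WISC-V normative data.

def base_rate(normative_differences, observed_difference):
    """Percentage of the normative sample whose absolute difference is
    at least as large as the child's observed difference."""
    count = sum(1 for d in normative_differences if abs(d) >= abs(observed_difference))
    return 100.0 * count / len(normative_differences)

# Hypothetical VCI - PSI difference scores from a normative sample
differences = [3, -5, 12, 0, -18, 7, 22, -9, 4, -14, 16, 1, -2, 10, -25, 6]
print(f"Base rate for a 15-point difference: {base_rate(differences, 15):.1f}%")  # 25.0%
```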
221
Profile Analysis [8](pp. 179–198)
Methods of Profile Analysis Compare the primary index scores—VCI, VSI, FRI, WMI, and PSI—with each other Compare each primary index score with the mean of the child’s primary index scores and/or the FSIQ, using critical values and base rates Compare each primary index subtest scaled score with the child’s mean scaled score on the primary index subtests (MSS-P) and/or the FSIQ subtests (MSS-F), using critical values and base rates
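The second method listed above (comparing each primary index score with the mean of the child's primary index scores) can be sketched as follows; the 10-point critical value is a placeholder, not a value from the WISC–V interpretive tables:

```python
# Sketch: compare each primary index score with the mean of the child's
# primary index scores. The 10-point critical value is a placeholder;
# actual critical values and base rates come from the interpretive tables.

def compare_to_mean(index_scores, critical_value=10.0):
    mean_index = sum(index_scores.values()) / len(index_scores)
    results = {}
    for name, score in index_scores.items():
        deviation = score - mean_index
        if deviation >= critical_value:
            label = "relative strength"
        elif deviation <= -critical_value:
            label = "relative weakness"
        else:
            label = "not significant"
        results[name] = (round(deviation, 1), label)
    return mean_index, results

mean_index, results = compare_to_mean({"VCI": 112, "VSI": 98, "FRI": 105, "WMI": 88, "PSI": 92})
print(f"Mean of primary index scores: {mean_index:.1f}")   # 99.0
for name, (deviation, label) in results.items():
    print(f"{name}: {deviation:+.1f} -> {label}")
```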
222
Profile Analysis [9](pp. 179–198)
Methods of Profile Analysis (Cont.) Compare sets of individual primary and secondary subtest scaled scores Compare the range of subtest scaled scores with the base rate found in the normative sample Compare the Cancellation Random and Cancellation Structured process scores and other process scores
223
Profile Analysis [10](pp. 179–198)
Methods of Profile Analysis (Cont.) Compare the GAI and the CPI Compare the VECI and the EFI Compare the NSI and the STI Compare sets of individual complementary subtest standard scores
224
A Successive Level of Approach to Test Interpretation (pp. 198–200)
The use of a successive-level approach to test interpretation can help you better understand a child’s performance on the WISC–V (see Figure 4-1, p. 199) by providing Quantitative and qualitative data An analysis of both general and specific areas of intellectual functioning
225
Steps in Analyzing a Protocol (pp. 199–200)
See pages 199–200
226
Estimated Percentile Ranks and Age Equivalents (p. 200)
Estimated percentile ranks can be obtained for the FSIQ, index scores, and subtest scaled scores Age equivalents can be obtained for the total raw scores Qualitative descriptions of the index scores and FSIQ can be found on p. 200
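Because the FSIQ and index scores use M = 100 and SD = 15 while subtest scaled scores use M = 10 and SD = 3, an approximate percentile rank can be estimated from the normal curve, as in this sketch (the published tables, not this approximation, should be used in actual reports):

```python
# Approximate percentile ranks from the normal curve.
# FSIQ and index scores: M = 100, SD = 15; subtest scaled scores: M = 10, SD = 3.
from statistics import NormalDist

def percentile_rank(score, mean, sd):
    return 100.0 * NormalDist(mean, sd).cdf(score)

print(f"FSIQ 85  -> {percentile_rank(85, 100, 15):.0f}th percentile")    # ~16th
print(f"FSIQ 115 -> {percentile_rank(115, 100, 15):.0f}th percentile")   # ~84th
print(f"Scaled score 7  -> {percentile_rank(7, 10, 3):.0f}th percentile")    # ~16th
print(f"Scaled score 13 -> {percentile_rank(13, 10, 3):.0f}th percentile")   # ~84th
```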
227
Profile Variability [1] (p. 201)
Research Studies Is the FSIQ valid when the index scores show extreme variability? Two research reports shed light on this question Daniel (2007) used simulation methodology to investigate the effect of index score “scatter” on the construct validity of the WISC–IV FSIQ He found that the FSIQ was “equally valid at all levels of scatter, supporting the interpretability of the FSIQ in populations characterized by variable index-score profiles” (p. 291)
228
Profile Variability [2] (p. 201)
Research Studies (Cont.) Watkins, Glutting, and Lei (2007) showed that WISC–III and WISC–IV FSIQs have robust correlations with measures of reading and math, even when test profiles have at least one statistically significant difference in factor or index scores: 82% to 85% of the 4,044 children in the study had at least one statistically significant difference in factor or index scores
229
Profile Variability [3] (p. 201)
Comment The above studies argue against the position of Fiorello et al. (2007) and Hale et al. (2007), who contended that the WISC–IV FSIQ should not be interpreted for children with disabilities when index scores are diverse
230
Profile Variability [4] (p. 201)
Sources: Daniel, M. H. (2007). ‘Scatter’ and the construct validity of FSIQ: Comment on Fiorello et al. (2007). Applied Neuropsychology, 14(4), 291–295. Fiorello, C. A., Hale, J. B., Holdnack, J. A., Kavanagh, J. A., Terrell, J., & Long, L. (2007). Interpreting intelligence test results for children with disabilities: Is global intelligence relevant? Applied Neuropsychology, 14(1), 2–12.
231
Profile Variability [5] (p. 201)
Sources: (Cont.) Hale, J. B., Fiorello, C. A., Kavanagh, J. A., Holdnack, J. A., & Aloe, A. M. (2007). Is the demise of IQ interpretation justified? A response to special issue authors. Applied Neuropsychology, 14(1), 37–51. Watkins, M. W., Glutting, J. J., & Lei, P. W. (2007). Validity of the Full-Scale IQ when there is significant variability among WISC–III and WISC–IV factor scores. Applied Neuropsychology, 14(1), 13–20.
232
Reflection on Intelligence and Childhood
“It takes a long time to grow young.” —Pablo Picasso “I not only use all the brains that I have, but all I can borrow.” —Woodrow Wilson
233
Remembering and Forgetting
234
Chapter 5 Wechsler Preschool and Primary Scale of Intelligence–Fourth Edition (WPPSI–IV): Description
235
Goals & Objectives (p. 207) Chapter designed to enable you to:
Evaluate the psychometric properties of the WPPSI–IV Administer the WPPSI–IV competently and professionally Evaluate and select short forms of the WPPSI–IV Choose between the WPPSI–IV and the WISC–V at the overlapping ages
236
History of the WPPSI–IV (not in text)
Revisions of the WPPSI WPPSI first published in 1967 WPPSI–R revision published in 1989 WPPSI–III next revision published in 2002 WPPSI–IV latest revision published in 2012* *David Wechsler, the original author, died in 1981.
237
WPPSI–IV Structure (pp. 208–212)
See: Table 5-1 (p. 208) Figs. 5-1 and 5-2 (p. 209) Figs. 5-3 and 5-4 (p. 210) Fig. 5-5 (p. 211) Fig. 5-6 (p. 212)
238
Standardization of WPPSI–IV (p. 213)
Standardized on 1,700 children selected to represent preschool and young school-age population in the US between 2010 and 2012 (see Table 5-2, p. 213) Obtained a stratified sample using demographic characteristics of age, sex, ethnicity, geographic region, and parental education (used as a measure of socioeconomic status)
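As a rough illustration of how a stratified normative sample can mirror population demographics, here is a minimal sketch of proportional allocation; the regional shares below are invented for the example, not the actual census figures used by the publisher:

```python
# Sketch of proportional stratified allocation for a normative sample.
# The regional shares below are invented, not the actual census figures.

def allocate(total_n, population_shares):
    """Split a total sample size across strata in proportion to the population."""
    return {stratum: round(total_n * share) for stratum, share in population_shares.items()}

region_shares = {"Northeast": 0.17, "Midwest": 0.21, "South": 0.38, "West": 0.24}
print(allocate(1700, region_shares))
# {'Northeast': 289, 'Midwest': 357, 'South': 646, 'West': 408}
```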
239
Descriptive Statistics for the WPPSI–IV (pp. 214–218)
The WPPSI–IV uses: Standard scores (M = 100, SD = 15) for each of the primary and ancillary index scores and for the FSIQ Scaled scores (M = 10, SD = 3) for the 15 subtests and 2 process scores
240
Reliability (pp. 214–218) Internal consistency reliabilities for subtests, process scores, and index scales: See Table 5-3 (pp. 215–216) Test-retest reliabilities for index scores and FSIQ: See Table 5-5 (pp. 217–218) Test-retest point gains for subtests and process scores: See Table 5-6 (p. 219)
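For readers who want to see the underlying arithmetic, the sketch below computes a test-retest correlation and a split-half reliability with the Spearman–Brown correction, using made-up scores rather than WPPSI–IV data (requires Python 3.10+ for statistics.correlation):

```python
# Illustrative reliability calculations with made-up scores (not WPPSI-IV data).
from statistics import correlation  # Python 3.10+

# Test-retest reliability: correlate scores from two administrations
first_administration  = [102, 95, 110, 88, 120, 99, 105, 93]
second_administration = [104, 97, 108, 90, 118, 101, 103, 95]
print(f"Test-retest r = {correlation(first_administration, second_administration):.2f}")

# Split-half internal consistency, corrected with the Spearman-Brown formula
odd_item_half  = [12, 9, 15, 8, 14, 11]
even_item_half = [11, 10, 14, 9, 13, 12]
r_half = correlation(odd_item_half, even_item_half)
corrected = 2 * r_half / (1 + r_half)
print(f"Split-half r = {r_half:.2f}, Spearman-Brown corrected = {corrected:.2f}")
```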
241
Validity [1] (pp. 218–232) Criterion validity, see Table 5-7 (pp. 220–221) Special group studies, see Table 5-8 (p. 222) Subtest and index score correlations, ages 2-6 to 3-11, see Table 5-9 (p. 223) Picture Naming, highest with FSIQ .66 Information, next highest with FSIQ .63 Receptive Vocabulary, next highest with FSIQ .61
242
Validity [2] (pp. 218–232) Subtest, process score, and index score correlations ages 4-0 to 7-7, see Table 5-10 (p. 224) Vocabulary, highest with FSIQ .67 Information and Similarities, next highest with FSIQ .64
243
Description of the Factors [1] (pp. 225, 228)
Three factors at ages 2‑6 to 3‑11 (see Table 5-11, p. 225): Verbal Comprehension: measures verbal knowledge and understanding obtained through informal education and reflects the application of verbal skills to new situations Visual Spatial: measures the ability to interpret and organize visually perceived material and to generate and test hypotheses related to problem solutions
244
Description of the Factors [2] (p. 228)
Three factors at ages 2‑6 to 3‑11 (see Table 5-11, p. 225): (Cont.) Working Memory: measures the ability to hold and manipulate information in the mind as well as the ability to pay attention and concentrate on tasks at hand
245
Description of the Factors [3] (p. 228)
Five factors at ages 4-0 to 7-7 (see Table 5-12, pp.226–228): Verbal Comprehension: measures verbal knowledge and understanding obtained primarily through both formal and informal education and reflects the application of verbal skills to new situations Visual Spatial: measures the ability to interpret and organize visually perceived material and to generate and test hypotheses related to problem solutions
246
Description of the Factors [4] (p. 228)
Five factors at ages 4-0 to 7-7 (see Table 5-12, pp.226–228): (Cont.) Fluid Reasoning: measures nonverbal ability, inductive reasoning ability, and the ability to analyze and solve novel problems Working Memory: measures the ability to hold and manipulate information in the mind as well as the ability to pay attention and concentrate on tasks at hand
247
Description of the Factors [5] (p. 229)
Five factors at ages 4-0 to 7-7 (see Table 5-12, pp.226–228): (Cont.) Processing Speed: measures the ability to process visually perceived nonverbal information quickly, with concentration and rapid eye-hand coordination being important components
248
Subtest Loadings of .30 or Higher (p. 229)
See Table 5-13 (p. 229) Ages 2-6 to 3-11, subtests differ in their loadings on the three scales at different ages Ages 4-0 to 7-7, subtests differ in their loadings on the five scales at different ages
249
Measures of g at Ages 2-6 to 3-11 (see Table 5-14, pp. 230–231)
Good Measures of g Information Picture Naming Receptive Vocabulary Fair Measures of g Picture Memory Block Design Object Assembly Zoo Locations
250
Measures of g at Ages 4-0 to 7-7 (see Table 5-14, pp. 230–231)
Good Measures of g Information Similarities Vocabulary Comprehension Picture Naming Receptive Vocabulary Fair Measures of g Matrix Reasoning Block Design Object Assembly Picture Memory Bug Search Picture Concepts Animal Coding Zoo Locations Cancellation
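Ratings such as "good" or "fair" measure of g are typically based on each subtest's loading on the first unrotated factor or principal component; the sketch below illustrates the idea with a hypothetical correlation matrix and illustrative cutoffs (.70 for good, .50 for fair), which are assumptions rather than values from the text:

```python
# Sketch: gauging how well subtests measure g from loadings on the first
# principal component of a hypothetical subtest correlation matrix.
# The .70 / .50 cutoffs are illustrative assumptions, not values from the text.
import numpy as np

subtests = ["Information", "Similarities", "Block Design", "Cancellation"]
R = np.array([               # hypothetical subtest intercorrelations
    [1.00, 0.62, 0.48, 0.22],
    [0.62, 1.00, 0.45, 0.20],
    [0.48, 0.45, 1.00, 0.28],
    [0.22, 0.20, 0.28, 1.00],
])

eigenvalues, eigenvectors = np.linalg.eigh(R)        # eigenvalues in ascending order
g_loadings = np.abs(eigenvectors[:, -1]) * np.sqrt(eigenvalues[-1])

for name, loading in zip(subtests, g_loadings):
    rating = "good" if loading >= 0.70 else "fair" if loading >= 0.50 else "poor"
    print(f"{name}: g loading {loading:.2f} -> {rating} measure of g")
```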
251
Amount of Specificity (p. 232)
Nine Age Groups and Total Group See Table 5-15 (p. 232) Overall subtest specificity adequate Exceptions (inadequate): Picture Naming at ages 7-0 to 7-7 and Animal Coding at ages 5-0 to 5-5
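Subtest specificity is usually estimated as the subtest's reliability minus the variance it shares with the other subtests (its squared multiple correlation); a common rule of thumb, used here only as an assumption, treats specificity as adequate when it is at least .25 and exceeds the error variance:

```python
# Sketch of a subtest specificity check, using hypothetical numbers.
# Specificity = reliability - shared variance (squared multiple correlation).
# Assumption: specificity is adequate when it is at least .25 and also
# exceeds the error variance (1 - reliability).

def specificity_check(reliability, r_squared_with_other_subtests):
    specificity = reliability - r_squared_with_other_subtests
    error_variance = 1.0 - reliability
    adequate = specificity >= 0.25 and specificity > error_variance
    return specificity, error_variance, adequate

spec, err, ok = specificity_check(reliability=0.88, r_squared_with_other_subtests=0.45)
print(f"Specificity = {spec:.2f}, error variance = {err:.2f}, adequate = {ok}")
# Specificity = 0.43, error variance = 0.12, adequate = True
```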
252
Subtest Scaled-Score Ranges (p. 233)
See Table 5-16 (p. 233) Ranges 1 to 19 for 9 subtests Ranges 1 to 18 for 1 subtest Variable ranges for 7 subtests Use caution in comparing subtests and evaluating developmental changes when subtests have different ranges
253
Computing Index Scores and FSIQs (pp. 232–233)
Follow special guidelines for ages 2-6 to 3-11 and ages 4-0 to 7-7 on p. 233
254
Index Score Ranges (p. 234) See Table 5-17 (p. 234)
Ages 2-6 to 3-11: FSIQ extensive ranges (at ages 2-6 to 2-8, 2-9 to 2-11, 3-0 to 3-2, and 3-3 to 3-11) Ages 4-0 to 7-7: FSIQ minimal ranges (at ages 4-0 to 6-7 and 6-8 to 7-7)
255
Supplementary Instructions for Administration (pp. 235–237)
Exhibit 5-1 Study carefully the supplementary instructions for administering the WPPSI–IV The instructions cover the following areas: Preparing to administer the WPPSI–IV Administering the WPPSI–IV Scoring Record Form for ages 2-6 to 3-11 and 4-0 to 7-7 General guidelines for completing the Record Form Miscellaneous information and suggestions
256
Overall Guidelines for Test Administration [1](pp. 237–238)
Use a suitable testing location Maintain good rapport Be flexible Be alert to the child’s mood and needs Be professional Follow standard order of subtest administration Maintain steady pace Make smooth transitions
257
Overall Guidelines for Test Administration [2](pp. 237–238)
Shield your writing Take short breaks, as needed, between (not during) subtests Praise effort Empathize and encourage the child Use the exact wording of the directions, questions, and items Be sure to observe the child’s performance carefully throughout the test
258
Overall Guidelines for Test Administration [3](pp. 237–238)
Be sure to record responses correctly using (Q) for queries (P) for prompts (R) for repeated instructions Score each item after the child answers so that you know when to use a reverse procedure and when to discontinue a subtest
259
Subtest Sequence [1](p. 238)
At ages 2‑6 to 3‑11, the core subtests for the Full Scale are administered in the following order: Receptive Vocabulary Block Design Picture Memory Information Object Assembly
260
Subtest Sequence [2](p. 238)
At ages 4‑0 to 7‑7, the core subtests for the Full Scale are administered in the following order: Block Design Information Matrix Reasoning Bug Search Picture Memory Similarities
261
Administration Issues [1](pp. 238–243)
Administration and Scoring Manual provides specific guidelines for: Queries Prompts Repeating instructions Repeating items Additional help Waiting time Start point
262
Administration Issues [2](pp. 238–243)
Administration and Scoring Manual provides specific guidelines for: (Cont.) Reverse sequence rule Start-point scoring rule Discontinue-point scoring rule Discontinue criterion Scoring Perfect scores
263
Administration Issues [3](pp. 238–243)
Administration and Scoring Manual provides specific guidelines for: (Cont.) Points for items not administered Spoiled responses Subtest substitution Proration
264
Perfect Scores (p. 241) See Table 5-18 (p. 241) Perfect scores vary
Pay careful attention to perfect scores on each subtest Perfect scores usually are 1 or 2 points But, on Object Assembly, perfect scores can range from 1 to 5 points
265
Subtest Substitution Guidelines (p. 242)
See page 242 Guidelines differ at ages 2-6 to 3-11 and at ages 4-0 to 7-7
266
Potential Problems in Administering the WPPSI–IV (p. 243)
Study potential problems in administering the WISC–V in Chapter 2 (pp. 94–97) Make videos of your test administration Become thoroughly familiar with the administrative and scoring guidelines Learn from your mistakes and from others’ feedback
267
Short Forms (pp. 243–244) See Tables B-6 (p. 437) and B-7 (p. 438) in Appendix B for a list of short forms
268
Subtest Scatter [1] (p. 245) See Table B-8 (p. 439) for ages 2-6 to 3-11 for reliable and unusual scaled-score ranges for 2, 3, 4, 5, 6, and 7 subtests For 6 subtests Reliable scaled-score range is 5 Unusual scaled-score range is 8
269
Subtest Scatter [2] (p. 245) See Table B-9 (pp. 440–441) for ages 4-0 to 7-7 for reliable and unusual scaled-score ranges for 2, 3, 4, 5, 6, and 10 subtests For 6 subtests Reliable scaled-score range is 5 Unusual scaled-score range is 9
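Subtest scatter is simply the difference between the child's highest and lowest scaled scores; the sketch below computes it and compares it with the 6-subtest thresholds cited above for ages 4-0 to 7-7 (reliable range 5, unusual range 9), using hypothetical scores:

```python
# Sketch: subtest scatter for six subtests at ages 4-0 to 7-7, using the
# reliable (5) and unusual (9) scaled-score ranges cited above.

def scatter(scaled_scores, reliable_range=5, unusual_range=9):
    value = max(scaled_scores) - min(scaled_scores)
    return value, value >= reliable_range, value >= unusual_range

scores = [12, 9, 14, 7, 11, 10]          # hypothetical scaled scores
value, reliable, unusual = scatter(scores)
print(f"Scatter = {value}; reliable: {reliable}; unusual: {unusual}")
# Scatter = 7; reliable: True; unusual: False
```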
270
Choosing Between the WPPSI–IV and the WISC–V [1](p. 245)
The WPPSI–IV, because of its lower floor, should be used with three specific groups of children 6‑0 to 7‑7 years of age: Children who may have below-average cognitive ability Children who are English language learners Children with language handicaps
271
Choosing Between the WPPSI–IV and the WISC–V [2](p. 245)
The WISC–V, because of its higher ceiling, should be used with children 6‑0 to 7‑7 years of age who, based on clinical judgment, are suspected to have above-average cognitive ability Either the WPPSI–IV or the WISC–V can be used with children 6‑0 to 7‑7 years of age who, based on clinical judgment, are suspected to have average cognitive ability
272
Administering the WPPSI–IV to Children with Disabilities (pp. 245–246)
See Chapter 1 for general suggestions for administering tests to children with special needs Prior to making any modifications, evaluate the sensory-motor abilities of children with special needs Closely examine how suitable the subtests are for a child with special needs
273
Strengths of WPPSI–IV (pp. 246–247)
Excellent standardization Good overall psychometric properties Useful diagnostic information Inclusion of process scores Good administration procedures Good manuals and interesting test materials Helpful scoring criteria Usefulness for children with some disabilities
274
Limitations of WPPSI–IV [1](p. 247)
Limited breadth of coverage of the FSIQ Failure to provide conversion tables Failure to provide a psychometric basis for requiring a certain number of raw scores of 1 in order to compute the FSIQ Limited range of scores for children who are extremely low or high functioning Variable ranges of subtest scaled scores at ages 4‑0 to 7‑7 Limited criterion validity studies
275
Limitations of WPPSI–IV [2](p. 247)
Possible difficulties in scoring responses Somewhat large practice effects Occasional confusing guidelines
276
Reflection on Intelligence and Childhood
“Just think of the tragedy of teaching children not to doubt.” ― Clarence Darrow
277
Chapter 6 WPPSI–IV Subtests
278
Goals & Objectives (p. 253) Chapter designed to enable you to:
Critically evaluate the 15 WPPSI–IV core, supplemental, and optional subtests Understand the rationales, factor analytic findings, reliability and correlational highlights, and administration and interpretive considerations for the 15 WPPSI–IV subtests
279
Skills Needed to Be Successful on the WPPSI–IV (p. 254)
A child must be able to: Hear See Pay attention Understand directions Retain the directions while solving problems Some subtests also require motor skills Although several subtests have time limits, none provide additional points for speed
280
Scoring WPPSI–IV Items (p. 254)
Important considerations in scoring: Score each item as it is administered Do not discontinue administering a subtest prematurely This is particularly important when you are unsure how to score a response immediately Better to administer more items in a subtest, even though some may not be counted in the final score You do not want to short-change the child by discontinuing the subtest too soon
281
Evaluating and Interpreting a Child’s Performance [1](p. 254)
Consider: Child’s scores and responses Quality of child’s responses Child’s response style, motivation, and effort How child handles frustration Child’s problem-solving approach Child’s fine-motor skills Child’s pattern of successes and failures
282
Evaluating and Interpreting a Child’s Performance [2](p. 254)
Consider: (Cont.) How child handles test materials How child handles tasks of each subtest Responding to difficult items Responding to time limits
283
Block Design [1](pp. 254–259) Core Visual Spatial subtest at all ages of the test Requires reproducing designs with colored blocks Key areas of measurement: Nonverbal reasoning Visual-spatial organization Other areas of measurement: See page 255
284
Block Design [2](pp. 254–259) Other Considerations Fair measure of g
Contributes moderately to the visual spatial factor A reliable subtest Somewhat difficult to administer and score
285
Information [1](pp. 259–262) Core Verbal Comprehension subtest at all ages Requires answering questions about different topics, such as body parts, uses of common objects, and calendar information. Key area of measurement: Long-term memory for factual information Other areas of measurement: See pages 259–260
286
Information [2](pp. 259–262) Other Considerations Good measure of g
Contributes substantially to the verbal comprehension factor A reliable subtest Easy to administer and score
287
Matrix Reasoning [1](pp. 262–264)
Core Fluid Reasoning subtest for ages 4-0 to 7-7 Not administered at ages 2-6 to 3-11 Key area of measurement: Visual-perceptual analogic reasoning ability without a speed component Other areas of measurement: See page 262
288
Matrix Reasoning [2](pp. 262–264)
Other Considerations Fair measure of g Contributes minimally to the fluid reasoning factor A highly reliable subtest Relatively easy to administer and score
289
Bug Search [1](pp. 264–266) Core Processing Speed subtest for ages 4-0 to 7-7 Not administered at ages 2-6 to 3-11 Key area of measurement: Visual-perceptual discrimination and scanning Other areas of measurement: See page 264
290
Bug Search [2](pp. 264–266) Other Considerations Fair measure of g
Contributes substantially to the processing speed factor A reliable subtest Relatively easy to administer and to score
291
Picture Memory [1](pp. 266–268)
Core Working Memory subtest at all ages Requires identifying one or more previously shown objects Key area of measurement: Visual short-term memory Other areas of measurement: See page 267
292
Picture Memory [2](pp. 266–268)
Other Considerations Fair measure of g Contributes either minimally (at ages 2-6 to 3-11) or moderately (at ages 4-0 to 7-7) to the working memory factor A highly reliable subtest Easy to administer and score
293
Similarities [1](pp. 268–271) Core Verbal Comprehension subtest for ages 4-0 to 7-7 Not administered at ages 2-6 to 3-11 Key area of measurement: Verbal concept formation Other areas of measurement: See page 269
294
Similarities [2](pp. 268–271) Other Considerations Good measure of g
Contributes substantially to the verbal comprehension factor A reliable subtest Relatively easy to administer, but some responses may be difficult to score
295
Picture Concepts [1](pp. 271–274)
Supplemental Fluid Reasoning subtest for ages 4-0 to 7-7 Not administered at ages 2-6 to 3-11 Key area of measurement: Abstract, categorical reasoning based on visual-perceptual recognition processes Other areas of measurement: See page 272
296
Picture Concepts [2](pp. 271–274)
Other Considerations Fair measure of g Contributes moderately to the visual spatial factor A reliable subtest Relatively easy to administer and score
297
Cancellation [1](pp. 274–276) Supplemental Processing Speed subtest for ages 4-0 to 7-7 Not administered at ages 2-6 to 3-11 Key areas of measurement: Visual-perceptual recognition Speed of visual processing Other areas of measurement: See page 274
298
Cancellation [2](pp. 274–276) Other Considerations
Fair (but the poorest) measure of g Contributes substantially to the processing speed factor A reliable subtest Relatively easy to administer and score
299
Zoo Locations [1](pp. 276–278) Supplemental Working Memory subtest for all ages of the test Key area of measurement: Short-term visual memory Other areas of measurement: See page 276
300
Zoo Locations [2](pp. 276–278) Other Considerations Fair measure of g
Contributes substantially to the working memory factor A reliable subtest Somewhat difficult to administer but relatively easy to score
301
Object Assembly [1](pp. 278–281)
Core Visual Spatial subtest for ages 2-6 to 3-11 Supplemental subtest for ages 4-0 to 7-7 Requires assembling puzzle pieces to form common objects Key area of measurement: Visual-perceptual organization Other areas of measurement: See page 278
302
Object Assembly [2](pp. 278–281)
Other Considerations Fair measure of g Contributes either substantially (at ages 2-6 to 3-11) or moderately (at ages 4-0 to 7-7) to the visual spatial factor A reliable subtest Somewhat difficult to administer and score
303
Vocabulary [1](pp. 281–284) Optional subtest for ages 4-0 to 7-7
Not administered to children ages 2-6 to 3-11 Key area of measurement: Knowledge of words Other areas of measurement: See page 281
304
Vocabulary [2](pp. 281–284) Other Considerations Good measure of g
Contributes substantially to the verbal comprehension factor A reliable subtest Relatively easy to administer but some responses may be difficult to score
305
Animal Coding [1](pp. 284–286) Optional subtest for ages 4-0 to 7-7
Not administered at ages 2-6 to 3-11 Key area of measurement: Ability to learn an unfamiliar task involving speed of mental operation and psychomotor speed Other areas of measurement: See page 284
306
Animal Coding [2](pp. 284–286) Other Considerations Fair measure of g
Contributes substantially to the processing speed factor A reliable subtest Relatively easy to administer and score
307
Comprehension [1](pp. 286–288)
Optional subtest for ages 4-0 to 7-7 Not administered at ages 2-6 to 3-11 Key area of measurement: Practical social reasoning and judgment in social situations Other areas of measurement: See page 286
308
Comprehension [2](pp. 286–288)
Other Considerations Good measure of g Contributes substantially to the verbal comprehension factor A highly reliable subtest Relatively easy to administer but somewhat difficult to score
309
Receptive Vocabulary [1](pp. 289–291)
Core Verbal Comprehension subtest for ages 2-6 to 3-11 Optional subtest for ages 4-0 to 7-7 Key area of measurement: Word knowledge Other areas of measurement: See page 289
310
Receptive Vocabulary [2](pp. 289–291)
Other Considerations Good measure of g Contributes either substantially (at ages 2-6 to 3-11) or moderately (at ages 4-0 to 7-7) to the verbal comprehension factor A highly reliable subtest Easy to administer and score
311
Picture Naming [1](pp. 291–293)
Supplemental subtest at all ages Key area of measurement: Knowledge of words Other areas of measurement: See page 291
312
Picture Naming [2](pp. 291–293)
Other Considerations Good measure of g Contributes substantially to the verbal comprehension factor A reliable subtest Easy to administer and score
313
Reflection on Intelligence and Childhood
“The soul is healed by being with children.” —Fyodor Dostoyevsky
314
Chapter 7 Interpreting the WPPSI–IV
315
Goals & Objectives (p. 297) Chapter designed to enable you to:
Describe profile analysis for the WPPSI–IV Analyze and evaluate WPPSI–IV scores from multiple perspectives Develop hypotheses about WPPSI–IV scores and responses Report WPPSI–IV findings to parents and others
316
Steps in Interpreting the WPPSI–IV [1](p. 298)
Perform a profile analysis Determine whether the three primary index scores at ages 2-6 to 3-11 or the five primary index scores at ages 4-0 to 7-7 differ significantly from each other Determine whether the subtest scaled scores differ significantly from each other Obtain base rates for differences between the index scores Obtain base rates for differences between some of the subtest scaled scores
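Whether two index scores differ significantly is usually judged against a critical value built from the standard error of the difference, SEdiff = SD x sqrt(2 - rxx - ryy); the sketch below uses hypothetical reliabilities, not values from the WPPSI–IV manual:

```python
# Sketch: testing whether two index scores differ significantly.
# Critical value = z * SE_diff, where SE_diff = SD * sqrt(2 - r_xx - r_yy).
# The reliabilities below are hypothetical, not WPPSI-IV manual values.
import math

def significant_difference(score_a, score_b, r_a, r_b, sd=15.0, z=1.96):
    se_diff = sd * math.sqrt(2.0 - r_a - r_b)
    critical_value = z * se_diff
    observed = abs(score_a - score_b)
    return observed, critical_value, observed >= critical_value

observed, critical, significant = significant_difference(108, 92, r_a=0.93, r_b=0.88)
print(f"Difference = {observed}, critical value = {critical:.1f}, significant = {significant}")
# Difference = 16, critical value = 12.8, significant = True
```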
317
Steps in Interpreting the WPPSI–IV [2](p. 298)
Determine base rates for intersubtest scatter Develop hypotheses and interpretations
318
Full Scale IQ [1](p. 298) At ages 2-6 to 3-11, it includes measures of: Verbal comprehension Visual spatial reasoning Working memory At ages 4-0 to 7-7, it also includes measures of: Fluid reasoning Processing speed
319
Full Scale IQ [2](p. 298) The five subtests that comprise the
Full Scale at ages 2-6 to 3-11 are: Receptive Vocabulary Information Block Design Object Assembly Picture Memory
320
Full Scale IQ [3](p. 298) The six subtests that comprise the Full Scale at ages 4-0 to 7-7 are: Information Similarities Block Design Matrix Reasoning Picture Memory Bug Search
321
Verbal Comprehension Index [1](p. 298)
Measures: Verbal comprehension Application of verbal skills and information to the solution of new problems Ability to process verbal information Retrieval of information from long-term memory Crystallized knowledge Conceptual reasoning ability Language development
322
Verbal Comprehension Index [2](p. 298)
The two subtests that comprise the Verbal Comprehension Index at ages 2-6 to 3-11 are: Receptive Vocabulary Information
323
Verbal Comprehension Index [3](p. 298)
The two subtests that comprise the Verbal Comprehension Index at ages 4-0 to 7-7 are: Information Similarities
324
Visual Spatial Index [1](p. 298)
Measures: Ability to think in visual images and manipulate them with fluency and speed Ability to interpret or organize visually perceived material quickly Nonverbal reasoning Visual-perceptual discrimination Visual spatial reasoning ability
325
Visual Spatial Index [2](p. 298)
The two subtests that comprise the Visual Spatial Index at both age bands are: Block Design Object Assembly
326
Fluid Reasoning Index [1](p. 299)
Measures: Fluid reasoning ability Visual-perceptual reasoning and organization Ability to think in visual images and manipulate them with fluency and relative speed Ability to interpret or organize visually perceived material quickly Nonverbal reasoning Visual-perceptual discrimination
327
Fluid Reasoning Index [2](p. 299)
The two subtests that comprise the Fluid Reasoning Index at ages 4-0 to 7-7 are: Matrix Reasoning Picture Concepts
328
Working Memory Index [1](p. 299)
Measures: Short-term memory Visual processing Working memory Memory span Visual spatial memory Rote memory Immediate visual memory Attention Concentration
329
Working Memory Index [2](p. 299)
The two subtests that comprise the Working Memory Index at both age bands are: Picture Memory Zoo Locations
330
Processing Speed Index [1](p. 299)
Measures: Processing speed Perceptual speed Visual-motor coordination and dexterity Speed of mental operation Scanning ability Psychomotor speed Short-term visual memory Visual-perceptual discrimination Attention and concentration
331
Processing Speed Index [2](p. 299)
The two subtests that comprise the Processing Speed Index at ages 4-0 to 7-7 are: Bug Search Cancellation
332
Vocabulary Acquisition Index [1](p. 299)
Measures: Crystallized knowledge Language development Word knowledge Verbal comprehension Fund of information Long-term memory Perception of meaningful stimuli Visual memory Receptive and expressive language
333
Vocabulary Acquisition Index [2](p. 299)
The two subtests that comprise the Vocabulary Acquisition Index at both age bands are: Receptive Vocabulary Picture Naming
334
Nonverbal Index [1](pp. 299–300)
Measures: Fluid reasoning ability Visual processing Processing speed Short-term memory Visual-perceptual analogic reasoning Speed of mental operation Symbol-associative skills Analysis and synthesis Scanning ability
335
Nonverbal Index [2](pp. 299–300)
Measures: (Cont.) Attention Concentration
336
Nonverbal Index [3](pp. 299–300)
The four subtests that comprise the Nonverbal Index at ages 2-6 to 3-11 are: Block Design Object Assembly Picture Memory Zoo Locations
337
Nonverbal Index [4](pp. 299–300)
The five subtests that comprise the Nonverbal Index at ages 4-0 to 7-7 are: Block Design Matrix Reasoning Picture Concepts Picture Memory Bug Search
338
General Ability Index [1](p. 300)
Measures: Crystallized knowledge Fluid reasoning ability Visual processing Language development Lexical knowledge Visualization Induction Verbal concept formation
339
General Ability Index [2](p. 300)
Measures: (Cont.) Nonverbal reasoning Visual-perceptual discrimination Attention Concentration
340
General Ability Index [3](p. 300)
The four subtests that comprise the General Ability Index at ages 2-6 to 3-11 are: Receptive Vocabulary Information Block Design Object Assembly
341
General Ability Index [4](p. 300)
The four subtests that comprise the General Ability Index at ages 4-0 to 7-7 are: Information Similarities Block Design Matrix Reasoning
342
Cognitive Proficiency Index [1] (p. 300)
Measures: Short-term memory Processing speed Visual processing Working memory Memory span Visualization Visual memory Visual-perceptual discrimination Speed of mental processing
343
Cognitive Proficiency Index [2] (p. 300)
Measures: (Cont.) Scanning ability Attention Concentration
344
Cognitive Proficiency Index [3] (p. 300)
The four subtests that comprise the Cognitive Proficiency Index at ages 4-0 to 7-7 are: Picture Memory Zoo Locations Bug Search Cancellation
345
Profile Analysis [1](pp. 300–301)
Aims of Profile Analysis To look at a child’s unique ability pattern (including strengths and weaknesses), going beyond the information contained in the FSIQ or the index scores To help in formulating teaching strategies, accommodations, and other types of interventions
346
Profile Analysis [2](p. 300)
Cannot reliably be used to arrive at a clinical or psychoeducational diagnosis Results on any one test should never be used as the sole basis for a clinical or psychoeducational diagnosis
347
Profile Analysis [3](p. 301)
Goal of Profile Analysis To generate hypotheses about a child’s abilities, which then need to be verified using other scores and information about the child
348
Profile Analysis [4](p. 301)
Relatively Large Intersubtest Variability May Indicate Special aptitudes or weaknesses Acquired deficits or disease processes Temporary inefficiencies Motivational difficulties Vision or hearing problems Concentration difficulties Rebelliousness
349
Profile Analysis [5](p. 301)
Relatively Large Intersubtest Variability May Indicate (Cont.) Learning disabilities Particular school or home experiences
350
Profile Analysis [6](p. 301)
Scaled Scores 13 to 19 always indicate a strength (84th to 99th percentile rank) 8 to 12 always indicate average ability (25th to 75th percentile rank) 1 to 7 always indicate a weakness (1st to 16th percentile rank)
351
Profile Analysis [7](p. 303)
Base Rates Determining the frequency with which the differences between scores occurred in the normative sample Base rate approach Probability-of-occurrence approach
352
Profile Analysis [8](pp. 303–318)
Methods of Profile Analysis Compare the primary index scores—VCI, VSI, FRI, WMI, and PSI—with each other Compare each primary index score with the mean of the child’s primary index scores and/or the FSIQ, using critical values and base rates Compare each primary index subtest scaled score with the child’s mean scaled score on the primary index subtests and/or the FSIQ subtests, using critical values and base rates
353
Profile Analysis [9](pp. 303–318)
Methods of Profile Analysis (Cont.) Compare sets of individual subtest scaled scores Compare the range of subtest scaled scores with the base rate found in the normative sample Compare the Cancellation Random and Cancellation Structured process scores Compare the GAI and the CPI
354
A Successive Level of Approach to Test Interpretation (pp. 319–320)
The use of a successive-level approach to test interpretation can help you better understand a child’s performance on the WPPSI–IV The six levels provide quantitative and qualitative data and an analysis of both general and specific areas of intellectual functioning (see Figure 7-1, p. 319)
355
Reflection on Intelligence and Childhood
“There is in every child at every stage a new miracle of vigorous unfolding.” — Erik Erikson
356
Chapter 8 Report Writing
357
Goals & Objectives (p. 325) Chapter designed to enable you to:
Understand the purposes of a psychological report Understand the sections of a psychological report Develop appropriate skills for communicating findings and making recommendations in a report Write a psychological report
358
Potential Sources of Report Information [1](p. 326)
Psychological tests Interviews with the child, his or her parents, teachers, and others Questionnaires and rating forms completed by a parent, teacher, and/or evaluator Self-monitoring forms completed by the child Systematic behavioral observations School records Prior psychological or psychiatric reports
359
Potential Sources of Report Information [2](p. 326)
Medical reports Other relevant sources
360
Qualities of a Good Report (p. 326)
A report should be: Well organized Objective Unbiased Based upon all of the assessment data you gathered See p. 326 for specific details.
361
Purposes of a Report [1](pp. 326–331)
To provide accurate and understandable assessment-related information to the referral source and others To serve as a basis for clinical hypotheses, appropriate interventions, and information for program evaluation and research
362
Purposes of a Report [2](pp. 326–331)
To furnish meaningful baseline information For evaluating the child’s progress after interventions have been implemented For changes in the child that have occurred over time To serve as a legal document
363
Formulating the Report [1](p. 331)
Four considerations: Who will be the primary audiences for the report? After reading the report, what new understanding will the readers have? What new action will the readers take? Consider the circumstances under which the assessment took place
364
Formulating the Report [2](p. 331)
Four considerations: (Cont.) Include examples to illustrate or document selected statements you make in the report Make your recommendations with an appreciation of the needs and values of the child, the family, and the extended family; the family’s resources; the child’s ethnic and cultural group; the school; and the community
365
Other Considerations (p. 331)
Subjective Elements in the Report Although you should strive for objectivity and accuracy in writing a report, remember that no report can be completely objective Every report has elements of subjectivity Promptness in Writing the Report Write the report as soon as possible after you complete the assessment to ensure that you record all important details and do not forget any
366
Sections of a Report (pp. 332–339)
Report Title Identifying Information Assessment Instruments Reason for Referral Background Information Observations During Assessment Assessment Results Clinical Impressions Recommendations Summary Signature
367
22 Principles of Report Writing [1](pp. 339–364)
The 22 principles cover: How to organize, interpret, and present the assessment findings Exercises are included to help you apply some of the principles
368
22 Principles of Report Writing [2](pp. 339–364)
Principle 1 (pp. 339–340) Organize the assessment findings by looking for common themes that run through them, integrating the main findings, and adopting an eclectic perspective Principle 2 (pp. 340–341) Include only relevant material in the report; omit potentially damaging material not germane to the evaluation
369
22 Principles of Report Writing [3](pp. 339–364)
Principle 3 (pp. 341–342) Be extremely cautious in making interpretations based on a limited sample of behavior Principle 4 (pp. 342–343) Consider all relevant sources of information about the child as you generate hypotheses and formulate interpretations
370
22 Principles of Report Writing [4](pp. 339–364)
Principle 5 (pp. 343–344) Be definitive in your writing when the findings are clear; be cautious in your writing when the findings are not clear Principle 6 (p. 344) Cite specific behaviors and sources and quote the child directly to enhance the report’s readability
371
22 Principles of Report Writing [5](pp. 339–364)
Principle 7 (p. 344) Consider the FSIQ, in most cases, to be the best estimate of the child’s present level of intellectual functioning Principle 8 (pp. 344–345) Interpret the meaning and implications of a child’s scores, rather than simply citing test names and scores
372
22 Principles of Report Writing [6](pp. 339–364)
Principle 9 (p. 345) Obtain the classification of FSIQs and other test scores from the numerical ranges given in the test manuals Principle 10 (pp. 345–346) Use percentile ranks whenever possible to describe a child’s scores
373
22 Principles of Report Writing [7](pp. 339–364)
Principle 11 (p. 346) Provide clear descriptions and interpretations of abilities measured by the subtests when appropriate Principle 12 (pp. 347–348) Relate inferences based on subtest or index scores to the cognitive processes measured by them; use caution in making generalizations
374
22 Principles of Report Writing [8](pp. 339–364)
Principle 13 (pp. 348–349) Describe the profile of scores clearly and unambiguously Principle 14 (pp. 349–350) Make recommendations carefully, using all available sources of information
375
22 Principles of Report Writing [9](pp. 339–364)
Principle 15 (p. 350) Provide justification for each classification or diagnosis and address all relevant diagnostic criteria explicitly Principle 16 (pp. 350–353) Communicate clearly, and do not include unnecessary technical material in the report
376
22 Principles of Report Writing [10](pp. 339–364)
Principle 17 (pp. 353–354) Describe and use statistical concepts appropriately; make sure to check all calculations carefully and to report the reliability and validity of the test results accurately Principle 18 (p. 354) Avoid biased language
377
22 Principles of Report Writing [11](pp. 339–364)
Principle 19 (pp. 354–356) Write a report that is concise but adequate Principle 20 (pp. 356–357) Attend carefully to grammar and writing style
378
22 Principles of Report Writing [12](pp. 339–364)
Principle 21 (pp. 357–364) Develop strategies to improve your writing, such as using an outline, revising your first draft, using word-processor editing tools, and proofreading your final report Principle 22 (p. 364) Maintain security of confidential information Treat confidential electronic files as carefully as you would treat confidential paper files
379
Checklist (p. 363) See Table 8-3 (p. 363) for a checklist for evaluating accuracy, quality, and completeness of the first draft of your assessment report
380
A Good Report (p. 364) Is understandable and interesting to read
Presents information in a logical manner Interprets test results accurately and explains them clearly Answers specific referral questions Provides recommendations that are realistic and feasible Provides a useful summary Is concise yet thorough
381
Reflections on Intelligence and Childhood
“You might be poor, your shoes might be broken, but your mind is a palace.” —Frank McCourt
382
Reflections on Development
The Little Boy and the Old Man Said the little boy, "Sometimes I drop my spoon." Said the old man, "I do that too." The little boy whispered, "I wet my pants." "I do that too," laughed the little old man. Said the little boy, "I often cry." The old man nodded, "So do I." "But worst of all," said the boy, "it seems Grown-ups don't pay attention to me." And he felt the warmth of a wrinkled old hand. "I know what you mean," said the little old man. ― Shel Silverstein