
Reading Assessment: Still Time for a Change. P. David Pearson, UC Berkeley Professor and Former Dean. Slides available at www.scienceandliteracy.org.


1 Reading Assessment: Still Time for a Change. P. David Pearson, UC Berkeley Professor and Former Dean.

2 Why did I pick such a boring topic? I’m a professor! Who needs fun? The consequences are too grave. I have a perverse standard of fun.

3 A set of contrasts between cognitively oriented views of reading and prevailing practices in assessing reading, circa 1986.

New views of the reading process tell us that... / Yet when we assess reading comprehension, we...
- Prior knowledge is an important determinant of reading comprehension. / Mask any relationship between prior knowledge and reading comprehension by using lots of short passages on lots of topics.
- A complete story or text has structural and topical integrity. / Use short texts that seldom approximate the structural and topical integrity of an authentic text.
- Inference is an essential part of the process of comprehending units as small as sentences. / Rely on literal comprehension test items.
- The diversity in prior knowledge across individuals, as well as the varied causal relations in human experiences, invites many possible inferences to fit a text or question. / Use multiple-choice items with only one correct answer, even when many of the responses might, under certain conditions, be plausible.
- The ability to vary reading strategies to fit the text and the situation is one hallmark of an expert reader. / Seldom assess how and when students vary the strategies they use during normal reading, studying, or when the going gets tough.
- The ability to synthesize information from various parts of the text and different texts is a hallmark of an expert reader. / Rarely go beyond finding the main idea of a paragraph or passage.
- The ability to ask good questions of text, as well as to answer them, is a hallmark of an expert reader. / Seldom ask students to create or select questions about a selection they may have just read.
- All aspects of a reader’s experience, including habits that arise from school and home, influence reading comprehension. / Rarely view information on reading habits and attitudes as being as important as information about performance.
- Reading involves the orchestration of many skills that complement one another in a variety of ways. / Use tests that fragment reading into isolated skills and report performance on each.
- Skilled readers are fluent; their word identification is sufficiently automatic to allow most cognitive resources to be used for comprehension. / Rarely consider fluency as an index of skilled reading.
- Learning from text involves the restructuring, application, and flexible use of knowledge in new situations. / Often ask readers to respond to the text’s declarative knowledge rather than to apply it to near and far transfer tasks.

Valencia and Pearson (1987). Reading Assessment: Time for a Change. The Reading Teacher.

4 New views of the reading process tell us that... / Yet when we assess reading comprehension, we...
- Prior knowledge is an important determinant of reading comprehension. / Mask any relationship between prior knowledge and reading comprehension by using lots of short passages on lots of topics.
- A complete story or text has structural and topical integrity. / Use short texts that seldom approximate the structural and topical integrity of an authentic text.
- Inference is essential for comprehending units as small as sentences. / Rely on literal comprehension test items.

5 New views of the reading process tell us that... / Yet when we assess reading comprehension, we...
- The diversity in prior knowledge across individuals, as well as the varied causal relations in human experiences, invites many possible inferences to fit a text or question. / Use multiple-choice items with only one correct answer, even when many of the responses might, under certain conditions, be plausible.
- The ability to synthesize information from various parts of the text and different texts is a hallmark of an expert reader. / Rarely go beyond finding the main idea of a paragraph or passage.
- The ability to vary reading strategies to fit the text and the situation is one hallmark of an expert reader. / Seldom assess how and when students vary the strategies they use during normal reading, studying, or when the going gets tough.

6 What is thinking? “You do it in your head, without a pencil.” (Alexandra, age 4) “You shouldn’t do it in the dark. It’s too scary.” (Thomas, age 5)

7 What is Thinking? “Thinking is when you’re doing math and getting the answers right.” (Sissy, age 5) And in response: “NO! You do the thinking when you DON’T know the answer.” (Alex, age 5)

8 What is Thinking? “It’s very, very easy. The way you do it is just close your eyes and look inside your head.” (Robert, age 4)

9 What is Thinking? “You think before you cross the street!” What do you think about? “You think about what you would look like smashed up!” (Leon, age 5)

10 What is Thinking? “You have to think in swimming class.” About what? “About don’t drink the water because maybe someone peed in it…and don’t drown!”

11 New views of the reading process tell us that... / Yet when we assess reading comprehension, we...
- The ability to ask good questions of text, as well as to answer them, is a hallmark of an expert reader. / Seldom ask students to create or select questions about a selection they may have just read.
- All aspects of a reader’s experience, including habits that arise from school and home, influence reading comprehension. / Rarely view information on reading habits and attitudes as being as important as information about performance.
- Reading involves the orchestration of many skills that complement one another in a variety of ways. / Use tests that fragment reading into isolated skills and report performance on each.

12 New views of the reading process tell us that... / Yet when we assess reading comprehension, we...
- Skilled readers are fluent; their word identification is sufficiently automatic to allow most cognitive resources to be used for comprehension. / Rarely consider fluency as an index of skilled reading.
- Learning from text involves the restructuring, application, and flexible use of knowledge in new situations. / Often ask readers to respond to the text’s declarative knowledge rather than to apply it to near and far transfer tasks.

13 Why Did We Take This Stance? We need a little mini-history of assessment to understand our motives.

14 The Scene in the US in the 1970s and early 1980s: behavioral objectives; mastery learning; criterion-referenced assessments; curriculum-embedded assessments; minimal competency tests (New Jersey); statewide assessments (Michigan & Minnesota).

15 Historical relationships between instruction and assessment: the 1970s skills-management mentality. Teach a skill, assess it for mastery, reteach it if necessary, and then go on to the next skill. Foundation: Benjamin Bloom’s ideas of mastery learning. [Diagram: Skill 1: Teach → Assess → Conclude; Skill 2: Teach → Assess → Conclude]

16 The 1970s, cont. And we taught each of these skills until we had covered the entire curriculum for a grade level. [Diagram: Skills 1 through 6, each cycled through Teach → Assess → Conclude]

17 Dangers in the Mismatch We Saw in 1987: A false sense of security. Instructional insensitivity to progress on new curricula. Accountability will do us in and force us to teach to the tests and all the bits and pieces.

18 Pearson’s First Law of Assessment: The finer the grain size at which we monitor a process like reading and writing, the greater the likelihood that we will end up teaching and testing bits and pieces rather than global processes like comprehension and composition.

19 The ideal. The best possible assessment: teachers observe and interact with students as they read authentic texts for genuine purposes; they evaluate the way in which the students construct meaning; and they intervene to provide support or suggestions when the students appear to have difficulty.

20 Pearson’s Second Law of Assessment An assessment tool is valued to the degree that it can approximate the good judgment of a professional teacher!

21 A new conceptualization of the goal. Feature by level of decision-making (beyond school; school; classroom; individual):
- Accuracy: IRI or unit test or NRT; IRI or unit test; IRI
- Fluency: IRI
- Word meaning: norm-referenced unit or NRT; unit; unit assessment
- Comprehension: NRT; IRI or unit activities
- Critique: performance; discussion
- Response: essay; discussion

22 A 1987 Agenda for the Future

23 Pearson’s Third Law of Assessment When we ask an assessment to serve a purpose for which it was not designed, it is likely to crumble under the pressure, leading to invalid decisions and detrimental consequences.

24 Early 1990s in the USA: standards-based reform; state initiatives; the IASA model; trading flexibility for accountability. The move from being accountable for the means and leaving the ends up for grabs (doctor or lawyer model) TO being accountable for the ends and leaving the means up for grabs (carpenter or product model).

25 Mid 1990s Developments Assessment got situated within the standards movement Content Standards: Know and be able to do? Performance Standards: What counts? Opportunity to Learn Standards: Quid pro quo?

26 Standards-Based Reform: The Initial Theory of Action. Standards, assessment, and accountability yield clear expectations and motivation, which yield higher student learning. A la Tucker and Resnick in the early 1990s.

27 Expanded Theory of Action. Standards, assessment, and accountability yield clear expectations and motivation; instruction and professional development are added to the chain; together they yield higher student learning. A la Elmore and Resnick in the late 1990s.

28 The Golden Years of the 90s? A flying start in the late 1980s and early 1990s. International activity in Europe, Down Under, North America: developmental rubrics; performance tasks; New Standards; CLAS; portfolios of various sorts (storage bins; showcase: best work; compliance: Walden, NYC); increased use of constructed-response items in NRTs.

29 Late 1980s/early 1990s: Portfolios and Performance Assessments Make Assessment Look Like Instruction. We engage in instructional activities, from which we collect evidence, which permits us to draw conclusions about student growth or accomplishment on several dimensions (standards) of interest. [Diagram: activities → evidence → conclusions on standards 1-n]

30 The complexity of modern assessment practices: one to many. Any given activity may offer evidence for many standards, e.g., responding to a story. [Diagram: Activity X → Standards 1 through 5]

31 The complexity of performance assessment practices: many to one. For any given standard, there are many activities from which we could gather relevant evidence about growth and accomplishment, e.g., reads fluently. [Diagram: Activities 1 through 5 → Standard X]

32 The complexity of portfolio assessment practices: many to many. Any given artifact/activity can provide evidence for many standards, and any given standard can be indexed by many different artifacts/activities. [Diagram: Activities 1 through 5 ↔ Standards 1 through 5]
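The many-to-many relationship between portfolio activities and standards can be sketched as a small data model. This is my illustration, not part of the talk; the activity and standard names below are invented examples.

```python
# Sketch (illustrative only): model the many-to-many mapping between
# portfolio activities and standards with a pair of inverted indexes.
from collections import defaultdict

# (activity, standard) pairs of evidence a teacher has collected.
# All names here are hypothetical examples, not from the talk.
evidence = [
    ("responding to a story", "reads fluently"),
    ("responding to a story", "makes inferences"),
    ("reading log", "reads fluently"),
    ("book talk", "asks good questions"),
    ("book talk", "reads fluently"),
]

by_activity = defaultdict(set)   # one activity -> many standards
by_standard = defaultdict(set)   # one standard -> many activities
for activity, standard in evidence:
    by_activity[activity].add(standard)
    by_standard[standard].add(activity)

# One artifact speaks to several standards...
print(sorted(by_activity["responding to a story"]))
# ...and one standard is indexed by several artifacts.
print(sorted(by_standard["reads fluently"]))
```

The two indexes make both directions of the slide's diagram cheap to query: which standards an artifact supports, and which artifacts bear on a standard.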

33 The perils of performance assessment: or maybe those multiple-choice assessments aren’t so bad after all... "Thunder is a rich source of loudness." "Nitrogen is not found in Ireland because it is not found in a free state."

34 The perils of performance assessment. "Water is composed of two gins, Oxygin and Hydrogin. Oxygin is pure gin. Hydrogin is gin and water." "The tides are a fight between the Earth and moon. All water tends towards the moon, because there is no water in the moon, and nature abhors a vacuum. I forget where the sun joins in this fight."

35 The perils of performance assessment. "Germinate: To become a naturalized German." "Vacumm: A large, empty space where the pope lives." "Momentum is something you give a person when they go away."

36 The perils of performance assessment. "The cause of perfume disappearing is evaporation. Evaporation gets blamed for a lot of things people forget to put the top on." "Mushrooms always grow in damp places which is why they look like umbrellas." "Genetics explains why you look like your father, and if you don't, why you should."

37 The perils of performance assessment. "When you breath, you inspire. When you do not breath, you expire."

38 Post 1996: The Demise of Performance Assessment. A definite retreat from performance-based assessment as a wide-scale tool: psychometric issues; cost issues; labor issues; political issues.

39 The Remains… Still alive inside classrooms and schools. Hybrid assessments based on the NAEP model: multiple choice; short answer; extended response. The persistence of standards-based reform.

40 No Child Left Behind: Accountability in Spades. Every grade level reporting. Census assessment rather than sampling (everybody takes the same test). Disaggregated reporting by income, exceptionality, language, and ethnicity.

41 NCLB, continued. Assessments for varied purposes: placement; progress monitoring; diagnosis; outcomes/program evaluation. Scientifically based curriculum too.

42 There is good reason to worry about disaggregation. [Chart: achievement, low to high, for School 1 and School 2]

43 Disaggregation and masking: Simpson’s Paradox? [Chart: for School 1 and School 2, bars for subgroups A and B; height of bar = average achievement; width = number of students. School 1 has a large group A and a small group B; School 2 has a small group A and a large group B.]
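The masking risk can be made concrete with made-up numbers (mine, not data from the talk): one school can outscore the other in every subgroup yet still post the lower aggregate average, purely because the subgroup sizes differ. That is Simpson's Paradox.

```python
# A made-up numeric illustration of Simpson's Paradox in school reporting.
# School 2 beats School 1 in BOTH subgroups, yet its aggregate mean is lower,
# because School 2's enrollment is dominated by the lower-scoring subgroup.

schools = {
    # subgroup: (number of students, mean achievement) -- invented values
    "School 1": {"A": (90, 80.0), "B": (10, 40.0)},
    "School 2": {"A": (10, 82.0), "B": (90, 45.0)},
}

def aggregate_mean(groups):
    """Enrollment-weighted mean achievement across subgroups."""
    total = sum(n for n, _ in groups.values())
    return sum(n * mean for n, mean in groups.values()) / total

for name, groups in schools.items():
    print(name, round(aggregate_mean(groups), 1))
# Subgroup comparisons: 82 > 80 and 45 > 40, so School 2 "wins" each group,
# yet the aggregates are 76.0 for School 1 and 48.7 for School 2.
```

Which school looks better therefore depends entirely on whether you report the aggregate or the disaggregated results, which is exactly why the next slide calls it "damned if we do and damned if we don't."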

44 Disaggregation: Damned if we do and damned if we don’t. Don’t report: render certain groups invisible. Do report: blame the victim (they are the group that did not meet the standard).

45 Pearson’s Fourth Law of Assessment Disaggregation is the right approach to reporting results. Just be careful where the accountability falls.

46 Pearson’s Fourth Law: A Corollary Accountability, in general, falls to the lowest level of reporting in the system.

47 Assessment can be the friend or the enemy of teaching and learning. The Dark Side: the curious case of DIBELS… and other benchmark assessments.

48 A word about benchmark assessments… The world is filled with assessments that provide useful information but are not worth teaching to. They are good thermometers or dipsticks, not good curriculum.

49 The ultimate assessment dilemma… What do we do with all of these timed tests of fine-grained skills: words correct per minute; words recalled per minute; letter sounds named per minute; phonemes identified per minute? Scott Paris: constrained versus unconstrained skills. Pearson: mastery skills versus growth constructs.

50 Why they are so seductive. They mirror at least some of the components of the NRP report. They correlate with lots of other assessments that have the look and feel of real reading. They take advantage of the well-documented finding that speed metrics are almost always correlated with ability, especially verbal ability. Example: alphabet knowledge. 90% of the kids might be 90% accurate, but they will be normally distributed in terms of letter names per minute (LNPM).

51 How to get a high correlation between a mastered skill and something else: letter name accuracy (narrow distribution) versus letter name fluency (LNPM, wide distribution). The wider the distribution of scores, the greater the likelihood of obtaining a high correlation.
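The distribution-width point can be shown with a toy simulation (my illustration, with invented parameters, not Pearson's data): when nearly every child is at ceiling on letter-name accuracy, the restricted score range attenuates its correlation with a later reading criterion, while a speeded version of the same skill (letter names per minute) spreads scores out and correlates much better.

```python
# Toy simulation: ceiling-constrained accuracy vs. unconstrained fluency.
# All coefficients are invented for illustration.
import random
import statistics

random.seed(1)

def pearson_r(xs, ys):
    """Pearson product-moment correlation."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

ability = [random.gauss(0, 1) for _ in range(500)]            # latent verbal ability
accuracy = [min(100.0, 98 + 3 * a + random.gauss(0, 2))       # % correct, capped at 100
            for a in ability]                                 # -> most kids at ceiling
fluency = [40 + 12 * a + random.gauss(0, 6) for a in ability]     # LNPM, no ceiling
criterion = [50 + 10 * a + random.gauss(0, 5) for a in ability]   # later reading score

r_acc = pearson_r(accuracy, criterion)
r_flu = pearson_r(fluency, criterion)
print(f"accuracy vs criterion: r = {r_acc:.2f}")
print(f"fluency  vs criterion: r = {r_flu:.2f}")
# The speeded measure wins the correlation contest because its distribution
# is wider, not because naming speed is itself a goal of instruction.
```

This is the seduction the slides describe: the fluency score "validates" well against other measures even when the underlying skill is already mastered, which is a statistical artifact of range, not a curricular virtue.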

52 Face validity problem: What virtue is there in doing things faster (naming letters, sounds, words, ideas)? What would you do differently if you knew that Susie was faster than Ted at naming X, Y, or Z?

53 Why I fear the use of these tests

54 They meet only one of the tests of validity: criterion-related validity. They correlate with other measures given at the same time (concurrent validity) and predict scores on other reading assessments (predictive validity).

55 They fail the test of curricular or face validity. They do not, on the face of it, look like what we are teaching, especially the speeded part. Unless, of course, we change instruction to match the test.

56 They really fail the test of consequential validity: weekly timed trials become the instruction. This confuses means and ends; proxies don’t make good goals.

57 The Achilles Heel: Consequential Validity. Give DIBELS → use the results to craft instruction → give DIBELS again → give a comprehension test: the emperor has no clothes.

58 The bottom line on so many of these tests: Pearson’s Third Law again. New bumper sticker: Never send a test out to do a curriculum’s job!

59 The dark side of alignment: the transfer problem. I agree about the importance of curriculum-based assessment and situated learning, BUT: We do expect what you learn in one context to assist you in others. In our heart of hearts we do NOT believe that kids learn ONLY what you teach, OR that only what is tested is what should get learned (and taught). Note our strong faith in the idea of application.

60 How do we test for transfer? A continuum of cognitive distance. An example: learn about the structure of texts/knowledge about insect societies (bees, ants, termites). New passages: paper wasps; a human society; a biome. How far will the learning travel? Our problem today: THIS IDEA OF TRANSFER IS NOT EVEN ON OUR CURRENT RADAR SCREEN! And it ought to be!

61 Domain representation. If we teach to the standards and the assessments, will we guarantee that all important aspects of the curriculum are covered? Linn and Shepard study: improvements on a narrow assessment do not transfer to other assessments. Shepard et al.: in high-stakes districts, high performance on consequential assessments comes at a price...

62 Linn and Shepard’s work... [Chart: scores by year on a new standardized test versus an old standardized test]

63 Shepard et al.’s work. [Chart: high-stakes schools versus low-stakes schools. ST = consequential standardized assessment; AA = more authentic assessment of the same skill domain.] Note the consequences of high stakes on alternative assessments.

64 Key Concept: Haladyna’s Test Score Pollution: a rise or fall in a score on a test without an accompanying rise or fall in the cognitive or affective outcome allegedly measured by the test.

65 Aligning everything to the standards: a model worth rejecting. Standards → assessment → instruction. This model is likely to shape the instruction too narrowly and lead to test score pollution.

66 A better way of thinking about the link between standards, instruction, and assessment. Standards (how we operationalize our values about teaching and learning) guide the development of both teaching and learning activities and assessment activities. This relationship can operate at the regional or local level. The logic of lots of good reform projects!

67 Pearson’s Fifth Law of Assessment Alignment is a double-edged sword. If there must be alignment, lead with the instruction and let the assessment follow.

68 Pearson’s Sixth Law: High stakes will corrupt any assessment, no matter how virtuous or pure in intent.

69 Corollary to Pearson’s Fifth and Sixth Laws: The worst possible combination is high stakes and low challenge.

70 So how did we do in responding to the challenges from Valencia & Pearson? (Issue / Grade / Solution)
- Prior knowledge / D / Choice of passages
- Authentic text / B+ / Things are lots better on lots of comprehension assessments
- Inference / B / Depends on the test
- Diversity in knowledge means diversity in response / D / Constructed response and multiple correct answers or graded answers
- Flexible use of strategies / C / Hard to assess; easy to coach; I’d abandon except for diagnostic interviews
- Synthesizing information is paramount / D / Still too much emphasis on details

71 So how did we do in responding to the challenges from Valencia & Pearson? (Issue / Grade / Solution)
- Asking questions as an index of comprehension / D / No progress except in informal classroom assessment
- Measuring habits, attitudes, and dispositions / C / Some reasonable things out there, but no teeth
- Orchestrating many skills / D / Too many mastery skills; not enough growth skills
- Fluency / D / Made a fetish out of it
- Transfer and application / D / Limited to a few situations
- Overall grade / D / Lots of work to do

72 Where should we be headed? So, what makes sense for a district or school? Develop an educational improvement system.

73 Elements of an Educational Improvement System. Standards, yes. Assessments, yes: outcome assessments for program evaluation; benchmark assessments for monitoring individual progress; “closer look” diagnostic assessments for determining individual student emphases. Reporting system, yes, as long as we are prepared to live with the dilemmas of disaggregation. Alignment, but of a different sort.

74 Outcome assessments: Drop in out of the sky. Curriculum independent. Assess reading in its most global aspects. Growth constructs, NOT mastery constructs. Could be some sort of standardized assessment.

75 A plan for early reading benchmark assessments: Every so often, give four benchmark assessments.

76 Benchmarks for Intermediate and Secondary: comprehend, deconstruct (what do authors do, and why), compose. Narratives: response to literature; author’s craft; creative writing. Information genres: summaries, charts, key ideas; genre (form follows function); writing from sources to convey ideas.

77 Closer Look Assessments. There is no sin in examining the infrastructure of reading; we really do need to know which of those pieces kids have and have not mastered. The question is what to do about them: teach to and practice the weak bits; rely on strengths to bootstrap the weaknesses; or just read more “just right” material.

78 The flaw in teaching to weaknesses: the Basic Skills Conspiracy of Good Intentions: first you gotta get the words right and the facts straight before you can do the what-ifs and I-wonder-whats.

79 Monitoring Conditions of Instruction. Collect data on curriculum and instructional practices: we need clear data on the enacted curriculum and instructional practices in order to link them as precisely as possible to achievement. Use the data for program improvement and to design professional development.

80 Return to the hard work on assessment: the unfinished business from the 1990s. Encouraged by recent funding of new-century assessments. Could be some good coming out of our reading-for-understanding assessment grants in the US. Possibilities in the Australian work: NAPLAN? Tests that take the high road (tests worth teaching to): focus on making and monitoring meaning; on the role of reading in knowledge building and the acquisition of disciplinary knowledge; on critical reasoning and problem solving; on representation of self.

81 Where Could We Be Headed: A Near-Term Research Agenda. The development of more trustworthy, more useful curriculum-based assessments: expanding the logic of the Informal Reading Inventory; getting comprehension assessment right; computerized assessments (yes, but no time today).

82 Expanding the logic of the IRI. Benchmark books model a la Reading Recovery. Indices of: level of text one can read independently; accuracy (including error patterns); fluency; comprehension. Not one, not two, not three, but many, many conceptually and psychometrically comparable passages at every level of text challenge.

83 Comprehension Assessment. Our models for external assessment, modeled after some of the better wide-scale assessments, are OK. We desperately need a school/classroom tool that does for comprehension what running records/benchmark books have done for oral reading accuracy and fluency.

84 Disciplinary Grounding We’re much better off if we ground our comprehension assessments in the inquiry and knowledge traditions of the disciplines rather than to

85 Pearson’s (bet on a) Seventh Law of Assessment: Comprehension assessment begins and ends within the knowledge traditions and inquiry processes of each discipline.

86 A Corollary to Pearson’s (bet on a) Seventh Law: Summative (big, external) assessments of reading comprehension will be better if they begin as formative (smaller, internal) assessments of reading comprehension within the knowledge traditions and inquiry processes of each discipline.

87 My bottom line. Tests that are instructionally sensitive, psychometrically sound, and trustworthy. No decision of consequence should be based upon a single indicator. Tests are a means to an end.

88 To reduce it to a single idea: six, maybe seven laws; two, maybe three corollaries; but only one thing truly worth remembering: Never send a test out to do a curriculum’s job!

89 Coda in Stuart McNaughton’s Spirit A new bumper sticker with a tinge of optimism. Tests in support of teaching and learning.


91 Computerized Assessment. With advances in voice recognition, we are close to being able to teach computers to recognize and score students’ oral responses. Applications: listen to oral reading of benchmark passages and conduct a first-level diagnosis (thus eliminating a key barrier, time, to more widespread use of this important diagnostic tool).

92 Computerized Assessment in Early Literacy: more applications of voice recognition. Phonemic awareness tasks; word reading tasks; phonics tests (both real words and synthetic words). Comprehension assessment is still a way down the road because of the interpretive problem: the computer has to both listen to and understand the response. BARLA: Bay Area Reading and Listening Assessment.

