Norberta P. Sabado
Rationale and Objectives

Assessment of student achievement is changing for many reasons. Changes in the skills and knowledge needed for success in today's workplace, in our understanding of how students learn, and in the relationship between assessment and instruction are changing our learning goals for students. In the global economy of this Information Age, students face a world that will demand new knowledge, skills, and competencies. Their future professional world will require more than an understanding of the basics; they will also need to develop the ability to access, interpret, analyze, and use information to make decisions; to think critically and make inferences; to work collaboratively in teams; and to communicate and present ideas effectively. Helping students develop these skills will require more than written tests; it will require new approaches to assessment. It is in this context that assessment of student learning has become the centrepiece of many educational improvement efforts. Assessment reform is viewed as a means of setting more appropriate learning goals for students, and thereby improving instruction and instructional materials to meet these goals.
In the light of these changes and the need for assessment reform, this brief aims to provide participants with the context and motivation to:
1. Develop a deeper understanding and appreciation of the vital connections of assessment to teaching and learning;
2. Take initiatives towards designing more authentic performance-based assessments and using rubrics as scoring devices for organizing and interpreting data gathered from observations of student performance; and
3. Actively engage in critical reflection on essential questions that teachers must address in their day-to-day instructional decision-making, such as the following:
- How do we know what students have learned?
- How should we plan for assessment?
- Which skills and competencies must be assessed?
- How do we document competency?
- How should we score performance-based assessments and report the results?
- How do we use information gathered from assessment to guide us in our instructional decisions?
The Role of Assessment and Evaluation in the Teaching-Learning Cycle

Assessment is often referred to as "the eyesight of instruction". In some educational literature, assessment and evaluation have been used interchangeably; however, a clear distinction should be made between the two.
Assessment is the collection of information about students' learning, the gathering of evidence of what students know and can do. From a teacher's perspective, to assess is to compare a student's performance with a set of criteria to see where it stands on a scale. From a student's perspective, assessment provides the opportunity to improve and to reassess his/her position at frequent intervals. Teacher feedback is essential during this process. Evaluation is the process of making a value judgement on the results of assessment. It is often an accumulation of information that gives a specific appraisal based on a set of criteria. An evaluation of student work should reflect their most consistent and recent efforts.
There are three basic purposes of assessment, namely: (1) to provide feedback on instruction to the teacher; (2) to provide feedback on learning to the students; and (3) to grade and report students' progress. With all these varied purposes of assessment, it is important to note that any meaningful form of assessment must be based on a vision for learning. Kulieke et al. (1990) explored a new definition of learning based on cognitive, philosophical, and multicultural perspectives. These perspectives suggest that: Meaningful learning occurs when a learner has a knowledge base that can be used with fluency to make sense of the world, solve problems, and make decisions. Learners need to be self-determined, feel capable, and continually strive to acquire and use the tools they have to learn. They need to be strategic learners who have a repertoire of effective strategies for their own learning. Finally, they need to be empathetic learners who view themselves and the world from perspectives other than their own.
Alternative, Authentic, or Performance-Based Assessment?

Many of the theoretical assumptions on which contemporary testing and assessment are based rely primarily on behaviourist views of cognition and development. Over the past two decades, however, educators have come to realize that new, alternative ways of thinking about learning and assessing learning are needed. Consequently, the terms alternative assessment, authentic assessment, and performance-based assessment have been popularized. These three terms are sometimes used interchangeably to describe the same type of assessment task. Herman, Aschbacher, and Winters (1992) contend that the three terms are used synonymously to mean "variants of performance assessments that require students to generate rather than choose a response." Other authors, however, draw distinctions among the three terms:
Alternative assessment is used to refer, in general, to all forms of assessment that are alternatives to traditional testing; in other words, alternatives to conventional ways of monitoring students' language progress and performance. Alternative assessment is an ongoing process involving the student and teacher in making judgements about the student's progress in language using non-conventional strategies. Authentic assessment is a concept promoted by Grant Wiggins (1989, cited in Lund, 1997) whereby students are engaged in applying skills and knowledge to solve "real world" problems, giving the task a sense of authenticity. Performance-based assessment is used to refer to all forms of assessment that require students to demonstrate their knowledge, skills, and strategies by creating a response or a product (Rudner & Boston, 1994; Wiggins, 1989). It is a systematic way of evaluating skill outcomes that cannot be adequately measured by the typical objective or essay test. It requires judgement of the effectiveness of a procedure, of a product resulting from the performance, or of both.
While not all performance-based assessment may be authentic in the truest sense, Lund (1997) mentions the following characteristics that authentic performance assessments have in common:
1. They require the presentation of worthwhile and/or meaningful tasks that are designed to be representative of performance in the field.
2. They emphasize "higher level" thinking and more complex learning.
3. The criteria used in authentic assessment are articulated in advance, so that students know how they will be evaluated.
4. Assessments are so firmly embedded in the curriculum that they are practically indistinguishable from instruction. Assessment can be continuous/formative rather than summative.
5. Authentic assessment changes the role of the teacher from adversary to ally. The root of the word "assessment" means "to sit with."
6. Students are expected to present their work publicly. This lets students know that their work is significant and important.
The Assessment Design Process

Herman, Aschbacher, and Winters (1992) suggest ten steps as part of the assessment design process:

1. Clearly state the purpose of the assessment, and do not expect it to meet purposes for which it was not designed. Studies show that students perform better when they know the goal, see models, and know how their performance compares to the standard.

2. Clearly define what it is you are assessing (the achievement target). What important cognitive, social, affective, and metacognitive skills do I want my students to develop? What types of problems do I want my students to be able to solve? What concepts and principles do I want my students to be able to apply? Prioritize these outcomes, then list your final set of skills, processes, and dispositions (by subject area, if desired).
3. Match the assessment method to the achievement purpose and target defined in step 2.

4. Specify illustrative tasks that require students to demonstrate certain skills and accomplishments. Match the assessment tasks to the intended learning outcomes. Develop a task specification checklist and describe the outcomes to be measured. Specify the assessment administration process (group/individual roles, materials/equipment, administration instructions, help allowed, time allowed).
5. Specify the criteria and standards for judging student performance on the tasks selected in step 4. Be as specific as possible, and provide samples of student work that exemplify each of the standards.

6. Develop a reliable rating process that allows different raters at different points in time to obtain the same, or nearly the same, results, or allows a single teacher in a classroom to assess each student using the same criteria.

7. Avoid the pitfalls that threaten reliability and validity and can lead to mismeasurement of students. Assessors should ensure (a) adequate sampling of the content domain, (b) absence of bias or subjective scoring, (c) reasonable uniformity in administering assessments, (d) minimal effect of extraneous factors, (e) a suitable environment for assessment, and (f) awareness of and compensation for temporary factors affecting the student.

8. Collect evidence/data showing that the assessment is reliable (yields consistent results) and valid (yields useful data for the decisions being made). With performance assessments, reliability and validity can be demonstrated through inter-rater agreement on scoring and through evidence that students who perform well on the assessment also perform well on related items or tasks.
9. Ensure "consequential validity." That is, the assessment should have a maximum of positive effects and a minimum of negative ones.

10. Use assessment results to refine the assessment process and improve curriculum and instruction; provide feedback to students, school administrators, parents, and the community.
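Step 8 above calls for demonstrating reliability through inter-rater agreement. As an illustration only (not part of the original brief), the sketch below computes two common agreement measures, simple percent agreement and Cohen's kappa, for two hypothetical raters scoring the same ten student performances on a four-level rubric (1 = lowest, 4 = highest); the rater names and scores are invented.

```python
# Illustrative sketch: inter-rater reliability for two raters scoring
# the same performances on a 4-level rubric (1 = lowest ... 4 = highest).
from collections import Counter

def percent_agreement(rater_a, rater_b):
    """Proportion of performances given the same level by both raters."""
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a)

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(rater_a)
    po = percent_agreement(rater_a, rater_b)  # observed agreement
    counts_a = Counter(rater_a)
    counts_b = Counter(rater_b)
    # Expected chance agreement from each rater's marginal frequencies
    pe = sum(counts_a[level] / n * counts_b[level] / n
             for level in set(rater_a) | set(rater_b))
    return (po - pe) / (1 - pe)

# Hypothetical rubric levels assigned by two raters to ten projects
rater_1 = [4, 3, 3, 2, 4, 1, 3, 2, 4, 3]
rater_2 = [4, 3, 2, 2, 4, 1, 3, 3, 4, 3]
print(round(percent_agreement(rater_1, rater_2), 2))  # 0.8
print(round(cohens_kappa(rater_1, rater_2), 2))       # 0.71
```

Kappa is reported alongside raw agreement because two raters who favour the same levels will agree often by chance alone; kappa discounts that chance component.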
Performance-based assessments have their own merits or advantages as well as limitations. These limitations, particularly in the methods of observing, recording, and scoring performance outcomes, have given rise to the development of rubrics, an increasingly popular type of scoring guide used to assess more complex, subjective criteria.
Making Way for Rubrics

What is a Rubric? A rubric is a scoring tool that lists the criteria for a piece of work, or "what counts" (for example, purpose, organization, details, voice, and mechanics are often what count in a piece of writing); it also articulates gradations of quality for each criterion, from excellent to poor (Goodrich, 1997). A rubric is a device for organizing and interpreting data gathered from observations of student performance. More precisely, it is a scoring guide that differentiates between levels of development in a specific area of performance or behaviour (Rose, 1999).
Rubrics may be used as both assessment and evaluation tools. As an assessment instrument, a rubric allows students to assess their own achievement as they are working on a task. It also gives the teacher the opportunity, while conferencing with a student, to point out the differences between levels and to give the student specific indicators of what they must do and how they can achieve a higher level. As an evaluation instrument, it allows the teacher to give a fair and unbiased judgement of student work. Because it is given to students prior to a task, referred to during class, and used for assessment over a period of time, evaluating with this scale gives a clear judgement of student ability and performance.
Why Use Rubrics?

1. Rubrics help define "quality." They are powerful tools for both teaching and assessment. They can improve and monitor student performance by making teachers' expectations clear and by showing students how to meet those expectations. Because they communicate detailed explanations of what constitutes excellence throughout a project, they provide a clear teaching directive. The result is often a marked improvement in the quality of student work and learning.

2. Rubrics help students become more thoughtful judges of the quality of their own and others' work. When rubrics are used to guide self- and peer-assessment, students become increasingly able to spot and solve problems in their own and one another's work.
3. Rubrics reduce the amount of time teachers spend evaluating student work. Teachers tend to find that by the time a piece has been self- and peer-assessed according to a rubric, they have little left to say about it. Rubrics provide students with more informative feedback about their strengths and areas in need of improvement.

4. Teachers appreciate rubrics because their "accordion" nature allows them to accommodate heterogeneous classes.

5. Rubrics are easy to use and explain.
How Do Teachers Create Rubrics?

Rubrics are becoming increasingly popular as teachers move toward more authentic, performance-based assessments. Recent publications contain sample rubrics that can serve as models or guides, but chances are that we will have to develop a few of our own to reflect our own curriculum and teaching style. For beginners, Goodrich (1997) suggests that the rubric design process should engage both teacher and students in the following steps:
1. Look at models: Show students examples of good and not-so-good work. Identify the characteristics that make the good ones good and the bad ones bad.
2. List criteria: Use the discussion of models to begin a list of what counts in quality work.
3. Articulate gradations of quality: Describe the best and worst levels of quality, then fill in the middle levels.
4. Practice on models: Have students use the rubric to evaluate the models you gave them in step 1.
5. Use self- and peer-assessment: Give students their task. As they work, stop them occasionally for self- and peer-assessment.
6. Revise: Always give students time to revise their work based on the feedback they get in step 5.
7. Use teacher assessment: Use the same rubric the students used to assess their work yourself.
Tips on Designing Rubrics

1. Avoid unclear language, such as "creative beginning." If a rubric is to teach as well as evaluate, such terms must be defined for students.

2. Avoid unnecessarily negative language when articulating gradations of quality. It helps if you spend time thinking about the criteria and how best to chunk them before going on to define the levels of quality.

3. Have a balance of quantity and quality indicators within a rubric. Some criteria are best described using quantity performance descriptors, while others are much easier to evaluate using quality performance descriptors.
Examples:
Quantity: 1-2, 3-4, several, many, variety, wide variety
Quality: uses examples, consistent, accuracy, detail, comprehensive

4. Consider the choices you can make in building a rubric, especially in the language you choose for your criteria and level performance descriptors. Not all opinions will be the same, and there is room for choice and interpretation. The most difficult task in creating rubrics is to write level descriptors in plain language that can be clearly understood and interpreted by your students. The easiest method is to begin with a ready-made rubric and adjust it to suit your needs and those of your students.

5. Be cautious about over-populating a rubric. Keep it simple and succinct.
Example of Gradations of Quality
Using the Rubrics You've Created

Creating rubrics is the hard part; using them is relatively easy. Once you've created a rubric, give copies to students and ask them to assess their own progress on a task or project. Their assessment may or may not count toward a grade. The value of students' self-assessment using rubrics is that it helps them learn more and produce better final products. Self- and peer-assessments are intended to support and evaluate student learning.
Conclusion

The call for increased use of meaningful (authentic) assessments that involve students in selecting and reflecting on their learning implies that teachers will have a wider range of evidence on which to judge whether students have demonstrated the target skills and competencies. It also means that the curriculum will become more responsive to the differing learning styles of students and will value the diversity of students' potentialities. Finally, a curriculum that focuses on authentic performance-based assessment is likely to develop in students lifelong skills related to critical thinking that build a basis for future learning and enable them to evaluate what they learn both in and outside of the classroom.
The value of assessment in education may be viewed in terms of its relationship with teaching and learning. Students value what we assess. Thus, if we are to assess assessment itself, what criteria might we use? The answer to this question will be a reflection of my own principles for assessment, which emanate from a learning-centered view of teaching, the constructivist view of knowing, and a critical theoretical view of empowerment. Foremost, assessment practices should enrich teaching and improve students' learning. Assessment practices should also empower teachers, students, and all stakeholders in education, helping teachers and students achieve a more focused view of the student's learning. Finally, for teachers, empowerment means having a fuller grasp of their own abilities, needs, realities, and potentialities. In the final analysis, in authentic performance-based assessment teachers do not distinguish between instruction and assessment but rather integrate assessment with teaching and learning. Such is the power of assessment. To what extent, then, have we, as teachers, maximized assessment to its fullest potential?
Sample Rubric for Assessing a Performance-Based Assessment Project

Criteria / Scale: Limited | Emerging | Developing | Outstanding

Application of Assessment Principles (40%)
- Limited: Not evident in all the components of the performance task (0-10 pts.)
- Emerging: Somewhat evident, but with gross violations (11-20 pts.)
- Developing: Evident, but with minor violations (21-30 pts.)
- Outstanding: Highly evident in all components of the performance task (31-40 pts.)

Appropriateness and Adequacy of Performance Tasks (40%)
- Limited: Not evident; performance outcome is not clearly defined (0-10 pts.)
- Emerging: Performance outcome is defined, but performance tasks are inadequate; no specified method of observing, recording, and scoring (11-20 pts.)
- Developing: Performance outcome is clearly defined with appropriate and adequate tasks, but the method of observing, recording, and scoring needs improvement (21-30 pts.)
- Outstanding: Clearly defined outcome and performance situation; adequate task requirements with an appropriate method of observing, recording, and scoring (31-40 pts.)

Overall Mechanical Make-up (20%)
- Limited: Lacks important components; contains several typographical errors (0-5 pts.)
- Emerging: Lacks 1 or 2 components; a few typographical errors (6-10 pts.)
- Developing: Complete components, no typographical errors, but needs improvement in presentation (11-15 pts.)
- Outstanding: Complete components, self-contained, error-free, well-presented (16-20 pts.)
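The sample rubric above weights its criteria (40% + 40% + 20%) by giving each its own point range, so a project's grade is simply the sum of the points awarded per criterion. As an illustration only, the sketch below totals a hypothetical project's scores; the criterion names mirror the sample rubric, but the awarded points are invented.

```python
# Illustrative sketch: totalling a project scored with the sample rubric,
# where each criterion carries its own maximum (40 + 40 + 20 = 100 pts.).
RUBRIC = {
    "Application of Assessment Principles": 40,
    "Appropriateness and Adequacy of Performance Tasks": 40,
    "Overall Mechanical Make-up": 20,
}

def total_score(scores):
    """Sum awarded points after checking each score is within its range."""
    for criterion, awarded in scores.items():
        maximum = RUBRIC[criterion]
        if not 0 <= awarded <= maximum:
            raise ValueError(f"{criterion}: {awarded} is outside 0-{maximum}")
    return sum(scores.values())

# Hypothetical project: 'Developing' on the first two criteria,
# 'Outstanding' on the third.
project = {
    "Application of Assessment Principles": 28,
    "Appropriateness and Adequacy of Performance Tasks": 25,
    "Overall Mechanical Make-up": 18,
}
print(total_score(project))  # 71
```

Keeping the maxima in one table makes the weighting explicit and catches scoring slips (for example, awarding 25 points on the 20-point criterion) before they distort a grade.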