Evaluation Models. Dr Jabbarifar (EDO, Dentistry 2007, Isfahan)

Definition “Evaluation models either describe what evaluators do or prescribe what they should do” (Alkin and Ellett, 1990, p.15)

Prescriptive Models “Prescriptive models are more specific than descriptive models with respect to procedures for planning, conducting, analyzing, and reporting evaluations” (Reeves & Hedberg, 2003, p.36). Examples: – Kirkpatrick: Four-Level Model of Evaluation (1959) – Suchman: Experimental Evaluation Model (1960s) – Stufflebeam: CIPP Evaluation Model (1970s)

Descriptive Models They are more general in that they describe the theories that undergird prescriptive models (Alkin & Ellett, 1990) Examples: – Patton: Qualitative Evaluation Model (1980s) – Stake: Responsive Evaluation Model (1990s) – Hlynka, Belland, & Yeaman: Postmodern Evaluation Model (1990s)

Formative evaluation An essential part of instructional design models It is the systematic collection of information for the purpose of informing decisions to design and improve the product / instruction (Flagg, 1990)

Why Formative Evaluation? The purpose of formative evaluation is to improve the effectiveness of the instruction at its formation stage through the systematic collection of information and data (Dick & Carey, 1990; Flagg, 1990), so that learners will like the instruction and learn from it.

When? Early and often, before it is too late.

The steps of the systematic instructional design process in which formative evaluation is embedded: assess needs to identify goals, conduct instructional analysis, analyze learners and contexts, write performance objectives, develop assessment instruments, develop instructional strategy, develop and select instructional materials, design and conduct formative evaluation, design and conduct summative evaluation, and revise instruction.

What questions are to be answered? Feasibility: Can it be implemented as designed? Usability: Can learners actually use it? Appeal: Do learners like it? Effectiveness: Will learners learn what they are supposed to learn?

Strategies I Expert review – Content experts: the scope, sequence, and accuracy of the program’s content – Instructional experts: the effectiveness of the program – Graphic experts: the appeal, look, and feel of the program

Strategies II User review – A sample of targeted learners whose backgrounds are similar to those of the final intended users – Observations: users’ opinions, actions, responses, and suggestions

Strategies III Field tests – Alpha or Beta tests

Who is the evaluator? Internal – Member of design and development team

When to stop? Cost. Deadline. Sometimes, just let things go!

Summative evaluation The collection of data to summarize the strengths and weaknesses of instructional materials in order to make decisions about whether to maintain or adopt the materials.

Strategies I Expert judgment

Strategies II Field trials

Evaluator External evaluator

Outcomes Report or document of data Recommendations Rationale

Comparison of Formative & Summative Evaluation
Purpose: formative is for revision; summative is for decision.
How: formative uses peer review, one-to-one, group review, and field trials; summative uses expert judgment and field trials.
Materials: formative works with one set of materials; summative compares one or several competing sets of instructional materials.
Evaluator: formative is internal; summative is external.
Outcomes: formative yields a prescription for revising materials; summative yields recommendations and a rationale.
Source: Dick and Carey (2003). The systematic design of instruction.

Objective-Driven Evaluation Model (1930s): R.W. Tyler – A professor at Ohio State University – The director of the Eight-Year Study (1934) Tyler’s objective-driven model is derived from his Eight-Year Study.

Objective-Driven Evaluation Model (1930s): The essence: The attainment of objectives is the only criterion for determining whether a program is good or bad. His approach to designing and evaluating a program: set goals, derive specific behavioral objectives from the goals, establish measures for the objectives, reconcile the instruction to the objectives, and finally evaluate the program against the attainment of these objectives.
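To make the approach concrete, here is a minimal sketch in Python of scoring a program against behavioral objectives; the objective names, mastery threshold, and scores are hypothetical illustrations, not data from Tyler's study.

```python
# A hypothetical sketch of objective-driven scoring: for each behavioral
# objective, compute the share of learners who reached an assumed
# mastery criterion. Objectives, threshold, and scores are illustrative.

scores = {
    "Define formative evaluation": [0.90, 0.80, 0.65, 0.95],
    "Write a behavioral objective": [0.60, 0.55, 0.70, 0.75],
    "Select a measurement instrument": [0.40, 0.50, 0.45, 0.80],
}

MASTERY = 0.70  # assumed criterion for attainment of an objective

for objective, learner_scores in scores.items():
    attained = sum(score >= MASTERY for score in learner_scores)
    rate = attained / len(learner_scores)
    print(f"{objective}: {rate:.0%} of learners attained the objective")
```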

Tyler’s Influence Influence: Tyler’s emphasis on the importance of objectives has influenced many aspects of education. – The specification of objectives is a major factor in virtually all instructional design models – Objectives provide the basis for the development of measurement procedures and instruments that can be used to evaluate the effectiveness of instruction – It is hard to proceed without a specification of objectives

Four-Level Model of Evaluation (1959): D. Kirkpatrick

Kirkpatrick’s four levels: The first level (reactions) – the assessment of learners’ reactions or attitudes toward the learning experience. The second level (learning) – an assessment of how well the learners grasped the instruction. Kirkpatrick suggested that a control group and a pretest/posttest design be used to assess statistically what the learners learned as a result of the instruction. The third level (behavior) – a follow-up assessment of the actual performance of the learners as a result of the instruction; it determines whether the skills or knowledge learned in the classroom setting are being used in the job setting, and how well. The final level (results) – an assessment of the changes in the organization as a result of the instruction.
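As a rough sketch of the level-two comparison, the Python example below (assuming SciPy is available) computes pretest-to-posttest gain scores for an instructed group and a control group and compares them with a t-test; all score data are hypothetical.

```python
# A hypothetical pretest/posttest, control-group comparison for
# Kirkpatrick's learning level: compute each learner's gain score and
# test whether the instructed group gained more than the control group.
from scipy import stats

treatment_pre = [52, 48, 60, 55, 47, 58]
treatment_post = [78, 74, 85, 80, 70, 83]
control_pre = [50, 49, 57, 53, 51, 56]
control_post = [55, 52, 60, 58, 54, 59]

treatment_gain = [post - pre for pre, post in zip(treatment_pre, treatment_post)]
control_gain = [post - pre for pre, post in zip(control_pre, control_post)]

# Independent-samples t-test on the gain scores.
t_stat, p_value = stats.ttest_ind(treatment_gain, control_gain)
print(f"mean gain (instruction): {sum(treatment_gain) / len(treatment_gain):.1f}")
print(f"mean gain (control): {sum(control_gain) / len(control_gain):.1f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```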

Kirkpatrick’s model “Kirkpatrick’s model of evaluation expands the application of formative evaluation to the performance or job site” (Dick, 2002, p.152).

Experimental Evaluation Model (1960s): The experimental model is a widely accepted and employed approach to evaluation and research. Suchman was identified as one of the originators and the strongest advocates of the experimental approach to evaluation. This approach uses techniques such as pretest/posttest designs and experimental versus control groups to evaluate the effectiveness of an educational program. It is still widely used today.

CIPP Evaluation Model (1970s): D. L. Stufflebeam. CIPP stands for Context, Input, Process, and Product.

CIPP Evaluation Model Context is about the environment in which a program would be used; this context analysis is called a needs assessment. Input analysis is about the resources that will be used to develop the program, such as people, funds, space, and equipment. Process evaluation examines the status of the program during its development (formative). Product evaluation assesses the success of the program (summative).

CIPP Evaluation Model Stufflebeam’s CIPP evaluation model was the most influential model in the 1970s (Reiser & Dempsey, 2002).

Qualitative Evaluation Model (1980s) Michael Quinn Patton, Professor, Union Institute and University & Former President of the American Evaluation Association

Qualitative Evaluation Model Patton’s model emphasizes qualitative methods, such as observations, case studies, interviews, and document analysis. Critics of the model claim that qualitative approaches are too subjective and that the results will be biased. However, the qualitative approach in this model is accepted and used by many ID models, such as the Dick & Carey model.

Responsive Evaluation Model (1990s) Robert E. Stake He has been active in the program evaluation profession. He took up a qualitative perspective, particularly case study methods, in order to represent the complexity of an evaluation study.

Responsive Evaluation Model It emphasizes the issues, language, contexts, and standards of stakeholders. Stakeholders: administrators, teachers, students, parents, developers, evaluators… Methods are negotiated with the stakeholders in the evaluation during the development. Evaluators try to expose the subjectivity of their judgment, just as other stakeholders do. Observation and reporting are continuous in nature.

Responsive Evaluation Model This model is criticized for its subjectivity. His response: subjectivity is inherent in any evaluation or measurement. Evaluators endeavor to expose the origins of their subjectivity while other types of evaluation may disguise their subjectivity by using so-called objective tests and experimental designs

Postmodern Evaluation Model (1990s): Dennis Hlynka and Andrew R. J. Yeaman

The postmodern evaluation model Advocates criticized modern technologies and positivist modes of inquiry. They viewed educational technologies as a series of failed innovations. They opposed systematic inquiry and evaluation. In their view, ID is a tool of positivists who hold onto the false hope of linear progress.

How to be a postmodernist Consider concepts, ideas, and objects as texts; textual meanings are open to interpretation. Look for binary oppositions in those texts; some usual oppositions are good/bad, progress/tradition, science/myth, love/hate, man/woman, and truth/fiction. Consider the critics, the minority, and the alternative view; do not assume that your program is the best.

The postmodern evaluation model Criticisms: it is anti-technology, anti-progress, and anti-science, and it is hard to use. Still, some of its evaluation perspectives, such as race, culture, and politics, can be useful in the evaluation process (Reeves & Hedberg, 2003).

Fourth generation model E.G. Guba and Y.S. Lincoln

Fourth generation model Seven principles that underlie their model (constructivist perspective): 1. Evaluation is a sociopolitical process 2. Evaluation is a collaborative process 3. Evaluation is a teaching/learning process 4. Evaluation is a continuous, recursive, and highly divergent process 5. Evaluation is an emergent process 6. Evaluation is a process with unpredictable outcomes 7. Evaluation is a process that creates reality

Fourth generation model The outcome of evaluation is a rich, thick description based on extended observation and careful reflection. They recommend negotiation strategies for reaching consensus about the purposes, methods, and outcomes of evaluation.

Multiple methods evaluation model M.M. Mark and R.L. Shotland

Multiple methods evaluation model One plus one is not necessarily more beautiful than one. Multiple methods are only appropriate when they are chosen for a particularly complex program that cannot be adequately assessed with a single method.

REFERENCES
Dick, W. (2002). Evaluation in instructional design: The impact of Kirkpatrick’s four-level model. In R. A. Reiser & J. V. Dempsey (Eds.), Trends and issues in instructional design and technology. New Jersey: Merrill Prentice Hall.
Dick, W., & Carey, L. (1990). The systematic design of instruction. Florida: HarperCollins Publishers.
Reeves, T., & Hedberg, J. (2003). Interactive learning systems evaluation. Educational Technology Publications.
Reiser, R. A. (2002). A history of instructional design and technology. In R. A. Reiser & J. V. Dempsey (Eds.), Trends and issues in instructional design and technology. New Jersey: Merrill Prentice Hall.
Stake, R. E. (1990). Responsive evaluation. In H. J. Walberg & G. D. Haertel (Eds.), The international encyclopedia of educational evaluation (pp. 75-77). New York: Pergamon Press.