1
Art of Evaluation: Knowledge, Skills & Attitude
Prepared by Salwa Mahmoud Abd Elwahab, under the supervision of Prof. Dr. Tahany El Senosy, Professor of Medical Surgical Nursing, Faculty of Nursing, Ain Shams University, 2010
2
Outline:
Introduction
Specific terminology related to evaluation
Definition of evaluation
Types of evaluation
Purposes of evaluation
Elements needed for the construction of an evaluation system
Standards of evaluation
Guiding principles for the evaluator
Steps of evaluation
3
Outline (cont.):
Phases of evaluation
Methods of evaluating knowledge, skills and attitude
Evaluation models
4
Objectives:
Define evaluation
Define the types of evaluation
List the purposes of evaluation
Identify the elements needed to construct an evaluation system
Identify the steps of evaluation
Explain the phases of evaluation
Discuss models of evaluation
5
Introduction
Evaluation is the systematic determination of the merit, worth, and significance of something or someone, using criteria against a set of standards. Evaluation is a learning- and action-oriented management tool and organizational process for improving current activities and future planning, programming, and decision-making.
6
Introduction (cont.)
Knowledge is defined by the Oxford English Dictionary as expertise and skills acquired by a person through experience or education. There are many methods of evaluating knowledge: interview, questionnaire, discussion, survey, and analysis of records and data. Lesley (2001) defines practical nursing skills as hands-on actions that promote the patient's physical comfort, hygiene, and safe medical treatment; these are commonly referred to as procedures or psychomotor skills.
7
There are three types of skills: intellectual skills, such as analysis and application; manual and psychomotor skills, such as lab skills; and social and interpersonal skills, such as communication skills, as well as others such as decision-making and planning skills. An attitude is a hypothetical construct that represents an individual's degree of like or dislike for an item. Attitude can be evaluated through a Likert-type scale, self report (published), inventories, the semantic differential, and projective techniques (Wikipedia Project, 2009).
8
Introduction cont Art is the process or product of deliberately arranging elements in a way that appeals to the senses or emotions. It encompasses a diverse range of human activities, creations, and modes of expression, including music and literature. The meaning of art is explored in a branch of philosophy known as Aesthetics.
9
Specific terminology related to evaluation:
Criteria: statements of needs, rules, standards, or tests that must be used in evaluating a decision, idea, opportunity, program, project, etc., in order to form a correct judgment regarding the intended goal. Criteria is the plural of criterion.
Evidence: a thing or things helpful in forming a conclusion or judgment.
Judgment: the act or process of judging; the formation of an opinion after consideration or deliberation.
10
Test: an instrument or tool for obtaining a measurement or assessment, e.g. an essay.
Examination: a formal situation in which students undertake one or more tests under specific rules.
Measurement: a quantitative process involving the assigning of a number to an individual's characteristics.
11
Definitions
According to the American Evaluation Association (2007), evaluation involves assessing the strengths and weaknesses of programs, policies, personnel, products, and organizations to improve their effectiveness. Evaluation is an integral component of the teaching-learning process that should facilitate student learning and improve instruction. Teachers make judgments about student progress based on information gathered through a variety of assessment techniques.
12
Definitions cont This information assists teachers in planning and modifying their instructional programs, which in turn helps students learn more effectively. Evaluation is also used for reporting progress to students and parents and for making decisions related to such things as student promotion and awards.
13
Evaluation must be considered during the planning stage of instruction when learning objectives and appropriately related teaching strategies and methods are chosen. Too often in the past evaluation has been treated as an add-on, something to be dealt with at the end of a unit of study.
14
Purpose of Student Evaluation
Incentive to learn
Feedback to the student
Modification of learning activities
Selection of students
Success or failure
Feedback to the teacher
School public relations
Information for selection and certification
15
Types of Evaluation
There are many different types of evaluation, depending on the object being evaluated and the purpose of the evaluation:
1. Formative evaluation
2. Summative evaluation
Formative evaluation is an ongoing process, an integral part of the learning process that keeps students and teachers informed of student progress towards program learning objectives. The main purpose of formative evaluation is to improve instruction and student learning (Bahola, 1990).
16
Types of evaluation cont
Summative evaluation occurs most often at the end of a unit of study. Its primary purpose is to determine what has been learned over a period of time, to summarize student progress, and to report on progress relative to curriculum foundational objectives to students, parents and educators. It is a judgment of the student's global competence.
17
Types of evaluation cont
Formative evaluation includes several evaluation types:
Needs assessment determines who needs the program, how great the need is, and what might work to meet the need.
Evaluability assessment determines whether an evaluation is feasible and how evaluators can help shape its usefulness.
Structured conceptualization helps stakeholders define the program or technology, the target population, and the possible outcomes.
Implementation evaluation monitors the fidelity of the program or technology delivery.
Process evaluation investigates the process of delivering the program or technology, including alternative delivery procedures.
18
Types of evaluation cont
Summative evaluation can also be subdivided:
Outcome evaluation investigates whether the program or technology caused demonstrable effects on specifically defined target outcomes.
Impact evaluation is broader and assesses the overall or net effects, intended or unintended, of the program or technology as a whole.
Cost-effectiveness and cost-benefit analysis address questions of efficiency by standardizing outcomes in terms of their dollar costs and values.
Secondary analysis reexamines existing data to address new questions or use methods not previously employed.
Meta-analysis integrates the outcome estimates from multiple studies to arrive at an overall or summary judgment on an evaluation question.
19
Another method to clarify types of evaluation
1. Internal evaluation or self-evaluation: an evaluation carried out by members of the organization who are associated with the programme, intervention or activity to be evaluated.
2. Ex-ante evaluation or impact assessment: an assessment which seeks to predict the likelihood of achieving the intended results of a programme or intervention, or to forecast its unintended effects.
20
Ex-ante evaluation is conducted before the programme or intervention is formally adopted or started; common examples are environmental and/or feasibility studies.
3. Mid-term or interim evaluation: an evaluation conducted half-way through the lifecycle of the programme or intervention to be evaluated.
Monitoring: an ongoing activity aimed at assessing whether the programme or intervention is implemented in a way that is consistent with its design and plan and is achieving its intended results.
21
4. Ex-post or summative evaluation
An evaluation which usually is conducted some time after the programme or intervention has been completed or fully implemented. Generally its purpose is to study how well the intervention served its aims, and to draw lessons for similar interventions in the future. 5. Meta-evaluation Two processes are often referred to as meta-evaluation: (1) the assessment by a third evaluator of evaluation reports prepared by other evaluators; and (2) the assessment of the performance of systems and processes of evaluation.
22
6. Formative evaluation An evaluation which is designed to provide some early insights into a programme or intervention to inform management and staff about the components that are working and those that need to be changed in order to achieve the intended objectives. (Information Project, 2006)
23
Another method to clarify types of evaluation
1. Outcome evaluation: an evaluation which is focused on the change brought about by the programme or intervention to be evaluated, or its results regarding the intended beneficiaries.
2. Impact evaluation: an evaluation that focuses on the broad, longer-term impact or effects, whether intended or unintended, of a programme or intervention. It is usually done some time after the programme or intervention has been completed.
24
3. Performance evaluation: An analysis undertaken at a given point in time to compare actual performance with that planned in terms of both resource utilization and achievement of objectives. This is generally used to redirect efforts and resources and to redesign structures.
25
Elements Needed for the Construction of an Evaluation System
Planning the evaluation of the situation analysis and the identification of priority health problems (context): evaluation of the context is concerned with the importance of the educational programme.
Planning the evaluation of the human and material resources to be used and the elements included in the programme (input).
26
Elements Needed for the Construction of an Evaluation System
It is important to make sure that teachers are competent and comfortable with the teaching methodology to be used.
Planning the monitoring of implementation (the educational process): an evaluation system must plan how the implementation of the programme is to be monitored.
27
Elements Needed for the Construction of an Evaluation System
Planning the evaluation of the learners (the output): the central component of an evaluation system is the evaluation of the learners' performance.
28
Standards of Evaluation
The Joint Committee on Standards for Educational Evaluation (2009) has developed standards for educational programmes, personnel, and student evaluation. The Joint Committee standards are broken into four sections:
Utility: the quality or state of being useful
Feasibility: capable of being accomplished or brought about; possible
Propriety: the quality or state of being appropriate or fitting
Accuracy: the ability of a measurement to match the actual value of the quantity being measured
29
(The Community Tool Box, 2009)
30
Steps of Student Evaluation
Set the criteria for the educational objectives
Develop and use a measuring instrument
Interpret the measurement data
Formulate judgments and take appropriate action
31
Steps of evaluation
Get an overview of the program
Determine why you are evaluating
Determine what you need to know and formulate research questions
Figure out what information you need to answer the questions
Design the evaluation
Collect the information/data
Analyze the information
Formulate conclusions
Communicate results
Use the results to modify the program
32
Phases of evaluation The evaluation process is cyclical in nature. Each phase is linked to and dependent on the others. It is a continuous process that takes careful planning and systematic implementation not to mention constant review and modification to guide student learning.
33
Phases of evaluation cont.
34
In the preparation phase, decisions are made which identify what is to be evaluated, the type of evaluation to be used, the criteria against which student learning outcomes will be judged, and the most appropriate assessment strategies with which to gather information on student progress.
35
The assessment phase is action-oriented. The teacher identifies appropriate information-gathering strategies, constructs or selects assessment techniques in collaboration with the student, continues to make decisions such as identification and elimination of bias from assessment instruments, and determines where, when and how assessments will be conducted. The teacher collects, organizes and interprets the student information gathered.
36
During the evaluation phase, the teacher examines the collected student information carefully, taking into consideration pertinent points such as the student's particular situation, the curriculum, the time of the year, the variety of resources, etc., to make a judgment on the progress of the student or the level of achievement of student learning.
37
The reflection phase allows the teacher to consider possible actions and to make decisions necessary to carry out improvements or modifications to subsequent teaching and evaluation.
38
Guiding Principles for evaluator
Be selective! Be realistic! Be creative! Be careful! Be balanced! Be holistic! Be human! (Course evaluation methods, 1999)
39
Methods of Evaluation
To select the most appropriate methods for evaluation, the following questions must be answered:
What types of data will be collected?
From whom or what will data be collected?
How, when, and where will data be collected?
By whom will data be collected?
40
Qualities of effective tool for evaluation
Validity: this is the most important aspect of a test and is the extent to which the test measures what it is designed to measure.
Reliability: the extent to which a test is consistent in measuring what it aims to measure.
Objectivity: the state of being objective, just, unbiased and not influenced by emotions or personal prejudices.
Practicability: the applicability of a test; it covers factors such as the time taken to conduct the test, the cost of using it and its practicality for everyday use.
41
Qualities of effective tool for evaluation
Equilibrium: achievement of the correct proportion among questions allocated to each of the objectives.
Equity: the extent to which the questions set in the examination correspond to the teaching content.
Specificity: the quality of a measuring instrument whereby an intelligent student who has not followed the teaching on which the instrument is based will obtain a result equivalent to that expected by pure chance.
Discrimination: the quality of each element of a measuring instrument which makes it possible to distinguish between good and poor students in relation to a given variable.
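The discrimination quality above can be illustrated with an item discrimination index. The slides do not prescribe a formula, so the Python sketch below only illustrates the common upper-group/lower-group approach; the 27% group size and all data are invented for the example.

# Illustrative sketch (assumed approach, not from the slides): item discrimination
# index using the upper-lower group method.
def discrimination_index(scores, item_correct, group_fraction=0.27):
    # Rank students by total test score and take the top and bottom fractions.
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    n_group = max(1, int(len(scores) * group_fraction))
    upper, lower = ranked[:n_group], ranked[-n_group:]
    # Proportion of each group that answered the item correctly.
    p_upper = sum(item_correct[i] for i in upper) / n_group
    p_lower = sum(item_correct[i] for i in lower) / n_group
    # Values near +1 mean the item separates good and poor students well;
    # values near 0 (or negative) mean it does not discriminate.
    return p_upper - p_lower

# Hypothetical data: total scores of 8 students and whether each got one item right.
scores = [92, 88, 75, 70, 64, 60, 55, 40]
item_correct = [1, 1, 1, 0, 1, 0, 0, 0]
print(discrimination_index(scores, item_correct))  # prints 1.0 for this data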
42
Efficiency: the quality of a measuring instrument which ensures the greatest possible number of independent answers per unit of time.
Time: it is well known that a measuring instrument will be less reliable if the time allowed is too short, because this leads to the introduction of irrelevant factors.
43
Evaluation of knowledge:
The cognitive domain in Bloom's taxonomy (2007) comprises knowledge, comprehension, application, analysis, synthesis and evaluation. Each of these levels demands a different form of assessment. There is a variety of objective tests of knowledge that can be used to evaluate the different levels of the cognitive domain.
44
Methods of evaluating knowledge:
Interview
Questionnaire
Discussion
Survey
Analysis of records and data
Focus group
Case study
Documentation review
45
Evaluation of skills
There are three types of skills:
Intellectual skills, such as analysis and application
Manual and psychomotor skills, such as lab skills
Social and interpersonal skills, such as communication skills; others include decision-making and planning skills
46
Skills can be evaluated through:
Rating scales
Checklist methods
OSCE
47
Rating scales: a method of evaluating clinical performance. Rating scales can be either narrative or numerical in format:
Narrative (descriptive) rating scale
Numerical rating scale
48
Examples of narrative rating scale
49
Example of a numerical rating scale: Numerical Pain Scale
50
OSCE (Objective Structured Clinical Examination)
The candidates rotate through a series of stations at which they are asked to carry out a task (usually a clinical task).
51
Background about the OSCE:
Started in 1972 by R. Harden and F. Gleeson
First literature about the OSCE published in 1975
Used in undergraduate as well as postgraduate education
Used for formative and summative evaluation
Used in many disciplines
52
Characteristics of the OSCE
It is an assessment approach primarily used to measure clinical competence
It should be planned or structured (predetermined clinical competences)
It is an examination format or framework; different types of test method can be incorporated into it
In most stations students are observed (by one or more examiners) and scored as they carry out the task or interpret clinical materials (e.g. laboratory data, X-rays), write notes or answer questions
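The slides note that students are scored at each station as they carry out the task, but they do not specify a marking scheme; the Python sketch below is a hypothetical illustration of totalling checklist marks across stations, with the station names, checklists and simple unweighted averaging all invented for the example.

# Hypothetical sketch: aggregating OSCE checklist scores across stations.
# Each station's checklist records observed steps as 1 (done) or 0 (not done).
stations = {
    "hand hygiene":      [1, 1, 1, 0, 1],
    "vital signs":       [1, 1, 1, 1, 1, 0],
    "patient education": [1, 0, 1, 1],
}

def station_percentages(stations):
    # Convert each station's checklist into a percentage score.
    return {name: 100 * sum(items) / len(items) for name, items in stations.items()}

per_station = station_percentages(stations)
overall = sum(per_station.values()) / len(per_station)  # simple unweighted mean
print(per_station)
print(f"Overall OSCE score: {overall:.1f}%")  # the pass mark would be set locally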
55
Simulated Patient (examination)
56
Advantages of the OSCE:
Valid examination
The examiners can control the complexities of the examination
Can be used as summative as well as formative evaluation
Can be used with larger numbers of students
Reproducible
The variables of the examiner and the patient are to a large extent removed
Promotes team work
57
Disadvantages of the OSCE
Knowledge and skills are tested in compartments
The OSCE may be demanding for both examiners and patients
It takes more time to set up
Shortage of examiners
It might be quite distressing to the student
58
Evaluation of attitudes
Attitudes are generally positive or negative views of a person, place, thing, or event. Components of attitude:
Cognitive component: refers to ideas that express relationships between situations, e.g. "people watch too much TV".
Affective component: refers to the emotion or feeling that accompanies this idea, e.g. "I feel frustrated, so I watch TV".
Behavioral component: refers to acting towards an object or situation in a consistent way.
59
Methods to evaluate attitude
Likert-type scale
Self report (published)
Inventories
Semantic differential
Projective techniques
60
1-Likert-type scale
It is the most popular method of measuring attitudes by summated ratings. The scale consists of a series of declarative statements, and the subject is asked to indicate whether he agrees or disagrees with each statement. Commonly, five options are provided: "strongly agree," "agree," "undecided," "disagree," and "strongly disagree."
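Because the Likert scale is scored by summated ratings, a short sketch may help. Weighting the five options from 5 ("strongly agree") down to 1 ("strongly disagree") and reversing negatively worded statements is standard practice, but the Python example below, including its statements and data, is purely illustrative.

# Illustrative sketch: summated scoring of a Likert-type attitude scale.
WEIGHTS = {"strongly agree": 5, "agree": 4, "undecided": 3,
           "disagree": 2, "strongly disagree": 1}

def likert_score(responses, reverse_items=()):
    # responses: the option chosen for each statement, in order.
    # reverse_items: positions of negatively worded statements, whose scoring
    # is reversed so that "strongly agree" counts as 1 instead of 5.
    total = 0
    for i, answer in enumerate(responses):
        value = WEIGHTS[answer]
        if i in reverse_items:
            value = 6 - value
        total += value
    return total

# Hypothetical respondent answering four statements; statement 2 is negatively worded.
answers = ["agree", "strongly agree", "disagree", "undecided"]
print(likert_score(answers, reverse_items={2}))  # 4 + 5 + (6 - 2) + 3 = 16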
61
Examples of Likert-scaled questions
1. How effective was this course in meeting the objectives that were set?
Very effective – Effective – Neither – Ineffective – Very ineffective
2. How would you rate the facilities and resources for learning?
Very good – Good – Neither – Poor – Very poor
62
2-Self report (published): Participant Self-Evaluation Attitude Questionnaire
School:
I am happy at school
I get along with the teachers
I get into trouble a lot at school
I find school work hard
I feel angry at school
Future:
I am excited about my future
People have confidence in me
I would love to travel
I know I will succeed at school
63
3-Inventory
Any list of articles or goods; the process of making such a list, report, or record; or the items listed in such a report or record. The law requires that an inventory shall be attached to certain special documents, and in other cases an inventory is found advantageous for reference or comparison.
64
4-Differential projective techniques
A type of psychological test that assesses a person's thinking patterns, observational ability, feelings, and attitudes on the basis of responses to ambiguous test materials. It is not intended to diagnose psychiatric disorders
65
Models of evaluation
Kirkpatrick (2008) states four levels of evaluation: Reaction, Learning, Behavior, and Results.
1. Reaction model: this level measures how those who participate in the event react to it, i.e. the learner's perception of (reaction to) the event. This level is often measured with attitude questionnaires ("smile sheets") that are passed out after most training sessions.
66
2-Learning model: the extent to which participants change attitudes, improve knowledge, and increase skills as a result of attending the programme. The learning evaluation requires pre-testing and post-testing. Evaluating the learning that has taken place is typically focused on questions such as:
1. What knowledge was acquired?
2. What skills were developed or enhanced?
3. What attitudes were changed?
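Since the learning level relies on pre-testing and post-testing, a simple gain calculation shows how the two scores are usually compared. The slides do not specify a formula, so the normalized gain used in this Python sketch is only one common, hypothetical choice.

# Illustrative sketch: comparing pre-test and post-test scores for the learning level.
def learning_gain(pre, post, max_score=100):
    # Raw gain, and gain normalized by the room left for improvement.
    raw = post - pre
    normalized = raw / (max_score - pre) if pre < max_score else 0.0
    return raw, normalized

pre, post = 55, 80  # hypothetical pre-test and post-test scores out of 100
raw, norm = learning_gain(pre, post)
print(f"raw gain = {raw} points, normalized gain = {norm:.2f}")  # 25 points, 0.56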
67
3-Behavior model: this model involves testing the participants' ability to perform the learned skills after the event. Evaluations can be performed formally (testing) or informally (observation). Behavior data provide insight into the transfer of learning from the workshop to the work environment and the barriers encountered when attempting to implement the new techniques learnt in the programme.
68
4-Results model: it measures training effectiveness, i.e. "what impact has the training achieved?" It is concerned with the impact of the programme on the wider community. An immediate end-of-course evaluation sheet largely measures Reaction; a follow-up questionnaire sent six weeks after the event concentrates on what participants felt they had learned from the event in terms of increased awareness, development of skills and acquisition of knowledge.
69
Models of Evaluation:
First-generation models
Second-generation models
Third-generation models
Fourth-generation models
70
First-generation models
First-generation models are measurement-oriented models where the evaluation is technical and based on objectively measurable data, e.g., number of students passing registration examinations or mean scores of students from a particular programme. First generation models use measurement (usually testing of individual students) as the means of evaluation and consequently usually do not contribute to programme improvement. Furthermore, results of a first generation evaluation do not necessarily reflect the quality or appropriateness of a curriculum (Sarnecky, 1990b).
71
Second-generation model
Second-generation models build on first-generation models and are characterized by their emphasis on description, specifically in relation to programme objectives. These models are a direct response to the popularity of management by objectives. In second-generation models, evaluation is based on describing how well the objectives of a programme (or programme outcomes) are met. This usually includes measurement of various aspects of a programme (Sarnecky, 1990a).
72
Third-generation models
Third-generation models (characterized by the use of high technology) focus on using evaluation as a basis for judgment. Although most evaluations result in some kind of judgment, according to Sarnecky (1990a) third-generation models make use of a wider variety of measures and description than first- and second-generation models, and the focus of the data analysis is on making judgments and planning interventions. Variations or combinations of the first three generations of models were traditionally used in evaluation.
73
Fourth-generation models
This model is characterized by collaboration, participation and self-determination and is designed to assist programme stakeholders and to result in empowerment through self-evaluation and reflection. It is used for the purpose of accreditation or professional approval. In spite of the availability of useful third-generation models, it was felt that more qualitative, reflective methods were needed to fully understand the quality of educational programmes. The fourth-generation models were a response to this perception.
74
Thank you