Kirkpatrick.

Presentation transcript:

Kirkpatrick

The Four Levels: Reaction, Learning, Behavior, Results

All about Kirkpatrick In 1959, Kirkpatrick wrote four articles describing the four levels for evaluating training programs. He came up with the idea of defining evaluation while working on his Ph.D. dissertation. According to Kirkpatrick, 'evaluation' means different things to different training and development professionals: some think of it as a change in behavior, others as the determination of final results. Kirkpatrick developed his Four-Level Model to clarify the meaning of 'evaluation' and the process for carrying it out in a training program. If there is no change in behavior, but there is a change in skills, knowledge, or attitudes, then using only part of the model (not all levels) is acceptable. If the purpose of the training program is to change behavior, then all four levels apply. Other authors have proposed various strategies for evaluating training programs, but Kirkpatrick is credited with developing and masterminding the Four-Level Model. He aimed the model at executives and middle management; however, it works well in most other training areas.

All about Kirkpatrick (continued) Kirkpatrick says these definitions are all right, and yet all wrong. All four levels are important for understanding the basic concepts of training evaluation. There are exceptions, however.

Kirkpatrick: Evaluating Training Programs "What is quality training?" "How do you measure it?" "How do you improve it?" These are the questions HRD coordinators ask about training performance, about the starting criteria, and about the expectations for the resulting training program. Business training operations need quantitative measures as well as qualitative measures; a happy medium between the two is the ideal position from which to fully understand training needs and carry out the program's development. Quantitative refers to the research methodology in which the investigator's "values, interpretations, feelings, and musings have no place in the positivist's view of the scientific inquiry." (Borg and Gall, 1989) (cont.)

Evaluating "The reason for evaluating is to determine the effectiveness of a training program." (Kirkpatrick, 1994, pg. 3) The hoped-for end result of an evaluation is a positive outcome for both upper management and the program coordinators.

The Ten Factors of Developing a Training Program (factors 1-5, with notes)
1. Determine needs - Ask participants and bosses, use testing, or ask others who are familiar with the needs or objectives. Surveys and interviews are common examples.
2. Set objectives - a. What results are you trying to achieve? b. What behaviors do you want the participants to exhibit at the end of the training program? c. What knowledge, skills, and/or attitudes do you want your pupils to demonstrate at the end of the training program?
3. Determine subject content - Choose content that meets the needs and objectives.
4. Select qualified participants - Four decisions: a. Who is best suited to receive the training? b. Is the training program required by law (e.g., affirmative action)? c. Should attendance be voluntary or required? d. Should hourly and salaried employees be in the same class or in separate classes?
5. Determine the best schedule - A solid week or intermittent days? How often should breaks be taken? Should lunch be brought in, or should participants be allowed to leave for an hour?

The Ten Factors of Developing a Training Program (factors 6-10, with notes)
6. Select appropriate facilities - Facilities should be comfortable, convenient, and appropriate.
7. Select qualified instructors - a. In-house instructors or outside contractors? b. Do instructors need to be 'tailored' to the special needs of the training program?
8. Select and prepare audiovisual aids - Two purposes: a. maintain interest, and b. help communicate ideas and transfer skills. Both purposes can be served by single, special-interest video cassettes or some type of packaged program.
9. Coordinate the program - Two concerns: a. frustration, and b. the needs of the instructor.
10. Evaluate the program - The effectiveness of a training program is determined by its planning and implementation.

Reasons for Evaluating Kirkpatrick gives three reasons why there is a need to evaluate training: 1. "To justify the existence of the training department by showing how it contributes to the organization's objectives and goals." If and when downsizing occurs, this statement takes on more meaning than ever for some unlucky people, because upper management often regards HRD departments as overhead that does not contribute directly to production.

Reasons for Evaluating 2. "To decide whether to continue or discontinue training programs." 3. "To gain information on how to improve future training programs." (Kirkpatrick, 1994, pg. 18) For reason 2, pilot courses may be run to see whether participants gain the necessary knowledge, skills, or behavioral changes to make the program work. For reason 3, Kirkpatrick lists eight factors for improving the effectiveness of a training program; these closely follow the Ten Factors of Developing a Training Program and act as feedback on them.

The Four Levels: Reaction, Learning, Behavior, Results

"The Four Levels represent a sequence of ways to evaluate (training) programs…. As you move from one level to the next, the process becomes more difficult and time-consuming, but it also provides more valuable information." (Kirkpatrick, 1994, pg. 21) All of these levels are important. However, in the case studies later in this presentation, you will see that large corporations have used the Kirkpatrick Model in full, in part, and even with the order of the levels reversed.

Reaction: the measurement of how participants react to the training program; "a measure of customer satisfaction." (Kirkpatrick, 1994, pg. 21) Here 'customer' refers to the participants in the training program. Their reactions must be positive for the program to survive, grow, and improve; reactions travel by word of mouth back to bosses and subordinates alike, and this word-of-mouth can either make the program or break it.

Learning: the change in the participants' attitudes, the increase in their knowledge, or the improvement in their skills that results from participating in the program. A training program must produce at least one of these three kinds of learning to be effective. The best case is an improvement in all three; however, according to Kirkpatrick, a single one is enough for a training program to be considered effective.

Learning The measurement of learning in any training program comes down to at least one of these questions: Did attitudes change positively? Is the knowledge acquired related and helpful to the task? Is the skill acquired related and helpful to the task?
Guidelines for measuring Learning:
1. Use a control group along with an experimental group to provide a comparison.
2. Give a pre-test and a post-test, then measure the difference.
3. Try to get an honest and complete 100% response to any interviews, surveys, or tests.
4. A test that measures participant learning is an effective evaluation for participant and instructor alike; however, it is not conclusive on its own, since other factors may be involved. Results must be measured across the spectrum of the Ten Factors of Development.
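
The sketch below is a minimal, purely hypothetical illustration (not from Kirkpatrick's text) of how guidelines 1 and 2 can be combined: a pre-test/post-test difference is computed for a trained group and for a control group, and the learning effect is estimated as the extra gain of the trained group. All scores are invented.

```python
# Minimal sketch with hypothetical data: estimate learning as the trained
# group's pre/post gain over and above the control group's gain.
from statistics import mean

# Hypothetical test scores (0-100) collected before and after the program.
trained_pre  = [62, 55, 70, 58, 66]
trained_post = [78, 71, 84, 69, 80]
control_pre  = [60, 57, 68, 61, 64]
control_post = [63, 58, 69, 60, 67]

def mean_gain(pre, post):
    """Average per-participant improvement from pre-test to post-test."""
    return mean(p2 - p1 for p1, p2 in zip(pre, post))

trained_gain = mean_gain(trained_pre, trained_post)
control_gain = mean_gain(control_pre, control_post)

print(f"Trained group gain:        {trained_gain:.1f} points")
print(f"Control group gain:        {control_gain:.1f} points")
print(f"Estimated learning effect: {trained_gain - control_gain:.1f} points")
```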

Behavior Level 3 attempts to evaluate how much transfer of knowledge, skills, and attitudes occurs after the training. Level 3 asks the question, "What changes in behavior occurred because people attended the training?" This level is a more difficult evaluation than Levels 1 and 2.

The four conditions Kirkpatrick identifies for change in behavior to occur:
1. Desire to change - the employee must want to make the change.
2. Knowledge of what to do and how to do it - the training must provide the what and the how.
3. The right climate - the employee must return to a work environment that allows and/or encourages the change.
4. Reward for (positive) change - rewards may be intrinsic (inner feelings of pride and achievement) or extrinsic (such as pay increases or praise).

When all conditions are met, the employee must: realize an opportunity to use the behavioral changes, make the decision to use them, and then decide whether or not to continue using them. The employee may like the new behavior and continue using it; not like the new behavior and return to doing things the "old way"; or like the change but be restrained by outside forces that prevent them from continuing to use it.

When evaluating change in behavior, decide: when to evaluate, how often to evaluate, and how to evaluate. With Reaction and Learning, evaluation should be immediate; evaluating change in Behavior involves more decision-making.

Guidelines for evaluating behavior:
1. Use a control group - only if applicable; be aware that this can be very difficult and perhaps impossible.
2. Allow time for change to occur - the change could be immediate, as with diversity training, or it could take longer, as with training on administering performance appraisals. For some programs 2-3 months is appropriate; for others, 6 months is more realistic.
3. Evaluate before and after - if time and budgets allow.
4. Survey/interview observers - decide who is qualified to be questioned; of those qualified, whose answers would be most reliable; who is available; and whether any should not be used.
5. Get 100% response or sampling - attempt to get a 100% response.
6. Repeat the evaluation, as appropriate - not all employees will make the changes at the same time.
7. Consider cost versus benefits - the cost can be internal staff time or an outside expert hired to do the evaluation. The greater the possible benefits, the more dollars that can be justified. If the program will be repeated, the evaluation can also be used for future program improvements.
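
As a small, purely hypothetical sketch of the logistics behind guidelines 2, 5, and 6 (allowing time for change, and sampling respondents when a 100% response is impractical), the snippet below draws a random survey sample and schedules 3-month and 6-month follow-up evaluations. The roster and dates are invented for illustration.

```python
# Minimal sketch with a hypothetical roster and dates: pick a random sample
# of participants for the behavior survey and schedule follow-up evaluations.
import random
from datetime import date, timedelta

participants = ["Avery", "Blake", "Casey", "Devon", "Emery",
                "Finley", "Harper", "Jordan", "Kendall", "Logan"]

# If a 100% response is not practical, survey a random sample instead.
sample = random.sample(participants, k=5)

program_end = date(2024, 3, 1)  # hypothetical end-of-training date
follow_ups = [program_end + timedelta(days=90),    # roughly 3 months out
              program_end + timedelta(days=180)]   # roughly 6 months out

print("Survey sample:", ", ".join(sorted(sample)))
for d in follow_ups:
    print("Schedule behavior evaluation on", d.isoformat())
```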

Results Level 4 is the most important and difficult of all - determining final results after training.

Evaluation Questions: Increased production? Improved quality? Decreased costs? Improved safety numbers? Increased sales? Reduced turnover? Higher profits? Many of these questions never get answered. Why? First, trainers often don't know how to measure results in comparison with the cost of the training. Second, the results may not be clear proof that the training caused the positive outcomes, unless there is a direct relationship between the training and the results (e.g., sales training and the resulting sales dollars).

Guidelines for evaluating results:
1. Use a control group - again, if applicable, to show that the training caused the change.
2. Allow time for results to be achieved - the time needed differs from program to program and from individual to individual.
3. Measure before and after the program - this is easier than measuring behavior because hard data, such as production numbers or absenteeism, are usually available.
4. Repeat the measurements, as needed - you must decide when and how often to evaluate.
5. Consider cost versus benefits - the amount of money spent on evaluation should be determined by the cost of the training, the potential results to be achieved, and how often the training will be repeated.
6. Be satisfied with evidence if proof is not possible - be happy with evidence of training success, because you may not get proof!
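
The cost-versus-benefit guideline can be made concrete with simple arithmetic. The sketch below is a hypothetical illustration, not part of Kirkpatrick's model: it compares invented before-and-after "hard data" (monthly scrap cost) against invented program and evaluation costs to estimate a net benefit and a return figure.

```python
# Minimal sketch with hypothetical figures: weigh measured results against
# what the program and its evaluation cost.
training_cost   = 40_000.0   # delivery, materials, participant time (assumed)
evaluation_cost =  5_000.0   # internal staff time or outside expert (assumed)

# "Hard data" measured before and after the program (hypothetical).
scrap_cost_before = 18_000.0   # per month
scrap_cost_after  = 12_500.0   # per month
months_observed   = 12         # how long the improvement was tracked

benefit      = (scrap_cost_before - scrap_cost_after) * months_observed
total_cost   = training_cost + evaluation_cost
net_benefit  = benefit - total_cost
return_ratio = net_benefit / total_cost

print(f"Measured benefit over period: ${benefit:,.0f}")
print(f"Net benefit:                  ${net_benefit:,.0f}")
print(f"Return per dollar spent:      {return_ratio:.2f}")
```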

Case Study #1 INTEL CORPORATION

Intel's Compromise of the Kirkpatrick Model Intel uses the four-level model as an analysis instrument to determine the initial training needs and the design of its training program, as well as using the model for evaluation.

Intel's Compromise of the Kirkpatrick Model What makes Intel's use of the model unique is that the designers of the training program worked backwards in their analysis of the training, starting with Level Four.

The Model This implementation of the Kirkpatrick Model stands as vivid testimony to the versatility of the model, both as a training tool and as an aid in developing fledgling training programs.

The Model It also reflects the open-mindedness of the senior executives at Intel in their inventive use of the model and their embrace of Kirkpatrick's genius and vision.

How Intel applies the analysis to their training program Level Four: "Determine the organization's structure and future needs." Level Three: Change the environmental conditions and employee conditions to improve business indicators.

How Intel applies the analysis to their training program Level Two: "Design a training program that would ensure a transfer of deficient skills and knowledge." Level One: Use a questionnaire, matched to participants' skill levels, that would instruct and inspire the training participants.

How Intel applies evaluation to their training program Level One - Questionnaire. Level Two - Demonstrate competency and create action plans through group simulations. Level Three - Follow up to determine whether action plans were met (specific steps to implement the concepts that were learned). Level Four - An ongoing process of tracking business indicators.

Case Study #2 ST. LUKE’S HOSPITAL This case study used Level 1, Reaction, and Level 3, Behavior: St. Luke’s needed to improve efficiency and cost control and was looking for ways to improve management training. Outdoor-based programs have been effective in improving interdepartmental communications, increasing employee trust, and reducing boundaries between departments, thereby empowering employees. How many of you have taken part in such a program? There is an entire course of “rope and ladder” activities in the woods, some at ground level and some at higher elevations. The goal of these activities is to build trust and encourage openness and sharing.

St. Luke's is unique - Evaluation of an outdoor-based training program, not a classroom one. Results were analyzed statistically to determine the significance of any change. The evaluation led to recommendations for future programs. St. Luke's program consisted of three 1-day sessions on such a course. Phase I was directed at getting acquainted: "low rope" activities in the morning and "high rope" elements in the afternoon. Phase II focused on building trust within the group through harder, more challenging activities. Phase III focused on individual development and increased group support. The group traveled together and had team slogans and T-shirts. Previous participants were given a questionnaire asking what they had personally gotten from the program and how it had changed their behavior; the results were used to design a new questionnaire for future participants.

The New Questionnaire Used before attendance in the program, again 3 months after completion, and again 6 months after completion. (Communication showed statistically significant improvement, and Group Effectiveness showed statistically significant change.) Evaluation of this program showed that some of the goals were achieved and were long-lasting. It also showed that participants had a positive reaction to the program, which can be linked to results on the job.
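
The case study reports statistical significance but does not say which analysis was used. Purely as an illustration, and assuming scipy is available, the sketch below applies a paired t-test (one plausible choice) to hypothetical questionnaire ratings collected before the program and at the 3-month follow-up.

```python
# Minimal sketch with hypothetical ratings: test whether questionnaire scores
# changed significantly between the pre-program survey and the 3-month
# follow-up. The actual method used in the case study is not documented.
from scipy.stats import ttest_rel

# Hypothetical 1-5 ratings on a "communication" item for ten participants.
before      = [2.8, 3.1, 2.5, 3.0, 2.9, 3.2, 2.7, 3.0, 2.6, 3.1]
three_month = [3.6, 3.4, 3.1, 3.8, 3.3, 3.9, 3.2, 3.5, 3.0, 3.7]

result = ttest_rel(three_month, before)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")
if result.pvalue < 0.05:
    print("Change in communication ratings is statistically significant.")
else:
    print("No statistically significant change detected.")
```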

Kirkpatrick’s 4 Levels of Evaluation are: Level 1 - Reaction: how participants reacted to the program. Level 2 - Learning: what participants learned from the program. Level 3 - Behavior: whether what was learned is being applied on the job. Level 4 - Results: whether that application is achieving results.

Post-test Questions (1) Name three ways evaluation results can be measured. (2) Do all 4 Levels have to be used? (3) Do they have to be used in 1, 2, 3, 4 order? (4) Is Kirkpatrick's method of evaluation summative or formative? (5) Which developmental "view" does Kirkpatrick use (discrepancy, democratic, analytical, diagnostic)? Answers: (1) For ways results can be measured, refer to Slide 21. (2) All four Levels do not have to be used; the St. Luke's Hospital case study used only Levels 1 and 3. (3) The Levels do not have to be used in 1, 2, 3, 4 order; Intel went 4, 3, 2, 1 in designing its program. (4) What is your opinion on Kirkpatrick's method being summative or formative? Is it a combination? (5) Which developmental view do you think Kirkpatrick uses? Defend your opinion.

“IF YOU THINK TRAINING IS EXPENSIVE, TRY IGNORANCE.” and, remember, the definition of ignorance is repeating the same behavior, over and over, and expecting different results! FOOD FOR THOUGHT!