Presentation on theme: "Evaluation of Training"— Presentation transcript:

1 Evaluation of Training
Chapter #8

2 Learning Objectives By the conclusion of this discussion you should:
Understand the different types of evaluation data that should be collected and how to collect them.
Know the basic design issues associated with the four levels of outcome evaluation.
Be aware of the various threats to the internal and external validity of the data.
Be able to construct a cost/benefit analysis and an evaluation plan for your proposed training program.

3 Why Evaluate?
To justify the existence of the training department.
To decide whether to continue or discontinue a training program.
To gain information on how to improve future training programs.
A 1994 survey cited in Evaluating Training Programs (Kirkpatrick): 66% of organizations assess learning, 62% assess behavior, and 47% assess organizational outcomes. Top management is demanding evidence that training departments are contributing positively to the bottom line. If no formal evaluations take place, decisions will be based on whatever impressions the decision makers have of the training. Discuss the book's arguments on resistance to evaluating training.
Evaluating Training Programs, Donald Kirkpatrick

4 Evaluation Data Process (Formative) – evaluates the training design, development, and delivery. Before During Outcome Data (Summative) – evaluates how well training accomplished its objectives. Reaction, Learning, Behavior, Results Before Training: assess the effectiveness of the needs analysis; assess the training objectives; evaluate design issues; Table 6.1 During Training: Table 6.2 Also referred to as formative and summative respectfully. Table 6.3 – Who is interested in the Process Data?

5 Reaction Data
Determine what you want to find out.
Design a form that quantifies reactions.
Encourage written comments and suggestions.
Get a 100% immediate response.
Get honest responses.
Develop acceptable standards.
Measure reactions against those standards and take appropriate action.
Communicate reactions as appropriate.
Reaction outcomes are measures of the trainees' perceptions, emotions, and subjective evaluations of the training experience. Reactions set an upper limit on how much the trainees will learn. They provide the organization with a measure of the perceived value of the training and allow you to modify the training to make it more relevant and appropriate. Figure 6.2 – Guidelines for Developing…
Evaluating Training Programs, Donald Kirkpatrick
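The "quantify" and "standards" steps above lend themselves to a simple tally. A minimal sketch, assuming a 5-point rating form and an acceptable standard of 4.0; the items, scores, and standard here are hypothetical, not from the text:

```python
# Hypothetical 5-point ratings from ten trainees on three reaction items.
ratings = {
    "relevance of content":   [5, 4, 4, 5, 3, 4, 5, 4, 4, 5],
    "quality of instruction": [4, 4, 5, 4, 4, 3, 4, 5, 4, 4],
    "usefulness on the job":  [3, 4, 4, 3, 4, 4, 5, 3, 4, 4],
}
STANDARD = 4.0  # acceptable average, set in advance

for item, scores in ratings.items():
    mean = sum(scores) / len(scores)
    verdict = "meets standard" if mean >= STANDARD else "below standard -- revise"
    print(f"{item}: {mean:.2f} ({verdict})")
```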

6 Learning
Use a control group if practical.
Evaluate knowledge, skills, and/or attitudes both before and after the program.
Design the evaluation differently for declarative, procedural, and strategic knowledge, as well as for skills and attitudes.
Get a 100% response.
Use the results of the evaluation to take appropriate action.
Communicate results as appropriate.
Learning outcomes are measured against the requisite learning objectives and the overall training objectives. The amount of learning that occurs places an upper limit on the amount of change in job behavior that can occur. Learning should be assessed informally throughout the training and formally immediately after it.
Evaluating Training Programs, Donald Kirkpatrick
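A minimal sketch of the pre-test/post-test comparison with a control group that the slide recommends; the scores are invented for illustration:

```python
# Hypothetical test scores (percent correct), five people per group.
trained = {"pre": [52, 60, 55, 48, 63], "post": [78, 85, 80, 74, 88]}
control = {"pre": [54, 58, 50, 61, 49], "post": [56, 60, 53, 62, 51]}

def mean_gain(group: dict) -> float:
    """Average post-test minus pre-test score for one group."""
    gains = [post - pre for pre, post in zip(group["pre"], group["post"])]
    return sum(gains) / len(gains)

# The gap between the two gains is the learning plausibly attributable
# to the training rather than to testing, history, or maturation.
print(f"trained gain: {mean_gain(trained):.1f}")
print(f"control gain: {mean_gain(control):.1f}")
print(f"difference:   {mean_gain(trained) - mean_gain(control):.1f}")
```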

7 Behavior
Use a control group if practical.
Allow time for behavior change to take place.
Evaluate both before and after the program if practical.
Use surveys and/or interviews, documentation review, etc.
Get a 100% response or a sampling.
Repeat the evaluation at appropriate times.
Consider costs versus benefits.
Communicate as appropriate.
This is a more complex evaluation. Job behavior outcomes are measured in a manner consistent with the TNA. The TNA measured performance deficiencies, so whatever method you used to identify the deficiencies should be used again to see whether they have been closed. The degree to which job behavior improves places a cap on how much organizational results can improve. Table 6.6 – How to develop effective questionnaires.
Evaluating Training Programs, Donald Kirkpatrick

8 Organizational Results
Use a control group if practical.
Allow time for results to be achieved.
Measure both before and after the program if practical.
Repeat the measurement at appropriate times.
Consider costs versus benefits.
Be satisfied with evidence if proof is not possible.
Communicate as appropriate.
This is the most difficult level to evaluate. Organizational results are the highest level in the hierarchy and reflect the performance deficiency identified in the TNA, e.g., scrap, turnover, sales, grievances, quality, productivity.
Evaluating Training Programs, Donald Kirkpatrick

9 Evaluating Costs – determine whether the results were worth the costs!
Cost-benefit analysis: compares the monetary costs of training to the non-monetary benefits (attitudes, relationships, etc.).
Cost-effectiveness analysis: compares the monetary costs of training to the financial benefits accrued from the training.
Cost-savings analysis: looks at the financial value of improvement in the problem the training was intended to correct (reduction in grievances, reduction in scrap, increased customer service).
Utility analysis: looks at all the ways the trainees' improved job performance will financially benefit the organization (reduction in scrap, higher productivity, morale influences).
Calculate the revenue produced by training either as (1) revenue after training minus revenue expected without training, or (2) an itemized analysis of all revenue increased by the training. Then calculate the return on investment: ROI = revenue produced by training ÷ cost of training (see slide 12).
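A minimal sketch of the two revenue calculations and the ROI ratio named above; the function and variable names are my own:

```python
def revenue_from_training(revenue_after: float, revenue_without: float) -> float:
    """Method 1: revenue after training minus revenue expected without it."""
    return revenue_after - revenue_without

def revenue_itemized(items: list[float]) -> float:
    """Method 2: sum of an itemized analysis of revenue increased by training."""
    return sum(items)

def roi(benefit: float, cost: float) -> float:
    """Benefit-to-cost ratio; above 1 means training returned more than it cost."""
    return benefit / cost
```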

10 Actual Costs
Development Costs
Materials (20 trainees x $100): $2,000
Trainer/developer time (40 hrs x $80): $3,200
Direct Costs
Trainer's time (24 hrs x $80): $1,920
Food/beverage (20 x $20 each): $400
Material/equipment (projector rental, markers, etc.): $250
Indirect Costs
Marketing: $100
Administration (20 x $10): $200
Participant compensation (20 employees x 24 hrs x $10): $4,800
Evaluation Costs
Evaluator's time (40 hrs x $80): $3,200
Materials: $250
Total: $16,320
Guide for estimating training preparation – Table 5-2, pg. 184. See Table 5-4 for a complete description of costs and Table 5-5 for an additional example.
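The worksheet above reduces to a simple itemized sum. A sketch, with the hour figures inferred by dividing each dollar total by its hourly rate (e.g., $3,200 / $80 = 40 hrs):

```python
# Reconstructed cost worksheet from the slide, grouped by category.
costs = {
    "Development": {
        "Materials (20 trainees x $100)":                2000,
        "Trainer/developer time (40 hrs x $80)":         3200,
    },
    "Direct": {
        "Trainer's time (24 hrs x $80)":                 1920,
        "Food/beverage (20 x $20)":                       400,
        "Material/equipment":                             250,
    },
    "Indirect": {
        "Marketing":                                      100,
        "Administration (20 x $10)":                      200,
        "Participant compensation (20 x 24 hrs x $10)":  4800,
    },
    "Evaluation": {
        "Evaluator's time (40 hrs x $80)":               3200,
        "Materials":                                      250,
    },
}

total = sum(v for category in costs.values() for v in category.values())
print(f"Total training cost: ${total:,}")  # $16,320
```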

11 Cost/Benefit Analysis
Results – productivity increased by 20%. In a regular 8-hour day, each employee now produces 96 parts instead of 80.
(8 hrs x $10) / 80 parts = $1.00 per part
(8 hrs x $10) / 96 parts = $0.83 per part
The $0.17 saving per part, annualized over one year's production of 26,880 parts, comes to $4,570 per employee; for 20 trainees, 20 x $4,570 = $91,400.
Table 8-5, page 368
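The arithmetic above, step by step. Note that the slide rounds the per-part saving to $0.17 before annualizing, which is what produces $4,570 rather than the unrounded figure of about $4,480:

```python
HOURLY_WAGE = 10        # dollars per hour
HOURS_PER_DAY = 8
PARTS_BEFORE = 80       # parts per employee per day before training
PARTS_AFTER = 96        # parts per day after training (a 20% gain)
ANNUAL_PARTS = 26_880   # one year's production per employee
TRAINEES = 20

cost_before = HOURLY_WAGE * HOURS_PER_DAY / PARTS_BEFORE   # $1.00 per part
cost_after = HOURLY_WAGE * HOURS_PER_DAY / PARTS_AFTER     # ~$0.83 per part
saving = round(cost_before - cost_after, 2)                # $0.17 per part

per_employee = round(saving * ANNUAL_PARTS, -1)            # $4,570 per year
total_benefit = per_employee * TRAINEES                    # $91,400
print(f"per employee: ${per_employee:,.0f}")
print(f"all trainees: ${total_benefit:,.0f}")
```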

12 Return on Investment
ROI = benefit / cost = 91,400 / 16,320 = 5.6, or 560%, during the first year.
Whenever the ratio is greater than 1, the training achieved a positive return on its investment.
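Plugging the figures from the two previous slides into the ratio:

```python
benefit = 91_400   # first-year benefit from the cost/benefit analysis
cost = 16_320      # total training cost from the worksheet

roi = benefit / cost
print(f"ROI: {roi:.1f}, i.e. {roi:.0%}, in the first year")  # 5.6, 560%
```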

13 Who is Interested in Outcomes?
[Table: stakeholder interest in each outcome type – reaction, learning, behavior, results – for the trainer, other trainers, the training manager, trainees, trainees' supervisors, and upper management. Cell values range from Yes to Maybe to No, e.g., "No – unless no transfer".]

14 Validity
Internal validity – proof that performance improvements are attributable to the training program.
External validity – proof that the training will be equally effective for other groups who have yet to attend it.
You must establish internal validity first.

15 Threats to Internal Validity
History – events that occur concurrently with the training program and could affect outcomes; apparent learning is due to those events rather than the training.
Maturation – changes in outcomes that occur simply because of the passage of time (e.g., trainees are tired during post-testing).
Testing – when a pre-test/post-test method is used, learning can occur simply from taking the pre-test; trainees learn the test questions rather than the training content.
Instrumentation – using different pre-test and post-test questions raises concerns about whether the questions are equivalent.
Statistical Regression – the tendency of those who score at the extremes on the pre-test to regress toward the middle on the post-test; an instrument problem.
Selection – issues associated with how participants are selected into the trained group or the control group.
Mortality – people (usually the lower-scoring pre-testers) drop out of the program between the pre-test and post-test, skewing post-test results toward the high end.
Diffusion of Training – trainees share new information and knowledge with control group members in the workforce.
Compensation Treatments – the control group is given special treatment by supervisors because of its status as a "control group"; this treatment can lower the training evaluation results.
Compensatory Rivalry – when work groups are trained together, the non-trained work group may voluntarily boost productivity out of a sense of challenge and competition.
Demoralized Control Group – the control group may actually reduce its productivity out of feelings of inadequacy at not being selected for training.

16 Threats to External Validity
An evaluation must have internal validity before it can be externally valid. External validity is confidence that the internally valid results will generalize to others who undergo the training.
Testing – changing evaluation methods (e.g., not using pre-testing consistently).
Selection – make sure participants in the training program have KSAs equivalent to those of the group the training was designed for; mid-level management training may not be effective when given to technical staff.
Reaction to Evaluation – once training is shown to be effective, evaluation may stop; merely eliminating the evaluation mechanism may lower results because of the Hawthorne effect: when you change the way groups are treated, you change the results.
Multiple Techniques – be aware of changing the combination of events that occur during training; changing the order of items, or removing them, can affect the overall learning experience.

17 Basic Evaluation Designs
Post-test Only – limited uses, because change is hard to assess. It is acceptable for certification-type training and competency tests, or when data are already available for a pre-test comparison; adding a control group improves the data.
Pre-test/Post-test – can demonstrate change, although it is difficult to determine that the training was responsible for the change. It is better to use this design without a control group than not at all, but use a control group whenever possible.

18 Complex Evaluation Designs
Post-test with Control Group
Pre-test/Post-test with Control Group
Time Series (with or without control group)
Multiple Baseline
Solomon 4-Group
Control group – a group of similar employees who do not receive the training, used to determine whether the changes that take place for trainees also take place for those who are not trained. Including a control group helps determine whether differences in pre-test/post-test scores are due to the training or to some other factor. Random assignment – assigning people to either the control group or the training group purely by chance – ensures equivalency but is difficult to do in organizations; representative sampling is more realistic: match employees as closely as possible across the two groups.
Time Series – uses a series of measurements before and after training for comparison; the more data collected over a longer period of time, the greater the confidence in the results.
Multiple Baseline – multiple measures are taken before and after training, and multiple training groups are trained at different points in the data collection.
Solomon 4-Group – addresses all of the concerns about internal validity. Four groups are identified, two of which receive training: the first group is pre-tested, trained, and post-tested; the second group is pre-tested, not trained, and post-tested; the third group is trained and post-tested; and the fourth group is only post-tested (see the sketch below).
The more complex the design, the more valid the results – but also the more difficult and costly the data collection.
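One way to keep the Solomon four-group layout straight is to write it out. A sketch; the group labels G1–G4 are mine:

```python
# Solomon 4-group design: which steps each group goes through.
# Comparing the four groups separates the training effect from the
# effects of pre-testing and of the pre-test/training interaction.
design = [
    # group  pre-test  training  post-test
    ("G1",   True,     True,     True),
    ("G2",   True,     False,    True),
    ("G3",   False,    True,     True),
    ("G4",   False,    False,    True),
]
for name, pre, train, post in design:
    steps = [label for label, done in
             (("pre-test", pre), ("training", train), ("post-test", post)) if done]
    print(f"{name}: " + " -> ".join(steps))
```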

19 Conclusion
Evaluation of training is necessary and, if done correctly, can provide internally and externally valid data. Evaluation is complex and must be designed to provide the data your organization requires from it. Cost analysis is a necessary part of training evaluation.

