
1 Evaluating Training Programs

2 How can training programs be evaluated?
Measures used in evaluating training programs
Various ways of designing the evaluation procedures
The measurement process itself

3 Donald Kirkpatrick
Kirkpatrick developed a model of training evaluation in 1959.
It is arguably the most widely used approach: simple, flexible, and complete.
It is a four-level model.

4 Measures of Training Effectiveness
REACTION - how well trainees like a particular training program. Evaluating in terms of reaction is the same as measuring trainees' feelings; it doesn't measure any learning that takes place. Because reaction is easy to measure, nearly all training directors do it.

5 Reaction (cont.)
Measure participants' reactions in an organized fashion, using written comment sheets designed to obtain the desired reactions.
Design the comment sheets so that responses can be tabulated and quantified.
The training coordinator or a trained observer should make his or her own appraisal of the training to supplement participants' reactions.
The combination of the two evaluations is more meaningful than either one by itself.
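Tabulating and quantifying comment-sheet responses can be as simple as averaging ratings per question. A minimal sketch, assuming hypothetical 1-5 Likert ratings (the questions and scores below are illustrative, not from the source):

```python
from statistics import mean

# Hypothetical Likert-scale ratings (1 = poor, 5 = excellent) from
# reaction comment sheets, one list per question.
reaction_sheets = {
    "content was relevant": [5, 4, 4, 5, 3],
    "instructor was clear": [4, 4, 5, 4, 4],
    "pace was appropriate": [3, 2, 4, 3, 3],
}

# Tabulate: average rating per question, so reactions become quantifiable.
summary = {q: round(mean(r), 2) for q, r in reaction_sheets.items()}
for question, avg in sorted(summary.items(), key=lambda kv: kv[1]):
    print(f"{question}: {avg}")
```

Sorting by average surfaces the weakest-rated aspects of the program first, which is where the coordinator's supplementary appraisal is most useful.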

6 Reaction (cont.)
When training directors effectively measure participants' reactions and find them favorable, they can feel proud. But they should also feel humble; the evaluation has only just begun.
They may have done a masterful job of measuring reactions, but that is no assurance that any learning has taken place, nor an indication that participants' behavior will change because of the training. Still further away is any indication of results that can be attributed to the training.

7 Collecting reaction measures immediately after training is important:
Memory distortion can affect measures taken at a later point.
There is often a low return rate for questionnaires mailed to people long after they have completed the training.

8 Learning
Defined in a limited way: What principles, facts, and techniques were understood and absorbed by trainees? (We're not concerned here with on-the-job use of those principles, facts, and techniques.)

9 Here are some guideposts for measuring learning:
Measure the learning of each trainee so that quantitative results can be determined.
Use a before-and-after approach so that learning can be related to the program.
As much as possible, measure learning on an objective basis.
Where possible, use a control group (not receiving the training) to compare with the experimental group that does.
Where possible, analyze the evaluation results statistically so that learning can be proven in terms of correlation or level of confidence.
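The before-and-after guidepost can be sketched in a few lines: give each trainee the same quiz before and after the program and compute per-trainee gains. The scores below are hypothetical, purely for illustration:

```python
from statistics import mean

# Hypothetical pre/post quiz scores (0-100) for five trainees.
pretest  = [55, 60, 48, 62, 51]
posttest = [78, 82, 70, 85, 74]

# Per-trainee gain gives a quantitative, objective measure of learning
# that can be related directly to the program.
gains = [post - pre for pre, post in zip(pretest, posttest)]
print(f"per-trainee gains: {gains}, mean gain: {mean(gains)}")
```

By itself this shows change, not cause; the control-group designs on the later slides are what tie the change to the training.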

10 Behavior
Evaluating training in terms of on-the-job behavior is more difficult than reaction and learning evaluations, because one must consider many factors. Here are several guideposts for evaluating training in terms of behavioral changes:
Conduct a systematic appraisal of on-the-job performance on a before-and-after basis.
The appraisal should be made by one or more of the following groups (the more the better): trainees, trainees' supervisors, subordinates, peers, and others familiar with trainees' on-the-job performance.
Conduct a statistical analysis to compare before-and-after performance and to relate changes to the training.
Conduct a post-training appraisal three months or more after training, so that trainees have an opportunity to put into practice what they learned. Subsequent appraisals may add to the validity of the study.

11 Results
The objectives of most training programs can be stated in terms of desired results, such as reduced costs, higher quality, increased production, and lower rates of employee turnover and absenteeism.
It's best to evaluate training programs directly in terms of desired results, but complicating factors can make it difficult to evaluate certain kinds of programs that way.
It's recommended that training directors begin by evaluating against the criteria in the first three steps: reaction, learning, and behavior.

12 Utility Analysis
Cost-benefit analysis: compare the costs of a training program with the benefits received (both monetary and non-monetary).
Costs: direct costs, indirect costs, overhead, development costs, and participant compensation.
Benefits: improvement in trainee attitudes, job performance, quality of work, and creativity.
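The monetary side of a cost-benefit comparison reduces to simple arithmetic. A minimal sketch, where all figures (cost categories' amounts, per-trainee benefit, headcount) are assumed for illustration:

```python
# Hypothetical cost breakdown for one training program, following the
# cost categories on the slide.
costs = {
    "direct (materials, trainer fees)": 12_000,
    "indirect (admin support)":          3_000,
    "overhead (facilities share)":       2_000,
    "development":                       8_000,
    "participant compensation":         15_000,
}
monetary_benefit_per_trainee = 1_500   # e.g. estimated yearly productivity gain
trainees = 40

total_cost = sum(costs.values())
total_benefit = monetary_benefit_per_trainee * trainees
net_benefit = total_benefit - total_cost
roi_pct = 100 * net_benefit / total_cost
print(f"net benefit: {net_benefit}, ROI: {roi_pct:.0f}%")
```

Non-monetary benefits (attitudes, creativity) don't fit this arithmetic and are usually reported alongside it qualitatively.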

13 How Should a Training Evaluation Study Be Designed?
Case Study
Training >>> Measures Taken After Training
Problem: no measures are taken prior to training, so there is no way to know whether the training brought about any change.
Pretest-Posttest Design
Measures Taken Before Training >>> Training >>> Measures Taken After Training
Of limited value on its own, since a multitude of unknown factors could be the real cause of any change in performance.

14 A. PRETEST-POSTTEST METHOD
1. Most commonly used method in training.
2. Does not clearly identify training as the reason for improved knowledge or performance.
B. AFTER-ONLY DESIGN WITH A CONTROL GROUP
1. A control group is used to determine whether training made a difference.
2. No pretests are given.
3. Both groups take the posttest after training.
4. This design allows trainers to tell whether changes are due to their programs.

15 C. PRETEST-POSTTEST DESIGN WITH A CONTROL GROUP
1. Employees are randomly assigned to a treatment group or a control group.
2. Only the treatment group receives training.
3. Both groups take a posttest.
4. Advantages:
a. Pretest results confirm that the groups start out equal.
b. Statistical analysis determines whether differences in posttest results are significant.
D. TIME-SERIES DESIGN
1. Uses a number of measures both before and after training.
2. The purpose is to establish individuals' patterns of behavior and then see whether a sudden leap in performance followed the training program.
3. Weakness: because of the relatively long time period covered, changes in behavior can be attributed to circumstances other than the program.
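The pretest-posttest control-group logic can be sketched as a difference-in-differences calculation: check that the groups start out equal, then subtract the control group's change from the treatment group's change. All scores below are hypothetical:

```python
from statistics import mean

# Hypothetical scores; employees randomly assigned to treatment or control.
treat_pre,   treat_post   = [54, 58, 50, 60], [75, 80, 72, 83]
control_pre, control_post = [55, 57, 51, 59], [58, 60, 53, 62]

# 1. Pretest results confirm the groups start out roughly equal.
assert abs(mean(treat_pre) - mean(control_pre)) < 2, "groups differ before training"

# 2. Difference-in-differences: the treatment group's gain minus the
#    control group's gain isolates the change attributable to training.
effect = (mean(treat_post) - mean(treat_pre)) - (mean(control_post) - mean(control_pre))
print(f"estimated training effect: {effect:.2f} points")
```

Whatever change the control group shows (practice effects, seasonal factors) is subtracted out, which is exactly why this design beats a plain pretest-posttest.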

16 More Sophisticated Evaluation Designs
Solomon Four-Group Design: ideal for ascertaining whether a training intervention had the desired effect on trainee behavior. Unlike the designs discussed above, this design involves the use of more than one control group.
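The four groups can be laid out as a simple table: two groups take the pretest and two do not, and within each pair one is trained. A sketch of the layout (group labels are illustrative):

```python
# Solomon four-group layout: crossing pretest (yes/no) with training
# (yes/no); groups 2 and 4 are both controls, just with different pretests.
design = {
    "G1": {"pretest": True,  "training": True,  "posttest": True},
    "G2": {"pretest": True,  "training": False, "posttest": True},
    "G3": {"pretest": False, "training": True,  "posttest": True},
    "G4": {"pretest": False, "training": False, "posttest": True},
}
# Comparing G1 vs G3 (and G2 vs G4) reveals whether merely taking the
# pretest, rather than the training itself, changed posttest scores.
for name, cells in design.items():
    print(name, cells)
```

The extra control group is what lets this design separate a pretest-sensitization effect from a genuine training effect.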

17 Evaluating Statistically
When considering statistical power and lower costs, the preferred choice for analyzing a training intervention is Analysis of Variance (ANOVA) with an after-only control-group design.
The next best approach is Analysis of Covariance (ANCOVA), using the pretest score as a covariate.
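For the two-group after-only case, the one-way ANOVA F statistic can be computed by hand: the ratio of between-group to within-group variance. A sketch with hypothetical posttest scores:

```python
from statistics import mean

# Hypothetical posttest scores for an after-only control-group design.
groups = {
    "trained": [82, 78, 85, 74, 80, 77],
    "control": [70, 72, 68, 75, 71, 66],
}

# One-way ANOVA: F = (between-group variance) / (within-group variance).
all_scores = [x for g in groups.values() for x in g]
grand_mean = mean(all_scores)
k, n = len(groups), len(all_scores)

ss_between = sum(len(g) * (mean(g) - grand_mean) ** 2 for g in groups.values())
ss_within  = sum((x - mean(g)) ** 2 for g in groups.values() for x in g)
f_stat = (ss_between / (k - 1)) / (ss_within / (n - k))
print(f"F = {f_stat:.2f}")
```

With exactly two groups, F is the square of the two-sample t statistic, so ANOVA and the t-test reach the same verdict; ANOVA generalizes to comparing several training variants at once.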

18 Self-Report
Trainees are asked to evaluate themselves on variables related to the purpose of the training.
Self-report measures complicate the measurement of change because of the problems involved in defining change itself.
Three types of change with self-report data:
Alpha change
Beta change
Gamma change

19 Barriers that Discourage Training Evaluation (pp. 161-163)
Top management doesn't usually require evaluation.
Most senior-level training managers don't know how to go about evaluating training programs.
Senior-level training managers don't know what to evaluate.
Evaluation is perceived as costly and risky.

