Unit 10: Evaluating Training and Return on Investment


1 Unit 10: Evaluating Training and Return on Investment
2009

2 Unit 10, Class 1: Evaluating Training
Learning Objectives
By the end of this unit, students will:
Determine the benefits of a training program.
Calculate a benefit-cost ratio.
Calculate return on investment (ROI).
Identify when ROI evaluation is not appropriate.
Use other methods to verify training value when ROI is not appropriate.
Objectives for the class. ©SHRM 2009

3 Training Evaluation
Reluctance to evaluate:
Managers are unwilling to devote time to evaluation.
Lack of know-how, or evaluation is not seen as important.
Why evaluate?
Link to organizational strategy.
Effectiveness of training.
Return on investment.
Evaluation and analysis is the last step in the ADDIE training model. It allows an opportunity to go back to the beginning and assess the results of the training cycle. Early in the process, learning objectives were set and actions to accomplish them were determined. Evaluation allows the organization to assess whether the training met those objectives and reflects the goals established in the strategic plan.

Organizations invest millions of dollars in training programs in an effort to gain the competitive advantage generated by well-trained employees. A 2004 American Society for Training and Development (ASTD) benchmarking survey showed that small companies spent $1,194 per employee for training, while organizations with 2,000 or more employees spent $800 per employee. No one would argue that training is inexpensive, and organizations want to see a positive return for training dollars spent. As a result, training managers are increasingly asked to justify expenses and demonstrate how training dollars increase the organization's bottom-line return.

Even so, managers are sometimes unwilling to spend the time needed to evaluate training, or they simply don't know how to do it or don't see its importance. Training departments with limited budgets often assume their training has been effective and then put their dollars into new training programs instead of evaluation. Some of this reluctance is understandable, because in many cases much of the benefit of training is intangible and can be difficult or impossible to measure. What dollar value can be placed on improved employee attitudes? Priceless, certainly! Management, though, wants real numbers; priceless won't suffice as an answer.

Sources: Noe, R. A. (2008); Kruse, K. Evaluating e-Learning: Introduction to the Kirkpatrick Model. ©SHRM 2009

4 Was the training effective?
Training Evaluation Was the training effective? Training effectiveness: The benefits the organization and trainees receive from training. The main question to be determined from evaluation is whether the training was effective. Effectiveness is determined by measuring the benefits the organization receives from the training. ©SHRM 2009

5 Training Evaluation Formative evaluation: Evaluation of training that takes place during program design. May result in content change. May involve pilot test. May adjust to meet needs of the trainees. Formative evaluation starts long before the training program is completed. It begins during the program design. Evaluation in the early stages enables trainers to ensure that when the training is implemented, it is well organized and runs smoothly. Formative evaluation is also done to assess whether the trainees will learn the content intended and that they will be satisfied with the training program. Evaluation is done by meeting with focus groups or SMEs. As a result of information derived from the process, training content may be changed to ensure that it is accurate, easy to understand and appealing to the trainees. Sometimes a pilot test is done to preview the training program with potential trainees and managers. A pilot test is like a dress rehearsal of the training, and it may reveal problems that can be corrected before the actual training is implemented. When evaluation is done during the training itself, it is used to obtain feedback from participants and to make any adjustments in the training program to meet the trainees’ immediate needs. Source: Noe, R.A., (2008). ©SHRM 2009

6 Training Evaluation Summative evaluation: Evaluation conducted at the end of training. Used to determine the extent to which trainees have changed as a result of the training program. Used to measure return on investment. Summative evaluation occurs at the end of the program. It is used to determine the extent to which trainees have changed as a result of the training. It measures the change in trainees' knowledge, skills, attitudes and behaviors. It is also used to assess the organization's return on training investment. Source: Noe, R. A. (2008). ©SHRM 2009

7 Instructional Design: ADDIE Model
In spite of the common assumption that evaluation occurs only at the end of training, notice the arrows on the instructional system design model. Evaluation is in the center, with arrows indicating that evaluation occurs during each phase of the process and makes a complete loop from the initial analysis through design, development and, finally, implementation. Source: Clark, D. R. (2008). Instructional System Design, Figure 3. Retrieved 09/03/08. ©SHRM 2009

8 Evaluation Process Conduct a needs analysis.
Develop measurable learning outcomes and plan for transfer of training. Develop outcome measures. Choose an evaluation strategy. Plan and execute the evaluation. Remind students that three of the five steps in the evaluation process occur long before training is implemented. This further reiterates that evaluation is built into the training from the very beginning. Source: Noe, R. A. (2008). ©SHRM 2009

9 Kirkpatrick’s Four-Level Model of Evaluation
Level 1: Reaction Level 2: Learning Level 3: Behavior Level 4: Results In 1975, Donald Kirkpatrick first presented a four-level model of evaluation that has become a standard in the training industry. Source: Chapman, A. (2007). Kirkpatrick’s learning and training evaluation theory. Retrieved 09/03/08 from ©SHRM 2009

10 Level 1: Reaction Reaction: How did participants react to the program?
Smile sheets. Informal comments from participants. Focus group sessions with participants. In the first evaluation level, students are asked to rate the training after completing the program. These ratings are sometimes called smile sheets because in their simplest form, they ask students how well they liked the training. This level is often measured through attitude questionnaires distributed at the end of training. It can also be done through focus groups of training participants. This level measures reaction only; learners indicate whether they were satisfied with the training. It does not indicate whether learners acquired any new knowledge or skills, nor does it indicate that any new learning will be carried back to the workplace. If learners react poorly to the training and indicate dissatisfaction at this evaluation level, trainers must determine if the negative results are due to poor program design or unskilled delivery. Sources: Clark, D. R. (2008). Instructional System Design. Retrieved 09/03/08; "Why Measure Training Effectiveness?" (2008). Retrieved 09/03/08. ©SHRM 2009

11 Level 2: Learning Learning: To what extent did participants improve knowledge and skills and change attitudes as a result of the training? Pre- and post-test scores. On-the-job assessment. Supervisor reports. The second evaluation level is used to determine learning results. Did students actually learn the knowledge, skills and attitudes the program was supposed to teach? It asks the questions: What knowledge was acquired? What skills were developed or enhanced? What attitudes were changed? The results are usually determined by pre- and post-test scores and on-the-job assessments or reports from supervisors. The second evaluation level is not as widely used as the first level, but it is still very common. Sources: "Why Measure Training Effectiveness?" (2008). Retrieved 09/03/08; Clark, D. R. (2008). Instructional System Design. Retrieved 09/03/08. ©SHRM 2009

12 Level 3: Behavior Behavior: Do learners use their newly acquired skills and knowledge on the job? Transfer of training. On-the-job observation. Self-evaluation. Supervisor and peer evaluation. Kirkpatrick's third evaluation level explores the consequences of the learner's behavior. Has the learner transferred the learning back to changed performance in the workplace? Can the learner actually put the newly acquired skills to use on the job? This is referred to as transfer of training. No matter how good the training program was, if participants cannot (or will not) use the new skills and knowledge on the job, the training has little value to the employer. Ideally, this evaluation is conducted three to six months after completion of the training program. This allows time for learners to implement new skills, and retention rates can be evaluated. Evaluation is done by observation of learners on the job, or through self-evaluation or evaluation from supervisors, peers or others who work directly with the learner. Sources: Clark, D. R. (2008); Kruse, K. Evaluating e-Learning: Introduction to the Kirkpatrick Model. Retrieved 09/02/08. ©SHRM 2009

13 Level 4: Results Results: What organizational benefits resulted from the training? Difficult and costly to collect. Difficult to isolate the effects of training. Measuring return on investment. Financial reports. Quality inspections. Interviews. Kirkpatrick's level four evaluates the final results of the training. It asks the question: What effect has the training achieved? Effects can include such things as morale, teamwork and, most certainly, the monetary effect on the organization's bottom line. Management wants to know whether it received value for the training dollars spent and what the return on investment was. Collecting and analyzing evaluation data at this level can be difficult and time-consuming. Part of the difficulty comes from the challenge of isolating the training variable from other factors in the organization that may also affect learners' behaviors. When employee behavior changes, it is difficult to know if the change is the result of training or of some other environmental factor. Level four evaluations are done through financial reports, quality inspections and interviews with management personnel. Sources: Clark, D. R. (2008); Kruse, K. Evaluating e-Learning: Introduction to the Kirkpatrick Model. Retrieved 09/02/08; "Why Measure Training Effectiveness?" (2008). Retrieved 09/03/08. ©SHRM 2009

14 Levels of Evaluation vs. Value
The difficulty and cost of conducting evaluations increase as you move up the levels. Organizations and trainers must carefully consider which levels of evaluation are appropriate for which training programs. Most commonly, level one evaluations are conducted for all training. Level two (learning) evaluations are generally conducted for skills training programs, level three (behavior) evaluations for strategic programs, and level four (results) evaluations only for broad-based, high-budget training programs. Unfortunately, the easy evaluation instruments used at level one don't give results that have much value to the organization. The value of the information obtained increases as evaluation moves to higher levels. Level four (results) is the most difficult to assess and yet reveals the most valuable information. Source: Kirkpatrick, D. L., & Kirkpatrick, J. D. (2006). Evaluating Training Programs: The Four Levels. Berrett-Koehler. ©SHRM 2009

15 Design evaluation instruments for your training project.
Students should design evaluation instruments that reflect Kirkpatrick’s four levels of evaluation. ©SHRM 2009

16 Unit 10, Class 2: Return on Investment: Benefit-Cost Ratio
Aids in decision-making process. Consistent analysis across programs. Information difficult to obtain. Increased competition for investment dollars requires organizations to decide whether to invest in training or in something else. A well-designed benefit-cost analysis can aid in the decision-making process by allowing several different investment options to be compared with each other. The problem is that some benefits derived from training can be intangible and difficult to quantify. How do you measure and put a dollar value on increased morale or better teamwork? Consequently, gathering and compiling the information needed for an accurate benefit-cost analysis can be a complicated task. Source: U.S. Dept. of Labor. Retrieved 09/02/08. ©SHRM 2009

17 Return on Investment
Benefit-Cost Ratio = Program Benefits / Program Costs
$2,500 / $1,000 = 2.5:1
As organizations tighten budgets and scrutinize costs, some wonder if the investment in training is worth it. Most trainers assess results by using at least some of Kirkpatrick's four levels of evaluation. When management questions the value returned in exchange for the money spent, training directors must go further to justify the investment in time and money that training requires. Some suggest that there should be a fifth level in Kirkpatrick's model, focusing on return on investment.

The most common method of measuring return on investment is to calculate a benefit-cost ratio. First, training managers must determine the total cost of the training. This includes both direct costs (printing, equipment rental, etc.) and indirect costs (overhead, productivity loss, etc.). Then a dollar value must be determined for the benefit of the training. This is where it gets difficult. If training has directly increased productivity, that may not be too hard to calculate. For benefits that are less tangible, however, such as improved morale or better teamwork, assigning a dollar value can be difficult.

Once total dollar figures for cost and benefit are known, the ratio is determined by dividing the value of the program benefits by the cost of the program. The example in the slide indicates a training cost of $1,000 and a training benefit of $2,500, which computes to a benefit-cost ratio of 2.5:1. This means that for every $1 of cost invested, the organization derived a benefit of $2.50. A 2.5:1 ratio would be a very worthwhile investment! But any ratio less than 1:1 indicates a loss for the organization, because the costs of the program outweighed the benefits. What is the ratio on the next slide? Source: U.S. Dept. of Labor. Retrieved 09/02/08. ©SHRM 2009
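The division described above can be sketched in a few lines of Python. This is a minimal illustration of the slide's arithmetic; the function name is ours, not from the source.

```python
def benefit_cost_ratio(benefits: float, costs: float) -> float:
    """Benefit-cost ratio: dollars of benefit per $1 of program cost."""
    if costs <= 0:
        raise ValueError("program costs must be positive")
    return benefits / costs

# Slide example: $2,500 benefit against $1,000 cost.
print(f"{benefit_cost_ratio(2500, 1000):.1f}:1")    # 2.5:1

# The next slide's example: $6,500 benefit against $8,495 cost.
print(f"{benefit_cost_ratio(6500, 8495):.3f}:1")    # 0.765:1
```

Any result below 1.0 signals that program costs outweighed the benefits, matching the 1:1 break-even point described on the slide.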

18 What is the benefit-cost ratio?
Program benefits: $6,500
Program costs: $8,495
What is the benefit-cost ratio? Use the same calculation process as before: benefits divided by costs. A $6,500 benefit divided by an $8,495 cost results in a ratio of 0.765:1. Not a good return for the investment! This program returned only 76.5 cents for every dollar spent. Maybe the organization should scrap this program. Or maybe not. Ask students to discuss why an organization might continue with a program that costs more than it returns. After discussion, remind them that there are sometimes hidden or social benefits that are not quantifiable. In addition, some benefits may seem costly in the short term, but in the long term they may generate positive results for the organization. Source: U.S. Dept. of Labor. Retrieved 09/02/08. ©SHRM 2009

19 What About ROI?
ROI (%) = (Program Benefit / Program Cost) x 100
($2,500 / $1,000) x 100 = 250%
ROI = 250%
Return on investment (ROI) is calculated much the same way as the benefit-cost ratio, except that ROI is expressed as a percentage instead of a ratio. Using the same example as before, the calculation demonstrates a 250 percent return on investment. An ROI of 100 percent would be the break-even point, where costs and benefits are exactly equal. Any percentage less than 100 means the program has a net cost; in other words, the program cost more than the benefit received. ©SHRM 2009
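The percentage version of the calculation can be sketched the same way. Note that the code follows this deck's definition (benefit divided by cost, times 100), under which 100% is break-even; the helper name is ours.

```python
def roi_percent(benefit: float, cost: float) -> float:
    """ROI as defined on this slide: (program benefit / program cost) x 100.

    Under this definition, 100% is break-even. The more common net formula,
    ((benefit - cost) / cost) x 100, puts break-even at 0% instead.
    """
    if cost <= 0:
        raise ValueError("program cost must be positive")
    return benefit / cost * 100

# Slide example: $2,500 benefit against $1,000 cost.
print(roi_percent(2500, 1000))  # 250.0
```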

20 Determining Benefits Measuring training benefits:
Benefit measures must reflect the training objectives. Literature summaries of benefits of specific training. Assessment of pilot training programs. Observations of successful trainees. Estimates from trainees and managers. As demonstrated in the previous slides, the math is easy. The difficulty comes in calculating the value of the training benefit. How should it be done? First, to determine the benefits of the training, the organization must go back and review the original reasons the training was conducted. What were the original objectives for the training program, and were they accomplished? How should the accomplishments be measured? In some cases, academic research is available, and a search of practitioner literature may summarize the benefits derived from specific training programs. For example, OSHA has a number of success cases that identify concrete examples of the effect of training on an organization's bottom line. Another method might be to conduct a pilot test. Pilot training programs can assess the value of the benefits derived from a small group of employees before the full-scale training program is implemented. Observation of successful trainees compared with untrained employees can demonstrate the productivity impact of a specific training program. Trainees and their managers can provide estimates of the benefits of training after completion of a program. Sources: Noe, R. A. (2008); U.S. Department of Labor. (2007). Anthony Forest Products Saves Over $1 Million by Investing $50,000 in Safety and Health. Retrieved 09/07/08. ©SHRM 2009

21 Programs Best Suited for ROI Analysis
Training appropriate for ROI analysis: Clearly identified outcomes. Not one-time events. Broad-based and highly visible in the organization. Strategically focused. Training effects can be isolated. Remember from the Kirkpatrick model that the higher the level of evaluation, the more costly and difficult it will be to conduct the evaluation. Therefore, it is important to remember that ROI analysis may not be appropriate for all training programs. Training programs best suited for ROI analysis must have clearly identified outcomes from which the benefit can be determined. They should be a reflection of the goals set in the organization’s strategic plan. These are broad-based across the organization and not one-time training events. The effects of training can be isolated to ensure that the benefit is not a reflection of other organizational factors. Source: Noe, R. A. (2008). ©SHRM 2009

22 When ROI Isn’t Appropriate
Justifying training when ROI isn’t the answer: Success cases. Measuring the payback period. The consequences of NOT training. Focus on most important programs. Make training a true business partnership. When an ROI analysis is not practical or simply impossible for a particular training program, training managers will be challenged to find other ways to justify the value of the training. Smile sheets are good, but upper management wants to see real results that generate a positive business impact. Success cases demonstrate business value through credible stories that show economic effect. This information can be gathered through short, behavior-based questionnaires that trainees complete after the training. These instruments are designed to examine transfer of training and to test some fundamentals regarding content or skills that the learning was supposed to deliver. The questionnaires may be used to interview a strategic sampling of the trainees to learn how they are applying the learning in their job and how that leads to business value for the organization. Measuring the payback period of the training investment is a projection of how long it will take the organization to recover its investment in training. This is a simple calculation where the total training investment is divided by the annual savings generated by the training. For example, if the organization spent $100,000 on safety training and the safety training was expected to save $40,000 per year in safety expense and workers’ compensation insurance, the payback period would equal 2.5 years. If the training program had a three-year useful life, it would be a good deal. If the useful life of the training was only one year and it would then have to be done again, obviously this would not be a good investment. What about the consequences of not training employees? 
If you can demonstrate that not training your workforce will make the organization less competitive, less productive, etc., you will have a stronger argument for the positive benefit of training. This is a likely argument in areas such as EEO compliance. In this case, you must identify a potential risk of loss for the organization, predict its business impact and compare that loss to the cost of training. Sources: IOMA (June 2003). 5 Creative Ways to Measure Training’s Return-On-Investment. IOMA’s Report on Managing Training & Development, IOMA (February 2005). How Senior Managers Really Want You to Prove the Value of Training. IOMA’s Report on Managing Training & Development, ©SHRM 2009
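The payback-period arithmetic described above (total investment divided by annual savings) can be sketched as follows; the function name is illustrative, not from the source.

```python
def payback_period_years(total_investment: float, annual_savings: float) -> float:
    """Years needed for annual savings to recover the training investment."""
    if annual_savings <= 0:
        raise ValueError("annual savings must be positive")
    return total_investment / annual_savings

# Slide example: $100,000 of safety training expected to save $40,000 per year.
print(payback_period_years(100_000, 40_000))  # 2.5
```

As the slide notes, a 2.5-year payback is worthwhile only if the training's useful life exceeds that period.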

23 Training and Organization Success
[Slide diagram: the training cycle (Strategic Planning, Assessment, Design, Development, Implementation, Evaluation).]
This is where training comes full circle, back to the organization's strategic plan. Senior management will always be looking for trainers to justify the expense of training. Consequently, training managers must remember to focus training on issues that are most important to the organization and that will achieve strategic objectives. In this way, trainers will become true partners in the organization's success.

