
Slide 1: Chapter Eleven
Training Evaluation
© 2010 by Nelson Education Ltd.

Slide 2: Learning Outcomes
- Define training evaluation and the main reasons for conducting evaluations
- Discuss the barriers to evaluation and the factors that affect whether or not an evaluation is conducted
- Describe the different types of evaluations
- Describe the models of training evaluation and the relationship among them

Slide 3: Learning Outcomes
- Describe the main variables to measure in a training evaluation and how they are measured
- Discuss the different types of designs for training evaluation, as well as their requirements, limits, and when they should be used

Slide 4: Instructional Systems Design Model
[Figure: the Instructional Systems Design (ISD) model]

Slide 5: Instructional Systems Design Model
Training evaluation is the third step of the ISD model and consists of two parts:
- Evaluation criteria: what is being measured
- Evaluation design: how it will be measured
These concepts are covered in the next two chapters. Each has a specific and important role to play in the effective evaluation of training and the completion of the ISD model.

Slide 6: What Is Training Evaluation?
The process used to assess the value, or worthiness, of training programs to employees and to organizations.

Slide 7: Training Evaluation
- Training evaluation is not a single procedure but a continuum of techniques, methods, and measures
- It ranges from simple to elaborate procedures
- The more elaborate the procedure, the more complete the results, but usually the more costly in time and resources
- Select the procedure based on what makes sense and what can add value within the resources available

Slide 8: Why Conduct Training Evaluations?
- Assist managers in identifying what, and who, should be trained
- Determine the cost-benefit of a program (a minimal calculation sketch follows this list)
- Determine whether a training program has achieved the expected results
- Diagnose the strengths and weaknesses of a program and pinpoint needed improvements
- Justify and reinforce the value of training
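To make the cost-benefit purpose concrete, here is a minimal sketch of a training return-on-investment calculation in Python. The figures and the `training_roi` helper are hypothetical illustrations, not values or formulas from the text.

```python
def training_roi(benefits: float, costs: float) -> float:
    """Return on investment for a training program, as a percentage:
    ROI = (benefits - costs) / costs * 100."""
    return (benefits - costs) / costs * 100.0

# Hypothetical figures for illustration only.
costs = 40_000.0     # design, delivery, and trainee time
benefits = 70_000.0  # estimated productivity gains attributed to the program

print(f"Net benefit: ${benefits - costs:,.0f}")      # $30,000
print(f"ROI: {training_roi(benefits, costs):.0f}%")  # 75%
```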

Slide 9: Barriers to Training Evaluation
Barriers fall into two categories:
1. Pragmatic: evaluation requires specialized knowledge and can be intimidating, and data collection can be costly and time consuming
2. Political: evaluation has the potential to reveal the ineffectiveness of training

Slide 10: Types of Training Evaluation
Evaluations may be distinguished from each other with respect to:
1. The data gathered and analyzed
2. The fundamental purpose of the evaluation

Slide 11: Types of Training Evaluation
1. The data gathered and analyzed:
   a. Trainee perceptions, learning, and behaviour at the conclusion of training
   b. Psychological forces that operate during training
   c. Information about the work environment, such as transfer climate and learning culture

Slide 12: Types of Training Evaluation
2. The purpose of the evaluation:
   a. Formative: provides data about various aspects of a training program
   b. Summative: provides data about the worthiness or effectiveness of a training program
   c. Descriptive: provides information that describes trainees once they have completed a training program
   d. Causal: provides information to determine whether training caused the post-training behaviours

Slide 13: Models of Training Evaluation
A. Kirkpatrick's Hierarchical Model
The oldest, best known, and most frequently used model. The four levels of training evaluation:
- Level 1: Reactions
- Level 2: Learning
- Level 3: Behaviours
- Level 4: Results
(A sketch pairing each level with a sample measure follows.)
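One way to make the four levels concrete is to pair each with the kind of measure typically collected at that level. A minimal Python sketch; the sample measures are illustrative, not prescribed by the model.

```python
# Hypothetical pairing of Kirkpatrick's four levels with sample measures.
kirkpatrick_levels = {
    1: ("Reactions", "post-course satisfaction and utility ratings"),
    2: ("Learning", "pre/post knowledge test scores"),
    3: ("Behaviours", "supervisor observations of on-the-job transfer"),
    4: ("Results", "organizational outcomes such as sales or error rates"),
}

for level, (name, sample_measure) in kirkpatrick_levels.items():
    print(f"Level {level} ({name}): e.g., {sample_measure}")
```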

Slide 14: Models of Training Evaluation
Kirkpatrick's model provides a systematic framework for assessing training:
- The four levels are presented in a hierarchy, with each level providing more important information than the preceding one
- The model assumes all levels are positively related to each other and that each has a causal effect on the next level
- The contributions of the Kirkpatrick model should not be underestimated: it is clear and simple, it demystified evaluation, and it provided the impetus for research

Slide 15: Models of Training Evaluation
There is general agreement that the four levels are important outcomes to be assessed, but there are some critiques:
- Doubt about the model's validity
- It is insufficiently diagnostic
- It requires all training evaluations to rely on the same variables and outcome measures

Slide 16: Models of Training Evaluation
B. COMA Model
A training evaluation model that involves the measurement of four types of variables:
1. Cognitive
2. Organizational environment
3. Motivation
4. Attitudes

Slide 17: Models of Training Evaluation
The COMA model improves on Kirkpatrick's model in three ways:
1. It integrates a greater number of measures
2. Its measures are causally related to training success
3. It defines variables with greater precision
However, the COMA model is relatively new, so it is too early to determine its value; its focus is on factors that affect transfer only; and it does not specify how evaluations should be conducted.

Slide 18: Models of Training Evaluation
C. Decision-Based Evaluation (DBE) Model
A training evaluation model that specifies the target, focus, and methods of evaluation.

Slide 19: Models of Training Evaluation
The Decision-Based Evaluation model goes further than either of the two preceding models:
1. It identifies the target of the evaluation
2. It identifies the evaluation's focus
3. It suggests methods
4. It is general enough to apply to any evaluation goal
5. It is flexible: the evaluation is guided by its target

Slide 20: Models of Training Evaluation
- As with COMA, the DBE model is recent and will need to be tested more fully
- All three models require specialized knowledge to complete the evaluation, which can limit their use in organizations that lack this knowledge
- Holton and colleagues' Learning Transfer System Inventory (seen in Chapter 10) provides a more generic approach; see Training Today 11.2 for more on its use for evaluation

Slide 21: Training Evaluation Variables
- Training evaluation requires that data be collected on important aspects of training
- Some of these variables have been identified in the three models of evaluation
- A more complete list of variables is presented in Table 11.1; Table 11.2 shows sample questions and formats for measuring each type of variable

Slide 22: Training Evaluation Variables
A. Reactions
B. Learning
C. Behaviour
D. Motivation
E. Self-efficacy
F. Perceived/anticipated support
G. Organizational perceptions
H. Organizational results
(See Table 11.1 in the text.)

Slide 23: Training Evaluation Variables
A. Reactions
1. Affective reactions: measures that assess trainees' likes and dislikes of a training program
2. Utility reactions: measures that assess the perceived usefulness of a training program
(A minimal scoring sketch follows.)
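A hedged sketch of how affective and utility reaction items might be scored from a 5-point Likert questionnaire. The items and the scoring scheme are hypothetical; the text prescribes no particular format.

```python
from statistics import mean

# Hypothetical 5-point Likert responses (1 = strongly disagree, 5 = strongly agree)
# from one trainee's post-course reaction questionnaire.
affective_items = {
    "I enjoyed the training program": 4,
    "The facilitator kept the sessions engaging": 5,
}
utility_items = {
    "The training is relevant to my job": 3,
    "I will be able to apply what I learned": 4,
}

# Average each subscale separately: affective and utility reactions
# are distinct measures and should not be collapsed into one score.
print(f"Affective reaction score: {mean(affective_items.values()):.1f} / 5")
print(f"Utility reaction score:   {mean(utility_items.values()):.1f} / 5")
```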

Slide 24: Training Evaluation Variables
B. Learning
Learning outcomes can be measured by:
1. Declarative learning: the acquisition of facts and information; by far the most frequently assessed learning measure
2. Procedural learning: the organization of facts and information into a smooth behavioural sequence

Slide 25: Training Evaluation Variables
C. Behaviour
Behaviours can be measured using three approaches:
1. Self-reports
2. Observations
3. Production indicators

Slide 26: Training Evaluation Variables
D. Motivation
Two types of motivation in the training context:
1. Motivation to learn
2. Motivation to apply the skill on the job (transfer)
E. Self-efficacy
The beliefs that trainees have about their ability to perform the behaviours that were taught in a training program.

Slide 27: Training Evaluation Variables
F. Perceived and/or Anticipated Support
Two important measures of support are:
1. Perceived support: the degree to which the trainee reports receiving support in attempts to transfer the learned skills
2. Anticipated support: the degree to which the trainee expects to be supported in attempts to transfer the learned skills

Slide 28: Training Evaluation Variables
G. Organizational Perceptions
Two scales designed to measure organizational perceptions:
1. Transfer climate: can be assessed via a questionnaire that identifies eight sets of "cues"
2. Continuous learning culture: can be assessed via the questionnaire presented in Trainer's Notebook 4.1 in Chapter 4 of the text

Slide 29: Training Evaluation Variables
G. Organizational Perceptions (cont'd)
Transfer climate cues include:
- Goal cues
- Social cues
- Task and structural cues
- Positive feedback
- Negative feedback
- Punishment
- No feedback
- Self-control

Slide 30: Training Evaluation Variables
H. Organizational Results
Results information consists of:
1. Hard data: results that can be measured objectively (e.g., number of items sold)
2. Soft data: results that are assessed through perceptions and judgments (e.g., attitudes)
3. Return on expectations: a measure of a training program's ability to meet managerial expectations

Slide 31: Data Collection Designs in Training Evaluation
The manner in which data collection is organized and how the data will be analyzed.
- All data collection designs compare the trained person to something

Slide 32: Data Collection Designs in Training Evaluation
1. Non-experimental designs: the comparison is made to a standard, not to another group of (untrained) people
2. Experimental designs: the trained group is compared to another group that does not receive the training, and people are assigned to the training and non-training groups at random
3. Quasi-experimental designs: the trained group is compared to another group that does not receive the training, but the assignment of people to the training and non-training groups is not random

Slide 33: Data Collection Designs in Training Evaluation
Different types of evaluation designs:
- Design A: The single group post-only design*
- Design B: The single group pre-post design*
- Design C: The time series design*
- Design D: The single group design with control group**
- Design E: The pre-post design with control group**
- Design F: The time series design with comparison group**
- Design G: The internal referencing strategy***

* Non-experimental designs
** Causal designs: experimental or quasi-experimental
*** Hybrid design that permits some of the conclusions drawn from causal designs

Slide 34: Data Collection Designs in Training Evaluation
[Figure: overview of the evaluation designs]

Slide 35: Data Collection Designs in Training Evaluation
[Diagrams of pre- and post-training measurement points]
- Design A: Single group post-only design (non-experimental)
- Design B: Single group pre-post design (non-experimental)

Slide 36: Data Collection Designs in Training Evaluation
[Diagrams of pre- and post-training measurement points for trained and untrained groups]
- Design C: Time series design (non-experimental)
- Design D: Single group design with control group

Slide 37: Data Collection Designs in Training Evaluation
[Diagrams of pre- and post-training measurement points for trained and untrained groups]
- Design E: Pre-post design with control group
- Design F: Time series design with comparison group
(A minimal analysis sketch for Design E follows.)
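To illustrate how Design E data might be analyzed, here is a minimal sketch comparing pre-post gains for a trained group against an untrained control group. The scores are hypothetical, and the gain-score comparison is one common analysis choice, not one the slides prescribe.

```python
from statistics import mean

# Hypothetical test scores (0-100) before and after the program.
trained_pre, trained_post = [55, 60, 58, 62], [75, 82, 78, 80]
control_pre, control_post = [56, 59, 61, 57], [60, 61, 63, 59]

trained_gain = mean(trained_post) - mean(trained_pre)  # improvement with training
control_gain = mean(control_post) - mean(control_pre)  # improvement without training

# The estimated training effect is the gain beyond what the control group shows.
print(f"Trained gain: {trained_gain:.1f}")   # 20.0
print(f"Control gain: {control_gain:.1f}")   # 2.5
print(f"Estimated training effect: {trained_gain - control_gain:.1f} points")  # 17.5
```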

Slide 38: Data Collection Designs in Training Evaluation
[Diagram of pre- and post-training scores on items covered by the training (relevant) versus items not covered (irrelevant)]
- Design G: Internal referencing strategy
(A minimal sketch of this logic follows.)
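A hedged sketch of the internal referencing logic: trainees are tested on items the training covered (relevant) and items it did not (irrelevant), and training is credited only with the extra improvement on the relevant items. The scores are hypothetical.

```python
from statistics import mean

# Hypothetical pre/post scores on items covered by training (relevant)
# and items deliberately not covered (irrelevant).
relevant_pre, relevant_post = [50, 55, 52], [78, 80, 76]
irrelevant_pre, irrelevant_post = [51, 54, 50], [56, 58, 55]

relevant_gain = mean(relevant_post) - mean(relevant_pre)
irrelevant_gain = mean(irrelevant_post) - mean(irrelevant_pre)

# If training (rather than retesting or maturation) drove the change,
# relevant items should improve noticeably more than irrelevant ones.
print(f"Gain on relevant items:   {relevant_gain:.1f}")
print(f"Gain on irrelevant items: {irrelevant_gain:.1f}")
print(f"Training-specific gain:   {relevant_gain - irrelevant_gain:.1f}")
```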

Slide 39: Data Collection Designs in Training Evaluation
- Decisions about which data collection design to use need to be made at the training design stage
- For example, if a trainer wants pre- and post-training data collection, this must be factored into the design and administration of the program at the design stage

Slide 40: Summary
- The main purposes for evaluating training programs, as well as the barriers to evaluation, were discussed
- Three models of training evaluation (Kirkpatrick, COMA, and DBE) were presented, critiqued, and contrasted
- The Kirkpatrick model was recognized as the most frequently used, yet it has some limitations
- The variables required for an evaluation were described, along with the methods and techniques required to measure them
- The main types of data collection designs were presented
- Factors influencing the choice of data collection design were discussed

