
1 Are Your e-Learners Learning? How to develop online Level 2 evaluations quickly and effectively
Gus Prestera, PhD, CPT
President, effectPerformance, Inc.
Instructional Design Consultant
April 26, 2004

2 Why assess learning?

3 Are your PALs aligned? [Diagram: the Performance Task lives in the performance context (e.g., the workplace), while the Learning Task and Assessment Task live in the learning context (e.g., the classroom).]

4 Agenda
- Rapid Prototyping
- Before Test Development…
- 4-Step Test Development Process
- Practice
- Discussion

5 Rapid Prototyping: Develop a functional prototype quickly, test and refine it until it is accepted, and then proceed with full development.

6 Rapid Prototyping
[Cycle diagram: Develop → User test → Refine]
- Reverse-engineering: minimal upfront analysis
- Iterative and incremental approach: continuous improvement, progressive refinement
- User-centric: reliant on user input and user feedback; testing under realistic conditions

7 Thiagi's Rapid ID Model (Thiagarajan, 1999)
Strategy 1. Speed up the process
Strategy 2. Use a partial process
Strategy 3. Incorporate existing instructional materials
Strategy 4. Incorporate existing non-instructional materials
Strategy 5. Use templates
Strategy 6. Use computers and recording devices
Strategy 7. Involve more people
Strategy 8. Make efficient use of subject matter experts
Strategy 9. Involve trainees in speeding up instruction
Strategy 10. Use performance support systems

8 My Approach
- Prioritize: spend time on what matters most
- Produce: move from abstract to concrete fast
- Pilot: don't guess, just see if it works
- Learn: creative processes are iterative
- Listen: involve learners early and often
- Leverage: use technology, templates, and EPSS tools
- Streamline: reduce process complexity, inefficiencies, and redundancies to cut cycle time and costs
- Align: maintain PAL alignment

9 Before Test Development… Source: Prestera, 2004a

10 Front-End Analysis
- What are the performance gaps or opportunities?
- What are the root causes?
- What interventions will close those gaps?
- Which are skill gaps, i.e., are caused by gaps in knowledge, skills, or attitudes?
- What skill gaps can/should be addressed through training?

11 Training Needs Assessment (TNA) (Prestera, 2004b)
- Identify critical skills
- Prioritize the skill set by:
  - Difficulty of implementation
  - Potential impact
  - Type of cognitive process
  - Type of knowledge (Krathwohl, 2002)
- Survey skill needs

12 Tool: IRC Worksheet
High-IRC skills are more difficult to implement, have a high potential impact on the organization, and require the most instructional resources to develop/influence.
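
The worksheet itself is a spreadsheet, so the sketch below is only a rough illustration of the prioritization idea: each skill is rated on a hypothetical 1-5 scale for the three attributes the slide names (implementation difficulty, potential impact, instructional resources) and ranked by the total. The equal weighting, the scale, and the example skills are all assumptions, not the worksheet's actual scoring scheme.

# Hypothetical IRC-style prioritization: each skill rated 1-5 on the three
# attributes named on the slide. Equal weighting is an assumption, not the
# worksheet's documented formula.
skills = {
    # skill: (implementation difficulty, potential impact, resources required)
    "Conduct a job interview": (4, 5, 4),
    "Process coupon transactions": (2, 4, 2),
    "Recall product specifications": (1, 3, 1),
}

# Higher totals mark "high IRC" skills: hardest to implement, highest impact,
# and most resource-intensive, so they warrant the most instructional attention.
for skill, ratings in sorted(skills.items(), key=lambda kv: -sum(kv[1])):
    print(f"{sum(ratings):>2}  {skill}")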

13 Tool: TNA Survey
This tool automates the survey development and analysis, quickly informing you of which skills have a high perceived training need and which have a low perceived training need and can be addressed through non-training interventions, if at all.

14 Test Development: A 4-Step Process (Prestera, 2004a)

15 Step 1: Identify Criteria
- Form a panel (3-7 people) of exemplar workers and subject matter experts (SMEs)
- Review the skill set
- Brainstorm assessment criteria

16 Step 2: Develop Test
- Is the skill well-defined or ill-defined? Is there a set of right and wrong ways of doing things, or is right/wrong more dependent upon perspective, degrees of rightness, and context?
- Does the skill need to be physically performed (motor, psychomotor, and some procedural tasks) or mentally performed (decision-making, problem-solving, remembering, analyzing, synthesizing, evaluating)?
- Write test items and test instructions
- Review with an SME for content (Nitko, 1996)
- Review for grammar, spelling, etc.

17 Test Format Matrix
Two dimensions define the test format:
- Scoring: objective (correct/incorrect) vs. subjective (rating scales)
- Task: performance (performance tests, simulations, projects, apprenticeships) vs. knowledge (MC, TF, matching, fill-in, short answer, essay, report)
Crossing them yields four formats: Objective/Performance, Subjective/Performance, Objective/Knowledge, and Subjective/Knowledge.
Test writing guidelines: http://taesig.8m.com/createcon.html
Note: There can be an element of objectivity in almost any subjective judgment, and there is certainly subjectivity in any objective judgment; the same overlap exists between performance tasks and knowledge tasks, so do not get hung up on the labels.

18 Practice: What format would you use?
1. Cashier's ability to distinguish between valid and invalid coupons
2. Cashier's ability to process transactions involving coupons at the cash register
3. Salesperson's product knowledge
4. Designer's ability to select the right test format
5. Manager's ability to apply laws and regulations governing hiring practices
6. Manager's ability to conduct a job interview
7. Salesperson's ability to use product knowledge to help customers make good product decisions

19 Remember your PALs. [Diagram repeated from slide 3: Performance Task in the performance context (e.g., the workplace); Learning Task and Assessment Task in the learning context (e.g., the classroom).]

20 Step 3: Pilot Test
- It is difficult to write good test items, but so easy to write bad ones
- Use a random sample of actual learners
- Alternative: a two-group approach, using a group of average learners with no training and a group of exemplars
- After data collection, copy the data into the Item Analysis Tool
- Set the parameters and you're ready! (Practice Set 1)
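
The workshop's Item Analysis Tool is spreadsheet-based; purely as an illustration of the data layout it expects, here is a minimal sketch in Python (the language choice and the sample data are assumptions): one row per pilot learner, one column per test item, scored 1 for correct and 0 for incorrect.

import numpy as np

# Hypothetical scored pilot responses: rows = learners, columns = items;
# 1 = answered correctly, 0 = answered incorrectly.
responses = np.array([
    [1, 1, 0, 1, 1],
    [1, 0, 0, 1, 1],
    [0, 1, 1, 1, 0],
    [1, 1, 0, 0, 1],
    [0, 0, 0, 1, 0],
])
totals = responses.sum(axis=1)  # each learner's total test score

The reliability, discrimination, and difficulty sketches that follow all assume this 0/1 layout.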


22 Set the Parameters

23 After these preliminary steps, you are ready to interpret the results of the Test and Item Analysis

24 Step 4: Revise Test
- Interpret indicators: reliability estimates, item difficulty (p), item discrimination (d)
- Revisit criteria
- Revise test items
- Pilot again

25 Test Reliability
- Is it measuring consistently? How often is your watch accurate? Would you use it if it were accurate only 50% of the time?
- Reliability estimates: KR-20 and KR-21 (Kuder & Richardson, 1937); alpha (Cronbach, 1951)
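
As a minimal computational sketch of the estimates named above, using the same hypothetical 0/1 response layout as the pilot-test sketch (KR-21 is omitted because it is just KR-20 under the added assumption that all items are equally difficult):

import numpy as np

# Rows = learners, columns = items; 1 = correct, 0 = incorrect.
responses = np.array([
    [1, 1, 1, 0, 1],
    [1, 0, 1, 1, 1],
    [0, 1, 0, 0, 1],
    [1, 1, 1, 1, 0],
    [0, 0, 1, 0, 0],
])

k = responses.shape[1]                         # number of items
var_total = responses.sum(axis=1).var(ddof=0)  # variance of total scores

# KR-20 (Kuder & Richardson, 1937), for dichotomously scored items:
p = responses.mean(axis=0)                     # proportion correct per item
kr20 = (k / (k - 1)) * (1 - (p * (1 - p)).sum() / var_total)

# Cronbach's alpha (Cronbach, 1951) generalizes KR-20 beyond 0/1 scoring;
# with dichotomous items, as here, the two estimates coincide.
alpha = (k / (k - 1)) * (1 - responses.var(axis=0, ddof=0).sum() / var_total)

print(f"KR-20 = {kr20:.2f}, alpha = {alpha:.2f}")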

26 Item Discrimination (d)
- Is it measuring accurately? (Discriminant validity)
- Does the question differentiate between those who know their stuff and those who don't?
- If your watch were reliable but consistently told you the wrong time, would you keep it?
- d is the key indicator (Sullivan, Wircenski & Major, 1999):
  - d > .1: good question
  - 0 < d < .1: weak question
  - d < 0: bad question
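
The slide does not show how its tool computes d; a common choice (assumed here) is the upper/lower-group index: sort learners by total score, take the top and bottom fractions, and subtract the lower group's proportion correct from the upper group's, item by item.

import numpy as np

def discrimination_index(responses, frac=0.27):
    """Upper/lower-group discrimination index d for each item.

    d = proportion correct in the top-scoring group minus proportion
    correct in the bottom-scoring group. The 27% group size is a
    conventional choice, assumed here rather than taken from the slide.
    """
    order = np.argsort(responses.sum(axis=1))  # learners sorted by total score
    n = max(1, int(round(frac * len(responses))))
    return responses[order[-n:]].mean(axis=0) - responses[order[:n]].mean(axis=0)

# Same hypothetical 0/1 layout as the pilot-data sketch.
responses = np.array([
    [1, 1, 0, 1],
    [1, 0, 1, 1],
    [0, 1, 0, 1],
    [1, 1, 1, 0],
    [0, 0, 0, 1],
    [1, 0, 1, 1],
])
d = discrimination_index(responses)
print(d)  # judge each item with the rule of thumb above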

27 Item Difficulty (p)
- How difficult was that question? What are the odds that a learner will get it right in the future?
- Good questions are challenging but feasible
- Too easy: is training even necessary for that skill?
- Too hard: is current training for that skill effective?
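
In the classical test theory reading, p is simply the proportion of pilot learners who answered the item correctly, which also serves as the estimated probability (strictly a probability rather than odds) that a comparable future learner will get it right. A minimal sketch with hypothetical data:

import numpy as np

# Same hypothetical 0/1 layout as the pilot-data sketch.
responses = np.array([
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 1, 1],
    [1, 1, 0, 0],
])

# p = proportion answering each item correctly; a very high p suggests the
# item may be too easy (is training even needed?), a very low p suggests
# the item or the training needs work.
p = responses.mean(axis=0)
print(p)  # -> [0.75 0.75 0.25 0.75]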


29 [Chart of Item Difficulty (p) and Item Discrimination (d) results]

30 So what's in a curve? This distribution is not normal: because so many items are extremely difficult, scores pile up at the low end, leaving a long right tail (a positive skew).
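
One quick way to check this numerically before trusting the analysis is to compute the skewness of the pilot scores; a minimal sketch with hypothetical totals (positive skew means scores pile up at the low end with a long right tail, as on this slide):

import numpy as np

# Hypothetical total scores from a test with many very hard items.
totals = np.array([2, 3, 1, 2, 4, 2, 3, 1, 2, 9, 11])

mu, sigma = totals.mean(), totals.std(ddof=0)
skew = (((totals - mu) / sigma) ** 3).mean()
print(f"skewness = {skew:.2f}")  # > 0 here; near 0 would suggest normality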

31 A Normal Distribution

32 Practice: Interpreting Analysis Results (Practice Sets 2 and 3)

33 Discussion

34 Benefits
- Simple-to-use tool makes item analysis fast and easy
- Tests with high face, discriminant, and ecological validity as well as reliability
- Validation promotes a sense of fairness in the test process
- Assessments create a sense of learner accountability
- High-quality tests drive high-quality training
- Concrete understanding of client needs
- Iterative cycle enables test development to inform design decisions
- Continuous-improvement approach compatible with Six Sigma, LEAN, and Gemba Kaizen quality models
- Analysis takes time, and time is in short supply (Rossett, 1999, pp. x-xi)

35 Is e-learning relevant to work performance?
- 14% usage rates; 60% dropout rates (ASTD, 2001)
Is work performance relevant anymore?
- Individuals contribute about 30% less when working in teams
- 84% admit they could work much harder
- 50% admit they only work as hard as they must to keep their jobs
(Clark, 2004)
How can valid assessments help you address these concerns regarding e-learning and the workplace?

36 Key Success Factors
- Can you form a panel of exemplar workers?
- Can you secure pilot participants?
- Can you get over the fear of not being perfect the first time?
- Are you willing to discard and revise items?

37 Obstacles
- Anti-test cultures
- Lack of management support
- Fear of making mistakes and learning from feedback
- Tendency to do things once and forget about them

38 Did we get there? After attending this session, are you able to use the rapid prototyping process and tools provided to:
- Identify and prioritize needed skills?
- Collaborate with learners to brainstorm assessment criteria for each skill?
- Determine what test formats need to be used in order to keep PALs aligned?
- Run a pilot and quickly conduct test and item analyses?
- Use pilot data to decide what to remove, revise, or refine?
- Position assessments as a means to drive training?

39 effectPerformance: Instructional design solutions for your learning and performance needs
Contact: Gus Prestera, Ph.D., CPT, President, effectPerformance, Inc.
www.effectPerformance.com | E-mail: gprestera@effectPerformance.com
Voice 610.449.2060 | Fax 610.449.2061
1513 Fairview Avenue, Havertown, PA 19083
Slides and tools available at: http://www.effectperformance.com/html/library.htm

40 References
ASTD. (2001). Benchmarking report on e-learning.
Bloom, B. S., Engelhart, M. D., Furst, E. J., Hill, W. H., & Krathwohl, D. R. (1956). Taxonomy of educational objectives: The classification of educational goals. Handbook 1: Cognitive domain. White Plains, NY: Longman.
Clark, R. E. (2004, March). The 10 most wanted motivation killers. PerformanceXpress.
Clark, D. (2003, August). How effective is training? A new summary of the past 40 years of training field research and evaluation. PerformanceXpress.
Cronbach, L. J. (1951). Coefficient alpha and the internal structure of tests. Psychometrika, 16, 297-334.
Dick, W., & Carey, L. (1990). The systematic design of instruction. Glenview, IL: Scott, Foresman.
Kirkpatrick, D. L. (1998). Evaluating training programs: The four levels (2nd ed.). San Francisco, CA: Berrett-Koehler.
Krathwohl, D. R. (2002). A revision of Bloom's taxonomy: An overview. Theory Into Practice, 41(4), 212-218.
Kuder, G. F., & Richardson, M. W. (1937). The theory of the estimation of test reliability. Psychometrika, 2, 151-160.
Nitko, A. J. (1996). Educational assessment of students (2nd ed.). Englewood Cliffs, NJ: Prentice-Hall.
Prestera, G. E. (2004a). Are your e-learners learning? A rapid prototyping process and tool for test development. effectPerformance White Papers. Retrieved from http://www.effectPerformance.com/html/library.htm
Prestera, G. E. (2004b). Training needs assessment: Process and tools to help you identify and prioritize training needs quickly. effectPerformance White Papers. Retrieved from http://www.effectPerformance.com/html/library.htm
Prestera, G. E. (2004c). Understanding ADDIE: A foundation for designing instruction. effectPerformance White Papers. Retrieved from http://www.effectPerformance.com/html/library.htm
Rossett, A. (1999). First things fast: A handbook for performance analysis. San Francisco, CA: Jossey-Bass.
Sullivan, R. L., Wircenski, J. L., & Major, M. J. (1999). Analyzing knowledge-based tests. In D. L. Kirkpatrick (Ed.), Another look at evaluating training programs (pp. 113-118). Alexandria, VA: ASTD.
Thiagarajan, S. (1999). Rapid instructional design. Workshops by Thiagi, Inc. Retrieved November 18, 2003, from http://www.thiagi.com/article-rid.html

