The Goldilocks Dilemma: Too Hard, Too Easy, or Just Right? Developing Pre/Posttest Questions to Evaluate Training Courses Christine M Allegra, PhD, LCSW.


1 The Goldilocks Dilemma: Too Hard, Too Easy, or Just Right? Developing Pre/Posttest Questions to Evaluate Training Courses Christine M Allegra, PhD, LCSW Theresa McCutcheon, MSW Institute for Families May 2016

2 Overview
– New Jersey Child Welfare Training Partnership
– Pre/Posttests
– Development Process
– Revision Process
– Findings
– Discussion

3 New Jersey Child Welfare Training Partnership
– Educates 5,000+ trainees annually
– 160 unique course titles
– 1,500 training days per year
– Funder and court-appointed monitor interested in documenting knowledge gain

4 Training Evaluation Two Methods – Satisfaction Surveys – Pre/Posttests

5 NJCWTP Pre/Posttests: The Beginning
Starting July 1, 2013: all training titles required to have a pretest and posttest
January to June 2013 – 46 tests
– Average pretest score: 60%
– Average posttest score: 79%
July to December 2013 – 84 tests
– Average pretest score: 61%
– Average posttest score: 81%

6 Importance of Good Test Questions
– Evaluate trainee performance
Scores appear on the participant’s employee transcript
Unsatisfactory posttest scores (< 80%) result in a request to repeat the test and/or training
– Demonstrate whether, and to what extent, trainees are learning
– Evaluate instructor performance
– Contract requirement

7 Measuring Knowledge Gain: Approach
– Pretest: participants complete a multiple-choice knowledge assessment before training
– Posttest: participants complete the same multiple-choice test after training
– Training effect on knowledge = posttest score − pretest score
Difference is attributed to knowledge gained as a result of the training
May also be related to the test, course content, instructor, or trainee population
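The pre/post comparison above can be expressed as a small computation. This is a minimal sketch, not code from the presentation; the score lists are hypothetical illustrations, and scores are percent-correct values as on the slides.

```python
# Minimal sketch (not from the presentation): knowledge gain as the
# difference between average posttest and average pretest scores.

def average(scores):
    """Mean of a list of percent-correct scores."""
    return sum(scores) / len(scores)

def knowledge_gain(pretest_scores, posttest_scores):
    """Training effect on knowledge = avg posttest - avg pretest."""
    return average(posttest_scores) - average(pretest_scores)

# Hypothetical percent scores for one class
pretest = [55, 60, 65, 60]
posttest = [75, 80, 85, 80]

gain = knowledge_gain(pretest, posttest)
print(f"Average gain: {gain:.0f} percentage points")  # Average gain: 20 percentage points
```

As the slide cautions, the computed difference reflects the test, course content, instructor, and trainee population as well as actual learning.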

8 Solution: Collaboration, Testing, Iteration
No one can do it alone in a single attempt – an untested draft by a single author will almost certainly produce a poor test
NJCWTP Approach:
– Content expert drafts questions using standard guidelines
– Revision by committee
– Revisions sent to content expert for approval
– Revised test administered in training
– Analysis of scores indicates whether further revisions are necessary

9 Developing Test Questions: Template

10 Guidelines for Writing Questions
– Link Questions to Core Learning Objectives
– Test Trainees’ Understanding – Not Memory
– Challenge the Test Taker
– Focus on Facts That Can Be Substantiated
– Be Clear, Concise, and Specific
– Be Careful Not to Give Away the Answer
– Use ‘All/None of the Above’ Wisely – or Eliminate It
– Eliminate True/False Questions

11 Common Issues Flagged in Review
– Questions are too easy
Indicator: more than 80% correct on pretest
– Questions are too difficult (or perhaps not adequately covered during training)
Indicator: less than 80% correct on posttest
– Questions are confusing (or perhaps not adequately covered during training)
Indicator: higher percent correct on pretest than on posttest
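The three indicators above amount to simple threshold checks on per-question percent-correct rates. The sketch below is illustrative, not from the presentation; the thresholds come from the slide, while the function name and example rates are hypothetical.

```python
# Minimal sketch (not from the presentation): applying the slide's review
# indicators to one question's pretest and posttest percent-correct rates.

def flag_question(pretest_pct, posttest_pct):
    """Return review flags for one question, per the slide's indicators."""
    flags = []
    if pretest_pct > 80:
        flags.append("too easy")       # >80% correct before training
    if posttest_pct < 80:
        flags.append("too difficult")  # <80% correct after training
    if pretest_pct > posttest_pct:
        flags.append("confusing")      # percent correct dropped after training
    return flags

print(flag_question(85, 95))  # ['too easy']
print(flag_question(40, 65))  # ['too difficult']
print(flag_question(70, 60))  # ['too difficult', 'confusing']
print(flag_question(60, 90))  # []
```

A question can earn more than one flag, and as the slide notes, a "too difficult" or "confusing" result may also mean the material was not adequately covered during training.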

12 Pre/Posttest Revision Committee
– Includes a variety of professionals, each bringing a unique perspective and skill set to evaluating questions:
PhDs
MSWs / MSW students
Subject matter experts
Child welfare professionals
Trainers
– Discusses improvements to pre/posttests, based on:
Past question performance
Their own assessment of clarity, level of difficulty, and consistency with course objectives

13 NJCWTP Pre/Posttests: Current Findings
July to December 2015 – 124 tests
– Average pretest score: 59%
– Average posttest score: 83%
– 78 more tests than before the mandate started
– Average posttest score 4 points higher than before the mandate started

14 Question Analysis: Example Part 1
Foundation Class Example
Old version of test:
– Pretest: 46% (n=107)
– Posttest: 80% (n=104)
*8 questions were too hard, ranging from 54% to 79% choosing the correct answer
Revised version of test:
– Pretest: 49% (n=82)
– Posttest: 82% (n=80)
*6 questions were too hard, ranging from 51% to 78% choosing the correct answer

15 Question Analysis: Example Part 2
Old version of test – results by class:
– Sept, n=15, posttest: 83%, Trainer A
– Sept, n=17, posttest: 93%, Trainer B
– Sept, n=12, posttest: 93%, Trainer B
– Oct, n=19, posttest: 87%, Trainer B
– Nov, n=19, posttest: 64%, Trainer B
– Dec, n=21, posttest: 66%, Trainer B
Revised version of test – results by class:
– July, n=22, posttest: 82%, Trainer C
– Nov, n=15, posttest: 83%, Trainer C
– Nov, n=17, posttest: 77%, Trainer D
– Nov, n=13, posttest: 90%, Trainer E
– Dec, n=13, posttest: 78%, Trainer C
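Per-class results like those above can be rolled up by trainer to separate instructor effects from test effects. This is a minimal sketch, not from the presentation; it uses the old-version class figures shown on the slide and weights each class average by its size.

```python
# Minimal sketch (not from the presentation): size-weighted posttest
# averages by trainer, using the old-version classes from the slide.
from collections import defaultdict

# (trainer, class size n, posttest average %) as reported on the slide
classes = [
    ("Trainer A", 15, 83),
    ("Trainer B", 17, 93),
    ("Trainer B", 12, 93),
    ("Trainer B", 19, 87),
    ("Trainer B", 19, 64),
    ("Trainer B", 21, 66),
]

totals = defaultdict(lambda: [0, 0])  # trainer -> [weighted score sum, total n]
for trainer, n, pct in classes:
    totals[trainer][0] += n * pct
    totals[trainer][1] += n

for trainer, (weighted, n) in sorted(totals.items()):
    print(f"{trainer}: {weighted / n:.0f}% average posttest (n={n})")
```

Note the spread within a single trainer (Trainer B ranges from 64% to 93%), which is why the committee looks at question-level performance rather than averages alone.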

16 Discussion Questions
– How do we better guide curriculum writers in developing new questions?
– How do we improve our revision process?
– How do we ensure test questions are fair?
– How do we reinforce standards when administering tests with trainers who may be overly empathetic with participant concerns?
– How do we support test takers (e.g., language, literacy, time constraints)?
– How do we guide the funder in understanding the value and limits of the test results?
– How do we guide the field in using results as information about learning, not measures of practice performance?

17 Christine M Allegra, PhD, LCSW Research Analyst callegra@ssw.rutgers.edu Theresa McCutcheon, MSW Director, Office of Child Welfare Workforce Advancement tmccutcheon@ssw.rutgers.edu

