
1 Lessons learned in assessment History, Research and Practical Implications Cees van der Vleuten Maastricht University MHPE, Unit 1, 3 June 2010 Powerpoint at: www.fdg.unimaas.nl/educ/cees/mhpe

2 Medical Education Anno 2008 A rapid expansion of knowledge in medical education; 10 international journals and many national ones; 2 large international conferences, 2 large regional conferences and many national ones; a mix of practice, research and theory; many training programmes, including masters and PhDs; groups of professionals appointed in medical schools; a flourishing community of practice!

3 Overview of presentation Where is education going? Lessons learned in assessment Areas of development and research

4 Where is education going? School-based learning Discipline-based curricula (Systems) integrated curricula Problem-based curricula Outcome/competency-based curricula

5 Where is education going? Underlying educational principles: Continuous learning of, or practicing with, authentic tasks (in steps of complexity; with constant attention to transfer) Integration of cognitive, behavioural and affective skills Active, self-directed learning & in collaboration with others Fostering domain-independent skills, competencies (e.g. team work, communication, presentation, science orientation, leadership, professional behaviour….).

6 Where is education going? Underlying educational principles: Continuous learning of, or practicing with, authentic tasks (in steps of complexity; with constant attention to transfer) Integration of cognitive, behavioural and affective skills Active, self-directed learning & in collaboration with others Fostering domain-independent skills, competencies (e.g. team work, communication, presentation, science orientation, leadership, professional behaviour….). Theoretical underpinnings: cognitive psychology, constructivism, cognitive load theory, collaborative learning theory, empirical evidence.

7 Where is education going? Work-based learning Practice, practice, practice…. Optimising learning by: More reflective practice More structure in the haphazard learning process More feedback, monitoring, guiding, reflection, role modelling Fostering of learning culture or climate Fostering of domain-independent skills (professional behaviour, team skills, etc).

8 Where is education going? Work-based learning Practice, practice, practice…. Optimising learning by: More reflective practice More structure in the haphazard learning process More feedback, monitoring, guiding, reflection, role modelling Fostering of learning culture or climate Fostering of domain-independent skills (professional behaviour, team skills, etc). Theoretical underpinnings: deliberate practice theory, emerging work-based learning theories, empirical evidence.

9 Where is education going? Educational reform is on the agenda everywhere Education is professionalizing rapidly A lot of ‘educational technology’ is available How about assessment?

10 Overview of presentation Where is education going? Lessons learned in assessment Areas of development and research

11 Miller’s pyramid of competence: Knows, Knows how, Shows how, Does. Miller GE. The assessment of clinical skills/competence/performance. Academic Medicine (Supplement) 1990; 65: S63-S67. Lessons learned while climbing this pyramid with assessment technology

12 Assessing ‘knowing how’ [Miller’s pyramid: the ‘knows how’ level]. 1960s: written complex simulations (PMPs)

13 Key findings on written simulations (Van der Vleuten, 1995): performance on one problem hardly predicted performance on another; high correlations with simple MCQs; experts performed less well than intermediates; the stimulus format proved more important than the response format.

14 Assessing ‘knowing how’: specific lessons learned! Simple short scenario-based formats work best (Case & Swanson, 2002). Validity is a matter of good quality assurance around item construction (Verhoeven et al., 1999). Generally, medical schools can do a much better job (Jozefowicz et al., 2002). Sharing of (good) test material across institutions is a smart strategy (Van der Vleuten et al., 2004).

15 Moving from assessing knows Knows: What is arterial blood gas analysis most likely to show in patients with cardiogenic shock? A. Hypoxemia with normal pH B. Metabolic acidosis C. Metabolic alkalosis D. Respiratory acidosis E. Respiratory alkalosis

16 To assessing knowing how Knowing how: A 74-year-old woman is brought to the emergency department because of crushing chest pain. She is restless, confused, and diaphoretic. On admission, temperature is 36.7 °C, blood pressure is 148/78 mm Hg, pulse is 90/min, and respirations are 24/min. During the next hour, she becomes increasingly stuporous, blood pressure decreases to 80/40 mm Hg, pulse increases to 120/min, and respirations increase to 40/min. Her skin is cool and clammy. An ECG shows sinus rhythm and 4 mm of ST-segment elevation in leads V2 through V6. Arterial blood gas analysis is most likely to show: A. Hypoxemia with normal pH B. Metabolic acidosis C. Metabolic alkalosis D. Respiratory acidosis E. Respiratory alkalosis

17 http://www.nbme.org/publications/item-writing-manual.html

18 Maastricht item review process [flow diagram]: items from the contributing disciplines (anatomy, physiology, internal medicine, surgery, psychology) enter an item pool and undergo pre-test review by a review committee; after test administration, item analyses and student comments feed the post-test review; reviewed items go into the item bank and information is reported back to users.

19 Assessing ‘knowing how’: general lessons learned! Competence is specific, not generic. Assessment is as good as you are prepared to put into it.

20 Assessing ‘showing how’ [Miller’s pyramid: the ‘shows how’ level]. 1970s: performance assessment in vitro (OSCE)

21 Key findings around OSCEs (Van der Vleuten & Swanson, 1990): Performance on one station poorly predicted performance on another (many OSCEs are unreliable). Validity depends on the fidelity of the simulation (many OSCEs test fragmented skills in isolation). Global rating scales do well (improved discrimination across expertise groups; better inter-case reliabilities; Hodges, 2003). OSCEs impacted on the learning of students.

22 Reliabilities across methods, as a function of testing time

                                Testing time in hours
                                1       2       4       8
MCQ (1)                         0.62    0.76    0.93
Case-based short essay (2)      0.68    0.73    0.84    0.82
PMP (1)                         0.36    0.53    0.69    0.82
Oral exam (3)                   0.50    0.69    0.82    0.90
Long case (4)                   0.60    0.75    0.86    0.90
OSCE (5)                        0.47    0.64    0.78    0.88

(1) Norcini et al., 1985; (2) Stalenhoef-Halling et al., 1990; (3) Swanson, 1987; (4) Wass et al., 2001; (5) Petrusa, 2002
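A note for the reader (not on the original slide): the growth of reliability with testing time in this table follows, at least approximately, the Spearman-Brown relation rho_k = k × rho_1 / (1 + (k − 1) × rho_1), where rho_1 is the reliability of one hour of testing and k is the number of hours. Worked through for the OSCE row: rho_1 = 0.47 gives 2(0.47)/(1 + 0.47) ≈ 0.64 at two hours, 4(0.47)/(1 + 3 × 0.47) ≈ 0.78 at four hours and 8(0.47)/(1 + 7 × 0.47) ≈ 0.88 at eight hours, matching the tabled values; the rows taken from the other empirical studies follow the same pattern only roughly.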

23 Reliability of the oral examination (Swanson, 1987)

Testing time in hours (no. of cases)   1 (2)   2 (4)   4 (8)   8 (12)
Two new examiners for each case        0.61    0.76    0.86    0.93
New examiner for each case             0.50    0.69    0.82    0.90
Same examiner for all cases            0.31    0.47    0.48

24 Checklist versus rating scale reliability in OSCEs (Van Luijk & Van der Vleuten, 1990) [figure]

25 Assessing ‘showing how’: specific lessons learned! OSCE-ology (patient training, checklist writing, standard setting, etc.; Petrusa, 2002). OSCEs are not inherently valid or reliable; that depends on the fidelity of the simulation and the sampling of stations (Van der Vleuten & Swanson, 1990).

26 Assessing ‘showing how’: general lessons learned! Objectivity is not the same as reliability (Van der Vleuten, Norman & De Graaff, 1991). Subjective expert judgment has incremental value (Van der Vleuten & Schuwirth, in preparation). Sampling across content and judges/examiners is eminently important. Assessment drives learning.

27 Assessing ‘does’ [Miller’s pyramid: the ‘does’ level]. 1990s: performance assessment in vivo by judging work samples (Mini-CEX, CBD, MSF, DOPS, portfolio)

28 Key findings on assessing ‘does’: Ongoing work; this is where we currently are. Reliability findings point to feasible sampling (8-10 judgments seem to be the magic number; Williams et al., 2003). Scores tend to be inflated (Govaerts et al., 2007). Qualitative/narrative information is (more) useful (Govaerts et al., 2007). Lots of work still needs to be done: how (much) to sample across instruments? How to aggregate information?

29 Reliabilities across methods, as a function of testing time

                                Testing time in hours
                                1       2       4       8
MCQ (1)                         0.62    0.76    0.93
Case-based short essay (2)      0.68    0.73    0.84    0.82
PMP (1)                         0.36    0.53    0.69    0.82
Oral exam (3)                   0.50    0.69    0.82    0.90
Long case (4)                   0.60    0.75    0.86    0.90
OSCE (5)                        0.47    0.64    0.78    0.88
Mini-CEX (6)                    0.73    0.84    0.92    0.96
Practice video assessment (7)   0.62    0.76    0.93
Incognito SPs (8)               0.61    0.76    0.92    0.93

(1) Norcini et al., 1985; (2) Stalenhoef-Halling et al., 1990; (3) Swanson, 1987; (4) Wass et al., 2001; (5) Petrusa, 2002; (6) Norcini et al., 1999; (7) Ram et al., 1999; (8) Gorter, 2002

30 Assessing ‘does’: specific lessons learned! Reliable sampling is possible. Qualitative information carries a lot of weight. Assessment impacts on work-based learning (more feedback, more reflection…). Validity strongly depends on the users of these instruments and therefore on the quality of implementation.

31 Assessing ‘does’: general lessons learned! Work-based assessment cannot replace standardised assessment (yet); or, no single measure can do it all (Tooke report, UK). Validity strongly depends on the implementation of the assessment (Govaerts et al., 2007). But there is a definite place for (more subjective) expert judgment (Van der Vleuten & Schuwirth, under editorial review).

32 Competency/outcome categorizations. CanMEDS roles: medical expert, communicator, collaborator, manager, health advocate, scholar, professional. ACGME competencies: medical knowledge, patient care, practice-based learning & improvement, interpersonal and communication skills, professionalism, systems-based practice.

33 Measuring the unmeasurable [Miller’s pyramid figure]: the ‘domain-specific’ skills of the pyramid versus ‘domain-independent’ skills.

34 Measuring the unmeasurable: the importance of domain-independent skills. If things go wrong in practice, these skills are often involved (Papadakis et al., 2005; 2008). Success in the labour market is associated with these skills (Meng, 2006). Practice performance is related to school performance (Papadakis et al., 2004).

35 Measuring the unmeasurable [Miller’s pyramid figure]: assessment (mostly in vivo) relying heavily on expert judgment and qualitative information.

36 Measuring the unmeasurable Self assessment Peer assessment Co-assessment (combined self, peer, teacher assessment) Multisource feedback Log book/diary Learning process simulations/evaluations Product-evaluations Portfolio assessment

37 Eva, K. W., & Regehr, G. (2005). Self-assessment in the health professions: a reformulation and research agenda. Acad Med, 80(10 Suppl), S46-54.

38 Falchikov, N., & Goldfinch, J. (2000). Student peer assessment in higher education: A meta-analysis comparing peer and teacher marks. Review of Educational Research, 70(3), 287-322.

39 Driessen, E., van Tartwijk, J., van der Vleuten, C., & Wass, V. (2007). Portfolios in medical education: why do they meet with mixed success? A systematic review. Med Educ, 41(12), 1224-1233.

40 General lessons learned Competence is specific, not generic Assessment is as good as you are prepared to put into it Objectivity is not the same as reliability Subjective expert judgment has incremental value Sampling across content and judges/examiners is eminently important Assessment drives learning No single measure can do it all Validity strongly depends on the implementation of the assessment

41 Practical implications Competence is specific, not generic One measure is no measure Increase sampling (across content, examiners, patients…) within measures Combine information across measures and across time Be aware of (sizable) false positive and negative decisions Build safeguards in examination regulations.
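To make the warning about false positive and false negative pass/fail decisions concrete, here is a minimal, purely hypothetical Python simulation (assumed normally distributed scores, an arbitrary cut score, and reliability as a stand-in for the amount of sampling; none of this comes from the presentation). It shows that misclassification near the cut shrinks as reliability rises, but does not disappear.

# Hypothetical illustration: pass/fail misclassification under imperfect reliability.
# Observed score = true score + measurement error; the error variance is set so that
# var(true) / var(observed) equals the chosen reliability.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
true = rng.normal(0.0, 1.0, n)      # standardised true scores (assumed)
cut = -1.0                          # arbitrary pass/fail cut score

for reliability in (0.6, 0.8, 0.9):
    error_sd = np.sqrt((1 - reliability) / reliability)
    observed = true + rng.normal(0.0, error_sd, n)
    false_pos = np.mean((observed >= cut) & (true < cut))   # should fail, but passes
    false_neg = np.mean((observed < cut) & (true >= cut))   # should pass, but fails
    print(f"reliability {reliability:.1f}: "
          f"false positives {false_pos:.1%}, false negatives {false_neg:.1%}")

Broader sampling within and across measures is what pushes the reliability up and both error rates down; building safeguards into examination regulations addresses the errors that remain.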

42 Practical implications Assessment is as good as you are prepared to put into it Train your staff in assessment Implement quality assurance procedures around test construction Share test material across institutions Reward good assessment and assessors Involve students as a source of quality assurance information

43 Practical implications Objectivity is not the same as reliability Don’t trivialize the assessment (and compromise on validity) with unnecessary objectification and standardization Don’t be afraid of holistic judgment Sample widely across sources of subjective influences (raters, examiners, patients)

44 Practical implications Subjective expert judgment has incremental value Use expert judgment for assessing complex skills Who counts as an expert depends on the assessment context (e.g. peer, patient, clerk) Invite assessors to provide qualitative information or to mediate feedback

45 Practical implications Sampling across content and judges/examiners is eminently important Use efficient test designs: use single examiners per test item (question, essay, station, encounter…) and different examiners across items Psychometrically analyse sources of variance affecting the measurement to optimise the sampling plan and sample sizes needed
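The last point, on psychometric analysis of variance sources, can be sketched in a few lines of code. This is an illustrative, hypothetical Python example (simulated data, assumed effect sizes; not the author’s analysis): it estimates person and residual variance components from a fully crossed persons × stations design and then runs a simple decision study to see how the reliability of the mean score changes with the number of stations sampled.

# Hypothetical illustration of a generalizability-style analysis for a crossed
# persons x stations design with one observation per cell.
import numpy as np

rng = np.random.default_rng(0)
n_persons, n_stations = 200, 10

# Simulated scores: person ability + station difficulty + interaction/error (assumed sizes).
person = rng.normal(0.0, 1.0, size=(n_persons, 1))
station = rng.normal(0.0, 0.5, size=(1, n_stations))
error = rng.normal(0.0, 1.5, size=(n_persons, n_stations))
scores = person + station + error

# Two-way ANOVA without replication.
grand = scores.mean()
ss_person = n_stations * np.sum((scores.mean(axis=1) - grand) ** 2)
ss_station = n_persons * np.sum((scores.mean(axis=0) - grand) ** 2)
ss_resid = np.sum((scores - grand) ** 2) - ss_person - ss_station
ms_person = ss_person / (n_persons - 1)
ms_resid = ss_resid / ((n_persons - 1) * (n_stations - 1))

# Variance components: universe-score (person) variance and residual (error) variance.
var_person = max((ms_person - ms_resid) / n_stations, 0.0)
var_resid = ms_resid

# Decision study: reliability (generalizability coefficient) of a k-station average.
for k in (1, 2, 5, 10, 20):
    g = var_person / (var_person + var_resid / k)
    print(f"{k:2d} stations: expected reliability = {g:.2f}")

The same logic extends to examiners, patients and occasions as additional facets; the point of the sketch is only that the sampling plan, rather than any single measurement, is what such an analysis optimises.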

46 Practical implications Assessment drives learning For every evaluative action there is an educational reaction Verify and monitor the impact of assessment (evaluate the evaluation); many intended effects do not actually materialise (the hidden curriculum) No assessment without feedback! Embed the assessment within the learning programme (cf. Wilson, M., & Sloane, K. (2000). From principles to practice: An embedded assessment system. Applied Measurement in Education, 13(2), 181-208.) Use the assessment strategically to reinforce desirable learning behaviours

47 Practical implications No single measure can do it all Use a cocktail of methods across the competency pyramid Arrange methods in a programme of assessment Any method may have utility (including ‘old’ assessment methods), depending on its function within the programme Compromises on the quality of individual methods should be made in light of their function in the programme Compare assessment design with curriculum design: appoint responsible people/committee(s), use an overarching structure, involve your stakeholders, and implement, monitor and change (assessment programmes ‘wear out’)

48 Practical implications Validity strongly depends on the implementation of the assessment Pay special attention to implementation (good educational ideas often fail due to implementation problems) Involve your stakeholders in the design of the assessment Many naive ideas exist around assessment; train and educate your staff and students.

49 Overview of presentation Where is education going? Where are we with assessment? Where are we going with assessment? Conclusions

50 Areas of development and research Understanding expert judgment

51 Understanding human judgment How does the mind of expert judges work? How is it influenced? What is the link between clinical expertise and judgment expertise? There is a clash between the psychology literature on expert judgment and psychometric research.

52 Areas of development and research Understanding expert judgment Building non-psychometric rigour into assessment

53 Qualitative methodology as an inspiration

Criterion        Quantitative approach    Qualitative approach
Truth value      Internal validity        Credibility
Applicability    External validity        Transferability
Consistency      Reliability              Dependability
Neutrality       Objectivity              Confirmability

Strategies for establishing trustworthiness: prolonged engagement, triangulation, peer examination, member checking, structural coherence, time sampling, stepwise replication, dependability audit, thick description, confirmability audit.

Procedural measures and safeguards: assessor training & benchmarking, appeal procedures, triangulation across sources, saturation, assessor panels, intermediate feedback cycles, decision justification, moderation, scoring rubrics, ….

54 Driessen, E. W., Van der Vleuten, C. P. M., Schuwirth, L. W. T., Van Tartwijk, J., & Vermunt, J. D. (2005). The use of qualitative research criteria for portfolio assessment as an alternative to reliability evaluation: a case study. Medical Education, 39(2), 214-220.

55 Areas of development and research Understanding expert judgment Building non-psychometric rigour into assessment Construction and governance of assessment programmes (Van der Vleuten 2005)

56 Assessment programmes How to design assessment programmes? Strategies for governance (implementation, quality assurance)? How to aggregate information for decision making? When is enough enough?

57 A model for designing assessment programmes (Dijkstra et al., in preparation) [figure]

58 Areas of development and research Understanding expert judgment Building non-psychometric rigour into assessment Construction and governance of assessment programmes Understanding and using assessment impacting learning

59 Assessment impacting learning Lab studies convincingly show tests improve retention and performance (Larsen et al., 2008) Relatively little empirical research supporting educational practice Absence of theoretical insights.

60 Theoretical model under construction (Cilliers, in preparation) [figure]: sources of impact (assessment strategy, assessment task, volume of assessable material, sampling, cues, the individual assessor) act through determinants of action (impact appraisal: likelihood, severity; response appraisal: efficacy, costs, value; perceived agency; interpersonal factors: normative beliefs, motivation to comply) to produce consequences of impact (choice, effort and persistence in cognitive processing and metacognitive regulation strategies) and, ultimately, outcomes of learning.

61 Areas of development and research Understanding expert judgment Building non-psychometric rigour into assessment Construction and governance of assessment programmes Understanding and using assessment impacting learning Understanding and using qualitative information.

62 Understanding and using qualitative information Assessment is dominated by the quantitative discourse (Hodges 2006) How to improve the use of qualitative information? How to aggregate qualitative information? How to combine qualitative and quantitative information? How to use expert judgment here?

63 Finally Assessment in medical education has a rich history of research and development with clear practical implications (we’ve covered some ground in 40 years!) We are moving beyond the psychometric discourse into an educational design discourse We are starting to measure the unmeasurable Expert human judgment is being reinstated as an indispensable source of information, at both the method and the programmatic level Lots of exciting developments still lie ahead of us!

64 This presentation can be found at: www.fdg.unimaas.nl/educ/cees/singapore

65 Literature
Cilliers, F. (In preparation). Assessment impacts on learning, you say? Please explain how. The impact of summative assessment on how medical students learn.
Driessen, E., van Tartwijk, J., van der Vleuten, C., & Wass, V. (2007). Portfolios in medical education: why do they meet with mixed success? A systematic review. Med Educ, 41(12), 1224-1233.
Driessen, E. W., Van der Vleuten, C. P. M., Schuwirth, L. W. T., Van Tartwijk, J., & Vermunt, J. D. (2005). The use of qualitative research criteria for portfolio assessment as an alternative to reliability evaluation: a case study. Medical Education, 39(2), 214-220.
Dijkstra, J., Schuwirth, L., & Van der Vleuten, C. (In preparation). A model for designing assessment programmes.
Eva, K. W., & Regehr, G. (2005). Self-assessment in the health professions: a reformulation and research agenda. Acad Med, 80(10 Suppl), S46-54.
Gorter, S., Rethans, J. J., Van der Heijde, D., Scherpbier, A., Houben, H., Van der Vleuten, C., et al. (2002). Reproducibility of clinical performance assessment in practice using incognito standardized patients. Medical Education, 36(9), 827-832.
Govaerts, M. J., Van der Vleuten, C. P., Schuwirth, L. W., & Muijtjens, A. M. (2007). Broadening perspectives on clinical performance assessment: rethinking the nature of in-training assessment. Adv Health Sci Educ Theory Pract, 12, 239-260.
Hodges, B. (2006). Medical education and the maintenance of incompetence. Med Teach, 28(8), 690-696.
Jozefowicz, R. F., Koeppen, B. M., Case, S. M., Galbraith, R., Swanson, D. B., & Glew, R. H. (2002). The quality of in-house medical school examinations. Academic Medicine, 77(2), 156-161.
Meng, C. (2006). Discipline-specific or academic? Acquisition, role and value of higher education competencies. PhD dissertation, Universiteit Maastricht, Maastricht.
Norcini, J. J., Swanson, D. B., Grosso, L. J., & Webster, G. D. (1985). Reliability, validity and efficiency of multiple choice question and patient management problem item formats in assessment of clinical competence. Medical Education, 19(3), 238-247.
Papadakis, M. A., Hodgson, C. S., Teherani, A., & Kohatsu, N. D. (2004). Unprofessional behavior in medical school is associated with subsequent disciplinary action by a state medical board. Acad Med, 79(3), 244-249.
Papadakis, M. A., Teherani, A., et al. (2005). Disciplinary action by medical boards and prior behavior in medical school. N Engl J Med, 353(25), 2673-2682.
Papadakis, M. A., Arnold, G. K., et al. (2008). Performance during internal medicine residency training and subsequent disciplinary action by state licensing boards. Annals of Internal Medicine, 148, 869-876.

66 Literature
Petrusa, E. R. (2002). Clinical performance assessments. In G. R. Norman, C. P. M. Van der Vleuten & D. I. Newble (Eds.), International Handbook for Research in Medical Education (pp. 673-709). Dordrecht: Kluwer Academic Publishers.
Ram, P., Grol, R., Rethans, J. J., Schouten, B., Van der Vleuten, C. P. M., & Kester, A. (1999). Assessment of general practitioners by video observation of communicative and medical performance in daily practice: issues of validity, reliability and feasibility. Medical Education, 33(6), 447-454.
Stalenhoef-Halling, B. F., Van der Vleuten, C. P. M., Jaspers, T. A. M., & Fiolet, J. B. F. M. (1990). A new approach to assessing clinical problem-solving skills by written examination: conceptual basis and initial pilot test results. Paper presented at the conference on Teaching and Assessing Clinical Competence, Groningen.
Swanson, D. B. (1987). A measurement framework for performance-based tests. In I. Hart & R. Harden (Eds.), Further Developments in Assessing Clinical Competence (pp. 13-45). Montreal: Can-Heal Publications.
Van der Vleuten, C. P., Schuwirth, L. W., Muijtjens, A. M., Thoben, A. J., Cohen-Schotanus, J., & van Boven, C. P. (2004). Cross institutional collaboration in assessment: a case on progress testing. Med Teach, 26(8), 719-725.
Van der Vleuten, C. P. M., & Swanson, D. B. (1990). Assessment of clinical skills with standardized patients: state of the art. Teaching and Learning in Medicine, 2(2), 58-76.
Van der Vleuten, C. P. M., & Newble, D. I. (1995). How can we test clinical reasoning? The Lancet, 345, 1032-1034.
Van der Vleuten, C. P. M., Norman, G. R., & De Graaff, E. (1991). Pitfalls in the pursuit of objectivity: issues of reliability. Medical Education, 25, 110-118.
Van der Vleuten, C. P. M., & Schuwirth, L. W. T. (2005). Assessment of professional competence: from methods to programmes. Medical Education, 39, 309-317.
Van der Vleuten, C. P. M., & Schuwirth, L. W. T. (Under editorial review). On the value of (aggregate) human judgment. Med Educ.
Van Luijk, S. J., Van der Vleuten, C. P. M., & Schelven, R. M. (1990). The relation between content and psychometric characteristics in performance-based testing. In W. Bender, R. J. Hiemstra, A. J. J. A. Scherpbier & R. P. Zwierstra (Eds.), Teaching and Assessing Clinical Competence (pp. 202-207). Groningen: Boekwerk Publications.
Wass, V., Jones, R., & Van der Vleuten, C. (2001). Standardized or real patients to test clinical competence? The long case revisited. Medical Education, 35, 321-325.
Williams, R. G., Klamen, D. A., & McGaghie, W. C. (2003). Cognitive, social and environmental sources of bias in clinical performance ratings. Teaching and Learning in Medicine, 15(4), 270-292.

