1 Something from Nothing: Limitations of Diagnostic Information in a CAT Environment
Casey Marks, NCSBN
Presented at the 2006 CLEAR Annual Conference, September 14-16, Alexandria, Virginia
Council on Licensure, Enforcement and Regulation
Expect the Unexpected: Are We Clearly Prepared?

2 NCSBN Background
– Twenty-seven-year-old, not-for-profit organization consisting of 59 state and territorial boards of nursing
– Owner and developer of the national nurse licensure examinations (NCLEX-RN® and NCLEX-PN®)
– 17 NCSBN staff members assigned to NCLEX program operations
– NCSBN contracts with a testing service, Pearson VUE, to aid in the development and administration of the NCLEX

3 NCLEX® Facts
– NCLEX is the successor to the SBTPE, the national nurse licensure examination used by various nursing boards for more than fifty years
– NCLEX has been administered exclusively by computer since 1994, when it was converted from paper-and-pencil to CBT administration
– Approximately 235,000 examinations administered per year; over 2 million examinations administered since 1994
– Examinations administered continuously, on demand, in approximately 220 NCSBN-approved Pearson Professional Centers around the world
– Both NCLEX-RN and NCLEX-PN are variable-length, computerized adaptive examinations

4 What Is an Adaptive Test?
– It is tailored to the person taking it.
– It asks high-ability people very few easy items. (They would very likely get them correct.)
– It asks low-ability people very few difficult questions. (They would be guessing.)
– Everyone finds the test challenging.
– It can be fixed length or variable length.
– It can be designed for maximum efficiency (50% correct) or something less efficient (perhaps 65% correct).
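The 50% versus 65% targeting can be made concrete. The slides do not name the measurement model, so the sketch below assumes a Rasch (1PL) model purely for illustration: the target percent-correct fixes how far an item's difficulty should sit from the examinee's ability estimate.

```python
import math

def difficulty_for_target(theta, target_p):
    """Item difficulty yielding the target percent-correct under a Rasch (1PL) model,
    where P(correct) = 1 / (1 + exp(-(theta - b)))."""
    return theta - math.log(target_p / (1.0 - target_p))

theta = 0.0                                    # hypothetical current ability estimate
print(difficulty_for_target(theta, 0.50))      # 0.0   -> items matched to ability
print(difficulty_for_target(theta, 0.65))      # about -0.62 -> slightly easier items
```

In other words, under this illustrative model, targeting 65% correct means administering items roughly six tenths of a logit easier than the current ability estimate.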

5 One Ability Is Measured
The NCLEX examination makes a single decision: pass or fail. This decision is based on the assessment of the examinee's ability. NCLEX treats "nursing ability" as a unitary concept; there is only the global ability estimate. Pass-fail decisions are never based on "subtest scores."

6 Item Difficulty
To get the maximum amount of examinee information per question, the computer attempts to select questions that the examinee has a 50/50 chance of answering correctly. Because an adaptive test targets items to the person's ability, the difficulty of each item must be known in advance.
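A small illustration of why the 50/50 target yields maximum information, again assuming a Rasch-type model for illustration (not a statement about NCSBN's production software): the Fisher information of such an item is p(1 - p), which peaks when the probability of a correct answer is 0.5.

```python
import numpy as np

def p_correct(theta, b):
    """Probability of a correct response under a Rasch (1PL) model."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def item_information(theta, b):
    """Fisher information of a Rasch item: I = p * (1 - p)."""
    p = p_correct(theta, b)
    return p * (1.0 - p)

theta = 0.0  # hypothetical ability estimate
for b in [-2.0, -1.0, 0.0, 1.0, 2.0]:
    print(f"difficulty {b:+.1f}: p = {p_correct(theta, b):.2f}, "
          f"information = {item_information(theta, b):.3f}")
```

The item whose difficulty matches the ability estimate (p = 0.5) contributes the most information; items much easier or harder contribute far less.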

7 Therefore, CAT requires that:
– The difficulty of each item is known.
– Item calibrations are stable across the ability continuum. (Item calibrations cannot be contingent on the ability level of the group of people testing.)
– Item calibrations are predominantly invariant across nursing-irrelevant factors such as gender and ethnicity.
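One way such an invariance requirement can be checked, offered here only as a hypothetical sketch (the slides do not describe NCSBN's actual procedures), is to calibrate items separately in two demographic groups and flag items whose difficulty estimates diverge.

```python
def flag_noninvariant_items(difficulties_group_a, difficulties_group_b, threshold=0.5):
    """Return indices of items whose difficulty calibrations differ by more than
    `threshold` logits between the two groups."""
    return [i for i, (b_a, b_b) in enumerate(zip(difficulties_group_a, difficulties_group_b))
            if abs(b_a - b_b) > threshold]

# Illustrative (made-up) calibrations, in logits, from two separate group calibrations
group_a = [-1.2, 0.3, 0.8, 1.5]
group_b = [-1.1, 0.4, 1.6, 1.4]
print(flag_noninvariant_items(group_a, group_b))  # [2] -> item 2 differs by 0.8 logits
```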

8 How CAT Works in NCLEX®
– It re-estimates the examinee's ability after every answer.
– An item selection algorithm selects (from a large bank of items) an item that the examinee should find challenging.
– The test is efficient because high-ability people get few easy items and low-ability people get few difficult items.
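A minimal sketch of the generic CAT loop described above, assuming a Rasch (1PL) model, maximum-likelihood ability updates, and nearest-difficulty item selection. All names and the update rule are illustrative; this is not NCSBN's production algorithm, which also handles content balancing, pretest items, and the stopping rules described later.

```python
import numpy as np

def prob_correct(theta, b):
    """P(correct) under a Rasch (1PL) model."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def update_theta(responses, difficulties, theta=0.0, iters=20):
    """Re-estimate ability by maximum likelihood (Newton-Raphson) from all answers so far."""
    for _ in range(iters):
        p = prob_correct(theta, difficulties)
        gradient = np.sum(responses - p)      # d log-likelihood / d theta
        info = np.sum(p * (1.0 - p))          # Fisher information
        if info < 1e-9:
            break
        # Clamp so all-correct or all-incorrect response strings stay finite
        theta = float(np.clip(theta + gradient / info, -4.0, 4.0))
    return theta

def run_cat(bank_difficulties, answer_fn, n_items=15, theta=0.0):
    """Administer n_items adaptively: always pick the unused item closest to the ability estimate."""
    remaining = list(range(len(bank_difficulties)))
    administered, responses = [], []
    for _ in range(n_items):
        nxt = min(remaining, key=lambda i: abs(bank_difficulties[i] - theta))
        remaining.remove(nxt)
        administered.append(nxt)
        responses.append(answer_fn(nxt))      # 1 if correct, 0 if incorrect
        theta = update_theta(np.array(responses, dtype=float),
                             np.array([bank_difficulties[i] for i in administered]),
                             theta)
    return theta, administered
```

A simulated examinee can stand in for answer_fn, for example lambda i: int(np.random.rand() < prob_correct(true_theta, bank[i])) for some assumed true ability.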

9 NCLEX® is a Variable Length Test
– The number of items an examinee receives on their test depends upon their ability.
– The RN test can range from 75 to 265 items (60 to 250 operational items).
– The PN/VN test can range from 85 to 205 items (60 to 180 operational items).

10 NCLEX® is a Variable Length Test
To ensure that the content is adequately covered, no decisions regarding an examinee's pass-fail status are made until at least 60 operational items have been answered. Although the test estimates the amount of ability an examinee has, ultimately a yes-no decision about the examinee's competence must be made.

11 Content Balancing
A specific percentage of the operational items must come from each content area. For example, Safety & Infection Control must be 11% (±3%) of each RN test and 10% (±3%) of each PN test. No examination ever deviates from these targeted percentages by more than 3%.

12 Content Balancing
Each examination must conform to the Test Plan specifications. To ensure this, the item selection algorithm first determines which content area deviates the most from the test specifications. An item from that content area is administered next. Within that content area, an item of appropriate difficulty is selected.
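A toy version of that two-step selection logic, with made-up field names, illustrative target percentages, and no exposure control or exhausted-area handling, might look like this:

```python
def pick_next_item(items, administered_ids, targets, theta):
    """
    items: list of dicts like {"id": 17, "area": "Safety & Infection Control", "b": 0.4}
    administered_ids: ids already given on this examination
    targets: dict mapping content area -> target proportion (e.g. 0.11)
    theta: current ability estimate (logits)
    """
    given = [it for it in items if it["id"] in administered_ids]
    n_given = max(len(given), 1)

    def shortfall(area):
        actual = sum(1 for it in given if it["area"] == area) / n_given
        return targets[area] - actual        # positive = under-represented so far

    # Step 1: the content area that deviates most (is furthest under target) goes next
    area = max(targets, key=shortfall)

    # Step 2: within that area, choose the unused item of most appropriate difficulty
    candidates = [it for it in items
                  if it["area"] == area and it["id"] not in administered_ids]
    return min(candidates, key=lambda it: abs(it["b"] - theta))
```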

13 Sample NCLEX-RN®
[Figure: sample ability-estimate plot; the vertical axis shows ability from about -4 to +4, the horizontal axis shows items 1 through 15, and + and - marks indicate the responses to successive items.]

14 Pass-Fail Decisions
Beginning with the 60th operational item, the ability estimate is compared to the passing standard.
– If you are clearly above the standard, you pass and the exam ends.
– If you are clearly below the standard, you fail and the exam ends.
– If your ability estimate is so close to the standard that it is still not clear whether you should pass, the computer continues to ask you questions.
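The slide describes passing or failing only when the estimate is "clearly" above or below the standard. One common way to operationalize "clearly" in adaptive testing, shown here purely as an illustration rather than NCSBN's exact rule, is a confidence-interval test based on the standard error of the ability estimate.

```python
import math

def pass_fail_decision(theta, standard, information, z=1.96):
    """
    Illustrative confidence-interval rule: decide only when the ability
    estimate is clearly above or below the passing standard.
    information: total Fisher information of the items answered so far;
    its inverse square root approximates the standard error of theta.
    """
    se = 1.0 / math.sqrt(information)
    if theta - z * se > standard:
        return "pass"
    if theta + z * se < standard:
        return "fail"
    return "continue testing"

# Example: estimate just above the standard, but with too much uncertainty to decide
print(pass_fail_decision(theta=0.10, standard=0.00, information=25.0))  # continue testing
```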

15 Pretest Items
On every test, there are a few pretest items. These items are randomly intermixed with operational questions at the beginning of the test. Pretest items are not used to estimate the examinee's ability. Examinees cannot distinguish between pretest and operational items.

16 Test Length
– Minimum RN test length is 75 items; maximum RN test length is 265 items (60-250 operational + 15 pretest).
– Minimum PN test length is 85 items; maximum PN test length is 205 items (60-180 operational + 25 pretest).

17 [Figure: pass-fail decision flowchart with branches labeled Pass, Fail, Begin Evaluation, and Minimum Item Fail.]

18 So why does all this make reporting difficult?
The exam is built for maximum information with minimum item exposure:
– Not enough items are taken to reliably estimate subscores (see the sketch below).
– Most candidates get relatively the same number of items correct/incorrect.
– Even when there is enough information to calculate subscores, the subscores are highly correlated.
So... what can be done?
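A rough sense of the subscore-reliability problem, using the classical Spearman-Brown formula with entirely hypothetical numbers (CAT complicates classical reliability indices, so treat this strictly as an illustration): a content-area subscore built from a handful of items is far less reliable than the full-length test.

```python
def spearman_brown(full_reliability, length_factor):
    """Predicted reliability when test length is multiplied by length_factor."""
    return (length_factor * full_reliability) / (1.0 + (length_factor - 1.0) * full_reliability)

full_test_items = 75         # shortest RN test
subscore_items = 8           # roughly 11% of 75 items falling in one content area
assumed_reliability = 0.90   # hypothetical reliability of the 75-item total score

factor = subscore_items / full_test_items
print(round(spearman_brown(assumed_reliability, factor), 2))  # about 0.49
```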

19 Candidate Performance Reports
– Provided only to failing candidates.
– If the candidate did not answer at least the minimum number of items, no "diagnostic" feedback is given.
– If at least the minimum number of items were answered, descriptions are given for each of the content areas.

22 Program Reports
– Detailed information regarding program performance
– Can aid programs with:
  – Curriculum planning
  – Program evaluation
  – Trends
– Data compiled based on testing date, for first-time test takers only

23 Program Reports
– Summary Overview Report
– Test Plan Report
– Content Dimension Reports
– Test Duration/Test Plan Performance Report

24 Summary Overview Report
– Lists jurisdictions where graduates applied for licensure
– Percentage of graduates passing
– Rank of program, based on the percentage of graduates passing
– Distribution of programs based on national pass rates

25 Test Plan Report
– Summarizes the test plan
– Summarizes graduates' performance by test plan category
– Graduates are compared to:
  – Graduates from last year
  – Graduates from the same jurisdiction
  – National population of graduates

26 Non-Test Plan Content Dimension Reports
Six frameworks:
– Nursing process
– Categories of human functioning
– Categories of health alterations
– Wellness/illness continuum
– Stages of maturity
– Stress, adaptation and coping

27 Non-Test Plan Content Dimension Reports
Each framework describes how a typical graduate performed as compared to jurisdictional and national populations of graduates. "Typical" means the ability of your median graduate.

28 Non-Test Plan Content Dimensions
– Nursing Process
– Categories of Human Functioning
– Categories of Health Alterations
– Wellness-Illness Continuum
– Stages of Maturity
– Stress, Adaptation, and Coping

29 Test Duration/Test Performance Report
– Average number of questions answered
– Percentage taking the maximum number of questions
– Percentage taking the minimum number of questions
– Average test time
Reported separately for passing and failing candidates, as well as for the total group.

30 Conclusions?
– Limited information on individual reports
– More detailed group reports for the "greater good"
Suggestions?

31 Speaker Contact Information
Casey Marks, PhD
NCSBN
111 E. Wacker Drive, Suite 2900
312.525.3600
cmarks@ncsbn.org
http://www.ncsbn.org/

