Presentation on theme: "Using Test Item Analysis to Improve Students’ Assessment"— Presentation transcript:
1 Using Test Item Analysis to Improve Students’ Assessment
Institutional Assessment starts with Classroom Assessment
2 Learning Objectives of This Session
1. Explain difficulty index and discrimination index
2. Calculate difficulty index and discrimination index
3. Identify ineffective distracters
4. Evaluate multiple-choice test items based on analysis results
5. Apply table of specifications to improve content validity
3 Purpose of Item Analysis
1. Ensure accurate measurement of knowledge or skill
2. Enhance student learning
3. Increase student engagement
4. Avoid demoralizing students
5. Increase confidence in drawing conclusions about:
   - outcome achievement
   - level of knowledge or skill mastery
   - teaching effectiveness
4 Components of a Multiple-Choice Item
Test items used to measure the lowest level of cognitive taxonomy are ____ (stem)
A. Analysis (distracter)
B. Application (distracter)
C. Knowledge (key)
D. Comprehension (distracter)
The correct answer is usually scored as 1; the wrong answers are usually scored as 0.
5 Two Important Indexes for Item Analysis
Item Difficulty Index: tells how hard the item is.
Item Discrimination Index: tells how well the item distinguishes between high-ability and low-ability students.
6 Item Difficulty Index
Defined as the percentage or proportion of test takers who correctly answer the item.
For example, in a class of 30 students, if 20 students get the item correct and 10 get it wrong, the item difficulty index is 20/30 = 0.67.
Ranges from 0 to 1.
7 Item Difficulty = Number Correct / Total
[Worked example: ten students (Robert, Millie, Dean, Shenan, Cuny, Corky, Randy, Jeanne, Iliana, Lindsey) scored 0 or 1 on Items 1–5, with p = number correct / 10 computed for each item. The 0/1 score values did not survive the transcript.]
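Since the slide's score values were lost, here is a minimal Python sketch of the same calculation; the student names and 0/1 scores below are made up for illustration.

```python
# Item difficulty: p = number correct / total number of test takers.
# Names and 0/1 scores are illustrative; the slide's values were lost.
scores = {
    "Robert": [1, 0, 1, 1, 0],
    "Millie": [1, 1, 1, 0, 0],
    "Dean":   [0, 1, 1, 1, 1],
    "Shenan": [1, 1, 0, 1, 0],
}

num_items = 5
for item in range(num_items):
    correct = sum(student[item] for student in scores.values())
    p = correct / len(scores)
    print(f"Item {item + 1}: p = {p:.2f}")
```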
8 Optimal p Values for Items with Varying Numbers of Options
Number of options | Optimal mean p value
2 | 0.85
3 | 0.77
4 | 0.74
5 | 0.69
9 Special Assessment Situations and Item Difficulty
The item difficulty discussed previously is most applicable to norm-referenced tests.
For criterion-referenced tests or classroom tests, it is normal to have average p values as high as 0.9, because we expect most students to be successful.
If a test were developed to select the upper 25%, it would be desirable to have items with p values that average 0.25.
In summary, although a mean p of 0.5 is optimal, item difficulty levels vary with the purpose of a test.
10 Item Discrimination Index
Defined as the difference in item difficulty between those who succeeded on the test (called the upper group or high-achievement group) and those who failed it (called the lower group or low-achievement group):
D = P_U − P_L
where D = discrimination index (ranges from −1 to 1), P_U = difficulty index in the upper group, and P_L = difficulty index in the lower group.
For example, if P_U = 0.8 and P_L = 0.3, then D = 0.8 − 0.3 = 0.5.
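A minimal Python sketch of the calculation, assuming 0/1 scoring and that the upper and lower groups have already been formed from total test scores; the response lists are made up to reproduce the slide's example.

```python
def difficulty(item_scores):
    """Item difficulty: proportion of a group answering correctly (0/1 scores)."""
    return sum(item_scores) / len(item_scores)

def discrimination(upper_scores, lower_scores):
    """D = P_U - P_L; ranges from -1 to 1."""
    return difficulty(upper_scores) - difficulty(lower_scores)

# Hypothetical 0/1 responses to one item, matching the slide's example:
upper = [1, 1, 1, 1, 0]                  # P_U = 4/5  = 0.8
lower = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]   # P_L = 3/10 = 0.3
print(round(discrimination(upper, lower), 2))   # 0.5
```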
11 Guidelines for Evaluating D Values
Discrimination index | Evaluation
0.40 and larger | excellent
0.30 – 0.39 | good
0.20 – 0.29 | fair (acceptable for classroom tests)
below 0.20 | poor
negative | miskeyed or major flaw
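These guidelines translate directly into a small helper function; the 0.30 and 0.20 cut points follow the common textbook convention assumed in the table above.

```python
def evaluate_d(d):
    """Map a discrimination index to the evaluation categories above."""
    if d < 0:
        return "miskeyed or major flaw"
    if d >= 0.40:
        return "excellent"
    if d >= 0.30:
        return "good"
    if d >= 0.20:
        return "fair (acceptable for classroom tests)"
    return "poor"

print(evaluate_d(0.5))   # excellent
```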
13 Distracter Analysis
It allows you to examine how many students in the upper group and the lower group selected each option on a multiple-choice item.
We expect distracters to be selected by more students in the lower group than in the upper group.
An effective distracter must be selected by some students.
14 Distracter Analysis: Item 1
Option | A* | B | C | D
Number in the upper group | 2 | 2 | 3 | 2
Number in the lower group | 9 | 7 | 8 | 6
15 Distracter Analysis: Item 2
Option | A* | B | C | D
Number in the upper group | 1 | 7 | 9 | 4
Number in the lower group | 1 | 3 | 6 | 11
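A short Python sketch of tabulating option counts by group; `distracter_table` is a hypothetical helper, and the responses and key below are made up to show the expected pattern (key favored by the upper group, each distracter drawing some lower-group students).

```python
from collections import Counter

def distracter_table(upper_choices, lower_choices, options="ABCD", key="A"):
    """Print how many students in each group chose each option."""
    upper, lower = Counter(upper_choices), Counter(lower_choices)
    for opt in options:
        mark = "*" if opt == key else " "
        print(f"{opt}{mark}  upper: {upper[opt]:2d}  lower: {lower[opt]:2d}")

# Made-up responses for one item:
distracter_table(list("AAABACAD"), list("BBCDACDD"), key="A")
```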
16 Building a Table of Specifications
1. Selecting content areas
2. Selecting learning outcomes to be tested
3. Determining the levels of objectives
4. Determining the question type
5. Determining the points for each question
6. Building a table
17 A Sample Table of Specifications
Content area | Learning objective | Level of objective | Item type | Number | Points
Item Analysis | Explain item difficulty and discrimination index | Comprehension | Multiple-choice | 2 | 1
Item Analysis | Calculate P and D; identify ineffective distracters | Application | Constructed | — | 4
Preparing a classroom test | Apply table of specifications | — | Project | — | 5
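As a sketch, a blueprint like this can be kept as a simple list of row records and checked for total points. The column layout mirrors the sample table above, but the exact item counts, the project row's cognitive level, and the totals are assumptions made for illustration.

```python
# Each row: (content area, learning objective, level, item type, n items, points each).
# Item counts and the project row's level are assumptions, not from the slide.
blueprint = [
    ("Item Analysis", "Explain item difficulty and discrimination index",
     "Comprehension", "Multiple-choice", 2, 1),
    ("Item Analysis", "Calculate P and D; identify ineffective distracters",
     "Application", "Constructed", 1, 4),
    ("Preparing a classroom test", "Apply table of specifications",
     "Application", "Project", 1, 5),
]

total_points = sum(n * pts for *_, n, pts in blueprint)
print(f"Total points on the test: {total_points}")   # 11
```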