Classroom Analytics.

Presentation on theme: "Classroom Analytics."— Presentation transcript:

1 Classroom Analytics

2 What is Analytics?
Systematic computational analysis of data
Discovery and communication of meaningful patterns in data
The scientific process of transforming data into meaningful insight for making decisions
The field of data analysis
Statistics

3 Most common Classroom Analytics
Mean and median
Range, variance, and standard deviation
Histograms (bar graphs)

4 More Advanced Analytics
Using the mean and standard deviation to test for “outliers” in test grades
Applying manufacturing-style quality control charts to test and quiz grades
Predicting final exam grades from chapter test grades using regression
Using factor and discriminant analysis to find the best test questions

5 IQR Rule for Outliers
Sort data
Find first and third quartiles, Q1 and Q3
IQR = Q3 – Q1
Lower hinge = Q1 – 1.5 * IQR
Upper hinge = Q3 + 1.5 * IQR
Outliers lie outside these values
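The steps above can be sketched in Python. This is a minimal illustration with made-up quiz scores; note that several quartile conventions exist, and the median-of-halves method used here is one common choice.

```python
def iqr_outliers(scores):
    """Flag scores outside Q1 - 1.5*IQR and Q3 + 1.5*IQR (the hinges above)."""
    s = sorted(scores)
    n = len(s)

    def median(xs):
        m = len(xs) // 2
        return xs[m] if len(xs) % 2 else (xs[m - 1] + xs[m]) / 2

    # Q1/Q3 as medians of the lower and upper halves (median excluded for odd n).
    q1 = median(s[:n // 2])
    q3 = median(s[(n + 1) // 2:])
    iqr = q3 - q1
    lower_hinge = q1 - 1.5 * iqr
    upper_hinge = q3 + 1.5 * iqr
    return [x for x in scores if x < lower_hinge or x > upper_hinge]

# Made-up example: one quiz score (30) far below the rest.
print(iqr_outliers([70, 72, 75, 78, 80, 82, 85, 30]))  # → [30]
```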

6 Mean Quality Control Chart
Plot weekly classroom quiz average
Plot weekly individual quiz grades
Watch for outliers
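One way to sketch the control-chart idea in Python: the slide does not specify control limits, so this sketch assumes the usual manufacturing convention of 3-sigma limits computed from a baseline period of weekly averages, with later weeks checked against them.

```python
import statistics

def control_limits(baseline, sigmas=3):
    """Center line and control limits from baseline weekly quiz averages.
    The 3-sigma width is an assumed default, not specified in the slides."""
    center = statistics.mean(baseline)
    s = statistics.stdev(baseline)
    return center - sigmas * s, center, center + sigmas * s

def out_of_control(baseline, new_weeks, sigmas=3):
    """Return the new weekly averages that fall outside the control limits."""
    lo, _, hi = control_limits(baseline, sigmas)
    return [x for x in new_weeks if x < lo or x > hi]

# Made-up data: stable baseline weeks, then a sudden drop worth investigating.
print(out_of_control([80, 82, 78, 81, 79], [81, 70]))  # → [70]
```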

7 Example

8 Prediction using Regression
From the previous semester, compute a regression model predicting final exam grade from the class average before the final
Use the model to predict final exam grades for the current semester

9 Example

10 Test Analysis

11 Step 1: Overall Test Analysis
How well did the class perform? You can easily compute descriptive statistics using Excel (e.g., the mean, standard deviation, range, and skewness). This gives a picture of the extent to which the students mastered the content, assuming of course that the test is valid and reliable.
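The slide suggests Excel; the same descriptive statistics can be sketched in Python. The skewness here uses the adjusted Fisher-Pearson formula (which, to my understanding, matches Excel's SKEW); the scores are made up.

```python
import statistics

def describe(scores):
    """Mean, standard deviation, range, and sample skewness of test scores."""
    n = len(scores)
    mean = statistics.mean(scores)
    sd = statistics.stdev(scores)
    rng = max(scores) - min(scores)
    # Adjusted Fisher-Pearson sample skewness.
    skew = (n / ((n - 1) * (n - 2))) * sum(((x - mean) / sd) ** 3 for x in scores)
    return {"mean": mean, "sd": sd, "range": rng, "skew": skew}

# Made-up test scores; a symmetric set has skewness 0.
stats = describe([60, 70, 80, 90, 100])
```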

12 Now what does this imply?
Does the fact that the range of test scores was from 90% to 100% mean they all mastered the content well? Is the test too easy? If students did not do so well on a test, does it mean the test is difficult? Does the test have both curricular and instructional validity?

13 Item Analysis
Three elements of item analysis:
Item difficulty
Item discrimination
Distractor analysis

14 Step 2: Item Difficulty
Defined as the proportion of students who got the item correct. It ranges from 0.00 (none of the students got it right) to 1.00 (all of the students got it right). It is desirable to have items that are moderately difficult. Items that everybody misses are practically useless, as are items that everybody gets correct. An item with a difficulty of .75 is easier than an item with a difficulty of .25. It is recommended that the average level of difficulty for a four-option multiple-choice test be between .60 and .80.
If an item has a low difficulty value, say, less than .25, it could be that:
the answer key is incorrect
the item may be too difficult
the item may be poorly worded
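The definition above is a one-liner in Python (responses coded 1 = correct, 0 = incorrect; the flagging threshold of .25 is the one mentioned in the slide):

```python
def item_difficulty(responses):
    """Proportion of students who got the item correct (0.00 to 1.00)."""
    return sum(responses) / len(responses)

def needs_review(responses, threshold=0.25):
    """Flag an item whose difficulty is below the slide's .25 cutoff."""
    return item_difficulty(responses) < threshold

# Made-up responses: 3 of 4 students correct → difficulty .75 (an easy item).
print(item_difficulty([1, 1, 1, 0]))  # → 0.75
```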

15 Step 2: Item Discrimination
Purpose: differentiating better-prepared (upper quartile) students from less-prepared (lower quartile) ones
Defined and computed as the difference between the proportion of upper-quartile students who got the item correct and the proportion of lower-quartile students who got the item correct
The majority of the better-prepared students should get the item correct compared with the less-prepared students
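A sketch of this computation in Python, using the upper/lower 25% grouping the slides describe. Splitting the groups by total test score is assumed here (the slides do not state the ranking criterion explicitly); the data are invented.

```python
def discrimination_index(item_scores, total_scores):
    """Upper-quartile proportion correct minus lower-quartile proportion correct.

    item_scores: per-student 0/1 on one item.
    total_scores: each student's total test score (assumed ranking criterion).
    """
    order = sorted(range(len(total_scores)), key=lambda i: total_scores[i])
    k = max(1, len(total_scores) // 4)          # size of each 25% group
    lower = [item_scores[i] for i in order[:k]]
    upper = [item_scores[i] for i in order[-k:]]
    return sum(upper) / len(upper) - sum(lower) / len(lower)

# Made-up data: the item perfectly separates high and low scorers → index 1.0.
totals = [10, 20, 30, 40, 50, 60, 70, 80]
item = [0, 0, 0, 0, 1, 1, 1, 1]
print(discrimination_index(item, totals))  # → 1.0
```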

16 Example: Item Discrimination
(cells are the proportion of each group choosing each option; * marks the correct answer)

Students        A      B*     C      D
Upper 25%       0.00   0.80   0.20   0.00
Lower 25%       0.50   0.20   0.30   0.00
All students    0.36   0.38   0.26   0.00

17 Example contd.
Given that the correct answer is B, 80% of the better-prepared students got the item correct, while 20% of the less-prepared students got the item correct. Overall, 38% of students got the item correct, and therefore .38 is the item difficulty. Item discrimination index: simply subtract .20 from .80 (equals 0.60, meaning 60% more of the better-prepared students got the item correct than the less-prepared students).

18 Item Discrimination & Reliability
Items with higher discrimination indices are more reliable, and you would want more of these items on your test
Items in the range of .20 to .39 are good for test reliability
Items with negative indices reduce the reliability of the test
You should aim to have moderately difficult items with high discrimination indices

19 Point Biserial as Item Discrimination Index
The point-biserial correlation can also be used as an index of discrimination between the upper group and the lower group. It is simply the correlation between the item (scored dichotomously as correct/incorrect) and the total score on the test. Items with a difficulty of 0 or 1 will always have a discrimination index of 0; these do not help you much in assessing student learning. Generally, desirable values of the point-biserial coefficient are .20 and above. Items with negative discrimination indices must be reviewed: a negative index means that more of the low-performing students got the item correct than the high-performing students.
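Since the point-biserial is just a Pearson correlation with a 0/1 variable, it can be sketched directly (data invented; an item everyone answers the same way gets 0, matching the slide's remark about difficulty 0 or 1):

```python
def point_biserial(item_scores, total_scores):
    """Pearson correlation between 0/1 item scores and total test scores."""
    n = len(item_scores)
    mi = sum(item_scores) / n
    mt = sum(total_scores) / n
    cov = sum((a - mi) * (b - mt) for a, b in zip(item_scores, total_scores))
    var_i = sum((a - mi) ** 2 for a in item_scores)
    var_t = sum((b - mt) ** 2 for b in total_scores)
    if var_i == 0:          # difficulty 0 or 1: no variation, index is 0
        return 0.0
    return cov / (var_i * var_t) ** 0.5

# Made-up data: higher scorers tend to get the item right → strong positive value.
r = point_biserial([0, 0, 1, 1], [10, 20, 30, 40])
```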

20 Step 3: Distractor Analysis
Distractors are multiple-choice response options that are not the correct answer but seem plausible. They are often developed based upon students’ common misconceptions or incorrect calculations. Distractor analysis should only be conducted when an item is not performing well in terms of difficulty and discrimination. For distractors to work, you should see a higher proportion of the low-performing students choosing them. A distractor is performing poorly if very few or no students choose that option at all. Similarly, it is a warning sign if a larger proportion of students choose a distractor than the correct response; in that case the item should be reexamined to see if it has been miskeyed. Carefully consider distractors such as “none of the above” and “all of the above,” as they may do more harm than good to the reliability of the item and the test in general. An important property of a good distractor is that, albeit enticing, it must be clearly incorrect.

21 How do I do it?
Crosstabs in SPSS can easily handle this

                 Correct   Incorrect
Upper Quartile     60%       40%
Lower Quartile     30%       70%
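The slide uses SPSS Crosstabs; the same row-percentage table can be sketched in plain Python (the group labels and counts below are invented for illustration):

```python
from collections import Counter

def crosstab(groups, outcomes):
    """Row-normalized crosstab: proportion of each outcome within each group.
    Mirrors the row percentages that SPSS Crosstabs reports."""
    counts = Counter(zip(groups, outcomes))
    table = {}
    for g in set(groups):
        total = sum(v for (gg, _), v in counts.items() if gg == g)
        table[g] = {o: counts[(g, o)] / total for o in set(outcomes)}
    return table

# Made-up data: 10 upper-quartile and 10 lower-quartile students.
groups = ["upper"] * 10 + ["lower"] * 10
correct = (["correct"] * 6 + ["incorrect"] * 4   # upper: 60% correct
           + ["correct"] * 3 + ["incorrect"] * 7)  # lower: 30% correct
table = crosstab(groups, correct)
```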

22 The End

