Presentation on theme: "TEST SCORES INTERPRETATION - is a process of assigning meaning and usefulness to the scores obtained from classroom test. - This is necessary because."— Presentation transcript:


2 TEST SCORES INTERPRETATION

3 Test score interpretation is the process of assigning meaning and usefulness to the scores obtained from classroom tests. This is necessary because a raw score obtained from a test rarely has meaning on its own.

4 Criterion-referenced interpretation is the interpretation of a test raw score based on converting the raw score into a description of the specific tasks the learner can perform. A score is given meaning by comparing it with a standard of performance set before the test is given. This permits the description of a learner’s test performance without reference to the performance of others, usually in terms of some universally understood measure of proficiency such as speed, precision, or the percentage-correct score in a clearly defined domain of learning tasks. Examples of criterion-referenced interpretation:
· Types 60 words per minute without error (speed).
· Measures the room temperature to within ±0.1 degree of accuracy (precision).
· Defines 75% of the elementary concepts-of-electricity items correctly (percentage-correct score).
· A driving test, where learner drivers are measured against a range of explicit criteria (e.g., not endangering other road users).
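The percentage-correct example above can be sketched in a few lines of Python. This is only an illustration: the item counts and the 75% mastery standard are assumed values, not data from the slides.

```python
# Sketch of criterion-referenced interpretation: a percentage-correct
# score compared against a standard of performance set BEFORE the test.
# The numbers (18 of 24 items, 75% cut-off) are illustrative assumptions.

def percentage_correct(num_correct, num_items):
    """Percentage-correct score on a clearly defined domain of tasks."""
    return 100.0 * num_correct / num_items

def meets_criterion(num_correct, num_items, standard=75.0):
    """True if the learner reaches the pre-set standard of performance."""
    return percentage_correct(num_correct, num_items) >= standard

score = percentage_correct(18, 24)   # 18 of 24 electricity items correct
print(round(score, 1))               # 75.0
print(meets_criterion(18, 24))       # True: the criterion is met
```

Note that no other learner's score appears anywhere in the computation, which is exactly what distinguishes this interpretation from the norm-referenced kind described next.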

5 Norm-referenced interpretation is the interpretation of a raw score based on converting the raw score into some type of derived score that indicates the learner’s relative position in a clearly defined reference group. This type of interpretation reveals how a learner compares with other learners who have taken the same test. Example: a person’s IQ (145–154: genius).
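Two common derived scores of this kind are the z-score and the percentile rank; the small sketch below computes both for a made-up reference group (the scores are invented for illustration, not taken from the slides).

```python
# Sketch of norm-referenced interpretation: a raw score is converted into
# derived scores (z-score, percentile rank) that locate the learner
# within a reference group. The group scores below are made-up data.
from statistics import mean, pstdev

group = [55, 60, 62, 65, 68, 70, 72, 75, 80, 93]  # reference group

def z_score(raw, scores):
    """How many standard deviations the raw score lies above the mean."""
    return (raw - mean(scores)) / pstdev(scores)

def percentile_rank(raw, scores):
    """Percentage of the reference group scoring below the raw score."""
    return 100.0 * sum(s < raw for s in scores) / len(scores)

print(round(z_score(80, group), 2))  # about 0.96: well above the mean
print(percentile_rank(80, group))    # 80.0: eight of ten scored below 80
```

Unlike the criterion-referenced example, the meaning of the score here depends entirely on the reference group: the same raw score of 80 would earn a different percentile rank against a different group.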


8 Reliability of a test: the degree to which a test is consistent, stable, dependable, or trustworthy in measuring what it is measuring. How much can we rely on the results from a test? How dependable are scores from the test? How well are the items in the test consistent in measuring whatever it is measuring? Reliability asks: if the ability of a set of testees is determined by testing them at two different times with the same test, or with two parallel forms of the same test, or by having the same test marked by two different examiners, will the relative standing of the testees on each pair of scores remain the same?

9 Validity is the most important quality to consider when constructing or selecting a test. It refers to the meaningfulness or appropriateness of the interpretations to be made from test scores and other evaluation results. Validity is therefore the degree to which a test measures what it is intended to measure. While reliability is necessary for validity, it alone is not sufficient: a test can be reliable without being valid, but for a test to be valid it must also be reliable.

10 Three ways of determining validity:
· Content validity — the extent to which performance on a test represents the level of knowledge of the subject-matter content the test was designed to measure.
· Construct validity — the extent to which performance on a test represents the amount of the trait being measured that the examinee possesses.
· Criterion validity — the extent to which performance on a test predicts an examinee’s probable performance on some criterion task.


12 Face validation is a quick method of establishing the content validity of a test after its preparation. The test is presented to subject experts in the field for their opinion on how well the test “looks like” it measures what it is supposed to measure. This process is referred to as face validity: a subjective evaluation, based on a superficial examination of the items, of the extent to which a test measures what it was intended to measure.

13 A correlation coefficient expresses the degree of relationship between two sets of scores by a number ranging from -1.00 to +1.00. A perfect positive correlation is indicated by a coefficient of +1.00 and a perfect negative correlation by a coefficient of -1.00. The larger the coefficient (positive or negative), the higher the degree of relationship expressed. There are two common methods of computing a correlation coefficient:
1. Spearman rank-difference correlation — when the number of scores to be correlated is small (fewer than 30).
2. Pearson product-moment correlation — when the number of scores is large.
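Both coefficients named above can be computed from first principles, as the small sketch below does on invented score pairs (the data are assumptions for illustration; the tie-handling in the ranking step is deliberately omitted to keep the sketch short).

```python
# Sketch: Pearson product-moment and Spearman rank-difference
# correlation on two small, made-up sets of scores.
from statistics import mean

x = [10, 20, 30, 40, 50]   # e.g. scores on form A (invented data)
y = [12, 25, 33, 38, 52]   # e.g. scores on form B (invented data)

def pearson(a, b):
    """Pearson r: covariance divided by the product of score spreads."""
    ma, mb = mean(a), mean(b)
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    sa = sum((ai - ma) ** 2 for ai in a) ** 0.5
    sb = sum((bi - mb) ** 2 for bi in b) ** 0.5
    return cov / (sa * sb)

def ranks(a):
    """Rank 1 = smallest score; ties are not handled in this sketch."""
    order = sorted(range(len(a)), key=lambda i: a[i])
    r = [0] * len(a)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(a, b):
    """Rank-difference formula: rho = 1 - 6*sum(d^2) / (n*(n^2 - 1))."""
    d2 = sum((ra - rb) ** 2 for ra, rb in zip(ranks(a), ranks(b)))
    n = len(a)
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

print(round(pearson(x, y), 3))
print(spearman(x, y))   # 1.0: both sets rank the testees identically
```

Here Spearman's rho is exactly +1.00 because the two score sets order the testees identically, while Pearson's r is slightly below 1 because the raw values are not perfectly linear.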


15 One way to measure the internal consistency (reliability) of test items is Cronbach’s alpha. The higher the correlation among the items, the greater the alpha: high correlations imply that high (or low) scores on one question are associated with high (or low) scores on other questions. Alpha can vary from 0 to 1, with 1 indicating that the test is perfectly reliable. Recomputing Cronbach’s alpha with a particular item removed is a good measure of that item’s contribution to the test’s overall assessment performance. Cronbach’s α interpretation (George and Mallery, 2003):
α > .9 – Excellent
α > .8 – Good
α > .7 – Acceptable
α > .6 – Questionable
α > .5 – Poor
α < .5 – Unacceptable
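The alpha formula, and the “alpha if item deleted” check mentioned above, can be sketched directly from item-level variances. The 4-respondent, 3-item response matrix below is invented for illustration only.

```python
# Sketch: Cronbach's alpha from first principles.
# alpha = (k / (k - 1)) * (1 - sum(item variances) / variance(totals))
# Rows = respondents, columns = items; the data are made up.
from statistics import variance

rows = [
    [4, 4, 3],
    [3, 3, 3],
    [2, 3, 2],
    [5, 4, 4],
]

def cronbach_alpha(rows):
    k = len(rows[0])                    # number of items
    items = list(zip(*rows))            # column-wise item scores
    item_var = sum(variance(col) for col in items)
    total_var = variance([sum(r) for r in rows])
    return (k / (k - 1)) * (1 - item_var / total_var)

def alpha_if_deleted(rows, item):
    """Alpha recomputed with one item removed; a rise suggests the
    item is hurting the internal consistency of the test."""
    reduced = [[v for j, v in enumerate(r) if j != item] for r in rows]
    return cronbach_alpha(reduced)

print(round(cronbach_alpha(rows), 3))          # 0.9 for this toy data
print(round(alpha_if_deleted(rows, 1), 3))     # alpha without item 2
```

By the George and Mallery (2003) bands quoted above, an alpha of 0.9 for this toy matrix would fall at the boundary of “good” and “excellent”; in practice you would compute this in Excel or SPSS as the exercise below asks.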

16 Table 1. Multi-item statements measuring students’ satisfaction with their graduate program

17 Use Excel and SPSS to compute Cronbach’s α and interpret your results.

18 1. Open Excel, encode the answers, and save. (Note: in SPSS, in Data View, columns represent variables and rows represent cases (observations); in Variable View, each row is a variable and each column is an attribute associated with that variable.)
2. Open SPSS and open the saved Excel data file.
3. Click Analyze → Scale → Reliability Analysis; under Statistics, check the inter-item correlations.


