Foundations of Educational Measurement

1 Foundations of Educational Measurement
Chapter 6
This multimedia product and its contents are protected under copyright law. The following are prohibited by law:
• Any public performance or display, including transmission of any image over a network;
• Preparation of any derivative work, including the extraction, in whole or in part, of any images;
• Any rental, lease, or lending of the program.
Copyright © Allyn & Bacon 2008

2 Discussion Topics
• Educational measurement
• Descriptive statistics
  – Central tendency
  – Variation
  – Relationships
• Validity of measurement
• Reliability of measurement

3 Educational Measurement
• Measurement: assignment of numbers to differentiate values of a variable
• Evaluation: procedures for collecting information and using it to make decisions for which some value is placed on the results
• Assessment: multiple meanings
  – Measurement of a variable
  – Evaluation
  – Diagnosis of individual difficulties
  – Procedures to gather information on student performance

4 Educational Measurement
Purpose of measurement for research
• Obtain information about the variables being studied
• Provide a standard format for recording observations, performances, or other responses of participants
• Provide for a quantitative summary of the results from many participants

5 Educational Measurement
Four measurement scales
• Nominal: categories (e.g., race, gender, types of schools such as public, private, parochial)
• Ordinal: ordered categories (e.g., finishing position in a race, grade levels)
• Interval: equal intervals between numbers on the scale (e.g., test scores, achievement levels)
• Ratio: equal intervals and an absolute zero (e.g., height, weight, time)
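
The scale of a variable determines which summary statistics are meaningful for it. A minimal illustrative sketch in Python: the example variables come from the slide, but the pairing of scales with "meaningful statistics" follows common convention rather than anything stated in the chapter.

```python
# Illustrative only: the four measurement scales with example variables and the
# descriptive statistics conventionally considered meaningful at each level.
scales = {
    "nominal":  {"examples": ["race", "gender", "type of school"],
                 "stats": ["mode", "frequency counts"]},
    "ordinal":  {"examples": ["finishing position in a race", "grade level"],
                 "stats": ["mode", "median", "percentile rank"]},
    "interval": {"examples": ["test score", "achievement level"],
                 "stats": ["mode", "median", "mean", "standard deviation"]},
    "ratio":    {"examples": ["height", "weight", "time"],
                 "stats": ["mode", "median", "mean", "standard deviation", "ratios of scores"]},
}

for name, info in scales.items():
    print(f"{name:>8}: e.g., {', '.join(info['examples'])}")
    print(f"          meaningful statistics: {', '.join(info['stats'])}")
```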

6 Descriptive Statistics
• Statistics: procedures that summarize and analyze quantitative data
• Descriptive statistics: statistical procedures that summarize a set of numbers in terms of central tendency, variation, or relationships
• Important for understanding what the data tells the researcher

7 Descriptive Statistics
Frequency distributions
• An organization of the data set indicating the number of times (i.e., frequency) each score was present
• Types
  – Frequency table
  – Frequency polygon
  – Histogram
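
A frequency table simply counts how often each score appears in the data set. A minimal sketch, using an invented set of test scores rather than data from the text:

```python
# Build a frequency table from a hypothetical set of test scores.
from collections import Counter

scores = [72, 85, 85, 90, 72, 65, 85, 90, 78, 72, 90, 85]

freq = Counter(scores)                    # score -> frequency
n = len(scores)

print("Score  f   %")
for score in sorted(freq, reverse=True):  # frequency tables usually list high to low
    f = freq[score]
    print(f"{score:>5}  {f}  {100 * f / n:.1f}")
```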

8 Descriptive Statistics
Frequency distributions
• Shapes (see Figure 6.2)
  – Normal: scores are distributed symmetrically around the middle
  – Positively skewed: the set of scores is characterized by a large number of low scores and a small number of high scores
  – Negatively skewed: the set of scores is characterized by a large number of high scores and a small number of low scores
• Outlier scores: scores that distort findings because they are so different from the other scores in the sample

9 Descriptive Statistics
Central tendency
• What is the typical score?
• Three measures
  – Mode: the most frequently occurring score
  – Median: the score above and below which one-half of the scores occur
  – Mean: the arithmetic average of all scores
    · Statistical properties make it very useful
    · Concerns related to outlying scores (illustrated in the sketch below)
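
A short illustration of the three measures on an invented score set, including how a single outlying score pulls the mean while barely moving the median:

```python
# Mode, median, and mean for a hypothetical score set, and the effect of an outlier.
import statistics

scores = [70, 75, 75, 80, 85, 90, 95]

print("mode  :", statistics.mode(scores))    # most frequent score -> 75
print("median:", statistics.median(scores))  # middle score        -> 80
print("mean  :", round(statistics.mean(scores), 2))  # arithmetic average -> 81.43

# One extreme score drags the mean down noticeably; the median barely changes.
with_outlier = scores + [20]
print("mean with outlier  :", round(statistics.mean(with_outlier), 2))
print("median with outlier:", statistics.median(with_outlier))
```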

10 Descriptive Statistics
Variability
• How different are the scores?
• Two types
  – Range: the difference between the highest and lowest scores
  – Standard deviation: the average distance of the scores from the mean (see the sketch below)
• Relationship to the normal distribution
  – ±1 SD ≈ 68% of all scores in a distribution
  – ±2 SD ≈ 95% of all scores in a distribution
• Percentile ranks: the percentage of scores at or below a specified score
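
A minimal sketch computing the range, the standard deviation, and a percentile rank for an invented score set (percentile rank here follows the slide's definition: the percentage of scores at or below a specified score):

```python
# Range, standard deviation, and percentile rank for a hypothetical score set.
import statistics

scores = [60, 65, 70, 70, 75, 80, 80, 85, 90, 95]

score_range = max(scores) - min(scores)
sd = statistics.stdev(scores)            # sample SD; use pstdev() for the population SD

def percentile_rank(scores, x):
    """Percentage of scores at or below x."""
    return 100 * sum(s <= x for s in scores) / len(scores)

print("range                :", score_range)
print("standard deviation   :", round(sd, 2))
print("percentile rank of 80:", percentile_rank(scores, 80))
```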

11 (slide contains no transcript text)

12 Descriptive Statistics
Relationship
• How do two sets of scores relate to one another?
• Correlation: a measure of the relationship between two variables (see the sketch below)
  – Strength: from .00 (no relationship) to 1.00 (perfect relationship)
  – Direction: positive (+) or negative (-)
• Scatterplots: graphic depictions of correlations
  – Interactive scatterplots
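
The correlation coefficient most often reported is Pearson's r. A minimal sketch computing it from its definitional formula, r = cov(x, y) / (SDx · SDy), with invented pretest and posttest scores:

```python
# Pearson correlation between two hypothetical sets of scores.
import statistics

def pearson_r(x, y):
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
    return cov / (statistics.stdev(x) * statistics.stdev(y))

pretest  = [55, 60, 65, 70, 75, 80, 85]
posttest = [58, 64, 63, 75, 78, 79, 90]

r = pearson_r(pretest, posttest)
print(f"r = {r:.2f}")   # close to +1.00 -> strong positive relationship
```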

13 Interpreting Descriptive Statistics

14 Validity of Measurement
Validity: the extent to which inferences are appropriate, meaningful, and useful
• Refers to the interpretation of the results
• A matter of degree
• Specific to a particular use or interpretation
• A unitary concept
• Involves an overall evaluative judgment

15 Validity of Measurement
Three sources of validity evidence
• Test content: evidence of the extent to which items on a test are representative of the larger domain of content or items from which they are drawn
• Internal structure: evidence of the extent to which the relationships between items and parts of the instrument are consistent with those reflected in the theoretical basis of the instrument or its intended use

16 Validity of Measurement
Three sources of validity evidence (continued)
• Relationships with other variables: evidence of the extent to which scores from an instrument are related to similar as well as different traits (see the sketch below)
  – Convergent evidence: scores correlate with measures of the same construct
  – Discriminant evidence: scores do not correlate with measures of a different construct
  – Predictability: the extent to which test scores predict performance on a criterion variable
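
An illustrative check of convergent and discriminant evidence. The measures, scores, and expected pattern below are invented for the example: a new math anxiety scale should correlate strongly with an established math anxiety measure (convergent) and only weakly with an unrelated trait such as reading speed (discriminant).

```python
# Hypothetical data: correlations of a new scale with a same-construct measure
# and with a different-construct measure.
import numpy as np

new_scale        = np.array([12, 18, 25, 30, 34, 40, 45])
established_math = np.array([10, 20, 24, 28, 36, 41, 44])          # same construct
reading_speed    = np.array([230, 180, 260, 210, 200, 250, 190])   # different construct

r_convergent   = np.corrcoef(new_scale, established_math)[0, 1]
r_discriminant = np.corrcoef(new_scale, reading_speed)[0, 1]

print(f"convergent evidence   r = {r_convergent:.2f}")    # expect a high value
print(f"discriminant evidence r = {r_discriminant:.2f}")  # expect a value near zero
```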

17 Validity of Measurement
Importance of validity to research
• If the research results are to have any value, the measurement of each variable must be valid
• The use of established versus "new" instruments has implications for establishing validity
• Validity should be established prior to data collection (e.g., through pilot tests)

18 Validity of Measurement
Importance of validity to research (continued)
• Validity is a matter of degree (i.e., the extent to which...)
• Judged on the basis of available evidence
• Varying levels of validity evidence are reported in articles

19 Reliability of Measurement
• Reliability: the extent to which scores are free from error
• Error is assessed through the consistency of scores
• Sources of error
  – Test construction and administration: ambiguous questions, confusing directions, changes in scoring, interrupted testing, etc.
  – Participants' characteristics: test anxiety, lack of motivation, fatigue, guessing, etc.

20 Reliability of Measurement
• Reliability coefficients range from 0.00 to 1.00, regardless of the formula used to calculate them
  – 0.00 indicates no reliability or consistency
  – 1.00 indicates total reliability or consistency

21 Reliability of Measurement
Five types of reliability evidence
• Stability (i.e., test-retest)
  – Testing the same participants with the same test on two occasions
  – Limitation: carryover effects from the first to the second administration of the test
• Equivalence (i.e., parallel forms)
  – Testing the same participants with two parallel (i.e., equivalent) forms of the same test taken at the same time
  – Limitation: difficulty in creating parallel forms
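
Both coefficients come down to correlating two sets of scores from the same participants: two administrations of the same test for stability, or two parallel forms for equivalence. A minimal sketch with invented test-retest scores (statistics.correlation requires Python 3.10 or later):

```python
# Stability (test-retest) coefficient as a correlation between two administrations.
import statistics

time_1 = [78, 82, 90, 65, 71, 88, 95, 60]   # first administration
time_2 = [80, 79, 92, 68, 70, 85, 96, 63]   # same test, two weeks later

r_test_retest = statistics.correlation(time_1, time_2)
print(f"stability (test-retest) coefficient: {r_test_retest:.2f}")
```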

22 Reliability of Measurement
• Equivalence and stability
  – Testing the same participants with two forms of the same test taken at different times
  – Limitation: difficulty in creating parallel forms
• Internal consistency (split-half)
  – Testing the same participants with one test and "artificially" splitting the test into two halves (see the sketch below)
  – Limitation: the test must have a minimum of ten (10) questions
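
A sketch of the split-half approach with invented dichotomous item data: total the odd items and the even items for each participant, correlate the two half-test totals, and then apply the Spearman-Brown correction so the coefficient reflects the full test length (the correction step is standard practice, though the slide does not name it). Requires Python 3.10+ for statistics.correlation.

```python
# Split-half internal consistency with a Spearman-Brown correction.
import statistics

# Each row is one participant's item scores (1 = correct, 0 = incorrect).
items = [
    [1, 1, 0, 1, 1, 0, 1, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1, 0, 0, 0],
    [1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
    [1, 0, 0, 1, 0, 0, 1, 0, 1, 0],
    [1, 1, 1, 0, 1, 1, 0, 1, 1, 1],
    [0, 0, 0, 1, 0, 0, 1, 0, 0, 0],
]

odd_totals  = [sum(row[0::2]) for row in items]   # items 1, 3, 5, ...
even_totals = [sum(row[1::2]) for row in items]   # items 2, 4, 6, ...

r_half = statistics.correlation(odd_totals, even_totals)
split_half = (2 * r_half) / (1 + r_half)          # Spearman-Brown correction

print(f"half-test correlation : {r_half:.2f}")
print(f"split-half reliability: {split_half:.2f}")
```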

23 Reliability of Measurement
Internal consistency (continued)
• Two common formulas
  – KR-20: for dichotomously scored (i.e., right or wrong) items; typical of cognitive measures
  – Cronbach's alpha: for non-dichotomously scored items (e.g., strongly agree, agree, disagree, strongly disagree); typical of non-cognitive measures (see the sketch below)
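
A sketch of Cronbach's alpha for a hypothetical four-item attitude scale scored 1-4; with dichotomous (0/1) items the same formula reduces to KR-20. The responses are invented.

```python
# Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances / variance of total scores)
import statistics

def cronbach_alpha(rows):
    """rows: one list of item scores per participant."""
    k = len(rows[0])                                          # number of items
    item_vars = [statistics.variance(col) for col in zip(*rows)]
    total_var = statistics.variance([sum(row) for row in rows])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

responses = [
    [4, 3, 4, 4],
    [2, 2, 3, 2],
    [3, 3, 3, 4],
    [1, 2, 1, 2],
    [4, 4, 3, 4],
    [2, 1, 2, 1],
]

print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")
```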

24 Reliability of Measurement
Agreement
• Used when traditional estimates such as stability, equivalence, equivalence and stability, or internal consistency are not applicable
• Typically expressed as some form of agreement (e.g., the extent to which raters agree with one another)

25 Reliability of Measurement
Agreement (continued)
• Situations in which this estimate is used
  – Observational measures: agreement between raters making the same observation
  – Instruments with an insufficient number of test items: agreement as the percentage of responses that are the same across several participants
  – Data with highly skewed distributions: percentage of agreement in the number of participants
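
For the observational case, the simplest index is straight percentage agreement between two raters coding the same intervals. A sketch with invented codes; in practice a chance-corrected index such as Cohen's kappa is often reported alongside it, though the slides mention only agreement.

```python
# Percentage agreement between two observers coding the same observation intervals.
rater_a = ["on-task", "off-task", "on-task", "on-task", "off-task", "on-task", "on-task", "off-task"]
rater_b = ["on-task", "off-task", "on-task", "off-task", "off-task", "on-task", "on-task", "on-task"]

matches = sum(a == b for a, b in zip(rater_a, rater_b))
agreement = 100 * matches / len(rater_a)

print(f"agreement: {matches}/{len(rater_a)} intervals = {agreement:.0f}%")
```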

26 Reliability of Measurement
Importance of reliability
• If the results are to have any value, the measurement of each variable must be reliable
• Reliability should be established prior to conducting the research (e.g., through a pilot study)
• Reliability is a necessary but not sufficient condition for validity (i.e., to be valid, an instrument must be reliable, but a reliable instrument is not necessarily valid)

27 Reliability of Measurement
Conditions affecting reliability
• Length of the test: longer tests are typically more reliable
• Participants
  – Reliability is greater with heterogeneous samples
  – Scores for older participants are typically more reliable than those for younger children
• Trait being measured: cognitive traits are typically measured more reliably than affective characteristics

28 Reliability of Measurement
Enhancing reliability
• Standardized administration procedures (e.g., directions, conditions)
• Appropriate reading level
• Reasonable length of the testing period
• Counterbalancing the order of testing if several tests are being given (see the sketch below)
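
Counterbalancing means rotating the order in which the tests are administered across participants so that practice and fatigue effects do not all fall on the same instrument. A minimal sketch; the test names and participant labels are hypothetical.

```python
# Rotate participants through every possible administration order.
from itertools import permutations

tests = ["vocabulary", "math", "attitude survey"]
orders = list(permutations(tests))                 # all 6 possible orders

participants = [f"P{i + 1}" for i in range(12)]
for i, p in enumerate(participants):
    order = orders[i % len(orders)]                # assign orders in rotation
    print(f"{p}: {' -> '.join(order)}")
```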

29 Validity and Reliability
For a discussion of validity and reliability, see the American Educational Research Association's recently revised Standards for Educational and Psychological Testing.

