1 Research Methodology and Methods of Social Inquiry (www.socialinquiry.wordpress.com), Nov 8, 2011: Assessing Measurement Reliability & Validity

2 Three sources of variation: Observed variation = true differences + systematic measurement error + random measurement error

3 The measure needs to be: valid, reliable, exhaustive, mutually exclusive

4 VALIDITY Claims of having appropriately measured the DV and IVs are valid → validity of measurement. Assuming that there is a relationship in this study, if we claim causality, is the relationship causal? → internal validity of the causal argument. Generalizations from the results of our study to other units of observation (e.g., persons) in other places and at other times → external validity of our conclusions.

5 Validity of Measurement An empirical measure is valid to the extent that it adequately captures the real meaning of the concept under consideration (how well it measures the concept it is intended to measure).

6 Face validity, content validity, criterion-related validity, construct validity

7 Face Validity - look at the operationalization and assess whether "on its face" it seems like a good translation of the construct; - to improve the quality of a face validity assessment, make it more systematic.

8 Content Validity - to what extent does the measure represent all facets of the concept; - identify clearly the components of the total content ‘domain’, then show that the items adequately represent the components (Ex: knowledge tests); - assumes a good, detailed description of the content domain, which may not always be available.

9 Criterion-related Validity, I - applies to measures (‘tests’) that should indicate a person’s present/future standing on a specific behavior (trait). The behavior (trait) = the criterion; validation is a matter of how well scores on the measure correlate with the criterion of interest.

10 Criterion-related Validity, II Predictive Validity - assess how well the measure is able to predict something it should theoretically be able to predict. Concurrent Validity - assess how well the measure distinguishes between groups that it should theoretically be able to distinguish.

11 Construct Validity, I - based upon accumulation of research evidence (lit. rev.); - ‘construct’ ≈ ‘concept’. Assumption: the meaning of any concept is implied by statements of its theoretical relation to other concepts. Hence: - examine theory; - form hypotheses about variables that should be related to measure(s) of the concept; - form hypotheses about variables that should NOT be related to measure(s) of the concept; - gather evidence.

12 Construct Validity, II Convergent Validity - examine the degree to which the measure is similar to (converges on) other measures that it theoretically should be similar to. Discriminant Validity - examine the degree to which the measure is not similar to (diverges from) other measures that it theoretically should not be similar to. To estimate the degree to which any two measures are related to each other, one typically uses the correlation coefficient. www.socialresearchmethods.net/kb/constval.php - construct validity
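
A minimal sketch of how convergent and discriminant validity might be checked with correlation coefficients. The variables (two self-esteem items and a locus-of-control item) and the simulated data are purely hypothetical, chosen only to illustrate the idea:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Hypothetical data: two self-esteem items (should converge)
# and one unrelated locus-of-control item (should diverge).
latent_self_esteem = rng.normal(size=n)
self_esteem_a = latent_self_esteem + rng.normal(scale=0.5, size=n)
self_esteem_b = latent_self_esteem + rng.normal(scale=0.5, size=n)
locus_of_control = rng.normal(size=n)

# Convergent validity: correlation between the two self-esteem measures.
r_convergent = np.corrcoef(self_esteem_a, self_esteem_b)[0, 1]

# Discriminant validity: correlation with the theoretically unrelated measure.
r_discriminant = np.corrcoef(self_esteem_a, locus_of_control)[0, 1]

print(f"convergent r = {r_convergent:.2f}")      # expected to be high
print(f"discriminant r = {r_discriminant:.2f}")  # expected to be near zero
```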

13 RELIABILITY (consistency of measurement) -deals with the quality of measurement; A measure = reliable if it would give the same result over & over again (Assumption: what we are measuring is not changing!) True Score Theory - every measurement has 2 additive components: true ability (or the true level) of the respondent on that measure; PLUS random error. - foundation of reliability theory: A measure that has no random error (i.e., is all true score) is perfectly reliable; a measure that has no true score (i.e., is all random error) has zero reliability.
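
A minimal simulation of true score theory under the assumption of normally distributed true scores and purely random error (all numbers are illustrative, not from the slides). It shows reliability read as the share of observed variance that is true-score variance:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000

true_score = rng.normal(loc=50, scale=10, size=n)   # true level of each respondent
random_error = rng.normal(loc=0, scale=5, size=n)   # chance fluctuations
observed = true_score + random_error                # observed = true score + random error

# Reliability = true-score variance as a share of observed variance;
# 1.0 means no random error, 0.0 means all random error.
reliability = true_score.var() / observed.var()
print(f"reliability = {reliability:.2f}")
```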

14 Assessing Reliability Inter-coder reliability: - check the degree to which different interviewers/observers/raters/coders give consistent estimates of the same phenomenon. A. Nominal measure → raters are checking off which category each observation falls in: calculate the % of agreement between raters. Ex: 1 measure, with 3 categories; N = 100 observations, rated by 2 raters. On 86 of the 100 observations, the raters checked the same category (i.e., 86% inter-rater agreement).
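
A short sketch of the percent-agreement calculation for a nominal measure. The ratings are simulated (hypothetical data) so that the two raters agree on 86 of 100 observations, matching the slide's example:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical nominal ratings: 100 observations, 3 categories, 2 raters.
rater_1 = rng.integers(1, 4, size=100)
rater_2 = rater_1.copy()
disagree_idx = rng.choice(100, size=14, replace=False)   # force 14 disagreements
rater_2[disagree_idx] = (rater_2[disagree_idx] % 3) + 1  # shift each to a different category

# Inter-rater agreement: share of observations placed in the same category.
agreement = np.mean(rater_1 == rater_2)
print(f"inter-rater agreement = {agreement:.0%}")  # 86%, as in the slide's example
```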

15 B. Measure = continuous: calculate the correlation between the ratings of the 2 raters/observers. Ex: rating the overall level of activity in a classroom on a 1-to-7 scale. Ask raters to give their rating at regular time intervals (e.g., every 60 seconds); the correlation between these ratings = estimate of the consistency between the raters.
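
A sketch of the continuous case, with hypothetical 1-to-7 classroom-activity ratings from two raters; the reliability estimate is simply the correlation between their ratings:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical activity level of the classroom at 30 one-minute intervals (1-to-7 scale).
true_activity = rng.uniform(1, 7, size=30)

# Each rater's rating = the true activity level plus individual judgment error, kept on the 1-7 scale.
rater_a = np.clip(true_activity + rng.normal(scale=0.5, size=30), 1, 7)
rater_b = np.clip(true_activity + rng.normal(scale=0.5, size=30), 1, 7)

# Inter-rater reliability for a continuous measure: the correlation between the two sets of ratings.
r = np.corrcoef(rater_a, rater_b)[0, 1]
print(f"inter-rater correlation = {r:.2f}")
```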

16 Test-retest reliability - administer the same test to the same sample on 2 different occasions; - calculate the correlation between repeated applications of the measure through time. Problems: - people remember answers; - real change in attitudes may occur; - the first application of the measure may have produced change in the subject.

17 Internal consistency reliability -examines the consistency of responses across all items (simultaneously) in a composite measure (uses a single measurement instrument administered to a group of people on 1 occasion to estimate reliability). How consistent are the results for different items for the same construct within the measure?
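
One widely used statistic for internal consistency (not named on the slide) is Cronbach's alpha; below is a minimal sketch that computes it from a hypothetical respondents-by-items score matrix:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Internal consistency for a (respondents x items) score matrix."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)       # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical data: 200 respondents answering 5 items intended to tap the same construct.
rng = np.random.default_rng(4)
latent = rng.normal(size=(200, 1))
items = latent + rng.normal(scale=0.8, size=(200, 5))
print(f"alpha = {cronbach_alpha(items):.2f}")
```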

18 Split-half reliability - calculate the correlation between responses to subsets of items from the same measure (apply the scale to a sample; divide the items randomly into two halves; score each half; correlate the results; the higher the correlation, the better).
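
A minimal sketch of the split-half procedure, using a hypothetical 10-item scale administered to 200 simulated respondents:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical scale: 200 respondents, 10 items measuring the same construct.
latent = rng.normal(size=(200, 1))
items = latent + rng.normal(scale=0.8, size=(200, 10))

# Divide the items into two random halves and score each half.
item_order = rng.permutation(items.shape[1])
half_1 = items[:, item_order[:5]].sum(axis=1)
half_2 = items[:, item_order[5:]].sum(axis=1)

# Split-half reliability: correlation between the two half-scores (the higher, the better).
r_split_half = np.corrcoef(half_1, half_2)[0, 1]
print(f"split-half correlation = {r_split_half:.2f}")
```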

19 Systematic measurement error - when factors systematically influence the process of measurement, or the concept we measure.

20 Random error - when temporary, chance factors affect measurement; its presence, extent, and direction are unpredictable from one question to the next, or from one respondent to the next. www.socialresearchmethods.net

21 Relation Validity – Reliability – Measurement Error Systematic error affects distance from center; Random error affects tightness of pattern AND distance from center

22 Appropriate Measurement Valid, Reliable, Exhaustive - should exhaust the possibilities of what it is intended to measure; there must be sufficient categories so that virtually all units of observation being classified will fit into one of the categories. Mutually exclusive - each observation fits one and only one of the scale values (categories).

