
1 RELIABILITY AND VALIDITY Dr. Rehab F. Gwada

2 Control of Measurement: Reliability and Validity

3 Control of Measurement
• In selecting a measuring instrument, the researcher faces two basic questions:
• Does the instrument measure the variable consistently?
• Is the instrument a true measure of the variable?
• The first question concerns reliability; the second raises the issue of validity.
• An instrument must be reliable before it can be valid.

4 Reliability
• Reliability (consistency) = the extent to which an instrument consistently measures what it is supposed to measure.
• Types of reliability:
• Intra-rater reliability
• Inter-rater reliability
• Parallel forms reliability
• Internal consistency

5 1-Intra-rater reliability
• The degree to which the same rater/observer gives consistent estimates of the same measurement over time.
• Reflects stability and consistency over time, since it assesses a measure from one occasion to another.
• The same group is measured with the same instrument at two different times.
• Supports consistency of patient/client results over time and monitoring of changes following treatment.
• A single examiner can replicate the results.
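The test-retest idea above can be sketched numerically. One common index (not named on the slide, so this is an illustrative choice) is the Pearson correlation between the same rater's scores from two sessions; the knee-flexion values below are invented for the example.

```python
# Test-retest (intra-rater) reliability: correlate the same rater's
# scores from two sessions. All numbers are illustrative.

def pearson_r(x, y):
    """Pearson product-moment correlation between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# One therapist measures knee flexion (degrees) in 6 patients,
# twice, one week apart.
session_1 = [120, 135, 110, 142, 128, 115]
session_2 = [122, 133, 112, 140, 130, 113]

r = pearson_r(session_1, session_2)
print(f"test-retest r = {r:.3f}")  # values near 1.0 suggest good stability over time
```

A high correlation indicates that the single examiner's measurements are stable from one occasion to the next.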

6 2-Inter-rater reliability
• The degree to which different raters/observers give consistent estimates of the same measurement.
• Reflects stability and consistency across raters/examiners.
• Studies of inter-rater reliability can be performed in several ways:
• One therapist measures the patient, and then a second therapist repeats the same measurement.
• The therapists make their measurements and judgments simultaneously.

7 2-Inter-rater reliability
• This is important because clients often move between therapy services, for example from an acute ward to a rehabilitation ward, from an in-patient service to a day hospital or outpatient service, or from an intermediate care/rapid-response service to longer-term support by a community team. A person might therefore be given the same assessment on a number of occasions, but each time a different therapist administers the test.
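For categorical judgments like the ward-to-ward example above, one widely used inter-rater statistic is Cohen's kappa, which corrects raw percent agreement for agreement expected by chance. The slides do not name a specific statistic, so this is an illustrative sketch with invented gait gradings.

```python
# Inter-rater reliability for categorical ratings: Cohen's kappa.
# kappa = (observed agreement - chance agreement) / (1 - chance agreement)
# All ratings below are illustrative.

from collections import Counter

def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    # Chance agreement: product of each rater's marginal proportions
    expected = sum((ca[c] / n) * (cb[c] / n) for c in set(ca) | set(cb))
    return (observed - expected) / (1 - expected)

# Two therapists independently grade 10 patients' gait as
# "N" (normal) or "I" (impaired).
therapist_1 = list("NNIININNII")
therapist_2 = list("NNIINNNNII")

kappa = cohens_kappa(therapist_1, therapist_2)
print(f"kappa = {kappa:.3f}")
```

Kappa of 1.0 means perfect agreement; values near 0 mean the raters agree no more often than chance would predict.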

8 3-Parallel forms reliability
• Parallel (alternate) forms method: administering two alternate forms of the same measurement device and then comparing the scores.
• Both forms are administered to the same person and the scores are correlated. If the two forms produce the same results, the instrument is considered reliable.
• A good example is the SAT. There are two versions that measure Verbal and Math skills; two forms measuring Math should be highly correlated, and that would document reliability.

9 3-Parallel forms reliability
• To assess parallel forms reliability you first have to create two parallel forms. One way to accomplish this is to write a large set of questions that address the same construct and then randomly divide the questions into two sets (instruments). You then administer both instruments to the same sample of people.
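The random-split step described above can be sketched as follows. The item names and pool size are placeholders, not from an actual test.

```python
# Building parallel forms: shuffle a pool of items on the same
# construct and split it into two equal-length forms.

import random

item_pool = [f"item_{i:02d}" for i in range(1, 21)]  # 20 questions on one construct

rng = random.Random(42)       # fixed seed so the split is repeatable
shuffled = item_pool[:]
rng.shuffle(shuffled)

form_a = sorted(shuffled[:10])  # first instrument
form_b = sorted(shuffled[10:])  # second instrument

assert not set(form_a) & set(form_b)  # no item appears on both forms
print("Form A:", form_a)
print("Form B:", form_b)

# After administering both forms to the same sample, parallel forms
# reliability is the correlation between each person's Form A total
# and Form B total.
```

Random assignment is what makes the two forms "parallel": each form should sample the construct in the same way, so any score difference reflects measurement error rather than content differences.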

10 4-Internal consistency reliability (IC)
• In internal consistency reliability estimation, a single measurement instrument is administered to a group of people on one occasion.
• In effect, the reliability of the instrument is judged by estimating how well the items that reflect the same construct yield similar results.

11 4-Internal consistency reliability (IC)
• Used to assess the degree of consistency of results across items within a test.
• The degree of homogeneity among the items in a scale or measure.
• IC quantifies this association: the higher the IC, the more homogeneous the construct; the lower the IC, the more heterogeneous the factors within the test.
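A standard internal consistency coefficient (not named on the slides, so this is an illustrative choice) is Cronbach's alpha: alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). The Likert-style responses below are invented for the example.

```python
# Internal consistency via Cronbach's alpha. All responses illustrative.

def variance(xs):
    """Population variance of a list of numbers."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(item_scores):
    """item_scores: list of rows, one row of item responses per person."""
    k = len(item_scores[0])                     # number of items
    items = list(zip(*item_scores))             # transpose: one column per item
    totals = [sum(row) for row in item_scores]  # per-person total score
    return (k / (k - 1)) * (1 - sum(variance(i) for i in items) / variance(totals))

# 5 respondents x 4 items, all probing the same construct (0-4 scale).
responses = [
    [4, 4, 3, 4],
    [2, 2, 2, 3],
    [3, 3, 3, 3],
    [1, 1, 2, 1],
    [4, 3, 4, 4],
]
print(f"alpha = {cronbach_alpha(responses):.3f}")  # closer to 1 = more homogeneous items
```

When the items move together across respondents (a homogeneous construct), the total-score variance dominates the summed item variances and alpha approaches 1; heterogeneous items pull it down.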

12 What is Validity?
• Valid = faithful = true.
• Validity is the extent to which an instrument measures what it is intended to measure.

13 Relationship Between Reliability & Validity
• A measure that is not reliable cannot be valid. Thus, reliability is a necessary condition for validity.
• A measure that is reliable is not necessarily valid. Thus, reliability is not a sufficient condition for validity.

14 Reliable but not Valid
• In the first scenario, you are hitting the target consistently, but you are missing the centre of the target.
• That is, you are consistently and systematically measuring the wrong value for all respondents.
• In this case your measure is reliable but not valid (you are consistent, but wrong!).

15 Fairly Valid but not very Reliable
• The second scenario shows hits that are randomly spread across the target.
• You seldom hit the centre of the target, but on average you are getting the right answer for the group (though not very well for individuals).
• In this case you get a valid group estimate, but you are inconsistent.
• Here you can clearly see that reliability is directly related to the variability of your measure.

16 Neither Valid nor Reliable
• The third scenario shows a case where your hits are spread across the target and you consistently miss the centre.
• In this case your measure is neither reliable nor valid.

17 Valid & Reliable
• Finally, we see the "Robin Hood" scenario: you consistently hit the centre of the target.
• In this case your measure is both reliable and valid.

18 Types of Measurement Validity
• Construct validity
• Translation validity
• Criterion validity

19 Types of validity: Criterion Validity
• If an instrument demonstrates a close relationship to another instrument (the criterion) when measuring some known quantity or quality, the instrument is said to be valid.
• The criterion is an instrument that is well established, accepted, or considered the best instrument of its kind, often called a "gold standard".
• For example, the manual muscle test and the dynamometer.
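The manual muscle test (MMT) vs. dynamometer example above is typically quantified by correlating the candidate instrument with the gold standard on the same people. The force values below are invented, and Pearson correlation is one common choice of statistic, not one the slides prescribe.

```python
# Criterion validity sketch: correlate a candidate instrument's scores
# with a gold-standard criterion measured on the same patients.
# All numbers are illustrative, not real clinical data.

def pearson_r(x, y):
    """Pearson product-moment correlation between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Manual muscle test grades (0-5 ordinal scale) vs. hand-held
# dynamometer force (newtons) for 7 patients.
mmt_grade = [3, 4, 5, 2, 4, 5, 3]
dynamometer_n = [110, 160, 210, 70, 150, 220, 120]

r = pearson_r(mmt_grade, dynamometer_n)
print(f"criterion validity r = {r:.3f}")  # high r supports criterion validity
```

A strong correlation with the gold standard is evidence that the cheaper or quicker instrument can stand in for it.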

20 Types of Measurement Validity
• Translation validity: Face validity and Content validity

21 Translation validity: Face validity
• The simplest and least scientific form of validity.
• It is demonstrated when a measure superficially appears to measure what it claims to measure.
• Based on subjective judgment and difficult to quantify.
• In simple terms, face validity is whether a test seems to measure what it is intended to measure (Asher, 1996).
• Examples: hydrostatic weighing and the lower extremity function scale.

22 Translation validity: Content Validity
• The extent to which the test items actually represent the kinds of material (i.e., content) they are supposed to represent.
• Law (1997) defines content validity as "the comprehensiveness of an assessment and its inclusion of items that fully represent the attribute being measured".

23 Content Validity
• Asher (1996) notes that content validity is descriptive rather than statistically determined. Because of this, it is sometimes considered a weaker form of validity than other types.

24 Content Validity
• For example, if you want to measure balance and have 10 different items related to aspects of balance, you would need to examine each item separately to see if it really relates to the domain of balance (Lewis and Bottomley, 1994).
• Likewise, for a lower extremity functional scale, one would expect a set of activities that covers all aspects of lower extremity function:
• 1-Walking a mile.
• 2-Ability to climb stairs.

25 Types of validity: Construct Validity
• Refers to the extent to which a test measures the underlying theoretical construct.
• A test designed to measure depression must measure only that particular construct, not closely related ideas such as anxiety or stress.

26 Construct Validity
• Construct validation involves forming theories about the domain of interest and then assessing the extent to which the measure under investigation provides results that are consistent with those theories.
• For example, several theories have been applied to the construct validation of functional scale measures in low back pain.

27 EXTERNAL VALIDITY
• External validity is about generalization: to what extent can an effect found in research be generalized to other populations, settings, treatment variables, and measurement variables?
• External validity is usually split into two distinct types, population validity and ecological validity, and both are essential elements in judging the strength of an experimental design.

28 INTERNAL VALIDITY
• The extent to which the results demonstrate that a causal relationship exists between the independent and dependent variables.
• If the effect on the dependent variable is due only to variation in the independent variable(s), which requires controlling extraneous variables, then internal validity is achieved.

29 Questions?

