Presentation on theme: "Reliability Ability to produce similar results when repeated measurements are made under identical conditions. Consistency of the results Can you get."— Presentation transcript:


2 Reliability Ability to produce similar results when repeated measurements are made under identical conditions. Consistency of the results. Can you get the same result if you or somebody else does it again? Consistent -- stable.

3 Valid and Reliable A good measurement measures what it should measure, and does so in a consistent way.

4 Reliable but Invalid Your measurement is consistent, but it is not measuring what it is supposed to measure.

5 Unreliable Sometimes you get it right, other times not. If your measurement is unreliable, you cannot claim high validity either.

6 The "O = T + E" rule: Observed score = True score + Error. Observed = the measured score, the result you record. True = the "true," actual, exact state. Error = measurement error.
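The rule can be made concrete with a simulation. Below is a minimal sketch (not from the slides) that generates hypothetical true scores, adds random measurement error, and shows a standard consequence in classical test theory: the reliability coefficient is the share of observed-score variance that comes from true-score variance. All names and numbers are illustrative, and NumPy is assumed to be installed.

```python
import numpy as np

rng = np.random.default_rng(0)

true = rng.normal(100, 15, size=10_000)   # T: stable "true" states
error = rng.normal(0, 5, size=10_000)     # E: random measurement error
observed = true + error                   # O = T + E

# In classical test theory, reliability = var(T) / var(O):
# the fraction of observed variance that is true-score variance.
print(true.var() / observed.var())        # ~0.90 with these settings
```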

7 Types of Reliability Interobserver (interrater) reliability Test-Retest reliability Parallel-forms reliability Internal consistency

8 Test-retest reliability Ability of a measure to produce the same or highly similar results when given again. If results are similar on testing and re-testing, the instrument is reliable. If results vary widely, then your instrument is not reliable -- OR what you are measuring is not a stable characteristic (e.g., mood or anxiety level vs. intelligence).

9 Test-Retest Reliability Are people's scores consistent over time? Same people, same instrument, measured at two different times. Correlate time 1 scores with time 2 scores. The test-retest r is your estimate of reliability. CAUTION: pick the test-retest interval so that real change is not expected to occur.
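A minimal sketch of the computation, with hypothetical scores: the test-retest estimate is simply the Pearson correlation between the two occasions.

```python
import numpy as np

# Hypothetical scores for the same six people, measured twice
time1 = np.array([12, 18, 9, 22, 15, 20])
time2 = np.array([14, 17, 10, 21, 16, 19])

r = np.corrcoef(time1, time2)[0, 1]   # test-retest reliability estimate
print(f"test-retest r = {r:.2f}")
```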

10 Interobserver Reliability Also known as inter-rater reliability. Consistency between measurements made by two or more observers.

11 Inter-rater Agreement Are observers consistent in seeing the same things? Different observers watch the same sample of behavior. Compute the proportion of observations on which both observers recorded the same behavior as happening: agreement = # agreements / (# agreements + # disagreements). Caution: training observers to be consistent may not be easy.
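A minimal sketch of the agreement calculation with hypothetical codings; 1 means the observer recorded the behavior as happening in that interval.

```python
# Two observers' codings of the same ten observation intervals
observer_a = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
observer_b = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]

agreements = sum(a == b for a, b in zip(observer_a, observer_b))
# agreements / (agreements + disagreements), i.e. / total observations
proportion = agreements / len(observer_a)
print(f"percent agreement = {proportion:.0%}")   # 80%
```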

12 Internal Consistency How consistent is performance on each item with performance on the total measure? Same people, one measure, one time. Correlate the score on Q1 with the total score (r1t), the score on Q2 with the total score (r2t), the score on Q3 with the total score (r3t), etc. Average the resulting correlations. This average r is your estimate of reliability (often called KR-20, or coefficient alpha).
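As a sketch, the version below computes coefficient alpha directly, using its standard variance formula; this is the index the slide names, closely related to the averaged item-total correlations it describes. The data matrix is hypothetical.

```python
import numpy as np

# Hypothetical responses: rows = people, columns = questionnaire items
scores = np.array([
    [3, 4, 3, 5],
    [2, 2, 3, 2],
    [5, 4, 5, 4],
    [1, 2, 1, 2],
    [4, 5, 4, 4],
])

k = scores.shape[1]                          # number of items
item_vars = scores.var(axis=0, ddof=1)       # variance of each item
total_var = scores.sum(axis=1).var(ddof=1)   # variance of total scores

alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(f"coefficient alpha = {alpha:.2f}")
```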

13 Split-Half Method -- Reliability A type of internal consistency. Score the odd items on your questionnaire; score the even items. Correlate the two half-scores across respondents = index of reliability. There are other ways to split a survey in half.
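A minimal sketch of the odd/even split on the same kind of hypothetical item matrix. The slide stops at the correlation between the two halves; the Spearman-Brown step-up shown at the end is a common extra step (not on the slide) that adjusts for each half being only half as long as the full survey.

```python
import numpy as np

# Hypothetical responses: rows = people, columns = items 1..4
scores = np.array([
    [3, 4, 3, 5],
    [2, 2, 3, 2],
    [5, 4, 5, 4],
    [1, 2, 1, 2],
    [4, 5, 4, 4],
])

odd_total = scores[:, 0::2].sum(axis=1)    # items 1, 3
even_total = scores[:, 1::2].sum(axis=1)   # items 2, 4

r_half = np.corrcoef(odd_total, even_total)[0, 1]
full_length = 2 * r_half / (1 + r_half)    # Spearman-Brown correction
print(f"split-half reliability = {full_length:.2f}")
```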

14 How Reliable Must It Be? Reliability coefficients range from 0 to 1; higher is better (1.00 is perfect). For research, journals expect .60 or better. For measures that are used to make decisions about people's lives, you need .80 or better.

15 Would you measure a baseball player's hitting ability based on a single at-bat? Why or why not? Would you want to be tested on the research methods exam with only a few items?

16 Increasing Reliability More items are better. –Increase the number of items on your questionnaire (no 1- or 2-item measures) –Don't measure something with only one item. Reliability improves with a larger number of observations or survey questions (see the sketch below).
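The claim that more items improve reliability can be quantified with the Spearman-Brown prophecy formula, a standard result that is not on the slides: if a test with reliability r is lengthened k-fold with comparable items, the predicted reliability is kr / (1 + (k - 1)r).

```python
def spearman_brown(r: float, k: float) -> float:
    """Predicted reliability when a test is lengthened k-fold
    with comparable items (Spearman-Brown prophecy formula)."""
    return (k * r) / (1 + (k - 1) * r)

# A scale with reliability .60, tripled in length:
print(f"{spearman_brown(0.60, 3):.2f}")   # 0.82
```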

17 Increasing Reliability, continued Write clear, well-worded survey items. Standardize administration procedures: –Treat all participants alike –Keep timing, procedures, and instructions the same –Keep the testing situation free of distractions –Give clear instructions Score the survey carefully -- avoid errors.

18 Quasi-experimental Research Naturally occurring conditions (the IV changes on its own). No control over variables influencing behavior (confounding variables): –Another variable that changed along with the variable of interest may have caused the observed effect.

19 Quasi-experimental Hanauma Bay 10 years ago –Unregulated parking Hanauma Bay 5 years ago –Parking regulated Hanauma Bay last year –Admission fee (unless kamaaina) Hanauma Bay this year –Admission fee (unless kamaaina) and must view an educational program If... the same satisfaction rating survey is given each year

20 Field experiment - nursing home residents Independent variable: degree of control over decisions that affect their lives. Group 1: given responsibility/control for making choices about the home's operation. Group 2: told the staff would be responsible for their care and needs. Dependent variables: activity level, happiness, physical health.

21 Program evaluation Research on programs –that are proposed and implemented to achieve some positive effect on people. Outcome evaluation –Did the program result in the positive outcome for which it was designed? Process evaluation –Is the program reaching the target population, and attracting enough clients? –Is the staff providing the planned services?

22 Non-equivalent control group pre-test -- post-test design

                    DV pre-test   Treatment      DV post-test
Group 1             Measure       Treatment      Measure
Group 2 (Control)   Measure       No treatment   Measure

