
1
Epidemiologic Methods - Fall 2002

2
Course Administration
Format
– Lectures: Tuesdays 8:15 am, except for Dec. 10 at 1:30 pm
– Small Group Sections: Tuesdays 1:00 pm, except for the last section, Dec. 3, from 10:30 to 11:30. Begin next week. Content: overview and discussion of lectures, and review of assignments.
Textbooks
– Epidemiology: Beyond the Basics by Szklo and Nieto (S & N)
– Multivariable Analysis: A Practical Guide for Clinicians by M. Katz
Grading
– Based on points achieved on homework (~80%) and final (~20%)
– Late assignments are not accepted
Missed sessions
– All material distributed in class is posted on the website

3
Definitions of Epidemiology
– The study of the distribution and determinants (causes) of disease (e.g., cardiovascular epidemiology)
– The method used to conduct human subject research: the methodologic foundation of any research where individual humans or groups of humans are the unit of observation

4
Understanding Measurement: Aspects of Reproducibility and Validity
– Review of measurement scales
– Reproducibility vs. validity
– Reproducibility: importance; sources of measurement variability; methods of assessment by variable type (interval vs. categorical)

5
Clinical Research: Sample → Measure (Intervene) → Analyze → Infer

6
A study can only be as good as the data... -Martin Bland

7
Measurement Scales

8
Reproducibility vs. Validity
– Reproducibility: the degree to which a measurement provides the same result each time it is performed on a given subject or specimen
– Validity (from the Latin validus, "strong"): the degree to which a measurement truly measures (represents) what it purports to measure (represent)

9
Reproducibility vs. Validity
– Reproducibility, aka: reliability, repeatability, precision, variability, dependability, consistency, stability
– Validity, aka: accuracy

10
Relationship Between Reproducibility and Validity
(figure: target diagrams illustrating good reproducibility with poor validity, and poor reproducibility with good validity)

11
Relationship Between Reproducibility and Validity
(figure: target diagrams illustrating good reproducibility with good validity, and poor reproducibility with poor validity)

12
Why Care About Reproducibility? Impact on Validity
Mathematically, the upper limit of a measurement's validity is a function of its reproducibility.
Consider a study to measure height in the community:
– Assume the measurement has imperfect reproducibility: if we measure height twice on a given person, we get two different values, so at least one of the two values must be wrong (imperfect validity)
– If the study measures everyone only once, the errors, despite being random, will lead to biased inferences when these measurements are used (i.e., they lack validity)

14
Impact of Reproducibility on Statistical Precision
Classical measurement theory:
– observed value (O) = true value (T) + measurement error (E)
– If we assume E is random and normally distributed: E ~ N(0, σ²_E)

15
Impact of Reproducibility on Statistical Precision
– observed value (O) = true value (T) + measurement error (E)
– E is random and ~ N(0, σ²_E)
When measuring a group of subjects, the variability of observed values is a combination of the variability in their true values and the variability in the measurement error:
σ²_O = σ²_T + σ²_E
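The variance decomposition above can be checked numerically. A minimal sketch with simulated data (the variances and the height example are illustrative, not the lecture's data):

```python
import numpy as np

rng = np.random.default_rng(0)

n = 100_000          # number of simulated subjects
sigma_T = 10.0       # SD of true values
sigma_E = 5.0        # SD of measurement error, E ~ N(0, sigma_E^2)

true = rng.normal(170.0, sigma_T, n)   # e.g. true heights in cm
error = rng.normal(0.0, sigma_E, n)
observed = true + error                # O = T + E

var_O = observed.var()
# Because T and E are independent, var_O ≈ sigma_T^2 + sigma_E^2 = 125
```

With independent T and E the covariance term vanishes, so the observed variance is very close to 100 + 25 = 125.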

16
Why Care About Reproducibility?
σ²_O = σ²_T + σ²_E
More measurement error means more variability in observed measurements, e.g., when measuring height in a group of subjects.
(figure: distributions of observed height with and without measurement error)

17
Why Care About Reproducibility?
σ²_O = σ²_T + σ²_E
More variability of observed measurements has profound influences on statistical precision/power:
– Descriptive studies: wider confidence intervals
– RCTs: power to detect a treatment difference is reduced
– Observational studies: power to detect an influence of a particular risk factor upon a given disease is reduced

18
Mathematical Definition of Reproducibility
Reproducibility (reliability) = σ²_T / (σ²_T + σ²_E)
– Varies from 0 (poor) to 1 (optimal)
– As σ²_E approaches 0 (no error), reproducibility approaches 1
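In classical measurement theory terms, reproducibility can be written as the reliability coefficient σ²_T / (σ²_T + σ²_E). A minimal sketch (the variances passed in are made-up):

```python
def reliability(var_true, var_error):
    """Reliability (reproducibility) coefficient: var_T / (var_T + var_E)."""
    return var_true / (var_true + var_error)

# As the error variance shrinks toward 0, reliability approaches 1 (optimal)
r_no_error = reliability(100.0, 0.0)
r_equal = reliability(100.0, 100.0)   # error variance as large as true variance
```

When the error variance equals the true variance, half of the observed variability is noise and the coefficient is 0.5.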

19
(figure: statistical power, from Phillips and Smith, J Clin Epi 1993)

20
Sources of Measurement Error
– Observer: within-observer (intrarater); between-observer (interrater)
– Instrument: within-instrument; between-instrument

21
Sources of Measurement Error
e.g., plasma HIV viral load:
– observer: measurement-to-measurement differences in tube filling, time before processing
– instrument: run-to-run differences in reagent concentration, PCR cycle times, enzymatic efficiency

22
Within-Subject Variability
Although not the fault of the measurement process, moment-to-moment biological variability can have the same effect as errors in the measurement process. Recall that:
– observed value (O) = true value (T) + measurement error (E)
– T = the average of measurements taken over time
– E is always in reference to T
Therefore, substantial moment-to-moment within-subject biologic variability will increase the variability in the error term and thus the overall variability, because σ²_O = σ²_T + σ²_E

24
Assessing Reproducibility
Depends on the measurement scale:
– Interval scale: within-subject standard deviation; coefficient of variation
– Categorical scale: Cohen's kappa

25
Reproducibility of an Interval Scale Measurement: Peak Flow
Assessment requires >1 measurement per subject
(table: peak flow rate measured in 17 adults, from Bland & Altman)

26
Assessment by Simple Correlation

27
Pearson Product-Moment Correlation Coefficient
– r (rho) ranges from -1 to +1
– r describes the strength of linear association
– r² = proportion of variance (variability) of one variable accounted for by the other variable
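As a sketch, r and r² for a pair of replicate measurements can be computed with NumPy (the values below are made-up replicate pairs, not the lecture's peak flow data):

```python
import numpy as np

# hypothetical first and second measurements on 8 subjects
meas1 = np.array([490.0, 397.0, 512.0, 401.0, 470.0, 611.0, 415.0, 431.0])
meas2 = np.array([485.0, 410.0, 520.0, 415.0, 467.0, 605.0, 429.0, 436.0])

r = np.corrcoef(meas1, meas2)[0, 1]   # Pearson product-moment correlation
r_squared = r ** 2                    # proportion of variance accounted for
```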

28
(figure: example scatterplots with r = 1.0, r = 0.8, r = 0.0, and r = -1.0)

29
Correlation Coefficient for Peak Flow Data
r(meas. 1, meas. 2) = 0.98

30
Limitations of Simple Correlation for Assessment of Reproducibility
Depends upon the range of the data, e.g., for peak flow:
– r (full range of data) = 0.98
– r (peak flow <450) = 0.97
– r (peak flow >450) = 0.94

32
Limitations of Simple Correlation for Assessment of Reproducibility
– Depends upon ordering of data
– Measures linear association only

33
(figure: scatterplot of measurement 2 vs. measurement 1, axes 100 to 1700 l/min)

34
Limitations of Simple Correlation for Assessment of Reproducibility Gives no meaningful parameter using the same scale as the original measurement

35
Within-Subject Standard Deviation
Mean within-subject standard deviation (s_w) = 15.3 l/min

36
Computationally easier with an ANOVA table: the mean within-subject standard deviation (s_w) is the square root of the within-subject (residual) mean square.
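With exactly two replicates per subject, s_w can also be computed directly from the within-pair differences, since s_w² = Σd²/(2n). A sketch with made-up replicate values:

```python
import numpy as np

# two replicate measurements per subject (hypothetical values)
meas1 = np.array([494.0, 395.0, 516.0, 434.0, 476.0, 557.0, 413.0, 442.0])
meas2 = np.array([490.0, 397.0, 512.0, 401.0, 470.0, 611.0, 415.0, 431.0])

# with 2 replicates, the within-subject variance is estimated by
# half the mean squared within-pair difference: s_w^2 = sum(d^2) / (2n)
d = meas1 - meas2
s_w = float(np.sqrt(np.sum(d ** 2) / (2 * len(d))))
```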

37
s_w: Further Interpretation
If we assume that replicate results:
– are normally distributed
– have a mean that estimates the true value
then 95% of replicates are within (1.96)(s_w) of the true value.

38
s_w: Peak Flow Data
If we assume that replicate results:
– are normally distributed
– have a mean that estimates the true value
then 95% of replicates are within (1.96)(s_w) = (1.96)(15.3) ≈ 30 l/min of the true value.

39
s_w: Further Interpretation
Difference between any 2 replicates for the same person: diff = meas1 - meas2
Because var(diff) = var(meas1) + var(meas2):
s²_diff = s²_w + s²_w = 2s²_w, so s_diff = √2 · s_w ≈ 1.41 s_w

40
s_w: Difference Between Two Replicates
If we assume that the differences:
– are normally distributed with mean 0
– have a standard deviation estimated by s_diff
then the difference between 2 measurements for the same subject is expected to be less than (1.96)(s_diff) = (1.96)(1.41)s_w = 2.77 s_w for 95% of all pairs of measurements.

41
s_w: Further Interpretation
For the peak flow data: the difference between 2 measurements for the same subject is expected to be less than 2.77 s_w = (2.77)(15.3) = 42.4 l/min for 95% of all pairs.
Bland and Altman refer to this as the "repeatability" of the measurement.
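The repeatability figure for the peak flow data follows in two lines:

```python
import math

s_w = 15.3                       # within-subject SD for peak flow (l/min)
s_diff = math.sqrt(2) * s_w      # SD of the difference of two replicates
repeatability = 1.96 * s_diff    # 95% of replicate pairs differ by less than this
# 1.96 * sqrt(2) ≈ 2.77, so repeatability ≈ 2.77 * s_w ≈ 42.4 l/min
```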

42
One Common Underlying s_w
Appropriate only if there is one s_w, i.e., s_w does not vary with the true underlying value.
(figure: within-subject standard deviation vs. subject mean peak flow; Kendall's correlation coefficient = 0.17, p = 0.36)

43
Another Interval Scale Example
Salivary cotinine in children (Bland & Altman): n = 20 participants, each measured twice

44
Cotinine: Absolute Difference vs. Mean
(figure: subject absolute difference vs. subject mean cotinine; Kendall's tau = 0.62, p = 0.001)

45
Logarithmic Transformation

46
Log-Transformed: Absolute Difference vs. Mean
(figure: subject absolute log difference vs. subject mean log cotinine; Kendall's tau = 0.07, p = 0.7)

47
s_w for Log-Transformed Cotinine Data
s_w = 0.175 on the log10 scale
Back-transforming to the native scale: antilog(s_w) = antilog(0.175) = 10^0.175 = 1.49

48
Coefficient of Variation
On the natural scale there is not one common within-subject standard deviation for the cotinine data, so no single absolute number can represent how far any replicate is expected to be from the true value or from another replicate. Instead, the within-subject standard deviation varies with the level of the measurement, and it is reasonable to express it as a percentage of the level: the coefficient of variation.

49
Cotinine Data
Coefficient of variation = 1.49 - 1 = 0.49
At any level of cotinine, the within-subject standard deviation of repeated measures is 49% of that level.
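The back-transformation behind this slide amounts to:

```python
# within-subject SD of log10(cotinine), from the lecture's example
s_w_log = 0.175

# back-transform to the native scale: replicates are expected to differ
# from the level by a multiplicative factor of antilog(s_w)
factor = 10 ** s_w_log        # ≈ 1.49
cv = factor - 1               # coefficient of variation ≈ 0.49 (49%)
```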

50
Coefficient of Variation for Peak Flow Data
When the within-subject standard deviation is not proportional to the mean value, as in the peak flow data, there is no constant ratio between the within-subject standard deviation and the mean, and therefore no one common coefficient of variation. Estimating an "average" coefficient of variation is not very meaningful.

51
Peak Flow Data: Use of the Coefficient of Variation when s_w is Constant

53
Reproducibility of a Categorical Measurement: Kappa Statistic
– (observed agreement - chance agreement) is the amount of agreement above chance
– If the maximum amount of agreement is 1.0, then (1 - chance agreement) is the maximum amount of agreement above chance that is possible
– Therefore, kappa is the ratio of "agreement beyond chance" to "maximal possible agreement beyond chance":
kappa = (observed agreement - chance agreement) / (1 - chance agreement)
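The verbal definition above translates directly into code. A sketch computing Cohen's kappa from an agreement table (the 2x2 table below is hypothetical, not from the lecture):

```python
import numpy as np

def cohens_kappa(table):
    """Cohen's kappa from a square agreement table
    (rows = rater 1's categories, columns = rater 2's)."""
    t = np.asarray(table, dtype=float)
    n = t.sum()
    p_observed = np.trace(t) / n
    # chance agreement: sum over categories of the products of the marginals
    p_chance = (t.sum(axis=1) * t.sum(axis=0)).sum() / n ** 2
    return (p_observed - p_chance) / (1.0 - p_chance)

# hypothetical: two observers classify 100 specimens as positive/negative
kappa = cohens_kappa([[40, 10],
                      [5, 45]])
```

Here observed agreement is 0.85 and chance agreement is 0.50, so kappa = 0.35 / 0.50 = 0.7.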

54
Sources of Measurement Variability: Which to Assess?
– Observer: within-observer (intrarater); between-observer (interrater)
– Instrument: within-instrument; between-instrument
– Subject: within-subject
Which to assess depends upon the use of the measurement and how/when it will be made:
– For clinical use: all of the above are needed
– For research: depends upon the logistics of the study (e.g., within-observer and within-instrument only are needed if just one person/instrument is used throughout the study)

55
Assessing Validity
Measures can be assessed for validity in 3 ways:
– Content validity: face; sampling
– Construct validity
– Empirical validity (aka criterion validity):
Concurrent (i.e., when gold standards are present): interval scale measurement, 95% limits of agreement; categorical scale measurement, sensitivity & specificity
Predictive

56
Conclusions
– Measurement reproducibility plays a key role in determining validity and statistical precision in all study designs
– When assessing reproducibility of interval scale measurements: avoid correlation coefficients; use the within-subject standard deviation if it is constant, or the coefficient of variation if the within-subject SD is proportional to the magnitude of the measurement
– For categorical scale measurements, use kappa
– What counts as acceptable reproducibility depends upon the desired use
– Assessment of validity depends upon whether gold standards are present, and can be a challenge when they are absent

57
Assessing Validity - With Gold Standards A new and simpler device to measure peak flow becomes available (Bland-Altman)

58
Plot of Difference vs. Gold Standard
(figure: difference plotted against the gold standard measurement)

59
Examine the Differences
(figure: difference vs. gold standard, with example differences d1 = -81, d2 = 7, d3 = -35)

60
Are the Differences Normally Distributed?

61
The mean difference describes any systematic difference between the gold standard and the new device: mean difference = -2.3 l/min
The standard deviation of the differences: s_diff = 38.8 l/min
95% of differences will lie within -2.3 ± (1.96)(38.8), or from -78 to 74 l/min. These are the 95% limits of agreement.
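The 95% limits of agreement can be computed in a few lines. A sketch using made-up differences (new device minus gold standard), not the lecture's data:

```python
import numpy as np

# hypothetical differences between the new device and the gold standard (l/min)
diffs = np.array([-81.0, 7.0, -35.0, 20.0, -10.0, 5.0, -25.0, 40.0, -15.0, 30.0])

mean_d = diffs.mean()          # systematic difference (bias)
sd_d = diffs.std(ddof=1)       # SD of the differences
lower = mean_d - 1.96 * sd_d   # 95% limits of agreement
upper = mean_d + 1.96 * sd_d
```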
