Presentation on theme: "Intermediate methods in observational epidemiology 2008" -- Presentation transcript:

1 Intermediate methods in observational epidemiology 2008
Quality Assurance and Quality Control

2 Threats to Causal Inference in Epidemiologic Studies

Threat         Solution
Confounding    Experimental design; adjustment/control
Bias           Quality assurance; quality control

3 Definitions of Quality Assurance and Quality Control
QA: activities to assure the quality of the data that take place prior to data collection (through the protocol and manuals of operation).
QC: efforts during the study to monitor the quality of the data at identified points during data collection and processing.

4 STEPS IN QUALITY ASSURANCE
(1) Specify the hypothesis(es)
(2) Specify the general design -- develop the protocol
(3) Select or prepare data collection instruments and develop procedures for data collection/processing -- develop operations manuals
(4) Train staff -- certify staff
(5) Using certified staff, pre-test and pilot-test the instruments and procedures; in the pilot study, assess alternative strategies for data collection (e.g., telephone vs. in-person interviews)
(6) Modify (2) and (3) and retrain staff on the basis of the results of (5)

5 STEPS IN QUALITY ASSURANCE (continued)
Same steps as above, with a callout on step (5): the pre-test of instruments and procedures is based on a "grab" (convenience) sample.

6 STEPS IN QUALITY ASSURANCE (continued)
Same steps as above, with a callout on step (5): the pilot study is based on a sample as similar as possible to the study population.


8 QUALITY CONTROL PROCEDURES: TYPES
1. Observation monitoring: "over the shoulder" observation of staff by experienced supervisor(s) to identify problems in the implementation of the protocol. Example: taping of interviews.

9 QUALITY CONTROL PROCEDURES: TYPES
1. Observation monitoring
2. Quantitative monitoring: random repeat ("phantom") measurements based on either internal or external pools (biologic samples), used to examine intra-observer and inter-observer variability.
Advantages: better overall quality of data; measurement of reliability.

10 Phantom sample based on an internal pool
Figure: the study base consists of blood samples from 7 participants. An internal phantom sample drawn from this pool is split into two aliquots: aliquot 1 is measured in the gold standard lab and aliquot 2 in the study lab.

11 Phantom sample based on an external pool
Figure: alongside the blood samples of the 7 study participants, a phantom sample supplied by the gold standard lab is split into two aliquots: aliquot 1 is measured in the gold standard lab and aliquot 2 in the study lab.

12 QUALITY CONTROL PROCEDURES: TYPES
1. Observation monitoring
2. Quantitative monitoring
- Random repeat measurements
- Monitoring of individual technicians for deviations from expected values. Example: monitoring of digit preference for blood pressure readings (expected: 10% for each terminal digit); a minimal check is sketched below.
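A minimal sketch of such a digit-preference check, assuming a hypothetical list of recorded SBP values and a chi-square goodness-of-fit test against the uniform 10%-per-digit expectation (the test choice and the data are assumptions, not taken from the slides):

```python
# Hypothetical SBP readings; under no digit preference each terminal digit
# (0-9) should appear about 10% of the time.
from collections import Counter
from scipy.stats import chisquare

sbp_readings = [120, 128, 134, 140, 122, 130, 138, 126, 144, 132]  # hypothetical data

last_digits = [x % 10 for x in sbp_readings]
counts = [Counter(last_digits).get(d, 0) for d in range(10)]

# Goodness-of-fit test against equal (10%) expected frequencies per digit
stat, p_value = chisquare(counts)
print(f"chi-square = {stat:.2f}, p = {p_value:.4f}")
```

A small p-value suggests the technician's readings deviate from the expected uniform distribution of terminal digits.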

13 Digit Preference in Systolic Blood Pressure (SBP) Measurements

14 Digit Preference in Systolic Blood Pressure (SBP) Measurements

15 Quality Control Indices
Validity (Accuracy) Precision (Repeatability, Reliability)

16 Quality Control Indices: Validity
Validity is usually estimated by calculating sensitivity and specificity: the study (observed) measurement (the "test") is compared with a more accurate method (the "gold standard"). When a clear-cut gold standard is not available, the comparison yields "inter-method reliability" instead. Problem: limited to 2 x 2 tables. (A minimal calculation sketch follows.)
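A minimal sketch of the sensitivity/specificity calculation; the cell counts below are hypothetical, not taken from the slides:

```python
def sensitivity_specificity(tp, fp, fn, tn):
    """Validity indices from the 2 x 2 cross-tabulation of test vs. gold standard."""
    sensitivity = tp / (tp + fn)  # proportion of gold-standard positives detected by the test
    specificity = tn / (tn + fp)  # proportion of gold-standard negatives correctly classified
    return sensitivity, specificity

# Hypothetical counts
se, sp = sensitivity_specificity(tp=90, fp=30, fn=10, tn=870)
print(f"sensitivity = {se:.2f}, specificity = {sp:.2f}")  # 0.90, 0.97
```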

17 Study results vs. gold standard results
Figure: scatter plot of study results against gold standard results. Thus, traditional reliability indices (e.g., kappa, correlation coefficient) can also be used to estimate the validity of continuous variables or of variables with more than 2 categories.

18 Reliability: Sources of Variability
Measurement error:
- Instrument/technique/lab
- Observer/technician (intra-observer, inter-observer)
Intra-individual (physiologic) variability

19 Design of a study to evaluate sources of variability (based on Chambless et al, Am J Epidemiol 1992;136:1069)
Blood collected from an individual (1st measurement) is split into phantom aliquots (1.1, 1.2, 1.3, 1.4):
- Aliquot 1.1: determination in the study lab.
- Aliquot 1.2: measurement repeated in a masked fashion by the same technician (to examine within-technician variability) or by a different technician at the study lab (to examine between-technician variability).
- Aliquot 1.3: sent to and determined at a different lab (to examine between-lab variability).
- Repeat blood collection from the same individual X time later (replicate measurement) is used to examine within-individual (physiologic) variability; for the other sources of variability, the phantom aliquots are used.

20 Indices of Reliability (also used for validity)
- % differences between repeat measurements (expected if no bias: ½ positive and ½ negative)
- % observed agreement
- Kappa
- Correlation coefficient
- Coefficient of variation
- Bland-Altman plot

22 Percent Observed Agreement
Agreement between first and second readings to identify atherosclerotic plaque in the left carotid bifurcation by B-mode ultrasound in the ARIC Study (Li et al, Ultrasound Med Biol 1996;22:791-9):

                      Second Reading
First Reading     Plaque   Normal   Total
Plaque               140       52     192
Normal                69      725     794
Total                209      777     986

Percent observed agreement: [140 + 725] ÷ 986 = 88%

Shortcomings:
- Chance agreement is not taken into account.
- If most observations fall in one of the concordance cells, % observed agreement overestimates agreement.

(A minimal calculation sketch follows.)
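A minimal sketch reproducing the percent observed agreement from the ARIC table above (cell counts taken from the slide):

```python
# 2 x 2 agreement table: first reading (rows) vs. second reading (columns)
table = {
    ("plaque", "plaque"): 140, ("plaque", "normal"): 52,
    ("normal", "plaque"): 69,  ("normal", "normal"): 725,
}
total = sum(table.values())  # 986
observed_agreement = (table[("plaque", "plaque")] + table[("normal", "normal")]) / total
print(f"observed agreement = {observed_agreement:.0%}")  # (140 + 725) / 986, about 88%
```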

24 Indices of Reliability (also used for validity)
- % differences between repeat measurements (expected if no bias: ½ positive and ½ negative)
- % observed agreement
- Kappa
- Correlation coefficient
- Coefficient of variation
- Bland-Altman plot

25 The most popular measure of agreement: Kappa Statistics
Using the same 2 x 2 table as above (first vs. second reading):
Kappa = (PO − PE) ÷ (1 − PE)
where PO = observed agreement proportion and PE = expected (chance) agreement proportion.

26 Kappa Statistics
PO (from the 2 x 2 table above) = [140 + 725] ÷ 986 = 0.88

27 Kappa Statistics
PO = [140 + 725] ÷ 986 = 0.88
Expected (chance) agreement: (1) multiply the marginals converging on each concordance cell, (2) add the products, and (3) divide by the square of the total.

29 Kappa Statistics
PO = [140 + 725] ÷ 986 = 0.88
PE = [(209 x 192) + (777 x 794)] ÷ 986² = 0.68
Kappa = (PO − PE) ÷ (1 − PE) = (0.88 − 0.68) ÷ (1 − 0.68) ≈ 0.63
(the numerator is the agreement not due to chance; the denominator is the maximum possible agreement not due to chance)

Shortcomings:
- Kappa is a function of the prevalence of the condition; thus, kappa values obtained from different populations may not be comparable.
- It can be calculated only for categorical variables (with 2 or more categories).

(A minimal calculation sketch follows.)
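A minimal sketch of the kappa calculation for the same 2 x 2 table; with unrounded proportions it gives about 0.62 (the slide's rounded PO and PE give 0.63):

```python
def kappa_2x2(a, b, c, d):
    """Kappa for a 2 x 2 agreement table laid out as:
                        second reading +   second reading -
      first reading +          a                  b
      first reading -          c                  d
    """
    n = a + b + c + d
    p_o = (a + d) / n                                       # observed agreement
    p_e = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2  # chance-expected agreement
    return (p_o - p_e) / (1 - p_e)

print(f"kappa = {kappa_2x2(140, 52, 69, 725):.2f}")  # ~0.62
```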

30 Interpretation of Kappa values (Altman & Bland, Statistician 1983;32:307-17)
0.81 to 1.00: very good
0.61 to 0.80: good
0.41 to 0.60: moderate
0.21 to 0.40: fair
Below 0.20 (down to -1.0): poor

31 Indices of Reliability (also used for validity)
- % differences between repeat measurements (expected if no bias: ½ positive and ½ negative)
- % observed agreement and % observed positive agreement
- Kappa
- Coefficient of variation
- Bland-Altman plot

32 Coefficient of Variation (CV)
General definition: the standard deviation (SD) expressed as a percentage of the mean value: CV(%) = (SD ÷ mean) × 100.

33 Calculation of the Coefficient of Variation
Xi1 and Xi2 = values of the repeat measurements on the same lab sample; X̄i = mean of these measurements.
For each pair of repeat measurements:
  SDi = square root of [(Xi1 − X̄i)² + (Xi2 − X̄i)²]  (the divisor n − 1 equals 1 for duplicates)
  CVi (%) = (SDi ÷ X̄i) × 100
The overall CV is the average of the pair-wise CVs over all pairs.

34 Example of Calculation of the Coefficient of Variation - I
Layout of the worksheet: one row per phantom pair (pair no. 1, 2, 3, 4, ..., k), each with two replicate measurements (e.g., by 2 different observers, 2 measurements done by the same observer, 2 different labs, etc.).

35 Example of Calculation of the Coefficient of Variation - I
Do the calculations for each pair of replicate samples.
Pair (split samples) No. 1: measurement of total cholesterol
  Measurement No. 1 (X11) = 154 mg/dL
  Measurement No. 2 (X12) = 148 mg/dL
  Mean = [154 + 148] ÷ 2 = 151 mg/dL
  V1 = (154 − 151)² + (148 − 151)² = 18 (mg/dL)²
  SD1 = √18 ≈ 4.2 mg/dL, so CV1 = 4.2 ÷ 151 × 100 ≈ 2.8%
Repeat the calculation for all pairs of measurements and average the pair-wise CVs to obtain the overall CV. (A minimal sketch follows.)
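A minimal sketch of the pair-wise CV calculation; the single pair below is the one from the slide (154 and 148 mg/dL), and the remaining pairs are left as placeholders:

```python
import math

def pair_cv(x1, x2):
    """CV (%) for one pair of replicate measurements."""
    mean = (x1 + x2) / 2
    sd = math.sqrt((x1 - mean) ** 2 + (x2 - mean) ** 2)  # divisor n - 1 = 1 for duplicates
    return 100 * sd / mean

pairs = [(154, 148)]  # add the remaining phantom pairs here
cvs = [pair_cv(x1, x2) for x1, x2 in pairs]
overall_cv = sum(cvs) / len(cvs)  # overall CV = average of the pair-wise CVs
print(f"pair 1 CV = {cvs[0]:.1f}%, overall CV = {overall_cv:.1f}%")  # ~2.8%
```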

36 Reliability in the ARIC study (Am J Epi 1992;136:1069)

Analyte                    Intra-class correlation coefficient*    Coefficient of variation (%)**
Total serum cholesterol    0.94                                     5.1
HDL                        --                                       6.8
HDL2                       0.77                                     24.8

*Best: as high as possible. **Best: as low as possible. (A sketch of an ICC calculation on hypothetical data follows.)
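The intra-class correlation coefficient can be computed directly from the repeat measurements; below is a sketch of the one-way random-effects ICC (ICC(1,1)) on hypothetical duplicate data, not the ARIC values, and the choice of ICC form is an assumption rather than something stated on the slide:

```python
import numpy as np

def icc_oneway(data):
    """One-way random-effects ICC(1,1); data: n_subjects x k_replicates array."""
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    subject_means = data.mean(axis=1)
    grand_mean = data.mean()
    msb = k * np.sum((subject_means - grand_mean) ** 2) / (n - 1)       # between-subject mean square
    msw = np.sum((data - subject_means[:, None]) ** 2) / (n * (k - 1))  # within-subject mean square
    return (msb - msw) / (msb + (k - 1) * msw)

# Hypothetical duplicate cholesterol measurements (mg/dL) on 5 phantom samples
replicates = [[200, 204], [180, 176], [220, 225], [160, 158], [240, 236]]
print(f"ICC = {icc_oneway(replicates):.2f}")
```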

37 Indices of Reliability (also used for validity)
- % differences between repeat measurements (expected if no bias: ½ positive and ½ negative)
- % observed agreement and % observed positive agreement
- Kappa
- Coefficient of variation
- Bland-Altman plot (a minimal plotting sketch follows)
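A minimal sketch of a Bland-Altman plot on hypothetical paired measurements: the pair-wise differences are plotted against the pair means, with the mean difference (bias) and limits of agreement at ±1.96 SD of the differences:

```python
import numpy as np
import matplotlib.pyplot as plt

method_a = np.array([120, 134, 148, 160, 172, 185], dtype=float)  # hypothetical readings
method_b = np.array([122, 131, 150, 158, 175, 183], dtype=float)  # hypothetical repeat readings

means = (method_a + method_b) / 2
diffs = method_a - method_b
bias = diffs.mean()
loa = 1.96 * diffs.std(ddof=1)  # half-width of the limits of agreement

plt.scatter(means, diffs)
plt.axhline(bias, linestyle="-")         # mean difference (bias)
plt.axhline(bias + loa, linestyle="--")  # upper limit of agreement
plt.axhline(bias - loa, linestyle="--")  # lower limit of agreement
plt.xlabel("Mean of the two measurements")
plt.ylabel("Difference between measurements")
plt.title("Bland-Altman plot (hypothetical data)")
plt.show()
```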

38 (Figure slide; no text content in the transcript.)

