Assessing Validity of Association


1 Assessing Validity of Association
Introduction to Research Methods in the Internet Era
Assessing Validity of Association: Bias
Thomas Songer, PhD
This project is made possible by the support of the American people through the United States Agency for International Development (USAID). The contents are the sole responsibility of the University of Pittsburgh and do not necessarily reflect the views of USAID or the United States Government.

2 Learning Objectives
1. Identify the possible alternative explanations for statistical associations: chance, bias, and confounding.
2. Distinguish between the major types of bias in epidemiologic studies.

3 Research Process
Research question
Hypothesis
Identify research design
Data collection
Presentation of data
Data analysis
Interpretation of data
(Polgar, Thomas)

4 Epidemiologic Reasoning
Assess the validity of the association: is there a true relationship between the exposure and disease?
Does the observed association really exist? Is the association valid?
Are there alternative explanations for the association?
Chance (random error)
Bias (systematic error)
Confounding

5 Evaluating Associations
A "valid" statistical association implies "internal validity" in the study.
Internal validity: the results of an observation are correct for the particular group being studied.
What about "external validity"? Do the results of the study apply ("generalize") to people who were not in the study (e.g. the target population)?

6 Evaluating Associations
Internal validity: strength of the measurement tools, the assessment methods for exposure and outcome variables in the study, and the control of study effects.
External validity: strength of the study sample with regard to generalizability.
Validity here refers to study validity. Two types of study validity are of interest in epidemiology: internal and external. For internal validity, the goal is to show that the difference between cases and controls is related to the difference in exposure under study, and not to some other factor. A factor that is part of the selection process for cases or controls could compromise internal validity.

7 Threats to Validity in Research Studies
Random error
Sample size
Systematic error
Selection bias
Measurement bias
Loss to follow-up
Hawthorne effect
Confounding
Regression to the mean

8 Evaluating Associations
Note: DO NOT compromise internal validity in pursuit of generalization.
* An invalid result cannot be generalized.
* Thus, internal validity should never be compromised in an attempt to achieve generalizability.

9 Evaluating Associations
Note: even if chance, bias, and confounding have been sufficiently ruled out (or taken into account), it does not necessarily mean that the valid association observed is causal. The observed association may simply be a coincidence (e.g. in the last 10 years, incidence rates for prostate cancer have increased, as have sales of plasma TV screens).

10 How do we know that the associations observed in epidemiologic studies are real?

11 Evaluating Associations
Evaluating the validity of an association: in any epidemiologic study, there are at least three alternative explanations for the observed results:
1. Chance (random error)
2. Bias (systematic error)
3. Confounding
These explanations are not mutually exclusive; more than one can be present in the same study. In most epidemiologic studies, all three factors may play a role, and it is not unusual for all three to be at play at the same time.
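The first of these explanations, chance, can be illustrated with a short simulation: repeating the same study on small samples scatters the observed estimate widely around the truth, while large samples cluster tightly. This is a hypothetical sketch in Python; the true risk value and the sample sizes are invented for demonstration and are not from any study mentioned here.

```python
import random

random.seed(1)  # fixed seed so the sketch is reproducible

def estimated_risk(true_risk, n):
    """Observed disease proportion in a random sample of size n."""
    cases = sum(random.random() < true_risk for _ in range(n))
    return cases / n

TRUE_RISK = 0.20  # hypothetical true disease risk

# Five repetitions of a small study and of a large study
small_studies = [estimated_risk(TRUE_RISK, 25) for _ in range(5)]
large_studies = [estimated_risk(TRUE_RISK, 2500) for _ in range(5)]

print(small_studies)  # scatters widely around 0.20 (random error)
print(large_studies)  # clusters tightly around 0.20
```

The scatter in the small studies is pure random error: nothing is wrong with the design, yet individual results can land far from the truth, which is why sample size appears among the threats to validity listed above.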

12 Bias or Systematic Error
Systematic, non-random deviation of results from the truth.
[Figure: target diagrams contrasting high systematic error with low systematic error]

13 Bias
Potential biases must be considered and addressed in all epidemiologic studies. We often assume that exposed and unexposed groups are comparable; this is not necessarily true. Another important aspect of epidemiologic studies is the presence of biases in results: these must be addressed to show that the observed association is not due to bias alone. For causal inferences to be valid, the exposed and unexposed groups need to be comparable with respect to other factors (besides the exposure) that can influence disease frequency.

14 Systematic Error (Bias)
BIAS: systematic error in the design, conduct, or analysis of a study that results in a mistaken estimate of an exposure/disease relationship.
SELECTION BIAS
INFORMATION BIAS
* Recall bias
* Interviewer bias
* Reporting bias
* Surveillance bias

15 Selection Bias
A distortion in a measure of disease frequency or association resulting from the manner in which subjects are selected for the study. It is the result of deficiencies in study design.
Example: in a case-control study, exposure status may influence the selection of subjects to a different extent in cases and controls (self-selection bias).
Selection bias depends on study design: in general, disease or exposure status influences the selection of subjects to a different extent in the groups being compared. Since subjects are selected for a prospective cohort study before disease occurrence, selection bias is not a serious problem there (although selective loss to follow-up can occur). Self-selection bias is differential participation of subjects after selection for a case-control study: if selected exposed persons agree to participate but many selected unexposed persons refuse, and participation is associated with disease in the unexposed group (they may decline because they are sick), bias will result.

16 Bias
SELECTION BIAS: any systematic error that arises in the process of identifying the two study groups to be compared.
• Results in the study groups being non-comparable, unless some type of statistical adjustment can be made.

17 Selection Bias Example: Case-Control Study
Outcome: hemorrhagic stroke
Exposure: appetite suppressant products that contain phenylpropanolamine (PPA)
Cases: persons who experienced a stroke
Controls: persons in the community without stroke
Bias: control subjects were recruited by random-digit dialing from 9:00 AM to 5:00 PM. This resulted in over-representation of unemployed persons, who may not represent the study base in terms of use of appetite suppressant products.
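The distortion described above can be made concrete with a small numeric sketch. All 2x2 counts below are invented for illustration (they are not data from the PPA study): if the sampling scheme under-represents exposed controls, an odds ratio of 1 in the study base appears as an elevated odds ratio in the selected data.

```python
def odds_ratio(a, b, c, d):
    """OR for a 2x2 table: a = exposed cases, b = unexposed cases,
    c = exposed controls, d = unexposed controls."""
    return (a * d) / (b * c)

# Hypothetical study base: exposure is unrelated to stroke, OR = 1.
true_or = odds_ratio(20, 80, 200, 800)

# Daytime random-digit dialing over-samples unemployed controls, who
# (hypothetically) use appetite suppressants less often: exposed
# controls are captured at only half their true frequency.
biased_or = odds_ratio(20, 80, 100, 800)

print(true_or)    # 1.0
print(biased_or)  # 2.0 -- a spurious doubling of risk from selection alone
```

No statistical test on the selected data can detect this: the bias was built in by the selection procedure, which is why it must be addressed at the design stage.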

18 Selection Bias Example: Non-Response
• If refusal or non-response is related to exposure, the estimate of effect may be biased. For example, if controls are selected by use of a household survey, non-response may be related to demographic and lifestyle factors associated with employment.
• Responders often differ systematically from persons who do not respond.

19 Berkson's Bias
A form of selection bias that affects hospital-based epidemiology studies. People in hospital are likely to suffer from multiple diseases and to engage in unhealthy behaviours (e.g. smoking). As a result, they are atypical of the population in the community.

20 Healthy Worker Effect
A form of selection bias that affects epidemiology studies of workers. Ill and disabled people are likely to be unemployed, so the employed (workers) are healthier than other segments of the population. As a result, they are atypical of the population in the community.

21 Information Bias
Definition: systematic differences in the way in which data on exposure and outcome are obtained from the various study groups.
Some types/sources of information bias:
• Bias in abstracting records
• Bias in interviewing
• Bias from surrogate interviews
• Surveillance bias
• Reporting and recall bias

22 Information Bias
Results from systematic differences in the way data on exposure or outcome are obtained. It may result from measurement defects, or from questionnaires or interviews that do not measure what they claim to.
Example: recall bias, where self-reported information may be inaccurate due to poor recall.

23 Recall Bias
DEFINITION: study group participants systematically differ in the way data on exposure or outcome are recalled.
• Particularly problematic in case-control studies.
• Individuals who have experienced a disease or adverse health outcome may tend to think about possible "causes" of the outcome. This can lead to differential recall.

24 Recall Bias Example
Outcome: cleft palate
Exposure: systemic infection during pregnancy
Cases: mothers giving birth to children with cleft palate
Controls: mothers giving birth to children free of cleft palate
Bias: mothers who have given birth to a child with cleft palate may recall colds and other infections experienced during pregnancy more thoroughly.

25 Interviewer Bias
DEFINITION: systematic difference in the soliciting, recording, or interpretation of information from study participants.
• Can affect every type of epidemiologic study.
• May occur when interviewers are not "blinded" to the exposure or outcome status of participants.

26 Interviewer Bias
• An interviewer's knowledge of subjects' disease status may result in differential probing of exposure history.
• Similarly, an interviewer's knowledge of subjects' exposure history may result in differential probing and recording of the outcome under examination.
• Placebo control is one method used to maintain observer blindness in randomized trials.

27 Reporting Bias
DEFINITION: selective suppression or revealing of information, such as a past history of sexually transmitted disease.
• Often occurs because of subject reluctance to report an exposure, due to attitudes, beliefs, and perceptions.
• "Wish bias" may occur among subjects who have developed a disease and seek to show that the disease "is not their fault."

28 Surveillance Bias
• If a population is monitored over a period of time, disease ascertainment may be better in the monitored population than in the general population ("surveillance bias").
• This may lead to a biased estimate of the exposure/disease relationship.

29 Misclassification Bias
DEFINITION: erroneous classification of the exposure or disease status of an individual into a category to which it should not be assigned.
Examples of misclassification of the exposure or outcome:
--- Cases incorrectly classified as controls
--- Controls incorrectly classified as cases
--- Exposed incorrectly classified as non-exposed
--- Non-exposed incorrectly classified as exposed
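One consequence worth sketching numerically: when exposure misclassification is non-differential (the same error rates in cases and controls), the odds ratio is typically pulled toward the null value of 1. The counts, sensitivity, and specificity below are hypothetical values chosen only to illustrate the effect.

```python
def odds_ratio(a, b, c, d):
    """OR for a 2x2 table: a = exposed cases, b = unexposed cases,
    c = exposed controls, d = unexposed controls."""
    return (a * d) / (b * c)

def observe(exposed, unexposed, sens, spec):
    """Expected counts after imperfect exposure classification:
    some truly exposed are missed (sensitivity < 1) and some truly
    unexposed are falsely labeled exposed (specificity < 1)."""
    obs_exposed = exposed * sens + unexposed * (1 - spec)
    obs_unexposed = unexposed * spec + exposed * (1 - sens)
    return obs_exposed, obs_unexposed

# Hypothetical true table: OR = (100 * 150) / (100 * 50) = 3.0
true_or = odds_ratio(100, 100, 50, 150)

# Apply the SAME sensitivity (0.8) and specificity (0.9) to both groups
case_exp, case_unexp = observe(100, 100, 0.8, 0.9)
ctrl_exp, ctrl_unexp = observe(50, 150, 0.8, 0.9)
observed_or = odds_ratio(case_exp, case_unexp, ctrl_exp, ctrl_unexp)

print(true_or)                 # 3.0
print(round(observed_or, 2))   # 2.16 -- attenuated toward 1
```

If the error rates differed between cases and controls (differential misclassification, as in recall bias), the distortion could go in either direction, which is why it is harder to reason about.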

30 Control of Bias
Bias can only be prevented and controlled during the design and conduct of a study:
Choice of a study population
Methods of data collection
Sources of case ascertainment and risk factor information
(Sever)

31 Good Study Design Protects Against All Forms of Error

