
1 Bias Lecture notes Sam Bracebridge

2 By the end of the lecture fellows will be able to:
- Define bias
- Identify different types of bias
- Explain how bias affects risk estimates
- Critique study designs for bias
- Develop strategies to minimise bias

3 What do epidemiologists do?
- Measure effects: an epidemiologic measure could be, for example, a rate or a risk
- Attempt to define a cause: an estimate of the truth
- Implement public health measures

4 Estimated effect: the truth?
Mayonnaise → Salmonella: RR = 4.3
True association? Bias? Chance? Confounding?

5 Warning!
Chance and confounding can be evaluated quantitatively; bias is much more difficult to evaluate
Minimise bias through the design and conduct of the study: increasing the sample size will not eliminate bias
Despite all preventive efforts, bias should always be considered among the alternative explanations of a finding. Remember that bias:
   - may mask an association or cause a spurious one
   - may cause over- or underestimation of the effect size
A study that suffers from bias lacks internal validity

6 Definition of bias
Any systematic error in the design, conduct or measurements of an epidemiological study resulting in a conclusion which is different from the truth

7 Errors in epidemiological studies
[Figure: random error (chance) decreases as study size increases; systematic error (bias) does not]
Source: Rothman, 2002

8 Main sources of bias Selection bias Information bias

9 Selection bias
Two main reasons:
- Selection of study subjects
- Factors affecting study participation
Arises when the association between exposure and disease differs between those who participate and those who do not

10 Types of selection bias
- Sampling bias: systematic error due to a non-random sample of a population, causing some members of the population to be less likely to be included than others, resulting in a biased sample
- Ascertainment bias:
   - referral/admission, e.g. a tertiary referral centre
   - diagnostic/surveillance: only those who are tested are diagnosed; pre-screening of trial participants
- Participation bias:
   - self-selection (volunteerism), e.g. a new screening test is more likely to be taken up by those who are health conscious, or the worried well
   - non-response, refusal, e.g. subjects who moved out of the area, tobacco smokers
   - survival: discounting trial subjects/tests that did not run to completion

11 Selection bias in case-control studies

12 Estimate association of alcohol intake and cirrhosis
Selection of controls: hospitalised trauma patients
OR = 6
How representative are hospitalised trauma patients of the population which gave rise to the cases?

13 Selection of controls
[2x2 table: cells a, b, c, d]
OR = 36
A higher proportion of controls drink alcohol on the trauma ward than on non-trauma wards
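The effect of control selection on the odds ratio OR = ad/bc can be sketched numerically. The cell counts below are hypothetical (the slides do not give them); they are chosen only to illustrate the direction of the bias, and happen to reproduce the OR values of 6 and 36 from the slides.

```python
# Sketch: how the choice of controls shifts the odds ratio in a case-control study.
# All counts are hypothetical illustration values, not data from the lecture.

def odds_ratio(a, b, c, d):
    """OR = ad/bc for a 2x2 table: rows exposed/unexposed, columns cases/controls."""
    return (a * d) / (b * c)

# Cases (cirrhosis): 90 drink alcohol (a), 10 do not (c)
a, c = 90, 10

# Controls from a trauma ward, where alcohol use is over-represented:
trauma_b, trauma_d = 60, 40
# Controls from the general source population:
population_b, population_d = 20, 80

print(odds_ratio(a, trauma_b, c, trauma_d))          # 6.0  (biased toward 1)
print(odds_ratio(a, population_b, c, population_d))  # 36.0
```

Because the trauma-ward controls drink more than the source population, cell b is inflated and the OR is pulled downward; the bias here masks most of the association.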

14 Some worked examples
Work in pairs. In 2 minutes:
- Identify the reason for bias
- How will it affect your study estimate?
- Discuss strategies to minimise the bias

15 Oral contraceptive and uterine cancer
You are aware OC use can cause breakthrough bleeding
Diagnostic bias: OC use → breakthrough bleeding → increased chance of testing for and detecting uterine cancer
Overestimation of "a" → overestimation of the OR
Alternative: a prospective cohort study with the clinician blinded to the exposure
[2x2 table: cells a, b, c, d]

16 Asbestos and lung cancer
Prof. "Pulmo", head of a specialist respiratory referral unit, has 145 publications on asbestos and lung cancer
Admission bias: lung cancer cases exposed to asbestos are not representative of lung cancer cases in general
Overestimation of "a" → overestimation of the OR
Alternatives: strict admission criteria? Strict testing criteria? Study in another hospital? Controls with other lung diseases?
[2x2 table: cells a, b, c, d]

17 Selection bias in cohort studies

18 Association between occupational exposure X and disease Y
Healthy worker effect: the general population includes those who do not work due to illness
Source: Rothman, 2002

19 Healthy worker effect Source: Rothman, 2002

20 Prospective cohort study - Year 1
[2x2 table: smokers and non-smokers by lung cancer yes/no]

21 Loss to follow-up - Year 2
[2x2 table: smokers and non-smokers by lung cancer yes/no]
50% of the cases that smoked are lost to follow-up
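The effect of this differential loss to follow-up on the risk ratio can be sketched with hypothetical counts (the slides show the tables as figures without values):

```python
# Sketch: differential loss to follow-up in a cohort study biases the risk ratio.
# All counts are hypothetical illustration values.

def risk_ratio(a, b, c, d):
    """RR = [a/(a+b)] / [c/(c+d)]: rows exposed/unexposed, columns disease yes/no."""
    return (a / (a + b)) / (c / (c + d))

# Year 1, full cohort: smokers 40 cases / 160 without disease;
# non-smokers 10 cases / 190 without disease
print(risk_ratio(40, 160, 10, 190))            # 4.0

# Year 2: 50% of the cases that smoked are lost to follow-up
print(round(risk_ratio(20, 160, 10, 190), 2))  # 2.22 (RR underestimated)
```

Because cases are lost preferentially from the exposed group, the observed risk among smokers falls and the RR is pulled toward 1; no amount of extra recruitment at baseline fixes this.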

22 Minimising selection bias
- Clear definition of the study population
- Explicit case, control and exposure definitions
- Case-control: cases and controls from the same population, with the same possibility of exposure
- Cohort: select exposed and non-exposed without knowing disease status

23 Sources of bias Selection bias Information bias

24 Information bias
Arises during data collection, from differences in the measurement:
- of exposure data between cases and controls
- of outcome data between exposed and unexposed

25 Information bias Arises if the information about or from study subjects is erroneous

26 Information bias
3 main types:
- Recall bias
- Interviewer bias
- Misclassification

27 Recall bias
Cases remember exposure differently than controls, e.g. risk of malformation: mothers of children with malformations remember past exposures better than mothers of healthy children
Overestimation of "a" → overestimation of the OR
What to do? Check other sources of information; careful questionnaire design

28 Interviewer bias
Investigator asks cases and controls differently about exposure, e.g. soft cheese and listeriosis: an investigator who knows the hypothesis may probe listeriosis cases about consumption of soft cheese

                            Cases of listeriosis   Controls
Eats soft cheese                     a                b
Does not eat soft cheese             c                d

Overestimation of "a" → overestimation of the OR

29 Misclassification
Measurement error leads to assigning the wrong exposure or outcome category

30 Misclassification: systematic error
- Misclassification of exposure DIFFERS between cases and controls
- Misclassification of outcome DIFFERS between exposed and non-exposed
=> Measure of association distorted in any direction

31 Misclassification
OR = ad/bc = 3.0; RR = [a/(a+b)] / [c/(c+d)] = 1.6

32 Misclassification
OR = ad/bc = 1.5; RR = [a/(a+b)] / [c/(c+d)] = 1.2
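The attenuation shown across slides 31 and 32 can be sketched by applying an imperfect exposure measurement to a true 2x2 table. The true counts and the sensitivity/specificity values below are hypothetical (the slides give only the resulting OR and RR, not the cells); the sketch assumes non-differential misclassification, the same measurement error in cases and controls.

```python
# Sketch: non-differential exposure misclassification pulls the OR toward 1.
# True counts and sensitivity/specificity are hypothetical illustration values.

def odds_ratio(a, b, c, d):
    """OR = ad/bc for a 2x2 table: rows exposed/unexposed, columns cases/controls."""
    return (a * d) / (b * c)

def misclassify(exposed, unexposed, sensitivity, specificity):
    """Expected observed counts when exposure is measured imperfectly."""
    observed_exposed = exposed * sensitivity + unexposed * (1 - specificity)
    observed_unexposed = exposed * (1 - sensitivity) + unexposed * specificity
    return observed_exposed, observed_unexposed

# True table: cases 60 exposed (a) / 40 unexposed (c);
# controls 100 exposed (b) / 200 unexposed (d)
print(odds_ratio(60, 100, 40, 200))  # 3.0 (true OR)

# Same imperfect instrument applied to both groups: 80% sensitivity, 90% specificity
a_obs, c_obs = misclassify(60, 40, 0.8, 0.9)    # cases
b_obs, d_obs = misclassify(100, 200, 0.8, 0.9)  # controls
print(round(odds_ratio(a_obs, b_obs, c_obs, d_obs), 2))  # 2.17 (biased toward 1)
```

With non-differential error the OR is attenuated toward the null; if the error differed between cases and controls, the distortion could go in any direction, as slide 30 states.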

33 Minimising information bias
- Standardise measurement instruments (questionnaires) and train staff
- Administer instruments equally to cases and controls / exposed and unexposed
- Use multiple sources of information

34 Summary: Controls for Bias
- Choose a study design that minimises the chance of bias
- Clear case and exposure definitions
- Define clear categories within groups (e.g. age groups)
- Set up strict guidelines for data collection
- Train interviewers

35 Summary: Controls for Bias
- Direct measurement: registries, case records
- Optimise the questionnaire
- Minimise loss to follow-up

36 The epidemiologist's role
- Reduce error in your study design
- Interpret studies with open eyes: be aware of sources of study error and question whether they have been addressed

37 Bias: the take-home message
Bias should be prevented at the PROTOCOL stage: it is difficult to correct for bias at the analysis stage
If bias is present:
- the measure of the true association is incorrect
- it should be taken into account in the interpretation of results
- consider the magnitude: overestimation or underestimation?

38 Questions?

39 References
Rothman KJ. Epidemiology: An Introduction. Oxford University Press, 2002.
Hennekens CH, Buring JE. Epidemiology in Medicine. Lippincott-Raven Publishers, 1987.
