# Bias in Clinical Trials


Bias
Having a preference for one particular person / group / point of view – a “one-sided inclination”. Prejudice is a negative bias. In statistics, if a bias exists it means that the processes involved are not uniformly random and one outcome is favoured over others.

Factors Influencing CT Results
Factors other than the intervention under study can influence the results of the study:
- Random error – natural variation
- Systematic error – bias
All efforts are made to reduce both types of error.

Random Error
Error that occurs due to natural / biological / random variation in the process. It may fall on either side of the true value.

How to deal with Random Error
- Sample size large enough to detect a clinically meaningful difference
- Repeated sampling
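The first point above can be made concrete with the standard normal-approximation formula for comparing two means, n = 2((z₁₋α/₂ + z₁₋β)·σ/δ)² per group. The function name below is a hypothetical illustration, not from the source; it is a minimal sketch assuming a two-sided test at significance α with the stated power.

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Approximate patients per group to detect a difference in means of
    `delta` with standard deviation `sigma`, using the normal approximation:
    n = 2 * ((z_{1-alpha/2} + z_{1-beta}) * sigma / delta) ** 2."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_beta = NormalDist().inv_cdf(power)           # power = 1 - beta
    return ceil(2 * ((z_alpha + z_beta) * sigma / delta) ** 2)

# e.g. detect a 5-point difference, SD 10, at 5% significance and 80% power
n = sample_size_per_group(delta=5, sigma=10)  # 63 patients per group
```

Halving the clinically meaningful difference δ quadruples the required sample size, which is why the "clinically meaningful" threshold must be fixed before the trial.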

Bias or Systematic Error
The difference between the true value and the observed value due to all causes other than random variability. It is a flaw in either the study design or the data analysis, leads to an erroneous result, and may be intentional or unintentional.

Bias in Clinical Trials
The control and intervention groups must be similar enough so that any differences detected in patient outcomes can reasonably be attributed only to the intervention under study. If systematic differences exist between the control and intervention groups, then it is possible that the results of the study are biased.

1. Sampling Bias

Sampling Bias
Systematic error due to the study of a non-random sample of a population: the sample is not a random sample, because some individuals are more likely than others to be chosen. For example, if you are asking college students how much they study, going to the library and randomly selecting people there to ask would introduce obvious bias: people who spend more time in the library are more likely to be chosen, and presumably report spending more time studying. Going to the campus canteen at mealtime is a subtler example of the same problem.

Sampling Bias
A special kind of sampling bias of particular significance is non-response bias. This occurs when individuals have a choice of whether or not to respond. If significant numbers of individuals choose not to respond, you very likely have non-response bias, because those who refuse are likely to have different answers from those who agree. This is a serious problem with modern polls, because large percentages of people refuse to cooperate with pollsters. Such polls end up sampling the most passionate people, whose views are generally dramatically different from those of the broad middle.

Sources of Sampling Bias
- Failure to adhere to random sampling procedures.
- Omission of specific subgroups of the population from the sampling frame, and therefore from the sample.
- Faulty measuring devices. This may be in terms of the specific questions used in a questionnaire, and may also arise in a survey that involves taking physical measurements when the measuring device is incorrect, e.g. a defective BP machine that makes all measurements read low / high.
- Non-response to a survey by specific subgroups of the population that are relevant to the measures of concern in the survey.

Preventing Sampling Bias
- Random sampling
- Sampling all subgroups (representative sampling)
- Accurate measurements
- Taking non-responders into account in a survey
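The "sampling all subgroups" point can be sketched as simple stratified sampling: draw the same number from each subgroup so that none is omitted from the sample. This is a minimal illustration, not a procedure from the source; the function name and data are hypothetical.

```python
import random

def stratified_sample(population, key, per_stratum, seed=None):
    """Draw `per_stratum` members from each subgroup (stratum) defined by
    `key`, so that no subgroup is omitted from the sample."""
    rng = random.Random(seed)
    strata = {}
    for person in population:
        strata.setdefault(key(person), []).append(person)
    sample = []
    for members in strata.values():
        # random selection WITHIN each stratum, never skipping a stratum
        sample.extend(rng.sample(members, min(per_stratum, len(members))))
    return sample

# Hypothetical population of students grouped by year of study
pop = [{"year": 1}, {"year": 1}, {"year": 1},
       {"year": 2}, {"year": 2}, {"year": 2}]
chosen = stratified_sample(pop, key=lambda p: p["year"], per_stratum=2, seed=0)
```

Real representative sampling would weight strata by their population share; equal allocation is used here only to show the mechanism.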

2. Comparator Bias

Comparator Bias
Not using a control treatment known to be beneficial / standard. For example, even though the effectiveness of erythropoietin in preventing anemia in cancer patients had been convincingly demonstrated by a number of controlled trials, some researchers continued to compare their drug with placebos. Comparator bias is introduced when patients are denied effective treatments, and the active treatments studied in the trial are given an unfair advantage.

Comparator Bias Giving an inappropriately low dose of a treatment
This has occurred in comparisons of new non-steroidal anti-inflammatory agents used for arthritis with older drugs in the same class (Rochon et al. 1994). Inappropriately low doses can also result from giving a treatment by an inappropriate route, for example, by comparing intravenous administration of a drug with oral administration of a drug that is poorly absorbed from the gastro-intestinal tract (Johanson and Gøtzsche 1999).

Comparator Bias Giving an inappropriately high dose of a treatment
Some of the newer drugs for treating schizophrenia, for example, have been shown to be preferable to established drugs for this reason. However, this apparent advantage may be because the newer agents have been compared with inappropriately high doses of the older, comparator drug (Waraich et al. 2004). The net usefulness of treatments often requires trade-offs between wanted and unwanted effects. Treatments may be of real value if, although their beneficial effects are no better than alternatives, they have fewer adverse effects.

How to Reduce Comparator Bias
- Appropriate choice of comparator group, based on a systematic review of existing evidence
- Use of placebo only when essential
- Appropriate dose and route of administration of the comparator drug
- Evaluation of the net effects of treatments (benefits vs risks)

3. Selection Bias

Selection Bias
Selecting and allocating participants to treatment groups depending on the investigator’s beliefs about the efficacy / safety of the treatments, or for other subjective reasons. This results in dissimilar groups.

Ways to minimize Selection Bias
- Randomization – the single most effective way to reduce selection / allocation bias
- Every subject has an equal chance of receiving the test or comparator treatment
- Results in similar ‘intervention’ and ‘control’ groups
- Provides the basis for statistical inference

Randomization
- Alternate allocation to groups
- Tossing a coin
- Randomization tables
- Computerized randomization (all patients, blocked, stratified)
- Methods for concealed randomization
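Blocked randomization, one of the computerized methods listed above, can be sketched as follows: within each block of fixed size, equal numbers are assigned to each arm in shuffled order, so group sizes stay balanced throughout recruitment. The function name and arm labels are hypothetical illustrations.

```python
import random

def block_randomize(n_patients, block_size=4, arms=("A", "B"), seed=None):
    """Blocked randomization: each block contains equal numbers per arm in a
    random order, keeping the groups balanced as patients are enrolled."""
    rng = random.Random(seed)
    per_arm = block_size // len(arms)
    allocation = []
    while len(allocation) < n_patients:
        block = list(arms) * per_arm  # e.g. ["A", "B", "A", "B"]
        rng.shuffle(block)            # random order within the block
        allocation.extend(block)
    return allocation[:n_patients]

assignments = block_randomize(10, block_size=4, seed=42)
```

In a real trial the block size would itself be concealed (and often varied) so that investigators cannot predict the next assignment, which is the "concealed randomization" point above.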

4. Expectation Bias

Expectation Bias
Both the patient’s and the therapist’s expectations can influence the results of a clinical trial (even after randomization). Not letting one or both know which treatment is being given – blinding – reduces expectation bias.

Blinding & Techniques
- Single-blind
- Double-blind
- Assessor-blind
- Look-alike trial medications / placebo
- Sham techniques

[Figure: Effect of blinding on the outcome of trials of acupuncture for chronic back pain]

5. Analysis Bias

Bias in Analysis
- Analyzing only a select group of subjects in order to show a positive outcome
- Not including drop-outs and withdrawn subjects in the analysis
- Multiple subgroup analyses (not pre-planned) to find some favourable outcome
- Confounding variables – factors other than the intervention (e.g. age, degree of severity of disease, previous treatment) that may influence the outcome

Ways to minimize bias in analysis
- Have a statistical analysis plan before the study and include it in the protocol
- Stratified design for significant variables
- Intention-to-treat analysis
- Separate subgroup analyses for significant variables
- Stratified or multivariate analysis for confounding variables
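The intention-to-treat point above can be sketched in a few lines: every randomized patient is analysed in the arm they were assigned to, so drop-outs stay in the denominator rather than being excluded. The function name and the cohort data are hypothetical; real ITT analyses also need a pre-specified rule for imputing missing outcomes (here, a drop-out without a recorded success counts as a failure).

```python
def intention_to_treat_rates(patients):
    """Success rate per arm, analysing every randomized patient by ASSIGNED
    arm (intention-to-treat), not by treatment actually received or completed."""
    totals, successes = {}, {}
    for p in patients:
        arm = p["assigned_arm"]
        totals[arm] = totals.get(arm, 0) + 1        # drop-outs stay counted
        successes[arm] = successes.get(arm, 0) + (1 if p.get("success") else 0)
    return {arm: successes[arm] / totals[arm] for arm in totals}

# Hypothetical cohort: the drop-out remains in the 'drug' denominator
cohort = [
    {"assigned_arm": "drug", "success": True},
    {"assigned_arm": "drug", "success": False},   # dropped out: no success recorded
    {"assigned_arm": "placebo", "success": False},
    {"assigned_arm": "placebo", "success": True},
]
rates = intention_to_treat_rates(cohort)
```

Excluding the drop-out would inflate the drug arm's apparent success rate to 100%, which is exactly the analysis bias the slide warns against.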

6. Reporting Bias

Reporting / Publishing Bias
- Reporting only studies with good outcomes
- Not reporting / publishing studies with unfavourable outcomes
- Hiding evidence of negative outcomes

Ways to Minimize Reporting Bias
- Clinical trial registry – registering all clinical trials on new drugs
- Compulsory submission of the results of all studies to the regulatory authority
- Publishing the results of all clinical trials on websites
- Publishing significant negative / no-difference studies on new treatments in well-recognized journals