1 Choice of study design: randomized and non-randomized approaches
Iná S. Santos, Federal University of Pelotas, Brazil
PAHO/PAHEF WORKSHOP: EDUCATION FOR CHILDHOOD OBESITY PREVENTION: A LIFE-COURSE APPROACH
Aruba, June 2012

2 Outline of the presentation
Introduction
–Types of evidence
–Internal and external validity
Randomized controlled trials
Non-randomized designs
References:
Victora CG et al. Evidence-based public health: moving beyond randomized trials. Am J Public Health 2004;94(3):400-405.
Habicht JP et al. Evaluation designs for adequacy, plausibility and probability of public health programme performance and impact. Int J Epidemiol 1999;28:10-18.

3 Part I Introduction –Types of evidence –Internal and external validity

4 Types of epidemiological evidence for Public Health

Type of evidence               | Type of epidemiological study
Frequency of disease           | Descriptive
Frequency of exposure          | Descriptive
Exposure/disease relationship  | Experimental (or observational)
Coverage of intervention       | Descriptive
Efficacy of intervention       | Experimental (or observational)
Programme effectiveness        | Observational

5 Validity: internal and external
[Diagram: nested populations – external population ⊃ target population ⊃ actual study population ⊃ sample]

6 Validity
Internal validity
–Are the study results true for the target population?
–Are there errors that affect the study findings?
  Systematic error (bias, confounding)
  Random error (precision)
External validity
–Generalizability
–Are the study results applicable to other settings?

7 Validity
Internal validity
–May be judged on the basis of the study methods
External validity
–Requires a "value judgment"

8 Part II Randomized controlled trials (RCTs)

9 Internal validity in probability studies

Issue: comparability of | Probability study (RCT) | Bias avoided
Populations              | Randomization            | Selection bias
Observations             | Blinding                 | Information bias
Effects                  | Use of placebo           | Hawthorne effect, placebo effect

RCTs are the gold standard for internal validity

10 RCT (from Cochrane Collaboration) In an RCT, participants are assigned by chance to receive either an experimental or control treatment. When an RCT is done properly, the effect of a treatment can be studied in groups of people who are the same at the outset and treated in the same way, except for the intervention being studied. Any differences then seen between the groups at the end of the trial can be attributed to the difference in treatment alone, and not to bias or chance.

11 Randomised controlled trials
Prioritise internal validity
–random allocation reduces selection bias and confounding
–blinding reduces information bias
Gained popularity through clinical trials of new drugs
Essential for determining efficacy of new biological agents
Adequate for short causal chains
–biological effects of drugs, vaccines, nutritional supplements, etc.
  drug → pharmacological reaction → disease cure or alleviation
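
Note: random allocation is normally implemented as a pre-generated sequence. The sketch below is a minimal, hypothetical illustration (not from the presentation) of permuted-block randomization, a common way of keeping group sizes balanced while preserving unpredictability; the function name and parameters are invented for the example.

```python
import random

def block_randomize(n_participants, block_size=4, arms=("intervention", "control"), seed=2012):
    """Generate a permuted-block allocation sequence (illustrative sketch only)."""
    rng = random.Random(seed)
    per_arm = block_size // len(arms)
    sequence = []
    while len(sequence) < n_participants:
        block = [arm for arm in arms for _ in range(per_arm)]
        rng.shuffle(block)  # each block is an independent random permutation
        sequence.extend(block)
    return sequence[:n_participants]

print(block_randomize(10))
```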

12 Pooling data from RCTs
Systematic review
–Comprehensive search for all high-quality scientific studies on a specific subject
  E.g. effects of a drug, vaccine, surgical technique, behavioural intervention, etc.
Meta-analysis
–Groups data from different studies to determine an average effect
–Improves the precision of the available estimates by including a greater number of people
–But: data from different studies cannot always be combined
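
Note: one simple way a meta-analysis "determines an average effect" is inverse-variance (fixed-effect) pooling, where each study is weighted by the precision of its estimate. The sketch below is illustrative only; the three log risk ratios and standard errors are made up.

```python
import math

def fixed_effect_pool(estimates, std_errors):
    """Inverse-variance (fixed-effect) pooled estimate and its standard error."""
    weights = [1.0 / se ** 2 for se in std_errors]          # weight = 1 / variance
    pooled = sum(w * est for w, est in zip(weights, estimates)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))                # precision adds across studies
    return pooled, pooled_se

# Hypothetical log risk ratios and standard errors from three trials
estimate, se = fixed_effect_pool([-0.20, -0.35, -0.10], [0.10, 0.15, 0.08])
print(f"pooled log RR = {estimate:.3f} (SE {se:.3f})")
```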

13 What does an RCT show?
The probability that the observed result is due to the intervention
But additional evidence is required to make this result conceptually plausible
–Biological plausibility
–Operational plausibility

14 Special issues in RCTs
"Intent-to-treat" analyses
–Individuals/groups should remain in the group to which they were originally assigned
Units of analysis
–It is incorrect to allocate by group (e.g. health centres, communities) but analyse the data at the individual level
–This has implications for sample size calculation and for analysis methods
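
Note on the sample-size implication: when allocation is by group (cluster), the required sample size is commonly inflated by the design effect, DEFF = 1 + (m - 1) * ICC, where m is the cluster size and ICC the intra-cluster correlation. A minimal sketch with invented numbers:

```python
def design_effect(cluster_size, icc):
    """Design effect for cluster allocation: DEFF = 1 + (m - 1) * ICC."""
    return 1 + (cluster_size - 1) * icc

# Hypothetical trial: 20 children per health centre, intra-cluster correlation 0.05
deff = design_effect(cluster_size=20, icc=0.05)
n_individual = 400                   # sample size from an individual-randomization formula
n_cluster = n_individual * deff      # inflated for clustering
print(f"DEFF = {deff:.2f}, required n = {n_cluster:.0f}")
```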

15 CONSORT Statement
Rationale
Eligibility
Interventions
Objectives
Outcomes
Sample size
Randomization
–Sequence generation
–Allocation concealment
–Implementation
–Blinding (masking)
Statistical methods
Participant flow
Recruitment
Baseline data
Numbers analyzed
Outcomes and estimation
Ancillary analyses
Adverse events
Interpretation
Generalizability
Overall evidence

16 Major steps in Public Health trials
Central-level provision of intervention to local outlets (e.g. health facilities)
Local providers' compliance with delivery of intervention
Recipient compliance with intervention
Biological effect of intervention
Source: Victora, Habicht, Bryce, AJPH 2004

17 Example of a Public Health intervention: nutrition counselling trial
[Causal chain diagram: national programme is implemented → health workers are trained → HW knowledge increases → HW performance improves → maternal knowledge increases → child diets change → energy intake increases → nutritional status improves. Underlying assumptions along the chain: central team is competent; HWs are trainable; equipment is available; utilization is adequate; food is available; lack of food is a cause of malnutrition.]
Source: Santos, Victora et al. J Nutr 2001

18 Example of a Public Health intervention: nutrition counselling trial (continued)
[Same causal chain as in slide 17, assuming an 80% probability of success at each of the 7 steps from programme implementation to improved nutritional status:]
0.80^7 = 0.21
Source: Santos, Victora et al. J Nutr 2001
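
Note: the 0.21 is simply the product of the step-wise probabilities along the causal chain; a short arithmetic check:

```python
# Each of the 7 steps along the causal chain must succeed for the intervention to show impact.
# Assuming an 80% probability of success at every step (as on the slide):
p_step = 0.80
n_steps = 7
p_overall = p_step ** n_steps
print(f"{p_step}^{n_steps} = {p_overall:.2f}")  # 0.80^7 = 0.21
```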

19 Are RCT findings generalizable to routine programmes?
The dose of the intervention may be smaller
–behavioural effect modification (provider behaviour, recipient behaviour)
The dose-response relationship may be different
–biological effect modification
The longer the causal chain, the more likely is effect modification
Source: Victora, Habicht, Bryce, AJPH 2004

20 Curvilinear associations
[Figure: curvilinear dose-response curve, with "trials often done here" and "results often applied here" marking different points on the curve]
Source: Victora, Habicht, Bryce, AJPH 2004

21 Why do RCTs have a limited role in large-scale effectiveness evaluations?
Often impossible to randomize
–unethical, politically unacceptable, rapid scaling up
Evaluation team affects service delivery
–service delivery is at least "best practice"
Effect modification is the rule
–are meta-analyses of complex programmes meaningful?
–need for local data
Need for supplementary approaches for evaluations in Public Health

22 Part III Non-randomized designs (Quasi-experiments)

23 Types of inference in impact evaluations
Adequacy (descriptive studies) – the expected changes are taking place
Plausibility (observational studies) – observed changes seem to be due to the programme
Probability (RCTs) – a randomised trial shows that the programme has a statistically significant impact
Source: Habicht, Victora, Vaughan, IJE 1999

24 Ensuring internal validity in probability and plausibility studies

Issue: comparability | Probability (RCT) | Plausibility (quasi-experiment)
Populations           | Randomization      | Matching; understanding determinants of implementation; handling contextual factors
Observations          | Blinding           | Avoiding information bias
Effects               | Use of placebo     | Being aware of the Hawthorne and placebo effects

25 Adequacy evaluations
Questions:
–Were the initial goals achieved? E.g. reduce under-five mortality by 20%
–Were the observed trends in impact indicators in the expected direction? Of adequate magnitude?

26 Plausibility evaluations
Question:
–Is the observed impact likely due to the intervention?
Requires ruling out the influence of external factors:
–need for a comparison group
–adjustment for confounders
Also known as quasi-experiments

27 Adequacy/plausibility designs (1)
Design: cross-sectional
Measurement points: once
Outcome: difference or ratio
Control group:
–Individuals who did not receive the intervention
–Groups/areas without the intervention
–Dose-response analyses, if possible

28 ORT and diarrhea deaths in Brazil
[Scatter plot: each dot = 1 state]
Spearman r = -0.61 (p = 0.04)
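
Note: this is a dose-response (ecological) analysis across states, summarised by a Spearman rank correlation. The sketch below shows how such a coefficient is computed; the state-level values are invented for illustration (they do not reproduce the Brazilian data) and SciPy is assumed to be available.

```python
from scipy.stats import spearmanr

# Hypothetical state-level data: ORT use (%) and diarrhea deaths per 1,000 children
ort_coverage   = [30, 42, 48, 55, 60, 65, 70, 75, 80, 85, 88, 92]
diarrhea_rate  = [9.0, 8.5, 7.2, 8.0, 6.5, 5.8, 6.0, 4.9, 4.2, 4.5, 3.1, 2.8]

r, p_value = spearmanr(ort_coverage, diarrhea_rate)   # rank-based correlation
print(f"Spearman r = {r:.2f}, p = {p_value:.3f}")
```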

29 Adequacy/plausibility designs (2)
Design: longitudinal (before-and-after)
Measurement points: twice or more
Outcome: change
Control group:
–The same or similar individuals, before the intervention
–The same groups/areas, before the intervention
–Time-trend analyses, if possible

30 Hib vaccine in Uruguay In Uruguay, reported Hib cases declined by over 95 percent after the introduction of routine infant Hib immunisation in 1994. Source: PAHO, 2004

31 Adequacy/plausibility designs (3)
Design: longitudinal-control
Measurement points: twice or more
Outcome: relative change (change in the intervention group compared with change in the control group)
Control group:
–Groups/areas without the intervention, measured at the same points in time as the intervention group
–Time-trend analyses, if possible
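
Note: "relative change" in a longitudinal-control design can be read as a difference-in-differences style comparison of before-after changes. A minimal sketch with invented stunting prevalences:

```python
def relative_change(before, after):
    """Proportional change from the baseline measurement."""
    return (after - before) / before

# Hypothetical stunting prevalence (%) before and after the programme
intervention = relative_change(before=38.0, after=29.0)
control      = relative_change(before=36.0, after=33.0)

# Change in the intervention area net of the change in the control area
net_effect = intervention - control
print(f"intervention {intervention:+.1%}, control {control:+.1%}, net {net_effect:+.1%}")
```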

32 Adequacy/plausibility designs (4)
Design: case-control
Measurement points: once
Comparison: exposure to intervention
Groups:
–Cases: individuals with the disease of interest
–Controls: sample of the population from which the cases originated
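
Note: in this design the effect measure is typically the odds ratio of exposure to the intervention among cases versus controls. The sketch below uses made-up counts purely to show the calculation.

```python
def odds_ratio(exposed_cases, unexposed_cases, exposed_controls, unexposed_controls):
    """2x2-table odds ratio of intervention exposure, cases vs. controls."""
    return (exposed_cases / unexposed_cases) / (exposed_controls / unexposed_controls)

# Hypothetical counts: "exposed" = received the intervention
or_estimate = odds_ratio(exposed_cases=40, unexposed_cases=60,
                         exposed_controls=70, unexposed_controls=30)
print(f"OR = {or_estimate:.2f}")  # OR < 1 suggests the intervention is protective
```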

33 Stunting in Tanzania
[Figure: stunting prevalence among children aged 24-59 months; p (mean HAZ) = 0.05]
Source: Schellenberg J et al.

34 Transparent Reporting of Evaluations with Nonrandomized Designs (TREND)
Similar to the CONSORT guidelines
Items include:
–conceptual frameworks used
–intervention and comparison conditions
–research design
–methods of adjusting for possible biases
Source: Des Jarlais, Lyles, Crepaz and the TREND Group, AJPH, March 2004

35 Conclusions (1)
RCTs are essential for
–clinical studies
–community studies establishing the efficacy of relatively simple interventions
RCTs require additional evidence from non-randomised studies to increase their external validity

36 Conclusions (2)
Given the complexity of many Public Health interventions, adequacy and plausibility studies are essential in different populations
–even for interventions proven by RCTs
Adequacy evaluations should become part of the routine of decision-makers
–and plausibility evaluations too, when possible

37 THANK YOU

