
Choice of study design: randomized and non-randomized approaches Iná S. Santos Federal University of Pelotas Brazil PAHO/PAHEF WORKSHOP EDUCATION FOR CHILDHOOD OBESITY PREVENTION: A LIFE-COURSE APPROACH Aruba, June 2012

Outline of the presentation
Introduction
–Types of evidence
–Internal and external validity
Randomized controlled trials
Non-randomized designs

Victora et al. Evidence-based Public Health: moving beyond randomized trials. Am J Public Health 2004;94(3):
Habicht JP et al. Evaluation designs for adequacy, plausibility and probability of public health programme performance and impact. Int J Epidemiol 1999;28:10-18

Part I Introduction –Types of evidence –Internal and external validity

Types of epidemiological evidence for Public Health

Type of evidence | Type of epidemiological study
Frequency of disease | Descriptive
Frequency of exposure | Descriptive
Exposure/disease relationship | Experimental (or observational)
Coverage of intervention | Descriptive
Efficacy of intervention | Experimental (or observational)
Programme effectiveness | Observational

Validity: internal and external
External population ⊇ Target population ⊇ Actual population ⊇ Sample

Validity
Internal validity
–Are the study results true for the target population?
–Are there errors that affect the study findings?
 Systematic error (bias, confounding)
 Random error (precision)
External validity
–Generalizability
–Are the study results applicable to other settings?

Validity
Internal validity
–May be judged on the basis of the study methods
External validity
–Requires a “value judgment”

Part II Randomized controlled trials (RCTs)

Internal validity in probability studies

Comparability of | Probability study (RCT) | Bias avoided
Populations | Randomization | Selection bias
Observations | Blinding | Information bias
Effects | Use of placebo | Hawthorne effect; placebo effect

RCTs are the gold standard for internal validity

RCT (from the Cochrane Collaboration) In an RCT, participants are assigned by chance to receive either an experimental or control treatment. When an RCT is done properly, the effect of a treatment can be studied in groups of people who are the same at the outset and treated in the same way, except for the intervention being studied. Any differences then seen between the groups at the end of the trial can be attributed to the difference in treatment alone, and not to bias or chance.

Randomised controlled trials
Prioritise internal validity
–random allocation reduces selection bias and confounding
–blinding reduces information bias
Gained popularity through clinical trials of new drugs
Essential for determining efficacy of new biological agents
Adequate for short causal chains
–biological effects of drugs, vaccines, nutritional supplements, etc.
drug → pharmacological reaction → disease cure or alleviation
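The random allocation described above can be sketched minimally (illustrative only; the function name and fixed seed are assumptions, and real trials use concealed, often stratified block randomization):

```python
import random

def randomize(participants, seed=2012):
    """1:1 random allocation of participants to two trial arms.

    Illustrative sketch only: real trials use concealed,
    often stratified block randomization (see CONSORT items).
    """
    rng = random.Random(seed)       # fixed seed for reproducibility
    shuffled = list(participants)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {"intervention": shuffled[:half], "control": shuffled[half:]}

groups = randomize([f"P{i:02d}" for i in range(20)])
```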

Pooling data from RCTs
Systematic review
–Comprehensive search for all high-quality scientific studies on a specific subject
 E.g. on effects of a drug, vaccine, surgical technique, behavioral intervention, etc.
Meta-analysis
–Groups data from different studies to determine an average effect
–Improves the precision of the available estimates by including a greater number of people
–But: data from different studies cannot always be combined
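The precision gain from pooling can be sketched with fixed-effect (inverse-variance) weighting; the effect estimates and standard errors below are hypothetical:

```python
import math

def pooled_effect(effects, std_errs):
    """Fixed-effect (inverse-variance) pooling of study estimates.

    Each study is weighted by 1/SE^2; pooling improves precision,
    assuming all studies estimate one common effect.
    """
    weights = [1.0 / se ** 2 for se in std_errs]
    est = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return est, se

# Hypothetical effect estimates and standard errors from three trials
est, se = pooled_effect([0.10, 0.15, 0.12], [0.04, 0.05, 0.03])
```

The pooled standard error is smaller than that of any single study, which is the precision gain the slide refers to.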

What does an RCT show? The probability that the observed result is due to the intervention. But additional evidence is required to make this result conceptually plausible –Biological plausibility –Operational plausibility

Special issues in RCTs
“Intent-to-treat” analyses
–Individuals/groups should remain in the group to which they were originally assigned
Units of analysis
–When allocation is by group (e.g., health centers, communities), it is incorrect to analyse the data at the individual level
–This has implications for sample size calculation and for analysis methods
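The unit-of-analysis point can be made concrete with the standard design effect, DEFF = 1 + (m − 1) × ICC: with group allocation, the effective sample size is smaller than the number of individuals, so individual-level analysis overstates precision. A minimal sketch (the formula is standard; the numbers are hypothetical):

```python
def design_effect(cluster_size, icc):
    """DEFF = 1 + (m - 1) * ICC for cluster-allocated trials."""
    return 1.0 + (cluster_size - 1) * icc

def effective_n(n_individuals, cluster_size, icc):
    """Effective sample size after accounting for clustering."""
    return n_individuals / design_effect(cluster_size, icc)

# Hypothetical: 1000 individuals in clusters of 50, ICC = 0.05
deff = design_effect(50, 0.05)
n_eff = effective_n(1000, 50, 0.05)
```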

CONSORT Statement
Rationale, Eligibility, Interventions, Objectives, Outcomes, Sample size
Randomization
–Sequence generation
–Allocation concealment
–Implementation
–Blinding (masking)
Statistical methods
Participant flow, Recruitment, Baseline data, Numbers analyzed
Outcomes and estimation, Ancillary analyses, Adverse events
Interpretation, Generalizability, Overall evidence

Major steps in Public Health trials
Central-level provision of intervention to local outlets (e.g. health facilities)
Local providers’ compliance with delivery of intervention
Recipient compliance with intervention
Biological effect of intervention
Source: Victora, Habicht, Bryce, AJPH 2004

Example of Public Health Intervention: Nutrition Counselling Trial
Causal chain: National programme is implemented → Health workers are trained → HW knowledge increases → HW performance improves → Maternal knowledge increases → Child diets change → Energy intake increases → Nutritional status improves
Underlying assumptions: Central team is competent; HWs are trainable; Equipment is available; Utilization is adequate; Food is available; Lack of food is a cause of malnutrition
Source: Santos, Victora et al. J Nutr 2001

Example of Public Health Intervention: Nutrition Counselling Trial
Same causal chain as the previous slide, annotated with an observed result of 0.21
Source: Santos, Victora et al. J Nutr 2001

Are RCT findings generalizable to routine programmes?
The dose of the intervention may be smaller
–behavioural effect modification (provider behaviour, recipient behaviour)
The dose-response relationship may be different
–biological effect modification
The longer the causal chain, the more likely is effect modification
Source: Victora, Habicht, Bryce, AJPH 2004

Curvilinear associations (figure: dose-response curve; trials are often done at one part of the curve, while results are often applied at another)
Source: Victora, Habicht, Bryce, AJPH 2004

Why do RCTs have a limited role in large-scale effectiveness evaluations?
Often impossible to randomize
–unethical, politically unacceptable, rapid scaling up
Evaluation team affects service delivery
–service delivery is at least “best-practice”
Effect modification is the rule
–are meta-analyses of complex programmes meaningful?
–need for local data
Need for supplementary approaches for evaluations in Public Health

Part III Non-randomized designs (Quasi-experiments)

Types of inference in impact evaluations
Adequacy (descriptive studies) – the expected changes are taking place
Plausibility (observational studies) – observed changes seem to be due to the programme
Probability (RCTs) – a randomised trial shows that the programme has a statistically significant impact
Source: Habicht, Victora, Vaughan, IJE 1999

Ensuring internal validity in probability and plausibility studies

Comparability of | Probability (RCT) | Plausibility (quasi-experiment)
Populations | Randomization | Matching; understanding determinants of implementation; handling contextual factors
Observations | Blinding | Avoiding information bias
Effects | Use of placebo | Being aware of Hawthorne bias and of the placebo effect

Adequacy evaluations
Questions:
–Were the initial goals achieved? E.g.: reduce under-five mortality by 20%
–Were the observed trends in impact indicators in the expected direction? Of adequate magnitude?
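An adequacy question such as “did under-five mortality fall by 20%?” is a simple before/after comparison against the stated goal; a minimal sketch (the function name and figures are hypothetical):

```python
def adequacy_check(baseline, current, target_reduction=0.20):
    """Was the goal (e.g. a 20% reduction) achieved?

    Returns the observed proportional reduction and whether it
    meets or exceeds the target.
    """
    observed = (baseline - current) / baseline
    return observed, observed >= target_reduction

# Hypothetical under-five mortality rates per 1000, before and after
obs, met = adequacy_check(50.0, 38.0)
```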

Plausibility evaluations
Question:
–Is the observed impact likely due to the intervention?
Requires ruling out the influence of external factors:
–need for comparison group
–adjustment for confounders
Also known as quasi-experiments

Adequacy/plausibility designs (1)
Design: cross-sectional
Measurement points: once
Outcome: difference or ratio
Control group:
–Individuals who did not receive the intervention
–Groups/areas without the intervention
–Dose-response analyses, if possible

ORT and diarrhea deaths in Brazil (figure: scatter plot, each dot = 1 state)
Spearman r = -0.61 (p = 0.04)
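A dose-response analysis like the one above rank-correlates state-level intervention coverage with the outcome; a minimal sketch with hypothetical data (Spearman's r is the Pearson correlation of the ranks):

```python
def ranks(xs):
    """Rank values 1..n, averaging the ranks of ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of tied positions, 1-based
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

# Hypothetical state-level data: ORT coverage (%) vs diarrhoea deaths
coverage = [20, 35, 40, 55, 60, 75]
deaths = [90, 80, 85, 50, 40, 30]
r = spearman(coverage, deaths)  # strongly negative, as in the slide
```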

Adequacy/plausibility designs (2)
Design: longitudinal (before-and-after)
Measurement points: twice or more
Outcome: change
Control group:
–The same or similar individuals, before the intervention
–The same groups/areas, before the intervention
–Time-trend analyses, if possible

Hib vaccine in Uruguay In Uruguay, reported Hib cases declined by over 95 percent after the introduction of routine infant Hib immunisation. Source: PAHO, 2004

Adequacy/plausibility designs (3)
Design: longitudinal-control
Measurement points: twice or more
Outcome: relative change
Control group:
–Similar individuals or groups/areas without the intervention, measured at the same time points as the intervention group
–Time-trend analyses, if possible
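The “relative change” outcome in a longitudinal-control design is the change in the intervention group minus the change in the control group (a difference-in-differences); a minimal sketch with hypothetical figures:

```python
def difference_in_differences(treat_before, treat_after,
                              ctrl_before, ctrl_after):
    """Relative change: change in intervention areas minus
    change in control areas."""
    return (treat_after - treat_before) - (ctrl_after - ctrl_before)

# Hypothetical stunting prevalence (%) before/after the programme
impact = difference_in_differences(40.0, 30.0, 42.0, 38.0)
```

Subtracting the control-group change helps rule out secular trends that would have occurred without the programme.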

Adequacy/plausibility designs (4)
Design: case-control
Measurement points: once
Comparison: exposure to intervention
Groups:
–Cases: individuals with the disease of interest
–Controls: sample of the population from which the cases originated
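In a case-control evaluation, the intervention effect is estimated by comparing exposure odds between cases and controls; a minimal sketch (Woolf confidence interval on the log scale; the counts are hypothetical):

```python
import math

def odds_ratio(a, b, c, d):
    """Odds ratio for a case-control 2x2 table, with a 95% CI
    on the log scale (Woolf method).

    a = exposed cases, b = unexposed cases,
    c = exposed controls, d = unexposed controls.
    """
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - 1.96 * se_log)
    hi = math.exp(math.log(or_) + 1.96 * se_log)
    return or_, (lo, hi)

# Hypothetical counts: exposure to the intervention among cases/controls
or_, ci = odds_ratio(30, 70, 60, 40)
```

An odds ratio below 1 (with the CI excluding 1) would suggest the intervention is protective.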

Stunting in Tanzania (figure: stunting prevalence among children aged … months; p (mean HAZ) = 0.05)
Source: Schellenberg J et al.

Transparent Reporting of Evaluations with Nonrandomized Designs (TREND)
Similar to the CONSORT guidelines
Reports should include:
–conceptual frameworks used
–intervention and comparison conditions
–research design
–methods of adjusting for possible biases
AJPH, March 2004
Source: Des Jarlais, Lyles, Crepaz and the TREND Group, AJPH 2004

Conclusions (1)
RCTs are essential for
–clinical studies
–community studies establishing the efficacy of relatively simple interventions
RCTs require additional evidence from non-randomised studies to increase their external validity

Conclusions (2)
Given the complexity of many Public Health interventions, adequacy and plausibility studies are essential in different populations
–even for interventions proven by RCTs
Adequacy evaluations should become part of the routine of decision-makers
–and plausibility evaluations too, when possible

THANK YOU