Design and conduct of evaluations of CVD control programs (part I)
Gilles Paradis, MD, MSc, FRCPC
Jennifer O'Loughlin, PhD
McGill University Health Center
Department of Epidemiology and Biostatistics, McGill University

Outline (Part I)
- Why evaluate?
- What is evaluation?
- Evaluate what?
- Scope of evaluation
- Methodological issues

Why evaluate?
1. Accountability: report on the attainment of objectives and the use of limited resources
2. Improvement: of treatment and program performance
3. Advocacy: enhance programs, build consensus, support coalitions

Why evaluate?
- Social responsibility beyond "Primum non nocere"
- Many well-established interventions have subsequently been shown to be useless or harmful
- Myocardial infarction: prolonged bed rest, magnesium, Class I antiarrhythmics, calcium channel blockers
- Prevention: beta-carotene, HRT (?)

What is evaluation?
- A process of systematic data collection or information gathering to shed light on some aspect of an action or intervention
- Respond to specific questions regarding a program: "Who is being reached by...?"
- Support decision making: "Which of two alternative strategies is more effective?"

What is evaluation? (continued)
- Enhance community participation: "What are key community concerns?"
- Improve understanding of mechanisms of action: "How can I reach low-SES populations with this program?"
- Support community mobilization: "What do key stakeholders expect from a coalition?"

Evaluate what?
- Primary prevention programs: reduce exposure to risk factors, decrease incidence
- Secondary prevention: prevent progression among affected but asymptomatic individuals (e.g., high blood pressure (HBP), ...); screening, case-finding

Evaluate what? (continued)
- Tertiary prevention: decrease morbidity and mortality among symptomatic individuals; improve quality of life and functioning
- Individual practice: diagnostic, preventive, therapeutic
- Organizational or community changes: structural (inputs, resources mobilized), process (quality of services), outcomes (attainment of objectives)

Scope of evaluation: two broad approaches
1. Normative evaluation
2. Evaluative research

Scope of evaluation: 1 - Normative
Quality of preventive care
- Goal: compare practices to standards of excellence or explicit criteria
- Examples: rules for use of resources (Who gets fasting lipoprotein profiles? Who gets 24-hour BP monitoring? Streptokinase or tPA?); criteria of quality preventive care (management of HBP and type II diabetes; management of patients with ischemic heart disease)
- Methods: chart audits, surveys

Scope of evaluation: 1 - Normative (continued)
Quality of programs
- Goals: structural (appropriate use of resources?), process (target population attained? program implemented as intended?), impact (were objectives achieved?)
- Example: HBP screening in worksites
- Methods: review of reports and existing databases, key informant interviews, surveys
Evaluation of (public health) organizations
- Structure, functioning, planning, etc.

Scope of evaluation: 2 - Evaluative research
- Efficacy
- Effectiveness
- Efficiency (cost-benefit, cost-effectiveness; illustrated below)
- Quality of preventive care (decision analysis)
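As a hedged illustration of the efficiency criterion (a standard formulation, not taken from the slides), cost-effectiveness analyses typically report an incremental cost-effectiveness ratio comparing the program with an alternative:

    \mathrm{ICER} = \frac{C_{\text{program}} - C_{\text{alternative}}}{E_{\text{program}} - E_{\text{alternative}}}

expressed, for example, as additional cost per case of CVD prevented or per QALY gained.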

Methodological issues
1. Specification of the theoretical model
2. Design
3. Measures (what and how)
4. Biases
5. Analysis

Methodological issues: 1 - Theoretical model
- Avoid the "black box" phenomenon
- Observe the connecting processes between inputs and outputs
- Key to understanding and improving interventions
- Describes how the program produces its effect
- Blueprint for selecting variables, guiding the analysis, and interpreting results

Methodological issues: 2 - Design
General model:
- Intervention group: Initial state (t0) → Intervention → Subsequent state (t1)
- Is the observed change due to the intervention, to the passage of time, or to something else?
- To answer this, compare the intervention group with a comparison group followed over the same period without the intervention: Initial state (t0) → Subsequent state (t1)
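As an illustration (the slides do not give a formula), this comparison can be written as a simple difference-in-differences estimate. With mean outcomes in the intervention (I) and comparison (C) communities at t0 and t1:

    \hat{\Delta} = (\bar{Y}_{I,t_1} - \bar{Y}_{I,t_0}) - (\bar{Y}_{C,t_1} - \bar{Y}_{C,t_0})

Time effects and secular trends that affect both communities equally cancel out, leaving an estimate of the intervention effect.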

Methodological issues: 2 - Design
Design options:
- Repeat cross-sectional surveys
- Cohort
- Randomized controlled trial (RCT)
- (Case-control)

Methodological issues: 2 - Design
Cohort vs. repeat cross-sectional (C/S) surveys:
- Cohort: captures individual behavior change; participation is non-anonymous; attrition related to the behavior evaluated; repeat-testing and co-intervention effects; maturation and aging; over-representation of long-term residents; higher statistical power (1-β)
- Repeat C/S surveys: capture community-wide prevalence; anonymous; attrition, repeat testing, and maturation are less of a problem; cross-contamination; lower statistical power (1-β)
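A brief sketch of the power claim (standard sampling theory, added here for illustration): if the outcome has variance \sigma^2 and within-person correlation \rho between t0 and t1, the variance of the estimated mean change based on n people is

    \operatorname{Var}(\bar{Y}_{t_1} - \bar{Y}_{t_0}) = \frac{2\sigma^2(1-\rho)}{n} \;\text{(cohort)} \quad \text{vs.} \quad \frac{2\sigma^2}{n} \;\text{(independent cross-sectional samples)}

so for \rho > 0 the cohort estimate is more precise and power (1-β) is higher, at the price of the attrition and repeat-testing threats listed above.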

Methodological issues: 2 - Design
RCT:
- Unbiased allocation
- Similar distribution of risk factors (known or unknown) across groups
- Comparability of groups
- Validity of statistical tests
- Feasibility, costs
- Other options to minimize biases (matching, stratification, ...)
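As a minimal sketch of how unbiased, stratified allocation might be implemented (illustrative only; the function assign_stratified, the permuted-block scheme, and the worksite example are assumptions, not from the slides):

    import random

    def assign_stratified(units, stratum_of, block_size=4, seed=42):
        """Randomly assign units to 'intervention'/'control' within strata,
        using permuted blocks so the arms stay balanced in each stratum."""
        rng = random.Random(seed)
        arms = {}
        strata = {}
        for u in units:                      # group units by stratum (e.g., worksite size)
            strata.setdefault(stratum_of(u), []).append(u)
        for members in strata.values():
            rng.shuffle(members)
            for i in range(0, len(members), block_size):
                block = ["intervention", "control"] * (block_size // 2)
                rng.shuffle(block)           # permuted block keeps arms balanced
                # note: a final partial block may be slightly imbalanced
                for u, arm in zip(members[i:i + block_size], block):
                    arms[u] = arm
        return arms

    # Hypothetical example: 12 worksites stratified by size
    worksites = [f"site_{i}" for i in range(12)]
    allocation = assign_stratified(worksites, lambda u: "large" if int(u.split("_")[1]) < 6 else "small")
    print(allocation)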

Methodological issues: 3 - Measures
What to measure?
- Mortality, morbidity
- Quality of life (QOL)
- Risk factors
- Behaviors
- Physical and social environments
Proximal impacts are easier to measure than distal ones.

Methodological issues: 3 - Measures
How to measure? Reliability and validity:
- Self-reported behaviors
- Social desirability
- Pre-testing instruments
- Objective measures / gold standard
- Environmental measures (shelf space, no-smoking signs, ...)
- Surrogate reports from next of kin
- Bogus measurements
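A minimal sketch, assuming binary self-reported smoking status is validated against a biochemical gold standard (the function validity_stats and the data are hypothetical, not from the slides):

    def validity_stats(self_report, gold_standard):
        """Compare binary self-reports against a gold-standard measure.
        Both inputs are lists of 0/1 (e.g., 1 = smoker)."""
        tp = sum(1 for s, g in zip(self_report, gold_standard) if s == 1 and g == 1)
        tn = sum(1 for s, g in zip(self_report, gold_standard) if s == 0 and g == 0)
        fp = sum(1 for s, g in zip(self_report, gold_standard) if s == 1 and g == 0)
        fn = sum(1 for s, g in zip(self_report, gold_standard) if s == 0 and g == 1)
        n = tp + tn + fp + fn
        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        p_obs = (tp + tn) / n                                            # observed agreement
        p_exp = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2   # chance agreement
        kappa = (p_obs - p_exp) / (1 - p_exp)                            # Cohen's kappa
        return sensitivity, specificity, kappa

    # Hypothetical data: self-report under-reports smoking (social desirability)
    sens, spec, kappa = validity_stats([1, 0, 0, 1, 0, 0, 1, 0], [1, 1, 0, 1, 0, 1, 1, 0])
    print(f"sensitivity={sens:.2f} specificity={spec:.2f} kappa={kappa:.2f}")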

Methodological issues: 4 - Biases
Bias: a "distortion in the estimate of the effect of an exposure" due to:
- Selection of subjects
- How information is collected
- Confounding

Methodological issues: 4 - Biases
Community programs are particularly prone to biases:
- Random allocation is rare
- Limited number of clusters
- Important differences between groups (in absolute levels and secular trends)
- Multiple co-interventions
- Blinding is impossible

Methodological issues: 4 - Biases
Solutions:
- Matching of intervention and comparison communities
- Increasing the number of matched pairs
- Increasing the number of measurements
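A minimal sketch of a matched-pair, community-level analysis, assuming the outcome is the change in risk-factor prevalence per community (the function paired_community_test and the numbers are hypothetical, not from the slides):

    from statistics import mean, stdev
    from math import sqrt

    def paired_community_test(pairs):
        """pairs: list of (change_intervention, change_control) per matched community pair.
        Returns the mean pair difference and a paired t statistic (df = n_pairs - 1)."""
        diffs = [i - c for i, c in pairs]           # within-pair differences in change
        n = len(diffs)
        t = mean(diffs) / (stdev(diffs) / sqrt(n))  # paired t statistic
        return mean(diffs), t

    # Hypothetical changes in smoking prevalence (percentage points) in 5 matched pairs
    pairs = [(-4.0, -1.0), (-3.5, -2.0), (-5.0, -1.5), (-2.0, -2.5), (-4.5, -1.0)]
    d_bar, t_stat = paired_community_test(pairs)
    print(f"mean pair difference = {d_bar:.2f} points, t({len(pairs) - 1}) = {t_stat:.2f}")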

Methodological issues: 5 - Analysis
- Effects are measured at the individual level, but allocation and intervention occur at the community level → lower power (1-β)
- High intra-class correlations bias the standard error computed at the individual level (→ false-positive results)
- The standard error must be computed at the community level, which requires a larger number of communities (N), adjustment for the sampling procedures, and more waves of data collection
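To make the intra-class correlation point concrete (a standard formula added here for illustration, not taken from the slides), the variance inflation ("design effect") for a cluster design is:

    \mathrm{DEFF} = 1 + (m - 1)\,\rho

where m is the average number of individuals per community and \rho is the intra-class correlation. Even a small \rho inflates the variance substantially when m is large: with m = 500 and \rho = 0.01, DEFF = 1 + 499(0.01) ≈ 6, so an individual-level analysis that ignores clustering understates the standard error by a factor of about √6 ≈ 2.4.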