Lecture 7: Evaluation of interventions

1 Lecture 7: Evaluation of interventions
- Types of intervention
- Introduction to social science terminology and concepts of intervention study design
- Study design: experimental, quasi-experimental, observational

2 Requirements of health care
- Effective: effectiveness vs efficacy?
- Efficient: minimize use of resources
- Equitable: equity in access; use related to need
- Acceptable: client perception of care

3 Efficacy vs effectiveness (Definitions from Last’s Dictionary of Epidemiology)
Efficacy (Can it work?): the extent to which a specific intervention, procedure, regimen or service produces a beneficial result under ideal conditions. Ideally, the determination of efficacy is based on the results of a randomized controlled trial.
Effectiveness (Does it work?): the extent to which a specific intervention, procedure, regimen or service, when deployed in the field, does what it is intended to do for a defined population. (The main distinction between effectiveness and efficacy is that effectiveness refers to average rather than ideal conditions of use.)

4 Types of intervention Classified by purpose:
- primary prevention (prevention of onset of disease)
- secondary prevention (screening, early detection, and prompt treatment)
- tertiary prevention (of chronic conditions, to decrease disability and increase quality of life)

5 Types of intervention
Classified by complexity of technology involved (technology assessment paradigm):
- drugs
- devices
- procedures
- systems of care

6 Intervention study or study of an intervention?
Intervention study (referring to a study design): an investigation involving intentional change in some aspect of the status of the subjects, e.g., introduction of a preventive or therapeutic regimen, or designed to test a hypothesized relationship; usually an experiment such as a randomized controlled trial. (Definitions from Last’s Dictionary of Epidemiology)
Study of an intervention (referring to the study purpose): a study of a health care intervention; may be experimental or non-experimental (observational).

7 Level of evaluation STRUCTURE: Staff, equipment needed to deliver intervention. PROCESS: is the intervention service provided as planned? (Interaction between structure and patient/client) OUTCOMES: expected or unexpected results, either positive or negative.

8 Level of evaluation In evaluation of intervention, outcomes are of primary interest To help interpret the results, measures of structure and process are desirable, e.g.: adherence to intervention “dose” of intervention actually received characteristics of staff who deliver intervention

9 Step 1: intervention objectives
Specify positive and negative outcomes expected
Measurable outcomes:
- changes in natural history: death, disease, disability, distress
- behaviors, attitudes (e.g., educational interventions)

10 Methodological issues in evaluation of interventions
Two paradigms:
- epidemiological (clinical and public health roots)
- social science (sociological roots)
Two sets of terminology!

11 Internal and external validity of an intervention study
Internal validity: the degree to which an observed effect can be attributed to an intervention.
External validity: the degree to which an observed effect that is attributable to an intervention can be generalized to similar populations and settings (generalizability).
Note: both internal and external validity are aspects of the validity of a study and should be distinguished from the validity of measurements.

12 Threats to internal validity
- History: extraneous events (e.g., breast cancer screening)
- Maturation: aging (e.g., drug abuse treatment)
- Testing: e.g., effects of pre-testing
- Instrumentation
- Regression to the mean (a small simulation sketch follows below)
- Selection
- Attrition
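
Regression to the mean is easiest to see by simulation. The Python sketch below uses made-up numbers (not from the lecture): subjects are "enrolled" because their pre-test score is extreme, and even with no intervention at all their post-test scores drift back toward the mean.

    import numpy as np

    # Illustrative only: regression to the mean with no intervention effect.
    rng = np.random.default_rng(0)
    true_score = rng.normal(100, 10, size=10_000)        # stable underlying level
    pre = true_score + rng.normal(0, 10, size=10_000)    # pre-test = truth + noise
    post = true_score + rng.normal(0, 10, size=10_000)   # post-test, still no intervention

    selected = pre > 120                                  # enrol only the extreme scorers
    print("mean pre-test of enrolled: ", pre[selected].mean())   # roughly 126
    print("mean post-test of enrolled:", post[selected].mean())  # roughly 113, back toward 100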

13 Threats to external validity
Is intervention equally effective in different populations, including more naturalistic applications? Usually not. Why?
- Methodological:
  - interaction of intervention with pre-testing
  - reactive effects (to testing): Hawthorne effects
- Differences in intervention:
  - characteristics of intervention personnel
  - process of implementation

14 Study designs
- Experimental: investigator has complete control over allocation and timing of the intervention; usually randomized
- Quasi-experimental: investigator has “some control” over timing or allocation of the intervention
- Observational: investigator has no control over allocation or timing of the intervention

15 Diagramming Intervention Evaluation Designs (Campbell and Stanley)
X = program
O = measurement
R = randomization

16 Randomized (Experimental) Designs
Randomized pre-test post-test control group design:
  R  O1  X  O2
  R  O3      O4
Post-test only control group design:
  R  X  O1
  R      O2
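
A minimal analysis sketch for the R O1 X O2 / R O3 O4 design, using hypothetical data in Python: compare change scores between the randomized arms. (An ANCOVA of post-test on arm, adjusting for pre-test, is another common choice; the point here is only that randomization licenses the between-arm comparison.)

    import numpy as np
    from scipy import stats

    # Hypothetical data for a randomized pre-test / post-test control group design.
    rng = np.random.default_rng(1)
    pre_tx, post_tx = rng.normal(50, 10, 100), rng.normal(55, 10, 100)    # O1, O2
    pre_ctl, post_ctl = rng.normal(50, 10, 100), rng.normal(50, 10, 100)  # O3, O4

    # Compare change scores (post minus pre) between the two randomized arms.
    change_tx, change_ctl = post_tx - pre_tx, post_ctl - pre_ctl
    t, p = stats.ttest_ind(change_tx, change_ctl)
    print(f"difference in mean change: {change_tx.mean() - change_ctl.mean():.1f}")
    print(f"t = {t:.2f}, p = {p:.3f}")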

17 Quasi-experimental study designs
Investigator has “some control” over timing or allocation of the intervention
- Non-randomized or quasi-randomized trials
- Non-equivalent control group designs (may or may not be randomized):
  - pre-test and post-test
  - post-test only
  - Solomon 4-group

18 Some quasi-experimental designs

19 Solomon four-group design
  R  O1  X  O2
  R  O3      O4
  R       X  O5
  R           O6
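
The value of the Solomon design is that the four post-tests form a 2x2 factorial (pretested yes/no by treated yes/no), so the testing effect and its interaction with treatment can be estimated separately from the treatment effect. A back-of-the-envelope sketch with hypothetical group means (a full analysis would use an ANOVA on the individual post-test scores):

    # Hypothetical post-test means for the four Solomon groups (not real data).
    post = {"O2": 58.0,   # pretested, treated
            "O4": 51.0,   # pretested, untreated
            "O5": 57.0,   # not pretested, treated
            "O6": 50.0}   # not pretested, untreated

    treatment_effect = ((post["O2"] - post["O4"]) + (post["O5"] - post["O6"])) / 2
    testing_effect = ((post["O2"] - post["O5"]) + (post["O4"] - post["O6"])) / 2
    interaction = (post["O2"] - post["O4"]) - (post["O5"] - post["O6"])
    print(treatment_effect, testing_effect, interaction)   # 7.0, 1.0, 0.0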

20 Examples of pre-post non-equivalent control group design
Stanford 5-city study of CHD prevention
- intervention included mass media education and group interventions for those at high risk
- 5 cities selected with similar characteristics
- cities sharing a media market were allocated to the intervention
- isolated cities were allocated to the control group
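
One common way to analyse a pre-test / post-test non-equivalent control group design is a difference-in-differences contrast, sketched below in Python with hypothetical prevalence figures (these are not the Stanford results):

    # Hypothetical CHD risk-factor prevalence (%), before and after the campaign.
    intervention = {"pre": 24.0, "post": 19.0}   # cities with the media campaign
    control = {"pre": 23.5, "post": 22.0}        # comparison cities

    # Change in the intervention cities minus change in the control cities.
    did = (intervention["post"] - intervention["pre"]) - (control["post"] - control["pre"])
    print(f"difference-in-differences estimate: {did:.1f} percentage points")   # -3.5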

21 Other designs: recurrent institutional cycle design
- Finnish mental hospital study of a dietary intervention to prevent CHD
- 2 hospitals selected; received the intervention sequentially
- useful design if considered unethical to withhold the intervention

22 Observational designs
Investigator has NO control over allocation or timing of the intervention:
- cross-sectional (after only)
- separate-sample pre-test post-test
- time series (trend) designs: single or multiple
- cohort studies
- panel studies

23 Example of trend study: Health insurance in Quebec
1961: universal hospital insurance included ER care for accidents
1970: universal health insurance (Medicare) added MD care, including hospital outpatient clinics and ERs

24 Example of trend study: Health insurance in Quebec
Population surveys before and after
Effects on:
- use of physician services by the general population
- physician workload
- use of emergency rooms
- hospitalization and surgery

25 MD visits/person/year by income (household surveys)

26 MD visits/person/year (household surveys)

27 MD visits/person/year by income (household surveys)

28 % adults with cough 2+ weeks who consulted MD (household surveys)

29 % children (<17) with tonsillitis or sore throat and fever who consulted MD (household surveys)

30 % pregnancies with visit in first trimester (household survey)

31 % Tried to contact MD before ED visit; of these, % successful (6 hospital sample)

32 Time series designs

33 Example of time series study: Tamblyn et al, 2001
Evaluation of prescription drug cost-sharing among poor and elderly persons
Methods:
- trend study: multiple pre- and post-measurements
- cohort study
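
For a single interrupted time series with multiple pre- and post-measurements, segmented regression is a standard analysis. The Python sketch below uses simulated monthly data and a generic model; it is not the model or the data of Tamblyn et al.

    import numpy as np

    # Simulated monthly counts around a policy change at month 12.
    rng = np.random.default_rng(2)
    months = np.arange(24)
    after = (months >= 12).astype(float)
    y = 100 + 0.5 * months - 8 * after + rng.normal(0, 2, 24)

    # Segmented regression: baseline level and trend, plus a change in level
    # and a change in trend at the policy date.
    X = np.column_stack([np.ones(24), months, after, after * (months - 12)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    print("estimated change in level at the policy date:", round(coef[2], 1))   # about -8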

34 Source: Tamblyn et al, JAMA 2001, 285(4): 421-429

35 Source: Tamblyn et al, JAMA 2001, 285(4): 421-429

36 Some Weak Observational Designs
One-shot case study:
  X  O
Static group comparison:
  X  O1
      O2

37 Time-series design: Home care in terminal cancer
- evaluation of a home-hospice programme in Rochester, NY
- expansion of home-care benefits in 1978
- hypothesis: home-hospice care in the last month of life reduces hospital days and costs
- data sources: linkage of tumor registry and health insurance claims databases

38

39

40 Epidemiological observational analytical designs
Difference in independent and dependent variables:
- studies of risk factors: independent variable = risk factor; dependent variable = disease
- studies of interventions: independent variable = intervention; dependent variable = outcome

41 Cohort study
Selection of controls: could they receive either treatment?
Example: medical vs surgical treatment of CHD
Sources of bias:
- confounding by indication
- selection bias
- detection bias (etc.)

42 Cohort study
Cohorts with and without “exposure” (the intervention) followed to determine outcomes
Control cohort: concurrent or historical (confounding by changes over time in the patient population and in aspects of treatment other than the intervention; measurement of confounders)

43 Example of cohort study
- Do HMOs reduce hospitalization in terminal cancer patients during the 6 months before death?
- administrative databases and tumor registry from Rochester, NY
- cancer deaths in 100 pairs of HMO members and non-members
- matched by age, cancer site, and months from diagnosis to death
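
Because the design is pair-matched, a natural analysis is of within-pair differences. A Python sketch with simulated hospital-day counts (not the Rochester data):

    import numpy as np
    from scipy import stats

    # Simulated hospital days in the 6 months before death for 100 matched pairs.
    rng = np.random.default_rng(3)
    hmo = rng.poisson(12, 100)       # HMO members
    non_hmo = rng.poisson(16, 100)   # matched non-members

    diff = hmo - non_hmo             # within-pair differences
    t, p = stats.ttest_rel(hmo, non_hmo)
    print(f"mean within-pair difference: {diff.mean():.1f} days (t = {t:.2f}, p = {p:.3f})")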

44

45 Case-control study
Cases (with the outcome) compared to controls (without the outcome) with regard to (previous) intervention
Limited to a single, categorical outcome
Sources of bias:
- confounding by selection
- confounding by indication
- detection bias
For screening programs: separation of screening tests from tests done after symptoms appear
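
The basic case-control estimate is an odds ratio for prior exposure to the intervention (here, screening) among cases versus controls. A Python sketch with a hypothetical 2x2 table and a Woolf confidence interval:

    import math

    # Hypothetical 2x2 table: exposure = previously screened, outcome = case status.
    a, b = 30, 70   # screened:     cases, controls
    c, d = 60, 40   # not screened: cases, controls

    odds_ratio = (a * d) / (b * c)                  # (30*40)/(70*60) ~= 0.29
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)    # Woolf (log odds ratio) standard error
    lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
    hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
    print(f"OR = {odds_ratio:.2f} (95% CI {lo:.2f} to {hi:.2f})")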

46 Case-control study: Examples
Screening programs:
- screening Pap test and invasive cervical cancer
- screening mammography and breast cancer deaths
- screening sigmoidoscopy and colon cancer deaths
Vaccine effectiveness (e.g., BCG)
Neonatal intensive care and neonatal deaths

47 Considerations in selection of a study design
- Cost
- Feasibility
- Ethical issues
- Internal validity
- External validity
- Credibility
