1 The Campbell Collaboration (www.campbellcollaboration.org). C2 Training: May 9-10, 2011. Data Analysis and Interpretation: Computing effect sizes

2 A brief introduction to effect sizes. Meta-analysis expresses the results of each study using a quantitative index of effect size (ES). ESs are measures of the strength or magnitude of a relationship of interest. ESs have the advantage of being comparable (i.e., they estimate the same thing) across all of the studies and can therefore be summarized across studies in the meta-analysis. They are also relatively independent of sample size.

3 Effect Size Basics. Effect sizes can be expressed in many different metrics (d, r, odds ratio, risk ratio, etc.), so be sure to be specific about the metric. Effect sizes can be unstandardized or standardized: unstandardized = expressed in raw measurement units; standardized = expressed in standardized measurement units.

4 Unstandardized Effect Sizes. Examples: a 5-point gain in IQ scores; a 22% reduction in repeat offending; €600 savings per person. Unstandardized effect sizes are helpful in communicating intervention impacts, but in many systematic reviews they are not usable because not all studies operationalize the dependent variable in the same way.

5 Standardized Effect Sizes. Some standardized effect sizes are relatively easy to interpret (the correlation coefficient, the risk ratio); others are not (the standardized mean difference d, the odds ratio and logged odds ratio).

6 Types of effect size. Most reviews use effect sizes from one of three families: the d family, including the standardized mean difference; the r family, including the correlation coefficient; and the odds ratio (OR) family, including proportions and other measures for categorical data.

7 Effect size computation. Compute a measure of the "effect" of each study as our outcome. Range of effect sizes: differences between two groups on a continuous measure; relationship between two continuous measures; differences between two groups in frequency or incidence.

8 Types of effect sizes: standardized mean difference, correlation coefficient, odds ratio.

9 Standardized mean difference. Used when we are interested in two-group comparisons of means. The groups could be two experimental groups or, in an observational study, two groups of interest such as boys versus girls.

10 Notation for study-level statistics. n is the sample size.
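The notation itself appears on the slide as an image that is not captured in this transcript. A plausible reading, consistent with the formulas used later in the deck, is: n_1 and n_2 are the group sample sizes (with N = n_1 + n_2), \bar{Y}_1 and \bar{Y}_2 the group means, and s_1 and s_2 the group standard deviations, where subscript 1 denotes the treatment group and subscript 2 the comparison group.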

11 Notation for study-level statistics.

12 Standardized mean difference, computed using the pooled sample standard deviation.
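The equation on this slide is an image that did not survive the transcript. Assuming the deck uses the standard definition (as in Hedges & Olkin and Lipsey & Wilson), the standardized mean difference is

  ES_{sm} = \frac{\bar{Y}_1 - \bar{Y}_2}{s_p}

where s_p is the pooled sample standard deviation defined on the next slide.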

13 Pooled sample standard deviation.
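Again the formula is an image not preserved here; the usual pooled sample standard deviation, which this slide presumably shows, is

  s_p = \sqrt{ \frac{(n_1 - 1) s_1^2 + (n_2 - 1) s_2^2}{n_1 + n_2 - 2} }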

14 Correction to ES_sm.
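The correction on this slide is not preserved in the transcript; it is presumably the usual small-sample (Hedges) correction for bias in ES_sm,

  ES'_{sm} = \left( 1 - \frac{3}{4N - 9} \right) ES_{sm}, where N = n_1 + n_2.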

15 Standard error of the standardized mean difference.
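The standard error formula is likewise an image; the standard expression, assuming the deck follows the common convention, is

  SE_{sm} = \sqrt{ \frac{n_1 + n_2}{n_1 n_2} + \frac{(ES'_{sm})^2}{2(n_1 + n_2)} }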

16 Example. Table 1 from: Henggeler, S. W., Melton, G. B., & Smith, L. A. (1992). Family preservation using multisystemic therapy: An effective alternative to incarcerating serious juvenile offenders. Journal of Consulting and Clinical Psychology, 60(6), 953-961.

17 Note: The text of the paper (p. 954) indicates that MST n = 43, usual services n = 41.

18 Computing the pooled sd.

19 Computing ES_sm.

20 Computing the unbiased ES'_sm.

21 Computing SE_sm.

22 95% Confidence interval for ES'_sm. The 95% confidence interval for the standardized mean difference in weeks of incarceration runs from -1.06 to -0.18 standard deviations. Given that the sd of weeks incarcerated is 16.6, the juveniles in MST were incarcerated on average between -1.06 × 16.6 = -17.6 and -0.18 × 16.6 = -3.0 weeks relative to juveniles in the standard treatment, i.e., between 3.0 and 17.6 fewer weeks. In weeks, the confidence interval is [-17.6, -3.0].

23 Note: The text of the paper (p. 954) indicates that MST n = 43, usual services n = 41.

24 Practice computations. Compute the effect size for the number of arrests; compute the effect size with the bias correction; compute the 95% confidence interval for the effect size; interpret the effect size. (A sketch of these steps in code is given below.)
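The slides that follow work this example with the study's means and SDs, which are not preserved in this transcript. As a sketch only, the Python function below implements the steps listed above (pooled SD, ES_sm, small-sample correction, SE, and 95% CI). The function name and the group means and SDs in the example call are hypothetical placeholders, not values from Henggeler et al. (1992); only the group sizes (n = 43 and 41) come from the paper.

import math

def smd_with_ci(m1, s1, n1, m2, s2, n2):
    """Standardized mean difference with small-sample correction and 95% CI."""
    # Pooled sample standard deviation
    s_p = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    es = (m1 - m2) / s_p                      # ES_sm
    correction = 1 - 3 / (4 * (n1 + n2) - 9)  # small-sample (Hedges) correction
    es_adj = correction * es                  # ES'_sm
    se = math.sqrt((n1 + n2) / (n1 * n2) + es_adj**2 / (2 * (n1 + n2)))
    return es_adj, se, (es_adj - 1.96 * se, es_adj + 1.96 * se)

# Hypothetical means/SDs for number of arrests (placeholders only):
g, se, ci = smd_with_ci(m1=0.9, s1=1.2, n1=43, m2=1.5, s2=1.6, n2=41)
print(round(g, 2), round(se, 2), [round(x, 2) for x in ci])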

25 Pooled sd for arrests.

26 ES_sm for arrests.

27 Computing the unbiased ES'_sm.

28 Computing SE_sm.

29 95% Confidence interval for ES'_sm. The 95% confidence interval for the standardized mean difference in number of arrests runs from -0.87 to -0.01 standard deviations. Given that the sd of arrests is 1.44, the juveniles in MST were arrested on average between -0.87 × 1.44 = -1.25 and -0.01 × 1.44 = -0.01 times relative to juveniles in the standard treatment, i.e., up to 1.25 fewer arrests. In arrests, the confidence interval is [-1.25, -0.01].

30 Computing standardized mean differences. The first steps in computing d effect sizes involve assessing what data are available and what is missing. You will look for: sample size and unit information; means and SDs or SEs for treatment and control groups; ANOVA tables; F or t tests in the text; or tables of counts.

31 Sample sizes. Regardless of exactly what you compute, you will need the sample sizes (to correct for bias and to compute variances). Sample sizes can vary within studies, so check initial reports of n against (1) the n for each test or outcome or (2) the df associated with each test.

32 Standardized Mean Differences. Means, standard deviations, and sample sizes are the most direct method. Without individual group sample sizes (n_1 and n_2), assume equal group n's. Standardized mean differences can also be computed from a t-statistic or from a one-way F-statistic.

33 ES_sm from t-tests.
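The conversion formula on this slide is an image; the standard conversion from an independent-groups t-statistic, which the deck presumably uses, is

  ES_{sm} = t \sqrt{ \frac{n_1 + n_2}{n_1 n_2} }

which reduces to 2t / \sqrt{N} when the two groups are of equal size.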

36 Standardized mean difference from t-test

37 Standardized mean difference from means and sds.

38 ES_sm from F-tests (one-way). Note that you have to decide the direction of the effect given the results.
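The formula itself is not preserved here; for a one-way F with two groups (where F = t²), the standard conversion is

  |ES_{sm}| = \sqrt{ F \cdot \frac{n_1 + n_2}{n_1 n_2} }

with the sign assigned according to the direction of the group difference.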

41 Standardized mean difference from the F-test. Note that we choose a negative effect size because the number of arrests is lower for the MST group than for the control group.

42 From the means and sds from before.

43 Correlational data.

44 Correlation data.
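The transformation shown on this slide is an image; meta-analyses of correlations conventionally work with Fisher's z-transform of r, which is presumably what the deck uses:

  z = \frac{1}{2} \ln\!\left( \frac{1 + r}{1 - r} \right)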

45 Standard error of the z-transform.
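The formula is not preserved in the transcript; the standard error of Fisher's z depends only on the sample size,

  SE_z = \frac{1}{\sqrt{n - 3}}

so the 95% confidence interval for z is z ± 1.96 · SE_z.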

46 Example.

47 Standard error of the z-transform.

48 95% confidence interval for z.

49 To translate back to the r-metric.
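The back-transformation is likewise an image; inverting Fisher's z gives

  r = \frac{e^{2z} - 1}{e^{2z} + 1}

(i.e., r = tanh z), applied to both endpoints of the confidence interval for z.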

50 Confidence interval in the r-metric.
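The numbers used in the deck's correlation example (slides 46-50) are not preserved in this transcript. Purely as an illustration with hypothetical values, suppose r = 0.30 and n = 103: then z = 0.5 ln(1.30/0.70) ≈ 0.310, SE_z = 1/\sqrt{100} = 0.10, the 95% CI for z is approximately [0.114, 0.506], and back-transforming each endpoint gives a 95% CI for r of approximately [0.11, 0.47].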

52 Outcomes of one study: Drummond et al. (1990)
             Success   Failure   Total
Treatment        5        14       19
Comparison       6        12       18
Total           11        26       37

53 Odds of improving, Ω_Trt.

54 Odds of improving, Ω_Trt. Estimate Ω_Trt by o_E.

55 Odds of improving, Ω_Cntl. Estimate Ω_Cntl by o_E′.

56 Odds ratio, ω.

57 Example.
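The worked numbers on this slide are an image not captured in the transcript. Reconstructing them from the Drummond et al. (1990) table above, and taking "success" as the event of interest, the observed odds are Ω̂_Trt = 5/14 ≈ 0.36 and Ω̂_Cntl = 6/12 = 0.50, so the odds ratio is ω̂ ≈ 0.36/0.50 ≈ 0.71; treatment successes were somewhat less likely than comparison successes in this study.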

58 Outcomes of one study (general notation)
Frequencies    Success   Failure
Treatment         a         b
Comparison        c         d

59 Odds ratio, o or ES_OR.
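The formula is an image in the original; in the a, b, c, d notation of the 2 × 2 table above, the odds ratio is

  ES_{OR} = \frac{a d}{b c}

i.e., the ratio of the treatment odds (a/b) to the comparison odds (c/d).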

60 Interpretation of ES_OR. ES_OR = 1: Treatment and Control equally effective. ES_OR > 1: Treatment successes more likely than Control successes. 0 < ES_OR < 1: Treatment successes less likely than Control successes.

61 ES_LOR, the log-odds ratio.
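The defining equation is not preserved here; the log-odds ratio is simply the natural log of the odds ratio,

  ES_{LOR} = \ln(ES_{OR}) = \ln\!\frac{a d}{b c}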

62 Standard error of ES_LOR.
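The standard error formula is an image; the usual large-sample standard error of the log-odds ratio, presumably what the slide shows, is

  SE_{LOR} = \sqrt{ \frac{1}{a} + \frac{1}{b} + \frac{1}{c} + \frac{1}{d} }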

63 Interpretation of ES_LOR. ES_LOR = 0: no difference between Treatment and Control. ES_LOR > 0: Treatment successes more likely than Control successes. ES_LOR < 0: Treatment successes less likely than Control successes.

65 Information for a 2 × 2 table: MST n = 92; IT (control) n = 84; 26.1% of the MST group re-arrested; 71.4% of the IT group re-arrested.

66 2 × 2 Table
        Not arrested        Re-arrested
MST     92 − 24 = 68        26.1% of 92 = 24
IT      84 − 60 = 24        71.4% of 84 = 60

67 Log-odds ratio.

68 SE of the log-odds ratio.

69 95% Confidence interval.
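The computations on slides 67-69 are images not captured in the transcript. Reconstructing them from the 2 × 2 table above, with "not arrested" coded as the success (the deck may code re-arrest as the event instead, which simply flips the sign): ES_OR = (68 × 60)/(24 × 24) ≈ 7.08; ES_LOR = ln(7.08) ≈ 1.96; SE_LOR = sqrt(1/68 + 1/24 + 1/24 + 1/60) ≈ 0.34; 95% CI ≈ 1.96 ± 1.96 × 0.34 = [1.29, 2.62] in the log-odds metric.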

71 2 × 2 Table
                               In-home placement      Out-of-home placement
MST                            90.6% of 59 = 53.45    9.4% of 59 = 5.55
Usual child welfare services   58.1% of 37 = 21.5     41.9% of 37 = 15.5

72 Log-odds ratio.
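The arithmetic on this final slide is not preserved; reconstructing it from the placement table above (cell counts derived from rounded percentages, so the result is approximate), with in-home placement as the success: ES_OR = (53.45 × 15.5)/(5.55 × 21.5) ≈ 6.94, so ES_LOR = ln(6.94) ≈ 1.94, favoring MST.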

