
1 Effect Size and Statistical Power Analysis in Behavioral and Educational Research
Effect size 1 (P. Onghena): 09.00-10.30 a.m.
Effect size 2 (W. Van den Noortgate): 10.45 a.m.-12.15 p.m.
Power 1 (I. Van Mechelen): 02.00-03.30 p.m.
Power 2 (P. Onghena): 03.45-04.30 p.m. (surnames A-N) / 04.30-05.15 p.m. (surnames O-Z)

2 SIGNIFICANCE TESTING CRISIS
Carver, R. P. (1993). The case against statistical significance testing.
Cohen, J. (1994). The earth is round (p < .05).
Falk, R., & Greenbaum, C. W. (1995). Significance tests die hard: The amazing persistence of a probabilistic misconception.
Hunter, J. E. (1997). Needed: A ban on the significance test.

3 CHILDHOOD TRAUMATA
Furious parental conflicts:
– Karl Pearson versus Ronald Fisher
– Ronald Fisher versus Jerzy Neyman (and Egon Pearson); see Box (1978), Gigerenzer et al. (1990), Oakes (1986)
Morrison, D. R., & Henkel, R. E. (Eds.). (1970). The significance test controversy: A reader.

4 POSSIBILITY FOR GROWTH
APA Task Force on Statistical Inference
1999 American Psychologist article: Wilkinson & the Task Force
2001 Publication Manual (5th ed.)
Editorial boards of flagship journals: Journal of Consulting & Clinical Psychology, Journal of Counseling and Development, Exceptional Children, Journal of Learning Disabilities, …

5 GUIDELINES
Power and sample size. Provide information on sample size and the process that led to sample size decisions. Document the effect sizes, sampling and measurement assumptions, as well as analytic procedures used in power calculations.

6 Because power computations are most meaningful when done before data are collected and examined, it is important to show how effect-size estimates have been derived from previous research and theory in order to dispel suspicions that they might have been taken from data used in the study or, even worse, constructed to justify a particular sample size. Once the study is analyzed, confidence intervals replace calculated power in describing results.
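To make the guideline concrete, here is a minimal a priori sample-size calculation in Python. The statsmodels dependency and the anticipated d = 0.5 are assumptions for illustration; in practice the effect size would be derived from previous research and theory, as the text above requires.

```python
# A priori power analysis: how many participants per group are needed to
# detect an anticipated effect of d = 0.5 with 80% power at alpha = .05?
# (Sketch; assumes statsmodels is installed. The d = 0.5 is hypothetical.)
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.5,         # anticipated Cohen's d, taken from prior literature
    alpha=0.05,              # two-sided significance level
    power=0.80,              # desired statistical power
    alternative='two-sided',
)
print(round(n_per_group))    # about 64 participants per group
```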

7 GUIDELINES
Hypothesis tests. It is hard to imagine a situation in which a dichotomous accept-reject decision is better than reporting an actual p value or, better still, a confidence interval. Never use the unfortunate expression "accept the null hypothesis." Always provide some effect-size estimate when reporting a p value.

8 GUIDELINES
Effect sizes. Always present effect sizes for primary outcomes. If the units of measurement are meaningful on a practical level (e.g., number of cigarettes smoked per day), then we usually prefer an unstandardized measure (regression coefficient or mean difference) to a standardized measure (r or d). It helps to add brief comments that place these effect sizes in a practical and theoretical context.

9 For a simple, general-purpose display of the practical meaning of an effect size, see Rosenthal and Rubin (1982). Consult Rosenthal and Rubin (1994) for information on the use of "counternull intervals" for effect sizes, as alternatives to confidence intervals.

10 GUIDELINES
Interval estimates. Interval estimates should be given for any effect sizes involving principal outcomes. Provide intervals for correlations and other coefficients of association or variation whenever possible.

11 EFFECT SIZE: IMPORTANCE
For power analysis (Cohen, 1969)
For meta-analysis (Glass, 1976)
For descriptive statistics
Test of Significance = Size of Effect × Size of Study (Rosenthal, 1991)
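For the two-group t test, Rosenthal's factorization can be written out explicitly (a standard identity, added here for concreteness): with n observations per group and d computed with the pooled sample SD,

t = d × sqrt(n / 2)

so the very same effect yields a larger test statistic, and a smaller p value, purely by increasing n.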

12 EFFECT SIZE: WHAT THE HELL…?
Cohen (1969): "By the above route, it can now readily be made clear that when the null hypothesis is false, it is false to some degree, i.e., the effect size (ES) is some specific nonzero value in the population." (p. 10)

13 EFFECT SIZE: WHAT THE HELL…?
Use of the tables for significance testing
Cohen (1969): "Accordingly, we refine our ES index, d, so that its elements are sample results, rather than population parameters, and call it d_s." (p. 64)

14 EFFECT SIZE: WHAT THE HELL…?
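The body of this slide (presumably the defining formulas) did not survive transcription. The standard definitions that the surrounding slides rely on, added here as a reconstruction:

d = (μ1 – μ2) / σ (population parameters)
d_s = (M1 – M2) / S (sample results, with S the pooled sample SD)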

15 Glass (1976): uses d_s in meta-analysis, but with only the S of the control group in the denominator.
Hedges (1981), Hedges and Olkin (1985): d_s is called g (with reference to Gene Glass) → Hedges's g
Hedges (1981), Hedges and Olkin (1985), adding to the confusion: an approximately unbiased estimator called... d!?

16 EFFECT SIZE: SUMMARY COMPARISON OF TWO MEANS
Cohen's d: population value (if you use the sample as your population, then use the sample size in the denominator)
Hedges's g: sample estimator (use the degrees of freedom in the denominator)
Hedges's unbiased estimator: rarely used outside meta-analytic contexts
Point biserial correlation coefficient (Rosenthal, 1991)
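For reference, Hedges's approximately unbiased estimator multiplies g by a small-sample correction factor; this standard form (Hedges, 1981) is added here for completeness:

g_unbiased ≈ (1 – 3 / (4 df – 1)) × g, with df = n1 + n2 – 2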

17 EFFECT SIZE: EXAMPLE

        Experimental   Control
        7              4
        7              4
        6              3
        5              2
        5              2
Sum     30             15
Mean    6              3
S (σ)   1 (0.894)      1 (0.894)

18 EFFECT SIZE: EXAMPLE
Cohen's d = (6 – 3) / 0.894 = 3.35
Hedges's g = (6 – 3) / 1 = 3
Point biserial correlation coefficient:
Score: 7 7 6 5 5 4 4 3 2 2
Group: 1 1 1 1 1 0 0 0 0 0
r = .86
All kinds of transformations possible: t ↔ d ↔ g ↔ r
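The whole example can be reproduced in a few lines of Python (a sketch using only the standard library; the variable names are ours):

```python
import math

experimental = [7, 7, 6, 5, 5]
control = [4, 4, 3, 2, 2]

n1, n2 = len(experimental), len(control)
m1, m2 = sum(experimental) / n1, sum(control) / n2   # 6.0 and 3.0
ss1 = sum((x - m1) ** 2 for x in experimental)       # 4.0
ss2 = sum((x - m2) ** 2 for x in control)            # 4.0

# Cohen's d: pooled SD with the total sample size in the denominator
d = (m1 - m2) / math.sqrt((ss1 + ss2) / (n1 + n2))   # 3.35

# Hedges's g: pooled SD with the degrees of freedom in the denominator
sd_df = math.sqrt((ss1 + ss2) / (n1 + n2 - 2))       # 1.0
g = (m1 - m2) / sd_df                                # 3.0

# Point biserial r via the t-to-r transformation r = sqrt(t^2 / (t^2 + df))
t = (m1 - m2) / (sd_df * math.sqrt(1 / n1 + 1 / n2))
df = n1 + n2 - 2
r = math.sqrt(t ** 2 / (t ** 2 + df))                # 0.86
print(round(d, 2), round(g, 2), round(r, 2))         # 3.35 3.0 0.86
```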

19 COUNTERNULL VALUE OF AN ES
Tackle the misconceptions:
– that failure to reject the null hypothesis implies ES = 0
– that finding a statistically significant p value implies an ES of important magnitude
The counternull value is the nonnull magnitude of ES that is supported by exactly the same amount of evidence as is the null value of the ES. If the counternull value were taken as H_0, then the resulting p value would be the same as the obtained p for the actual H_0.

20 COUNTERNULL VALUE OF AN ES
For symmetric reference distributions: ES_counternull = 2 × ES_obtained – ES_null
For asymmetric reference distributions:
– transform the ES so as to have a symmetric reference distribution
– calculate the counternull on the symmetric scale
– transform back to obtain the counternull on the original scale
Example of its use: Rosenthal, Rosnow, and Rubin (RRR, 2000)
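A minimal worked example, with a hypothetical obtained d of 0.30 tested against a null of 0:

ES_counternull = 2 × 0.30 – 0 = 0.60

That is, d = 0.60 is supported by exactly as much evidence as d = 0, so a nonsignificant result is no license to conclude the effect is zero.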

21 INTERPRETING EFFECT SIZES
Cohen's heuristic values:
small: d = 0.20 – the size of the difference in height between 15- and 16-year-old girls
medium: d = 0.50 – visible to the naked eye (the difference between 14- and 18-year-old girls)
large: d = 0.80 – grossly perceptible (the difference between 13- and 18-year-old girls)

22 INTERPRETING EFFECT SIZES
Comparison with other measures:
small:  d = 0.20   r = .10    r² = .01
medium: d = 0.50   r = .243   r² = .059
large:  d = 0.80   r = .371   r² = .138
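These r values follow from the standard d-to-r conversion for two equal-sized groups, r = d / sqrt(d² + 4); a quick check in Python (a sketch, standard library only):

```python
import math

def d_to_r(d: float) -> float:
    """Convert Cohen's d to r, assuming two equal-sized groups."""
    return d / math.sqrt(d ** 2 + 4)

for label, d in [("small", 0.20), ("medium", 0.50), ("large", 0.80)]:
    r = d_to_r(d)
    print(f"{label}: d = {d:.2f}, r = {r:.3f}, r² = {r * r:.3f}")
# small: r = .100, medium: r = .243, large: r = .371
```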

23 BINOMIAL EFFECT SIZE DISPLAY (r = .32)

                 Treatment outcome
Condition        Improved   Not improved   Totals
Psychotherapy    66         34             100
Control          34         66             100
Totals           100        100            200

24 BINOMIAL EFFECT SIZE DISPLAY
What is the effect of implementing a given treatment on the success rate?
Psychotherapy success rate: .50 + r/2 = .66
Control success rate: .50 – r/2 = .34
Notice: .66 – .34 = .32 = r
The percentages are "standardized" so that all margins of the table are equal.
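The construction is simple enough to express directly (a sketch; the function name is ours):

```python
def besd(r: float) -> tuple[float, float]:
    """Binomial Effect Size Display: the treatment and control 'success
    rates' implied by a correlation r, with all table margins fixed."""
    return 0.50 + r / 2, 0.50 - r / 2

treatment_rate, control_rate = besd(0.32)
print(treatment_rate, control_rate)  # 0.66 0.34; their difference equals r
```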

25 ASPIRIN'S EFFECT ON HEART ATTACK

Condition   Heart attack   No heart attack   Total
Aspirin     104            10933             11037
Placebo     189            10845             11034
Totals      293            21778             22071

26 ASPIRIN'S EFFECT ON HEART ATTACK: BESD

Condition   Heart attack   No heart attack   Total
Aspirin     48.3           51.7              100
Placebo     51.7           48.3              100
Totals      100            100               200
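The r behind this display can be recovered from the raw counts on the previous slide as the phi coefficient of the 2×2 table (a sketch; phi equals Pearson's r computed on 0/1-coded variables):

```python
import math

# Raw counts from the aspirin study (heart attack / no heart attack)
a, b = 104, 10933   # aspirin
c, d = 189, 10845   # placebo

phi = (a * d - b * c) / math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
r = abs(phi)
print(f"r = {r:.3f}")                 # r = 0.034 -- a "small" effect
print(f"BESD rows: {50 - 100 * r / 2:.1f} / {50 + 100 * r / 2:.1f}")
# BESD rows: 48.3 / 51.7
```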

27 SMALL EFFECTS MAY BE IMPRESSIVE
... and vice versa (Prentice & Miller, 1992)
Consider the amount of variation in the independent variable
Consider the importance and the assumed stability of the dependent variable

28 WHAT EFFECT SIZE HAS PRACTICAL SIGNIFICANCE?
Assess practical significance in close relation to the particular problems, populations, and measures relevant to the treatment under investigation.
Example: community mental health study (inpatient versus outpatient therapy)
Example: effects of school characteristics on reading achievement (fifth-grade versus sixth-grade pupils)

