
Published by Ronnie Stockley. Modified over 2 years ago.

2
Generalized Eta and Omega Squared Statistics: Measures of Effect Size for Some Common Research Designs

OR

A Better Way to Calculate Effect Size than the Blatantly Incorrect Way We Have All Been Mindlessly Doing It

3
Trouble in Paradise: Why Effect Sizes Sometimes Aren't So Comparable After All

Effect sizes make effects comparable across different studies by dividing a measure of the effect by a measure of the variation between scores. The problem is that research design can influence the computed variation between scores, defeating the point.

Consider Cohen's d:

Mean Difference
----------------------------------------
Standard Deviation

If your experimental design changes the standard deviation, it changes the effect size, as with:

Blocking factors
Repeated measures
Covariance adjustment
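The point can be made concrete with a minimal sketch of Cohen's d (the function name and sample data are illustrative, not from the slides): whatever standard deviation the design leaves in the denominator directly scales the resulting d.

```python
import statistics

def cohens_d(group_a, group_b):
    """Cohen's d: mean difference divided by the pooled standard deviation.

    If a design feature (blocking, repeated measures, covariance adjustment)
    shrinks the standard deviation, d grows even though the raw mean
    difference is unchanged.
    """
    mean_diff = statistics.mean(group_a) - statistics.mean(group_b)
    n_a, n_b = len(group_a), len(group_b)
    pooled_var = ((n_a - 1) * statistics.variance(group_a) +
                  (n_b - 1) * statistics.variance(group_b)) / (n_a + n_b - 2)
    return mean_diff / pooled_var ** 0.5
```

Halving the pooled standard deviation doubles d with no change in the mean difference, which is exactly the comparability problem the slides describe.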

4
Fortunately, notable minds have written copiously about how to make standardized mean differences comparable. However, we are still going HORRIBLY AWRY when it comes to eta squared and omega squared! So what's a statistician to do?

Regular eta squared:

Variation Due to the Effect (SSeffect)
------------------------------------------------
Total Variation (SStotal)

Takes the total variation into account, but depends on the number of experimental manipulations employed and their interactions.

Partial eta squared:

Variation Due to the Effect (SSeffect)
----------------------------------------------------
Variation Due to the Effect (SSeffect) + Variation Within Cells (SSs/cell)

Does not depend on the number of experimental manipulations, but changes if the research design affects the within-cell variation.
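The contrast between the two denominators can be sketched directly from ANOVA sums of squares (the SS values below are made up for illustration):

```python
def eta_squared(ss_effect, ss_total):
    """Regular eta squared: effect variation over total variation."""
    return ss_effect / ss_total

def partial_eta_squared(ss_effect, ss_within):
    """Partial eta squared: effect variation over effect + within-cell variation."""
    return ss_effect / (ss_effect + ss_within)

# Two-factor design: SS_total also absorbs the other factor's variation,
# so the two statistics diverge for the very same effect.
ss_effect, ss_other_factor, ss_within = 10.0, 50.0, 40.0
ss_total = ss_effect + ss_other_factor + ss_within  # 100.0

print(eta_squared(ss_effect, ss_total))           # 0.1
print(partial_eta_squared(ss_effect, ss_within))  # 0.2
```

Adding a second manipulation inflates SS_total (shrinking regular eta squared) while leaving partial eta squared untouched; conversely, a design that shrinks the within-cell variation inflates partial eta squared. Neither statistic is design-invariant.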

5
It's Generalized Eta/Omega Squared to the Rescue!!

Variance Due to the Effect
-------------------------------------------------------------------------------------------------------
Variance Due to the Effect (if the effect is not an individual difference) + Variance Due to Individual Differences

Leaves variance due to other experimental manipulations out of the denominator, and so does not depend on the number of experimental manipulations employed.

Why is this such a magical solution? It includes all of the variation between individuals that is not due to the experimental manipulation(s), and thus does not change depending on research design differences that alter the variation within cells.

e.g. If the researcher used ANCOVA, generalized eta squared =

Effect Variation
------------------------------------------------------
Effect Var + Covariate Var + W/in cell Var
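A minimal sketch of the generalized formula (function name, parameter names, and SS values are illustrative, not from the slides): the denominator collects all individual-difference sources, and the effect's own variation is added only when the effect is a manipulation rather than an individual difference.

```python
def generalized_eta_squared(ss_effect, ss_individual_sources,
                            effect_is_individual_difference=False):
    """Generalized eta squared: SS_effect over the sum of all
    individual-difference variance sources, plus SS_effect itself
    when the effect is a manipulation (not an individual difference).
    Variance from other experimental manipulations is deliberately
    excluded from the denominator.
    """
    denominator = sum(ss_individual_sources)
    if not effect_is_individual_difference:
        denominator += ss_effect
    return ss_effect / denominator

# The slide's ANCOVA case: the covariate and within-cell variation are
# both individual-difference sources, so they go back in the denominator.
print(generalized_eta_squared(10.0, [20.0, 70.0]))  # 10 / (10 + 20 + 70) = 0.1
```

Because the covariate variation is restored to the denominator, adding a covariate to the design no longer inflates the effect size the way it does for partial eta squared.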

6
Well, that's all good in theory, but in reality …

Generalized eta and omega squared make effect size measures (for the same effect) comparable when:

Experiments use different manipulations (other than that for the effect of interest)
Experiments differ in using between-subject or within-subject manipulations
Experiments differ in using blocking variables or covariates

Generalized eta and omega squared do not make effect size measures comparable when:

Experiments differ in controlling relevant characteristics of the experimental setting (e.g. time of day)
Experiments differ in the populations they sample (e.g. above 60 vs. 18+)

7
The key is that effect sizes try to make effect magnitudes comparable across studies by making them relative to the variation between scores, but the research design used in a given study can influence this variation as computed, defeating the point. Generalized eta and omega squared correct for the precisely correctable ways in which research designs change the computed variation, making these effect sizes more comparable. So you can know which effect size wins in a fair fight…
