Section VII Comparing means & analysis of variance
How to display means- ok in simple situations
Presenting means - ANOVA data
One can also add "error bars" to these means. In analysis of variance, these error bars are based on the sample size and the pooled standard deviation, SD_e. This SD_e is the same residual SD_e as in regression.
Don’t use bar graphs in complex situations
Use a line graph
Comparing means: two groups - t test (review)
Mean differences are "statistically significant" (different beyond chance) relative to their standard error (SE_d), a measure of mean variability ("noise").

t = (Ȳ1 - Ȳ2)/SE_d = "signal"/"noise"

where Ȳi = mean of group i and SE_d = standard error of the mean difference. t is the mean difference in SE_d units. As |t| increases, the p value gets smaller. Rule of thumb: p < 0.05 when |t| > 2, that is, when

|Ȳ1 - Ȳ2| > t_cr SE_d ≈ 2 SE_d = LSD

t_cr SE_d ≈ 2 SE_d is the critical or least significant difference (LSD). So getting the correct SE_d is crucial!! SE_d is the "yardstick" for significance.
How to compute SE_d?
SE_d depends on n, SD, and study design (for example, factorial or repeated measures).
For a single mean, if n = sample size:

SEM = SD/√n = √(SD²/n)

For a mean difference (Ȳ1 - Ȳ2), the SE of the mean difference, SE_d, is given by

SE_d = √(SD1²/n1 + SD2²/n2)   or   SE_d = √(SEM1² + SEM2²)

If the data are paired (before-after), first compute the differences (d_i = Y2i - Y1i) for each person. For paired data: SE_d = SD(d_i)/√n.
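As a sketch, the three SE formulas above can be written in Python (stdlib only; any data passed in would be the analyst's own):

```python
import math
from statistics import stdev

def sem(xs):
    """Standard error of a single mean: SD / sqrt(n)."""
    return stdev(xs) / math.sqrt(len(xs))

def se_d_unpaired(xs, ys):
    """SE of a mean difference, two independent groups:
    sqrt(SD1^2/n1 + SD2^2/n2) = sqrt(SEM1^2 + SEM2^2)."""
    return math.sqrt(sem(xs) ** 2 + sem(ys) ** 2)

def se_d_paired(before, after):
    """Paired design: compute each person's difference first,
    then SE_d = SD(d_i) / sqrt(n)."""
    d = [a - b for b, a in zip(before, after)]
    return stdev(d) / math.sqrt(len(d))
```

Note that for perfectly parallel paired data (everyone changes by the same amount), SD(d_i) = 0 and so SE_d = 0, which the unpaired formula would badly overstate.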
3 or more groups - analysis of variance (ANOVA), pooled SDs
What if we have many treatment groups, each with its own mean and SD?

Group   Mean   SD     sample size (n)
A       Ȳ1     SD1    n1
B       Ȳ2     SD2    n2
C       Ȳ3     SD3    n3
…
k       Ȳk     SDk    nk
Check variance homogeneity
The pooled SD_e

SD²pooled error = SD²_e = [(n1-1)SD1² + (n2-1)SD2² + … + (nk-1)SDk²] / [(n1-1) + (n2-1) + … + (nk-1)]

so SD_e = √(SD²_e)
In ANOVA we use the pooled SD_e to compute SE_d and to compute "post hoc" (post pooling) t statistics and p values.

SE_d = √(SD1²/n1 + SD2²/n2) = SD_e √(1/n1 + 1/n2)

SD1 and SD2 are replaced by the pooled SD_e, a "common yardstick". If n1 = n2 = n, then SE_d = SD_e √(2/n) = constant.
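A minimal sketch of the pooled-SD computation and the resulting SE_d (function names are ours, not from the slides):

```python
import math

def pooled_sd(sds, ns):
    """SD_e: pool the group SDs, weighting each group's
    variance by its degrees of freedom (n_i - 1)."""
    num = sum((n - 1) * sd ** 2 for sd, n in zip(sds, ns))
    den = sum(n - 1 for n in ns)
    return math.sqrt(num / den)

def se_d(sd_e, n1, n2):
    """SE_d using the common yardstick SD_e: SD_e * sqrt(1/n1 + 1/n2)."""
    return sd_e * math.sqrt(1 / n1 + 1 / n2)
```

If every group has the same SD, the pooled SD is just that common value, whatever the group sizes.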
Transformations
There are two requirements for the analysis of variance (ANOVA) model.
1. Within any treatment group, the mean should be the middle value; that is, the mean should be about the same as the median. When this is true, the data can usually be reasonably modeled by a Gaussian ("normal") distribution.
2. The SDs should be similar (variance homogeneity) from group to group.
One can plot mean vs median and the residual errors to check #1, and mean vs SD to check #2.
What if it's not true? Two options:
a. Find a transformed scale where it is true.
b. Don't use the usual ANOVA model (use non-constant-variance ANOVA models or nonparametric models).
Option "a" is better if possible - more power.
The most common transform is the log transformation. It usually works for:
1. Radioactive count data
2. Titration data (titers), serial dilution data
3. Cell, bacterial, viral growth, CFUs
4. Steroids & hormones (E2, Testos, …)
5. Power data (decibels, earthquakes)
6. Acidity data (pH), …
7. Cytokines, liver enzymes (bilirubin, …)
In general, the log transform works when a multiplicative phenomenon is turned into an additive one.
Compute stats on the log scale & back-transform the results to the original scale for the final report. Since log(A) - log(B) = log(A/B), differences on the log scale correspond to ratios on the original scale. Remember:

10^mean(log data) = geometric mean ≤ arithmetic mean

Monotone transformation ladder - try these:
Y², Y^1.5, Y¹, Y^0.5 = √Y, Y⁰ = log(Y), Y^-0.5 = 1/√Y, Y^-1 = 1/Y, Y^-1.5, Y^-2
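The geometric-vs-arithmetic mean point can be illustrated with made-up titer data (`statistics.geometric_mean` requires Python 3.8+):

```python
import math
from statistics import mean, geometric_mean

# made-up titer data: multiplicative, spanning several orders of magnitude
titers = [10, 100, 1000, 10000]

log_mean = mean(math.log10(t) for t in titers)  # mean on the log scale
back = 10 ** log_mean                           # back-transform

print(back)                    # the geometric mean (about 316)
print(geometric_mean(titers))  # same value from the library routine
print(mean(titers))            # 2777.5, the much larger arithmetic mean
```

Back-transforming the log-scale mean gives exactly the geometric mean, which here is an order of magnitude below the arithmetic mean.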
Multiplicity & F tests Multiple testing can create “false positives”. We incorrectly declare means are “significantly” different as an artifact of doing many tests even if none of the means are truly different. Imagine we have k=four groups: A, B, C and D. There are six possible mean comparisons: A vs B A vs C A vs D B vs C B vs D C vs D
If we use p < 0.05 as our "significance" criterion, we have a 5% chance of a "false positive" mistake on any one of the six comparisons, assuming that none of the groups are really different from each other. We have a 95% chance of no false positive on each comparison if none of the groups are really different. So the chance of a "false positive" in any of the six comparisons is 1 - (0.95)^6 = 0.26, or 26%.
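The 26% figure is easy to verify:

```python
from math import comb

k = 4                       # number of groups
m = comb(k, 2)              # number of pairwise comparisons: 6
alpha = 0.05
fwe = 1 - (1 - alpha) ** m  # chance of at least one false positive
print(m, round(fwe, 2))     # 6 0.26
```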
To guard against this we first compute the "overall" F statistic and its p value. The overall F statistic compares all the group means to the overall mean (M):

F = [Σ ni(Ȳi - M)²/(k-1)] / MS_error = between-group variance / within-group variance

where the numerator, MS_between, is [n1(Ȳ1 - M)² + n2(Ȳ2 - M)² + … + nk(Ȳk - M)²]/(k-1) and the denominator is (SD_e)², the pooled within-group variance.

If the overall p > 0.05, we stop. Only if the overall p < 0.05 will the pairwise post hoc (post overall) t tests and p values have no more than an overall 5% chance of a "false positive".
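A sketch of the overall F computed from per-group summaries (the function name `overall_F` is ours, for illustration):

```python
def overall_F(means, sds, ns):
    """Overall ANOVA F from per-group summaries: between-group
    mean square over the pooled within-group variance SD_e^2
    (denominator df = N - k)."""
    k, N = len(means), sum(ns)
    M = sum(n * m for m, n in zip(means, ns)) / N            # overall mean
    ms_between = sum(n * (m - M) ** 2
                     for m, n in zip(means, ns)) / (k - 1)
    sd_e2 = sum((n - 1) * sd ** 2
                for sd, n in zip(sds, ns)) / (N - k)         # pooled variance
    return ms_between / sd_e2

# with only two groups, the overall F equals the two-sample t squared
print(overall_F([0, 2], [1, 1], [10, 10]))  # 20.0
```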
This criterion was suggested by R.A. Fisher and is called the Fisher LSD (least significant difference) criterion. It is less conservative (has fewer false negatives) than the very conservative Bonferroni criterion. Bonferroni criterion: if making "m" comparisons, declare significance only if p < 0.05/m. This overall F is the same as the overall F test in regression for testing β1 = β2 = β3 = … = βk = 0 (all regression coefficients = 0).
Ex: Clond - time to fall off rod
One-way analysis of variance, time-to-fall data, k = 4 groups, df = k - 1
R square, Adj R square, Root Mean Square Error = SD_e, Mean of Response = 30.20, Observations (or Sum Wgts) = 51

Source     DF   Sum of Squares   Mean Square   F Ratio   Prob > F
group                                                    <.0001*
Error
C. Total
Means & SDs in sec (JMP)

No model:
Level       Number   Mean   Median   SD   SEM
KO-no TBI
KO-TBI
WT-noTBI
WT-TBI

ANOVA model, pooled SD_e (sec):
Level       Number   Mean   SEM
KO-no TBI
KO-TBI
WT-noTBI
WT-TBI

Why are the SEMs not the same??
Mean comparisons - post hoc t

Level       Mean
WT-noTBI          A
WT-TBI            B
KO-no TBI         B
KO-TBI            B

Means not connected by the same letter are significantly different.
Multiple comparisons - Tukey's q
As an alternative to Fisher LSD, for pairwise comparisons of "k" means, Tukey computed percentiles for q = (largest mean - smallest mean)/SE_d under the null hypothesis that all means are equal. If mean diff > q SE_d is the significance criterion, the type I error is ≤ α for all comparisons. q > t > Z. One looks up the critical value in the q table instead of the t table.
t vs q for α = 0.05, large n

num means = k     t     q*

* Some tables give q for SE, not SE_d, so one must multiply q by √2.
Post hoc: t vs Tukey q, k = 4

Level - Level          Mean Diff   SE diff   t   p-Value (no correction)   p-Value (Tukey)
WT-noTBI   KO-TBI                                <.0001*
WT-noTBI   KO-no TBI                             <.0001*
WT-noTBI   WT-TBI                                <.0001*
WT-TBI     KO-TBI
WT-TBI     KO-no TBI
KO-no TBI  KO-TBI
Mean comparisons - Tukey

Level       Mean
WT-noTBI          A
WT-TBI            B
KO-no TBI         B
KO-TBI            B

Means not connected by the same letter are significantly different.
One-way analysis of variance: comparing means across groups - ANOVA vs regression. Example: comparing mean birth weight by race.
ANOVA via regression
Coding categorical variables - dummy vs effect coding
Below, we create two new variables, "af_am" and "other", from the variable "Race".
Dummy coding - "white" is the referent category:

Race      af_am   other
White-1     0       0
Black-2     1       0
Other-3     0       1
Dummy (0,1) coded variables are usually correlated with each other even in balanced designs - not orthogonal. However, they are easier to interpret.
Effect coding, "white" is the referent category:

Race      af_am   other
White-1    -1      -1
Black-2     1       0
Other-3     0       1
In balanced designs, effect coded (-1, 0, 1) variables have zero correlation = they are orthogonal. In balanced designs, effect coded variables have sum and mean zero and cross products of zero. Under effect coding, cell means correspond to X i = -1 or 1 and marginal means correspond to X i =0.
ANOVA VIA REGRESSION (dummy variables)
Birth weight - overall analysis of variance table

Source   DF   Sum of Squares   Mean Square   F Value   p value
Model
Error
Total

Root MSE = SD_e (gm), R-Square, Dependent Mean
ANOVA via regression - dummy coding

Variable    df   regr coef   SE   t   p value
Intercept                             <.0001
af_am
other

Birth wt = 3104 - 384 af_am - 300 other + error

With dummy coding, the regression coefficients are the mean differences from the referent group (white in this example).
ANOVA via regression (cont.) - effect coding for race
Overall analysis of variance table

Source   DF   Sum of Squares   Mean Square   F Value   p value
Model
Error
Total

Root MSE, R-Square, Dependent Mean
ANOVA via regression - effect coding

Variable    df   regr coef   SE   t   p value
Intercept                             <.0001
af_am
other

Birth wt = 2875 - 156 af_am - 72 other + error

With effect coding, the 2875 is the mean of the race means, the unweighted overall mean. The regression coefficients are the deviations from this overall mean for each factor.
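The effect-coding intercept and coefficients can be recovered from the dummy-coding results on the earlier slide (the small discrepancy with the slide's 2875 is rounding in the regression output):

```python
# group means implied by the dummy-coded fit on the earlier slide:
# white = 3104 (referent), black = 3104 - 384, other = 3104 - 300
means = {"white": 3104, "black": 2720, "other": 2804}

grand = sum(means.values()) / len(means)  # unweighted mean of the race means
print(grand)                        # 2876.0 (slide reports 2875; rounding)
print(means["black"] - grand)       # af_am effect coefficient: -156.0
print(means["other"] - grand)       # other effect coefficient: -72.0
```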
Mean brain weights (gm) in males and females with and without dementia
A balanced* 2 x 2 (ANOVA) design, n_c = 7 obs per cell, n = 7 x 4 = 28 obs total

Cell means:
Dementia    Males (1)   Females (-1)   Total
Yes (1)
No (-1)
Total
Terminology - cell means, marginal means

              Males    Females   Overall
Dementia      Cell     Cell      Margin
No dementia   Cell     Cell      Margin
Overall       Margin   Margin
Difference in marginal sex means (Male - Female) = 116.5; 116.5/2 = 58.25
Difference in marginal dementia means (Yes - No) = -15.2; -15.2/2 = -7.6
Difference in cell mean differences:
(Male - Female, with dementia) - (Male - Female, without dementia) = 5.86
note: 5.86/(2 x 2) = 1.46
* balanced = same sample size (n_c) in every cell
Brain weight via ANOVA - effect coding (-1, 1)
MODEL: brain wt = sex dementia sex*dementia

Class      Levels   Values
sex           2     -1, 1
dementia      2     -1, 1
observations 28

Source     DF   Sum of Squares   Mean Square   F Value   Pr > F
Model                                                    <.0001
Error                            = SD²_e
C. Total

R-Square, Coeff Var, Root MSE = SD_e, mean brain wt

Source          DF   Type III SS   Mean Square   F Value   Pr > F (p value)
sex                                                        <.0001
dementia                                                   <.0001
sex*dementia
Brain weight via regression - effect coding

Source            DF   Sum of Squares   Mean Square   F Value   Pr > F
Model                                                           <.0001
Error
Corrected Total

R-Square, Root MSE = 8.453, Mean

Variable    DF   Parameter Estimate   Standard Error   t Value   Pr > |t|
Intercept                                                        <.0001
sex                                                              <.0001
dementia                                                         <.0001
sexdem

Brain wt = intercept + 58.25 sex - 7.6 dementia + 1.46 sex*dementia + error
Balanced designs and effect coding

Type of person        dementia   gender   dementia*gender
no dementia - Female     -1        -1            1
dementia - Female         1        -1           -1
no dementia - Male       -1         1           -1
dementia - Male           1         1            1

Effect coding used with balanced data creates orthogonality.
Correlations among X1 = dementia, X2 = gender, X3 = dementia*gender:

                   Dementia   Gender   Dementia*gender
Dementia              1          0            0
Gender                0          1            0
Dementia*gender       0          0            1
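The orthogonality claim can be verified directly for the balanced 2 x 2 design (7 observations per cell):

```python
# effect-coded columns for the balanced 2 x 2 design, 7 per cell (n = 28)
cells = [(-1, -1), (-1, 1), (1, -1), (1, 1)]          # (dementia, gender)
rows = [cell for cell in cells for _ in range(7)]

dementia = [d for d, g in rows]
gender   = [g for d, g in rows]
product  = [d * g for d, g in rows]                   # interaction column

# every column sums (and so averages) to zero ...
print(sum(dementia), sum(gender), sum(product))       # 0 0 0

# ... and every pairwise cross product is zero: orthogonal
print(sum(d * g for d, g in zip(dementia, gender)),
      sum(d * p for d, p in zip(dementia, product)),
      sum(g * p for g, p in zip(gender, product)))    # 0 0 0
```

With unequal cell sizes the same sums are no longer zero, which is exactly the non-orthogonality the later unbalanced slide illustrates.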
Relation between sum of squares (SS) and regression coefficients: SS = n b²

Factor             regr coefficient (b)   n b² = Sum of Squares (n = 28)
Dementia               -7.607             28 x (7.607)² = 1620.3
Gender                 58.25              28 x (58.25)² = 95005.8
Dementia*Gender         1.46              28 x (1.46)² = 59.7

The SS are functions of the squared regression coefficient & n. Dementia, Gender and the Dementia x Gender interaction are orthogonal. The statistical significance of each factor does not depend on whether the other factors are in the model. This makes evaluating each factor easy. Orthogonality holds if:
1. Effect coding is used in the regression
2. The design is balanced
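The SS = n b² relation can be checked numerically with the coefficients above (the interaction coefficient 1.46 is derived as 5.86/4 from the earlier slide):

```python
# coefficients read from the slides: dementia -7.607, gender 58.25;
# the interaction coefficient 1.46 = 5.86 / 4
n = 28
coefs = {"dementia": -7.607, "gender": 58.25, "dementia*gender": 1.46}

ss = {name: n * b ** 2 for name, b in coefs.items()}
for name, value in ss.items():
    print(name, round(value, 1))
```

The sign of the coefficient drops out when squaring, so only its magnitude matters for the SS.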
ANOVA tables as a compact regression In general, if factor A has “a” levels (and “a” means), in a regression it must be represented by a-1 dummy or effect coded variables with a-1 corresponding regression coefficients. In the ANOVA table for factor A, the sum of squares for A (SSa), is made out of the sum of squares of the a-1 regression coefficients. DF=a-1.
Ex: a = 4, a - 1 = 3, three dummy variables:
SSa = constant x (b1² + b2² + b3²)
So, if factor A is NOT significant in the ANOVA table, we can conclude that β1 = β2 = … = β(a-1) = 0 without looking at each one individually, a major simplification.
If factor B has "b" levels, there are a x b possible combinations (cells) and (a-1) + (b-1) + (a-1)(b-1) = ab - 1 dummy (or effect coded) variables / regression coefficients for A, B and the A x B interaction respectively. The squared effects of A, B and A x B are represented in a "condensed" form in the ANOVA table.
ANOVA table – summarizes ab-1 effects in three lines Factor df Sum Squares (SS) Mean square=SS/df A a-1 SSa SSa/(a-1) B b-1 SSb SSb/(b-1) AB (a-1)(b-1) SSab SSab/(a-1)(b-1)
When is the ANOVA table useful?
Dependent variable: depression score

Source            DF   SS   Mean Square   F Value   overall p value
Model                                               <.0001
Error
Corrected Total

Root MSE = 1.962, R² = 0.687

Source                  DF   SS   Mean Square   F Value   p value
gender                                                    <.0001
race                                                      <.0001
educ                                                      <.0001
occ                                                       <.0001
gender*race
gender*educ
gender*occ
race*educ
race*occ
educ*occ
gender*race*educ
gender*race*occ
gender*educ*occ
race*educ*occ
gender*race*educ*occ
[Figure: 8 graphs of 200 depression means - Y = depression score, X = occupation, separate lines by education; one graph for each gender (Males, Females) x race (W, B, H, A) combination.]
[Figure: one of the 8 graphs. Note the parallelism, implying no interaction.]
Depression - final model

Source            DF   Sum of Squares   Mean Square   F   overall p
Model                                                    <.0001
Error
Corrected Total

R-Square, Coeff Var, Root MSE = SD_e, y Mean

Source   DF   SS   Mean Square   F Value   p value
gender                                     <.0001
race                                       <.0001
educ                                       <.0001
occ                                        <.0001

The analysis shows that the factors are additive (no significant interactions).
Example 2: ANOVA as a compact regression
Example: Y = log pertussis antibody titer
What if the potential predictive factors are:
Blood type: A-, A+, B-, B+, AB-, AB+, O-, O+ (8 levels)
Center: LA, SF, Chicago, NY, Houston, Seattle (6 levels)
Vaccine: placebo, IgA, IgG (3 levels)
How many β parameters are summarized?

Factor                  df (= number of βs)   SS   MS = SS/df   F   p value
(Intercept               1)                   --
Bloodtype                7
Center                   5
Vaccine                  2
Bloodtype * Center       7 x 5 = 35
Bloodtype * Vaccine      7 x 2 = 14
Center * Vaccine         5 x 2 = 10
BT * Center * Vaccine    7 x 5 x 2 = 70
Total model            144
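The β counting above can be sketched in Python (factor names and level counts are from the slide):

```python
from math import prod

levels = {"bloodtype": 8, "center": 6, "vaccine": 3}

main = {f: a - 1 for f, a in levels.items()}          # 7, 5, 2
pairs = {(f, g): main[f] * main[g]
         for f in levels for g in levels if f < g}    # 35, 14, 10
threeway = prod(main.values())                        # 7 x 5 x 2 = 70

n_betas = sum(main.values()) + sum(pairs.values()) + threeway
print(n_betas)        # 143 coefficients, plus 1 intercept = 144
```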
If one of the "condensed" factors above is NOT significant, the entire set of βs for that factor can be removed from the model. The "sum of squares" ANOVA table is a condensed regression table that is useful for screening, particularly for screening interactions. It allows one to test "chunks" of the model. If we also have balance, then all of the parts above are orthogonal, so the assessment of one factor or interaction is not affected by whether another factor or interaction is significant. This is an ideal analysis situation.
If all of the interaction terms are NOT significant, then one has shown that the influence of all the factors on the outcome is additive. If all the interaction terms involving factor "B" are not significant, then the impact of factor B on Y is additive.
Balanced versus unbalanced ANOVA
Below, "n_c" denotes the sample size in each cell. This design is unbalanced since n is not the same in each cell.

Cell and marginal mean amygdala volumes in cc:

                       Male            Female          adj. marg. mean   obs. marg. mean
Dementia               0.5 (n_c=10)    0.5 (n_c=90)    0.5               0.5 (n=100)
No dementia            1.5 (n_c=190)   1.5 (n_c=10)    1.5               1.5 (n=200)
Adjusted marg. means   1.0             1.0
Observed marg. means   1.45 (n=200)    0.6 (n=100)                       n=300

(10 x 0.5 + 190 x 1.5)/200 = 1.45,   (90 x 0.5 + 10 x 1.5)/100 = 0.60
Gender & dementia are NOT orthogonal.
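The observed vs adjusted marginal means can be reproduced directly from the cell means and cell sizes:

```python
# cell means and cell sizes from the unbalanced amygdala example
cell_mean = {("dem", "M"): 0.5, ("dem", "F"): 0.5,
             ("no",  "M"): 1.5, ("no",  "F"): 1.5}
cell_n    = {("dem", "M"): 10,  ("dem", "F"): 90,
             ("no",  "M"): 190, ("no",  "F"): 10}

def observed_marginal(sex):
    """n-weighted marginal mean: follows the unbalanced cell sizes."""
    cells = [("dem", sex), ("no", sex)]
    return (sum(cell_n[c] * cell_mean[c] for c in cells)
            / sum(cell_n[c] for c in cells))

def adjusted_marginal(sex):
    """Unweighted mean of cell means: what a balanced design would give."""
    return (cell_mean[("dem", sex)] + cell_mean[("no", sex)]) / 2

print(observed_marginal("M"), observed_marginal("F"))  # 1.45 0.6
print(adjusted_marginal("M"), adjusted_marginal("F"))  # 1.0 1.0
```

The sexes look different in the observed margins only because males happen to fall mostly in the no-dementia cells; the adjusted margins show no sex effect at all.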
Repeated measure ANOVA – paired t test example
Motivation - jaw growth in children (cm), Potthoff & Roy

child   8 yrs   10 yrs   difference
        8 yrs   10 yrs  | difference
Mean                    | 1.0
SD                      | 1.2
SE                      | 0.36 = SE_d

          unpaired   paired
t
df
p value

Paired & unpaired t tests give different p values even though they use the same means. The correlation is r = 0.83, not zero.

SE_d = √(SE1² + SE2² - 2 SE1 SE2 r)
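The SE_d formula with correlation can be sketched as follows; the two SEs below are hypothetical, with r = 0.83 taken from the slide:

```python
import math

def se_d_corr(se1, se2, r):
    """SE of a mean difference when the two measurements are
    correlated: r = 0 reduces to the unpaired formula, and
    r > 0 shrinks SE_d."""
    return math.sqrt(se1 ** 2 + se2 ** 2 - 2 * se1 * se2 * r)

# hypothetical per-year SEs; r = 0.83 as reported on the slide
se1, se2, r = 0.6, 0.7, 0.83
print(se_d_corr(se1, se2, 0.0))  # unpaired-style SE_d
print(se_d_corr(se1, se2, r))    # much smaller paired SE_d
```

This is why ignoring the pairing wastes power: the positive within-child correlation removes between-child variation from the difference.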
Repeated measures - must add a subject effect

Factor    df            SS      MS      F     p value
A         a-1           SSa     MSa     Fa    p value for A
B         b-1           SSb     MSb     Fb    p value for B
AB        (a-1)(b-1)    SSab    MSab    Fab   p value for AB
Subject   n-1           SSsub   MSsub
Error     (n-1)(T-1)    SSe     MSe

Otherwise the "Error" SS is too big, since it combines within-subject error and between-subject variation.
Factorial vs repeated measures ANOVA

Model               Residual SD²_e   SD_e
Factorial
Repeated measures

The SD_e is too large if the subject effect is not taken into account. If SD_e is too large, the SEs are too large & the p values are too large.
p values for comparing means to zero

Type 3 Tests of Fixed Effects (ANOVA table)
Effect   Num DF   Den DF   F Value   Pr > F
year

Least Squares Means
Effect   year   Estimate   Standard Error   DF   t Value   Pr > |t|
year                                                       <.0001
year                                                       <.0001

Differences of Least Squares Means
Effect   year   year   Estimate   Standard Error   DF   t Value   Pr > |t|
year
Repeated measures ANOVA - correct p value for comparing means

(Co)variance Parameter Estimates
Cov Parm    Estimate
id                     = SD²_p - between-person variation (controlled)
Residual               = SD²_e - within-person variation (note: differs from the residual in the incorrect analysis)

Type 3 Tests of Fixed Effects
Effect   Num DF   Den DF   F Value   Pr > F
year

Least Squares Means
Effect   year   Estimate   Standard Error   DF   t Value   Pr > |t|
year                                                       <.0001
year                                                       <.0001

Differences of Least Squares Means
Effect   year   year   Estimate   Standard Error   DF   t Value   Pr > |t|
year