What Is ANOVA?
Analysis of variance (ANOVA) is a hypothesis-testing procedure used to evaluate mean differences between two or more treatments (or populations). How does it differ from t tests? ANOVA allows us to compare more than two groups at once.
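As a quick illustration, a one-way ANOVA on three groups can be run in a couple of lines (the scores here are made up, and scipy is assumed to be available):

```python
# One-way ANOVA on three hypothetical treatment groups using scipy.
from scipy import stats

group1 = [3, 5, 4, 6, 2]
group2 = [8, 7, 9, 6, 10]
group3 = [5, 6, 7, 4, 8]

F, p = stats.f_oneway(group1, group2, group3)
print(F, p)  # F = 8.0 for these scores; p is well below .05
```

A single test covers all three groups, which is exactly what a t test cannot do.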
Some Terminology
In analysis of variance, the variable (independent or quasi-independent) that designates the groups being compared is called a factor. The individual conditions or values that make up a factor are called the levels of the factor.
Looking at Error
The F statistic is calculated by comparing the variability between groups (theoretically due to the treatment effect) to the variability within groups (theoretically due to chance, or error). We look to see whether differences due to the treatment effect are proportionally greater than differences due to chance alone.
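The ratio can be computed by hand; here is a sketch with hypothetical scores (numpy assumed):

```python
import numpy as np

# Hypothetical scores for three treatment conditions.
groups = [np.array([3, 5, 4, 6, 2]),
          np.array([8, 7, 9, 6, 10]),
          np.array([5, 6, 7, 4, 8])]
all_scores = np.concatenate(groups)
grand_mean = all_scores.mean()

# Between-groups variability: how far each group mean sits from the grand mean.
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
# Within-groups variability: spread of scores around their own group mean.
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)

k, N = len(groups), len(all_scores)
ms_between = ss_between / (k - 1)   # df_between = k - 1
ms_within = ss_within / (N - k)     # df_within = N - k
F = ms_between / ms_within
print(F)  # 8.0 for these scores
```

When the treatment has no effect, both mean squares estimate the same chance variability and F should be near 1; a large F suggests the between-groups differences exceed chance.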
Effect Size
For ANOVA we use eta squared (η²), calculated as η² = SSbetween / SStotal.
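The computation is a single ratio; a minimal example with hypothetical sums of squares from an ANOVA summary table:

```python
# Hypothetical sums of squares from an ANOVA summary table.
ss_between = 40.0
ss_within = 30.0
ss_total = ss_between + ss_within

eta_squared = ss_between / ss_total  # proportion of variance due to treatment
print(round(eta_squared, 3))         # 0.571
```

Here about 57% of the total variability in the scores is accounted for by the treatment conditions.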
Post Hoc Tests
These are additional hypothesis tests that are done after an ANOVA to determine exactly which mean differences are significant and which are not. If you reject the null hypothesis and there are three or more treatments, you may wish to explore which pairs of groups contain the mean differences.
Accumulation of Type I Error
Experimentwise alpha is the overall probability of a Type I error that can accumulate over a series of separate hypothesis tests. Typically, the experimentwise alpha is substantially greater than the value of alpha used for any one of the individual tests.
Planned Comparisons
Planned comparisons refer to specific mean differences that are relevant to specific hypotheses the researcher had in mind before the study was conducted. For planned comparisons, we generally don't worry as much about accumulation of Type I error. Instead, we use a smaller alpha level to test these hypotheses, often dividing alpha by the number of planned comparisons.
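Dividing alpha across the planned comparisons (a Bonferroni-style adjustment) is simple arithmetic; the number of comparisons here is hypothetical:

```python
# Bonferroni-style split of alpha across planned comparisons.
alpha = 0.05
n_planned = 3                     # hypothetical number of planned comparisons
alpha_per_test = alpha / n_planned
print(round(alpha_per_test, 4))   # 0.0167
```

Each individual comparison is then tested at the smaller alpha, keeping the overall error rate near the original .05.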
Unplanned Comparisons
Unplanned comparisons involve sifting through the data to find significant results. When doing this, you have to worry about accumulation of Type I error in your results. Two commonly used procedures that protect against this accumulation are Tukey's HSD and the Scheffé test.
Tukey's HSD
Tukey's Honestly Significant Difference (HSD) is used to compare two treatment conditions. If the mean difference between those treatment conditions exceeds Tukey's HSD, you conclude that there is a significant difference between the treatments.
HSD = q√(MSwithin / n)
where the value of q is found in Table B.5 (p. 708). Tukey's test requires that n be the same for all treatments.
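A sketch of the computation (the q value below is an assumed studentized-range value for k = 3 treatments with df = 12 at alpha = .05; check Table B.5 for the actual lookup, and MSwithin and n are hypothetical):

```python
import math

q = 3.77          # assumed studentized-range value (k = 3, df = 12, alpha = .05)
ms_within = 2.5   # hypothetical MS within from the ANOVA
n = 5             # scores per treatment (must be equal across treatments)

hsd = q * math.sqrt(ms_within / n)
# Any pair of treatment means differing by more than hsd is significantly different.
print(round(hsd, 2))
```

With these values, two treatment means that differ by, say, 4 points would exceed the HSD and be declared significantly different.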
The Scheffé Test
The Scheffé test is very conservative; therefore, if you use Scheffé and find a significant difference, you can feel safe that you have found a true difference. For Scheffé's test, you calculate a separate between-groups MS for the two groups you are comparing, and then compare this new MSbetween to the experimentwise MSwithin.
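A sketch of the arithmetic with hypothetical values: the between-groups SS is recomputed from just the two groups of interest, but the df and MSwithin carry over from the full ANOVA, which is what makes the test conservative.

```python
# Scheffé test sketch for comparing two of k = 3 treatments (hypothetical values).
n = 5                        # scores per group
mean1, mean2 = 4.0, 8.0      # means of the two groups being compared
pair_grand_mean = (mean1 + mean2) / 2

# SS between uses only the two groups of interest...
ss_between = n * (mean1 - pair_grand_mean) ** 2 + n * (mean2 - pair_grand_mean) ** 2
# ...but df_between = k - 1 and MS_within come from the full ANOVA.
k = 3
ms_between = ss_between / (k - 1)
ms_within = 2.5              # hypothetical MS within from the full ANOVA

F = ms_between / ms_within   # compare to the critical F with df = (k - 1, N - k)
print(F)
```

Using the full-experiment df shrinks MSbetween relative to an ordinary pairwise test, so only sizable differences reach significance.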
F Compared to t
For a two-sample test, either an ANOVA or a t test can be used. In these situations you will get F = t².
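This relationship is easy to check directly (made-up two-sample data; scipy assumed):

```python
from scipy import stats

a = [3, 5, 4, 6, 2]
b = [8, 7, 9, 6, 10]

t, p_t = stats.ttest_ind(a, b)   # independent-samples t test
F, p_F = stats.f_oneway(a, b)    # one-way ANOVA on the same two samples

print(F, t ** 2)  # F equals t squared; the two p-values match as well
```

Because the tests are equivalent for two groups, the choice between them is a matter of convenience.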
Assumptions
- The observations within each sample must be independent.
- The populations must be normal.
- The populations must have equal variances (homogeneity of variance).
The last assumption is important to test, and we do so with Hartley's F-max test for homogeneity of variance (table on p. 704):
1. Compute the sample variance for each sample.
2. Divide the largest variance by the smallest.
3. Compare the result to the critical value for k (the number of samples) and n (the sample size for each sample, assuming equal n's).
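The F-max steps above can be sketched as follows (hypothetical samples; numpy assumed):

```python
import numpy as np

# Hypothetical samples with equal n's.
samples = [np.array([3, 5, 4, 6, 2]),
           np.array([8, 7, 9, 6, 10]),
           np.array([2, 6, 7, 3, 9])]

# Step 1: sample variance for each sample (ddof=1 gives the unbiased estimate).
variances = [s.var(ddof=1) for s in samples]

# Step 2: divide the largest variance by the smallest.
f_max = max(variances) / min(variances)

k = len(samples)        # number of samples
n = len(samples[0])     # sample size per sample (equal n's assumed)
# Step 3: compare f_max to the critical value from the F-max table for k and df = n - 1.
print(round(f_max, 2))
```

An F-max near 1 indicates similar variances; only if it exceeds the table's critical value do we conclude the homogeneity assumption is violated.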