Published by Jeremiah Stevenson

1
Intro to ANOVA

2
What Is ANOVA? ANOVA stands for Analysis of Variance. It is a hypothesis-testing procedure used to evaluate mean differences between two or more treatments (or populations). What is the difference between this and t tests? ANOVA allows us to look at more than two groups at once.

3
Some Terminology In analysis of variance, the variable (independent or quasi-independent) that designates the groups being compared is called a factor. The individual conditions or values that make up a factor are called the levels of the factor.

6
Formulae
SSw = Σ_j Σ_i (X_ij − X̄_.j)²
SSb = Σ_j n_j (X̄_.j − X̄_..)²
SStot = Σ_j Σ_i (X_ij − X̄_..)²
MSw = SSw / dfw
MSb = SSb / dfb
F = MSb / MSw
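The formulas above can be sketched directly in code. This is a minimal illustration using only the Python standard library; the three groups of scores are made-up example data, not from any real study.

```python
from statistics import mean

# Made-up scores for three treatment groups (illustration only).
groups = [
    [4, 5, 6, 5],   # treatment 1
    [7, 8, 9, 8],   # treatment 2
    [3, 4, 2, 3],   # treatment 3
]

grand = mean(x for g in groups for x in g)    # grand mean, X-bar..
group_means = [mean(g) for g in groups]       # treatment means, X-bar.j

# Sums of squares, following the slide's formulas.
ss_within = sum((x - m) ** 2 for g, m in zip(groups, group_means) for x in g)
ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, group_means))
ss_total = sum((x - grand) ** 2 for g in groups for x in g)

k = len(groups)                   # number of treatments (levels)
N = sum(len(g) for g in groups)   # total number of observations
df_between, df_within = k - 1, N - k

ms_between = ss_between / df_between
ms_within = ss_within / df_within
F = ms_between / ms_within

print(round(ss_between, 3), round(ss_within, 3), round(ss_total, 3), round(F, 3))
```

Note that SStot = SSb + SSw holds exactly, which is a useful arithmetic check when computing by hand.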

7
Looking At Error The F statistic is calculated by comparing the variability between groups (theoretically due to the treatment effect) to the variability within groups (theoretically due to chance, or error). We look to see whether differences due to our treatment effect are proportionally greater than differences due to chance alone.

8
Reporting ANOVA

Source     SS    df    MS    F
Between                      F =
Within
Total

9
Examples From Excel

10
Reading The Table

11
Effect Size For ANOVA we use eta squared (η²), calculated as η² = SSb / SStot.
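As a quick sketch of the calculation, with illustrative SS values (not from any real data set):

```python
# Eta squared: proportion of total variability accounted for by the
# treatment effect. SS values below are made up for illustration.
ss_between = 50.0
ss_total = 56.0
eta_squared = ss_between / ss_total
print(round(eta_squared, 3))
```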

12
Post Hoc Tests These are additional hypothesis tests that are done after an ANOVA to determine exactly which mean differences are significant and which are not. If you reject the null, and there are three or more treatments, you may wish to explore which groups contain the mean differences.

13
Accumulation of Type I Error Experimentwise alpha: this is the overall probability of a Type I error that can accumulate over a series of separate hypothesis tests. Typically, the experimentwise alpha is substantially greater than the alpha used for any one of the individual tests.

14
Planned Comparisons Planned comparisons refer to specific mean differences that are relevant to specific hypotheses the researcher had in mind before the study was conducted. For planned comparisons, we generally don't worry as much about accumulation of Type I error; instead, we use a smaller alpha level to test these hypotheses, often dividing alpha by the number of planned comparisons.
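The adjustment described above (a Bonferroni-style correction) is just a division; the numbers here are illustrative:

```python
# Divide the overall alpha by the number of planned comparisons to get
# the alpha level used for each individual comparison.
alpha = 0.05
n_comparisons = 4
per_comparison_alpha = alpha / n_comparisons
print(per_comparison_alpha)
```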

15
Unplanned Comparisons Unplanned comparisons involve sifting through the data to find significant results. When doing this you have to worry about accumulation of Type I error in your results. Two commonly used procedures to protect against this accumulation are Tukey's HSD and the Scheffé test.

16
Tukey's HSD Tukey's Honestly Significant Difference (HSD) is used to compare two treatment conditions. If the mean difference between those treatment conditions exceeds Tukey's HSD, you conclude that there is a significant difference between the treatments.
HSD = q √(MSw / n)
where the value of q is found in Table B.5 (p. 708). Tukey's test requires that n be the same for all treatments.
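A minimal sketch of the HSD calculation. The studentized-range value q must come from a table (the slides cite Table B.5); the q, MSw, and n values below are placeholder assumptions, not looked-up values.

```python
import math

def tukey_hsd(q, ms_within, n):
    """HSD = q * sqrt(MS_within / n). Requires equal n across treatments."""
    return q * math.sqrt(ms_within / n)

# Placeholder inputs for illustration; q is NOT a real table value.
hsd = tukey_hsd(q=3.95, ms_within=2.0, n=8)
# Any pair of treatment means differing by more than `hsd` is declared
# significantly different.
print(round(hsd, 3))
```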

17
The Scheffé Test Scheffé is very conservative; therefore, if you use Scheffé and find a significant difference, you can feel safe that you have found a true difference. For Scheffé's test, you calculate a separate between-groups MSb for the two groups you are comparing, and then compare this new MSb to the experimentwise MSw.

18
F as Compared to t For a two-sample test, either an ANOVA or a t test can be used. In these situations you would get F = t².
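This identity can be checked numerically. The sketch below computes a pooled-variance independent-samples t and the corresponding one-way ANOVA F for two made-up groups and confirms F = t²:

```python
from statistics import mean

# Two made-up groups (illustration only).
a = [4, 5, 6, 5]
b = [7, 8, 9, 8]

ma, mb = mean(a), mean(b)
na, nb = len(a), len(b)

# Pooled-variance independent-samples t.
ss_a = sum((x - ma) ** 2 for x in a)
ss_b = sum((x - mb) ** 2 for x in b)
pooled_var = (ss_a + ss_b) / (na + nb - 2)
t = (ma - mb) / (pooled_var * (1 / na + 1 / nb)) ** 0.5

# One-way ANOVA F for the same two groups.
grand = mean(a + b)
ss_between = na * (ma - grand) ** 2 + nb * (mb - grand) ** 2
ms_within = (ss_a + ss_b) / (na + nb - 2)
F = ss_between / ms_within   # df_between = 1, so MS_between = SS_between

print(round(F, 6), round(t ** 2, 6))  # the two values match
```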

19
Assumptions
The observations within each sample must be independent.
The populations must be normal.
The populations must have equal variances (homogeneity of variance). This is important to test, and we do so with Hartley's F-max test for homogeneity of variance (table on p. 704):
Compute the sample variance for each sample.
Divide the largest by the smallest.
k is the number of samples; n is the sample size for each sample (assuming equal ns).
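The F-max computation itself is simple; the sketch below uses made-up samples. The resulting ratio would then be compared against the critical value from the F-max table (p. 704).

```python
from statistics import variance

# Hartley's F-max: ratio of the largest to the smallest sample variance.
# Samples below are made up for illustration; equal n assumed.
samples = [
    [2, 4, 6, 4],
    [10, 12, 15, 11],
    [1, 2, 1, 2],
]
variances = [variance(s) for s in samples]  # sample (n - 1) variances
f_max = max(variances) / min(variances)
print([round(v, 3) for v in variances], round(f_max, 2))
```

A value near 1 suggests the variances are similar; a large F-max (exceeding the tabled critical value for k samples and n − 1 df) signals a violation of homogeneity of variance.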
