
1
Analysis of Variance (ANOVA) ANOVA methods are widely used for comparing 2 or more population means from populations that are approximately normal in distribution. ANOVA data can be graphically displayed with dot plots for small data sets and box plots for medium to large data sets. Look at graphical examples.

2
One-Way Analysis of Variance The simplest case of ANOVA is when a single factor determines the populations being compared. The Completely Randomized Design (CRD) is the simplest of experimental designs, and it commonly lends itself to ANOVA techniques. A CRD involves either randomly sampling from each of I populations, or having the observations within a single population randomly assigned one of I treatments.

3
ANOVA Appropriate Hypothesis: H0: μ_1 = μ_2 = … = μ_I versus Ha: at least two of the μ_i differ.

4
ANOVA It is a single test which simultaneously tests for any difference among the I population means, thus it controls the experimentwise error rate at α. Multiple comparison procedures can be used to explore the specific differences in the means if the initial ANOVA shows that differences do exist. This approach is much better than performing all pairwise comparisons.

5
Bonferroni Correction Performing all pairwise comparisons among I treatments will result in an experimentwise error rate of 1 − (1 − α)^m, where m = I(I − 1)/2 is the number of pairwise tests. The Bonferroni correction, to keep the experimentwise error rate at 0.05, is to use 0.05 divided by the number of tests performed as the significance level for each individual test.
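As a quick sketch of the calculation above, the following computes the experimentwise error rate for all pairwise comparisons and the Bonferroni-corrected per-test level. The values I = 5 and α = 0.05 are hypothetical, chosen only for illustration.

```python
# Experimentwise error rate when all pairwise comparisons among I
# treatments are each run at per-test significance level alpha.
# I = 5 and alpha = 0.05 are hypothetical illustration values.
from math import comb

I = 5
alpha = 0.05
m = comb(I, 2)                          # number of pairwise tests: I(I-1)/2
experimentwise = 1 - (1 - alpha) ** m   # P(at least one false rejection)
alpha_bonf = alpha / m                  # Bonferroni per-test level

print(f"{m} tests, experimentwise error rate = {experimentwise:.3f}")
print(f"Bonferroni per-test level = {alpha_bonf:.4f}")
```

With 5 treatments there are 10 pairwise tests, so the uncorrected experimentwise error rate is already about 0.40 rather than 0.05.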

6
Multiple Comparisons Multiple comparison procedures can be used to explore the specific differences in the means. Can be a stand-alone procedure or done in conjunction with a statistically significant ANOVA test. Better approach than performing all pairwise comparisons. Common methods: Fisher, Duncan, Tukey, SNK, Dunnett.

7
ANOVA Assumptions Independent random samples. Normally distributed populations with means μ_i and common variance σ². ANOVA is robust to these assumptions. Proper sampling takes care of the first assumption. Normality can be checked with residual analysis, either an appropriate plot or a normality test. Common variance can be checked with Bartlett's test.

8
ANOVA Notation X_ij – the jth measurement from the ith population.

9
ANOVA (balanced) Mean square for treatments (between): MSTr = SSTr / (I − 1), where SSTr = J Σ_i (X̄_i − X̄)².

10
ANOVA (balanced) Mean square for error (within): MSE = SSE / (I(J − 1)), where SSE = Σ_i Σ_j (X_ij − X̄_i)².

11
ANOVA (balanced) Test statistic for one-factor ANOVA is F = MSTr / MSE, which has an F distribution with numerator degrees of freedom I − 1 and denominator degrees of freedom I(J − 1) = n − I.
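The balanced-design formulas above can be computed directly and checked against scipy.stats.f_oneway. The I = 3, J = 4 data matrix below is hypothetical.

```python
# Balanced one-way ANOVA F statistic from the MSTr/MSE definitions,
# verified against scipy.  Data are hypothetical (I = 3, J = 4).
import numpy as np
from scipy import stats

data = np.array([[6.0, 8.0, 7.0, 9.0],    # treatment 1
                 [5.0, 4.0, 6.0, 5.0],    # treatment 2
                 [8.0, 9.0, 7.0, 8.0]])   # treatment 3
I, J = data.shape
grand_mean = data.mean()

# MSTr = J * sum_i (xbar_i - xbar)^2 / (I - 1)   -- between treatments
mstr = J * np.sum((data.mean(axis=1) - grand_mean) ** 2) / (I - 1)
# MSE  = sum_i sum_j (x_ij - xbar_i)^2 / (I(J-1)) -- within treatments
mse = np.sum((data - data.mean(axis=1, keepdims=True)) ** 2) / (I * (J - 1))

F = mstr / mse
F_scipy, p_value = stats.f_oneway(*data)   # each row is one group
print(f"F = {F:.4f} (scipy: {F_scipy:.4f}), p = {p_value:.4f}")
```

The hand calculation and scipy agree; the p-value comes from the F distribution with I − 1 = 2 and n − I = 9 degrees of freedom.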

12
Interpreting the ANOVA F If H0 is true, both mean squares are unbiased estimators of σ², whereas when H0 is false, E(MSTr) tends to overestimate σ², so H0 is rejected when F is too large.

13
ANOVA Table The total variation is partitioned into that due to treatment and error, so the DF and SS for Treatment and Error sum to the Total.

Source of Variation   Degrees of Freedom   Sum of Squares   Mean Square   F
Treatments            I − 1                SSTr             MSTr          MSTr / MSE
Error                 I(J − 1) = n − I     SSE              MSE
Total                 IJ − 1 = n − 1       SST

14
ANOVA (unbalanced) For an unbalanced design, the total sum of squares is SST = Σ_i Σ_j (X_ij − X̄)², where df = n − 1.

15
ANOVA (unbalanced) Mean square for treatments (between): SSTr = Σ_i J_i (X̄_i − X̄)², where df = I − 1, so MSTr = SSTr / (I − 1).

16
ANOVA (unbalanced) Mean square for error (within): SSE = Σ_i Σ_j (X_ij − X̄_i)², where df = n − I, so MSE = SSE / (n − I).
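The unbalanced formulas from the last three slides differ from the balanced case only in weighting each group's contribution by its own sample size J_i. A sketch with hypothetical groups of unequal sizes:

```python
# Unbalanced one-way ANOVA using the SSTr and SSE definitions above,
# verified against scipy.  Groups are hypothetical with unequal sizes.
import numpy as np
from scipy import stats

groups = [np.array([6.0, 8.0, 7.0]),
          np.array([5.0, 4.0, 6.0, 5.0, 5.0]),
          np.array([8.0, 9.0, 7.0, 8.0])]
I = len(groups)
n = sum(len(g) for g in groups)
grand_mean = np.concatenate(groups).mean()

# SSTr = sum_i J_i (xbar_i - xbar)^2, df = I - 1
sstr = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
# SSE = sum_i sum_j (x_ij - xbar_i)^2, df = n - I
sse = sum(((g - g.mean()) ** 2).sum() for g in groups)

F = (sstr / (I - 1)) / (sse / (n - I))
F_scipy, p_value = stats.f_oneway(*groups)
print(f"F = {F:.4f} (scipy: {F_scipy:.4f})")
```

Note that SST = SSTr + SSE still holds, so the partition in the ANOVA table carries over unchanged to the unbalanced case.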

17
Multiple Comparisons Procedures that allow for comparisons of individual pairs or combinations of means. Post hoc procedures are determined after the data are collected and vary greatly in how well they control the experimentwise error rate.

18
Fisher’s LSD Fisher’s Least Significant Difference is powerful, but does not control the experimentwise error rate. Reject H0 that μ_i = μ_k if |X̄_i − X̄_k| > t(α/2, n − I) · √(MSE (1/J_i + 1/J_k)).
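The LSD criterion above can be sketched directly; the MSE, error degrees of freedom, and pair of means below are hypothetical values of the kind a preceding balanced ANOVA (I = 3, J = 4) would produce.

```python
# Fisher's LSD for one pair of means.  Reject H0: mu_i = mu_k when
# |xbar_i - xbar_k| exceeds t(alpha/2, n-I) * sqrt(MSE*(1/J_i + 1/J_k)).
# All numeric inputs are hypothetical illustration values.
from math import sqrt
from scipy import stats

mse, df_error = 1.0, 9          # MSE and n - I from a hypothetical ANOVA
xbar_i, xbar_k = 7.5, 5.0       # the two treatment means being compared
J_i = J_k = 4                   # group sizes (balanced here)
alpha = 0.05

t_crit = stats.t.ppf(1 - alpha / 2, df_error)
lsd = t_crit * sqrt(mse * (1 / J_i + 1 / J_k))
reject = abs(xbar_i - xbar_k) > lsd
print(f"LSD = {lsd:.3f}, |difference| = {abs(xbar_i - xbar_k):.3f}, reject = {reject}")
```

Because each pair is tested at level α with no correction, the LSD is the smallest (most powerful) threshold of the procedures listed, at the cost of the experimentwise error rate.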

19
Tukey’s HSD Tukey’s Honest Significant Difference is less powerful, but has full control over the experimentwise error rate. Reject H0 that μ_i = μ_k if |X̄_i − X̄_k| > Q(α, I, n − I) · √(MSE / J). Note: Q is the value from the studentized range distribution in Table A10.
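The Table A10 lookup for Q can be replaced by scipy's studentized range distribution (scipy.stats.studentized_range, available in scipy 1.7+). The inputs below mirror the hypothetical balanced example used for the LSD: I = 3 groups, J = 4 per group, MSE = 1.0.

```python
# Tukey's HSD threshold from the studentized range distribution,
# replacing a Table A10 lookup.  Inputs are hypothetical (I=3, J=4).
from math import sqrt
from scipy.stats import studentized_range

I, J = 3, 4
mse, df_error = 1.0, 9          # MSE and n - I from a hypothetical ANOVA
alpha = 0.05

q_crit = studentized_range.ppf(1 - alpha, I, df_error)  # Q(alpha, I, n-I)
hsd = q_crit * sqrt(mse / J)
reject = abs(7.5 - 5.0) > hsd   # same pair of hypothetical means as the LSD
print(f"Q = {q_crit:.3f}, HSD = {hsd:.3f}, reject = {reject}")
```

Note Q(0.05, 3, 9) ≈ 3.95, so the HSD threshold here is larger than the corresponding LSD, reflecting the trade of power for experimentwise control.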
