Chapter 10 Analysis of Variance (ANOVA) Part III: Additional Hypothesis Tests Renee R. Ha, Ph.D. James C. Ha, Ph.D. Integrative Statistics for the Social & Behavioral Sciences
Why do we need ANOVA? You would need to conduct three independent t tests to determine whether or not there were significant differences among these means:
Low caffeine versus high caffeine
Low caffeine versus no caffeine
High caffeine versus no caffeine
Why do we need ANOVA? When you perform multiple pairwise comparisons on the same data set, you run the risk of an inflated Type I error rate.
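The inflation mentioned above is easy to quantify. If each of k independent tests is run at a per-comparison alpha of .05, the chance of at least one false positive is 1 − (1 − α)^k. A minimal sketch (the function name is illustrative, not from the text):

```python
# Familywise Type I error rate across k independent pairwise tests,
# each run at the per-comparison alpha level.
def familywise_error(alpha, k):
    """Probability of at least one false positive across k tests."""
    return 1 - (1 - alpha) ** k

# Three pairwise caffeine comparisons at alpha = .05:
rate = familywise_error(0.05, 3)
print(round(rate, 4))  # about 0.1426 -- nearly triple the nominal .05
```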
ANOVA One overall test to determine whether or not one or more of the conditions has different effects than the other conditions. Then, only if the overall test is statistically significant do you proceed to do pairwise comparisons to determine exactly which comparisons are different from one another.
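The overall test described above can be run in one call with SciPy's `f_oneway`. The scores below are hypothetical stand-ins for the three caffeine conditions, not data from the text:

```python
from scipy import stats

# Hypothetical scores for the three caffeine conditions
no_caffeine   = [10, 12, 11, 13, 12]
low_caffeine  = [14, 15, 13, 16, 15]
high_caffeine = [18, 17, 19, 16, 18]

# One overall F test across all three groups
f_stat, p_value = stats.f_oneway(no_caffeine, low_caffeine, high_caffeine)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# Only if this overall test is significant do you move on
# to pairwise comparisons.
```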
ANOVA Your alternative hypothesis (HA) is that one or more conditions have an effect (one or more means differ from other means), and your null hypothesis (H0) is that there is no effect (your means are equal).
ANOVA You must assume that only the means are affected by your experimental manipulations and that the variance remains unaffected.
Figure 10.1 Comparing Means When There Is Low vs. High Within-Group Variability
Figure 10.2 Comparing Three Means When You Have Low vs. High Within-Group Variability
Figure 10.3 Comparing Different Means When Within-Group Variability Is the Same
Linear or Additive Equation
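The slide title refers to the standard one-way ANOVA linear (additive) model, which decomposes each score into a grand mean, a treatment effect, and random error (notation here follows common textbook convention; the book's own symbols may differ):

$$X_{ij} = \mu + \tau_j + e_{ij}$$

where $X_{ij}$ is the score of subject $i$ in group $j$, $\mu$ is the grand mean, $\tau_j$ is the effect of treatment $j$, and $e_{ij}$ is random error.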
F Distribution: Characteristics Family of curves that varies with the degrees of freedom for the numerator (df_B) and the denominator (df_W) of the F formula. Use critical values to indicate the values that F-obtained must equal, or be more extreme than, to achieve statistical significance.
F Distribution: Characteristics Because F-obtained always yields a positive number, the F distribution is also positive and positively skewed. All tests of ANOVA are nondirectional. The F distribution table in the Appendix (Table C) contains the F-critical values.
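The critical values tabled in Appendix Table C can also be computed directly from the F distribution. A sketch, assuming df_B = 2 and df_W = 12 as example degrees of freedom:

```python
from scipy import stats

# Critical F for a nondirectional ANOVA at alpha = .05 with
# df_B = 2 (numerator) and df_W = 12 (denominator).
# All of the alpha is placed in the upper tail, matching F tables.
f_crit = stats.f.ppf(0.95, dfn=2, dfd=12)
print(round(f_crit, 2))  # approx. 3.89, matching standard F tables
```

F-obtained must equal or exceed this value to reach significance.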
Results if you use Microsoft Excel to calculate the ANOVA
Results if you use SPSS to calculate the ANOVA (or F test): the output table reports Sum of Squares, df, Mean Square, F, and Sig. for the Between Groups, Within Groups, and Total rows.
When to use ANOVA? 1. You have more than two samples and a between-groups design. 2. The sampling distribution is normally distributed. 3. The dependent variable is on an interval or ratio scale. 4. The variances of the groups are the same or are homogeneous.
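Assumption 4 (homogeneity of variance) can be checked before running the ANOVA, for example with Levene's test. A sketch using the same hypothetical caffeine data as before (Levene's test is a common choice but is not named in the text):

```python
from scipy import stats

# Hypothetical scores for the three conditions
g1 = [10, 12, 11, 13, 12]
g2 = [14, 15, 13, 16, 15]
g3 = [18, 17, 19, 16, 18]

# Levene's test: H0 is that the group variances are equal.
stat, p = stats.levene(g1, g2, g3)
print(f"Levene W = {stat:.2f}, p = {p:.3f}")
# A non-significant result (p > .05) is consistent with
# the equal-variances assumption.
```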
Multiple Comparisons If you obtain statistical significance with your ANOVA (F test), then you need to do multiple comparisons to determine which of your means differ significantly from one another. There are two major categories of comparisons: planned and unplanned.
Multiple Comparisons Planned comparisons (or “a priori” comparisons) are performed whenever you have preplanned a limited number of comparisons prior to collecting your data. In this situation, you may conduct independent t tests to determine if your planned comparisons have means that are significantly different from one another.
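A single planned comparison of this kind is just an independent-samples t test. A sketch for one a priori comparison, low versus high caffeine, with hypothetical scores:

```python
from scipy import stats

# One planned ("a priori") comparison: low vs. high caffeine
low_caffeine  = [14, 15, 13, 16, 15]
high_caffeine = [18, 17, 19, 16, 18]

t_stat, p_value = stats.ttest_ind(low_caffeine, high_caffeine)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```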
Multiple Comparisons Unplanned comparisons are often called post hoc or “a posteriori” comparisons. Not planning a limited number of comparisons prior to the start of the study also means that you must perform comparisons that limit your Type I error rate.
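One widely used post hoc procedure that limits the familywise Type I error rate is Tukey's HSD (the text does not name a specific procedure; Tukey's is shown here as a common example). A sketch with the same hypothetical data:

```python
from scipy import stats

# Post hoc ("a posteriori") comparisons with Tukey's HSD, which
# holds the familywise Type I error rate at alpha across all pairs.
no_caffeine   = [10, 12, 11, 13, 12]
low_caffeine  = [14, 15, 13, 16, 15]
high_caffeine = [18, 17, 19, 16, 18]

result = stats.tukey_hsd(no_caffeine, low_caffeine, high_caffeine)
print(result)  # pairwise mean differences, CIs, and adjusted p-values
```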
Effect Size and Power For ANOVA, you can measure effect size with this formula, and the numbers you need can be found in the computer output from an ANOVA test.
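The formula itself is not reproduced in this transcript. A common effect-size measure for ANOVA is eta squared, the proportion of total variability accounted for by group membership; it uses the sums of squares reported in the ANOVA output. The SS values below are hypothetical:

```python
# Eta squared: SS_between / SS_total, using sums of squares taken
# from the ANOVA output table. Values here are hypothetical.
ss_between = 90.0
ss_total = 105.6
eta_squared = ss_between / ss_total
print(round(eta_squared, 3))  # proportion of variance explained
```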
Overview of Single-Sample, Two-Sample, and Three-or-More-Sample Tests