
Slide 1

To extend the comparison of population means beyond the two groups tested by the independent samples t-test, we use a one-way analysis of variance (ANOVA). The question we want to answer in analysis of variance is whether a categorical independent variable has a relationship to a quantitative dependent variable. The basis for deciding there is a relationship is the difference in errors (variance or sum of squares) when we use the means of the groups defined by the independent variable compared to the errors when we use the mean for all cases, ignoring the independent variable.

The null hypothesis for this test is that the means for all populations represented by the groups are equal. If the means of all groups are equal, they will be the same as the overall mean for the entire distribution. In order to reject the null hypothesis, we require statistical evidence that at least one of the group means is different from the others. The research hypothesis is that at least one of the means is not equal to the others.

Analysis of variance is similar to linear regression in that we are computing the reduction in error achieved by using the independent variable to estimate values for the dependent variable versus using only the mean of the dependent variable. The following slides demonstrate graphically how this works. Note that the calculations may not match exactly because of rounding.
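
The slides perform these computations in SPSS; purely as an illustration of the logic, here is a minimal Python sketch (the scores and group structure are made up, not the slide data):

```python
import numpy as np

# Hypothetical scores for three groups (illustration only; not the slide data).
groups = [np.array([0.30, 0.35, 0.42]),
          np.array([0.44, 0.46, 0.47]),
          np.array([0.55, 0.62, 0.70])]
all_scores = np.concatenate(groups)

# Total sum of squares: error when every case is estimated by the overall mean.
grand_mean = all_scores.mean()
ss_total = ((all_scores - grand_mean) ** 2).sum()

# Within-groups sum of squares: error when each case is estimated by its group mean.
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)

# The reduction in error attributable to the independent variable.
ss_between = ss_total - ss_within
print(ss_total, ss_within, ss_between)
```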

Slide 2

This is a histogram for the variable GNDREMPV, UN gender empowerment measure. The mean of the distribution, 0.57, is the blue line in the chart. Given the standard deviation (0.17) and the number of cases in the distribution (80), we can compute the variance and the total sum of squares for all cases in the distribution.

The variance is the standard deviation squared (0.17²), which is equal to 0.0279. (The unrounded standard deviation is used here, so the result differs slightly from 0.17² = 0.0289.) The variance is the total sum of squares divided by the number of cases minus one. Reversing the formula, we multiply 0.0279 by 79 to get the total sum of squares, 2.202, which matches the total sum of squares in the ANOVA table for this problem. The total sum of squares is the total amount of error that we would realize if we used the mean of gender empowerment as our best guess for the score for every case in the distribution.

Slide 3

Next, we will compute the total error that we would have if we used the mean for each of the groups defined by the independent variable as the estimate for all of the cases in that group's distribution. To do this we will break our histogram into three parts, one for each group. We will use the red line to represent the group mean.

Again we can use the standard deviation (0.13) and the number of cases in the distribution (8) to compute the variance and the sum of squares for this group. The variance is the standard deviation squared (0.13²), which is equal to 0.0176 (using the unrounded standard deviation). The variance is the sum of squares divided by the number of cases minus one. Reversing the formula, we multiply 0.0176 by 7 to get the sum of squares for this group, 0.1231. We will add this sum of squares to the sum of squares for the other two groups to compare to the Within Groups Sum of Squares.

Slide 4

Again we can use the standard deviation (0.11) and the number of cases in the distribution (15) to compute the variance and the sum of squares for the second group. The variance is the standard deviation squared (0.11²), which is equal to 0.0120 (using the unrounded standard deviation). The variance is the sum of squares divided by the number of cases minus one. Reversing the formula, we multiply 0.0120 by 14 to get the sum of squares for this group, 0.1681. We will add this sum of squares to the sum of squares for the other two groups to compare to the Within Groups Sum of Squares.

Slide 5

Again we can use the standard deviation (0.15) and the number of cases in the distribution (57) to compute the variance and the sum of squares for the third group. The variance is the standard deviation squared (0.15²), which is equal to 0.0217 (using the unrounded standard deviation). Reversing the formula, we multiply 0.0217 by 56 to get the sum of squares for this group, 1.2132.

If we add the sums of squares for the three groups defined by the independent variable, we get 0.1231 + 0.1681 + 1.2132 = 1.504, which matches the Within Groups Sum of Squares.
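
As a quick check of slides 3 through 5, a short Python sketch that recomputes each group's sum of squares from its reported variance and size:

```python
# Group variances and sample sizes reported on slides 3-5.
group_stats = [(0.0176, 8), (0.0120, 15), (0.0217, 57)]

# Each group's sum of squares is its variance times (n - 1);
# the within-groups sum of squares is the total across groups.
ss_within = sum(var * (n - 1) for var, n in group_stats)
print(round(ss_within, 3))  # prints 1.506, matching the table's 1.504 within rounding
```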

Slide 6

Let’s review what we have done. First, we computed the total amount of error we would have if we used the mean of the dependent variable as the estimate for the score of every case in the distribution, and found that we would have 2.202 units of error. Second, we computed the total amount of error we would have if we used the mean for each group defined by the independent variable as the estimate of the scores for the cases in the group, and found we would have 1.504 units of error.

The difference between the total sum of squares and the within groups sum of squares is the reduction in the amount of error that we can attribute to the relationship between the dependent and independent variables. For this problem, the reduction in error is 0.697 units (2.202 – 1.504; the third decimal place reflects rounding), which matches the Between Groups Sum of Squares in the ANOVA table. We have reduced the errors in estimating the dependent variable by about one-third. To determine whether or not this amount is a statistically significant reduction, we would need to complete the remainder of the ANOVA table.

Slide 7

We have computed the reduction in error by subtracting the within groups sum of squares from the total sum of squares. We can also directly compute the between groups sum of squares by examining the distribution of the group means around the overall mean, as shown in the chart to the right.

Group mean | Mean of All Cases | Difference | Difference Squared | N in Group | Sum of Squares
0.364      | 0.565             | -0.201     | 0.040              | 8          | 0.323
0.455      | 0.565             | -0.110     | 0.012              | 15         | 0.182
0.623      | 0.565             |  0.058     | 0.003              | 57         | 0.192
Total: 0.696

The mean for each of the three groups is entered in the first column of the table, with the overall mean entered in the second column. The difference or deviation of each group mean from the overall mean is shown in the third column. The fourth column contains the differences or deviations squared.

Slide 8

Group mean | Mean of All Cases | Difference | Difference Squared | N in Group | Sum of Squares
0.364      | 0.565             | -0.201     | 0.040              | 8          | 0.323
0.455      | 0.565             | -0.110     | 0.012              | 15         | 0.182
0.623      | 0.565             |  0.058     | 0.003              | 57         | 0.192
Total: 0.696

The squared difference or deviation applies to all cases in each group, so we enter the number of cases in each group in the fifth column. To compute the sum of squares, we multiply the squared deviation by the number in the group, and sum across all groups. This results in a sum of squares of 0.696, which is within rounding error of the Between Groups Sum of Squares in the ANOVA table.

Thus, we can directly compute the reduction in sum of squares error that can be attributed to substituting the group means as the estimated scores in place of the overall mean. If the ANOVA supports the existence of a relationship between the independent and dependent variables, it is directly attributable to the increase in accuracy in predicting the dependent variable when we take into account the different groups to which a case can belong.
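
The same computation in a short Python sketch, using the group means, group sizes, and overall mean from the table above:

```python
# Group means, group sizes, and overall mean from the table above.
group_means = [0.364, 0.455, 0.623]
group_ns = [8, 15, 57]
grand_mean = 0.565

# Between-groups SS: each group's squared deviation from the overall
# mean, weighted by the number of cases in the group.
ss_between = sum(n * (m - grand_mean) ** 2
                 for m, n in zip(group_means, group_ns))
print(round(ss_between, 3))  # prints 0.696
```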

Slide 9

The ANOVA test is based on the ratio of the between groups sum of squared errors to the within groups sum of squared errors. If the probability of the test statistic is less than alpha, we reject the null hypothesis and conclude that the mean of one or more of the populations represented by the groups is different from the others.

Rejecting the ANOVA null hypothesis that the group means are all the same does not tell us which specific group means were different from each other. To answer that question, the one-way analysis of variance requires a second step: using post hoc tests to compare each pair of group means to identify specific differences.

We might think we could use multiple t-tests to identify differences in pairs of group means, but this inflates our target alpha error rate. If we set alpha to .05, the probability that we would make a correct decision for an individual test is 1 - .05 = 0.95. If we did three related t-tests, each with alpha set at 0.05, the probability of making correct decisions on all three of the tests is 0.95 x 0.95 x 0.95 = 0.857375. The probability that we are making an error is 1 - 0.857375 = 0.142625. Thus, the probability that we would erroneously reject the null hypothesis has increased from our desired 0.05 to an actual error rate of 0.142625.

There are numerous post hoc tests that differ in their sensitivity to differences between groups and their strategy for avoiding inflated error rates. The Bonferroni post hoc test adjusts the error rate so that the alpha for the problem stays at the target value; this is the post hoc test we will use in our assignments.
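
A tiny Python sketch makes the inflation concrete:

```python
alpha = 0.05
n_tests = 3

# Probability of at least one false rejection across three independent
# tests, each run at alpha = 0.05.
familywise_error = 1 - (1 - alpha) ** n_tests
print(round(familywise_error, 6))  # 0.142625
```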

Slide 10

To hold our error rate to 0.05, the Bonferroni method divides the alpha error rate by the number of tests: 0.05 / 3 = 0.0167. If we set alpha to 0.0167, the probability that we would make a correct decision for an individual test is 1 - 0.0167 = 0.9833. The probability that we would make three correct decisions would be 0.9833 x 0.9833 x 0.9833 = 0.9508 (0.950829 using unrounded values). The probability of making an error is 1 - 0.950829 = 0.049171. (The difference from 0.05 is due to rounding.)

The assumptions or conditions required for ANOVA are equality of variance and normality across the groups. We will use the Levene test of homogeneity of variance to test for equality of variance, and the usual criteria of skewness, kurtosis, and presence of outliers for normality.

If we satisfy the assumptions, we interpret the F-test for group differences if its probability is less than alpha. If we reject the null hypothesis for the F-test, we examine the post hoc tests. On the post hoc tests, it is possible that one, two, or all of the paired differences will be significant. It can also happen that the post hoc tests will show no significant differences between pairs of means even though the F-test indicated that at least one of the means is different from the others. We do not interpret post hoc tests if we fail to reject the null hypothesis for the ANOVA, even if we notice that there are statistically significant differences between some pair of groups.
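
Extending the sketch above, the Bonferroni adjustment restores the familywise error rate to approximately the 0.05 target:

```python
alpha = 0.05
n_tests = 3

# Bonferroni: run each comparison at alpha / number of tests.
adjusted_alpha = alpha / n_tests
familywise_error = 1 - (1 - adjusted_alpha) ** n_tests
print(round(adjusted_alpha, 4), round(familywise_error, 4))  # 0.0167 0.0492
```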

Slide 11

This is the first screen for a one-way analysis of variance problem.

Slide 12

This is the second screen for a one-way analysis of variance problem.

Slide 13

The first paragraph introduces the problem: the variables included in the analysis and the statistical test conducted. We will compute the analysis of variance in order to complete Table 1.

Slide 14

Select Compare Means > One-Way ANOVA… from the Analyze menu.

Slide 15

First, move the dependent variable poverty to the Test Variable(s) list box. Second, move the independent variable freeSpee to the Factor text box. Third, click on the Post Hoc button to select the tests to compare pairs of means.

Slide 16

First, we mark the checkbox for the Bonferroni test, which computes a series of t-tests but reduces the alpha level to compensate for the multiple tests. Second, if the level of significance were different from the default of .05, we would change it here. Third, click on the Continue button to close the dialog box.

Slide 17

Next, we click on the Options button to specify additional output.

Slide 18

First, we mark the check boxes for Descriptive statistics, the Homogeneity of variance test, and the Means plot. Second, click on the Continue button to close the dialog box.

Slide 19

Having provided the specifications for the analysis, we click on the OK button to request the output.

Slide 20

The Descriptives table contains the number of cases in each group, as well as the mean and standard deviation for each group. The Test of Homogeneity of Variances is the Levene test. The ANOVA table is the overall test of the relationship and the null hypothesis.

Slide 21

The Multiple Comparisons table contains the t-tests for the comparison of each pair of groups.

Slide 22

The Means Plot gives us some preliminary idea of the results. This plot suggests that there may be no difference between “Complete” and “Some,” but both are different from “None.” The statistical tests will give us more precise results.

Slide 23

Transfer the n’s, means, and standard deviations from the Descriptives table to Table 1.

Slide 24

The first sentence in the next paragraph asks which variable is the target of preliminary data screening. The correct answer is the dependent variable. There is no expectation that the independent variable, which defines the groups compared (a categorical variable), be normally distributed.

Slide 25

The second sentence in the paragraph asks about the normality of the dependent variable. Based on the information in the problem narrative (skewness = 0.56, kurtosis = -0.33, 0 outliers), we would conclude that the variable is nearly normal.

Slide 26

The next blanks expect us to enter the value of the F ratio and the p-value for the Levene test of homogeneity of variance. The Levene test evaluates the null hypothesis that the variances of the groups are equal against the alternative hypothesis that the variance of one or more groups is different from the variance of the other groups. Unlike the independent samples t-test, there is no alternative formula to use when we violate this assumption, although the one-way ANOVA is robust to the violation if the group sizes are similar. In our problems, a violation of this assumption would result in answers of na for all subsequent questions.
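
The slides obtain the Levene test from SPSS's One-Way ANOVA output; as an aside, the same test is available in SciPy. A minimal sketch with made-up scores for the three censorship groups:

```python
from scipy import stats

# Hypothetical poverty scores for three groups (illustration only).
none_group = [45, 52, 38, 41, 50]
some_group = [33, 36, 30, 39, 35]
complete_group = [28, 31, 25, 30, 27]

# Levene test of the null hypothesis that the group variances are equal.
f_stat, p_value = stats.levene(none_group, some_group, complete_group)
print(f_stat, p_value)  # homogeneity supported if p > alpha
```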

Slide 27

The Levene test is an F statistic. The APA style for reporting an F test is:

F(numerator degrees of freedom, denominator degrees of freedom) = F ratio, p = p value

We transfer the degrees of freedom from the Test of Homogeneity of Variances table to the problem narrative.
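
For illustration only, a small hypothetical Python helper that renders this APA pattern (the degrees of freedom shown are placeholders, not values from this problem's output):

```python
def apa_f(df_num, df_den, f_ratio, p_value):
    """Format an F test in APA style (hypothetical helper)."""
    return f"F({df_num}, {df_den}) = {f_ratio:.2f}, p = {p_value:.2f}"

# Placeholder degrees of freedom with the Levene result from the next slides.
print(apa_f(2, 96, 1.25, 0.29))  # F(2, 96) = 1.25, p = 0.29
```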

Slide 28

The F statistic and the Sig. value are transferred to the problem narrative. Since the output shows Sig. as .289, we enter .29 and use the = symbol between p and the Sig. value.

Slide 29

The uniformity of the variance of the dependent variable across the groups defined by the independent variable is evaluated with the Levene Test of Equality of Error Variances. The Levene statistic tests the null hypothesis that the variances for all of the groups are equal. When the probability of the Levene statistic is less than or equal to alpha, we reject the null hypothesis, supporting a finding that the variances of one or more groups are different, and we do not satisfy the assumption of equal variances.

In this problem, the interpretation of equal variance is supported by the Levene statistic of 1.25 with a probability of p = .29, greater than the alpha of .05. The null hypothesis is not rejected. The assumption of equal variance is supported, and we find no significant violation.

Slide 30

The third paragraph deals with the significance of the overall test of the relationship. The overall test of the equality of means is an F statistic. The APA style for reporting an F test is:

F(numerator degrees of freedom, denominator degrees of freedom) = F ratio, p = p value

Slide 31

We transfer the degrees of freedom from the ANOVA table to the problem narrative.

Slide 32

We transfer the F statistic and the Sig. value from the ANOVA table to the problem narrative.

Slide 33

When the p-value for the F-test is less than or equal to alpha, we reject the null hypothesis that the means of the populations represented by the groups in the sample were all equal, and we interpret the results of the test. If the p-value is greater than alpha, we fail to reject the null hypothesis and do not interpret the result. The p-value for the ANOVA test (p = .04) was less than or equal to the alpha level of significance (.05), supporting the conclusion to reject the null hypothesis. At least one of the means of the populations represented by the groups in the sample was different from the other means; the ANOVA test was statistically significant.
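
The decision rule, expressed as a minimal Python sketch using this problem's reported values:

```python
alpha = 0.05
p_value = 0.04  # ANOVA Sig. value from the output

# Reject the null hypothesis when p <= alpha.
if p_value <= alpha:
    print("Reject H0: at least one group mean differs.")
else:
    print("Fail to reject H0: do not interpret further results.")
```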

Slide 34

The next sentence deals with the effect size for the relationship. A common measure of effect size in analysis of variance is eta-squared (η²), which SPSS does not compute, but which can easily be computed from the ANOVA table.

Slide 35

Eta-squared is computed as the between groups sum of squares divided by the total sum of squares. For this problem:

η² = 2351.844 ÷ 50578.401 = 0.046499

The rounded effect size, 0.05, is entered in the problem narrative as a decimal effect size and as the percent of variance explained. Note the similarity between r² and η². Both are computed by dividing the explained error by the total error, and both are interpreted using the same adjectives for strength of the relationship.
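
A one-line check of the computation, using the sums of squares reported above:

```python
ss_between = 2351.844
ss_total = 50578.401

# Eta-squared: proportion of total error explained by the group means.
eta_squared = ss_between / ss_total
print(round(eta_squared, 6))  # 0.046499
```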

Slide 36

Comparing the computed value for η² to the interpretative criteria, we find that the effect is weak.

Slide 37

The next paragraph deals with the pairwise comparisons using the Bonferroni method of adjusting alpha.

Slide 38

The first comparison tests for a statistical difference in the means of countries with some government censorship of the press compared to countries with complete censorship of the press.

Slide 39

We transfer the means and standard deviations for the “some” censorship group to the problem narrative.

Slide 40

We transfer the means and standard deviations for the “complete” censorship group to the problem narrative.

Slide 41

In the table of Multiple Comparisons, the p-value for the difference in means for the Some versus the Complete group was 1.00, greater than the alpha of 0.05, so we fail to reject the null hypothesis that the difference is equal to 0. The Some group was not statistically higher than the Complete group. Note: the probability is actually very close to, but not exactly, 1.0; the displayed value of 1.00 is produced by rounding, analogous to p-values displayed as .000.

Slide 42

The second comparison tests for a statistical difference in the means of countries with some government censorship of the press compared to countries with no censorship of the press.

Slide 43

We transfer the means and standard deviations for the “some” censorship group to the problem narrative.

Slide 44

We transfer the means and standard deviations for the “no” censorship group to the problem narrative.

Slide 45

In the table of Multiple Comparisons, the p-value for the difference in means for the Some versus the None group was .047, less than the alpha of 0.05, so we reject the null hypothesis that the difference is equal to 0. The Some group was statistically higher than the None group.

Slide 46

The final comparison tests for a statistical difference in the means of countries with complete government censorship of the press compared to countries with no censorship of the press.

Slide 47

We transfer the means and standard deviations for the “complete” censorship group to the problem narrative.

Slide 48

We transfer the means and standard deviations for the “no” censorship group to the problem narrative.

Slide 49

In the table of Multiple Comparisons, the p-value for the difference in means for the Complete versus the None group was .156, greater than the alpha of 0.05, so we fail to reject the null hypothesis that the difference is equal to 0. The Complete group was not statistically higher than the None group.

Slide 50

The green shading of the answers when we submit the problem indicates that our answers are correct.

Slide 51

Example of using na: overall test not significant. If the p-value is greater than alpha, we do not reject the null hypothesis for the overall F-test, and all of the remaining answers are na.

