
1
ANOVA (Analysis of Variance) Martina Litschmannová K210

2
The basic ANOVA situation

3
An example ANOVA situation
Subjects: 25 patients with blisters
Treatments: Treatment A, Treatment B, Placebo
Measurement: number of days until the blisters heal
Data [and means]:
A: 5, 6, 6, 7, 7, 8, 9, 10 [7.25]
B: 7, 7, 8, 9, 9, 10, 10, 11 [8.875]
P: 7, 9, 9, 10, 10, 10, 11, 12, 13 [10.11]
Are these differences significant?
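This question can be checked directly in software. A minimal sketch using `scipy.stats.f_oneway` (assuming SciPy is available; the data are the values listed above):

```python
# One-way ANOVA on the blister-healing data from the slide:
# does mean healing time differ across Treatment A, Treatment B, Placebo?
from scipy import stats

a = [5, 6, 6, 7, 7, 8, 9, 10]           # Treatment A, mean 7.25
b = [7, 7, 8, 9, 9, 10, 10, 11]         # Treatment B, mean 8.875
p = [7, 9, 9, 10, 10, 10, 11, 12, 13]   # Placebo,     mean ~10.11

f_stat, p_value = stats.f_oneway(a, b, p)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")   # F = 6.45, p = 0.006
```

The F statistic and p-value agree with the ANOVA output shown later in the deck.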

4
Informal Investigation
Graphical investigation:
- side-by-side box plots
- multiple histograms
Whether the differences between the groups are significant depends on:
- the differences in the means
- the standard deviations of each group
- the sample sizes
ANOVA determines the p-value from the F statistic.

5
Side-by-Side Boxplots

6
What does ANOVA do?
At its simplest (there are extensions), ANOVA tests the following hypotheses:
H0: The means of all the groups are equal.
HA: Not all the means are equal.
ANOVA doesn’t say how or which means differ; we can follow up with “multiple comparisons”.
Note: We usually refer to the sub-populations as “groups” when doing ANOVA.

7
Assumptions of ANOVA
- Each group is approximately normal. Check this by looking at histograms and/or normal quantile plots, or use a test of normality; ANOVA can handle some non-normality, but not severe outliers.
- The standard deviations of the groups are approximately equal. Rule of thumb: the ratio of the largest to the smallest sample standard deviation must be less than 2:1; check with a test of homoscedasticity.

8
Normality Check
We can check for normality using:
- assumptions about the population
- histograms for each group
- a normal quantile plot for each group
- a test of normality (Shapiro-Wilk, Lilliefors, Anderson-Darling, …)
With such small data sets there is no really good way to check normality from the data, but we make the common assumption that physical measurements of people tend to be normally distributed.

9
Shapiro-Wilk test
One of the most powerful tests of normality [Shapiro, Wilk]. An online applet for this test (Simon Ditto, 2009) can be found here.
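As a sketch, the per-group normality check can also be run with `scipy.stats.shapiro` on the blister data from the earlier example:

```python
# Shapiro-Wilk normality test applied separately to each treatment group.
from scipy import stats

groups = {
    "A": [5, 6, 6, 7, 7, 8, 9, 10],
    "B": [7, 7, 8, 9, 9, 10, 10, 11],
    "P": [7, 9, 9, 10, 10, 10, 11, 12, 13],
}
p_values = {}
for name, values in groups.items():
    w, p_val = stats.shapiro(values)
    p_values[name] = p_val
    print(f"{name}: W = {w:.3f}, p = {p_val:.3f}")
# Large p-values mean normality is not rejected; with only 8 or 9
# observations per group the test has little power, which is why the
# slide also recommends plots and the normality assumption.
```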

10
Standard Deviation Check
Compare the largest and smallest sample standard deviations:
largest: 1.764, smallest: 1.458
1.764 / 1.458 = 1.210 < 2 → OK
Note: a standard-deviation ratio greater than 2 signals heteroscedasticity.
[Summary table: Variable days by treatment (A, B, P) with N, Mean, Median, StDev]

11
ANOVA Notation
Groups: 1, 2, …, k
Sample sizes: n_1, n_2, …, n_k
Sample averages: x̄_1, x̄_2, …, x̄_k
Sample standard deviations: s_1, s_2, …, s_k
Total sample size: N = n_1 + … + n_k

12
Levene Test
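A sketch of the Levene test on the blister data, using `scipy.stats.levene` (whose default centers each group on its median, the Brown-Forsythe variant):

```python
# Levene test of equal variances (homoscedasticity) across the three groups.
from scipy import stats

a = [5, 6, 6, 7, 7, 8, 9, 10]
b = [7, 7, 8, 9, 9, 10, 10, 11]
p = [7, 9, 9, 10, 10, 10, 11, 12, 13]

stat, p_value = stats.levene(a, b, p)   # default center='median'
print(f"Levene statistic = {stat:.3f}, p = {p_value:.3f}")
# A large p-value gives no evidence against equal variances, consistent
# with the 1.764 / 1.458 < 2 rule of thumb on the earlier slide.
```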

13
How ANOVA works (outline) ANOVA measures two sources of variation in the data and compares their relative sizes.

14
How ANOVA works (outline) Difference between Means Difference within Groups

15
The ANOVA F-statistic is the ratio of the between-group variation to the within-group variation: F = MSB / MSW. A large F is evidence against H0, since it indicates that there is more difference between groups than within groups.

16
ANOVA Output
Source of Variation | SS    | DF | MS    | F    | p-value
Between Groups      | 34.74 |  2 | 17.37 | 6.45 | 0.006
Within Groups       | 59.26 | 22 |  2.69 |      |
Total               | 94.00 | 24 |       |      |

17
How are these computations made?
Source of Variation | Sum of Squares           | Degrees of Freedom | Mean of Squares
Between Groups      | SSB = Σ n_i (x̄_i − x̄)²   | k − 1              | MSB = SSB / (k − 1)
Within Groups       | SSW = Σ (n_i − 1) s_i²   | N − k              | MSW = SSW / (N − k)
Total               | SST = SSB + SSW          | N − 1              | ---
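The table above can be reproduced by hand for the blister data; a plain-Python sketch of SSB, SSW, and the mean squares:

```python
# Hand computation of the one-way ANOVA table entries (blister data).
groups = [
    [5, 6, 6, 7, 7, 8, 9, 10],            # A
    [7, 7, 8, 9, 9, 10, 10, 11],          # B
    [7, 9, 9, 10, 10, 10, 11, 12, 13],    # P
]
k = len(groups)
n_total = sum(len(g) for g in groups)                  # N = 25
grand_mean = sum(sum(g) for g in groups) / n_total     # 8.8

# SSB: sum over groups of n_i * (group mean - grand mean)^2
ssb = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
# SSW: sum over groups of squared deviations from the group mean
ssw = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)

msb = ssb / (k - 1)           # df between = k - 1 = 2
msw = ssw / (n_total - k)     # df within  = N - k = 22
f = msb / msw
print(f"SSB = {ssb:.2f}, SSW = {ssw:.2f}, MSB = {msb:.2f}, "
      f"MSW = {msw:.2f}, F = {f:.2f}")
# Matches the ANOVA output slide: SSB = 34.74, SSW = 59.26, F = 6.45.
```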

18
Using the results of exploratory analysis and an ANOVA test, verify whether age has a statistically significant effect on BMI.
[Summary of BMI by age group: <35 years (n = 53), a middle age group, and >50 years (n = 76), with Count, Average, and Variance per group and in total]

19
Using the results of exploratory analysis and an ANOVA test, verify whether age has a statistically significant effect on BMI.

21
Source of Variation | Sum of Squares | Degrees of Freedom | Mean of Squares
Between Groups      |                |                    |
Within Groups       |                |                    |
Total               |                |                    | ---
Using the results of exploratory analysis and an ANOVA test, verify whether age has a statistically significant effect on BMI.

22
Source of Variation | Sum of Squares | Degrees of Freedom | Mean of Squares
Between Groups      |                |                    |
Within Groups       | 3451.9         |                    |
Total               |                |                    | ---
Using the results of exploratory analysis and an ANOVA test, verify whether age has a statistically significant effect on BMI.

23
Source of Variation | Sum of Squares | Degrees of Freedom | Mean of Squares
Between Groups      |                |                    |
Within Groups       | 3451.9         |                    |
Total               | 3485.9         |                    | ---
Using the results of exploratory analysis and an ANOVA test, verify whether age has a statistically significant effect on BMI.


29
Where’s the Difference?
Analysis of Variance for days
Source     DF      SS      MS     F      P
treatment   2   34.74   17.37  6.45  0.006
Error      22   59.26    2.69
Total      24   94.00
[Plot of individual 95% CIs for each group mean, based on the pooled StDev: the intervals for A and P do not overlap.]
Once ANOVA indicates that the groups do not all appear to have the same means, what do we do?
Clearest difference: P is worse than A (the CIs don’t overlap).

30
Multiple Comparisons Once ANOVA indicates that the groups do not all have the same means, we can compare them two by two using the 2-sample t test. We need to adjust our p-value threshold because we are doing multiple tests with the same data. There are several methods for doing this. If we really just want to test the difference between one pair of treatments, we should set the study up that way.

31
Bonferroni method – post hoc analysis
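A sketch of Bonferroni-corrected pairwise comparisons on the blister data, using `scipy.stats.ttest_ind` and dividing the significance level by the number of pairs:

```python
# Pairwise two-sample t tests with a Bonferroni-adjusted threshold.
from itertools import combinations
from scipy import stats

groups = {
    "A": [5, 6, 6, 7, 7, 8, 9, 10],
    "B": [7, 7, 8, 9, 9, 10, 10, 11],
    "P": [7, 9, 9, 10, 10, 10, 11, 12, 13],
}
pairs = list(combinations(groups, 2))   # 3 pairwise comparisons
alpha = 0.05 / len(pairs)               # Bonferroni threshold: 0.05 / 3

results = {}
for g1, g2 in pairs:
    t, p_val = stats.ttest_ind(groups[g1], groups[g2])  # pooled-variance t test
    results[(g1, g2)] = p_val
    verdict = "significant" if p_val < alpha else "not significant"
    print(f"{g1} vs {g2}: p = {p_val:.4f} -> {verdict}")
# Only A vs P falls below the adjusted threshold, matching the earlier
# observation that the A and P confidence intervals do not overlap.
```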

32
Kruskal-Wallis test
The Kruskal-Wallis test is most commonly used when there is one nominal variable and one measurement variable, and the measurement variable does not meet the normality assumption of ANOVA. It is the non-parametric analogue of one-way ANOVA.

33
Kruskal-Wallis test
Like most non-parametric tests, it is performed on ranked data: the measurement observations are converted to their ranks in the overall data set, so the smallest value gets a rank of 1, the next smallest a rank of 2, and so on. The loss of information involved in substituting ranks for the original values can make this a less powerful test than ANOVA, so ANOVA should be used if the data meet its assumptions. If the original data set actually consists of one nominal variable and one ranked variable, you cannot do an ANOVA and must use the Kruskal-Wallis test.
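A sketch of this rank-based alternative on the same blister data, using `scipy.stats.kruskal` (which ranks the pooled observations and applies a tie correction):

```python
# Kruskal-Wallis test: non-parametric analogue of one-way ANOVA.
from scipy import stats

a = [5, 6, 6, 7, 7, 8, 9, 10]
b = [7, 7, 8, 9, 9, 10, 10, 11]
p = [7, 9, 9, 10, 10, 10, 11, 12, 13]

h_stat, p_value = stats.kruskal(a, b, p)
print(f"H = {h_stat:.2f}, p = {p_value:.3f}")
# Here the conclusion agrees with the parametric ANOVA: the groups differ.
```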

34
1. A farm bred three breeds of rabbits. An experiment was carried out (rabbits.xls) whose objective was to determine whether there is a statistically significant (conclusive) difference in weight between the breeds of rabbits. Verify.

35
2. The effects of three drugs on blood clotting were studied. Among other indicators, the thrombin time was determined. Data on the 45 monitored patients are recorded in the file thrombin.xls. Does the magnitude of the thrombin time depend on the preparation used?

36
Study materials (p. 154): Shapiro, S. S., Wilk, M. B. An Analysis of Variance Test for Normality (Complete Samples). Biometrika, 1965, Vol. 52, No. 3/4. Available from:
