PSY 307 – Statistics for the Behavioral Sciences Chapter 16 – One-Factor Analysis of Variance (ANOVA)

1 PSY 307 – Statistics for the Behavioral Sciences Chapter 16 – One-Factor Analysis of Variance (ANOVA)

2 Fisher’s F-Test (ANOVA) Ronald Fisher

3 Testing Yields in Agriculture
(Illustration: comparing mean yields X̄1 and X̄2 from plots given different treatments.)

4 ANOVA
• Analysis of Variance (ANOVA) – a test of differences among more than two population means.
• One-Way ANOVA – only one factor, or independent variable, is manipulated.
• ANOVA compares two sources of variability.

5 Two Sources of Variability
• Treatment effect – the existence of at least one difference between the population means defined by the independent variable.
• Between-groups variability – variability among subjects receiving different treatments (alternative hypothesis).
• Within-groups variability – variability among subjects who receive the same treatment (null hypothesis).

6 F-Test
• If the null hypothesis is true, the numerator and denominator of the F-ratio estimate the same quantity:
  F = random error / random error
• If the null hypothesis is false, the numerator will be greater than the denominator and F > 1:
  F = (random error + treatment effect) / random error

7 Difference vs. Error
• Difference goes on the top and error on the bottom:
  Difference is the variability between the groups, expressed as the sum of squares for the groups.
  Error is the variability within the groups, with all of the subjects treated as one large group.
• When the difference exceeds the error, the F-ratio will be large.

8 F-Ratio
• F = MS between / MS within
• MS = SS / df
• SS is the sum of the squared differences from the mean.

9 F-Ratio
• F = MS between / MS within
• MS between treats the values of the group means as a data set and calculates the sum of squares for it.
• MS within combines the groups into one large group and calculates the sum of squares for the whole group.
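To make the ratio concrete, here is a minimal Python sketch that builds F from sums of squares and degrees of freedom. The numeric values are placeholders (they happen to match the sleep-deprivation example worked out on the later slides), and the variable names are mine, not from the slides.

```python
# Build the F-ratio from sums of squares and degrees of freedom.
# Placeholder values; slides 12-18 show how to compute SS and df from raw data.
ss_between, ss_within = 54.0, 22.0   # sums of squares
df_between, df_within = 2, 6         # k - 1 and N - k

ms_between = ss_between / df_between  # MS = SS / df
ms_within = ss_within / df_within

F = ms_between / ms_within            # F = MS between / MS within
print(round(F, 2))                    # about 7.36 with these values
```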

10 Testing Hypotheses
• If there is a true difference between the groups, the numerator will be larger than the denominator, and F will be greater than 1.
• Writing the hypotheses:
  H0: μ1 = μ2 = μ3
  H1: H0 is false

11 Formulas for F
• In words – a description of what is being computed.
• Definitional formula – uses the SS terms described in the Witte text.
• Computational formula – used by ALEKS and in the examples in class.

12 Formula for SS total
• SS total is the total sum of squares: the sum of the squared deviations of the scores around the grand mean.
• Definition: SS total = Σ(X – X̄ grand)²
• Computation: SS total = ΣX² – G²/N, where G is the grand total of all scores and N is the total sample size.

13 Hours of Sleep Deprivation (example data)
   Hours:          0    24    48
   Scores:         0     3     6
                   4     6     8
                   2     6    10
   Group totals:   6    15    24     Grand total: G = 45
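As an illustration, here is a minimal Python sketch of slide 12's two SS total formulas, applied to the nine scores in the table above. The variable names are mine; both routes should give the same answer.

```python
# SS total two ways, using the pooled sleep-deprivation scores from the table above.
scores = [0, 4, 2, 3, 6, 6, 6, 8, 10]   # all N = 9 scores

# Definitional formula: squared deviations of each score around the grand mean.
grand_mean = sum(scores) / len(scores)
ss_total_def = sum((x - grand_mean) ** 2 for x in scores)

# Computational formula: sum of X^2 minus G^2 / N.
G, N = sum(scores), len(scores)
ss_total_comp = sum(x ** 2 for x in scores) - G ** 2 / N

print(ss_total_def, ss_total_comp)   # both should equal 76.0 here
```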

14 Formula for SS between
• SS between is the between-groups sum of squares: the sum of the squared deviations of the group means around the grand mean, weighted by group size.
• Definition: SS between = Σ n(X̄ group – X̄ grand)²
• Computation: SS between = Σ(T²/n) – G²/N, where T is each group's total and n is each group's sample size.
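A similar sketch for SS between, again using the slide 13 groups (variable names are mine; the groups are equal in size here, as in the example).

```python
# SS between two ways, using the three sleep-deprivation groups from slide 13.
groups = [[0, 4, 2], [3, 6, 6], [6, 8, 10]]   # 0, 24, and 48 hours
all_scores = [x for g in groups for x in g]
G, N = sum(all_scores), len(all_scores)
grand_mean = G / N

# Definitional: n * (group mean - grand mean)^2, summed over groups.
ss_between_def = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)

# Computational: sum of T^2 / n over groups, minus G^2 / N.
ss_between_comp = sum(sum(g) ** 2 / len(g) for g in groups) - G ** 2 / N

print(ss_between_def, ss_between_comp)   # both should equal 54.0 here
```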

15 Formula for SS within
• SS within is the within-groups sum of squares: the sum of the squared deviations of the scores around their own group mean.
• Definition: SS within = Σ(X – X̄ group)²
• Computation: SS within = ΣX² – Σ(T²/n), where T is each group's total and n is each group's sample size.
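And the same treatment for SS within (again a sketch with my own variable names):

```python
# SS within two ways, same slide 13 groups.
groups = [[0, 4, 2], [3, 6, 6], [6, 8, 10]]

# Definitional: squared deviations of each score around its own group mean.
ss_within_def = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)

# Computational: sum of X^2 minus the sum of T^2 / n.
ss_within_comp = (sum(x ** 2 for g in groups for x in g)
                  - sum(sum(g) ** 2 / len(g) for g in groups))

print(ss_within_def, ss_within_comp)   # both should equal 22.0 here
```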

16 Degrees of Freedom
• df total = N – 1, the number of all scores minus 1.
• df between = k – 1, the number of groups (k) minus 1.
• df within = N – k, the number of all scores minus the number of groups (k).

17 Checking Your Work
• SS total = SS between + SS within.
• The same is true for the degrees of freedom: df total = df between + df within.
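These bookkeeping checks are easy to automate. A short sketch, assuming the slide 13 data and my own variable names:

```python
# Check that SS and df both partition as slides 16-17 describe.
groups = [[0, 4, 2], [3, 6, 6], [6, 8, 10]]
all_scores = [x for g in groups for x in g]
N, k, G = len(all_scores), len(groups), sum(all_scores)

ss_total = sum(x ** 2 for x in all_scores) - G ** 2 / N
ss_between = sum(sum(g) ** 2 / len(g) for g in groups) - G ** 2 / N
ss_within = sum(x ** 2 for x in all_scores) - sum(sum(g) ** 2 / len(g) for g in groups)

df_total, df_between, df_within = N - 1, k - 1, N - k

assert abs(ss_total - (ss_between + ss_within)) < 1e-9   # 76 = 54 + 22
assert df_total == df_between + df_within                # 8 = 2 + 6
```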

18 Calculating F (Computational)
• SS between = Σ(T²/n) – G²/N, where T is the total for each group, n is each group's sample size, G is the grand total, and N is the total sample size.
• SS within = ΣX² – Σ(T²/n)
• SS total = ΣX² – G²/N
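Putting the computational formulas together for the slide 13 data gives the full F. The sketch below also compares the hand calculation to scipy.stats.f_oneway as a sanity check; scipy is my choice of tool and is not mentioned in the slides.

```python
# Full computational route to F for the sleep-deprivation example.
from scipy import stats

groups = [[0, 4, 2], [3, 6, 6], [6, 8, 10]]
all_scores = [x for g in groups for x in g]
N, k, G = len(all_scores), len(groups), sum(all_scores)

ss_between = sum(sum(g) ** 2 / len(g) for g in groups) - G ** 2 / N   # 54
ss_within = (sum(x ** 2 for x in all_scores)
             - sum(sum(g) ** 2 / len(g) for g in groups))             # 22

F = (ss_between / (k - 1)) / (ss_within / (N - k))   # MS between / MS within
print(round(F, 2))                                   # about 7.36

F_check, p = stats.f_oneway(*groups)
print(round(F_check, 2), round(p, 3))                # same F; p is about .024
```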

19 F-Distribution
(Plot: the F distribution, with the critical value separating the common region, where the null is retained, from the rare region, where it is rejected.)
• Look up the critical value of F in the F table using the df for the numerator and the denominator.
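Instead of the printed table, the critical value can also be pulled from the F distribution directly. A sketch using scipy (again my choice of tool), with df matching the slide 13 example:

```python
# Critical value of F for alpha = .05 with df = (2, 6).
from scipy import stats

alpha = 0.05
f_crit = stats.f.ppf(1 - alpha, dfn=2, dfd=6)
print(round(f_crit, 2))   # about 5.14; an observed F of 7.36 lands in the rare region
```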

20 Effect Sizes
• The effect size for ANOVA (η²) is the squared curvilinear correlation.
• η² is the proportion of variance in the dependent variable explained by the independent variable.
• To calculate the effect size, divide SS between by SS total.
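As a quick illustration, using the sums of squares from the slide 13 example:

```python
# Eta squared = SS between / SS total for the sleep-deprivation example.
ss_between, ss_total = 54.0, 76.0
eta_squared = ss_between / ss_total
print(round(eta_squared, 2))   # about 0.71, i.e. roughly 71% of the variance explained
```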

21 Interpreting η²
• Cohen's rule of thumb:
  If η² is approximately .01, the effect size is small.
  If η² is approximately .06, the effect size is medium.
  If η² is approximately .14 or more, the effect size is large.
• Effect size is especially important when large sample sizes are used.
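Those benchmarks are easy to wrap in a small helper; the function name and the label for values below .01 are mine, not Cohen's.

```python
# Cohen's rough benchmarks for eta squared (slide 21).
def describe_eta_squared(eta_sq: float) -> str:
    if eta_sq >= 0.14:
        return "large"
    if eta_sq >= 0.06:
        return "medium"
    if eta_sq >= 0.01:
        return "small"
    return "below small"

print(describe_eta_squared(54.0 / 76.0))   # "large" for the slide 13 example
```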

22 ANOVA Assumptions
• The assumptions for the F-test are the same as for the t-test: the underlying populations are assumed to be normal with equal variances.
• Results are still valid despite violations of normality if:
  All sample sizes are close to equal.
  Sample sizes are greater than 10 per group.
• Otherwise, use a different test.
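One way to probe these assumptions in practice is with Shapiro-Wilk (normality) and Levene (equal variances) tests. This is a sketch of my own, not something covered in the slides, and with only three scores per group the checks have almost no power; it is shown only to illustrate the idea.

```python
# Informal assumption checks for the slide 13 groups.
from scipy import stats

groups = [[0, 4, 2], [3, 6, 6], [6, 8, 10]]

for g in groups:
    print(stats.shapiro(g))    # small p-values would suggest non-normality

print(stats.levene(*groups))   # a small p-value would suggest unequal variances
```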

23 Cautions
• The ANOVA presented in the text assumes independent samples.
• With matched samples or repeated measures, use a different form of ANOVA.
• The sample sizes shown in the text are kept small in order to simplify the calculations; samples this small should not be used in practice.

