ANOVA, Continued PSY440 July 1, 2008

Quick Review of 1-way ANOVA When do you use one-way ANOVA? What are the components of the F Ratio? How do you calculate the degrees of freedom for a one-way ANOVA? What are the null and alternative hypotheses in a typical one-way ANOVA? What are the assumptions of ANOVA? Why can’t you test a “one-tailed” hypothesis with ANOVA? Questions before we move on?

The structural model and ANOVA The structural model is all about deviations: the score (X), the group mean (M), and the grand mean (GM). A score’s deviation from its group mean is (X-M), the group mean’s deviation from the grand mean is (M-GM), and a score’s deviation from the grand mean is (X-GM). Note that (X-GM) = (X-M) + (M-GM).
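To make the structural model concrete, here is a minimal Python sketch (not from the original slides) with made-up scores for three groups; it verifies that the squared deviations add up, i.e., SS total = SS between + SS within.

import numpy as np

# Hypothetical scores for three groups (made-up numbers)
groups = {"A": [3, 5, 4], "B": [6, 7, 8], "C": [9, 10, 11]}

all_scores = np.concatenate([np.asarray(v, dtype=float) for v in groups.values()])
grand_mean = all_scores.mean()                        # GM

ss_total = ((all_scores - grand_mean) ** 2).sum()     # sum of (X - GM)^2
ss_within = 0.0                                       # sum of (X - M)^2
ss_between = 0.0                                      # sum of n * (M - GM)^2
for scores in groups.values():
    x = np.asarray(scores, dtype=float)
    m = x.mean()                                      # group mean M
    ss_within += ((x - m) ** 2).sum()
    ss_between += len(x) * (m - grand_mean) ** 2

print(ss_total, ss_between + ss_within)               # the two numbers match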

1 factor ANOVA Null hypothesis, H0: all the groups are equal (X̄A = X̄B = X̄C). Alternative hypothesis, HA: not all the groups are equal; this covers several possible patterns, e.g., X̄A ≠ X̄B ≠ X̄C, X̄A ≠ X̄B = X̄C, X̄A = X̄B ≠ X̄C, or X̄A = X̄C ≠ X̄B. The ANOVA tests the general alternative (not all the groups are equal), not any one specific pattern.

Planned Comparisons Simple comparisons Complex comparisons Bonferroni procedure –Use a more stringent significance level for each comparison

Which follow-up test? Planned comparisons –A set of specific comparisons that you “planned” to do in advance of conducting the overall ANOVA – you don’t actually need to calculate the “omnibus” ANOVA F statistic in order to test planned comparisons. –General rule of thumb: don’t exceed the number of conditions that you have (or even stick with one fewer), and make sure comparisons are orthogonal (more on this in a minute) Post-hoc tests –A set of comparisons that you decided to examine only after you find a significant (reject H0) ANOVA

Planned Comparisons Different types –Simple comparisons - testing two groups –Complex comparisons - testing combined groups –Bonferroni procedure: use a more stringent significance level for each comparison Basic procedure: –Within-groups population variance estimate (denominator) –Between-groups population variance estimate of the two groups of interest (numerator) –Figure F in the usual way
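As a quick, hypothetical illustration of the Bonferroni procedure (not from the original slides): with an overall alpha of .05 and three planned comparisons, each comparison is tested at .05/3.

alpha = 0.05
n_comparisons = 3                      # e.g., three planned comparisons
per_comparison_alpha = alpha / n_comparisons
print(round(per_comparison_alpha, 4))  # 0.0167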

Post-hoc tests Generally, you are testing all of the possible comparisons (rather than just a specific few) –Different types Tukey’s HSD test (use if testing only pairwise comparisons) Scheffe test (use if testing more complex comparisons) Others (Fisher’s LSD, Newman-Keuls test, Duncan test) –Generally they differ with respect to how conservative they are.
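If you want to run one of these tests outside SPSS, the statsmodels Python package provides Tukey's HSD; here is a minimal sketch using made-up scores and group labels (the data and names are illustrative, not from the slides).

import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical scores and group labels for three groups
scores = np.array([3, 5, 4, 6, 7, 8, 9, 10, 11], dtype=float)
labels = np.array(["A"] * 3 + ["B"] * 3 + ["C"] * 3)

# All pairwise comparisons, corrected with Tukey's HSD
result = pairwise_tukeyhsd(endog=scores, groups=labels, alpha=0.05)
print(result.summary())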

Planned Comparisons & Post-Hoc Tests as Contrasts A contrast is basically a way of assigning numeric values to your grouping variable in a manner that allows you to test a specific difference between two means (or between one mean and a weighted average of two or more other means).

Contrasts for follow-up tests Think of the formula for the independent samples t-test: t = (X̄A - X̄B) divided by the estimated standard error of the difference. Typically, we are testing the null hypothesis that µA - µB = 0, so the numerator reduces to the difference between the two sample means. This is a simple contrast comparing two groups. The contrast is an array of multipliers defining a linear combination of the means. In this case, the array is (1, -1). The mean of sample A is multiplied by 1, the mean of sample B is multiplied by -1, and the two products are added together, forming the numerator of the t statistic.

Contrasts for follow-up tests In the case of one-way ANOVA, contrasts can be used in a similar way to test specific statements about the equality or inequality of particular means in the analysis. In a one-way ANOVA with three groups, contrasts such as: (0,1,-1), (1,0,-1), or (1,-1,0) define linear combinations of means that test whether a specified pair of means is equal, while ignoring the third mean. The null hypothesis is that the linear combination of means is equal to zero (similar to independent-samples t-test).

Contrasts for follow-up tests Contrasts can also define linear combinations of means to test more complex hypotheses (such as mean one is equal to the average of mean two and mean three). For example: (1, -.5, -.5) The weights must sum to 0, because the null hypothesis is always that the linear combination of means defined by the contrast is equal to 0.
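As a concrete (made-up) illustration of how a contrast is just a weighted sum of the group means, here is a small Python sketch showing the simple contrast (1, 0, -1) and the complex contrast (1, -.5, -.5); the means are hypothetical.

import numpy as np

means = np.array([4.0, 7.0, 10.0])        # hypothetical means for groups A, B, C

simple = np.array([1, 0, -1])             # compares mean A with mean C
complex_ = np.array([1, -0.5, -0.5])      # compares mean A with the average of B and C

print(simple @ means)                     # 4 - 10 = -6.0
print(complex_ @ means)                   # 4 - (7 + 10)/2 = -4.5
print(simple.sum(), complex_.sum())       # both sets of weights sum to 0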

Contrasts for follow-up tests Follow-up tests are not always independent of each other. The hypothesis that A>B=C is not independent of the hypothesis that A>B>C, because both include the inequality A>B. For planned comparisons, contrasts tested should be independent. Independence of contrasts can be tested by summing the cross-products of the elements (see next slide)

Contrasts for follow-up tests Consider two contrasts, testing first the pairwise comparison between means a and c, and second whether mean b is equal to the average of means a and c: (1, 0, -1) and (.5, -1, .5). Sum of cross-products = (1*.5) + (0*-1) + (-1*.5) = .5 + 0 + (-.5) = 0, so these contrasts are independent. Now consider contrasts testing one pairwise comparison (b vs. c) and one comparison between the average of means a and b vs. mean c: (.5, .5, -1) and (0, 1, -1). Sum of cross-products = (.5*0) + (.5*1) + (-1*-1) = 0 + .5 + 1 = 1.5, so these contrasts are not independent (both are comparing b and c).
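The cross-product check from this slide is easy to automate. A minimal sketch, assuming equal group sizes as in the examples above:

def orthogonal(c1, c2):
    # Two contrasts are independent (orthogonal) when the sum of the
    # element-by-element cross-products is zero (equal group sizes assumed).
    return sum(a * b for a, b in zip(c1, c2)) == 0

print(orthogonal((1, 0, -1), (0.5, -1, 0.5)))   # True: independent
print(orthogonal((0.5, 0.5, -1), (0, 1, -1)))   # False: both compare b and c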

Contrasts for follow-up tests SPSS will let you specify contrasts for planned comparisons that are not independent, but this is not recommended practice. If you want to test several dependent contrasts, you should use a post-hoc correction such as Scheffe. If you want to test all pairwise comparisons, you can use Tukey’s HSD correction.

Fixed vs. Random Factors in ANOVA One-way ANOVAs can use grouping variables that are fixed or random. –Fixed: All levels of the variable of interest are represented by the variable (e.g., treatment and control, male and female). –Random: The grouping variable represents a random selection of levels of that variable, sampled from a population of levels (e.g., observers). –For one-way ANOVA, the math is the same either way, but the logic of the test is a little different: with a fixed factor you test whether the group means are equal, while with a random factor you test whether the between-group variance is 0.

ANOVA in SPSS Let’s see how to do a between groups 1-factor ANOVA in SPSS (and the other tests too) Analyze=>Compare Means=>One-Way ANOVA Your grouping variable is the “factor” and your continuous (outcome) variable goes in the “dependent list” box. Specify contrasts for planned comparisons Specify any post-hoc tests you want to run Under “options,” you can request descriptive statistics (e.g., to see group means)
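For comparison, the same between groups 1-factor ANOVA can be run in Python; a minimal sketch with made-up data using scipy's f_oneway (the group names and numbers are illustrative, not from the slides):

from scipy import stats

# Hypothetical data: scores from three independent groups
group_a = [3, 5, 4, 6]
group_b = [6, 7, 8, 7]
group_c = [9, 10, 11, 9]

f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)
print(f_stat, p_value)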

Within groups (repeated measures) ANOVA Basics of within groups ANOVA –Repeated measures –Matched samples Computations Within groups ANOVA in SPSS

Example Suppose that you want to compare three brand name pain relievers. –Give each person a drug, wait 15 minutes, then ask them to keep their hand in a bucket of cold water as long as they can. The next day, repeat (with a different drug) Dependent variable: time in ice water Independent variable: 4 levels, within groups –Placebo –Drug A –Drug B –Drug C

Statistical analysis follows design The 1 factor within groups ANOVA: –One group –Repeated measures –More than 2 scores per subject

Statistical analysis follows design The 1 factor within groups ANOVA: –One group –Repeated measures –More than 2 scores per subject - OR - –More than 2 groups –Matched samples –Matched groups

Within-subjects ANOVA Conditions: Placebo, Drug A, Drug B, Drug C (with condition means X̄P, X̄A, X̄B, X̄C). n = 5 participants. Each participates in every condition (4 of these).

Within-subjects ANOVA Hypothesis testing: a five step program –Step 1: State your hypotheses –Step 2: Set your decision criteria –Step 3: Collect your data –Step 4: Compute your test statistics Compute your estimated variances (2 stages of partitioning used) Compute your F-ratio Compute your degrees of freedom (there are even more now) –Step 5: Make a decision about your null hypothesis

Step 4: Computing the F-ratio Analyzing the sources of variance –Describe the total variance in the dependent measure (the condition means X̄P, X̄A, X̄B, X̄C): why are these scores different? Sources of variability –Between groups –Within groups (individual differences, left-over variance/error) Because we use the same people in each condition, we can figure out how much of the variability comes from the individuals and remove it from the analysis

Partitioning the variance Total variance Stage 1 Between groups variance Within groups variance

Partitioning the variance Total variance Stage 1 Between groups variance Within groups variance Stage 2 Between subjects variance Error variance

Partitioning the variance Total variance Stage 1: Between groups variance (1) Treatment effect (2) Error or chance (without individual differences); Within groups variance (1) Individual differences (2) Other error. Stage 2 (splitting the within groups variance): Between subjects variance (1) Individual differences; Error variance (1) Other error (without individual differences). Because we use the same people in each condition, none of this variability comes from having different people in different conditions

Step 4: Computing the F-ratio The F ratio –Ratio of the between-groups variance estimate to the population error variance estimate –F-ratio = observed variance / variance from chance

Partitioning the variance Total variance Stage 1: Between groups variance (1) Treatment effect (2) Error or chance (without individual differences); Within groups variance (1) Individual differences (2) Other error. Stage 2 (splitting the within groups variance): Between subjects variance (1) Individual differences; Error variance (1) Other error (without individual differences)

Partitioning the variance Total variance Stage 1 Between groups variance Within groups variance

Partitioning the variance Placebo, Drug A, Drug B, Drug C

Partitioning the variance Total variance Stage 1 Between groups variance Within groups variance Stage 2 Between subjects variance Error variance

Partitioning the variance Placebo, Drug A, Drug B, Drug C Between subjects variance: what is the person mean? The average score for each person.

Partitioning the variance Placebo, Drug A, Drug B, Drug C Between subjects variance: what is the person mean? The average score for each person.

Partitioning the variance Total variance Stage 1 Between groups variance Within groups variance Stage 2 Between subjects variance Error variance

Partitioning the variance Placebo, Drug A, Drug B, Drug C Error variance

Partitioning the variance Total variance Stage 1 Between groups variance Within groups variance Stage 2 Between subjects variance Error variance

Partitioning the variance Mean Squares (Variance) Between groups variance and Error variance: now we return to variance, but we call it Mean Square (MS). Recall: MS = SS / df.

Partitioning the variance Total variance Stage 1 Between groups variance Within groups variance Stage 2 Between subjects variance Error variance

Within-subjects ANOVA The F table –Need two df’s: df between (numerator) and df error (denominator) –Values in the table correspond to critical F’s: reject the H0 if your computed value is greater than or equal to the critical F –Separate tables for 0.05 & 0.01 Do we reject or fail to reject the H0? –From the table (assuming 0.05) with 3 and 12 degrees of freedom the critical F = 3.49 –So we reject H0 and conclude that not all groups are the same
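Instead of looking the critical value up in a table, you can compute it directly; a small sketch using scipy (the 3 and 12 degrees of freedom are the ones from the example above):

from scipy import stats

alpha = 0.05
df_between, df_error = 3, 12
critical_f = stats.f.ppf(1 - alpha, df_between, df_error)
print(round(critical_f, 2))   # 3.49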

Within-subjects ANOVA in SPSS –Setting up the file –Running the analysis –Looking at the output

Within-subjects ANOVA in SPSS Setting up the file: –Each person has one line of data, with each of the “conditions” represented as different variables. –Our chocolate chip cookie data set is a good example. Each type of cookie (jewel, oatmeal, and chips ahoy) is a different “condition” that everyone in the sample experienced. –We created three “sets” of similar variables, with data from each person entered into all three sets of variable fields.

Within-subjects ANOVA in SPSS Running the analysis: –Analyze=>General Linear Model=>Repeated Measures –Define your within-subjects factor (give it a name and specify number of levels - this is the number of variables it will be based on, then click on define and select variables for each level of the factor). –Can request descriptives and contrasts (though the contrasts are defined in a different manner)

Within-subjects ANOVA in SPSS Interpreting the output –Output is complex, and generally set up to accommodate much more complex designs than one- way repeated measures ANOVA, so for current purposes much can be ignored. –Scroll down to where it says: Tests of Within-Subjects Effects & find between group and error sums of squares, df, F, and Sig. for “sphericity assumed.”
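If you want to run the same one-way repeated measures ANOVA outside SPSS, statsmodels' AnovaRM expects long-format data (one row per person per condition). A minimal sketch with made-up data; the variable names and numbers are illustrative:

import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Hypothetical long-format data: 5 participants x 4 drug conditions,
# one "time in ice water" score per person per condition
data = pd.DataFrame({
    "subject":   [s for s in range(5) for _ in range(4)],
    "condition": ["placebo", "drugA", "drugB", "drugC"] * 5,
    "time":      [5, 7, 8, 9,   4, 6, 7, 8,   6, 8, 9, 10,
                  5, 6, 8, 9,   4, 7, 7, 9],
})

result = AnovaRM(data, depvar="time", subject="subject", within=["condition"]).fit()
print(result)   # F, df, and p for the within-subjects factor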

Factorial ANOVA Basics of factorial ANOVA –Interpretations Main effects Interactions –Computations –Assumptions, effect sizes, and power –Other Factorial Designs More than two factors Within factorial ANOVAs

Statistical analysis follows design The factorial (between groups) ANOVA: –More than two groups –Independent groups –More than one Independent variable

Factorial experiments Two or more factors –Factors - independent variables –Levels - the levels of your independent variables A 2 x 3 design means two independent variables, one with 2 levels and one with 3 levels. The number of “conditions” or “groups” is calculated by multiplying the levels, so a 2 x 3 design has 6 different conditions (e.g., a table with rows A1, A2 and columns B1, B2, B3)

Factorial experiments Two or more factors (cont.) –Main effects - the effects of your independent variables ignoring (collapsed across) the other independent variables –Interaction effects - how your independent variables affect each other Example: 2x2 design, factors A and B Interaction: –At A1, B1 is bigger than B2 –At A2, B1 and B2 don’t differ

Results So there are lots of different potential outcomes: A = main effect of factor A B = main effect of factor B AB = interaction of A and B With 2 factors there are 8 basic possible patterns of results: 1) No effects at all 2) A only 3) B only 4) AB only 5) A & B 6) A & AB 7) B & AB 8) A & B & AB

2 x 2 factorial design The design can be laid out as a table with columns A1 and A2 and rows B1 and B2, holding the four condition means (A1B1, A2B1, A1B2, A2B2). The marginal means (A1 mean, A2 mean, B1 mean, B2 mean) are used to evaluate the main effect of A and the main effect of B. The interaction of AB is evaluated by comparing the effect of A at B1 with the effect of A at B2.

Examples of outcomes [Graphs omitted: a series of 2 x 2 plots illustrating different possible outcome patterns; each original slide marked which of the main effect of A, the main effect of B, and the A x B interaction was present (√) or absent (X), with the final example showing all three effects present.]

Factorial Designs Benefits of factorial ANOVA (over doing separate 1-way ANOVA experiments) –Interaction effects –One should always consider the interaction effects before trying to interpret the main effects –Adding factors decreases the variability –Because you’re controlling more of the variables that influence the dependent variable –This increases the power of the statistical tests

Basic Logic of the Two-Way ANOVA Same basic math as we used before, but now there are additional ways to partition the variance The three F ratios –Main effect of Factor A (rows) –Main effect of Factor B (columns) –Interaction effect of Factors A and B

Partitioning the variance Total variance Stage 1 Between groups variance Within groups variance Stage 2 Factor A variance Factor B variance Interaction variance

Figuring a Two-Way ANOVA Sums of squares: the total sum of squares is partitioned into between groups and within groups, and the between groups sum of squares is further partitioned into SS for Factor A, SS for Factor B, and SS for the A x B interaction.

Figuring a Two-Way ANOVA Degrees of freedom: df for A = number of levels of A minus 1; df for B = number of levels of B minus 1; df for the interaction = df A times df B; df within = total N minus the number of cells (conditions).

Figuring a Two-Way ANOVA Mean squares (estimated variances): each mean square is the corresponding sum of squares divided by its degrees of freedom (MS = SS / df).

Figuring a Two-Way ANOVA F-ratios: F for A = MS A / MS within; F for B = MS B / MS within; F for the A x B interaction = MS AB / MS within.

Figuring a Two-Way ANOVA ANOVA table for two-way ANOVA
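An equivalent two-way ANOVA table can be produced in Python with statsmodels; a minimal sketch with made-up data for a 2 x 3 between-groups design (the factor names, levels, and scores are illustrative, not from the slides):

import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical 2 x 3 between-groups design (2 scores per cell, made-up numbers)
data = pd.DataFrame({
    "difficulty": ["easy"] * 6 + ["hard"] * 6,
    "arousal":    ["low", "low", "medium", "medium", "high", "high"] * 2,
    "score":      [3, 4, 6, 7, 8, 9,   2, 3, 5, 4, 3, 2],
})

# C(difficulty) * C(arousal) requests both main effects and the interaction
model = ols("score ~ C(difficulty) * C(arousal)", data=data).fit()
print(sm.stats.anova_lm(model, typ=2))   # SS, df, F, and p for A, B, and A x B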

Example Factor B: Arousal Level (B1 Low, B2 Medium, B3 High) crossed with Factor A: Task Difficulty (A1 Easy, A2 Difficult). [The cell data and worked calculations shown across the original slides are not reproduced here.]

Example ANOVA table. Source: Between (A, B, AB), Within, Total; columns: SS, df, MS, F. [The numeric values from the original slide are not reproduced; the √ marks indicated that the A, B, and AB effects were all significant.]

Assumptions in Two-Way ANOVA Populations follow a normal curve Populations have equal variances Assumptions apply to the populations that go with each cell

Extensions & Special Cases of Factorial ANOVA Three-way and higher ANOVA designs Repeated measures ANOVA Mixed factorial ANOVA

Factorial ANOVA in Research Articles A two-factor ANOVA yielded a significant main effect of voice, F(2, 245) = 26.30, p < .001. As expected, participants responded less favorably in the low voice condition (M = 2.93) than in the high voice condition (M = 3.58). The mean rating in the control condition (M = 3.34) fell between these two extremes. Of greater importance, the interaction between culture and voice was also significant, F(2, 245) = 4.11, p < .02.

Repeated Measures & Mixed Factorial ANOVA Basics of repeated measures factorial ANOVA –Using SPSS Basics of mixed factorial ANOVA –Using SPSS Similar to the between groups factorial ANOVA –Main effects and interactions –Multiple sources for the error terms (different denominators for each main effect)

Example Suppose that you are interested in how sleep deprivation impacts performance. You test 5 people on two tasks (motor and math) over the course of time without sleep (24 hrs, 36 hrs, and 48 hrs). Dependent variable is number of errors in the tasks. –Both factors are manipulated as within subject variables –Need to conduct a within groups factorial ANOVA
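A within groups factorial like this one can be analyzed outside SPSS with statsmodels' AnovaRM by listing both within-subject factors. A minimal sketch with randomly generated, made-up data (variable names are illustrative):

import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)

# Hypothetical long-format data: 5 participants x 2 tasks x 3 levels of hours awake
rows = []
for subject in range(5):
    for task in ["motor", "math"]:
        for hours in [24, 36, 48]:
            errors = hours / 12 + (2 if task == "math" else 0) + rng.normal(0, 1)
            rows.append({"subject": subject, "task": task,
                         "hours": hours, "errors": errors})
data = pd.DataFrame(rows)

result = AnovaRM(data, depvar="errors", subject="subject",
                 within=["task", "hours"]).fit()
print(result)   # F tests for task, hours, and the task x hours interaction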

Example Factor B: Hours awake (B1 24, B2 36, B3 48) crossed with Factor A: Task (A1 Motor, A2 Math)

Example ANOVA table (within-groups factorial). Source: A, Error (A), B, Error (B), AB, Error (AB); columns: SS, df, MS, F. [Numeric values from the original slide are not reproduced.]

Example It has been suggested that pupil size increases during emotional arousal. A researcher presents people with different types of stimuli (designed to elicit different emotions). The researcher examines whether similar effects are demonstrated by men and women. –Type of stimuli was manipulated within subjects –Sex is a between subjects variable –Need to conduct a mixed factorial ANOVA
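For a mixed design like this, one option outside SPSS is the third-party pingouin package, which provides a mixed_anova function. A minimal sketch, assuming pingouin is installed; the data are randomly generated and the variable names are illustrative:

import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(1)

# Hypothetical long-format data: 10 participants (5 men, 5 women) x 3 stimulus types
rows = []
for subject in range(10):
    sex = "men" if subject < 5 else "women"
    for stimulus in ["neutral", "pleasant", "aversive"]:
        pupil = 3.0 + (0.5 if stimulus == "aversive" else 0.0) + rng.normal(0, 0.2)
        rows.append({"subject": subject, "sex": sex,
                     "stimulus": stimulus, "pupil": pupil})
data = pd.DataFrame(rows)

# stimulus is the within-subjects factor, sex is the between-subjects factor
print(pg.mixed_anova(data=data, dv="pupil", within="stimulus",
                     subject="subject", between="sex"))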

Example Factor B: Stimulus (B1 Neutral, B2 Pleasant, B3 Aversive) crossed with Factor A: Sex (A1 Men, A2 Women)

Example ANOVA table (mixed factorial). Between subjects: A, Error (A); Within subjects: B, AB, Error (B); columns: SS, df, MS, F. [Numeric values from the original slide are not reproduced.]