1 Psych 5510/6510 Chapter 14 Repeated Measures ANOVA: Models with Nonindependent ERRORs Part 2 (Crossed Designs) Spring, 2009

2 Nonindependence in Crossed Designs Now we are going to look at crossed designs. Example: each subject is measured once in each of the two conditions (Experimenter Absent and Experimenter Present). Thus the effect of the independent variable now shows up within subjects.

3 Design

Experimenter Absent:  S1, S2, S3, S4, S5, S6, S7, S8
Experimenter Present: S1, S2, S3, S4, S5, S6, S7, S8

Every subject appears in both conditions; subjects are crossed with the treatment.

4 Data: note two scores per subject.

Subject   Y1: Exp. Absent   Y2: Exp. Present
S1               7                  8
S2               5                  5
S3               6                  6
S4               7                  9
S5               8                  8
S6               7                  7
S7               5                  6
S8               6                  8

5 Inappropriate Analysis

Ignore that there are two scores from each subject (one in each group) and contrast code group (X):

Subject   Y    X (group)
S1        7      -1
S2        5      -1
S3        6      -1
S4        7      -1
S5        8      -1
S6        7      -1
S7        5      -1
S8        6      -1
S1        8       1
S2        5       1
S3        6       1
S4        9       1
S5        8       1
S6        7       1
S7        6       1
S8        8       1

6 Inappropriate Analysis

Model C: Ŷi = β0            estimated as  Ŷi = 6.75
Model A: Ŷi = β0 + β1·Xi    estimated as  Ŷi = 6.75 + 0.375·Xi

7 Inappropriate Analysis (cont.)

Ŷi = 6.75 + 0.375·Xi

Source               b       SS      df    MS      F*      PRE     p
Model (Xi)   SSR     0.375    2.25    1    2.25    1.52    .098    ≈ .24
Error     SSE(A)             20.75   14    1.48
Total     SSE(C)             23.00   15
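For readers who want to reproduce this inappropriate analysis, here is a minimal sketch in Python (standing in for the course's SPSS steps; the variable names are mine) using the data as tabled above. It stacks the 16 scores, contrast codes group, and compares Model C with Model A:

import numpy as np
from scipy import stats

absent  = np.array([7, 5, 6, 7, 8, 7, 5, 6])   # Y1: Experimenter Absent
present = np.array([8, 5, 6, 9, 8, 7, 6, 8])   # Y2: Experimenter Present

# Stack all 16 scores and contrast code group: -1 = Absent, +1 = Present
y = np.concatenate([absent, present])
x = np.concatenate([-np.ones(8), np.ones(8)])

b1, b0, r, p, se = stats.linregress(x, y)      # Model A: Yhat = b0 + b1*X
sse_c = np.sum((y - y.mean())**2)              # Model C error (Total SS) = 23
sse_a = np.sum((y - (b0 + b1 * x))**2)         # Model A error = 20.75
ssr   = sse_c - sse_a                          # 2.25
f     = (ssr / 1) / (sse_a / (len(y) - 2))     # F*(1, 14), about 1.5
pre   = ssr / sse_c                            # about .10
print(b0, b1, ssr, sse_a, f, pre, p)           # p is the (misleading) p-value from this analysis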

8 Residuals from the inappropriate analysis

Subject   Exp. Absent   Exp. Present
S1           0.625         0.875
S2          -1.375        -2.125
S3          -0.375        -1.125
S4           0.625         1.875
S5           1.625         0.875
S6           0.625        -0.125
S7          -1.375        -1.125
S8          -0.375         0.875

Positive nonindependence: a subject whose residual is positive (or negative) in one condition tends to have a residual of the same sign in the other condition.

9 Appropriate Approach Due to the likely nonindependence among the scores from the same subject, the solution is once again to change the nonindependent scores into one score per person. Remember how we handled this last semester when we learned the t test for dependent groups: we computed a 'difference' score for each subject, reflecting how their score changed from the first measure to the second, and then analyzed the difference scores.

10 From t Test for Dependent Groups

Subject   Y1: Exp. Absent   Y2: Exp. Present   Difference
S1               7                  8              -1
S2               5                  5               0
S3               6                  6               0
S4               7                  9              -2
S5               8                  8               0
S6               7                  7               0
S7               5                  6              -1
S8               6                  8              -2

The 'difference' scores measure the effect of the independent variable on each subject; we then test whether the mean difference score differs significantly from zero.
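The dependent-groups logic can be sketched in a few lines of Python (scipy here is only a stand-in for the hand computations from last semester); the paired t test is simply a one-sample t test on the difference scores:

import numpy as np
from scipy import stats

absent  = np.array([7, 5, 6, 7, 8, 7, 5, 6])
present = np.array([8, 5, 6, 9, 8, 7, 6, 8])

diff = absent - present              # one difference score per subject
t, p = stats.ttest_1samp(diff, 0)    # H0: mean difference = 0
print(t, p)                          # same result as stats.ttest_rel(absent, present); p is about .048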

11 W1 Scores We are going to do something very similar, using the same W formula as before but with different deltas. The deltas come from our contrast code (X = -1 and 1). We plug the two scores for each subject into the formula to arrive at a W1 score for that subject. For the first subject:

W1 = (δ1·Y1 + δ2·Y2) / √(δ1² + δ2²) = ((-1)(7) + (1)(8)) / √2 = 0.71

12 W1i Scores

Subject   Y1: Exp. Absent   Y2: Exp. Present    W1i
S1               7                  8           0.71
S2               5                  5           0.00
S3               6                  6           0.00
S4               7                  9           1.41
S5               8                  8           0.00
S6               7                  7           0.00
S7               5                  6           0.71
S8               6                  8           1.41

Note that when a subject gets the same score in both Y1 and Y2, W1i = 0.

13

Subject   Y1: Exp. Absent   Y2: Exp. Present   Difference    W1
S1               7                  8              -1        0.71
S2               5                  5               0        0.00
S3               6                  6               0        0.00
S4               7                  9              -2        1.41
S5               8                  8               0        0.00
S6               7                  7               0        0.00
S7               5                  6              -1        0.71
S8               6                  8              -2        1.41

W1 is a measure of the difference between each subject's two scores. If the independent variable had no effect, the mean value of the W1 scores would be zero. The W1 scores have the opposite sign of the difference scores simply because I used (-1 and 1) for the contrast rather than (1 and -1).
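As a sketch of that computation (assuming, as in Part 1, that a W score is the contrast-weighted sum of a subject's scores divided by the square root of the sum of the squared weights), the W1 scores can be generated in Python like this:

import numpy as np

absent  = np.array([7, 5, 6, 7, 8, 7, 5, 6])    # Y1
present = np.array([8, 5, 6, 9, 8, 7, 6, 8])    # Y2
Y = np.column_stack([absent, present])          # one row per subject

deltas = np.array([-1, 1])                      # from the contrast code X = -1, 1
W1 = Y @ deltas / np.sqrt(np.sum(deltas**2))    # W1i = (-Y1i + Y2i) / sqrt(2)
print(W1)                                       # 0.71, 0.00, 0.00, 1.41, 0.00, 0.00, 0.71, 1.41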

14 Expected Value If we look at the mean value of W1 across subjects we find it is

mean(W1) = ((-1)·Ȳ1 + (1)·Ȳ2) / √2 = (Ȳ2 - Ȳ1) / √2

which will equal 0 if there is no difference between the means of the two conditions. So, if the independent variable had no effect we would expect the mean of the W scores to equal zero... consequently...

15 Approach We then use the multiple regression approach of Chapter 5 to test whether the mean of the variable we are modeling (i.e., W1) is equal to some value (i.e., zero).

16 The Models and Hypotheses Following the procedures of Chapter 5:

Model C: Ŵi = B0, where B0 = 0   (PC = 0)
Model A: Ŵi = β0, where β0 = μW  (PA = 1)

H0: β0 = B0  (i.e., μW = 0)
HA: β0 ≠ B0  (i.e., μW ≠ 0)

17 Computations Testing whether the mean of the W1 scores differs from zero yields p = .0479.
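A short Python sketch of these computations (using the W1 scores from the previous slides; the variable names are mine):

import numpy as np
from scipy import stats

W1 = np.array([1, 0, 0, 2, 0, 0, 1, 2]) / np.sqrt(2)   # the eight W1 scores

n      = len(W1)
sse_c  = np.sum((W1 - 0)**2)             # Model C predicts 0 for everyone: 5.00
sse_a  = np.sum((W1 - W1.mean())**2)     # Model A predicts the mean of W1: 2.75
ssr    = sse_c - sse_a                   # 2.25
f_star = (ssr / 1) / (sse_a / (n - 1))   # F*(1, 7), about 5.7
pre    = ssr / sse_c                     # .45
p      = stats.f.sf(f_star, 1, n - 1)    # .0479
print(f_star, pre, p)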

18 Appropriate Summary Table

Source               b       SS     df    MS      F*      PRE     p
Model (Xi)   SSR     0.375   2.25    1    2.25    5.73    .45     .0479
Error     SSE(A)             2.75    7    0.39
Total     SSE(C)             5.00    8

In the table above the value of b has been changed back to the metric of the original Y scores by dividing the estimate from the W1 analysis (the mean of W1, 0.53) by the denominator of the W formula, √2 (this is a convention). Compare this summary table to the inappropriate analysis: there is a huge drop in SSE(A) and SSE(C) when doing it this way, while SSR is the same in both approaches.

19 Why the Drop in Error? With the original Y scores, the variance between the subjects within each group is part of the error that can't be explained by the independent variable. With the W1 analysis, the variance of the W1 scores is the part of the error that can't be explained by the independent variable. Remember that the W1 scores measure the effect of the IV on each subject; in our example the IV had a pretty similar effect on everyone, so the W1 scores didn't vary much. Thus what can't be explained by the independent variable is less with the W1 scores than with the Y scores (see next slide).

20

Subject   Y1: Exp. Absent   Y2: Exp. Present    W1
S1               7                  8           0.71
S2               5                  5           0.00
S3               6                  6           0.00
S4               7                  9           1.41
S5               8                  8           0.00
S6               7                  7           0.00
S7               5                  6           0.71
S8               6                  8           1.41

The scores within Y1 and Y2 vary more than the scores within W1, so the analysis of the W1 scores will be more powerful. This is common in repeated measures designs: the effect of the independent variable (measured by W1) shows less variability than the differences between subjects (as reflected in their Y scores).
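That comparison is easy to verify directly; a small sketch (same data and names as in the snippets above) contrasts the error left over in the two analyses:

import numpy as np

absent  = np.array([7, 5, 6, 7, 8, 7, 5, 6])
present = np.array([8, 5, 6, 9, 8, 7, 6, 8])
W1      = (present - absent) / np.sqrt(2)

# Error when the 16 Y scores are analyzed: spread of the scores around their group means
sse_y = np.sum((absent - absent.mean())**2) + np.sum((present - present.mean())**2)   # 20.75

# Error when the 8 W1 scores are analyzed: spread of the W1 scores around their mean
sse_w = np.sum((W1 - W1.mean())**2)                                                   # 2.75
print(sse_y, sse_w)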

21 The Error Term What is MS error in the summary table? 1) Model A uses the mean of W to predict each W score. 2) W measures the effect of the IV on each individual. 3) If the W scores differ from each other (i.e., differ from the mean of W), that is because the IV has different effects on different individuals, and there will be error in the model...

Source               b       SS     df    MS      F*      PRE     p
Model (Xi)   SSR     0.375   2.25    1    2.25    5.73    .45     .0479
Error     SSE(A)             2.75    7    0.39
Total     SSE(C)             5.00    8

22 Thus... Thus the error of Model A reflects how the effect of the IV varies across individuals; in other words, the error of the model is the interaction between the treatment (IV) and the individual subjects.
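That claim can be checked numerically. The sketch below (my own, using the data from this handout) removes the subject means and the treatment means from the Y scores and shows that the remaining treatment-by-subject interaction SS equals SSE(A) from the W1 analysis:

import numpy as np

Y = np.array([[7, 8], [5, 5], [6, 6], [7, 9],      # rows = subjects S1..S8
              [8, 8], [7, 7], [5, 6], [6, 8]],     # columns = Absent, Present
             dtype=float)

grand     = Y.mean()
row_means = Y.mean(axis=1, keepdims=True)          # each subject's mean
col_means = Y.mean(axis=0, keepdims=True)          # each treatment mean

# Treatment x Subject interaction: what remains after subject and treatment effects are removed
ss_interaction = np.sum((Y - row_means - col_means + grand)**2)   # 2.75

W1    = (Y[:, 1] - Y[:, 0]) / np.sqrt(2)
sse_a = np.sum((W1 - W1.mean())**2)                               # also 2.75
print(ss_interaction, sse_a)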

23 Full Summary Table for the Crossed Design

Source                              SS      df
Between subjects                    18.00    7
Within subjects                      5.00    8
   Treatment (Model, Xi)             2.25    1
   Treatment × Subject (Error)       2.75    7
Total                               23.00   15

The within-subjects rows are the analysis we just accomplished by using W scores, which is what we are really interested in. The between-subjects row represents what we lost when we moved to W scores; it is included just to be complete. SS Total is the SS of all of the Y scores (two per subject), and SS BetweenS is found by SS Total - SS WithinS. The same goes for the df.
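The partition in the table can be reproduced with a few lines of Python (a sketch under the same assumptions as the earlier snippets):

import numpy as np

Y = np.array([[7, 8], [5, 5], [6, 6], [7, 9],
              [8, 8], [7, 7], [5, 6], [6, 8]], dtype=float)   # subjects x conditions

grand      = Y.mean()
ss_total   = np.sum((Y - grand)**2)                            # 23,   df = 15
ss_within  = np.sum((Y - Y.mean(axis=1, keepdims=True))**2)    # 5,    df = 8
ss_between = ss_total - ss_within                              # 18,   df = 7

# The within-subjects part splits into the treatment effect and its error
W1           = (Y[:, 1] - Y[:, 0]) / np.sqrt(2)
ss_treatment = len(W1) * W1.mean()**2                          # 2.25, df = 1
ss_txs       = np.sum((W1 - W1.mean())**2)                     # 2.75, df = 7
print(ss_total, ss_between, ss_within, ss_treatment, ss_txs)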

24 More on Crossed Designs What if we have three levels to our independent variable and subjects are crossed with this variable?

Group a1: S1, S2, S3, S4, S5, S6
Group a2: S1, S2, S3, S4, S5, S6
Group a3: S1, S2, S3, S4, S5, S6

25 Data

Subject   Group a1   Group a2   Group a3
S1            5          7          2
S2            …          …          …
S3            …          …          …
S4            8          8          1
S5            …          …          …
S6            …          …          …

Note the large within-group variance.

26 With three levels in our independent variable we are going to need two contrasts to completely code it. Let's say we select:

Contrast 1 (first group vs. the other two groups combined): λ11 = -2, λ12 = 1, λ13 = 1
Contrast 2 (second group vs. third group): λ21 = 0, λ22 = -1, λ23 = 1

27 Analyzing Contrast 1 Contrast 1: λ11 = -2, λ12 = 1, λ13 = 1. Using SPSS you have it compute W1 scores, then analyze them to see if the mean of the W1 scores differs significantly from zero.
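The slides do this step in SPSS; a rough Python equivalent of the same two steps is sketched below (the function names and the Y3 placeholder are mine, not SPSS syntax):

import numpy as np
from scipy import stats

def w_scores(Y, lambdas):
    """One W score per subject: contrast-weighted sum over sqrt(sum of squared weights)."""
    lambdas = np.asarray(lambdas, dtype=float)
    return Y @ lambdas / np.sqrt(np.sum(lambdas**2))

def test_mean_zero(W):
    """Test H0: the mean of the W scores is 0 (equivalent to the Model C vs. Model A comparison)."""
    return stats.ttest_1samp(W, 0)

# Check against the two-condition example from earlier in the handout:
Y2 = np.array([[7, 8], [5, 5], [6, 6], [7, 9], [8, 8], [7, 7], [5, 6], [6, 8]], dtype=float)
print(test_mean_zero(w_scores(Y2, [-1, 1])))       # p = .0479, as before

# For the three-level design the calls would be
#   test_mean_zero(w_scores(Y3, [-2, 1, 1]))       # Contrast 1
#   test_mean_zero(w_scores(Y3, [0, -1, 1]))       # Contrast 2
# where Y3 is the 6 x 3 matrix of scores from slide 25.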

28 Data

Subject   Group a1   Group a2   Group a3     W1
S1            5          7          2      -0.41
S2            …          …          …        …
S3            …          …          …        …
S4            8          8          1      -2.86
S5            …          …          …        …
S6            …          …          …        …

Does the mean of W1 differ from zero?

29 Contrast 1

30 Contrast 1 You could simply say PRE (or R²) = .027, p = .726, or you could express it in a summary table as seen below.

Source               SS    df    MS     F*      PRE     p
Model (Xi)   SSR      …     1     …     0.14    .027    .726
Error     SSE(A)      …     5     …
Total     SSE(C)      …     6

31 Analyzing Contrast 2 Contrast 2: λ21 = 0, λ22 = -1, λ23 = 1. Using SPSS you have it compute W2 scores, then analyze them to see if the mean of the W2 scores differs significantly from zero.

32 Data

Subject   Group a1   Group a2   Group a3     W2
S1            5          7          2      -3.54
S2            …          …          …        …
S3            …          …          …        …
S4            8          8          1      -4.95
S5            …          …          …        …
S6            …          …          …        …

Does the mean of W2 differ from zero?

33 Contrast 2

34 Contrast 2 You could simply say PRE (or R²) = .931, p = .0004, or you could express it in a summary table as seen below.

Source               SS    df    MS     F*      PRE     p
Model (Xi)   SSR      …     1     …    ≈ 67     .931    .0004
Error     SSE(A)      …     5     …
Total     SSE(C)      …     6

35 Biases in Ignoring Nonindependence The biases that arise when nonindependence among the errors is ignored are all taken care of by changing the data until you get just one score per person.

36 Summary W0 is used to come up with one score that represents (more or less) that subject's average score; it is used to see how much the subjects differ from each other. Use it in nested designs. W1, W2, etc., are used to measure the differences in a subject's scores across various contrasts (i.e., to see how the subject's scores differ across the levels of the independent variable). Use them in crossed designs.
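To make the W0 versus W1 distinction concrete, here is a last small sketch using the two-condition data from this handout (it assumes, as in Part 1, that W0 uses deltas of all 1s in the same W formula):

import numpy as np

absent  = np.array([7, 5, 6, 7, 8, 7, 5, 6])
present = np.array([8, 5, 6, 9, 8, 7, 6, 8])
Y = np.column_stack([absent, present])

W0 = Y @ np.array([1, 1]) / np.sqrt(2)    # deltas of all 1s: one "average-like" score per subject
W1 = Y @ np.array([-1, 1]) / np.sqrt(2)   # contrast deltas: the within-subject treatment effect

print(W0 / np.sqrt(2))   # rescaled W0 is exactly each subject's mean score (between-subject differences)
print(W1)                # the effect of the IV on each subject (what the crossed-design test analyzes)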