Random Effects & Repeated Measures: Alternatives to Fixed Effects Analyses


Questions
- What is the difference between fixed and random effects in terms of treatments?
- How are F tests with random effects different from F tests with fixed effects?
- Describe a concrete example of a randomized block design. You should have one factor as the blocking factor and one other factor as the factor of main interest.

Questions (2)
- How is a repeated measures design different from a totally between-subjects design in the collection of the data?
- How does significance testing change from a totally between-subjects design to one in which one or more factors are repeated measures (just the general idea; you don't need to show actual F ratios or computations)?
- Describe one argument for using repeated measures designs and one argument against using such designs (or describe when you would and would not want to use repeated measures).

Fixed Effects Designs
- All treatment conditions of interest are included in the study.
- Everyone in a cell receives the identical stimulus (treatment, IV combination).
- Interest is in specific means.
- Expected mean squares are (relatively) simple; all F tests are based on a common error term.

Random Effects Designs
- Treatment conditions are sampled; not all conditions of interest are included.
- Replications of the experiment would use different treatment conditions.
- Interest is in the variance produced by an IV rather than in particular means.
- Expected mean squares are relatively complex; the denominator of F changes depending on the effect being tested.
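
In model terms (a minimal sketch of the one-factor case; the notation is generic and not taken from the slides), the fixed-effects model treats treatment effects as unknown constants, while the random-effects model treats them as draws from a population of effects:

  Fixed:   y_ij = μ + α_j + e_ij,   with Σ_j α_j = 0
  Random:  y_ij = μ + a_j + e_ij,   with a_j ~ N(0, σ²_A) and e_ij ~ N(0, σ²_e)

In the random model the hypothesis of interest is H0: σ²_A = 0 (no variance due to the IV) rather than the equality of particular means.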

Fixed vs. Random

Conditions
- Random: treatment conditions are sampled; replications of the study would use different conditions; interest is in the variance due to the IV.
- Fixed: all conditions of interest are included; replications would use the same conditions; interest is in the means due to the IV.

Examples
- Random: persuasiveness of commercials, experimenter effects, impact of team members.
- Fixed: sex of participant, drug dosage, training program effectiveness.

Single Factor Random The expected mean squares and F-test for the single random factor are the same as those for the single factor fixed-effects design.
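
A minimal SAS sketch of a single random factor analysis (the dataset and variable names, ratings, rating, and judge, are assumptions for illustration, not from the slides). PROC GLM gives the ordinary F test and, with the RANDOM statement, prints the expected mean squares; PROC MIXED estimates the variance component directly.

  proc glm data=ratings;
    class judge;
    model rating = judge;
    random judge;            /* prints E(MS); F = MS(judge)/MS(error), same as the fixed case */
  run;

  proc mixed data=ratings covtest;
    class judge;
    model rating = ;         /* intercept-only fixed part */
    random judge;            /* variance component for the sampled judges */
  run;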

Experimenter effects (Hays Table )

[SAS one-way ANOVA output for the experimenter-effects example: columns Source, DF, Sum of Squares, Mean Square, F Value, and Pr > F, with rows Model, Error, and Corrected Total; the numerical values are not shown in the transcript, but the Model effect is significant, p < .0001.]

Random Effects Significance Tests (A and B random)

Source   E(MS)                            F                  df
A        σ²_e + nσ²_AB + Knσ²_A           MS_A / MS_AB       J-1, (J-1)(K-1)
B        σ²_e + nσ²_AB + Jnσ²_B           MS_B / MS_AB       K-1, (J-1)(K-1)
AxB      σ²_e + nσ²_AB                    MS_AB / MS_error   (J-1)(K-1), JK(n-1)
Error    σ²_e

Why the Funky MS? The treatment effects for A, B, and AxB are defined the same way for fixed and random factors in the population of treatments. The difference is that with fixed factors we have the whole population of levels, whereas with random factors we have only a sample of them. Therefore, in any given (random) study the interaction effects need not sum to zero, and the AxB effects show up in the main-effect mean squares.
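
A sketch of why this happens (generic notation, not taken from the slides). The mean for level j of A averages over the K sampled levels of B, so it contains the average of the K sampled interaction effects:

  (mean for A level j) = μ + α_j + (mean of the b_k) + (mean over k of (ab)_jk) + (mean of the errors)

Because the (ab)_jk are a random sample, their average does not vanish from level to level of A; its variance is σ²_AB / K. Since MS_A is nK times the variance of the A means (the shared mean of the b_k cancels in the deviations), this contributes nK(σ²_AB / K) = nσ²_AB to E(MS_A). That is why MS_AB, not MS_error, is the appropriate denominator for testing A.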

Applications of Random Effects
- Reliability and generalizability
  - How many judges do I need to get a reliability of .8?
  - How well does this score generalize to a particular universe of scores?
  - Intraclass correlations (ICCs)
- Estimated variance components
  - Meta-analysis
- Control (randomized blocks and repeated measures)
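
One standard way to answer the "how many judges" question is the Spearman-Brown formula (a textbook result, not shown on the slides). If a single judge has reliability ρ_1, the reliability of the mean of k judges is

  ρ_k = k ρ_1 / (1 + (k - 1) ρ_1)

For example, if one judge has reliability .40, reaching .80 requires solving .80 = .40k / (1 + .40(k - 1)), which gives k = 6 judges.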

Review
- What is the difference between fixed and random effects in terms of treatments?
- How are F tests with random effects different from F tests with fixed effects?

Randomized Blocks Designs
- A block is a matched group of participants who are similar or identical on a nuisance variable.
- Suppose we want to study the effect of a workbook on scores on a research methods test. A major source of nuisance variance is cognitive ability.
- We can block students on cognitive ability.

Randomized Blocks (2)
- Say 3 blocks (slow, average, and fast learners).
- Within each block, randomly assign students to workbook or control.
- The resulting design looks like an ordinary factorial (3x2), but people are not randomly assigned to blocks; the block factor is sampled, i.e., random.
- The F test for the workbook is more powerful because we subtract out the nuisance variance.
- Unless blocks are truly categorical, a better design is analysis of covariance, described after we introduce regression.
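
A minimal SAS sketch of the workbook example (the dataset and variable names, methods, score, block, and workbook, are assumptions for illustration). Declaring block and the block-by-workbook interaction as random and adding the TEST option asks GLM to build F tests with the appropriate error terms.

  proc glm data=methods;
    class block workbook;
    model score = block workbook block*workbook;
    random block block*workbook / test;   /* workbook is tested against block*workbook */
  run;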

Randomized Blocks (3)

Source                E(MS)                             F                               df
A workbook (fixed)    σ²_e + nσ²_AB + nK(Σα²)/(J-1)     MS_A / MS_AB                    J-1, (J-1)(K-1)
B learner (random)    σ²_e + nJσ²_B                     MS_B / MS_error (if desired)    K-1, JK(n-1)
AxB                   σ²_e + nσ²_AB                     MS_AB / MS_error                (J-1)(K-1), JK(n-1)
Error                 σ²_e

For other designs, look up the appropriate error terms.

Review
- Describe a concrete example of a randomized block design. You should have one factor as the blocking factor and one other factor as the factor of main interest.
- Describe a study in which depression is a blocking factor.

Repeated Measures Designs
- In a repeated measures design, participants appear in more than one cell.
  - Painfree study
  - Sports instruction
- Commonly used in psychology.

Pros & Cons of RM

Pro
- Individuals serve as their own control (improved power).
- May be cheaper to run.
- Helpful when participants are scarce.

Con
- Carry-over effects.
- Participant sees the design (demand characteristics).

RM – Participant 'Factor'

Source                     df            E(MS)                           F
Between subjects           K-1           σ²_e + Jσ²_S                    No test
Within subjects
  Treatments               J-1           σ²_e + σ²_ST + K(Στ²)/(J-1)     MS_Treatments / MS_SxT
  Subjects x Treatments    (J-1)(K-1)    σ²_e + σ²_ST                    No test (error term)
Total                      JK-1

Drugs on Reaction Time

Order of drugs is random. All subjects receive all drugs. Interest is in the drug effect.

Person   Drug 1   Drug 2   Drug 3   Drug 4   Mean
1          30       28       16       34     27.0
2          14       18       10       22     16.0
3          24       20       18       30     23.0
4          38       34       20       44     34.0
5          26       28       14       30     24.5
Mean     26.4     25.6     15.6     32.0     24.9

Drug is fixed; person is random. This is a '1-factor' repeated measures design; notice there is 1 person per cell. We can compute 3 sums of squares: rows (persons), columns (drugs), and the residual (interaction plus error).

Total SS

The total sum of squares is the sum of squared deviations of all 20 scores in the table above from the grand mean (24.9): Total SS = 1491.8.

Drug SS

For each observation, square the deviation of its drug (column) mean from the grand mean and sum over all 20 observations; equivalently, multiply the sum of squared drug-mean deviations by the 5 people per drug. Drug SS = 698.20.
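
As a worked check, plugging the drug means from the data table above into this formula (grand mean 24.9, n = 5 people per drug) reproduces the total:

  SS_drugs = n Σ_j (drug mean_j - grand mean)²
           = 5 [ (26.4 - 24.9)² + (25.6 - 24.9)² + (15.6 - 24.9)² + (32.0 - 24.9)² ]
           = 5 (139.64) = 698.2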

Person SS

For each observation, square the deviation of its person (row) mean from the grand mean and sum over all 20 observations; equivalently, multiply the sum of squared person-mean deviations by the 4 drugs per person. Person SS = 680.8.

Summary

Total = 1491.8; Drugs = 698.2; People = 680.8.
Residual = Total - (Drugs + People) = 1491.8 - (698.2 + 680.8) = 112.8.

Source                        SS        df    MS        F
Between people               680.8       4    170.2     (nuisance variance)
Within people (by summing)   811.0      15
  Drugs                      698.2       3    232.73    24.76
  Residual                   112.8      12      9.40
Total                       1491.8      19

Fcrit(.05) = 3.95

SAS

Run the same problem using SAS.
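
A sketch of the analysis in SAS (assuming the drug data are in a long-format dataset with one row per person-drug combination and variables person, drug, and time; these names are assumptions for illustration). In PROC GLM the leftover person-by-drug variation serves as the error term for drugs; PROC MIXED gives the equivalent mixed-model formulation.

  proc glm data=drugs;
    class person drug;
    model time = person drug;   /* residual = person*drug, the error term for the drug test */
  run;

  proc mixed data=drugs;
    class person drug;
    model time = drug;
    random person;              /* person treated explicitly as a random effect */
  run;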

2 Factor, 1 Repeated

[Data table: errors in setting control dials for subjects in two calibration groups (A1, A2), each subject measured under all four dial shapes (B1-B4), with row and column means; cell values are not shown in the transcript.]

DV = errors in setting control dials. IV (A) is dial calibration, a between-subjects factor; IV (B) is dial shape, a within-subjects factor. The order of dial shapes is randomized for each subject.

data d1;
  input i1-i4;              /* one row per subject: errors under the four dial shapes */
cards;
(data lines from the table above; not shown in the transcript)
;
data d2;                    /* restructure from wide (i1-i4) to long: one row per subject x shape */
  set d1;
  array z i1-i4;
  do over z;
    if _N_ le 3 then a = 1;   /* first three subjects: calibration group 1 */
    if _N_ gt 3 then a = 2;   /* remaining subjects: calibration group 2 */
    sub = _N_;                /* subject number */
    b = _I_;                  /* dial shape (1-4) */
    y = z;                    /* error score */
    output;
  end;
proc print;
proc glm;
  class a b sub;
  model y = a b a*b sub(a) sub*b;
  test h=a e=sub(a) / htype=1 etype=1;      /* A tested against subjects within groups */
  test h=b a*b e=sub*b / htype=1 etype=1;   /* B and A*B tested against B x subjects within groups */
run;

Summary

Source                             Error term for the F test
Between people
  A (calibration)                  Subjects within groups
  Subjects within groups           (error term)
Within people
  B (dial shape)                   B x Subjects within groups
  A x B                            B x Subjects within groups
  B x Subjects within groups       (error term)

[The SS, df, MS, and F values from this slide are not shown in the transcript.] Note that different factors are tested against different error terms.

SAS & Post Hoc Tests

Run the same problem using SAS. The SAS default is to use whatever is left over (the residual) as the denominator of the F test. You can use this to your advantage, or override it to produce the specific F tests you want. If you use the default error term, be sure you know what it is.

Post hoc tests with repeated measures are tricky: you have to use the proper error term for each test, and the error term changes depending on what you are testing. Be sure to look up the right error term.
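
For example, in PROC GLM the E= option on LSMEANS lets you name the error term explicitly. A sketch built on the variable names from the code above (treat it as an illustration and verify the error term against your design):

  proc glm data=d2;
    class a b sub;
    model y = a b a*b sub(a) sub*b;
    lsmeans b / pdiff e=sub*b;   /* pairwise dial-shape comparisons tested against B x subjects within groups */
  run;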

Assumptions of RM

Ordinary (orthogonal) ANOVA assumes homogeneity of error variance within cells and that observations in different cells are independent. With repeated measures we introduce covariance (correlation) across cells; for example, for subjects 1-3 the correlation between scores in the first two within-subject conditions is .89. Repeated measures designs therefore make assumptions about the homogeneity of the covariance matrices across conditions (sphericity) for the F test to work properly. If the assumptions are not met, you have problems and may need to make adjustments.

You can avoid these assumptions by using multivariate techniques (MANOVA) to analyze your data, and I suggest you do so. If you use ANOVA, you need to look up your design to get the right F tests and to check the assumptions.
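
One way to get the multivariate tests in SAS is the REPEATED statement in PROC GLM, working from the wide-format data (one row per subject with i1-i4). The step that adds the group code to d1 is an assumption for illustration; treat the whole thing as a sketch.

  data d1w;                      /* wide-format data with the calibration group added */
    set d1;
    if _N_ le 3 then a = 1;
    else a = 2;
  proc glm data=d1w;
    class a;
    model i1-i4 = a;
    repeated shape 4 / printe;   /* multivariate tests plus G-G and H-F adjusted univariate tests */
  run;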

Review
- How is a repeated measures design different from a totally between-subjects design in the collection of the data?
- How does significance testing change from a totally between-subjects design to one in which one or more factors are repeated measures (just the general idea; you don't need to show actual F ratios or computations)?
- Describe one argument for using repeated measures designs and one argument against using such designs (or describe when you would and would not want to use repeated measures).