Between Groups & Within-Groups ANOVA

Between Groups & Within-Groups ANOVA
– BG & WG ANOVA
  – Partitioning variation
  – “making” F
  – “making” effect sizes
– Things that “influence” F
  – Confounding
  – Inflated within-condition variability
– Integrating “stats” & “methods”

ANOVA  ANalysis Of VAriance Variance means “variation” Sum of Squares (SS) is the most common variation index SS stands for, “Sum of squared deviations between each of a set of values and the mean of those values” SS = ∑ (value – mean) 2 So, Analysis Of Variance translates to “partitioning of SS” In order to understand something about “how ANOVA works” we need to understand how BG and WG ANOVAs partition the SS differently and how F is constructed by each.

Variance partitioning for a BG design

SS Total = SS Effect + SS Error

– SS Total: variation among all the participants – represents variation due to “treatment effects” and “individual differences”
– SS Effect: variation between the conditions – represents variation due to “treatment effects”
– SS Error: variation among participants within each condition – represents “individual differences.” Called “error” because we can’t account for why the folks in a condition – who were all treated the same – have different scores.
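The BG partition can be verified numerically. Here is a sketch assuming small hypothetical Tx and C groups (not the slides' data): the within-condition SS sum to SS Error, and the weighted squared deviations of the condition means from the grand mean give SS Effect.

```python
# Sketch of BG partitioning: SS_Total = SS_Effect + SS_Error.
# Tx and C scores are hypothetical (not the slides' data).
tx = [4.0, 6.0, 8.0]   # treatment condition
c  = [1.0, 3.0, 5.0]   # control condition

def ss(values):
    mean = sum(values) / len(values)
    return sum((v - mean) ** 2 for v in values)

grand_mean = (sum(tx) + sum(c)) / (len(tx) + len(c))

ss_total  = sum((v - grand_mean) ** 2 for v in tx + c)
ss_error  = ss(tx) + ss(c)   # variation within each condition
ss_effect = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in (tx, c))

print(ss_total, ss_effect + ss_error)   # both 29.5: the partition holds
```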

How a BG F is constructed

F = MS effect / MS error = (SS effect / df effect) / (SS error / df error)

A Mean Square is the SS converted to a “mean” → dividing it by “the number of things”:
– df effect = k – 1, where k is the # of conditions in the design
– df error = ∑n – k, where ∑n is the # of participants in the study

SS Total = SS Effect + SS Error
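A sketch of the construction, assuming hypothetical SS values for a design with k = 2 conditions and 6 participants total:

```python
# Sketch: building a BG F from hypothetical SS values.
ss_effect, ss_error = 13.5, 16.0   # hypothetical SS values
k, n_total = 2, 6                  # conditions, total participants

df_effect = k - 1                  # 1
df_error  = n_total - k            # 4
ms_effect = ss_effect / df_effect
ms_error  = ss_error / df_error
F = ms_effect / ms_error
print(F)   # 13.5 / 4.0 = 3.375
```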

How a BG r is constructed

F = MS effect / MS error = (SS effect / df effect) / (SS error / df error)

r² = effect / (effect + error) → conceptual formula
   = SS effect / (SS effect + SS error) → definitional formula
   = F / (F + df error) → computational formula
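The definitional and computational routes give the same r² for a two-condition design (df effect = 1). A sketch with hypothetical SS values:

```python
# Sketch: two routes to r², assuming a two-condition design (df_effect = 1)
# and hypothetical SS values.
ss_effect, ss_error = 13.5, 16.0
df_error = 4
F = (ss_effect / 1) / (ss_error / df_error)

r2_definitional  = ss_effect / (ss_effect + ss_error)
r2_computational = F / (F + df_error)
print(round(r2_definitional, 4), round(r2_computational, 4))  # both 0.4576
```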

An Example …

SS total = SS effect + SS error
r² = SS effect / (SS effect + SS error) = .34
r² = F / (F + df error) = .34

Variance partitioning for a WG design

SS Total = SS Effect + SS Subj + SS Error

– SS Effect: variation between the conditions – represents “treatment effects” (as in the BG design)
– SS Subj: variation among participants – estimable because each participant’s “Sum” is a composite score (their scores summed across the conditions)
– SS Error: variation among participants’ difference scores – represents “individual differences.” Called “error” because we can’t account for why folks who were in the same two conditions – who were all treated the same two ways – have different difference scores.
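The WG partition can also be checked numerically. In this sketch each index is one participant measured in both conditions; the scores are hypothetical, not the slides' data:

```python
# Sketch of WG partitioning: SS_Total = SS_Effect + SS_Subj + SS_Error.
# Each index is one participant measured in both conditions (hypothetical).
tx = [4.0, 7.0, 8.0]
c  = [1.0, 3.0, 5.0]
n, k = len(tx), 2

grand_mean = (sum(tx) + sum(c)) / (n * k)

cond_means = [sum(tx) / n, sum(c) / n]
ss_effect = n * sum((m - grand_mean) ** 2 for m in cond_means)

subj_means = [(t + cc) / k for t, cc in zip(tx, c)]   # per-participant composite
ss_subj = k * sum((m - grand_mean) ** 2 for m in subj_means)

ss_total = sum((v - grand_mean) ** 2 for v in tx + c)
ss_error = ss_total - ss_effect - ss_subj   # what's left: inconsistent difference scores
print(ss_total, ss_effect, ss_subj, ss_error)
```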

How a WG F is constructed

F = MS effect / MS error = (SS effect / df effect) / (SS error / df error)

A Mean Square is the SS converted to a “mean” → dividing it by “the number of things”:
– df effect = k – 1, where k is the # of conditions in the design
– df error = (k – 1)(n – 1), based on the # of data points in the study

SS Total = SS Effect + SS Subj + SS Error
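A sketch of the WG construction, assuming hypothetical SS values for a design with k = 2 conditions and n = 3 participants:

```python
# Sketch: building a WG F from hypothetical SS values.
ss_effect, ss_error = 50/3, 1/3   # hypothetical SS values
k, n = 2, 3                       # conditions, participants

df_effect = k - 1                 # 1
df_error  = (k - 1) * (n - 1)     # 2
F = (ss_effect / df_effect) / (ss_error / df_error)
print(F)   # (50/3) / (1/6) = 100.0
```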

How a WG r is constructed

F = MS effect / MS error = (SS effect / df effect) / (SS error / df error)

r² = effect / (effect + error) → conceptual formula
   = SS effect / (SS effect + SS error) → definitional formula
   = F / (F + df error) → computational formula

An Example …

SS total = SS effect + SS subj + SS error
r² = SS effect / (SS effect + SS error) = .68
r² = F / (F + df error) = .68

Don’t ever do this with real data!!!!!! Professional statistician on a closed course. Do not try at home!

What happened????? Same data. Same means & SDs. Same total variance. Different F???

BG ANOVA: SS Total = SS Effect + SS Error
WG ANOVA: SS Total = SS Effect + SS Subj + SS Error

The variation that is called “error” in the BG ANOVA is divided between “subject” and “error” variation in the WG ANOVA. Thus, the WG F is based on a smaller error term than the BG F → and so, the WG F is generally larger than the BG F.
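This can be demonstrated by analyzing one hypothetical data set both ways. The scores below are made up for illustration; the point is only that peeling SS Subj out of the BG error term shrinks MS error and enlarges F:

```python
# Sketch: the same hypothetical scores analyzed BG vs WG.
tx, c = [4.0, 7.0, 8.0], [1.0, 3.0, 5.0]
n, k = len(tx), 2
grand = (sum(tx) + sum(c)) / (n * k)

ss_total  = sum((v - grand) ** 2 for v in tx + c)
ss_effect = n * ((sum(tx)/n - grand) ** 2 + (sum(c)/n - grand) ** 2)

# BG: everything left over is "error"
ss_error_bg = ss_total - ss_effect
F_bg = (ss_effect / (k - 1)) / (ss_error_bg / (n * k - k))

# WG: subject variation comes out of the error term
ss_subj = k * sum(((t + cc)/k - grand) ** 2 for t, cc in zip(tx, c))
ss_error_wg = ss_total - ss_effect - ss_subj
F_wg = (ss_effect / (k - 1)) / (ss_error_wg / ((k - 1) * (n - 1)))

print(F_bg, F_wg)   # WG F is much larger than BG F
```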

What happened????? Same data. Same means & SDs. Same total variance. Different r???

r² = effect / (effect + error) → conceptual formula
   = SS effect / (SS effect + SS error) → definitional formula
   = F / (F + df error) → computational formula

The variation that is called “error” in the BG ANOVA is divided between “subject” and “error” variation in the WG ANOVA. Thus, the WG r is based on a smaller error term than the BG r → and so, the WG r is generally larger than the BG r.

A “more realistic” model of F (IndDif → individual differences)

F = (SS effect / df effect) / (SS error / df error)

Both of these models assume there are no confounds, and that individual differences are the only source of within-condition variability:

BG: SS Total = SS Effect + SS Error
WG: SS Total = SS Effect + SS Subj + SS Error

A more realistic model adds terms for confounds and for inflated within-condition variability:

BG: SS Total = SS Effect + SS confound + SS IndDif + SS wcvar
WG: SS Total = SS Effect + SS confound + SS Subj + SS IndDif + SS wcvar

– SS confound → between-condition variability caused by anything(s) other than the IV (confounds)
– SS wcvar → inflated within-condition variability caused by anything(s) other than “natural population individual differences”

Imagine an educational study that compares the effects of two types of math instruction (IV) upon performance (%, the DV). Participants were randomly assigned to conditions, treated, then allowed to practice (Prac) as many problems as they wanted to before taking the DV-producing test.

The slide’s table lists each participant’s Prac and DV scores for the Control and Experimental groups, highlighting:
– Confounding due to Prac: a mean Prac difference between the conditions (compare Ss 5 & 2 – 7 & 4)
– WG variability inflated by Prac: a within-group correlation of Prac & DV
– Individual differences: compare Ss 1 & 3, 5 & 7, 2 & 4, or 6 & 8

F = (SS effect / df effect) / (SS error / df error)    r² = F / (F + df error)

The problem is that the F-formula will …
– Ignore the confounding caused by differential practice between the groups and attribute all BG variation to the type of instruction (IV) → overestimating the effect
– Ignore the inflation in within-condition variation caused by differential practice within the groups and attribute all WG variation to individual differences → overestimating the error

As a result, the F & r values won’t properly reflect the relationship between type of math instruction and performance → we will make a statistical conclusion error! Our inability to procedurally control variables like this will lead us to statistical models that can “statistically control” them.
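The effect-inflation half of this can be sketched numerically. All scores below are hypothetical; the confound is simulated crudely as a constant boost to one group (e.g., extra practice), which the F-formula cannot distinguish from the IV:

```python
# Sketch: a between-group confound inflates a BG F. Scores are hypothetical;
# bg_F assumes two equal-sized groups.
def bg_F(g1, g2):
    n, k = len(g1), 2
    grand = (sum(g1) + sum(g2)) / (n * k)
    ss_effect = n * ((sum(g1)/n - grand) ** 2 + (sum(g2)/n - grand) ** 2)
    ss_error = sum((v - sum(g1)/n) ** 2 for v in g1) + \
               sum((v - sum(g2)/n) ** 2 for v in g2)
    return (ss_effect / (k - 1)) / (ss_error / (n * k - k))

control = [70.0, 75.0, 80.0]
exper   = [78.0, 83.0, 88.0]        # true IV effect: +8 on average
boosted = [v + 6 for v in exper]    # confound adds another +6 to the group

print(bg_F(exper, control), bg_F(boosted, control))  # confound inflates F
```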

How research design impacts F → integrating stats & methods!

SS Total = SS Effect + SS confound + SS IndDif + SS wcvar

F = (SS effect / df effect) / (SS error / df error)

– SS Effect → “bigger” manipulations produce larger mean differences between the conditions → larger F
– SS IndDif → more heterogeneous populations have larger within-condition differences → smaller F
– SS confound → between-group differences – other than the IV – change the mean difference → changing F
  – if the confound “augments” the IV → F will be inflated
  – if the confound “counters” the IV → F will be underestimated
– SS wcvar → within-group differences – other than natural individual differences → smaller F
  – could be “procedural” → differential treatment within conditions
  – could be “sampling” → obtaining a sample that is “more heterogeneous than the target population”