Effect Size Estimation in Fixed Factors Between-Groups ANOVA

Contrast Review
- Concerns designs with a single factor A having a ≥ 2 levels (conditions)
  - The omnibus comparison concerns all levels (i.e., df_A ≥ 2)
  - A focused comparison, or contrast, concerns just two levels (i.e., df = 1)
- The omnibus effect is often relatively uninteresting compared with specific contrasts (e.g., treatment 1 vs. placebo control)
- A large omnibus effect can also be misleading if it is due to a single discrepant mean that is not of substantive interest

Comparing Groups
- The traditional approach is to analyze the omnibus effect followed by analysis of all possible pairwise contrasts (i.e., compare each condition to every other condition)
- However, this approach is typically incorrect (Wilkinson & TFSI, 1999); for example, it is rare that all such contrasts are interesting
- Also, use of traditional methods for post hoc comparisons (e.g., Newman-Keuls) reduces power for every contrast, and power may already be low

Contrast specification and tests
- A contrast is a directional effect that corresponds to a particular facet of the omnibus effect
  - often represented by the symbol ψ for a population or ψ̂ for a sample
  - a weighted sum of means
- In a sample, a contrast is calculated as ψ̂ = Σ aj Mj, the weighted sum of the group means
- a1, a2, ..., aa is the set of weights that specifies the contrast
- As we have mentioned, contrast weights must sum to zero, and weights for at least two different means should not equal zero
  - Means assigned a weight of zero are excluded from the contrast
  - Means with positive weights are compared with means given negative weights
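
A minimal Python sketch (mine, not part of the original slides) of this computation; the group means and weights below are invented for illustration:

    import numpy as np

    means = np.array([12.0, 10.5, 8.0])    # M1, M2, M3 (hypothetical group means)
    weights = np.array([1.0, -0.5, -0.5])  # a1, a2, a3; must sum to zero

    assert np.isclose(weights.sum(), 0.0)  # contrast weights sum to zero
    psi_hat = np.dot(weights, means)       # group 1 vs. the average of groups 2 and 3
    print(psi_hat)                         # 12.0 - (10.5 + 8.0)/2 = 2.75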

Contrast specification and tests
- For effect size estimation with the d family, we generally want a standard set of contrast weights
- In a one-way design, the sum of the absolute values of the weights in a standard set equals two (i.e., Σ|aj| = 2.0)
- Mean difference scaling permits the interpretation of a contrast as the difference between the averages of two subsets of means (see the rescaling sketch below)
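
As an illustration, here is a small hypothetical helper (not from the slides) that rescales any zero-sum weight set so that the absolute values sum to 2.0:

    import numpy as np

    def standardize_weights(weights):
        """Rescale contrast weights so the absolute values sum to 2.0."""
        w = np.asarray(weights, dtype=float)
        return 2.0 * w / np.abs(w).sum()

    print(standardize_weights([1, 1, -1, -1]))  # [ 0.5  0.5 -0.5 -0.5]
    print(standardize_weights([2, -1, -1]))     # [ 1.  -0.5 -0.5]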

Contrast specification and tests
- An exception to the need for mean difference scaling is for trends (polynomials) specified for a quantitative factor (e.g., drug dosage)
- There are default sets of weights that define trend components (e.g., linear, quadratic, etc.) that are not typically based on mean difference scaling
- This is not usually a problem because effect size for trends is generally estimated with the r family (measures of association)
- Measures of association for contrasts of any kind generally correct for the scale of the contrast weights
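
For reference, a small sketch (mine) listing the usual default orthogonal-polynomial trend weights for equally spaced factors with 3 and 4 levels; note that, e.g., the 4-level linear set has |a| values summing to 8 rather than 2:

    import numpy as np

    trend_weights = {
        3: {"linear": [-1, 0, 1], "quadratic": [1, -2, 1]},
        4: {"linear": [-3, -1, 1, 3], "quadratic": [1, -1, -1, 1], "cubic": [-1, 3, -3, 1]},
    }
    # Each set sums to zero and sets within a level count are mutually orthogonal,
    # but their absolute values generally do not sum to 2 (no mean difference scaling).
    for a, sets in trend_weights.items():
        for name, w in sets.items():
            print(a, name, np.sum(w), np.abs(w).sum())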

Orthogonal Contrasts
- Two contrasts are orthogonal if they each reflect an independent aspect of the omnibus effect
- For balanced designs, the orthogonality condition is Σ a1j a2j = 0; for unbalanced designs it is Σ (a1j a2j / nj) = 0 (the latter form also covers the balanced case)

Orthogonal Contrasts
- The maximum number of orthogonal contrasts is df_A = a − 1
- For a complete set of mutually orthogonal contrasts, the contrast sums of squares add up to SS_A, and the contrast eta-squares sum to the omnibus eta-square for SS_A
- That is, the omnibus effect can be broken down into a − 1 independent directional effects
- However, it is more important to analyze contrasts of substantive interest even if they are not orthogonal
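
A small sketch (mine, not the slides') of the orthogonality check given two slides above, for both the balanced and the unequal-n conditions:

    import numpy as np

    def are_orthogonal(w1, w2, n=None):
        """True if two contrasts are orthogonal; pass group sizes n for unequal n."""
        w1, w2 = np.asarray(w1, float), np.asarray(w2, float)
        if n is None:                                    # balanced: sum of weight products = 0
            return bool(np.isclose(np.dot(w1, w2), 0.0))
        return bool(np.isclose(np.sum(w1 * w2 / np.asarray(n, float)), 0.0))

    print(are_orthogonal([1, -1, 0], [0.5, 0.5, -1]))    # True
    print(are_orthogonal([1, -1, 0], [1, 0, -1]))        # False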

Contrast specification and tests
- The t test for a contrast against the nil hypothesis is t = ψ̂ / s_ψ̂, where for independent groups the error term is s_ψ̂ = √[MS_W Σ(aj² / nj)] with df_error = N − a
- The F is F(1, df_error) = t²
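
A Python sketch of these test statistics under the usual independent-groups assumptions; the raw scores are invented, and MS_W is pooled across all three groups:

    import numpy as np
    from scipy import stats

    groups = [np.array([11, 13, 12, 14.0]),
              np.array([10, 9, 12, 11.0]),
              np.array([8, 7, 9, 8.0])]
    weights = np.array([1.0, -0.5, -0.5])

    means = np.array([g.mean() for g in groups])
    ns = np.array([len(g) for g in groups])
    df_error = sum(len(g) - 1 for g in groups)
    ms_w = sum((len(g) - 1) * g.var(ddof=1) for g in groups) / df_error

    psi_hat = np.dot(weights, means)                  # the sample contrast
    se_psi = np.sqrt(ms_w * np.sum(weights**2 / ns))  # its standard error
    t = psi_hat / se_psi
    F = t**2                                          # F(1, df_error) for the contrast
    p = 2 * stats.t.sf(abs(t), df_error)
    print(round(t, 2), round(F, 1), round(p, 4))      # t ~ 4.60, F ~ 21.1, p ~ .001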

Dependent Means
- Test statistics for dependent mean contrasts usually have error terms based on only the two conditions compared; for example, s_ψ̂ = √(s² / n)
- s² here refers to the variance of the contrast difference scores
- Such error terms do not assume sphericity, which we'll talk about more with repeated measures designs
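
A sketch of that error term, assuming the formula above (error variance of the per-case contrast difference scores divided by n); the repeated-measures scores are invented:

    import numpy as np

    # Hypothetical scores for the same n = 5 cases measured in three conditions
    y = np.array([[10, 9, 7.0],
                  [12, 11, 8.0],
                  [11, 9, 9.0],
                  [13, 12, 10.0],
                  [12, 10, 8.0]])
    weights = np.array([1.0, 0.0, -1.0])   # pairwise contrast: condition 1 vs. condition 3

    d_psi = y @ weights                    # contrast difference score for each case
    psi_hat = d_psi.mean()
    se_psi = np.sqrt(d_psi.var(ddof=1) / len(d_psi))   # based only on the conditions compared
    t = psi_hat / se_psi                   # df = n - 1
    print(round(psi_hat, 2), round(se_psi, 3), round(t, 2))   # 3.2, 0.374, 8.55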

Confidence Intervals
- Approximate confidence intervals for contrasts are generally fine
- The general form of an individual confidence interval for ψ is ψ̂ ± s_ψ̂ [t(α/2, df_error)]
- df_error is specific to that contrast
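
Continuing the hypothetical numbers from the independent-groups contrast sketch above, an individual 95% interval could be computed as:

    from scipy import stats

    psi_hat, se_psi, df_error = 3.25, 0.7071, 9   # carried over from the earlier sketch
    t_crit = stats.t.ppf(0.975, df_error)         # two-tailed critical t for this contrast's df
    ci = (psi_hat - t_crit * se_psi, psi_hat + t_crit * se_psi)
    print(tuple(round(x, 2) for x in ci))         # approximately (1.65, 4.85)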

Contrast specification and tests
- There are also corrected confidence intervals for contrasts that adjust for multiple comparisons (i.e., inflated Type I error), known as simultaneous or joint confidence intervals
- Their widths are generally wider compared with individual confidence intervals because they are based on a more conservative critical value
- A program for computing these corrected intervals: au/research/resources/psy program.html

Standardized contrasts
- The general form for standardized contrasts (in terms of population parameters) is δ_ψ = ψ / σ, the population contrast divided by a population standard deviation

Standardized contrasts
- There are three general ways to estimate σ (i.e., the standardizer) for contrasts between independent means:
  1. Calculate d as Glass's Δ, i.e., use the standard deviation of the control group
  2. Calculate d as Hedges's g, i.e., use the square root of the pooled within-conditions variance for just the two groups being compared
  3. Calculate d as an extension of g, where the standardizer is the square root of MS_W based on all groups (generally recommended)
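
A sketch of the three standardizer choices in Python (the function names are mine, not standard library functions):

    import numpy as np

    def glass_delta(psi_hat, sd_control):
        """Option 1: standardize by the control group's standard deviation."""
        return psi_hat / sd_control

    def hedges_g(psi_hat, s1, s2, n1, n2):
        """Option 2: standardize by the pooled SD of just the two groups compared."""
        s_pooled = np.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
        return psi_hat / s_pooled

    def d_root_msw(psi_hat, ms_w):
        """Option 3 (generally recommended): standardize by sqrt(MS_W) from all groups."""
        return psi_hat / np.sqrt(ms_w)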

Standardized contrasts
- d can also be calculated from a contrast t for a paper that does not report an effect size as it should; with the root-MS_W standardizer, d = t √[Σ(aj² / nj)]
- Recall that the absolute values of the weights should sum to 2
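
A sketch of that conversion, assuming the root-MS_W standardizer and independent groups as above; the reported t and the design below are hypothetical:

    import numpy as np

    def d_from_t(t, weights, ns):
        """Convert a reported contrast t to d when only t and the design are known."""
        weights, ns = np.asarray(weights, float), np.asarray(ns, float)
        return t * np.sqrt(np.sum(weights**2 / ns))

    print(round(d_from_t(4.60, [1, -0.5, -0.5], [4, 4, 4]), 2))  # ~2.82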

CIs
- Once d is calculated, one can easily obtain exact confidence intervals via the MBESS package in R or Steiger's standalone program
- The latter provides the interval for the noncentrality parameter, which must then be converted to d
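
For readers without MBESS or Steiger's program, here is a scipy-based sketch of the same noncentrality-interval idea (mine, not either of those tools); the bracketing interval for the root finder is an ad hoc choice, and the final rescaling to d assumes the root-MS_W standardizer as above:

    import numpy as np
    from scipy import stats, optimize

    def ncp_ci(t_obs, df, conf=0.95):
        """CI for the noncentrality parameter of t by inverting the noncentral t."""
        alpha = 1 - conf
        lo = optimize.brentq(lambda nc: stats.nct.sf(t_obs, df, nc) - alpha / 2, -50, 50)
        hi = optimize.brentq(lambda nc: stats.nct.cdf(t_obs, df, nc) - alpha / 2, -50, 50)
        return lo, hi

    def d_ci_from_t(t_obs, df, weights, ns, conf=0.95):
        """Rescale the noncentrality interval to the d metric."""
        scale = np.sqrt(np.sum(np.asarray(weights, float)**2 / np.asarray(ns, float)))
        return tuple(nc * scale for nc in ncp_ci(t_obs, df, conf))

    print(d_ci_from_t(4.60, 9, [1, -0.5, -0.5], [4, 4, 4]))  # noncentrality-based CI for d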

Cohen's f
- Cohen's f provides what can be interpreted as the average standardized mean difference across the groups in question
- It has a direct relation to a measure of association: f = √[η² / (1 − η²)]
- As with Cohen's d, there are guidelines regarding Cohen's f: .10, .25, and .40 for small, moderate, and large effect sizes
- These correspond to eta-square values of .01, .06, and .14
- Again though, one should consult the relevant literature for effect size estimation
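
A tiny sketch (not from the slides) of that correspondence, which reproduces the benchmark values:

    import numpy as np

    def f_from_eta2(eta2):
        return np.sqrt(eta2 / (1 - eta2))

    def eta2_from_f(f):
        return f**2 / (1 + f**2)

    for f in (0.10, 0.25, 0.40):              # small, moderate, large benchmarks
        print(f, round(eta2_from_f(f), 3))    # ~.010, ~.059, ~.138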

Measures of Association
- A measure of association describes the amount of the covariation between the independent and dependent variables
- It is expressed in an unsquared metric or a squared metric; the former is usually a correlation, the latter a variance-accounted-for effect size
- A squared multiple correlation (R²) calculated in ANOVA is called the correlation ratio or estimated eta-squared (η̂²)

Eta-squared
- A measure of the degree to which variability among observations can be attributed to conditions
- Example: η² = .50 means that 50% of the variability seen in the scores is due to the independent variable

More than One Factor
- It is fairly common practice to calculate η² (the correlation ratio) for the omnibus effect but to calculate the partial correlation ratio for each contrast
- As we have noted before, SPSS calls everything partial eta-squared in its output, but for a one-way design you'd report it as eta-squared

Problem
- Eta-squared (since it is an R-squared) is an upwardly biased measure of association (just as R-squared was)
- As such, it is better used descriptively than inferentially

Omega-squared
- Another effect size measure that is less biased and is interpreted in the same way as eta-squared
- So why do we not see omega-squared so much?
  - People don't like small values
  - Stat packages don't provide it by default

Omega-squared
- Put differently: ω̂² = (SS_A − df_A MS_W) / (SS_T + MS_W), or equivalently df_A(F − 1) / [df_A(F − 1) + N]

Omega-squared
- Assumes a balanced design (η² does not assume a balanced design)
- Though the omega values are generally lower than those of the corresponding correlation ratios for the same data, their values converge as the sample size increases
- Note that the values can be negative; if so, interpret as though the value were zero
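
A Python sketch pulling the last few slides together: η², partial η², and ω̂² computed from the entries of a hypothetical fixed-factor ANOVA table (the balanced-design assumption applies to ω̂²):

    def eta_squared(ss_effect, ss_total):
        return ss_effect / ss_total

    def partial_eta_squared(ss_effect, ss_error):
        return ss_effect / (ss_effect + ss_error)

    def omega_squared(ss_effect, df_effect, ss_total, ms_error):
        return (ss_effect - df_effect * ms_error) / (ss_total + ms_error)

    # Hypothetical one-way table: SS_A = 60, df_A = 2, SS_W = 90, df_W = 27, SS_T = 150
    ms_w = 90 / 27
    print(eta_squared(60, 150))             # 0.40
    print(partial_eta_squared(60, 90))      # 0.40 (same as eta-squared in a one-way design)
    print(omega_squared(60, 2, 150, ms_w))  # ~0.35, a bit lower than eta-squared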

Comparing effect size measures  Consider our previous example with item difficulty and arousal regarding performance

Comparing effect size measures
(Table in the original slide comparing η², ω², partial η², and f for the between-groups, difficulty, arousal, and interaction effects; slight differences due to rounding, f based on eta-squared.)

No p-values
- As before, programs are available to calculate confidence intervals for an effect size measure
- Example using the MBESS package for the overall effect: 95% CI on ω² of .20 to .69
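
A scipy-based sketch of the same idea for η² (mine, not the MBESS code itself): invert the noncentral F to get an interval on the noncentrality parameter λ, then convert via η² = λ / (λ + df1 + df2 + 1), which equals λ / (λ + N) in a one-way design. The root-finder bracket is an ad hoc choice, and the F value is hypothetical:

    from scipy import stats, optimize

    def lambda_ci(F_obs, df1, df2, conf=0.95):
        """CI for the noncentrality parameter of F by inverting the noncentral F."""
        alpha = 1 - conf
        hi_bracket = 10 * (F_obs * df1 + df1 + df2)    # ad hoc, generously wide
        lo = 0.0
        if stats.f.sf(F_obs, df1, df2) < alpha / 2:    # otherwise the lower limit stays at 0
            lo = optimize.brentq(
                lambda nc: stats.ncf.sf(F_obs, df1, df2, nc) - alpha / 2, 1e-9, hi_bracket)
        hi = optimize.brentq(                          # assumes F is not extremely small
            lambda nc: stats.ncf.cdf(F_obs, df1, df2, nc) - alpha / 2, 1e-9, hi_bracket)
        return lo, hi

    def eta2_ci(F_obs, df1, df2, conf=0.95):
        return tuple(nc / (nc + df1 + df2 + 1) for nc in lambda_ci(F_obs, df1, df2, conf))

    print(eta2_ci(9.0, 2, 27))   # hypothetical F(2, 27) = 9.0 from a one-way ANOVA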

No p-values
- Ask yourself, as we have before: if the null hypothesis were true, what would our effect size be (standardized mean difference or proportion of variance accounted for)? Zero
- Rather than doing traditional hypothesis testing, one can simply see whether the CI for the effect size contains zero (or, in the eta-squared case, comes very close to it)
- If not, reject H0
- This is superior in that we can keep the NHST approach, get a confidence interval reflecting the precision of our estimates, focus on effect size, and de-emphasize the p-value