STA 6166 - Multiple Comparisons: Example


STA 6166 - MCP1
Multiple Comparisons: Example
Study Objective: Test the response of six wheat varieties to a particular race of stem rust.
Treatment: Wheat Variety. Levels: A (i=1), B (i=2), C (i=3), D (i=4), E (i=5), F (i=6).
Experimental Unit: Pot of well-mixed potting soil.
Replication: Four (4) pots per treatment, four (4) plants per pot.
Randomization: Varieties randomized to the 24 pots (CRD).
Response: Yield Y_ij (in grams) of wheat variety i at maturity in pot j.
Implementation Notes: Six seeds of a variety are planted in each pot. Once plants emerge, the four most vigorous are retained and inoculated with stem rust.

STA 6166 - MCP2
Statistics and ANOVA Table

Rank   Variety   Mean Yield
  5       A         50.3
  4       B         69.0
  6       C         24.0
  2       D         94.0
  3       E         75.0
  1       F         95.3

n1 = n2 = n3 = n4 = n5 = n6 = n = 4

ANOVA Table
Source     df    Mean Square    F
Variety     5                   24.8 **
Error      18
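The transcript does not preserve the mean squares, but the overall F statistic can be reproduced from the six sample means once an error mean square is supplied. The sketch below assumes a hypothetical MSE = 120, chosen to be roughly consistent with the F = 24.8 and S = 28.82 quoted on later slides:

```python
# Reproduce the one-way ANOVA F statistic from the treatment means.
# MSE is NOT given in the transcript; 120 is an assumed value chosen to be
# roughly consistent with the F = 24.8 reported on later slides.
means = [50.3, 69.0, 24.0, 94.0, 75.0, 95.3]  # varieties A..F
n = 4            # pots (replicates) per variety
t = len(means)   # number of treatment levels
mse = 120.0      # hypothetical error mean square

grand_mean = sum(means) / t
ss_trt = n * sum((m - grand_mean) ** 2 for m in means)
ms_trt = ss_trt / (t - 1)   # treatment mean square, df = t - 1 = 5
f_stat = ms_trt / mse       # error df = t * (n - 1) = 18

print(f"MS(Variety) = {ms_trt:.1f}, F = {f_stat:.2f}")
```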

STA 6166 - MCP3
The overall F-test indicates that we reject H0 in favor of HA. But which means differ from which others? Consider all possible comparisons between varieties: first sort the treatment levels from the level with the smallest sample mean up to the level with the largest sample mean. Then, in a table (matrix) format, compute the differences for all t(t-1)/2 possible pairs of level means.

STA 6166 - MCP4
Question: How big does a difference have to be before we consider it "significantly big"?

Differences for all t(t-1)/2 = 15 possible pairs of level means (row mean minus column mean, levels sorted ascending):

          C(24.0)  A(50.3)  B(69.0)  E(75.0)  D(94.0)
A(50.3)     26.3
B(69.0)     45.0     18.7
E(75.0)     51.0     24.7      6.0
D(94.0)     70.0     43.7     25.0     19.0
F(95.3)     71.3     45.0     26.3     20.3      1.3

Largest difference: F - C = 71.3. Smallest difference: F - D = 1.3.
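The matrix of pairwise differences is simple arithmetic on the six sample means from the earlier slide; a minimal sketch:

```python
from itertools import combinations

# Sample means from the wheat example, sorted ascending.
means = {"C": 24.0, "A": 50.3, "B": 69.0, "E": 75.0, "D": 94.0, "F": 95.3}

# All t(t-1)/2 = 15 pairwise differences (larger mean minus smaller).
diffs = {
    (hi, lo): round(means[hi] - means[lo], 1)
    for lo, hi in combinations(means, 2)
}

for (hi, lo), d in sorted(diffs.items(), key=lambda kv: -kv[1]):
    print(f"{hi} - {lo} = {d:5.1f}")
```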

STA 6166 - MCP5
Fisher's Protected LSD
"Protected" means a preliminary F-test is required: F = 24.8 > F(5, 18, .05) = 2.77, so the overall F is significant and we may proceed.
[Matrix of pairwise comparisons, with ‡ marking pairs of treatment level means that are statistically different at the α = 0.05 level. Letter labels (a, b, c, d) attached to the ranked means are an alternate way to indicate groupings of means that do not differ significantly.]
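For equally replicated treatments, Fisher's LSD is t(α/2, df_Error) × sqrt(2·MSE/n). A sketch using scipy, again with the hypothetical MSE = 120 (the transcript does not preserve the actual MSE or LSD value):

```python
from math import sqrt
from scipy.stats import t as t_dist

# Fisher's (protected) LSD for equally replicated treatments:
#   LSD = t(alpha/2, df_error) * sqrt(2 * MSE / n)
# MSE = 120 is a hypothetical value (not preserved in the transcript).
mse, n, df_error, alpha = 120.0, 4, 18, 0.05

t_crit = t_dist.ppf(1 - alpha / 2, df_error)
lsd = t_crit * sqrt(2 * mse / n)

# Declare a pair of means different if |mean_i - mean_j| > LSD.
print(f"t critical = {t_crit:.3f}, LSD = {lsd:.2f}")
```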

STA 6166 - MCP6
Tukey's W (Honestly Significant Difference)
Not protected, hence no preliminary F-test is required. The critical value W = q(α; t, df_Error) × sqrt(MSE/n) uses the Studentized range quantile q from Table 10.
[Matrix of pairwise comparisons, with ‡ marking pairs of treatment level means that are statistically different at the α = 0.05 level, plus the letter grouping of the ranked means.]
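The "Table 10" value is the Studentized range quantile, available in scipy; a sketch of Tukey's W, again with the hypothetical MSE = 120:

```python
from math import sqrt
from scipy.stats import studentized_range

# Tukey's W (HSD): W = q(alpha; t, df_error) * sqrt(MSE / n),
# where q is the Studentized range quantile (the "Table 10" value).
# MSE = 120 is a hypothetical value (not preserved in the transcript).
mse, n, t_levels, df_error, alpha = 120.0, 4, 6, 18, 0.05

q_crit = studentized_range.ppf(1 - alpha, t_levels, df_error)  # ~4.49
w = q_crit * sqrt(mse / n)

print(f"q = {q_crit:.2f}, Tukey W = {w:.2f}")
```

Note that W is larger than Fisher's LSD for the same data, reflecting Tukey's experimentwise error control.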

STA 6166 - MCP7
Student-Newman-Keuls Procedure (SNK)
Not protected, hence no preliminary F-test is required. Enter Table 10 at row Error df = 18, α = 0.05, with column r = the number of ranked means spanned by the comparison: r = 2 for neighbors, r = 3 with one mean between, r = 4 with two between, and so on.

STA 6166 - MCP8
SNK
[Matrix of pairwise comparisons, with ‡ marking pairs of treatment level means that are statistically different at the α = 0.05 level, plus the resulting letter grouping of the ranked means.]
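The SNK critical ranges described above can be sketched as follows: the critical value grows with r, the number of ranked means spanned (hypothetical MSE = 120 as before):

```python
from math import sqrt
from scipy.stats import studentized_range

# SNK uses a *varying* critical range: for two means that are r steps apart
# in the ranked list, W_r = q(alpha; r, df_error) * sqrt(MSE / n).
# r = 2 for neighbors, r = 3 with one mean between, etc.
# MSE = 120 is a hypothetical value (not preserved in the transcript).
mse, n, df_error, alpha = 120.0, 4, 18, 0.05

ranges = {
    r: studentized_range.ppf(1 - alpha, r, df_error) * sqrt(mse / n)
    for r in range(2, 7)
}
for r, w_r in ranges.items():
    print(f"r = {r}: W_r = {w_r:.2f}")
```

At r = 2 the SNK range coincides with Fisher's LSD, and at r = t it coincides with Tukey's W, which is why SNK sits between the two in conservatism.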

STA 6166 - MCP9
Duncan's New Multiple Range Test (Passé)
Not protected, hence no preliminary F-test is required. Enter Table 11 (next pages) at row Error df = 18, α = 0.05, with column r = the number of ranked means spanned: r = 2 for neighbors, r = 3 with one mean between, r = 4 with two between, and so on.

STA 6166 - MCP10
Duncan's Test Critical Values
[Table 11: Duncan's multiple range test critical values.]

STA 6166 - MCP11
[Table 11, continued.]

STA 6166 - MCP12
Duncan's MRT
[Matrix of pairwise comparisons, with ‡ marking pairs of treatment level means that are statistically different at the α = 0.05 level, plus the resulting letter grouping of the ranked means.]

STA 6166 - MCP13
Scheffé's S Method
F = 24.8 > F(5, 18, .05) = 2.77 ⇒ F is significant.
For comparing a contrast l = Σ c_i μ_i, reject H0: l = 0 at α = 0.05 if the estimated contrast exceeds S = sqrt((t-1) F(t-1, df_Error, α)) × sqrt(MSE Σ c_i²/n_i) in absolute value.
Since each treatment is replicated the same number of times, S will be the same for comparing any pair of treatment means.
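A sketch of Scheffé's S for a pairwise contrast (c_i = +1, c_j = -1). With the hypothetical MSE = 120 used throughout, S lands close to the S = 28.82 quoted on the next slide:

```python
from math import sqrt
from scipy.stats import f as f_dist

# Scheffe's S for a pairwise contrast (c_i = +1, c_j = -1) with equal n:
#   S = sqrt((t - 1) * F(alpha; t - 1, df_error)) * sqrt(2 * MSE / n)
# MSE = 120 is a hypothetical value (not preserved in the transcript).
mse, n, t_levels, df_error, alpha = 120.0, 4, 6, 18, 0.05

f_crit = f_dist.ppf(1 - alpha, t_levels - 1, df_error)   # ~2.77
s = sqrt((t_levels - 1) * f_crit) * sqrt(2 * mse / n)

print(f"F critical = {f_crit:.2f}, S = {s:.2f}")
```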

STA 6166 - MCP14
Scheffé's S Method
Any difference larger than S = 28.82 is significant. Very conservative ⇒ experimentwise error driven.
[Matrix of pairwise comparisons, with ‡ marking pairs of treatment level means that are statistically different at the α = 0.05 level, plus the resulting letter grouping of the ranked means.]

STA 6166 - MCP15
Grouping of Ranked Means
[Side-by-side letter groupings of the ranked means under LSD, SNK, Duncan's, Tukey's HSD, and Scheffé's S.]
Which grouping will you use?
1) What is your risk level?
2) Comparisonwise versus experimentwise error concerns.

STA 6166 - MCP16
So, which MC method should you use?
There is a famous story of a statistician and his two clients:
Client 1 arrives daily with his hypothesis test and asks for assistance. The statistician helps him using α = 0.05. After one year they have done 365 tests. If all the nulls tested were indeed true, they would have made approximately (365)(0.05) ≈ 18 erroneous rejections, but they are satisfied with the progress of the research.
Client 2 saves all his statistical analyses for the end of the year and then approaches the statistician for help. The statistician responds: "My! You have a terrible multiple comparisons problem!"
In cases where the researcher is just searching the data (and does not have an interest in every comparison made), some form of error-rate control beyond simple Fisher's LSD may be appropriate. On the other hand, if you definitely have an interest in every comparison, it may be better to use the LSD (and accept the comparisonwise error rate).
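The story's arithmetic can be made concrete: beyond the ~18 expected false rejections, the chance of at least one false rejection (the experimentwise error rate) is essentially certain:

```python
# The "two clients" arithmetic: with 365 independent tests of true nulls
# at alpha = 0.05, the expected number of false rejections and the chance
# of at least one false rejection (experimentwise error) are:
alpha, m = 0.05, 365

expected_false = m * alpha            # ~18 erroneous rejections
familywise = 1 - (1 - alpha) ** m     # essentially 1

print(f"expected false rejections = {expected_false:.2f}")
print(f"P(at least one false rejection) = {familywise:.6f}")
```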

STA 6166 - MCP17
Which method to use? Some practical advice
If comparisons were decided upon before examining the data (best):
- Just one comparison: use the standard (two-sample) t-test. (In this case use the pooled estimate of the common variance, MSE, and its corresponding error df. This is just Fisher's LSD.)
- Few comparisons: use the Bonferroni adjustment to the t-test. With m comparisons, use α/m for the critical value.
- Many comparisons: Bonferroni becomes increasingly conservative as m increases. At some point it is better to use Tukey (for pairwise comparisons) or Scheffé (for contrasts).
If comparisons were decided upon after examining the data:
- Just want pairwise comparisons: use Tukey.
- All contrasts (linear combinations of treatment means): use Scheffé.
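The Bonferroni adjustment amounts to inflating the critical t value by testing each comparison at α/m; a sketch with df = 18 as in the wheat example and m = 15 pairwise comparisons (the m here is illustrative):

```python
from scipy.stats import t as t_dist

# Bonferroni adjustment: with m planned comparisons, each two-sided test
# uses alpha/m, which inflates the critical t value.
# df_error = 18 as in the wheat example; m = 15 pairwise comparisons.
alpha, df_error, m = 0.05, 18, 15

t_plain = t_dist.ppf(1 - alpha / 2, df_error)        # single comparison
t_bonf = t_dist.ppf(1 - alpha / (2 * m), df_error)   # Bonferroni-adjusted

print(f"unadjusted t = {t_plain:.3f}, Bonferroni t = {t_bonf:.3f}")
```

As m grows, the adjusted critical value keeps climbing, which is why Tukey or Scheffé eventually becomes the less conservative choice.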