Everyday is a new beginning in life. Every moment is a time for self vigilance.

Multiple Comparisons: error rate control, pairwise comparisons, comparisons to a control, and linear contrasts.

Multiple Comparison Procedures. Once we reject H0: μ1 = μ2 = ... = μc in favor of H1: NOT all μ's are equal, we don't yet know the way in which they're not all equal, but simply that they're not all the same. If there are 4 columns, are all 4 μ's different? Are 3 the same and one different? If so, which one? And so on.

These "more detailed" inquiries into the process are called MULTIPLE COMPARISON PROCEDURES. Errors (Type I): We set up α as the significance level for a hypothesis test. Suppose we test 3 independent hypotheses, each at α = .05; each test then has a Type I error rate (rejecting H0 when it is true) of .05. However, given that all three H0's are true, P(at least one Type I error in the 3 tests) = 1 − P(no Type I errors) = 1 − (.95)^3 ≈ .14.

In other words, the probability is .14 that at least one Type I error is made. For 5 tests, the probability is .23. Question: should we choose α = .05 for each test and suffer (for 5 tests) a .23 experimentwise error rate ("a", or αE)? Or should we control the overall error rate "a" at .05 and find the individual-test α from 1 − (1 − α)^5 = .05, which gives α ≈ .0102?
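The experimentwise error-rate arithmetic above is easy to verify with a short calculation (plain Python; the function names are ours, not from the slides):

```python
# Experimentwise (family) error rate when running k independent tests,
# each at individual significance level alpha:
# P(at least one Type I error) = 1 - (1 - alpha)^k.
def family_error_rate(alpha: float, k: int) -> float:
    return 1.0 - (1.0 - alpha) ** k

# Inverse problem: choose the individual-test alpha so that the
# family error rate equals a target value.
def individual_alpha(family_alpha: float, k: int) -> float:
    return 1.0 - (1.0 - family_alpha) ** (1.0 / k)

print(round(family_error_rate(0.05, 3), 2))   # 3 tests at .05 -> 0.14
print(round(family_error_rate(0.05, 5), 2))   # 5 tests at .05 -> 0.23
print(round(individual_alpha(0.05, 5), 4))    # per-test alpha -> 0.0102
```

This reproduces the .14 and .23 rates quoted above, and shows why the per-test α must shrink to about .01 to hold the family rate at .05 across 5 tests.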

The formula 1 − (1 − α)^5 = .05 would be valid only if the tests are independent; often they're not. For example, consider testing μ1 = μ2, μ2 = μ3, and μ1 = μ3: if μ1 = μ2 is accepted and μ2 = μ3 is rejected, isn't it more likely that μ1 = μ3 is rejected?

When the tests are not independent, it's usually very difficult to arrive at the correct α for an individual test so that a specified value results for the experimentwise error rate (also called the family error rate).

There are many multiple comparison procedures; we'll cover only a few. Pairwise Comparisons, Method 1 (Fisher Test): do a series of pairwise t-tests, each with a specified α value (for the individual test). This is called Fisher's LEAST SIGNIFICANT DIFFERENCE (LSD) procedure.

Example: Broker Study. A financial firm would like to determine whether the brokers it uses to execute trades differ in their ability to purchase a stock for the firm at a low buying price per share. To measure cost, an index Y is used: Y = 1000(A − P)/A, where P = per-share price paid for the stock and A = average of the day's high and low price per share. The higher Y is, the better the trade.

Columns: broker; R = 6 trades per broker. Five brokers were in the study and six trades were randomly assigned to each broker. [Data table lost in transcription.]

With α = .05, the tabled F value is 2.76, so the ANOVA rejects equality of the column means. "MSW" denotes the within-column (error) mean square; here MSW = 21.2 with 25 df.

For any comparison of two columns i and j, the acceptance region for Yi − Yj is

0 ± t_{α/2} × sqrt( MSW × (1/ni + 1/nj) ),

with df = df_w (here ni = nj = 6); C_L and C_U mark the lower and upper cutoffs of the region. MSW, the pooled variance, is the estimate of the common variance.

In our example, with α = .05:

0 ± t_{.025, 25} × sqrt( 21.2 × (1/6 + 1/6) ) = 0 ± 2.06 × 2.66 = 0 ± 5.48.

This value, 5.48, is called the Least Significant Difference (LSD). With the same number of data points, R, in each column, LSD = t_{α/2} × sqrt(2 × MSW / R).
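As a check, the LSD calculation above can be reproduced in a few lines (a sketch in plain Python; the critical value t_{.025, 25} = 2.06 is the table value used on the slide):

```python
from math import sqrt

# Fisher LSD for equal group sizes: LSD = t_{alpha/2, df_w} * sqrt(2 * MSW / R).
# Broker example values: MSW = 21.2, R = 6 trades per broker, t_{.025,25} = 2.06.
def fisher_lsd(t_crit: float, msw: float, r: int) -> float:
    return t_crit * sqrt(2 * msw / r)

lsd = fisher_lsd(t_crit=2.06, msw=21.2, r=6)
print(round(lsd, 2))  # 5.48, matching the slide
```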

Step 1: Summarize the comparison results (p. 443): rank-order the column means and compare them with an underline diagram. [Table of column means lost in transcription.]

Step 2: Identify any difference > 5.48 and mark it accordingly. Step 3: Compare the pair of means within each subset (|difference| vs. LSD): 3 vs. 1, 2 vs. 4, 2 vs. 5, 4 vs. 5. Contiguous pairs need no further detail. [Numeric differences lost in transcription.]

Conclusion: 3, 1 | 2, 4, 5. We can get an "inconsistency": suppose the mean of column 5 were 18. Then re-running the comparisons (|difference| vs. LSD) for 3 vs. 1, 2 vs. 4, 2 vs. 5, and 4 vs. 5 gives a mix of significant and non-significant results that no longer splits the brokers into clean groups: Conclusion: 3, ???

Brokers 1 and 3 are not significantly different from each other, but they are significantly different from the other three brokers. Conclusion: 3, 1 | 2, 4, 5. Brokers 2 and 4 are not significantly different, and brokers 4 and 5 are not significantly different, but broker 2 is significantly different from (smaller than) broker 5.

Fisher's pairwise comparisons (Minitab). Minitab reports the family error rate, the individual error rate, and the critical value, i.e. t_{α/2} (not given in version 16.1), along with intervals for (column level mean) − (row level mean). Menu path: Stat >> ANOVA >> One-Way ANOVA, then click "Comparisons". From the output: Col 1 < Col 2 and Col 2 = Col 4. [Numeric output lost in transcription.]

Minitab Output for Broker Data: Grouping Information Using Fisher Method (columns: broker, N, Mean, Grouping). Means that do not share a grouping letter are significantly different. [Numeric columns lost in transcription.]

Pairwise Comparisons, Method 2 (Tukey Test): a procedure which controls the experimentwise error rate is TUKEY'S HONESTLY SIGNIFICANT DIFFERENCE (HSD) TEST.

Tukey's method works in a similar way to Fisher's LSD, except that the LSD counterpart (the "HSD") is not

t_{α/2} × sqrt( MSW × (1/ni + 1/nj) )  (or, for an equal number of data points per column, t_{α/2} × sqrt(2 × MSW / R)),

but

tuk_{α/2} × sqrt(2 × MSW / R),

where tuk has been computed to take into account all the interdependencies of the different comparisons.

HSD = tuk_{α/2} × sqrt(2 × MSW / R).

A more general approach is to write HSD = q_α × sqrt(MSW / R), where q_α = tuk_{α/2} × sqrt(2). The statistic q = (Ȳ_largest − Ȳ_smallest) / sqrt(MSW / R) follows the "Studentized Range Distribution", with q = q(c, df), where c = number of columns and df = df of MSW.

With c = 5 and df = 25, from the table (or Minitab): q = 4.15, so tuk = 4.15 / 1.414 = 2.93. Then HSD = 4.15 × sqrt(21.2/6) ≈ 7.80, which equals 2.93 × sqrt(2 × 21.2/6).
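The HSD arithmetic, and the equivalence between the q form and the tuk form, can be checked directly (plain Python; q = 4.15 is the studentized-range table value quoted on the slide):

```python
from math import sqrt

# Tukey HSD with equal group sizes: HSD = q(c, df) * sqrt(MSW / R),
# where q is the studentized-range critical value.
# Broker example values: q(5, 25) = 4.15 at alpha = .05, MSW = 21.2, R = 6.
def tukey_hsd(q: float, msw: float, r: int) -> float:
    return q * sqrt(msw / r)

q = 4.15
tuk = q / sqrt(2)                 # equivalent t-like critical value
hsd = tukey_hsd(q, msw=21.2, r=6)
print(round(tuk, 2), round(hsd, 2))  # 2.93 7.8
```

Note HSD ≈ 7.80 is noticeably larger than the Fisher LSD of 5.48: the price of controlling the experimentwise error rate is a wider threshold for declaring a pair different.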

In our earlier example, rank-order the means again: no contiguous differences exceed 7.80.

All ten pairwise comparisons (|difference| vs. HSD = 7.80): 3 vs. 1, 3 vs. 2, 3 vs. 4, 3 vs. 5, 1 vs. 2, 1 vs. 4, 1 vs. 5, 2 vs. 4, 2 vs. 5, 4 vs. 5. The differences for 3 vs. 4, 3 vs. 5, 1 vs. 4, and 1 vs. 5 exceed 7.80; the rest do not (contiguous pairs need no further detail). [Numeric differences lost in transcription.] Conclusion: 3, 1, 2 | 2, 4, 5 — broker 2 is "the same as 1 and 3, but also the same as 4 and 5."

Tukey's pairwise comparisons (Minitab). Minitab reports the family error rate, the individual error rate, and the critical value 4.15, i.e. q_α (not given in version 16.1), along with intervals for (column level mean) − (row level mean). Menu path: Stat >> ANOVA >> One-Way ANOVA, then click "Comparisons".

Minitab Output for Broker Data: Grouping Information Using Tukey Method (columns: broker, N, Mean, Grouping). Brokers 3, 1, and 2 share letter A; brokers 2, 4, and 5 share letter B (broker 2 gets both letters). Means that do not share a letter are significantly different. [Numeric columns lost in transcription.]

Special Multiple Comparison Method 3: Dunnett's Test. Designed specifically for (and incorporating the interdependencies of) comparing several "treatments" to a "control." Example: column 1 is the CONTROL, with R = 6. The analog of the LSD (= t_{α/2} × sqrt(2 × MSW / R)) is

D = Dut_{α/2} × sqrt(2 × MSW / R),

with Dut taken from a table or from Minitab.

In our example, with column 1 as the CONTROL:

D = Dut_{α/2} × sqrt(2 × MSW / R) = 2.61 × sqrt(2 × 21.2 / 6) ≈ 6.94.

Comparisons (|difference| vs. 6.94): 1 vs. 2, 1 vs. 3, 1 vs. 4, 1 vs. 5. Columns 4 and 5 differ from the control; columns 2 and 3 are not significantly different from the control. [Numeric differences lost in transcription.]
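The Dunnett cutoff can be checked the same way (plain Python; Dut = 2.61 is the table value quoted on the slide for 5 columns and 25 error df):

```python
from math import sqrt

# Dunnett's critical difference for comparing treatments to a control:
# D = Dut_{alpha/2} * sqrt(2 * MSW / R).
# Broker example values: Dut = 2.61, MSW = 21.2, R = 6.
def dunnett_d(dut: float, msw: float, r: int) -> float:
    return dut * sqrt(2 * msw / r)

d = dunnett_d(dut=2.61, msw=21.2, r=6)
print(round(d, 2))  # 6.94
```

D ≈ 6.94 sits between the Fisher LSD (5.48) and the Tukey HSD (7.80), reflecting that Dunnett's test corrects for only 4 comparisons rather than all 10 pairs.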

Dunnett's comparisons with a control (Minitab). The family error rate is controlled. Minitab reports the individual error rate and the critical value 2.61, i.e. Dut_{α/2}. Control = level 1 of broker. The output gives intervals (Lower, Center, Upper) for each treatment mean minus the control mean. Menu path: Stat >> ANOVA >> General Linear Model, then click "Comparisons". [Interval values lost in transcription.]

What Method Should We Use? Fisher's procedure can be used only after the ANOVA F-test is significant at 5%; otherwise, use Tukey's procedure. Note that, to avoid being too conservative, the significance level of the Tukey test can be set higher (e.g., 10%), especially when the number of levels is large.

Contrast Example. Four treatments: Placebo, Sulfa Type S1, Sulfa Type S2, Antibiotic Type A. Suppose the questions of interest are (1) Placebo vs. non-placebo, (2) S1 vs. S2, (3) (average) S vs. A.

In general, a question of interest can be expressed by a linear combination of column means, C = a1·μ1 + a2·μ2 + ... + ac·μc, with the restriction that Σ aj = 0. Such linear combinations are called contrasts.

Test whether a contrast has mean 0. The sum of squares for contrast C is

SSC = R × ( Σ aj·Ȳ.j )² / Σ aj²,

where R is the number of rows (replicates). The test statistic Fcalc = SSC/MSW is distributed as F with 1 and (df of error) degrees of freedom. Reject E[C] = 0 if the observed Fcalc is too large (say, > F.05(1, df of error) at the 5% significance level).
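A minimal sketch of this computation in plain Python. The column means, R, and MSW below are made-up numbers for illustration only (the example's actual data were lost in transcription); the coefficients are the standard choice for a placebo-vs.-drugs contrast:

```python
# Sum of squares for a contrast C = sum(a_j * ybar_j) with sum(a_j) = 0:
# SSC = R * C**2 / sum(a_j**2); then Fcalc = SSC / MSW on 1 and df_error df.
def contrast_ss(a, ybar, r):
    c = sum(ai * yi for ai, yi in zip(a, ybar))
    return r * c ** 2 / sum(ai ** 2 for ai in a)

a = (3, -1, -1, -1)            # placebo vs. the three drugs (coefficients sum to 0)
ybar = (10.0, 7.0, 6.0, 3.0)   # hypothetical column means for P, S1, S2, A
r, msw = 8, 5.0                # hypothetical replicates and error mean square

ssc = contrast_ss(a, ybar, r)
f_calc = ssc / msw
print(round(ssc, 2), round(f_calc, 2))  # 130.67 26.13
```

With these hypothetical numbers, Fcalc far exceeds F.05(1, 28) = 4.20, so the contrast's mean would be judged nonzero.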

Example 1 (cont.): the aj's for the 3 contrasts (columns P, S1, S2, A): C1 (Placebo vs. non-placebo): 3, −1, −1, −1. C2 (S1 vs. S2): 0, 1, −1, 0. C3 (S vs. A): 0, 1, 1, −2.

Calculating SSC for each contrast, row by row (top, middle, bottom). [The worked arithmetic was lost in transcription.]

Table of column means Ȳ.1, Ȳ.2, Ȳ.3, Ȳ.4 for P, S1, S2, A, with the contrast rows Placebo vs. drugs, S1 vs. S2, and average S vs. A. [Values lost in transcription.]

Tests for Contrasts: a table with columns Source, SSQ, df, MSQ, F, with rows C1, C2, C3 (1 df each) and Error. Critical value: F.05(1, 28) = 4.20. [Numeric entries lost in transcription.]

Example 1 (cont.): Conclusions. The mean response for Placebo is significantly different from that for non-placebo. There is no significant difference between using Types S1 and S2. Using Type A is significantly different from using Type S on average.