Multiple Comparisons
- Control of error rates
- Pairwise comparisons
- Comparisons to a control
- Linear contrasts
Multiple Comparison Procedures

Once we reject H0: μ1 = μ2 = ... = μc in favor of H1: NOT all μ's are equal, we don't yet know the way in which they're not all equal, merely that they're not all the same. If there are 4 columns, are all 4 μ's different? Are 3 the same and one different? If so, which one? Etc.
These "more detailed" inquiries into the process are called MULTIPLE COMPARISON PROCEDURES.

Errors (Type I): We set up α as the significance level for a hypothesis test. Suppose we test 3 independent hypotheses, each at α = .05; each test then has a Type I error probability (rejecting H0 when it's true) of .05. However,

P(at least one Type I error in the 3 tests) = 1 − P(accept all 3 | all 3 true) = 1 − (.95)^3 ≈ .14
In other words, the probability is .14 that at least one Type I error is made. For 5 tests, the probability is 1 − (.95)^5 ≈ .23.

Question: Should we choose α = .05 per test, and suffer (for 5 tests) a .23 experimentwise error rate (αE)? OR should we control the overall (experimentwise) error rate to be .05, and find the individual-test α from 1 − (1 − α)^5 = .05 (which gives α ≈ .0102)?
The formula 1 − (1 − α)^5 = .05 is valid only if the tests are independent; often they're not. [E.g., consider testing μ1 = μ2, μ2 = μ3, and μ1 = μ3: IF μ1 = μ2 is accepted and μ2 = μ3 is rejected, isn't it more likely that μ1 = μ3 is also rejected?]
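Under the independence assumption just described, these error-rate calculations take only a few lines of Python (the function names here are ours, not from the text):

```python
def experimentwise_rate(alpha: float, k: int) -> float:
    """P(at least one Type I error) across k independent tests,
    each run at per-test significance level alpha."""
    return 1 - (1 - alpha) ** k

def per_test_alpha(alpha_e: float, k: int) -> float:
    """Per-test alpha that yields experimentwise rate alpha_e,
    from solving 1 - (1 - alpha)**k = alpha_e (independent tests)."""
    return 1 - (1 - alpha_e) ** (1 / k)

print(round(experimentwise_rate(0.05, 3), 2))  # 0.14
print(round(experimentwise_rate(0.05, 5), 2))  # 0.23
print(round(per_test_alpha(0.05, 5), 4))       # 0.0102
```

This reproduces the .14 and .23 figures above, and the ≈ .01 individual α needed to hold the 5-test experimentwise rate at .05.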
When the tests are not independent, it's usually very difficult to arrive at the correct α for an individual test so that a specified value results for the experimentwise error rate (also called the family error rate).
There are many multiple comparison procedures; we'll cover only a few.

Pairwise Comparisons, Method 1 (Fisher test): Do a series of pairwise t-tests, each with a specified α value (for the individual test). This is called Fisher's LEAST SIGNIFICANT DIFFERENCE (LSD) procedure.
Example: Broker Study. A financial firm would like to determine whether the brokers they use to execute trades differ in their ability to purchase stock for the firm at a low price per share. To measure cost, an index Y is used: Y = 1000(A − P)/A, where P = per-share price paid for the stock and A = average of the day's high and low price per share. The higher Y is, the better the trade.
Columns: broker (5 levels); rows: R = 6 trades per column. Five brokers were in the study and six trades were randomly assigned to each broker.
With α = .05, the table value is F = 2.76; the computed F exceeds it, so we reject equal column MEANS. "MSW" below denotes the mean square within (error) from this ANOVA.
For any comparison of 2 columns, the acceptance region for H0: μi = μj is

0 ± t(α/2) × sqrt( MSW × (1/ni + 1/nj) ),  df = dfW

(ni = nj = 6 here). MSW is the pooled variance, the estimate of the common variance.
In our example, with α = .05:

t(.025, 25) × sqrt( 21.2 × (1/6 + 1/6) ) ≈ 5.48

This value, 5.48, is called the Least Significant Difference (LSD). With the same number of data points, R, in each column,

LSD = t(α/2) × sqrt( 2 × MSW / R ).
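As a quick check, the LSD can be computed from SciPy's t quantile (a sketch; SciPy is assumed available, and the slide's MSW = 21.2, R = 6, df = 25 are plugged in):

```python
from scipy.stats import t  # SciPy assumed available

def fisher_lsd(msw, r, df_error, alpha=0.05):
    """LSD = t(alpha/2, df) * sqrt(2 * MSW / R), equal column sizes R."""
    t_crit = t.ppf(1 - alpha / 2, df_error)  # two-sided critical value
    return t_crit * (2 * msw / r) ** 0.5

print(round(fisher_lsd(21.2, 6, 25), 2))  # close to the slide's 5.48
```

Any pair of column means differing by more than this value is declared significantly different.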
Summarize the comparison results (p. 443). Step 1: rank-order the column means and draw the underline diagram.
Step 2: identify differences > 5.48 and mark them accordingly, comparing the pair of means within each subset:

Comparison    |difference| vs. LSD (5.48)
3 vs. 1       <   (contiguous; no need to detail)
2 vs. 4       <
2 vs. 5       <
4 vs. 5       <

Conclusion: {3, 1} and {2, 4, 5}. Brokers 1 and 3 are not significantly different from each other, but they are significantly different from the other 3 brokers.

We can get an "inconsistency". Suppose the mean of column 5 were 18. Now:

Comparison    |difference| vs. LSD (5.48)
3 vs. 1       <
2 vs. 4       <
2 vs. 5       >
4 vs. 5       <

Conclusion: {3, 1} as before, but now the remaining brokers overlap: brokers 2 and 4 are not significantly different, and brokers 4 and 5 are not significantly different, yet broker 2 is significantly different from (smaller than) broker 5.
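The underline-diagram logic can be sketched as code. The means below are hypothetical (the slide's own column means did not survive extraction), chosen so the {3, 1} vs. {2, 4, 5} grouping pattern is reproduced:

```python
def underline_groups(means, lsd):
    """Maximal runs of columns (sorted by mean, descending) whose means lie
    within lsd of the run's top mean; overlapping runs are exactly the
    'inconsistency' discussed in the text."""
    order = sorted(means, key=means.get, reverse=True)
    groups = []
    for i, start in enumerate(order):
        run = [c for c in order[i:] if means[start] - means[c] < lsd]
        # keep a run only if it adds a column not already underlined
        if not groups or not set(run) <= set(groups[-1]):
            groups.append(run)
    return groups

# Hypothetical means mimicking the broker pattern (NOT the text's data):
m = {"b3": 21, "b1": 20, "b2": 13, "b4": 12, "b5": 11}
print(underline_groups(m, 5.48))  # [['b3', 'b1'], ['b2', 'b4', 'b5']]
```

Raising the mean of one column so that it sits between two groups produces overlapping underlines, mirroring the "col 5 were 18" scenario.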
Fisher's pairwise comparisons (Minitab)

Family error rate and individual error rate are reported; critical value = t(α/2) (not given in version 16.1). Intervals are for (column level mean) − (row level mean). From the intervals: Col 1 < Col 2 and Col 2 = Col 4.

Minitab: Stat >> ANOVA >> One-Way ANOVA, then click "Comparisons".
Minitab Output for Broker Data: Grouping Information Using Fisher Method

broker  N  Mean  Grouping
(grouping letters read A, A, A, B, B down the sorted list)

Means that do not share a letter are significantly different.
Pairwise comparisons Method 2: (Tukey Test) A procedure which controls the experimentwise error rate is “TUKEY’S HONESTLY SIGNIFICANT DIFFERENCE TEST ”.
Tukey's method works in a similar way to Fisher's LSD, except that the "LSD" counterpart ("HSD") is not

t(α/2) × sqrt( MSW × (1/ni + 1/nj) )    or, for an equal number of data points per column,    t(α/2) × sqrt( 2 × MSW / R ),

but

HSD = tuk(α/2) × sqrt( 2 × MSW / R ),

where tuk(α/2) has been computed to take into account all the inter-dependencies of the different comparisons.

A more general way to write this is

HSD = q × sqrt( MSW / R ),  where q = sqrt(2) × tuk(α/2).

Here q = (Ȳlargest − Ȳsmallest) / sqrt( MSW / R ); the probability distribution of q is called the "Studentized Range Distribution", with q = q(c, df), where c = number of columns and df = df of MSW.
With c = 5 and df = 25, from the table (or Minitab): q = 4.15, so tuk = 4.15/1.414 = 2.93. Then

HSD = 4.15 × sqrt( 21.2 / 6 ) ≈ 7.80.
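SciPy exposes the Studentized Range distribution directly, so the HSD can be checked in code (a sketch; scipy.stats.studentized_range requires SciPy ≥ 1.7):

```python
from scipy.stats import studentized_range  # SciPy >= 1.7 assumed

def tukey_hsd_threshold(msw, r, c, df_error, alpha=0.05):
    """HSD = q(alpha; c, df) * sqrt(MSW / R)."""
    q = studentized_range.ppf(1 - alpha, c, df_error)  # q(c, df) quantile
    return q * (msw / r) ** 0.5

# Broker study: c = 5 columns, MSW = 21.2, R = 6, df = 25
print(round(tukey_hsd_threshold(21.2, 6, 5, 25), 2))  # close to 7.80
```

Because q ≈ 4.15 exceeds sqrt(2) × t(.025, 25), the HSD (≈ 7.80) is larger than the LSD (≈ 5.48): Tukey pays for experimentwise control with a wider yardstick.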
In our earlier example, rank-order the column means as before. (No contiguous differences exceed 7.80.)
Comparison    |difference| vs. HSD (7.80)
3 vs. 1       <   (contiguous)
3 vs. 2       <
3 vs. 4       >   *
3 vs. 5       >   *
1 vs. 2       <
1 vs. 4       >   *
1 vs. 5       >   *
2 vs. 4       <
2 vs. 5       <
4 vs. 5       <   (contiguous)

Conclusion: {3, 1, 2} and {2, 4, 5}. Broker 2 is "the same as 1 and 3, but also the same as 4 and 5."
Tukey's pairwise comparisons (Minitab)

Family error rate and individual error rate are reported; critical value = 4.15 (q; not given in version 16.1). Intervals are for (column level mean) − (row level mean).

Minitab: Stat >> ANOVA >> One-Way ANOVA, then click "Comparisons".
Minitab Output for Broker Data: Grouping Information Using Tukey Method

broker  N  Mean  Grouping
3       6        A
1       6        A
2       6        A B
4       6        B
5       6        B

Means that do not share a letter are significantly different.
Special Multiple Comparisons, Method 3: Dunnett's test

Designed specifically for (and incorporating the interdependencies of) comparing several "treatments" to a "control." In our example, column 1 (R = 6) is the CONTROL.

The analog of the LSD ( = t(α/2) × sqrt( 2 × MSW / R ) ) is

D = Dut(α/2) × sqrt( 2 × MSW / R ),

where Dut(α/2) comes from a table or Minitab.
In our example:

D = Dut(α/2) × sqrt( 2 × MSW / R ) = 2.61 × sqrt( 2 × 21.2 / 6 ) ≈ 6.94

Comparison (vs. CONTROL)    |difference| vs. D (6.94)
1 vs. 2                     <
1 vs. 3                     <
1 vs. 4                     >
1 vs. 5                     >

Columns 4 and 5 differ from the control [1]; columns 2 and 3 are not significantly different from the control.
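The arithmetic for D is easy to verify in code (the two-sided value 2.61 is taken from the slide's Dunnett table; only the plug-in is computed here):

```python
def dunnett_d(dut, msw, r):
    """Critical difference D = Dut(alpha/2) * sqrt(2 * MSW / R)
    for comparing each treatment mean to the control mean."""
    return dut * (2 * msw / r) ** 0.5

# Dut(.025) = 2.61 for these comparisons with df = 25, per the slide's table
print(round(dunnett_d(2.61, 21.2, 6), 2))  # 6.94
```

Note 6.94 sits between the LSD (5.48) and the HSD (7.80): Dunnett controls the family rate over only the c − 1 comparisons to the control, so it is less conservative than Tukey over all pairs.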
Dunnett's comparisons with a control (Minitab)

Family error rate = controlled!! Individual error rate and critical value = 2.61 ( Dut(α/2) ). Control = level 1 of broker. Intervals are for (treatment mean) − (control mean), one interval per treatment level (Level, Lower, Center, Upper).

Minitab: Stat >> ANOVA >> General Linear Model, then click "Comparisons".
What Method Should We Use?

The Fisher procedure can be used only after the F-test in the ANOVA is significant at 5%. Otherwise, use the Tukey procedure. Note that, to avoid being too conservative, the significance level of the Tukey test can be set higher (e.g., 10%), especially when the number of levels is large.
Contrast Example

Four treatments: Placebo, Sulfa Type S1, Sulfa Type S2, Antibiotic Type A. Suppose the questions of interest are:
(1) Placebo vs. Non-placebo
(2) S1 vs. S2
(3) (Average) S vs. A
In general, a question of interest can be expressed by a linear combination of column means such as

Z = a1·μ1 + a2·μ2 + ... + ac·μc  (estimated by replacing each μj with the column mean Ȳ·j),

with the restriction that Σ aj = 0. Such linear combinations are called contrasts.
Test whether a contrast has mean 0. The sum of squares for contrast Z is

SSC = R · Z² / Σ aj²,

where R is the number of rows (replicates). The test statistic Fcalc = SSC/MSW is distributed as F with 1 and (df of error) degrees of freedom. Reject E[Z] = 0 if the observed Fcalc is too large (say, > F.05(1, df of error) at the 5% significance level).
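A minimal sketch of this contrast test; the column means, R, and MSW below are made-up illustration values, not the text's data:

```python
def contrast_F(a, col_means, r, msw):
    """Fcalc = SSC / MSW, where SSC = R * Z**2 / sum(a_j**2)
    and Z = sum(a_j * Ybar_j); requires sum(a_j) == 0."""
    assert abs(sum(a)) < 1e-9, "contrast coefficients must sum to 0"
    z = sum(aj * m for aj, m in zip(a, col_means))
    ssc = r * z ** 2 / sum(aj ** 2 for aj in a)
    return ssc / msw

# Hypothetical means for P, S1, S2, A with R = 8 replicates, MSW = 5.0:
means = [10.0, 7.0, 6.0, 3.0]
print(round(contrast_F([3, -1, -1, -1], means, 8, 5.0), 2))  # 26.13
```

Each Fcalc is then compared against the single-contrast critical value F(1, df of error), e.g. 4.20 for df = 28 at the 5% level.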
Example 1 (cont.): the aj's for the 3 contrasts (one standard scaling; any nonzero multiple gives the same F)

Contrast              P    S1   S2   A
C1: P vs. non-P       3   −1   −1   −1
C2: S1 vs. S2         0    1   −1    0
C3: S vs. A           0    1    1   −2
Calculating Z for each contrast from the column means Ȳ·1 (Placebo), Ȳ·2 (S1), Ȳ·3 (S2), Ȳ·4 (A):

C1 (Placebo vs. drugs):  Z1 = 3Ȳ·1 − Ȳ·2 − Ȳ·3 − Ȳ·4
C2 (S1 vs. S2):          Z2 = Ȳ·2 − Ȳ·3
C3 (average S vs. A):    Z3 = Ȳ·2 + Ȳ·3 − 2Ȳ·4
Tests for Contrasts

Source   SSQ   df   MSQ   F
C1       —      1   —     —
C2       —      1   —     —
C3       —      1   —     —
Error    —     28   —

Critical value: F.05(1, 28) = 4.20
Example 1 (cont.): Conclusions

The mean response for Placebo is significantly different from that for Non-placebo. There is no significant difference between Types S1 and S2. Using Type A differs significantly, on average, from using Type S.