
Published by Abigayle Barrett. Modified about 1 year ago.

1. Model Adequacy Checking in the ANOVA
Text reference: Section 3-4, pg. 75.
Checking assumptions is important: have we fit the right model?
- Normality
- Independence
- Constant variance
Later we will discuss what to do if some of these assumptions are violated.

2. Model Adequacy Checking in the ANOVA: Residuals (see text, Sec. 3-4, pg. 75)
Violations of the basic assumptions and model adequacy can be investigated by examining the residuals.
- Statistical software usually generates the residuals.
- Residual plots are very useful, e.g. the normal probability plot of residuals.
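The residuals in the one-way model are simply the observations minus their treatment means. A minimal sketch, using a small hypothetical layout (numbers are illustrative, not from the text):

```python
import numpy as np

# Hypothetical one-way layout: a = 3 treatments, n = 4 replicates each
# (illustrative numbers, not from the text).
y = np.array([
    [9.0, 12.0, 10.0, 8.0],
    [20.0, 21.0, 23.0, 17.0],
    [6.0, 5.0, 8.0, 16.0],
])

# In the one-way ANOVA the residual is the observation minus its
# treatment mean: e_ij = y_ij - ybar_i.
treatment_means = y.mean(axis=1, keepdims=True)
residuals = y - treatment_means

# By construction the residuals sum to zero within each treatment.
print(residuals.sum(axis=1))
```

These residuals are what the software feeds into the normal probability plot and the other diagnostic plots discussed next.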

3. Other Important Residual Plots

4. Other Important Residual Plots (cont.)
- A run of positive (or negative) residuals in the residuals-versus-time plot indicates correlation between the residuals, a violation of the independence assumption.
- Special attention is needed when the spread is uneven at the two ends of the residuals-versus-time plot.
- Peak discharge versus fitted value (Ex. 3-5, page 81) shows a violation of the independence or constant-variance assumptions.
Figure 3-7: Plot of residuals versus fitted value (Ex. 3-5).

5. Outliers
An outlier is a residual much larger or smaller than any of the others.
- First check whether it is a human mistake in recording or calculation.
- If it cannot be rejected on reasonable non-statistical grounds, it must be specially studied.
- At least two analyses (with and without the outlier) can be made.
Procedures to detect outliers:
- Visual examination.
- Standardized residuals: if e_ij ~ N(0, σ²), then d_ij = e_ij / √MS_E is approximately N(0, 1). Therefore about 68% of the d_ij should fall within ±1, about 95% within ±2, and essentially all within ±3. A residual 3 or 4 standard deviations from zero is a potential outlier.
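The standardized-residual check above is easy to automate. A minimal sketch with hypothetical data containing one planted outlier:

```python
import numpy as np

# Hypothetical 4 x 6 one-way layout; the value 60.0 is a deliberate
# outlier planted for illustration.
y = np.array([
    [10.0, 11.0,  9.0, 10.0, 10.0, 10.0],
    [12.0, 13.0, 12.0, 12.0, 60.0, 12.0],
    [ 8.0,  9.0, 10.0,  9.0,  8.0,  9.0],
    [15.0, 14.0, 15.0, 16.0, 15.0, 15.0],
])
a, n = y.shape
residuals = y - y.mean(axis=1, keepdims=True)

# MS_E = SS_E / (N - a) estimates sigma^2.
mse = (residuals**2).sum() / (a * n - a)

# Standardized residuals d_ij = e_ij / sqrt(MS_E) are approximately
# N(0, 1), so |d_ij| > 3 flags a potential outlier.
d = residuals / np.sqrt(mse)
flagged = np.argwhere(np.abs(d) > 3)
print(flagged)
```

Note that a single large outlier also inflates MS_E, so with very small samples a genuine outlier may not reach the ±3 band; visual examination remains important.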

6. Post-ANOVA Comparison of Means
- The analysis of variance tests the hypothesis of equal treatment means (assuming the residual analysis is satisfactory).
- If that hypothesis is rejected, we don't know which specific means differ.
- Determining which specific means differ following an ANOVA is called the multiple comparisons problem.
- There are many ways to do this; see text, Section 3-5, pg. 87.
- We will use pairwise t-tests on means, sometimes called Fisher's Least Significant Difference (or Fisher's LSD) method.

7. Contrasts
Contrasts are used for multiple comparisons. A contrast is a linear combination of the treatment means,
  C = Σ c_i μ_i, with Σ c_i = 0.
The hypotheses are H0: C = 0 versus H1: C ≠ 0. Hypothesis testing can be done using a t-test:
  t0 = Σ c_i ȳ_i / √((MS_E/n) Σ c_i²)   (equal sample sizes n).
The null hypothesis is rejected if |t0| > t_{α/2, N−a}.
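A short sketch of the contrast t-test, using the treatment means from the Design-Expert output later in the deck; MS_E = 333.70 and n = 5 are assumptions taken from that example (they are consistent with the 8.17 standard errors shown there):

```python
import numpy as np

# Treatment means from the etch rate example's Design-Expert output;
# MS_E = 333.70 and n = 5 are assumed from that example (the standard
# error 8.17 = sqrt(MS_E / n) is consistent with them).
means = np.array([551.20, 587.40, 625.40, 707.00])
mse, n = 333.70, 5

# A contrast comparing the two low power levels with the two high ones;
# the coefficients sum to zero as required.
c = np.array([1.0, 1.0, -1.0, -1.0])
C = c @ means                              # estimated contrast

# t0 = C / sqrt((MS_E / n) * sum(c_i^2))
t0 = C / np.sqrt(mse / n * (c**2).sum())

# Reject H0 if |t0| > t_{0.025, 16} = 2.120 (from a t table).
print(round(t0, 2), abs(t0) > 2.120)
```

The contrast coefficients (1, 1, −1, −1) are a hypothetical choice for illustration; any set summing to zero defines a valid contrast.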

8. Contrasts (cont.)
Hypothesis testing can also be done using an F-test:
  F0 = t0² = (Σ c_i ȳ_i)² / ((MS_E/n) Σ c_i²).
The null hypothesis is rejected if F0 > F_{α, 1, N−a}. The 100(1 − α) percent confidence interval on the contrast is
  Σ c_i ȳ_i ± t_{α/2, N−a} √((MS_E/n) Σ c_i²).
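The equivalence F0 = t0² and the confidence interval can be checked numerically for the same hypothetical contrast, again assuming MS_E = 333.70 and n = 5 from the etch rate example:

```python
import numpy as np

# Same hypothetical contrast as before; MS_E = 333.70 and n = 5 are
# assumed from the etch rate example.
means = np.array([551.20, 587.40, 625.40, 707.00])
mse, n = 333.70, 5
c = np.array([1.0, 1.0, -1.0, -1.0])
C = c @ means
se = np.sqrt(mse / n * (c**2).sum())

# For a single contrast, F0 = t0^2 (1 numerator degree of freedom).
t0 = C / se
F0 = t0**2

# 95% CI: C +/- t_{0.025, 16} * se, with t_{0.025, 16} = 2.120.
lower, upper = C - 2.120 * se, C + 2.120 * se
print(round(F0, 1), round(lower, 1), round(upper, 1))
```

Since the interval excludes zero, the F-test and the t-test lead to the same rejection of H0, as they must.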

9. Comparing Pairs of Treatment Means
Comparing all pairs of the a treatment means amounts to testing H0: μ_i = μ_j for all i ≠ j.
Tukey's test: two means are significantly different if the absolute value of their sample difference exceeds
  T_α = q_α(a, f) √(MS_E/n)   for equal sample sizes, for which the overall significance level is exactly α, or
  T_α = (q_α(a, f)/√2) √(MS_E (1/n_i + 1/n_j))   for unequal sample sizes, for which the overall significance level is at most α.

10. 100(1 − α) Percent Confidence Intervals (Tukey's Test)
The confidence intervals on the differences μ_i − μ_j are
  ȳ_i − ȳ_j ± q_α(a, f) √(MS_E/n)   for equal sample sizes, and
  ȳ_i − ȳ_j ± (q_α(a, f)/√2) √(MS_E (1/n_i + 1/n_j))   for unequal sample sizes.

11. Example: Etch Rate Experiment
α = 0.05, f = 16 (degrees of freedom for error), q_0.05(4, 16) = 4.05 (Table VII), so
  T_0.05 = q_0.05(4, 16) √(MS_E/n) = 4.05 × 8.17 ≈ 33.09,
using the standard error of a treatment mean, √(MS_E/n) = 8.17, from the Design-Expert output.
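As a check on the hand calculation, a minimal sketch of Tukey's test for this example; MS_E = 333.70 and n = 5 are assumed from the example (consistent with the 8.17 standard errors in the Design-Expert output):

```python
import numpy as np
from itertools import combinations

# Etch rate example: a = 4, f = 16, q_0.05(4, 16) = 4.05 (Table VII);
# MS_E = 333.70 and n = 5 are assumed from the example.
means = np.array([551.20, 587.40, 625.40, 707.00])
mse, n, q = 333.70, 5, 4.05

# Critical difference T_0.05 = q_0.05(4, 16) * sqrt(MS_E / n).
T = q * np.sqrt(mse / n)
print(round(T, 2))

# Declare a pair different when |ybar_i - ybar_j| exceeds T.
for i, j in combinations(range(len(means)), 2):
    diff = abs(means[i] - means[j])
    print(i + 1, "vs", j + 1, round(diff, 1), diff > T)
```

Every pairwise difference in this example exceeds the critical value, so all four treatment means are declared different from one another.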

12. The Fisher Least Significant Difference (LSD) Method
Two means are significantly different if |ȳ_i − ȳ_j| > LSD, where
  LSD = t_{α/2, N−a} √(2 MS_E/n)   for equal sample sizes, and
  LSD = t_{α/2, N−a} √(MS_E (1/n_i + 1/n_j))   for unequal sample sizes.
For Example 3-1, LSD = t_{0.025, 16} × √(2 MS_E/n) = 2.120 × 11.55 ≈ 24.49, using the standard error of a difference in means, √(2 MS_E/n) = 11.55, from the Design-Expert output.
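The same LSD calculation as a short sketch, again assuming MS_E = 333.70 and n = 5 from the etch rate example and t_{0.025, 16} = 2.120 from a t table:

```python
import numpy as np

# Etch rate example; MS_E = 333.70 and n = 5 are assumed from the
# example, t_{0.025, 16} = 2.120 from a t table.
means = np.array([551.20, 587.40, 625.40, 707.00])
mse, n, t_crit = 333.70, 5, 2.120

# Equal sample sizes: LSD = t_{alpha/2, N-a} * sqrt(2 * MS_E / n).
lsd = t_crit * np.sqrt(2 * mse / n)
print(round(lsd, 2))

# The smallest pairwise difference, |551.2 - 587.4| = 36.2, exceeds
# the LSD, so even the closest pair of means is declared different.
print(abs(means[0] - means[1]) > lsd)
```

Note that the LSD (about 24.49) is smaller than the Tukey critical difference (about 33.09): the LSD method is more powerful for individual comparisons but does not control the overall error rate.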

13. Duncan's Multiple Range Test (page 100); The Newman-Keuls Test (page 102)
Are they the same? Which one should be used? Opinions differ:
- The LSD method and Duncan's multiple range test are the most powerful.
- The Tukey method controls the overall error rate.

14. Design-Expert Output

Treatment Means (Adjusted, If Necessary)
  Treatment   Estimated Mean   Standard Error
  1-160       551.20           8.17
  2-180       587.40           8.17
  3-200       625.40           8.17
  4-220       707.00           8.17

  Treatment   Mean Difference   DF   Standard Error   t for H0 Coeff=0   Prob > |t|
  1 vs 2      -36.20            1    11.55            -3.13              0.0064
  1 vs 3      -74.20            1    11.55            -6.42              < 0.0001
  1 vs 4      -155.80           1    11.55            -13.49             < 0.0001
  2 vs 3      -38.00            1    11.55            -3.29              0.0046
  2 vs 4      -119.60           1    11.55            -10.35             < 0.0001
  3 vs 4      -81.60            1    11.55            -7.06              < 0.0001

15. For the Case of Quantitative Factors, a Regression Model Is Often Useful
- Quantitative factors: factors whose levels can be associated with points on a numerical scale.
- Qualitative factors: factors whose levels cannot be arranged in order of magnitude.
The experimenter is often interested in developing an interpolation equation for the response variable in the experiment, i.e., an empirical model. Approximations:
  y = β0 + β1 x + β2 x² + ε   (a quadratic model)
  y = β0 + β1 x + β2 x² + β3 x³ + ε   (a cubic model)
The method of least squares can be used to estimate the parameters. Balance "goodness of fit" against "generality".
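A minimal sketch of such a least-squares fit, using the four power levels and treatment means from the etch rate example (fitting the means here for brevity; the text's approach fits all individual observations):

```python
import numpy as np

# Power (W) is a quantitative factor in the etch rate example; fit a
# quadratic empirical model y = b0 + b1*x + b2*x^2 by least squares
# to the four treatment means.
x = np.array([160.0, 180.0, 200.0, 220.0])
y = np.array([551.20, 587.40, 625.40, 707.00])

b2, b1, b0 = np.polyfit(x, y, 2)   # coefficients, highest power first
fitted = b0 + b1 * x + b2 * x**2

# Residual sum of squares of the quadratic fit. A cubic through these
# four points would interpolate exactly (zero residual) - an example of
# trading "generality" for "goodness of fit".
sse_quad = ((y - fitted)**2).sum()
print(round(sse_quad, 2))
```

A higher-order polynomial always reduces the residual sum of squares, but an exactly interpolating model rarely generalizes; this is the balance the slide refers to.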
