# Model Adequacy Checking in the ANOVA (Text reference: Section 3-4, pg. 75)


Model Adequacy Checking in the ANOVA
Checking assumptions is important: have we fit the right model?

- Normality
- Independence
- Constant variance

Later we will talk about what to do if some of these assumptions are violated.

Model Adequacy Checking in the ANOVA
Violations of the basic assumptions and model adequacy can be investigated by examining the residuals (see text, Sec. 3-4, pg. 75).

- Statistical software usually generates the residuals
- Residual plots are very useful
- Normal probability plot of residuals
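As a minimal sketch with made-up data (the treatment labels and numbers below are hypothetical, not from the text), the residuals in a one-way ANOVA are each observation minus its treatment mean; these are the quantities fed into the normal probability plot and the other residual plots:

```python
import numpy as np

# Hypothetical one-way layout: 3 treatments, 4 replicates each (made-up data)
data = {
    "A": [18.2, 19.1, 17.8, 18.9],
    "B": [21.0, 20.4, 21.7, 20.9],
    "C": [16.5, 17.2, 16.9, 17.0],
}

# Residuals: e_ij = y_ij - ybar_i. (observation minus its treatment mean)
residuals = []
for level, ys in data.items():
    ybar = np.mean(ys)
    residuals.extend(y - ybar for y in ys)
residuals = np.array(residuals)

# Residuals sum to zero within each treatment, hence overall as well
print(round(residuals.sum(), 10))
```

Sorting these residuals against normal quantiles (e.g. with `scipy.stats.probplot`) gives the normal probability plot described above.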

Other Important Residual Plots
- A run of positive (or negative) residuals in the residuals-versus-time plot indicates correlation, a violation of the independence assumption
- Special attention is needed when the spread is uneven at the two ends of the residuals-versus-time plot
- Peak discharge vs. fitted value (Ex. 3-5, page 81) shows a violation of the independence or constant-variance assumptions

Figure 3-7: Plot of residuals versus fitted values (Ex. 3-5)

Outliers

- An outlier is a residual much larger or smaller than any of the others
- First check whether it is a human mistake in recording or calculation
- If it cannot be rejected on reasonable non-statistical grounds, it must be specially studied
- At least two analyses (with and without the outlier) can be made

Procedure to detect outliers:

- Visual examination
- Using standardized residuals d_ij = e_ij / sqrt(MS_E): if e_ij ~ N(0, σ²), then d_ij ~ N(0, 1) approximately. Therefore about 68% of the d_ij should fall within ±1, about 95% within ±2, and nearly all (99.7%) within ±3. A residual bigger than 3 or 4 standard deviations from zero is a potential outlier.
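The standardized-residual check can be sketched as follows, again with made-up data (a hypothetical 3-treatment, 4-replicate layout, not an example from the text):

```python
import numpy as np

# Hypothetical one-way layout: a = 3 treatments, n = 4 replicates (made-up data)
y = np.array([
    [18.2, 19.1, 17.8, 18.9],
    [21.0, 20.4, 21.7, 20.9],
    [16.5, 17.2, 16.9, 17.0],
])
a, n = y.shape
N = a * n

# Residuals e_ij = y_ij - ybar_i. and error mean square MS_E = SS_E / (N - a)
resid = y - y.mean(axis=1, keepdims=True)
ms_e = np.sum(resid**2) / (N - a)

# Standardized residuals d_ij = e_ij / sqrt(MS_E); |d| > 3 flags a potential outlier
d = resid / np.sqrt(ms_e)
print(np.any(np.abs(d) > 3))
```

With this clean data no point is flagged; a gross recording error would stand out as |d_ij| well beyond 3.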

Post-ANOVA Comparison of Means
- The analysis of variance tests the hypothesis of equal treatment means
- Assume that residual analysis is satisfactory
- If that hypothesis is rejected, we don’t know which specific means are different
- Determining which specific means differ following an ANOVA is called the multiple comparisons problem
- There are lots of ways to do this…see text, Section 3-5, pg. 87
- We will use pairwise t-tests on means…sometimes called Fisher’s Least Significant Difference (or Fisher’s LSD) method

Contrasts

- Used for multiple comparisons
- A contrast is a linear combination of the treatment means, Γ = Σ c_i μ_i, with Σ c_i = 0
- The hypotheses are H0: Γ = 0 versus H1: Γ ≠ 0
- Hypothesis testing can be done using a t-test: t0 = (Σ c_i ȳ_i.) / sqrt((MS_E/n) Σ c_i²)
- The null hypothesis is rejected if |t0| > t(α/2, N−a)
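The contrast t-test can be sketched numerically; the data and the contrast coefficients below are hypothetical (a balanced 3-treatment layout, contrasting treatment 2 against the average of treatments 1 and 3):

```python
import numpy as np
from scipy import stats

# Hypothetical equal-n layout: a = 3 treatments, n = 4 replicates (made-up data)
y = np.array([
    [18.2, 19.1, 17.8, 18.9],   # treatment 1
    [21.0, 20.4, 21.7, 20.9],   # treatment 2
    [16.5, 17.2, 16.9, 17.0],   # treatment 3
])
a, n = y.shape
N = a * n
c = np.array([1.0, -2.0, 1.0])   # contrast coefficients, must sum to zero

ybar = y.mean(axis=1)
ms_e = np.sum((y - ybar[:, None])**2) / (N - a)

# t0 = (sum c_i * ybar_i.) / sqrt((MS_E / n) * sum c_i^2)
t0 = (c @ ybar) / np.sqrt(ms_e / n * np.sum(c**2))

# Reject H0: contrast = 0 if |t0| exceeds t(alpha/2, N - a)
t_crit = stats.t.ppf(1 - 0.05 / 2, N - a)
print(abs(t0) > t_crit)
```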

Contrasts (cont.)

- Hypothesis testing can also be done using an F-test: F0 = t0² = (Σ c_i ȳ_i.)² / ((MS_E/n) Σ c_i²)
- The null hypothesis is rejected if F0 > F(α, 1, N−a)
- The 100(1−α) percent confidence interval on the contrast is Σ c_i ȳ_i. ± t(α/2, N−a) · sqrt((MS_E/n) Σ c_i²)
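A sketch of the F-test and the confidence interval for the same hypothetical contrast (data and coefficients are made up, as before):

```python
import numpy as np
from scipy import stats

# Same hypothetical layout: a = 3 treatments, n = 4 replicates (made-up data)
y = np.array([
    [18.2, 19.1, 17.8, 18.9],
    [21.0, 20.4, 21.7, 20.9],
    [16.5, 17.2, 16.9, 17.0],
])
a, n = y.shape
N = a * n
c = np.array([1.0, -2.0, 1.0])      # contrast coefficients (sum to zero)

ybar = y.mean(axis=1)
ms_e = np.sum((y - ybar[:, None])**2) / (N - a)
se2 = ms_e / n * np.sum(c**2)       # variance of the estimated contrast

# F-test: F0 = (sum c_i ybar_i.)^2 / ((MS_E/n) sum c_i^2); reject if F0 > F(alpha, 1, N-a)
f0 = (c @ ybar)**2 / se2
f_crit = stats.f.ppf(0.95, 1, N - a)

# 100(1-alpha)% CI on the contrast: sum c_i ybar_i. +/- t(alpha/2, N-a) * sqrt(se2)
half = stats.t.ppf(0.975, N - a) * np.sqrt(se2)
ci = (c @ ybar - half, c @ ybar + half)
print(f0 > f_crit, ci)
```

Note that F0 is exactly t0² from the t-test version, so the two tests always agree.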

Comparing Pairs of Treatment Means

- Comparing all pairs of the a treatment means is equivalent to testing H0: μ_i = μ_j for all i ≠ j
- Tukey’s test: two means are significantly different if the absolute value of their sample difference exceeds T_α = q_α(a, f) · sqrt(MS_E/n) for equal sample sizes; the overall significance level is exactly α
- For unequal sample sizes, T_α = (q_α(a, f)/√2) · sqrt(MS_E (1/n_i + 1/n_j)); the overall significance level is at most α
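Tukey's test for the equal-n case can be sketched as below, using the same made-up 3-treatment layout; SciPy's `studentized_range` distribution (available in SciPy ≥ 1.7) supplies the q_α(a, f) percentage point that the text reads from Table VII:

```python
import numpy as np
from itertools import combinations
from scipy.stats import studentized_range

# Hypothetical equal-n layout: a = 3 treatments, n = 4 replicates (made-up data)
y = np.array([
    [18.2, 19.1, 17.8, 18.9],
    [21.0, 20.4, 21.7, 20.9],
    [16.5, 17.2, 16.9, 17.0],
])
a, n = y.shape
f = a * n - a                        # error degrees of freedom, f = N - a
ybar = y.mean(axis=1)
ms_e = np.sum((y - ybar[:, None])**2) / f

# Tukey critical difference for equal n: T_alpha = q_alpha(a, f) * sqrt(MS_E / n)
q_crit = studentized_range.ppf(0.95, a, f)
T = q_crit * np.sqrt(ms_e / n)

for i, j in combinations(range(a), 2):
    diff = abs(ybar[i] - ybar[j])
    print(f"treatments {i+1} vs {j+1}: |diff| = {diff:.2f}, significant = {diff > T}")
```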

100(1−α) percent confidence intervals (Tukey’s test)

- For equal sample sizes: ȳ_i. − ȳ_j. − q_α(a, f) · sqrt(MS_E/n) ≤ μ_i − μ_j ≤ ȳ_i. − ȳ_j. + q_α(a, f) · sqrt(MS_E/n)
- For unequal sample sizes: ȳ_i. − ȳ_j. − (q_α(a, f)/√2) · sqrt(MS_E (1/n_i + 1/n_j)) ≤ μ_i − μ_j ≤ ȳ_i. − ȳ_j. + (q_α(a, f)/√2) · sqrt(MS_E (1/n_i + 1/n_j))

Example: etch rate experiment

a = 4 treatments, f = 16 degrees of freedom for error; q0.05(4, 16) = 4.05 (Table VII)

The Fisher Least Significant Difference (LSD) Method

- Two means are significantly different if |ȳ_i. − ȳ_j.| > LSD, where LSD = t(α/2, N−a) · sqrt(2 MS_E/n) for equal sample sizes, and LSD = t(α/2, N−a) · sqrt(MS_E (1/n_i + 1/n_j)) for unequal sample sizes
- For Example 3-1
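The equal-n LSD calculation can be sketched on the same made-up 3-treatment layout used above (not the data of Example 3-1):

```python
import numpy as np
from itertools import combinations
from scipy import stats

# Hypothetical equal-n layout: a = 3 treatments, n = 4 replicates (made-up data)
y = np.array([
    [18.2, 19.1, 17.8, 18.9],
    [21.0, 20.4, 21.7, 20.9],
    [16.5, 17.2, 16.9, 17.0],
])
a, n = y.shape
N = a * n
ybar = y.mean(axis=1)
ms_e = np.sum((y - ybar[:, None])**2) / (N - a)

# LSD for equal sample sizes: t(alpha/2, N - a) * sqrt(2 * MS_E / n)
lsd = stats.t.ppf(1 - 0.05 / 2, N - a) * np.sqrt(2 * ms_e / n)

for i, j in combinations(range(a), 2):
    diff = abs(ybar[i] - ybar[j])
    print(f"treatments {i+1} vs {j+1}: |diff| = {diff:.2f}, significant = {diff > lsd}")
```

Because each pair is tested at level α with no adjustment, the LSD declares differences significant more readily than Tukey's test on the same data.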

Duncan’s Multiple Range Test (page 100)
The Newman-Keuls Test (page 102)

- Are they the same? Which one to use? Opinions differ
- The LSD method and Duncan’s multiple range test are the most powerful
- The Tukey method controls the overall error rate

Design-Expert Output

(Design-Expert table of treatment means and pairwise comparisons — adjusted means, standard errors, mean differences, t-values, and Prob > |t| — not reproduced legibly in this transcript.)

For the Case of Quantitative Factors, a Regression Model is often Useful
- Quantitative factors: ones whose levels can be associated with points on a numerical scale
- Qualitative factors: ones whose levels cannot be arranged in order of magnitude
- The experimenter is often interested in developing an interpolation equation for the response variable — an empirical model
- Approximations: y = β0 + β1 x + β2 x² + ε (a quadratic model); y = β0 + β1 x + β2 x² + β3 x³ + ε (a cubic model)
- The method of least squares can be used to estimate the parameters
- Balance between “goodness of fit” and “generality”
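A quadratic empirical model of this kind can be fitted by least squares in a few lines; the factor levels and response means below are made up for illustration (they are not the data of any example in the text):

```python
import numpy as np

# Hypothetical quantitative factor at four levels with a curved mean response
x = np.array([160.0, 180.0, 200.0, 220.0])      # e.g. power settings (made up)
ybar = np.array([551.2, 587.4, 625.4, 707.0])   # made-up treatment means

# Least-squares quadratic fit: y = b0 + b1*x + b2*x^2
b2, b1, b0 = np.polyfit(x, ybar, deg=2)
yhat = b0 + b1 * x + b2 * x**2

# The quadratic should track the means far better than an intercept-only model
ss_res = np.sum((ybar - yhat)**2)
ss_tot = np.sum((ybar - ybar.mean())**2)
print(ss_res < ss_tot)
```

Raising the degree (e.g. to cubic) would fit these four means exactly but sacrifices generality — the goodness-of-fit versus generality balance noted above.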