Lecture 24: Thurs. Dec. 4
Extra sum of squares F-tests (10.3); R-squared statistic (10.4.1); Residual plots (11.2); Influential observations (11.3, 11.4.3 - very brief); Course summary; More advanced statistics courses
Model Fits. Parallel regression lines model vs. separate regression lines model. How do we test whether the parallel regression lines model is appropriate (i.e., whether the coefficients on the interaction terms are all zero)?
Extra Sum of Squares F-tests. Suppose we want to test whether multiple coefficients are simultaneously equal to zero, e.g., H0: β2 = β3 = 0. t-tests, whether used individually or in combination, cannot test such a hypothesis involving more than one parameter. The F-test assesses the joint significance of several terms.
Extra Sum of Squares F-test. The F-statistic is
F = [(residual sum of squares of reduced model - residual sum of squares of full model) / (number of betas being tested)] / [residual sum of squares of full model / (n - (p+1))].
Under H0, the F-statistic has an F distribution with (number of betas being tested, n - (p+1)) degrees of freedom. The p-value can be found by using Table A.4, or by creating a Formula in JMP with the F distribution probability function and plugging in the value of the F-statistic and the appropriate degrees of freedom. Note that the JMP formula gives P(F random variable with those degrees of freedom < observed F-statistic), which equals 1 - p-value.
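As a check on the Table A.4 / JMP calculation, the same p-value can be computed with scipy. This is a sketch only: the residual sums of squares and sample sizes below are made-up numbers, not the ones from the manager example.

```python
from scipy import stats

# Hypothetical residual sums of squares (illustration only, not the lecture's data)
rss_reduced = 125.0   # reduced model (extra terms dropped)
rss_full = 100.0      # full model
n = 60                # number of observations
p = 5                 # number of explanatory terms in the full model
k = 2                 # number of betas being tested

# Extra sum of squares F-statistic
f_stat = ((rss_reduced - rss_full) / k) / (rss_full / (n - (p + 1)))

# p-value = P(F with (k, n-(p+1)) df > f_stat); sf is 1 - CDF, matching the
# "1 - JMP formula value" step described above
p_value = stats.f.sf(f_stat, k, n - (p + 1))
print(f_stat, p_value)
```

The survival function `stats.f.sf` is used directly rather than `1 - stats.f.cdf`, but the two agree; the subtraction is only needed in JMP because its Formula editor returns the lower-tail probability.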
Extra Sum of Squares F-test example. Testing the parallel regression lines model (H0, reduced model) vs. the separate regression lines model (full model) in the manager example. Full model: separate regression lines. Reduced model: parallel regression lines. F-statistic: 3.29. p-value: P(F random variable with (2, 53) degrees of freedom > 3.29).
Second Example of F-test. For the echolocation study, in the parallel regression model, test whether two of the coefficients are zero (full model vs. reduced model). F-statistic: 0.43. p-value: P(F random variable with (2, 16) degrees of freedom > 0.43) = 1 - 0.342 = 0.658.
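The arithmetic in these two examples can be verified numerically. Assuming scipy is available, the upper-tail F probability reproduces the slide's 0.658, and the manager example's F of 3.29 on (2, 53) degrees of freedom gives a p-value just under 0.05:

```python
from scipy import stats

# Echolocation example: P(F with (2, 16) df > 0.43)
p_echo = stats.f.sf(0.43, 2, 16)
print(round(p_echo, 3))   # 0.658, matching 1 - 0.342 on the slide

# Manager example: P(F with (2, 53) df > 3.29)
p_manager = stats.f.sf(3.29, 2, 53)
print(p_manager < 0.05)   # True: moderate evidence against parallel lines
```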
Manager Example Findings. The runs supervised by Manager a appear abnormally time-consuming. Manager b has high initial fixed setup costs, but the time per unit is the best of the three. Manager c has the lowest fixed costs and a per-unit production time in between those of Managers a and b. Adjustments to the marginal analysis via regression control only for possible differences in size among production runs. Other differences might be relevant, e.g., the difficulty of the production runs; it could be that Manager a supervised the most difficult runs.
Special Cases of the F-test. Multiple regression model: if we want to test whether a single coefficient equals zero, e.g., H0: β1 = 0, the F-test is equivalent to the t-test. Suppose instead we want to test H0: β1 = β2 = ... = βp = 0, i.e., the null hypothesis that the mean of Y does not depend on any of the explanatory variables. JMP automatically computes this test under Analysis of Variance, Prob>F. For the separate regression lines model, there is strong evidence that mean run time does depend on at least one of run size and manager.
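The single-coefficient equivalence can be demonstrated directly: the squared t-statistic for one coefficient equals the extra sum of squares F-statistic obtained by dropping that term. A numpy sketch with synthetic data (not the lecture's production data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: y depends on two explanatory variables (illustration only)
n = 40
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = 1.0 + 2.0 * x1 + 0.5 * x2 + rng.normal(size=n)

X = np.column_stack([np.ones(n), x1, x2])   # full model: intercept, x1, x2
p = X.shape[1] - 1                          # number of explanatory terms

# Fit the full model by least squares
beta, resid, *_ = np.linalg.lstsq(X, y, rcond=None)
rss_full = float(resid[0])

# t-statistic for H0: coefficient on x2 equals zero
s2 = rss_full / (n - (p + 1))
se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[2, 2])
t = beta[2] / se

# Extra sum of squares F-test dropping x2 (reduced model: intercept, x1)
rss_red = float(np.linalg.lstsq(X[:, :2], y, rcond=None)[1][0])
F = ((rss_red - rss_full) / 1) / (rss_full / (n - (p + 1)))

print(np.isclose(t**2, F))   # the two tests agree: t^2 = F
```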
The R-Squared Statistic. Total sum of squares: Σ(yi - ȳ)². Residual sum of squares: Σ(yi - ŷi)². The R-squared statistic, R² = 1 - (residual sum of squares)/(total sum of squares), is the proportion of the variation in y explained by the multiple regression model. For the separate regression lines model in the production time example, it has a similar interpretation as in simple linear regression.
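The definition translates directly into code. A minimal numpy sketch with made-up data; for least squares with an intercept, R² also equals the squared correlation between y and the fitted values:

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up data (not the production time data)
n = 50
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = 3.0 + 1.5 * x1 - 2.0 * x2 + rng.normal(size=n)

# Fit the multiple regression and compute fitted values
X = np.column_stack([np.ones(n), x1, x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
y_hat = X @ beta

total_ss = np.sum((y - y.mean()) ** 2)   # total sum of squares
resid_ss = np.sum((y - y_hat) ** 2)      # residual sum of squares
r_squared = 1 - resid_ss / total_ss      # proportion of variation explained
print(r_squared)
```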
Assumptions of Multiple Linear Regression Model. Assumptions of multiple linear regression: –For each subpopulation defined by the explanatory variables, (A-1A) the mean of Y is a linear function of the explanatory variables; (A-1B) the standard deviation of Y is constant; (A-1C) the distribution of Y is normal [the distribution of the residuals should not depend on the explanatory variables]. –(A-2) The observations are independent of one another.
Checking/Refining Model Tools for checking (A-1A) and (A-1B) –Residual plots versus predicted (fitted) values –Residual plots versus explanatory variables –If model is correct, there should be no pattern in the residual plots Tool for checking (A-1C) – Normal quantile plot Tool for checking (A-2) –Residual plot versus time or spatial order of observations
Residual Plots for Echolocation Study. The residual vs. predicted plot suggests that the variance is not constant and that there is possible nonlinearity.
Coded Residual Plots For multiple regression involving nominal variables, a plot of the residuals versus a continuous explanatory variable with codes for the nominal variable is very useful.
Residual Plots for Transformed Model. The transformed model appears to satisfy Assumptions (A-1B) and (A-1C).
Normal Quantile Plot. To check Assumption (A-1C) [populations are normal], we can use a normal quantile plot of the residuals, as with simple linear regression.
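Outside JMP, and assuming scipy is available, a normal quantile (Q-Q) plot of the residuals can be produced with scipy.stats.probplot. The fit correlation r that it returns is a rough numeric summary: near 1 when the points track a straight line, i.e., when normality looks plausible. A sketch with simulated residuals, not the study's data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
residuals = rng.normal(loc=0.0, scale=1.3, size=200)   # simulated residuals

# probplot returns the quantile pairs and a least-squares line fit to the plot
(osm, osr), (slope, intercept, r) = stats.probplot(residuals, dist="norm")

# (osm, osr) are the points one would plot; r near 1 suggests normality
print(round(r, 3))
```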
Dealing with Influential Observations By influential observations, we mean one or several observations whose removal causes a different conclusion or course of action. Display 11.8 provides a strategy for dealing with suspected influential cases.
Cook’s Distance. Cook’s distance is a statistic that can be used to flag influential observations. After fitting the model, click on the red triangle next to Response, then Save Columns, Cook’s D Influence. A Cook’s distance close to or larger than 1 indicates a large influence.
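Outside JMP, Cook’s distance can be computed from the leverages (the diagonal of the hat matrix) and the residuals. A minimal numpy sketch with made-up data containing one deliberately influential point:

```python
import numpy as np

rng = np.random.default_rng(3)

# Made-up data: last observation has high leverage (x = 20) ...
x = np.concatenate([np.arange(10.0), [20.0]])
y = 2.0 * x + rng.normal(scale=0.5, size=11)
y[-1] = 0.0                                   # ... and a large residual

X = np.column_stack([np.ones_like(x), x])
n, p = X.shape                                # p counts intercept + slope here

H = X @ np.linalg.inv(X.T @ X) @ X.T          # hat matrix
h = np.diag(H)                                # leverages
e = y - H @ y                                 # residuals
s2 = e @ e / (n - p)                          # estimated error variance

# Cook's distance: D_i = e_i^2 / (p * s2) * h_i / (1 - h_i)^2
D = e**2 / (p * s2) * h / (1 - h) ** 2
print(np.argmax(D), D.max() > 1)              # the rigged point stands out
```

Removing the flagged point and refitting, then comparing conclusions, follows the strategy in Display 11.8.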
Course Summary Cont. Techniques: –Methods for comparing two groups –Methods for comparing more than two groups –Simple and multiple linear regression for predicting a response variable based on explanatory variables and (with a random experiment) finding the causal effect of explanatory variables on a response variable.
Course Summary Cont. Key messages: –Scope of inference: randomized experiments vs. observational studies, random samples vs. nonrandom samples. Always use randomized experiments and random samples if possible. –p-values assess only whether there is strong evidence against the null hypothesis. They do not provide information about practical significance. Confidence intervals are needed to assess practical significance. –When designing a study, choose a sample size that is large enough so that it will be unlikely that the confidence interval will contain both the null hypothesis and a practically significant alternative.
Course Summary Cont. Key messages: –Beware of multiple comparisons and data snooping. Use the Tukey-Kramer method or the Bonferroni adjustment to account for multiple comparisons. –Simple/multiple linear regression is a powerful method for making predictions and for understanding causation in a randomized experiment. But beware of extrapolation, and of making causal statements when the explanatory variables were not randomly assigned.
More Statistics? Stat 210: Sample Survey Design. Will be offered next year. Stat 202: Intermediate Statistics. Offered next fall. Stat 431: Statistical Inference. Will be offered this spring (as well as throughout next year). Stat 430: Probability. Offered this spring. Stat 500: Applied Regression and Analysis of Variance. Offered next fall. Stat 501: Introduction to Nonparametric Methods and Log-linear models. Offered this spring.