18.1 Introduction
In this chapter we extend the simple linear regression model to allow for any number of independent variables. We expect to build a model that fits the data better than the simple linear regression model does.
We will use the computer printout to:
- Assess the model:
  - How well does it fit the data?
  - Is it useful?
  - Are any required conditions violated?
- Employ the model:
  - Interpreting the coefficients
  - Making predictions using the prediction equation
  - Estimating the expected value of the dependent variable
18.2 Model and Required Conditions
We allow for k independent variables to be potentially related to the dependent variable:

    y = b0 + b1x1 + b2x2 + ... + bkxk + e

where y is the dependent variable, x1, ..., xk are the independent variables, b0, b1, ..., bk are the coefficients, and e is the random error variable.
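To make the model concrete, here is a minimal Python sketch (not part of the original slides) that fits a two-variable version of this model with statsmodels; all data and numbers are synthetic, invented for illustration.

    # A minimal sketch: fit y = b0 + b1*x1 + b2*x2 + e on synthetic data.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 100
    X = rng.normal(size=(n, 2))                  # two independent variables
    y = 5 + 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(scale=1.0, size=n)

    X_design = sm.add_constant(X)                # adds the intercept (b0) column
    model = sm.OLS(y, X_design).fit()
    print(model.params)                          # estimates of b0, b1, b2

Later sketches in this chapter reuse the `model` and `X_design` objects defined here.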
(Figures: regression surfaces.) The simple linear regression model allows for one independent variable, x:

    y = b0 + b1x + e

Note how, when a second independent variable is added, the straight line becomes a plane. The multiple linear regression model allows for more than one independent variable:

    y = b0 + b1x1 + b2x2 + e

Similarly, a parabola y = b0 + b1x^2 becomes a parabolic surface:

    y = b0 + b1x1^2 + b2x2
Required conditions for the error variable e:
- The error e is normally distributed with mean equal to zero and a constant standard deviation se (independent of the value of y); se is unknown.
- The errors are independent.
These conditions are required in order to:
- estimate the model coefficients, and
- assess the resulting model.
18.3 Estimating the Coefficients and Assessing the Model
The procedure:
1. Obtain the model coefficients and statistics using statistical computer software.
2. Diagnose violations of the required conditions; try to remedy problems when they are identified.
3. Assess the model's fit and usefulness using the model statistics.
4. If the model passes the assessment tests, use it to interpret the coefficients and generate predictions.
Example 18.1: Where to locate a new motor inn?
- La Quinta Motor Inns is planning an expansion.
- Management wishes to predict which sites are likely to be profitable.
- Several areas in which predictors of profitability can be identified:
  - Competition
  - Market awareness
  - Demand generators
  - Demographics
  - Physical quality
Profitability is measured by the operating margin (MARGIN). The predictors, grouped by area:
- Competition: ROOMS, the number of hotel/motel rooms within 3 miles of the site.
- Market awareness: NEAREST, the distance to the nearest La Quinta inn.
- Customers: OFFICE, the amount of nearby office space, and COLLEGE, the college enrollment.
- Community: INCOME, the median household income.
- Physical: DISTTWN, the distance to downtown.
Data were collected from 100 randomly selected La Quinta inns, and the following model was run:

    MARGIN = b0 + b1ROOMS + b2NEAREST + b3OFFICE + b4COLLEGE + b5INCOME + b6DISTTWN + e
Excel output: the sample regression equation (sometimes called the prediction equation) is

    MARGIN = b0 - 0.0076 ROOMS - 1.65 NEAREST + 0.02 OFFICE + 0.212 COLLEGE - 0.41 INCOME + 0.23 DISTTWN

with b0 the intercept from the printout; the slopes are interpreted below. Let us assess this equation.
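As an aside, a hedged sketch of producing the same printout in Python instead of Excel; the file name laquinta.csv and its column names are assumptions, not part of the original example.

    # Hypothetical: fit the La Quinta model from a CSV file.
    import pandas as pd
    import statsmodels.api as sm

    df = pd.read_csv("laquinta.csv")   # assumed data file with the columns below
    X = sm.add_constant(df[["ROOMS", "NEAREST", "OFFICE", "COLLEGE", "INCOME", "DISTTWN"]])
    fit = sm.OLS(df["MARGIN"], X).fit()
    print(fit.summary())               # coefficients, R-square, ANOVA F, t tests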
Standard error of estimate
We need to estimate the standard error of estimate:

    se = sqrt( SSE / (n - k - 1) )

Compare se to the mean value of y: from the printout read the Standard Error, and calculate the mean value of y from the data. It seems se is not particularly small relative to the mean of y. Can we conclude the model does not fit the data well?
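In statsmodels terms (a sketch reusing the `model` object from the first sketch), the same quantity is the square root of the residual mean square:

    # Standard error of estimate: s_e = sqrt(SSE / (n - k - 1)).
    import numpy as np
    s_e = np.sqrt(model.mse_resid)          # mse_resid is SSE / (n - k - 1)
    print(s_e, model.model.endog.mean())    # compare s_e with the mean of y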
Coefficient of determination
The definition is

    R2 = 1 - SSE / SS(Total)

From the printout, R2 = 52.51%: 52.51% of the variation in the measure of profitability is explained by the linear regression model formulated above. When adjusted for degrees of freedom,

    Adjusted R2 = 1 - [SSE/(n-k-1)] / [SS(Total)/(n-1)] = 49.44%
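As a quick arithmetic check (with n = 100 observations and k = 6 variables in this example), the adjusted value follows directly from R2:

    Adjusted R2 = 1 - (1 - R2)(n - 1)/(n - k - 1)
                = 1 - (1 - 0.5251)(99/93)
                = 0.4945

which matches the printout's 49.44% up to rounding.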
Testing the validity of the model
We pose the question: is there at least one independent variable linearly related to the dependent variable? To answer it we test the hypotheses

    H0: b1 = b2 = ... = bk = 0
    H1: At least one bi is not equal to zero.

If at least one bi is not equal to zero, the model is valid.
To test these hypotheses we perform an analysis of variance procedure: the F-test. Since [Variation in y] = SSR + SSE, construct the F statistic

    F = MSR / MSE, where MSR = SSR/k and MSE = SSE/(n-k-1)

Rejection region: F > Fa,k,n-k-1. A large F results from a large SSR; then much of the variation in y is explained by the regression model, the null hypothesis is rejected, and the model is judged valid. (The required conditions must be satisfied.)
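A sketch of the same F-test in Python, again reusing the `model` object fitted earlier; the 0.05 significance level is an assumption for illustration.

    # F-test of overall model validity: H0: b1 = ... = bk = 0.
    import scipy.stats as st
    k = model.df_model                           # number of independent variables
    F_crit = st.f.ppf(0.95, k, model.df_resid)   # F_{0.05, k, n-k-1}
    print(model.fvalue, model.f_pvalue)          # F statistic and Significance F
    print("reject H0" if model.fvalue > F_crit else "do not reject H0")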
Example 18.1 (continued): Excel provides the ANOVA results. (Table: the printout reports SSR, SSE, MSR, MSE, and F = MSR/MSE.)
Example 18.1 (continued)
Here Fa,k,n-k-1 = F0.05,6,93 = 2.17, and the F statistic from the printout is far greater than 2.17. Also, the p-value (Significance F) is about 10^-13; clearly a = 0.05 > 10^-13, and the null hypothesis is rejected.
Conclusion: there is sufficient evidence to reject the null hypothesis in favor of the alternative hypothesis. At least one of the bi is not equal to zero; thus, at least one independent variable is linearly related to y. This linear regression model is valid.
Let us interpret the coefficients.
- b0 is the intercept: the value of y when all the variables take the value zero. Since the data ranges of the independent variables do not cover the value zero, do not interpret the intercept.
- ROOMS: in this model, for each additional 1,000 rooms within 3 miles of the La Quinta inn, the operating margin decreases on average by 7.6% (assuming the other variables are held constant).
- NEAREST: for each additional mile between the inn and its nearest competitor, the operating margin decreases on average by 1.65%.
- OFFICE: for each additional 1,000 sq-ft of office space, the operating margin increases on average by 0.02%.
- COLLEGE: for each additional thousand students, MARGIN increases by 0.21%.
- INCOME: for each additional $1,000 of median household income, MARGIN decreases by 0.41%.
- DISTTWN: for each additional mile to the downtown center, MARGIN increases by 0.23% on average.
Testing the coefficients
The hypotheses for each bi are

    H0: bi = 0
    H1: bi != 0

The test statistic (reported on the Excel printout) is

    t = bi / s_bi, with d.f. = n - k - 1

where s_bi is the standard error of the coefficient estimate.
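The same coefficient tests in a Python sketch, once more reusing `model`:

    # t statistic for each coefficient: t = b_i / s_{b_i}, d.f. = n - k - 1.
    print(model.params / model.bse)   # identical to model.tvalues
    print(model.pvalues)              # reject H0: b_i = 0 when the p-value < alpha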
Using the linear regression equation
The model can be used for:
- producing a prediction interval for a particular value of y, for a given set of values of xi;
- producing an interval estimate for the expected value of y, for a given set of values of xi.
The model can also be used to learn about the relationships between the independent variables xi and the dependent variable y, by interpreting the coefficients bi.
Example 18.1 (continued): produce predictions.
Predict the MARGIN of an inn at a site with the following characteristics:
- 3,815 rooms within 3 miles,
- closest competitor 3.4 miles away,
- 476,000 sq-ft of office space,
- 24,500 college students,
- $39,000 median household income,
- 3.6 miles to the downtown center.

    MARGIN = b0 - 0.0076(3815) - 1.65(3.4) + 0.02(476) + 0.212(24.5) - 0.41(39) + 0.23(3.6) = 37.1%
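For interval predictions rather than a point estimate, a sketch with statsmodels (reusing the synthetic `model`; the new-observation values are invented):

    # Prediction interval and interval estimate of E(y) for a new observation.
    import numpy as np
    x_new = np.array([[1.0, 0.5, -0.2]])               # [const, x1, x2], invented values
    pred = model.get_prediction(x_new)
    frame = pred.summary_frame(alpha=0.05)
    print(frame[["mean_ci_lower", "mean_ci_upper"]])   # interval for E(y)
    print(frame[["obs_ci_lower", "obs_ci_upper"]])     # prediction interval for y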
18.4 Regression Diagnostics - II
The required conditions for the model assessment to apply must be checked.
- Is the error variable normally distributed? Draw a histogram of the residuals.
- Is the error variance constant? Plot the residuals versus the predicted values y-hat.
- Are the errors independent? Plot the residuals versus the time periods.
- Can we identify outliers?
- Is multicollinearity a problem?
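The first two checks as a plotting sketch (reusing the synthetic `model`):

    # Residual diagnostics: histogram (normality) and residuals vs fitted (variance).
    import matplotlib.pyplot as plt
    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
    ax1.hist(model.resid, bins=15)
    ax1.set_title("Histogram of residuals")
    ax2.scatter(model.fittedvalues, model.resid)
    ax2.set_title("Residuals vs predicted y")
    plt.show()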
Example 18.2: House price and multicollinearity
- A real estate agent believes that a house's selling price can be predicted using the house size, the number of bedrooms, and the lot size.
- A random sample of 100 houses was drawn and the data recorded.
- Analyze the relationships among the four variables.
Solution
The proposed model is

    PRICE = b0 + b1BEDROOMS + b2H-SIZE + b3LOTSIZE + e

From the Excel output: the model is valid, but no variable is significantly related to the selling price!
However, when the price is regressed on each independent variable alone, each variable is found to be strongly related to the selling price. Multicollinearity is the source of this problem. Multicollinearity causes two kinds of difficulties:
- The t statistics appear to be too small.
- The b coefficients cannot be interpreted as "slopes".
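A common numerical check for multicollinearity (not shown on the original slides) is the variance inflation factor; a sketch using the design matrix X_design from the first sketch:

    # VIF for each independent variable; large values (often > 10) signal trouble.
    from statsmodels.stats.outliers_influence import variance_inflation_factor
    for j in range(1, X_design.shape[1]):        # column 0 is the constant
        print(j, variance_inflation_factor(X_design, j))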
Remedying violations of the required conditions
- Nonnormality or heteroscedasticity can be remedied using transformations of the y variable.
- The transformations can improve the linear relationship between the dependent variable and the independent variables.
- Many computer software packages allow us to make the transformations easily.
A brief list of transformations:
- y' = log y (for y > 0): use when se increases with y, or when the error distribution is positively skewed.
- y' = y^2: use when s2e is proportional to E(y), or when the error distribution is negatively skewed.
- y' = y^(1/2) (for y > 0): use when s2e is proportional to E(y).
- y' = 1/y: use when s2e increases significantly when y increases beyond some value.
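A sketch of applying the first transformation and refitting (synthetic data from the first sketch; the shift to keep y positive is purely illustrative):

    # Log-transform y, then refit the same design matrix.
    import numpy as np
    import statsmodels.api as sm
    y_pos = y - y.min() + 1.0                 # illustrative shift so the log is defined
    log_model = sm.OLS(np.log(y_pos), X_design).fit()
    print(log_model.rsquared)                 # compare the fit before and after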
Example 18.3: Analysis, diagnostics, transformations
- A statistics professor wanted to know whether the time limit affects the marks on a quiz.
- A random sample of 100 students was split into 5 groups.
- Each student wrote a quiz, but each group was given a different time limit. (Data: the marks for each group.)
- Analyze these results, and include diagnostics.
The model tested:

    MARK = b0 + b1TIME + e

From the output: this model is useful and provides a good fit, and the errors seem to be normally distributed.
However, the standard error of estimate seems to increase with the predicted value of y. Two transformations are used to remedy this problem:
1. y' = loge y
2. y' = 1/y
Let us see what happens when a transformation is applied. In the original data, "Mark" is a function of "Time"; in the modified data, LogMark is a function of "Time". For example, the observation (Time = 40, Mark = 23) becomes (40, loge23 = 3.135), and (Time = 40, Mark = 18) becomes (40, loge18 = 2.89).
The new regression analysis and diagnostics. The model tested:

    LOGMARK = b'0 + b'1TIME + e'

The fitted equation is Predicted LogMark = b'0 + b'1Time, with the coefficient estimates read from the printout. This model is useful and provides a good fit.
The errors seem to be normally distributed. The standard error still changes with the predicted y, but the change is smaller than before.
How do we use the modified model to predict? Let TIME = 55 minutes. From the fitted equation,

    Predicted LogMark = b'0 + b'1(55) = 3.323

To find the predicted mark, take the antilog:

    antiloge 3.323 = e^3.323 = 27.74
18.5 Regression Diagnostics - III
The Durbin-Watson test detects first-order autocorrelation between consecutive residuals in a time series. If autocorrelation exists, the error variables are not independent. The statistic is

    d = [sum over i = 2..n of (ei - e(i-1))^2] / [sum over i = 1..n of ei^2]

where ei is the residual at time i.
(Figures: residuals plotted against time.) Positive first-order autocorrelation occurs when consecutive residuals tend to be similar; then the value of d is small (less than 2). Negative first-order autocorrelation occurs when consecutive residuals tend to differ markedly; then the value of d is large (greater than 2).
One-tail test for positive first-order autocorrelation:
- If d < dL, there is enough evidence to conclude that positive first-order correlation exists.
- If d > dU, there is not enough evidence to conclude that positive first-order correlation exists.
- If d is between dL and dU, the test is inconclusive.

One-tail test for negative first-order autocorrelation:
- If d > 4 - dL, negative first-order correlation exists.
- If d < 4 - dU, negative first-order correlation does not exist.
- If d falls between 4 - dU and 4 - dL, the test is inconclusive.
Two-tail test for first-order autocorrelation:
- If d < dL or d > 4 - dL, first-order autocorrelation exists.
- If d falls between dL and dU, or between 4 - dU and 4 - dL, the test is inconclusive.
- If d falls between dU and 4 - dU, there is no evidence of first-order autocorrelation.

(Figure: a number line from 0 to 4 marking dL, dU, 2, 4-dU, and 4-dL, with the "exists", "inconclusive", and "does not exist" regions.)
Example 18.4: How does the weather affect the sales of lift tickets at a ski resort?
Data on the past 20 years' ticket sales, along with the total snowfall and the average temperature during Christmas week in each year, were collected. The model hypothesized was

    TICKETS = b0 + b1SNOWFALL + b2TEMPERATURE + e

Regression analysis yielded the following results:
The model seems to be very poor:
- the fit is very low (R-square = 0.12),
- it is not valid (Significance F = 0.33),
- no variable is linearly related to sales.
Diagnosis of the required conditions yielded the following findings:
(Figures: residual diagnostics.)
- The error distribution: the errors may be normally distributed.
- Residuals vs. predicted y: the error variance is constant.
- Residuals over time: the errors are not independent.
Using the computer (Excel):
- Tools > Data Analysis > Regression (check the residuals option, then OK).
- Tools > Data Analysis Plus > Durbin-Watson Statistic > highlight the range of the residuals from the regression run > OK.

Test for positive first-order autocorrelation: n = 20, k = 2. From the Durbin-Watson table, dL = 1.10 and dU = 1.54. The statistic is d = 0.59. Conclusion: because d < dL, there is sufficient evidence to infer that positive first-order autocorrelation exists.
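For readers without the Data Analysis Plus add-in, a Python sketch of the same statistic (reusing a fitted statsmodels result such as `model` from the earlier sketches):

    # Durbin-Watson statistic from a fitted model's residuals.
    from statsmodels.stats.stattools import durbin_watson
    d = durbin_watson(model.resid)
    print(d)    # compare with dL and dU from the Durbin-Watson table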
The modified regression model: the autocorrelation has occurred over time, so a time-dependent variable added to the model may correct the problem:

    TICKETS = b0 + b1SNOWFALL + b2TEMPERATURE + b3YEARS + e

All the required conditions are met for this model:
- The fit of this model is high: R2 = 0.74.
- The model is useful: Significance F = 5.93E-5.
- SNOWFALL and YEARS are linearly related to ticket sales; TEMPERATURE is not.
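A minimal end-to-end sketch of that remedy; the data here are synthetic stand-ins for the resort's records, so the numbers will not reproduce the slide's results:

    # Add a time-trend column to absorb the autocorrelation, then refit.
    import numpy as np
    import statsmodels.api as sm
    from statsmodels.stats.stattools import durbin_watson

    rng = np.random.default_rng(7)
    n_years = 20
    snowfall = rng.normal(80, 10, n_years)
    temperature = rng.normal(25, 5, n_years)
    years = np.arange(1, n_years + 1)            # the time-dependent variable
    tickets = 100 + 2 * snowfall + 5 * years + rng.normal(0, 10, n_years)

    X = sm.add_constant(np.column_stack([snowfall, temperature, years]))
    fit = sm.OLS(tickets, X).fit()
    print(fit.rsquared, durbin_watson(fit.resid))    # d should now be near 2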