
Published by Lucy Auberry. Modified over 2 years ago.

1
Chap 14-1 Business Statistics: A Decision-Making Approach, 6th Edition — Chapter 14: Multiple Regression Analysis and Model Building

2
Chap 14-2 Chapter Goals
After completing this chapter, you should be able to:
understand model building using multiple regression analysis
apply multiple regression analysis to business decision-making situations
analyze and interpret the computer output for a multiple regression model
test the significance of the independent variables in a multiple regression model

3
Chap 14-3 Chapter Goals (continued)
After completing this chapter, you should be able to:
use variable transformations to model nonlinear relationships
recognize potential problems in multiple regression analysis and take steps to correct the problems
incorporate qualitative variables into the regression model by using dummy variables

4
Chap 14-4 The Multiple Regression Model
Idea: examine the linear relationship between 1 dependent variable (y) and 2 or more independent variables (xi)
Population model: y = β0 + β1x1 + β2x2 + … + βkxk + ε
where β0 is the population y-intercept, β1, …, βk are the population slopes, and ε is the random error
Estimated multiple regression model: ŷ = b0 + b1x1 + b2x2 + … + bkxk
where ŷ is the estimated (predicted) value of y, b0 is the estimated intercept, and b1, …, bk are the estimated slope coefficients

5
Chap 14-5 Multiple Regression Model — Two-Variable Model
[3-D plot omitted: y graphed against x1 and x2; the fitted plane tilts by the slope for variable x1 along one axis and the slope for variable x2 along the other]

6
Chap 14-6 Multiple Regression Model — Two-Variable Model (continued)
[3-D plot omitted: a sample observation (x1i, x2i, yi) shown above the fitted plane, with residual e = y − ŷ]
The best-fit equation, ŷ, is found by minimizing the sum of squared errors, Σe²

7
Chap 14-7 Multiple Regression Assumptions
Errors (residuals) from the regression model: e = y − ŷ
The errors are normally distributed
The mean of the errors is zero
Errors have a constant variance
The model errors are independent

8
Chap 14-8 Model Specification
Decide what you want to do and select the dependent variable
Determine the potential independent variables for your model
Gather sample data (observations) for all variables

9
Chap 14-9 The Correlation Matrix
Correlation between the dependent variable and selected independent variables can be found using Excel: Tools / Data Analysis… / Correlation
Can check for statistical significance of correlation with a t test

10
Chap 14-10 Example
A distributor of frozen dessert pies wants to evaluate factors thought to influence demand
Dependent variable: Pie sales (units per week)
Independent variables: Price (in $), Advertising (in $100s)
Data are collected for 15 weeks

11
Chap 14-11 Pie Sales Model
Multiple regression model: Sales = b0 + b1(Price) + b2(Advertising)
[Table of the 15 weekly observations of Pie Sales, Price ($), and Advertising ($100s), and the correlation matrix among the three variables, omitted]

12
Chap 14-12 Interpretation of Estimated Coefficients
Slope (bi): estimates that the average value of y changes by bi units for each 1-unit increase in xi, holding all other variables constant
Example: if b1 = −20, then sales (y) is expected to decrease by an estimated 20 pies per week for each $1 increase in selling price (x1), net of the effects of changes due to advertising (x2)
y-intercept (b0): the estimated average value of y when all xi = 0 (assuming all xi = 0 is within the range of observed values)

13
Chap 14-13 Pie Sales Correlation Matrix
Price vs. Sales: r is negative — there is a negative association between price and sales
Advertising vs. Sales: r is positive — there is a positive association between advertising and sales
[Correlation matrix omitted]

14
Chap 14-14 Scatter Diagrams
[Scatter plots of Sales vs. Price and Sales vs. Advertising omitted]

15
Chap 14-15 Estimating a Multiple Linear Regression Equation
Computer software is generally used to generate the coefficients and measures of goodness of fit for multiple regression
Excel: Tools / Data Analysis... / Regression
PHStat: PHStat / Regression / Multiple Regression…

16
Chap 14-16 Multiple Regression Output
[Excel regression output omitted: regression statistics (Multiple R, R Square, Adjusted R Square, Standard Error, 15 observations), the ANOVA table (df, SS, MS, F, Significance F), and the coefficient table for Intercept, Price, and Advertising (coefficients, standard errors, t statistics, p-values, and 95% confidence limits)]

17
Chap 14-17 The Multiple Regression Equation
b1 is negative: sales will decrease, on average, by |b1| pies per week for each $1 increase in selling price, net of the effects of changes due to advertising
b2 is positive: sales will increase, on average, by b2 pies per week for each $100 increase in advertising, net of the effects of changes due to price
where Sales is in number of pies per week, Price is in $, and Advertising is in $100s

18
Chap 14-18 Using the Model to Make Predictions
Predict sales for a week in which the selling price is $5.50 and advertising is $350:
ŷ = b0 + b1(5.50) + b2(3.5)
Substituting the estimated coefficients gives the predicted weekly pie sales
Note that Advertising is in $100s, so $350 means that x2 = 3.5
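As a sketch of what Excel or PHStat does behind the scenes, the model can be fitted and this prediction reproduced in pure Python by solving the normal equations (X′X)b = X′y. The data are the 15 weekly observations listed on the R-commands slide later in the chapter; the `solve` helper is illustrative, not part of any library.

```python
# Least-squares fit of Sales = b0 + b1*Price + b2*Advertising, sketched
# via the normal equations (X'X) b = X'y with hand-rolled Gaussian elimination.
sales = [350, 460, 350, 430, 350, 380, 430, 470, 450, 490, 340, 300, 440, 450, 300]
price = [5.5, 7.5, 8.0, 8.0, 6.8, 7.5, 4.5, 6.4, 7.0, 5.0, 7.2, 7.9, 5.9, 5.0, 7.0]
adv   = [3.3, 3.3, 3.0, 4.5, 3.0, 4.0, 3.0, 3.7, 3.5, 4.0, 3.5, 3.2, 4.0, 3.5, 2.7]

X = [[1.0, p, a] for p, a in zip(price, adv)]  # design matrix with intercept column

def solve(A, rhs):
    """Solve A x = rhs by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [rhs[i]] for i, row in enumerate(A)]  # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

k = 3  # intercept + 2 predictors
XtX = [[sum(X[i][r] * X[i][c] for i in range(len(X))) for c in range(k)] for r in range(k)]
Xty = [sum(X[i][r] * sales[i] for i in range(len(X))) for r in range(k)]
b0, b1, b2 = solve(XtX, Xty)

# Predict sales at price $5.50 and advertising $350 (x2 = 3.5, in $100s)
pred = b0 + b1 * 5.50 + b2 * 3.5
```

The slope signs should match the correlation slide: negative for price, positive for advertising.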

19
Chap 14-19 Multiple Coefficient of Determination
Reports the proportion of total variation in y explained by all x variables taken together:
R² = SSR / SST (regression sum of squares over total sum of squares)

20
Chap 14-20 Multiple Coefficient of Determination (continued)
[Excel regression output omitted]
The R Square value in the output gives the percentage of the variation in pie sales that is explained by the variation in price and advertising

21
Chap 14-21 Adjusted R²
R² never decreases when a new x variable is added to the model
This can be a disadvantage when comparing models
What is the net effect of adding a new variable? We lose a degree of freedom when a new x variable is added
Did the new x variable add enough explanatory power to offset the loss of one degree of freedom?

22
Chap 14-22 Adjusted R² (continued)
Shows the proportion of variation in y explained by all x variables, adjusted for the number of x variables used:
R²_adj = 1 − (1 − R²)(n − 1)/(n − k − 1)
(where n = sample size, k = number of independent variables)
Penalizes excessive use of unimportant independent variables
Smaller than R²
Useful in comparing among models
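The adjustment can be coded directly from the formula above. A minimal sketch in Python; the R² value used below is illustrative, not the pie-sales output:

```python
def adjusted_r2(r2, n, k):
    """Adjusted R^2 = 1 - (1 - R^2)(n - 1)/(n - k - 1),
    penalizing R^2 for the k independent variables used."""
    return 1.0 - (1.0 - r2) * (n - 1) / (n - k - 1)

# With n = 15 weeks and k = 2 predictors, an (illustrative) R^2 of 0.50
# adjusts downward, reflecting the two degrees of freedom spent:
adj = adjusted_r2(0.50, n=15, k=2)
```

Because (n − 1)/(n − k − 1) > 1 whenever k ≥ 1, the adjusted value is always below R², which is what makes it useful for comparing models of different sizes.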

23
Chap 14-23 Adjusted R² (continued)
[Excel regression output omitted]
The Adjusted R Square value in the output gives the percentage of the variation in pie sales explained by the variation in price and advertising, taking into account the sample size and number of independent variables

24
Chap 14-24 Is the Model Significant?
F-test for overall significance of the model
Shows if there is a linear relationship between all of the x variables considered together and y
Use the F test statistic
Hypotheses:
H0: β1 = β2 = … = βk = 0 (no linear relationship)
HA: at least one βi ≠ 0 (at least one independent variable affects y)

25
Chap 14-25 F-Test for Overall Significance (continued)
Test statistic: F = MSR / MSE = (SSR / k) / (SSE / (n − k − 1))
where F has D1 = k (numerator) and D2 = n − k − 1 (denominator) degrees of freedom
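The test statistic follows mechanically from the sums of squares. A small sketch, using hypothetical SST and SSE values rather than the pie-sales ANOVA output:

```python
def f_statistic(sst, sse, n, k):
    """Overall F = MSR/MSE, with SSR = SST - SSE,
    df1 = k and df2 = n - k - 1."""
    msr = (sst - sse) / k            # mean square regression
    mse = sse / (n - k - 1)          # mean square error
    return msr / mse

# Hypothetical sums of squares with n = 15 observations, k = 2 predictors:
f = f_statistic(sst=100_000.0, sse=50_000.0, n=15, k=2)
```

The resulting F would then be compared with the critical value for (k, n − k − 1) degrees of freedom, or judged via the Significance F p-value in the output.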

26
Chap 14-26 F-Test for Overall Significance (continued)
[Excel regression output omitted: the ANOVA table reports the F statistic, with 2 and 12 degrees of freedom, and the Significance F entry is the p-value for the F-test]

27
Chap 14-27 F-Test for Overall Significance (continued)
H0: β1 = β2 = 0
HA: β1 and β2 not both zero
α = .05, df1 = 2, df2 = 12
Critical value: F.05 with 2 and 12 d.f. (from the F table)
Test statistic: F (from the ANOVA output)
Decision: reject H0 at α = 0.05, since F falls in the rejection region beyond the critical value
Conclusion: the regression model does explain a significant portion of the variation in pie sales (there is evidence that at least one independent variable affects y)

28
Chap 14-28 Are Individual Variables Significant?
Use t-tests of individual variable slopes
Shows if there is a linear relationship between the variable xi and y
Hypotheses:
H0: βi = 0 (no linear relationship)
HA: βi ≠ 0 (linear relationship does exist between xi and y)

29
Chap 14-29 Are Individual Variables Significant? (continued)
H0: βi = 0 (no linear relationship)
HA: βi ≠ 0 (linear relationship does exist between xi and y)
Test statistic: t = bi / s_bi, the estimated slope divided by its standard error (df = n − k − 1)
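The per-variable decision rule can be sketched as a tiny helper: compute t and compare its magnitude with a two-tailed critical value. The numbers below are hypothetical; 2.1788 is the .025 upper-tail t value for 12 d.f. from a t table.

```python
def slope_t_test(b_i, s_b_i, t_crit):
    """t statistic for H0: beta_i = 0 and the two-tailed
    reject / fail-to-reject decision at critical value t_crit."""
    t = b_i / s_b_i
    return t, abs(t) > t_crit

# Hypothetical slope and standard error, df = 12, alpha = .05:
t1, reject1 = slope_t_test(b_i=-3.0, s_b_i=1.0, t_crit=2.1788)  # |t| = 3 > 2.1788
t2, reject2 = slope_t_test(b_i=1.0, s_b_i=1.0, t_crit=2.1788)   # |t| = 1 < 2.1788
```

Equivalently, reject H0 whenever the reported p-value is below α, which is how the Excel output is read on the next slide.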

30
Chap 14-30 Are Individual Variables Significant? (continued)
[Excel regression output omitted]
The t-value for Price has p-value .0398; the t-value for Advertising is t = 2.855, with p-value .0145

31
Chap 14-31 Inferences about the Slope: t Test Example
H0: βi = 0; HA: βi ≠ 0
d.f. = 15 − 2 − 1 = 12, α = .05, two-tailed rejection regions beyond ±tα/2 (α/2 = .025)
From the Excel output: the coefficients, standard errors, t statistics, and p-values for Price and Advertising
Decision: reject H0 for each variable
Conclusion: the test statistic for each variable falls in the rejection region (p-values < .05); there is evidence that both Price and Advertising affect pie sales at α = .05

32
Chap 14-32 Confidence Interval Estimate for the Slope
Confidence interval for the population slope β1 (the effect of changes in price on pie sales):
b1 ± tα/2 s_b1, where t has (n − k − 1) d.f.
[Coefficient table with Lower 95% and Upper 95% limits omitted]
Example: weekly sales are estimated to be reduced by at least 1.37 pies (and at most the upper confidence limit shown in the output) for each increase of $1 in the selling price

33
Chap 14-33 Standard Deviation of the Regression Model
The estimate of the standard deviation of the regression model is:
s_e = √(SSE / (n − k − 1))
Is this value large or small? Must compare it to the mean size of y

34
Chap 14-34 Standard Deviation of the Regression Model (continued)
[Excel regression output omitted]
The Standard Error entry in the output is the standard deviation of the regression model

35
Chap 14-35 Standard Deviation of the Regression Model (continued)
A rough prediction range for pie sales in a given week is ŷ ± 2s_e
Pie sales in the sample were in the 300 to 500 per week range, so this range is probably too large to be acceptable. The analyst may want to look for additional variables that can explain more of the variation in weekly sales
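The ±2s_e rule above is easy to demonstrate with illustrative numbers (not the slide's actual output): a prediction near the middle of the sample with a large s_e produces a range as wide as the whole sample of weekly sales, which is the slide's point.

```python
def rough_prediction_range(y_hat, s_e):
    """Approximate prediction range y_hat +/- 2 * s_e,
    the rough rule used in the slides."""
    return y_hat - 2 * s_e, y_hat + 2 * s_e

# Illustrative: a predicted 400 pies with s_e = 50 spans 300 to 500 --
# as wide as the observed 300-to-500 range of weekly sales
lo, hi = rough_prediction_range(400.0, 50.0)
```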

36
Chap 14-36 R commands
paste("Today is", date())
y = c(350, 460, 350, 430, 350, 380, 430, 470, 450, 490, 340, 300, 440, 450, 300) # y is apple pie sales
x = c(5.5, 7.5, 8.0, 8.0, 6.8, 7.5, 4.5, 6.4, 7.0, 5.0, 7.2, 7.9, 5.9, 5.0, 7.0) # price charged
z = c(3.3, 3.3, 3.0, 4.5, 3.0, 4.0, 3.0, 3.7, 3.5, 4.0, 3.5, 3.2, 4.0, 3.5, 2.7) # advertising expenses
t.test(x); sort(x)
t.test(y); sort(y)
t.test(z); sort(z)
library(fBasics)

37
Chap 14-37 R commands, set 2
cs = colStats(cbind(y, x, z), FUN = basicStats); cs
# following function automatically computes outliers
get.outliers = function(x) {
  # function to compute the number of outliers automatically
  # author H. D. Vinod, Fordham University, New York, 24 March 2006
  su = summary(x)
  if (ncol(as.matrix(x)) > 1) {
    print("Error: input to get.outliers function has 2 or more columns")
    return(0) }
  iqr = su[5] - su[2]
  dn = su[2] - 1.5 * iqr
  up = su[5] + 1.5 * iqr
  LO = x[x < dn]  # observations below the lower fence
  UP = x[x > up]  # observations above the upper fence
  # (the rest of the function was cut off on the original slide; a minimal
  # completion returns the fences and any outliers found)
  list(lower = dn, upper = up, low.outliers = LO, high.outliers = UP) }

38
Chap 14-38 R commands, set 3
xx = get.outliers(x)
xx = get.outliers(y)
xx = get.outliers(z)
# Tests for correlation coefficients
cor.test(x, y)
# capture.output(cor.test(x,y), file="c:/stat2/PieSaleOutput.txt", append=T)
cor.test(z, y)
# capture.output(cor.test(z,y), file="c:/stat2/PieSaleOutput.txt", append=T)
cor.test(x, z)
# capture.output(cor.test(x,z), file="c:/stat2/PieSaleOutput.txt", append=T)
# Now regression analysis
reg1 = lm(y ~ x + z)
summary(reg1)
# plot(reg1)
library(car); confint(reg1) # prints confidence intervals

39
Chap 14-39 Multicollinearity
Multicollinearity: high correlation exists between two independent variables
This means the two variables contribute redundant information to the multiple regression model

40
Chap 14-40 Multicollinearity (continued)
Including two highly correlated independent variables can adversely affect the regression results:
No new information is provided
Can lead to unstable coefficients (large standard errors and low t-values)
Coefficient signs may not match prior expectations

41
Chap 14-41 Some Indications of Severe Multicollinearity
Incorrect signs on the coefficients
Large change in the value of a previous coefficient when a new variable is added to the model
A previously significant variable becomes insignificant when a new independent variable is added
The estimate of the standard deviation of the model increases when a variable is added to the model

42
Chap 14-42 Detecting Collinearity (Variance Inflationary Factor)
VIFj is used to measure collinearity: VIFj = 1 / (1 − R²j)
where R²j is the coefficient of determination when the j-th independent variable is regressed against the remaining k − 1 independent variables
If VIFj > 5, xj is highly correlated with the other explanatory variables
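The VIF formula is a one-liner once R²j is in hand. A sketch, with illustrative R²j values:

```python
def vif(r2_j):
    """Variance inflationary factor for variable j: 1 / (1 - R_j^2),
    where R_j^2 comes from regressing x_j on the other predictors."""
    return 1.0 / (1.0 - r2_j)

# R_j^2 = 0.8 sits exactly at the VIF > 5 rule-of-thumb boundary;
# R_j^2 = 0 (x_j uncorrelated with the others) gives the minimum VIF of 1
boundary = vif(0.8)
minimum = vif(0.0)
```

This makes the threshold easy to read: a VIF above 5 means the other predictors explain more than 80% of x_j's variation.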

43
Chap 14-43 Detecting Collinearity in PHStat
PHStat / regression / multiple regression…: check the "variance inflationary factor (VIF)" box
Output for the pie sales example:
[PHStat output omitted: the regression of Price on all other X variables, with its regression statistics and VIF]
Since there are only two explanatory variables, only one VIF is reported
VIF is < 5, so there is no evidence of collinearity between Price and Advertising

44
Chap 14-44 Qualitative (Dummy) Variables
A categorical explanatory variable (dummy variable) with two or more levels: yes or no, on or off, male or female
Coded as 0 or 1
Regression intercepts are different if the variable is significant
Assumes equal slopes for the other variables
The number of dummy variables needed is (number of levels − 1)

45
Chap 14-45 Dummy-Variable Model Example (with 2 Levels)
Let: y = pie sales, x1 = price, x2 = holiday
(x2 = 1 if a holiday occurred during the week; x2 = 0 if there was no holiday that week)

46
Chap 14-46 Dummy-Variable Model Example (with 2 Levels) (continued)
[Graph omitted: two parallel lines of sales against price — the same slope, but different intercepts: b0 + b2 for Holiday and b0 for No Holiday]
If H0: β2 = 0 is rejected, then "Holiday" has a significant effect on pie sales

47
Chap 14-47 Interpretation of the Dummy Variable Coefficient (with 2 Levels)
Example: Sales = number of pies sold per week; Price = pie price in $; Holiday = 1 if a holiday occurred during the week, 0 if no holiday occurred
b2 = 15: on average, sales were 15 pies greater in weeks with a holiday than in weeks without a holiday, given the same price

48
Chap 14-48 Dummy-Variable Models (more than 2 Levels)
The number of dummy variables is one less than the number of levels
Example: y = house price; x1 = square feet
The style of the house is also thought to matter: style = ranch, split level, condo
Three levels, so two dummy variables are needed
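The "levels minus one" coding can be sketched as a small helper that maps the three styles onto two 0/1 columns, with condo as the baseline (as the next slide assumes). The function name is illustrative:

```python
def style_dummies(style):
    """Code the three-level house-style factor as two dummies (x2, x3);
    'condo' is the default category, coded (0, 0)."""
    x2 = 1 if style == "ranch" else 0
    x3 = 1 if style == "split level" else 0
    return x2, x3

codes = [style_dummies(s) for s in ("condo", "ranch", "split level")]
```

Only two columns are needed because the third level is implied when both dummies are zero; adding a third dummy would make the columns sum to the intercept column and break the regression.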

49
Chap 14-49 Dummy-Variable Models (more than 2 Levels) (continued)
Let the default category be "condo"
b2 shows the impact on price if the house is a ranch style, compared to a condo
b3 shows the impact on price if the house is a split-level style, compared to a condo

50
Chap 14-50 Interpreting the Dummy Variable Coefficients (with 3 Levels)
Suppose the estimated equation is ŷ = b0 + b1x1 + b2x2 + b3x3
For a condo: x2 = x3 = 0
For a ranch: x2 = 1, x3 = 0
For a split level: x2 = 0, x3 = 1
With the same square feet, a ranch will have an estimated average price of b2 thousand dollars more than a condo
With the same square feet, a split level will have an estimated average price of b3 thousand dollars more than a condo

51
Chap 14-51 Nonlinear Relationships
The relationship between the dependent variable and an independent variable may not be linear
Polynomial models are useful when a scatter diagram indicates a nonlinear relationship
Example: quadratic model, in which the second independent variable is the square of the first variable

52
Chap 14-52 Polynomial Regression Model
General form: y = β0 + β1x + β2x² + … + βpx^p + ε
where: β0 = population regression constant; βj = population regression coefficient for the j-th power term, j = 1, 2, …, p; p = order of the polynomial; ε = model error
If p = 2 the model is a quadratic model: y = β0 + β1x + β2x² + ε
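In practice a polynomial model is still fit as an ordinary multiple regression: you build columns x, x², …, x^p and hand them to the same least-squares machinery. A sketch of that column-building step:

```python
def polynomial_terms(x, p):
    """Expand each observation x_i into the columns
    x_i, x_i^2, ..., x_i^p of a p-th order polynomial model."""
    return [[xi ** d for d in range(1, p + 1)] for xi in x]

rows = polynomial_terms([2.0, 3.0], p=2)  # quadratic: columns x and x^2
```

The same data matrix then feeds whatever regression routine is in use (Excel, PHStat, or R's lm with an added squared column).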

53
Chap 14-53 Linear vs. Nonlinear Fit
[Plots omitted: a linear fit to curved data leaves a patterned residual plot; a nonlinear fit gives random residuals]
A linear fit does not give random residuals; a nonlinear fit gives random residuals

54
Chap 14-54 Quadratic Regression Model
Quadratic models may be considered when the scatter diagram takes on a curved shape
[Plots omitted: four curves illustrating the sign combinations of β1 and β2]
β1 = the coefficient of the linear term; β2 = the coefficient of the squared term

55
Chap 14-55 Testing for Significance: Quadratic Model
Test for the overall relationship: F test statistic = MSR / MSE
Testing the quadratic effect: compare the quadratic model with the linear model
Hypotheses:
H0: β2 = 0 (no 2nd-order polynomial term)
HA: β2 ≠ 0 (2nd-order polynomial term is needed)

56
Chap 14-56 Higher Order Models
If p = 3 the model is a cubic form: y = β0 + β1x + β2x² + β3x³ + ε
[Plot of a cubic relationship between y and x omitted]

57
Chap 14-57 Interaction Effects
Hypothesizes interaction between pairs of x variables: the response to one x variable varies at different levels of another x variable
Contains two-way cross-product terms, e.g. y = β0 + β1x1 + β2x2 + β3x1x2 + ε
Basic terms: x1, x2; interactive term: x1x2

58
Chap 14-58 Effect of Interaction
Given: y = β0 + β1x1 + β2x2 + β3x1x2 + ε
Without the interaction term, the effect of x1 on y is measured by β1
With the interaction term, the effect of x1 on y is measured by β1 + β3x2
The effect changes as x2 increases

59
Chap 14-59 Interaction Example
y = 1 + 2x1 + 3x2 + 4x1x2, where x2 = 0 or 1 (dummy variable)
x2 = 1: y = 1 + 2x1 + 3(1) + 4x1(1) = 4 + 6x1
x2 = 0: y = 1 + 2x1 + 3(0) + 4x1(0) = 1 + 2x1
The effect (slope) of x1 on y does depend on the x2 value
[Plot of the two lines omitted]
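The slide's example can be checked numerically: evaluate the fitted equation at x1 = 0 and x1 = 1 for each x2 value, and the slope of x1 comes out as β1 + β3x2 (2 when x2 = 0, 6 when x2 = 1).

```python
def y(x1, x2):
    # the interaction model from the slide: y = 1 + 2*x1 + 3*x2 + 4*x1*x2
    return 1 + 2 * x1 + 3 * x2 + 4 * x1 * x2

slope_x2_0 = y(1, 0) - y(0, 0)  # slope of x1 when x2 = 0 -> beta1 = 2
slope_x2_1 = y(1, 1) - y(0, 1)  # slope of x1 when x2 = 1 -> beta1 + beta3 = 6
```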

60
Chap 14-60 Interaction Regression Model Worksheet
Multiply x1 by x2 to get x1x2, then run the regression with y, x1, x2, and x1x2

61
Chap 14-61 Evaluating Presence of Interaction
Hypothesize interaction between pairs of independent variables
Hypotheses:
H0: β3 = 0 (no interaction between x1 and x2)
HA: β3 ≠ 0 (x1 interacts with x2)

62
Chap 14-62 Model Building
Goal: develop a model with the best set of independent variables
Easier to interpret if unimportant variables are removed
Lower probability of collinearity
Stepwise regression procedure: provides evaluation of alternative models as variables are added
Best-subsets approach: try all combinations and select the best using the highest adjusted R² and lowest sε

63
Chap 14-63 Stepwise Regression
Idea: develop the least squares regression equation in steps, through forward selection, backward elimination, or standard stepwise regression
The coefficient of partial determination measures the marginal contribution of each independent variable, given that the other independent variables are in the model

64
Chap 14-64 Best Subsets Regression
Idea: estimate all possible regression equations using all possible combinations of independent variables
Choose the best fit by looking for the highest adjusted R² and lowest standard error sε
Stepwise regression and best subsets regression can be performed using PHStat, Minitab, or other statistical software packages

65
Chap 14-65 Aptness of the Model
Diagnostic checks on the model include verifying the assumptions of multiple regression:
Each xi is linearly related to y
Errors have constant variance
Errors are independent
Errors are normally distributed
Errors (or residuals) are given by e = y − ŷ
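All of the diagnostic plots on the following slides start from the same quantity: the residuals e_i = y_i − ŷ_i. A minimal sketch of computing them from observed and fitted values (illustrative numbers):

```python
def residuals(y, y_hat):
    """Residuals e_i = y_i - y_hat_i, the raw material for the
    constant-variance, independence, and normality checks."""
    return [yi - fi for yi, fi in zip(y, y_hat)]

e = residuals([3.0, 5.0, 4.0], [2.0, 6.0, 4.0])
```

These values would then be plotted against each x (and, for normality, summarized in a histogram or normal probability plot) as described on the next slides.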

66
Chap 14-66 Residual Analysis
[Residual plots omitted: constant vs. non-constant variance, and independent vs. non-independent residual patterns against x]

67
Chap 14-67 The Normality Assumption
Errors are assumed to be normally distributed
Standardized residuals can be calculated by computer
Examine a histogram or a normal probability plot of the standardized residuals to check for normality

68
Chap 14-68 Chapter Summary
Developed the multiple regression model
Tested the significance of the multiple regression model
Developed adjusted R²
Tested individual regression coefficients
Used dummy variables
Examined interaction in a multiple regression model

69
Chap 14-69 Chapter Summary (continued)
Described nonlinear regression models
Described multicollinearity
Discussed model building: stepwise regression and best subsets regression
Examined residual plots to check model assumptions

