
Multiple Regression Analysis


1 Multiple Regression Analysis
Chapter 14 McGraw-Hill/Irwin Copyright © 2012 by The McGraw-Hill Companies, Inc. All rights reserved.

2 Topics Multiple Regression
Estimation; Global Test; Individual Coefficient Test; Regression Assumptions and Regression Diagnostics (Error Term Distribution, Multicollinearity, Heteroscedasticity, Autocorrelation); Dummy Variable; Stepwise Regression.

3 Multiple Regression Analysis
Multiple Linear Regression Model: Y = α + β1X1 + β2X2 + ··· + βkXk + ε. Y is the dependent variable and X1, X2, …, Xk are the independent variables. α, β1, β2, …, βk are population coefficients that need to be estimated using sample data. ε is the error term. The model represents the linear relationship between the dependent variable and the independent variables in the population. Estimated Regression Equation: Ŷ = a + b1X1 + b2X2 + ··· + bkXk. a and b1, b2, …, bk are coefficients estimated from the sample. bi is the net change in Y for each unit change in Xi, holding the other X's constant. The least squares criterion is used to develop this equation. 14-3
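As a numerical illustration of the least squares criterion, the estimates a, b1, …, bk can be computed directly; a minimal sketch in Python with NumPy, using small made-up placeholder data (not the Salsberry sample):

import numpy as np

# Placeholder data: 5 observations of 2 independent variables and Y (not real data).
X = np.array([[35.0, 3.0], [29.0, 4.0], [36.0, 7.0], [60.0, 6.0], [65.0, 5.0]])
y = np.array([250.0, 360.0, 165.0, 43.0, 92.0])

# Prepend a column of 1s so the first estimated coefficient is the intercept a.
X_design = np.column_stack([np.ones(len(y)), X])

# Least squares: choose the coefficients that minimize the sum of squared residuals.
coeffs, *_ = np.linalg.lstsq(X_design, y, rcond=None)
a, b = coeffs[0], coeffs[1:]
print("intercept a:", a, "slopes b1..bk:", b)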

4 Multiple Linear Regression - Example
Salsberry Realty sells homes along the east coast of the United States. One of the questions most frequently asked by prospective buyers is: If we purchase this home, how much can we expect to pay to heat it during the winter? The research department at Salsberry has been asked to develop some guidelines regarding heating costs for single-family homes. Three variables are thought to relate to the heating costs: (1) the mean daily outside temperature (X1), (2) the number of inches of insulation in the attic (X2), and (3) the age in years of the furnace (X3). To investigate, Salsberry's research department selected a random sample of 20 recently sold homes. It determined the cost to heat each home last January, as well as the January outside temperature in the region, the number of inches of insulation in the attic, and the age of the furnace. (Data: Salsberry.) 14-4

5 Multiple Linear Regression – Excel Output
SUMMARY OUTPUT — Regression Statistics: Multiple R, R Square, Adjusted R Square, Standard Error, Observations = 20. ANOVA: Regression df = 3, Residual df = 16, Total df = 19; Significance F = 6.56E-06. Coefficients table (Coefficients, t Stat, P-value, Lower 95%, Upper 95%): Intercept (P-value 2.24E-06), Temp (P-value 2.1E-05), Insul, Age. The estimated intercept a and slopes b1, b2, and b3 are read from the Coefficients column. See the Excel instructions in the textbook, p. 566, #2. 14-5
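For readers working outside Excel, the same summary (R Square, ANOVA F, coefficient t-tests and p-values) can be produced with statsmodels; a hedged sketch assuming the 20 Salsberry observations sit in a CSV with columns named Cost, Temp, Insul, and Age (file name and column names are assumptions, not from the slide):

import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("salsberry.csv")            # hypothetical file name

X = sm.add_constant(df[["Temp", "Insul", "Age"]])   # adds the intercept column
model = sm.OLS(df["Cost"], X).fit()

print(model.summary())   # R Square, Adjusted R Square, ANOVA F, t Stats, p-values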

6 Estimating the Multiple Regression Equation
Interpreting the Regression Coefficients: The regression coefficient for mean outside temperature, X1, is −4.583: for every unit increase in temperature, holding the other two independent variables constant, monthly heating cost is expected to decrease by $4.583. The attic insulation variable, X2, also shows a negative relationship: for each additional inch of insulation, the cost to heat the home is expected to decline by $14.83 per month. The age of the furnace variable shows a positive relationship: for each additional year of furnace age, the cost is expected to increase by $6.10 per month. 14-6

7 Using the Multiple Regression Equation
Applying the Model for Estimation What is the estimated heating cost for a home if the mean outside temperature is 30 degrees, there are 5 inches of insulation in the attic, and the furnace is 10 years old?
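Continuing the statsmodels sketch shown with the Excel output above (so pd and model are already defined), the point estimate for this home could be obtained as follows; the answer is whatever the fitted equation gives for Temp = 30, Insul = 5, Age = 10:

# One new home: mean outside temperature 30, 5 inches of insulation, 10-year-old furnace.
new_home = pd.DataFrame({"const": [1.0], "Temp": [30.0], "Insul": [5.0], "Age": [10.0]})
print(model.predict(new_home))   # estimated January heating cost in dollars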

8 Fit of the Model—Adjusted R2
The Adjusted R2: R2 is inflated by the number of independent variables, so in multiple regression analysis the adjusted R2 is a better measure of how well the model fits. It ranges from 0 to 1 and is adjusted for the number of independent variables and the sample size. It measures the percentage of the total variation in Y that is explained by all the independent variables, that is, explained by the regression model. SUMMARY OUTPUT — Regression Statistics: Multiple R, R Square, Adjusted R Square, Standard Error, Observations = 20. About 76.7% of the variation in heating cost is explained by the mean outside temperature, attic insulation, and the age of the furnace. 14-8
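The adjustment referred to here is the standard one; with n observations and k independent variables it is usually written as

\bar{R}^{2} = 1 - (1 - R^{2})\,\frac{n-1}{n-k-1}

With n = 20 and k = 3, the adjustment factor (n − 1)/(n − k − 1) equals 19/16.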

9 Global Test: Testing the Multiple Regression Model
The global test is used to investigate whether any of the independent variables have coefficients that are significantly different from zero; it is also a test of the validity of the model. The hypotheses are: H0: β1 = β2 = ··· = βk = 0; H1: Not all the βi are 0. Decision rules: (1) Reject H0 if F > Fα,k,n−k−1, or (2) reject H0 if the p-value < α. 14-9

10 F-distribution The distribution takes nonnegative values only.
Asymmetric, skewed to the right. The shape of the distribution is controlled by two degrees of freedom, denoted v1 and v2, which are usually reported in the ANOVA table of the output. Excel function: =FINV(α, k, n−k−1). ANOVA: Regression df = 3, Residual df = 16, Total df = 19; Significance F = 6.56E-06.
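For readers not using Excel, the same critical value and p-value can be taken from the F-distribution with SciPy; a sketch using the degrees of freedom from the ANOVA table (k = 3, n − k − 1 = 16):

from scipy import stats

alpha, k, df_error = 0.05, 3, 16

f_crit = stats.f.ppf(1 - alpha, k, df_error)   # equivalent to Excel's =FINV(alpha, k, n-k-1)
p_value = stats.f.sf(21.90, k, df_error)       # p-value for the observed F = 21.90

print(f_crit, p_value)   # about 3.24 and about 6.6E-06, matching the output above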

11 Global test—Example
ANOVA: Regression df = 3, Residual df = 16, Total df = 19; Significance F = 6.56E-06. 1. Hypotheses: H0: β1 = β2 = β3 = 0; H1: Not all the βi are 0. 2. Significance level: α = 0.05. 3. Test statistic: F = 21.90. 14-11

12 Global test—Example
ANOVA: Regression df = 3, Residual df = 16, Total df = 19; Significance F = 6.56E-06. 4. Rule (1) rejection region: reject H0 if F > 3.24, where 3.24 = FINV(.05, 3, 16). According to step 3, F = 21.90, which falls in the rejection region. Rule (2): reject H0 if p-value < α; here the p-value = 0.00, less than 0.05. 5. Decision: reject the null hypothesis. 14-12

13 Interpretation The null hypothesis that all the multiple regression coefficients are zero is rejected. Interpretation: Some of the independent variables are useful in predicting the dependent variable (heating cost). Some of the independent variables are linearly related to the dependent variable. The model is valid. Logical question – which ones? 14-13

14 Evaluating Individual Regression Coefficients (βi)
This test is used to determine which independent variables have nonzero regression coefficients. The variables that have nonzero regression coefficients are said to have significant coefficients (significantly different from zero). The variables that have zero regression coefficients can be dropped from the analysis. The test statistic follows the t distribution. The hypotheses tested are: H0: βi = 0; H1: βi ≠ 0. Instead of comparing the test statistic with the rejection region for each independent variable (which is tedious), we rely on the p-values: if the p-value < α, we reject the null hypothesis. 14-14
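The test itself is simple to do by hand once a coefficient and its standard error are read off the output; a Python sketch, where the standard error below is a placeholder (only the coefficient −4.583 appears on these slides):

from scipy import stats

b_i, se_i = -4.583, 0.772        # b_i from the slides; se_i is a placeholder standard error
n, k = 20, 3

t_stat = b_i / se_i                                # t statistic for H0: beta_i = 0
p_value = 2 * stats.t.sf(abs(t_stat), n - k - 1)   # two-tailed p-value
print(t_stat, p_value)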

15 P-values for the Slopes
For temperature: H0: β1 = 0, H1: β1 ≠ 0, p-value = .00 < .05. For insulation: H0: β2 = 0, H1: β2 ≠ 0, p-value = .007 < .05. For furnace age: H0: β3 = 0, H1: β3 ≠ 0, p-value = .148 > .05. Conclusions: For temperature and insulation, reject the null hypothesis; the coefficients are significant (significantly different from zero), the variables are linearly related to heating cost, and the variables are useful in predicting heating cost. For furnace age, do not reject the null hypothesis; the coefficient is insignificant, the variable is not linearly related to heating cost, it is not useful in predicting heating cost, and it can therefore be dropped from the model. Coefficients table (Coefficients, Standard Error, t Stat, P-value, Lower 95%, Upper 95%): Intercept (P-value 2.24E-06), Temp (P-value 2.1E-05), Insul, Age. 14-15

16 New Regression without Variable “Age”
SUMMARY OUTPUT — Regression Statistics: Multiple R, R Square, Adjusted R Square = 0.7495, Standard Error, Observations = 20. ANOVA: Regression df = 2, Residual df = 17, Total df = 19; Significance F = 3.01E-06. Coefficients table (Coefficients, t Stat, P-value, 95% confidence limits): Intercept (P-value 3.56E-09), Temp (P-value 1.16E-06), Insul. 14-16

17 New Regression Model without Variable “Age”
ANOVA: Regression df = 2, Residual df = 17, Total df = 19; Significance F = 3.01E-06; d.f. (2, 17). 1. Hypotheses: H0: β1 = β2 = 0; H1: Not all the βi are 0. 2. Significance level: α = 0.05. 3. Test statistic: F = 29.42. 4. Rejection region: reject H0 if F > 3.59; the test statistic falls in the rejection region, and the p-value = 0.00 is less than 0.05. 5. Decision: reject the null hypothesis.

18 Individual t-test on the new Coefficient
For temperature: H0: β1 = 0, H1: β1 ≠ 0, p-value = .00 < .05. For insulation: H0: β2 = 0, H1: β2 ≠ 0, p-value = .008 < .05. Conclusions: For temperature and insulation, reject the null hypothesis; the coefficients are significant (significantly different from zero), the variables are linearly related to heating cost, and the variables are useful in predicting heating cost. Coefficients table: Intercept (P-value 3.56E-09), Temp (P-value 1.16E-06), Insul. 14-18

19 Multiple Regression Assumptions
Each of the independent variables has a linear relationship with the dependent variable. The independent variables are not correlated; when this assumption is violated, we call the condition multicollinearity. The probability distribution of ε is normal. The variance of ε is constant regardless of the value of Ŷ; this condition is called homoscedasticity, and when the requirement is violated, we say heteroscedasticity is observed in the regression. The error terms are independent of each other; this assumption is often violated when time is involved, and we call this condition autocorrelation. 14-19

20 Evaluating the Assumptions of Multiple Regression
There is a linear relationship: we use a scatter plot to examine this assumption. The independent variables are not correlated: we examine the correlation coefficients among the independent variables. The error term follows the normal probability distribution: we use a histogram of the residuals or a normal probability plot to examine normality. The variance of ε is constant regardless of the value of Ŷ, and the error terms are independent of each other: we plot the residuals against the predicted Y to examine these last two assumptions. 14-20
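A sketch of the two diagnostic plots described above, continuing the fitted statsmodels model from the earlier sketch (so model is assumed to exist):

import matplotlib.pyplot as plt

resid = model.resid            # residuals e = Y - Y_hat
fitted = model.fittedvalues    # predicted Y values

# Histogram of the residuals: a rough check of the normality assumption.
plt.hist(resid, bins=8)
plt.show()

# Residuals versus fitted values: look for changing spread (heteroscedasticity)
# or runs of same-signed residuals (autocorrelation).
plt.scatter(fitted, resid)
plt.axhline(0)
plt.show()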

21 Assumption I: linear relationship
A scatter plot of each independent variable against the dependent variable is used. In practice, we can skip this check, since the test on the individual coefficients serves the same purpose. 14-21

22 Assumption II: Multicollinearity
Multicollinearity exists when independent variables (X's) are correlated. Effects of multicollinearity on the model: 1. An independent variable known to be an important predictor ends up having an insignificant coefficient. 2. A regression coefficient that should have a positive sign turns out to be negative, or vice versa. 3. Multicollinearity adds difficulty to the interpretation of the coefficients: when one variable changes by 1 unit, other correlated variables change as well (but we require them to be held constant in order to interpret the coefficient correctly). However, correlated independent variables do not affect a multiple regression equation's ability to predict the dependent variable (Y). Minimizing the effect of multicollinearity is often easier than correcting it: try to include explanatory variables that are independent of each other, and remove variables that cause multicollinearity in the model. 14-22

23 Multicollinearity: Detection
A general rule is that if the correlation between two independent variables is between −0.70 and 0.70, there likely is not a problem using both of the independent variables. A more precise test is to use the variance inflation factor (VIF). A VIF > 10 is unsatisfactory; remove that independent variable from the analysis. The value of the VIF is found as follows: VIFj = 1 / (1 − R²j), where R²j refers to the coefficient of determination obtained when the selected independent variable is used as the dependent variable and the remaining independent variables are used as the independent variables. 14-23
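A sketch of the VIF computation with statsmodels, reusing the DataFrame df and the assumed column names from the earlier sketch:

import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

X = sm.add_constant(df[["Temp", "Insul", "Age"]])

# VIF for each independent variable (column 0 is the constant, so start at 1).
for i, name in enumerate(X.columns[1:], start=1):
    print(name, variance_inflation_factor(X.values, i))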

24 Multicollinearity – Example
Refer to the heating cost example, in which heating cost is related to the independent variables outside temperature, amount of insulation, and age of furnace. Develop a correlation matrix for all the independent variables. Does it appear there is a problem with multicollinearity? Correlation matrix (Temp, Insul, Age): Temp–Insul = −0.10, Temp–Age = −0.49, Insul–Age = 0.06 (diagonal entries 1.00). None of the correlations among the independent variables exceed −.70 or .70, so we do not suspect problems with multicollinearity. Excel: Data -> Data Analysis -> Correlation. 14-24
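The Excel Data Analysis step above corresponds to a single call in pandas (column names assumed as before):

print(df[["Temp", "Insul", "Age"]].corr())   # correlation matrix of the independent variables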

25 VIF – Example
SUMMARY OUTPUT — Regression Statistics: Multiple R, R Square, Adjusted R Square, Standard Error, Observations = 20. Coefficients table (Coefficients, t Stat, P-value): Intercept, Insul = −0.342, Age. Find and interpret the variance inflation factor for each of the independent variables. We consider the variable temperature first: we run a multiple regression with temperature as the dependent variable and the other two as the independent variables, and obtain the coefficient of determination of that regression. The resulting VIF value of 1.32 is less than the upper limit of 10, which indicates that the independent variable temperature is not strongly correlated with the other independent variables. 14-25

26 VIF – Example Calculating the VIF for each variable using Excel can be tedious. Minitab generates the VIF values for each independent variable in its output, which is shown below. None of the VIFs are higher than 10; hence, we conclude there is not a problem with multicollinearity in this example. Note: for your project, first obtain the correlation matrix. For variables associated with correlation coefficients exceeding −.70 or .70, calculate the corresponding VIFs to further determine whether multicollinearity is an issue. 14-26

27 Assumption III: Normality of Error Term
A histogram of the residuals (discussed in the review) is used to visually determine whether the assumption of normality is satisfied. Excel offers another graph, the normal probability plot, that helps to evaluate this assumption: if the plotted points are fairly close to a straight line drawn from the lower left to the upper right, the normality assumption is satisfied. 14-27
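A sketch of the normal probability plot with SciPy, again assuming the fitted model from the earlier sketch:

from scipy import stats
import matplotlib.pyplot as plt

# Points lying close to the reference line suggest roughly normal residuals.
stats.probplot(model.resid, dist="norm", plot=plt)
plt.show()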

28 Assumption IV & V As we can see from the scatter plot, the residuals are randomly distributed across the horizontal axis and there is no obvious pattern. Therefore, there is no sign of heteroscedasticity or autocorrelation. 14-28

29 Residual Plot versus Fitted Values: Testing the Heteroscedasticity Assumption
When the variance of the error term is changing across different values of Y’s, we refer to this condition as heteroscedasticity. In the plot of the residuals against the predicted value of Y, we look for a change in the spread of the plotted points. The spread of the points increases as the predicted value of Y increases. A scatter plot such as this would indicate possible heteroscedasticity. 14-29

30 Residual Plot versus Fitted Values: Testing the Independence Assumption
When successive residuals are correlated we refer to this condition as autocorrelation, which frequently occurs when the data are collected over a period of time. Note the run of residuals above the mean of the residuals, followed by a run below the mean. A scatter plot such as this would indicate possible autocorrelation. 14-30

31 Dummy Variable
Usually categorical or nominal data cannot be included in the analysis directly; instead, we need to use dummy variables to denote the categories. A dummy variable is a variable that can assume only one of two values (usually 1 and 0), where 1 represents the existence of a certain condition and 0 indicates that the condition does not hold.

32 Dummy Variable - Example
Suppose in the Salsberry Realty example that the independent variable "garage" is added, which indicates whether a house comes with an attached garage or not. To include this variable in our analysis, we define a dummy variable as follows: for homes without an attached garage, 0 is used; for homes with an attached garage, 1 is used. 14-32

33 Dummy Variable - Example
SUMMARY OUTPUT — Regression Statistics: Multiple R, R Square, Adjusted R Square, Standard Error, Observations = 20. ANOVA: Regression df = 3, Residual df = 16, Total df = 19; Significance F = 2.59E-07. Coefficients table (Coefficients, t Stat, P-value, 95% confidence limits): Intercept (P-value 1.71E-07), Temp (P-value 1.62E-05), Insul, Garage. New estimated regression equation: Ŷ = a + b1(Temp) + b2(Insul) + b3(Garage). 14-33

34 Dummy Variable - Example
Interpretation: b3 = 77.4: the heating cost for homes with an attached garage is on average $77.40 higher than for homes without an attached garage, other conditions being the same. 14-34

35 Dummy Variable – Another Example
What determines the value of a used car? To examine this issue, a used-car dealer randomly selected year-old Toyota Camrys that were sold at auction during the past month. Each car was in top condition and equipped with all the features that come standard with this car. The dealer recorded the price ($1,000), the number of miles (thousands) on the odometer and the color of the car. When recording the color, the dealer uses 1 to denote white, 2 to denote silver and 3 to denote other colors. 14-35

36 Dummy Variable – Another Example
Although the variable color contains the numbers 1, 2, and 3, these codes cannot be included in the analysis directly; instead we need to generate dummy variables to denote the different categories. Rule for assigning dummy variables: if there are m different categories in the data, generate m − 1 dummy variables. The last category is represented by I1 = I2 = … = Im-1 = 0 and is called the omitted category. Since there are three categories in the variable color, we generate two dummy variables: I1 = 1 if the car is white and 0 otherwise, and I2 = 1 if the car is silver and 0 otherwise. "Other colors" is the omitted category and is represented by I1 = I2 = 0. 14-36

37 Dummy Variable – Excel Open the data file Toyota Camry
In the column next to "color" type "I1" to generate the dummy variable for "white." In the cell below it, type =IF(C2=1, 1, 0) and hit Enter. (Excel function: IF(logical_test, [value_if_true], [value_if_false]).) Copy the cell and paste it into the rest of the cells in the column until the cell in the previous column is empty. Similarly, generate the dummy variable for "silver" in the next column by typing =IF(C2=2, 1, 0) and following the same procedure. To run the regression, we need to put the explanatory variables together: copy the Odometer column and paste it into the column next to the second dummy variable. Run a multiple regression using the two dummy variables and Odometer. 14-37
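The same dummy columns could also be generated in pandas rather than with Excel's IF function; a sketch assuming a DataFrame cars with columns Price, Odometer, and Color coded 1/2/3 as described (file and column names are assumptions):

import pandas as pd
import statsmodels.api as sm

cars = pd.read_csv("toyota_camry.csv")      # hypothetical file name

# I1 = 1 for white (Color == 1), I2 = 1 for silver (Color == 2);
# "other colors" (Color == 3) is the omitted category with I1 = I2 = 0.
cars["I1"] = (cars["Color"] == 1).astype(int)
cars["I2"] = (cars["Color"] == 2).astype(int)

X = sm.add_constant(cars[["I1", "I2", "Odometer"]])
print(sm.OLS(cars["Price"], X).fit().summary())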

38 Dummy Variable – Excel =IF(C2=1, 1, 0) =IF(C2=2, 1, 0) 14-38

39 Dummy Variable – Excel
SUMMARY OUTPUT — Regression Statistics: Multiple R, R Square, Adjusted R Square, Standard Error, Observations = 100. ANOVA: Regression df = 3, Residual df = 96, Total df = 99; Significance F = 4.65E-25. Coefficients table (Coefficients, t Stat, P-value, confidence limits): Intercept (P-value 2.28E-92; 16.446), I1, I2, Odometer (P-value 4.04E-20). Estimated regression equation: Ŷ = a + b1·I1 + b2·I2 + b3·Odometer. 14-39

40 Dummy Variable – Interpretation
The coefficient of I1, b1 = 0.0911: a white Camry sells on average for 0.0911 thousand, or $91.10, more than other colors (nonwhite, nonsilver) with the same odometer reading. The coefficient of I2, b2 = 0.0330: a silver Camry sells on average for 0.0330 thousand, or $33.04, more than other colors (nonwhite, nonsilver) with the same odometer reading. 14-40

41 Stepwise Regression The advantages to the stepwise method are:
1. Only independent variables with significant regression coefficients are entered into the equation. 2. The steps involved in building the regression equation are clear. 3. It is efficient in finding the regression equation with only significant regression coefficients. 4. The changes in the multiple standard error of estimate and the coefficient of determination are shown. 14-41
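Excel has no built-in stepwise procedure and the Minitab output appears on the next slide; purely as an illustration of the idea, a minimal forward-selection sketch in Python that adds, at each step, the candidate whose coefficient has the smallest p-value (the 0.05 entry threshold and the variable names in the comment are assumptions):

import statsmodels.api as sm

def forward_stepwise(df, response, candidates, alpha=0.05):
    """Repeatedly add the candidate variable whose coefficient has the smallest
    p-value, stopping when no remaining p-value is below alpha."""
    selected, remaining = [], list(candidates)
    while remaining:
        pvals = {}
        for var in remaining:
            X = sm.add_constant(df[selected + [var]])
            fit = sm.OLS(df[response], X).fit()
            pvals[var] = fit.pvalues[var]
        best = min(pvals, key=pvals.get)
        if pvals[best] >= alpha:
            break
        selected.append(best)
        remaining.remove(best)
    return selected

# e.g. forward_stepwise(df, "Cost", ["Temp", "Insul", "Age", "Garage"])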

42 Stepwise Regression – Minitab Example
The stepwise MINITAB output for the heating cost problem follows. Temperature is selected first; this variable explains more of the variation in heating cost than any other proposed independent variable. Garage is selected next, followed by Insulation. The variable age is not selected. 14-42

