**Chapter 15: Multiple Regression**

Statistics for Business and Economics, 6th Edition. © 2007 Pearson Education, Inc.

**Chapter Goals After completing this chapter, you should be able to:**

- Apply multiple regression analysis to business decision-making situations
- Analyze and interpret the computer output for a multiple regression model
- Perform a hypothesis test for all regression coefficients or for a subset of coefficients
- Fit and interpret nonlinear regression models
- Incorporate qualitative variables into the regression model by using dummy variables
- Discuss model specification and analyze residuals

**The Multiple Regression Model**

Idea: examine the linear relationship between one dependent variable (Y) and two or more independent variables (Xi).

Multiple regression model with K independent variables:

Y = β0 + β1X1 + β2X2 + … + βKXK + ε

where β0 is the Y-intercept, β1, …, βK are the population slopes, and ε is the random error.

**Multiple Regression Equation**

The coefficients of the multiple regression model are estimated using sample data. The multiple regression equation with K independent variables is

ŷ = b0 + b1x1 + b2x2 + … + bKxK

where ŷ is the estimated (or predicted) value of y, b0 is the estimated intercept, and b1, …, bK are the estimated slope coefficients. In this chapter we will always use a computer to obtain the regression slope coefficients and other regression summary measures.
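Although the chapter relies on Excel for estimation, the least squares computation behind that output can be sketched in a few lines of Python. Everything below is illustrative and not from the text: the helper names, the synthetic data, and the true coefficients (10, 2, −3) used to generate it.

```python
# Minimal sketch of estimating a two-predictor regression equation by
# least squares via the normal equations (X'X)b = X'y. Illustrative
# only -- the chapter itself uses Excel/PHStat for these computations.

def solve3(a, b):
    """Solve a 3x3 linear system a x = b by Gauss-Jordan elimination."""
    m = [row[:] + [rhs] for row, rhs in zip(a, b)]
    for col in range(3):
        # partial pivoting: bring the largest entry in this column up
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(3):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [v - f * w for v, w in zip(m[r], m[col])]
    return [m[i][3] / m[i][i] for i in range(3)]

def ols_two_predictors(x1, x2, y):
    """Return (b0, b1, b2) minimizing the sum of squared errors."""
    n = len(y)
    s12 = sum(a * b for a, b in zip(x1, x2))
    # X'X and X'y for the design matrix with columns [1, x1, x2]
    xtx = [
        [n, sum(x1), sum(x2)],
        [sum(x1), sum(a * a for a in x1), s12],
        [sum(x2), s12, sum(b * b for b in x2)],
    ]
    xty = [sum(y),
           sum(a * v for a, v in zip(x1, y)),
           sum(b * v for b, v in zip(x2, y))]
    return solve3(xtx, xty)

# Noise-free data generated from y = 10 + 2*x1 - 3*x2, so least
# squares should recover those coefficients (up to rounding).
x1 = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
x2 = [2.0, 1.0, 4.0, 3.0, 6.0, 5.0]
y = [10 + 2 * a - 3 * b for a, b in zip(x1, x2)]
b0, b1, b2 = ols_two_predictors(x1, x2, y)
```

With noisy real data the same normal-equations solution gives the b0, b1, b2 that Excel reports in its coefficient table.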

**Multiple Regression Equation**

(continued) Two-variable model: ŷ = b0 + b1x1 + b2x2 describes a plane over the (x1, x2) space, where b1 is the slope for variable x1 and b2 is the slope for variable x2.

**Standard Multiple Regression Assumptions**

- The values xi and the error terms εi are independent.
- The error terms are random variables with mean 0 and a constant variance, σ². (The constant variance property is called homoscedasticity.)

**Standard Multiple Regression Assumptions**

(continued)

- The random error terms, εi, are not correlated with one another, so that Corr(εi, εj) = 0 for all i ≠ j.
- It is not possible to find a set of numbers, c0, c1, …, cK, not all zero, such that c0 + c1x1i + c2x2i + … + cKxKi = 0 for every observation. (This is the property of no linear relation for the Xj's.)

**Example: 2 Independent Variables**

A distributor of frozen dessert pies wants to evaluate factors thought to influence demand.

- Dependent variable: pie sales (units per week)
- Independent variables: price (in $) and advertising (in $100s)

Data are collected for 15 weeks.

**Pie Sales Example**

Multiple regression equation: Sales = b0 + b1 (Price) + b2 (Advertising)

| Week | Pie Sales | Price ($) | Advertising ($100s) |
|---|---|---|---|
| 1 | 350 | 5.50 | 3.3 |
| 2 | 460 | 7.50 | |
| 3 | | 8.00 | 3.0 |
| 4 | 430 | | 4.5 |
| 5 | | 6.80 | |
| 6 | 380 | | 4.0 |
| 7 | | 4.50 | |
| 8 | 470 | 6.40 | 3.7 |
| 9 | 450 | 7.00 | 3.5 |
| 10 | 490 | 5.00 | |
| 11 | 340 | 7.20 | |
| 12 | 300 | 7.90 | 3.2 |
| 13 | 440 | 5.90 | |
| 14 | | | |
| 15 | | | 2.7 |

(Blank cells did not survive this transcription.)

**Estimating a Multiple Linear Regression Equation**

Excel will be used to generate the coefficients and measures of goodness of fit for multiple regression:

- Excel: Tools / Data Analysis... / Regression
- PHStat: PHStat / Regression / Multiple Regression…

**Multiple Regression Output**

[Excel regression output: Regression Statistics (Multiple R, R Square, Adjusted R Square, Standard Error; Observations = 15), the ANOVA table (df: Regression 2, Residual 12, Total 14; SS, MS, F, Significance F), and the coefficient table (Coefficients, t Stat, P-value, Lower 95%, Upper 95% for Intercept, Price, Advertising). The numeric values are not preserved in this transcription.]

**The Multiple Regression Equation**

The estimated equation is Sales = b0 + b1 (Price) + b2 (Advertising), where Sales is in number of pies per week, Price is in $, and Advertising is in $100s.

- b1 (Price): sales will decrease, on average, by |b1| pies per week for each $1 increase in selling price, net of the effects of changes due to advertising.
- b2 (Advertising): sales will increase, on average, by b2 pies per week for each $100 increase in advertising, net of the effects of changes due to price.

**Coefficient of Determination, R2**

Reports the proportion of total variation in y explained by all x variables taken together:

R² = SSR / SST = regression sum of squares / total sum of squares

This is the ratio of the explained variability to total sample variability.

**Coefficient of Determination, R2**

(continued) [Excel regression output as shown earlier.] R² ≈ .521: 52.1% of the variation in pie sales is explained by the variation in price and advertising.

**Estimation of Error Variance**

Consider the population regression model Y = β0 + β1X1 + … + βKXK + ε. The unbiased estimate of the variance of the errors is

se² = SSE / (n − K − 1) = Σ ei² / (n − K − 1)

where ei = yi − ŷi. The square root of the variance, se, is called the standard error of the estimate.

**Regression Statistics**

Standard error, se. [Excel regression output as shown earlier.] The magnitude of this value can be compared to the average y value.

**Adjusted Coefficient of Determination, R̄2**

R² never decreases when a new X variable is added to the model, even if the new variable is not an important predictor. This can be a disadvantage when comparing models. What is the net effect of adding a new variable? We lose a degree of freedom when a new X variable is added. Did the new X variable add enough explanatory power to offset the loss of one degree of freedom?

**Adjusted Coefficient of Determination, R̄2**

(continued) Used to correct for the fact that adding non-relevant independent variables will still reduce the error sum of squares:

R̄² = 1 − [SSE / (n − K − 1)] / [SST / (n − 1)]

(where n = sample size, K = number of independent variables). Adjusted R² provides a better comparison between multiple regression models with different numbers of independent variables:

- it penalizes excessive use of unimportant independent variables
- it is smaller than R²

**Regression Statistics**

[Excel regression output as shown earlier.] R̄² ≈ .442: 44.2% of the variation in pie sales is explained by the variation in price and advertising, taking into account the sample size and number of independent variables.

**Coefficient of Multiple Correlation**

The coefficient of multiple correlation, R, is the correlation between the predicted value and the observed value of the dependent variable: R = r(ŷ, y). It is the square root of the multiple coefficient of determination, R = √R², and is used as another measure of the strength of the linear relationship between the dependent variable and the independent variables. It is comparable to the correlation between Y and X in simple regression.

**Evaluating Individual Regression Coefficients**

Use t tests for individual coefficients; each test shows whether a specific independent variable is conditionally important. Hypotheses:

- H0: βj = 0 (no linear relationship)
- H1: βj ≠ 0 (a linear relationship does exist between xj and y)

**Evaluating Individual Regression Coefficients**

(continued) H0: βj = 0 (no linear relationship); H1: βj ≠ 0 (a linear relationship does exist between xj and y). Test statistic:

t = bj / sbj   (df = n − K − 1)

where sbj is the estimated standard error of bj.

**Evaluating Individual Regression Coefficients**

(continued) [Excel regression output as shown earlier.] The t value for Price is t = ___ (not preserved), with p-value .0398; the t value for Advertising is t = 2.855, with p-value .0145.

**Example: Evaluating Individual Regression Coefficients**

From the Excel output (coefficients, standard errors, t stats, and p-values for Price and Advertising):

- H0: βj = 0; H1: βj ≠ 0
- d.f. = 15 − 2 − 1 = 12 and α = .05, so the critical value is t12, .025 = 2.1788
- The test statistic for each variable falls in the rejection region (both p-values < .05)

Decision: reject H0 for each variable. Conclusion: there is evidence that both Price and Advertising affect pie sales at α = .05.

**Confidence Interval Estimate for the Slope**

Confidence interval limits for the population slope βj:

bj ± t(n−K−1, α/2) sbj

where t has (n − K − 1) d.f.; here, t has (15 − 2 − 1) = 12 d.f.

Example: form a 95% confidence interval for the effect of changes in price (x1) on pie sales: b1 ± (2.1788)(10.832), where 10.832 is the standard error of the Price coefficient. So the interval is ___ < β1 < ___ (endpoints not preserved).

**Confidence Interval Estimate for the Slope**

(continued) The Excel output also reports these interval endpoints (Lower 95% and Upper 95% for Intercept, Price, Advertising). Example: weekly sales are estimated to be reduced by between 1.37 and ___ pies (upper figure not preserved) for each increase of $1 in the selling price.

**Test on All Coefficients**

F-test for overall significance of the model: shows whether there is a linear relationship between all of the X variables considered together and Y. Use the F test statistic. Hypotheses:

- H0: β1 = β2 = … = βK = 0 (no linear relationship)
- H1: at least one βj ≠ 0 (at least one independent variable affects Y)

**F-Test for Overall Significance**

Test statistic:

F = MSR / MSE = [SSR / K] / [SSE / (n − K − 1)]

where F has K (numerator) and (n − K − 1) (denominator) degrees of freedom. The decision rule is to reject H0 if F exceeds the critical value with K and (n − K − 1) degrees of freedom at significance level α.

**F-Test for Overall Significance**

(continued) [Excel regression output as shown earlier.] The F statistic has 2 and 12 degrees of freedom; the Significance F column gives the p-value for the F test.

**F-Test for Overall Significance**

(continued)

- H0: β1 = β2 = 0; H1: β1 and β2 not both zero
- α = .05; df1 = 2, df2 = 12
- Critical value: F.05 = 3.885

Decision: since the F test statistic is in the rejection region (p-value < .05), reject H0. Conclusion: there is evidence that at least one independent variable affects Y.

**Tests on a Subset of Regression Coefficients**

Consider a multiple regression model involving variables xj and zj,

y = β0 + β1x1 + … + βKxK + α1z1 + … + αrzr + ε

and the null hypothesis that the z variable coefficients are all zero:

H0: α1 = α2 = … = αr = 0

**Tests on a Subset of Regression Coefficients**

(continued) Goal: compare the error sum of squares for the complete model with the error sum of squares for the restricted model.

1. First run a regression for the complete model and obtain SSE.
2. Next run a restricted regression that excludes the z variables (the number of variables excluded is r) and obtain the restricted error sum of squares SSE(r).
3. Compute the F statistic

F = [ (SSE(r) − SSE) / r ] / se²

(with se² from the complete model) and apply the decision rule for significance level α.

**Prediction**

Given a population regression model

Y = β0 + β1X1 + … + βKXK + ε

then, given a new observation of a data point (x1,n+1, x2,n+1, …, xK,n+1), the best linear unbiased forecast of yn+1 is

ŷn+1 = b0 + b1x1,n+1 + b2x2,n+1 + … + bKxK,n+1

It is risky to forecast for new X values outside the range of the data used to estimate the model coefficients, because we do not have data to support that the linear model extends beyond the observed range.

**Using The Equation to Make Predictions**

Predict sales for a week in which the selling price is $5.50 and advertising is $350:

Sales = b0 + b1 (5.50) + b2 (3.5)

Note that Advertising is in $100s, so $350 means that x2 = 3.5. Predicted sales is ___ pies (value not preserved).
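The arithmetic of such a prediction can be sketched as follows; the coefficient values below are hypothetical placeholders, not the estimates from the Excel output:

```python
# Sketch of plugging new x values into a fitted equation. The
# intercept and slopes are illustrative, NOT the pie-sales estimates.
b0, b1, b2 = 300.0, -20.0, 70.0   # hypothetical b0, b1 (Price), b2 (Advertising)

def predict_sales(price, advertising_dollars):
    # Advertising enters the model in $100s, so convert first
    x2 = advertising_dollars / 100.0
    return b0 + b1 * price + b2 * x2

sales = predict_sales(price=5.50, advertising_dollars=350)
# 300 - 20*5.50 + 70*3.5 = 300 - 110 + 245 = 435.0 pies
```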

**Predictions in PHStat**

PHStat | Regression | Multiple Regression… Check the “Confidence and prediction interval estimates” box.

**Predictions in PHStat**

(continued) The output shows the input x values and the predicted ŷ value, along with:

- the confidence interval for the mean y value, given these x's
- the prediction interval for an individual y value, given these x's

**Residuals in Multiple Regression**

Two-variable model: for sample observation i, the residual is ei = (yi − ŷi), the vertical distance between the observed value yi and the fitted value ŷi on the regression plane at (x1i, x2i).

**Nonlinear Regression Models**

The relationship between the dependent variable and an independent variable may not be linear. Review the scatter diagram to check for nonlinear relationships. Example: the quadratic model, in which the second independent variable is the square of the first variable.

**Quadratic Regression Model**

Model form:

Yi = β0 + β1X1i + β2X1i² + εi

where: β0 = Y intercept; β1 = regression coefficient for the linear effect of X on Y; β2 = regression coefficient for the quadratic effect on Y; εi = random error in Y for observation i.

**Linear vs. Nonlinear Fit**

[Paired scatterplot-and-residual panels:] a linear fit to curved data does not give random residuals; a nonlinear fit gives random residuals.

**Quadratic Regression Model**

Quadratic models may be considered when the scatter diagram takes on one of the following shapes (β1 = the coefficient of the linear term, β2 = the coefficient of the squared term):

- β1 < 0, β2 > 0 (falling, then curving upward)
- β1 > 0, β2 > 0 (rising at an increasing rate)
- β1 < 0, β2 < 0 (falling at an increasing rate)
- β1 > 0, β2 < 0 (rising, then curving downward)

**Testing for Significance: Quadratic Effect**

Testing the quadratic effect: compare the linear regression estimate with the quadratic regression estimate. Hypotheses:

- H0: β2 = 0 (the quadratic term does not improve the model)
- H1: β2 ≠ 0 (the quadratic term improves the model)

**Testing for Significance: Quadratic Effect**

(continued) H0: β2 = 0 (the quadratic term does not improve the model); H1: β2 ≠ 0 (the quadratic term improves the model). The test statistic is

t = (b2 − β2) / sb2

where: b2 = squared-term slope coefficient; β2 = hypothesized slope (zero); sb2 = standard error of the slope b2.

**Testing for Significance: Quadratic Effect**

(continued) Testing the quadratic effect: compare R² from the simple regression to R² from the quadratic model. If R² from the quadratic model is larger than R² from the simple model, then the quadratic model is a better model.

**Example: Quadratic Model**

Purity increases as filter time increases:

| Purity | Filter Time |
|---|---|
| 3 | 1 |
| 7 | 2 |
| 8 | |
| 15 | 5 |
| 22 | |
| 33 | |
| 40 | 10 |
| 54 | 12 |
| 67 | 13 |
| 70 | 14 |
| 78 | |
| 85 | |
| 87 | 16 |
| 99 | 17 |

(Blank cells did not survive this transcription.)

**Example: Quadratic Model**

(continued) Simple regression results: ŷ = b0 + b1 (Time). The t statistic, F statistic, and R² are all high (the p-value for Time is 2.078E-10), but the residuals are not random.

**Example: Quadratic Model**

(continued) Quadratic regression results: ŷ = b0 + b1 (Time) + b2 (Time)². The quadratic term is significant (the p-value for Time-squared is 1.165E-05) and improves the model: R² is higher, se is lower, and the residuals are now random.

**The Log Transformation**

The multiplicative model:

Y = β0 X1^β1 X2^β2 ε   (original multiplicative model)

log Y = log β0 + β1 log X1 + β2 log X2 + log ε   (transformed multiplicative model)

**Interpretation of coefficients**

For the multiplicative model, when both dependent and independent variables are logged:

- The coefficient of the independent variable Xk can be interpreted as follows: a 1 percent change in Xk leads to an estimated bk percent change in the average value of Y.
- Thus bk is the elasticity of Y with respect to a change in Xk.
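The elasticity interpretation can be checked numerically; the constants `a` and `beta` below are arbitrary illustrative choices, not values from the text:

```python
import math

# Numerical check of the log-log interpretation: if Y = a * X**beta,
# then regressing log Y on log X gives slope beta, the elasticity.
a, beta = 3.0, 0.8          # illustrative multiplicative model
xs = [1.0, 2.0, 5.0, 10.0]
log_x = [math.log(x) for x in xs]
log_y = [math.log(a * x**beta) for x in xs]

# The log-log relationship is exactly linear, so its slope can be
# read off from any two points:
slope = (log_y[-1] - log_y[0]) / (log_x[-1] - log_x[0])
# slope equals beta: a 1% change in X yields about beta% change in Y
```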

**Dummy Variables**

A dummy variable is a categorical independent variable with two levels:

- yes or no, on or off, male or female
- recorded as 0 or 1

Regression intercepts are different if the variable is significant; equal slopes are assumed for the other variables. If there are more than two levels, the number of dummy variables needed is (number of levels − 1).

**Dummy Variable Example**

Model: ŷ = b0 + b1x1 + b2x2, where:

- y = pie sales
- x1 = price
- x2 = holiday (x2 = 1 if a holiday occurred during the week; x2 = 0 if there was no holiday that week)

**Dummy Variable Example**

(continued) If H0: β2 = 0 is rejected, then “Holiday” has a significant effect on pie sales. The two cases give parallel lines (same slope, different intercepts) in the plot of sales (y) against price (x1):

- Holiday (x2 = 1): intercept b0 + b2
- No holiday (x2 = 0): intercept b0

**Interpreting the Dummy Variable Coefficient**

Example: Sales = number of pies sold per week; Price = pie price in $; Holiday = 1 if a holiday occurred during the week, 0 if no holiday occurred.

b2 = 15: on average, sales were 15 pies greater in weeks with a holiday than in weeks without a holiday, given the same price.

**Interaction Between Explanatory Variables**

Hypothesizes interaction between pairs of x variables: the response to one x variable may vary at different levels of another x variable. The model contains two-way cross-product terms, e.g.

y = β0 + β1x1 + β2x2 + β3x1x2 + ε

**Effect of Interaction**

Given the model y = β0 + β1x1 + β2x2 + β3x1x2 + ε:

- Without the interaction term, the effect of x1 on y is measured by β1
- With the interaction term, the effect of x1 on y is measured by β1 + β3x2, so the effect changes as x2 changes

**Interaction Example**

Suppose x2 is a dummy variable and the estimated regression equation is ŷ = 1 + 2x1 + 3x2 + 4x1x2:

- x2 = 1: ŷ = 1 + 2x1 + 3(1) + 4x1(1) = 4 + 6x1
- x2 = 0: ŷ = 1 + 2x1 + 3(0) + 4x1(0) = 1 + 2x1

Slopes are different if the effect of x1 on y depends on the x2 value.
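The two substituted lines can be verified directly from the estimated equation ŷ = 1 + 2x1 + 3x2 + 4x1x2:

```python
# The example's estimated equation with an interaction term; the two
# substituted lines in the example follow from it directly.
def y_hat(x1, x2):
    return 1 + 2 * x1 + 3 * x2 + 4 * x1 * x2

# The slope in x1 is b1 + b3*x2, so it differs across the x2 groups:
slope_when_x2_is_1 = y_hat(1, 1) - y_hat(0, 1)   # 2 + 4*1 = 6
slope_when_x2_is_0 = y_hat(1, 0) - y_hat(0, 0)   # 2 + 4*0 = 2
```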

**Significance of Interaction Term**

The coefficient b3 is an estimate of the difference in the coefficient of x1 when x2 = 1 compared to when x2 = 0. The t statistic for b3 can be used to test the hypothesis H0: β3 = 0 against H1: β3 ≠ 0. If we reject the null hypothesis, we conclude that there is a difference in the slope coefficient for the two subgroups.

**Multiple Regression Assumptions**

Errors (residuals) from the regression model: ei = (yi − ŷi). Assumptions:

- The errors are normally distributed
- Errors have a constant variance
- The model errors are independent

**Analysis of Residuals in Multiple Regression**

These residual plots are used in multiple regression:

- Residuals vs. ŷi
- Residuals vs. x1i
- Residuals vs. x2i
- Residuals vs. time (if time-series data)

Use the residual plots to check for violations of the regression assumptions.

**Chapter Summary**

- Developed the multiple regression model
- Tested the significance of the multiple regression model
- Discussed the adjusted coefficient of determination (R̄²)
- Tested individual regression coefficients
- Tested portions of the regression model
- Used quadratic terms and log transformations in regression models
- Used dummy variables
- Evaluated interaction effects
- Discussed using residual plots to check model assumptions

**Chapter 15-1: Additional Topics in Regression Analysis**

Statistics for Business and Economics, 6th Edition.

**Chapter Goals After completing this chapter, you should be able to:**

- Explain regression model-building methodology
- Apply dummy variables for categorical variables with more than two categories
- Explain how dummy variables can be used in experimental design models
- Incorporate lagged values of the dependent variable as regressors
- Describe specification bias and multicollinearity
- Examine residuals for heteroscedasticity and autocorrelation

**The Stages of Model Building**

The stages: Model Specification → Coefficient Estimation → Model Verification → Interpretation and Inference.

Model Specification:

- Understand the problem to be studied
- Select the dependent and independent variables
- Identify the model form (linear, quadratic, …)
- Determine the required data for the study

**The Stages of Model Building**

(continued) Coefficient Estimation:

- Estimate the regression coefficients using the available data
- Form confidence intervals for the regression coefficients
- For prediction, the goal is the smallest se
- If estimating individual slope coefficients, examine the model for multicollinearity and specification bias

**The Stages of Model Building**

(continued) Model Verification:

- Logically evaluate the regression results in light of the model (i.e., are coefficient signs correct?)
- Are any coefficients biased or illogical?
- Evaluate the regression assumptions (i.e., are residuals random and independent?)
- If any problems are suspected, return to model specification and adjust the model

**The Stages of Model Building**

(continued) Interpretation and Inference:

- Interpret the regression results in the setting and units of your study
- Form confidence intervals or test hypotheses about the regression coefficients
- Use the model for forecasting or prediction

**Dummy Variable Models (More than 2 Levels)**

Dummy variables can be used in situations in which the categorical variable of interest has more than two categories. Dummy variables can also be useful in experimental design:

- Experimental design is used to identify possible causes of variation in the value of the dependent variable
- Y outcomes are measured at specific combinations of levels for treatment and blocking variables
- The goal is to determine how the different treatments influence the Y outcome

**Dummy Variable Models (More than 2 Levels)**

Consider a categorical variable with K levels; the number of dummy variables needed is one less than the number of levels, K − 1.

Example: y = house price; x1 = square feet. If the style of the house is also thought to matter (style = ranch, split level, condo: three levels), two dummy variables are needed.

**Dummy Variable Models (More than 2 Levels)**

(continued) Example: let “condo” be the default category, and let x2 and x3 be used for the other two categories:

- y = house price
- x1 = square feet
- x2 = 1 if ranch, 0 otherwise
- x3 = 1 if split level, 0 otherwise

The multiple regression equation is:

ŷ = b0 + b1x1 + b2x2 + b3x3
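This coding scheme can be sketched as a small helper; the function name and the (x2, x3) tuple ordering are our own, not from the text:

```python
# Two-dummy coding for a three-level style variable, with "condo" as
# the default category (as in the house-price example).
def style_dummies(style):
    """Return (x2, x3) for style in {'ranch', 'split', 'condo'}."""
    return (1 if style == "ranch" else 0,
            1 if style == "split" else 0)

# condo is the baseline: both dummies are 0, so its effect is
# absorbed into the intercept b0
codes = {s: style_dummies(s) for s in ("condo", "ranch", "split")}
```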

**Interpreting the Dummy Variable Coefficients (with 3 Levels)**

Consider the regression equation ŷ = b0 + b1x1 + b2x2 + b3x3:

- For a condo: x2 = x3 = 0
- For a ranch: x2 = 1, x3 = 0; with the same square feet, a ranch will have an estimated average price of b2 thousand dollars more than a condo
- For a split level: x2 = 0, x3 = 1; with the same square feet, a split level will have an estimated average price of b3 thousand dollars more than a condo

**Experimental Design**

Consider an experiment in which four treatments will be used, and the outcome also depends on three environmental factors that cannot be controlled by the experimenter.

- Let variable z1 denote the treatment, where z1 = 1, 2, 3, or 4
- Let z2 denote the environmental factor (the “blocking variable”), where z2 = 1, 2, or 3
- To model the four treatments, three dummy variables are needed
- To model the three environmental factors, two dummy variables are needed

**Experimental Design**

(continued) Define five dummy variables, x1, x2, x3, x4, and x5:

- Let treatment level 1 be the default (z1 = 1); define x1 = 1 if z1 = 2 (0 otherwise), x2 = 1 if z1 = 3, and x3 = 1 if z1 = 4
- Let environment level 1 be the default (z2 = 1); define x4 = 1 if z2 = 2 and x5 = 1 if z2 = 3
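These definitions translate directly into code; a sketch:

```python
# Direct translation of the dummy-variable definitions above:
# treatment z1 in {1, 2, 3, 4} -> (x1, x2, x3), and the blocking
# variable z2 in {1, 2, 3} -> (x4, x5), with level 1 the default
# (all-zero) in each case.
def design_dummies(z1, z2):
    x1 = 1 if z1 == 2 else 0
    x2 = 1 if z1 == 3 else 0
    x3 = 1 if z1 == 4 else 0
    x4 = 1 if z2 == 2 else 0
    x5 = 1 if z2 == 3 else 0
    return (x1, x2, x3, x4, x5)
```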

**Experimental Design: Dummy Variable Tables**

The dummy variable values can be summarized in tables:

| z1 | x1 | x2 | x3 |
|---|---|---|---|
| 1 | 0 | 0 | 0 |
| 2 | 1 | 0 | 0 |
| 3 | 0 | 1 | 0 |
| 4 | 0 | 0 | 1 |

| z2 | x4 | x5 |
|---|---|---|
| 1 | 0 | 0 |
| 2 | 1 | 0 |
| 3 | 0 | 1 |

**Experimental Design Model**

The experimental design model can be estimated using the equation

ŷ = b0 + b1x1 + b2x2 + b3x3 + b4x4 + b5x5

The estimated value of β2, for example, shows the amount by which the y value for treatment 3 exceeds the value for treatment 1.

**Lagged Values of the Dependent Variable**

In time-series models, data are collected over time (weekly, quarterly, etc.). The value of y in time period t is denoted yt. The value of yt often depends on the value yt−1, as well as on other independent variables xj:

yt = β0 + β1x1t + β2x2t + … + βKxKt + γyt−1 + εt

A lagged value of the dependent variable is included as an explanatory variable.

**Interpreting Results in Lagged Models**

An increase of 1 unit in the independent variable xj in time period t (all other variables held fixed) will lead to an expected increase in the dependent variable of

- βj in period t
- γβj in period (t + 1)
- γ²βj in period (t + 2)
- γ³βj in period (t + 3), and so on

The total expected increase over all current and future time periods is βj / (1 − γ). The coefficients β0, β1, …, βK, and γ are estimated by least squares in the usual manner.
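The geometric decay of the effect can be checked numerically; the values of βj and γ below are illustrative:

```python
# Numerical check of the lagged-model effect pattern: a one-unit
# change in x_j raises y by beta_j now, gamma*beta_j next period,
# gamma**2 * beta_j after that, and so on (illustrative values).
beta_j, gamma = 2.0, 0.5
effects = [beta_j * gamma**t for t in range(60)]  # periods t, t+1, ...
total = sum(effects)
# geometric series: the total approaches beta_j / (1 - gamma) = 4
```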

**Interpreting Results in Lagged Models**

(continued) Confidence intervals and hypothesis tests for the regression coefficients are computed the same as in ordinary multiple regression. (When the regression equation contains lagged variables, these procedures are only approximately valid; the approximation quality improves as the number of sample observations increases.)

Caution should be used when applying confidence intervals and hypothesis tests to time-series data: there is a possibility that the equation errors εi are no longer independent from one another. When the errors are correlated, the coefficient estimates are unbiased but not efficient, and the usual confidence intervals and hypothesis tests are no longer valid.

**Specification Bias**

Suppose an important independent variable z is omitted from a regression model:

- If z is uncorrelated with all other included independent variables, the influence of z is left unexplained and is absorbed by the error term, ε
- But if there is any correlation between z and any of the included independent variables, some of the influence of z is captured in the coefficients of the included variables

**Specification Bias**

(continued) If some of the influence of the omitted variable z is captured in the coefficients of the included independent variables, then those coefficients are biased, and the usual inferential statements from hypothesis tests or confidence intervals can be seriously misleading. In addition, the estimated model error will include the effect of the missing variable(s) and will therefore be larger.

**Multicollinearity**

Collinearity: high correlation exists among two or more independent variables. This means the correlated variables contribute redundant information to the multiple regression model.

**Multicollinearity**

(continued) Including two highly correlated explanatory variables can adversely affect the regression results:

- No new information is provided
- Coefficients can become unstable (large standard errors and low t values)
- Coefficient signs may not match prior expectations

**Some Indications of Strong Multicollinearity**

- Incorrect signs on the coefficients
- A large change in the value of a previous coefficient when a new variable is added to the model
- A previously significant variable becomes insignificant when a new independent variable is added
- The estimate of the standard deviation of the model increases when a variable is added to the model

**Detecting Multicollinearity**

- Examine the simple correlation matrix to determine if strong correlation exists between any pairs of the model's independent variables
- Multicollinearity may be present if the model appears to explain the dependent variable well (high F statistic and low se) but the individual coefficient t statistics are insignificant
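The first check can be sketched in pure Python; the data below are made up to illustrate a case of perfect collinearity:

```python
import math

# Sketch of the first multicollinearity check: the simple correlation
# between each pair of candidate independent variables.
def corr(u, v):
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    su = math.sqrt(sum((a - mu) ** 2 for a in u))
    sv = math.sqrt(sum((b - mv) ** 2 for b in v))
    return sum((a - mu) * (b - mv) for a, b in zip(u, v)) / (su * sv)

# x2 is an exact linear function of x1, so their correlation is 1 --
# a red flag for including both variables in the same regression.
x1 = [1.0, 2.0, 3.0, 4.0]
x2 = [2.0 * a + 5.0 for a in x1]
x3 = [1.0, -1.0, 2.0, 0.0]   # unrelated to x1 in this tiny sample
r12 = corr(x1, x2)
r13 = corr(x1, x3)
```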

84
**Assumptions of Regression**

- Normality of Error: error values (ε) are normally distributed for any given value of X
- Homoscedasticity: the probability distribution of the errors has constant variance
- Independence of Errors: error values are statistically independent

85
**Residual Analysis**

The residual for observation i, ei, is the difference between its observed and predicted value:

ei = yi – ŷi

Check the assumptions of regression by examining the residuals:
- Examine for linearity
- Examine for constant variance at all levels of X (homoscedasticity)
- Evaluate the normal distribution assumption
- Evaluate the independence assumption

Graphical analysis of residuals: plot residuals vs. X
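Computing the residuals is the starting point for every check above. A minimal numpy sketch (not from the slides; the helper name `residuals` is illustrative):

```python
import numpy as np

def residuals(x, y):
    """e_i = y_i - yhat_i from a least-squares fit with an intercept."""
    Z = np.column_stack([np.ones(len(y)), x])
    b, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return y - Z @ b

rng = np.random.default_rng(3)
x = rng.uniform(0, 10, 100)
y = 1 + 2 * x + rng.normal(size=100)
e = residuals(x, y)
# by construction OLS residuals average to zero and are uncorrelated with x,
# so any visible pattern in a plot of e against x signals an assumption violation
```

Plotting `e` against `x` (or against the fitted values) then gives the linearity, constant-variance, and independence pictures discussed on the following slides.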

86
**Residual Analysis for Linearity**

[Residual plots vs. x: a random scatter of residuals indicates the linear model is appropriate; a curved, systematic pattern indicates the relationship is not linear]

87
**Residual Analysis for Homoscedasticity**

[Residual plots vs. x: residual spread that is roughly uniform across x indicates constant variance; spread that widens or narrows with x indicates non-constant variance]

88
**Residual Analysis for Independence**

[Residual plots vs. X: a patternless scatter indicates independent errors; a systematic pattern over X, such as cyclical runs, indicates the errors are not independent]

89
**Excel Residual Output**

[Excel output: a table of predicted house prices and residuals for the 10 observations, with a residual plot. The residuals do not appear to violate any regression assumptions]

90
**Heteroscedasticity**

- Homoscedasticity: the probability distribution of the errors has constant variance
- Heteroscedasticity: the error terms do not all have the same variance
  - The size of the error variances may depend on the size of the dependent variable value, for example
- When heteroscedasticity is present, least squares is not the most efficient procedure for estimating the regression coefficients
- The usual procedures for deriving confidence intervals and tests of hypotheses are not valid

91
**Tests for Heteroscedasticity**

To test the null hypothesis that the error terms, εi, all have the same variance against the alternative that their variances depend on the expected values:

- Estimate the simple regression of the squared residuals on the predicted values: ei2 = a0 + a1ŷi
- Let R2 be the coefficient of determination of this auxiliary regression
- The null hypothesis is rejected if nR2 is greater than χ21,α, the critical value of the chi-square random variable with 1 degree of freedom and probability of error α (3.841 at the 5% level)
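The nR2 test above can be sketched in a few lines of numpy. This is an illustration, not the textbook's code; the helper name `heteroscedasticity_test` is invented, and the 3.841 default is the 5% chi-square(1) critical value.

```python
import numpy as np

def heteroscedasticity_test(y, x, chi2_crit=3.841):
    """Fit y on x by least squares, regress the squared residuals on the
    fitted values, and compare n*R^2 with the chi-square(1) critical value."""
    n = len(y)
    Z = np.column_stack([np.ones(n), x])
    b, *_ = np.linalg.lstsq(Z, y, rcond=None)
    fitted = Z @ b
    e2 = (y - fitted) ** 2

    A = np.column_stack([np.ones(n), fitted])   # auxiliary regression e_i^2 on yhat_i
    a, *_ = np.linalg.lstsq(A, e2, rcond=None)
    resid = e2 - A @ a
    r2 = 1 - resid @ resid / ((e2 - e2.mean()) @ (e2 - e2.mean()))
    stat = n * r2
    return stat, stat > chi2_crit

rng = np.random.default_rng(1)
x = rng.uniform(1, 10, 300)
y_homo = 2 + 3 * x + rng.normal(size=300)          # constant error variance
y_hetero = 2 + 3 * x + rng.normal(size=300) * x    # error variance grows with x
```

On the heteroscedastic sample the statistic is far above the critical value, so the null of constant variance is rejected; on the homoscedastic sample it is much smaller.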

92
**Autocorrelated Errors**

- Independence of errors: error values are statistically independent
- Autocorrelated errors: residuals in one time period are related to residuals in another period
- Autocorrelation violates a least squares regression assumption
- It leads to estimated coefficient standard errors (sb) that are too small (i.e., biased downward)
- Thus t-values are too large and some variables may appear significant when they are not

93
**Autocorrelation**

- Autocorrelation is correlation of the errors (residuals) over time
- A cyclic pattern in a plot of residuals over time, rather than random scatter, violates the regression assumption that residuals are random and independent

94
**The Durbin-Watson Statistic**

- The Durbin-Watson statistic is used to test for autocorrelation
  - H0: successive residuals are not correlated (i.e., Corr(εt, εt-1) = 0)
  - H1: autocorrelation is present
- The possible range is 0 ≤ d ≤ 4
- d should be close to 2 if H0 is true
- d less than 2 may signal positive autocorrelation; d greater than 2 may signal negative autocorrelation

95
**Testing for Positive Autocorrelation**

H0: positive autocorrelation does not exist
H1: positive autocorrelation is present

- Calculate the Durbin-Watson test statistic d = Σt=2..n (et – et-1)2 / Σt=1..n et2
- d can be approximated by d = 2(1 – r), where r is the sample correlation of successive errors
- Find the values dL and dU from the Durbin-Watson table (for sample size n and number of independent variables K)
- Decision rule: reject H0 if d < dL; the test is inconclusive if dL ≤ d ≤ dU; do not reject H0 if d > dU
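The statistic itself is simple to compute. A minimal numpy sketch (not from the slides; the function name is illustrative), comparing independent errors with AR(1) errors:

```python
import numpy as np

def durbin_watson(residuals):
    """d = sum_{t=2..n} (e_t - e_{t-1})^2 / sum_{t=1..n} e_t^2.
    Range 0 to 4; close to 2 when successive errors are uncorrelated."""
    e = np.asarray(residuals, dtype=float)
    return np.sum(np.diff(e) ** 2) / np.sum(e ** 2)

rng = np.random.default_rng(2)
white = rng.normal(size=500)   # independent errors: d should be near 2

rho = 0.8                      # AR(1) errors: positive autocorrelation
pos = np.empty(500)
pos[0] = rng.normal()
for t in range(1, 500):
    pos[t] = rho * pos[t - 1] + rng.normal()

d_white = durbin_watson(white)
d_pos = durbin_watson(pos)
```

Consistent with d ≈ 2(1 – r), the autocorrelated series gives a statistic well below 2, while the independent series stays close to 2.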

96
**Negative Autocorrelation**

- Negative autocorrelation exists if successive errors are negatively correlated
- This can occur if successive errors alternate in sign
- Decision rule for negative autocorrelation: reject H0 if d > 4 – dL; the test is inconclusive if 4 – dU ≤ d ≤ 4 – dL; do not reject H0 if d < 4 – dU

97
**Testing for Positive Autocorrelation**

(continued) Example with n = 25. [Excel/PHStat output: the Durbin-Watson calculations, showing the sum of squared differences of successive residuals, the sum of squared residuals, and the resulting Durbin-Watson statistic; numeric values not shown]

98
**Testing for Positive Autocorrelation**

(continued)

- Here, n = 25 and there is k = 1 independent variable
- From the Durbin-Watson table, dL = 1.29 and dU = 1.45
- The computed statistic D is less than dL = 1.29, so reject H0 and conclude that significant positive autocorrelation exists
- Therefore the linear model is not the appropriate model to forecast sales

99
**Dealing with Autocorrelation**

Suppose that we want to estimate the coefficients of the regression model

yt = β0 + β1x1t + β2x2t + … + βKxKt + εt

where the error term εt is autocorrelated

Two steps:

(i) Estimate the model by least squares, obtaining the Durbin-Watson statistic d, and then estimate the autocorrelation parameter using r = 1 – (d/2)

100
**Dealing with Autocorrelation**

(ii) Estimate by least squares a second regression with
- dependent variable (yt – ryt-1)
- independent variables (x1t – rx1,t-1), (x2t – rx2,t-1), …, (xKt – rxK,t-1)

- The parameters β1, β2, …, βK are estimated regression coefficients from the second model
- An estimate of β0 is obtained by dividing the estimated intercept for the second model by (1 – r)
- Hypothesis tests and confidence intervals for the regression coefficients can be carried out using the output from the second model
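The two steps above can be sketched with numpy. This is an illustration under stated assumptions, not the textbook's code: the helper names `ols` and `two_step_estimate` are invented, and the demo simulates y = 1 + 2x with AR(1) errors (ρ = 0.7) to check that the procedure roughly recovers the coefficients.

```python
import numpy as np

def ols(x, y):
    """Least-squares fit with an intercept; returns (coefficients, residuals)."""
    Z = np.column_stack([np.ones(len(y)), x])
    b, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return b, y - Z @ b

def two_step_estimate(x, y):
    """Step (i): fit OLS, compute Durbin-Watson d, set r = 1 - d/2.
    Step (ii): refit OLS on the quasi-differenced data (y_t - r*y_{t-1})
    on (x_t - r*x_{t-1}), then divide the new intercept by (1 - r)."""
    _, e = ols(x, y)
    d = np.sum(np.diff(e) ** 2) / np.sum(e ** 2)
    r = 1 - d / 2
    b_star, _ = ols(x[1:] - r * x[:-1], y[1:] - r * y[:-1])
    b0 = b_star[0] / (1 - r)
    return r, np.concatenate(([b0], b_star[1:]))

# demo: y = 1 + 2x with AR(1) errors, rho = 0.7
rng = np.random.default_rng(4)
n = 400
x = rng.uniform(0, 10, n)
eps = np.empty(n)
eps[0] = rng.normal(scale=0.5)
for t in range(1, n):
    eps[t] = 0.7 * eps[t - 1] + rng.normal(scale=0.5)
y = 1.0 + 2.0 * x + eps
r, coef = two_step_estimate(x, y)
```

Here `r` lands near the true ρ of 0.7 and `coef` near the true (β0, β1) = (1, 2); the quasi-differencing is what removes the serial correlation so that the second-stage inference is valid.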

101
**Chapter Summary**

- Discussed regression model building
- Introduced dummy variables for more than two categories and for experimental design
- Used lagged values of the dependent variable as regressors
- Discussed specification bias and multicollinearity
- Described heteroscedasticity
- Defined autocorrelation and used the Durbin-Watson test to detect positive and negative autocorrelation
