Regression Models. Chapter 4. To accompany Quantitative Analysis for Management, Tenth Edition, by Render, Stair, and Hanna. PowerPoint slides created by Jeff Heyl. © 2009 Prentice-Hall, Inc.

© 2009 Prentice-Hall, Inc. 4 – 2 Learning Objectives. After completing this chapter, students will be able to: 1. Identify variables and use them in a regression model. 2. Develop simple linear regression equations from sample data and interpret the slope and intercept. 3. Compute the coefficient of determination and the coefficient of correlation and interpret their meanings. 4. Interpret the F-test in a linear regression model. 5. List the assumptions used in regression and use residual plots to identify problems.

© 2009 Prentice-Hall, Inc. 4 – 3 Learning Objectives. After completing this chapter, students will be able to: 6. Develop a multiple regression model and use it to predict. 7. Use dummy variables to model categorical data. 8. Determine which variables should be included in a multiple regression model. 9. Transform a nonlinear function into a linear one for use in regression. 10. Understand and avoid common mistakes made in the use of regression analysis.

© 2009 Prentice-Hall, Inc. 4 – 4 Chapter Outline. 4.1 Introduction. 4.2 Scatter Diagrams. 4.3 Simple Linear Regression. 4.4 Measuring the Fit of the Regression Model. 4.5 Using Computer Software for Regression. 4.6 Assumptions of the Regression Model.

© 2009 Prentice-Hall, Inc. 4 – 5 Chapter Outline. 4.7 Testing the Model for Significance. 4.8 Multiple Regression Analysis. 4.9 Binary or Dummy Variables. 4.10 Model Building. 4.11 Nonlinear Regression. 4.12 Cautions and Pitfalls in Regression Analysis.

© 2009 Prentice-Hall, Inc. 4 – 6 Introduction. Regression analysis is a very valuable tool for a manager. Regression can be used to understand the relationship between variables and to predict the value of one variable based on another variable. Simple linear regression models have only two variables; multiple regression models have more variables.

© 2009 Prentice-Hall, Inc. 4 – 7 Introduction. The variable to be predicted is called the dependent variable, sometimes called the response variable. The value of this variable depends on the value of the independent variable, sometimes called the explanatory or predictor variable.

© 2009 Prentice-Hall, Inc. 4 – 8 Scatter Diagram. Graphing is a helpful way to investigate the relationship between variables. A scatter diagram or scatter plot is often used. The independent variable is normally plotted on the X axis, and the dependent variable is normally plotted on the Y axis.

© 2009 Prentice-Hall, Inc. 4 – 9 Triple A Construction. Triple A Construction renovates old homes. They have found that the dollar volume of renovation work is dependent on the area payroll.
Table 4.1
TRIPLE A'S SALES ($100,000s)   LOCAL PAYROLL ($100,000,000s)
6                              3
8                              4
9                              6
5                              4
4.5                            2
9.5                            5

© 2009 Prentice-Hall, Inc. 4 – 10 Triple A Construction. Figure 4.1: Scatter diagram of the Triple A Construction data, with sales ($100,000s) on the Y axis and payroll ($100 millions) on the X axis.

© 2009 Prentice-Hall, Inc. 4 – 11 Simple Linear Regression. Regression models are used to test if there is a relationship between variables. There is some random error that cannot be predicted. The underlying model is
Y = β0 + β1X + ε
where Y = dependent variable (response), X = independent variable (predictor or explanatory), β0 = intercept (value of Y when X = 0), β1 = slope of the regression line, and ε = random error.

© 2009 Prentice-Hall, Inc. 4 – 12 Simple Linear Regression. True values for the slope and intercept are not known, so they are estimated using sample data:
Ŷ = b0 + b1X
where Ŷ = predicted value of the dependent variable (response), X = independent variable (predictor or explanatory), b0 = intercept (value of Ŷ when X = 0), and b1 = slope of the regression line.

© 2009 Prentice-Hall, Inc. 4 – 13 Triple A Construction. Triple A Construction is trying to predict sales based on area payroll: Y = sales, X = area payroll. The line chosen in Figure 4.1 is the one that minimizes the errors, where Error = (Actual value) − (Predicted value).

© 2009 Prentice-Hall, Inc. 4 – 14 Triple A Construction. For the simple linear regression model, the values of the intercept and slope can be calculated using the formulas below:
b1 = Σ(X − X̄)(Y − Ȳ) / Σ(X − X̄)²
b0 = Ȳ − b1X̄
where X̄ = ΣX/n is the mean of the X values and Ȳ = ΣY/n is the mean of the Y values.

© 2009 Prentice-Hall, Inc. 4 – 15 Triple A Construction. Table 4.2: Regression calculations
Y      X     (X − X̄)²          (X − X̄)(Y − Ȳ)
6      3     (3 − 4)² = 1       (3 − 4)(6 − 7) = 1
8      4     (4 − 4)² = 0       (4 − 4)(8 − 7) = 0
9      6     (6 − 4)² = 4       (6 − 4)(9 − 7) = 4
5      4     (4 − 4)² = 0       (4 − 4)(5 − 7) = 0
4.5    2     (2 − 4)² = 4       (2 − 4)(4.5 − 7) = 5
9.5    5     (5 − 4)² = 1       (5 − 4)(9.5 − 7) = 2.5
ΣY = 42, Ȳ = 42/6 = 7;  ΣX = 24, X̄ = 24/6 = 4;  Σ(X − X̄)² = 10;  Σ(X − X̄)(Y − Ȳ) = 12.5

© 2009 Prentice-Hall, Inc. 4 – 16 Triple A Construction. Regression calculations:
b1 = Σ(X − X̄)(Y − Ȳ) / Σ(X − X̄)² = 12.5/10 = 1.25
b0 = Ȳ − b1X̄ = 7 − (1.25)(4) = 2
Therefore Ŷ = 2 + 1.25X

© 2009 Prentice-Hall, Inc. 4 – 17 Triple A Construction. The regression calculations give Ŷ = 2 + 1.25X, therefore sales = 2 + 1.25(payroll). If the payroll next year is $600 million, then X = 6 and Ŷ = 2 + 1.25(6) = 9.5, or predicted sales of $950,000.
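Purely as an illustration (not part of the original slides), the least-squares calculations above can be reproduced with a minimal Python sketch; the variable names are chosen here for clarity and are not from the text.

```python
# Least-squares estimates for the Triple A Construction data
X = [3, 4, 6, 4, 2, 5]        # local payroll ($100 millions)
Y = [6, 8, 9, 5, 4.5, 9.5]    # sales ($100,000s)

n = len(X)
x_bar = sum(X) / n            # 4.0
y_bar = sum(Y) / n            # 7.0

b1 = sum((x - x_bar) * (y - y_bar) for x, y in zip(X, Y)) / sum((x - x_bar) ** 2 for x in X)
b0 = y_bar - b1 * x_bar       # 7 - 1.25 * 4 = 2.0

print(b0, b1)                 # 2.0 1.25
print(b0 + b1 * 6)            # prediction for payroll = $600 million -> 9.5 ($950,000)
```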

© 2009 Prentice-Hall, Inc. 4 – 18 Measuring the Fit of the Regression Model. Regression models can be developed for any variables X and Y. How do we know the model is actually helpful in predicting Y based on X? We could just take the average error, but the positive and negative errors would cancel each other out. Three measures of variability are: SST, the total variability about the mean; SSE, the variability about the regression line; and SSR, the variability that is explained by the model.

© 2009 Prentice-Hall, Inc. 4 – 19 Measuring the Fit of the Regression Model.
Sum of squares total: SST = Σ(Y − Ȳ)²
Sum of squares error: SSE = Σe² = Σ(Y − Ŷ)²
Sum of squares due to regression: SSR = Σ(Ŷ − Ȳ)²
An important relationship: SST = SSR + SSE

© 2009 Prentice-Hall, Inc. 4 – 20 Measuring the Fit of the Regression Model. Table 4.3
Y      X     (Y − Ȳ)²            Ŷ = 2 + 1.25X         (Y − Ŷ)²     (Ŷ − Ȳ)²
6      3     (6 − 7)² = 1        2 + 1.25(3) = 5.75    0.0625       1.5625
8      4     (8 − 7)² = 1        2 + 1.25(4) = 7.00    1            0
9      6     (9 − 7)² = 4        2 + 1.25(6) = 9.50    0.25         6.25
5      4     (5 − 7)² = 4        2 + 1.25(4) = 7.00    4            0
4.5    2     (4.5 − 7)² = 6.25   2 + 1.25(2) = 4.50    0            6.25
9.5    5     (9.5 − 7)² = 6.25   2 + 1.25(5) = 8.25    1.5625       1.5625
Ȳ = 7;  Σ(Y − Ȳ)² = 22.5 (SST);  Σ(Y − Ŷ)² = 6.875 (SSE);  Σ(Ŷ − Ȳ)² = 15.625 (SSR)

© 2009 Prentice-Hall, Inc. 4 – 21 Measuring the Fit of the Regression Model. For Triple A Construction, the sum of squares total is SST = 22.5, the sum of squared error is SSE = 6.875, and the sum of squares due to regression is SSR = 15.625. The important relationship SST = SSR + SSE checks: 22.5 = 15.625 + 6.875.
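As an illustrative sketch (not from the original slides), the three sums of squares and the identity SST = SSR + SSE can be verified in Python using the fitted line from the previous slides:

```python
# Computing SST, SSE, and SSR for the Triple A Construction data
X = [3, 4, 6, 4, 2, 5]
Y = [6, 8, 9, 5, 4.5, 9.5]
b0, b1 = 2.0, 1.25
y_bar = sum(Y) / len(Y)                                 # 7.0

Y_hat = [b0 + b1 * x for x in X]                        # predicted values

SST = sum((y - y_bar) ** 2 for y in Y)                  # 22.5
SSE = sum((y - yh) ** 2 for y, yh in zip(Y, Y_hat))     # 6.875
SSR = sum((yh - y_bar) ** 2 for yh in Y_hat)            # 15.625

print(SST, SSE, SSR)
print(abs(SST - (SSR + SSE)) < 1e-9)                    # the identity SST = SSR + SSE holds
```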

© 2009 Prentice-Hall, Inc. 4 – 22 Measuring the Fit of the Regression Model. Figure 4.2: The Triple A Construction data with the regression line Ŷ = 2 + 1.25X, showing the deviations Y − Ŷ about the line (sales in $100,000s versus payroll in $100 millions).

© 2009 Prentice-Hall, Inc. 4 – 23 Coefficient of Determination. The proportion of the variability in Y explained by the regression equation is called the coefficient of determination, r². It is computed as r² = SSR/SST = 1 − SSE/SST. For Triple A Construction, r² = 15.625/22.5 = 0.6944, so about 69% of the variability in Y is explained by the equation based on payroll (X).

© 2009 Prentice-Hall, Inc. 4 – 24 Correlation Coefficient. The correlation coefficient, r, is an expression of the strength of the linear relationship. It will always be between +1 and –1, and it equals the square root of r² with the sign of the slope. For Triple A Construction, r = √0.6944 = 0.8333.

© 2009 Prentice-Hall, Inc. 4 – 25 Correlation Coefficient. Figure 4.3: Four scatter plots illustrating (a) perfect positive correlation, r = +1; (b) positive correlation, 0 < r < 1; (c) no correlation, r = 0; and (d) perfect negative correlation, r = –1.

© 2009 Prentice-Hall, Inc. 4 – 26 Using Computer Software for Regression Program 4.1A

© 2009 Prentice-Hall, Inc. 4 – 27 Using Computer Software for Regression Program 4.1B

© 2009 Prentice-Hall, Inc. 4 – 28 Using Computer Software for Regression Program 4.1C

© 2009 Prentice-Hall, Inc. 4 – 29 Using Computer Software for Regression Program 4.1D

© 2009 Prentice-Hall, Inc. 4 – 30 Using Computer Software for Regression Program 4.1D Correlation coefficient is called Multiple R in Excel
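Programs 4.1A through 4.1D show Excel's Data Analysis output. As a hedged alternative sketch (an assumption of this example, not the tool used in the slides), the same regression can be run with the Python statsmodels package:

```python
# Regression output comparable to Excel's Data Analysis add-in (Programs 4.1A-4.1D)
import statsmodels.api as sm

payroll = [3, 4, 6, 4, 2, 5]          # X
sales = [6, 8, 9, 5, 4.5, 9.5]        # Y

X = sm.add_constant(payroll)          # adds the intercept column
model = sm.OLS(sales, X).fit()

print(model.params)                   # intercept and slope: approximately [2.0, 1.25]
print(model.rsquared)                 # "R Square" in Excel
print(model.rsquared ** 0.5)          # "Multiple R" in Excel (for a single X, this is |r|)
print(model.summary())                # coefficients, ANOVA table, and p-values
```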

© 2009 Prentice-Hall, Inc. 4 – 31 Assumptions of the Regression Model. If we make certain assumptions about the errors in a regression model, we can perform statistical tests to determine if the model is useful: 1. Errors are independent. 2. Errors are normally distributed. 3. Errors have a mean of zero. 4. Errors have a constant variance. A plot of the residuals (errors) will often highlight any glaring violations of these assumptions.

© 2009 Prentice-Hall, Inc. 4 – 32 Residual Plots. Figure 4.4A: A random plot of residuals (error plotted against X).

© 2009 Prentice-Hall, Inc. 4 – 33 Residual Plots. Figure 4.4B: Nonconstant error variance (error plotted against X).

© 2009 Prentice-Hall, Inc. 4 – 34 Residual Plots. Figure 4.4C: A nonlinear relationship (error plotted against X).

© 2009 Prentice-Hall, Inc. 4 – 35 Estimating the Variance. Errors are assumed to have a constant variance (σ²), but we usually don't know this. It can be estimated using the mean squared error (MSE), s²:
s² = MSE = SSE / (n − k − 1)
where n = number of observations in the sample and k = number of independent variables.

© 2009 Prentice-Hall, Inc. 4 – 36 Estimating the Variance. For Triple A Construction, s² = MSE = SSE/(n − k − 1) = 6.875/(6 − 1 − 1) = 1.7188. We can estimate the standard deviation as s = √MSE = √1.7188 = 1.31. This is also called the standard error of the estimate or the standard deviation of the regression.
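The same arithmetic as a small illustrative sketch (not from the original slides):

```python
# Estimating the error variance and standard error for Triple A Construction
SSE, n, k = 6.875, 6, 1

MSE = SSE / (n - k - 1)       # s^2 = 6.875 / 4 = 1.71875
s = MSE ** 0.5                # standard error of the estimate, about 1.31

print(MSE, round(s, 2))
```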

© 2009 Prentice-Hall, Inc. 4 – 37 Testing the Model for Significance. When the sample size is too small, you can get good values for MSE and r² even if there is no relationship between the variables. Testing the model for significance helps determine if the values are meaningful. We do this by performing a statistical hypothesis test.

© 2009 Prentice-Hall, Inc. 4 – 38 Testing the Model for Significance. We start with the general linear model Y = β0 + β1X + ε. The null hypothesis is that there is no linear relationship between X and Y (β1 = 0). The alternate hypothesis is that there is a linear relationship (β1 ≠ 0). If the null hypothesis can be rejected, we have statistical evidence that a linear relationship exists. We use the F statistic for this test.

© 2009 Prentice-Hall, Inc. 4 – 39 Testing the Model for Significance. The F statistic is based on the MSE and the MSR, where MSR = SSR/k and k = number of independent variables in the model. The F statistic is
F = MSR / MSE
This describes an F distribution with degrees of freedom for the numerator df1 = k and degrees of freedom for the denominator df2 = n − k − 1.

© 2009 Prentice-Hall, Inc. 4 – 40 Testing the Model for Significance. If there is very little error, the MSE would be small and the F statistic would be large, indicating the model is useful. If the F statistic is large, the significance level (p-value) will be low, indicating it is unlikely this result would have occurred by chance. So when the F value is large, we can reject the null hypothesis, conclude that there is a linear relationship between X and Y, and treat the values of the MSE and r² as meaningful.

© 2009 Prentice-Hall, Inc. 4 – 41 Steps in a Hypothesis Test. 1. Specify null and alternative hypotheses. 2. Select the level of significance (α); common values are 0.01 and 0.05. 3. Calculate the value of the test statistic using the formula F = MSR/MSE.

© 2009 Prentice-Hall, Inc. 4 – 42 Steps in a Hypothesis Test. 4. Make a decision using one of the following methods: (a) Reject the null hypothesis if the test statistic is greater than the F value from the table in Appendix D, that is, if F calculated > F(α, df1, df2) with df1 = k and df2 = n − k − 1; otherwise, do not reject the null hypothesis. (b) Reject the null hypothesis if the observed significance level, or p-value, is less than the level of significance (α); otherwise, do not reject the null hypothesis.

© 2009 Prentice-Hall, Inc. 4 – 43 Triple A Construction. Step 1. H0: β1 = 0 (no linear relationship between X and Y); H1: β1 ≠ 0 (a linear relationship exists between X and Y). Step 2. Select α = 0.05. Step 3. Calculate the value of the test statistic: MSR = SSR/k = 15.625/1 = 15.625, MSE = 1.7188, so F = MSR/MSE = 15.625/1.7188 = 9.09.

© 2009 Prentice-Hall, Inc. 4 – 44 Triple A Construction Step 4. Reject the null hypothesis if the test statistic is greater than the F -value in Appendix D df 1 = k = 1 df 2 = n – k – 1 = 6 – 1 – 1 = 4 The value of F associated with a 5% level of significance and with degrees of freedom 1 and 4 is found in Appendix D F 0.05,1,4 = 7.71 F calculated = 9.09 Reject H 0 because 9.09 > 7.71
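As a hedged sketch (not part of the original slides, which use the table in Appendix D), the critical value and p-value for this F test can be obtained with scipy:

```python
# F test for the Triple A Construction model
from scipy import stats

SSR, MSE, k, n = 15.625, 1.71875, 1, 6

MSR = SSR / k                               # 15.625
F = MSR / MSE                               # about 9.09

F_crit = stats.f.ppf(0.95, k, n - k - 1)    # F(0.05, 1, 4), about 7.71
p_value = stats.f.sf(F, k, n - k - 1)       # observed significance level

print(round(F, 2), round(F_crit, 2), round(p_value, 4))
# Reject H0 because F > F_crit (equivalently, p_value < 0.05)
```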

© 2009 Prentice-Hall, Inc. 4 – 45 Triple A Construction. Figure 4.5: The F distribution with df1 = 1 and df2 = 4, showing the critical value F(0.05) = 7.71 and the test statistic F = 9.09. We can conclude there is a statistically significant relationship between X and Y. The r² value of 0.69 means about 69% of the variability in sales (Y) is explained by local payroll (X).

© 2009 Prentice-Hall, Inc. 4 – 46 Analysis of Variance (ANOVA) Table. When software is used to develop a regression model, an ANOVA table is typically created that shows the observed significance level (p-value) for the calculated F value. This can be compared to the level of significance (α) to make a decision. Table 4.4:
Source       DF          SS     MS                      F          SIGNIFICANCE
Regression   k           SSR    MSR = SSR/k             MSR/MSE    P(F > MSR/MSE)
Residual     n − k − 1   SSE    MSE = SSE/(n − k − 1)
Total        n − 1       SST

© 2009 Prentice-Hall, Inc. 4 – 47 ANOVA for Triple A Construction. Program 4.1D (partial) reports the observed significance level P(F > 9.09) = 0.0394. Because this probability is less than 0.05, we reject the null hypothesis of no linear relationship and conclude there is a linear relationship between X and Y.

© 2009 Prentice-Hall, Inc. 4 – 48 Multiple Regression Analysis. Multiple regression models are extensions to the simple linear model and allow the creation of models with several independent variables:
Y = β0 + β1X1 + β2X2 + … + βkXk + ε
where Y = dependent variable (response variable), Xi = ith independent variable (predictor or explanatory variable), β0 = intercept (value of Y when all Xi = 0), βi = coefficient of the ith independent variable, k = number of independent variables, and ε = random error.

© 2009 Prentice-Hall, Inc. 4 – 49 Multiple Regression Analysis. To estimate these values, a sample is taken and the following equation is developed:
Ŷ = b0 + b1X1 + b2X2 + … + bkXk
where Ŷ = predicted value of Y, b0 = sample intercept (an estimate of β0), and bi = sample coefficient of the ith variable (an estimate of βi).

© 2009 Prentice-Hall, Inc. 4 – 50 Jenny Wilson Realty. Jenny Wilson wants to develop a model to determine the suggested listing price for houses based on the size and age of the house:
Ŷ = b0 + b1X1 + b2X2
where Ŷ = predicted value of the dependent variable (selling price), b0 = Y intercept, X1 and X2 = values of the two independent variables (square footage and age) respectively, and b1 and b2 = slopes for X1 and X2 respectively. She selects a sample of houses that have sold recently and records the data shown in Table 4.5.

© 2009 Prentice-Hall, Inc. 4 – 51 Jenny Wilson Realty. Table 4.5:
SELLING PRICE ($)   SQUARE FOOTAGE   AGE   CONDITION
95,000              1,926            30    Good
119,000             2,069            40    Excellent
124,800             1,720            30    Excellent
135,000             1,396            15    Good
142,000             1,706            32    Mint
145,000             1,847            38    Mint
159,000             1,950            27    Mint
165,000             2,323            30    Excellent
182,000             2,285            26    Mint
183,000             3,752            35    Good
200,000             2,300            18    Good
211,000             2,525            17    Good
215,000             3,800            40    Excellent
219,000             1,740            12    Mint

© 2009 Prentice-Hall, Inc. 4 – 52 Jenny Wilson Realty Program 4.2
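Program 4.2 shows Excel's multiple regression output. As an illustrative sketch under the assumption of using pandas and statsmodels instead of Excel, the same two-variable model can be fit from the Table 4.5 data:

```python
# Multiple regression for the Jenny Wilson Realty data (square footage and age)
import pandas as pd
import statsmodels.api as sm

df = pd.DataFrame({
    "price": [95000, 119000, 124800, 135000, 142000, 145000, 159000,
              165000, 182000, 183000, 200000, 211000, 215000, 219000],
    "sqft":  [1926, 2069, 1720, 1396, 1706, 1847, 1950,
              2323, 2285, 3752, 2300, 2525, 3800, 1740],
    "age":   [30, 40, 30, 15, 32, 38, 27, 30, 26, 35, 18, 17, 40, 12],
})

X = sm.add_constant(df[["sqft", "age"]])
model = sm.OLS(df["price"], X).fit()

print(model.params)       # b0, b1 (square footage), b2 (age)
print(model.rsquared)     # roughly 0.67, consistent with the slides that follow
print(model.f_pvalue)     # overall model significance
```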

© 2009 Prentice-Hall, Inc. 4 – 53 Evaluating Multiple Regression Models. Evaluation is similar to simple linear regression models. The p-value for the F-test and r² are interpreted the same. The hypothesis is different because there is more than one independent variable. The F-test investigates whether all the coefficients are equal to 0.

© 2009 Prentice-Hall, Inc. 4 – 54 Evaluating Multiple Regression Models. To determine which independent variables are significant, tests are performed for each variable. The test statistic is calculated, and if the p-value is lower than the level of significance (α), the null hypothesis is rejected.

© 2009 Prentice-Hall, Inc. 4 – 55 Jenny Wilson Realty. The model is statistically significant: the p-value for the F-test is well below 0.05. With r² = 0.67, the model explains about 67% of the variation in selling price (Y). But the F-test is for the entire model, and we can't tell if one or both of the independent variables are significant. By calculating the p-value of each variable, we can assess the significance of the individual variables. Since the p-values for X1 (square footage) and X2 (age) are both less than the significance level of 0.05, both null hypotheses can be rejected.

© 2009 Prentice-Hall, Inc. 4 – 56 Binary or Dummy Variables. Binary (or dummy or indicator) variables are special variables created for qualitative data. A dummy variable is assigned a value of 1 if a particular condition is met and a value of 0 otherwise. The number of dummy variables must equal one less than the number of categories of the qualitative variable.

© 2009 Prentice-Hall, Inc. 4 – 57 Jenny Wilson Realty. Jenny believes a better model can be developed if she includes information about the condition of the property: X3 = 1 if the house is in excellent condition, 0 otherwise; X4 = 1 if the house is in mint condition, 0 otherwise. Two dummy variables are used to describe the three categories of condition. No variable is needed for "good" condition since if both X3 and X4 = 0, the house must be in good condition.
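A minimal illustrative sketch (not from the original slides) of how the two dummy variables would be built from the condition column of Table 4.5:

```python
# Building the two dummy variables for the three condition categories
conditions = ["Good", "Excellent", "Excellent", "Good", "Mint", "Mint", "Mint",
              "Excellent", "Mint", "Good", "Good", "Good", "Excellent", "Mint"]

# "Good" is the baseline category, so only two dummies are needed for three categories
x3 = [1 if c == "Excellent" else 0 for c in conditions]   # X3 = 1 for excellent condition
x4 = [1 if c == "Mint" else 0 for c in conditions]        # X4 = 1 for mint condition

for row in list(zip(conditions, x3, x4))[:4]:
    print(row)    # e.g. ('Good', 0, 0), ('Excellent', 1, 0), ...
```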

© 2009 Prentice-Hall, Inc. 4 – 58 Jenny Wilson Realty Program 4.3

© 2009 Prentice-Hall, Inc. 4 – 59 Jenny Wilson Realty. Program 4.3: The model explains about 90% of the variation in selling price, the F value indicates significance, and low p-values indicate each variable is significant.

© 2009 Prentice-Hall, Inc. 4 – 60 Model Building. The best model is a statistically significant model with a high r² and few variables. As more variables are added to the model, the r² value usually increases. For this reason, the adjusted r² value is often used to determine the usefulness of an additional variable. The adjusted r² takes into account the number of independent variables in the model.

© 2009 Prentice-Hall, Inc. 4 – 61 Model Building. The formula for r²:
r² = SSR/SST = 1 − SSE/SST
The formula for adjusted r²:
adjusted r² = 1 − [SSE/(n − k − 1)] / [SST/(n − 1)]
As the number of variables increases, the adjusted r² gets smaller unless the increase due to the new variable is large enough to offset the change in k.
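An illustrative sketch (not from the original slides) showing how the penalty for extra variables works, reusing the Triple A Construction sums of squares purely as example numbers:

```python
# r^2 versus adjusted r^2
def r_squared(SSE, SST):
    return 1 - SSE / SST

def adjusted_r_squared(SSE, SST, n, k):
    # penalizes extra independent variables through the degrees of freedom
    return 1 - (SSE / (n - k - 1)) / (SST / (n - 1))

print(r_squared(6.875, 22.5))                  # about 0.69
print(adjusted_r_squared(6.875, 22.5, 6, 1))   # about 0.62, smaller than r^2
```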

© 2009 Prentice-Hall, Inc. 4 – 62 Model Building. In general, if a new variable increases the adjusted r², it should probably be included in the model. In some cases, variables contain duplicate information. When two independent variables are correlated, they are said to be collinear. When more than two independent variables are correlated, multicollinearity exists. When multicollinearity is present, hypothesis tests for the individual coefficients are not valid, but the model may still be useful.

© 2009 Prentice-Hall, Inc. 4 – 63 Nonlinear Regression. In some situations, variables are not linear. Transformations may be used to turn a nonlinear model into a linear model. [Scatter plots contrasting a linear relationship with a nonlinear relationship.]

© 2009 Prentice-Hall, Inc. 4 – 64 Colonel Motors. The engineers want to use regression analysis to improve fuel efficiency. They have been asked to study the impact of weight on miles per gallon (MPG). Table 4.6: MPG and weight (1,000 lbs.) for the sampled automobiles (the data values are not reproduced in this transcript).

© 2009 Prentice-Hall, Inc. 4 – 65 Colonel Motors. Figure 4.6A: Linear model, with MPG plotted against weight (1,000 lb.) and a straight-line fit.

© 2009 Prentice-Hall, Inc. 4 – 66 Colonel Motors. Program 4.4: A useful model, with a small significance (p) value for the F-test and a good r² value.

© 2009 Prentice-Hall, Inc. 4 – 67 Colonel Motors. Figure 4.6B: Nonlinear model, with MPG plotted against weight (1,000 lb.) and a curved fit.

© 2009 Prentice-Hall, Inc. 4 – 68 Colonel Motors. The nonlinear model is a quadratic model: Ŷ = b0 + b1X1 + b2X1². The easiest way to work with this model is to develop a new variable X2 = (weight)². This gives us the model Ŷ = b0 + b1X1 + b2X2, which can be solved with linear regression software.
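As a hedged sketch of this transformation (not from the original slides), using hypothetical weight and MPG values standing in for Table 4.6, which is not reproduced in this transcript, and assuming pandas and statsmodels as the regression software:

```python
# Turning the quadratic weight model into a linear regression by adding X2 = X1^2
import pandas as pd
import statsmodels.api as sm

# Hypothetical weight (1,000 lbs.) and MPG values, not the textbook's data
weight = [4.6, 4.7, 4.0, 2.5, 3.1, 3.1, 3.2, 2.7, 2.7, 1.7, 2.0, 1.9]
mpg = [12, 13, 15, 18, 19, 19, 20, 23, 24, 33, 36, 42]

df = pd.DataFrame({"X1": weight, "mpg": mpg})
df["X2"] = df["X1"] ** 2                     # the new squared variable

linear = sm.OLS(df["mpg"], sm.add_constant(df[["X1"]])).fit()
quadratic = sm.OLS(df["mpg"], sm.add_constant(df[["X1", "X2"]])).fit()

print(linear.rsquared_adj, quadratic.rsquared_adj)   # compare the two fits
```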

© 2009 Prentice-Hall, Inc. 4 – 69 Colonel Motors. Program 4.5: A better model, with a smaller significance (p) value for the F-test and a larger adjusted r² value.

© 2009 Prentice-Hall, Inc. 4 – 70 Cautions and Pitfalls. If the assumptions are not met, the statistical tests may not be valid. Correlation does not necessarily mean causation. Multicollinearity makes interpreting coefficients problematic, but the model may still be good. Using a regression model beyond the range of X is questionable; the relationship may not hold outside the sample data.

© 2009 Prentice-Hall, Inc. 4 – 71 Cautions and Pitfalls. t-tests for the intercept (b0) may be ignored, as this point is often outside the range of the model. A linear relationship may not be the best relationship, even if the F-test returns an acceptable value. A nonlinear relationship can exist even if a linear relationship does not. Just because a relationship is statistically significant doesn't mean it has any practical value.