
Multiple Regression

Introduction
In this chapter we extend the simple linear regression model to allow for any number of independent variables. We wish to build a model that fits the data better than the simple linear regression model does.

Computer printout is used to help us:
– Assess/validate the model: How well does it fit the data? Is it useful? Are any of the required conditions violated?
– Apply the model: interpret the coefficients and estimate the expected value of the dependent variable.

Model and Required Conditions
We allow for k independent variables to potentially be related to the dependent variable:
Y = β0 + β1X1 + β2X2 + … + βkXk + ε
where Y is the dependent variable, X1, …, Xk are the independent variables, β0, β1, …, βk are the coefficients, and ε is the random error variable.

Multiple Regression for k = 2, Graphical Demonstration
The simple linear regression model allows for one independent variable X: Y = β0 + β1X + ε. Its deterministic part, Y = β0 + β1X, is a straight line.
The multiple linear regression model allows for more than one independent variable: Y = β0 + β1X1 + β2X2 + ε. Note how the straight line becomes a plane, Y = β0 + β1X1 + β2X2.

Required Conditions for the Error Variable
– The error ε is normally distributed.
– The mean of ε is zero, and its standard deviation σε is constant for all possible values of the Xi's.
– All errors are independent.

Estimating the Coefficients and Assessing the Model
The procedure used to perform regression analysis:
– Obtain the model coefficients and statistics using Excel.
– Diagnose violations of the required conditions, and try to remedy problems when identified.
– Assess the model fit using statistics obtained from the sample.
– If the model assessment indicates a good fit to the data, use the model to interpret the coefficients and generate predictions.
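The estimation step can be sketched with ordinary least squares on a small synthetic data set; the variable names and numbers below are made up for illustration, not from the chapter's data.

```python
import numpy as np

# Hypothetical data: n = 6 observations, k = 2 independent variables.
x1 = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
x2 = np.array([2.0, 1.0, 4.0, 3.0, 6.0, 5.0])
# Build Y from known coefficients (no noise), so least squares
# should recover beta0 = 1, beta1 = 2, beta2 = 0.5 exactly.
y = 1.0 + 2.0 * x1 + 0.5 * x2

# Design matrix: a leading column of ones estimates the intercept b0.
A = np.column_stack([np.ones_like(x1), x1, x2])
b0, b1, b2 = np.linalg.lstsq(A, y, rcond=None)[0]
```

In practice the error term makes the fitted b's differ from the true β's; Excel's Regression tool performs the same least-squares computation and adds the assessment statistics discussed next.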

Example 18.1: Where to locate a new motor inn?
– La Quinta Motor Inns is planning an expansion.
– Management wishes to predict which sites are likely to be profitable, defined as having a 50% or higher operating margin (net profit expressed as a percentage of total revenue).
– Several potential predictors of profitability are:
  Competition (room supply)
  Market awareness (competing motels)
  Demand generators (office space and colleges)
  Demographics (household income)
  Physical quality/location (distance to downtown)

The variables, grouped by what they measure:
– Profitability: Operating Margin
– Competition/Supply: Rooms (number of hotel/motel rooms within 3 miles of the site)
– Market Awareness: Nearest (distance to the nearest motel)
– Demand/Customers: Office Space, College Enrollment
– Community: Income (median household income)
– Physical: Disttwn (distance to downtown)

Model and Data (Xm18-01)
Data were collected from 100 randomly selected La Quinta inns, and the following model was run:
Margin = β0 + β1Rooms + β2Nearest + β3Office + β4College + β5Income + β6Disttwn + ε

Excel Output
The printout's coefficient column gives the sample regression equation (sometimes called the prediction equation):
Margin = b0 − 0.0076 Rooms + 1.65 Nearest + 0.02 Office + 0.21 College + 0.41 Income − 0.23 Disttwn
where b0 is the intercept reported in the printout.

Model Assessment
The model is assessed using three measures:
– The standard error of estimate
– The coefficient of determination
– The F-test of the analysis of variance
The standard error of estimate is used in the calculations for the other measures.

Standard Error of Estimate
The standard deviation of the error is estimated by the standard error of estimate:
sε = √( SSE / (n − k − 1) )
(k + 1 coefficients were estimated). The magnitude of sε is judged by comparing it to the mean of the dependent variable, ȳ.
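A minimal numeric sketch of this formula, using made-up residuals from a hypothetical fit with k = 2 predictors:

```python
import numpy as np

# Hypothetical residuals from a fit with k = 2 predictors on n = 8 points.
residuals = np.array([0.5, -1.0, 0.25, 0.75, -0.5, 1.0, -0.25, -0.75])
n, k = len(residuals), 2

sse = float(np.sum(residuals ** 2))   # Sum of Squares for Error
s_eps = (sse / (n - k - 1)) ** 0.5    # k + 1 coefficients were estimated
```

Note that the divisor is n − k − 1, not n: each estimated coefficient costs one degree of freedom.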

From the printout, sε = 5.51. Compare this to the mean value of Y, computed from the sample data. It seems that sε is not particularly small relative to the mean of Y. Question: can we conclude that the model does not fit the data well? Not necessarily.

Coefficient of Determination
The definition is R² = 1 − SSE/SS(Total). From the printout, R² gives the percentage of the variation in operating margin that is explained by the six independent variables; the remaining percentage is unexplained. When R² is adjusted for the number of predictors k relative to the sample size n (intended to flag potential problems with a small sample), we have:
Adjusted R² = 1 − [SSE/(n − k − 1)] / [SS(Total)/(n − 1)] = 49.44%
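These two formulas can be checked numerically. The sums of squares below are illustrative stand-ins for the printout values, not the actual La Quinta numbers:

```python
# Illustrative sums of squares for a model with k = 6 predictors, n = 100.
sse, ss_total = 2825.6, 5949.0
n, k = 100, 6

r2 = 1 - sse / ss_total                                  # plain R-squared
adj_r2 = 1 - (sse / (n - k - 1)) / (ss_total / (n - 1))  # adjusted R-squared
```

Adjusted R² is always at most R², and the gap widens as k grows relative to n.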

Testing the Validity of the Model
Consider the question: is there at least one independent variable linearly related to the dependent variable? To answer it, we test the hypotheses:
H0: β1 = β2 = … = βk = 0
H1: At least one βi is not equal to zero.
If at least one βi is not equal to zero, the model has some validity. The test is similar to an analysis of variance.

The hypotheses can be tested by an ANOVA procedure. The Excel ANOVA table contains:
– SSR, the Sum of Squares for Regression, with k degrees of freedom; MSR = SSR/k
– SSE, the Sum of Squares for Error, with n − k − 1 degrees of freedom; MSE = SSE/(n − k − 1)
– SS(Total), with n − 1 degrees of freedom
– The F statistic, F = MSR/MSE

As in analysis of variance, [Total variation in Y] = SSR + SSE. A large F indicates a large SSR; that is, much of the variation in Y is explained by the regression model. Therefore, if F is large, the model is considered valid, and the null hypothesis should be rejected. The rejection region: F > Fα,k,n−k−1.
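The F-test computation can be sketched as follows; the sums of squares are illustrative, not the printout's, and scipy supplies the F distribution:

```python
from scipy import stats

# Illustrative ANOVA pieces for k = 6 predictors and n = 100 observations.
ssr, sse = 3123.4, 2825.6
k, n = 6, 100

msr = ssr / k                # Mean Square for Regression
mse = sse / (n - k - 1)      # Mean Square for Error
f_stat = msr / mse
p_value = stats.f.sf(f_stat, k, n - k - 1)   # P(F > f_stat) under H0
f_crit = stats.f.ppf(0.95, k, n - k - 1)     # rejection threshold at alpha = 0.05
reject_h0 = f_stat > f_crit
```

Excel reports the same p-value as "Significance F", so in practice one compares that value to α rather than looking up f_crit in a table.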

F ,k,n-k-1 = F 0.05,6, =2.17 F = > 2.17 Also, the p-value (Significance F) = Reject the null hypothesis. Conclusion: There is sufficient evidence to reject the null hypothesis in favor of the alternative hypothesis: at least one of the  i is not equal to zero. Thus, at least one independent variable is linearly related to Y. This linear regression model is valid

Interpreting the Coefficients
b0 is the intercept, the value of Y when all the variables take the value zero. Since the data ranges of the independent variables do not cover the value zero, do not interpret the intercept.
b1 = −0.0076. In this model, for each additional room within 3 miles of the La Quinta inn, the operating margin decreases on average by 0.0076% (assuming the other variables are held constant).

b2 = 1.65. In this model, for each additional mile that the nearest competitor is from a La Quinta inn, the operating margin increases on average by 1.65%, when the other variables are held constant.
b3 = 0.02. For each additional 1000 sq-ft of office space, the operating margin increases on average by 0.02%, when the other variables are held constant.
b4 = 0.21. For each additional thousand students, the operating margin increases on average by 0.21%, when the other variables are held constant.

b5 = 0.41. For each increment of $1000 in median household income, the operating margin increases on average by 0.41%, when the other variables remain constant.
b6 = −0.23. For each additional mile to the downtown center, the operating margin decreases on average by 0.23%, when the other variables are held constant.

Testing Individual Coefficients
For each βi, the hypotheses are:
H0: βi = 0
H1: βi ≠ 0
Test statistic: t = bi / s(bi), with d.f. = n − k − 1. The Excel output reports a t statistic and p-value for each coefficient. For coefficients whose p-values show insufficient evidence that βi differs from zero, ignore the estimate rather than interpret it.
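One such t-test can be sketched as follows; the coefficient estimate and its standard error are illustrative, not taken from the printout:

```python
from scipy import stats

# Illustrative estimate and standard error for one coefficient,
# with n = 100 observations and k = 6 predictors.
b_i, se_b_i = 1.65, 0.63
n, k = 100, 6

t_stat = b_i / se_b_i
p_value = 2 * stats.t.sf(abs(t_stat), df=n - k - 1)   # two-tailed p-value
significant = p_value < 0.05
```

A significant t-test says the variable adds explanatory power given the other variables already in the model, which is why a variable can pass here yet fail in a different model.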

La Quinta Inns, Point Estimate (Xm18-01)
Predict the average operating margin of an inn at a site with the following characteristics:
– 3815 rooms within 3 miles,
– closest competitor 0.9 miles away,
– 476,000 sq-ft of office space,
– 24,500 college students,
– $35,000 median household income,
– 11.2 miles to the downtown center.
Substituting into the sample regression equation (in the model's units):
Margin = b0 + b1(3815) + b2(0.9) + b3(476) + b4(24.5) + b5(35) + b6(11.2) = 37.1%
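The substitution is just a dot product of the coefficient vector with the site's characteristics. The slope values below come from the interpretation slides above, but the intercept is a made-up placeholder, so the result differs slightly from the 37.1% obtained with the actual printout intercept:

```python
import numpy as np

# Coefficient vector [b0, b1, ..., b6]; b0 = 38.0 is a hypothetical
# placeholder for the printout's intercept, the slopes match the slides.
b = np.array([38.0, -0.0076, 1.65, 0.02, 0.21, 0.41, -0.23])
# Site characteristics, with a leading 1 for the intercept:
# rooms, miles to competitor, '000 sq-ft, '000 students, $'000 income, miles.
x = np.array([1.0, 3815, 0.9, 476, 24.5, 35, 11.2])

margin_hat = float(b @ x)   # predicted operating margin, in percent
```

Note that each characteristic must be expressed in the units the model was fitted in (office space and enrollment in thousands, income in $1000s).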

Regression Diagnostics
The conditions required for the model assessment to apply must be checked:
– Is the error variable normally distributed? Draw a histogram of the residuals.
– Is the error variance constant? Plot the residuals versus the predicted values of Y.
– Are the errors independent? Plot the residuals versus the time periods.
– Can we identify outliers?
– Is multicollinearity (correlation among the Xi's) a problem?
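Multicollinearity is often screened with variance inflation factors (VIFs), computed here from scratch on synthetic data; the rule of thumb that VIF above 10 signals trouble is a common convention, not something stated in this chapter:

```python
import numpy as np

def vif(X):
    """Variance inflation factors for the columns of an n-by-p predictor
    matrix: VIF_j = 1 / (1 - R_j^2), where R_j^2 comes from regressing
    column j on the remaining columns plus an intercept."""
    n, p = X.shape
    factors = []
    for j in range(p):
        y = X[:, j]
        others = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta = np.linalg.lstsq(others, y, rcond=None)[0]
        resid = y - others @ beta
        r2 = 1 - (resid @ resid) / np.sum((y - y.mean()) ** 2)
        factors.append(1.0 / (1.0 - r2))
    return np.array(factors)

rng = np.random.default_rng(0)
x1 = rng.normal(size=50)
x2 = rng.normal(size=50)                 # unrelated to x1: VIFs near 1
x3 = x1 + 0.01 * rng.normal(size=50)     # nearly collinear with x1: huge VIF

v_ind = vif(np.column_stack([x1, x2]))
v_col = vif(np.column_stack([x1, x2, x3]))
```

High VIFs do not bias predictions, but they inflate the standard errors of the affected coefficients, which undermines the individual t-tests above.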