Presentation on theme: "Business Research Methods"— Presentation transcript:

1 Business Research Methods
Multiple Regression

2 Multiple Regression
Multiple Regression Model
Least Squares Method
Multiple Coefficient of Determination
Model Assumptions
Testing for Significance
Using the Estimated Regression Equation for Estimation and Prediction
Qualitative Independent Variables

3 The Multiple Regression Model
y = 0 + 1x1 + 2x pxp +  The Estimated Multiple Regression Equation y = b0 + b1x1 + b2x bpxp ^

4 The Least Squares Method
Least Squares Criterion: choose b0, b1, …, bp to minimize the sum of squared residuals Σ(yi − ŷi)².
Computation of Coefficients' Values: the formulas for the regression coefficients b0, b1, b2, …, bp involve the use of matrix algebra. We will rely on computer software packages to perform the calculations.
A Note on Interpretation of Coefficients: bi represents an estimate of the change in y corresponding to a one-unit change in xi when all other independent variables are held constant, and is known as a partial regression coefficient.
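Since the slide defers the matrix algebra to software, here is a minimal sketch of how a package obtains the coefficients, using NumPy's least squares solver; the data values are made-up (hypothetical) experience/score/salary triples, not the slide's dataset:

```python
import numpy as np

# Hypothetical data: x1 = years of experience, x2 = aptitude score, y = salary.
X = np.array([[4.0, 78], [7, 100], [1, 86], [5, 82], [8, 86], [10, 84]])
y = np.array([24.0, 43.0, 23.7, 34.3, 35.8, 38.0])

# Prepend a column of ones so the intercept b0 is estimated along with b1, b2.
A = np.column_stack([np.ones(len(X)), X])

# Least squares: minimizes the sum of squared residuals ||A b - y||^2.
b, *_ = np.linalg.lstsq(A, y, rcond=None)
b0, b1, b2 = b
```

At the solution the residual vector is orthogonal to the columns of A (the normal equations), which is a handy internal check on any least squares fit.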

5 The Multiple Coefficient of Determination
Relationship Among SST, SSR, SSE: SST = SSR + SSE
Multiple Coefficient of Determination: R² = SSR/SST
Adjusted Multiple Coefficient of Determination: R²adj = 1 − (1 − R²)(n − 1)/(n − p − 1)

6 The Multiple Coefficient of Determination
Multiple Coefficient of Determination R²: it is the square of the multiple correlation coefficient R, and it measures the strength of association in multiple regression.
Adjusted Multiple Coefficient of Determination: R² is adjusted for the number of independent variables and the sample size to account for diminishing returns. After the first few variables, the additional variables do not make much contribution.
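The adjustment can be checked numerically. A small sketch in plain Python, using the R-sq = 83.4% figure that appears later in the programmer-salary output, with n = 20 observations and p = 2 predictors:

```python
def adjusted_r2(r2, n, p):
    """Adjusted R2: penalizes R2 for the number of predictors p given sample size n."""
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

# Values from the programmer salary example: R-sq = 83.4%, n = 20, p = 2.
adj = adjusted_r2(0.834, 20, 2)   # ~0.8145, i.e. the 81.5% reported as R-sq(adj)
```

Note that with no penalty to apply (a perfect fit, R² = 1), the adjusted value is also 1, and the gap between R² and R²adj grows as p rises relative to n.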

7 Model Assumptions
Assumptions About the Error Term ε
The error ε is a random variable with mean of zero.
The variance of ε, denoted by σ², is the same for all values of the independent variables.
The values of ε are independent.
The error ε is a normally distributed random variable reflecting the deviation between the y value and the expected value of y given by β0 + β1x1 + β2x2 + … + βpxp.

8 Testing for Significance: F Test
Hypotheses H0: 1 = 2 = = p = 0 Ha: One or more of the parameters is not equal to zero. Test Statistic F = MSR/MSE Rejection Rule Reject H0 if F > F where F is based on an F distribution with p d.f. in the numerator and n - p - 1 d.f. in the denominator.

9 Testing for Significance: t Test
Hypotheses H0: i = 0 Ha: i = 0 Test Statistic Rejection Rule Reject H0 if t < -tor t > t where t is based on a t distribution with n - p - 1 degrees of freedom.

10 Testing for Significance: Multicollinearity
The term multicollinearity refers to the correlation among the independent variables. When the independent variables are highly correlated (say, |r | > .7), it is not possible to determine the separate effect of any particular independent variable on the dependent variable. Every attempt should be made to avoid including independent variables that are highly correlated.
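A minimal NumPy sketch of the |r| > .7 screen described above (the variable names are hypothetical):

```python
import numpy as np

def collinear_pairs(X, names, threshold=0.7):
    """Return predictor pairs whose pairwise |r| exceeds the rule-of-thumb threshold."""
    r = np.corrcoef(X, rowvar=False)   # correlation matrix of the columns of X
    flagged = []
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            if abs(r[i, j]) > threshold:
                flagged.append((names[i], names[j], round(float(r[i, j]), 3)))
    return flagged
```

Two columns that move nearly in lockstep will be flagged; weakly related columns will not.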

11 Using the Estimated Regression Equation for Estimation and Prediction
The procedures for estimating the mean value of y and predicting an individual value of y in multiple regression are similar to those in simple regression. We substitute the given values of x1, x2, …, xp into the estimated regression equation and use the corresponding value of ŷ as the point estimate. Software packages for multiple regression will often provide interval estimates for an individual value of y.
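The substitution step can be sketched in a few lines of plain Python (the coefficient values in the example are placeholders, not the slide's estimates):

```python
def point_estimate(b, x):
    """Return y-hat = b0 + b1*x1 + ... + bp*xp for coefficients b = (b0, b1, ..., bp)."""
    b0, *slopes = b
    return b0 + sum(bi * xi for bi, xi in zip(slopes, x))

# Placeholder intercept of 3.0 with the slope estimates quoted later (1.40, .251):
y_hat = point_estimate((3.0, 1.40, 0.251), (4, 78))
```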

12 Example: Programmer Salary Survey
A software firm collected data for a sample of 20 computer programmers. A suggestion was made that regression analysis could be used to determine if salary was related to the years of experience and the score on the firm's programmer aptitude test. The years of experience, score on the aptitude test, and corresponding annual salary (in Rs. 10,000) for a sample of 20 programmers are shown on the next slide.

13 Example: Programmer Salary Survey
Exper. Score Salary Exper. Score Salary

14 Example: Programmer Salary Survey
Multiple Regression Model: suppose we believe that salary (y) is related to the years of experience (x1) and the score on the programmer aptitude test (x2) by the following regression model:
y = β0 + β1x1 + β2x2 + ε
where
y = annual salary (in Rs. 10,000)
x1 = years of experience
x2 = score on programmer aptitude test

15 Example: Programmer Salary Survey
Estimated Regression Equation: b0, b1, b2 are the least squares estimates of β0, β1, β2. Thus
ŷ = b0 + b1x1 + b2x2

16 Example: Programmer Salary Survey
Solving for the Estimates of β0, β1, β2
Input Data (x1, x2, y) → Computer Package for Solving Multiple Regression Problems → Least Squares Output (b0, b1, b2, R², etc.)

17 Example: Programmer Salary Survey
Computer Output
Predictor  Coef  Stdev  t-ratio  p
Constant
Exper
Score

18 INTERPRETATION
Partial regression coefficients are 1.40 for experience and .251 for score.
Increase in salary per unit increase in experience is 1.40 units when test score is constant.
Increase in salary per unit increase in test score is .251 units when experience is constant.
The regression equation is Salary = b0 + 1.40 Exper + .251 Score, where b0 is the estimated constant.

19 The p value for Experience is less than .05; hence experience is a significant variable in explaining salary. The p value for Score is .005, which is also less than .05; hence score is also a significant variable in explaining salary.

20 Example: Programmer Salary Survey
Computer Output (continued)
Analysis of Variance
SOURCE  DF  SS  MS  F  P
Regression
Error
Total
R-sq = 83.4%  R-sq(adj) = 81.5%
Interpretation: the p value of the overall model is .000, which is less than .05; hence the overall regression model is significant at the 5% level of significance. Since the adjusted R-square is 81.5%, the regression model explains 81.5% of the variation in salary.

21 Example: Programmer Salary Survey
F Test Hypotheses H0: 1 = 2 = 0 Ha: One or both of the parameters is not equal to zero. Rejection Rule For  = .05 and d.f. = 2, 17: F.05 = 3.59 Reject H0 if F > 3.59. Test Statistic F = MSR/MSE = /5.85 = 42.76 Conclusion We can reject H0.

22 Qualitative Independent Variables
In many situations we must work with qualitative independent variables such as gender (male, female), method of payment (cash, check, credit card), etc. For example, x2 might represent gender where x2 = 0 indicates male and x2 = 1 indicates female. In this case, x2 is called a dummy or indicator variable. If a qualitative variable has k levels, k - 1 dummy variables are required, with each dummy variable being coded as 0 or 1. For example, a variable with levels A, B, and C would be represented by x1 and x2 values of (0, 0), (1, 0), and (0,1), respectively.
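The k − 1 coding rule on this slide can be sketched as a small helper in plain Python, using the slide's own A/B/C example with A as the reference category:

```python
def dummy_code(value, levels, reference):
    """Return the k-1 dummy (0/1) values for `value`, omitting the reference level."""
    if value not in levels:
        raise ValueError(f"unknown level: {value!r}")
    return [1 if value == lev else 0 for lev in levels if lev != reference]

# The slide's example: levels A, B, C with A as reference.
codes = [dummy_code(v, ["A", "B", "C"], "A") for v in ["A", "B", "C"]]
# A -> [0, 0], B -> [1, 0], C -> [0, 1]
```

The reference level gets all zeros, so its effect is absorbed into the intercept.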

23 Example: Programmer Salary Survey (B)
As an extension of the problem involving the computer programmer salary survey, suppose that management also believes that the annual salary is related to whether or not the individual has a graduate degree in computer science or information systems. The years of experience, the score on the programmer aptitude test, whether or not the individual has a relevant graduate degree, and the annual salary (in Rs. 10,000) for each of the sampled 20 programmers are shown on the next slide.

24 Example: Programmer Salary Survey (B)
Exp. Score Degr. Salary    Exp. Score Degr. Salary
4    78   No    …          …   …    Yes   38
7   100   Yes   …          …   …    No    26.6
1    86   No    …          …   …    Yes   36.2
5    82   Yes   …          …   …    No    31.6
8    86   Yes   …          …   …    No    29
10   84   Yes   …          …   …    Yes   34
0    75   No    …          …   …    No    30.1
1    80   No    …          …   …    Yes   33.9
6    83   No    …          …   …    No    28.2
6    91   Yes   …          …   …    No    30

25 Example: Programmer Salary Survey (B)
Multiple Regression Equation: E(y) = β0 + β1x1 + β2x2 + β3x3
Estimated Regression Equation: ŷ = b0 + b1x1 + b2x2 + b3x3
where
y = annual salary (in Rs. 10,000)
x1 = years of experience
x2 = score on programmer aptitude test
x3 = 0 if individual does not have a grad. degree, 1 if individual does have a grad. degree
Note: x3 is referred to as a dummy variable.

26 Example: Programmer Salary Survey (B)
Computer Output
The regression equation is Salary = b0 + b1 Exp + b2 Score + b3 Deg
Predictor  Coef  Stdev  t-ratio  p
Constant
Exp
Score
Deg
s = √MSE = √5.74 ≈ 2.40
R-sq = 84.7%  R-sq(adj) = 81.8%

27 Example: Programmer Salary Survey (B)
Computer Output (continued)
Analysis of Variance
SOURCE  DF  SS  MS  F  P
Regression
Error
Total
Since the p value is < .05, we reject H0: β1 = β2 = β3 = 0.
The regression model is significant and explains 81.8% (adjusted R-square) of the total variation.

28 Partial Correlation
A partial correlation coefficient measures the association between two variables after controlling for, or adjusting for, the effects of one or more additional variables. Partial correlations have an order associated with them. The order indicates how many variables are being adjusted or controlled. The simple correlation coefficient, r, has a zero-order, as it does not control for any additional variables while measuring the association between two variables.
r_xy.z = (r_xy − r_xz r_yz) / (√(1 − r_xz²) √(1 − r_yz²))
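The first-order formula above translates directly into code (plain Python):

```python
from math import sqrt

def partial_corr(r_xy, r_xz, r_yz):
    """First-order partial correlation r_xy.z: association of x and y controlling for z."""
    return (r_xy - r_xz * r_yz) / (sqrt(1 - r_xz**2) * sqrt(1 - r_yz**2))
```

If z is uncorrelated with both x and y (r_xz = r_yz = 0), the formula reduces to the zero-order correlation r_xy, which matches the remark about order above.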

29 Nonmetric Correlation
If the nonmetric variables are ordinal and numeric, Spearman's rho (ρs) and Kendall's tau (τ) are two measures of nonmetric correlation which can be used to examine the correlation between them. Both these measures use rankings rather than the absolute values of the variables, and the basic concepts underlying them are quite similar. Both vary from −1.0 to +1.0. In the absence of ties, Spearman's ρs yields a closer approximation to the Pearson product moment correlation coefficient, ρ, than Kendall's τ. In these cases, the absolute magnitude of τ tends to be smaller than Pearson's ρ. On the other hand, when the data contain a large number of tied ranks, Kendall's τ seems more appropriate.
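Both measures are available in SciPy (assuming SciPy is installed); a small untied example illustrates the point that |τ| tends to be smaller than ρs:

```python
from scipy import stats

x = [1, 2, 3, 4, 5]
y = [2, 1, 4, 3, 5]              # two adjacent swaps, no ties

rho, _ = stats.spearmanr(x, y)   # rank correlation: 0.8
tau, _ = stats.kendalltau(x, y)  # concordance-based: 0.6
```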

30 Regression with Dummy Variables
Product Usage     Original         Dummy Variable Code
Category          Variable Code    D1   D2   D3
Nonusers          1                1    0    0
Light Users       2                0    1    0
Medium Users      3                0    0    1
Heavy Users       4                0    0    0
ŷi = a + b1D1 + b2D2 + b3D3
In this case, "heavy users" has been selected as a reference category and has not been directly included in the regression equation. The coefficient b1 is the difference in predicted ŷi for nonusers, as compared to heavy users.
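The reference-category interpretation can be verified numerically: with only dummy predictors, the intercept a recovers the heavy-user (reference) group mean, and b1 recovers the nonuser-minus-heavy difference. A NumPy sketch with hypothetical group scores:

```python
import numpy as np

# Hypothetical attitude scores by usage group; heavy users are the reference.
groups = {"non": [2.0, 3.0, 4.0], "light": [5.0, 6.0],
          "medium": [7.0, 8.0, 9.0], "heavy": [10.0, 12.0]}

rows, y = [], []
for g, vals in groups.items():
    for v in vals:
        rows.append([1.0,                            # intercept
                     1.0 if g == "non" else 0.0,     # D1
                     1.0 if g == "light" else 0.0,   # D2
                     1.0 if g == "medium" else 0.0]) # D3
        y.append(v)

A, y = np.array(rows), np.array(y)
a, b1, b2, b3 = np.linalg.lstsq(A, y, rcond=None)[0]
# a = mean(heavy) = 11.0; b1 = mean(non) - mean(heavy) = 3.0 - 11.0 = -8.0
```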

