
Last week we talked, among other things, about supply and demand equations and said that having those available may improve the accuracy of our predictions.




1 Last week we talked, among other things, about supply and demand equations and said that having those available may improve the accuracy of our predictions.

2 How can we obtain those equations? What information do we need for that? What techniques do we use to translate raw data into an equation? How much faith should we have in the resulting equations?

3 Regression analysis The simplest case is the relationship between two variables, which may help answer such business-type questions as: How does the number of TVs sold at an outlet depend on the TV price? How does the quantity demanded of paper towels depend on the population of a town? How does the volume of ice cream sales depend on the outside temperature?

4 Simple regression Step 1. Collect data.

Observation #   Quantity   Price
1               18         475
2               59         400
3               43         450
4               25         550
5               27         575
6               72         375
7               66         375
8               49         450
9               70         400
10              21         500
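The least-squares line these slides build toward can be reproduced outside of Excel. A minimal sketch in Python using NumPy's polyfit (which fits Q = A + B∙P by least squares) on the ten observations above:

```python
import numpy as np

# The ten price-quantity observations from the table above
price = np.array([475, 400, 450, 550, 575, 375, 375, 450, 400, 500])
quantity = np.array([18, 59, 43, 25, 27, 72, 66, 49, 70, 21])

# Fit a first-degree polynomial Q = A + B*P by least squares;
# polyfit returns the coefficients highest power first (slope, then intercept)
B, A = np.polyfit(price, quantity, deg=1)

print(f"Q_D = {A:.1f} + ({B:.4f}) * P")  # A is about 163.7, B about -0.2609
```

This matches the Excel regression printout shown on slide 12.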

5

6 Step 2. Assume a form of the relationship between the variables of interest, such as Q_D = A – B∙P, or Q_D = A/P – B∙P + C∙P – D, or any other you can possibly think of, where A through D are some numbers, or "coefficients". Which of the above specifications would you prefer to use? How would you justify your choice? None of them represents the "real" relationship between P and Q; therefore, let's use the linear one, the simplest specification that is consistent with theory (and hence with common sense).

7

8

9 Step 3. Find the best values for the coefficients. How? Each observation is a point in the price-quantity space. Each pair of numbers, A and B, when plugged into the equation above, uniquely defines a line in the price-quantity space. It is highly unlikely that all the points (observations) will fit on the same line. The best line is the one that minimizes the sum of squared deviations between the line and the actual data points.
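The minimization described in Step 3 has a standard closed-form solution; for the linear specification Q_D = A + B∙P, the criterion and the resulting formulas are:

```latex
\min_{A,B}\ \sum_{i=1}^{n}\bigl(Q_i - A - B\,P_i\bigr)^2,
\qquad
\hat{B} \;=\; \frac{\sum_i (P_i-\bar P)(Q_i-\bar Q)}{\sum_i (P_i-\bar P)^2},
\qquad
\hat{A} \;=\; \bar Q - \hat{B}\,\bar P .
```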

10

11 This procedure can be performed in an MS Excel spreadsheet.
2003: In the menu bar, choose Tools → Data Analysis → Regression.
2007: Data tab → Analysis → Data Analysis → Regression.
Then enter the range of cells that contain the data for each variable. Y is the "dependent variable" (in our case, Q_D). X is the "independent variable", also called the explanatory variable (in our case, P). Check the "Labels" box if and only if you include column headers. Click on the cell where you want the output printout to start, then click 'OK'.

12 SUMMARY OUTPUT

Regression Statistics
Multiple R          0.868299
R Square            0.753944
Adjusted R Square   0.723187
Standard Error      11.147108
Observations        10

ANOVA
            df   SS         MS         F          Significance F
Regression  1    3045.935   3045.935   24.51298   0.0011193
Residual    8    994.0642   124.2580
Total       9    4040

           Coefficients  Standard Error  t Stat    P-value    Lower 95%   Upper 95%
Intercept  163.7062      24.2337         6.7553    0.000144   107.823     219.589
Price      -0.2608       0.0526          -4.9510   0.001119   -0.382      -0.1393

13 (Same SUMMARY OUTPUT as on slide 12.) R² shows the portion of the variation in the dependent variable, Q_D, that is explained by the independent one, P.
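R² can be reproduced from the ANOVA block of the printout on slide 12: it is the regression sum of squares divided by the total sum of squares (equivalently, one minus the residual share). A quick check:

```python
# Sums of squares from the ANOVA block of the regression printout
ss_regression = 3045.935
ss_residual = 994.0642
ss_total = 4040.0  # = ss_regression + ss_residual (up to rounding)

# R^2 = share of total variation in Q_D explained by the regression
r_squared = ss_regression / ss_total
print(round(r_squared, 6))  # 0.753944, matching "R Square"
```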

14 (Same SUMMARY OUTPUT as on slide 12.) The greater the F-statistic, the lower the probability that the estimated regression model fits the data purely by accident.

15 (Same SUMMARY OUTPUT as on slide 12.) The greater the F-statistic, the lower the probability that the estimated regression model fits the data purely by accident. That probability is given under "Significance F".
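The F-statistic itself comes straight from the ANOVA block: it is the ratio of the two mean squares (MS = SS / df). A quick check against the printout on slide 12:

```python
# Mean squares from the ANOVA block: MS = SS / degrees of freedom
ms_regression = 3045.935 / 1   # df = 1 (one explanatory variable)
ms_residual = 994.0642 / 8     # df = n - 2 = 8

f_statistic = ms_regression / ms_residual
print(round(f_statistic, 2))  # about 24.51, matching the printout
```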

16 (Same SUMMARY OUTPUT as on slide 12.) The 'Coefficients' column contains the values of A and B that provide the best fit.

17 In our case, the regression analysis suggests that the best estimate of the demand for TVs based on the data provided is…

18 Q_D = A + B∙P

19 In our case, the regression analysis suggests that the best estimate of the demand for TVs based on the data provided is Q_D = 163.7 – 0.261∙P, which means that for every $1 increase in the price of a TV set, the quantity demanded drops by (approx.) 0.26 units.

20 (Same SUMMARY OUTPUT as on slide 12.) The 'Coefficients' column contains the values of A and B that provide the best fit.

21 (Same SUMMARY OUTPUT as on slide 12.) t-values show the 'statistical significance' of each coefficient. The larger the t-statistic in absolute value, the higher the chances that the coefficient is different from zero.

22 (Same SUMMARY OUTPUT as on slide 12.) t-values show the 'statistical significance' of each coefficient. The larger the t-statistic in absolute value, the higher the chances that the coefficient is different from zero. As in the case of the F-statistic, the same information is also available in the form of probabilities, or P-values.
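Each t-statistic is the estimated coefficient divided by its standard error. A sketch that rebuilds the price coefficient's t-value from the raw TV data (the standard-error formula is the textbook one for simple regression):

```python
import numpy as np

price = np.array([475, 400, 450, 550, 575, 375, 375, 450, 400, 500])
quantity = np.array([18, 59, 43, 25, 27, 72, 66, 49, 70, 21])

n = len(price)
B, A = np.polyfit(price, quantity, deg=1)  # slope and intercept

# Residual mean square, then the standard error of the slope
residuals = quantity - (A + B * price)
ms_residual = (residuals ** 2).sum() / (n - 2)
se_B = np.sqrt(ms_residual / ((price - price.mean()) ** 2).sum())

t_B = B / se_B
print(round(float(t_B), 3))  # about -4.951, matching "t Stat" for Price
```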

23 In our case, there is only a 0.11% probability that price is irrelevant for consumer decisions. Another way to say it: there is a 99.89% probability that price matters for consumer decisions (more specifically, that a higher price is associated with a lower quantity demanded). This, however, is very different from the statement "There is a 99.89% probability that the coefficient on price is –0.26", which is FALSE.

24 [Chart: the probability distribution of a coefficient estimate.] The regression output gives us the "expected value" of the coefficient, but the actual value is not certain: it is distributed around the expected value in a probabilistic manner. When 80% of the area under the distribution lies in the positive range (and 20% in the negative range), P = 0.2. When the entire distribution lies in the positive range, P ≈ 0 and we are certain the sign of the coefficient is positive (but still don't know its actual value!).

25 The "Lower 95%" and "Upper 95%" columns in the printout give the lower and upper bounds within which the true value of each coefficient falls with a certain probability (95% probability in this case). This range is also known as the "95% confidence interval". In our case, …

26 (Same SUMMARY OUTPUT as on slide 12.)

27 The "Lower 95%" and "Upper 95%" columns in the printout give the lower and upper bounds within which the true value of each coefficient falls with a certain probability (95% probability in this case). This range is also known as the "95% confidence interval". In our case, there is a 95% probability that the value of the coefficient on price lies between –0.382 and –0.1393.
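The interval can be reproduced by hand as coefficient ± critical value × standard error. A minimal sketch, assuming the two-tailed 5% critical value of the t-distribution with 8 degrees of freedom is about 2.306 (Excel looks this value up internally):

```python
# 95% confidence interval: coefficient +/- t_critical * standard error
coef_price = -0.2608   # estimated slope from the printout
se_price = 0.05269     # its standard error (one extra digit of precision)
t_critical = 2.306     # approx. two-tailed 5% t value for df = 8 (assumed here)

lower = coef_price - t_critical * se_price
upper = coef_price + t_critical * se_price
print(round(lower, 3), round(upper, 3))  # about -0.382 and -0.139
```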

28 Note the P-value for the estimated coefficient on price equals the significance of F. This is always the case when there is only one independent variable.

29 (Same SUMMARY OUTPUT as on slide 12.)

30 To summarize, we want:
- The sign of the coefficients to make economic sense;
- R² to be LARGE (close to 1);
- The F-statistic to be LARGE;
- The t-statistic to be LARGE in absolute value;
- 'Significance F' and P-values to be SMALL;
- The confidence interval to be SMALL / NARROW.

31 Statistical significance of a coefficient is a statement about how reliable the sign of the calculated coefficient is (look at the t-statistic and the p-value).

32 (Same SUMMARY OUTPUT as on slide 12.)

33 Statistical significance of a coefficient is a statement about how reliable the sign of the calculated coefficient is (look at the t-statistic and the p-value). Example: There is a 0.11% probability that the coefficient on “Price” is non-negative

34 Statistical significance of a coefficient is a statement about how reliable the sign of the calculated coefficient is (look at the t-statistic and the p-value). Example: There is a 0.11% probability that there is either a positive or no relationship between price and quantity demanded. You may also come across the expression “the coefficient is significant at the 5% level”. That means there is no more than 5 percent probability that this coefficient has the sign opposite to the estimate. When statistical significance of a coefficient is low, the role of the variable in question is weak or unclear.

35 Economic significance of a variable: How, according to the regression results, a change in one variable will affect the other variable (look at the value of the coefficient itself).

36 (Same SUMMARY OUTPUT as on slide 12.)

37 Economic significance of a variable: How, according to the regression results, a change in one variable will affect the other variable (look at the value of the coefficient itself). Example: For every $1 increase in price, quantity demanded drops by 0.261 unit.

38 Economic significance of a variable: How, according to the regression results, a change in one variable will affect the other variable (look at the value of the coefficient itself). Example: For every $100 increase in price, quantity demanded drops by ~ 26 units.

39 Economic significance of a variable: How, according to the regression results, a change in one variable will affect the other variable (look at the value of the coefficient itself). Example: For every $4 decrease in price, quantity demanded increases by approximately one unit. Avoid causality statements!

40 What if you are not happy with the fit? (Say, the F-statistic is too small, the statistical significance is low, etc.) Sometimes this is because the points on the scatter plot do not align well along a straight line. In that case you may be able to make things better by trying a different specification, such as a log-linear regression.

41 A curve provides a better fit than a line.

42 To run a log-linear regression, you first need to create new auxiliary variables, ln Q and ln P. Here, ln stands for the natural logarithm (the logarithm with base e, where e ≈ 2.718 is a mathematical constant). ln X is the number such that e^(ln X) = X. After that, proceed with the regression as usual.
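In a spreadsheet this means adding two columns of logged values and regressing one on the other. The same mechanics in Python, applied to the TV data from slide 4 purely as an illustration (nothing here says the log-linear form fits these data better):

```python
import numpy as np

price = np.array([475, 400, 450, 550, 575, 375, 375, 450, 400, 500])
quantity = np.array([18, 59, 43, 25, 27, 72, 66, 49, 70, 21])

# Auxiliary variables: natural logarithms of the originals
ln_p = np.log(price)
ln_q = np.log(quantity)

# Regress ln Q on ln P exactly as before; the slope B is now an elasticity
B, A = np.polyfit(ln_p, ln_q, deg=1)
print(f"ln Q = {A:.2f} + ({B:.2f}) * ln P")
```

The slope stays negative, as it should for a downward-sloping demand relationship.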

43 The resulting equation will be of the form ln Q_D = A + B∙ln P, which is equivalent to Q_D = e^A ∙ P^B. What if the log-linear specification doesn't help? How can we increase the explanatory power further? Add more explanatory variables!

44 Another potential source of errors: the "specification problem". Example: data on demand for soft drinks. [Chart: data points labeled July, Oct, Jan, Apr in P-Q space, with a fitted line.]

45 Another potential source of errors: the "specification problem" (continued). [Chart: the same soft-drink data points (July, Oct, Jan, Apr) and fitted line, now with "the real story" behind them overlaid.]

46 Another potential source of errors: the "specification problem" (continued). [Same chart as slide 45.] Once again, adding more explanatory variables may help us understand things better.

47 Multiple regression The idea is similar to a simple regression, except there is more than one explanatory (independent) variable. Compared to a simple regression, a multiple regression helps avoid the aforementioned "specification problem", improves the overall goodness of fit, and improves our understanding of the factors relevant for the variable of interest. What variables other than the good's own price could matter in the soft drink example? Outside temperature? Town population? Etc.

48 Running a multiple regression in MS Excel is similar to running a simple one, except that, when choosing the cell range for the independent variables, you need to include all of them at once. The output will contain more lines, one per variable included in the regression. (A demonstration session follows.) The regression output can again be translated into an equation.
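Excel aside, the same computation can be sketched in code. The data below are hypothetical (soft-drink sales driven by price and outside temperature, echoing the example above), and NumPy's lstsq stands in for Excel's regression tool; the design matrix gets one column of ones for the intercept plus one column per explanatory variable:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical soft-drink data: quantity depends on price and temperature
n = 50
price = rng.uniform(1.0, 3.0, n)
temperature = rng.uniform(40.0, 100.0, n)
quantity = 100 - 20 * price + 0.5 * temperature + rng.normal(0, 2, n)

# Design matrix: a column of ones (intercept) plus one column per variable
X = np.column_stack([np.ones(n), price, temperature])
coefs, *_ = np.linalg.lstsq(X, quantity, rcond=None)

intercept, b_price, b_temp = coefs
print(intercept, b_price, b_temp)  # close to the true 100, -20, 0.5
```

Because the data were generated with known coefficients, the least-squares estimates land close to them, which is a handy sanity check on the procedure.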

49 Such an equation helps us not only evaluate the relationship between price and quantity, but also answer such questions as… - Are goods X and Y substitutes or complements? - How does the consumption of our good depend on income? - Does advertising matter and how much? and so on.

50 Things to look for when adding explanatory variables: Does R² improve (increase) when variables are added?

51 Things to look for when adding explanatory variables: Does R² improve (increase) when variables are added? (Normally, the answer is 'yes'.) What is happening to the adjusted R²? (Adjusted R² punishes the researcher for adding variables that don't contribute much to the explanatory power, so it is a better criterion.)
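The adjustment can be checked by hand with the standard formula, using the R² from the simple TV regression (n = 10 observations, k = 1 explanatory variable):

```python
# Adjusted R^2 penalizes extra explanatory variables:
#   R2_adj = 1 - (1 - R2) * (n - 1) / (n - k - 1)
r2 = 0.753944   # R Square from the TV regression printout
n, k = 10, 1    # 10 observations, 1 explanatory variable

r2_adj = 1 - (1 - r2) * (n - 1) / (n - k - 1)
print(round(r2_adj, 6))  # 0.723187, matching "Adjusted R Square"
```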

52 Only statistically significant variables should be included in the final regression run and the resulting equation. More advanced statistical packages perform 'stepwise regressions', in which the program itself decides which variables are worth keeping and which deserve to be dropped.

53 "Dummy" variables Sometimes we are interested in the role of a factor that doesn't have a numerical value attached to it, such as gender, race, or day of the week. Such factors can be included in the regression by creating an indicator variable for each possible value of the factor except one. Dummy variables usually take only the values 0 or 1.

54 Examples: Gender: "0" if male, "1" if female (one dummy does the job). Day of the week: we need six (7 – 1 = 6) additional variables. X1: "1" if Monday, "0" otherwise. X2: "1" if Tuesday, "0" otherwise… and so on, up to X6. No variable is needed for Sunday. We will know the day of the week is important if a regression with the dummies included produces a noticeably better adjusted R² than one without them.

55 The “economic” interpretation of the effect of dummy variables is similar to that for regular variables. An average male buys 10 more gallons of soft drinks in a year than an average female On average, there are 200 more people attending a Washburn home soccer game when the game is on Friday

