
Week 11 November 10-14 Four Mini-Lectures QMM 510 Fall 2014.


1 Week 11 (November 10-14) Four Mini-Lectures QMM 510 Fall 2014

2 Chapter 12 Learning Objectives (Chapter 12: Correlation and Regression)
LO12-1: Calculate and test a correlation coefficient for significance.
LO12-2: Interpret the slope and intercept of a regression equation.
LO12-3: Make a prediction for a given x value using a regression equation.
LO12-4: Fit a simple regression on an Excel scatter plot.
LO12-5: Calculate and interpret confidence intervals for regression coefficients.
LO12-6: Test hypotheses about the slope and intercept by using t tests.
LO12-7: Perform regression analysis with Excel or other software.
LO12-8: Interpret the standard error, R², ANOVA table, and F test.
LO12-9: Distinguish between confidence and prediction intervals for Y.
LO12-10: Test residuals for violations of regression assumptions.
LO12-11: Identify unusual residuals and high-leverage observations.
Too much?

3 Correlation Analysis (ML 11.1): Visual Displays
Begin the analysis of bivariate data (i.e., two variables) with a scatter plot. A scatter plot:
- displays each observed data pair (x_i, y_i) as a dot on an X/Y grid.
- indicates visually the strength of the relationship between X and Y.
[Sample scatter plot]

4 Correlation Analysis: Scatter Plot Patterns
Strong positive correlation, weak positive correlation, weak negative correlation, strong negative correlation, no correlation, nonlinear relation.
Note: r is an estimate of the population correlation coefficient ρ (rho).

5 Correlation Analysis: Steps in Testing Whether ρ = 0 (population correlation = 0)
Step 1: State the hypotheses: H0: ρ = 0 versus H1: ρ ≠ 0.
Step 2: Specify the decision rule. For degrees of freedom d.f. = n − 2, look up the two-tailed critical value t_α in Appendix D or with Excel =T.INV.2T(α, d.f.).
Step 3: Calculate the test statistic t = r √[(n − 2) / (1 − r²)].
Step 4: Make the decision. Using the t statistic method, reject H0 if |t| > t_α or if the p-value ≤ α.
Note: −1 ≤ r ≤ +1, and r = 0 indicates no linear relationship.
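The four steps can be sketched numerically. The x and y values below are hypothetical, made up purely to illustrate the calculation (they are not the textbook's data):

```python
import math

# Hypothetical paired sample, for illustration only.
x = [1, 2, 3, 4, 5]
y = [2, 4, 5, 4, 5]
n = len(x)

xbar, ybar = sum(x) / n, sum(y) / n
sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
sxx = sum((xi - xbar) ** 2 for xi in x)
syy = sum((yi - ybar) ** 2 for yi in y)

r = sxy / math.sqrt(sxx * syy)             # sample correlation coefficient
t = r * math.sqrt((n - 2) / (1 - r ** 2))  # test statistic, d.f. = n - 2

print(round(r, 4), round(t, 4))  # 0.7746 2.1213
```

With d.f. = 3 the two-tailed critical value at α = .05 is 3.182, so |t| = 2.12 < 3.182 and H0: ρ = 0 would not be rejected for this tiny sample.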

6 Correlation Analysis: Alternative Method to Test Whether ρ = 0
Equivalently, you can calculate a critical value for the correlation coefficient itself: r_crit = t_α / √(t_α² + n − 2). This method gives a benchmark for the correlation coefficient; however, it yields no p-value and is inflexible if you change your mind about α. MegaStat uses this method, giving two-tailed critical values for α = 0.05 and α = 0.01.
Critical values of r are tabulated for various sample sizes.
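A quick sketch of the benchmark calculation. The t critical value is hardcoded here (3.182 for d.f. = 3 at α = .05, the Appendix D / =T.INV.2T value), since the Python standard library has no t quantile function:

```python
import math

# Benchmark critical value of r for a two-tailed test of rho = 0.
n = 5
t_crit = 3.182  # t critical value, d.f. = n - 2 = 3, alpha = .05 (from table)

r_crit = t_crit / math.sqrt(t_crit ** 2 + n - 2)
print(round(r_crit, 3))  # 0.878
```

Any sample r whose absolute value exceeds this benchmark would be significant at α = .05, with no p-value reported, which is exactly the inflexibility the slide notes.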

7 Simple Regression (ML 11.2): What is Simple Regression?
Simple regression analyzes the relationship between two variables. It specifies one dependent (response) variable and one independent (predictor) variable. The hypothesized relationship (in this chapter) is linear.

8 Simple Regression: Interpreting an Estimated Regression Equation

9 Simple Regression: Prediction Using Regression (Examples)

10 Simple Regression: Cause-and-Effect? Can We Make Predictions?

11 Regression Terminology: Model and Parameters
The assumed model for a linear relationship is y = β0 + β1x + ε. The relationship holds for all pairs (x_i, y_i). The error term ε is not observable; it is assumed to be independently normally distributed with mean 0 and standard deviation σ.
The unknown parameters are β0 (intercept) and β1 (slope).

12 Regression Terminology: Model and Parameters
The fitted model (regression model) used to predict the expected value of Y for a given value of X is ŷ = b0 + b1x.
The fitted coefficients are b0 (estimated intercept) and b1 (estimated slope).

13 Regression Terminology
A more precise method is to let Excel calculate the estimates. Enter the observations on the independent variable x1, x2, ..., xn and the dependent variable y1, y2, ..., yn into separate columns, and let Excel fit the regression equation. Excel will choose the regression coefficients so as to produce a good fit.

14 Regression Terminology
A scatter plot shows a sample of miles per gallon and horsepower for 15 vehicles.
Slope interpretation: the (negative) slope says that for each additional unit of engine horsepower, miles per gallon decreases. This estimated slope is a statistic, because a different sample might yield a different estimate of the slope.
Intercept interpretation: the intercept suggests that when the engine has no horsepower, fuel efficiency would be quite high. However, the intercept has little meaning in this case, not only because zero horsepower makes no logical sense, but also because extrapolating to x = 0 is beyond the range of the observed data.

15 Ordinary Least Squares (OLS) Formulas: OLS Method
The ordinary least squares (OLS) method estimates the slope and intercept of the regression line so that the sum of squared residuals is minimized. The sum of the residuals is 0. The sum of the squared residuals is SSE.

16 Ordinary Least Squares (OLS) Formulas: Slope and Intercept
The OLS estimator for the slope is b1 = Σ(x_i − x̄)(y_i − ȳ) / Σ(x_i − x̄)².
The OLS estimator for the intercept is b0 = ȳ − b1x̄.
These formulas are built into Excel: =SLOPE(YData, XData) and =INTERCEPT(YData, XData).
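The two formulas translate directly into code. This sketch uses the same hypothetical data as earlier and should agree with Excel's =SLOPE and =INTERCEPT on the same columns:

```python
# OLS slope and intercept from the textbook formulas; data are hypothetical.
x = [1, 2, 3, 4, 5]
y = [2, 4, 5, 4, 5]
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n

b1 = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
      / sum((xi - xbar) ** 2 for xi in x))   # slope
b0 = ybar - b1 * xbar                         # intercept

print(b1, b0)  # 0.6 and 2.2 (up to floating-point rounding)
```

Note the intercept formula guarantees the fitted line passes through the point (x̄, ȳ).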

17 Ordinary Least Squares (OLS) Formulas: Example, Achievement Test Scores
20 high school students' achievement exam scores. Note that verbal scores average higher than quant scores: the slope exceeds 1, and the intercept shifts the line up almost 20 points.

18 Ordinary Least Squares (OLS) Formulas: Slope and Intercept

19 Assessing Fit
We want to explain the total variation in Y around its mean (SST, the total sum of squares). The regression sum of squares (SSR) is the explained variation in Y.

20 Assessing Fit
The error sum of squares (SSE) is the unexplained variation in Y. If the fit is good, SSE will be relatively small compared to SST. A perfect fit is indicated by SSE = 0. The magnitude of SSE depends on n and on the units of measurement.

21 Assessing Fit: Coefficient of Determination
- R² is a measure of relative fit based on a comparison of SSR (explained variation) and SST (total variation), with 0 ≤ R² ≤ 1.
- Often expressed as a percent; R² = 1 (i.e., 100%) indicates a perfect fit.
- In simple regression, R² = r², where r is the correlation coefficient.
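The decomposition SST = SSR + SSE and the identity R² = r² can be checked numerically with the same hypothetical data used above:

```python
# R-squared from the sums of squares; data are hypothetical.
x = [1, 2, 3, 4, 5]
y = [2, 4, 5, 4, 5]
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
b1 = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
      / sum((xi - xbar) ** 2 for xi in x))
b0 = ybar - b1 * xbar

sst = sum((yi - ybar) ** 2 for yi in y)                         # total variation
sse = sum((yi - (b0 + b1 * xi)) ** 2 for xi, yi in zip(x, y))   # unexplained
ssr = sst - sse                                                  # explained

r2 = ssr / sst
print(round(r2, 4))  # 0.6, which equals r**2 = 0.7746**2 for these data
```

So 60 percent of the variation in y is explained by x in this toy sample.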

22 Assessing Fit: Example, Achievement Test Scores
Excel shows the sums (SSR and SST) needed to calculate R² = SSR/SST = .6838. This indicates a strong relationship between quant score and verbal score (68 percent of the variation is explained).

23 Tests for Significance: Standard Error of Regression
The standard error s_e = √[SSE / (n − 2)] is an overall measure of model fit.
- If the fitted model's predictions are perfect (SSE = 0), then s_e = 0; thus a small s_e indicates a better fit.
- It is used to construct confidence intervals.
- The magnitude of s_e depends on the units of measurement of Y and on the magnitude of the data.
Excel's Data Analysis > Regression calculates s_e.
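A short numeric sketch of s_e, again with the hypothetical data and the OLS fit recomputed inline:

```python
import math

# Standard error of the regression, s_e = sqrt(SSE / (n - 2)); data hypothetical.
x = [1, 2, 3, 4, 5]
y = [2, 4, 5, 4, 5]
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
b1 = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
      / sum((xi - xbar) ** 2 for xi in x))
b0 = ybar - b1 * xbar

sse = sum((yi - (b0 + b1 * xi)) ** 2 for xi, yi in zip(x, y))
s_e = math.sqrt(sse / (n - 2))
print(round(s_e, 4))  # 0.8944
```

The divisor n − 2 reflects the two estimated coefficients (b0 and b1).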

24 Tests for Significance: Confidence Intervals for Slope and Intercept
Standard error of the slope: s_b1 = s_e / √Σ(x_i − x̄)².
Confidence interval for the true slope: b1 ± t_α s_b1 (and similarly for the intercept), with d.f. = n − 2.
Excel's Data Analysis > Regression constructs confidence intervals for the slope and intercept.

25 Tests for Significance: Hypothesis Tests
If β1 = 0, then the regression model collapses to a constant β0 plus random error. The hypotheses to be tested are H0: β1 = 0 versus H1: β1 ≠ 0, with d.f. = n − 2. Reject H0 if |t_calc| > t_α or if the p-value ≤ α. Excel's Data Analysis > Regression performs these tests.
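The slope test and its confidence interval can be sketched together. The t critical value (3.182 for d.f. = 3 at α = .05) is hardcoded from the table, and the data are hypothetical:

```python
import math

# t test and 95% CI for the slope; data hypothetical, t_crit from a t table.
x = [1, 2, 3, 4, 5]
y = [2, 4, 5, 4, 5]
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
sxx = sum((xi - xbar) ** 2 for xi in x)
b1 = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
b0 = ybar - b1 * xbar
sse = sum((yi - (b0 + b1 * xi)) ** 2 for xi, yi in zip(x, y))
s_e = math.sqrt(sse / (n - 2))

s_b1 = s_e / math.sqrt(sxx)     # standard error of the slope
t_calc = b1 / s_b1              # t statistic for H0: beta1 = 0
t_crit = 3.182                  # two-tailed critical value, d.f. = 3, alpha = .05
ci = (b1 - t_crit * s_b1, b1 + t_crit * s_b1)

print(round(t_calc, 4), [round(v, 3) for v in ci])
```

Here t_calc ≈ 2.12 (the same value as the correlation t test, as it must be in simple regression), and the interval contains 0, so the slope is not significant at α = .05.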

26 Tests for Significance: Example, Achievement Test Scores
20 high school students' achievement exam scores. Excel shows 95% confidence intervals and t test statistics. MegaStat is similar but rounds off and highlights significant p-values (light yellow for p < .05, bright yellow for p < .01).

27 Analysis of Variance (Overall Fit): F Test for Overall Fit
To test a regression for overall significance, we use an F test to compare the explained (SSR) and unexplained (SSE) sums of squares: F = [SSR / k] / [SSE / (n − k − 1)], which for simple regression (k = 1 predictor) is F = SSR / [SSE / (n − 2)].
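With the hypothetical data, the F statistic also illustrates the simple-regression identity F = t², where t is the slope t statistic:

```python
# F test for overall fit; data hypothetical; shows F = t**2 in simple regression.
x = [1, 2, 3, 4, 5]
y = [2, 4, 5, 4, 5]
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
b1 = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
      / sum((xi - xbar) ** 2 for xi in x))
b0 = ybar - b1 * xbar

sst = sum((yi - ybar) ** 2 for yi in y)
sse = sum((yi - (b0 + b1 * xi)) ** 2 for xi, yi in zip(x, y))
ssr = sst - sse

F = (ssr / 1) / (sse / (n - 2))  # MSR / MSE with k = 1 predictor
print(round(F, 4))  # 4.5, which equals the squared slope t statistic 2.1213**2
```

This is why, in simple regression, the ANOVA F test and the slope t test always give the same p-value.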

28 Analysis of Variance (Overall Fit): Example, Achievement Test Scores
20 high school students' achievement exam scores. Excel shows the ANOVA sums, the F test statistic, and its p-value. MegaStat is similar, but also highlights p-values to indicate significance (light yellow for p < .05, bright yellow for p < .01).

29 Confidence and Prediction Intervals for Y: How to Construct an Interval Estimate for Y
- Confidence interval for the conditional mean of Y: ŷ ± t_α s_e √[1/n + (x − x̄)² / Σ(x_i − x̄)²].
- Prediction interval for an individual Y: ŷ ± t_α s_e √[1 + 1/n + (x − x̄)² / Σ(x_i − x̄)²].
- Prediction intervals are wider than confidence intervals for the mean because individual Y values vary more than the mean of Y.
Excel does not do these CIs!
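The two half-widths can be compared at a chosen x value. The data and the evaluation point x0 = 4 are hypothetical, and t_crit = 3.182 is the table value for d.f. = 3:

```python
import math

# CI vs PI half-widths at x0; data and x0 hypothetical, t_crit from a t table.
x = [1, 2, 3, 4, 5]
y = [2, 4, 5, 4, 5]
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
sxx = sum((xi - xbar) ** 2 for xi in x)
b1 = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
b0 = ybar - b1 * xbar
s_e = math.sqrt(sum((yi - (b0 + b1 * xi)) ** 2 for xi, yi in zip(x, y)) / (n - 2))

x0, t_crit = 4, 3.182
ci_half = t_crit * s_e * math.sqrt(1 / n + (x0 - xbar) ** 2 / sxx)      # mean of Y
pi_half = t_crit * s_e * math.sqrt(1 + 1 / n + (x0 - xbar) ** 2 / sxx)  # individual Y

print(round(ci_half, 3), round(pi_half, 3))
```

The extra "1 +" under the square root makes the prediction interval strictly wider, and both intervals widen as x0 moves away from x̄.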

30 Tests of Assumptions (ML 11.3): Three Important Assumptions
1. The errors are normally distributed.
2. The errors have constant variance (i.e., they are homoscedastic).
3. The errors are independent (i.e., they are nonautocorrelated).
Non-normal Errors
Non-normality of errors is a mild violation, since the regression parameter estimates b0 and b1 and their variances remain unbiased and consistent. However, confidence intervals for the parameters may be untrustworthy, because the normality assumption is used to justify using Student's t distribution.

31 Residual Tests: Non-normal Errors
- A large sample size would compensate.
- Outliers could pose serious problems.
Normal Probability Plot
The normal probability plot tests the assumption H0: errors are normally distributed versus H1: errors are not normally distributed. If H0 is true, the residual probability plot should be linear, as shown in the example.

32 Residual Tests: What to Do about Non-normality?
1. Trim outliers only if they clearly are mistakes.
2. Increase the sample size if possible.
3. If the data are totals, try a logarithmic transformation of both X and Y.

33 Residual Tests: Heteroscedastic Errors (Nonconstant Variance)
- The ideal condition is constant error magnitude (i.e., homoscedastic errors).
- Heteroscedastic errors increase or decrease with X.
- In the most common form of heteroscedasticity, the variances of the estimators are likely to be understated. This results in overstated t statistics and artificially narrow confidence intervals.
Tests for Heteroscedasticity
Plot the residuals against X. Ideally, there is no pattern in the residuals moving from left to right.

34 Residual Tests: Tests for Heteroscedasticity
The "fan-out" pattern of increasing residual variance is the most common pattern indicating heteroscedasticity.
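As a crude numeric companion to the residual plot, one can correlate the absolute residuals with x: a markedly positive value hints at fan-out, while a value near zero is consistent with homoscedasticity. This is only an informal screen, not one of the chapter's formal tests, and the data are hypothetical:

```python
import math

# Informal fan-out check: correlation of |residual| with x; data hypothetical.
x = [1, 2, 3, 4, 5]
y = [2, 4, 5, 4, 5]
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
sxx = sum((xi - xbar) ** 2 for xi in x)
b1 = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
b0 = ybar - b1 * xbar
abs_e = [abs(yi - (b0 + b1 * xi)) for xi, yi in zip(x, y)]

ebar = sum(abs_e) / n
num = sum((xi - xbar) * (ei - ebar) for xi, ei in zip(x, abs_e))
den = math.sqrt(sxx * sum((ei - ebar) ** 2 for ei in abs_e))
r_het = num / den
print(round(r_het, 4))  # about -0.64 here: no sign of increasing spread
```

In practice the visual plot of residuals against X remains the primary diagnostic.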

35 Residual Tests: What to Do about Heteroscedasticity?
- Transform both X and Y, for example by taking logs.
- Although it can widen the confidence intervals for the coefficients, heteroscedasticity does not bias the estimates.
Autocorrelated Errors
Autocorrelation is a pattern of non-independent errors. In first-order autocorrelation, e_t is correlated with e_(t−1). The estimated variances of the OLS estimators are biased, resulting in confidence intervals that are too narrow, overstating the model's fit.

36 Residual Tests: Runs Test for Autocorrelation
In the runs test, count the number of the residuals' sign reversals (i.e., how often does the residual cross the zero centerline?). If the pattern is random, the number of sign changes should be about n/2. Fewer than n/2 would suggest positive autocorrelation; more than n/2 would suggest negative autocorrelation.
Durbin-Watson (DW) Test
Tests for autocorrelation under the hypotheses H0: errors are nonautocorrelated versus H1: errors are autocorrelated. The DW statistic ranges from 0 to 4: DW < 2 suggests positive autocorrelation, DW near 2 suggests no autocorrelation, and DW > 2 suggests negative autocorrelation.
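Both diagnostics are easy to compute from a residual series. The residuals below are hypothetical, listed in time order:

```python
# Runs count and Durbin-Watson statistic for a hypothetical residual series.
resid = [-0.8, 0.6, 1.0, -0.6, -0.2]
n = len(resid)

# Sign reversals: how often does the residual cross the zero centerline?
sign_changes = sum(1 for a, b in zip(resid, resid[1:]) if a * b < 0)

# DW = sum of squared successive differences / sum of squared residuals.
dw = (sum((b - a) ** 2 for a, b in zip(resid, resid[1:]))
      / sum(e ** 2 for e in resid))

print(sign_changes, round(dw, 3))  # 2 2.017
```

Here the 2 sign changes are close to n/2 = 2.5 and DW ≈ 2.02 is near 2, so both diagnostics point to no autocorrelation in this toy series.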

37 Residual Tests: What to Do about Autocorrelation?
- Transform both variables using the method of first differences, in which both variables are redefined as changes; then regress the differenced Y against the differenced X.
- Although it can widen the confidence intervals for the coefficients, autocorrelation does not bias the estimates.
- Don't worry about it at this stage of your training; just learn to detect whether it exists.

38 12-38 Example: Excel’s Tests of Assumptions Example: Excel’s Tests of Assumptions Chapter 12 Residual Tests Warning: Warning: Excel offers normal probability plots for residuals, but they are done incorrectly. Excel’s Data Analysis > Regression does residual plots and gives the DW test statistic. Its standardized residuals are done in a strange way, but usually they are not misleading.

39 12-39 Example: MegaStat’s Tests of Assumptions Example: MegaStat’s Tests of Assumptions Chapter 12 Residual Tests MegaStat will do all three tests (if you check the boxes). Its runs plot (residuals by observation) is a visual test for autocorrelation, which Excel does not offer.

40 12-40 Example: MegaStat’s Tests of Assumptions Example: MegaStat’s Tests of Assumptions Chapter 12 Residual Tests near-linear plot - indicates normal errorsno pattern - suggests homoscedastic errors DW near 2 - suggests no autocorrelation

41 Unusual Observations: Standardized Residuals
Use Excel, MINITAB, MegaStat, or other software to compute standardized residuals. If the absolute value of any standardized residual is at least 2, it is classified as unusual.
Leverage and Influence
A high leverage statistic indicates that the observation is far from the mean of X. Such observations are influential because they are at the "end of the lever." The leverage for observation i is denoted h_i.

42 Unusual Observations: Leverage
For simple regression, the leverage for observation i is h_i = 1/n + (x_i − x̄)² / Σ(x_i − x̄)². A leverage that exceeds 4/n is unusual.
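The leverage formula and the 4/n screen are straightforward to apply. The x values are hypothetical; note that in simple regression the leverages always sum to 2 (one per estimated coefficient):

```python
# Leverage for each observation in simple regression; data hypothetical.
x = [1, 2, 3, 4, 5]
n = len(x)
xbar = sum(x) / n
sxx = sum((xi - xbar) ** 2 for xi in x)

h = [1 / n + (xi - xbar) ** 2 / sxx for xi in x]
flagged = [i for i, hi in enumerate(h) if hi > 4 / n]  # unusual leverage

print([round(hi, 2) for hi in h], flagged)  # [0.6, 0.3, 0.2, 0.3, 0.6] []
```

As the formula shows, leverage grows with the squared distance of x_i from x̄, which is exactly the "end of the lever" idea on the previous slide.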

43 Unusual Observations: Example, Achievement Test Scores
If the absolute value of any standardized residual is at least 2, it is classified as unusual. Leverage that exceeds 4/n indicates an influential X value (far from the mean of X).

44 Other Regression Problems: Outliers
Outliers may be caused by:
- an error in recording the data.
- impossible data (which can be omitted).
- an observation that has been influenced by an unspecified "lurking" variable that should have been controlled but wasn't.
To fix the problem:
- delete the observation(s) only if you are sure they are actually wrong.
- formulate a multiple regression model that includes the lurking variable.

45 Other Regression Problems: Model Misspecification
If a relevant predictor has been omitted, then the model is misspecified. For example, Height depends on Gender as well as Age. Use multiple regression instead of bivariate regression.
Ill-Conditioned Data
Well-conditioned data values are of the same general order of magnitude. Ill-conditioned data have unusually large or small data values and can cause loss of regression accuracy or awkward estimates.

46 Other Regression Problems: Ill-Conditioned Data
Avoid mixing magnitudes by adjusting the magnitude of your data before running the regression: for example, Revenue = 139,405,377 mixed with ROI = .037.
Spurious Correlation
In a spurious correlation, two variables appear related because of the way they are defined. This problem is called the size effect, or problem of totals. Expressing variables as per capita or percent may be helpful.

47 Other Regression Problems: Model Form and Variable Transforms
- Sometimes a nonlinear model is a better fit than a linear model. Excel offers other model forms for simple regression (one X and one Y).
- Variables may be transformed (e.g., with logarithmic or exponential functions) to provide a better fit.
- Log transformations reduce heteroscedasticity.
- Nonlinear models may be difficult to interpret.

48 Assignments (ML 11.4)
Connect C-8 (covers Chapter 12):
- You get three attempts.
- Feedback is given if requested.
- Printable if you wish.
- Deadline is midnight each Monday.
Project P-3 (data, tasks, questions):
- Review the instructions.
- Look at the data.
- Your task is to write a nice, readable report (not a spreadsheet).
- Length is up to you.

49 Projects: General Instructions
For each team project, submit a short (5-10 page) report (using Microsoft Word or equivalent) that answers the questions posed. Strive for effective writing (see textbook Appendix I). Creativity and initiative will be rewarded. Avoid careless spelling and grammar. Paste graphs and computer tables or output into your written report (it may be easier to format tables in Excel and then use Paste Special > Picture to avoid weird formatting and permit sizing within Word). Allocate tasks among team members as you see fit, but all should review and proofread the report (submit only one report).

50 Project P-3
You will be assigned team members and a dependent variable (see Moodle) from the 2010 state database Big Dataset 09 - US States. The team may change the assigned dependent variable (the instructor assigned one just to give you a quick start). Delegate tasks and collaborate as seems appropriate, based on your various skills. Submit one report.
Data: Choose an interesting dependent variable (non-binary) from the 2010 state database posted on Moodle.
Analysis:
(a) Propose a reasonable model of the form Y = f(X1, X2, ..., Xk) using not more than 12 predictors.
(b) Use regression to investigate the hypothesized relationship.
(c) Try deleting poor predictors until you feel that you have a parsimonious model, based on the t-values, p-values, standard error, and R²adj.
(d) For the preferred model only, obtain a list of residuals and request residual tests and VIFs.
(e) List the states with high leverage and/or unusual residuals.
(f) Make a histogram and/or probability plot of the residuals. Are the residuals normal?
(g) For the predictors that were retained, analyze the correlation matrix and/or VIFs. Is multicollinearity a problem? If so, what could be done?
(h) If you had more time, what might you do?
Watch the instructor's video walkthrough using Assault as an example (posted on Moodle).

51 Project P-3 (preview, data, tasks)
Example using the 2005 state database:
- 170 variables on n = 50 states.
- Choose one variable as Y (the response).
- Goal: to explain why Y varies from state to state.
- Start choosing X1, X2, ..., Xk (the predictors).
- Copy Y and X1, X2, ..., Xk to a new spreadsheet.
- Study the definitions for each variable (e.g., Burglary is the burglary rate per 100,000 population).

52 Project P-3 (preview, data, tasks)
Why multiple predictors?
- One predictor usually is an incorrect specification.
- Fit can usually be improved.
How many predictors? Evans' Rule (k ≤ n/10):
- Up to one predictor per 10 observations; for example, n = 50 suggests k = 5 predictors.
- Evans' Rule is conservative. It's OK to start with more (you will end up with fewer after deleting weak predictors).

53 Project P-3 (preview, data, tasks)
- Work with partners? Absolutely, it will be more fun.
- Post questions for peers or the instructor on Moodle.
- Get started, but don't run a bunch of regressions until you have studied Chapter 13.
- It's a good idea to have the instructor look over your list of intended Y and X1, X2, ..., Xk, to avoid unnecessary re-work if there are obvious problems.
- Look at all the categories of variables; don't just grab the first one you see (there are 170 variables). Or just use the one your instructor assigned.

