Introduction to Probability and Statistics Thirteenth Edition Chapter 12 Linear Regression and Correlation.

Presentation on theme: "Introduction to Probability and Statistics Thirteenth Edition Chapter 12 Linear Regression and Correlation."— Presentation transcript:

1 Introduction to Probability and Statistics Thirteenth Edition Chapter 12 Linear Regression and Correlation

2 Correlation & Regression
Univariate vs. bivariate statistics:
◦ Univariate: frequency distribution, mean, mode, range, standard deviation
◦ Bivariate: correlation – two variables
Correlation:
◦ a linear pattern of relationship between one variable (x) and another variable (y) – an association between two variables
◦ a graphical representation of the relationship between two variables
Warning: correlation is no proof of causality – you cannot assume x causes y.

3 1. Correlation Analysis
The correlation coefficient measures the strength of the relationship between x and y. The sample Pearson's correlation coefficient is
r = Sxy / √(Sxx Syy)
where Sxy = Σxy − (Σx)(Σy)/n, Sxx = Σx² − (Σx)²/n, and Syy = Σy² − (Σy)²/n.

4 Pearson's Correlation Coefficient
"r" indicates:
◦ the strength of the relationship (strong, weak, or none)
◦ the direction of the relationship: positive (direct) – the variables move in the same direction; negative (inverse) – the variables move in opposite directions
r ranges in value from –1.0 (strong negative) through 0.0 (no relationship) to +1.0 (strong positive).

5 Limitations of Correlation
◦ Linearity: r cannot describe non-linear relationships (e.g., the relation between anxiety and performance).
◦ No proof of causation: you cannot assume x causes y.

6 Some Correlation Patterns
[Scatter plots contrasting linear relationships with curvilinear relationships.]

7 Some Correlation Patterns
[Scatter plots contrasting strong relationships with weak relationships.]

8 Example
The table shows the heights and weights of n = 10 randomly selected college football players.

Player      1    2    3    4    5    6    7    8    9   10
Height, x  73   71   75   72   72   75   67   69   71   69
Weight, y 185  175  200  210  190  195  150  170  180  175

9 Example – Scatter Plot
r = .8261: a strong positive correlation. As the player's height increases, so does his weight.
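As a quick check, the computation behind r = .8261 can be sketched in a few lines of Python (a minimal illustration using only the standard library; the variable names are my own):

```python
import math

# Heights and weights of the n = 10 college football players in the table above.
height = [73, 71, 75, 72, 72, 75, 67, 69, 71, 69]
weight = [185, 175, 200, 210, 190, 195, 150, 170, 180, 175]
n = len(height)

# Corrected sums of squares and cross-products.
Sxx = sum(x * x for x in height) - sum(height) ** 2 / n
Syy = sum(y * y for y in weight) - sum(weight) ** 2 / n
Sxy = sum(x * y for x, y in zip(height, weight)) - sum(height) * sum(weight) / n

r = Sxy / math.sqrt(Sxx * Syy)
print(round(r, 4))  # 0.8261
```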

10 Inference Using r
The population coefficient of correlation is called ρ ("rho"). We can test for a significant correlation between x and y using a t test:
t = r √(n − 2) / √(1 − r²)
which has a t distribution with n − 2 degrees of freedom when H0: ρ = 0 is true.

11 Example
Is there a significant positive correlation between weight and height in the population of all college football players? Test H0: ρ = 0 versus Ha: ρ > 0. Using the t-table with n − 2 = 8 df, the p-value is bounded as p-value < .005. There is a significant positive correlation between weight and height in the population of all college football players.
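This t test can be sketched in plain Python (the critical value 3.355 for t.005 with 8 df is taken from a standard t-table; variable names are my own):

```python
import math

height = [73, 71, 75, 72, 72, 75, 67, 69, 71, 69]
weight = [185, 175, 200, 210, 190, 195, 150, 170, 180, 175]
n = len(height)
Sxx = sum(x * x for x in height) - sum(height) ** 2 / n
Syy = sum(y * y for y in weight) - sum(weight) ** 2 / n
Sxy = sum(x * y for x, y in zip(height, weight)) - sum(height) * sum(weight) / n
r = Sxy / math.sqrt(Sxx * Syy)

# t statistic for H0: rho = 0 versus Ha: rho > 0, with n - 2 = 8 df.
t = r * math.sqrt(n - 2) / math.sqrt(1 - r ** 2)
print(round(t, 2))  # 4.15
print(t > 3.355)    # True: t exceeds the .005 critical value, so p-value < .005
```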

12 2. Linear Regression
Regression: correlation + prediction. Regression analysis is used to predict the value of one variable (the dependent variable) on the basis of other variables (the independent variables).
◦ Dependent variable: denoted y
◦ Independent variables: denoted x1, x2, …, xk

13 Example
Let y be the monthly sales revenue for a company. This might be a function of several variables:
◦ x1 = advertising expenditure
◦ x2 = time of year
◦ x3 = state of economy
◦ x4 = size of inventory
We want to predict y using knowledge of x1, x2, x3 and x4.

14 Some Questions
◦ Which of the independent variables are useful and which are not?
◦ How could we create a prediction equation to allow us to predict y using knowledge of x1, x2, x3, etc.?
◦ How good is this prediction?
We start with the simplest case, in which the response y is a function of a single independent variable, x.

15 Model Building
A statistical model separates the systematic component of a relationship from the random component:
Data = systematic component + random errors
In regression, the systematic component is the overall linear relationship, and the random component is the variation around the line.

16 A Simple Linear Regression Model
Both the explanatory and the response variables are numeric. The relationship between the mean of the response variable and the level of the explanatory variable is assumed to be approximately linear (a straight line):
y = β0 + β1x + ε
◦ β1 > 0 ⇒ positive association
◦ β1 < 0 ⇒ negative association
◦ β1 = 0 ⇒ no association

17 Picturing the Simple Linear Regression Model
[Regression plot: the line has intercept β0 and slope β1; ε is the error of an observed point about the line.]

18 Simple Linear Regression Analysis
Variables: x = independent variable, y = dependent variable.
Parameters: α = y-intercept, β = slope; ε ~ normal distribution with mean 0 and variance σ².
y = actual value of a score; ŷ = predicted value.

19 Simple Linear Regression Model
[Plot of the fitted line ŷ = a + bx, with intercept a and slope b = Δy/Δx.]

20 The Method of Least Squares
The equation of the best-fitting line is calculated using a set of n pairs (xi, yi). We choose our estimates a and b to estimate α and β so that the sum of squared vertical distances of the points from the line, Σ(yi − (a + bxi))², is minimized.

21 Least Squares Estimators
b = Sxy / Sxx and a = ȳ − b x̄
where Sxy = Σxy − (Σx)(Σy)/n and Sxx = Σx² − (Σx)²/n.

22 Example
The table shows the IQ scores for a random sample of n = 10 college freshmen, along with their final calculus grades.

Student            1    2    3    4    5    6    7    8    9   10
IQ score, x       39   43   21   64   57   47   28   75   34   52
Calculus grade, y 65   78   52   82   92   89   73   98   56   75

Use your calculator to find the sums and sums of squares.

23 Example
Σx = 460, Σy = 760, Σx² = 23634, Σy² = 59816, Σxy = 36854, so
Sxx = 23634 − 460²/10 = 2474 and Sxy = 36854 − (460)(760)/10 = 1894.
Then b = 1894/2474 = .76556 and a = 76 − .76556(46) = 40.78, giving the least squares regression line ŷ = 40.78 + .77x.
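The least squares computation for this example can be sketched in plain Python (a minimal illustration; variable names are my own):

```python
# IQ scores and calculus grades for the n = 10 freshmen in the table above.
iq = [39, 43, 21, 64, 57, 47, 28, 75, 34, 52]
grade = [65, 78, 52, 82, 92, 89, 73, 98, 56, 75]
n = len(iq)

Sxx = sum(x * x for x in iq) - sum(iq) ** 2 / n                          # 2474.0
Sxy = sum(x * y for x, y in zip(iq, grade)) - sum(iq) * sum(grade) / n   # 1894.0
b = Sxy / Sxx                          # slope
a = sum(grade) / n - b * sum(iq) / n   # intercept = ybar - b * xbar
print(round(b, 4), round(a, 3))  # 0.7656 40.784
```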

24 The Analysis of Variance
The total variation in the experiment is measured by the total sum of squares, Total SS. The Total SS is divided into two parts:
◦ SSR (sum of squares for regression): measures the variation explained by using x in the model.
◦ SSE (sum of squares for error): measures the leftover variation not explained by x.

25 The Analysis of Variance
We calculate
Total SS = Syy = Σy² − (Σy)²/n
SSR = (Sxy)²/Sxx
SSE = Total SS − SSR

26 The ANOVA Table

Source      df       SS         MS                  F
Regression  1        SSR        MSR = SSR/1         MSR/MSE
Error       n − 2    SSE        MSE = SSE/(n − 2)
Total       n − 1    Total SS

27 The Calculus Problem

Source      df    SS          MS          F
Regression  1     1449.9741   1449.9741   19.14
Error       8      606.0259     75.7532
Total       9     2056.0000
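The ANOVA decomposition above can be reproduced from the raw data (plain Python sketch; variable names are my own):

```python
iq = [39, 43, 21, 64, 57, 47, 28, 75, 34, 52]
grade = [65, 78, 52, 82, 92, 89, 73, 98, 56, 75]
n = len(iq)
Sxx = sum(x * x for x in iq) - sum(iq) ** 2 / n
Sxy = sum(x * y for x, y in zip(iq, grade)) - sum(iq) * sum(grade) / n
Syy = sum(y * y for y in grade) - sum(grade) ** 2 / n   # Total SS = 2056.0

total_ss = Syy
ssr = Sxy ** 2 / Sxx          # variation explained by x
sse = total_ss - ssr          # leftover (unexplained) variation
mse = sse / (n - 2)
f = (ssr / 1) / mse           # F = MSR / MSE
print(round(ssr, 4), round(sse, 4), round(f, 2))  # 1449.9741 606.0259 19.14
```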

28 Testing the Usefulness of the Model (The F Test)
You can test the overall usefulness of the model using an F test. If the model is useful, MSR will be large compared to the unexplained variation, MSE. This test is exactly equivalent to the t test, with t² = F.

29 Minitab Output

Regression Analysis: y versus x
The regression equation is y = 40.8 + 0.766 x

Predictor   Coef     SE Coef   T      P
Constant    40.784   8.507     4.79   0.001
x           0.7656   0.1750    4.38   0.002

S = 8.70363   R-Sq = 70.5%   R-Sq(adj) = 66.8%

Analysis of Variance
Source          DF   SS       MS       F       P
Regression      1    1450.0   1450.0   19.14   0.002
Residual Error  8    606.0    75.8
Total           9    2056.0

The regression coefficients a and b appear in the Coef column; the regression equation at the top is the least squares regression line.

30 Testing the Usefulness of the Model
The first question to ask is whether the independent variable x is of any use in predicting y. If it is not, then the value of y does not change regardless of the value of x. This implies that the slope of the line, β, is zero.

31 Testing the Usefulness of the Model
The test statistic is a function of b, our best estimate of β. Using MSE as the best estimate of the random variation σ², we obtain the t statistic
t = b / √(MSE/Sxx)
which has a t distribution with n − 2 degrees of freedom.

32 The Calculus Problem
Is there a significant relationship between the calculus grades and the IQ scores at the 5% level of significance? Reject H0 when |t| > 2.306. Since t = 4.38 falls into the rejection region, H0 is rejected. There is a significant linear relationship between the calculus grades and the IQ scores for the population of college freshmen.
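The slope t test can be sketched as follows (plain Python; it also confirms the equivalence t² = F noted on the F-test slide):

```python
import math

iq = [39, 43, 21, 64, 57, 47, 28, 75, 34, 52]
grade = [65, 78, 52, 82, 92, 89, 73, 98, 56, 75]
n = len(iq)
Sxx = sum(x * x for x in iq) - sum(iq) ** 2 / n
Sxy = sum(x * y for x, y in zip(iq, grade)) - sum(iq) * sum(grade) / n
Syy = sum(y * y for y in grade) - sum(grade) ** 2 / n

b = Sxy / Sxx
mse = (Syy - Sxy ** 2 / Sxx) / (n - 2)   # MSE estimates sigma^2
t = b / math.sqrt(mse / Sxx)             # t statistic for H0: beta = 0
f = (Sxy ** 2 / Sxx) / mse               # F from the ANOVA table
print(round(t, 2))        # 4.38 > 2.306, so H0 is rejected
print(round(t ** 2, 2) == round(f, 2))   # True: t^2 = F
```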

33 Measuring the Strength of the Relationship
If the independent variable x is useful in predicting y, you will want to know how well the model fits. The strength of the relationship between x and y can be measured using the coefficient of correlation r, or the coefficient of determination r² = SSR/Total SS.

34 Measuring the Strength of the Relationship
Since Total SS = SSR + SSE, r² measures:
◦ the proportion of the total variation in the responses that can be explained by using the independent variable x in the model;
◦ the percent reduction in the total variation achieved by using the regression equation rather than just the sample mean ȳ to estimate y.
For the calculus problem, r² = .705, or 70.5%, meaning that 70.5% of the variability in calculus scores is explained by the model.
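The r² computation above is a one-liner once the sums of squares are in hand (plain Python sketch; variable names are my own):

```python
iq = [39, 43, 21, 64, 57, 47, 28, 75, 34, 52]
grade = [65, 78, 52, 82, 92, 89, 73, 98, 56, 75]
n = len(iq)
Sxx = sum(x * x for x in iq) - sum(iq) ** 2 / n
Sxy = sum(x * y for x, y in zip(iq, grade)) - sum(iq) * sum(grade) / n
Syy = sum(y * y for y in grade) - sum(grade) ** 2 / n

# r^2 = SSR / Total SS: the proportion of variation explained by x.
r2 = (Sxy ** 2 / Sxx) / Syy
print(round(r2, 3))  # 0.705
```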

35 Estimation and Prediction
Confidence interval for the average value of y when x = x0:
ŷ ± t(α/2) √(MSE (1/n + (x0 − x̄)²/Sxx))
Prediction interval for a particular value of y when x = x0:
ŷ ± t(α/2) √(MSE (1 + 1/n + (x0 − x̄)²/Sxx))

36 The Calculus Problem
Estimate the average calculus grade for students whose IQ score is 50 with a 95% confidence interval:
ŷ = 40.78 + .76556(50) = 79.06
79.06 ± 2.306 √(75.75(1/10 + (50 − 46)²/2474)) = 79.06 ± 6.55, or 72.51 to 85.61.

37 The Calculus Problem
Estimate the calculus grade for a particular student whose IQ score is 50 with a 95% prediction interval:
79.06 ± 2.306 √(75.75(1 + 1/10 + (50 − 46)²/2474)) = 79.06 ± 21.11, or 57.95 to 100.17.
Notice how much wider this interval is!
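Both intervals can be reproduced in plain Python (a sketch; 2.306 is t.025 with 8 df from a t-table, and variable names are my own):

```python
import math

iq = [39, 43, 21, 64, 57, 47, 28, 75, 34, 52]
grade = [65, 78, 52, 82, 92, 89, 73, 98, 56, 75]
n = len(iq)
xbar = sum(iq) / n
Sxx = sum(x * x for x in iq) - sum(iq) ** 2 / n
Sxy = sum(x * y for x, y in zip(iq, grade)) - sum(iq) * sum(grade) / n
Syy = sum(y * y for y in grade) - sum(grade) ** 2 / n
b = Sxy / Sxx
a = sum(grade) / n - b * xbar
mse = (Syy - Sxy ** 2 / Sxx) / (n - 2)

x0 = 50
y_hat = a + b * x0
t_crit = 2.306                                               # t_.025, 8 df
se_fit = math.sqrt(mse * (1 / n + (x0 - xbar) ** 2 / Sxx))   # for the CI
se_pred = math.sqrt(mse * (1 + 1 / n + (x0 - xbar) ** 2 / Sxx))  # for the PI
ci = (y_hat - t_crit * se_fit, y_hat + t_crit * se_fit)
pi = (y_hat - t_crit * se_pred, y_hat + t_crit * se_pred)
print(round(y_hat, 2))                 # 79.06
print(tuple(round(v, 2) for v in ci))  # (72.51, 85.61)
print(tuple(round(v, 2) for v in pi))  # (57.95, 100.17)
```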

38 Minitab Output
Confidence and prediction intervals when x = 50:

Predicted Values for New Observations
New Obs   Fit     SE Fit   95.0% CI          95.0% PI
1         79.06   2.84     (72.51, 85.61)    (57.95, 100.17)

Values of Predictors for New Observations
New Obs   x
1         50.0

The prediction bands are always wider than the confidence bands; both intervals are narrowest when x = x̄.

39 Estimation and Prediction
Once you have determined that the regression line is useful and have used the diagnostic plots to check for violations of the regression assumptions, you are ready to use the regression line to:
◦ estimate the average value of y for a given value of x;
◦ predict a particular value of y for a given value of x.

40 Estimation and Prediction
The best estimate of either E(y) or y for a given value x = x0 is
ŷ = a + bx0
Particular values of y are more difficult to predict, requiring a wider range of values in the prediction interval.

41 Regression Assumptions
1. The relationship between x and y is linear, given by y = α + βx + ε.
2. The random error terms ε are independent and, for any value of x, have a normal distribution with mean 0 and constant variance σ².
Remember that the results of a regression analysis are only valid when the necessary assumptions have been satisfied.

42 Diagnostic Tools
1. Normal probability plot or histogram of the residuals
2. Plot of the residuals versus the fits or versus the variables
3. Plot of the residuals versus order

43 Residuals
The residual error is the "leftover" variation in each data point after the variation explained by the regression model has been removed: residual = yi − ŷi. If all assumptions have been met, these residuals should be approximately normal, with mean 0 and variance σ².
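For the calculus example, the residuals can be computed directly, and two identities of a least squares fit serve as a check: the residuals sum to zero, and their sum of squares equals SSE (plain Python sketch; variable names are my own):

```python
iq = [39, 43, 21, 64, 57, 47, 28, 75, 34, 52]
grade = [65, 78, 52, 82, 92, 89, 73, 98, 56, 75]
n = len(iq)
Sxx = sum(x * x for x in iq) - sum(iq) ** 2 / n
Sxy = sum(x * y for x, y in zip(iq, grade)) - sum(iq) * sum(grade) / n
b = Sxy / Sxx
a = sum(grade) / n - b * sum(iq) / n

# Residual = observed y minus fitted y-hat at each x.
resid = [y - (a + b * x) for x, y in zip(iq, grade)]
print(abs(sum(resid)) < 1e-8)               # True: residuals sum to zero
print(round(sum(e * e for e in resid), 4))  # 606.0259, the SSE from the ANOVA table
```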

44 Normal Probability Plot
If the normality assumption is valid, the plot should resemble a straight line, sloping upward to the right. If not, you will often see the pattern fail in the tails of the graph.

45 Residuals versus Fits
If the equal variance assumption is valid, the plot should appear as a random scatter around the zero center line. If not, you will see a pattern in the residuals.

