SIMPLE LINEAR REGRESSION


1 SIMPLE LINEAR REGRESSION
CHAPTER 13 SIMPLE LINEAR REGRESSION Prem Mann, Introductory Statistics, 7/E Copyright © 2010 John Wiley & Sons. All rights reserved

2 SIMPLE LINEAR REGRESSION MODEL
Simple Regression Linear Regression

3 Simple Regression Definition
A regression model is a mathematical equation that describes the relationship between two or more variables. A simple regression model includes only two variables: one independent and one dependent. The dependent variable is the one being explained, and the independent variable is the one used to explain the variation in the dependent variable.

4 Linear Regression Definition
A (simple) regression model that gives a straight-line relationship between two variables is called a linear regression model.

5 Figure 13.1 Relationship between food expenditure and income
Figure 13.1 Relationship between food expenditure and income. (a) Linear relationship. (b) Nonlinear relationship.

6 Figure 13.2 Plotting a linear equation.

7 Figure 13.3 y-intercept and slope of a line.

8 SIMPLE LINEAR REGRESSION ANALYSIS
Scatter Diagram Least Squares Line Interpretation of a and b Assumptions of the Regression Model

9 SIMPLE LINEAR REGRESSION ANALYSIS

10 SIMPLE LINEAR REGRESSION ANALYSIS
Definition In the regression model y = A + Bx + ε, A is called the y-intercept or constant term, B is the slope, and ε is the random error term. The dependent and independent variables are y and x, respectively.

11 SIMPLE LINEAR REGRESSION ANALYSIS
Definition In the model ŷ = a + bx, a and b, which are calculated using sample data, are called the estimates of A and B, respectively.
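The distinction between the population parameters A and B and their sample estimates a and b can be made concrete with a small simulation. This is a hypothetical sketch, not from the textbook: we generate data from a known model y = A + Bx + ε (with assumed values A = 1.5, B = 0.25) and check that the least squares formulas recover values close to A and B.

```python
import random

random.seed(1)
A, B = 1.5, 0.25                     # assumed "true" population parameters
x = [random.uniform(30, 85) for _ in range(500)]
y = [A + B * xi + random.gauss(0, 1) for xi in x]   # y = A + Bx + random error

# Least squares estimates a and b computed from the simulated sample
n = len(x)
ss_xy = sum(xi * yi for xi, yi in zip(x, y)) - sum(x) * sum(y) / n
ss_xx = sum(xi ** 2 for xi in x) - sum(x) ** 2 / n
b = ss_xy / ss_xx                    # sample estimate of B
a = sum(y) / n - b * sum(x) / n      # sample estimate of A
```

With 500 simulated observations, a and b land close to A and B; a different seed gives slightly different estimates, which is exactly the sampling variation that distinguishes the estimates a and b from the fixed parameters A and B.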

12 Table 13.1 Incomes (in hundreds of dollars) and Food Expenditures of Seven Households

13 Scatter Diagram Definition
A plot of paired observations is called a scatter diagram.

14 Figure 13.4 Scatter diagram.

15 Figure 13.5 Scatter diagram and straight lines.

16 Figure 13.6 Regression Line and random errors.

17 Error Sum of Squares (SSE)
The error sum of squares, denoted SSE, is SSE = Σe² = Σ(y – ŷ)². The values of a and b that give the minimum SSE are called the least squares estimates of A and B, and the regression line obtained with these estimates is called the least squares line.

18 The Least Squares Line For the least squares regression line ŷ = a + bx, b = SSxy / SSxx and a = ȳ – bx̄.

19 The Least Squares Line where
SSxy = Σxy – (Σx)(Σy)/n and SSxx = Σx² – (Σx)²/n, and SS stands for “sum of squares”. The least squares regression line ŷ = a + bx is also called the regression of y on x.
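As a concrete check of these formulas, the sketch below computes a and b for the seven-household income/food-expenditure data of Table 13.1. The table itself did not survive in this transcript, so the values below are the ones used in the textbook's example; treat them as illustrative if your copy differs.

```python
# Table 13.1 data (incomes in hundreds of dollars), as used in the textbook;
# illustrative if your edition differs.
income = [55, 83, 38, 61, 33, 49, 67]          # x
food   = [14, 24, 13, 16,  9, 15, 17]          # y
n = len(income)

ss_xy = sum(x * y for x, y in zip(income, food)) - sum(income) * sum(food) / n
ss_xx = sum(x * x for x in income) - sum(income) ** 2 / n

b = round(ss_xy / ss_xx, 4)                    # the text rounds b to 4 places
a = sum(food) / n - b * sum(income) / n        # y-intercept

print(f"yhat = {a:.4f} + {b:.4f}x")            # yhat = 1.5050 + 0.2525x
```

The printed line matches the estimated regression model found in Example 13-1.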

20 Example 13-1 Find the least squares regression line for the data on incomes and food expenditures of the seven households given in Table 13.1. Use income as the independent variable and food expenditure as the dependent variable.

21 Table 13.2

22 Example 13-1: Solution

23 Example 13-1: Solution

24 Example 13-1: Solution ŷ = 1.5050 + .2525x
Thus, our estimated regression model is ŷ = 1.5050 + .2525x.

25 Figure 13.7 Error of prediction.

26 Interpretation of a and b
Interpretation of a: Consider a household with zero income. Using the estimated regression line obtained in Example 13-1, ŷ = 1.5050 + .2525(0) = 1.5050 hundred = $150.50. Thus, we can state that a household with no income is expected to spend $150.50 per month on food. The regression line is valid only for values of x between 33 and 83.

27 Interpretation of a and b
Interpretation of b: The value of b in the regression model gives the change in y (the dependent variable) due to a change of one unit in x (the independent variable). We can state that, on average, a $100 (or $1) increase in the income of a household will increase food expenditure by $25.25 (or $.2525).

28 Figure 13.8 Positive and negative linear relationships between x and y.

29 Case Study 13-1 Regression of Heights and Weights of NBA Players

30 Case Study 13-1 Regression of Heights and Weights of NBA Players

31 Assumptions of the Regression Model
Assumption 1: The random error term ε has a mean equal to zero for each x. Assumption 2: The errors associated with different observations are independent.

32 Assumptions of the Regression Model
Assumption 3: For any given x, the distribution of errors is normal. Assumption 4: The distribution of population errors for each x has the same (constant) standard deviation, denoted σε.

33 Assumptions of the Regression Model
Assumption 5: The model must be linear in parameters. Assumption 6: The values of x cannot all be the same. Assumption 7: The values of x must be randomly selected. Assumption 8: The error term is not correlated with x.

34 Figure 13.11 (a) Errors for households with an income of $4000 per month.

35 Figure 13.11 (b) Errors for households with an income of $7500 per month.

36 Figure 13.12 Distribution of errors around the population regression line.

37 Figure 13.13 Nonlinear relations between x and y.

38 STANDARD DEVIATION OF RANDOM ERRORS
Degrees of Freedom for a Simple Linear Regression Model: The degrees of freedom for a simple linear regression model are df = n – 2, because two parameters, a and b, are estimated from the sample.

39 Figure 13.14 Spread of errors for x = 40 and x = 75.

40 STANDARD DEVIATION OF RANDOM ERRORS
The standard deviation of errors is calculated as se = √(SSE / (n – 2)) = √((SSyy – b·SSxy) / (n – 2)), where SSyy = Σy² – (Σy)²/n.
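Continuing with the Table 13.1 example (with the same caveat that the data values are taken from the textbook and are illustrative if your copy differs), the standard deviation of errors can be computed directly:

```python
import math

income = [55, 83, 38, 61, 33, 49, 67]   # x, from the textbook example
food   = [14, 24, 13, 16,  9, 15, 17]   # y
n = len(income)

ss_xy = sum(x * y for x, y in zip(income, food)) - sum(income) * sum(food) / n
ss_xx = sum(x * x for x in income) - sum(income) ** 2 / n
ss_yy = sum(y * y for y in food) - sum(food) ** 2 / n
b = round(ss_xy / ss_xx, 4)             # .2525, rounded as in the text

# s_e = sqrt((SS_yy - b * SS_xy) / (n - 2)), with df = n - 2 = 5
s_e = math.sqrt((ss_yy - b * ss_xy) / (n - 2))
print(round(s_e, 4))                    # 1.5939
```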

41 Example 13-2 Compute the standard deviation of errors se for the data on monthly incomes and food expenditures of the seven households given in Table 13.1.

42 Table 13.3

43 Example 13-2: Solution

44 COEFFICIENT OF DETERMINATION
Total Sum of Squares (SST) The total sum of squares, denoted by SST, is calculated as SST = Σy² – (Σy)²/n. Note that this is the same formula that we used to calculate SSyy.

45 Figure 13.15 Total errors.

46 Table 13.4

47 Figure 13.16 Errors of prediction when regression model is used.

48 COEFFICIENT OF DETERMINATION
Regression Sum of Squares (SSR) The regression sum of squares, denoted by SSR, is SSR = SST – SSE.

49 COEFFICIENT OF DETERMINATION
The coefficient of determination, denoted by r², represents the proportion of SST that is explained by the use of the regression model. The computational formula for r² is r² = b·SSxy / SSyy, and 0 ≤ r² ≤ 1.
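For the Table 13.1 example (data values from the textbook, illustrative if your copy differs), the coefficient of determination works out as follows:

```python
income = [55, 83, 38, 61, 33, 49, 67]   # x, from the textbook example
food   = [14, 24, 13, 16,  9, 15, 17]   # y
n = len(income)

ss_xy = sum(x * y for x, y in zip(income, food)) - sum(income) * sum(food) / n
ss_xx = sum(x * x for x in income) - sum(income) ** 2 / n
ss_yy = sum(y * y for y in food) - sum(food) ** 2 / n
b = round(ss_xy / ss_xx, 4)             # slope, rounded as in the text

r2 = b * ss_xy / ss_yy                  # proportion of SST explained
print(round(r2, 2))                     # 0.9
```

That is, about 90% of the total variation in food expenditure is explained by income for this sample.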

50 Example 13-3 For the data of Table 13.1 on monthly incomes and food expenditures of seven households, calculate the coefficient of determination.

51 Example 13-3: Solution From the earlier calculations made in Examples 13-1 and 13-2, b = .2525, and the values of SSxy and SSyy are as computed there.

52 REGRESSION ANALYSIS: A COMPLETE EXAMPLE
A random sample of eight drivers insured with a company and having similar auto insurance policies was selected. The following table lists their driving experience (in years) and monthly auto insurance premiums.

53 Example 13-8

54 Example 13-8 a) Does the insurance premium depend on the driving experience or does the driving experience depend on the insurance premium? Do you expect a positive or a negative relationship between these two variables? b) Compute SSxx, SSyy, and SSxy. c) Find the least squares regression line by choosing appropriate dependent and independent variables based on your answer in part a. d) Interpret the meaning of the values of a and b calculated in part c.

55 Example 13-8 e) Plot the scatter diagram and the regression line.
f) Calculate r and r2 and explain what they mean. g) Predict the monthly auto insurance for a driver with 10 years of driving experience. h) Compute the standard deviation of errors. i) Construct a 90% confidence interval for B. j) Test at the 5% significance level whether B is negative.

56 Example 13-8: Solution a) Based on theory and intuition, we expect the insurance premium to depend on driving experience. Thus, the insurance premium is the dependent variable and driving experience is the independent variable.

57 Table 13.5

58 Example 13-8: Solution

59 Example 13-8: Solution
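The computations for parts b, c, and g were shown as images on the original slides. This hedged sketch reproduces them using the eight-driver data as it appears in the textbook's Example 13-8 (treat the values as illustrative if your copy differs); the results are consistent with the slides' stated numbers (b of about –1.55 and a predicted premium of $61.18 at 10 years).

```python
# Driving experience (years) and monthly premium ($) for the eight drivers,
# values from the textbook's Example 13-8 (illustrative if your copy differs).
exp_years = [5, 2, 12, 9, 15, 6, 25, 16]       # x
premium   = [64, 87, 50, 71, 44, 56, 42, 60]   # y
n = len(exp_years)

# b) sums of squares
ss_xy = sum(x * y for x, y in zip(exp_years, premium)) - sum(exp_years) * sum(premium) / n
ss_xx = sum(x * x for x in exp_years) - sum(exp_years) ** 2 / n

# c) least squares line (b rounded to 4 places, as the text does)
b = round(ss_xy / ss_xx, 4)                    # negative: premium falls with experience
a = sum(premium) / n - b * sum(exp_years) / n

# g) predicted premium for a driver with 10 years of experience
yhat_10 = a + b * 10
print(round(yhat_10, 4))                       # 61.1845, i.e. about $61.18
```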

60 Example 13-8: Solution d) The value of a gives the value of ŷ for x = 0; that is, it gives the monthly auto insurance premium for a driver with no driving experience. The value of b indicates that, on average, every extra year of driving experience decreases the monthly auto insurance premium by $1.55.

61 Figure 13.21 Scatter diagram and the regression line.
The regression line slopes downward from left to right.

62 Example 13-8: Solution f)

63 Example 13-8: Solution The value of r = –.77 (the negative square root of r², since b is negative) indicates that driving experience and the monthly auto insurance premium are negatively related. The (linear) relationship is strong but not very strong. The value of r² = .59 states that 59% of the total variation in insurance premiums is explained by years of driving experience and 41% is not.

64 Example 13-8: Solution g) Using the estimated regression line, the predicted value of y for x = 10 is ŷ = $61.18. Thus, we expect the monthly auto insurance premium of a driver with 10 years of driving experience to be $61.18.

65 Example 13-8: Solution

66 Example 13-8: Solution
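Parts h and i (the standard deviation of errors and the 90% confidence interval for B) were also shown as images on the original slides. The sketch below reconstructs them under the same assumption about the driver data; the value 1.943 is the t-table entry for 6 degrees of freedom with .05 in each tail.

```python
import math

# Eight-driver data from the textbook's Example 13-8 (illustrative if your
# copy differs).
exp_years = [5, 2, 12, 9, 15, 6, 25, 16]
premium   = [64, 87, 50, 71, 44, 56, 42, 60]
n = len(exp_years)

ss_xy = sum(x * y for x, y in zip(exp_years, premium)) - sum(exp_years) * sum(premium) / n
ss_xx = sum(x * x for x in exp_years) - sum(exp_years) ** 2 / n
ss_yy = sum(y * y for y in premium) - sum(premium) ** 2 / n
b = round(ss_xy / ss_xx, 4)

# h) standard deviation of errors, df = n - 2 = 6
se = math.sqrt((ss_yy - b * ss_xy) / (n - 2))      # about 10.32

# i) 90% confidence interval for B: b +/- t * s_b, t(.05, df=6) = 1.943
s_b = se / math.sqrt(ss_xx)
t_table = 1.943
lo, hi = b - t_table * s_b, b + t_table * s_b
print(round(lo, 2), round(hi, 2))                  # -2.57 -0.52
```

Since the whole interval is below zero, it already suggests that B is negative, which part j confirms with a formal test.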

67 Example 13-8: Solution Step 1: H0: B = 0 (B is not negative)
H1: B < 0 (B is negative)

68 Example 13-8: Solution Step 2: Because the standard deviation of errors is not known, we use the t distribution to make the hypothesis test. Step 3: Area in the left tail = α = .05; df = n – 2 = 8 – 2 = 6; the critical value of t is –1.943.

69 Figure 13.22

70 Example 13-8: Solution Step 4: Compute the value of the test statistic t = (b – B) / sb, substituting B = 0 from H0.

71 Example 13-8: Solution Step 5:
The value of the test statistic falls in the rejection region. Hence, we reject the null hypothesis and conclude that B is negative. The monthly auto insurance premium decreases with an increase in years of driving experience.
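The five steps above can be reproduced numerically (with the same caveat that the driver data are taken from the textbook's example; the critical value –1.943 is the t-table entry for α = .05 and df = 6):

```python
import math

# Eight-driver data from the textbook's Example 13-8 (illustrative if your
# copy differs).
exp_years = [5, 2, 12, 9, 15, 6, 25, 16]
premium   = [64, 87, 50, 71, 44, 56, 42, 60]
n = len(exp_years)

ss_xy = sum(x * y for x, y in zip(exp_years, premium)) - sum(exp_years) * sum(premium) / n
ss_xx = sum(x * x for x in exp_years) - sum(exp_years) ** 2 / n
ss_yy = sum(y * y for y in premium) - sum(premium) ** 2 / n
b = round(ss_xy / ss_xx, 4)

se  = math.sqrt((ss_yy - b * ss_xy) / (n - 2))   # standard deviation of errors
s_b = se / math.sqrt(ss_xx)                      # standard error of b

t = (b - 0) / s_b              # B = 0 under H0
critical = -1.943              # t table, left tail, alpha = .05, df = 6
reject_h0 = t < critical       # True: t falls in the rejection region
print(round(t, 3), reject_h0)  # -2.937 True
```

Because t is well below the critical value, the null hypothesis is rejected at the 5% level, matching the conclusion in Step 5.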

