
1 Statistics 350 Lecture 2

2 Today
Last Day: Sections 1.1-1.3
Today: Section 1.6
Homework #1: Chapter 1 problems (pages 33-38): 2, 5, 6, 7, 22, 26, 33, 34, 35
Due: January 19
Read Sections 1.1-1.3 and 1.6

3 Simple Linear Regression
Last day, introduced the simple linear regression model:
Y_i = β0 + β1 X_i + ε_i, for i = 1, 2, …, n
In practice, we do not know the values of the β's nor σ²
Use the data to estimate the model parameters, giving the estimated regression equation
Want to get the "line of best fit"… what does this mean?
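A minimal sketch (not from the lecture) of simulating data from this model in Python with NumPy; the parameter values β0 = 150, β1 = 6, σ = 25 and the range of X are made up for illustration:

import numpy as np

rng = np.random.default_rng(seed=350)

beta0, beta1, sigma = 150.0, 6.0, 25.0   # hypothetical true parameters
n = 30

X = rng.uniform(40, 120, size=n)         # predictor values (e.g., apartment size)
eps = rng.normal(0.0, sigma, size=n)     # errors: independent N(0, sigma^2)
Y = beta0 + beta1 * X + eps              # responses: Y_i = beta0 + beta1*X_i + eps_i

In practice only the (X, Y) pairs are observed; the estimation problem below is to recover β0 and β1 from these data.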

4 Apartment Example

5 Least Squares
Would like to get estimates, b0 and b1, to get the estimated regression function
Would like the estimated line to come as close as possible to the data
Coming close to one point may move the line farther from others
Would like all points to be close on average

6 Least Squares
The single criterion that is commonly used is the least squares criterion:
Q = Σ_{i=1}^{n} (Y_i − b0 − b1 X_i)²
Want to select the values of b0 and b1 that minimize Q
How to minimize: take the partial derivatives of Q with respect to b0 and b1 and set them equal to zero
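As a quick illustration (not from the slides; the data are made up), Q can be evaluated for any candidate pair (b0, b1), and least squares picks the pair with the smallest value:

import numpy as np

# Hypothetical apartment-style data (size, monthly rent)
X = np.array([50, 60, 70, 80, 90, 100], dtype=float)
Y = np.array([600, 640, 730, 770, 840, 880], dtype=float)

def Q(b0, b1):
    # sum of squared vertical deviations of the data from the candidate line
    return np.sum((Y - b0 - b1 * X) ** 2)

print(Q(300.0, 6.0))  # one candidate line
print(Q(320.0, 5.8))  # another candidate; the least squares line minimizes Q over all (b0, b1)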

7 Least Squares
Partial derivatives:
∂Q/∂b0 = −2 Σ (Y_i − b0 − b1 X_i)
∂Q/∂b1 = −2 Σ X_i (Y_i − b0 − b1 X_i)
Setting both equal to zero gives the normal equations:
Σ Y_i = n b0 + b1 Σ X_i
Σ X_i Y_i = b0 Σ X_i + b1 Σ X_i²

8 Least Squares
Solving the normal equations:
b1 = Σ (X_i − X̄)(Y_i − Ȳ) / Σ (X_i − X̄)²
b0 = Ȳ − b1 X̄
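A sketch of computing these estimates directly from the closed-form solution (same made-up data as above, assuming NumPy):

import numpy as np

X = np.array([50, 60, 70, 80, 90, 100], dtype=float)
Y = np.array([600, 640, 730, 770, 840, 880], dtype=float)

Xbar, Ybar = X.mean(), Y.mean()
b1 = np.sum((X - Xbar) * (Y - Ybar)) / np.sum((X - Xbar) ** 2)  # slope estimate
b0 = Ybar - b1 * Xbar                                           # intercept estimate
print(b0, b1)
# np.polyfit(X, Y, 1) returns the same two estimates (slope first)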

9 Comments and Properties
b1 can be re-written as a linear combination of the Y_i's:
b1 = Σ k_i Y_i, where k_i = (X_i − X̄) / Σ (X_j − X̄)²
Therefore:
It is a statistic (a function of the data)
It is a random variable with its own distribution
It is a linear combination of independent normal random variables and thus will have a normal distribution
Same is true for b0
As we shall see, both are unbiased estimators of β1 and β0, respectively, so…
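A numerical check (illustrative, same made-up data) that the linear-combination form agrees with the direct formula:

import numpy as np

X = np.array([50, 60, 70, 80, 90, 100], dtype=float)
Y = np.array([600, 640, 730, 770, 840, 880], dtype=float)

k = (X - X.mean()) / np.sum((X - X.mean()) ** 2)  # weights k_i (depend only on the X's)
b1_linear = np.sum(k * Y)                         # b1 as a linear combination of the Y_i's
b1_direct = np.sum((X - X.mean()) * (Y - Y.mean())) / np.sum((X - X.mean()) ** 2)
print(b1_linear, b1_direct)  # identical up to floating-point rounding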

10 Comments and Properties
The resulting estimated regression line is:
Ŷ = b0 + b1 X
It gives an estimate of E(Y) for a given X
For the X_i's in the sample, can compute the predicted (or fitted) values:
Ŷ_i = b0 + b1 X_i
The difference between the actual observed value and the predicted value is the residual:
e_i = Y_i − Ŷ_i
The residuals should resemble the ε_i's (Chapter 3)

11 Comments and Properties
Residuals sum to 0: Σ e_i = 0
The sum of the squared residuals, Σ e_i², is minimized for these data (i.e., a property of least squares)

12 Comments and Properties
The residuals, weighted by the X's, sum to zero: Σ X_i e_i = 0
The predicted response at the mean of the observed X's is the mean of the observed Y's:
Ŷ at X̄ is b0 + b1 X̄ = (Ȳ − b1 X̄) + b1 X̄ = Ȳ, so the fitted line passes through (X̄, Ȳ)

13 Comments and Properties
The sum of the residuals, weighted by the predicted responses, is zero: Σ Ŷ_i e_i = 0
The mean square error is:
MSE = SSE / (n − 2) = Σ (Y_i − Ŷ_i)² / (n − 2)
It is useful because it is an unbiased estimator of the error variance σ²
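A short check (illustrative, same made-up data as above) of these residual properties, together with the MSE:

import numpy as np

X = np.array([50, 60, 70, 80, 90, 100], dtype=float)
Y = np.array([600, 640, 730, 770, 840, 880], dtype=float)

b1 = np.sum((X - X.mean()) * (Y - Y.mean())) / np.sum((X - X.mean()) ** 2)
b0 = Y.mean() - b1 * X.mean()

Yhat = b0 + b1 * X  # fitted values
e = Y - Yhat        # residuals

print(np.sum(e))         # ~0: residuals sum to zero
print(np.sum(X * e))     # ~0: residuals weighted by the X's
print(np.sum(Yhat * e))  # ~0: residuals weighted by the fitted values

n = len(Y)
mse = np.sum(e ** 2) / (n - 2)  # estimates sigma^2
print(mse)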

