
CPE 619 Simple Linear Regression Models Aleksandar Milenković The LaCASA Laboratory Electrical and Computer Engineering Department The University of Alabama in Huntsville

2 Overview Definition of a Good Model Estimation of Model Parameters Allocation of Variation Standard Deviation of Errors Confidence Intervals for Regression Parameters Confidence Intervals for Predictions Visual Tests for Verifying Regression Assumptions

3 Regression Expensive (and sometimes impossible) to measure performance across all possible input values Instead, measure performance for a limited set of inputs and use it to produce a model over the range of input values ⇒ build a regression model

4 Simple Linear Regression Models Regression Model: Predict a response for a given set of predictor variables Response Variable: Estimated variable Predictor Variables: Variables used to predict the response Linear Regression Models: Response is a linear function of predictors Simple Linear Regression Models: Only one predictor

5 Definition of a Good Model [Figure: scatter plots of y versus x with fitted lines illustrating good and bad models]

6 Good Model (cont'd) Regression models attempt to minimize the distance measured vertically between the observation point and the model line (or curve) The length of the line segment is called the residual, modeling error, or simply error The negative and positive errors should cancel out ⇒ zero overall error Many lines will satisfy this criterion

7 Good Model (cont'd) Choose the line that minimizes the sum of squares of the errors, where ŷ = b0 + b1·x is the predicted response when the predictor variable is x. The parameters b0 and b1 are fixed regression parameters to be determined from the data. Given n observation pairs {(x1, y1), …, (xn, yn)}, the estimated response for the i-th observation is ŷi = b0 + b1·xi. The error is ei = yi − ŷi.

8 Good Model (cont'd) The best linear model minimizes the sum of squared errors (SSE): SSE = Σ ei² = Σ (yi − b0 − b1·xi)², subject to the constraint that the mean error is zero: (1/n) Σ ei = 0. This is equivalent to minimizing the variance of errors.

9 Estimation of Model Parameters Regression parameters that give minimum error variance are:
b1 = (Σxy − n·x̄·ȳ) / (Σx² − n·x̄²) and b0 = ȳ − b1·x̄
where x̄ = (1/n) Σ xi and ȳ = (1/n) Σ yi
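A minimal Python sketch of these estimators (the function name fit_simple_linear and the use of NumPy are illustrative assumptions, not part of the original slides):

import numpy as np

def fit_simple_linear(x, y):
    """Estimate b0 and b1 of y = b0 + b1*x by least squares.

    Uses the closed-form expressions above:
    b1 = (sum(xy) - n*xbar*ybar) / (sum(x^2) - n*xbar^2), b0 = ybar - b1*xbar.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    n = len(x)
    xbar, ybar = x.mean(), y.mean()
    b1 = (np.sum(x * y) - n * xbar * ybar) / (np.sum(x * x) - n * xbar ** 2)
    b0 = ybar - b1 * xbar
    return b0, b1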

10 Example 14.1 The number of disk I/O's and processor times of seven programs were measured as: (14, 2), (16, 5), (27, 7), (42, 9), (39, 10), (50, 13), (83, 20) For this data: n=7, Σxy=3375, Σx=271, Σx²=13,855, Σy=66, Σy²=828, x̄=38.71, ȳ=9.43. Therefore,
b1 = (3375 − 7×38.71×9.43) / (13,855 − 7×38.71²) ≈ 0.2438
b0 = 9.43 − 0.2438×38.71 ≈ −0.0083
The desired linear model is: CPU time = −0.0083 + 0.2438 × (number of disk I/O's)
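Applying the fit_simple_linear sketch above to this data (variable names are illustrative; the printed coefficients should match the example up to rounding):

import numpy as np

x = np.array([14, 16, 27, 42, 39, 50, 83], dtype=float)  # number of disk I/Os
y = np.array([2, 5, 7, 9, 10, 13, 20], dtype=float)      # CPU time in ms

b0, b1 = fit_simple_linear(x, y)
print(f"CPU time = {b0:.4f} + {b1:.4f} * (disk I/Os)")
# expected: approximately -0.0083 + 0.2438 * (disk I/Os)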

11 Example 14.1 (cont'd)

12 Example 14.1 (cont'd) Error Computation

13 Derivation of Regression Parameters The error in the i-th observation is: ei = yi − (b0 + b1·xi) For a sample of n observations, the mean error is: ē = (1/n) Σ ei = ȳ − b0 − b1·x̄ Setting the mean error to zero, we obtain: b0 = ȳ − b1·x̄ Substituting b0 in the error expression, we get: ei = (yi − ȳ) − b1·(xi − x̄)

14 Derivation (cont'd) The sum of squared errors SSE is:
SSE = Σ ei² = Σ [(yi − ȳ) − b1·(xi − x̄)]² = Σ (yi − ȳ)² − 2·b1·Σ (yi − ȳ)(xi − x̄) + b1²·Σ (xi − x̄)²

15 Derivation (cont'd) Differentiating this equation with respect to b1 and equating the result to zero:
d(SSE)/db1 = −2·Σ (xi − x̄)(yi − ȳ) + 2·b1·Σ (xi − x̄)² = 0
That is, b1 = Σ (xi − x̄)(yi − ȳ) / Σ (xi − x̄)² = (Σxy − n·x̄·ȳ) / (Σx² − n·x̄²)
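As a quick sanity check of this derivation, NumPy's built-in least-squares polynomial fit should reproduce the same coefficients (a sketch, reusing the x and y arrays from the earlier example):

# np.polyfit returns coefficients highest power first: [b1, b0]
b1_check, b0_check = np.polyfit(x, y, 1)
print(b0_check, b1_check)   # should agree with the closed-form estimates above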

16 Allocation of Variation How to predict the response without regression ⇒ use the mean response ȳ Error variance without regression = variance of the response: sy² = Σ (yi − ȳ)² / (n − 1)

17 Allocation of Variation (cont'd) The sum of squared errors without regression would be: SST = Σ (yi − ȳ)² This is called the total sum of squares (SST). It is a measure of y's variability and is called the variation of y. SST can be computed as follows:
SST = Σ yi² − n·ȳ² = SSY − SS0
where SSY is the sum of squares of y (Σ y²) and SS0 is the sum of squares of ȳ and is equal to n·ȳ².

18 Allocation of Variation (cont'd) The difference between SST and SSE is the sum of squares explained by the regression. It is called SSR: SSR = SST − SSE, or SST = SSR + SSE The fraction of the variation that is explained determines the goodness of the regression and is called the coefficient of determination, R²: R² = SSR/SST = (SST − SSE)/SST

19 Allocation of Variation (cont'd) The higher the value of R², the better the regression. R²=1 ⇒ perfect fit R²=0 ⇒ no fit Coefficient of determination = {correlation coefficient (x,y)}² Shortcut formula for SSE: SSE = Σ y² − b0·Σ y − b1·Σ xy

20 Example 14.2 For the disk I/O-CPU time data of Example 14.1:
SST = Σ y² − n·ȳ² = 828 − 7×(9.43)² ≈ 205.7
SSE = Σ y² − b0·Σ y − b1·Σ xy = 828 + 0.0083×66 − 0.2438×3375 ≈ 5.72
SSR = SST − SSE ≈ 199.98, R² = SSR/SST ≈ 0.97
The regression explains 97% of CPU time's variation.
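A sketch of the same allocation-of-variation computation, continuing the Python snippets above (x, y, b0, b1 as defined earlier):

n = len(y)
sst = np.sum(y ** 2) - n * y.mean() ** 2                    # SST = SSY - SS0
sse = np.sum(y ** 2) - b0 * np.sum(y) - b1 * np.sum(x * y)  # shortcut formula for SSE
ssr = sst - sse                                             # variation explained by regression
r2 = ssr / sst
print(f"R^2 = {r2:.4f}")   # expected: about 0.97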

21 Standard Deviation of Errors Since errors are obtained after calculating two regression parameters from the data, errors have n−2 degrees of freedom SSE/(n−2) is called the mean squared error (MSE). Standard deviation of errors = square root of MSE. SSY has n degrees of freedom since it is obtained from n independent observations without estimating any parameters SS0 has just one degree of freedom since it can be computed simply from ȳ SST has n−1 degrees of freedom, since one parameter (ȳ) must be calculated from the data before SST can be computed

22 Standard Deviation of Errors (cont'd) SSR, which is the difference between SST and SSE, has the remaining one degree of freedom Overall, SST = SSY − SS0 = SSR + SSE, with degrees of freedom n−1 = n − 1 = 1 + (n−2) Notice that the degrees of freedom add just the way the sums of squares do

23 Example 14.3 For the disk I/O-CPU data of Example 14.1, the degrees of freedom of the sums are: SSY: 7, SS0: 1, SST: 7−1 = 6, SSR: 1, SSE: 7−2 = 5
The mean squared error is: MSE = SSE/(n−2) ≈ 5.72/5 = 1.14
The standard deviation of errors is: se = √MSE ≈ 1.07
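Continuing the sketch (sse and n as computed above):

mse = sse / (n - 2)         # SSE has n-2 degrees of freedom
s_e = np.sqrt(mse)          # standard deviation of errors
print(f"MSE = {mse:.2f}, s_e = {s_e:.2f}")   # expected: about 1.14 and 1.07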

24 Confidence Intervals for Regression Params Regression coefficients b0 and b1 are estimates from a single sample of size n ⇒ they are random ⇒ using another sample, the estimates may be different β0 and β1 are the true parameters of the population, that is, yi = β0 + β1·xi + ei The computed coefficients b0 and b1 are estimates of β0 and β1 (the mean values), respectively Their standard deviations can be obtained as follows:
sb0 = se·√(1/n + x̄²/(Σx² − n·x̄²)) and sb1 = se / √(Σx² − n·x̄²)

25 Confidence Intervals (cont'd) The 100(1−α)% confidence intervals for b0 and b1 can be computed using t[1−α/2; n−2], the 1−α/2 quantile of a t variate with n−2 degrees of freedom. The confidence intervals are: b0 ∓ t[1−α/2; n−2]·sb0 and b1 ∓ t[1−α/2; n−2]·sb1 If a confidence interval includes zero, then the regression parameter cannot be considered different from zero at the 100(1−α)% confidence level.

26 Example 14.4 For the disk I/O and CPU data of Example 14.1, we have n=7, x̄=38.71, Σx²=13,855, and se ≈ 1.07. Standard deviations of b0 and b1 are:
sb0 = 1.07·√(1/7 + 38.71²/(13,855 − 7×38.71²)) ≈ 0.82
sb1 = 1.07 / √(13,855 − 7×38.71²) ≈ 0.0184

27 Example 14.4 (cont'd) From Appendix Table A.4, the 0.95-quantile of a t-variate with 5 degrees of freedom is 2.015 ⇒ 90% confidence interval for b0 is: −0.0083 ∓ 2.015×0.82 ≈ (−1.66, 1.65) Since the confidence interval includes zero, the hypothesis that this parameter is zero cannot be rejected at the 0.10 significance level ⇒ b0 is essentially zero. 90% confidence interval for b1 is: 0.2438 ∓ 2.015×0.0184 ≈ (0.207, 0.281) Since the confidence interval does not include zero, the slope b1 is significantly different from zero at this confidence level.
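A hedged sketch of these interval computations (assumes SciPy is installed; x, y, n, b0, b1, and s_e come from the earlier snippets):

from scipy import stats

sxx = np.sum(x ** 2) - n * x.mean() ** 2        # sum(x^2) - n*xbar^2
s_b0 = s_e * np.sqrt(1.0 / n + x.mean() ** 2 / sxx)
s_b1 = s_e / np.sqrt(sxx)

t = stats.t.ppf(0.95, df=n - 2)                 # 0.95-quantile of t with 5 degrees of freedom
print((b0 - t * s_b0, b0 + t * s_b0))           # CI for b0: should straddle zero
print((b1 - t * s_b1, b1 + t * s_b1))           # CI for b1: should exclude zero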

28 Case Study 14.1: Remote Procedure Call

29 Case Study 14.1 (cont'd) UNIX:

30 Case Study 14.1 (cont'd) ARGUS:

31 Case Study 14.1 (cont'd) Best linear models are: The regressions explain 81% and 75% of the variation, respectively. Does ARGUS take a larger time per byte as well as a larger set-up time per call than UNIX?

32 Case Study 14.1 (cont'd) Intervals for intercepts overlap while those of the slopes do not ⇒ set-up times are not significantly different in the two systems, while the per-byte times (slopes) are different.

33 Confidence Intervals for Predictions The predicted mean response at x = xp is ŷp = b0 + b1·xp This is only the mean value of the predicted response. Standard deviation of the mean of a future sample of m observations is:
se·√(1/m + 1/n + (xp − x̄)² / (Σx² − n·x̄²))
m=1 ⇒ standard deviation of a single future observation:
se·√(1 + 1/n + (xp − x̄)² / (Σx² − n·x̄²))

34 CI for Predictions (cont'd) m = ∞ ⇒ standard deviation of the mean of a large number of future observations at xp:
se·√(1/n + (xp − x̄)² / (Σx² − n·x̄²))
A 100(1−α)% confidence interval for the mean can be constructed using a t quantile read at n−2 degrees of freedom
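A sketch of a prediction-interval helper built on these formulas (predict_ci is an illustrative name, not from the original slides; assumes NumPy and SciPy):

from scipy import stats
import numpy as np

def predict_ci(xp, x, y, b0, b1, s_e, m=np.inf, confidence=0.90):
    """CI for the mean of m future observations at predictor value xp."""
    n = len(x)
    x = np.asarray(x, dtype=float)
    sxx = np.sum(x ** 2) - n * x.mean() ** 2
    y_hat = b0 + b1 * xp
    s_pred = s_e * np.sqrt((0.0 if np.isinf(m) else 1.0 / m)
                           + 1.0 / n + (xp - x.mean()) ** 2 / sxx)
    t = stats.t.ppf(1 - (1 - confidence) / 2, df=n - 2)
    return y_hat - t * s_pred, y_hat + t * s_pred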

35 CI for Predictions (cont'd) Goodness of the prediction decreases as we move away from the center

36 Example 14.5 Using the disk I/O and CPU time data of Example 14.1, let us estimate the CPU time for a program with 100 disk I/O's. For a program with 100 disk I/O's, the mean CPU time is: ŷp = −0.0083 + 0.2438×100 ≈ 24.37

37 Example 14.5 (cont'd) The standard deviation of the predicted mean of a large number of observations is:
1.07·√(1/7 + (100 − 38.71)²/(13,855 − 7×38.71²)) ≈ 1.20
From Table A.4, the 0.95-quantile of the t-variate with 5 degrees of freedom is 2.015 ⇒ 90% CI for the predicted mean:
24.37 ∓ 2.015×1.20 ≈ (21.95, 26.79)

38 Example 14.5 (cont'd) CPU time of a single future program with 100 disk I/O's:
standard deviation = 1.07·√(1 + 1/7 + (100 − 38.71)²/(13,855 − 7×38.71²)) ≈ 1.61
90% CI for a single prediction: 24.37 ∓ 2.015×1.61 ≈ (21.13, 27.61)
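Using the predict_ci sketch above to reproduce this example (values should come out close to those shown, subject to rounding):

ci_mean = predict_ci(100, x, y, b0, b1, s_e, m=np.inf)  # mean of many future observations
ci_one  = predict_ci(100, x, y, b0, b1, s_e, m=1)       # a single future observation
print(ci_mean)   # expected: roughly (21.9, 26.8)
print(ci_one)    # expected: roughly (21.1, 27.6)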

39 Visual Tests for Regression Assumptions Regression assumptions:
1. The true relationship between the response variable y and the predictor variable x is linear
2. The predictor variable x is non-stochastic and it is measured without any error
3. The model errors are statistically independent
4. The errors are normally distributed with zero mean and a constant standard deviation

40 1. Linear Relationship: Visual Test Scatter plot of y versus x ⇒ linear or nonlinear relationship

41 2. Independent Errors: Visual Test Scatter plot of the residuals ei versus the predicted response All tests for independence simply try to find dependence

42 Independent Errors (cont'd) Plot the residuals as a function of the experiment number

43 3. Normally Distributed Errors: Test Prepare a normal quantile-quantile plot of errors. Linear ⇒ the assumption is satisfied

44 4. Constant Standard Deviation of Errors Also known as homoscedasticity Trend ⇒ try curvilinear regression or transformation
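A minimal matplotlib sketch of these visual tests for the disk I/O data (assumes the arrays and fitted coefficients from the earlier snippets; scipy.stats.probplot is used for the normal quantile-quantile plot):

import matplotlib.pyplot as plt
from scipy import stats
import numpy as np

y_hat = b0 + b1 * x          # predicted responses
resid = y - y_hat            # residuals (errors)
order = np.argsort(x)        # sort x so the fitted line plots cleanly

fig, ax = plt.subplots(2, 2, figsize=(8, 6))
ax[0, 0].scatter(x, y); ax[0, 0].plot(x[order], y_hat[order])   # 1. linear relationship
ax[0, 0].set(xlabel="Number of disk I/Os", ylabel="CPU time (ms)")
ax[0, 1].scatter(y_hat, resid)                                   # 2./4. residuals vs predicted: trend?
ax[0, 1].set(xlabel="Predicted response", ylabel="Residual")
ax[1, 0].plot(np.arange(1, len(resid) + 1), resid, "o-")         # independence vs experiment number
ax[1, 0].set(xlabel="Experiment number", ylabel="Residual")
stats.probplot(resid, dist="norm", plot=ax[1, 1])                # 3. normal Q-Q plot of errors
plt.tight_layout(); plt.show()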

45 Example 14.6 For the disk I/O and CPU time data of Example 14.1:
1. Relationship is linear
2. No trend in residuals ⇒ they seem independent
3. Linear normal quantile-quantile plot ⇒ larger deviations at lower values, but all values are small
[Figures: CPU time (ms) versus number of disk I/Os; residuals versus predicted response; residual quantiles versus normal quantiles]

46 Example 14.7: RPC Performance
1. Larger errors at larger responses
2. Normality of errors is questionable
[Figures: residuals versus predicted response; residual quantiles versus normal quantiles]

47 Summary Terminology: Simple Linear Regression model, Sums of Squares, Mean Squares, degrees of freedom, percent of variation explained, Coefficient of determination, correlation coefficient Regression parameters as well as the predicted responses have confidence intervals It is important to verify the assumptions of linearity, error independence, and error normality ⇒ visual tests

48 Homework #5 Read Chapter 13 and Chapter 14 Submit answers to exercise 13.2 Submit answers to exercises 14.2 and 14.7 Due: Wednesday, February 13, 2008, 12:45 PM Submit by e-mail to the instructor with subject "CPE619-HW5" Name the file as: FirstName.SecondName.CPE619.HW5.doc