History
Developed by Sir Francis Galton (1822–1911) in his 1886 article "Regression towards mediocrity in hereditary stature"
Purposes:
- To describe the linear relationship between two continuous variables: the response variable (y-axis) and a single predictor variable (x-axis)
- To determine how much of the variation in Y can be explained by the linear relationship with X, and how much of this relationship remains unexplained
- To predict new values of Y from new values of X
The linear regression model is:

    Yi = β0 + β1 Xi + εi

where:
- Xi and Yi are paired observations (i = 1 to n)
- β0 = population intercept (the value of Yi when Xi = 0)
- β1 = population slope (measures the change in Yi per unit change in Xi)
- εi = the random or unexplained error associated with the i-th observation; the εi are assumed to be independent and distributed as N(0, σ²)
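To make the model concrete, here is a minimal simulation sketch in Python (not from the original slides; the parameter values β0 = 2, β1 = 0.5, σ = 1 are arbitrary assumptions):

    import numpy as np

    rng = np.random.default_rng(42)

    # Assumed population parameters (arbitrary, for illustration only)
    beta0, beta1, sigma = 2.0, 0.5, 1.0
    n = 17

    x = rng.uniform(0, 10, size=n)       # predictor values Xi
    eps = rng.normal(0, sigma, size=n)   # errors: independent N(0, sigma^2)
    y = beta0 + beta1 * x + eps          # the linear regression model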
Estimating Regression Parameters
The "best fit" estimates for the regression population parameters (β0 and β1) are the values that minimize the residual sum of squares (SSresidual) between each observed value and the predicted value of the model:

    SSresidual = Σ (Yi − Ŷi)²
The least-squares slope estimate is b1 = Σ (Xi − X̄)(Yi − Ȳ) / Σ (Xi − X̄)². Solving for the intercept:

    b0 = Ȳ − b1 X̄

Thus, our estimated regression equation is:

    Ŷi = b0 + b1 Xi
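A minimal sketch of these estimators in Python (the paired observations are made up for illustration):

    import numpy as np

    # Hypothetical paired observations
    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0])
    y = np.array([2.1, 3.0, 3.9, 5.2, 5.8, 7.1, 7.9])

    xbar, ybar = x.mean(), y.mean()

    # Slope: b1 = sum((Xi - Xbar)(Yi - Ybar)) / sum((Xi - Xbar)^2)
    b1 = np.sum((x - xbar) * (y - ybar)) / np.sum((x - xbar) ** 2)

    # Intercept: b0 = Ybar - b1 * Xbar
    b0 = ybar - b1 * xbar

    y_hat = b0 + b1 * x                      # fitted values
    ss_residual = np.sum((y - y_hat) ** 2)   # minimized residual sum of squares

    print(f"b0 = {b0:.3f}, b1 = {b1:.3f}, SS_residual = {ss_residual:.3f}")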
Hypothesis Tests with Regression
The null hypothesis is that there is no linear relationship between X and Y:

    H0: β1 = 0    (Yi = β0 + εi)
    HA: β1 ≠ 0    (Yi = β0 + β1 Xi + εi)

We can use an F-ratio (i.e., a ratio of variances) to test these hypotheses.
Variance of the error of regression:

    MSresidual = SSresidual / (n − 2)

NOTE: this is also referred to as the residual variance, mean squared error (MSE), or residual mean square (MSresidual).
Mean square of regression:

    MSregression = SSregression / 1 = SSregression

The F-ratio is:

    F = MSregression / MSresidual

This ratio follows the F-distribution with (1, n − 2) degrees of freedom.
Variance components and Coefficient of determination
The total variation in Y partitions into the variation explained by the regression and the residual (unexplained) variation:

    SStotal = SSregression + SSresidual

The coefficient of determination, r², is the proportion of the total variation in Y explained by the linear relationship with X:

    r² = SSregression / SStotal
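Putting the pieces together, this self-contained Python sketch (same made-up data as above) computes the F-ratio, its p-value, and r²:

    import numpy as np
    from scipy import stats

    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0])
    y = np.array([2.1, 3.0, 3.9, 5.2, 5.8, 7.1, 7.9])
    n = len(x)

    b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
    b0 = y.mean() - b1 * x.mean()
    y_hat = b0 + b1 * x

    ss_total = np.sum((y - y.mean()) ** 2)   # total variation in Y
    ss_residual = np.sum((y - y_hat) ** 2)   # unexplained variation
    ss_regression = ss_total - ss_residual   # variation explained by regression

    ms_regression = ss_regression / 1        # df = 1
    ms_residual = ss_residual / (n - 2)      # df = n - 2

    F = ms_regression / ms_residual
    p = stats.f.sf(F, 1, n - 2)              # upper-tail area of F(1, n-2)
    r2 = ss_regression / ss_total

    print(f"F = {F:.3f}, p = {p:.4g}, r2 = {r2:.3f}")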
Parametric Confidence Intervals
If we assume our parameter of interest has a particular sampling distribution, and we have estimated its expected value and variance, we can construct a confidence interval for a given percentile.

Example: if we assume Y is a normal random variable with unknown mean μ and variance σ², then

    Z = (Ȳ − μ) / (σ / √n)

is distributed as a standard normal variable. But since we don't know σ, we must divide by the estimated standard error instead:

    t = (Ȳ − μ) / (s / √n)

giving us a t-distribution with (n − 1) degrees of freedom.

The 100(1 − α)% confidence interval for μ is then given by:

    Ȳ ± t(α/2, n−1) · s / √n

IMPORTANT: this does not mean "there is a 100(1 − α)% chance that the true population mean μ occurs inside this interval." It means that if we were to repeatedly sample the population in the same way, 100(1 − α)% of the confidence intervals would contain the true population mean μ.
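A short sketch of this interval in Python, using a made-up sample:

    import numpy as np
    from scipy import stats

    # Hypothetical sample from a normal population
    y = np.array([4.8, 5.3, 5.1, 4.6, 5.5, 5.0, 4.9, 5.2])
    n = len(y)
    alpha = 0.05

    ybar = y.mean()
    se = y.std(ddof=1) / np.sqrt(n)          # standard error: s / sqrt(n)

    # Critical value of the t-distribution with n-1 degrees of freedom
    t_crit = stats.t.ppf(1 - alpha / 2, n - 1)

    lo, hi = ybar - t_crit * se, ybar + t_crit * se
    print(f"{100 * (1 - alpha):.0f}% CI for mu: ({lo:.3f}, {hi:.3f})")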
Publication form of ANOVA table for regression

    Source       Sum of Squares   df   Mean Square   F        Sig.
    Regression   11.479            1   11.479        21.044
    Residual      8.182           15     .545
    Total        19.661           16
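As a quick arithmetic check of the table (plain Python, using only the printed sums of squares and degrees of freedom):

    # Values taken from the ANOVA table above
    ss_regression, df_regression = 11.479, 1
    ss_residual, df_residual = 8.182, 15

    ms_regression = ss_regression / df_regression   # 11.479
    ms_residual = ss_residual / df_residual         # ~0.545
    F = ms_regression / ms_residual                 # ~21.044, matching the table

    print(f"MS_residual = {ms_residual:.3f}, F = {F:.3f}")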
Assumptions of regression
- The linear model correctly describes the functional relationship between X and Y
- The X variable is measured without error
- For a given value of X, the sampled Y values are independent with normally distributed errors
- Variances are constant along the regression line
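As a hedged illustration (not part of the original slides), simple residual diagnostics for the last two assumptions might look like this in Python, using the same made-up data as earlier; in practice these would be paired with residual plots:

    import numpy as np
    from scipy import stats

    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0])
    y = np.array([2.1, 3.0, 3.9, 5.2, 5.8, 7.1, 7.9])

    b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
    b0 = y.mean() - b1 * x.mean()
    fitted = b0 + b1 * x
    residuals = y - fitted

    # Normality of errors: Shapiro-Wilk test on the residuals
    w, p_norm = stats.shapiro(residuals)
    print(f"Shapiro-Wilk p = {p_norm:.3f} (small p suggests non-normal errors)")

    # Constant variance: inspect residuals against fitted values
    # (a funnel pattern would suggest non-constant variance)
    for f_i, r_i in zip(fitted, residuals):
        print(f"fitted = {f_i:.2f}, residual = {r_i:+.3f}")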