# Regression


Regression

Lines: y = mx + b, where m = slope of the line (how steep it is) and b = y-intercept (where the line crosses the Y axis).

Slope: Slope is the rate of change of Y relative to X; a steeper slope indicates a greater change in Y per unit of X. Slope = m = ΔY/ΔX = (Y2 − Y1)/(X2 − X1) = rise/run.
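The rise-over-run formula above can be sketched directly; the two points here are made up for illustration.

```python
# Slope from two points: m = rise / run = (y2 - y1) / (x2 - x1)
def slope(p1, p2):
    (x1, y1), (x2, y2) = p1, p2
    return (y2 - y1) / (x2 - x1)

m = slope((1, 2), (3, 8))  # rise = 6, run = 2
print(m)  # 3.0
```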

Compact and Augmented Model: The Compact Model says that your best guess for any value in a sample is the mean. C: Yi = β0 + εi — anyone's Yi value (the DV) equals the intercept (β0) plus error. The Augmented Model improves your prediction beyond the mean by adding one or more predictors. A: Yi = β0 + β1X1 + … + βnXn + εi. With the average height of 55, we add other predictors like shoe size or ring size.
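The compact model's claim — that the mean is the best constant guess — can be checked numerically. This is a minimal sketch; the heights below are made up for illustration, and "best" means smallest sum of squared errors.

```python
# Compact model C: Y_i = b0 + e_i -- the best constant guess is the mean.
heights = [60, 62, 65, 68, 70]  # hypothetical sample
y_bar = sum(heights) / len(heights)

def sse(guess):
    # Sum of squared errors when predicting `guess` for everyone
    return sum((y - guess) ** 2 for y in heights)

# The mean beats any other constant guess on squared error.
print(sse(y_bar), sse(64), sse(66))  # 68.0 73 73
```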

Parameters and Degrees of Freedom: A parameter is a numeric quantity that describes a population characteristic (e.g., the population mean). The number of betas in your compact and augmented models tells you how many parameters each model has (PC and PA). df Regression = PA − PC; df Residual = N − PA; df Total = N − PC.
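The three df formulas can be wrapped in a tiny helper; the sample size and parameter counts below are illustrative.

```python
# Degrees of freedom from sample size N and parameter counts PA, PC.
def dfs(n, pa, pc=1):
    return {"regression": pa - pc, "residual": n - pa, "total": n - pc}

# e.g. N = 20 people, augmented model = intercept + one predictor (PA = 2)
print(dfs(20, pa=2))  # {'regression': 1, 'residual': 18, 'total': 19}
```

Note that df regression + df residual = df total, matching (PA − PC) + (N − PA) = N − PC.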

Predicting Height From Mean Height: How much error was there? C: Yi = β0 + εi, with PC = 1, where β0 is the average height and εi is the error in the compact model. The prediction is Ŷc = b0 = Ȳ.

Predicting Height from Shoe Size and Mean Height: How much error is there now? A: Yi = β0 + β1X1 + εi, with PA = 2, where β0 is the adjusted mean, β1 represents the effect of shoe size, X1 is shoe size (the predictor), and εi is the error. The prediction is ŶA = b0 + b1X1, where b1 = SSxy/SSx (the slope) and b0 = Ȳ − b1X̄1.
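The b1 = SSxy/SSx and b0 = Ȳ − b1X̄1 estimates can be computed by hand; the shoe sizes and heights here are made up for illustration.

```python
# Least-squares estimates for A: Yhat = b0 + b1*X
shoe   = [7, 8, 9, 10, 11]     # X: hypothetical shoe sizes
height = [60, 62, 65, 68, 70]  # Y: hypothetical heights

n = len(height)
x_bar = sum(shoe) / n
y_bar = sum(height) / n

ss_xy = sum((x - x_bar) * (y - y_bar) for x, y in zip(shoe, height))
ss_x  = sum((x - x_bar) ** 2 for x in shoe)

b1 = ss_xy / ss_x        # slope  = SSxy / SSx
b0 = y_bar - b1 * x_bar  # intercept = Ybar - b1 * Xbar
print(b1, b0)  # roughly 2.6 and 41.6
```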

Proportional Reduction in Error: PRE is the proportion of error you reduce by using the augmented model to predict height instead of the compact model. PRE = R² = η² = SSreg/SStotal = SSxy²/(SSx · SSy).
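Using the same kind of made-up data, PRE can be computed directly from the SSxy²/(SSx · SSy) form.

```python
# PRE = R^2 = SSxy^2 / (SSx * SSy)
shoe   = [7, 8, 9, 10, 11]     # hypothetical predictor
height = [60, 62, 65, 68, 70]  # hypothetical DV

n = len(height)
x_bar, y_bar = sum(shoe) / n, sum(height) / n
ss_xy = sum((x - x_bar) * (y - y_bar) for x, y in zip(shoe, height))
ss_x  = sum((x - x_bar) ** 2 for x in shoe)
ss_y  = sum((y - y_bar) ** 2 for y in height)

pre = ss_xy ** 2 / (ss_x * ss_y)
print(round(pre, 4))  # 0.9941 -- the predictor removes ~99% of the error
```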

Creating the ANOVA Table
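The ANOVA-table slide itself is not in the transcript, but its entries follow from the quantities defined above: SSreg = SSxy²/SSx, SSresid = SSy − SSreg, the df formulas, MS = SS/df, and F = MSreg/MSresid. A minimal sketch with the same made-up data:

```python
# Build the regression ANOVA table entries from sums of squares and df.
shoe   = [7, 8, 9, 10, 11]
height = [60, 62, 65, 68, 70]

n = len(height)
x_bar, y_bar = sum(shoe) / n, sum(height) / n
ss_xy = sum((x - x_bar) * (y - y_bar) for x, y in zip(shoe, height))
ss_x  = sum((x - x_bar) ** 2 for x in shoe)
ss_y  = sum((y - y_bar) ** 2 for y in height)   # SS total

ss_reg   = ss_xy ** 2 / ss_x                    # SS regression
ss_resid = ss_y - ss_reg                        # SS residual

pa, pc = 2, 1                                   # one predictor plus intercept
df_reg, df_resid = pa - pc, n - pa              # PA-PC and N-PA

ms_reg   = ss_reg / df_reg                      # mean squares
ms_resid = ss_resid / df_resid
f_stat   = ms_reg / ms_resid
print(ss_reg, ss_resid, df_reg, df_resid, f_stat)
```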

The Coefficients Table

Comparing Regression Printout With ANOVA

Contrast Coding: Contrast codes are orthogonal codes, meaning the codes are unrelated to one another. Three rules to follow when using contrast codes: (1) the sum of the weights for all groups must be zero; (2) the sum of the products of the weights for each pair of codes must be zero; (3) the difference between the value of the positive weights and the negative weights should be one for each code variable. http://www.stat.sc.edu/~mclaina/psyc/1st%20lab%20notes%20710.pdf
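The three rules can be verified mechanically. This sketch assumes each code has one distinct positive and one distinct negative weight value; the example weights for three groups are hypothetical.

```python
# Verify the three contrast-code rules for a list of code variables.
def check_contrast_codes(codes):
    # Rule 1: weights within each code sum to zero.
    for c in codes:
        assert abs(sum(c)) < 1e-9
    # Rule 2: orthogonality -- cross-products sum to zero for every pair.
    for i in range(len(codes)):
        for j in range(i + 1, len(codes)):
            assert abs(sum(a * b for a, b in zip(codes[i], codes[j]))) < 1e-9
    # Rule 3: positive weight minus negative weight equals one per code.
    for c in codes:
        pos = max(w for w in c if w > 0)
        neg = min(w for w in c if w < 0)
        assert abs((pos - neg) - 1) < 1e-9
    return True

# Hypothetical codes for three groups: group 1 vs. groups 2-3, then 2 vs. 3
codes = [[2/3, -1/3, -1/3], [0.0, 0.5, -0.5]]
print(check_contrast_codes(codes))  # True
```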

Sums of Squares Everywhere! The same quantities go by different names: SSE(C) = SSy = SSt (total); SSE(A) = SSresid = SSw (within); SSreg = SSb (between).

