
Published by Gavin Hughes; modified over 5 years ago

1
Regression

2
Lines: y = mx + b. m = slope of the line; how steep it is. b = y-intercept of the line; where the line hits the Y axis.
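The slope-intercept form can be evaluated directly. A minimal sketch (the function name `line_y` is just an illustration, not from the slides):

```python
def line_y(m, x, b):
    """Evaluate y = m*x + b: slope m, input x, y-intercept b."""
    return m * x + b

# At x = 0 the line hits the Y axis at b; with m = 2, b = 1:
print(line_y(2, 0, 1))  # 1
print(line_y(2, 3, 1))  # 7
```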

3
Slope: Slope is the comparative rate of change for Y and X; a steeper slope indicates a greater change. Slope = m = ΔY/ΔX = (Y2 − Y1)/(X2 − X1) = rise/run.
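The rise-over-run formula translates into a one-line helper (a sketch; the name `slope` is assumed here):

```python
def slope(x1, y1, x2, y2):
    """m = (y2 - y1) / (x2 - x1), i.e. rise over run."""
    return (y2 - y1) / (x2 - x1)

# Rising 6 units over a run of 3 gives a slope of 2:
print(slope(0, 0, 3, 6))  # 2.0
```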

4
Compact and Augmented Model: The Compact Model says that your best guess for any value in a sample is the mean. C: Yi = β0 + εi. Anyone's Yi value (the DV) is equal to the intercept (β0) plus error. The Augmented Model makes your prediction better than the mean by adding one or more predictors. A: Yi = β0 + β1X1 + … + βnXn + εi. With the average height of 55 we add other predictors like shoe size or ring size.
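The contrast between the two models can be sketched as two prediction functions. This is only an illustration with made-up coefficients, not values from the slides:

```python
def predict_compact(b0):
    """C: Yhat = b0 (the sample mean) -- every case gets the same guess."""
    return b0

def predict_augmented(b0, coefs, xs):
    """A: Yhat = b0 + b1*X1 + ... + bn*Xn -- the mean adjusted by predictors."""
    return b0 + sum(b * x for b, x in zip(coefs, xs))

# Compact model: everyone is predicted to be the mean height of 55.
print(predict_compact(55))                 # 55
# Augmented model: intercept 40 plus 1.5 units per shoe size of 10.
print(predict_augmented(40, [1.5], [10]))  # 55.0
```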

5
Parameters and Degrees of Freedom: A parameter is a numeric quantity that describes a certain population characteristic (e.g., the population mean). The number of betas in your compact and augmented models indicates how many parameters (PC and PA) each model has. df Regression = PA − PC; df Residual = N − PA; df Total = N − PC.
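The three df formulas follow directly from the sample size N and the parameter counts PA and PC. A minimal sketch (the function name is assumed):

```python
def regression_dfs(n, p_a, p_c):
    """Degrees of freedom from sample size N and parameter counts PA, PC."""
    return {
        "regression": p_a - p_c,  # df for the model comparison
        "residual": n - p_a,      # df left over in the augmented model
        "total": n - p_c,         # df in the compact model
    }

# N = 30 cases, augmented model with 2 parameters vs compact with 1:
print(regression_dfs(30, 2, 1))  # {'regression': 1, 'residual': 28, 'total': 29}
```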

6
Predicting Height From Mean Height: How much error was there? C: Yi = β0 + εi; PC = 1. Here β0 is your average height and εi is your error in the compact model. Ŷc = b0 = Ȳ.
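The compact model's error, SSE(C), is the sum of squared deviations of each observation from the mean. A sketch with made-up heights:

```python
def sse_compact(ys):
    """SSE(C): squared error when the mean is the prediction for every case."""
    ybar = sum(ys) / len(ys)
    return sum((y - ybar) ** 2 for y in ys)

heights = [60, 62, 65, 68, 70]  # hypothetical heights; mean is 65
print(sse_compact(heights))     # 68.0
```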

7
Predicting Height from Shoe Size and Mean Height: How much error was there now? A: Yi = β0 + β1X1 + εi; PA = 2. β0 is the adjusted mean, β1 represents the effect of shoe size, X1 is shoe size (a predictor), and εi is the error. ŶA = b0 + b1X1, where b1 = SSxy/SSx (the slope) and b0 = Ȳ − b1X̄1.
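The two estimator formulas, b1 = SSxy/SSx and b0 = Ȳ − b1X̄1, can be computed from raw data. A sketch with a toy data set chosen so the answer is exact:

```python
def fit_simple(xs, ys):
    """Least-squares fit: b1 = SSxy / SSx; b0 = Ybar - b1 * Xbar."""
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    ssx = sum((x - xbar) ** 2 for x in xs)
    ssxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    b1 = ssxy / ssx
    return ybar - b1 * xbar, b1  # (b0, b1)

# Perfectly linear toy data generated from y = 2x + 1:
b0, b1 = fit_simple([1, 2, 3], [3, 5, 7])
print(b0, b1)  # 1.0 2.0
```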

8
Proportional Reduction in Error: PRE is the amount of error you have reduced by using the augmented model rather than the compact model to predict height. PRE = R² = η² = SSreg/SStotal = SSxy²/((SSx)(SSy)).
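For one predictor, computing PRE as (SSE(C) − SSE(A))/SSE(C) should agree with the SSxy²/(SSx·SSy) form. A sketch that computes both on toy data:

```python
def pre(xs, ys):
    """PRE two ways: (SSE(C) - SSE(A)) / SSE(C), and SSxy^2 / (SSx * SSy)."""
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    ssx = sum((x - xbar) ** 2 for x in xs)
    ssy = sum((y - ybar) ** 2 for y in ys)
    ssxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    b1 = ssxy / ssx
    b0 = ybar - b1 * xbar
    sse_c = ssy                                   # compact model error
    sse_a = sum((y - (b0 + b1 * x)) ** 2          # augmented model error
                for x, y in zip(xs, ys))
    return (sse_c - sse_a) / sse_c, ssxy ** 2 / (ssx * ssy)

p1, p2 = pre([1, 2, 3, 4], [2, 3, 5, 6])
print(round(p1, 4), round(p2, 4))  # 0.98 0.98
```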

9
Creating the ANOVA Table

10
The Coefficients Table

11
Comparing Regression Printout With ANOVA

12
Contrast Coding: Contrast codes are orthogonal codes, meaning that they are unrelated. Three rules to follow when using contrast codes: (1) the sum of the weights for all groups must be zero; (2) the sum of the products for each pair of codes must be zero; (3) the difference in value between the positive and negative weights should be one for each code variable. http://www.stat.sc.edu/~mclaina/psyc/1st%20lab%20notes%20710.pdf
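The three rules can be checked mechanically. A sketch (the checker name is assumed; tolerances absorb floating-point rounding):

```python
def is_valid_contrast_set(codes):
    """Check the three contrast-coding rules for a list of code variables,
    each given as a list of group weights."""
    # Rule 1: weights within each code variable sum to zero.
    if any(abs(sum(c)) > 1e-9 for c in codes):
        return False
    # Rule 2: every pair of code variables is orthogonal (sum of products is zero).
    for i in range(len(codes)):
        for j in range(i + 1, len(codes)):
            if abs(sum(a * b for a, b in zip(codes[i], codes[j]))) > 1e-9:
                return False
    # Rule 3: positive minus negative weight equals one within each code.
    for c in codes:
        pos = {w for w in c if w > 0}
        neg = {w for w in c if w < 0}
        if any(abs((p - n) - 1) > 1e-9 for p in pos for n in neg):
            return False
    return True

# Three groups: group 1 vs groups 2+3, then group 2 vs group 3.
print(is_valid_contrast_set([[2/3, -1/3, -1/3], [0, 0.5, -0.5]]))  # True
print(is_valid_contrast_set([[1, -1, 0]]))  # False: weights differ by 2
```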

13
Sums of Squares Everywhere! SSE(C) = SSy = SST; SSE(A) = SSresid = SSW; SSreg = SSB.
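The identities above imply the familiar partition SS total = SS reg + SS resid, which can be verified numerically. A sketch on toy data (the function name is assumed):

```python
def ss_partition(xs, ys):
    """Split SS_total (= SSE(C)) into SS_reg and SS_resid (= SSE(A))."""
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    b1 = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
          / sum((x - xbar) ** 2 for x in xs))
    b0 = ybar - b1 * xbar
    preds = [b0 + b1 * x for x in xs]
    ss_total = sum((y - ybar) ** 2 for y in ys)
    ss_reg = sum((p - ybar) ** 2 for p in preds)
    ss_resid = sum((y - p) ** 2 for y, p in zip(ys, preds))
    return ss_total, ss_reg, ss_resid

t, r, e = ss_partition([1, 2, 3, 4], [2, 3, 5, 6])
print(round(t, 4), round(r, 4), round(e, 4))  # 10.0 9.8 0.2
```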
