
Slide 1: Psych 5510/6510, Chapter Eight--Multiple Regression: Models with Multiple Continuous Predictors. Part 1: Testing the Overall Model. Spring, 2009.

Slide 2: Multiple Regression Model

\[ \hat{Y}_i = \beta_0 + \beta_1 X_{i1} + \beta_2 X_{i2} + \dots + \beta_{p-1} X_{i,p-1} \]

This is the general linear model (linear because the separate components, after being weighted by their β's, are added together). Nonlinear models can be expressed this way too, through clever use of predictor variables (we will do that later this semester).

p = the number of parameters; since the first parameter is \( \beta_0 \), the last will be \( \beta_{p-1} \).
(p − 1) = the number of predictor variables (X).
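As a concrete illustration (a minimal sketch with made-up coefficients and predictor values, not from the slides), a predicted score is just the intercept plus a weighted sum of that case's predictors:

```python
import numpy as np

# Hypothetical coefficients: b0 (intercept) plus slopes b1, b2.
b0, b = 1.0, np.array([0.5, -0.2])

# One case's predictor values: X_i1 = 3, X_i2 = 7 (made-up numbers).
x_i = np.array([3.0, 7.0])

# Y-hat_i = b0 + b1*X_i1 + b2*X_i2: a weighted sum of the predictors.
y_hat_i = b0 + b @ x_i
print(y_hat_i)  # 1.0 + 0.5*3.0 - 0.2*7.0, which is approximately 1.1
```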

Slide 3: Partial Regression Coefficients

\[ \hat{Y}_i = \beta_0 + \beta_1 X_{i1} + \beta_2 X_{i2} + \dots + \beta_{p-1} X_{i,p-1} \]

The various β's are called partial regression coefficients. As we will see, their values depend upon all of the other predictor variables that are included in the model. Because each β's value is influenced by the other predictor variables, we will sometimes use the notation \( \beta_{j.123 \ldots (p-1)} \) to remind us that the value of \( \beta_j \) depends upon the other variables included in the model.

Slide 4: Partial Regression Coefficients

For example, the value of \( \beta_2 \) in the model \( \hat{Y}_i = \beta_0 + \beta_1 X_{i1} + \beta_2 X_{i2} \) is referred to as \( \beta_{2.1} \) (the value of \( \beta_2 \) when \( X_1 \) is in the model). The value of \( \beta_2 \) will probably be different in the model \( \hat{Y}_i = \beta_0 + \beta_1 X_{i1} + \beta_2 X_{i2} + \beta_3 X_{i3} \), where it is referred to as \( \beta_{2.13} \) (the value of \( \beta_2 \) when \( X_1 \) and \( X_3 \) are in the model).
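A quick numerical illustration of this point (a sketch with fabricated data, not from the slides): when a new predictor overlaps with \( X_2 \), adding it to the model changes the estimate \( b_2 \).

```python
import numpy as np

# Fabricated data in which X3 is correlated with X2, so adding X3
# to the model changes the estimate of beta_2.
rng = np.random.default_rng(0)
n = 500
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
x3 = 0.7 * x2 + rng.normal(scale=0.5, size=n)   # X3 overlaps with X2
y = 1 + 0.5 * x1 + 0.3 * x2 + 0.4 * x3 + rng.normal(size=n)

# b_2.1: X1 and X2 in the model.
A = np.column_stack([np.ones(n), x1, x2])
b_21 = np.linalg.lstsq(A, y, rcond=None)[0][2]

# b_2.13: X1, X2, and X3 in the model.
A = np.column_stack([np.ones(n), x1, x2, x3])
b_213 = np.linalg.lstsq(A, y, rcond=None)[0][2]

print(b_21, b_213)  # roughly 0.58 vs 0.30: same predictor, different coefficient
```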

Slide 5: Redundancy

When we use more than one predictor variable in our model, an important issue arises: to what degree are the predictor variables redundant (i.e., to what degree do they share information)?

Slide 6: Completely Redundant

Using a person's height in inches (X₁) and their height in centimeters (X₂) to predict their weight (Y) would be completely redundant, as the correlation between the two predictor variables would be r = 1 (height in inches perfectly predicts height in centimeters). The correlation between the predictor variables is a measure of their redundancy.
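A tiny check of this (illustrative heights only): any exact linear rescaling, such as inches to centimeters, produces a correlation of exactly 1.

```python
import numpy as np

# Illustrative heights only; any exact linear rescaling gives r = 1.
height_in = np.array([60.0, 64.0, 67.0, 70.0, 74.0])
height_cm = height_in * 2.54

r = np.corrcoef(height_in, height_cm)[0, 1]
print(r)  # 1.0 (up to floating-point rounding): completely redundant
```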

Slide 7: Somewhat Redundant

Using a child's height (X₁) and age (X₂) to predict weight (Y) would be somewhat redundant: there is a correlation between height and age (height can be used to somewhat predict age, and vice versa), but the correlation is not perfect.

Slide 8: Completely Non-Redundant

Using a person's height (X₁) and the state they live in (X₂) to predict how much they like playing basketball (Y) might not be redundant at all. As far as I know, there is no correlation between a person's height and the state they live in, so one could not be used to predict the other (which is how we measure redundancy among the predictor variables).

Slide 9: Venn Diagrams. Example of non-redundant predictors. Model C: \( \hat{Y}_i = \beta_0 \); Model A: \( \hat{Y}_i = \beta_0 + \beta_1 X_{i1} + \beta_2 X_{i2} \).

Slide 10: Venn Diagrams. Example of partially redundant predictors. Model C: \( \hat{Y}_i = \beta_0 \); Model A: \( \hat{Y}_i = \beta_0 + \beta_1 X_{i1} + \beta_2 X_{i2} \).

Slide 11: Venn Diagrams. Example of completely redundant predictors. Model C: \( \hat{Y}_i = \beta_0 \); Model A: \( \hat{Y}_i = \beta_0 + \beta_1 X_{i1} + \beta_2 X_{i2} \).

Slide 12: Estimating Parameters

We are going to use the general linear model for predicting values of Y:

\[ \hat{Y}_i = \beta_0 + \beta_1 X_{i1} + \beta_2 X_{i2} + \dots + \beta_{p-1} X_{i,p-1} \]

or, in terms of our estimates of those β's:

\[ \hat{Y}_i = b_0 + b_1 X_{i1} + b_2 X_{i2} + \dots + b_{p-1} X_{i,p-1} \]

Slide 13: Estimating Parameters

We will use a computer program to calculate, from our data, the values of the various b's that lead to the least amount of error in our sample, i.e., the b's that minimize

\[ \mathrm{SSE} = \sum_i (Y_i - \hat{Y}_i)^2 \]
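For instance, this is the least-squares computation the program carries out (a sketch with fabricated data; NumPy stands in for whatever package the course uses):

```python
import numpy as np

# Fabricated sample: 50 cases, two predictors.
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 2))
y = 2.0 + 1.0 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(size=50)

# Least squares finds the b's (b0, b1, b2) that minimize
# SSE = sum((Y_i - Yhat_i)**2) for this sample.
A = np.column_stack([np.ones(len(y)), X])
b, *_ = np.linalg.lstsq(A, y, rcond=None)

sse = np.sum((y - A @ b) ** 2)
print("b =", b, " SSE =", sse)
```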

Slide 14: Visualizing Multiple Regression

Two predictor variables lead to a regression plane rather than a regression line.

Slide 15: Statistical Inference in Multiple Regression

We will be examining three types of analyses we might want to perform:
1. Testing an overall model.
2. Testing the addition of one parameter.
3. Testing the addition of a set of parameters.

Slide 16: Testing an Overall Model

Purpose: to test whether the predictors as a group do better than simply using the mean of Y as our model.

Model C: \( \hat{Y}_i = \beta_0 \) (where \( \beta_0 = \mu_Y \)), PC = 1
Model A: \( \hat{Y}_i = \beta_0 + \beta_1 X_{i1} + \beta_2 X_{i2} + \dots + \beta_{p-1} X_{i,p-1} \), PA = p

Slide 17: As Always

SSE(C) is the error from using Model C, SSE(A) is the error from using Model A, and SSR is the reduction in error when moving from Model C to Model A:

\[ \mathrm{SSR} = \mathrm{SSE(C)} - \mathrm{SSE(A)} \]
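Plugging in the numbers from the printout summary that appears later in these slides (slide 29), just as a worked check:

\[ \mathrm{SSR} = \mathrm{SSE(C)} - \mathrm{SSE(A)} = 221.783 - 172.994 = 48.789 \]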

Slide 18: Coefficient of Multiple Determination

If you remember, when we have only two variables (X and Y), the square of their correlation (r²) is called the 'coefficient of determination' and is the same thing as PRE. When we have multiple predictor variables, the PRE is called the 'coefficient of multiple determination', and its symbol in many statistical programs is R².
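Using the same printout values as above, R² falls out directly (a worked check):

\[ R^2 = \mathrm{PRE} = \frac{\mathrm{SSE(C)} - \mathrm{SSE(A)}}{\mathrm{SSE(C)}} = \frac{48.789}{221.783} \approx .220 \]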

Slide 19: Multiple Correlation Coefficient

We will be focusing on the value of R², as that is the PRE of moving from Model C to Model A. SPSS will also give us its square root, R, which measures the correlation between the set of predictor variables as a whole and the dependent variable Y. Just to be thorough, the formula for R:

\[ R = \sqrt{R^2} = r_{Y\hat{Y}} \]

(i.e., R is also the correlation between the observed scores Y and the predicted scores \( \hat{Y} \)).

Slide 20: Testing Significance

Model C: \( \hat{Y}_i = \beta_0 \)
Model A: \( \hat{Y}_i = \beta_0 + \beta_1 X_{i1} + \beta_2 X_{i2} + \dots + \beta_{p-1} X_{i,p-1} \)

We have our three ways of testing the statistical significance of PRE:
1. Look up the PRE critical value.
2. The PRE to F* approach.
3. The MS to F* approach.

If we reject H₀, we say that the whole set of extra parameters in Model A is worthwhile to add to our model (compared to a model consisting only of the mean of Y).

Slide 21: MS to F* Method

SPSS will perform exactly the test we want here as a linear regression analysis. When SPSS does linear regression, it always assumes that Model C is simply the mean of Y, which is the correct Model C for testing the overall model.

Slide 22: Example of an Overall Model Test

Y = college GPA (first-year cumulative)

Predictor variables:
X₁: high school percentile rank (HSRANK)
X₂: SAT verbal score (SATV)
X₃: SAT math score (SATM)

Model C: \( \hat{Y}_i = \beta_0 \)
Model A: \( \hat{Y}_i = \beta_0 + \beta_1 X_{i1} + \beta_2 X_{i2} + \beta_3 X_{i3} \)
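A sketch of how this overall test could be run outside SPSS (the data below are fabricated stand-ins, and statsmodels is my substitute for the SPSS analysis the slides describe):

```python
import numpy as np
import statsmodels.api as sm

# Stand-in data; the real analysis used N = 414 students.
rng = np.random.default_rng(2)
n = 414
hsrank = rng.uniform(1, 99, n)
satv = rng.normal(500, 100, n)
satm = rng.normal(500, 100, n)
gpa = (1.0 + 0.010 * hsrank + 0.0010 * satv + 0.0010 * satm
       + rng.normal(0, 0.6, n))

# Model A: intercept + HSRANK + SATV + SATM. The overall F test
# reported by OLS compares Model A against the intercept-only
# Model C, just as SPSS's linear regression does.
X = sm.add_constant(np.column_stack([hsrank, satv, satm]))
res = sm.OLS(gpa, X).fit()
print(res.rsquared)              # R-squared = PRE for the overall test
print(res.fvalue, res.f_pvalue)  # F* and its p value
```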

Slide 23: Hypotheses (Conceptually)

H₀: Model A is not worthwhile; it does not significantly reduce error over simply using the mean of Y; the set of predictors (X₁, X₂, X₃) does not, as a group, improve our prediction of Y.

Hₐ: Model A is worthwhile; including the set of parameters in Model A does improve our model.

Slide 24: Hypotheses

Model C: \( \hat{Y}_i = \beta_0 \)
Model A: \( \hat{Y}_i = \beta_0 + \beta_1 X_{i1} + \beta_2 X_{i2} + \beta_3 X_{i3} \)

H₀: β₁ = β₂ = β₃ = 0
Hₐ: at least one of those β's does not equal 0

Equivalently: H₀: η² = 0; Hₐ: η² > 0

Slide 25: SPSS Printout [shown on the slide; not reproduced in the transcript]

Slide 26: Testing Statistical Significance

We will now go through the three (equivalent) ways of testing the statistical significance of Model A (just to make sure we understand them). SPSS gives us the p value for the 'MS to F' method and reports p = .000. Note that p will never actually equal zero; in this case it simply rounds to zero at three decimal places (so it is pretty darn small). It would be preferable to say p < .0001. We can also use the F or PRE tools. Understand that the p value is the same no matter which of the three approaches we use.

Slide 27: Method 1: Testing the Significance of the PRE

PRE = .220 (from SPSS)
N = 414, PC = 1, PA = 4
N − PA = 410, PA − PC = 3
From the PRE critical table: PRE critical lies between .015 and .038.

Slide 28: PRE Significance

PRE > PRE critical: reject H₀. From the PRE tool: p < .0001.

We can conclude:
1) The additional parameters of Model A are 'worthwhile'; they significantly decrease error compared to Model C, i.e., the group of predictors as a whole significantly reduces prediction error compared to just using the mean GPA score.
2) It is not the case that β₁ = β₂ = β₃ = 0.

Slide 29: Some Other Stuff from the Printout

- All the pair-wise correlations and their p values.
- R² = .220, R = .469 (R is always positive)
- est. η² = .214
- SSR = 48.789
- SSE(A) = 172.994
- SSE(C) = 221.783
- Standard error of the estimate = .650
- Estimates of the parameters (b₀ through b₃) and their confidence intervals.
- Some other information that we will look at soon (e.g., partial correlations).
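The printout's est. η² can be reconstructed from the SSE values above, assuming the usual unbiased-PRE estimate (my assumption; the slides do not show the formula):

\[ \widehat{\eta}^{\,2} = 1 - \frac{\mathrm{SSE(A)}/(N-\mathrm{PA})}{\mathrm{SSE(C)}/(N-\mathrm{PC})} = 1 - \frac{172.994/410}{221.783/413} \approx .214 \]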

Slide 30: Method 2: PRE to F Method

F critical (3, 410; α = .05) ≈ 2.63
F* > F critical: reject H₀. From the F tool: p < .0001.
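The slide's F* computation is not in the transcript; assuming the standard PRE-to-F formula, it would be:

\[ F^{*} = \frac{\mathrm{PRE}/(\mathrm{PA}-\mathrm{PC})}{(1-\mathrm{PRE})/(N-\mathrm{PA})} = \frac{.220/3}{.780/410} \approx 38.5 \]

which agrees with the F* = 38.55 reported on the next slide (the small difference is rounding in PRE).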

Slide 31: Method 3: MS to F Method

See the ANOVA summary table in the SPSS printout: F* = 38.55, p = .000; reject H₀.
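Equivalently, the table's F* can be reconstructed from the sums of squares (a worked check using the slide-29 values):

\[ F^{*} = \frac{\mathrm{MSR}}{\mathrm{MSE}} = \frac{\mathrm{SSR}/(\mathrm{PA}-\mathrm{PC})}{\mathrm{SSE(A)}/(N-\mathrm{PA})} = \frac{48.789/3}{172.994/410} \approx 38.55 \]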

Slide 32: Problems with the Overall Model Test

1. If some of the parameters in Model A are worthwhile and some are not, the PRE per parameter added may not be very impressive, with the weaker parameters washing out the effects of the stronger.

2. As with the overall F test in ANOVA, our alternative hypothesis is very vague: at least one of \( \beta_1 \) through \( \beta_{p-1} \) does not equal 0. If Model A is worthwhile overall, we do not know which of its individual parameters contributed to that worthwhileness.

