
1
Multicollinearity in Regression: Principal Components Analysis
Standing Heights and Physical Stature Attributes Among Female Police Officer Applicants
S.Q. Lafi and J.B. Kaneene (1992). "An Explanation of the Use of Principal Components Analysis to Detect and Correct for Multicollinearity," Preventive Veterinary Medicine, Vol. 13, pp. 261-275.

2
Data Description
Subjects: 33 females applying for police officer positions
Dependent Variable: Y ≡ Standing Height (cm)
Independent Variables:
X1 ≡ Sitting Height (cm)
X2 ≡ Upper Arm Length (cm)
X3 ≡ Forearm Length (cm)
X4 ≡ Hand Length (cm)
X5 ≡ Upper Leg Length (cm)
X6 ≡ Lower Leg Length (cm)
X7 ≡ Foot Length (inches)
X8 ≡ BRACH (100·X3/X2)
X9 ≡ TIBIO (100·X6/X5)

3
Data

4
Standardizing the Predictors
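As a sketch of this step (using random stand-in data, since the slides' data table is not reproduced here), the predictors can be standardized with the correlation transform, which scales each centered column by its standard deviation times √(n−1) so that X*ᵀX* equals the correlation matrix R used on the next slides:

```python
import numpy as np

# Illustrative sketch, NOT the study's measurements: a random 33 x 9
# stand-in for the predictor matrix of the police-applicant data.
rng = np.random.default_rng(0)
X = rng.normal(size=(33, 9))

# Correlation transform: center each column, then divide by s_j * sqrt(n-1),
# so the cross-product matrix of X* is exactly the correlation matrix R.
n = X.shape[0]
Xstar = (X - X.mean(axis=0)) / (X.std(axis=0, ddof=1) * np.sqrt(n - 1))

R = Xstar.T @ Xstar   # equals np.corrcoef(X, rowvar=False)
```

With this scaling, every standardized column has mean 0 and length 1, which is what makes the later identities (VIFs from R⁻¹, eigenvalues summing to p) hold.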

5
Correlation Matrix of Predictors and Its Inverse

6
Variance Inflation Factors (VIFs)
VIF measures the extent to which a regression coefficient's variance is inflated due to correlations among the set of predictors.
VIFj = 1/(1 - Rj²), where Rj² is the coefficient of multiple determination when Xj is regressed on the remaining predictors.
Values > 10 are often considered problematic.
VIFs can be obtained as the diagonal elements of R⁻¹.
Not surprisingly, X2, X3, X5, X6, X8, and X9 are problems (see the definitions of X8 and X9).
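The diagonal-of-R⁻¹ identity can be sketched on synthetic data (not the police-applicant measurements); one pair of nearly collinear columns is enough to push its VIFs far past 10:

```python
import numpy as np

# Sketch of VIF_j = 1/(1 - R_j^2) via the diagonal of the inverse
# correlation matrix.  Synthetic data, not the study's measurements.
rng = np.random.default_rng(1)
n = 33
x1 = rng.normal(size=n)
x2 = x1 + 0.05 * rng.normal(size=n)   # nearly collinear with x1
x3 = rng.normal(size=n)               # roughly independent of both
X = np.column_stack([x1, x2, x3])

R = np.corrcoef(X, rowvar=False)
vif = np.diag(np.linalg.inv(R))       # VIF_j for each predictor
print(np.round(vif, 1))               # first two entries far exceed 10
```

The same one-liner applied to the 9×9 correlation matrix of the study's predictors is how the slide flags X2, X3, X5, X6, X8, and X9.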

7
Regression of Y on [1|X*]
Note the surprising negative coefficients for X3*, X5*, and X9*.

8
Principal Components Analysis
While the columns of X* are highly correlated, the columns of W are uncorrelated.
The eigenvalues represent the variance corresponding to each principal component.

9
Police Applicants Height Data - I

10
Police Applicants Height Data - II

11
Regression of Y on [1|W]
Note that W8 and W9 have very small eigenvalues and very small t-statistics.
Their condition indices are 63.5 and 85.2, both well above 10.
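Condition indices like the 63.5 and 85.2 quoted here are commonly computed as √(λmax/λj) from the eigenvalues of R; a sketch on synthetic collinear data (the exact convention used in the paper may differ slightly):

```python
import numpy as np

# Sketch: condition indices sqrt(lambda_max / lambda_j) from the
# eigenvalues of the correlation matrix.  Synthetic data.
rng = np.random.default_rng(4)
n = 33
X = rng.normal(size=(n, 4))
X[:, 3] = X[:, 0] + 0.02 * rng.normal(size=n)   # near-exact collinearity

eigvals = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))
cond_idx = np.sqrt(eigvals.max() / eigvals)
print(np.round(np.sort(cond_idx)[::-1], 1))     # largest index flags trouble
```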

12
Reduced Model
Remove the last 2 principal components due to their small, insignificant t-statistics and high condition indices.
Let V(g) be the p×g matrix of eigenvectors for the g retained principal components (p = 9, g = 7).
Let W(g) = X*V(g).
Then regress Y on [1|W(g)].
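These steps can be sketched end to end on synthetic data (with a smaller p and g than the study's p = 9, g = 7, purely for brevity):

```python
import numpy as np

# Sketch of the reduced principal-components regression: keep the g
# components with the largest eigenvalues and regress Y on [1 | W_(g)].
rng = np.random.default_rng(3)
n, p, g = 33, 4, 3
X = rng.normal(size=(n, p))
X[:, 3] = X[:, 0] + 0.05 * rng.normal(size=n)   # collinear pair
y = X @ np.array([1.0, 0.5, -0.5, 1.0]) + 0.1 * rng.normal(size=n)

Xstar = (X - X.mean(0)) / (X.std(0, ddof=1) * np.sqrt(n - 1))
eigvals, V = np.linalg.eigh(Xstar.T @ Xstar)
order = np.argsort(eigvals)[::-1]
eigvals, V = eigvals[order], V[:, order]

Vg = V[:, :g]                                   # retained eigenvectors (p x g)
Wg = Xstar @ Vg                                 # retained component scores
A = np.column_stack([np.ones(n), Wg])           # design matrix [1 | W_(g)]
coef, *_ = np.linalg.lstsq(A, y, rcond=None)    # intercept + g slopes
```

Because the discarded component corresponds to the near-null collinear direction, the reduced fit loses almost no explanatory power.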

13
Reduced Regression Fit

14
Transforming Back to X-scale
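The back-transformation can be sketched as β* = V(g)γ(g), where γ(g) are the slopes on the retained components; this recovers one coefficient per original standardized predictor. (Synthetic data again; returning from the standardized scale to the raw cm scale would additionally divide by each predictor's scaling factor.)

```python
import numpy as np

# Sketch: map the reduced-model slopes back to the standardized X scale
# via beta* = V_(g) @ gamma_(g).  Synthetic data, as in earlier sketches.
rng = np.random.default_rng(3)
n, p, g = 33, 4, 3
X = rng.normal(size=(n, p))
X[:, 3] = X[:, 0] + 0.05 * rng.normal(size=n)
y = X @ np.array([1.0, 0.5, -0.5, 1.0]) + 0.1 * rng.normal(size=n)

Xstar = (X - X.mean(0)) / (X.std(0, ddof=1) * np.sqrt(n - 1))
eigvals, V = np.linalg.eigh(Xstar.T @ Xstar)
order = np.argsort(eigvals)[::-1]
eigvals, V = eigvals[order], V[:, order]
Vg = V[:, :g]

A = np.column_stack([np.ones(n), Xstar @ Vg])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

gamma = coef[1:]                  # slopes on the retained components
beta_star = Vg @ gamma            # coefficients on the standardized X scale
print(np.round(beta_star, 3))
```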

15
Comparison of Coefficients and SEs: Original Model vs. Principal Components
