**Slide 1: Dimension reduction (1)**

Outline: Overview, PCA, Factor Analysis, EDR space, SIR. Reference: Applied Multivariate Analysis.

**Slide 2: Overview**

The purposes of dimension reduction:
- Data simplification
- Data visualization
- Noise reduction (if we can assume that only the dominant dimensions carry signal)
- Variable selection for prediction

**Slide 3: Overview**

An analogy:

| | Data separation | Dimension reduction |
|---|---|---|
| Outcome variable y exists (learning the association rule) | Classification, regression | SIR, class-preserving projection, partial least squares |
| No outcome variable (learning intrinsic structure) | Clustering | PCA, MDS, factor analysis, ICA, NCA, … |

**Slide 4: PCA**

PCA explains the variance-covariance structure of a set of random variables by a few linear combinations of those variables. It does not require normality!

**Slides 5 and 6: PCA**

*(equation slides; the derivation was not captured in this transcript)* The standard definition presented here: the k-th principal component is the linear combination $Y_k = e_k' X$ maximizing $\operatorname{Var}(Y_k)$ subject to $\|e_k\| = 1$ and $\operatorname{Cov}(Y_k, Y_j) = 0$ for $j < k$; the solution $e_k$ is the k-th eigenvector of the covariance matrix $\Sigma$.

**Slide 7: Reminder of some results for random vectors**

*(equation slide; the quoted results are the standard maximization lemmas: for a symmetric matrix $B$ with eigenvalues $\lambda_1 \ge \cdots \ge \lambda_p$ and eigenvectors $e_1, \ldots, e_p$, $\max_{x \ne 0} x'Bx / x'x = \lambda_1$, attained at $x = e_1$, and the maximum over $x \perp e_1, \ldots, e_{k-1}$ is $\lambda_k$, attained at $x = e_k$)*

**Slide 8: Reminder of some results for random vectors**

Proof of the first (and second) point of the previous slide. *(equations not captured)*

**Slide 9: PCA**

The eigenvalues are the variance components: $\operatorname{Var}(Y_k) = \lambda_k$, and the total variance is $\sum_k \operatorname{Var}(X_k) = \operatorname{tr}(\Sigma) = \sum_k \lambda_k$. Proportion of total variance explained by the k-th PC:

$$\frac{\lambda_k}{\lambda_1 + \lambda_2 + \cdots + \lambda_p}$$
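The computation on this slide can be sketched in NumPy. This is a toy illustration; the data matrix and the scale factors are invented for the example:

```python
import numpy as np

# Toy data: three variables with very different variances (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3)) * np.array([3.0, 1.0, 0.2])

S = np.cov(X, rowvar=False)              # sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(S)     # eigh returns eigenvalues in ascending order
order = np.argsort(eigvals)[::-1]        # re-sort so PC1 comes first
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Proportion of total variance explained by each PC: lambda_k / sum(lambda)
prop_explained = eigvals / eigvals.sum()
```

Here the first variable dominates the total variance, so `prop_explained[0]` is close to 0.9.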

**Slide 10: PCA**

*(equation/figure slide; content not captured)*

**Slide 11: PCA**

The geometrical interpretation of PCA: the PCs form a rotated coordinate system aligned with the axes of the constant-density ellipsoid of the data. *(figure not captured)*

**Slide 12: PCA**

What about PCA using the correlation matrix instead of the covariance matrix? This is equivalent to first standardizing all the X variables.

**Slide 13: PCA**

Using the correlation matrix prevents one X variable from dominating purely because of its scale (unit changes), for example measuring in inches instead of feet. *(worked example not captured)*
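A quick numerical check of the equivalence between using the correlation matrix and standardizing first. The toy data are invented; the two columns mimic the same kind of quantity recorded on wildly different scales:

```python
import numpy as np

# Toy data: two variables on wildly different scales, as if one were
# measured in inches and the other in feet (values are illustrative).
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2)) * np.array([100.0, 0.1])

R = np.corrcoef(X, rowvar=False)                    # correlation matrix
Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)    # standardized variables
S_z = np.cov(Z, rowvar=False)                       # covariance of standardized X

max_abs_diff = np.abs(R - S_z).max()                # ~0: the two matrices agree
```

Since the matrices coincide, their eigen-decompositions (and hence the PCs) coincide too.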

**Slide 14: PCA**

How do we select the number of components? Base the choice on the eigenvalues (the % of variation explained). Assumption: the small amount of variation explained by the trailing PCs is noise.

**Slide 15: Factor Analysis**

If we take the first several PCs that explain most of the variation in the data, we have one form of factor model:

$$X - \mu = LF + \varepsilon$$

where L is the loading matrix, F is an unobserved random vector (the latent variables), and ε is an unobserved random vector (noise).

**Slide 16: Factor Analysis**

The orthogonal factor model assumes no correlation among the factor RVs: $E(F) = 0$, $\operatorname{Cov}(F) = I$, $\operatorname{Cov}(F, \varepsilon) = 0$, and $\operatorname{Cov}(\varepsilon) = \Psi$ is a diagonal matrix.

**Slide 17: Factor Analysis**

*(equation slide; content not captured)*

**Slide 18: Factor Analysis**

Rotations within the m-dimensional subspace defined by the factors make the solution non-unique: for any orthogonal T, $L^* = LT$ and $F^* = T'F$ give the same fit, since $L^*F^* = LF$. PCA is one unique solution, as its vectors are selected sequentially. The maximum likelihood estimator is another solution.

**Slide 19: Factor Analysis**

As noted, rotation within the m-dimensional subspace does not change the overall amount of variation explained. We can therefore rotate to make the results more interpretable.

**Slide 20: Factor Analysis**

Varimax criterion: find an orthogonal T such that

$$V = \frac{1}{p} \sum_{j=1}^{m} \left[ \sum_{i=1}^{p} \tilde{l}_{ij}^{\,4} - \frac{1}{p}\left( \sum_{i=1}^{p} \tilde{l}_{ij}^{\,2} \right)^{2} \right]$$

is maximized, where $\tilde{l}_{ij}$ are the rotated loadings and p is the number of variables. V is proportional to the sum, over factors, of the variance of the squared loadings. Maximizing V spreads the squared loadings out as much as possible: some become very small and some very large.
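Varimax rotation is usually computed with Kaiser's iterative SVD algorithm. A minimal NumPy sketch of that algorithm follows; the loading matrix `L` is a made-up two-factor example, not data from the lecture:

```python
import numpy as np

def varimax(L, n_iter=100, tol=1e-8):
    """Rotate a p x m loading matrix L by an orthogonal T to maximize
    the varimax criterion (Kaiser's iterative SVD algorithm)."""
    p, m = L.shape
    T = np.eye(m)
    obj_old = 0.0
    for _ in range(n_iter):
        Lr = L @ T                                  # current rotated loadings
        col_ss = (Lr ** 2).sum(axis=0)              # column sums of squared loadings
        u, s, vt = np.linalg.svd(L.T @ (Lr ** 3 - Lr * col_ss / p))
        T = u @ vt                                  # best orthogonal update
        if s.sum() - obj_old < tol:                 # stop when the objective stalls
            break
        obj_old = s.sum()
    return L @ T, T

# Hypothetical two-factor loadings for four variables
L = np.array([[0.8, 0.3],
              [0.7, 0.4],
              [0.2, 0.9],
              [0.1, 0.8]])
L_rot, T = varimax(L)
```

Because T is orthogonal, the rotation leaves each variable's communality (row sum of squared loadings) unchanged while concentrating each factor's loadings on fewer variables.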

**Slide 21: Factor Analysis**

Orthogonal simple-structure rotation: rotate the orthogonal factors around the origin until the system is maximally aligned with the separate clusters of variables. Oblique simple-structure rotation: allow the factors to become correlated; each factor is rotated individually to fit a cluster.

**Slide 22: MDS**

Multidimensional scaling is a dimension reduction procedure that maps the distances between observations into a lower-dimensional space. It minimizes a stress objective of the form

$$\text{stress} = \sqrt{\frac{\sum_{i<j} (D_{ij} - d_{ij})^2}{\sum_{i<j} D_{ij}^2}}$$

where $D_{ij}$ is the distance in the original space and $d_{ij}$ the distance in the reduced-dimension space. The minimization is carried out numerically.
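The stress function above is minimized numerically in general, but when D holds Euclidean distances, classical (Torgerson) scaling gives a closed-form solution via double centering and an eigen-decomposition. A minimal sketch on a tiny invented example:

```python
import numpy as np

def classical_mds(D, k=2):
    """Embed points in k dimensions from a matrix of pairwise distances D
    (classical / Torgerson scaling via double centering)."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # double-centered squared distances
    eigvals, eigvecs = np.linalg.eigh(B)
    top = np.argsort(eigvals)[::-1][:k]      # keep the k largest eigenpairs
    return eigvecs[:, top] * np.sqrt(np.maximum(eigvals[top], 0.0))

# Three points on a line embed exactly in one dimension
X = np.array([[0.0], [1.0], [3.0]])
D = np.abs(X - X.T)                          # original pairwise distances
Y = classical_mds(D, k=1)                    # recovered 1-D configuration
```

For this Euclidean input the embedding is exact: the pairwise distances in `Y` reproduce `D`, i.e. the stress is zero.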

**Slide 23: EDR space**

Now we turn to regression; the data are $\{x_i, y_i\}$. Is dimension reduction on the X matrix alone helpful here? Possibly, if the reduction preserves the essential structure of Y|X, but that is far from guaranteed. Effective dimension reduction: reduce the dimension of X without losing the information that is essential for predicting Y.

**Slide 24: EDR space**

The model:

$$Y = g(\beta_1' X, \beta_2' X, \ldots, \beta_K' X, \varepsilon)$$

That is, Y is predicted through a set of linear combinations of X. If g(·) is known, this is not very different from a generalized linear model. For dimension reduction purposes, is there a scheme that works for almost any g(·), without knowledge of its actual form?

**Slide 25: EDR space**

The general model encompasses many models as special cases. *(list of special cases not captured)*

**Slide 26: EDR space**

Under this general model, the space B spanned by $\beta_1, \beta_2, \ldots, \beta_K$ is called the e.d.r. space. Reducing to this subspace causes no loss of information for predicting Y. As in factor analysis, the subspace B is identifiable, but the individual vectors are not. Any non-zero vector in the e.d.r. space is called an e.d.r. direction.

**Slide 27: EDR space**

This equation takes almost the weakest possible form, reflecting the hope that a low-dimensional projection of a high-dimensional regressor variable contains most of the information that can be gathered from a sample of modest size. It does not impose any structure on how the projected regressor variables affect the output variable. Most regression models assume K = 1, plus additional structure on g(·).

**Slide 28: EDR space**

The philosophical point of sliced inverse regression: estimating the projection directions can be a more important statistical issue than estimating the structure of g(·) itself. After finding a good e.d.r. space, we can project the data onto this smaller space. We are then in a better position to decide what to pursue next: model building, response-surface estimation, cluster analysis, heteroscedasticity analysis, variable selection, and so on.

**Slide 29: SIR**

Sliced inverse regression. In ordinary regression our interest is the conditional density h(Y|X); most important are E(Y|x) and var(Y|x). SIR instead treats Y as the independent variable and X as the dependent variable: given Y = y, what values does X take? This turns a p-dimensional problem (subject to the curse of dimensionality) into p one-dimensional curve-fitting problems: $E(x_i \mid y)$, i = 1, …, p.

**Slides 30 and 31: SIR**

*(equation slides; the model statement and slicing algorithm shown here were not captured)*

**Slide 32: SIR**

Write $\hat{V}$ for the covariance matrix of the slice means of x, weighted by the slice sizes, and $\hat{\Sigma}$ for the sample covariance of the $x_i$'s. Find the SIR directions by conducting the eigenvalue decomposition of $\hat{V}$ with respect to $\hat{\Sigma}$:

$$\hat{V} b_k = \lambda_k \hat{\Sigma} b_k$$
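The SIR procedure can be sketched end to end on simulated data in which y depends on x only through a single direction β, so the e.d.r. space is span{β}. The sample size, link function, and all names below are invented for the illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, H = 2000, 5, 10                          # sample size, dimension, number of slices
X = rng.normal(size=(n, p))
beta = np.array([1.0, 2.0, 0.0, 0.0, 0.0])     # the single e.d.r. direction
y = (X @ beta) ** 3 + rng.normal(size=n)       # "unknown" nonlinear link g

# 1. Standardize x so its sample covariance becomes the identity
Sigma = np.cov(X, rowvar=False)
L = np.linalg.cholesky(Sigma)
Z = (X - X.mean(axis=0)) @ np.linalg.inv(L.T)

# 2. Slice on y, 3. form the slice-size-weighted covariance of the slice means
slices = np.array_split(np.argsort(y), H)
M = np.zeros((p, p))
for s in slices:
    m = Z[s].mean(axis=0)
    M += (len(s) / n) * np.outer(m, m)

# 4. Eigen-decompose M (w.r.t. the identity, after standardizing) and
# back-transform the leading eigenvector to the original x scale
eigvals, eigvecs = np.linalg.eigh(M)
b_hat = np.linalg.inv(L.T) @ eigvecs[:, -1]
cosine = abs(b_hat @ beta) / (np.linalg.norm(b_hat) * np.linalg.norm(beta))
```

With this setup the leading SIR direction `b_hat` aligns closely with β even though g(·) was never used by the procedure.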

**Slide 33: SIR**

An example response surface found by SIR. *(figure not captured)*

**Slide 34: SIR and LDA**

Reminder: Fisher's linear discriminant analysis seeks a projection direction w that maximizes class separation. When the underlying distributions are Gaussian, it agrees with the Bayes decision rule. It maximizes the ratio of the between-group variance $w' S_B w$ to the within-group variance $w' S_W w$.

**Slide 35: SIR and LDA**

The solution is the first eigenvector in the eigenvalue decomposition $S_W^{-1} S_B w = \lambda w$. If we let the slices be the class labels, LDA agrees with SIR up to a scaling.
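The eigen-decomposition on this slide can be checked on toy two-class Gaussian data (the class means and sample sizes are invented for the example):

```python
import numpy as np

# Toy two-class Gaussian data with identity class covariances.
rng = np.random.default_rng(3)
mu0, mu1 = np.array([0.0, 0.0]), np.array([3.0, 1.0])
X0 = rng.normal(size=(300, 2)) + mu0
X1 = rng.normal(size=(300, 2)) + mu1

m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
S_W = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)  # within-group scatter
S_B = np.outer(m1 - m0, m1 - m0)                           # between-group scatter

# Fisher direction: leading eigenvector of inv(S_W) @ S_B
eigvals, eigvecs = np.linalg.eig(np.linalg.inv(S_W) @ S_B)
w = np.real(eigvecs[:, np.argmax(np.real(eigvals))])

# With identity class covariances, w should align with m1 - m0
cosine = abs(w @ (mu1 - mu0)) / (np.linalg.norm(w) * np.linalg.norm(mu1 - mu0))
```

Since $S_B$ has rank one here, only one eigenvalue is non-zero and the Fisher direction is proportional to $S_W^{-1}(m_1 - m_0)$.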

**Slide 36: Multi-class LDA**

Structure-preserving dimension reduction in classification. With a denoting observations and $c_i$ the class centers:

- Within-class scatter: $S_w = \sum_{i} \sum_{a \in C_i} (a - c_i)(a - c_i)'$
- Between-class scatter: $S_b = \sum_{i} n_i (c_i - c)(c_i - c)'$
- Mixture scatter: $S_m = S_w + S_b$

Kim et al., Pattern Recognition 2007, 40:2939.

**Slide 37: Multi-class LDA**

Maximize the between-class scatter relative to the within-class scatter. The solution comes from the eigenvalues/eigenvectors of $S_w^{-1} S_b$. When N ≪ p, $S_w$ is singular. *(the workaround introduced at "Let …" was not captured)* Kim et al., Pattern Recognition 2007, 40:2939.

**Slide 38: Multi-class LDA**

*(figure slide)* Kim et al., Pattern Recognition 2007, 40:2939.
