1 Affine-invariant Principal Components. Charlie Brubaker and Santosh Vempala, Georgia Tech School of Computer Science, Algorithms and Randomness Center

2 What is PCA? “PCA is a mathematical tool for finding directions in which a distribution is stretched out.” Widely used in practice. Gives the best-known results for some problems.

3 History First discussed by Euler in a work on inertia of rigid bodies (1730). Principal axes identified as eigenvectors by Lagrange. Power method for finding eigenvectors published in 1929, before computers. Ubiquitous in practice today: bioinformatics, econometrics, data mining, computer vision, ...

4 Principal Components Analysis For points a_1, ..., a_m in R^n, the principal components are orthogonal vectors v_1, ..., v_n such that V_k = span{v_1, ..., v_k} minimizes the sum of squared distances sum_i d(a_i, V_k)^2 among all k-dimensional subspaces. Like regression. Computed via the SVD.
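A minimal numpy sketch of this definition (the synthetic matrix, the choice of k, and the helper name sq_dist_to_subspace are illustrative, not from the talk): the span of the top-k right singular vectors beats an arbitrary k-subspace on the sum of squared distances.

```python
import numpy as np

rng = np.random.default_rng(0)
# rows are the points a_1, ..., a_m in R^n (synthetic data, stretched along two axes)
A = rng.normal(size=(200, 10)) * np.array([5, 3] + [1] * 8)

def sq_dist_to_subspace(A, V):
    """Sum of squared distances from the rows of A to span(columns of V)."""
    Q, _ = np.linalg.qr(V)          # orthonormal basis for the subspace
    proj = A @ Q @ Q.T              # projection of each point onto the subspace
    return np.sum((A - proj) ** 2)

k = 2
_, _, Vt = np.linalg.svd(A, full_matrices=False)
V_k = Vt[:k].T                      # top-k principal components as columns

V_rand = rng.normal(size=(10, k))   # an arbitrary k-subspace for comparison
print(sq_dist_to_subspace(A, V_k) <= sq_dist_to_subspace(A, V_rand))  # True
```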

5 Singular Value Decomposition (SVD) A real m x n matrix A can be decomposed as A = U D V^T, where U and V have orthonormal columns and D is diagonal with the singular values σ_1 ≥ σ_2 ≥ ... ≥ 0; equivalently, A = Σ_i σ_i u_i v_i^T.
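A small numpy illustration of the decomposition (the matrix here is arbitrary):

```python
import numpy as np

A = np.random.default_rng(1).normal(size=(5, 3))    # a real m x n matrix
U, s, Vt = np.linalg.svd(A, full_matrices=False)    # A = U D V^T with D = diag(s)
print(np.allclose(A, U @ np.diag(s) @ Vt))          # True: exact reconstruction
print(np.allclose(U.T @ U, np.eye(3)))              # columns of U are orthonormal
print(np.allclose(Vt @ Vt.T, np.eye(3)))            # rows of Vt (columns of V) are orthonormal
```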

6 PCA (continued) Example: for a Gaussian, the principal components are the axes of the ellipsoidal level sets. The “top” principal components are the directions where the data is “stretched out.”

7 Why Use PCA?
1. Reduces computation or space: space goes from O(mn) to O(mk + nk). (Random projection and random sampling also reduce the space requirement.)
2. Reveals interesting structure that is hidden in high dimension.

8 Problem Learn a mixture of Gaussians: classify unlabeled samples. More generally, each component is a logconcave distribution (e.g., a Gaussian). Means, variances, and mixing weights are unknown.

9 Distance-based Classification “Points from the same component should be closer to each other than those from different components.”

10 Mixture models Easy to unravel if the components are far enough apart; impossible if the components are too close.

11 Distance-based classification How far apart? It suffices to have a separation between the means that grows with a power of the dimension n [Dasgupta ‘99], [Dasgupta, Schulman ‘00], [Arora, Kannan ‘01] (more general).

12 PCA Project to the span of the top k principal components of the data: replace A with A_k, its best rank-k approximation. Apply distance-based classification in this subspace.
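A hedged sketch of this step (the synthetic mixture, its separation, and the separability check are illustrative assumptions): project onto the top-k subspace and verify that same-component points are closer to each other than different-component points, which is exactly what distance-based classification needs.

```python
import numpy as np

rng = np.random.default_rng(2)
k, n, m = 2, 50, 400
means = rng.normal(size=(k, n)) * 20                 # two well-separated spherical Gaussians
labels = rng.integers(k, size=m)
A = means[labels] + rng.normal(size=(m, n))

_, _, Vt = np.linalg.svd(A, full_matrices=False)
P = A @ Vt[:k].T                                     # coordinates of each point in the top-k subspace

D = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=-1)   # pairwise distances after projection
within = D[labels[:, None] == labels[None, :]]
between = D[labels[:, None] != labels[None, :]]
print(within.max() < between.min())                  # True: distance-based classification now works
```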

13 Main idea Subspace of top k principal components spans the means of all k Gaussians

14 SVD in geometric terms The rank-1 approximation is the projection to the line through the origin that minimizes the sum of squared distances. The rank-k approximation is the projection to the k-dimensional subspace that minimizes the sum of squared distances.

15 Why? Best line for 1 Gaussian? The line through the mean. Best k-subspace for 1 Gaussian? Any k-subspace through the mean. Best k-subspace for k Gaussians? The k-subspace through all k means!

16 How general is this? Theorem [V-Wang ‘02]. For any mixture of weakly isotropic distributions, the best k-subspace is the span of the means of the k components. (“Weakly isotropic”: covariance matrix = multiple of the identity.)
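A quick empirical check of the spherical (weakly isotropic) case, with arbitrary synthetic parameters: the component means should have only a tiny component outside the span of the top-k right singular vectors of the sample.

```python
import numpy as np

rng = np.random.default_rng(3)
k, n = 3, 30
means = rng.normal(size=(k, n)) * 10
X = np.vstack([mu + rng.normal(size=(2000, n)) for mu in means])  # spherical components, equal weights

_, _, Vt = np.linalg.svd(X, full_matrices=False)
Q = Vt[:k].T                                   # orthonormal basis of the top-k SVD subspace
resid = means - means @ Q @ Q.T                # part of each mean outside that subspace
print(np.linalg.norm(resid, axis=1) / np.linalg.norm(means, axis=1))  # all ratios near 0
```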

17 PCA Projection to the span of the means keeps the means fixed (so inter-mean distances are preserved) while reducing the dimension from n to k, so the separation needed for distance-based classification now depends on k rather than on n. For spherical Gaussians, Span(means) = PCA subspace of dimension k.

18 Sample SVD The sample SVD subspace is “close” to the mixture’s SVD subspace: it doesn’t span the means exactly, but it is close to them.

19 2 Gaussians in 20 Dimensions

20 4 Gaussians in 49 Dimensions

21 Mixtures of Logconcave Distributions Theorem [Kannan-Salmasian-V ’04]. For any mixture of k logconcave distributions, the distance from each component mean to the SVD subspace V is bounded in terms of the maximum directional variances (see the next slide).

22 Mixtures of Nonisotropic, Logconcave Distributions Theorem [Kannan, Salmasian, V ‘04]. The PCA subspace V is “close” to the span of the means, provided that the means are well-separated: the required separation is polynomial in k times the maximum directional variance σ_max. The polynomial was improved by Achlioptas-McSherry.

23 However,… PCA collapses separable “pancakes”

24 Limits of PCA The algorithm is not affine-invariant: any instance can be made bad by an affine transformation. Spherical Gaussians become parallel pancakes but remain separable.

25 Parallel Pancakes Still separable, but the previous algorithms don’t work.

26 Separability

27 Hyperplane Separability PCA is not affine-invariant. Is hyperplane separability sufficient to learn a mixture?

28 Affine-invariant principal components? What is an affine-invariant property that distinguishes 1 Gaussian from 2 pancakes? Or a ball from a cylinder?

29 Isotropic PCA
1. Make the point set isotropic via an affine transformation.
2. Reweight the points according to a spherically symmetric function f(|x|).
3. Return the 1st and 2nd moments of the reweighted points.

30 Isotropic PCA [BV’08] Goal: go beyond 1st and 2nd moments to find “interesting” directions. Why? What if all 2nd moments are equal? This isotropy can always be achieved by an affine transformation.

31 Ball vs Cylinder

32 Algorithm
1. Make the distribution isotropic.
2. Reweight the points.
3. If the mean shifts, partition along this direction. Recurse.
4. Otherwise, partition along the top principal component. Recurse.

33 Step 1: Enforcing Isotropy Isotropy: (a) mean = 0 and (b) variance = 1 in every direction. Step 1a: move the origin to the mean (translation). Step 1b: apply the linear transformation Σ^{-1/2}, where Σ is the sample covariance matrix.
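A minimal sketch of Step 1, assuming the sample covariance is nonsingular (the function name make_isotropic is introduced here for illustration):

```python
import numpy as np

def make_isotropic(X):
    """Translate the points to mean zero, then rescale so the sample covariance becomes the identity."""
    Xc = X - X.mean(axis=0)                     # Step 1a: move the origin to the mean
    cov = Xc.T @ Xc / len(Xc)                   # sample covariance Sigma
    evals, evecs = np.linalg.eigh(cov)
    W = evecs @ np.diag(1.0 / np.sqrt(evals)) @ evecs.T   # Sigma^{-1/2}
    return Xc @ W                               # Step 1b: apply the linear transformation

X = np.random.default_rng(4).normal(size=(1000, 5)) * np.array([10, 5, 1, 1, 1])
Y = make_isotropic(X)
print(np.allclose(Y.mean(axis=0), 0), np.allclose(Y.T @ Y / len(Y), np.eye(5)))  # True True
```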

36 Step 1: Enforcing Isotropy Turns every well-separated mixture into (almost) parallel pancakes, separable along the inter-mean direction. PCA no longer helps us!

37 Algorithm
1. Make the distribution isotropic.
2. Reweight the points (using a Gaussian).
3. If the mean shifts, partition along this direction. Recurse.
4. Otherwise, partition along the top principal component. Recurse.
(A code sketch of steps 2-4 follows below.)
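A hedged numpy sketch of steps 2-4 on already-isotropic data; the Gaussian bandwidth and the mean-shift threshold are illustrative choices, not the constants from the paper, and the recursion itself is omitted.

```python
import numpy as np

def isopca_direction(Y, bandwidth=1.0, shift_threshold=0.1):
    """Y is assumed isotropic (mean 0, covariance I). Returns a direction to partition along."""
    w = np.exp(-np.sum(Y ** 2, axis=1) / (2 * bandwidth ** 2))  # spherically symmetric Gaussian weights
    w = w / w.sum()
    mu = w @ Y                                   # reweighted mean
    if np.linalg.norm(mu) > shift_threshold:     # imbalanced case: the mean shifts
        return mu / np.linalg.norm(mu)
    M = (Y * w[:, None]).T @ Y                   # reweighted second-moment matrix
    _, evecs = np.linalg.eigh(M)
    return evecs[:, -1]                          # balanced case: top principal component
```

The partition step itself (split the samples by their projection onto the returned direction, then recurse on each part) is left out of this sketch.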

38 Two parallel pancakes Isotropy pulls apart the components. If one is heavier, then the overall mean shifts along the separating direction. If not, the top principal component is along the separating direction.

39 Steps 3 & 4: Illustrative Examples (figures): imbalanced pancakes; balanced pancakes.

40 Step 3: Imbalanced Case The mean shifts toward the heavier component.

41 Step 4: Balanced Case The mean doesn’t move, by symmetry. The top principal component is the inter-mean direction.

42 Unraveling Gaussian Mixtures Theorem [Brubaker-V. ’08] The algorithm correctly classifies samples from two arbitrary Gaussians “separable by a hyperplane” with high probability.

43 Original Data 40 dimensions, 8000 samples (subsampled for visualization) Means of (0,0) and (1,1).

44 Random Projection

45 PCA

46 Isotropic PCA

47 Results: k=2 Theorem: For k=2, the algorithm succeeds if there is some direction v along which the two components are well separated (i.e., hyperplane separability).

48 Fisher Criterion For a direction p, J(p) = (intra-component variance along p) / (total variance along p). Overlap: min of J(p) over all directions p. (Small overlap => well separated.) Theorem: For k=2, the algorithm succeeds if the overlap is sufficiently small.
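For a labeled sample, this overlap can be computed as the smallest generalized eigenvalue of the pooled intra-component covariance against the total covariance; the sketch below is a standard Fisher-discriminant computation, not code from the paper (the data and the function name overlap are illustrative).

```python
import numpy as np
from scipy.linalg import eigh

def overlap(X, labels):
    """min over directions p of (intra-component variance along p) / (total variance along p)."""
    Xc = X - X.mean(axis=0)
    T = Xc.T @ Xc / len(X)                      # total covariance
    W = np.zeros_like(T)                        # pooled intra-component covariance
    for c in np.unique(labels):
        Z = X[labels == c] - X[labels == c].mean(axis=0)
        W += Z.T @ Z / len(X)                   # weighted by the mixing proportion
    return eigh(W, T, eigvals_only=True)[0]     # smallest generalized eigenvalue

rng = np.random.default_rng(5)
X = np.vstack([rng.normal(size=(500, 4)), rng.normal(size=(500, 4)) + [6, 0, 0, 0]])
labels = np.repeat([0, 1], 500)
print(overlap(X, labels))                       # small value: well separated along some direction
```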

49 Results: k>2 For k > 2, we need k-1 orthogonal directions with small overlap.

50 Fisher Criterion Make F isotropic. For a subspace S, J(S) = the maximum intra-component variance within S. Overlap = min of J(S) over all (k-1)-dimensional subspaces S. Overlap is affine-invariant. Theorem [BV ’08]: For k>2, the algorithm succeeds if the overlap is sufficiently small.
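Continuing the k=2 sketch above (same hypothetical W and T): by the Courant-Fischer min-max characterization, the minimum over (k-1)-dimensional subspaces of the worst intra/total variance ratio within the subspace is the (k-1)-th smallest generalized eigenvalue.

```python
from scipy.linalg import eigh

def subspace_overlap(W, T, k):
    """min over (k-1)-dim subspaces S of max over directions p in S of (intra variance / total variance)."""
    vals = eigh(W, T, eigvals_only=True)   # generalized eigenvalues in ascending order
    return vals[k - 2]                     # the (k-1)-th smallest
```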

51 Original Data (k=3) 40 dimensions, 15000 samples (subsampled for visualization).

52 Random Projection

53 PCA

54 Isotropic PCA

55 Conclusion Most of this is in a new book, “Spectral Algorithms,” with Ravi Kannan. IsoPCA gives an affine-invariant clustering (independent of a model). What do Iso-PCA directions mean? Robust PCA (Brubaker ’08; robust to small changes in the point set), applied to noisy/best-fit mixtures. PCA for tensors?

