
1 Nonlinear Dimension Reduction:
Semi-Definite Embedding vs. Locally Linear Embedding
Li Zhang and Lin Liao

2 Outline
Nonlinear Dimension Reduction
Semi-Definite Embedding
Locally Linear Embedding
Experiments

3 Dimension Reduction Goal: understand images in terms of their basic modes of variability. Unsupervised learning problem: given N high-dimensional inputs Xi ∈ R^D, find a faithful one-to-one mapping to N low-dimensional outputs Yi ∈ R^d, with d < D. Methods: linear methods (PCA, MDS) fit a subspace; nonlinear methods (SDE, LLE) fit a manifold.
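To make the linear baseline concrete, here is a minimal PCA sketch in numpy (the function name `pca_embed` and the toy data are illustrative, not from the slides): it projects N points in R^D onto the top-d principal subspace.

```python
import numpy as np

def pca_embed(X, d):
    """Linear dimension reduction: project N points in R^D (rows of X)
    onto the top-d principal subspace found by SVD."""
    Xc = X - X.mean(axis=0)                              # center the data
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)    # rows of Vt = principal directions
    return Xc @ Vt[:d].T                                 # (N, d) low-dimensional output

# Data lying on a 2-D linear subspace of R^3 is recovered exactly by PCA;
# data on a curved manifold (e.g. a Swiss roll) is not -- hence SDE and LLE.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2)) @ rng.normal(size=(2, 3))  # rank-2 data in R^3
Y = pca_embed(X, 2)
```

Because the toy data is exactly rank 2, the projection preserves all of its (centered) variance.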

4 Semi-Definite Embedding
Given inputs X = (X1, ..., XN) and a neighborhood size k:
Find the k nearest neighbors of each input Xi.
Formulate and solve the corresponding semi-definite programming problem to find the optimal Gram matrix of the outputs, K = Y^T Y.
Extract a low-dimensional embedding Y (approximately) from the eigenvectors and eigenvalues of the Gram matrix K.
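The last step above is just an eigendecomposition of K; a minimal sketch (the helper name `embedding_from_gram` is mine, not from the slides):

```python
import numpy as np

def embedding_from_gram(K, d):
    """Recover a d-dimensional embedding Y from a Gram matrix K = Y^T Y
    via its top-d eigenvectors, scaled by the square roots of the eigenvalues."""
    vals, vecs = np.linalg.eigh(K)            # eigenvalues in ascending order
    vals, vecs = vals[::-1], vecs[:, ::-1]    # sort descending
    # keep the top-d components; clip tiny negative eigenvalues from round-off
    return vecs[:, :d] * np.sqrt(np.clip(vals[:d], 0, None))

# Sanity check on a Gram matrix built from known 2-D points:
# the recovered Y matches the original up to rotation, so distances agree.
rng = np.random.default_rng(1)
Y_true = rng.normal(size=(6, 2))
K = Y_true @ Y_true.T
Y = embedding_from_gram(K, 2)
```

Since the example K has exact rank 2, the top-2 components reconstruct it exactly; for a solved SDE problem the trailing eigenvalues are merely small, which is why the slide says "approximately".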

5 Semi-Definite Programming
Maximize C·X
subject to AX = b and matrix(X) positive semi-definite,
where X is a vector of size n², and matrix(X) is the n-by-n matrix reshaped from X.

6 Semi-Definite Programming
Constraints:
Maintain the distances between neighbors: |Xi − Xj|² = |Yi − Yj|² for each neighbor pair (i, j), i.e. Kii + Kjj − Kij − Kji = Gii + Gjj − Gij − Gji, where K = Y^T Y and G = X^T X.
Center the outputs on the origin: ΣYi = 0, i.e. Σij Kij = 0.
K is positive semi-definite.
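Both identities above follow directly from the definition of the Gram matrix; a quick numerical check (toy data, not from the slides):

```python
import numpy as np

# Squared distances in terms of the Gram matrix:
#   |Yi - Yj|^2 = Kii + Kjj - Kij - Kji,  where K holds inner products Yi.Yj.
# (The slides write K = Y^T Y with points as columns; with points as rows
# of a numpy array, the same matrix is Y @ Y.T.)
rng = np.random.default_rng(2)
Y = rng.normal(size=(5, 3))            # 5 points in R^3
K = Y @ Y.T                            # Gram matrix of inner products
i, j = 0, 3
lhs = np.sum((Y[i] - Y[j]) ** 2)
rhs = K[i, i] + K[j, j] - K[i, j] - K[j, i]

# Centering: sum_i Yi = 0 implies sum_ij Kij = |sum_i Yi|^2 = 0.
Yc = Y - Y.mean(axis=0)
Kc = Yc @ Yc.T
```

Here `lhs` equals `rhs`, and `Kc.sum()` vanishes (up to round-off) once the points are centered.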

7 Semi-Definite Programming
Objective function: maximize the sum of pairwise squared distances between outputs, Σij |Yi − Yj|², which under the centering constraint reduces to maximizing Tr(K).
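Concretely, for centered outputs Σij |Yi − Yj|² = 2N·Tr(K), so the two objectives differ only by a constant factor; a quick numerical check (toy data, not from the slides):

```python
import numpy as np

# For centered outputs (sum_i Yi = 0), the pairwise-distance objective
# collapses to a trace:  sum_ij |Yi - Yj|^2 = 2N * trace(K).
rng = np.random.default_rng(3)
Y = rng.normal(size=(7, 2))
Y -= Y.mean(axis=0)                    # enforce the centering constraint
K = Y @ Y.T
N = len(Y)
pairwise = np.sum((Y[:, None, :] - Y[None, :, :]) ** 2)
trace_form = 2 * N * np.trace(K)
```

The identity follows from expanding |Yi − Yj|² = |Yi|² + |Yj|² − 2 Yi·Yj and using Σi Yi = 0 to kill the cross term.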

8 Semi-Definite Programming
Solve for the best K using any SDP solver:
CSDP (fast, stable)
SeDuMi (stable, but slow)
SDPT3 (new; fastest, but not well tested)

9 Locally Linear Embedding
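As a companion to the SDE steps above, here is a minimal numpy sketch of LLE (my own illustration, not the authors' code): reconstruct each point from its k nearest neighbors with weights summing to 1, then embed using the bottom eigenvectors of (I − W)^T (I − W).

```python
import numpy as np

def lle(X, k, d, reg=1e-3):
    """Minimal LLE sketch: local reconstruction weights, then a spectral embedding."""
    N = len(X)
    D = np.linalg.norm(X[:, None] - X[None], axis=-1)   # pairwise distances
    W = np.zeros((N, N))
    for i in range(N):
        nbrs = np.argsort(D[i])[1:k + 1]                # k nearest, skipping self
        Z = X[nbrs] - X[i]                              # neighbors in local coordinates
        C = Z @ Z.T                                     # local Gram ("covariance") matrix
        C += reg * np.trace(C) * np.eye(k)              # regularize for stability
        w = np.linalg.solve(C, np.ones(k))              # best reconstruction weights
        W[i, nbrs] = w / w.sum()                        # normalize: weights sum to 1
    # Embedding: eigenvectors 2..d+1 of M = (I - W)^T (I - W);
    # the bottom (constant) eigenvector is discarded.
    M = (np.eye(N) - W).T @ (np.eye(N) - W)
    vals, vecs = np.linalg.eigh(M)
    return vecs[:, 1:d + 1]

# Toy run: a gently curved 1-D arc embedded in R^3, flattened to 1-D.
rng = np.random.default_rng(4)
t = np.sort(rng.uniform(0, 1, 60))
X = np.c_[np.cos(2 * t), np.sin(2 * t), t] + 0.001 * rng.normal(size=(60, 3))
Y = lle(X, k=6, d=1)
```

Because W's rows sum to 1, the all-ones vector is an exact null eigenvector of M, which is why the embedding uses eigenvectors 2 through d+1.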

10 Swiss Roll (N=800): SDE with k=4 vs. LLE with k=18

11 LLE on Swiss Roll, varying k

12 LLE on Swiss Roll, varying k

13 LLE on Swiss Roll, varying k

14 Twos (N=638): SDE with k=4 vs. LLE with k=18

15 Teapots (N=400): SDE with k=4 vs. LLE with k=12

16 LLE on Teapots, varying N: N=400, 200, 100, 50

17 Faces (N=1900): SDE failed; LLE with k=12

18 SDE versus LLE: similar idea
First, compute neighborhoods in the input space.
Second, construct a square matrix characterizing the local relationships among the input data.
Finally, compute the low-dimensional embedding from the eigenvectors of that matrix.

19 SDE versus LLE: different performance
SDE: good embedding quality and more robust to sparse samples, but the optimization is slow and hard to scale to large data sets.
LLE: fast and scalable to large data sets, but lower quality when samples are sparse, due to its locally linear assumption.
