Lightseminar: Learned Representation in AI. An Introduction to Locally Linear Embedding. Lawrence K. Saul, Sam T. Roweis. Presented by Chan-Su Lee.


1 Lightseminar: Learned Representation in AI. An Introduction to Locally Linear Embedding. Lawrence K. Saul, Sam T. Roweis. Presented by Chan-Su Lee

2 Introduction
● Preprocessing
  – Obtain a more useful representation of the information
  – Useful for subsequent operations such as classification and pattern recognition
● Unsupervised learning
  – Automatic methods to recover hidden structure in data
  – Density estimation
  – Dimensionality reduction

3 PCA & MDS
● Principal Component Analysis (PCA)
  – Linear projections of greatest variance, from the top eigenvectors of the data covariance matrix
● Multidimensional Scaling (MDS)
  – Low-dimensional embedding that best preserves pairwise distances between data points
● Both model linear variability in high-dimensional data
● No local minima
● Both find a linear subspace and cannot deal properly with data lying on nonlinear manifolds
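The PCA recipe on this slide, projecting onto the top eigenvectors of the data covariance matrix, can be sketched in a few lines of NumPy. The data here is made up purely for illustration (points in 3-D lying near a 2-D plane):

```python
import numpy as np

# Toy data: 100 points in 3-D whose third direction carries almost no variance.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
X[:, 2] *= 0.01

# PCA: project onto the top eigenvectors of the data covariance matrix.
Xc = X - X.mean(axis=0)                 # center the data
cov = Xc.T @ Xc / len(Xc)               # D x D covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
top2 = eigvecs[:, -2:]                  # two directions of greatest variance
Y = Xc @ top2                           # 2-D linear projection
print(Y.shape)  # (100, 2)
```

Because PCA is a single eigendecomposition of a fixed matrix, there are no local minima, exactly as the slide notes, but the projection is still strictly linear.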

4 Problems with PCA and MDS
● They create distortions in local and global geometry
  – Faraway data points can be mapped to nearby points

5 LLE (Locally Linear Embedding)
● Property
  – Preserves the local configurations of nearest neighbors
● LLE
  – Local: only neighbors contribute to each reconstruction
  – Linear: reconstructions are confined to linear subspaces
● Assumptions
  – Well-sampled data → each point and its neighbors lie on or near a locally linear patch of the manifold
  – d-dimensional manifold → roughly 2d neighbors per point

6

7 LLE Algorithm
● Step 1: Neighborhood search
  – Compute the neighbors of each data point
  – K nearest neighbors per data point
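Step 1 is a plain K-nearest-neighbor search by Euclidean distance. A brute-force sketch (the function name and toy data are illustrative, not from the slides):

```python
import numpy as np

def nearest_neighbors(X, K):
    """Step 1 of LLE: indices of the K nearest neighbors of each point
    (excluding the point itself), by Euclidean distance."""
    # Pairwise squared distances via |a - b|^2 = |a|^2 - 2 a.b + |b|^2.
    sq = (X ** 2).sum(axis=1)
    d2 = sq[:, None] - 2.0 * X @ X.T + sq[None, :]
    order = np.argsort(d2, axis=1)
    return order[:, 1:K + 1]   # column 0 is the point itself

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 3))
nbrs = nearest_neighbors(X, K=4)
print(nbrs.shape)  # (10, 4)
```

Brute force costs O(DN²), which matches the complexity quoted later in the deck; tree-based searches can do better for large N.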

8 LLE Algorithm
● Step 2: Constrained least-squares fits
  – Reconstruction error: E(W) = Σ_i | X_i − Σ_j W_ij X_j |²
  – Constraints: W_ij = 0 if X_j is not a neighbor of X_i, and Σ_j W_ij = 1
  – These constraints make the optimal weights invariant to rotations, rescalings, and translations of each point and its neighbors
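The constrained fit has a closed form: shift each point's neighbors to the origin, build the local Gram matrix, solve it against a vector of ones, and rescale so the weights sum to one. A minimal sketch (the toy data and the regularization constant are assumed choices; regularization is needed whenever K exceeds the input dimension, since the Gram matrix is then singular):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
K = 5

# Neighbor indices (Step 1), recomputed here by brute force.
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
nbrs = np.argsort(d2, axis=1)[:, 1:K + 1]

# Step 2: constrained least-squares weights.
N = len(X)
W = np.zeros((N, N))
for i in range(N):
    Z = X[nbrs[i]] - X[i]                 # neighbors shifted so X_i is the origin
    C = Z @ Z.T                           # local K x K Gram matrix
    C += 1e-3 * np.trace(C) * np.eye(K)   # regularization (assumed amount)
    w = np.linalg.solve(C, np.ones(K))
    W[i, nbrs[i]] = w / w.sum()           # enforces sum_j W_ij = 1

print(np.allclose(W.sum(axis=1), 1.0))  # True
```

Each row of W has only K nonzero entries, which is what makes the matrix in the next step sparse.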

9 LLE Algorithm
● Step 3: Eigenvalue problem
  – Embedding cost: Φ(Y) = Σ_i | Y_i − Σ_j W_ij Y_j |²
  – Constraint Σ_i Y_i = 0 → embedding centered at the origin
  – Constraint (1/N) Σ_i Y_i Y_iᵀ = I → avoids degenerate solutions
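All three steps fit in one short script. Under the sum-to-one weight constraint, the cost Φ(Y) is a quadratic form in the matrix M = (I − W)ᵀ(I − W), so the centered, non-degenerate minimizers are the bottom eigenvectors of M, discarding the constant eigenvector whose eigenvalue is ≈ 0. The "curved sheet" toy data is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy manifold: a gently curved 2-D sheet embedded in 3-D.
t = rng.uniform(0, 3, size=(200, 2))
X = np.column_stack([t[:, 0], t[:, 1], np.sin(t[:, 0])])
N, K, d = len(X), 8, 2

# Steps 1-2: neighbors and constrained reconstruction weights.
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
nbrs = np.argsort(d2, axis=1)[:, 1:K + 1]
W = np.zeros((N, N))
for i in range(N):
    Z = X[nbrs[i]] - X[i]
    C = Z @ Z.T
    C += 1e-3 * np.trace(C) * np.eye(K)   # regularization (assumed amount)
    w = np.linalg.solve(C, np.ones(K))
    W[i, nbrs[i]] = w / w.sum()

# Step 3: Phi(Y) is minimized by the bottom eigenvectors of
# M = (I - W)^T (I - W); the very bottom one is the constant vector
# (eigenvalue ~ 0), and dropping it centers the embedding at the origin.
M = (np.eye(N) - W).T @ (np.eye(N) - W)
eigvals, eigvecs = np.linalg.eigh(M)   # ascending eigenvalues
Y = eigvecs[:, 1:d + 1]                # N x d embedding coordinates
print(Y.shape)  # (200, 2)
```

A dense eigendecomposition is used here for brevity; in practice M is sparse (K nonzeros per row of W), so sparse eigensolvers give the space efficiency claimed on slide 12.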

10 Embedding by LLE
● Corner faces map to the corners of the two-dimensional embedding

11 Examples

12 LLE advantages
● Ability to discover nonlinear manifolds of arbitrary dimension
● Non-iterative
● Globally optimal
● Few parameters: K, d
● O(DN²) computation, and space-efficient thanks to sparse matrices

13 LLE disadvantages
● Requires a smooth, non-closed, densely sampled manifold
● Quality of the manifold characterization depends on the choice of neighborhood
● Cannot itself estimate the low dimensionality d
● Sensitive to outliers

14 Comparisons: PCA vs. LLE vs. Isomap
● PCA: finds the linear-subspace projection vectors that best fit the data
● LLE: finds embedding coordinate vectors that best fit local neighborhood relationships
● Isomap: finds embedding coordinate vectors that preserve geodesic shortest-path distances

