1 "Random Projections on Smooth Manifolds" - A Short Summary
R. G. Baraniuk and M. K. Wakin, Foundations of Computational Mathematics
Presented to the University of Arizona Computational Sensing Journal Club
Presented by Phillip K. Poon, Optical Computing and Processing Laboratory

2 The Motivation/Problem
The Goal: Dimensionality Reduction
Extract low dimensional information from a signal that lives in a high dimensional space
Preserve critical relationships among parts of the data
Understand the structure of the low dimensional information
Applications: Compression
Find a low dimensional representation from which the original high dimensional data can be reproduced

3 General Overview of Dimensionality Reduction
Make a model (or estimate) of the expected behavior of the low dimensional information.
These models often assume some structure in the low dimensional information.
Ex. Even if the data is presented as a cube, the information probably lives on a line.
Using the constraints and assumptions imposed by the model, algorithms process the data into the desired low dimensional information.

4 Three Common Model Classes
Linear models
Sparse models
Manifold models

5 Linear Models
High dimensional data signals depend linearly on a low dimensional set of parameters.
These models often use a basis that allows the data x to be represented with a few coefficients indexed by Ω ⊂ {1, 2, …, N}, e.g., a Fourier orthonormal basis.
This results in a signal class F = span({ψ_i : i ∈ Ω}), a K-dimensional linear subspace of R^N.
These classes of signals have a linear geometry, which leads to linear algorithms for dimensionality reduction.
Principal Component Analysis (PCA) is the optimal linear algorithm (a sketch follows below).
An adaptive, data-dependent technique → requires training data.
[Diagram: high dimensional signal → low dimensional representation]
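As a hedged illustration of the linear model (not code from the paper; the data and all names here are synthetic assumptions), a minimal PCA sketch in NumPy:

```python
import numpy as np

# Synthetic example: N-dimensional signals that actually depend
# linearly on K underlying parameters (a K-dim subspace of R^N).
rng = np.random.default_rng(0)
N, K, n_signals = 1000, 5, 200
basis = np.linalg.qr(rng.standard_normal((N, K)))[0]   # orthonormal psi_i
coeffs = rng.standard_normal((n_signals, K))
X = coeffs @ basis.T                                   # signals in R^N

# PCA via the SVD of the centered data matrix: the top-K right
# singular vectors span the best-fitting K-dimensional subspace.
Xc = X - X.mean(axis=0)
_, s, Vt = np.linalg.svd(Xc, full_matrices=False)
low_dim = Xc @ Vt[:K].T          # K-dimensional representation
print(low_dim.shape)             # (200, 5)
```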

6 Sparse (Nonlinear) Models
Similar to linear models: a few coefficients in the new basis represent/approximate the high dimensional data.
Unlike the linear model, the relevant set of basis elements Ω may change from signal to signal.
No single low dimensional subspace suffices to represent all K-sparse signals.
Thus the set is not closed under addition, and thus NOT linear.
The set of sparse signals must be written as a nonlinear union of distinct K-dimensional subspaces:
Σ_K := F = ∪_{|Ω| = K} span({ψ_i : i ∈ Ω})
Examples of situations requiring sparse models: natural signals such as audio recordings, images, and piecewise smooth signals.
Information in the signal is encoded in the location and strength of each coefficient.
On the surface, sparse signals seem to require an adaptive and nonlinear technique! (See the sketch below.)
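A small sketch of the union-of-subspaces point, assuming a DCT basis purely for illustration (the slides do not fix a particular basis):

```python
import numpy as np
from scipy.fft import dct, idct

# Two K-sparse signals whose active basis elements (the set Omega)
# differ, so their sum is 2K-sparse at best: the K-sparse set is
# not closed under addition, hence not a linear subspace.
rng = np.random.default_rng(1)
N, K = 256, 4

def k_sparse_signal(rng, N, K):
    alpha = np.zeros(N)
    omega = rng.choice(N, size=K, replace=False)   # support set Omega
    alpha[omega] = rng.standard_normal(K)
    return idct(alpha, norm="ortho"), omega        # signal x = Psi @ alpha

x1, omega1 = k_sparse_signal(rng, N, K)
x2, omega2 = k_sparse_signal(rng, N, K)
print("supports differ:", sorted(omega1), sorted(omega2))

# The sum lives in the union of two subspaces, not in either one:
alpha_sum = dct(x1 + x2, norm="ortho")
print("nonzeros in sum:", np.sum(np.abs(alpha_sum) > 1e-9))  # typically 2K
```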

7 Compressive Sensing and Sparse (Nonlinear) Models
CS uses nonadaptive linear measurements:
The encoder requires no prior knowledge of the signal.
The decoder uses the sparsity model to recover the signal.
Every K-sparse signal can be recovered with high probability of success using M = O(K log(N/K)) linear measurements, y = Φx.
Φ is a measurement/encoding matrix drawn from a random distribution.
Random measurements allow for a universal, nonadaptive measurement scheme.
This invokes the Restricted Isometry Property (RIP) of the measurement scheme: no two sparse signals are mapped to the same location in the measurement space (similar in spirit to an injective mapping).
A new and explosive area of research!
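A hedged end-to-end sketch: a random Gaussian encoder plus orthogonal matching pursuit (OMP) as the decoder. OMP is one standard recovery algorithm chosen here for brevity; the slides do not commit to a particular decoder.

```python
import numpy as np

rng = np.random.default_rng(2)
N, K = 512, 5
M = int(4 * K * np.log(N / K))          # M = O(K log(N/K)) measurements

x = np.zeros(N)
x[rng.choice(N, K, replace=False)] = rng.standard_normal(K)   # K-sparse x
Phi = rng.standard_normal((M, N)) / np.sqrt(M)                # random encoder
y = Phi @ x                                                   # y = Phi x

def omp(Phi, y, K):
    """Greedy decoder: pick the column most correlated with the residual."""
    support, residual = [], y.copy()
    for _ in range(K):
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

x_hat = omp(Phi, y, K)
print("recovery error:", np.linalg.norm(x - x_hat))   # ~0 with high prob.
```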

8 The Restricted Isometry Property
Accurate recovery of sparse signals requires a stable embedding when encoded:
No two sparse signals map to the same location.
Sparse signals remain well separated in R^M.
A random measurement matrix obeying the RIP will guarantee accurate recovery.
Requires M = O(K log(N/K)) measurements.
The Johnson-Lindenstrauss Lemma is intimately connected to the RIP.
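An empirical Johnson-Lindenstrauss check (a sketch of the stable-embedding idea, not a proof; the point cloud and dimensions are assumptions):

```python
import numpy as np
from itertools import combinations

# A random M x N projection approximately preserves all pairwise
# distances of a point cloud, with M growing only logarithmically
# in the number of points.
rng = np.random.default_rng(3)
N, M, n_points = 2000, 100, 50

X = rng.standard_normal((n_points, N))
Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # norms preserved on average
Y = X @ Phi.T

ratios = [np.linalg.norm(Y[i] - Y[j]) / np.linalg.norm(X[i] - X[j])
          for i, j in combinations(range(n_points), 2)]
print(f"distance ratios in [{min(ratios):.2f}, {max(ratios):.2f}]")  # near 1
```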

9 Manifold Models
Classic example: the Swiss roll.
Linear models like PCA or MDS would fail: they only see the Euclidean straight-line distance!
Moreover, we cannot assume sparsity, so compressive sensing fails as well.
So manifold models are needed for these signals!
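A sketch of why the straight-line view fails, using scikit-learn's Swiss roll generator as a stand-in for the slide's figure:

```python
import numpy as np
from sklearn.datasets import make_swiss_roll
from sklearn.neighbors import kneighbors_graph
from scipy.sparse.csgraph import shortest_path

# On the Swiss roll, two points can be close in Euclidean distance yet
# far apart along the manifold (geodesic distance), which is exactly
# what misleads straight-line methods like PCA/MDS.
X, t = make_swiss_roll(n_samples=800, random_state=0)

graph = kneighbors_graph(X, n_neighbors=10, mode="distance")
geo = shortest_path(graph, method="D", directed=False)   # graph geodesics

i, j = np.argmin(t), np.argmax(t)   # opposite ends of the roll
print("Euclidean:", np.linalg.norm(X[i] - X[j]))
print("Geodesic :", geo[i, j])      # much larger than the Euclidean distance
```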

10 A Few Terms: What Is a Manifold?
Manifolds are very general shapes and structures:
A surface is a 2-manifold; a volume is a 3-manifold.
Manifolds are locally Euclidean: they seem "flat" if you look closely enough, i.e., the Earth looks "flat!"
Any object that can be "charted" is a manifold.
Geodesic distance: the shortest distance between points measured within the curved space.

11 Manifold Models
Manifold-modeled signals may not be representable with a sparse set of coefficients; the model is more general than the framework of bases.
Manifold models arise when we believe a signal has a continuous, and often nonlinear, dependence on some parameters.
A K-dimensional parameter θ carries the relevant information.
The signal x_θ ∈ R^N changes as a continuous and nonlinear function of these parameters.
The geometry of the signal class forms a nonlinear K-dimensional submanifold of R^N: F = { x_θ : θ ∈ Θ }, where Θ is a K-dimensional parameter space.
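As a hedged, concrete instance of such a signal family (a toy construction for this summary, not an example taken from the paper): Gaussian pulses x_θ whose center θ is the single parameter, tracing a nonlinear 1-D curve in R^N.

```python
import numpy as np

N = 256
grid = np.linspace(0, 1, N)

def x_theta(theta, width=0.03):
    """Signal as a continuous, nonlinear function of the parameter theta."""
    return np.exp(-((grid - theta) ** 2) / (2 * width ** 2))

thetas = np.linspace(0.2, 0.8, 100)            # parameter space Theta
F = np.stack([x_theta(t) for t in thetas])     # points on the manifold
print(F.shape)   # (100, 256): a 1-D curve living in R^256
```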

12 Manifold Learning
Most manifold-modeled signal processing requires learning the manifold structure.
Learning involves constructing nonlinear mappings from R^N to R^M that are adapted to training data.
The mapping preserves a characteristic property of the manifold.

13 A Classic Manifold Learning Technique: ISOMAP
Seeks to preserve all pairwise geodesic manifold distances!
Approximates faraway distances by adding up the small interpoint distances along the shortest route.
The burden of storing sampled data points increases with the native dimension N of the data.
Each manifold learning algorithm attempts to preserve a different geometric property of the underlying manifold.
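A minimal ISOMAP usage sketch via scikit-learn (the library choice is an assumption; ISOMAP itself is the slide's subject). Internally it builds a neighborhood graph to approximate geodesic distances and then applies classical MDS to those distances:

```python
import numpy as np
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap

X, t = make_swiss_roll(n_samples=1000, random_state=0)
embedding = Isomap(n_neighbors=10, n_components=2).fit_transform(X)
print(embedding.shape)   # (1000, 2): the 2-D "unrolled" coordinates

# The recovered first coordinate should correlate with the roll
# parameter t, confirming the geodesic structure was preserved.
print(np.abs(np.corrcoef(embedding[:, 0], t)[0, 1]))
```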

14 What Do Random Projections on Manifolds Do for Us?
It suggests that compressive sensing is not limited to sparse signals but also applies to manifold-modeled signals.
A provably small number M of random linear projections can preserve the key information of a manifold-modeled signal.
No training → non-adaptive.
The mapping to the lower dimension is linear → significantly reduced computation.
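A sketch combining the toy pulse manifold from above with a random projection (again an illustrative construction, not the paper's experiment): neighboring points on the manifold stay well separated after the non-adaptive linear map.

```python
import numpy as np

rng = np.random.default_rng(4)
N, M = 256, 20
grid = np.linspace(0, 1, N)
x_theta = lambda th: np.exp(-((grid - th) ** 2) / (2 * 0.03 ** 2))

thetas = np.linspace(0.2, 0.8, 100)
F = np.stack([x_theta(t) for t in thetas])      # manifold samples in R^N
Phi = rng.standard_normal((M, N)) / np.sqrt(M)  # non-adaptive linear map
G = F @ Phi.T                                   # projected manifold in R^M

# Compare consecutive-point (approximate geodesic) step lengths before
# and after projection: no step collapses, neighbors stay separated.
d_high = np.linalg.norm(np.diff(F, axis=0), axis=1)
d_low = np.linalg.norm(np.diff(G, axis=0), axis=1)
print("step-length ratios:", (d_low / d_high).min(), (d_low / d_high).max())
```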

15 Random Projections Tell Us a Lot
Manifold learning algorithms try to preserve key properties of the manifold.
Random projections simultaneously and accurately approximate many properties of the manifold (see the sketch below):
Ambient and geodesic distances between all pairs of points
Dimension of the manifold
Topology, local neighborhoods, and local angles
Lengths and curvature of paths on the manifold
Volume of the manifold
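One consequence, sketched here under assumed parameters: because neighborhoods and geodesic distances are approximately preserved, manifold learning can be run on the random projections themselves instead of the full N-dimensional data.

```python
import numpy as np
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap

rng = np.random.default_rng(5)
X, t = make_swiss_roll(n_samples=1000, random_state=0)
X = np.hstack([X, np.zeros((1000, 97))])         # embed the roll in N = 100 dims
Phi = rng.standard_normal((15, 100)) / np.sqrt(15)  # project to M = 15

# ISOMAP on the projected data still recovers the roll parameter t:
emb = Isomap(n_neighbors=10, n_components=2).fit_transform(X @ Phi.T)
print(np.abs(np.corrcoef(emb[:, 0], t)[0, 1]))   # structure survives projection
```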

16 The Punchline
Like CS for sparse models, random projections for manifold models require a number of random linear projections M that is linear in the "information level" K and logarithmic in the ambient dimension N.
This allows for a stable embedding of the manifold under a random linear projection from the high dimensional space into the low dimensional space.
CS can be extended from sparse signals to manifold-modeled signals!
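In formula form, a hedged paraphrase of the paper's stable-embedding theorem (quoted from memory; the logarithmic factor in the actual theorem also involves manifold properties such as volume and condition number, so consult the paper for the exact statement and constants):

```latex
% For a K-dimensional submanifold \mathcal{M} \subset \mathbb{R}^N and a
% suitably scaled random orthoprojector \Phi : \mathbb{R}^N \to \mathbb{R}^M,
% with probability at least 1 - \rho, for every pair x, y \in \mathcal{M}:
(1-\epsilon)\sqrt{\frac{M}{N}}
  \;\le\; \frac{\|\Phi x - \Phi y\|_2}{\|x - y\|_2}
  \;\le\; (1+\epsilon)\sqrt{\frac{M}{N}}
% provided the number of projections satisfies, schematically,
%   M = O\!\left( K \,\log(N)\, \log(1/\rho) \,/\, \epsilon^2 \right),
% i.e., linear in the manifold dimension K, logarithmic in N.
```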

17 Conclusion
Overview of dimensionality reduction:
Linear models
Sparse models
Manifold models
Manifold learning techniques
Random projections on smooth manifolds: more efficient than manifold learning, and possibly extends CS to non-sparse signals!

