
1 Nonlinear Unsupervised Feature Learning: How Local Similarities Lead to Global Coding
Amirreza Shaban

2 Outline
- Feature Learning
- Coding Methods
  - Vector Quantization
  - Sparse Coding
  - Local Coordinate Coding
  - Locality-constrained Linear Coding
  - Local Similarity Global Coding
- Experiments

3 Feature Learning
- The goal of feature learning is to convert a complex, high-dimensional nonlinear learning problem into a much simpler linear one.
- Learned features capture the nonlinearity of the data structure in a way that lets the problem be solved by a much simpler linear learning method.
- The topic is closely related to nonlinear dimensionality reduction.

4 Feature Learning

5 Coding Methods
- Coding methods are a class of algorithms aimed at finding high-level representations of low-level features.
- Given unlabeled input data $X = \{x_i\}_{i=1}^{n} \subset \mathbb{R}^d$ and a codebook $C = \{c_j\}_{j=1}^{m}$ of $m$ atoms, the goal is to learn the coding vector $\gamma(x) \in \mathbb{R}^m$, where each element $\gamma_j(x)$ indicates the affinity of data point $x$ to the corresponding codebook atom $c_j$.

6 Vector Quantization
- Assign each data point to its nearest dictionary basis:
  $\gamma_j(x) = 1$ if $j = \arg\min_k \|x - c_k\|_2$, and $\gamma_j(x) = 0$ otherwise.
- The dictionary bases are the cluster centers learned by K-means.
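As a concrete illustration, here is a minimal numpy sketch of the assignment step (the function name and the brute-force distance computation are mine, not from the slides); the codebook would come from K-means as the slide says:

```python
import numpy as np

def vq_code(X, C):
    """One-hot VQ codes: gamma_j(x) = 1 iff c_j is the atom nearest to x.

    X: (n, d) data points; C: (m, d) codebook atoms (e.g. k-means centers).
    """
    d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(axis=-1)  # point-to-atom distances
    codes = np.zeros((X.shape[0], C.shape[0]))
    codes[np.arange(X.shape[0]), d2.argmin(axis=1)] = 1.0     # one-hot assignment
    return codes
```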

7 Vector Quantization
[Figure: the space partitioned into three Voronoi regions R1, R2, R3; points falling in each region receive the one-hot codes [1, 0, 0], [0, 1, 0], and [0, 0, 1] respectively.]

8 Sparse Coding
- Each data point is represented by a linear combination of a small number of codebook atoms.
- The coefficients are found by solving the following minimization problem:
  $\min_{\gamma} \; \|x - C\gamma\|_2^2 + \lambda \|\gamma\|_1$,
  where $C \in \mathbb{R}^{d \times m}$ stacks the atoms as columns.
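The slide does not say which solver is used; one standard choice for this lasso-type problem is ISTA (iterative soft-thresholding). A hedged sketch, with my own names and a fixed iteration count instead of a proper convergence test:

```python
import numpy as np

def sparse_code_ista(x, C, lam=0.1, n_iter=200):
    """Solve min_g ||x - C^T g||^2 + lam * ||g||_1 with ISTA.

    C: (m, d) codebook with atoms as rows; x: (d,) data point.
    """
    g = np.zeros(C.shape[0])
    L = np.linalg.norm(C, 2) ** 2         # Lipschitz constant of the (half) gradient
    for _ in range(n_iter):
        z = g - C @ (C.T @ g - x) / L     # gradient step on the reconstruction term
        g = np.sign(z) * np.maximum(np.abs(z) - lam / (2 * L), 0.0)  # soft-threshold
    return g
```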

9 Local Coordinate Coding
- It is empirically observed that sparse coding performs better when the non-zero coefficients correspond to bases local to the encoded point.
- The conclusion is that locality is more essential than sparsity.

10 Local Coordinate Coding
- Learning method:
  $\min_{\gamma} \; \|x - C\gamma\|_2^2 + \mu \sum_j |\gamma_j| \, \|c_j - x\|_2^2$
- It is proved that LCC can learn an arbitrary nonlinear function on the manifold.
- The rate of convergence depends only on the intrinsic dimensionality of the manifold, not on the ambient dimension $d$.
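Because the locality weights $\|c_j - x\|_2^2$ only rescale the $\ell_1$ penalty per coordinate, the same ISTA scheme as above applies with a per-coordinate threshold. A hedged sketch under that reading (the solver choice and names are mine):

```python
import numpy as np

def lcc_code(x, C, mu=0.1, n_iter=200):
    """min_g ||x - C^T g||^2 + mu * sum_j |g_j| * ||c_j - x||^2 via weighted ISTA.

    C: (m, d) codebook with atoms as rows; x: (d,) data point.
    """
    w = mu * ((C - x) ** 2).sum(axis=1)   # locality weight per atom
    g = np.zeros(C.shape[0])
    L = np.linalg.norm(C, 2) ** 2         # Lipschitz constant (spectral norm squared)
    for _ in range(n_iter):
        z = g - C @ (C.T @ g - x) / L     # gradient step on the quadratic term
        g = np.sign(z) * np.maximum(np.abs(z) - w / (2 * L), 0.0)  # weighted soft-threshold
    return g
```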

11 Locality-constrained Linear Coding
- LCC has a high computational cost and is not suitable for large-scale learning problems.
- LLC first guarantees locality by incorporating only the $k$ nearest bases in the coding process, and second minimizes the reconstruction error on the local patch:
  $\min_{\gamma} \; \|x - C_k \gamma\|_2^2 \quad \text{s.t.} \; \mathbf{1}^\top \gamma = 1$,
  where $C_k$ contains the $k$ bases nearest to $x$.
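The constrained local least-squares problem has a closed-form solution: solve the local Gram system, then normalize the result to sum to one. A hedged numpy sketch (the regularization constant and names are my choices):

```python
import numpy as np

def llc_code(x, C, k=5, eps=1e-6):
    """Approximate LLC: reconstruct x from its k nearest atoms; codes sum to one.

    C: (m, d) codebook with atoms as rows; x: (d,) data point.
    """
    idx = np.argsort(((C - x) ** 2).sum(axis=1))[:k]  # k nearest codebook atoms
    B = C[idx] - x                                    # shift so x sits at the origin
    G = B @ B.T                                       # local Gram matrix
    G += eps * np.trace(G) * np.eye(k)                # regularize for stability
    g_local = np.linalg.solve(G, np.ones(k))          # solve the local system
    g_local /= g_local.sum()                          # enforce the sum-to-one constraint
    g = np.zeros(C.shape[0])
    g[idx] = g_local
    return g
```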

12 Locality-constrained method drawback
- These codings are incapable of representing the similarity between non-neighboring points.

13 Locality-constrained method drawback
- The SVM labeling function can be written as:
  $f(x) = \sum_i \alpha_i y_i \, \gamma(x_i)^\top \gamma(x) + b$
- For points whose code shares no active atoms with the code of any training point, every inner product $\gamma(x_i)^\top \gamma(x)$ is zero, so the SVM fails to predict the label of $x$.
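To see the failure mode concretely, here is a small hedged example (all names and numbers are mine): training codes active only on atoms 0 and 1 give zero inner products with a far-away point whose code activates only atom 3, so the decision value degenerates to the bias.

```python
import numpy as np

def svm_decision(gamma_x, train_codes, alpha_y, b=0.0):
    """f(x) = sum_i alpha_i y_i <gamma(x_i), gamma(x)> + b, with codes as features."""
    return alpha_y @ (train_codes @ gamma_x) + b

# Training codes active only on atoms 0 and 1; the test point's code
# activates only atom 3, so every inner product vanishes and f(x) = b.
train_codes = np.array([[0.7, 0.3, 0.0, 0.0],
                        [0.4, 0.6, 0.0, 0.0]])
gamma_far = np.array([0.0, 0.0, 0.0, 1.0])
print(svm_decision(gamma_far, train_codes, np.array([0.5, -0.5])))  # prints 0.0
```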

14 Local Similarity Global Coding
- The idea is to propagate the coding coefficients along the data manifold through a $t$-step diffusion process.
- When $t = 1$, the coding is similar to recent locality-constrained coding methods.

15 Inductive LSGC
- The kernel function is computed as $K^{(t)} = P^t$, where $P = D^{-1} W$ is the transition matrix of the data similarity graph. It is referred to as the diffusion kernel of order $t$.
- The similarity $K^{(t)}(x, y)$ is high if $x$ and $y$ are connected to each other by many paths in the graph.
- It is known that $t$ controls the resolution at which we look at the data.
- Drawback: the kernel requires powers of an $n \times n$ matrix over all data points, so the computational cost is high.
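A hedged numpy sketch of the diffusion kernel; the Gaussian affinity and the bandwidth sigma are my assumptions, since the slide does not specify the base similarity:

```python
import numpy as np

def diffusion_kernel(X, t=3, sigma=1.0):
    """t-step diffusion similarities on a Gaussian affinity graph over X.

    W_ij = exp(-||x_i - x_j||^2 / (2 sigma^2)); P = D^{-1} W is the
    random-walk transition matrix; K^(t) = P^t sums similarity over all
    length-t paths, so points joined by many paths score high.
    """
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    P = W / W.sum(axis=1, keepdims=True)   # row-stochastic transition matrix
    return np.linalg.matrix_power(P, t)    # n x n: the costly step for large n
```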

16 Inductive LSGC
- A two-step process:
  - Projection: find the vector $f$ in which each element represents the one-step similarity between data point $x$ and a basis, i.e. $f_i = K^{(1)}(x, c_i)$.
  - Mapping: propagate the one-step similarities in $f$ to the other bases by a $(t-1)$-step diffusion process.

17 Inductive LSGC
- The coding coefficient of data point $x$ on basis $c_i$ is defined as:
  $\gamma_i(x) = \sum_j [P^{t-1}]_{ij} \, f_j$
- The overall coding can be written as:
  $\gamma(x) = P^{t-1} f(x)$,
  where $P$ is the transition matrix restricted to the codebook atoms.
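Putting slides 16 and 17 together, here is a hedged sketch of the inductive coding; the Gaussian one-step similarity, the bandwidth sigma, and diffusing over the codebook graph are my reading of the slides, not a verbatim transcription of the paper:

```python
import numpy as np

def lsgc_code(x, C, t=3, sigma=1.0):
    """Inductive LSGC sketch: project x onto the codebook, then diffuse.

    C: (m, d) codebook with atoms as rows; x: (d,) data point.
    """
    # Transition matrix of the random walk over the codebook atoms.
    d2 = ((C[:, None, :] - C[None, :, :]) ** 2).sum(axis=-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    P = W / W.sum(axis=1, keepdims=True)
    # Projection: one-step similarity of x to each atom.
    f = np.exp(-((C - x) ** 2).sum(axis=1) / (2 * sigma ** 2))
    # Mapping: (t-1)-step diffusion spreads f to non-neighboring atoms.
    return np.linalg.matrix_power(P, t - 1) @ f
```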

18 Inductive to Transductive convergence
- The inductive coding $p$ and the transductive coding $q$ are related by a linear transformation.
- Their difference converges to zero as the number of data points grows.

19 Experiments

20 Experiments

21 Thank you!

