
1 Autoencoders Mostafa Heidarpour
In the name of God

2 Autoencoders An auto-encoder is an artificial neural network used for learning efficient codings. The aim of an auto-encoder is to learn a compressed representation (encoding) for a set of data, which means it can be used for dimensionality reduction.

3 Autoencoders Auto-encoders use three or more layers:
An input layer. For example, in a face recognition task, the neurons in the input layer could map to pixels in the photograph.
A number of considerably smaller hidden layers, which will form the encoding.
An output layer, where each neuron has the same meaning as in the input layer.

4 Autoencoders

5 Autoencoders Encoder and decoder
Encoder: $h = f_\theta(x)$, where $h$ is the feature vector, representation, or code computed from the input $x$.
Decoder: maps from feature space back into input space, producing a reconstruction $r = g_\theta(h)$ and attempting to incur the lowest possible reconstruction error.
Good generalization means low reconstruction error on test examples, while having high reconstruction error for most other configurations of $x$.
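A minimal MATLAB sketch of this encoder/decoder pair, assuming a single tanh hidden layer; the weight names W1, b1, W2, b2 and the linear decoder are illustrative assumptions, not taken from the slides:

f = @(x) tanh(W1*x + b1);    % encoder: input x -> code h
g = @(h) W2*h + b2;          % decoder: code h -> reconstruction r
r = g(f(x));                 % reconstruction of a column vector x
err = sum((x - r).^2);       % squared reconstruction error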

6 Autoencoders

7 Autoencoders

8 Autoencoders In summary, basic autoencoder training consists in finding a value of the parameter vector $\theta$ minimizing the reconstruction error:
$\mathcal{J}_{AE}(\theta) = \sum_t L\left(x^{(t)}, g_\theta(f_\theta(x^{(t)}))\right)$
This minimization is usually carried out by stochastic gradient descent.
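A minimal sketch of such training in MATLAB, assuming a d-by-N data matrix X, a tanh encoder, a linear decoder, and squared error; the code size, learning rate, and epoch count are illustrative assumptions:

[d, N] = size(X);
k = 3;                               % assumed code (bottleneck) size
W1 = 0.01*randn(k, d); b1 = zeros(k, 1);
W2 = 0.01*randn(d, k); b2 = zeros(d, 1);
eta = 0.01;                          % assumed learning rate
for epoch = 1:50
    for t = randperm(N)              % stochastic: one example at a time
        x = X(:, t);
        h = tanh(W1*x + b1);         % encoder: h = f(x)
        r = W2*h + b2;               % decoder: reconstruction
        e = r - x;                   % gradient of 0.5*||r - x||^2 w.r.t. r
        dW2 = e*h';  db2 = e;
        dh  = (W2'*e) .* (1 - h.^2); % backprop through tanh
        dW1 = dh*x'; db1 = dh;
        W1 = W1 - eta*dW1; b1 = b1 - eta*db1;
        W2 = W2 - eta*dW2; b2 = b2 - eta*db2;
    end
end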

9 Regularized autoencoders
To capture the structure of the data-generating distribution, it is therefore important that something in the training criterion or the parameterization prevents the autoencoder from learning the identity function, which has zero reconstruction error everywhere. This is achieved through various means in the different forms of autoencoders; we call these regularized autoencoders.

10 Autoencoders
Denoising auto-encoders (DAE): learn to reconstruct the clean input from a corrupted version (see the sketch after this list).
Contractive auto-encoders (CAE): robustness to small perturbations around the training points; reduce the number of effective degrees of freedom of the representation (around each point) by making the derivative of the encoder small (saturating the hidden units).
Sparse autoencoders: sparsity in the representation can be achieved by penalizing the hidden unit biases or by directly penalizing the output of the hidden unit activations.
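A minimal denoising sketch in MATLAB, using the same fitnet-style training as the examples below; the additive Gaussian corruption and its level (0.1) are assumptions:

Xnoisy = X + 0.1*randn(size(X));   % corrupted version of the data X
net = fitnet([8 3 8]);             % bottleneck network, as in the later examples
net = train(net, Xnoisy, X);       % learn to reconstruct the clean input from the corrupted one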

11 Example Input, hidden, and output nodes (network diagram)

12 Example
net = fitnet(3);         % network with a single hidden layer of 3 units (the bottleneck)
net = train(net, X, X);  % train with the data X as both input and target (autoencoder)

13 Example
net = fitnet([8 3 8]);   % three hidden layers: 8 -> 3 -> 8 (3-unit bottleneck code)
net = train(net, X, X);  % as before, the input is its own target
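One way to read out the learned 3-unit code from the trained net is to apply the encoder half manually. A sketch, assuming the default input/output preprocessing is disabled before training so the raw weights apply directly to X:

net = fitnet([8 3 8]);
net.inputs{1}.processFcns = {};    % disable default input normalization (assumes data already scaled)
net.outputs{4}.processFcns = {};   % likewise for the output; layer 4 is the output layer here
net = train(net, X, X);
A1 = tansig(net.IW{1,1}*X + net.b{1});     % first hidden layer activations (8 units)
code = tansig(net.LW{2,1}*A1 + net.b{2});  % second hidden layer (3 units): the learned encoding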

14 Example

15

16 Introduction The auto-encoder network has not previously been utilized for clustering tasks. To make it suitable for clustering, a new objective function embedded into the auto-encoder model is proposed.

17 Proposed Model

18 Proposed Model Suppose a one-layer auto-encoder network as an example (minimizing the reconstruction error), and embed the clustering objective function into it:
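A plausible form of the embedded objective is the reconstruction error plus a penalty pulling each code toward its nearest cluster center; the weight $\lambda$ and the exact penalty form are assumptions here, not taken from the slides:

$\min_\theta \sum_t \left\| x^{(t)} - g_\theta\!\left(f_\theta(x^{(t)})\right) \right\|^2 + \lambda \sum_t \left\| f_\theta(x^{(t)}) - c^{*}_{t} \right\|^2$

where $c^{*}_{t}$ is the cluster center closest to the code $f_\theta(x^{(t)})$, and the centers are updated alternately with the network parameters.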

19 Proposed Algorithm

20 Experiments All algorithms are tested on 3 databases:
MNIST contains 60,000 handwritten digit images (0∼9) with a resolution of 28 × 28.
USPS consists of 4,649 handwritten digit images (0∼9) with a resolution of 16 × 16.
YaleB is composed of 5,850 face images over ten categories, and each image has 1,200 pixels.
Model: a four-layer auto-encoder network with the structure of

21 Experiments
Baseline algorithms: compared with three classic and widely used clustering algorithms: k-means, spectral clustering, and N-cut.
Evaluation criteria: accuracy (ACC) and normalized mutual information (NMI); a sketch of NMI follows.
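A minimal MATLAB sketch of NMI, assuming 1-based integer label vectors y (ground truth) and c (cluster assignments); ACC additionally requires an optimal matching of cluster labels to classes (e.g. the Hungarian algorithm), omitted here:

n  = numel(y);
P  = accumarray([y(:) c(:)], 1) / n;   % joint distribution over (class, cluster)
Py = sum(P, 2);  Pc = sum(P, 1);       % marginal distributions
PP = Py * Pc;                          % product of marginals
nz = P > 0;
MI  = sum(P(nz) .* log(P(nz) ./ PP(nz)));       % mutual information
Hy  = -sum(Py(Py > 0) .* log(Py(Py > 0)));      % entropy of ground truth
Hc  = -sum(Pc(Pc > 0) .* log(Pc(Pc > 0)));      % entropy of clustering
NMI = MI / sqrt(Hy * Hc);              % in [0, 1], higher is better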

22 Quantitative Results

23 Visualization

24 Difference of Spaces

25 Thanks for your attention. Any questions?

