Autoencoders Mostafa Heidarpour


Autoencoders Mostafa Heidarpour In the name of God.

Autoencoders An auto-encoder is an artificial neural network used for learning efficient codings. The aim of an auto-encoder is to learn a compressed representation (encoding) for a set of data, which means it is typically used for dimensionality reduction.

Autoencoders Auto-encoders use three or more layers:
- An input layer. For example, in a face recognition task, the neurons in the input layer could map to the pixels of the photograph.
- A number of considerably smaller hidden layers, which form the encoding.
- An output layer, where each neuron has the same meaning as in the input layer.
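As a rough MATLAB sketch of these layer sizes (the 64 × 64 image resolution and the hidden size of 100 are illustrative assumptions, not values from the slides):

d = 64*64;                                    % input layer: one neuron per pixel of the photograph
k = 100;                                      % considerably smaller hidden layer: the encoding
net = fitnet(k);                              % feedforward net with one hidden layer of k neurons
net = configure(net, rand(d, 2), rand(d, 2)); % d inputs and d outputs, so output neurons mirror input neurons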


Autoencoders Encoder: h = f(x), where h is the feature vector (the representation, or code) computed from the input x. Decoder: r = g(h), which maps from feature space back into input space, producing a reconstruction r that attempts to incur the lowest possible reconstruction error. Good generalization means low reconstruction error on test examples, while having high reconstruction error for most other configurations of x.
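A minimal MATLAB sketch of one encoder/decoder pass, with sigmoid units and tied weights (the dimensions, the sigmoid nonlinearity, and the weight tying are assumptions for illustration, not taken from the slides):

sig = @(z) 1 ./ (1 + exp(-z));       % logistic nonlinearity
d = 8; k = 3;                        % input and code dimensions
W = 0.1*randn(k, d); b = zeros(k, 1); c = zeros(d, 1);
x = [1 0 0 0 0 0 0 0]';              % one input configuration
h = sig(W*x + b);                    % encoder: h = f(x), the code
r = sig(W'*h + c);                   % decoder: r = g(h), the reconstruction
err = sum((r - x).^2);               % reconstruction error for this example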


Autoencoders In summary, basic autoencoder training consists in finding a value of the parameter vector (the weights and biases of the encoder f and decoder g) that minimizes the reconstruction error over the training set, J = Σ_t L(x^(t), g(f(x^(t)))), where L is a reconstruction loss such as squared error. This minimization is usually carried out by stochastic gradient descent.
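A hedged sketch of this minimization with plain stochastic gradient descent on a tied-weight sigmoid autoencoder and squared error (the learning rate, epoch count, and initialization are assumptions, not from the slides):

rng(0);
d = 8; k = 3; N = 8;
X = eye(d);                              % toy training set: the eight one-hot patterns
sig = @(z) 1 ./ (1 + exp(-z));
W = 0.1*randn(k, d); b = zeros(k, 1); c = zeros(d, 1);
eta = 0.5;                               % learning rate
for epoch = 1:5000
    for t = randperm(N)                  % one stochastic gradient step per example
        x  = X(:, t);
        h  = sig(W*x + b);               % encoder f
        r  = sig(W'*h + c);              % decoder g
        dr = (r - x) .* r .* (1 - r);    % gradient at the decoder pre-activation
        dh = (W*dr) .* h .* (1 - h);     % gradient at the encoder pre-activation
        W  = W - eta*(h*dr' + dh*x');    % W appears in both encoder and decoder (tied weights)
        c  = c - eta*dr;
        b  = b - eta*dh;
    end
end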

Regularized autoencoders To capture the structure of the data-generating distribution, it is therefore important that something in the training criterion or the parameterization prevents the autoencoder from learning the identity function, which has zero reconstruction error everywhere. This is achieved through various means in the different forms of autoencoders; we call these regularized autoencoders.

Autoencoders
- Denoising auto-encoders (DAE): learn to reconstruct the clean input from a corrupted version (a corruption sketch is given below).
- Contractive auto-encoders (CAE): enforce robustness to small perturbations around the training points; they reduce the number of effective degrees of freedom of the representation (around each point) by making the derivative of the encoder small (saturating the hidden units).
- Sparse autoencoders: sparsity in the representation can be achieved by penalizing the hidden-unit biases or by directly penalizing the hidden-unit activations.
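A sketch of the denoising idea only (the 25% masking-noise level and the setup below are illustrative assumptions): corrupt the input before encoding, but measure the loss against the clean input:

sig = @(z) 1 ./ (1 + exp(-z));
d = 8; k = 3;
W = 0.1*randn(k, d); b = zeros(k, 1); c = zeros(d, 1);
x = double(rand(d, 1) > 0.5);        % some clean input
p = 0.25;                            % corruption level: fraction of entries destroyed
x_tilde = x .* (rand(d, 1) > p);     % masking noise applied to the input
h = sig(W*x_tilde + b);              % encode the corrupted input
r = sig(W'*h + c);                   % decode
err = sum((r - x).^2);               % the loss compares the reconstruction to the CLEAN x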

Example Input and output patterns (the network is trained to reproduce its input): 10000000, 01000000, 00100000, 00010000, 00001000, 00000100, 00000010, 00000001, with a small bottleneck of hidden nodes in between.

Example net = fitnet([3]); % one hidden layer of 3 units: the 8-3-8 bottleneck

Example net = fitnet([8 3 8]); % three hidden layers of 8, 3, and 8 units: a deeper encoder-decoder
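A hedged sketch of how these fitnet calls might be turned into a complete run on the one-hot patterns above (the training options shown are assumptions, not taken from the slides):

X = eye(8);                          % the eight one-hot patterns, one per column
net = fitnet([3]);                   % becomes an 8-3-8 network once configured by the data
net.divideFcn = 'dividetrain';       % use all eight patterns for training
net.trainParam.epochs = 2000;
net.trainParam.showWindow = false;
net = train(net, X, X);              % target equals input: learn to reproduce it
R = net(X);                          % reconstructions, ideally close to eye(8)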


Introduction The auto-encoder network has not been utilized for clustering tasks. To make it suitable for clustering, a new objective function embedded into the auto-encoder model is proposed.

Proposed Model

Proposed Model Take a one-layer auto-encoder network as an example (trained by minimizing the reconstruction error) and embed the clustering objective function into it.
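The embedded objective itself is not preserved in this transcript. A common form for such clustering-regularized autoencoders (stated here as an assumption, not necessarily the exact formula from the slides) adds a term that pulls each learned code toward its assigned cluster centre:

J = Σ_t ||x^(t) - g(f(x^(t)))||^2 + λ Σ_t ||f(x^(t)) - c_k(t)||^2

where c_k(t) is the cluster centre currently assigned to example t, and λ trades off reconstruction accuracy against cluster compactness; optimization typically alternates between gradient steps on the network parameters and re-assignment of examples to their nearest centres.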

Proposed Algorithm

Experiments All algorithms are tested on 3 databases:
- MNIST contains 60,000 handwritten digit images (0-9) at a resolution of 28 × 28.
- USPS consists of 4,649 handwritten digit images (0-9) at a resolution of 16 × 16.
- YaleB is composed of 5,850 face images over ten categories; each image has 1,200 pixels.
Model: a four-layer auto-encoder network with the structure 1000-250-50-10.

Experiments Baseline algorithms: the method is compared with three classic and widely used clustering algorithms:
- K-means
- Spectral clustering
- N-cut
Evaluation criteria: accuracy (ACC) and normalized mutual information (NMI); a computation sketch for NMI follows below.
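NMI can be computed from two label vectors via their contingency table. A hypothetical MATLAB helper (not from the slides; the square-root normalization shown is one of several common variants):

function v = nmi(a, b)
    n  = numel(a);
    ua = unique(a);  ub = unique(b);
    C  = zeros(numel(ua), numel(ub));            % contingency table of the two labelings
    for i = 1:numel(ua)
        for j = 1:numel(ub)
            C(i, j) = sum(a(:) == ua(i) & b(:) == ub(j));
        end
    end
    P  = C / n;  pa = sum(P, 2);  pb = sum(P, 1); % joint and marginal frequencies
    PP = pa * pb;                                 % outer product of the marginals
    m  = P > 0;
    MI = sum(P(m) .* log(P(m) ./ PP(m)));         % mutual information (nats)
    Ha = -sum(pa(pa > 0) .* log(pa(pa > 0)));     % entropies of the two labelings
    Hb = -sum(pb(pb > 0) .* log(pb(pb > 0)));
    v  = MI / sqrt(Ha * Hb);                      % normalized mutual information
end

For example, nmi([1 1 2 2], [2 2 1 1]) returns 1, because the two labelings partition the data identically even though the cluster names differ.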

Quantitative Results

Visualization

Difference of Spaces

Thanks for your attention. Any questions?