
1 Feature selection using Deep Neural Networks
CSI 991, March 18, 2016
Kevin Ham

2 Neural Networks Basics
- Perceptron: equivalent performance to the least-mean-squares algorithm (linear regression)
- Activation functions: sigmoid, hyperbolic tangent (a short sketch of both follows)
- Multi-layer perceptrons (MLPs): chains of perceptrons that perform feature extraction
- Training the network: training set, validation set, generalization set; back-propagation
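
The two activation functions named above are easy to state concretely. The following short NumPy snippet is illustrative, not from the cited sources; it defines both functions together with the derivatives that make back-propagation possible:

```python
# Illustrative sketch: the two activation functions from slide 2 and their
# derivatives (differentiability is what back-propagation relies on).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def d_sigmoid(z):
    s = sigmoid(z)
    return s * (1.0 - s)            # sigma'(z) = sigma(z) * (1 - sigma(z))

def d_tanh(z):
    return 1.0 - np.tanh(z) ** 2    # tanh'(z) = 1 - tanh(z)^2

z = np.linspace(-4, 4, 9)
print(sigmoid(z))       # squashes inputs into (0, 1)
print(np.tanh(z))       # squashes inputs into (-1, 1)
print(d_sigmoid(z))     # derivative peaks at z = 0, vanishes in the tails
```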

3 Perceptron and Activation Function
The basic building block of neural networks (1)
- Summation of weighted inputs
- The bias shifts the y-intercept of the unit
- An output is produced when the activation threshold is overcome
- The activation function must be differentiable
[Diagram: Input 1, Input 2, and Input 3 enter with weights w1, w2, w3; their weighted sum plus the bias passes through the activation function to produce the output]
A numeric sketch of this unit follows.
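
As a numeric illustration of the diagram (all input, weight, and bias values below are made up), a single perceptron computes a weighted sum plus bias and passes it through the activation function:

```python
# Illustrative single perceptron: three inputs, weights w1-w3, a bias, and a
# sigmoid activation. All numeric values are placeholder assumptions.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, -1.2, 0.3])   # Input 1, Input 2, Input 3
w = np.array([0.8, 0.4, -0.6])   # w1, w2, w3
b = -0.2                         # bias: shifts the unit's y-intercept

z = w @ x + b                    # summation of weighted inputs plus bias
output = sigmoid(z)              # strong output only once z clears the threshold region
print(z, output)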

4 Multi-Layer Perceptrons and Training
[Figure: classification with a 20-node MLP neural network (4)]
[Figure: feature extraction with a 5-layer convolutional neural network (2)]
[Figure: feature extraction with an MLP neural network (4)]
A toy training sketch follows.
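
The captions above describe already-trained networks. The following minimal NumPy sketch is an assumption-laden toy, not code from references (2) or (4); it trains a one-hidden-layer MLP on XOR with back-propagation, tying together the pieces from slides 2 and 3. The layer sizes, learning rate, and data are illustrative choices:

```python
# Toy MLP trained with back-propagation on XOR (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# XOR: the classic problem a single perceptron cannot solve
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Weights and biases: input -> hidden (2x4), hidden -> output (4x1)
W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)

lr = 1.0
for epoch in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)        # hidden activations
    out = sigmoid(h @ W2 + b2)      # network output

    # Backward pass: squared-error loss; sigmoid derivative is s * (1 - s)
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent updates
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0)

# Should approach [0, 1, 1, 0] for most random initializations
print(out.round(2).ravel())
```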

5 Article Objectives
- "… we propose a supervised approach for task-aware selection of features using Deep Neural Networks (DNN) in the context of action recognition (e.g. walking, running, jumping)." (1)
- "… selected features are found to give better classification performance than the original high-dimensional features." (1)
- "It is also shown that the classification performance of the proposed feature selection technique is superior to the low-dimensional representation obtained by principal component analysis (PCA)." (1)
A sketch of such a comparison follows.
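
The PCA comparison can be mocked up as follows. This sketch illustrates the protocol rather than reproducing the paper's experiment: the dataset (scikit-learn's digits), the feature score (variance, standing in for the DNN-derived scores of slide 6), and k are all placeholder assumptions:

```python
# Hedged sketch of the comparison: classify with (a) the top-k selected input
# dimensions versus (b) a k-dimensional PCA projection. Placeholder choices.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_digits(return_X_y=True)
k = 16

# (a) Keep the k highest-variance input dimensions (stand-in for the paper's
#     activation-potential scores; see the methodology sketch on slide 6).
top_k = np.argsort(X.var(axis=0))[-k:]
acc_sel = cross_val_score(LogisticRegression(max_iter=2000), X[:, top_k], y).mean()

# (b) Project onto the first k principal components.
X_pca = PCA(n_components=k).fit_transform(X)
acc_pca = cross_val_score(LogisticRegression(max_iter=2000), X_pca, y).mean()

print(f"selected features: {acc_sel:.3f}  PCA: {acc_pca:.3f}")
```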

6 Methodology
- "… analyze the contribution of each of the input dimensions to identify the features (inputs) important for classification" (1)
- "… to correctly analyze the contribution of an input feature, we study its activation potential (averaged over all training values of the input and hidden neurons) relative to the total activation potential" (1)
- "The higher the activation potential contribution of an input dimension, the more likely is its participation in hidden neuronal activity and consequently, classification." (1)
A hedged sketch of this scoring follows.
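
One plausible reading of this scoring follows; the paper defines "activation potential contribution" more precisely than the slide, so everything below is an assumption. The idea: average, over the training set, the magnitude of each input's contribution to the hidden-layer pre-activations, normalize by the total, and keep the top-k inputs:

```python
# ASSUMED interpretation, not the paper's exact formula: score input dimension
# i by the average magnitude of its contribution x_i * w_ij to the hidden
# pre-activations, relative to the total across all inputs.
import numpy as np

def feature_scores(X, W1):
    """X: (N, d_in) training inputs; W1: (d_in, n_hidden) trained first-layer weights."""
    # |contribution| of input i to hidden unit j, averaged over training samples
    contrib = np.einsum('ni,ij->ij', np.abs(X), np.abs(W1)) / len(X)
    per_input = contrib.sum(axis=1)        # aggregate over hidden neurons
    return per_input / per_input.sum()     # relative activation potential

# Usage sketch with stand-in data and weights (a real run would use the
# trained network's first-layer weights)
rng = np.random.default_rng(0)
X = rng.random((100, 8))
W1 = rng.normal(size=(8, 16))
scores = feature_scores(X, W1)
top_k = np.argsort(scores)[::-1][:4]       # keep the 4 highest-scoring inputs
print(top_k, scores[top_k])
```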

7 Article Results
[Figures: results reproduced from (1)]

8 Results continued
[Figures: results reproduced from (1)]

9 References
1. Roy, D., K. S. R. Murty, and C. K. Mohan. "Feature Selection using Deep Neural Networks." 2015 International Joint Conference on Neural Networks (IJCNN), 12-17 July 2015, pp. 1-6.
2. Zeiler, Matthew D., and Rob Fergus. "Visualizing and Understanding Convolutional Networks." Computer Vision - ECCV 2014, Lecture Notes in Computer Science (2014): 818-833.
3. Szegedy, Christian, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. "Going Deeper with Convolutions." 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015.
4. Haykin, Simon S. Neural Networks and Learning Machines. New York: Prentice Hall/Pearson, 2009.

