
1 A Comparative Study of Convolutional Neural Network Models with Rosenblatt’s Brain Model
Abu Kamruzzaman, Atik Khatri , Milind Ikke, Damiano Mastrandrea, Poorva Shelke and Charles C. Tappert

2 Outline Introduction Project Outline Methodology Experiment Results
Conclusion

3 What is a Neural Network?
A neural network is made up of neurons, a set of connected processing units. Large numbers of these units are arranged in parallel and work together. Each tier receives the output of the previous tier, and the last layer of the network produces the output.
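The tier-by-tier flow described above can be sketched in a few lines of NumPy (the weights here are random, purely for illustration):

```python
import numpy as np

def relu(x):
    return np.maximum(0, x)

# A tiny two-tier network: each tier receives the previous tier's output.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(3)   # first tier: 4 inputs -> 3 units
W2, b2 = rng.normal(size=(3, 2)), np.zeros(2)   # last tier: 3 units -> 2 outputs

x = np.array([1.0, 0.5, -0.2, 0.3])  # input vector
h = relu(x @ W1 + b1)                # hidden tier's output
y = h @ W2 + b2                      # last layer produces the output
print(y.shape)                       # (2,)
```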

4 Convolutional Neural Networks
They work well with image datasets. They learn easily, and they learn the features we want: they try to recognize the components of an image and from those predict the larger structure.
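The component-recognition idea rests on the convolution operation; a minimal NumPy sketch (the toy image and edge-detecting kernel are example values of ours):

```python
import numpy as np

def conv2d(image, kernel):
    """Slide the kernel over the image (valid padding), summing elementwise products."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge kernel responds strongly at the boundary in this toy "image".
image = np.array([[0., 0., 1., 1.],
                  [0., 0., 1., 1.],
                  [0., 0., 1., 1.],
                  [0., 0., 1., 1.]])
edge_kernel = np.array([[1., -1.],
                        [1., -1.]])
print(conv2d(image, edge_kernel))  # nonzero only where the edge component is
```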

5 Pre-Trained Models

6 Pre-Trained Models
A pre-trained model has previously been trained on a dataset and contains the weights and biases that represent the features of the dataset it was trained on. Learned features are often transferable to different data.
Usage: saves time. It is a model developed by another entity that is reused for a similar purpose. Developers can use the existing model as a starting point, making transfer between projects more efficient.
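A minimal Keras sketch of reusing a pre-trained model as a starting point (the helper name build_transfer_model and the new dense head are ours, for illustration):

```python
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

def build_transfer_model(num_classes):
    # Reuse the pre-trained convolutional base as a starting point;
    # weights="imagenet" downloads the learned weights on first use.
    base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
    base.trainable = False  # keep the transferable learned features fixed
    return models.Sequential([
        base,
        layers.Flatten(),
        layers.Dense(num_classes, activation="softmax"),  # new task-specific head
    ])
```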

7 VGG16
A deep convolutional neural network for image classification. It achieves 92.7% top-5 test accuracy on the ImageNet dataset.

8 ResNet50
ResNet is short for Residual Networks. It introduces residual learning: each block learns the residual, nothing but the subtraction of the layer's input from the feature to be learned, and the input is added back through a skip connection.
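A toy NumPy sketch of a residual connection, where the stacked layers learn only the residual F(x) and the input x is added back (the random weights are purely illustrative):

```python
import numpy as np

def relu(x):
    return np.maximum(0, x)

def residual_block(x, W1, W2):
    # The stacked layers learn the residual F(x) = H(x) - x;
    # the skip connection adds the input back: H(x) = F(x) + x.
    f = relu(x @ W1) @ W2
    return relu(f + x)

rng = np.random.default_rng(1)
x = rng.normal(size=4)
W1, W2 = rng.normal(size=(4, 4)), rng.normal(size=(4, 4))
print(residual_block(x, W1, W2).shape)  # (4,): same shape, so blocks can stack
```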

9 MobileNets
MobileNets are small, low-latency, low-power models. They can be built for classification, detection, embedding, and segmentation tasks. MobileNets split a standard convolution into a 3x3 depthwise convolution and a 1x1 pointwise convolution.
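The saving from this split can be checked by counting parameters (the channel sizes 64 and 128 below are example values of ours, not from the slides):

```python
# Parameter cost of a standard 3x3 convolution vs. MobileNet's split into
# a 3x3 depthwise convolution plus a 1x1 pointwise convolution.
def standard_conv_params(k, c_in, c_out):
    return k * k * c_in * c_out          # every filter spans all input channels

def separable_conv_params(k, c_in, c_out):
    depthwise = k * k * c_in             # one kxk filter per input channel
    pointwise = c_in * c_out             # 1x1 convolution mixes the channels
    return depthwise + pointwise

c_in, c_out = 64, 128
print(standard_conv_params(3, c_in, c_out))   # 73728
print(separable_conv_params(3, c_in, c_out))  # 8768, roughly 8x fewer
```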

10 InceptionV3 InceptionV3 is a variant of InceptionV2 with the addition of BN-auxiliary. BN-auxiliary refers to the version in which the fully connected layer of the auxiliary classifier is also batch-normalized, not just the convolutions.

11 Purpose and Methodology
Compare, discuss, and experiment with the CNN models alongside the proposed memory model, and discuss the limitations of CNNs.
The proposal is a sequential memory model that stores the results of the hidden layer; this storage (like a memory) can be used in future predictions.
Our experiments and results focus on:
How well do CNNs classify spatial data?
Once a CNN has learned a hidden representation, does it store the hidden layers for future predictions?
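The stored-hidden-layer idea can be illustrated with a toy Python sketch; this is our simplification for exposition (in particular the running-mean readout), not the authors' implementation:

```python
import numpy as np

class SequentialMemoryModel:
    """Toy sketch: store hidden-layer results so later predictions can reuse them."""
    def __init__(self):
        self.memory = []                 # past hidden representations, in order

    def forward(self, x, W):
        hidden = np.tanh(x @ W)          # hidden-layer result for this input
        self.memory.append(hidden)       # stored "like a memory" for future use
        # A future prediction can blend the current hidden state with the
        # stored sequence; here we use a simple running mean as a stand-in.
        return np.mean(self.memory, axis=0)

rng = np.random.default_rng(2)
W = rng.normal(size=(3, 3))
model = SequentialMemoryModel()
for step in range(3):
    out = model.forward(rng.normal(size=3), W)
print(len(model.memory))  # 3 stored hidden states after 3 inputs
```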

12 Experiments - CNN models

13 Experiment Setup
Pre-trained input images
Python version 3.0
Knowledge of pre-trained models
Weights were downloaded in Keras and loaded for the experiment on the images
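The per-image classification step can be sketched as a small Keras function; the slides do not give the exact script, so the helper name classify and the choice of ResNet50 here are illustrative:

```python
import numpy as np
from tensorflow.keras.applications.resnet50 import (
    ResNet50, preprocess_input, decode_predictions,
)
from tensorflow.keras.preprocessing import image

def classify(img_path, top=5):
    # Load the weights downloaded in Keras and run them on one image.
    model = ResNet50(weights="imagenet")
    img = image.load_img(img_path, target_size=(224, 224))
    x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
    preds = model.predict(x)
    return decode_predictions(preds, top=top)[0]  # (class_id, label, score) tuples
```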

14 Experiment 1 – ResNet50 model
Experiment 2 – VGG16 model

15 Experiment 3 – InceptionV3 model
Experiment 4 – MobileNet model

16 Results After conducting experiments 1-4, we found that these CNN models were able to classify the images with very good accuracy.

17 CNN Pre-trained Model Accuracy
Comparing the pre-trained model accuracy (cells lost in the transcript are marked n/a):

Pre-trained model   Image 1: Banana   Image 2: Orange   Image 3: Lion   Image 4: Seashore
ResNet50            0.989             0.998             0.999           0.375
VGG16               0.986             0.982             n/a             0.594
InceptionV3         1.0               0.992             n/a             0.864
MobileNet           0.988             n/a               n/a             0.426

18 Experiment 5 – MNIST digits dataset
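The slides do not give the exact architecture used for experiment 5; a minimal Keras CNN of the kind typically trained on MNIST might look like this (build_mnist_cnn is our name):

```python
from tensorflow.keras import layers, models

def build_mnist_cnn():
    # Small CNN for 28x28 grayscale MNIST digits: convolution and
    # max-pooling layers extract features, a dense softmax classifies.
    return models.Sequential([
        layers.Conv2D(32, (3, 3), activation="relu", input_shape=(28, 28, 1)),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(10, activation="softmax"),  # one output per digit class
    ])
```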


20 Results After conducting the experiment on MNIST data, we found that a CNN first extracts all the features from the images. Considering those features, it then transforms them through a series of operations (mainly convolution and max pooling). Across all the layers, the final layer is just a high-level representation that has learned discrete patterns of features without considering or storing any past sequences of patterns.
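The max-pooling operation mentioned above can be sketched in NumPy (the feature-map values are toy numbers of ours):

```python
import numpy as np

def max_pool(feature_map, size=2):
    """2x2 max pooling: keep only the strongest response in each window."""
    h, w = feature_map.shape
    trimmed = feature_map[:h - h % size, :w - w % size]  # drop ragged edges
    windows = trimmed.reshape(h // size, size, w // size, size)
    return windows.max(axis=(1, 3))

fm = np.array([[1., 3., 2., 0.],
               [4., 2., 1., 1.],
               [0., 1., 5., 2.],
               [2., 0., 3., 6.]])
print(max_pool(fm))
# [[4. 2.]
#  [2. 6.]]
```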

21 Conclusion CNNs are mostly used for classification of spatial data
A CNN does not store or remember past sequences of patterns that could be used for future predictions. The proposed architecture (like Rosenblatt's memory model) requires more research and exploration to solve problems related to speech recognition, language modelling, language translation, and audio/video processing, where past context and sequences are important for future predictions.

22 Questions? Thank You

