Big data classification using neural network

Big data classification using neural network: Classification example of TensorFlow Playground
Daijian Yang, Fan Wang, Dajun Zhang

Data classification

Data classification Data classification may refer to:
- Data classification (data management)
- Data classification (business intelligence)
- Data classification (machine learning): classification of data using machine learning algorithms
- Assigning a level of sensitivity to classified information
- In computer science, the data type of a piece of data
Our study focuses on the classification of big data using deep learning algorithms.

Data classification In machine learning and statistics, classification is the problem of identifying to which of a set of categories a new observation belongs, based on a training set of data containing observations whose category membership is known. Classification is an example of pattern recognition. Classification is considered an instance of supervised learning, i.e. learning where a training set of correctly identified observations is available. The corresponding unsupervised procedure is known as clustering, and involves grouping data into categories based on some measure of inherent similarity or distance. An algorithm that implements classification, especially in a concrete implementation, is known as a classifier. The term "classifier" sometimes also refers to the mathematical function, implemented by a classification algorithm, that maps input data to a category.
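
For illustration only, the short Python sketch below shows a classifier in exactly this sense: a function, learned from a training set of correctly labelled observations, that maps new input data to a category. The nearest-centroid rule and the tiny made-up dataset are our own choices, not anything specific to the presentation.

```python
import numpy as np

# Tiny labelled training set (made-up numbers): two features per observation,
# and the category membership ("orange"/"blue") is known in advance.
X_train = np.array([[1.0, 1.2], [0.8, 1.0], [3.0, 3.1], [3.2, 2.9]])
y_train = np.array(["orange", "orange", "blue", "blue"])

# A nearest-centroid classifier: the learned "mathematical function" maps an
# input vector to the class whose training mean is closest.
centroids = {c: X_train[y_train == c].mean(axis=0) for c in np.unique(y_train)}

def classify(x):
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

print(classify(np.array([0.9, 1.1])))  # -> "orange"
print(classify(np.array([3.1, 3.0])))  # -> "blue"
```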

Big data classification using deep learning algorithm Big Data: The term refers to the exponential growth and wide availability of digital data that is difficult or even impossible to manage and analyze using conventional software tools and technologies. Big data problems often involve many examples (inputs), a large variety of class types (outputs), and very high dimensionality (attributes).

Big data classification using deep learning algorithm Deep Learning: Deep learning refers to machine learning techniques that use supervised and/or unsupervised strategies to automatically learn hierarchical representations in deep architectures for classification. Deep learning algorithms extract meaningful abstract representations of the raw data through a hierarchical, multi-level learning approach. Two well-established deep architectures for this approach are deep belief networks and convolutional neural networks.
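
As a toy illustration of the "hierarchical, multi-level" idea, the sketch below re-represents the raw input through a stack of layers, each building on the output of the layer below. The weights are random placeholders; this is not a trained deep belief network or convolutional network, just a picture of the layered structure.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, n_out):
    """One level of representation: a linear map of the level below plus a nonlinearity."""
    W = rng.normal(size=(n_out, x.shape[0]))   # placeholder weights (untrained)
    return np.tanh(W @ x)

raw = rng.normal(size=8)        # raw input data
h1 = layer(raw, 6)              # first-level (low-level) representation
h2 = layer(h1, 4)               # second-level, more abstract representation
h3 = layer(h2, 2)               # highest-level representation fed to a classifier
print(h1.shape, h2.shape, h3.shape)
```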

Neural network

Neural network A neural network is a computational approach based on a large set of connected neurons, mimicking the way a biological brain solves problems with large clusters of biological neurons connected by axons.

The principle of neural network First, we create a neural network and design a learning algorithm. Depending on the purpose, we prepare different datasets to train the neural network. After the training stage, the neural network can complete different tasks automatically.

Architecture of neural network A neural network is composed of three types of layers: input layers, hidden layers, and output layers. The layers between the input and output layers are called hidden layers; a typical hidden layer consists of two components: weights and an activation function.
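
A minimal sketch of these two components, with made-up weight values: one hidden layer computes an activation function applied to a weighted combination of its inputs.

```python
import numpy as np

def hidden_layer(x, W, b, activation=np.tanh):
    """A hidden layer's two components: weights (W, b) and an activation function."""
    return activation(W @ x + b)

x = np.array([0.5, -1.0])                 # output of the input layer (2 features)
W = np.array([[0.1, 0.4],                 # one row of weights per hidden neuron
              [-0.3, 0.2],
              [0.7, -0.6]])
b = np.zeros(3)
print(hidden_layer(x, W, b))              # outputs of the 3 hidden neurons
```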

Simple architecture of NN

Architecture of a neuron

Self-adjustment process
Forward propagation stage:
- Extract a sample X from the dataset and feed it into the neural network
- Calculate the actual output
Back propagation stage:
- Calculate the difference between the actual output and the ideal output
- Adjust the weight matrices so as to minimize this error
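
One such iteration can be sketched in NumPy as follows. The 2-2-1 network, the sample values, and the use of plain gradient descent to stand in for the "minimize the error" step are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# A tiny 2-2-1 network: tanh hidden layer, linear output, squared-error loss.
W1, b1 = rng.normal(size=(2, 2)), np.zeros(2)
W2, b2 = rng.normal(size=2), 0.0
lr = 0.03                                  # learning rate

x, y = np.array([1.0, 0.0]), 1.0           # one sample X and its ideal output

# Forward propagation stage: compute the actual output for the sample.
h = np.tanh(W1 @ x + b1)
y_hat = W2 @ h + b2

# Back propagation stage: the error between actual and ideal output ...
err = y_hat - y
# ... is propagated backwards to obtain gradients for the weight matrices.
dW2, db2 = err * h, err
delta1 = err * W2 * (1.0 - h**2)           # tanh'(z) = 1 - tanh(z)^2
dW1, db1 = np.outer(delta1, x), delta1

# Adjust the weights in the direction that reduces the error.
W1, b1 = W1 - lr * dW1, b1 - lr * db1
W2, b2 = W2 - lr * dW2, b2 - lr * db2
print("loss after one step:", 0.5 * (W2 @ np.tanh(W1 @ x + b1) + b2 - y) ** 2)
```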

TensorFlow Playground

What is TensorFlow Playground? TensorFlow is Google Brain's second-generation machine learning system. The name "TensorFlow" means that tensors can flow through the neural network. TensorFlow Playground is a visual teaching platform that helps people understand neural networks intuitively (http://playground.tensorflow.org).

There are some questions that can help us understand TensorFlow Playground quickly: 1. What is an activation function? The activation function is used to compute the output of the corresponding neuron; Fig. 1 shows the basic architecture of a neuron. In TensorFlow Playground, there are four different activation functions: ReLU, Tanh, Sigmoid, and Linear.
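
For reference, the four activation functions offered by the Playground can be written in a few lines of Python (these are the standard textbook definitions, not code taken from TensorFlow Playground itself):

```python
import numpy as np

# The four activation functions available in TensorFlow Playground.
def relu(z):    return np.maximum(0.0, z)
def tanh(z):    return np.tanh(z)
def sigmoid(z): return 1.0 / (1.0 + np.exp(-z))
def linear(z):  return z

z = np.array([-2.0, 0.0, 2.0])
for f in (relu, tanh, sigmoid, linear):
    print(f.__name__, f(z))
```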

2. In TensorFlow Playground, what are the training loss and the testing loss? Loss measures the difference between the predicted output and the expected output; the smaller the loss, the more accurate the model. TensorFlow Playground provides an intuitive view of the Training Loss and Testing Loss after each iteration. Ideally, both losses gradually decrease as the model becomes more accurate. If the Training Loss decreases while the Testing Loss increases, the model may be overfitting.
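
The comparison can be illustrated with a short sketch. A squared-error loss is assumed here, and the prediction values are made up; the Playground's exact loss formula may differ.

```python
import numpy as np

def loss(y_pred, y_true):
    # Squared-error loss (an assumption; the Playground's formula may differ).
    return np.mean((y_pred - y_true) ** 2) / 2.0

# Made-up predictions on the training and testing splits after some iteration.
train_loss = loss(np.array([0.9, -0.8, 0.7]), np.array([1.0, -1.0, 1.0]))
test_loss = loss(np.array([0.2, -0.1, 0.9]), np.array([1.0, -1.0, 1.0]))

print(f"training loss = {train_loss:.3f}, testing loss = {test_loss:.3f}")
if test_loss > train_loss:
    print("growing gap between the two losses -> possible overfitting")
```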

3. What problems can be solved through TensorFlow Playground? The platform covers two types of problems:
- Classification: creating a model that successfully separates the orange and blue dots. TensorFlow Playground provides four types of datasets (Circle, Exclusive or, Gaussian, and Spiral) to simulate classification problems.
- Regression: predicting continuous output values.

The Classification Example of MATLAB and TensorFlow Playground Fig. 2 shows the neural network architecture that uses the Exclusive or dataset in TensorFlow Playground. Each dataset is a group of points with a different morphological distribution. Each point has two features, X1 and X2, which represent the location of the point. There are two types of points in our data: orange and blue. The goal of our neural network is to learn, through training, which points are orange and which are blue.
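
A dataset of this shape can be generated with a small helper like the hypothetical make_xor below: two features (X1, X2) per point and a binary label standing in for orange/blue. This is a stand-in in the spirit of the Playground's Exclusive or data, not its actual generator.

```python
import numpy as np

def make_xor(n=200, noise=0.0, seed=0):
    """Toy 'Exclusive or' style dataset: label depends on whether X1 and X2
    have the same sign (blue = 1) or opposite signs (orange = 0)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5, 5, size=(n, 2))        # features X1, X2 = point location
    if noise:
        X += rng.normal(scale=noise, size=X.shape)
    y = (X[:, 0] * X[:, 1] >= 0).astype(int)   # same sign -> one class
    return X, y

X, y = make_xor()
print(X[:3], y[:3])
```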

The parameters of the simulation in the two platforms are shown below:
Learning Rate: 0.03
Activation Function: Tanh
Regularization Rate:
Ratio of Training and Testing Data: 50%
Noise:
Batch Size: 10
Data Set: Exclusive or, Spiral
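
The tf.keras sketch below roughly reproduces these settings (Tanh activations, SGD with learning rate 0.03, batch size 10, a 50/50 train/test split) on a synthetic Exclusive-or dataset. The hidden-layer sizes, the number of epochs, and the binary cross-entropy loss are our own assumptions and are not taken from the slides or from the Playground run.

```python
import numpy as np
import tensorflow as tf

# Toy "Exclusive or" data: features X1, X2; label 1 if the signs agree.
rng = np.random.default_rng(0)
X = rng.uniform(-5, 5, size=(400, 2)).astype(np.float32)
y = (X[:, 0] * X[:, 1] >= 0).astype(np.float32)

# 50% of the points for training, 50% for testing (as in the table above).
X_train, X_test, y_train, y_test = X[:200], X[200:], y[:200], y[200:]

# Small feed-forward network with Tanh activations, trained with SGD at
# learning rate 0.03 and batch size 10; the layer sizes are our own choice,
# since the slide does not list the hidden-layer configuration.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(2,)),
    tf.keras.layers.Dense(4, activation="tanh"),
    tf.keras.layers.Dense(2, activation="tanh"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.03),
              loss="binary_crossentropy", metrics=["accuracy"])

model.fit(X_train, y_train, batch_size=10, epochs=50,
          validation_data=(X_test, y_test), verbose=0)
print(model.evaluate(X_test, y_test, verbose=0))
```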

Fig. 3 shows the final output using the Exclusive or dataset on the TensorFlow platform. We can see that the loss functions have all converged, and the platform successfully divides the points into the corresponding regions.