Application of Back-Propagation Neural Network in Data Forecasting
Le Hai Khoi, Tran Duc Minh
Institute of Information Technology – VAST, Ha Noi – Viet Nam

Acknowledgement
The authors wish to express their gratitude to Prof. Junzo WATADA, who read this work and gave us valuable comments.
Authors

CONTENT
- Introduction
- Steps in data forecasting modeling using neural network
- Determine network’s topology
- Application
- Concluding remarks

Introduction
Neural networks are “universal approximators”. Finding a suitable model for a data forecasting problem is very difficult, and in reality it may only be done by trial and error. We may therefore treat the data forecasting problem as a kind of data processing problem.
Figure 1: Data processing (data collecting and analyzing → pre-processing → neural networks → post-processing).

Steps in data forecasting modeling using neural network
The work involved includes:
* Data pre-processing: determining the data interval (daily, weekly, monthly or quarterly), the data type (technical or basic index), and the method used to normalize the data (max/min or mean/standard deviation).
* Training: determining the learning rate, momentum coefficient, stop condition, maximum number of cycles, weight randomization, and the sizes of the training, test and verification sets.
* Network’s topology: determining the number of inputs, the number of hidden layers, the number of neurons in each layer and in the output layer, the transformation functions for the layers, and the error function.

Steps in data forecasting modeling using neural network
The major steps in designing the data forecasting model are as follows:
1. Choosing variables
2. Data collection
3. Data pre-processing
4. Dividing the data set into smaller sets: training, test and verification
5. Determining the network’s topology: number of hidden layers, number of neurons in each layer, number of neurons in the output layer, and the transformation function
6. Determining the error function
7. Training
8. Implementation
These steps need not be performed strictly in sequence. We may return to earlier steps, especially the training and variable-choosing steps: if the chosen variables give unexpected results during design, we need to choose another set of variables and repeat from the training step.

Choosing variables and Data collection
Determine which variables relate directly or indirectly to the data we need to forecast. If a variable has no effect on the value being forecast, it should be excluded from consideration; if it is directly or indirectly relevant, it should be included. Then collect the data for the chosen variables.

Data pre-processing
Analyze and transform the input and output values to emphasize the important features and to detect trends and the distribution of the data. Normalize the real input and output values into the interval between the max and min of the transformation function (usually [0, 1] or [-1, 1]). The most popular methods are:
SV = (OV - MIN_VAL) / (MAX_VAL - MIN_VAL)
or:
SV = TFmin + ((TFmax - TFmin) / (MAX_VAL - MIN_VAL)) * (OV - MIN_VAL)
where:
SV: scaled value
OV: original value
MAX_VAL: max value of the data
MIN_VAL: min value of the data
TFmax: max of the transformation function
TFmin: min of the transformation function
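A minimal Python sketch of the two scaling formulas above; the names mirror the slide’s notation (OV, MIN_VAL, MAX_VAL, TFmin, TFmax), and the sample data are purely illustrative:

```python
import numpy as np

def scale_to_unit(ov, max_val, min_val):
    """First formula: scale original values into [0, 1]."""
    return (ov - min_val) / (max_val - min_val)

def scale_to_range(ov, max_val, min_val, tf_min, tf_max):
    """Second formula: scale original values into [TFmin, TFmax]."""
    return tf_min + (tf_max - tf_min) / (max_val - min_val) * (ov - min_val)

# Example: normalize a small series into [-1, 1] for a tanh-like transfer function.
prices = np.array([12.0, 15.5, 9.8, 20.1])
sv = scale_to_range(prices, prices.max(), prices.min(), -1.0, 1.0)
```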

Dividing the pattern set
Divide the whole pattern set into smaller sets:
(1) Training set
(2) Test set
(3) Verification set
The training set is usually the biggest set and is employed in training the network. The test set, often 10% to 30% the size of the training set, is used to test generalization. The size of the verification set balances the need for enough verification patterns against the needs of training and testing.
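As an illustration, a short Python sketch of such a three-way split; the 70/20/10 proportions and the function name are our own assumptions, not prescribed by the slides:

```python
import numpy as np

def split_patterns(patterns, train_frac=0.7, test_frac=0.2, seed=0):
    """Shuffle and divide a pattern set into training, test and verification sets."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(patterns))
    n_train = int(train_frac * len(patterns))
    n_test = int(test_frac * len(patterns))
    train = patterns[idx[:n_train]]
    test = patterns[idx[n_train:n_train + n_test]]
    verify = patterns[idx[n_train + n_test:]]   # the remainder
    return train, test, verify
```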

Determining network’s topology
This step determines the links between neurons, the number of hidden layers, and the number of neurons in each layer:
1. How the neurons in the network are connected to each other.
2. The number of hidden layers, which should not exceed two.
3. The number of neurons in each hidden layer; there is no method for finding the optimal number.
Issues 2 and 3 can only be settled by trial and error, since they depend on the problem we are dealing with.

Determining the error function
The error function estimates the network’s performance before and after training. The function used in evaluation is usually the mean squared error; alternatives include least absolute deviation, percentage differences, and asymmetric least squares.
Performance index: F(x) = E[e^T e] = E[(t - a)^T (t - a)]
Approximate performance index: F̂(x) = e^T(k) e(k) = (t(k) - a(k))^T (t(k) - a(k))
The final forecast quality is usually measured by the Mean Absolute Percentage Error (MAPE).
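A small Python sketch of the two measures named above, the squared-error performance index and MAPE; the function names are illustrative:

```python
import numpy as np

def squared_error(t, a):
    """Approximate performance index for one pattern: (t - a)^T (t - a)."""
    e = np.asarray(t) - np.asarray(a)
    return float(e @ e)

def mape(t, a):
    """Mean Absolute Percentage Error, in percent; assumes targets t are nonzero."""
    t, a = np.asarray(t), np.asarray(a)
    return float(np.mean(np.abs((t - a) / t))) * 100.0
```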

Training
Training tunes a neural network by adjusting the weights and biases, and is expected to reach the global minimum of the performance index (error function). When should the training process stop?
1. Stop only when there is no noticeable progress of the error function, starting from a randomly chosen parameter set.
2. Regularly examine the generalization ability of the network by checking it after a pre-determined number of cycles.
3. A hybrid solution: use a monitoring tool, so that we can stop the training process manually or let it run until there is no noticeable progress.
The result obtained on the verification set is the most persuasive, since it is a result obtained directly from the network after training.
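A sketch of the monitored-stopping idea (options 2 and 3 above), assuming hypothetical methods net.train_one_cycle and net.error_on; this is an illustration of the stopping rule, not the authors’ implementation:

```python
def train_with_monitoring(net, train_set, test_set,
                          max_cycles=10000, check_every=100, patience=5):
    """Periodically check generalization on the test set and stop
    when it has not improved for `patience` consecutive checks."""
    best_err, stale = float("inf"), 0
    for cycle in range(1, max_cycles + 1):
        net.train_one_cycle(train_set)          # hypothetical training method
        if cycle % check_every == 0:
            err = net.error_on(test_set)        # hypothetical evaluation method
            if err < best_err:
                best_err, stale = err, 0
            else:
                stale += 1
                if stale >= patience:
                    break   # no noticeable progress on unseen data
    return net
```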

Implementation
This is the last step, after we have determined the factors related to the network’s topology, the choice of variables, etc. It involves deciding:
1. The environment: electronic circuits or a PC.
2. The interval at which to re-train the network, which may depend on time and on other factors related to our problem.

Determine network’s topology
Multi-layer feed-forward neural networks.
Figure 2: Multi-layer feed-forward neural network
where:
P: input vector (column vector, R1 x 1)
W^i: weight matrix of the neurons in layer i (S_i x R_i: S_i rows (neurons), R_i columns (number of inputs))
b^i: bias vector of layer i (S_i x 1, one per neuron)
n^i: net input of layer i (S_i x 1)
f^i: transformation function (activation function) of layer i
a^i: net output of layer i (S_i x 1)
Σ: sum function
i = 1..N, where N is the total number of layers.
For the two-layer network of Figure 2: a^2 = f^2(W^2 f^1(W^1 p + b^1) + b^2)
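To make the matrix dimensions concrete, a minimal NumPy sketch of the two-layer forward pass a^2 = f^2(W^2 f^1(W^1 p + b^1) + b^2); the choice of a sigmoid f^1 and a linear f^2, and the sizes R1 = 4, S1 = 3, S2 = 1, are illustrative assumptions:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Dimensions follow the slide's notation: R1 inputs, S1 hidden neurons, S2 outputs.
R1, S1, S2 = 4, 3, 1
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((S1, R1)), np.zeros((S1, 1))   # S1 x R1, S1 x 1
W2, b2 = rng.standard_normal((S2, S1)), np.zeros((S2, 1))   # S2 x S1, S2 x 1

p = rng.standard_normal((R1, 1))      # input column vector, R1 x 1
a1 = sigmoid(W1 @ p + b1)             # hidden layer output, f1 = sigmoid
a2 = W2 @ a1 + b2                     # a2 = f2(W2 f1(W1 p + b1) + b2), f2 linear
```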

Determine training algorithm and network’s topology
Figure 3: Multi-layered feed-forward neural network layout (inputs x1, x2, ..., xn and a bias feed the network; weights w_ij, w_jk, w_kl connect the input layer, hidden layers and output layer).
The transfer function is a sigmoid or any differentiable squashing function:
f(x) = 1 / (1 + e^(-δx)), and with δ = 1, f'(x) = f(x)(1 - f(x))

Back-propagation algorithm
Step 1: Feed the inputs forward through the network:
a^0 = p
a^(m+1) = f^(m+1)(W^(m+1) a^m + b^(m+1)), where m = 0, 1, ..., M - 1
a = a^M
Step 2: Back-propagate the sensitivities (errors):
s^M = -2 F'^M(n^M)(t - a) at the output layer
s^m = F'^m(n^m) (W^(m+1))^T s^(m+1), where m = M - 1, ..., 2, 1, at the hidden layers
Step 3: Finally, update the weights and biases:
W^m(k+1) = W^m(k) - α s^m (a^(m-1))^T
b^m(k+1) = b^m(k) - α s^m
(Details on constructing the algorithm and other related issues can be found in the textbook Neural Network Design.)
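A minimal NumPy sketch of one such step for the two-layer network of Figure 2, assuming a sigmoid hidden layer and a linear output layer (so F'(n^2) is the identity) and a single training pattern (p, t):

```python
import numpy as np

def backprop_step(p, t, W1, b1, W2, b2, alpha=0.1):
    """One back-propagation step: feed forward, back-propagate, update."""
    # Step 1: feed forward
    a1 = 1.0 / (1.0 + np.exp(-(W1 @ p + b1)))   # sigmoid hidden layer
    a2 = W2 @ a1 + b2                           # linear output layer
    # Step 2: back-propagate the sensitivities
    s2 = -2.0 * (t - a2)                        # output layer: s^M = -2 F'(n^M)(t - a)
    s1 = (a1 * (1.0 - a1)) * (W2.T @ s2)        # hidden layer: F'(n^1)(W^2)^T s^2
    # Step 3: update weights and biases
    W2 -= alpha * s2 @ a1.T
    b2 -= alpha * s2
    W1 -= alpha * s1 @ p.T
    b1 -= alpha * s1
    return W1, b1, W2, b2
```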

Using Momentum
This is a heuristic method based on observation of training results. The standard back-propagation algorithm applies the following weight changes at each step:
ΔW^m(k) = -α s^m (a^(m-1))^T
Δb^m(k) = -α s^m
When a momentum coefficient γ is used, these equations become:
ΔW^m(k) = γ ΔW^m(k-1) - (1 - γ) α s^m (a^(m-1))^T
Δb^m(k) = γ Δb^m(k-1) - (1 - γ) α s^m
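A one-function NumPy sketch of the momentum update for a single layer; dW_prev holds the previous cycle’s weight change, and the default γ = 0.9 is an illustrative choice:

```python
import numpy as np

def momentum_update(W, dW_prev, s, a_prev, gamma=0.9, alpha=0.1):
    """Weight change with momentum:
    dW(k) = gamma * dW(k-1) - (1 - gamma) * alpha * s @ a_prev^T."""
    dW = gamma * dW_prev - (1.0 - gamma) * alpha * (s @ a_prev.T)
    return W + dW, dW   # updated weights and the change to reuse next cycle
```

Keeping the previous change dW_prev smooths the trajectory of the weights and helps the training escape shallow local minima.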

Application
The program is organized as a class diagram: plain arrows denote inheritance relations and diamond-headed arrows denote aggregation relations. The NEURAL NET class includes components that are instances of the Output Layer and Hidden Layer classes, both derived from the LAYER class (a friend of NEURAL NET). The Input Layer is not implemented, since it performs no calculation on the input data.
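The diagram describes a C++ design (note the friend relation); below is a rough Python sketch of the same composition, with hypothetical class names mirroring the diagram:

```python
class Layer:
    """Corresponds to the LAYER base class in the diagram."""
    def __init__(self, n_inputs, n_neurons):
        self.weights = [[0.0] * n_inputs for _ in range(n_neurons)]
        self.biases = [0.0] * n_neurons

class HiddenLayer(Layer):   # inheritance arrows in the diagram
    pass

class OutputLayer(Layer):
    pass

class NeuralNet:
    """Aggregates (diamond-headed arrows) hidden and output layers.
    No InputLayer class: inputs perform no computation, as the slide notes."""
    def __init__(self, n_inputs, n_hidden, n_outputs):
        self.hidden = HiddenLayer(n_inputs, n_hidden)
        self.output = OutputLayer(n_hidden, n_outputs)
```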


Concluding remarks
Determining the major pieces of work is important and realistic: it helps in developing more accurate data forecasting systems and gives researchers a deeper look into implementing solutions using neural networks. In fact, successfully applying a neural network depends on three major factors:
First, the time taken to choose the variables from a large quantity of data, and to pre-process those data;
Second, the software should provide functions to examine the generalization ability, help find the optimal number of neurons for the hidden layer, and verify the network with many input sets;
Third, the developers need to consider and examine all the possibilities each time they check the network’s operation with various input sets and topologies, so that the chosen solution describes the problem exactly and gives us the most accurate forecasts.

THANK YOU FOR ATTENDING!
Authors
Kitakyushu, 03/2004