Presentation on theme: "Artificial Neural Network" — Presentation transcript:

1 Artificial Neural Network
Artificial Neural Network: Script & Programming in Matlab. Lecture notes by Dr. Azhari, Computer Science Department, Gadjah Mada University.

2 Multilayer Neural Network

3 Learning rules
A learning rule is defined as a procedure for modifying the weights and biases of a network. The learning rule is applied to train the network to perform some particular task. Learning rules in this toolbox fall into two broad categories: supervised learning and unsupervised learning.

4 Supervised learning
In supervised learning, the learning rule is provided with a set of examples (the training set) of proper network behavior: {p1,t1}, {p2,t2}, …, {pQ,tQ}, where pq is an input to the network and tq is the corresponding correct (target) output. As the inputs are applied to the network, the network outputs are compared to the targets. The learning rule is then used to adjust the weights and biases of the network in order to move the network outputs closer to the targets.
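As an illustration, a minimal sketch of such a training set in Matlab, borrowing the perceptron and the four input/target pairs that appear later in these slides (slide 14); the before-training comparison is the illustrative part:
p = [2 1 -2 -1; 2 -2 2 1];     % inputs p1..p4 as columns
t = [0 1 0 1];                 % corresponding targets t1..t4
net = newp([-2 2;-2 2],[0 1]); % a perceptron, as constructed later in these slides
a = sim(net,p);                % network outputs before training
e = t - a                      % errors the learning rule tries to drive to zero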

5 Unsupervised learning
In unsupervised learning, the weights and biases are modified in response to network inputs only. There are no target outputs available. Most of these algorithms perform clustering operations. They categorize the input patterns into a finite number of classes.
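As a hedged illustration, a minimal clustering sketch using the toolbox's competitive layer newc; the data points here are assumed for illustration, not taken from the slide:
P = [.1 .8 .1 .9; .2 .9 .1 .8]; % four 2-D input points as columns (assumed)
net = newc([0 1; 0 1],2);       % competitive layer with 2 neurons = 2 classes
net = train(net,P);             % weights adapt from inputs only; no targets
Y = sim(net,P);
Yc = vec2ind(Y)                 % class index assigned to each input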

6 The perceptron & Learning rule
The objective is to reduce the error e = t - a, the difference between the target vector t and the neuron response a. The perceptron learning rule learnp calculates desired changes to the perceptron's weights and biases, given an input vector p and the associated error e. The target vector t must contain values of either 0 or 1, because perceptrons (with hardlim transfer functions) can only output these values.

7 The perceptron & Learning rule
There are three conditions that can occur for a single neuron once an input vector p is presented and the network's response a is calculated:
CASE 1. The output is correct (a = t, so e = 0): the weights are not altered.
CASE 2. The output is 0 but should be 1 (a = 0, t = 1, so e = 1): the input vector p is added to the weight vector w.
CASE 3. The output is 1 but should be 0 (a = 1, t = 0, so e = -1): the input vector p is subtracted from the weight vector w.

8 The perceptron & Learning rule
The perceptron learning rule can be written in terms of the error e = t - a:
if e = 0, then dw = 0;
if e = 1, then dw = p';
if e = -1, then dw = -p'.
All three cases collapse to the single rule dw = e*p' and db = e. In general, for a layer of neurons with error vector e = t - a, the rule becomes dW = e*p' and db = e.

9 Example: Learning rule
Matlab code:
net = newp([-2 2;-2 2],[0 1]);
net.b{1} = [0];
w = [1 -0.8];
net.IW{1,1} = w;
p = [1; 2];
t = [1];
a = sim(net,p)                           % a = 0
e = t-a                                  % e = 1 - 0 = 1
dw = learnp(w,p,[],[],[],[],e,[],[],[])  % dw = [1 2]
w = w + dw                               % w = [2 1.2]
The process of finding new weights (and biases) can be repeated until there are no errors.

10 Example: Training with train
Assume initial weights W(0) = [0 0] and bias b(0) = 0.
Step 1: Apply the first input vector. The output a does not equal the target value t1, so use the perceptron rule to find the incremental changes to the weights and biases based on the error, then calculate the new weights and bias using the perceptron update rules.
Step 2: Apply the next input vector and repeat: whenever the output differs from the target, update the weights and bias by dw = e*p' and db = e.

11 W(4) = [-3 -1], b(4) = 0. Final step: W(6) = [-2 -3], b(6) = 1.
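A minimal Matlab sketch of this hand calculation, assuming the four input/target pairs used on the next slides (p1 = [2;2], t1 = 0; p2 = [1;-2], t2 = 1; p3 = [-2;2], t3 = 0; p4 = [-1;1], t4 = 1) and zero initial weights and bias:
P = [2 1 -2 -1; 2 -2 2 1];     % the four input vectors as columns
T = [0 1 0 1];                 % their targets
w = [0 0]; b = 0;              % assumed initial weights and bias
for k = 1:6                    % six presentations, cycling through P
    i = mod(k-1,4) + 1;
    a = hardlim(w*P(:,i) + b); % neuron response
    e = T(i) - a;              % error e = t - a
    w = w + e*P(:,i)';         % perceptron rule: dw = e*p'
    b = b + e;                 % db = e
    fprintf('step %d: w = [%g %g], b = %g\n', k, w(1), w(2), b)
end
% step 4 prints w = [-3 -1], b = 0; step 6 prints w = [-2 -3], b = 1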

12 Train syntax
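In its simplest form, train takes the network, its inputs, and its targets, and returns the trained network; a minimal sketch with assumed example values:
net = newp([-2 2;-2 2],[0 1]); % example perceptron (assumed setup)
P = [2 1 -2 -1; 2 -2 2 1];     % inputs (assumed values)
T = [0 1 0 1];                 % targets (assumed values)
net.trainParam.epochs = 20;    % maximum number of passes through the data
[net,tr] = train(net,P,T);     % returns the updated network and a training record tr
a = sim(net,P)                 % simulate the trained network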

13 Example: train

14 Matlab code: with epoch =1
net = newp([-2 2;-2 2],[0 1]); p = [2; 2]; t = [0]; net.trainParam.epochs = 1; net = train(net,p,t); w = net.iw{1,1}, b = net.b{1} % w = [-2 -2 ] & b = -1 p = [[2;2] [1;-2] [-2;2] [-1;1]] t = [ ] %w = [-3 -1 ] & b = 0 % continue with epoch =1000 a = sim(net,p) % a = [ ] net.trainParam.epochs = 1000; net = train(net,p,t); w = net.iw{1,1}, b = net.b{1} % w = [-2 -3 ] & b = 1 % a = [ ] error = a-t % error = [ ]

15 Feedforward backpropagation
net = newff(PR,[S1 S2...SNl],{TF1 TF2...TFNl},BTF,BLF,PF)
PR = Rx2 matrix of min and max values for R input elements
Si = size of the ith layer (number of neurons in each hidden layer and in the output layer)
TFi = transfer function of the ith layer, default = 'tansig'
BTF = backpropagation network training function, default = 'traingdx'
BLF = backpropagation weight/bias learning function, default = 'learngdm'
PF = performance function, default = 'mse'
The basic gradient descent update is x(k+1) = x(k) - a(k)*g(k), where x(k) is the current vector of weights and biases, g(k) is the current gradient of the performance function, and a(k) is the learning rate.
The transfer functions TFi can be any differentiable transfer function such as TANSIG, LOGSIG, or PURELIN. The training function BTF can be any of the backprop training functions such as TRAINLM, TRAINBFG, TRAINRP, TRAINGD, etc.
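A minimal numeric sketch of this update rule, minimizing the stand-in function f(x) = x^2 rather than a network's mse; the starting point and learning rate are assumed for illustration:
x = 4;                 % assumed initial parameter value
alpha = 0.1;           % assumed constant learning rate a(k)
for k = 1:25
    g = 2*x;           % gradient of f(x) = x^2
    x = x - alpha*g;   % the update x(k+1) = x(k) - a(k)*g(k)
end
x                      % approaches the minimum at x = 0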

16 Example: newff
Input vectors and target vector. Matlab code:
input = [-1 -1 2 2; 0 5 0 5];
target = [-1 -1 1 1];
net = newff([-1 2;0 5],[3 1],{'tansig' 'purelin'},'traingdm');
net.trainParam.epochs = 30;  % number of epochs
net.trainParam.lr = 0.3;     % learning rate
net.trainParam.mc = 0.6;     % momentum
net = train(net,input,target);
output = sim(net,input);
[target; output]
Here [-1 2;0 5] gives the minimum and maximum values of the input vector elements. The network has one hidden layer with 3 nodes, tangent sigmoid as the transfer function in the hidden layer and a linear function for the output layer, trained with gradient descent with momentum backpropagation.

17 Example: The XOR problem
Single hidden layer: 3 sigmoid neurons; 2 inputs, 1 output.
Desired I/O table (XOR):
           x1  x2   y
Example 1   0   0   0
Example 2   0   1   1
Example 3   1   0   1
Example 4   1   1   0

18 XOR network
Create a matrix with the inputs "1 1", "1 0", "0 1" and "0 0" as columns, and the corresponding XOR targets "0 1 1 0". Matlab code:
net = newff([0 1; 0 1],[2 1],{'logsig','logsig'});
input = [1 1 0 0; 1 0 1 0];
target = [0 1 1 0];
net.trainParam.show = NaN;
net = train(net,input,target);
output = sim(net,input)
net.IW{1,1}
net.LW{2,1}
This applies the default training algorithm, Levenberg-Marquardt backpropagation (trainlm). After training, output should approximate the targets [0 1 1 0]; net.IW{1,1} and net.LW{2,1} show the learned input and layer weights.
