1
Lecture 3: Introduction to Neural Networks and Fuzzy Logic
Dr.-Ing. Erwin Sitompul, President University, 2013
http://zitompul.wordpress.com
2
Single Layer Perceptrons: Derivation of a Learning Rule for Perceptrons
Widrow [1962]: Adaline (Adaptive Linear Element)
[Figure: a single neuron with inputs x1, ..., xm and weights wk1, ..., wkm]
Goal: adapt the weights wkj so that the neuron output matches the desired output as closely as possible.
3
Least Mean Squares (LMS)
The following cost function (error function) should be minimized:
E(w) = ½ Σ_i (d_i − y_i)²,  with  y_i = Σ_j w_kj x_ij
i : index of the data set (the i-th data set)
j : index of the input (the j-th input)
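As a small illustration of this cost function, the following MATLAB sketch evaluates E(w) for a single linear neuron; the names lms_cost, X, d, and w are illustrative and not taken from the slides.

% Sum-of-squared-errors cost for a single linear neuron (illustrative sketch).
% X : N-by-m matrix, one training input per row
% d : N-by-1 vector of desired outputs
% w : m-by-1 weight vector
function E = lms_cost(X, d, w)
    y = X * w;              % neuron outputs y_i for all N data sets
    e = d - y;              % errors e_i = d_i - y_i
    E = 0.5 * sum(e.^2);    % E(w) = 1/2 * sum_i (d_i - y_i)^2
end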
4
Adaline Learning Rule
With the linear neuron output y_k = Σ_j w_kj x_j, the gradient of the cost function E(w) is, as already obtained before,
∂E/∂w_kj = −(d_k − y_k) x_j.
Weight Modification Rule: defining the error e_k = d_k − y_k and the learning rate η, we can write
Δw_kj = η e_k x_j,   w_kj(new) = w_kj(old) + Δw_kj.
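A single update step with this rule could be sketched in MATLAB as follows; the variable names and numerical values are only illustrative.

% One Adaline (delta-rule) update for a single training pair.
x   = [2; 3];          % input vector (illustrative values)
d   = 5;               % desired output
w   = [1; 1.5];        % current weights
eta = 0.01;            % learning rate

y  = w' * x;           % linear neuron output y_k = sum_j w_kj * x_j
e  = d - y;            % error e_k = d_k - y_k
dw = eta * e * x;      % weight modification dw_kj = eta * e_k * x_j
w  = w + dw;           % updated weights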
5
Adaline Learning Modes
Batch Learning Mode: the weight changes are accumulated over all training data and the weights are updated once per pass (epoch) through the data set.
Incremental Learning Mode: the weights are updated immediately after the presentation of each training data pair.
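The two modes can be contrasted with the following MATLAB sketch, reusing the names X, d, w, and eta from the previous sketch; nEpoch is an assumed number of passes through the data.

% Batch mode: accumulate the weight changes over all data, update once per epoch.
for epoch = 1:nEpoch
    dw = zeros(size(w));
    for i = 1:size(X, 1)
        e  = d(i) - X(i,:) * w;        % error of the i-th data set
        dw = dw + eta * e * X(i,:)';   % accumulate the weight change
    end
    w = w + dw;                        % single update at the end of the epoch
end

% Incremental mode: update the weights after each individual data set.
for epoch = 1:nEpoch
    for i = 1:size(X, 1)
        e = d(i) - X(i,:) * w;
        w = w + eta * e * X(i,:)';     % immediate update
    end
end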
6
Tangent Sigmoid Activation Function
y_k = f(net_k) = (1 − e^(−2·net_k)) / (1 + e^(−2·net_k)) = tanh(net_k),  with  net_k = Σ_j w_kj x_j
[Figure: a single neuron with inputs x1, ..., xm and weights wk1, ..., wkm]
Goal: derive a learning rule for a neuron with this activation function.
7
Logarithmic Sigmoid Activation Function
y_k = f(net_k) = 1 / (1 + e^(−net_k)),  with  net_k = Σ_j w_kj x_j
[Figure: a single neuron with inputs x1, ..., xm and weights wk1, ..., wkm]
Goal: derive a learning rule for a neuron with this activation function.
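Both sigmoid activation functions of the last two slides, together with their derivatives expressed through the output y (needed in the derivations that follow), can be written as MATLAB anonymous functions; this is only a sketch.

% Tangent sigmoid and its derivative written in terms of the output y.
tansig_f  = @(net) tanh(net);              % y = (1 - exp(-2*net)) / (1 + exp(-2*net))
tansig_df = @(y)   1 - y.^2;               % f'(net) = 1 - y^2

% Logarithmic sigmoid and its derivative written in terms of the output y.
logsig_f  = @(net) 1 ./ (1 + exp(-net));   % y = 1 / (1 + exp(-net))
logsig_df = @(y)   y .* (1 - y);           % f'(net) = y * (1 - y)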
8
Derivation of Learning Rules
For an arbitrary activation function f, the chain rule gives
∂E/∂w_kj = (∂E/∂y_k)(∂y_k/∂net_k)(∂net_k/∂w_kj) = −(d_k − y_k) · f'(net_k) · x_j.
9
Derivation of Learning Rules
The weight modification rule therefore becomes Δw_kj = η (d_k − y_k) f'(net_k) x_j, where the term f'(net_k) depends on the activation function used.
10
Derivation of Learning Rules
Linear function: f(net) = a·net,  f'(net) = a
Tangent sigmoid function: f(net) = tanh(net),  f'(net) = 1 − y_k²
Logarithmic sigmoid function: f(net) = 1/(1 + e^(−net)),  f'(net) = y_k (1 − y_k)
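A compact way to see how the general rule specializes is the MATLAB sketch below; activation, a, x, y, d, w, and eta are placeholders chosen for illustration.

% General update dw = eta * (d - y) * f'(net) * x, where f'(net) is selected
% according to the activation function (a is the slope of the linear function).
switch activation                 % 'linear', 'tansig', or 'logsig'
    case 'linear', df = a;
    case 'tansig', df = 1 - y^2;
    case 'logsig', df = y * (1 - y);
end
dw = eta * (d - y) * df * x;      % x is the input vector, y the neuron output
w  = w + dw;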
11
Derivation of Learning Rules
12
Homework 3
[Figure: a single neuron with inputs x1, x2 and weights w11, w12]
Given a neuron with a linear activation function (a = 0.5), write an m-file that will calculate the weights w11 and w12 so that the input [x1; x2] can match the output y1 the best.
Case 1 (odd-numbered Student ID): [x1; x2] = [2; 3], [y1] = [5]
Case 2 (even-numbered Student ID): [x1; x2] = [[2 1]; [3 1]], [y1] = [5 2]
Use the initial values w11 = 1 and w12 = 1.5, and η = 0.01. Determine the required number of iterations.
Note: Submit the m-file in hardcopy and softcopy.
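One possible skeleton for such an m-file is sketched below for the Case 1 data; it is only an illustration, and the stopping threshold of 1e-6 is an assumption, not part of the assignment.

% Illustrative skeleton for Homework 3 (Case 1), linear activation y = a * net.
a = 0.5;  eta = 0.01;
x = [2; 3];  d = 5;
w = [1; 1.5];                       % initial weights w11, w12

for iter = 1:10000
    y = a * (w' * x);               % neuron output
    e = d - y;                      % error
    w = w + eta * e * a * x;        % delta-rule update for the linear activation
    if abs(e) < 1e-6                % assumed stopping criterion
        break;
    end
end
fprintf('w11 = %.4f, w12 = %.4f after %d iterations\n', w(1), w(2), iter);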
13
Homework 3A
[Figure: a single neuron with inputs x1, x2, weights w11, w12, and an activation function marked "?"]
Given a neuron with a certain activation function, write an m-file that will calculate the weights w11 and w12 so that the input [x1; x2] can match the output y1 the best.
Even-numbered Student ID: linear function.  Odd-numbered Student ID: logarithmic sigmoid function.
[x1] = [0.2 0.5 0.4], [x2] = [0.5 0.8 0.3], [y1] = [0.1 0.7 0.9]
Use the initial values w11 = 0.5 and w12 = −0.5, and η = 0.01. Determine the required number of iterations.
Note: Submit the m-file in hardcopy and softcopy.
14
Multi Layer Perceptrons: MLP Architecture
[Figure: an MLP with inputs x1, x2, x3, weight matrices wji, wkj, wlk, an input layer, hidden layers, an output layer, and outputs y1, y2]
Possesses sigmoid activation functions in the neurons to enable modeling of nonlinearity.
Contains one or more "hidden layers".
Trained using the "Backpropagation" algorithm.
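The forward (function-signal) computation of such a network can be sketched in MATLAB as follows; the layer sizes, the random weights, and the omission of bias terms are simplifying assumptions for illustration.

% Forward pass of an MLP with one hidden layer (illustrative dimensions, no biases).
logsig = @(net) 1 ./ (1 + exp(-net));

x   = [0.1; 0.5; 0.9];      % 3 inputs (illustrative values)
Wji = rand(4, 3);           % weights input -> hidden (4 hidden neurons)
Wkj = rand(2, 4);           % weights hidden -> output (2 outputs)

yj = logsig(Wji * x);       % hidden-layer outputs
yk = logsig(Wkj * yj);      % network outputs y1, y2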
15
MLP Design Considerations
What activation functions should be used?
How many inputs does the network need?
How many hidden layers does the network need?
How many hidden neurons per hidden layer?
How many outputs should the network have?
There is no standard methodology to determine these values. Even though there are some heuristic guidelines, the final values are determined by a trial-and-error procedure.
16
Advantages of MLP
[Figure: an MLP with inputs x1, x2, x3 and weight matrices wji, wkj, wlk]
An MLP with one hidden layer is a universal approximator: it can approximate any function within any preset accuracy, on the condition that the weights and biases are appropriately assigned through the use of an adequate learning algorithm.
MLPs can be applied directly in the identification and control of dynamic systems with a nonlinear relationship between input and output.
MLPs deliver the best compromise between the number of parameters, structural complexity, and calculation cost.
17
Learning Algorithm of MLP: the "Backpropagation Learning Algorithm"
[Figure: function signals propagate forward through the network, error signals propagate backward]
Computations at each neuron j:
Forward propagation: the neuron output y_j
Backward propagation: the vector of error gradients ∂E/∂w_ji
18
Backpropagation Learning Algorithm
[Figure: signal flow at an output node j: the inputs y_i(n) and weights w_ji(n) form net_j(n), which passes through f(.) to give y_j(n); the desired output d_j(n) yields the error e_j(n)]
If node j is an output node:
e_j(n) = d_j(n) − y_j(n)
δ_j(n) = e_j(n) f'(net_j(n))
Δw_ji(n) = η δ_j(n) y_i(n)
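For a logarithmic sigmoid activation, f'(net_j) = y_j (1 − y_j), so the output-node computation might look like this sketch; dj, yj, yi, Wji, and eta are illustrative names.

% Local gradient and weight update for an output node j (logsig activation assumed).
% yi : column vector of inputs to node j,  Wji : row vector of its weights.
ej     = dj - yj;                   % error at the output node
deltaj = ej * yj * (1 - yj);        % delta_j = e_j * f'(net_j)
Wji    = Wji + eta * deltaj * yi';  % dw_ji = eta * delta_j * y_i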
19
Backpropagation Learning Algorithm
[Figure: signal flow from a hidden node j through weights w_kj(n) to an output node k, with net_k(n), y_k(n), desired output d_k(n), and error e_k(n)]
If node j is a hidden node:
δ_j(n) = f'(net_j(n)) Σ_k δ_k(n) w_kj(n)
Δw_ji(n) = η δ_j(n) y_i(n)
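The corresponding hidden-layer computation, again assuming logsig activations and illustrative names (yj is the vector of hidden outputs, Wkj the hidden-to-output weight matrix, deltak the output-node gradients):

% Local gradients for the hidden layer: the output-layer deltas are propagated
% back through the weights w_kj.
deltaj = (yj .* (1 - yj)) .* (Wkj' * deltak);   % delta_j = f'(net_j) * sum_k delta_k * w_kj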
20
MLP Training
Forward Pass: fix the weights w_ji(n) and compute the neuron outputs y_j(n), layer by layer from left (input side) to right (output side).
Backward Pass: calculate the local gradients δ_j(n), layer by layer from right to left, and update the weights to w_ji(n+1).
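Putting the two passes together, one training epoch of incremental backpropagation for a small one-hidden-layer MLP might look like the sketch below; the logsig activations, the absence of bias terms, and the names X, D, Wji, Wkj, and eta are all assumptions made for illustration.

% One epoch of incremental backpropagation for a 3-input, 4-hidden, 2-output MLP.
% X : N-by-3 matrix of inputs,  D : N-by-2 matrix of desired outputs.
logsig = @(net) 1 ./ (1 + exp(-net));

for i = 1:size(X, 1)
    x = X(i,:)';  d = D(i,:)';

    % Forward pass: fix the weights and compute the neuron outputs.
    yj = logsig(Wji * x);                         % hidden layer
    yk = logsig(Wkj * yj);                        % output layer

    % Backward pass: local gradients, from right to left.
    ek     = d - yk;
    deltak = ek .* yk .* (1 - yk);                % output-node deltas
    deltaj = (yj .* (1 - yj)) .* (Wkj' * deltak); % hidden-node deltas

    % Weight updates.
    Wkj = Wkj + eta * deltak * yj';
    Wji = Wji + eta * deltaj * x';
end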