Multi-Layer Perceptron
Threshold Logic Unit (TLU)
Inputs $x_1, \dots, x_n$ with weights $w_1, \dots, w_n$. Activation: $a = \sum_{i=1}^{n} w_i x_i$. Output: $y = 1$ if $a \ge \theta$, $y = 0$ if $a < \theta$.
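A minimal sketch of a TLU in Python with numpy; the AND weights and threshold below are one valid choice, used only for illustration:

```python
import numpy as np

def tlu(x, w, theta):
    """Threshold Logic Unit: outputs 1 if the weighted sum reaches theta."""
    a = np.dot(w, x)              # activation a = sum_i w_i * x_i
    return 1 if a >= theta else 0

# Example: a TLU computing logical AND
w, theta = np.array([1.0, 1.0]), 1.5
print([tlu(np.array(p), w, theta) for p in [(0, 0), (0, 1), (1, 0), (1, 1)]])
# -> [0, 0, 0, 1]
```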
Activation Functions
Four common choices, each mapping activation $a$ to output $y$: threshold, linear, piece-wise linear, and sigmoid.
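The four functions sketched in numpy; the piece-wise linear breakpoints (0 and 1) are an assumption, since the slide does not fix them:

```python
import numpy as np

def threshold(a):  return np.where(a >= 0, 1.0, 0.0)
def linear(a):     return a
def piecewise(a):  return np.clip(a, 0.0, 1.0)       # linear ramp, clamped at 0 and 1
def sigmoid(a):    return 1.0 / (1.0 + np.exp(-a))
```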
Decision Surface of a TLU
In the $(x_1, x_2)$ plane the decision line is $w_1 x_1 + w_2 x_2 = \theta$: patterns on one side of the line are classified as 1, patterns on the other side as 0.
Geometric Interpretation
The relation $\mathbf{w} \cdot \mathbf{x} = \theta$ defines the decision line, with $y = 1$ on one side and $y = 0$ on the other. The projection of a point $\mathbf{x}$ on the line onto the weight vector $\mathbf{w}$ has length $|\mathbf{x}_w| = \theta / |\mathbf{w}|$.
Geometric Interpretation
In $n$ dimensions the relation $\mathbf{w} \cdot \mathbf{x} = \theta$ defines an $(n-1)$-dimensional hyper-plane, which is perpendicular to the weight vector $\mathbf{w}$. On one side of the hyper-plane ($\mathbf{w} \cdot \mathbf{x} > \theta$) all patterns are classified by the TLU as "1", while those that get classified as "0" lie on the other side. If the patterns cannot be separated by a hyper-plane, then they cannot be correctly classified with a TLU.
Threshold as Weight
The threshold can be treated as an extra weight $w_{n+1} = \theta$ attached to a constant input $x_{n+1} = -1$. Then $a = \sum_{i=1}^{n+1} w_i x_i$, and the output simplifies to $y = 1$ if $a \ge 0$, $y = 0$ if $a < 0$.
Training ANNs
Training set S of examples {x, t}:
–x is an input vector and
–t is the desired target vector
–Example, logical AND: S = {((0,0),0), ((0,1),0), ((1,0),0), ((1,1),1)}
Iterative process:
–Present a training example x, compute the network output y, compare the output y with the target t, adjust the weights and thresholds.
Learning rule:
–Specifies how to change the weights w and thresholds θ of the network as a function of the inputs x, output y, and target t.
Perceptron Learning Rule
$\mathbf{w}' = \mathbf{w} + \eta (t - y)\,\mathbf{x}$, or in components $w_i' = w_i + \Delta w_i = w_i + \eta (t - y) x_i$ for $i = 1, \dots, n+1$, with $w_{n+1} = \theta$ and $x_{n+1} = -1$.
The parameter η is called the learning rate. It determines the magnitude of the weight updates $\Delta w_i$. If the output is correct ($t = y$) the weights are not changed ($\Delta w_i = 0$). If the output is incorrect ($t \ne y$) the weights $w_i$ are changed such that the output of the TLU for the new weights $w_i'$ moves closer to the target $t$.
Perceptron Training Algorithm
Repeat
  for each training vector pair (x, t)
    evaluate the output y when x is the input
    if y ≠ t then
      form a new weight vector w' according to w' = w + η(t − y)x
    else
      do nothing
    end if
  end for
Until y = t for all training vector pairs
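A runnable sketch of this loop in Python with numpy, using the threshold-as-weight trick from the earlier slide; η = 0.1 and the epoch cap are arbitrary choices:

```python
import numpy as np

def train_perceptron(X, t, eta=0.1, max_epochs=100):
    """Perceptron training: X is an (M, n) input matrix, t an (M,) target vector in {0, 1}."""
    X = np.hstack([X, -np.ones((len(X), 1))])    # append x_{n+1} = -1 for the threshold
    w = np.zeros(X.shape[1])
    for _ in range(max_epochs):
        errors = 0
        for x, target in zip(X, t):
            y = 1 if np.dot(w, x) >= 0 else 0
            if y != target:
                w += eta * (target - y) * x      # w' = w + eta (t - y) x
                errors += 1
        if errors == 0:                          # y = t for all training pairs
            break
    return w

# Logical AND, as in the training-set example above
w = train_perceptron(np.array([[0, 0], [0, 1], [1, 0], [1, 1]]), np.array([0, 0, 0, 1]))
```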
Perceptron Learning Rule: Worked Example
Targets are t = ±1 and the output is o = sgn(w₀ + w₁x₁ + w₂x₂); the arithmetic below implies updates Δw = η(t − o)(1, x₁, x₂) with η = 0.1.
Start with w = [0.25, −0.1, 0.5], i.e. the decision line x₂ = 0.2x₁ − 0.5 separating o = 1 from o = −1.
(x, t) = ([−1, −1], 1): o = sgn(0.25 + 0.1 − 0.5) = −1 ≠ t, so Δw = [0.2, −0.2, −0.2], giving w = [0.45, −0.3, 0.3].
(x, t) = ([2, 1], −1): o = sgn(0.45 − 0.6 + 0.3) = 1 ≠ t, so Δw = [−0.2, −0.4, −0.2], giving w = [0.25, −0.7, 0.1].
(x, t) = ([1, 1], 1): o = sgn(0.25 − 0.7 + 0.1) = −1 ≠ t, so Δw = [0.2, 0.2, 0.2].
Perceptron Convergence Theorem
The algorithm converges to the correct classification
–if the training data is linearly separable
–and η is sufficiently small.
If two classes of vectors X₁ and X₂ are linearly separable, the application of the perceptron training algorithm will eventually result in a weight vector w₀ such that w₀ defines a TLU whose decision hyper-plane separates X₁ and X₂ (Rosenblatt 1962). The solution w₀ is not unique, since if w₀ · x = 0 defines a hyper-plane, so does w₀' = k w₀ for any k > 0.
Multiple TLUs
Handwritten alphabetic character recognition
–26 classes: A, B, C, …, Z
–First TLU distinguishes between "A"s and "non-A"s, second TLU between "B"s and "non-B"s, etc.
Each weight $w_{ji}$ connects input $x_i$ with output $y_j$, and each output is trained independently: $w_{ji}' = w_{ji} + \eta (t_j - y_j)\, x_i$.
Linear Unit
Inputs $x_1, \dots, x_n$ with weights $w_1, \dots, w_n$; activation $a = \sum_{i=1}^{n} w_i x_i$; output $y = a = \sum_{i=1}^{n} w_i x_i$.
Gradient Descent Learning Rule
Consider a linear unit without threshold and with continuous output o (not just −1, 1):
–$o = w_0 + w_1 x_1 + \dots + w_n x_n$
Train the $w_i$'s such that they minimize the squared error
–$E[w_1, \dots, w_n] = \tfrac{1}{2} \sum_{d \in D} (t_d - o_d)^2$, where D is the set of training examples.
Gradient Descent
For a training set D, the gradient of the error is $\nabla E[\mathbf{w}] = [\partial E/\partial w_0, \dots, \partial E/\partial w_n]$, and each update moves downhill along it: $\Delta \mathbf{w} = -\eta\, \nabla E[\mathbf{w}]$, taking $(w_1, w_2)$ to $(w_1 + \Delta w_1, w_2 + \Delta w_2)$.
Componentwise, $\Delta w_i = -\eta\, \partial E/\partial w_i$, with
$\partial E/\partial w_i = \partial/\partial w_i\; \tfrac{1}{2} \sum_d (t_d - o_d)^2 = \partial/\partial w_i\; \tfrac{1}{2} \sum_d (t_d - \sum_i w_i x_{i,d})^2 = \sum_d (t_d - o_d)(-x_{i,d})$.
Incremental Stochastic Gradient Descent
Batch mode: gradient descent $\Delta\mathbf{w} = -\eta\, \nabla E_D[\mathbf{w}]$ over the entire data D, with $E_D[\mathbf{w}] = \tfrac{1}{2} \sum_d (t_d - o_d)^2$.
Incremental mode: gradient descent $\Delta\mathbf{w} = -\eta\, \nabla E_d[\mathbf{w}]$ over individual training examples d, with $E_d[\mathbf{w}] = \tfrac{1}{2} (t_d - o_d)^2$.
Incremental gradient descent can approximate batch gradient descent arbitrarily closely if η is small enough.
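The two modes differ only in when the weights are updated. A numpy sketch for the linear unit (η, the epoch counts, and the zero initialization are assumptions):

```python
import numpy as np

def grad_descent_batch(X, t, eta=0.01, epochs=1000):
    """Batch mode: one weight update per pass, using the summed gradient."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        o = X @ w                                # outputs o_d for all examples
        w -= eta * (X.T @ (o - t))               # dE/dw_i = sum_d (t_d - o_d)(-x_{i,d})
    return w

def grad_descent_incremental(X, t, eta=0.01, epochs=1000):
    """Incremental (stochastic) mode: update after every single example."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x, target in zip(X, t):
            w += eta * (target - x @ w) * x      # gradient of E_d = 1/2 (t_d - o_d)^2
    return w
```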
Perceptron vs. Gradient Descent Rule
Perceptron rule: $w_i' = w_i + \eta\, (t^p - y^p)\, x_i^p$, derived from manipulation of the decision surface.
Gradient descent rule: $w_i' = w_i + \eta\, (t^p - y^p)\, x_i^p$, derived from minimization of the error function $E[w_1, \dots, w_n] = \tfrac{1}{2} \sum_p (t^p - y^p)^2$ by means of gradient descent.
The two updates look identical, so where is the big difference?
Perceptron vs. Gradient Descent Rule
The perceptron learning rule is guaranteed to succeed if
–the training examples are linearly separable
–and the learning rate is sufficiently small.
The linear unit training rule uses gradient descent and is
–guaranteed to converge to the hypothesis with minimum squared error,
–given a sufficiently small learning rate,
–even when the training data contains noise,
–even when the training data is not separable by the hypothesis space H.
Presentation of Training Examples
Presenting all training examples once to the ANN is called an epoch. In incremental stochastic gradient descent, training examples can be presented in
–fixed order (1, 2, 3, …, M)
–randomly permuted order (5, 2, 7, …, 3)
–completely random order (4, 1, 7, 1, 5, 4, …)
Neuron with Sigmoid Function
Inputs $x_1, \dots, x_n$ with weights $w_1, \dots, w_n$; activation $a = \sum_{i=1}^{n} w_i x_i$; output $y = \sigma(a) = 1/(1+e^{-a})$.
Sigmoid Unit
With the bias input $x_0 = -1$ and weight $w_0$, the activation is $a = \sum_{i=0}^{n} w_i x_i$ and the output is $y = \sigma(a) = 1/(1+e^{-a})$.
$\sigma(x)$ is the sigmoid function $1/(1+e^{-x})$, with derivative $d\sigma(x)/dx = \sigma(x)\,(1-\sigma(x))$.
Gradient descent rules can be derived to train:
–one sigmoid unit: $\partial E/\partial w_i = -\sum_p (t^p - y^p)\; y^p (1-y^p)\; x_i^p$
–multilayer networks of sigmoid units: backpropagation.
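A small numpy sketch of σ and its derivative, with a finite-difference check of dσ/dx = σ(1−σ):

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def sigmoid_prime(a):
    s = sigmoid(a)
    return s * (1.0 - s)       # d(sigma)/da = sigma(a) (1 - sigma(a))

a = 0.3
numeric = (sigmoid(a + 1e-6) - sigmoid(a - 1e-6)) / 2e-6   # central difference
assert abs(numeric - sigmoid_prime(a)) < 1e-8
```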
Gradient Descent Rule for Sigmoid Output Function
For one pattern p, $E^p[w_1, \dots, w_n] = \tfrac{1}{2}(t^p - y^p)^2$ with $y = \sigma(a) = 1/(1+e^{-a})$:
$\partial E^p/\partial w_i = \partial/\partial w_i\; \tfrac{1}{2} (t^p - y^p)^2 = \partial/\partial w_i\; \tfrac{1}{2} \big(t^p - \sigma(\textstyle\sum_i w_i x_i^p)\big)^2 = (t^p - y^p)\; \sigma'(\textstyle\sum_i w_i x_i^p)\; (-x_i^p)$
where $\sigma'(a) = e^{-a}/(1+e^{-a})^2 = \sigma(a)\,(1-\sigma(a))$. This gives the update
$w_i' = w_i + \Delta w_i = w_i + \eta\; y^p (1-y^p)\,(t^p - y^p)\; x_i^p$.
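One update step for a single sigmoid unit, directly transcribing the rule above (a sketch; η = 0.5 is an arbitrary choice, and `sigmoid` is as defined earlier):

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def delta_rule_step(w, x, t, eta=0.5):
    """w'_i = w_i + eta * y(1-y)(t - y) x_i for one pattern (x, t)."""
    y = sigmoid(np.dot(w, x))
    return w + eta * y * (1.0 - y) * (t - y) * x
```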
Gradient Descent Learning Rule
$\Delta w_{ji} = \eta\;\, y_j^p (1-y_j^p)\;\, (t_j^p - y_j^p)\;\, x_i^p$
where η is the learning rate, $x_i$ is the activation of the pre-synaptic neuron, and the error $\delta_j = y_j(1-y_j)(t_j-y_j)$ of the post-synaptic neuron $y_j$ combines the target mismatch with the derivative of the activation function.
Learning with Hidden Units
Networks without hidden units are very limited in the input-output mappings they can model.
–More layers of linear units do not help: the result is still linear.
–Fixed output non-linearities are not enough.
We need multiple layers of adaptive non-linear hidden units. This gives us a universal approximator. But how can we train such nets?
–We need an efficient way of adapting all the weights, not just the last layer. This is hard.
–Learning the weights going into hidden units is equivalent to learning features, and nobody is telling us directly what the hidden units should do.
Learning by Perturbing Weights
Randomly perturb one weight and see if it improves performance; if so, save the change.
–Very inefficient: we need to do multiple forward passes on a representative set of training data just to change one weight.
–Towards the end of learning, large weight perturbations will nearly always make things worse.
We could randomly perturb all the weights in parallel and correlate the performance gain with the weight changes.
–Not any better, because we need lots of trials to "see" the effect of changing one weight through the noise created by all the others.
Learning the hidden-to-output weights is easy; learning the input-to-hidden weights is hard.
The Idea Behind Backpropagation
We don't know what the hidden units ought to do, but we can compute how fast the error changes as we change a hidden activity.
–Instead of using desired activities to train the hidden units, use error derivatives w.r.t. hidden activities.
–Each hidden activity can affect many output units and can therefore have many separate effects on the error. These effects must be combined.
–We can compute error derivatives for all the hidden units efficiently.
–Once we have the error derivatives for the hidden activities, it's easy to get the error derivatives for the weights going into a hidden unit.
Multi-Layer Networks
A multi-layer network consists of an input layer, one or more hidden layers, and an output layer.
Training Rule for Weights to the Output Layer
$E^p[w_{ji}] = \tfrac{1}{2} \sum_j (t_j^p - y_j^p)^2$
$\partial E^p/\partial w_{ji} = \partial/\partial w_{ji}\; \tfrac{1}{2} \sum_j (t_j^p - y_j^p)^2 = \dots = -\,y_j^p (1-y_j^p)\,(t_j^p - y_j^p)\; x_i^p$
$\Delta w_{ji} = \eta\; y_j^p (1-y_j^p)\,(t_j^p - y_j^p)\; x_i^p = \eta\; \delta_j^p\, x_i^p$, with $\delta_j^p := y_j^p (1-y_j^p)\,(t_j^p - y_j^p)$.
Training Rule for Weights to the Hidden Layer
Credit assignment problem: there are no target values t for the hidden layer units, so what is the error of a hidden unit?
Back-propagate the output-layer deltas through the weights: $\delta_k = \sum_j w_{jk}\, \delta_j$ (each $\delta_j = y_j(1-y_j)(t_j-y_j)$ from the output layer), giving
$\Delta w_{ki} = \eta\;\, x_k^p (1-x_k^p)\;\, \delta_k^p\;\, x_i^p$.
Training Rule for Weights to the Hidden Layer (Derivation)
$E^p[w_{ki}] = \tfrac{1}{2} \sum_j (t_j^p - y_j^p)^2$
$\partial E^p/\partial w_{ki} = \partial/\partial w_{ki}\; \tfrac{1}{2} \sum_j \big(t_j^p - \sigma(\textstyle\sum_k w_{jk}\, x_k^p)\big)^2 = \partial/\partial w_{ki}\; \tfrac{1}{2} \sum_j \big(t_j^p - \sigma(\textstyle\sum_k w_{jk}\, \sigma(\textstyle\sum_i w_{ki}\, x_i^p))\big)^2$
$= -\sum_j (t_j^p - y_j^p)\; \sigma_j'(a)\; w_{jk}\; \sigma_k'(a)\; x_i^p = -\sum_j \delta_j\; w_{jk}\; x_k (1-x_k)\; x_i^p$
$\Delta w_{ki} = \eta\, \delta_k\, x_i^p$, with $\delta_k = \sum_j \delta_j\, w_{jk}\;\, x_k (1-x_k)$.
Backpropagation
Forward step: propagate activations from the input to the output layer.
Backward step: propagate errors from the output back to the hidden layer.
Backpropagation Algorithm
Initialize each w_{i,j} to some small random value.
Until the termination condition is met, do:
  For each training example, do:
    Input the instance (x_1, …, x_n) to the network and compute the network outputs y_k.
    For each output unit k:
      δ_k = y_k (1 − y_k)(t_k − y_k)
    For each hidden unit h:
      δ_h = y_h (1 − y_h) Σ_k w_{h,k} δ_k
    For each network weight w_{i,j}, do:
      w_{i,j} = w_{i,j} + Δw_{i,j}, where Δw_{i,j} = η δ_j x_{i,j}
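A compact numpy sketch of this algorithm for one hidden layer of sigmoid units, with biases handled as weights on a constant −1 input as in the threshold-as-weight slide; η, the epoch count, and the initialization scale are assumptions:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def train_backprop(X, T, n_hidden, eta=0.5, epochs=5000, seed=0):
    """X: (M, n_in) inputs; T: (M, n_out) targets in (0, 1)."""
    rng = np.random.default_rng(seed)
    n_in, n_out = X.shape[1], T.shape[1]
    W_hid = rng.normal(0, 0.1, (n_hidden, n_in + 1))     # +1 column: bias via x = -1
    W_out = rng.normal(0, 0.1, (n_out, n_hidden + 1))
    for _ in range(epochs):
        for x, t in zip(X, T):
            # Forward step: propagate activation from input to output layer.
            xb = np.append(x, -1.0)
            h = sigmoid(W_hid @ xb)
            hb = np.append(h, -1.0)
            y = sigmoid(W_out @ hb)
            # Backward step: propagate errors from output to hidden layer.
            delta_out = y * (1 - y) * (t - y)                        # delta_k
            delta_hid = h * (1 - h) * (W_out[:, :-1].T @ delta_out)  # delta_h
            W_out += eta * np.outer(delta_out, hb)
            W_hid += eta * np.outer(delta_hid, xb)
    return W_hid, W_out
```

For instance, called with the XOR patterns and two hidden units, this stochastic loop typically converges to a correct mapping, though (as the next slide notes) only to a local minimum.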
Backpropagation
Gradient descent over the entire network weight vector; easily generalized to arbitrary directed graphs.
Will find a local, not necessarily global, error minimum; in practice it often works well (it can be invoked multiple times with different initial weights).
Often includes a weight momentum term: $\Delta w_{i,j}(n) = \eta\, \delta_j\, x_{i,j} + \alpha\, \Delta w_{i,j}(n-1)$.
Minimizes error over the training examples; will it generalize well to unseen instances (over-fitting)?
Training can be slow: typically 1,000-10,000 iterations (use Levenberg-Marquardt instead of gradient descent). Using the network after training is fast.
Convergence of Backprop
Gradient descent reaches some local minimum, perhaps not the global minimum. Remedies:
–Add a momentum term: $\Delta w_{ki}(n) = \eta\, \delta_k(n)\, x_i(n) + \alpha\, \Delta w_{ki}(n-1)$, with $\alpha \in [0,1]$
–Stochastic gradient descent
–Train multiple nets with different initial weights
Nature of convergence: initialize the weights near zero, so the initial network is near-linear; increasingly non-linear functions become possible as training progresses.
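The momentum term in code, as a minimal sketch (here `grad_step` stands for the current η δ x update, and α = 0.9 is the common choice mentioned later in these slides):

```python
def momentum_update(w, grad_step, delta_w_prev, alpha=0.9):
    """Blend the current gradient step with the previous weight update."""
    delta_w = grad_step + alpha * delta_w_prev   # dw(n) = eta*delta*x + alpha*dw(n-1)
    return w + delta_w, delta_w                  # carry delta_w into the next call
```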
Optimization Methods
There are other optimization methods with faster convergence than gradient descent:
–Newton's method uses a quadratic approximation (2nd-order Taylor expansion): $F(\mathbf{x} + \Delta\mathbf{x}) = F(\mathbf{x}) + \nabla F(\mathbf{x})\, \Delta\mathbf{x} + \tfrac{1}{2}\, \Delta\mathbf{x}^T\, \nabla^2 F(\mathbf{x})\, \Delta\mathbf{x} + \dots$
–Conjugate gradients
–Levenberg-Marquardt algorithm
NN: Universal Approximator?
Kolmogorov proved that any continuous function g(x) defined on the unit hypercube $I^n$ can be represented as $g(\mathbf{x}) = \sum_{j=1}^{2n+1} \Xi_j \big( \sum_{i=1}^{n} \Psi_{ij}(x_i) \big)$ for properly chosen functions $\Xi_j$ and $\Psi_{ij}$. (A. N. Kolmogorov. On the representation of continuous functions of several variables by superposition of continuous functions of one variable and addition. Doklady Akademiia Nauk SSSR, 114(5):953-956, 1957.)
Universal Approximation Property of ANN
Boolean functions: every Boolean function can be represented by a network with a single hidden layer, but it might require a number of hidden units exponential in the number of inputs.
Continuous functions: every bounded continuous function can be approximated with arbitrarily small error by a network with one hidden layer [Cybenko 1989, Hornik 1989]. Any function can be approximated to arbitrary accuracy by a network with two hidden layers [Cybenko 1988].
Ways to use weight derivatives How often to update –after each training case? –after a full sweep through the training data? How much to update –Use a fixed learning rate? –Adapt the learning rate? –Add momentum? –Don’t use steepest descent?
Applications of Neural Networks
ALVINN: the neural network that learns to drive a van from camera inputs.
NETtalk: a network that learns to pronounce English text.
Recognizing hand-written zip codes.
Lots of applications in financial time series analysis.
NETtalk (Sejnowski & Rosenberg, 1987)
The task is to learn to pronounce English text from examples. Training data is 1,024 words from a side-by-side English/phoneme source.
Input: 7 consecutive characters from written text, presented in a moving window that scans the text.
Output: phoneme code giving the pronunciation of the letter at the center of the input window.
Network topology: 7×29 inputs (26 characters plus punctuation marks), 80 hidden units, and 26 output units (phoneme code). Sigmoid units in the hidden and output layers.
NETtalk (contd.) Training protocol: 95% accuracy on training set after 50 epochs of training by full gradient descent. 78% accuracy on a set-aside test set. Comparison against Dectalk (a rule based expert system): Dectalk performs better; it represents a decade of analysis by linguists. NETtalk learns from examples alone and was constructed with little knowledge of the task.
Overfitting The training data contains information about the regularities in the mapping from input to output. But it also contains noise –The target values may be unreliable. –There is sampling error. There will be accidental regularities just because of the particular training cases that were chosen. When we fit the model, it cannot tell which regularities are real and which are caused by sampling error. –So it fits both kinds of regularity. –If the model is very flexible it can model the sampling error really well. This is a disaster.
A simple example of overfitting Which model do you believe? –The complicated model fits the data better. –But it is not economical A model is convincing when it fits a lot of data surprisingly well. –It is not surprising that a complicated model can fit a small amount of data.
Generalization
The objective of learning is to achieve good generalization to new cases; otherwise, just use a look-up table. Generalization can be defined as a mathematical interpolation or regression over a set of training points.
Generalization: An Example, Computing Parity
A network with n bits of input and roughly (n+1)² weights computes the parity-bit value (the figure shows a layered net with unit thresholds >0, >1, >2 and ±1 weights). There are 2^n possible examples. Can it learn from m examples to generalize to all 2^n possibilities?
Generalization
(Figure: test error versus the fraction of cases used during training, for a network test of 10-bit parity; Denker et al., 1987.) When the number of training cases m is much larger than the number of weights, generalization occurs.
Generalization: A Probabilistic Guarantee
Let N = # hidden nodes, m = # training cases, W = # weights, and ε = error tolerance (ε < 1/8). The network will generalize with 95% confidence if:
1. the error on the training set is < ε/2, and
2. m is large enough relative to W and ε (on the order of (W/ε)·log(N/ε)).
Based on PAC theory; this provides a good rule of practice.
Generalization: Over-Training
Over-training is the equivalent of over-fitting a set of data points to a curve that is too complex. Occam's Razor (1300s): "plurality should not be assumed without necessity". The simplest model that explains the majority of the data is usually the best.
Generalization: Preventing Over-Training
–Use a separate test or tuning set of examples.
–Monitor the error on the tuning set as the network trains.
–Stop training just prior to the over-fit error occurring (early stopping, or tuning); see the sketch below.
–The number of effective weights is thereby reduced.
–Most new systems have automated early-stopping methods.
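A sketch of automated early stopping; `train_one_epoch` and `test_error` are hypothetical placeholders for whatever training and evaluation routines are in use:

```python
def train_with_early_stopping(net, train_set, tuning_set,
                              train_one_epoch, test_error,
                              max_epochs=1000, patience=10):
    """Stop when tuning-set error has not improved for `patience` epochs."""
    best_err, best_net, stale = float("inf"), net, 0
    for epoch in range(max_epochs):
        net = train_one_epoch(net, train_set)
        err = test_error(net, tuning_set)      # monitor error on the tuning set
        if err < best_err:
            best_err, best_net, stale = err, net, 0
        else:
            stale += 1
            if stale >= patience:              # error stopped improving: over-fit begins
                break
    return best_net
```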
Generalization: Weight Decay
Weight decay is an automated method of effective-weight control. Adjust the backpropagation error function to penalize the growth of unnecessary weights:
$E' = E + \tfrac{\lambda}{2} \sum_i w_i^2$
where λ is the weight-cost parameter. Each weight is decayed by an amount proportional to its magnitude; those not reinforced by training go to 0.
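A minimal sketch of the decayed error and the corresponding gradient step, assuming the quadratic penalty form given above:

```python
import numpy as np

def decayed_error(E, w, lam):
    """E' = E + (lam/2) * sum_i w_i^2, penalizing unnecessary weight growth."""
    return E + 0.5 * lam * np.sum(w ** 2)

def decayed_gradient_step(w, grad_E, eta, lam):
    """Each weight is pulled toward zero in proportion to its magnitude (lam * w)."""
    return w - eta * (grad_E + lam * w)
```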
Network Design & Training Issues
Design:
–Architecture of the network
–Structure of the artificial neurons
–Learning rules
Training:
–Ensuring optimum training
–Learning parameters
–Data preparation
–and more…
Network Design: Architecture of the Network
How many nodes? This determines the number of network weights. How many layers? How many nodes per layer (input layer, hidden layers, output layer)?
Automated methods:
–augmentation (cascade correlation)
–weight pruning and elimination
Network Design Architecture of the network: Connectivity? Concept of model or hypothesis space Constraining the number of hypotheses: – selective connectivity – shared weights – recursive connections
Network Design: Structure of Artificial Neuron Nodes
Choice of input integration:
–summed
–squared and summed
–multiplied
Choice of activation (transfer) function:
–sigmoid (logistic)
–hyperbolic tangent
–Gaussian
–linear
–soft-max
Network Design Selecting a Learning Rule Generalized delta rule (steepest descent) Momentum descent Advanced weight space search techniques Global Error function can also vary - normal - quadratic - cubic
Network Training
How do you ensure that a network has been well trained? The objective is to achieve good generalization accuracy on new examples/cases.
–Establish a maximum acceptable error rate.
–Train the network using a validation (tuning) set to tune it.
–Validate the trained network against a separate test set, usually referred to as a production test set.
Network Training, Approach #1: Large Sample
When the amount of available data is large, divide the available examples randomly: ~70% into a training set, used to develop one ANN model, and ~30% into a test set, used to compute the test error. The generalization error is taken to equal the test error; the model is then applied to the production set of new, unseen cases.
Network Training, Approach #2: Cross-Validation
When the amount of available data is small, split the examples into a training set (90%) and a test set (10%), and repeat 10 times, developing 10 different ANN models and accumulating their test errors. The generalization error is determined by the mean and standard deviation of the test errors.
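A sketch of the 10-fold procedure; `build_and_test` is a hypothetical function that trains one ANN model on a split and returns its test error:

```python
import numpy as np

def cross_validate(X, t, build_and_test, k=10, seed=0):
    """Train k models on k 90/10 splits; report mean and std of test error."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    folds = np.array_split(idx, k)
    errs = []
    for i in range(k):
        test_idx = folds[i]
        train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
        errs.append(build_and_test(X[train_idx], t[train_idx],
                                   X[test_idx], t[test_idx]))
    return np.mean(errs), np.std(errs)
```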
Network Training
How do you select between two ANN designs? A statistical test of hypothesis is required to ensure that a significant difference exists between the error rates of the two ANN models. If the Large Sample method has been used, apply McNemar's test (this assumes a classification problem; for function approximation, use a paired t-test for the difference of means). If cross-validation was used, apply a paired t-test for the difference of two proportions.
Network Training: Mastering ANN Parameters
Typical values and ranges:
–learning rate η: typical 0.1, range 0.01-0.99
–momentum α: typical 0.8, range 0.1-0.9
–weight-cost λ: typical 0.1, range 0.001-0.5
Fine tuning: adjust individual parameters at each node and/or connection weight, or adjust automatically during training.
Network Training: Weight Initialization
–Random initial values within some ± range.
–Use smaller weight values for nodes with many incoming connections.
–Rule of thumb: the initial weight range should shrink with the number of connections coming into a node (commonly on the order of ±1/√fan-in).
Network Training: Typical Problems During Training
Plotting total error E against training iterations, we would like a steady, rapid decline in total error. But sometimes the error flattens out early; this is seldom a local minimum, so reduce the learning or momentum parameter. And sometimes the error oscillates or refuses to fall; reduce the learning parameters, and note that this may indicate the data is not learnable.
ALVINN
Automated driving at 70 mph on a public highway. A camera image of 30×32 pixels serves as input; there are 30 outputs for steering and 4 hidden units, with 30×32 weights from the image into each hidden unit.
Perceptron vs. TLU
In Rosenblatt's perceptron, the input pattern feeds a layer of association units (A-units), whose outputs are combined through trained weights into a summation followed by a fixed threshold. The association units can be assigned arbitrary Boolean functions of the input pattern; only the weights are trained.
Gradient Descent
Gradient-Descent(training_examples, η)
Each training example is a pair ⟨(x_1, …, x_n), t⟩, where (x_1, …, x_n) is the vector of input values, t is the target output value, and η is the learning rate (e.g. 0.1).
  Initialize each w_i to some small random value.
  Until the termination condition is met, do:
    Initialize each Δw_i to zero.
    For each ⟨(x_1, …, x_n), t⟩ in training_examples, do:
      Input the instance (x_1, …, x_n) to the linear unit and compute the output o.
      For each linear unit weight w_i: Δw_i = Δw_i + η (t − o) x_i
    For each linear unit weight w_i: w_i = w_i + Δw_i
Literature
–Neural Networks: A Comprehensive Foundation, Simon Haykin, Prentice-Hall, 1999.
–Neural Networks for Pattern Recognition, C. M. Bishop, Oxford University Press, 1996.
–Neural Network Design, M. Hagan et al., PWS, 1995.
–Perceptrons: An Introduction to Computational Geometry, Minsky & Papert, 1969.
Software
–Neural Networks for Face Recognition: http://www.cs.cmu.edu/afs/cs.cmu.edu/user/mitchell/ftp/faces.html
–SNNS, Stuttgart Neural Networks Simulator: http://www-ra.informatik.uni-tuebingen.de/SNNS
–Neural Networks at your fingertips: http://www.geocities.com/CapeCanaveral/1624/
–Neural Network Design Demonstrations: http://ee.okstate.edu/mhagan/nndesign_5.ZIP
–Bishop's network toolbox
–Matlab Neural Network Toolbox
A change of notation For simple networks we use the notation x for activities of input units y for activities of output units z for the summed input to an output unit For networks with multiple hidden layers: y is used for the output of a unit in any layer x is the summed input to a unit in any layer The index indicates which layer a unit is in.
Non-linear Neurons with Smooth Derivatives
For backpropagation, we need neurons that have well-behaved derivatives. Typically they use the logistic function $y = 1/(1+e^{-x})$, which rises smoothly from 0 through 0.5 to 1. The output is a smooth function of the inputs and the weights, and the derivative takes the simple form $dy/dx = y(1-y)$; it's odd to express it in terms of y, but convenient.
Sketch of the backpropagation algorithm on a single training case First convert the discrepancy between each output and its target value into an error derivative. Then compute error derivatives in each hidden layer from error derivatives in the layer above. Then use error derivatives w.r.t. activities to get error derivatives w.r.t. the weights.
The Derivatives
In this notation, with $y_j$ the output of unit j and $x_j$ its summed input ($x_j = \sum_i w_{ij}\, y_i$, $y_j = 1/(1+e^{-x_j})$), the chain rule gives:
$\partial E/\partial x_j = y_j (1-y_j)\; \partial E/\partial y_j$
$\partial E/\partial w_{ij} = y_i\; \partial E/\partial x_j$
$\partial E/\partial y_i = \sum_j w_{ij}\; \partial E/\partial x_j$
Momentum
Sometimes we add a momentum term to $\Delta w_{ji}$: to the current update we add α times the weight update from the last iteration,
$\Delta w_{ji}(n) = \eta\, \delta_j\, x_i + \alpha\, \Delta w_{ji}(n-1)$, where often $\alpha = 0.9$.
This allows us to use a high learning rate while preventing the oscillatory behavior that a high learning rate can sometimes cause: momentum keeps the updates going in the same direction.
More on backpropagation Performs gradient descent over the entire network weight vector. Will find a local, not necessarily global, error minimum. Minimizes error over training set; need to guard against overfitting just as with decision tree learning. Training takes thousands of iterations (epochs) --- slow!
Network Topology
Designing a network topology is an art. We can learn the network topology using genetic algorithms, but GAs are very CPU-intensive; an alternative that people use is hill-climbing.
First MLP Exercise (Due June 19)
–Become familiar with the Neural Network Toolbox in Matlab.
–Construct a single-hidden-layer, feed-forward network with sigmoidal units from input to output. The network should have n hidden units, n = 3 to 6.
–Construct two more networks of the same nature, with n−1 and n+1 hidden units respectively.
–Initial random weights are drawn from N(µ, σ²).
–The dimensionality of the input data is d.
First MLP Exercise (Contd.)
Construct a train and a test set of size M. For simplicity, choose two distributions, N(−1, σ²) and N(1, σ²). Choose M/2 samples of d dimensions from the first distribution and M/2 from the second; this way you get a set of M vectors in d dimensions. Give the first set a class label of 0 and the second set a class label of 1. Repeat this again for the construction of the test set.
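The exercise targets Matlab's toolbox; as a language-neutral illustration of the data construction, here is a numpy sketch (the M, d, and σ values are placeholders):

```python
import numpy as np

def make_set(M, d, sigma, seed=0):
    """M/2 samples from N(-1, sigma^2)^d labeled 0, M/2 from N(+1, sigma^2)^d labeled 1."""
    rng = np.random.default_rng(seed)
    X0 = rng.normal(-1.0, sigma, size=(M // 2, d))
    X1 = rng.normal(+1.0, sigma, size=(M // 2, d))
    X = np.vstack([X0, X1])
    y = np.concatenate([np.zeros(M // 2), np.ones(M // 2)])
    return X, y

X_train, y_train = make_set(M=200, d=4, sigma=1.0, seed=1)   # repeat with a new
X_test,  y_test  = make_set(M=200, d=4, sigma=1.0, seed=2)   # seed for the test set
```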
Actual Training
–Train 5 networks with the same training data (each network has different initial conditions).
–Construct a classification-error graph for both train and test data taken at different time steps (mean and std over the 5 nets).
–Repeat for n = 3-6, using both n+1 and n−1 hidden units.
–Discuss the results; justify with graphs and provide a clear understanding (you may try other setups to test your understanding).
–Consider momentum and weight decay.
Generalization
Consider the 20-bit parity problem: a 20-20-1 net has 441 weights. For 95% confidence that the net will predict within the error tolerance ε, the PAC bound above gives the required number of training examples; not bad, considering the 2^20 ≈ 10^6 possible examples.
Generalization: Training Sample & Network Complexity
The choice of W is a trade-off:
–smaller W reduces the required size of the training sample;
–larger W supplies the freedom to construct the desired function.
Optimum W ⇒ optimum # hidden nodes.
Generalization How can we control number of effective weights? Manually or automatically select optimum number of hidden nodes and connections Prevent over-fitting = over-training Add a weight-cost term to the bp error equation