
1 Lecture 6 Neural Network Training

2 Neural Network Training Network training is basic to establishing the functional relationship between the inputs and the outputs of any neural network. Considerable effort has been spent on finding faster and more efficient training algorithms that reduce the time required to train a network.

3 Neural Network Training There are two basic classes of network training: supervised learning, which involves an external source of knowledge about the system, and unsupervised learning, which involves no external source of knowledge and relies on local information and internal data.

4 Neural Network Training In supervised learning, the desired outputs of the network for every given input condition are specified, and the network learns the appropriate functional relationship between them through repeated application of training sets of input-output pairs. The popular back-propagation algorithm, used in many applications, belongs to this class. It gets its name from the fact that the synaptic weights of a multi-layer network are adapted iteratively by propagating some measure of the error between the desired and actual output of the network from its output back to its input.

5 Supervised Learning
- A teacher presents the ANN with input-output pairs
- ANN weights are adjusted according to the error
- Iterative algorithms (e.g. Delta rule, BP rule)
- The quality of the training examples is critical

6 Neural Network Training In unsupervised learning, no information on the desired output of the network that corresponds to a particular input is available. Here, the network is auto-associative, learning to respond to the different inputs in different ways. Typical applications of this class are feature detection and data clustering. Hebb's algorithm and competitive learning are two examples of unsupervised learning algorithms. A wide range of network topologies, such as those due to Hopfield, Hamming and Boltzmann, also use the same method of learning. In general, these networks, with their ability to generate arbitrary mappings between their inputs and outputs, are used as associative memories and classifiers.

7 Unsupervised Learning
- The ANN adapts its weights to cluster the input data
- Hebbian learning: the connection between stimulus and response is strengthened (sketched below)
- Competitive learning algorithms (e.g. Kohonen and ART): input weights are adjusted to resemble the stimulus
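To make the Hebbian rule concrete, here is a minimal sketch (not from the slides) of the plain Hebbian update Δw = η·y·x for a single linear neuron; the learning rate η and the toy data are assumptions of the example:

```python
import numpy as np

def hebbian_update(w, x, eta=0.01):
    """Plain Hebbian rule: strengthen weights in proportion to the
    correlation between the input x and the neuron's response y."""
    y = w @ x                 # neuron response (linear element)
    return w + eta * y * x    # delta_w = eta * y * x

# toy usage with assumed random data
rng = np.random.default_rng(0)
w = rng.normal(size=3)
for _ in range(100):
    x = rng.normal(size=3)
    w = hebbian_update(w, x)
```

Note that the plain rule lets the weights grow without bound, so normalized variants (such as Oja's rule) are often used in practice.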

8 6.1 The Widrow-Hoff Training Algorithm Introduced by Widrow and Hoff in the 1960s. A single neuron with inputs x = [x1, ..., xn]ᵀ, an array of synaptic weights w = [w1, ..., wn]ᵀ, and weighted sum s = wᵀx = Σᵢ wᵢxᵢ.

9 6.1 The Widrow-Hoff Training Algorithm Training is based on some measure of the discrepancy or error e = d − y between the desired and actual output of the element. The LMS (Least Mean Squares) algorithm uses the squared error for each training set as the objective function; minimizing the mean squared error E[e²] gives the optimum synaptic weight vector w* = R⁻¹p (the Wiener solution), where R = E[xxᵀ] is the input autocorrelation matrix and p = E[dx] is the cross-correlation vector. In practice, R and p are not known, and the Wiener solution unfortunately cannot be used.

10 6.1 The Widrow-Hoff Training Algorithm Widrow and Hoff observed that the partial derivative of the squared instantaneous error, ∂e²/∂w = −2e·x, can be used as an estimate of the true gradient. The Widrow-Hoff algorithm can then be expressed as the iteration w(k+1) = w(k) + 2λ e(k) x(k), where λ is some positive constant that defines the rate of convergence of the algorithm.
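As an illustration, a minimal sketch of the Widrow-Hoff iteration in Python, assuming a linear element and a synthetic training set (the data, λ and epoch count are choices made for the example):

```python
import numpy as np

def lms_train(X, d, lam=0.01, epochs=50):
    """Widrow-Hoff (LMS) training: w <- w + 2*lam*e*x,
    with e = d - y the instantaneous error."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x, target in zip(X, d):
            y = w @ x                 # actual output
            e = target - y            # error e = d - y
            w = w + 2 * lam * e * x   # Widrow-Hoff update
    return w

# toy usage: recover a known linear map from assumed data
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
w_true = np.array([0.5, -1.0, 2.0])
d = X @ w_true
print(lms_train(X, d))   # approaches w_true
```

Keeping 2λ‖x‖² well below 1 makes each update shrink the instantaneous error, consistent with the error-change result on the next slide.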

11 6.1 The Widrow-Hoff Training Algorithm At each iteration the change in the error is Δe(k) = −2λ e(k) ‖x(k)‖², so the error magnitude decreases provided 0 < λ < 1/‖x(k)‖².

12 6.2 The Delta Training Algorithm The original Widrow-Hoff algorithm must be modified when the neural element contains a nonlinearity y = f(s). The Delta training algorithm is given by the iteration w(k+1) = w(k) + 2λ e(k) f′(s(k)) x(k), provided the activation function f is differentiable or smooth.
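A corresponding sketch of the Delta rule update, assuming a sigmoid nonlinearity f(s) = 1/(1 + e^(−s)); the choice of f is an assumption of the example, not fixed by the slides:

```python
import numpy as np

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

def delta_update(w, x, d, lam=0.1):
    """Delta rule: the Widrow-Hoff update scaled by f'(s)."""
    s = w @ x                          # weighted sum
    y = sigmoid(s)                     # nonlinear output
    e = d - y                          # error e = d - y
    fprime = y * (1.0 - y)             # sigmoid derivative f'(s)
    return w + 2 * lam * e * fprime * x
```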

13 6.3 Multi-layer ANN Training Algorithms For control applications, the number of layers and the number of neurons per layer are kept minimal: neural networks with an input layer, a single hidden layer and an output layer, with a total of fewer than 10 nodes, have proved quite successful in practical neural controllers. Training is performed first for a given number of neurons; the number is then reduced and the ANN is re-trained. The number of neurons is further reduced at each successive training session until the measure of the error starts to increase. The initial choice is arbitrary and a matter of experimentation; a sketch of this pruning loop follows.
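A sketch of that pruning procedure; `train_and_evaluate` is a hypothetical helper, assumed to train a single-hidden-layer ANN with `n` hidden neurons and return its error measure:

```python
def prune_hidden_layer(n_start, train_and_evaluate):
    """Shrink the hidden layer until the error starts to increase,
    then keep the last size that did not make things worse."""
    n = n_start
    best_n, best_err = n, train_and_evaluate(n)   # hypothetical helper
    while n > 1:
        n -= 1
        err = train_and_evaluate(n)   # re-train with fewer neurons
        if err > best_err:
            break                     # error started to increase: stop
        best_n, best_err = n, err
    return best_n
```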

14 6.4 The Back-propagation (BP) Algorithm Consider a simple 2-2-1 ANN (two inputs, two hidden nodes, one output node).

15 6.4 The Back-propagation (BP) Algorithm [slide figure: equations for the first layer and the output layer]

16 6.4 The Back-propagation (BP) Algorithm The distorting or compression function of the node, typically the sigmoid f(s) = 1/(1 + e^(−s)).
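To make the 2-2-1 forward pass concrete, a minimal sketch assuming sigmoid compression at every node; the weight shapes follow the 2-2-1 structure, and the values would come from training:

```python
import numpy as np

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

def forward(x, W1, b1, w2, b2):
    """2-2-1 forward pass: first (hidden) layer then output node,
    each followed by the sigmoid compression function."""
    h = sigmoid(W1 @ x + b1)      # first layer: W1 is 2x2, h has 2 values
    y = sigmoid(w2 @ h + b2)      # output layer: w2 has 2 weights, y is scalar
    return h, y
```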

17 6.4 The Back-propagation (BP) Algorithm

18 Steps in the Back-propagation Algorithm
STEP ONE: Initialize the weights and biases. The weights in the network are initialized to random numbers from the interval [-1, 1]. Each unit has a bias associatedated with it; the biases are similarly initialized to random numbers from the interval [-1, 1].
STEP TWO: Feed the training sample.

19 Steps in the Back-propagation Algorithm (cont.)
STEP THREE: Propagate the inputs forward; compute the net input and output of each unit in the hidden and output layers.
STEP FOUR: Back-propagate the error.
STEP FIVE: Update the weights and biases to reflect the propagated errors.
STEP SIX: Check the terminating conditions.
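Putting steps one through five together, a minimal sketch of one back-propagation pass for the 2-2-1 network, assuming sigmoid units, a squared-error objective and random initialization in [-1, 1] as described above; the XOR target and iteration budget are toy assumptions:

```python
import numpy as np

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

def bp_step(x, d, W1, b1, w2, b2, lam=0.5):
    """One BP iteration for a 2-2-1 network with sigmoid units."""
    # Step 3: propagate the inputs forward
    h = sigmoid(W1 @ x + b1)                    # hidden outputs (2,)
    y = sigmoid(w2 @ h + b2)                    # network output (scalar)
    # Step 4: back-propagate the error
    e = d - y
    delta_out = e * y * (1 - y)                 # output-node delta
    delta_hid = (w2 * delta_out) * h * (1 - h)  # hidden-node deltas
    # Step 5: update weights and biases
    w2 = w2 + lam * delta_out * h
    b2 = b2 + lam * delta_out
    W1 = W1 + lam * np.outer(delta_hid, x)
    b1 = b1 + lam * delta_hid
    return W1, b1, w2, b2

# Steps 1-2 and 6: random initialization in [-1, 1], feed samples,
# and terminate after a fixed iteration budget (a toy choice)
rng = np.random.default_rng(0)
W1 = rng.uniform(-1, 1, size=(2, 2)); b1 = rng.uniform(-1, 1, size=2)
w2 = rng.uniform(-1, 1, size=2);      b2 = rng.uniform(-1, 1)
for _ in range(5000):
    x = rng.integers(0, 2, size=2).astype(float)
    d = float(x[0] != x[1])                     # XOR as a toy target
    W1, b1, w2, b2 = bp_step(x, d, W1, b1, w2, b2)
```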

