
1 Introduction to Neural Networks. John Paxton, Montana State University, Summer 2003.

2 Chapter 6: Backpropagation. 1986: Rumelhart, Hinton, and Williams. A gradient descent method that minimizes the total squared error of the output. Applicable to multilayer, feedforward, supervised neural networks. Revitalized interest in neural networks!

3 Backpropagation Appropriate for any domain where inputs must be mapped onto outputs. One hidden layer is sufficient to represent any continuous mapping to arbitrary accuracy! Memorization versus generalization tradeoff.

4 Architecture input layer, hidden layer, output layer. [Architecture diagram: input units x1 .. xn plus a bias unit, hidden units z1 .. zp plus a bias unit, output units y1 .. ym; weights v_ij connect input unit i to hidden unit j, weights w_jk connect hidden unit j to output unit k.]

5 General Process Feedforward the input signals. Backpropagate the error. Adjust the weights.

6 Activation Function Characteristics Continuous. Differentiable. Monotonically nondecreasing. Easy to compute. Saturates (reaches limits).

7 Activation Functions Binary sigmoid: f(x) = 1 / (1 + e^(-x)), f'(x) = f(x) [1 - f(x)]. Bipolar sigmoid: f(x) = -1 + 2 / (1 + e^(-x)), f'(x) = 0.5 [1 + f(x)] [1 - f(x)].
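
A minimal Python sketch of these two activation functions and their derivatives (NumPy assumed; the function names are illustrative, not from the lecture):

import numpy as np

def binary_sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))          # f(x) = 1 / (1 + e^(-x))

def binary_sigmoid_prime(x):
    f = binary_sigmoid(x)
    return f * (1.0 - f)                     # f'(x) = f(x) [1 - f(x)]

def bipolar_sigmoid(x):
    return -1.0 + 2.0 / (1.0 + np.exp(-x))   # f(x) = -1 + 2 / (1 + e^(-x))

def bipolar_sigmoid_prime(x):
    f = bipolar_sigmoid(x)
    return 0.5 * (1.0 + f) * (1.0 - f)       # f'(x) = 0.5 [1 + f(x)] [1 - f(x)]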

8 Training Algorithm 1. Initialize weights to small random values, for example in [-0.5, 0.5]. 2. While the stopping condition is false, do steps 3-8. 3. For each training pair, do steps 4-8.

9 Training Algorithm 4. z_in.j = Σ_i (x_i * v_ij); z_j = f(z_in.j). 5. y_in.j = Σ_k (z_k * w_kj); y_j = f(y_in.j). 6. error(y_j) = (t_j - y_j) * f'(y_in.j), where t_j is the target value. 7. error(z_k) = [Σ_j error(y_j) * w_kj] * f'(z_in.k).

10 Training Algorithm 8. w_kj(new) = w_kj(old) + α * error(y_j) * z_k; v_ik(new) = v_ik(old) + α * error(z_k) * x_i. α is the learning rate. An epoch is one cycle through the training vectors.
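
A compact NumPy sketch of steps 1-8 for one hidden layer with the bipolar sigmoid; the function name, the fixed epoch count used as the stopping condition, and the bias-as-extra-unit bookkeeping are illustrative choices, not taken from the slides:

import numpy as np

def f(x):                                     # bipolar sigmoid
    return -1.0 + 2.0 / (1.0 + np.exp(-x))

def f_prime(x):
    fx = f(x)
    return 0.5 * (1.0 + fx) * (1.0 - fx)

def train_backprop(samples, targets, n_hidden, alpha=1.0, epochs=1000):
    n_in, n_out = samples.shape[1], targets.shape[1]
    v = np.random.uniform(-0.5, 0.5, (n_in + 1, n_hidden))    # step 1 (last row = bias weights)
    w = np.random.uniform(-0.5, 0.5, (n_hidden + 1, n_out))
    for _ in range(epochs):                                   # step 2 (fixed-epoch stopping rule)
        for x, t in zip(samples, targets):                    # step 3
            x = np.append(x, 1.0)                             # bias unit on the input layer
            z_in = x @ v                                      # step 4
            z = np.append(f(z_in), 1.0)                       # bias unit on the hidden layer
            y_in = z @ w                                      # step 5
            y = f(y_in)
            err_y = (t - y) * f_prime(y_in)                   # step 6
            err_z = (w[:-1] @ err_y) * f_prime(z_in)          # step 7 (bias row excluded)
            w += alpha * np.outer(z, err_y)                   # step 8
            v += alpha * np.outer(x, err_z)
    return v, w

# Example use on XOR with bipolar data (see slide 15):
X = np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]], dtype=float)
T = np.array([[-1], [1], [1], [-1]], dtype=float)
v, w = train_backprop(X, T, n_hidden=4)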

11 Choices Initial Weights –random in [-0.5, 0.5]; don't want the derivative to be 0 –Nguyen-Widrow: β = 0.7 * p^(1/n), where n = number of input units and p = number of hidden units; then v_ij = β * v_ij(random) / || v_j(random) ||
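
A short sketch of Nguyen-Widrow initialization for the input-to-hidden weights (NumPy assumed; drawing the hidden-unit bias from [-β, β] is the usual convention, not stated on the slide):

import numpy as np

def nguyen_widrow(n_inputs, n_hidden):
    beta = 0.7 * n_hidden ** (1.0 / n_inputs)            # β = 0.7 * p^(1/n)
    v = np.random.uniform(-0.5, 0.5, (n_inputs, n_hidden))
    norms = np.linalg.norm(v, axis=0)                    # ||v_j|| for each hidden unit j
    v = beta * v / norms                                 # v_ij = β * v_ij(random) / ||v_j(random)||
    bias = np.random.uniform(-beta, beta, n_hidden)      # bias weights (common convention)
    return v, bias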

12 Choices Stopping Condition (avoid overtraining!) –Set aside some of the training pairs as a validation set. –Stop training when the error on the validation set stops decreasing.
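
One way to code that stopping rule, sketched below; train_one_epoch and validation_error are hypothetical helpers, and the patience counter (allowing a few non-improving epochs before stopping) is an assumption beyond the slide:

best_error = float("inf")
patience, bad_epochs = 5, 0
while bad_epochs < patience:
    train_one_epoch()                      # one pass through the training pairs (hypothetical helper)
    err = validation_error()               # error on the held-out validation set (hypothetical helper)
    if err < best_error:
        best_error, bad_epochs = err, 0    # validation error still decreasing: keep training
    else:
        bad_epochs += 1                    # stop once it has not improved for `patience` epochs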

13 Choices Number of Training Pairs –a rule of thumb: (total number of weights) / (desired average error on the test set) –train until the average error on the training pairs is half of the desired average error above
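
A worked instance of that rule of thumb (the numbers here are made up for illustration):

weights = 50                             # e.g. total number of weights in the network
test_error = 0.1                         # desired average error on the test set
pairs_needed = weights / test_error      # about 500 training pairs
train_to_error = test_error / 2          # train until average training error is about 0.05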

14 Choices Data Representation –Bipolar is better than binary because weights from 0-valued units are never updated. –Discrete values: red, green, blue? –Continuous values: [15.0 .. 35.0]? Number of Hidden Layers –1 is sufficient –Sometimes multiple hidden layers can speed up learning

15 Example XOR. Bipolar data representation. Bipolar sigmoid activation function. α = 1. 3 input units, 5 hidden units, 1 output unit (bias units included). Initial weights are all 0. Training example: (1, -1). Target: 1.

16 Example 4. z_1 = f(1*0 + 1*0 + -1*0) = f(0) = 0 (the bipolar sigmoid gives f(0) = 0); z_2 = z_3 = z_4 = 0. 5. y_1 = f(1*0 + 0*0 + 0*0 + 0*0 + 0*0) = f(0) = 0. 6. error(y_1) = (1 - 0) * [0.5 * (1 + 0) * (1 - 0)] = 0.5. 7. error(z_1) = 0 * f'(z_in.1) = 0 = error(z_2) = error(z_3) = error(z_4).

17 Example 8. w_01(new) = w_01(old) + α * error(y_1) * z_0 = 0 + 1 * 0.5 * 1 = 0.5. v_21(new) = v_21(old) + α * error(z_1) * x_2 = 0 + 1 * 0 * -1 = 0.
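
The same first pass can be checked numerically; this sketch assumes NumPy, the bipolar sigmoid from slide 7, and puts the bias unit first so that w[0] plays the role of w_01:

import numpy as np

f = lambda x: -1.0 + 2.0 / (1.0 + np.exp(-x))           # bipolar sigmoid
f_prime = lambda x: 0.5 * (1.0 + f(x)) * (1.0 - f(x))

x = np.array([1.0, 1.0, -1.0])        # bias x0 = 1, then the training example (1, -1)
v = np.zeros((3, 4))                  # input -> hidden weights, all 0
w = np.zeros(5)                       # bias + four hidden units -> output, all 0
alpha = 1.0

z_in = x @ v                          # step 4: all 0
z = np.concatenate(([1.0], f(z_in)))  # z0 = 1 (bias); f(0) = 0 for the bipolar sigmoid
y_in = z @ w                          # step 5: 0
y = f(y_in)                           # y1 = 0
err_y = (1.0 - y) * f_prime(y_in)     # step 6: 1 * 0.5 = 0.5
err_z = w[1:] * err_y * f_prime(z_in) # step 7: all 0 (w is still 0)
w += alpha * err_y * z                # step 8: w_01 (the bias weight) becomes 0.5
v += alpha * np.outer(x, err_z)       # v is unchanged, e.g. v_21 stays 0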

18 Exercise Draw the updated neural network. Present the input (1, -1) to the updated network. How is it classified now? If learning were to occur, how would the network weights change this time?

19 XOR Experiments Binary Activation/Binary Representation: 3000 epochs. Bipolar Activation/Bipolar Representation: 400 epochs. Bipolar Activation/Modified Bipolar Representation [-0.8.. 0.8]: 265 epochs. Above experiment with Nguyen-Widrow weight initialization: 125 epochs.

20 Variations Momentum: Δw_kj(t+1) = α * error(y_j) * z_k + μ * Δw_kj(t); Δv_ik(t+1) is similar. μ is in [0.0, 1.0]. With momentum, the previous experiment takes 38 epochs.
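
A small sketch of that momentum update (NumPy assumed; the default values for alpha and mu are illustrative, not from the slides):

import numpy as np

def momentum_step(w, z, err_y, delta_w_prev, alpha=0.2, mu=0.9):
    # Δw(t+1) = α * error(y) * z + μ * Δw(t)
    delta_w = alpha * np.outer(z, err_y) + mu * delta_w_prev
    return w + delta_w, delta_w        # return the new weights and Δw for the next step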

21 Variations Batch update the weights to smooth the changes. Adapt the learning rate. For example, in the delta-bar-delta procedure each weight has its own learning rate that varies over time. –2 consecutive weight increases or decreases will increase the learning rate.
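
A rough per-weight sketch of the delta-bar-delta idea described above; the constants kappa, gamma, and theta, and the exact update form, are standard choices from the literature rather than details given on the slide:

import numpy as np

def delta_bar_delta_step(w, grad, lr, delta_bar, kappa=0.01, gamma=0.1, theta=0.7):
    agree = grad * delta_bar > 0                                  # gradient keeps the same sign as its history
    lr = np.where(agree, lr + kappa, lr)                          # consecutive agreement: increase that weight's rate
    lr = np.where(grad * delta_bar < 0, lr * (1.0 - gamma), lr)   # sign flip: decrease it
    delta_bar = (1.0 - theta) * grad + theta * delta_bar          # smoothed gradient history
    return w - lr * grad, lr, delta_bar                           # per-weight gradient step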

22 Variations Alternate Activation Functions Strictly Local Backpropagation –makes the algorithm more biologically plausible by making all computations local –cortical units sum their inputs –synaptic units apply an activation function –thalamic units compute errors –equivalent to standard backpropagation

23 Variations Strictly Local Backpropagation input cortical layer -> input synaptic layer -> hidden cortical layer -> hidden synaptic layer -> output cortical layer -> output synaptic layer -> output thalamic layer. Number of Hidden Layers

24 Hecht-Nielsen Theorem Given any continuous function f: I^n -> R^m, where I is [0, 1], f can be represented exactly by a feedforward network having n input units, 2n + 1 hidden units, and m output units.

