Published by Ami Anthony. Modified over 8 years ago.
CSE343/543 Machine Learning, Mayank Vatsa. Lecture slides are prepared using several teaching resources, and no authorship is claimed for any slides.
The Perceptron
Binary classifier functions
Threshold activation function
What is there to learn?
The Perceptron Training Rule
The Perceptron: Threshold Activation Function
Binary classifier functions
Threshold activation function
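As a sketch of such a unit (the function name and the example weights are illustrative, not from the slides):

```python
# A minimal perceptron binary classifier with a threshold activation.

def predict(weights, bias, x):
    """Return +1 if w . x + b crosses the threshold (0), else -1."""
    s = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if s > 0 else -1

# Example: weights implementing logical OR on inputs in {0, 1}
print(predict([1.0, 1.0], -0.5, [0, 1]))  # -> 1
print(predict([1.0, 1.0], -0.5, [0, 0]))  # -> -1
```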
The Perceptron Training Rule: One way to learn an acceptable weight vector is to begin with random weights, then iteratively apply the perceptron to each training example, modifying the weights whenever it misclassifies an example. This process is repeated, iterating through the training examples as many times as needed, until the perceptron classifies all training examples correctly. Weights are modified at each step according to the perceptron training rule, which revises the weight associated with each input.
The Perceptron Training Rule: Assuming the problem is linearly separable, there is a learning rule that converges in finite time. Motivation: a new (unseen) input pattern that is similar to an old (seen) input pattern is likely to be classified correctly.
The Perceptron Training Rule: Basic idea: go over all existing data patterns whose labeling is known, and check their classification with the current weight vector. If correct, continue. If not, add to the weights a quantity proportional to the product of the input pattern with the desired output z (+1 or -1).
Weight Update Rule: w_i ← w_i + Δw_i, with Δw_i = η (t - o) x_i, where t is the target output, o is the perceptron output, and η is a small positive constant called the learning rate.
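A minimal sketch of this rule in code (the function name, learning rate, and epoch limit are illustrative choices, not from the slides):

```python
# Perceptron training rule: w_i <- w_i + eta * (t - o) * x_i,
# iterating over the training examples until all are classified correctly.

def train_perceptron(examples, n_inputs, eta=0.1, max_epochs=100):
    w = [0.0] * n_inputs
    b = 0.0
    for _ in range(max_epochs):
        errors = 0
        for x, t in examples:  # t in {+1, -1}
            s = sum(wi * xi for wi, xi in zip(w, x)) + b
            o = 1 if s > 0 else -1
            if o != t:  # update only on misclassification
                for i in range(n_inputs):
                    w[i] += eta * (t - o) * x[i]
                b += eta * (t - o)
                errors += 1
        if errors == 0:  # converged: all examples classified correctly
            break
    return w, b

# AND function, which is linearly separable
data = [([0, 0], -1), ([0, 1], -1), ([1, 0], -1), ([1, 1], 1)]
w, b = train_perceptron(data, 2)
```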
Gradient Descent and Delta Rule: The delta training rule is best understood by considering the task of training an unthresholded perceptron, that is, a linear unit for which the output o is given by o(x) = w · x.
In order to derive a weight learning rule for linear units, let us begin by specifying a measure of the training error of a hypothesis (weight vector) relative to the training examples: E(w) = ½ Σ_{d∈D} (t_d - o_d)², where D is the set of training examples, t_d is the target output for example d, and o_d is the output of the linear unit for d.
Derivation of GDR: The vector derivative ∇E(w) = [∂E/∂w_0, ∂E/∂w_1, …, ∂E/∂w_n] is called the gradient of E with respect to w. The gradient specifies the direction that produces the steepest increase in E; the negative of this vector therefore gives the direction of steepest decrease. The training rule for gradient descent is w ← w + Δw, where Δw = -η ∇E(w).
Derivation of GDR (continued): The negative sign is present because we want to move the weight vector in the direction that decreases E. This training rule can also be written in component form, Δw_i = -η ∂E/∂w_i, which makes it clear that steepest descent is achieved by altering each component w_i of w in proportion to ∂E/∂w_i.
Derivation of GDR (continued): The vector of derivatives that forms the gradient can be obtained by differentiating E: ∂E/∂w_i = Σ_{d∈D} (t_d - o_d)(-x_id), where x_id denotes the i-th input component of example d. The weight update rule for standard gradient descent can therefore be summarized as Δw_i = η Σ_{d∈D} (t_d - o_d) x_id.
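The batch update just derived can be sketched as follows (the function name, hyperparameters, and toy dataset are illustrative):

```python
# Batch gradient descent for a linear unit o = w . x:
# delta_w_i = eta * sum_d (t_d - o_d) * x_id   (the negated gradient of E)

def gradient_descent(X, T, eta=0.05, epochs=500):
    n = len(X[0])
    w = [0.0] * n
    for _ in range(epochs):
        delta = [0.0] * n
        for x, t in zip(X, T):
            o = sum(wi * xi for wi, xi in zip(w, x))  # linear (unthresholded) output
            for i in range(n):
                delta[i] += eta * (t - o) * x[i]      # accumulate over all examples
        w = [wi + di for wi, di in zip(w, delta)]     # one batch update per epoch
    return w

# Fit t = 2*x1 + 1 exactly, using an augmented input [x1, 1] for the bias
X = [[0.0, 1.0], [1.0, 1.0], [2.0, 1.0], [3.0, 1.0]]
T = [1.0, 3.0, 5.0, 7.0]
w = gradient_descent(X, T)  # w converges to approximately [2.0, 1.0]
```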
AB problem
XOR problem
[Worked example slides, figures omitted: single two-input perceptrons implementing logic functions such as AND.]
Three-layer networks: inputs x1, …, xn, hidden layers, and the output.
Feed-forward layered network: input layer → 1st hidden layer → 2nd hidden layer → output layer.
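A forward pass through such a layered network can be sketched as follows (layer sizes, the sigmoid activation, and the example weights are illustrative assumptions):

```python
import math

# Forward pass through a feed-forward layered network:
# each layer computes sigmoid(W a + b) from the previous layer's output a.

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

def forward(layers, x):
    """layers: list of (W, b) pairs, one per layer, in input -> output order."""
    a = x
    for W, b in layers:
        a = [sigmoid(sum(wij * aj for wij, aj in zip(row, a)) + bi)
             for row, bi in zip(W, b)]
    return a

# 2 inputs -> 2 hidden units -> 1 output (weights chosen arbitrarily)
net = [([[1.0, -1.0], [0.5, 0.5]], [0.0, -0.2]),
       ([[1.0, 1.0]], [0.1])]
y = forward(net, [1.0, 0.0])  # a single sigmoid output in (0, 1)
```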
Different Non-Linearly Separable Problems

Structure     Types of Decision Regions
Single-Layer  Half plane bounded by hyperplane
Two-Layer     Convex open or closed regions
Three-Layer   Arbitrary (complexity limited by number of nodes)

(The slide also illustrated, for each structure, the exclusive-OR problem, class separation, and the most general region shapes.)
In the perceptron / single-layer nets, we used gradient descent on the error function to find the correct weights: Δw_ji = η (t_j - y_j) x_i. We see that errors/updates are local: the change in the weight from node i to output j (w_ji) is controlled by the input x_i that travels along the connection and the error signal (t_j - y_j) from output j.
But with more layers, how are the weights for the first two layers found when the error is computed for layer 3 only? There is no direct error signal for the first layers!
Objective of Multilayer NNet: given a training set {(x_k, d_k)}, find the weights w1, …, wm so that the network output satisfies y(x_k) = d_k for all k.
Learn the Optimal Weight Vector: find the weights w1, …, wm that achieve the goal y(x_k) = d_k for all k.
First Complex NNet Algorithm: the multilayer feed-forward NNet.
Training: Backprop algorithm. Backprop searches for weight values that minimize the total error of the network over the set of training examples, by repeating the following two passes:
Forward pass: compute the outputs of all units in the network, and the error of the output layer.
Backward pass: the network error is used for updating the weights (the credit assignment problem). Starting at the output layer, the error is propagated backwards through the network, layer by layer, by recursively computing the local gradient of each neuron.
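The two passes can be sketched for a one-hidden-layer sigmoid network (the network shape, seed, learning rate, epoch count, and the XOR data are illustrative assumptions, not from the slides):

```python
import math, random

# Backprop for a 1-hidden-layer sigmoid network: a forward pass computing
# activations, then a backward pass propagating the output error layer by layer.

random.seed(0)

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

n_in, n_hid = 2, 2
W1 = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hid)]
b1 = [0.0] * n_hid
W2 = [random.uniform(-1, 1) for _ in range(n_hid)]
b2 = 0.0

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]  # XOR

def mse():
    err = 0.0
    for x, t in data:
        h = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
             for row, b in zip(W1, b1)]
        y = sigmoid(sum(w * hi for w, hi in zip(W2, h)) + b2)
        err += (t - y) ** 2
    return err / len(data)

err_before = mse()
eta = 0.5
for _ in range(2000):
    for x, t in data:
        # forward pass
        h = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
             for row, b in zip(W1, b1)]
        y = sigmoid(sum(w * hi for w, hi in zip(W2, h)) + b2)
        # backward pass: delta = error * f'(e), with f'(e) = y(1 - y) for sigmoid
        d_out = (t - y) * y * (1 - y)
        d_hid = [d_out * W2[j] * h[j] * (1 - h[j]) for j in range(n_hid)]
        # weight updates, proportional to each connection's input signal
        for j in range(n_hid):
            W2[j] += eta * d_out * h[j]
        b2 += eta * d_out
        for j in range(n_hid):
            for i in range(n_in):
                W1[j][i] += eta * d_hid[j] * x[i]
            b1[j] += eta * d_hid[j]
err_after = mse()
```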
Back-propagation training algorithm, illustrated: backprop adjusts the weights of the NN in order to minimize the network's total mean squared error. Forward step: network activation and error computation. Backward step: error propagation.
Learning Algorithm: Backpropagation. The slide figures illustrate how the signal propagates through the network: symbols w_(xm)n represent weights of connections between network input x_m and neuron n in the input layer, and y_n represents the output signal of neuron n.
Learning Algorithm: Backpropagation
Propagation of signals through the hidden layer. Symbols w_mn represent weights of connections between the output of neuron m and the input of neuron n in the next layer.
Learning Algorithm: Backpropagation
Propagation of signals through the output layer.
Learning Algorithm: Backpropagation. In the next algorithm step, the output signal of the network, y, is compared with the desired output value (the target) found in the training data set. The difference is called the error signal δ of the output-layer neuron.
Learning Algorithm: Backpropagation. The idea is to propagate the error signal δ (computed in a single teaching step) back to all neurons whose output signals were inputs for the neuron in question.
Learning Algorithm: Backpropagation. The weight coefficients w_mn used to propagate errors back are equal to those used when computing the output value; only the direction of data flow is changed (signals are propagated from outputs to inputs, one layer after another). This technique is used for all network layers. If the propagated errors come from several neurons, they are added.
Learning Algorithm: Backpropagation. When the error signal for each neuron has been computed, the weight coefficients of each neuron's input connections may be modified. In the update formulas, df(e)/de represents the derivative of the activation function of the neuron whose weights are modified; each weight changes in proportion to its input signal and the neuron's error signal, w_mn ← w_mn + η δ_n (df(e)/de) x_m.
Single-Hidden-Layer NNet: inputs x = (x1, …, xn) feed m hidden units, whose outputs are combined with weights w1, …, wm to produce the output y.
Radial Basis Function Networks: the same single-hidden-layer topology, but with radial basis functions as the hidden units.
Non-Linear Models: the output is a weighted sum of non-linear basis functions; the weights are adjusted by the learning process.
Typical Radial Functions (as functions of the distance r from the center):
Gaussian: φ(r) = exp(-r² / 2σ²)
Hardy multiquadratic: φ(r) = sqrt(r² + c²)
Inverse multiquadratic: φ(r) = 1 / sqrt(r² + c²)
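These radial functions can be written directly in code (the function names and parameter defaults are illustrative):

```python
import math

# The typical radial functions, as functions of r = ||x - center||.
# sigma and c are width/shape parameters.

def gaussian(r, sigma=1.0):
    return math.exp(-r * r / (2.0 * sigma * sigma))

def hardy_multiquadratic(r, c=1.0):
    return math.sqrt(r * r + c * c)

def inverse_multiquadratic(r, c=1.0):
    return 1.0 / math.sqrt(r * r + c * c)

print(gaussian(0.0))  # -> 1.0 (peak at the center)
```

Note that the Gaussian and inverse multiquadratic decay with distance, while the Hardy multiquadratic grows with distance.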
Gaussian Basis Function (σ = 0.5, 1.0, 1.5)
Most General RBF: a weighted sum of radial basis functions, each with its own center.
The Topology of RBF NNet: feature vectors x1, …, xn enter the input units; hidden units correspond to subclasses; output units y1, …, ym correspond to classes.
Radial Basis Function Networks: given a training set {(x_k, d_k)}, the goal is y(x_k) = d_k for all k.
Learn the Optimal Weight Vector: find the weights w1, …, wm that achieve the goal y(x_k) = d_k for all k.
Regularization: minimize the training-set error plus a penalty on the weights, Σ_k (d_k - y(x_k))² + λ‖w‖². If regularization is not needed, set λ = 0.
Learn the Optimal Weight Vector: minimize the (regularized) sum-of-squares error over the training set.
Learning Kernel Parameters: a network with inputs x1, …, xn, kernels φ1, …, φl, weights w_11, …, w_ml, and outputs y1, …, ym, trained on a set {(x_k, d_k)}.
What to Learn?
Weights w_ij's
Centers μ_j's of the φ_j's
Widths σ_j's of the φ_j's
Number of φ_j's
Two-Stage Training. Step 1 determines the centers μ_j's, the widths σ_j's, and the number of the φ_j's. Step 2 determines the weights w_ij's.
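The two stages can be sketched as follows (the toy target function, evenly spaced centers, shared width, and LMS gradient fitting are all illustrative assumptions; solving the least-squares system directly is the more common choice for step 2):

```python
import math

# Two-stage RBF training sketch on a 1-D toy problem.
# Step 1: fix the centers, width, and number of Gaussian basis functions.
# Step 2: fit the linear output weights by LMS gradient steps on squared error.

def gaussian(r, sigma):
    return math.exp(-r * r / (2.0 * sigma * sigma))

X = [x / 10.0 for x in range(11)]        # inputs in [0, 1]
D = [math.sin(3.0 * x) for x in X]       # targets

# Step 1: evenly spaced centers, a shared width, 4 basis functions
centers = [0.0, 1.0 / 3.0, 2.0 / 3.0, 1.0]
sigma = 0.3

def features(x):
    return [gaussian(abs(x - c), sigma) for c in centers] + [1.0]  # + bias

# Step 2: LMS updates of the output weights only
w = [0.0] * (len(centers) + 1)
eta = 0.1
for _ in range(3000):
    for x, d in zip(X, D):
        phi = features(x)
        y = sum(wi * pi for wi, pi in zip(w, phi))
        for i in range(len(w)):
            w[i] += eta * (d - y) * phi[i]

mse = sum((d - sum(wi * pi for wi, pi in zip(w, features(x)))) ** 2
          for x, d in zip(X, D)) / len(X)
```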
Learning Rule: the backpropagation learning rule also applies to RBF networks.
Three-layer RBF neural network
Deep NNet
Deep Supervised Learning
Convolutional NNet: an example of a deep NNet.
Next Class – Deep Learning (CNN)