Neural Networks Lecture 14: Radial Basis Functions (November 2, 2010)

Cascade Correlation

Weights to each new hidden node are trained to maximize the covariance of the node's output with the current network error:

S = \sum_k \left| \sum_p (V_p - \bar{V}) (E_{p,k} - \bar{E}_k) \right|

where
w: vector of weights to the new node
V_p: output of the new node for the p-th input sample
E_{p,k}: error of the k-th output node for the p-th input sample before the new node is added
\bar{V}, \bar{E}_k: averages over the training set
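A minimal NumPy sketch of this score for one candidate node (array names and shapes are illustrative, not from the slides):

```python
import numpy as np

def candidate_covariance(V, E):
    """Cascade-correlation score S for one candidate hidden node.

    V: shape (P,)   - candidate output V_p for each of the P training patterns
    E: shape (P, K) - residual error E_{p,k} of each output node before the node is added
    Returns S = sum_k | sum_p (V_p - V_bar) * (E_{p,k} - E_bar_k) |.
    """
    Vc = V - V.mean()              # V_p - V_bar
    Ec = E - E.mean(axis=0)        # E_{p,k} - E_bar_k
    return np.abs(Vc @ Ec).sum()   # |covariance with output k|, summed over k
```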
Cascade Correlation

Since we want to maximize S (rather than minimize some error), we use gradient ascent:

\Delta w_i = \eta \frac{\partial S}{\partial w_i} = \eta \sum_{p,k} \sigma_k (E_{p,k} - \bar{E}_k) \, f'_p \, I_{i,p}

where
I_{i,p}: i-th input for the p-th pattern
\sigma_k: sign of the correlation between the node's output and the k-th network output
\eta: learning rate
f'_p: derivative of the node's activation function with respect to its net input, evaluated at the p-th pattern
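A sketch of one such ascent step, assuming a tanh activation for the candidate node (the activation function and all names are assumptions, not given on the slide):

```python
import numpy as np

def candidate_ascent_step(w, X, E, eta=0.1):
    """One gradient-ascent step on the candidate node's weights to increase S.

    w:   shape (N,)   - candidate weights (from inputs and existing hidden nodes)
    X:   shape (P, N) - inputs I_{i,p} feeding the candidate, one row per pattern
    E:   shape (P, K) - residual errors E_{p,k} before the node is added
    eta: learning rate
    """
    V = np.tanh(X @ w)                    # candidate output V_p (tanh assumed)
    fprime = 1.0 - V ** 2                 # f'_p: derivative of tanh at the net input
    Ec = E - E.mean(axis=0)               # E_{p,k} - E_bar_k
    sigma = np.sign((V - V.mean()) @ Ec)  # sigma_k: sign of correlation with output k
    # dS/dw_i = sum_{p,k} sigma_k (E_{p,k} - E_bar_k) f'_p I_{i,p}
    grad = X.T @ (fprime * (Ec @ sigma))
    return w + eta * grad                 # ascent: move uphill on S
```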
Cascade Correlation

If we can find weights so that the new node's output perfectly covaries with the error in each output node, we can set the new output-node weights and offsets so that the new error is zero. More realistically, there will be no perfect covariance, which means that we set each output-node weight so that the remaining error is minimized. To do this, we can use gradient descent or linear regression for each individual output-node weight. The next added hidden node will further reduce the remaining network error, and so on, until we reach a desired error threshold.
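For the linear-regression option, a minimal least-squares sketch for the output layer (names are illustrative):

```python
import numpy as np

def fit_output_weights(H, T):
    """Least-squares fit of the output-node weights and offsets.

    H: shape (P, M) - activations feeding the output layer (inputs and all hidden nodes)
    T: shape (P, K) - target values for the K output nodes
    Returns W of shape (M + 1, K); the last row holds the offsets.
    """
    Hb = np.hstack([H, np.ones((H.shape[0], 1))])  # append a bias column for the offsets
    W, *_ = np.linalg.lstsq(Hb, T, rcond=None)     # minimizes ||Hb W - T||^2 per output node
    return W
```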
Cascade Correlation

This learning algorithm is much faster than backpropagation learning, because only one neuron is trained at a time. On the other hand, its inability to retrain neurons may prevent the cascade-correlation network from finding optimal weight patterns for encoding the given function.
Input Space Clusters

One of our basic assumptions about functions to be learned by ANNs is that inputs belonging to the same class (or requiring similar outputs) are located close to each other in the input space. Often, input vectors from the same class form clusters, i.e., local groups of data points. For such data distributions, the linearly dividing functions used by perceptrons, Adalines, or BPNs are not optimal.
Input Space Clusters

Example:

[Figure: a two-dimensional input space (x1, x2) with a cluster of Class 1 samples surrounded by Class -1 samples; the cluster can be enclosed either by four separating lines (Line 1 through Line 4) or by a single circle (Circle 1).]

A network with linearly separating functions would require four neurons plus one higher-level neuron. On the other hand, a single neuron with a local, circular "receptive field" would suffice.
Radial Basis Functions (RBFs)

To achieve such local "receptive fields," we can use radial basis functions, i.e., functions whose output depends only on the Euclidean distance between the input vector and another ("weight") vector. A typical choice is a Gaussian function:

\varphi(d) = e^{-d^2 / c^2}, \quad d = \|\mathbf{x} - \mathbf{w}\|

where c determines the "width" of the Gaussian. However, any radially symmetric, non-increasing function could be used.
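A minimal sketch of such a unit (the exact placement of c in the exponent varies between texts; this follows the form above):

```python
import numpy as np

def gaussian_rbf(x, mu, c):
    """Gaussian radial basis function.

    x:  input vector
    mu: the unit's "weight" (center) vector
    c:  width of the Gaussian
    The output depends only on the Euclidean distance ||x - mu||.
    """
    d2 = np.sum((np.asarray(x, float) - np.asarray(mu, float)) ** 2)
    return np.exp(-d2 / c ** 2)
```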
Linear Interpolation: 1-Dimensional Case

For function approximation, the desired output for new (untrained) inputs could be estimated by linear interpolation. As a simple example, how do we determine the desired output of a one-dimensional function at a new input x_0 that is located between known data points x_1 and x_2?

f(x_0) = f(x_1) + \frac{x_0 - x_1}{x_2 - x_1} \left( f(x_2) - f(x_1) \right)

which simplifies to:

f(x_0) = \frac{D_2 f(x_1) + D_1 f(x_2)}{D_1 + D_2}

with distances D_1 and D_2 from x_0 to x_1 and x_2, respectively.
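A small sketch of the simplified form (function name is illustrative):

```python
def interpolate_1d(x0, x1, x2, f1, f2):
    """Linearly interpolate f at x0 from the known points (x1, f1) and (x2, f2)."""
    D1, D2 = abs(x0 - x1), abs(x0 - x2)     # distances from x0 to x1 and x2
    return (D2 * f1 + D1 * f2) / (D1 + D2)  # the closer point gets the larger weight

# Example: interpolate_1d(1.5, 1.0, 2.0, 4.0, 8.0) returns 6.0
```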
Linear Interpolation: Multiple Dimensions

In the multi-dimensional case, hyperplane segments connect neighboring points, so the desired output for a new input x_0 is determined by the P_0 known samples that surround it:

f(x_0) = \frac{\sum_{p=1}^{P_0} f(x_p) / D_p}{\sum_{p=1}^{P_0} 1 / D_p}

where D_p is the Euclidean distance between x_0 and x_p, and f(x_p) is the desired output value for input x_p.
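A sketch of this distance-weighted estimate in NumPy (names assumed; it presumes x_0 does not coincide with any sample, so all D_p > 0):

```python
import numpy as np

def interpolate_nd(x0, X, y):
    """Estimate f(x0) from the P0 known samples surrounding it.

    x0: query point, shape (d,)
    X:  shape (P0, d) - surrounding known inputs x_p
    y:  shape (P0,)   - desired outputs f(x_p)
    """
    D = np.linalg.norm(X - x0, axis=1)      # Euclidean distances D_p
    return np.sum(y / D) / np.sum(1.0 / D)  # sum_p f(x_p)/D_p divided by sum_p 1/D_p
```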
Linear Interpolation: Multiple Dimensions

Example for f: R^2 → R (with desired output values indicated):

[Figure: samples x_1, ..., x_8 in the plane with desired outputs f(x_1) = 9, f(x_2) = 5, f(x_3) = 4, f(x_4) = -6, f(x_5) = 8, f(x_6) = 7, f(x_7) = 6, f(x_8) = -9; the query point x_0 has its four nearest neighbors x_2, x_3, x_6, and x_7 at distances D_2, D_3, D_6, and D_7.]

Using its four nearest neighbors, the desired output for x_0 is

f(x_0) = \frac{5/D_2 + 4/D_3 + 7/D_6 + 6/D_7}{1/D_2 + 1/D_3 + 1/D_6 + 1/D_7}
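A quick numeric check of that expression with hypothetical distances (the figure does not give D_2, D_3, D_6, D_7; the values below are assumed purely for illustration):

```python
# Assumed distances: D2 = 1, D3 = 2, D6 = 2, D7 = 1
f_x0 = (5/1 + 4/2 + 7/2 + 6/1) / (1/1 + 1/2 + 1/2 + 1/1)
print(f_x0)  # 16.5 / 3.0 = 5.5
```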