Neural Networks Lecture 16: Counterpropagation (November 9, 2010). Presentation transcript:

Slide 1: Unsupervised Learning
So far, we have only looked at supervised learning, in which an external teacher improves network performance by comparing desired and actual outputs and modifying the synaptic weights accordingly. However, most of the learning that takes place in our brains is completely unsupervised. This type of learning is aimed at achieving the most efficient representation of the input space, regardless of any output space. Unsupervised learning can also be useful in artificial neural networks.

Slide 2: Unsupervised Learning
Applications of unsupervised learning include:
- Clustering
- Vector quantization
- Data compression
- Feature extraction
Unsupervised learning methods can also be combined with supervised ones to enable learning through input-output pairs, as in the backpropagation network (BPN). One such hybrid approach is the counterpropagation network.

Slide 3: Unsupervised/Supervised Learning: The Counterpropagation Network
The counterpropagation network (CPN) is a fast-learning combination of unsupervised and supervised learning. Although this network uses linear neurons, it can learn nonlinear functions by means of a hidden layer of competitive units. Moreover, the network is able to learn a function and its inverse at the same time. However, to simplify things, we will only consider the feedforward mechanism of the CPN.

Slide 4: Distance/Similarity Functions
In the hidden layer, the neuron whose weight vector is most similar to the current input vector is the "winner." There are different ways of defining such maximal similarity, for example:
(1) Maximal cosine similarity (identical to the net input for normalized vectors): net_j = Σ_i x_i w_ji = x · w_j
(2) Minimal Euclidean distance: d_j² = Σ_i (x_i − w_ji)² (no square root is necessary for determining the winner)
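As a minimal sketch of these two winner-selection rules, the snippet below assumes a weight matrix `W` with one row per hidden unit and an input vector `x` (already normalized in the cosine case); the function names are illustrative, not taken from the lecture.

```python
import numpy as np

def winner_cosine(x, W):
    # For unit-length x and rows of W, the net input x . w_j equals the cosine
    # of the angle between them, so the largest net input marks the winner.
    net = W @ x
    return int(np.argmax(net))

def winner_euclidean(x, W):
    # The squared Euclidean distance suffices for finding the winner,
    # since taking the square root does not change the argmin.
    d2 = np.sum((W - x) ** 2, axis=1)
    return int(np.argmin(d2))
```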

Slide 5: The Counterpropagation Network
A simple CPN with two input neurons, three hidden neurons, and two output neurons can be described as follows:
[Figure: input layer X1, X2; hidden layer H1, H2, H3; output layer Y1, Y2.]

Slide 6: The Counterpropagation Network
The CPN learning process (general form for n input units and m output units):
1. Randomly select a vector pair (x, y) from the training set.
2. If you use the cosine similarity function, normalize (shrink/expand to "length" 1) the input vector x by dividing every component of x by the magnitude ||x||, where ||x|| = sqrt(x_1² + x_2² + ... + x_n²).

Slide 7: The Counterpropagation Network
3. Initialize the input neurons with the resulting vector and compute the activation of the hidden-layer units according to the chosen similarity measure.
4. In the hidden (competitive) layer, determine the unit W with the largest activation (the winner).
5. Adjust the connection weights between W and all N input-layer units according to the formula: w_Wi(new) = w_Wi(old) + α (x_i − w_Wi(old)), for i = 1, ..., N.
6. Repeat steps 1 to 5 until all training patterns have been processed once.

Slide 8: The Counterpropagation Network
7. Repeat step 6 until each input pattern is consistently associated with the same competitive unit.
8. Select the first vector pair in the training set (the current pattern).
9. Repeat steps 2 to 4 (normalization, competition) for the current pattern.
10. Adjust the connection weights between the winning hidden-layer unit W and all M output-layer units according to the equation: v_Wj(new) = v_Wj(old) + β (y_j − v_Wj(old)), for j = 1, ..., M.

Slide 9: The Counterpropagation Network
11. Repeat steps 9 and 10 for each vector pair in the training set.
12. Repeat steps 8 through 11 for several epochs.
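The following is a minimal sketch of the two-phase procedure in slides 6 through 9, using the Euclidean-distance variant; the function name `train_cpn`, the learning rates `alpha` and `beta`, the epoch counts, and the random initialization are illustrative assumptions rather than details given in the lecture.

```python
import numpy as np

def train_cpn(X, Y, n_hidden=3, alpha=0.1, beta=0.1,
              hidden_epochs=50, output_epochs=50, seed=None):
    """Two-phase counterpropagation training (Euclidean-distance variant)."""
    rng = np.random.default_rng(seed)
    n_in, n_out = X.shape[1], Y.shape[1]
    W = rng.normal(size=(n_hidden, n_in))   # hidden-layer (competitive) weights
    V = rng.normal(size=(n_hidden, n_out))  # output-layer weights

    # Phase 1 (steps 1-7): unsupervised competition in the hidden layer.
    for _ in range(hidden_epochs):
        for x in X[rng.permutation(len(X))]:
            w = np.argmin(np.sum((W - x) ** 2, axis=1))  # winning hidden unit
            W[w] += alpha * (x - W[w])                    # move winner toward x

    # Phase 2 (steps 8-12): supervised adjustment of the output-layer weights.
    for _ in range(output_epochs):
        for x, y in zip(X, Y):
            w = np.argmin(np.sum((W - x) ** 2, axis=1))
            V[w] += beta * (y - V[w])                     # move output weights toward y

    return W, V
```

The two phases are deliberately separated: the hidden layer is allowed to settle into a stable clustering first, and only then are the output-layer weights fitted to the desired outputs.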

Slide 10: The Counterpropagation Network
Because in our example network the input is two-dimensional, each unit in the hidden layer has two weights (one for each input connection). Therefore, inputs to the network as well as the weights of hidden-layer units can be represented and visualized as two-dimensional vectors. For the current network, all weights in the hidden layer can be completely described by three 2D vectors.

Slide 11: Counterpropagation – Cosine Similarity
This diagram shows a sample state of the hidden layer and a sample input to the network:
[Figure: the three hidden-layer weight vectors and the current input vector drawn as 2D vectors.]

Slide 12: Counterpropagation – Cosine Similarity
In this example, hidden-layer neuron H2 wins and, according to the learning rule, is moved closer towards the current input vector.

Slide 13: Counterpropagation – Cosine Similarity
After doing this over many epochs and slowly reducing the adaptation step size α, each hidden-layer unit will win for a subset of inputs, and the angle of its weight vector will be at the center of gravity of the angles of these inputs.
[Figure: all input vectors in the training set, with each weight vector pointing toward the angular center of gravity of its subset.]
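As a small illustration of the slowly decreasing step size mentioned above, one common choice (an assumption here, not specified in the lecture) is a simple linear decay over the epochs:

```python
def step_size(epoch, n_epochs, alpha_start=0.5, alpha_end=0.01):
    # Linearly decay the adaptation step size from alpha_start to alpha_end.
    frac = epoch / max(1, n_epochs - 1)
    return alpha_start + frac * (alpha_end - alpha_start)
```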

Slides 14-28: Counterpropagation – Euclidean Distance
[Figure sequence: an example of competitive learning with three hidden neurons. Training vectors from three clusters (marked x, +, and o) are plotted together with the three hidden-layer weight vectors (labeled 1, 2, 3); over successive updates, possibly with a reduction of the learning rate, each weight vector gradually moves toward the center of one of the clusters.]

Slide 29: The Counterpropagation Network
After the first phase of the training, each hidden-layer neuron is associated with a subset of input vectors. The training process minimized the average angle difference or Euclidean distance between the weight vectors and their associated input vectors. In the second phase of the training, we adjust the weights in the network's output layer in such a way that, for any winning hidden-layer unit, the network's output is as close as possible to the desired output for the winning unit's associated input vectors. The idea is that when we later use the network to compute functions, the output of the winning hidden-layer unit is 1, and the output of all other hidden-layer units is 0.
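A minimal sketch of the recall (feedforward) step described here: because the winning hidden unit outputs 1 and all others output 0, the network output with linear output neurons reduces to the winner's row of output-layer weights. The function name and the Euclidean-distance winner rule are assumptions carried over from the earlier sketches.

```python
import numpy as np

def cpn_recall(x, W, V):
    # Winner-take-all: the winning hidden unit is active (1), all others are 0,
    # so the output is simply the winner's output-layer weight vector.
    w = int(np.argmin(np.sum((W - x) ** 2, axis=1)))
    return V[w]
```

For example, cpn_recall(x, W, V) with the weights returned by the training sketch above yields the output vector associated with x's winning hidden-layer unit.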

Slide 30: Counterpropagation – Cosine Similarity
Because there are two output neurons, the weights in the output layer that receive input from the same hidden-layer unit can also be described by 2D vectors. These weight vectors are the only possible output vectors of our network.
[Figure: the three output-weight vectors, labeled "network output if H1 wins," "network output if H2 wins," and "network output if H3 wins."]

Slide 31: Counterpropagation – Cosine Similarity
For each input vector, the output-layer weights that are connected to the winning hidden-layer unit are made more similar to the desired output vector:
[Figure: the winning unit's output-weight vector being moved toward the desired output vector.]

Slide 32: Counterpropagation – Cosine Similarity
The training proceeds with a decreasing step size β, and after its termination, the weight vectors are at the center of gravity of their associated output vectors:
[Figure: the outputs associated with H1, H2, and H3, with each output-weight vector at the center of gravity of its group.]

Slide 33: Counterpropagation – Euclidean Distance
At the end of the output-layer learning process, the outputs of the network are at the center of gravity of the desired outputs associated with each winner neuron.
[Figure: the three clusters (x, +, o) with the network outputs 1, 2, and 3 located at the cluster centers.]

Slide 34: The Counterpropagation Network
Notice:
- In the first training phase, if a hidden-layer unit does not win for a long period of time, its weights should be set to random values to give that unit a chance to win subsequently (see the sketch below).
- It is useful to reduce the learning rates α and β during training.
- There is no need to normalize the training output vectors.
- After the training has finished, the network maps the training inputs onto output vectors that are close to the desired ones.
- The more hidden units, the better the mapping; however, the generalization ability may decrease.
- Thanks to the competitive neurons in the hidden layer, even linear neurons can realize nonlinear mappings.
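A minimal sketch of the reset heuristic from the first bullet above, assuming a simple per-unit idle counter; the function name `reset_dead_units`, the counter array, and the threshold `max_idle` are illustrative assumptions, not part of the lecture.

```python
import numpy as np

def reset_dead_units(W, idle_epochs, max_idle=10, rng=None):
    # idle_epochs[j] counts how many epochs hidden unit j has gone without winning.
    # Units that have been idle for too long receive fresh random weights so that
    # they get a chance to win subsequently.
    rng = np.random.default_rng(rng)
    for j, idle in enumerate(idle_epochs):
        if idle >= max_idle:
            W[j] = rng.normal(size=W.shape[1])
            idle_epochs[j] = 0
    return W
```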
