Slide 1: CHAPTER 3 An Illustrative Example

Slide 2: Objectives
Use three different neural network architectures to solve a simple pattern recognition problem:
- Feedforward network: perceptron
- Competitive network: Hamming network
- Recurrent associative memory network: Hopfield network

Slide 3: Problem Statement
[Diagram: mixed apples and oranges pass sensors that measure shape, texture, and weight; a neural network reads the three sensor outputs and drives a sorter that separates the apples from the oranges.]

Slide 4: Problem Statement
Shape sensor: 1 if round, -1 if elliptical.
Texture sensor: 1 if smooth, -1 if rough.
Weight sensor: 1 if more than 1 pound, -1 if less than 1 pound.
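
Taken together, the three readings form the input vector p = [shape; texture; weight]. As a minimal sketch, the two prototype fruits of this example would be encoded as follows (the specific values are the ones conventionally used for this textbook example and should be treated as assumptions):

    % Prototype fruits encoded as sensor vectors [shape; texture; weight].
    p_orange = [1; -1; -1];   % round, rough surface, less than 1 pound
    p_apple  = [1;  1; -1];   % round, smooth surface, less than 1 pound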

Slide 5: Single-Layer Perceptron
[Diagram: input p (R x 1) is multiplied by the weight matrix W (S x R); the bias b (S x 1) is added to give n = Wp + b, which passes through the transfer function to produce the output a (S x 1).]
Symmetrical hard limit transfer function: a = -1 if n < 0, a = +1 if n >= 0. MATLAB function: hardlims.
n = Wp + b, a = hardlims(Wp + b)
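
A minimal sketch of this forward pass; the inline definition of hardlims below reproduces the toolbox function without requiring it, and the specific W, b, and p are arbitrary illustrative values:

    % Single-layer perceptron forward pass with a symmetric hard limit.
    hardlims = @(n) 2*(n >= 0) - 1;   % +1 where n >= 0, -1 elsewhere

    W = [1 -1];        % 1 x 2 weight matrix (one neuron, two inputs)
    b = 0.5;           % bias
    p = [2; 1];        % input vector

    n = W*p + b        % n = 1.5
    a = hardlims(n)    % a = +1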

Slide 6: Two-Input / Single-Neuron Perceptron
Single-neuron perceptrons can classify input vectors into two categories: if Wp >= -b, then a = +1; otherwise, a = -1. For a single neuron, W is a row vector, so Wp is the inner product of the weight vector and the input.

Slide 7: Two-Input / Single-Neuron Perceptron
The decision boundary between the two categories is determined by n = Wp + b = 0. Because this boundary is linear, the single-layer perceptron can only recognize patterns that are linearly separable (patterns that can be separated by a linear boundary).
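
For instance, with a two-input neuron and assumed weights W = [1 1] and bias b = -1, the boundary is the line p(1) + p(2) = 1, and points on opposite sides of it receive opposite outputs:

    % Points on opposite sides of the decision boundary Wp + b = 0.
    hardlims = @(n) 2*(n >= 0) - 1;
    W = [1 1];  b = -1;        % boundary: p(1) + p(2) = 1

    hardlims(W*[2; 2] + b)     % +1, above the boundary
    hardlims(W*[0; 0] + b)     % -1, below the boundary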

Slide 8: Pattern Recognition Example
Choose the bias b and the elements of the weight matrix W so that the perceptron can distinguish between apples and oranges.

Slide 9: Pattern Recognition Example
[Slide shows the worked solution: a weight matrix and bias whose decision boundary separates the apple and orange prototype vectors.]
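
A minimal sketch of one such solution. The choice W = [0 1 0], b = 0 (so that only the texture sensor decides) is the one commonly used for this textbook example; treat these specific values as assumptions:

    % Perceptron that separates the two prototype fruits.
    hardlims = @(n) 2*(n >= 0) - 1;

    W = [0 1 0];              % weighs only the texture sensor
    b = 0;

    p_orange = [1; -1; -1];
    p_apple  = [1;  1; -1];

    hardlims(W*p_orange + b)  % -1 -> orange
    hardlims(W*p_apple  + b)  % +1 -> apple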

Slide 10: Pattern Recognition Example
What happens if we put a not-so-perfect orange into the classifier? That is, suppose an orange with an elliptical shape passes through the sensors.
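
Continuing the sketch above, an elliptical orange only flips the shape element of the input vector, which this particular W ignores, so the classification is unchanged:

    p_oblong = [-1; -1; -1];      % elliptical, rough, less than 1 pound
    hardlims(W*p_oblong + b)      % still -1 -> orange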

Slide 11: Hamming Network
The Hamming network was designed explicitly to solve binary (1 or -1) pattern recognition problems. It uses both feedforward and recurrent (feedback) layers. The objective is to decide which prototype vector is closest to the input vector. When the recurrent layer converges, there will be only one neuron with nonzero output.

Slide 12: Hamming Network
[Diagram: a feedforward layer followed by a recurrent layer. Feedforward layer: n1 = W1 p + b1, a1 = purelin(n1), with W1 (S x R), p (R x 1), b1 (S x 1). Recurrent layer: n2(t+1) = W2 a2(t), a2(t+1) = poslin(n2(t+1)), initialized with a2(0) = a1.]

Slide 13: Feedforward Layer
The feedforward layer performs a correlation, or inner product, between each of the prototype patterns and the input pattern. The rows of the connection matrix W1 are set to the prototype patterns, and each element of the bias vector b1 is set to R, so that the outputs a1 can never be negative. n1 = W1 p + b1, a1 = purelin(n1).

Slide 14: Feedforward Layer
The outputs are equal to the inner products of each prototype pattern (p1 and p2) with the input p, plus R (here R = 3).

Slide 15: Feedforward Layer
The Hamming distance between two vectors is the number of elements in which they differ; it is defined only for binary vectors. The neuron with the largest output corresponds to the prototype pattern that is closest in Hamming distance to the input pattern. Each output equals 2*(R - Hamming distance); for the input in this example, a1(1) = 2*(3 - 1) = 4 and a1(2) = 2*(3 - 2) = 2.
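
A minimal sketch of the feedforward computation, assuming the oblong orange from slide 10 as the input (purelin is the identity, so it is left implicit):

    % Hamming network, feedforward layer.
    p1 = [1; -1; -1];      % prototype 1: orange
    p2 = [1;  1; -1];      % prototype 2: apple
    R  = 3;                % number of input elements

    W1 = [p1'; p2'];       % rows are the prototype patterns
    b1 = [R; R];           % bias keeps the outputs non-negative

    p  = [-1; -1; -1];     % the oblong (elliptical) orange
    a1 = W1*p + b1         % a1 = [4; 2]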

Slide 16: Recurrent Layer
The recurrent layer of the Hamming network is what is known as a "competitive" layer. The neurons compete with each other to determine a winner; after the competition, only one neuron will have a nonzero output. n2(t+1) = W2 a2(t), a2(t+1) = poslin(n2(t+1)), a2(0) = a1.

Slide 17: Recurrent Layer
On each iteration, every element is reduced by the same small fraction of the other element, so the difference between the large and small outputs is amplified. The effect of the recurrent layer is to zero out all neuron outputs except the one with the largest initial value (which corresponds to the prototype pattern closest in Hamming distance to the input).

Slide 18: Recurrent Layer
Once successive iterations produce the same output, the network has converged. Prototype pattern number one, the orange, is chosen as the correct match.
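
A minimal sketch of the competition, assuming W2 = [1 -e; -e 1] with e = 0.5; the specific value is an assumption, and it only needs to satisfy 0 < e < 1/(S-1):

    % Hamming network, recurrent (competitive) layer.
    poslin = @(n) max(n, 0);     % positive linear transfer function
    e  = 0.5;                    % competition strength
    W2 = [1 -e; -e 1];

    a2 = [4; 2];                 % a2(0) = a1 from the feedforward layer
    for t = 1:3
        a2 = poslin(W2*a2)       % [3; 0] after the first iteration
    end
    % a2 stays at [3; 0]: neuron 1, the orange prototype, wins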

Slide 19: Hopfield Network
In the Hamming network, the nonzero neuron indicates which prototype pattern is chosen. The Hopfield network actually produces the selected prototype pattern as its output. n(t+1) = W a(t) + b, a(t+1) = satlins(n(t+1)), a(0) = p.
[Diagram: recurrent network in which the output a(t+1) (S x 1) is delayed and fed back through the weight matrix W (S x S) with bias b (S x 1).]

Slide 20: Hopfield Network
[Slide shows the Hopfield network iterating on the oblong orange; the output converges to the orange prototype pattern.]
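
A minimal sketch of the Hopfield iteration. The weight matrix and bias below are the values used in this textbook example (a diagonal W plus a bias that pushes the first and third elements toward the orange prototype); treat the specific numbers as assumptions:

    % Hopfield network iterating on the oblong orange.
    satlins = @(n) max(min(n, 1), -1);   % saturating linear transfer fcn

    W = diag([0.2 1.2 0.2]);
    b = [0.9; 0; -0.9];

    a = [-1; -1; -1];                    % a(0) = p, the oblong orange
    for t = 1:3
        a = satlins(W*a + b)
    end
    % a converges to [1; -1; -1], the orange prototype itself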

Slide 21: Conclusions
The perceptron had a single output, which could take on values of -1 (orange) or +1 (apple). In the Hamming network, the single nonzero neuron indicated which prototype had the closest match (neuron 1 indicated orange and neuron 2 indicated apple). In the Hopfield network, the prototype pattern itself appears at the output of the network.

