Self-Organizing Network Model (SOM) Session 11

2 Self-Organizing Network Model (SOM) Session 11
Course: T0293 – NEURO COMPUTING
Year: 2013

3 Learning Outcomes At the end of this session, students will be able to:
Explain the concept of the self-organizing network model (SOM) and create applications using the concept (LO4)

4 Lecture Outline What is SOM? Mapping Models

5 What is SOM? SOM is a neural network without a teacher, i.e., it is trained unsupervised. SOM performs learning by grouping the input data (clustering). Clustering essentially groups similar objects together and separates dissimilar ones. These networks are based on competitive learning: the output neurons of the network compete among themselves to be activated, or fired, with the result that only one output neuron, or one neuron per group, is on at any one time.
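To make this winner-take-all competition concrete, here is a minimal Python sketch (not from the course material; names and values are purely illustrative) that selects the single active output neuron as the one whose weight vector is closest to the current input:

    import numpy as np

    def winning_neuron(x, W):
        # W holds one weight vector per output neuron (one row each).
        # The winner is the neuron whose weight vector is closest to x.
        distances = np.linalg.norm(W - x, axis=1)
        return int(np.argmin(distances))

    # Toy check: 4 output neurons competing for a 3-dimensional input.
    W = np.random.rand(4, 3)
    x = np.array([0.2, 0.7, 0.1])
    print("active (winning) neuron:", winning_neuron(x, W))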

6 Why SOM? SOM is able to learn to classify data without supervision. You may already be aware of supervised training techniques, such as backpropagation, where the training data consist of vector pairs: an input vector and a target vector.

8 In a self-organizing map, the neurons are placed at the nodes of a lattice that is usually one- or two-dimensional. As a neural model, the SOM provides a bridge between two levels of adaptation: adaptation rules formulated at the microscopic level of a single neuron; and the formation of experientially better and physically accessible patterns of feature selectivity at the macroscopic level of neural layers.

9 Figure 11.2 shows the layout of the two feature-mapping models, the Willshaw–von der Malsburg model and the Kohonen model.
In both cases, the output neurons are arranged in a two-dimensional lattice. The models differ from each other in the manner in which the input patterns are specified.

10 Figure 11.1 Two-dimensional lattice of neurons, illustrated for a three-dimensional input and a four-by-four output lattice. The principal goal of the self-organizing map (SOM) is to transform an incoming signal pattern of arbitrary dimension into a one- or two-dimensional discrete map. The algorithm performs this transformation adaptively in a topologically ordered fashion. Source: Haykin, S. (2009). Neural Networks and Learning Machines, 3rd ed. Pearson.
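A minimal sketch of the configuration Figure 11.1 describes, assuming the same shapes as the figure (a four-by-four output lattice and three-dimensional inputs); the random values are placeholders:

    import numpy as np

    rng = np.random.default_rng(0)
    rows, cols = 4, 4      # output lattice, as in Figure 11.1
    input_dim = 3          # dimension of the incoming signal pattern x

    # One weight vector per lattice node, with the same dimension as x.
    W = rng.random((rows, cols, input_dim))

    # Each input vector is presented to every neuron of the lattice.
    x = rng.random(input_dim)
    print(W.shape, x.shape)  # (4, 4, 3) (3,)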

11 Once the network has been properly initialized, there are three essential processes involved in the formation of the self-organizing map:
Competition. For each input pattern, the neurons in the network compute their respective values of a discriminant function. This discriminant function provides the basis for competition among the neurons. The particular neuron with the largest value of the discriminant function is declared the winner of the competition.
Cooperation. The winning neuron determines the spatial location of a topological neighborhood of excited neurons, thereby providing the basis for cooperation among such neighboring neurons.
Synaptic Adaptation. This last mechanism enables the excited neurons to increase their individual values of the discriminant function in relation to the input pattern through suitable adjustments applied to their synaptic weights. The adjustments are made such that the response of the winning neuron to the subsequent application of a similar input pattern is enhanced.

12 The SOM Algorithm The essence of the SOM algorithm is that it substitutes a simple geometric computation for the more detailed properties of the Hebb-like rule and lateral interactions. The essential ingredients and parameters of the algorithm are:
A continuous input space of activation patterns that are generated in accordance with a certain probability distribution;
A topology of the network in the form of a lattice of neurons, which defines a discrete output space;
A time-varying neighborhood function hj,i(x)(n) that is defined around a winning neuron i(x);
A learning-rate parameter η(n) that starts at an initial value and then decreases gradually with time n, but never goes to zero.
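The slide does not fix the exact forms of hj,i(x)(n) and the learning rate; as an illustration, a common choice (consistent with Haykin's treatment, with constants chosen here only for the example) is a Gaussian neighborhood that shrinks over time and an exponentially decaying learning rate:

    import numpy as np

    def neighborhood(lattice_dist, n, sigma0=2.0, tau_sigma=1000.0):
        # h_{j,i(x)}(n): largest at the winning neuron, decreasing with the
        # lattice distance to the winner and shrinking as time n grows.
        sigma = sigma0 * np.exp(-n / tau_sigma)
        return np.exp(-(lattice_dist ** 2) / (2.0 * sigma ** 2))

    def learning_rate(n, eta0=0.1, tau_eta=1000.0):
        # eta(n): starts at eta0 and decreases gradually, never reaching zero.
        return eta0 * np.exp(-n / tau_eta)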

13 There are three basic steps involved in the application of the algorithm after initialization: sampling, similarity matching, and updating. The algorithm is summarized as follows:
Initialization. Choose random values for the initial weight vectors wj(0). The only restriction here is that the wj(0) be different for j = 1, 2, …, l, where l is the number of neurons in the lattice. It may be desirable to keep the magnitude of the weights small.
Sampling. Draw a sample x from the input space with a certain probability; the vector x represents the activation pattern that is applied to the lattice. The dimension of vector x is equal to m.
Similarity matching. Find the best-matching (winning) neuron i(x) at time step n by using the minimum-distance criterion (written out below).
Source: Haykin, S. (2009). Neural Networks and Learning Machines, 3rd ed. Pearson.
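Written out in the notation used above, the minimum-distance (Euclidean) criterion for the similarity-matching step is

    i(x) = arg min_j ||x(n) − wj(n)||,   j = 1, 2, …, l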

14 Updating. Adjust the synaptic-weight vectors of all excited neurons by using the update formula:
wj(n + 1) = wj(n) + η(n) hj,i(x)(n) [x(n) − wj(n)]
where η(n) is the learning-rate parameter and hj,i(x)(n) is the neighborhood function centered around the winning neuron i(x); both η(n) and hj,i(x)(n) are varied dynamically during learning for best results.
Continuation. Continue with step 2 until no noticeable changes in the feature map are observed.
Source: Haykin, S. (2009). Neural Networks and Learning Machines, 3rd ed. Pearson.
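Putting the steps together, here is a compact Python sketch of the full training loop on a small two-dimensional lattice; the decay schedules, constants, and data are illustrative choices, not values prescribed by the course material:

    import numpy as np

    def train_som(data, rows=4, cols=4, n_iter=2000, eta0=0.1, sigma0=2.0, seed=0):
        rng = np.random.default_rng(seed)
        m = data.shape[1]
        # Initialization: small random weight vectors wj(0), one per lattice node.
        W = rng.random((rows, cols, m)) * 0.1
        # Lattice coordinates, used to measure distances between neurons.
        grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                    indexing="ij"), axis=-1)
        tau = n_iter / 2.0
        for n in range(n_iter):
            # Sampling: draw one activation pattern x from the input space.
            x = data[rng.integers(len(data))]
            # Similarity matching: winner = neuron with minimum Euclidean distance.
            dists = np.linalg.norm(W - x, axis=2)
            winner = np.unravel_index(np.argmin(dists), dists.shape)
            # Cooperation: Gaussian neighborhood around the winner on the lattice.
            eta = eta0 * np.exp(-n / tau)
            sigma = sigma0 * np.exp(-n / tau)
            lattice_dist2 = np.sum((grid - np.array(winner)) ** 2, axis=2)
            h = np.exp(-lattice_dist2 / (2.0 * sigma ** 2))
            # Updating: wj(n+1) = wj(n) + eta(n) * h_{j,i(x)}(n) * (x(n) - wj(n)).
            W += eta * h[..., None] * (x - W)
        return W

    # Illustrative usage on random three-dimensional inputs.
    data = np.random.default_rng(1).random((200, 3))
    W = train_som(data)
    print(W.shape)  # (4, 4, 3)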

15 Summary The essence of Kohonen’s SOM algorithm is that it substitutes a simple geometric computation for the more detailed properties of the Hebb-like rule and lateral interactions. There are three basic steps involved in the application of the algorithm after initialization: sampling, similarity matching, and updating.

16 Example


20 Demo calculation page 172
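The calculation slides themselves are image-only and are not reproduced in this transcript. As a stand-in, here is one hand-worked update step with illustrative numbers (not the page-172 demo itself), assuming a learning rate η = 0.5, a three-dimensional input, two output neurons, and a neighborhood restricted to the winner:

    Input: x = (1, 0, 1); weights: w1 = (0.2, 0.6, 0.5), w2 = (0.8, 0.4, 0.3).
    Similarity matching (squared Euclidean distances):
    D(1) = (1 − 0.2)² + (0 − 0.6)² + (1 − 0.5)² = 0.64 + 0.36 + 0.25 = 1.25
    D(2) = (1 − 0.8)² + (0 − 0.4)² + (1 − 0.3)² = 0.04 + 0.16 + 0.49 = 0.69
    Neuron 2 has the smaller distance, so it is the winner.
    Updating (winner only): w2(new) = w2 + η (x − w2)
                                    = (0.8 + 0.5·0.2, 0.4 − 0.5·0.4, 0.3 + 0.5·0.7)
                                    = (0.9, 0.2, 0.65)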

29 Quiz
There are two models of self-organizing maps (SOM): the Kohonen model and the Willshaw–von der Malsburg model. Draw and explain both of them as two-dimensional lattices.
Explain the three essential processes involved in the formation of the self-organizing map.
Explain the three basic steps of the SOM algorithm that are involved after initialization: sampling, similarity matching, and updating.
Explain BAM (Bidirectional Associative Memory).

30 Summary SOM is a neural network without a teacher, i.e., it is trained unsupervised. SOM performs learning by grouping the input data (clustering). There are two basic feature-mapping models: the Willshaw–von der Malsburg model and the Kohonen model.

31 References
Textbook:
Haykin, S. (2009). Neural Networks and Learning Machines, 3rd ed. Pearson.
Fausett, L.V. (1994). Fundamentals of Neural Networks: Architectures, Algorithms, and Applications, 1st ed. Prentice Hall, New York.

32 END

