Self Organizing Feature Map. CS570 Artificial Intelligence, 20003361 이대성, Computer Science, KAIST.

Presentation transcript:

Self Organizing Feature Map, CS570 Artificial Intelligence, 이대성, Computer Science, KAIST

Self Organizing Maps
Based on competitive learning (unsupervised): only one output neuron is activated at any one time, the winner-takes-all (winning) neuron.
In a Self-Organizing Map:
- Neurons are placed at the nodes of a lattice, usually one- or two-dimensional.
- Neurons become selectively tuned to input patterns through a competitive learning process.
- The locations of the tuned neurons become ordered, forming a topographic map of the input patterns.
The spatial locations of the neurons in the lattice thus come to reflect intrinsic statistical features contained in the input patterns.

Self Organizing Maps
Topology-preserving transformation (figure)

SOM as a Neural Model
A distinct feature of the human brain is that it is organized in such a way that different sensory inputs are represented by topologically ordered computational maps.
Computational map:
- A basic building block in the information-processing infrastructure of the nervous system.
- An array of neurons representing slightly differently tuned processors, which operate on the sensory information-bearing signals in parallel.

Cerebral Cortex (figure)

Basic Feature-Mapping Models
Willshaw-von der Malsburg Model (1976):
- Proposed on biological grounds to explain the problem of retinotopic mapping from the retina to the visual cortex.
- Two 2-D lattices: one of presynaptic neurons, one of postsynaptic neurons.
- The geometric proximity of presynaptic neurons is coded in the form of correlated activity, which is exploited in the postsynaptic lattice.
- Specialized to mappings in which the input and output have the same dimension.

Basic Feature-Mapping Models
Kohonen Model (1982):
- Captures the essential features of computational maps in the brain while remaining computationally tractable.
- More general than the Willshaw-von der Malsburg model, and has received more attention.
- Capable of dimensionality reduction.
- Belongs to the class of vector-coding algorithms.

Formation Process of SOM
After the synaptic weights are initialized, three essential processes follow:
- Competition: the neuron with the largest value of the discriminant function is selected as the winner of the competition.
- Cooperation: the spatial neighbors of the winning neuron are selected as well.
- Synaptic adaptation: the excited neurons adjust their synaptic weights.

Competitive Process
Input vector and synaptic weight vectors:
$\mathbf{x} = [x_1, x_2, \ldots, x_m]^T$
$\mathbf{w}_j = [w_{j1}, w_{j2}, \ldots, w_{jm}]^T, \quad j = 1, 2, \ldots, l$
Best-matching (winning) neuron:
$i(\mathbf{x}) = \arg\min_j \|\mathbf{x} - \mathbf{w}_j\|, \quad j = 1, 2, \ldots, l$
The winner determines the location where the topological neighborhood of excited neurons is to be centered.
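A minimal sketch of the competitive step in Python, assuming NumPy and a weight matrix W of shape (l, m) holding one weight vector per lattice neuron (the names x, W, and winner are illustrative, not from the slides):

```python
import numpy as np

def winner(x, W):
    """Competitive process: return the index i(x) of the
    best-matching neuron, i.e. the row of W whose weight
    vector is closest to x in Euclidean distance."""
    return int(np.argmin(np.linalg.norm(W - x, axis=1)))
```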

Cooperative Process
For a winning neuron, the neurons in its immediate neighborhood are excited more than those farther away, so the topological neighborhood should decay smoothly with lateral distance:
- Symmetric about the maximum point defined by $d_{j,i} = 0$.
- Monotonically decreasing to zero as $d_{j,i} \to \infty$.
Neighborhood function, Gaussian case:
$h_{j,i(\mathbf{x})}(n) = \exp\left(-\frac{d_{j,i}^2}{2\sigma^2(n)}\right)$
The size of the neighborhood shrinks with time.
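A sketch of the Gaussian neighborhood function under the same assumptions; d is the lateral distance on the lattice and sigma the time-decaying neighborhood width (both parameter names are illustrative):

```python
import numpy as np

def neighborhood(d, sigma):
    """Gaussian neighborhood h = exp(-d^2 / (2 sigma^2)):
    equal to 1 at lateral distance d = 0 (the winner) and
    decaying monotonically to zero as d grows."""
    return np.exp(-(d ** 2) / (2.0 * sigma ** 2))
```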

Adaptive Process
The synaptic weight vector is changed in relation to the input vector:
$\mathbf{w}_j(n+1) = \mathbf{w}_j(n) + \eta(n)\, h_{j,i(\mathbf{x})}(n)\, (\mathbf{x} - \mathbf{w}_j(n))$
- Applied to all neurons inside the neighborhood of the winning neuron $i(\mathbf{x})$.
- Upon repeated presentation of the training data, the weight vectors tend to follow the distribution of the inputs.
- The learning rate $\eta(n)$ decays with time.
The process may be decomposed into two phases:
- Self-organizing (ordering) phase: topological ordering of the weight vectors.
- Convergence phase: after ordering, fine-tuning for an accurate statistical quantification of the input space.
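A sketch of one adaptive step, assuming a coords array holding each neuron's lattice position (an illustrative name); it applies the update rule above to all neurons at once, with the neighborhood factor doing the gating:

```python
import numpy as np

def adapt(W, coords, x, i_win, eta, sigma):
    """One adaptive step: w_j <- w_j + eta * h_{j,i(x)} * (x - w_j),
    applied to every neuron, weighted by the Gaussian
    neighborhood around the winning neuron i_win."""
    d = np.linalg.norm(coords - coords[i_win], axis=1)  # lateral distances d_{j,i}
    h = np.exp(-(d ** 2) / (2.0 * sigma ** 2))          # neighborhood h_{j,i(x)}
    W += eta * h[:, None] * (x - W)                     # move weights toward x
    return W
```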

Summary of SOM (1)
- A continuous input space of activation patterns, generated in accordance with a certain probability distribution.
- The topology of the network, in the form of a lattice of neurons, defines a discrete output space.
- A time-varying neighborhood function is defined around the winning neuron.
- The learning rate decreases gradually with time, but never goes to zero.

Summary of SOM (2)
Learning Algorithm:
1. Initialize the weight vectors $\mathbf{w}_j$.
2. Find the nearest cell: $i(\mathbf{x}) = \arg\min_j \|\mathbf{x}(n) - \mathbf{w}_j(n)\|$.
3. Update the weights of the neighbors: $\mathbf{w}_j(n+1) = \mathbf{w}_j(n) + \eta(n)\, h_{j,i(\mathbf{x})}(n)\, [\mathbf{x}(n) - \mathbf{w}_j(n)]$.
4. Shrink the neighborhood and reduce $\eta$.
5. Go to step 2.
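A self-contained sketch of this loop, assuming a 2-D lattice and exponential decay schedules for eta and sigma (one common choice; the slides only require that both shrink with time and that eta never reaches zero):

```python
import numpy as np

def train_som(X, rows=10, cols=10, epochs=20,
              eta0=0.5, sigma0=3.0, tau=10.0, seed=0):
    rng = np.random.default_rng(seed)
    coords = np.array([(r, c) for r in range(rows)
                       for c in range(cols)], dtype=float)
    W = rng.random((rows * cols, X.shape[1]))            # 1. initialize w's
    for n in range(epochs):
        eta = eta0 * np.exp(-n / tau)                    # decaying learning rate
        sigma = sigma0 * np.exp(-n / tau)                # shrinking neighborhood
        for x in rng.permutation(X):
            i = int(np.argmin(np.linalg.norm(W - x, axis=1)))  # 2. nearest cell
            d = np.linalg.norm(coords - coords[i], axis=1)
            h = np.exp(-(d ** 2) / (2.0 * sigma ** 2))
            W += eta * h[:, None] * (x - W)              # 3. update neighbors
    return W, coords                                     # 4-5. repeat per epoch

# Usage: map 500 points drawn uniformly from the unit square onto a 10x10 lattice.
W, coords = train_som(np.random.default_rng(1).random((500, 2)))
```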

SOFM Example (1)
2-D lattice trained on a 2-D input distribution (figure)

SOFM Example (2)
Phoneme recognition:
- Phonotopic maps
- Recognition result for "humppila"