Data Mining, Neural Network and Genetic Programming

High-Order Neural Network and Self-Organising Map
Yi Mei (Yi.mei@ecs.vuw.ac.nz)

Outline
- High-order neural network
  - Motivation
  - Architecture
  - HONN with products
- Self-organising map
  - Classification with no label (clustering)
  - Architecture (neurons, weights, positions)
  - Training and mapping
  - Weight learning

High-Order Neural Network
We have discussed CNN, weight smoothing, and central weight initialisation. All make use of domain knowledge in object recognition: the relationship between neighbouring pixels.
- CNN changes the architecture and constrains the weights
- Weight smoothing constrains the weights
- Central weight initialisation: the central pixels of an object may be more important, and deserve larger weights
The CNN architecture is good, but too complicated to design. Is there a simpler way? The High-Order Neural Network (HONN).

High-Order Neural Network
[Figure: a conventional feedforward neural network: inputs (pixels), hidden units, outputs; first-order (weighted-sum) connections]

High-Order Neural Network
[Figure: a second-order network: hidden units connected to pairs of input pixels]

High-Order Neural Network
[Figure: a third-order network: hidden units connected to triples of input pixels]

High-Order Neural Network
[Figure: a 25th-order HONN with weight constraints: a 16 x 16 input (256 input nodes) is connected through 5 x 5 receptive fields (a 5 x 5 weight matrix) to an 8 x 8 feature map (64 hidden nodes)]

High-Order Neural Network
- Neighbouring pixels are connected to the same hidden node (not fully connected)
- CNN is a special type of HONN
- HONN does not require weight constraints (but can be improved by including them)
- A more general architecture that can take advantage of the geometric relationship between pixels

High-Order Neural Network
Another point of view:
- A conventional neural network only computes weighted sums (first order): net = Σ_i w_i x_i
- This only captures linear correlations between the inputs
- Higher-order correlations (e.g. products) may be useful
- HONN can capture them, e.g. a second-order unit: net = Σ_i Σ_j w_ij x_i x_j
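As a concrete sketch (my own illustration, not from the slides), the net input of a first-order unit versus a second-order unit:

```python
import numpy as np

def first_order_net(x, w, b=0.0):
    # conventional unit: weighted sum of the raw inputs
    return w @ x + b

def second_order_net(x, W2, w1, b=0.0):
    # adds a weighted sum over all products x_i * x_j, so the unit
    # can respond to multiplicative correlations between inputs
    return x @ W2 @ x + w1 @ x + b

x = np.array([0.5, -1.0, 2.0])
w = np.array([0.1, 0.2, -0.3])
W2 = 0.1 * np.ones((3, 3))   # arbitrary second-order weights for the demo
print(first_order_net(x, w), second_order_net(x, W2, w))
```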

High-Order Neural Network
An example: the XOR problem
- Cannot be solved by a (first-order) perceptron, since XOR is not linearly separable
- What about a high-order perceptron? See the sketch below.
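To make the point concrete, here is a hand-picked second-order unit (a sketch; the weights are chosen by hand, not learned). The product term x1 * x2 lets a single unit compute XOR, which no first-order perceptron can:

```python
def high_order_perceptron(x1, x2):
    # net = x1 + x2 - 2*x1*x2 - 0.5; the product term bends the decision boundary
    net = 1.0 * x1 + 1.0 * x2 - 2.0 * (x1 * x2) - 0.5
    return 1 if net > 0 else 0

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, "->", high_order_perceptron(x1, x2))
# prints 0 0 -> 0, 0 1 -> 1, 1 0 -> 1, 1 1 -> 0, i.e. XOR
```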

High-Order Neural Network
Translation invariance: impose constraints on the weights, similar to CNN.

Self-Organising Map
- Conventional neural networks require class labels (desired outputs)
- But class labels are hard (time-consuming) to obtain
- Can we do classification without class labels? -- SOM

Self-Organising Map
- Introduced by T. Kohonen in the 1980s
- A type of neural network, but different from the other NNs we have seen (feedforward, CNN): it uses unsupervised learning
- Does not require class labels
- Visualises high-dimensional data in a low-dimensional space
- Discovers categories and abstractions from raw data

Self-Organising Map
A set of nodes (neurons), each with:
- A weight vector: one weight for each input variable
- A position in the map space
The neurons are usually arranged as a rectangular grid.
[Figure: inputs fully connected to a grid of neurons]
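A minimal sketch of this structure (numpy; the sizes and names are my own, chosen for illustration):

```python
import numpy as np

rows, cols, input_dim = 10, 10, 3                 # 10 x 10 rectangular grid, 3-D inputs
weights = np.random.rand(rows, cols, input_dim)   # one weight vector per neuron
# each neuron also has a fixed (row, col) position in the map space
positions = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                 indexing="ij"), axis=-1)
```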

Self-Organising Map
- Training: build the map from (training) input examples by adjusting the neuron weights
- Mapping: automatically classify a new input vector (which neuron fires?)
- SOM performs competitive learning rather than error-correction learning
- It uses a neighbourhood function to preserve the topology of the input space

Self-Organising Map
- For each input vector, the node whose weight vector is closest to the input is chosen (fired): the Best Matching Unit (BMU)
- The weights of the BMU and of the neurons close to it in the SOM lattice are adjusted towards the input vector:
  W_v(s+1) = W_v(s) + θ(u, v, s) · α(s) · (D(t) − W_v(s))
  where D(t) is the input, u is the BMU of the input, θ is the neighbourhood function (which shrinks over time), and α(s) is the decreasing learning coefficient.

Self-Organising Map
Topology: rectangular or hexagonal lattice
[Figure: rectangular and hexagonal grid layouts]

Self-Organising Map
Neighbourhood function θ(u, v, s):
- Threshold: θ = 1 if node v is within the current radius of the BMU u, and 0 otherwise
- Other forms, e.g. Gaussian: θ(u, v, s) = exp(−dist(u, v)² / (2·σ(s)²))

Self-Organising Map
Distance between nodes in the lattice:
- Euclidean distance
- Manhattan distance
- Box distance
- Number of links in between
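Note that these distances are measured between neuron positions on the grid, not between weight vectors. A small sketch (the function and argument names are mine):

```python
import numpy as np

def grid_distance(p, q, kind="euclidean"):
    """Distance between two neuron positions p and q on the lattice."""
    d = np.abs(np.asarray(p) - np.asarray(q))
    if kind == "euclidean":
        return float(np.sqrt((d ** 2).sum()))
    if kind == "manhattan":
        return float(d.sum())
    if kind == "box":   # max coordinate difference: which "ring" around p contains q
        return float(d.max())
    raise ValueError(kind)

print(grid_distance((0, 0), (2, 3), "euclidean"))   # ~3.61
print(grid_distance((0, 0), (2, 3), "manhattan"))   # 5.0
print(grid_distance((0, 0), (2, 3), "box"))         # 3.0
```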

Learning Self-Organising Map
1. Randomly initialise the weights
2. Get an input vector D(t)
3. Calculate the BMU u of D(t): u = argmin_v ||D(t) − W_v(s)||
4. Update the weight vector of each node v in the map: W_v(s+1) = W_v(s) + θ(u, v, s) · α(s) · (D(t) − W_v(s))
5. Increase s and go back to step 2 until s reaches the maximal step number
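Putting the five steps together, here is a compact numpy sketch of the training loop (a minimal illustration under common default choices: Gaussian neighbourhood, exponentially decaying learning rate and radius; all names are my own):

```python
import numpy as np

def train_som(data, rows=10, cols=10, steps=2000, alpha0=0.5, sigma0=None):
    rng = np.random.default_rng(0)
    dim = data.shape[1]
    W = rng.random((rows, cols, dim))                  # step 1: random weights
    pos = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                               indexing="ij"), axis=-1).astype(float)
    sigma0 = sigma0 or max(rows, cols) / 2.0           # initial neighbourhood radius
    for s in range(steps):
        x = data[rng.integers(len(data))]              # step 2: pick an input vector
        # step 3: BMU = node whose weight vector is closest to x
        d = ((W - x) ** 2).sum(axis=2)
        bmu = np.unravel_index(d.argmin(), d.shape)
        # step 4: move the BMU and its grid neighbours towards x
        alpha = alpha0 * np.exp(-s / steps)            # decreasing learning coefficient
        sigma = sigma0 * np.exp(-s / steps)            # shrinking neighbourhood
        g = ((pos - pos[bmu]) ** 2).sum(axis=2)        # squared grid distance to BMU
        theta = np.exp(-g / (2.0 * sigma ** 2))        # Gaussian neighbourhood function
        W += alpha * theta[..., None] * (x - W)
    return W                                           # step 5: loop until max steps
```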

An example: colour
- Input: 3-dimensional vectors (RGB values)
- SOM: 40 x 40 lattice
- For each neuron, show the 3-D weight vector as an RGB colour
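Continuing the hypothetical train_som sketch from the previous slide, the colour example might look like this:

```python
import numpy as np
import matplotlib.pyplot as plt

rgb = np.random.default_rng(1).random((500, 3))    # random RGB training vectors
W = train_som(rgb, rows=40, cols=40, steps=5000)   # train_som as sketched above
plt.imshow(W)   # each neuron's 3-D weight vector displayed as an RGB colour
plt.show()      # similar colours end up in nearby regions of the map
```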

Another example: clustering digits

Question
Consider a SOM for digit clustering:
- Each digit has 8 features
- The SOM is defined as a 30 x 30 lattice
- 10 classes, 1000 images for training, 100 per class
How many input neurons? How many output neurons? How many weights in the SOM?
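For checking your answers (simple arithmetic from the numbers above): 8 input neurons (one per feature); 30 × 30 = 900 map (output) neurons; and 900 × 8 = 7,200 weights. Note that the number of training images and classes does not affect the weight count.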

Summary
- HONN is a general network that takes the geometric relationship between pixels into account: which pixels should be connected together?
- HONN can consider high-order correlations (products) between inputs (pixels)
- HONN can be handcrafted to be invariant under transformations (translation, scaling, rotation, ...)
- SOM is an unsupervised neural network
- SOM can find patterns in input data without requiring labels
- SOM can visualise high-dimensional data in low dimensions