Associative Learning Memories - SOLAR_A


Associative Learning Memories - SOLAR_A: Matlab code presentation

Introduction Associating SOLAR (SOLAR_A): SOLAR_A structures are hierarchically organized and have the ability to classify patterns in a network of sparsely connected neurons.

Association Training: Neurons learn associations between a pattern and its code. Once training is completed, the network is capable of making the necessary associations. Testing: When the network is presented with the pattern only, it drives the associated input signals to the code values that represent the observed pattern.

Signal definition The internal signals in the network range from 0 to 1. A signal is a determinate low when its value is 0 and a determinate high when its value is 1. Values between 0 and 0.5 are weak low, values between 0.5 and 1 are weak high, and a value of exactly 0.5 means "inactive" or "high impedance".
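
A minimal sketch, not part of the original code, of how these signal levels could be interpreted in Matlab (the function name signal_level is hypothetical):

```matlab
% Hypothetical helper: map a signal value in [0, 1] to the qualitative
% levels defined on this slide.
function level = signal_level(s)
    if s == 0
        level = 'determinate low';
    elseif s == 1
        level = 'determinate high';
    elseif s == 0.5
        level = 'inactive (high impedance)';
    elseif s < 0.5
        level = 'weak low';
    else
        level = 'weak high';
    end
end
```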

Neurons' definitions If a neuron is able to observe any kind of statistical correlation among its input connections, it functions as an associative neuron. Otherwise it is a transmitting neuron.

Associative neuron A neuron is called an associative neuron when its inputs I1 and I2 are associated. Inputs I1 and I2 are associated if and only if I2 can be implied from I1 and I1 can be implied from I2 simultaneously.

Associative neuron Low I1 is associated with low I2, and high I1 is associated with high I2, where I1 and I2 are the inputs the associative neuron has received in training. It is quite clear that I1 and I2 are most likely to be simultaneously low or high, although there is some noise. This can be verified using P(I2 | I1) and P(I1 | I2), implying the value of I2 from I1 and of I1 from I2.
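
A hedged sketch of how this check might look in Matlab, assuming I1 and I2 are vectors of observed 0/1 training inputs and thr is a hypothetical confidence threshold; the actual SOLAR_A criterion may differ:

```matlab
% Hypothetical association test: estimate P(I2=1 | I1=1) and P(I1=1 | I2=1)
% from training observations and require both implications to hold.
function assoc = is_associated(I1, I2, thr)
    both_high = sum(I1 == 1 & I2 == 1);
    p_I2_given_I1 = both_high / max(sum(I1 == 1), 1);
    p_I1_given_I2 = both_high / max(sum(I2 == 1), 1);
    assoc = (p_I2_given_I1 >= thr) && (p_I1_given_I2 >= thr);
end
```

The low-low implication can be checked the same way on the complemented inputs (1 - I1 and 1 - I2).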

Network Structure Hierarchical structure: in the horizontal direction, the neurons in one layer can connect only to the neurons in the previous layer.

Network Structure The connections in the vertical direction follow a mixture distribution: 80% Gaussian with standard deviation 2 and 20% uniform.
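
A sketch of how such connection offsets could be drawn; the sizes and the range of the uniform component are assumptions, since the slide does not state them:

```matlab
% Hypothetical sampling of vertical connection offsets:
% 80% from a Gaussian with standard deviation 2, 20% from a uniform range.
rows   = 198;                               % assumed input length for this sketch
n_conn = 1000;                              % assumed number of connections
use_gauss = rand(n_conn, 1) < 0.8;          % 80% of the draws are Gaussian
offsets = zeros(n_conn, 1);
offsets(use_gauss)  = round(2 * randn(nnz(use_gauss), 1));       % std = 2
offsets(~use_gauss) = randi([-rows, rows], nnz(~use_gauss), 1);  % uniform part
```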

Network Structure The network uses feedback signals to pass information backwards to the associated inputs.

Testing During testing, the missing parts of the data need to be recovered from the existing data through association. For example, in a pattern recognition problem, the associated code inputs are unknown and therefore set to 0.5.
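
A small sketch, under assumed bit counts, of preparing such a test input: the feature bits are supplied and the unknown class-code bits are held at the inactive value 0.5.

```matlab
% Hypothetical test-vector construction for a pattern recognition query.
feature_bits = 168;                                % Iris feature code length
class_bits   = 30;                                 % assumed class-code length
coded_features = randi([0 1], feature_bits, 1);    % placeholder for a real coded pattern
x_test = [coded_features; 0.5 * ones(class_bits, 1)];  % unknown code inputs -> 0.5
```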

Neuron Feedback Scheme

Iris Plants Database The Iris database has 3 classes (Iris Setosa, Iris Versicolour and Iris Virginica), 4 numeric attributes (petal length, petal width, sepal length, sepal width), and 150 instances, 50 for each class, where each class refers to a type of iris plant. The classification objective is to identify the class ID based on the input feature (attribute) values.
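
The same data ships with Matlab's Statistics Toolbox, so a quick way to load it (the original presentation may instead have read the UCI text file) is:

```matlab
% Load Fisher's Iris data from the Statistics Toolbox.
load fisheriris                          % meas: 150x4 attributes, species: 150x1 labels
[~, ~, class_id] = unique(species);      % numeric class IDs 1..3
stats = [min(meas); max(meas); mean(meas)];   % quick look at each attribute's range
```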

Coding of the database The 4 features were scaled linearly and coded using a sliding bar code. For a scaled value V, input bits from (V - Min) + 1 to (V - Min) + L are set high and the remaining bits are low, where the code length N satisfies N - L = Max - Min.

Coding of the database We scaled the 4 features of the Iris database to the range 0-30 and set the bar length L to 12, so the total code length of each feature is 42 bits and the feature input requires 168 bits.
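
A minimal sketch of this sliding bar code (not the original generate_input code); the function name bar_code and its argument list are assumptions:

```matlab
% Hypothetical sliding bar encoder: for a scaled value v in 0..maxV,
% the code has N = maxV + L bits with bits v+1 .. v+L set high.
function code = bar_code(v, maxV, L)
    N = maxV + L;                  % for Iris: 30 + 12 = 42 bits per feature
    code = zeros(1, N);
    code(v+1 : v+L) = 1;
end
```

For example, bar_code(7, 30, 12) yields 7 low bits, 12 high bits and 23 low bits, matching the worked example later in the slides.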

Coding of the database To increase the probability that each feature is associated with the sample's class code, we merged the 4 features.
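
The exact merging scheme is shown only in the original figure; one plausible reading is bit-interleaving of the four 42-bit codes, sketched below purely as an assumption:

```matlab
% Hypothetical bit-interleaving of the four feature codes into 168 bits.
f = randi([0 1], 4, 42);        % placeholder: four 42-bit feature bar codes
merged = reshape(f, 1, []);     % column-major reshape: bit 1 of every feature,
                                % then bit 2 of every feature, and so on
```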

Coding of the database

Coding of the database There are 3 classes in total. We use 3M bits to code the class ID, maximizing the Hamming distance between the class codes. The white part is filled with a 2M-bit string of 0s, while the grey part is filled with an M-bit string of 1s.
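
A sketch of this class ID coding (class_code is a hypothetical name): class k receives an M-bit block of 1s at position k and 0s elsewhere, so any two class codes differ in 2M bits.

```matlab
% Hypothetical class ID encoder for 3 classes using 3M bits.
function code = class_code(k, M)
    code = zeros(1, 3*M);
    code((k-1)*M + 1 : k*M) = 1;   % M-bit block of 1s for class k
end
```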

Iris database simulation Rows 1-168 of the input hold the features; rows 174-203 hold the class ID.

Iris database simulation

Glass identification database

Simulation of mixed features and class ID code

Simulation of mixed features and class ID code Iris database

Image recovery Examples of training patterns; testing results and recovered images of the letters B and J.

Coding example Samples from the Iris database: 5.1, 3.5, 1.4, 0.2, Iris-setosa (class 1); 7.0, 3.2, 4.7, 1.4, Iris-versicolor (class 2); 6.3, 3.3, 6.0, 2.5, Iris-virginica (class 3).

Coding example Coding 5.1, 3.5, 1.4, 0.2, Iris-setosa (class 1): pre-preparing gives 51, 35, 14, 2, 1; scaling the features (51, 35, 14, 2) to the range 0-30 then gives 7, 19, 2, 2, 1.

Coding example Each scaled feature value maps to a 42-bit bar code (for the value 7: 7 low bits, then 12 high bits, then the remaining low bits):
7  -> 000000011111111111100000000000000000000000
19 -> 000000000000000000011111111111100000000000
2  -> 001111111111110000000000000000000000000000
Class ID 1 -> 1111111111…111 0000000…000 (56 bits of 1 followed by 112 bits of 0)
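
A short, purely illustrative check of these feature codes under the coding just described:

```matlab
% Rebuild the 42-bit bar codes for the scaled values 7, 19 and 2.
vals = [7 19 2];
codes = zeros(numel(vals), 42);
for i = 1:numel(vals)
    codes(i, vals(i)+1 : vals(i)+12) = 1;   % 12 high bits start right after the value
end
disp(sprintf('%d', codes(1, :)))            % 7 zeros, 12 ones, 23 zeros
```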

Coded data matrix The input matrix contains, for each sample, the input feature bits and the class ID code; it is made up of M training data and N testing data.

Matlab user interface main.m – main function; training2.m – training function; testing2.m – testing function; catchassociating.m – actively associating neurons; generate_input – coding of the database.

Parameters columns – depth of layers; rows – length of an input pattern; stdr – standard deviation in the vertical direction; stdc – standard deviation in the horizontal direction; n_tests – number of tests.

training.m r_distribution(meanr, stdr, rows, columns, width) – defines the distribution in the vertical direction; normrnd(meanr, stdc, rows, columns) – defines the distribution in the horizontal direction.