CHAPTER 3: An Illustrative Example (Ming-Feng Yeh)

Objectives
Use three different neural network architectures to solve a simple pattern recognition problem:
- Feedforward network: the perceptron
- Competitive network: the Hamming network
- Recurrent associative memory network: the Hopfield network

Problem Statement
[Figure: apples and oranges pass shape, texture, and weight sensors; a neural network uses the sensor outputs to drive the sorter.]

Problem Statement
- Shape sensor: 1 if round, -1 if elliptical.
- Texture sensor: 1 if smooth, -1 if rough.
- Weight sensor: 1 if heavier than 1 pound, -1 if lighter than 1 pound.
The three sensor outputs form the input vector p = [shape; texture; weight].

Single-layer Perceptron
[Diagram: input p (R x 1), weight matrix W (S x R), bias b (S x 1), net input n = Wp + b (S x 1), output a (S x 1).]
Symmetrical hard limit transfer function: a = +1 if n >= 0, a = -1 if n < 0.
MATLAB function: hardlims, so a = hardlims(Wp + b).
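A minimal MATLAB sketch of this layer (hardlims is written out as an anonymous function so no toolbox is needed; the weight, bias, and input values are arbitrary illustrative choices):

    hardlims = @(n) 2*(n >= 0) - 1;   % symmetrical hard limit: +1 if n >= 0, else -1

    W = [0.5 -1 0.5];                 % 1 x R weight matrix (S = 1 neuron, R = 3 inputs)
    b = 0.2;                          % bias
    p = [1; -1; -1];                  % R x 1 input vector

    a = hardlims(W*p + b)             % scalar output, either -1 or +1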

Two-Input / Single-Neuron Perceptron
Single-neuron perceptrons can classify input vectors into two categories. If the inner product Wp >= -b, then a = +1; otherwise, a = -1.

Two-Input / Single-Neuron Perceptron
The decision boundary between the two categories is determined by n = Wp + b = 0. Because this boundary is linear, the single-layer perceptron can only be used to recognize patterns that are linearly separable (i.e., patterns that can be separated by a linear boundary).
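A small sketch of the linear decision boundary, using hypothetical values W = [1 1] and b = -1 (so the boundary is p1 + p2 = 1); points on opposite sides of the line get opposite outputs:

    hardlims = @(n) 2*(n >= 0) - 1;

    W = [1 1];  b = -1;               % hypothetical two-input, single-neuron perceptron

    hardlims(W*[1.0; 1.0] + b)        % +1 : point above the boundary
    hardlims(W*[0.0; 0.0] + b)        % -1 : point below the boundary
    hardlims(W*[0.5; 0.5] + b)        % +1 : point on the boundary (n = 0)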

Pattern Recognition Example
Choose the bias b and the elements of the weight matrix W so that the perceptron can distinguish between apples and oranges.

Pattern Recognition Example
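A sketch of one workable choice in MATLAB. The prototype vectors and the weight/bias values below (orange p1 = [1; -1; -1], apple p2 = [1; 1; -1], W = [0 1 0], b = 0, i.e. classification by the texture sensor alone) follow the standard textbook treatment of this example and should be read as assumptions rather than as the slide's own numbers:

    hardlims = @(n) 2*(n >= 0) - 1;

    p1 = [1; -1; -1];                 % assumed orange prototype: round, rough,  < 1 lb
    p2 = [1;  1; -1];                 % assumed apple prototype:  round, smooth, < 1 lb

    W = [0 1 0];  b = 0;              % assumed solution: decide on texture only

    hardlims(W*p1 + b)                % -1 -> orange
    hardlims(W*p2 + b)                % +1 -> apple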

Pattern Recognition Example
What happens if we put a not-so-perfect orange into the classifier? That is, an orange with an elliptical shape is passed through the sensors.

Hamming Network
The Hamming network was designed explicitly to solve binary (1 or -1) pattern recognition problems. It uses both a feedforward layer and a recurrent (feedback) layer. The objective is to decide which prototype vector is closest to the input vector. When the recurrent layer converges, there will be only one neuron with a nonzero output.

Hamming Network
[Diagram: a feedforward layer (W1: S x R, b1: S x 1) followed by a recurrent layer (W2: S x S, with a unit delay feeding a2(t) back).]
Feedforward layer: n1 = W1 p + b1, a1 = purelin(n1)
Recurrent layer: n2(t+1) = W2 a2(t), a2(t+1) = poslin(n2(t+1)), a2(0) = a1

Feedforward Layer
The feedforward layer performs a correlation, or inner product, between each of the prototype patterns and the input pattern. The rows of the connection matrix W1 are set to the prototype patterns. Each element of the bias vector b1 is set to R, so that the outputs a1 can never be negative.
n1 = W1 p + b1, a1 = purelin(n1)

Feedforward Layer
The outputs are equal to the inner products of each prototype pattern (p1 and p2) with the input (p), plus R (here R = 3).

Feedforward Layer
The Hamming distance between two vectors is equal to the number of elements that are different; it is defined only for binary vectors. The neuron with the largest output will correspond to the prototype pattern that is closest in Hamming distance to the input pattern.
Output = 2 × (R - Hamming distance)
a1(1) = 2 × (3 - 1) = 4,  a1(2) = 2 × (3 - 2) = 2
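A sketch of the feedforward layer on the not-so-perfect (elliptical) orange, assuming the same prototypes as above and the test input p = [-1; -1; -1]:

    purelin = @(n) n;                 % linear transfer function

    p1 = [1; -1; -1];  p2 = [1; 1; -1];   % assumed orange and apple prototypes
    R  = 3;

    W1 = [p1'; p2'];                  % each row of W1 is a prototype pattern
    b1 = [R; R];                      % biases of R keep the outputs non-negative

    p  = [-1; -1; -1];                % elliptical orange
    a1 = purelin(W1*p + b1)           % [4; 2] = 2*(R - Hamming distance to each prototype)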

Recurrent Layer
The recurrent layer of the Hamming network is what is known as a "competitive" layer. The neurons compete with each other to determine a winner; after the competition, only one neuron will have a nonzero output.
n2(t+1) = W2 a2(t), a2(t+1) = poslin(n2(t+1)), a2(0) = a1

Recurrent Layer
Each element is reduced by the same fraction of the other, so the difference between the large and small outputs is increased. The effect of the recurrent layer is to zero out all neuron outputs except the one with the largest initial value (which corresponds to the prototype pattern that is closest in Hamming distance to the input).
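A sketch of the recurrent layer iterating on a1 = [4; 2] from the feedforward layer; the lateral-inhibition value epsilon = 0.5 is an assumed choice (any 0 < epsilon < 1/(S-1) will do):

    poslin  = @(n) max(n, 0);         % positive linear transfer function

    epsilon = 0.5;
    W2 = [1 -epsilon; -epsilon 1];

    a2 = [4; 2];                      % a2(0) = a1
    for t = 1:3
        a2 = poslin(W2*a2);           % each output loses a fraction of the other
    end
    a2                                % [3; 0] : only neuron 1 (the orange) survives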

Recurrent Layer
Since successive iterations produce the same outputs, the network has converged. Prototype pattern number one, the orange, is chosen as the correct match.

Hopfield Network
In the Hamming network, the nonzero neuron indicates which prototype pattern is chosen. The Hopfield network actually produces the selected prototype pattern as its output.
n(t+1) = W a(t) + b, a(t+1) = satlins(n(t+1)), a(0) = p
[Diagram: recurrent single-layer network with weights W (S x S), bias b (S x 1), and a unit delay feeding a(t) back.]

Hopfield Network
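A sketch of the Hopfield network run on the same elliptical-orange input. The diagonal weight matrix and bias vector below are the values used in the standard textbook version of this example; treat them as assumptions here:

    satlins = @(n) max(-1, min(1, n));    % saturating linear transfer function

    W = diag([0.2 1.2 0.2]);              % assumed weights
    b = [0.9; 0; -0.9];                   % assumed biases

    a = [-1; -1; -1];                     % a(0) = p, the elliptical orange
    for t = 1:3
        a = satlins(W*a + b);
    end
    a                                     % [1; -1; -1] : the orange prototype itself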

Conclusions
The perceptron had a single output, which could take on values of -1 (orange) or +1 (apple). In the Hamming network, the single nonzero neuron indicated which prototype had the closest match (neuron 1 indicated an orange, neuron 2 an apple). In the Hopfield network, the prototype pattern itself appears at the output of the network.