Perceptron Learning Rule

Presentation transcript:

CHAPTER 4 Perceptron Learning Rule Ming-Feng Yeh

Objectives How do we determine the weight matrix and bias for perceptron networks with many inputs, where it is impossible to visualize the decision boundaries? The main objective is to describe an algorithm for training perceptron networks, so that they can learn to solve classification problems.

History: 1943 Warren McCulloch and Walter Pitts introduced one of the first artificial neurons in 1943. The main feature of their neuron model is that a weighted sum of input signals is compared to a threshold to determine the neuron output. They went on to show that networks of these neurons could, in principle, compute any arithmetic or logic function. Unlike biological networks, the parameters of their networks had to be designed by hand, as no training method was available.

History: 1950s Frank Rosenblatt and several other researchers developed a class of neural networks called perceptrons in the late 1950s. Rosenblatt's key contribution was the introduction of a learning rule for training perceptron networks to solve pattern recognition problems. The perceptron could learn even when initialized with random values for its weights and biases.

History: ~1980s Marvin Minsky and Seymour Papert (1969) demonstrated that perceptron networks were incapable of implementing certain elementary functions (e.g., the XOR gate). It was not until the 1980s that these limitations were overcome with improved (multilayer) perceptron networks and associated learning rules. Even so, the perceptron network remains a fast and reliable network for the class of problems that it can solve.

Learning Rule A learning rule is a procedure (training algorithm) for modifying the weights and the biases of a network. The purpose of the learning rule is to train the network to perform some task. Learning rules fall into three broad categories: supervised learning, unsupervised learning, and reinforcement (graded) learning.

Supervised Learning The learning rule is provided with a set of examples (the training set) of proper network behavior: $\{\mathbf{p}_1,\mathbf{t}_1\},\{\mathbf{p}_2,\mathbf{t}_2\},\ldots,\{\mathbf{p}_Q,\mathbf{t}_Q\}$, where $\mathbf{p}_q$ is an input to the network and $\mathbf{t}_q$ is the corresponding correct (target) output. As the inputs are applied to the network, the network outputs are compared to the targets. The learning rule is then used to adjust the weights and biases of the network in order to move the network outputs closer to the targets.
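
In code, the training set is simply a list of input/target pairs; a minimal sketch in Python (assuming NumPy), using the three pairs of the test problem that appears later in the chapter:

```python
import numpy as np

# A training set is just Q input/target pairs {p_q, t_q}.
# These three pairs are the test problem used later in the chapter.
training_set = [(np.array([1.0, 2.0]), 1),    # {p_1, t_1}
                (np.array([-1.0, 2.0]), 0),   # {p_2, t_2}
                (np.array([0.0, -1.0]), 0)]   # {p_3, t_3}
```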

Reinforcement Learning This learning rule is similar to supervised learning, except that, instead of being provided with the correct output for each network input, the algorithm is only given a grade. The grade (score) is a measure of the network's performance over some sequence of inputs. Reinforcement learning appears to be best suited to control system applications.

Unsupervised Learning The weights and biases are modified in response to network inputs only; there are no target outputs available. Most of these algorithms perform some kind of clustering operation: they learn to categorize the input patterns into a finite number of classes. This is especially useful in applications such as vector quantization.

Two-Input / Single-Neuron Perceptron The decision boundary is the set of points where $n = \mathbf{W}\mathbf{p} + b = 0$. The weight vector $\mathbf{W}$ points toward the region where the neuron outputs 1, and the boundary is always orthogonal to $\mathbf{W}$.
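
A minimal numeric sketch of this geometry in Python (assuming NumPy; the particular $\mathbf{W}$ and $b$ are illustrative, not taken from the slides):

```python
import numpy as np

def hardlim(n):
    """Hard-limit transfer function: 1 if n >= 0, else 0."""
    return int(n >= 0)

# Illustrative two-input, single-neuron perceptron (W and b are arbitrary).
W = np.array([1.0, 2.0])   # weight vector
b = -2.0                   # bias

def perceptron(p):
    """a = hardlim(Wp + b); the boundary is the line Wp + b = 0."""
    return hardlim(W @ p + b)

# W = [1, 2] is orthogonal to the boundary p1 + 2*p2 = 2 and points
# toward the region classified as 1.
print(perceptron(np.array([2.0, 2.0])))  # 1: on the side W points to
print(perceptron(np.array([0.0, 0.0])))  # 0: on the opposite side
```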

Perceptron Network Design The input/target pairs for the AND gate are $\{\mathbf{p}_1 = [0\ 0]^T, t_1 = 0\}$, $\{\mathbf{p}_2 = [0\ 1]^T, t_2 = 0\}$, $\{\mathbf{p}_3 = [1\ 0]^T, t_3 = 0\}$, $\{\mathbf{p}_4 = [1\ 1]^T, t_4 = 1\}$ (dark circle: 1, light circle: 0). Step 1: Select a decision boundary that separates the dark and light circles. Step 2: Choose a weight vector $\mathbf{W}$ that is orthogonal to the decision boundary. Step 3: Find the bias $b$, e.g., by picking a point on the decision boundary and requiring $n = \mathbf{W}\mathbf{p} + b = 0$. A sketch of these steps appears below.
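
One valid design, sketched in code (assuming the boundary $p_1 + p_2 = 1.5$; any boundary that separates the two classes would do):

```python
import numpy as np

def hardlim(n):
    """Elementwise hard limit: 1 where n >= 0, else 0."""
    return (n >= 0).astype(int)

# AND-gate pairs: output 1 only for input [1, 1].
P = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])  # one input per row
t = np.array([0, 0, 0, 1])                      # targets

# Step 1: boundary p1 + p2 = 1.5 separates [1,1] from the rest.
# Step 2: W = [2, 2] is orthogonal to that boundary.
# Step 3: a boundary point such as [0.75, 0.75] gives Wp + b = 0 => b = -3.
W = np.array([2.0, 2.0])
b = -3.0

print(hardlim(P @ W + b))   # [0 0 0 1], matching the AND targets
```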

Test Problem The given input/target pairs are  Two-input and one output network without a bias  the decision boundary must pass through the origin.  Dark circle : 1 Light circle : 0 p1 p2 p3 The length of the weight vector does not matter; only its direction is important.  Ming-Feng Yeh

Constructing Learning Rule Training begins by assigning some initial values for the network parameters.  p1 p2 p3  The network has not returned the correct value, a = 0 and t1 = 1.  The initial weight vector results in a decision boundary that incorrectly classifies the vector p1. Ming-Feng Yeh

Constructing Learning Rule One approach would be to set $\mathbf{W}$ equal to $\mathbf{p}_1$, so that $\mathbf{p}_1$ would be classified properly in the future. Unfortunately, it is easy to construct a problem for which this rule cannot find a solution.

Constructing Learning Rule Another approach is to add $\mathbf{p}_1$ to $\mathbf{W}$: adding $\mathbf{p}_1$ to $\mathbf{W}$ makes $\mathbf{W}$ point more in the direction of $\mathbf{p}_1$. Rule 1: If $t = 1$ and $a = 0$, then $\mathbf{W}^{new} = \mathbf{W}^{old} + \mathbf{p}$.

Constructing Learning Rule  The next input vector is p2. p1 p2 p3  A class 0 vector was misclassified as a 1, a = 1 and t2 = 0. Ming-Feng Yeh

Constructing Learning Rule  Present the 3rd vector p3 p1 p2 p3  A class 0 vector was misclassified as a 1, a = 1 and t2 = 0. Ming-Feng Yeh

Constructing Learning Rule  If we present any of the input vectors to the neuron, it will output the correct class for that input vector. The perceptron has finally learned to classify the three vectors properly.  The third and final rule:  Training sequence: p1 p2 p3 p1 p2 p3  One iteration Ming-Feng Yeh

Unified Learning Rule Define the perceptron error $e = t - a$. The three rules then collapse into a single expression: $\mathbf{W}^{new} = \mathbf{W}^{old} + e\,\mathbf{p}^T$ and $b^{new} = b^{old} + e$ (the bias is treated as a weight whose input is always 1).
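
A minimal sketch of the unified rule for a single neuron (the bias term is included here for generality, although the test problem used none):

```python
import numpy as np

def hardlim(n):
    return int(n >= 0)

def unified_update(W, b, p, t):
    """e = t - a; W += e*p; b += e. e in {-1, 0, +1} covers all three rules."""
    a = hardlim(W @ p + b)
    e = t - a
    return W + e * p, b + e

# One update on the misclassified p1 from the test problem.
W, b = np.array([1.0, -0.8]), 0.0
W, b = unified_update(W, b, np.array([1.0, 2.0]), 1)
print(W, b)   # [2.  1.2] 1.0  (e = +1, so p was added to W and 1 to b)
```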

Training Multiple-Neuron Perceptron For a layer of neurons the rule applies row by row, in matrix form: $\mathbf{W}^{new} = \mathbf{W}^{old} + \mathbf{e}\,\mathbf{p}^T$ and $\mathbf{b}^{new} = \mathbf{b}^{old} + \mathbf{e}$. A learning rate $\alpha$ may scale each update: $\mathbf{W}^{new} = \mathbf{W}^{old} + \alpha\,\mathbf{e}\,\mathbf{p}^T$, $\mathbf{b}^{new} = \mathbf{b}^{old} + \alpha\,\mathbf{e}$.
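
A sketch of the matrix form under the same assumptions (NumPy; alpha defaulting to 1 recovers the unified rule):

```python
import numpy as np

def hardlim(n):
    return (n >= 0).astype(float)

def train_step(W, b, p, t, alpha=1.0):
    """Matrix form: W += alpha * e p^T, b += alpha * e, one row per neuron."""
    a = hardlim(W @ p + b)
    e = t - a                          # error vector, one entry per neuron
    return W + alpha * np.outer(e, p), b + alpha * e

# Two neurons, two inputs: each row of W is updated independently.
W, b = np.zeros((2, 2)), np.zeros(2)
W, b = train_step(W, b, np.array([1.0, 2.0]), np.array([0.0, 1.0]))
print(W)   # row 0 pushed away from p; row 1 unchanged (its output was right)
print(b)
```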

Apple/Orange Recognition Problem [The original slide applied the learning rule to the apple/orange sorting example; the worked equations were presented as images and did not survive in the transcript.]
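
A hedged sketch of training on this problem, assuming the book's prototype sensor vectors and target coding (orange = 0, apple = 1); the learned weights below are one of many valid solutions:

```python
import numpy as np

def hardlim(n):
    return int(n >= 0)

# Prototype sensor vectors p = [shape, texture, weight] from the book's
# sorting example: orange -> target 0, apple -> target 1 (assumed setup).
pairs = [(np.array([1.0, -1.0, -1.0]), 0),   # orange
         (np.array([1.0,  1.0, -1.0]), 1)]   # apple

W, b = np.zeros(3), 0.0                      # assumed initial values
for _ in range(10):
    total_error = 0
    for p, t in pairs:
        e = t - hardlim(W @ p + b)           # unified rule
        W, b = W + e * p, b + e
        total_error += abs(e)
    if total_error == 0:
        break

print(W, b)   # e.g. [0. 2. 0.] 0.0: only the texture input decides
```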

Limitations The perceptron can only classify input vectors that can be separated by a linear boundary, like the AND gate example. Linearly separable problems include the AND, OR, and NOT gates; the XOR gate is not linearly separable. The sketch below illustrates the difference.
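
A sketch illustrating the limitation: the same training loop converges on the linearly separable OR gate but never on XOR (the epoch cap stops the otherwise endless cycling):

```python
import numpy as np

def hardlim(n):
    return int(n >= 0)

def train(pairs, max_epochs=100):
    """Return (W, b, converged). Converges iff the pairs are separable."""
    W, b = np.zeros(2), 0.0
    for _ in range(max_epochs):
        errors = 0
        for p, t in pairs:
            e = t - hardlim(W @ p + b)
            W, b = W + e * p, b + e
            errors += abs(e)
        if errors == 0:
            return W, b, True
    return W, b, False

P = [np.array(v, dtype=float) for v in [(0, 0), (0, 1), (1, 0), (1, 1)]]
print(train(list(zip(P, [0, 1, 1, 1])))[2])  # True:  OR is separable
print(train(list(zip(P, [0, 1, 1, 0])))[2])  # False: XOR is not
```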

Solved Problem P4.3 Design a perceptron network to solve the following problem, with four classes of input vectors: Class 1: $\{[1\ 1]^T, [1\ 2]^T\}$, target $\mathbf{t} = [0\ 0]^T$; Class 2: $\{[2\ {-1}]^T, [2\ 0]^T\}$, target $\mathbf{t} = [0\ 1]^T$; Class 3: $\{[-1\ 2]^T, [-2\ 1]^T\}$, target $\mathbf{t} = [1\ 0]^T$; Class 4: $\{[-1\ {-1}]^T, [-2\ {-2}]^T\}$, target $\mathbf{t} = [1\ 1]^T$. A two-neuron perceptron creates two decision boundaries, enough to encode four classes.

Solution of P4.3 Choose one boundary to separate Classes 1 and 2 from Classes 3 and 4, and a second to separate Classes 1 and 3 from Classes 2 and 4. Reading the weights and biases off these boundaries gives $\mathbf{W} = \begin{bmatrix} -3 & -1 \\ 1 & -2 \end{bmatrix}$, $\mathbf{b} = \begin{bmatrix} 1 \\ 0 \end{bmatrix}$.
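
A quick check of this design (a sketch; hardlim and the eight pairs as above):

```python
import numpy as np

def hardlim(n):
    return (n >= 0).astype(int)

W = np.array([[-3.0, -1.0],     # boundary 1: separates classes 1,2 / 3,4
              [ 1.0, -2.0]])    # boundary 2: separates classes 1,3 / 2,4
b = np.array([1.0, 0.0])

vectors = [([1, 1],   [0, 0]), ([1, 2],   [0, 0]),    # class 1
           ([2, -1],  [0, 1]), ([2, 0],   [0, 1]),    # class 2
           ([-1, 2],  [1, 0]), ([-2, 1],  [1, 0]),    # class 3
           ([-1, -1], [1, 1]), ([-2, -2], [1, 1])]    # class 4

for p, t in vectors:
    a = hardlim(W @ np.array(p, dtype=float) + b)
    assert list(a) == t, (p, list(a), t)
print("all eight vectors classified correctly")
```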

Solved Problem P4.5 Train a perceptron network to solve the P4.3 problem using the perceptron learning rule.

Solution of P4.5 [The original slides worked through the training iterations as equations and concluded with the trained two-neuron network diagram; those images did not survive in the transcript. The sketch below reproduces the training.]
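
Since the worked iterations did not survive, here is a sketch of the same training, assuming zero initial weights and biases (the original slides' intermediate numbers depended on the book's initial values, so the converged $\mathbf{W}$ may differ):

```python
import numpy as np

def hardlim(n):
    return (n >= 0).astype(float)

# The eight input/target pairs of P4.3.
P = np.array([[1, 1], [1, 2], [2, -1], [2, 0],
              [-1, 2], [-2, 1], [-1, -1], [-2, -2]], dtype=float)
T = np.array([[0, 0], [0, 0], [0, 1], [0, 1],
              [1, 0], [1, 0], [1, 1], [1, 1]], dtype=float)

W, b = np.zeros((2, 2)), np.zeros(2)   # assumed initial values

for epoch in range(100):
    total_error = 0.0
    for p, t in zip(P, T):
        e = t - hardlim(W @ p + b)     # unified rule, matrix form
        W += np.outer(e, p)
        b += e
        total_error += np.abs(e).sum()
    if total_error == 0:               # every vector classified correctly
        break

print(W)
print(b)   # a converged two-neuron perceptron for the four classes
```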