Perceptron Learning Rule

1 Perceptron Learning Rule

2 Learning Rules
A learning rule is a procedure for modifying the weights and biases of a network. Learning rules fall into three categories: supervised learning, reinforcement learning, and unsupervised learning.

3 Learning Rules
• Supervised Learning: The network is provided with a set of examples of proper network behavior (inputs/targets): {p1, t1}, {p2, t2}, ..., {pn, tn} (see the sketch after this list).
• Reinforcement Learning: The network is only provided with a grade, or score, which indicates network performance.
• Unsupervised Learning: Only the network inputs are available to the learning algorithm; the network learns to categorize (cluster) the inputs.
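
As a rough illustration of the supervised case, a training set can be stored as a list of input/target pairs; the numeric values in this Python sketch are placeholders, not taken from the slides:

```python
import numpy as np

# A supervised-learning training set is a list of (input, target) pairs
# {p1, t1}, {p2, t2}, ..., {pn, tn}.  The values below are placeholders
# chosen only to illustrate the data structure.
training_set = [
    (np.array([1.0, 2.0]), 1),
    (np.array([-1.0, 2.0]), 0),
    (np.array([0.0, -1.0]), 0),
]

for p, t in training_set:
    print("input:", p, "target:", t)
```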

4 Perceptron Architecture
The weight matrix collects one row per neuron:

$$\mathbf{W} = \begin{bmatrix} {}_{1}\mathbf{w}^T \\ {}_{2}\mathbf{w}^T \\ \vdots \\ {}_{S}\mathbf{w}^T \end{bmatrix}, \qquad {}_{i}\mathbf{w} = \begin{bmatrix} w_{i,1} \\ w_{i,2} \\ \vdots \\ w_{i,R} \end{bmatrix}$$

so the $i$-th neuron computes $a_i = \mathrm{hardlim}({}_{i}\mathbf{w}^T \mathbf{p} + b_i)$.
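
A minimal NumPy sketch of this forward pass, a = hardlim(Wp + b); the function and variable names are illustrative choices, not part of the slides:

```python
import numpy as np

def hardlim(n):
    """Hard-limit transfer function: 1 if n >= 0, else 0 (elementwise)."""
    return (np.asarray(n) >= 0).astype(int)

def perceptron_output(W, b, p):
    """Forward pass of an S-neuron perceptron: a = hardlim(Wp + b).

    W : (S, R) weight matrix, one row per neuron
    b : (S,) bias vector
    p : (R,) input vector
    """
    return hardlim(W @ p + b)
```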

5 Single-Neuron Perceptron

6 Decision Boundary

7 Example - OR

8 OR Solution
The weight vector should be orthogonal to the decision boundary. Pick a point p on the decision boundary to find the bias: since $\mathbf{w}^T\mathbf{p} + b = 0$ there, $b = -\mathbf{w}^T\mathbf{p}$.
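
A sketch of this recipe for the OR problem. The particular weight vector and boundary point below are one plausible choice (an assumption, since the slide's own numbers appeared only as an image), but they do reproduce the OR truth table:

```python
import numpy as np

hardlim = lambda n: int(n >= 0)

# OR training data: inputs and targets
patterns = [np.array([0, 0]), np.array([0, 1]), np.array([1, 0]), np.array([1, 1])]
targets  = [0, 1, 1, 1]

# Choose a weight vector orthogonal to the decision boundary
# (this particular choice is one of many valid ones).
w = np.array([0.5, 0.5])

# Pick a point on the decision boundary and solve w.p + b = 0 for the bias.
p_boundary = np.array([0.0, 0.5])   # assumed boundary point
b = -w @ p_boundary                 # b = -0.25

for p, t in zip(patterns, targets):
    a = hardlim(w @ p + b)
    print(p, "->", a, "(target", t, ")")
```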

9 Multiple-Neuron Perceptron
Each neuron has its own decision boundary, defined by ${}_{i}\mathbf{w}^T\mathbf{p} + b_i = 0$. A single neuron can classify input vectors into two categories. An $S$-neuron perceptron can classify input vectors into as many as $2^S$ categories.
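
A sketch of why $S$ neurons give up to $2^S$ categories: the layer's output is an $S$-bit vector, one bit per decision boundary. The weights here are arbitrary placeholders:

```python
import numpy as np

def hardlim(n):
    return (np.asarray(n) >= 0).astype(int)

# An S-neuron perceptron produces an S-bit output vector, so its outputs
# can label at most 2**S distinct categories.  The weights below are
# arbitrary placeholders just to show the shapes involved.
S, R = 3, 2                       # 3 neurons, 2-dimensional inputs
rng = np.random.default_rng(0)
W = rng.standard_normal((S, R))   # one decision boundary per row
b = rng.standard_normal(S)

p = np.array([0.5, -1.0])
a = hardlim(W @ p + b)            # S-bit category code for this input
print(a, "-> at most", 2**S, "categories")
```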

10 Learning Rule Test Problem

11 Starting Point
Start from a random initial weight vector and present p1 to the network. The result is an incorrect classification.

12 Tentative Learning Rule

13 Second Input Vector (Incorrect Classification) Modification to Rule:

14 Third Input Vector (Incorrect Classification)
Patterns are now correctly classified.
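
The tentative rule itself appeared only as images on slides 12–14; the sketch below encodes the standard case-by-case form (add p when the target is 1 but the output is 0, subtract p in the opposite case, otherwise leave the weights alone), taken as an assumption about what those slides showed:

```python
import numpy as np

def tentative_update(w, p, t, a):
    """Case-by-case (tentative) perceptron rule for a single neuron.

    If the target is 1 but the output is 0, move the weight vector
    toward the input; if the target is 0 but the output is 1, move it
    away; otherwise leave it unchanged.
    """
    if t == 1 and a == 0:
        return w + p
    if t == 0 and a == 1:
        return w - p
    return w
```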

15 Unified Learning Rule
A bias is simply a weight whose input is always 1, so the same update rule can be applied to weights and biases alike.
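
The unified rule likewise appeared as an equation image. The sketch below assumes the standard form, e = t − a with w_new = w_old + e·p and b_new = b_old + e, which folds the three cases of the tentative rule into one update and treats the bias as a weight with input 1:

```python
import numpy as np

def hardlim(n):
    return int(n >= 0)

def unified_update(w, b, p, t):
    """One unified perceptron update for a single neuron (standard form).

    e = t - a collapses the three cases of the tentative rule into one
    expression, and the bias is updated like a weight whose input is 1.
    """
    a = hardlim(w @ p + b)
    e = t - a
    return w + e * p, b + e
```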

16 Multiple-Neuron Perceptrons
To update the i-th row of the weight matrix: Matrix form:
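
Assuming the standard matrix form, W_new = W_old + e·pᵀ and b_new = b_old + e (the slide's own equations appeared only as images), a sketch for an S-neuron layer:

```python
import numpy as np

def hardlim(n):
    return (np.asarray(n) >= 0).astype(int)

def multi_neuron_update(W, b, p, t):
    """Matrix-form perceptron update for an S-neuron layer (standard form).

    Each row of W is adjusted by its own error component:
    W_new = W_old + e p^T,  b_new = b_old + e,  with e = t - a.
    """
    a = hardlim(W @ p + b)
    e = t - a
    return W + np.outer(e, p), b + e
```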

17 Apple/Orange Example: Training Set and Initial Weights

18 Apple/Orange Example: First Iteration
$e = t_1 - a = -1$

19 Second Iteration

20 Third Iteration
$e = t_1 - a = -1$

21 Check
a = hardlim(-3.5) = 0 = t1
a = hardlim(0.5) = 1 = t2
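
A runnable sketch of the whole example. The training vectors and initial weights were shown only as images, so the values below are assumptions (the usual textbook choices); with them, three iterations of the rule reproduce the check values on slide 21:

```python
import numpy as np

def hardlim(n):
    return (np.asarray(n) >= 0).astype(int)

# Assumed apple/orange training set and initial weights (these values
# appeared only as images on the slides; the ones below are the usual
# textbook choices, and they reproduce the check values on slide 21).
patterns = [np.array([1, -1, -1]),   # p1: orange, t1 = 0
            np.array([1,  1, -1])]   # p2: apple,  t2 = 1
targets  = [0, 1]

W = np.array([0.5, -1.0, -0.5])      # assumed initial weight row
b = 0.5                              # assumed initial bias

# Three iterations, cycling through the training set (p1, p2, p1):
for k in range(3):
    p, t = patterns[k % 2], targets[k % 2]
    a = int(hardlim(W @ p + b))
    e = t - a                        # perceptron rule: e = t - a
    W = W + e * p
    b = b + e

# Check (slide 21): both patterns should now be classified correctly.
for p, t in zip(patterns, targets):
    print("hardlim(%.1f) = %d, target %d" % (W @ p + b, int(hardlim(W @ p + b)), t))
```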

22 Perceptron Rule Capability
The perceptron rule will always converge to weights which accomplish the desired classification, assuming that such weights exist.

23 Perceptron Limitations
The perceptron can only produce a linear decision boundary, so it cannot solve linearly inseparable problems.
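
XOR is the classic linearly inseparable problem and is used here as an assumed illustration (the slides show the limitation graphically). Running the perceptron rule on it never reaches zero errors, because no separating linear boundary exists:

```python
import numpy as np

def hardlim(n):
    return int(n >= 0)

# XOR: the classic linearly inseparable problem (an illustrative example;
# the slides show the limitation graphically).
patterns = [np.array([0, 0]), np.array([0, 1]), np.array([1, 0]), np.array([1, 1])]
targets  = [0, 1, 1, 0]

w, b = np.zeros(2), 0.0
for epoch in range(100):                 # far more passes than OR needed
    errors = 0
    for p, t in zip(patterns, targets):
        e = t - hardlim(w @ p + b)
        w, b = w + e * p, b + e
        errors += abs(e)
    if errors == 0:
        break

# Because no linear boundary separates XOR, the rule never reaches
# zero errors in a full pass, no matter how long it runs.
print("errors in final epoch:", errors)
```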

24–31 Example (a sequence of worked-example slides)
