
Presentation on theme: "Perceptron Learning Rule" - Presentation transcript:

Slide 1: Perceptron Learning Rule

Slide 2: Learning Rules
A learning rule is a procedure for modifying the weights and biases of a network. Learning rules fall into three classes: supervised learning, reinforcement learning, and unsupervised learning.

Slide 3: Learning Rules
Supervised learning: the network is provided with a set of examples of proper network behavior (input/target pairs) {p1, t1}, {p2, t2}, ..., {pQ, tQ}.
Reinforcement learning: the network is provided only with a grade, or score, indicating its performance.
Unsupervised learning: only the network inputs are available to the learning algorithm; the network learns to categorize (cluster) the inputs.

Slide 4: Perceptron Architecture
The perceptron computes a = hardlim(Wp + b). Row i of the weight matrix holds the weight vector of neuron i:

  iw = [w_i,1; w_i,2; ...; w_i,R],   W = [1w^T; 2w^T; ...; Sw^T]
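The architecture a = hardlim(Wp + b) can be sketched directly; all numeric values below are illustrative, not taken from the slides.

```python
# A minimal sketch of the perceptron forward pass a = hardlim(Wp + b),
# with row i of W holding neuron i's weight vector. All numeric values
# below are illustrative, not taken from the slides.

def hardlim(n):
    """Hard-limit transfer function: 1 if n >= 0, else 0."""
    return 1 if n >= 0 else 0

def perceptron(W, b, p):
    """Forward pass of an S-neuron, R-input perceptron layer."""
    return [hardlim(sum(w * x for w, x in zip(row, p)) + bi)
            for row, bi in zip(W, b)]

W = [[1.0, -0.8]]          # one neuron (S = 1), two inputs (R = 2)
b = [0.0]
print(perceptron(W, b, [1.0, 2.0]))   # n = 1.0 - 1.6 = -0.6 -> [0]
print(perceptron(W, b, [2.0, 1.0]))   # n = 2.0 - 0.8 =  1.2 -> [1]
```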

Slide 5: Single-Neuron Perceptron
a = hardlim(1w^T p + b) = hardlim(w_1,1 p1 + w_1,2 p2 + b)

Slide 6: Decision Boundary
The decision boundary is the set of points where the net input is zero: 1w^T p + b = 0. The weight vector 1w is orthogonal to the boundary and points toward the region where the output is 1.

Slide 7: Example - OR

Slide 8: OR Solution
The weight vector should be orthogonal to the decision boundary. Pick a point on the decision boundary to solve for the bias.
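The slide's recipe can be checked in a few lines. The particular weight vector and boundary point below are one illustrative choice, not necessarily the one on the slide: w = [0.5, 0.5], with boundary point p = [0, 0.5], gives b = -w.p = -0.25.

```python
# Sketch of the slide's recipe for OR: choose a weight vector orthogonal
# to the intended decision boundary, then pick a point on the boundary to
# solve for the bias. The choice w = [0.5, 0.5] with boundary point
# p = [0, 0.5] (so b = -w.p = -0.25) is illustrative.

def hardlim(n):
    return 1 if n >= 0 else 0

w, b = [0.5, 0.5], -0.25
or_patterns = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 1}

results = {p: hardlim(w[0] * p[0] + w[1] * p[1] + b)
           for p in or_patterns}
print(results == or_patterns)   # True: all four OR patterns classified
```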

Slide 9: Multiple-Neuron Perceptron
Each neuron i has its own decision boundary, iw^T p + b_i = 0. A single neuron can classify input vectors into two categories; an S-neuron perceptron can classify input vectors into 2^S categories.

Slide 10: Learning Rule Test Problem

Slide 11: Starting Point
Start from a random initial weight vector and present p1 to the network: the result is an incorrect classification.

Slide 12: Tentative Learning Rule
If t = 1 and a = 0, add the input to the weight vector: 1w_new = 1w_old + p.

Slide 13: Second Input Vector
p2 is also incorrectly classified. Modification to the rule: if t = 0 and a = 1, subtract the input: 1w_new = 1w_old - p.

Slide 14: Third Input Vector
(also incorrectly classified at first) After updating on it, all patterns are correctly classified. The remaining case: if t = a, leave the weights unchanged.

Slide 15: Unified Learning Rule
Define the error e = t - a. Then the three cases collapse into 1w_new = 1w_old + e p and b_new = b_old + e. A bias can be treated as a weight whose input is always 1.
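With the bias folded in as a weight on a constant input of 1, one update of the unified rule can be sketched as follows; the input, target, and starting weights are illustrative.

```python
# Sketch of the unified rule with the bias treated as a weight on a
# constant input of 1: let x = [w; b] and z = [p; 1]; then one update is
# x_new = x_old + e*z with e = t - a. The input, target, and starting
# weights below are illustrative.

def hardlim(n):
    return 1 if n >= 0 else 0

def unified_update(x, p, t):
    z = list(p) + [1.0]                  # augment the input with a 1
    a = hardlim(sum(xi * zi for xi, zi in zip(x, z)))
    e = t - a                            # error is +1, -1, or 0
    return [xi + e * zi for xi, zi in zip(x, z)]

x = [1.0, -0.8, 0.0]                     # [w1, w2, b]
x = unified_update(x, [1.0, 2.0], 1)     # n = -0.6, a = 0, e = 1
print(x)                                 # -> [2.0, 1.2, 1.0]
```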

Slide 16: Multiple-Neuron Perceptrons
To update the i-th row of the weight matrix: iw_new = iw_old + e_i p, with b_i,new = b_i,old + e_i. Matrix form: W_new = W_old + e p^T, b_new = b_old + e.

Slide 17: Apple/Orange Example
Training set: {p1 = [1; -1; -1], t1 = 0} (orange), {p2 = [1; 1; -1], t2 = 1} (apple). Initial weights: W = [0.5 -1 -0.5], b = 0.5.

Slide 18: Apple/Orange Example, First Iteration
a = hardlim(Wp1 + b) = hardlim(2.5) = 1, so e = t1 - a = 0 - 1 = -1. Then W_new = W + e p1^T = [-0.5 0 0.5] and b_new = b + e = -0.5.

Slide 19: Second Iteration
a = hardlim(Wp2 + b) = hardlim(-1.5) = 0, so e = t2 - a = 1 - 0 = 1. Then W_new = [0.5 1 -0.5] and b_new = 0.5.

Slide 20: Third Iteration
a = hardlim(Wp1 + b) = hardlim(0.5) = 1, so e = t1 - a = 0 - 1 = -1. Then W_new = [-0.5 2 0.5] and b_new = -0.5.

Slide 21: Check
a = hardlim(Wp1 + b) = hardlim(-3.5) = 0 = t1, and a = hardlim(Wp2 + b) = hardlim(0.5) = 1 = t2. Both patterns are now classified correctly.
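The three iterations and the final check can be re-run in code. The training data and initial weights below are the usual values for this apple/orange example (an assumption here, since they are not legible in the transcript), and they reproduce the slides' intermediate net inputs -1.5, -3.5, and 0.5.

```python
# Re-running the three iterations and the final check. Assumed data:
# p1 = [1,-1,-1] with t1 = 0 (orange), p2 = [1,1,-1] with t2 = 1 (apple),
# initial W = [0.5, -1, -0.5], b = 0.5. These values reproduce the
# slides' intermediate net inputs -1.5, -3.5, and 0.5.

def hardlim(n):
    return 1 if n >= 0 else 0

def net(W, b, p):
    return sum(w * x for w, x in zip(W, p)) + b

patterns = [([1, -1, -1], 0), ([1, 1, -1], 1)]
W, b = [0.5, -1.0, -0.5], 0.5

for k in range(3):                  # iterations present p1, p2, p1
    p, t = patterns[k % 2]
    e = t - hardlim(net(W, b, p))
    W = [w + e * x for w, x in zip(W, p)]   # W_new = W_old + e p^T
    b += e                                  # b_new = b_old + e

print(net(W, b, [1, -1, -1]))   # -3.5 -> hardlim = 0 = t1
print(net(W, b, [1, 1, -1]))    #  0.5 -> hardlim = 1 = t2
```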

Slide 22: Perceptron Rule Capability
The perceptron rule will always converge to weights that accomplish the desired classification, provided such weights exist.

Slide 23: Proof of Convergence (Notation)
Training set: {p1, t1}, {p2, t2}, ..., {pQ, tQ}. Collect the weights and bias into a single vector x = [1w; b] and augment each input with a 1: z_q = [p_q; 1], so the net input is n = x^T z_q.

Slide 24: Proof of Convergence (Notation)
Assume the problem is solvable: there exists a solution x* and a margin δ > 0 such that x*^T z'_q > δ for every q, where z'_q = z_q if t_q = 1 and z'_q = -z_q if t_q = 0.

Slide 25: Proof
Start from x(0) = 0 and update only when a pattern is misclassified: x(k) = x(k-1) + z'(k-1). Proof of (4.64): applying this update repeatedly gives

  x(k) = z'(0) + z'(1) + ... + z'(k-1).  (4.64)

Slide 26: Proof (cont.)
Taking the inner product with the solution and using the margin assumption:

  x*^T x(k) = x*^T z'(0) + x*^T z'(1) + ... + x*^T z'(k-1) > kδ.

Slide 27: Proof (cont.)
From the Cauchy-Schwarz inequality, (x*^T x(k))^2 <= ||x*||^2 ||x(k)||^2, so

  ||x(k)||^2 > k^2 δ^2 / ||x*||^2.  (1) (4.66)

Slide 28: Proof (cont.)
Note that

  ||x(k)||^2 = ||x(k-1) + z'(k-1)||^2 = ||x(k-1)||^2 + 2 x(k-1)^T z'(k-1) + ||z'(k-1)||^2.  (4.71)

Since the weights change only on a misclassification, x(k-1)^T z'(k-1) <= 0,  (4.72)

so ||x(k)||^2 <= ||x(k-1)||^2 + ||z'(k-1)||^2.  (4.73)

If Π = max_q ||z'_q||^2, then ||x(k)||^2 <= kΠ.  (2) (4.74)

Slide 29: Proof (cont.)
Proof of (4.72): an update occurs only when z'(k-1) is misclassified. If t = 1, the output was 0, so x(k-1)^T z(k-1) < 0; if t = 0, the output was 1, so x(k-1)^T z(k-1) >= 0, and z' = -z flips the sign. In both cases x(k-1)^T z'(k-1) <= 0.

Slide 30: Proof (cont.)
Proof of (4.74): applying (4.73) repeatedly down to x(0) = 0 gives

  ||x(k)||^2 <= ||z'(k-1)||^2 + ||z'(k-2)||^2 + ... + ||z'(0)||^2 <= kΠ.

Slide 31: Proof (cont.)
We now have both a lower bound (1) and an upper bound (2) on ||x(k)||^2 after k updates.

Slide 32: Proof (cont.)
Combining (1) and (2): k^2 δ^2 / ||x*||^2 < kΠ, or k < Π ||x*||^2 / δ^2. The number of weight updates is therefore bounded, so the perceptron rule converges in a finite number of steps whenever a solution exists.

Slide 33: Perceptron Limitations
The perceptron produces only a linear decision boundary (1w^T p + b = 0), so it cannot solve linearly inseparable problems.
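An empirical sketch of this limitation, using XOR as the classic linearly inseparable case (XOR is not named on the slides, but it is the standard example): no single hard-limit neuron can separate its two classes, so the perceptron rule never reaches an error-free pass.

```python
# Empirical sketch of linear inseparability: XOR cannot be separated by
# one hard-limit neuron, so the perceptron rule never reaches an
# error-free epoch no matter how long it runs.

def hardlim(n):
    return 1 if n >= 0 else 0

xor_patterns = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]
w, b = [0.0, 0.0], 0.0

for epoch in range(100):
    errors = 0
    for p, t in xor_patterns:
        e = t - hardlim(w[0] * p[0] + w[1] * p[1] + b)
        if e != 0:
            errors += 1
            w = [wi + e * pi for wi, pi in zip(w, p)]
            b += e
    if errors == 0:
        break          # never reached: no separating boundary exists

print(errors)          # still nonzero after 100 epochs
```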

Slides 34-41: Examples

Slide 42: Alternative Solution

