1
CS344: Principles of Artificial Intelligence. Pushpak Bhattacharyya, CSE Dept., IIT Bombay. Lectures 11 and 12: Perceptron Training. 30th and 31st January, 2012.

2
The Perceptron Model. A perceptron is a computing element with input lines having associated weights and a cell having a threshold value. The perceptron model is motivated by the biological neuron. [Diagram: inputs x1, ..., xn-1, xn with weights w1, ..., wn-1, wn feeding a cell with threshold θ and output y.]

3
Step function / Threshold function: y = 1 for Σ wi xi >= θ; y = 0 otherwise. [Plot: y against Σ wi xi, stepping from 0 to 1 at θ.]
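The step-function output above can be sketched in a few lines; `w`, `x`, and `theta` below correspond to the weights, inputs, and threshold on the slide.

```python
# Minimal sketch of the perceptron's step-function output.

def perceptron_output(w, x, theta):
    """Return 1 if the weighted input sum reaches the threshold, else 0."""
    net = sum(wi * xi for wi, xi in zip(w, x))
    return 1 if net >= theta else 0

print(perceptron_output([1, 1], [1, 1], 1.5))  # -> 1  (sum 2 >= 1.5)
print(perceptron_output([1, 1], [1, 0], 1.5))  # -> 0  (sum 1 <  1.5)
```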

4
Features of Perceptron. Input-output behavior is discontinuous and the derivative does not exist at Σ wi xi = θ. Σ wi xi - θ is the net input, denoted net. The perceptron is referred to as a linear threshold element: linearity because x appears with power 1; y = f(net), and the relation between y and net is non-linear.

5
Computation of Boolean functions: AND of 2 inputs.

x1  x2 | y
 0   0 | 0
 0   1 | 0
 1   0 | 0
 1   1 | 1

The parameter values (weights and threshold) need to be found. [Diagram: x1, x2 with weights w1, w2 into a cell with threshold θ and output y.]

6
Computing parameter values:
w1·0 + w2·0 = 0 <= θ, since y = 0
w1·0 + w2·1 <= θ, i.e. w2 <= θ, since y = 0
w1·1 + w2·0 <= θ, i.e. w1 <= θ, since y = 0
w1·1 + w2·1 > θ, i.e. w1 + w2 > θ, since y = 1
w1 = w2 = θ = 0.5 satisfies these inequalities and gives parameters that compute the AND function.
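A quick check (a sketch) that w1 = w2 = θ = 0.5 computes AND, using the strict convention y = 1 iff w1·x1 + w2·x2 > θ, which is the convention the inequalities above assume.

```python
# Verify the AND parameters found on the slide against all four inputs.

def and_perceptron(x1, x2, w1=0.5, w2=0.5, theta=0.5):
    return 1 if w1 * x1 + w2 * x2 > theta else 0

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, and_perceptron(x1, x2))  # matches the AND truth table
```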

7
Other Boolean functions. OR can be computed using w1 = w2 = 1 and θ = 0.5. The XOR function gives rise to the following inequalities:
w1·0 + w2·0 = 0 <= θ, since y = 0
w1·0 + w2·1 > θ, i.e. w2 > θ, since y = 1
w1·1 + w2·0 > θ, i.e. w1 > θ, since y = 1
w1·1 + w2·1 <= θ, i.e. w1 + w2 <= θ, since y = 0
No set of parameter values satisfies these inequalities: adding the second and third gives w1 + w2 > 2θ, while the fourth requires w1 + w2 <= θ, forcing θ < 0 and contradicting the first.

8
Threshold functions.

n | #Boolean functions (2^(2^n)) | #Threshold functions
1 |        4                     |    4
2 |       16                     |   14
3 |      256                     |  128
4 |      64K                     | 1008

Functions computable by perceptrons are the threshold functions. #TF becomes negligibly small compared to #BF for larger values of n. For n = 2, all functions except XOR and XNOR are computable.
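The n = 2 row of the table can be confirmed by brute force: enumerate all 16 two-input Boolean functions and search a small (hypothetical) grid of weights and thresholds for each. This is a sketch, not an exact count procedure, but the grid is fine enough to find all 14 threshold functions.

```python
# Brute-force check that 14 of the 16 two-input Boolean functions
# are threshold functions (all except XOR and XNOR).
from itertools import product

inputs = list(product((0, 1), repeat=2))   # (x1, x2) in fixed order
grid = [i / 2 for i in range(-8, 9)]       # candidate weights/thresholds

def threshold_computable(truth):
    """truth: tuple of 4 outputs, aligned with `inputs`."""
    for w1, w2, theta in product(grid, grid, grid):
        if all((1 if w1 * x1 + w2 * x2 >= theta else 0) == y
               for (x1, x2), y in zip(inputs, truth)):
            return True
    return False

count = sum(threshold_computable(t) for t in product((0, 1), repeat=4))
print(count)  # -> 14
```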


11
Perceptron Training Algorithm (PTA). Preprocessing:
1. The computation law is modified to
y = 1 if Σ wi xi > θ
y = 0 if Σ wi xi < θ
[Diagram: the unit's threshold test is changed from Σ wi xi >= θ to the strict Σ wi xi > θ.]

12
PTA – preprocessing cont…
2. Absorb θ as a weight: add a constant input x0 = -1 with weight w0 = θ, so the test Σ wi xi > θ becomes Σ wi xi > 0 (sum now running from i = 0 to n).
3. Negate all the zero-class examples.
[Diagram: inputs x1, ..., xn with weights w1, ..., wn; the augmented unit adds x0 = -1 with w0 = θ and compares against 0.]

13
Example to demonstrate preprocessing: OR perceptron.
1-class: <0,1>, <1,0>, <1,1>; 0-class: <0,0>.
Augmented x vectors: 1-class: <-1,0,1>, <-1,1,0>, <-1,1,1>; 0-class: <-1,0,0>.
Negate 0-class: <1,0,0>.

14
Example to demonstrate preprocessing cont… Now the vectors are:

      x0  x1  x2
X1:   -1   0   1
X2:   -1   1   0
X3:   -1   1   1
X4:    1   0   0
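The two preprocessing steps can be sketched as follows; the helper name `preprocess` is hypothetical, but the transformation (augment with x0 = -1, then negate 0-class vectors) is exactly the one described above.

```python
# Augment each example with x0 = -1 to absorb the threshold, then negate
# the 0-class examples so every vector must satisfy w . v > 0.

def preprocess(examples):
    """examples: list of (input_tuple, label) pairs; returns vectors."""
    vectors = []
    for x, y in examples:
        v = (-1,) + tuple(x)           # absorb theta: prepend x0 = -1
        if y == 0:
            v = tuple(-c for c in v)   # negate 0-class examples
        vectors.append(v)
    return vectors

# The OR function examples from the slide.
or_examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
print(preprocess(or_examples))
# -> [(1, 0, 0), (-1, 0, 1), (-1, 1, 0), (-1, 1, 1)]
```

These are the same four vectors X1–X4 as in the table, up to ordering.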

15
Perceptron Training Algorithm.
1. Start with a random value of w, e.g. <0, 0, 0>.
2. Test w · xi > 0. If the test succeeds for every i = 1, 2, …, n, return w.
3. Otherwise modify w: w_next = w_prev + x_fail, and go to step 2.
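The three steps above can be sketched directly in code. This assumes the vectors have already been preprocessed (augmented and 0-class negated) and that the data are linearly separable, so the loop terminates; `max_iters` is a safety cap, not part of the algorithm.

```python
# Sketch of the Perceptron Training Algorithm: repeatedly find a vector
# failing w . v > 0 and add it to w.

def pta(vectors, w=None, max_iters=10000):
    w = list(w) if w else [0] * len(vectors[0])
    for _ in range(max_iters):
        failed = next((v for v in vectors
                       if sum(wi * vi for wi, vi in zip(w, v)) <= 0), None)
        if failed is None:
            return w                                  # w . v > 0 for all v
        w = [wi + vi for wi, vi in zip(w, failed)]    # w_next = w_prev + x_fail
    raise RuntimeError("did not converge; data may not be linearly separable")

# OR vectors from the preprocessing slide, in (x0, x1, x2) order.
or_vectors = [(-1, 0, 1), (-1, 1, 0), (-1, 1, 1), (1, 0, 0)]
w = pta(or_vectors)
print(w)   # w[0] is theta; every or_vector now has positive dot product with w
```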

16
PTA on NAND. NAND truth table:

x1  x2 | y
 0   0 | 1
 0   1 | 1
 1   0 | 1
 1   1 | 0

The perceptron with weights W2, W1 and threshold θ is converted to one with weights W2, W1, W0 = θ and the extra input X0 = -1.

17
Preprocessing NAND. Augmented vectors (x2 x1 x0 | y), then the 0-class negated:

x2  x1  x0 | y      After negating the 0-class:
 0   0  -1 | 1      V0: <0, 0, -1>
 0   1  -1 | 1      V1: <0, 1, -1>
 1   0  -1 | 1      V2: <1, 0, -1>
 1   1  -1 | 0      V3: <-1, -1, 1>

Vectors for which W = <W2, W1, W0> has to be found such that W · Vi > 0 for all i.

18
PTA algorithm steps. Initialize W and keep adding the failed vectors until W · Vi > 0 holds for every Vi.
Step 0: W = <0, 0, 0>
W1 = W + V0 = <0, 0, -1>    {V0 fails: W · V0 = 0}
W2 = W1 + V3 = <-1, -1, 0>  {V3 fails: W1 · V3 = -1}
W3 = W2 + V0 = <-1, -1, -1> {V0 fails: W2 · V0 = 0}
W4 = W3 + V1 = <-1, 0, -2>  {V1 fails: W3 · V1 = 0}

19
Trying convergence.
W5 = W4 + V3   {V3 fails}
W6 = W5 + V1   {V1 fails}
W7 = W6 + V0   {V0 fails}
W8 = W7 + V3   {V3 fails}
W9 = W8 + V2   {V2 fails}

20
Trying convergence cont…
W10 = W9 + V3   {V3 fails}
W11 = W10 + V1  {V1 fails}
W12 = W11 + V3  {V3 fails}
W13 = W12 + V1  {V1 fails}
W14 = W13 + V2  {V2 fails}

21
W15 = W14 + V3  {V3 fails}
W16 = W15 + V2  {V2 fails}
W17 = W16 + V3  {V3 fails}
W18 = W17 + V1  {V1 fails}
Finally W2 = -3, W1 = -2, W0 = θ = -4 succeeds for all vectors.
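The final weights can be verified against the preprocessed vectors: with W2 = -3, W1 = -2, W0 = θ = -4, every Vi should give W · Vi > 0. A quick check:

```python
# Verify the final NAND weights from the slide on the vectors V0..V3.

W = (-3, -2, -4)                                        # (W2, W1, W0)
V = [(0, 0, -1), (0, 1, -1), (1, 0, -1), (-1, -1, 1)]   # V0..V3

for v in V:
    dot = sum(wi * vi for wi, vi in zip(W, v))
    print(v, dot, dot > 0)   # dots are 4, 2, 1, 1 -- all positive
```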

22
PTA convergence

23
Statement of Convergence of PTA. Statement: Whatever the initial choice of weights and whatever vector is chosen for testing, PTA converges if the vectors come from a linearly separable function.
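A standard way to see why this holds (a sketch in the Novikoff style, not necessarily the lecture's own proof) bounds the number of updates. Assume w starts at 0, some unit vector w* separates the preprocessed vectors with margin δ > 0 (w* · vi >= δ for all i), and R = max_i ||vi||:

```latex
% Each update adds a failed vector; two bounds on w_k force k to be finite.
\begin{align*}
  w_k &= w_{k-1} + v_{\text{fail}}
    && \text{(one PTA update)} \\
  w^* \cdot w_k &\ge w^* \cdot w_{k-1} + \delta
    \;\Rightarrow\; w^* \cdot w_k \ge k\,\delta
    && \text{(margin grows linearly)} \\
  \|w_k\|^2 &\le \|w_{k-1}\|^2 + \|v_{\text{fail}}\|^2 \le k\,R^2
    && (w_{k-1} \cdot v_{\text{fail}} \le 0 \text{ since it failed}) \\
  k\,\delta &\le w^* \cdot w_k \le \|w_k\| \le \sqrt{k}\,R
    \;\Rightarrow\; k \le R^2/\delta^2
    && \text{(at most } R^2/\delta^2 \text{ updates)}
\end{align*}
```

Linear separability is exactly what guarantees such a w* and δ exist, so the number of weight updates is finite regardless of the starting point or the order in which failing vectors are chosen.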
