Slide 1: Mehran University of Engineering and Technology, Jamshoro
Department of Electronic, Telecommunication and Bio-Medical Engineering
Neural Networks
Mukhtiar Ali Unar

Slide 2: Some Earlier Neuron Models

McCulloch-Pitts Model [1943]:
- It is a binary device, that is, it can be in only one of two possible states.
- Each neuron has a fixed threshold.
- The figure on the next slide shows a diagram of the McCulloch-Pitts neuron. It has excitatory inputs, E, and inhibitory inputs, I. In simple terms, excitatory inputs cause the neuron to become active or fire, and inhibitory inputs prevent the neuron from becoming active. More precisely, if any of the inhibitory inputs is active (often described in binary terms as 1), the output, labeled Y, will be inactive or 0. Alternatively, if all of the inhibitory inputs are 0 and the sum of the excitatory inputs is greater than or equal to the threshold, T, then the output is active or 1.

Slide 3: Mathematically, the McCulloch-Pitts neuron is expressed as

Y = 1 if E_1 + E_2 + ... + E_m >= T and I_1 = I_2 = ... = I_n = 0
Y = 0 otherwise

[Figure: a McCulloch-Pitts neuron with excitatory inputs E_1, E_2, ..., E_m, inhibitory inputs I_1, ..., I_n, threshold T, and output Y.]
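A minimal Python sketch of this rule may help; the function name mcculloch_pitts and the list-based inputs are illustrative choices made here, not part of the slides:

def mcculloch_pitts(excitatory, inhibitory, threshold):
    # Any active inhibitory input vetoes the output outright.
    if any(inhibitory):
        return 0
    # Otherwise fire when the excitatory sum reaches the threshold T.
    return 1 if sum(excitatory) >= threshold else 0

print(mcculloch_pitts([1, 1], [0], threshold=2))  # 1: fires
print(mcculloch_pitts([1, 1], [1], threshold=2))  # 0: inhibited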

Slide 4: Example: OR Gate (T = 1)

Inputs a and b, output Y:

a  b  Y
0  0  0
1  0  1
0  1  1
1  1  1

Slide 5: Example: AND Gate (T = 2)

a  b  Y
0  0  0
1  0  0
0  1  0
1  1  1
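Both truth tables can be checked with the same threshold rule; a small self-contained sketch (the helper name fires is hypothetical):

def fires(inputs, threshold):
    # McCulloch-Pitts rule with no inhibitory inputs present
    return 1 if sum(inputs) >= threshold else 0

for a in (0, 1):
    for b in (0, 1):
        # columns: a, b, OR (T = 1), AND (T = 2)
        print(a, b, fires([a, b], 1), fires([a, b], 2))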

Slide 6: ADALINE: It is a short name for Adaptive Linear Neuron or, later, Adaptive Linear Element. It was developed by Widrow and Hoff in 1960. Each ADALINE has several inputs, which can take the value +1 or -1. Each input has a weight associated with it which gives some indication of the importance of that input. The weights can be positive or negative and have real values. The weighted sum is calculated as

sum = w_1 x_1 + w_2 x_2 + ... + w_p x_p

where w_i is the weight of input x_i.

Slide 7: ADALINE: The value of the sum is transformed into the value at the output, y, via a non-linear output function. This function gives an output of +1 if the weighted sum is greater than 0 and -1 if the sum is less than or equal to 0. This sort of non-linearity is called a hard-limiter, which is defined as:

y = +1 if sum > 0
y = -1 if sum <= 0
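A sketch of this forward pass, combining the weighted sum from the previous slide with the hard-limiter; the function and variable names are illustrative choices:

def adaline_output(weights, inputs, bias=0.0):
    s = bias + sum(w * x for w, x in zip(weights, inputs))  # weighted sum
    return 1 if s > 0 else -1                               # hard-limiter

print(adaline_output([0.5, -0.3], [1, -1]))  # 0.5 + 0.3 = 0.8 > 0, so +1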

Slide 8: ADALINE: The Least Mean Square (LMS) algorithm is used to update the weights and bias of the ADALINE. For a complete derivation of this algorithm, see my DSP notes. A summary of the algorithm is given below:

1. Initialization: set w_k[1] = 0 for k = 1, 2, ..., p.
2. For n = 1, 2, ..., compute, for k = 1, 2, ..., p:
   y[n] = w_1[n] x_1[n] + ... + w_p[n] x_p[n]
   e[n] = d[n] - y[n]
   w_k[n+1] = w_k[n] + μ x_k[n] e[n]
   where μ is a positive step-size parameter.
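A compact sketch of these steps as a training loop; the step size μ = 0.1, the epoch count, and the toy data are assumptions made for this example:

def lms_train(samples, p, mu=0.1, epochs=10):
    w = [0.0] * p                                     # step 1: w_k[1] = 0
    for _ in range(epochs):
        for x, d in samples:                          # step 2: n = 1, 2, ...
            y = sum(wk * xk for wk, xk in zip(w, x))  # linear output y[n]
            e = d - y                                 # error e[n] = d[n] - y[n]
            w = [wk + mu * xk * e for wk, xk in zip(w, x)]  # weight update
    return w

# Toy target y = x1 - x2 on bipolar inputs; the weights approach [1, -1].
data = [([1, 1], 0), ([1, -1], 2), ([-1, 1], -2), ([-1, -1], 0)]
print(lms_train(data, p=2))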

Slide 9: [Figure: block diagram of the ADALINE. Inputs x_1[n], x_2[n], ..., x_p[n], each multiplied by its weight w_1[n], w_2[n], ..., w_p[n], together with a bias input of 1, feed a summing junction that produces the output y; y is subtracted from the desired response d[n] to form the error e[n], which drives the weight updates.]

Slide 10: Perceptron: The perceptron is the simplest form of a neural network, used for the classification of a special type of patterns said to be linearly separable. The perceptron consists of a single neuron with adjustable synaptic weights and threshold. The algorithm used to adjust the free parameters of this neural network first appeared in a learning procedure developed by Rosenblatt [1958, 1962].

Slide 11: Perceptron: Rosenblatt proved that if the patterns (vectors) used to train the perceptron are drawn from two linearly separable classes, then the perceptron algorithm converges and positions the decision surface in the form of a hyperplane between the two classes. The proof of convergence of the algorithm is known as the Perceptron Convergence Theorem.

Slide 12: The Perceptron Convergence Theorem

Variables and Parameters:
x[n] = (p+1) × 1 input vector
w[n] = (p+1) × 1 weight vector
b[n] = bias
y[n] = actual response
d[n] = desired response
η = learning-rate parameter, a positive constant less than unity

Slide 13:
Step 1: Initialization
Set w(0) = 0. Then perform the following computations for time n = 1, 2, ....

Step 2: Activation
At time n, activate the perceptron by applying the input vector x[n] and the desired response d[n].

Step 3: Computation of Actual Response
Compute the actual response of the perceptron:
y[n] = sgn(w^T[n] x[n])
where sgn(.) is the signum function.

Slide 14:
Step 4: Adaptation of Weight Vector
Update the weight vector of the perceptron:
w[n+1] = w[n] + η[d[n] - y[n]]x[n]
where
d[n] = +1 if x[n] belongs to class C1
d[n] = -1 if x[n] belongs to class C2

Step 5: Increment n by one unit and go back to Step 2.
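A sketch of steps 1 to 5 as a Python loop; the learning rate η = 0.5, the epoch cap, the sgn(0) = +1 convention, and the toy linearly separable data are assumptions of this example, not from the slides:

def sgn(v):
    # signum; the value at exactly 0 is a convention chosen here
    return 1 if v >= 0 else -1

def train_perceptron(samples, p, eta=0.5, epochs=20):
    w = [0.0] * (p + 1)                      # step 1: w(0) = 0 (w[0] is the bias)
    for _ in range(epochs):
        for x, d in samples:                 # step 2: apply x[n] and d[n]
            xa = [1.0] + list(x)             # augmented input carrying the bias
            y = sgn(sum(wi * xi for wi, xi in zip(w, xa)))          # step 3
            w = [wi + eta * (d - y) * xi for wi, xi in zip(w, xa)]  # step 4
    return w                                 # step 5 is the loop itself

# Class C1 (d = +1) versus class C2 (d = -1), linearly separable
data = [([2, 1], 1), ([1, 2], 1), ([-1, -2], -1), ([-2, -1], -1)]
print(train_perceptron(data, p=2))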

