Computing the Sensitivity of a Layered Perceptron R.J. Marks II August 31, 2002.


1 Computing the Sensitivity of a Layered Perceptron R.J. Marks II August 31, 2002

2 Formula* *J.N. Hwang, J.J. Choi, S. Oh & R.J. Marks II, IEEE Transactions on Neural Networks, vol. 2, no. 1, 1991, pp. 131-136. [Figure: a layered perceptron with input = row 0 and output = row L; row ℓ has N_ℓ neurons. u_m(ℓ) is the activation and a_m(ℓ) = f(u_m(ℓ)) the state of neuron m in row ℓ. w_ij(ℓ) is the weight from neuron j in row ℓ-1 to neuron i in row ℓ, and W(ℓ) = [w_ij(ℓ)] is the weight matrix. The backward recursion starts from an initial condition at the output row.]

3 The Sigmoid If f(u) = 1/(1 + e^-u), then f'(u) = f(u)[1 - f(u)] = a(1 - a). Kronecker Matrix Products Vectors: for x = [x_m] and b = [b_m], the vector y = x*b = [x_m b_m]. Matrix: for X = [x_1, x_2, …, x_J], the matrix Y = X*b = [x_1*b, x_2*b, …, x_J*b], i.e. each column of X is multiplied elementwise by b. Vector Reciprocals c = 1/b has elements [c_m] = [1/b_m].
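The slide's "*" product and vector reciprocal map directly onto NumPy elementwise operations; broadcasting handles the column-wise matrix case. A minimal sketch (the array values are made-up examples):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
b = np.array([2.0, 4.0, 8.0])

# vector "*" product: y = x*b = [x_m b_m]
y = x * b                          # elementwise -> [2., 8., 24.]

# matrix "*" vector: each column of X multiplied elementwise by b
X = np.column_stack([x, 2.0 * x])  # columns x_1 = x, x_2 = 2x
Y = X * b[:, None]                 # b broadcast down each column

# vector reciprocal: c = 1/b has elements 1/b_m
c = 1.0 / b                        # -> [0.5, 0.25, 0.125]
```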

4 1. Sensitivity: Initialization Neural Network Parameters Number of layers, L. Number of neurons in each row: N_0, N_1, …, N_ℓ, …, N_L. In addition, each row, except the output, has a bias node. Neural network weights {W(ℓ); 1 ≤ ℓ ≤ L} ≡ L matrices, where W(ℓ) = [w_mj(ℓ)] is an (N_ℓ+1) × (N_{ℓ-1}+1) matrix. Input Operating Point Vector, with a_0(0) = bias: a(0) = [a_0(0), a_1(0), a_2(0), …, a_m(0), …, a_{N_0}(0)]^T
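A concrete initialization sketch in NumPy. The 2-4-1 layer sizes, the random weights, and the input x are all made-up examples; for brevity the sketch gives each W(ℓ) only N_ℓ output rows, rather than carrying a separate bias row through every hidden layer as the slide's dimensions do:

```python
import numpy as np

L = 2                 # number of weight layers (hidden + output)
N = [2, 4, 1]         # N_0 inputs, N_1 hidden neurons, N_L outputs (made-up)

rng = np.random.default_rng(42)
# W(l) maps row l-1 to row l; the extra column multiplies the bias node a_0 = 1
weights = [rng.standard_normal((N[l], N[l - 1] + 1)) for l in range(1, L + 1)]

# input operating point, with a_0(0) = 1 serving as the bias node
x = np.array([0.3, -0.5])
a0 = np.concatenate(([1.0], x))
```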

5 2. Sensitivity: The Forward Step For the given operating point, compute the activations and states of all neurons. All operations in the pseudocode are matrix-vector operations.
For ℓ = 0 : L-1
    u(ℓ+1) = W(ℓ+1) a(ℓ)
    a(ℓ+1) = f(u(ℓ+1));  f( ) exposes each element of a vector to the sigmoid nonlinearity
End
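The forward step above can be sketched in NumPy as follows. Bias nodes are dropped for brevity, and the 2-3-1 network sizes and weights are made-up examples:

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def forward(weights, a0):
    """For l = 0..L-1: u(l+1) = W(l+1) a(l), a(l+1) = f(u(l+1)).

    weights[l] plays the role of W(l+1).  Returns the lists of
    activations u and states a for every row of the network.
    """
    a = [np.asarray(a0, dtype=float)]
    u = [None]                    # u(0) does not exist; row 0 is the input
    for W in weights:
        u.append(W @ a[-1])       # u(l+1) = W(l+1) a(l)
        a.append(sigmoid(u[-1]))  # a(l+1) = f(u(l+1)), elementwise
    return u, a

# made-up 2-3-1 example
rng = np.random.default_rng(0)
weights = [rng.standard_normal((3, 2)), rng.standard_normal((1, 3))]
u, a = forward(weights, np.array([0.4, -0.7]))
```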

6 3. Sensitivity: The Backward Step Generate sensitivities from the output to the input, starting with Δ(L) = I = identity matrix. In general, Δ(ℓ) = [Δ_ij(ℓ)].
For ℓ = L-1 : -1 : 0
    Δ(ℓ) = Δ(ℓ+1) [W(ℓ+1) * f'(u(ℓ+1))]
End
If f is a sigmoid, then f'(u(ℓ+1)) = a(ℓ+1)*(ones - a(ℓ+1)), so
For ℓ = L-1 : -1 : 0
    Δ(ℓ) = Δ(ℓ+1) {W(ℓ+1) * [a(ℓ+1) * (ones - a(ℓ+1))]}
End
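A NumPy sketch of the backward step, under the same simplifications (no bias nodes, sigmoid f so f'(u(ℓ+1)) = a(ℓ+1)(1 - a(ℓ+1)), made-up 2-3-1 sizes). The slide's "*" product, which scales row i of W(ℓ+1) by the i-th entry of f', becomes a broadcast:

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def sensitivity(weights, a0):
    """Backward step: Delta(L) = I, then
    Delta(l) = Delta(l+1) [W(l+1) * f'(u(l+1))] for l = L-1 down to 0."""
    # forward step to get all states a(l)
    a = [np.asarray(a0, dtype=float)]
    for W in weights:
        a.append(sigmoid(W @ a[-1]))
    Delta = np.eye(a[-1].size)                          # Delta(L) = I
    for l in range(len(weights) - 1, -1, -1):
        fprime = a[l + 1] * (1.0 - a[l + 1])            # f'(u(l+1)) = a(1 - a)
        Delta = Delta @ (weights[l] * fprime[:, None])  # row-wise "*" product
    return Delta                                        # Delta(0)

rng = np.random.default_rng(0)
weights = [rng.standard_normal((3, 2)), rng.standard_normal((1, 3))]
Delta0 = sensitivity(weights, np.array([0.4, -0.7]))
```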

7 The Final Sensitivity Matrix The ki-th element of the matrix Δ(0) is the partial derivative sensitivity Δ_ki(0) = ∂a_k(L) / ∂a_i(0), i.e. the sensitivity of output neuron k to input neuron i at the given operating point.
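Because Δ_ki(0) is a partial derivative, it can be checked against a central finite difference of the network's input-output map. A self-contained sketch (made-up 2-3-1 network, no bias nodes):

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def net(weights, a0):
    """The forward map a(0) -> a(L) of the (bias-free) perceptron."""
    a = np.asarray(a0, dtype=float)
    for W in weights:
        a = sigmoid(W @ a)
    return a

def sensitivity(weights, a0):
    """Delta(0) via the forward and backward steps."""
    a = [np.asarray(a0, dtype=float)]
    for W in weights:
        a.append(sigmoid(W @ a[-1]))
    Delta = np.eye(a[-1].size)
    for l in range(len(weights) - 1, -1, -1):
        fprime = a[l + 1] * (1.0 - a[l + 1])
        Delta = Delta @ (weights[l] * fprime[:, None])
    return Delta

rng = np.random.default_rng(1)
weights = [rng.standard_normal((3, 2)), rng.standard_normal((1, 3))]
a0 = np.array([0.4, -0.7])
Delta0 = sensitivity(weights, a0)

# column i of Delta(0) should match the numerical derivative d a(L) / d a_i(0)
eps = 1e-6
for i in range(a0.size):
    step = np.zeros_like(a0)
    step[i] = eps
    numeric = (net(weights, a0 + step) - net(weights, a0 - step)) / (2 * eps)
    assert np.allclose(numeric, Delta0[:, i], atol=1e-6)
```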

8 The Final Sensitivity Matrices The percentage sensitivity scales each partial derivative by a_i(0)/a_k(L). A matrix of these values is Δ%(0) = Δ(0) * { [1/a(L)] [a(0)]^T } Or is it: Δ%(0) = Δ(0) * { [a(0)] [1/a(L)]^T } Since the ki-th entry of Δ(0) must be scaled by a_i(0)/a_k(L), the first form is the right one: [1/a(L)][a(0)]^T has ki-th entry a_i(0)/a_k(L) and the matching N_L × N_0 shape, while [a(0)][1/a(L)]^T is N_0 × N_L.
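A shape check on a non-square case makes the choice concrete: Δ(0) is N_L × N_0, and only [1/a(L)][a(0)]^T conforms to it elementwise. A sketch (made-up 2-3-1 network, no bias nodes):

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

rng = np.random.default_rng(2)
weights = [rng.standard_normal((3, 2)), rng.standard_normal((1, 3))]
a0 = np.array([0.4, -0.7])

# forward step for the states, backward step for Delta(0)
a = [a0]
for W in weights:
    a.append(sigmoid(W @ a[-1]))
Delta = np.eye(a[-1].size)
for l in range(len(weights) - 1, -1, -1):
    fprime = a[l + 1] * (1.0 - a[l + 1])
    Delta = Delta @ (weights[l] * fprime[:, None])

# percentage sensitivity: scale the ki-th entry by a_i(0) / a_k(L)
scale = np.outer(1.0 / a[-1], a0)   # [1/a(L)] [a(0)]^T, shape N_L x N_0
Delta_pct = Delta * scale           # elementwise "*" product

# the alternative [a(0)] [1/a(L)]^T is N_0 x N_L: the wrong shape here
alt = np.outer(a0, 1.0 / a[-1])
```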

