
1 Computing Gradient Vector and Jacobian Matrix in Arbitrarily Connected Neural Networks
Authors: Bogdan M. Wilamowski, Fellow, IEEE; Nicholas J. Cotton; Okyay Kaynak, Fellow, IEEE; Günhan Dündar
Source: IEEE Industrial Electronics Magazine
Date: 2012/3/28
Presenter: 林哲緯

2 Outline
– Numerical Analysis Methods
– Neural Network Architectures
– NBN Algorithm

3 Minimization problem: Newton's method
Update rule: x_{k+1} = x_k − H_k⁻¹ g_k, where g_k is the gradient and H_k the Hessian of the cost function at step k.
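To make the update concrete, here is a minimal Newton's-method sketch in Python/NumPy. The quadratic test function and its hand-coded gradient and Hessian are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def newton_minimize(grad, hess, x0, tol=1e-8, max_iter=50):
    """Newton's method for minimization: x <- x - H(x)^-1 g(x)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        x = x - np.linalg.solve(hess(x), g)  # solve H d = g rather than inverting H
    return x

# Toy quadratic: f(x, y) = (x - 1)^2 + 2 * (y + 3)^2
grad = lambda v: np.array([2 * (v[0] - 1), 4 * (v[1] + 3)])
hess = lambda v: np.array([[2.0, 0.0], [0.0, 4.0]])
print(newton_minimize(grad, hess, [0.0, 0.0]))  # -> [ 1. -3.] in one step
```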

4 Minimization problem: steepest descent method
Update rule: x_{k+1} = x_k − α g_k, where α is the learning constant.
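For comparison, a steepest descent sketch on the same toy quadratic; the step size alpha = 0.1 is an arbitrary illustrative choice.

```python
import numpy as np

def steepest_descent(grad, x0, alpha=0.1, tol=1e-8, max_iter=10000):
    """Steepest descent: x <- x - alpha * g(x), first-order information only."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        x = x - alpha * g
    return x

# Same toy quadratic as above; needs many more iterations than Newton's method.
grad = lambda v: np.array([2 * (v[0] - 1), 4 * (v[1] + 3)])
print(steepest_descent(grad, [0.0, 0.0]))  # -> approx [ 1. -3.]
```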

5 Least-squares problem: Gauss–Newton algorithm
Update rule: x_{k+1} = x_k − (J_kᵀ J_k)⁻¹ J_kᵀ e_k, where J is the Jacobian of the residuals and e is the residual (error) vector.
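A Gauss–Newton sketch under stated assumptions: the exponential model, the toy data, and the starting point below are invented for illustration.

```python
import numpy as np

def gauss_newton(residual, jacobian, x0, tol=1e-10, max_iter=50):
    """Gauss-Newton: x <- x - (J^T J)^-1 J^T e; no second derivatives needed."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        e = residual(x)
        J = jacobian(x)
        step = np.linalg.solve(J.T @ J, J.T @ e)
        x = x - step
        if np.linalg.norm(step) < tol:
            break
    return x

# Hypothetical example: fit y = a * exp(b * t) to noise-free toy data.
t = np.linspace(0, 1, 20)
y = 2.0 * np.exp(1.5 * t)
residual = lambda p: p[0] * np.exp(p[1] * t) - y
jacobian = lambda p: np.column_stack([np.exp(p[1] * t),
                                      p[0] * t * np.exp(p[1] * t)])
print(gauss_newton(residual, jacobian, [1.0, 1.0]))  # -> approx [2.0, 1.5]
```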

6 Levenberg–Marquardt algorithm
– Combines the advantages of the Gauss–Newton algorithm and the steepest descent method
– Far from the minimum, it behaves like the steepest descent method
– Close to the minimum, it behaves like the Gauss–Newton (Newton-like) algorithm
– It finds a local minimum, not the global minimum
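A minimal sketch of this blending behavior: the damping parameter μ is decreased when a step reduces the error (toward Gauss–Newton) and increased when it does not (toward steepest descent). The factor-of-10 schedule is a common convention, not necessarily the paper's exact rule; `residual` and `jacobian` can be the same callables as in the Gauss–Newton sketch above.

```python
import numpy as np

def levenberg_marquardt(residual, jacobian, x0, mu=1e-3, max_iter=100):
    """LM: x <- x - (J^T J + mu * I)^-1 J^T e, adapting mu each iteration.
    Large mu ~ steepest descent; small mu ~ Gauss-Newton."""
    x = np.asarray(x0, dtype=float)
    e = residual(x)
    cost = e @ e
    for _ in range(max_iter):
        J = jacobian(x)
        A = J.T @ J + mu * np.eye(len(x))
        x_new = x - np.linalg.solve(A, J.T @ e)
        e_new = residual(x_new)
        cost_new = e_new @ e_new
        if cost_new < cost:   # step accepted: trust the quadratic model more
            x, e, cost, mu = x_new, e_new, cost_new, mu / 10
        else:                 # step rejected: fall back toward gradient descent
            mu *= 10
    return x
```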

7 Levenberg–Marquardt algorithm
Advantages
– Only first-order derivatives are required; the quasi-Hessian JᵀJ is built from linear, first-order information
Disadvantages
– The matrix (JᵀJ + μI) must be inverted (or a linear system solved) at every iteration, which becomes costly for networks with many weights

8 Outline
– Numerical Analysis Methods
– Neural Network Architectures
– NBN Algorithm

9 Weight updating rule
First-order algorithm: w_{k+1} = w_k − α g_k
Second-order algorithm: w_{k+1} = w_k − (J_kᵀ J_k + μI)⁻¹ J_kᵀ e_k
α: learning constant; g: gradient vector; J: Jacobian matrix; μ: learning parameter; I: identity matrix; e: error vector
(The slide illustrates MLP, ACN, and FCN architectures.)
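The two rules written out as plain NumPy helpers; the function names are illustrative, not from the paper.

```python
import numpy as np

def first_order_update(w, g, alpha):
    """First-order (gradient-descent style) rule: w <- w - alpha * g."""
    return w - alpha * g

def second_order_update(w, J, e, mu):
    """Second-order (LM-style) rule: w <- w - (J^T J + mu * I)^-1 J^T e."""
    A = J.T @ J + mu * np.eye(len(w))
    return w - np.linalg.solve(A, J.T @ e)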

10 Forward & backward computation
Forward computation order: 12345, 21345, 12435, or 21435
Backward computation order: 54321, 54312, 53421, or 53412
(Each backward order is a valid forward order reversed; neurons may be visited in any order that respects the network's connections.)
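A small sketch of why several orders are valid. The connectivity below is a hypothetical 5-neuron network chosen to be consistent with the slide's lists (neurons 1 and 2 read the external input, 3 and 4 read neurons 1 and 2, neuron 5 reads 3 and 4); any visiting order that respects these connections produces the same output.

```python
import numpy as np

# feeds[n] lists the neurons whose outputs feed neuron n (hypothetical ACN).
feeds = {1: [], 2: [], 3: [1, 2], 4: [1, 2], 5: [3, 4]}
rng = np.random.default_rng(0)
w = {n: rng.standard_normal(len(feeds[n]) + 2) for n in feeds}  # bias, input, fan-in

def forward(order, x):
    """Visit neurons in 'order'; any order respecting 'feeds' gives the same result."""
    out = {}
    for n in order:
        z = w[n][0] + w[n][1] * x                       # bias and external input
        z += sum(wi * out[s] for wi, s in zip(w[n][2:], feeds[n]))
        out[n] = np.tanh(z)
    return out[5]

print(forward([1, 2, 3, 4, 5], 0.7))
print(forward([2, 1, 4, 3, 5], 0.7))  # same value: 21435 is also a valid order
```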

11 Jacobian matrix
Rows: patterns (inputs) × outputs; columns: weights
p = number of patterns; no = number of outputs
Example: with p = 2 patterns, no = 1 output, and 8 weights:
rows = 2 × 1 = 2, columns = 8, so the Jacobian size is 2 × 8
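A finite-difference sketch that reproduces the 2 × 8 size from the slide: one row per (pattern, output) pair, one column per weight. The small 2-input, 2-hidden, 1-output network with 8 weights is hypothetical.

```python
import numpy as np

def net_output(w, x):
    """Hypothetical 8-weight network: w[0:3] -> hidden 1 (bias, x1, x2),
    w[3:6] -> hidden 2, w[6:8] -> linear output layer."""
    h1 = np.tanh(w[0] + w[1] * x[0] + w[2] * x[1])
    h2 = np.tanh(w[3] + w[4] * x[0] + w[5] * x[1])
    return w[6] * h1 + w[7] * h2

def numerical_jacobian(w, patterns, eps=1e-6):
    """Central differences: J[r, c] = d(output for pattern r) / d(weight c)."""
    J = np.zeros((len(patterns), len(w)))
    for r, x in enumerate(patterns):
        for c in range(len(w)):
            wp, wm = w.copy(), w.copy()
            wp[c] += eps
            wm[c] -= eps
            J[r, c] = (net_output(wp, x) - net_output(wm, x)) / (2 * eps)
    return J

w = np.random.default_rng(1).standard_normal(8)
patterns = [np.array([0.0, 1.0]), np.array([1.0, 0.0])]  # p = 2, no = 1
print(numerical_jacobian(w, patterns).shape)  # -> (2, 8)
```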

12 Jacobian matrix (structure shown as a figure on the slide)

13 Outline
– Numerical Analysis Methods
– Neural Network Architectures
– NBN Algorithm

14 Direct Computation of Quasi-Hessian Matrix and Gradient Vector
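A minimal sketch of the idea named on the slide: accumulate the quasi-Hessian Q = Σ j_pᵀ j_p and the gradient g = Σ j_pᵀ e_p one pattern at a time, so the full P × N Jacobian never has to be stored. The function and variable names are illustrative, not the paper's API.

```python
import numpy as np

def accumulate_quasi_hessian(jac_rows, errors, n_weights):
    """Build Q and g pattern by pattern from single Jacobian rows j_p
    and scalar errors e_p, without forming the full Jacobian."""
    Q = np.zeros((n_weights, n_weights))
    g = np.zeros(n_weights)
    for j_p, e_p in zip(jac_rows, errors):
        Q += np.outer(j_p, j_p)   # j_p^T j_p contribution
        g += j_p * e_p            # j_p^T e_p contribution
    return Q, g

# The LM weight update then uses the accumulated quantities:
# dw = np.linalg.solve(Q + mu * np.eye(n_weights), g)
```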

15 Conclusion
– The memory required to compute the quasi-Hessian matrix and gradient vector is reduced by a factor of (P × M)
– The method can be applied to arbitrarily connected neural networks
– Two procedures are available:
  – Backpropagation process (single output)
  – Without backpropagation process (multiple outputs)

