Neural Network Architectures and Learning Algorithms
Author: Bogdan M. Wilamowski
Source: IEEE Industrial Electronics Magazine
Date: 2011/11/22
Presenter: 林哲緯

Outline
- Neural Architectures
- Parity-N Problem
- Suitable Architectures
- Use Minimum Network Size
- Conclusion

Neural Architectures
(Figures from Lecture Notes for E. Alpaydın, Introduction to Machine Learning 2e, © 2010 The MIT Press, V1.0)

Error backpropagation (EBP) algorithm – multilayer perceptron (MLP)
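As a minimal sketch of what EBP training of an MLP looks like, the following trains a small 2-2-1 network on XOR with bipolar (+1/−1) targets by plain gradient descent. The topology, learning rate, and iteration count are illustrative choices, not taken from the article:

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1]], dtype=float)
T = np.array([[-1], [1], [1], [-1]], dtype=float)  # bipolar XOR targets

W1 = rng.standard_normal((3, 2)) * 0.5   # 2 inputs + bias -> 2 hidden
W2 = rng.standard_normal((3, 1)) * 0.5   # 2 hidden + bias -> 1 output

def forward(X, W1, W2):
    H = np.tanh(np.hstack([X, np.ones((len(X), 1))]) @ W1)
    Y = np.tanh(np.hstack([H, np.ones((len(X), 1))]) @ W2)
    return H, Y

losses = []
lr = 0.05
for _ in range(2000):
    H, Y = forward(X, W1, W2)
    E = Y - T
    losses.append(float((E ** 2).mean()))
    dY = E * (1 - Y ** 2)               # error back-propagated through tanh
    dH = (dY @ W2[:2].T) * (1 - H ** 2) # propagated further to the hidden layer
    Hb = np.hstack([H, np.ones((len(X), 1))])
    Xb = np.hstack([X, np.ones((len(X), 1))])
    W2 -= lr * Hb.T @ dY                # first-order gradient updates
    W1 -= lr * Xb.T @ dH
```

This first-order scheme is exactly what makes EBP slow compared with the second-order methods discussed later in the deck.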

Multilayer perceptron (MLP)
MLP-type architecture 3-3-4-1 (without connections across layers). Figure from Wilamowski, "Neural network architectures and learning algorithms".
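A strictly layered 3-3-4-1 MLP can be sketched as below; the bipolar tanh activation and the bias convention are assumptions for illustration, not details from the article:

```python
import numpy as np

def mlp_forward(x, weights):
    """Forward pass through a strictly layered MLP (no cross-layer links).

    `weights` holds one (fan_in + 1, fan_out) array per layer; the extra
    row is the bias weight, fed by a constant input of 1.
    """
    a = np.asarray(x, dtype=float)
    for W in weights:
        a = np.tanh(np.append(a, 1.0) @ W)  # append bias input, apply layer
    return a

# 3-3-4-1 topology: 3 inputs, hidden layers of 3 and 4 neurons, 1 output.
rng = np.random.default_rng(0)
shapes = [(4, 3), (4, 4), (5, 1)]           # (fan_in + bias, fan_out)
weights = [rng.standard_normal(s) for s in shapes]

y = mlp_forward([1.0, -1.0, 1.0], weights)
n_weights = sum(W.size for W in weights)    # 12 + 16 + 5 = 33 weights
```

Counting `n_weights` this way is the same bookkeeping the parity-N slides use to compare architectures.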

Neuron-by-neuron (NBN) algorithm – bridged multilayer perceptron (BMLP) – fully connected cascade (FCC)
Arbitrarily connected network.

Neuron-by-neuron (NBN) algorithm
Levenberg–Marquardt (LM) algorithm:
- an improved method for nonlinear least-squares problems
- Jacobian matrix obtained either by forward & backward computation or by forward-only computation
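The LM update underlying such second-order training solves (JᵀJ + μI)δ = Jᵀr for the parameter step, where r is the residual vector and J the Jacobian of the model outputs with respect to the parameters. A minimal sketch on a hypothetical one-parameter model f(x; a) = exp(a·x), with the damping factor μ held fixed for brevity (real LM adapts it each step):

```python
import numpy as np

def lm_fit(x, y, a0=0.0, mu=1e-3, iters=50):
    """Fit f(x; a) = exp(a*x) by damped Gauss-Newton (LM) steps."""
    a = np.array([a0])
    for _ in range(iters):
        f = np.exp(a[0] * x)
        r = y - f                        # residual vector
        J = (x * f).reshape(-1, 1)       # d f / d a, one column per parameter
        H = J.T @ J + mu * np.eye(1)     # damped Hessian approximation
        delta = np.linalg.solve(H, J.T @ r)
        a = a + delta
    return a[0]

x = np.linspace(0.0, 1.0, 20)
a_hat = lm_fit(x, np.exp(0.5 * x))       # recovers a = 0.5
```

The nw×nw matrix that must be inverted here is what limits LM/NBN to moderate network sizes, as the conclusions slide notes.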

Bridged multilayer perceptron (BMLP)
BMLP architecture 3=3=4=1 (with connections across layers, marked by dotted lines).

Fully connected cascade (FCC)
Bipolar neural network for the parity-8 problem in an FCC architecture.


Parity-8 problem
MLP: 8×9 + 9 = 81 weights
BMLP: 4×9 + 8 + 4 + 1 = 49 weights

Parity-8 problem
FCC: 9 + 10 + 11 + 12 = 42 weights

Parity-17 problem
- An MLP architecture needs 18 neurons
- A BMLP architecture with connections across hidden layers needs 9 neurons
- An FCC architecture needs only 5 neurons

Parity-N problem
Neuron (nn) and weight (nw) counts for MLP, BMLP, and FCC architectures.
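The table's counts can be sketched as closed-form functions of N. The formulas below are reverse-engineered to be consistent with the parity-8 and parity-17 numbers on the preceding slides (one hidden layer for MLP/BMLP, floor(N/2) hidden neurons for BMLP, smallest cascade with 2^nn − 1 ≥ N for FCC); they are an illustration, not the article's derivation:

```python
import math

def mlp_counts(n):
    """One hidden layer of n neurons plus an output neuron."""
    nn = n + 1
    nw = n * (n + 1) + (n + 1)           # hidden weights + output weights
    return nn, nw

def bmlp_counts(n):
    """floor(n/2) hidden neurons; the output also sees the raw inputs."""
    h = n // 2
    nn = h + 1
    nw = h * (n + 1) + (n + h + 1)       # hidden weights + bridged output
    return nn, nw

def fcc_counts(n):
    """Smallest cascade satisfying 2**nn - 1 >= n."""
    nn = math.ceil(math.log2(n + 1))
    nw = sum(n + i + 1 for i in range(nn))
    return nn, nw
```

For parity-8 these give (9, 81), (5, 49), and (4, 42) respectively, matching the weight tallies above, and for parity-17 they give 18, 9, and 5 neurons as stated on the previous slide.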


Suitable architectures
For a limited number of neurons, FCC neural networks are the most powerful architectures, but this does not mean that they are the only suitable ones.

Suitable architectures
With the two weights marked by red dotted lines added, the signal has to be propagated through fewer layers.


Use minimum network size
A network of minimum size is more likely to produce close-to-optimum answers for patterns that were never used in training, i.e., it has better generalization abilities.

Case study
TSK fuzzy controller: (a) required control surface; (b) 8×6 = 48 defuzzification rules.
TSK fuzzy controller: (a) trapezoidal membership functions; (b) triangular membership functions.

Case study
(a) 3 neurons in cascade (12 weights), training error = 0.21049
(b) 4 neurons in cascade (18 weights), training error = 0.049061
(c) 5 neurons in cascade (25 weights), training error = 0.023973
(d) 8 neurons in cascade (52 weights), training error = 1.118E-005

Time complexity
The NBN algorithm can train neural networks about 1,000 times faster than the EBP algorithm.
(a) EBP algorithm: average solution time 4.2 s, average 4,188.3 iterations
(b) NBN algorithm: average solution time 2.4 ms, average 5.73 iterations

Two-spiral problem
NBN algorithm using an FCC architecture: 244 iterations, 0.913 s
EBP algorithm using an FCC architecture: 308,225 iterations, 342.7 s


Conclusions
- FCC and BMLP architectures are not only more powerful but also easier to train.
- Use networks with a minimum number of neurons.
- The NBN algorithm has to invert an nw×nw matrix, which currently limits networks to about 500 weights.
