
1 Supervised Learning: Linear Perceptron NN

2 Distinction Between Approximation-Based vs. Decision-Based NNs: Teacher values in Approximation-Based NNs are quantitative, in real or complex values; teacher values in Decision-Based NNs are symbols (class labels) rather than numeric values.

3 Decision-Based NN (DBNN): Linear Perceptron; Discriminant function (Score function); Reinforced and Anti-reinforced Learning Rules; Hierarchical and Modular Structures.

4 [Diagram: discriminant functions φ(x, w₁), φ(x, w₂), …, φ(x, w_M); feedback of incorrect/correct classes before the next pattern is presented]

5 Supervised Learning: Linear Perceptron NN

6 Two-Classes: Linear Perceptron Learning Rule
Discriminant (score) function: φ_j(x, w_j) = x^T w_j + w_0 = z^T ŵ_j (= z^T w), so ∇φ_j(z, w_j) = z.
Upon the presentation of the m-th training pattern z^(m), the weight vector w^(m) is updated as
w^(m+1) = w^(m) + η (t^(m) − d^(m)) z^(m),
where η is a positive learning rate.
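For concreteness, here is a minimal NumPy sketch of the two-class perceptron rule above; the toy data, the learning rate η = 0.1, and the hard-threshold decision d^(m) are illustrative assumptions, not part of the original slide.

```python
# Minimal sketch of the two-class linear perceptron rule (illustrative data and eta).
import numpy as np

def train_perceptron(X, t, eta=0.1, epochs=100):
    """X: (N, d) input patterns, t: (N,) teacher labels in {0, 1}."""
    Z = np.hstack([X, np.ones((X.shape[0], 1))])  # augmented patterns z = [x, 1]
    w = np.zeros(Z.shape[1])                      # augmented weights w_hat = [w, w0]
    for _ in range(epochs):
        updated = False
        for z, t_m in zip(Z, t):
            d_m = 1.0 if z @ w > 0 else 0.0       # decision d(m) from the score z^T w
            if d_m != t_m:                        # w(m+1) = w(m) + eta*(t(m) - d(m))*z(m)
                w += eta * (t_m - d_m) * z
                updated = True
        if not updated:                           # all patterns correct: converged
            break
    return w

# Linearly separable toy data: the rule converges in a finite number of sweeps.
X = np.array([[2.0, 1.0], [1.5, 2.0], [-1.0, -1.5], [-2.0, -0.5]])
t = np.array([1.0, 1.0, 0.0, 0.0])
print(train_perceptron(X, t))
```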

7 If a set of training patterns is linearly separable, then the linear perceptron learning algorithm converges to a correct solution in a finite number of iterations. Linear Perceptron: Convergence Theorem (Two Classes)

8 It converges when the learning rate η is small enough: w^(m+1) = w^(m) + η (t^(m) − d^(m)) z^(m).

9 Multiple Classes: linearly separable vs. strongly linearly separable.

10 If the given multiple-class training set is linearly separable, then the linear perceptron learning algorithm converges to a correct solution after a finite number of iterations. Linear Perceptron Convergence Theorem (Multiple Classes)

11 Multiple Classes: Linear Perceptron Learning Rule (linear separability)

12

13 P_1j = [ z, 0, 0, …, −z, 0, …, 0 ]
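The slide gives only P_1j; a common reading (an assumption here) is the standard multi-class construction in which the augmented pattern z occupies the block of the correct class and −z the block of a competing class, so a single stacked weight vector can be trained with the two-class rule. A small sketch:

```python
# Hedged sketch of the multi-class pattern construction P_ij = [... z ... -z ...],
# assuming z fills the block of the correct class i and -z the block of class j.
import numpy as np

def make_P(z, i, j, num_classes):
    d = z.shape[0]
    P = np.zeros(num_classes * d)
    P[i * d:(i + 1) * d] = z      # +z in the block of the correct class i
    P[j * d:(j + 1) * d] = -z     # -z in the block of the competing class j
    return P

z = np.array([1.0, 2.0, 1.0])     # an augmented pattern (illustrative)
print(make_P(z, 0, 2, 4))         # P_13 with 0-indexed classes, 4 classes in total
```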

14 DBNN Structure for Nonlinear Discriminant Function [Diagram: input (x, y) feeds discriminant functions φ(x, w₁), φ(x, w₂), φ(x, w₃), whose scores compete in a MAXNET]

15 DBNN [Diagram: input (x, y); weight vectors w₁, w₂, w₃ feed a MAXNET; training is invoked only if the teacher indicates the need]

16

17 The decision-based learning rule is based on a minimal updating principle: it tends to avoid or minimize unnecessary side-effects due to overtraining. In the first scenario, the pattern is already correctly classified by the current network; then no update is attributed to that pattern, and the learning process proceeds with the next training pattern. In the second scenario, the pattern is incorrectly classified to another winning class; in this case, the parameters of two classes must be updated: the score of the winning class should be reduced by the anti-reinforced learning rule, while the score of the correct (but not winning) class should be enhanced by the reinforced learning rule.

18 Reinforced and Anti-reinforced Learning
Suppose that the m-th training pattern x^(m) is known to belong to the i-th class, and the leading challenger is denoted by j = arg max_{k≠i} φ(x^(m), Θ_k).
Reinforced learning: w_i ← w_i + η ∇φ_i(x, w)
Anti-reinforced learning: w_j ← w_j − η ∇φ_j(x, w)

19 For Simple RBF Discriminant Function: φ_j(x, w_j) = −0.5 ‖x − w_j‖², so ∇φ_j(x, w_j) = (x − w_j). Upon the presentation of the m-th training pattern z^(m), the weight vector w^(m) is updated as
Reinforced learning: w_i ← w_i + η (x − w_i)
Anti-reinforced learning: w_j ← w_j − η (x − w_j)
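A hedged sketch of one decision-based update with this simple RBF score; the single centroid per class, the learning rate, and the toy data are assumptions used only for illustration.

```python
# Sketch of one reinforced / anti-reinforced update for the simple RBF score
# phi_j(x, w_j) = -0.5 * ||x - w_j||^2, whose gradient w.r.t. w_j is (x - w_j).
import numpy as np

def rbf_score(x, w):
    return -0.5 * np.sum((x - w) ** 2)

def decision_based_update(W, x, true_class, eta=0.1):
    """W: (M, d) array with one weight (centroid) vector per class."""
    winner = int(np.argmax([rbf_score(x, w) for w in W]))
    if winner == true_class:
        return W                                 # minimal updating: correct, so no change
    W = W.copy()
    W[true_class] += eta * (x - W[true_class])   # reinforced: pull the correct class toward x
    W[winner] -= eta * (x - W[winner])           # anti-reinforced: push the winner away from x
    return W

# Illustrative usage: three classes in 2-D, one misclassified pattern of class 1.
W = np.array([[0.0, 0.0], [3.0, 3.0], [-3.0, 2.0]])
print(decision_based_update(W, np.array([1.0, 0.5]), true_class=1))
```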

20

21

22 Decision-Based Learning Rule: the learning scheme of the DBNN consists of two phases: (1) locally unsupervised learning; (2) globally supervised learning.

23 Locally Unsupervised Learning via VQ or EM Clustering Method: several approaches can be used to estimate the number of hidden nodes, or the initial clustering can be determined by VQ or EM clustering methods. EM allows the final decision to incorporate prior information, which can be instrumental for multiple-expert or multiple-channel information fusion.
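As a concrete illustration of this locally unsupervised phase, the sketch below initializes each class's local experts by running K-means within that class; K-means is used here as a simple VQ stand-in, and the number of clusters per class is an assumed parameter. Refining these centroids with EM would additionally attach prior probabilities to each expert, which is what allows the final decision to incorporate prior information.

```python
# Sketch of the locally unsupervised phase: cluster each class's patterns
# (plain K-means as a VQ stand-in; EM over Gaussian mixtures is the other option).
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1), axis=1)
        for c in range(k):
            if np.any(labels == c):               # keep the old centroid if a cluster empties
                centroids[c] = X[labels == c].mean(axis=0)
    return centroids

def init_local_experts(X, y, clusters_per_class=4):
    """Returns {class_label: (clusters_per_class, d) array of initial expert centroids}."""
    return {c: kmeans(X[y == c], clusters_per_class) for c in np.unique(y)}
```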

24 Globally Supervised Learning Rules: the objective of learning is minimum classification error (not maximum likelihood estimation). Inter-class mutual information is used to fine-tune the decision boundaries (i.e., the globally supervised learning). In this phase, the DBNN applies the reinforced/anti-reinforced learning rule [Kung95] or the discriminative learning rule [Juang92] to adjust network parameters. Only misclassified patterns need to be involved in this training phase.

25 Pictorial Presentation of Hierarchical DBNN [Figure: scatter of training patterns from classes a, b, and c, each class occupying several local clusters]

26 Discriminant function (Score function): LBF function (or mixture of); RBF function (or mixture of); prediction error function; likelihood function (HMM).
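As a small illustration, the first two score-function choices in the list above can be written as one-liners; the bias and width below are assumed parameters.

```python
# Sketch of two score-function choices from the list: LBF vs. (Gaussian) RBF.
import numpy as np

def lbf_score(x, w, w0=0.0):
    return x @ w + w0                                             # linear basis function

def rbf_score(x, mu, sigma=1.0):
    return np.exp(-np.sum((x - mu) ** 2) / (2.0 * sigma ** 2))    # radial basis function
```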

27 Hierarchical and Modular DBNN: Subcluster DBNN; Probabilistic DBNN; local experts via K-means or EM; reinforced and anti-reinforced learning.

28 Subcluster DBNN [Diagram: subcluster discriminant functions compete in a MAXNET]

29

30 Subcluster Decision-Based Learning Rule

31

32

33

34

35 Probabilistic DBNN

36 Probabilistic DBNN [Diagram: subnetwork scores compete in a MAXNET]

37

38 Probabilistic DBNN [Diagram: subnetwork scores compete in a MAXNET]

39 A subnetwork of a Probabilistic DBNN is basically a mixture of local experts [Diagram: the k-th subnetwork computes P(y|x, Θ_k) from RBF local experts P(y|x, θ_r) on input x]
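A hedged sketch of how one such subnetwork could score an input as a mixture of Gaussian (RBF) local experts; spherical covariances and the particular mixture weights are simplifying assumptions.

```python
# Sketch of the k-th subnetwork's score as a mixture of RBF (Gaussian) local experts:
# p(x | class k) = sum_r alpha_r * N(x; mu_r, sigma_r^2 I)   (spherical covariances assumed)
import numpy as np

def subnetwork_likelihood(x, mus, sigmas, alphas):
    """mus: (R, d) expert centroids, sigmas: (R,) widths, alphas: (R,) mixing weights."""
    d = x.shape[0]
    total = 0.0
    for mu, sigma, alpha in zip(mus, sigmas, alphas):
        norm = (2.0 * np.pi * sigma ** 2) ** (-d / 2.0)
        total += alpha * norm * np.exp(-np.sum((x - mu) ** 2) / (2.0 * sigma ** 2))
    return total

# Illustrative usage: a two-expert subnetwork in 2-D.
mus = np.array([[0.0, 0.0], [2.0, 2.0]])
print(subnetwork_likelihood(np.array([1.0, 1.0]), mus, np.array([1.0, 0.5]), np.array([0.6, 0.4])))
```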

40 Probabilistic Decision-Based Neural Networks

41 Training of Probabilistic DBNN: selection of initial local experts (intra-class training: unsupervised training, EM probabilistic training); training of the experts (inter-class training: supervised training, reinforced and anti-reinforced learning).

42 Probabilistic Decision-Based Neural Networks: training procedure [Flowchart: the locally unsupervised phase clusters the feature vectors with K-means, K-NNs, and EM; the globally supervised phase classifies them against the class IDs and applies reinforced learning to the misclassified vectors, repeating until convergence]
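To tie the flowchart together, here is a hedged end-to-end sketch of the two-phase procedure: a (simplified) locally unsupervised initialization of the local experts, followed by a globally supervised pass that applies reinforced/anti-reinforced updates only to misclassified vectors until convergence. The random initialization, learning rate, and RBF score are assumptions; K-means or EM (as in the sketch after slide 23) would normally refine the initial experts.

```python
# Hedged sketch of the two-phase training procedure shown in the flowchart.
import numpy as np

def rbf_score(x, mu):
    return -0.5 * np.sum((x - mu) ** 2)

def classify(x, experts):
    """Winner = class whose best local expert gives the highest RBF score."""
    return max(experts, key=lambda c: max(rbf_score(x, mu) for mu in experts[c]))

def train_dbnn(X, y, clusters_per_class=2, eta=0.05, max_epochs=200, seed=0):
    rng = np.random.default_rng(seed)
    # Locally unsupervised phase (simplified): random patterns of each class as initial experts.
    experts = {c: X[y == c][rng.choice(np.sum(y == c), clusters_per_class, replace=False)].copy()
               for c in np.unique(y)}
    # Globally supervised phase: update only on misclassified vectors.
    for _ in range(max_epochs):
        misclassified = 0
        for x, c in zip(X, y):
            winner = classify(x, experts)
            if winner == c:
                continue                                              # minimal updating principle
            misclassified += 1
            i = np.argmin(((experts[c] - x) ** 2).sum(axis=1))        # nearest expert of true class
            j = np.argmin(((experts[winner] - x) ** 2).sum(axis=1))   # nearest expert of winner
            experts[c][i] += eta * (x - experts[c][i])                # reinforced learning
            experts[winner][j] -= eta * (x - experts[winner][j])      # anti-reinforced learning
        if misclassified == 0:                                        # converged: stop
            break
    return experts
```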

43 Probabilistic Decision-Based Neural Networks [Figure: GMM vs. PDBNN decision boundaries on a 2-D vowel problem]

44 Difference of MOE and DBNN: For MOE, the influence of the training patterns on each expert is regulated by the gating network (which is itself under training), so that as training proceeds the training patterns have higher influence on the nearby experts and lower influence on the far-away ones. (The MOE updates all the classes.) Unlike the MOE, the DBNN makes use of both unsupervised (EM-type) and supervised (decision-based) learning rules. The DBNN uses only misclassified training patterns for its globally supervised learning, and it updates only the "winner" class and the class to which the misclassified pattern actually belongs. Its training strategy is to abide by a "minimal updating principle".

45 DBNN/PDBNN Applications: OCR (DBNN); Texture Segmentation (DBNN); Mammogram Diagnosis (PDBNN); Face Detection (PDBNN); Face Recognition (PDBNN); Money Recognition (PDBNN); Multimedia Library (DBNN)

46 OCR Classification (DBNN)

47 Image Texture Classification (DBNN)

48

49

50

51 Face Detection (PDBNN)

52 Face Recognition (PDBNN)

53

54 show movies

55 Multimedia Library (PDBNN)

56 MatLab Assignment #4: DBNN to separate 2 classes. RBF DBNN with 4 centroids per class; RBF DBNN with 4 and 6 centroids for the green and blue classes, respectively (ratio = 2:1).

57 RBF-BP NN for Dynamic Resource Allocation: use content to determine the renegotiation time; use content/ST-traffic to estimate how much resource to request. A neural-network traffic predictor yields smaller prediction MSE and higher link utilization.

58 Intelligent Media Agent: Modern information technology in the Internet era should support interactive and intelligent processing that transforms and transfers information. Integration of signal processing and neural-net techniques could be a versatile tool for a broad spectrum of multimedia applications.

59 EM Applications: uncertain clustering/model, channel confidence [Diagram: Expert 1 and Expert 2 receiving Channel 1 and Channel 2]

60 Channel Fusion

61 Classes-in-channel network: Sensor = Channel = Expert

62 Sensor Fusion: Human Sensory Modalities, Computer Sensory Modalities ("Da", "Ga", "Ba")

63 Fusion Example: Toy Car Recognition

64 Probabilistic Decision-Based Neural Networks

65 Probabilistic Decision-Based Neural Networks: training procedure [Flowchart, repeated from slide 42: the locally unsupervised phase clusters the feature vectors with K-means, K-NNs, and EM; the globally supervised phase classifies them against the class IDs and applies reinforced learning to the misclassified vectors until convergence]

66 Probabilistic Decision-Based Neural Networks [Figure, repeated from slide 43: GMM vs. PDBNN decision boundaries on the 2-D vowel problem]

67 References:
[1] Lin, S.H., Kung, S.Y. and Lin, L.J. (1997). "Face recognition/detection by probabilistic decision-based neural network," IEEE Trans. on Neural Networks, 8(1), pp. 114-132.
[2] Mak, M.W. et al. (1994). "Speaker Identification using Multi Layer Perceptrons and Radial Basis Functions Networks," Neurocomputing, 6(1), pp. 99-118.

