
1 Chapter 9 Perceptrons and their generalizations

2 Rosenblatt's perceptron. Proofs of the theorems. Method of stochastic approximation and sigmoid approximation of indicator functions. Method of potential functions and radial basis functions. Three theorems of optimization theory. Neural networks.

3 Perceptrons (Rosenblatt, 1950s)

4 Recurrent Procedure
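The equations of the recurrent procedure are on slides that were not transcribed; as a point of reference, here is a minimal sketch of the classical perceptron correction rule (function and variable names are illustrative, and the unit step size is an assumption, not taken from the slides):

```python
import numpy as np

def perceptron_train(X, y, epochs=100):
    """Recurrent procedure sketch: on each misclassified example,
    correct the coefficient vector toward that example (y_i in {-1, +1})."""
    w = np.zeros(X.shape[1])   # coefficient vector
    b = 0.0                    # threshold term
    for _ in range(epochs):
        corrections = 0
        for x_i, y_i in zip(X, y):
            if y_i * (np.dot(w, x_i) + b) <= 0:   # sign disagrees with the label
                w += y_i * x_i                    # corrective step
                b += y_i
                corrections += 1
        if corrections == 0:    # no mistakes in a full pass: stop
            break
    return w, b
```

For linearly separable data the number of corrections made by such a procedure is bounded (Novikoff's classical result), which is presumably the theorem whose proofs the later slides present.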

21 Proofs of the theorems

22 Method of stochastic approximation and sigmoid approximation of indicator functions

24 Method of Stochastic Approximation
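The formulas themselves are on untranscribed slides; the classical Robbins-Monro-type update on which the stochastic approximation method is built can be written as follows (standard form, not copied from the slides):

```latex
w_{k+1} = w_k - \gamma_k \, \nabla_w Q(z_k, w_k), \qquad
\gamma_k > 0, \quad
\sum_{k=1}^{\infty} \gamma_k = \infty, \quad
\sum_{k=1}^{\infty} \gamma_k^{2} < \infty ,
```

where Q(z, w) is the loss on the observation z_k and the step sizes gamma_k satisfy the usual divergence and square-summability conditions.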

28 Sigmoid Approximation of Indicator Functions

29 Basic frame for the learning process: use the sigmoid approximation at the stage of estimating the coefficients, and use the indicator function at the stage of recognition.
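A minimal sketch of this two-stage frame, assuming a squared-error fit by gradient descent and labels in {0, 1} (the helper names and hyperparameters are illustrative, not taken from the slides):

```python
import numpy as np

def sigmoid(u, scale=1.0):
    """Smooth approximation of the indicator (step) function;
    a larger `scale` brings it closer to a hard step."""
    return 1.0 / (1.0 + np.exp(-scale * u))

def fit_with_sigmoid(X, y, lr=0.1, epochs=500, scale=1.0):
    """Stage 1: estimate the coefficients using the differentiable sigmoid
    (gradient descent on the squared error; y has values in {0, 1})."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = sigmoid(X @ w, scale)
        grad = X.T @ ((p - y) * p * (1 - p) * scale) / len(y)
        w -= lr * grad
    return w

def recognize_with_indicator(X, w):
    """Stage 2: recognition with the hard indicator function theta(w . x)."""
    return (X @ w > 0).astype(int)
```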

30 Method of potential functions and Radial Basis Functions

31 Potential functions: an on-line method that processes only one element of the training data at each step. RBFs (mid-1980s): an off-line method.
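The slide gives only the headline of the on-line scheme; a sketch in its spirit, essentially a kernel-perceptron-style correction that touches one training element at a time, might look like this (the Gaussian potential and all names are illustrative assumptions):

```python
import numpy as np

def potential_train(X, y, K, gamma=1.0, epochs=10):
    """On each step the current decision function f(x) = sum_i c_i K(x_i, x)
    is corrected by adding a potential centered at the element it misclassifies."""
    coeffs = np.zeros(len(X))
    for _ in range(epochs):
        for k, (x_k, y_k) in enumerate(zip(X, y)):    # y_k in {-1, +1}
            f_k = sum(c * K(x_i, x_k) for c, x_i in zip(coeffs, X) if c != 0.0)
            if y_k * f_k <= 0:                        # misclassified or on the boundary
                coeffs[k] += gamma * y_k              # add +/- gamma * K(x_k, .)
    return coeffs

# Illustrative potential: K = lambda a, b: np.exp(-np.sum((a - b) ** 2))
```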

32 Method of potential functions in asymptotic learning theory: under the separable condition, the deterministic setting of the pattern recognition (PR) problem; under the non-separable condition, the stochastic setting of the PR problem.

33 Deterministic Setting

34 Stochastic Setting

35 RBF Method
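The slide's formulas are not transcribed; a sketch of a standard RBF expansion with Gaussian basis functions centered at the training points, with coefficients obtained from a regularized linear system (gamma, reg and all names are illustrative assumptions):

```python
import numpy as np

def rbf_fit(X, y, gamma=1.0, reg=1e-6):
    """RBF expansion f(x) = sum_i c_i * exp(-gamma * ||x - x_i||^2) with the
    training points as centers; coefficients solve a (regularized) linear system."""
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)   # pairwise squared distances
    K = np.exp(-gamma * sq)
    return np.linalg.solve(K + reg * np.eye(len(X)), y)

def rbf_predict(X_new, centers, c, gamma=1.0):
    sq = np.sum((X_new[:, None, :] - centers[None, :, :]) ** 2, axis=-1)
    return np.exp(-gamma * sq) @ c
```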

37 Three theorems of optimization theory: Fermat's theorem (1629), for optimization over the entire space, without constraints; the Lagrange multipliers rule (1788), for the conditional optimization problem; the Kuhn-Tucker theorem (1951), for convex optimization.

42 To find the stationary points of a function of n variables, it is necessary to solve a system of n equations in n unknowns.
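In symbols (standard statement, added for completeness), Fermat's condition for an unconstrained extremum of f(x_1, ..., x_n) is

```latex
\frac{\partial f(x_1, \dots, x_n)}{\partial x_i} = 0, \qquad i = 1, \dots, n,
```

which is exactly a system of n equations in the n unknowns x_1, ..., x_n.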

43 Lagrange Multipliers Rule (1788)
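The rule's formulas are on untranscribed slides; the standard statement for minimizing f(x) subject to equality constraints g_k(x) = 0, k = 1, ..., m, is that a solution is a stationary point of the Lagrangian (written here in the regular case, with the multiplier of f normalized to 1):

```latex
L(x, \lambda) = f(x) + \sum_{k=1}^{m} \lambda_k \, g_k(x), \qquad
\frac{\partial L}{\partial x_i} = 0 \;\; (i = 1, \dots, n), \qquad
g_k(x) = 0 \;\; (k = 1, \dots, m).
```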

51 Kuhn-Tucker theorem (1951), convex optimization: minimize a convex objective function under convex constraints of inequality type.
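In the standard form (not copied from the slides), for minimizing a convex f(x) subject to convex constraints g_k(x) <= 0, the Kuhn-Tucker conditions for a solution x* with multipliers alpha_k are, under a suitable regularity condition such as Slater's:

```latex
\nabla f(x^{*}) + \sum_{k=1}^{m} \alpha_k \, \nabla g_k(x^{*}) = 0, \qquad
g_k(x^{*}) \le 0, \qquad
\alpha_k \ge 0, \qquad
\alpha_k \, g_k(x^{*}) = 0, \qquad k = 1, \dots, m.
```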

59 Remark

60 Neural networks. A learning machine that nonlinearly maps the input vector x into a feature space U and constructs a linear function in this space.

61 Neural networks: the back-propagation method; the BP algorithm; neural networks for the regression estimation problem; remarks on the BP method.

62 The Back-Propagation method

63 The BP algorithm
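The algorithm itself is on untranscribed slides; here is a minimal sketch of back-propagation for a one-hidden-layer sigmoid network trained on the squared error (the architecture, learning rate and all names are illustrative assumptions, not the slides' notation):

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def bp_train(X, y, n_hidden=5, lr=0.5, epochs=2000, seed=0):
    """Back-propagation sketch: forward pass to compute the outputs, then
    propagate the error derivatives backwards to update both weight layers.
    X is (n, d); y is (n,) with targets in (0, 1)."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.5, size=(X.shape[1], n_hidden))  # input -> hidden
    W2 = rng.normal(scale=0.5, size=n_hidden)                # hidden -> output
    for _ in range(epochs):
        # forward pass
        h = sigmoid(X @ W1)                 # hidden activations, (n, n_hidden)
        out = sigmoid(h @ W2)               # network outputs, (n,)
        # backward pass (gradients of 0.5 * mean squared error)
        delta_out = (out - y) * out * (1 - out)             # (n,)
        delta_hid = np.outer(delta_out, W2) * h * (1 - h)   # (n, n_hidden)
        W2 -= lr * (h.T @ delta_out) / len(y)
        W1 -= lr * (X.T @ delta_hid) / len(y)
    return W1, W2
```

A bias term can be supplied by appending a constant component to x; the remark later in the chapter about the sigmoid's scaling factor corresponds to multiplying the argument of `sigmoid` by a constant.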

66 For the regression estimation problem

67 Remark: the empirical risk functional has many local minima; the convergence of the gradient-based method is rather slow; the sigmoid function has a scaling factor that affects the quality of the approximation.

68 Neural networks are not well-controlled learning machines; in many practical applications, however, they demonstrate good results.

