COMP305. Part I. Artificial neural networks. Topic 3. Learning Rules of the Artificial Neural Networks.

1 COMP305. Part I. Artificial neural networks.

2 Topic 3. Learning Rules of the Artificial Neural Networks.

3 ANN Learning rules. The McCulloch-Pitts neuron is capable of storing information and performing logical and arithmetic operations on it. The next step is to realise another important function of the brain, which is to acquire new knowledge through experience, i.e. learning.
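As a brief recap of that logical capability, a McCulloch-Pitts neuron can be sketched in a few lines of Python (a minimal illustration, not part of the original slides; the threshold values are assumptions chosen to realise AND and OR):

```python
# Minimal McCulloch-Pitts neuron: binary inputs, fixed binary weights;
# it fires (outputs 1) iff the weighted sum reaches the threshold.
def mp_neuron(inputs, weights, threshold):
    s = sum(a * w for a, w in zip(inputs, weights))
    return 1 if s >= threshold else 0

# Logical AND of two inputs: both must be active (threshold 2).
print(mp_neuron([1, 1], [1, 1], threshold=2))  # fires
print(mp_neuron([1, 0], [1, 1], threshold=2))  # does not fire

# Logical OR: any single active input suffices (threshold 1).
print(mp_neuron([0, 1], [1, 1], threshold=1))  # fires
```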

4 ANN Learning rules. Learning means to change in response to experience. In a network of MP-neurons, the binary weights of connections and the thresholds are fixed. The only possible change is a change in the pattern of connections, which is technically expensive. Some easily adjustable free parameters are needed.

5 ANN Learning rules. The ideal free parameters to adjust, and so to implement learning without changing the pattern of connections, are the weights of connections w_ji. (Figure: abstract neuron.)

6 ANN Learning rules. Definition: an ANN learning rule defines how to adjust the weights of connections to obtain the desired output.

7 Hebb’s rule (1949). Hebb conjectured that a particular type of use-dependent modification of the connection strength of synapses might underlie learning in the nervous system.

8 Hebb’s rule (1949). Hebb introduced a neurophysiological postulate: “…When an axon of cell A is near enough to excite a cell B and repeatedly and persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells, such that A’s efficiency, as one of the cells firing B, is increased.”

9 Hebb’s rule (1949). The simplest formalisation of Hebb’s rule is to increase the weight of a connection at each successive instant:

w_ji^(k+1) = w_ji^k + Δw_ji^k,   (1)

where

Δw_ji^k = C a_i^k X_j^k.   (2)

10 Hebb’s rule (1949). In equations (1) and (2), w_ji^k is the weight of the connection at instant k, w_ji^(k+1) is the weight of the connection at the following instant k+1, Δw_ji^k is the increment by which the weight of the connection is enlarged, C is a positive coefficient which determines the learning rate, a_i^k is the input value from the presynaptic neuron at instant k, and X_j^k is the output of the postsynaptic neuron at the same instant k.
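These definitions translate directly into code. A minimal sketch in Python (the names w, a, X and C mirror the slide's notation; the function itself is an illustration, not part of the slides):

```python
def hebb_update(w, a, X, C=1):
    """One step of Hebb's rule, equations (1)-(2):
    w_ji^(k+1) = w_ji^k + C * a_i^k * X_j^k  for each connection i.
    w : list of connection weights at instant k
    a : list of presynaptic input values at instant k
    X : output of the postsynaptic neuron at instant k
    C : positive learning-rate coefficient
    """
    return [w_i + C * a_i * X for w_i, a_i in zip(w, a)]

# A weight grows only if its input and the output are both non-zero:
print(hebb_update([1, 1], [1, 0], X=1))  # -> [2, 1]
print(hebb_update([1, 1], [1, 0], X=0))  # -> [1, 1] (no output, no change)
```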

11 Hebb’s rule (1949). Thus, the weight of a connection changes at the next instant only if both the preceding input via this connection and the resulting output are simultaneously non-zero.

12 Hebb’s rule (1949). Equation (2) emphasises the correlational nature of a Hebbian synapse. It is sometimes referred to as the activity product rule.

13 Hebb’s rule (1949). For this reason, Hebb’s rule plays an important role in studies of ANN algorithms much “younger” than the rule itself, such as unsupervised learning and self-organisation, which we shall consider later.

14–35 Hebb’s rule in practice. A single neuron with four input units (No. 1–4) and learning rate C = 1. At each instant t the neuron receives an input vector a^t, fires (X^t = 1), and every weight is updated by Hebb’s rule, Δw_i = C · a_i · X:

Input unit No.:     1   2   3   4
t = 0:   w^0 =      1   1   1   1     input a^0 = (1, 0, 1, 0)
t = 1:   w^1 =      2   1   2   1     input a^1 = (1, 0, 1, 0)
t = 2:   w^2 =      3   1   3   1     input a^2 = (1, 1, 0, 0)
t = 3:   w^3 =      4   2   3   1

At t = 0 and t = 1 the input (1, 0, 1, 0) strengthens only the 1st and 3rd connections; at t = 2 the input (1, 1, 0, 0) strengthens the 1st and 2nd. And so on…
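The sequence of weight updates above can be reproduced with a short loop (a sketch assuming, as in the slides, that the postsynaptic neuron fires, X = 1, at every step shown):

```python
def hebb_update(w, a, X, C=1):
    # Hebb's rule: w_i <- w_i + C * a_i * X for each connection i.
    return [w_i + C * a_i * X for w_i, a_i in zip(w, a)]

w = [1, 1, 1, 1]                                      # weights at t = 0
inputs = [[1, 0, 1, 0], [1, 0, 1, 0], [1, 1, 0, 0]]   # a^0, a^1, a^2
for t, a in enumerate(inputs):
    w = hebb_update(w, a, X=1)                        # neuron fires each time
    print(f"t = {t + 1}: w = {w}")
# t = 1: w = [2, 1, 2, 1]
# t = 2: w = [3, 1, 3, 1]
# t = 3: w = [4, 2, 3, 1]
```

Only the weights on active input lines grow, so repeated presentation of a pattern progressively reinforces exactly those connections.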

36 Next: Perceptron (1958). Rosenblatt (1958) explicitly considered the problem of pattern recognition, where a “teacher” is essential.

