1 Learning in neural networks Chapter 19 DRAFT

2 Biological neuron

3 Neuron. [Diagram: labelled parts of a neuron — soma, dendrites, axon, synapse, action potential.]

4 Perceptron: inputs x_0, …, x_n with weights w_i feed an activation g; output y = g(Σ_{i=1,…,n} w_i x_i). [Diagram: the unit, and a straight line separating + examples from - examples.]
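A minimal Python sketch of the unit on this slide (the step activation g and the convention that x_0 is a bias input fixed at 1 are assumptions; the slide leaves g unspecified):

    # Perceptron unit: y = g(sum of w_i * x_i for i = 0..n).
    # g is assumed to be a step (threshold) activation, and
    # inputs[0] is assumed to be a bias input fixed at 1.

    def g(u):
        return 1 if u >= 0 else 0

    def perceptron(weights, inputs):
        u = sum(w * x for w, x in zip(weights, inputs))
        return g(u)

    print(perceptron([-0.5, 1.0, 1.0], [1, 0.2, 0.4]))  # -> 1, since 0.2 + 0.4 - 0.5 >= 0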

5 Perceptron. A synonym for a single-layer, feed-forward network. First studied in the 1950s. Other kinds of networks were known at the time, but the perceptron was the only one capable of learning, so research concentrated on it.

6 Perceptron. A single weight affects only one output, so we can restrict our investigation to a model with a single output unit, as shown on the right; the notation is also simpler.

7 What can perceptrons represent?

8 [Plots: the inputs (0,0), (0,1), (1,0), (1,1) labelled for AND and for XOR; a straight line separates the two AND classes, but no line separates the XOR classes.] Functions whose classes can be separated in this way are called linearly separable. Only linearly separable functions can be represented by a perceptron.
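As a sketch with hand-picked weights (not from the slides): a perceptron with a bias input computes AND, and the closing comment shows why no weights can compute XOR:

    # A perceptron computing AND: it fires exactly when x1 + x2 >= 1.5.

    def step(u):
        return 1 if u >= 0 else 0

    def perceptron(w, x):        # x[0] = 1 is the bias input
        return step(sum(wi * xi for wi, xi in zip(w, x)))

    w_and = [-1.5, 1.0, 1.0]     # hand-picked separating line x1 + x2 = 1.5
    for x1 in (0, 1):
        for x2 in (0, 1):
            print((x1, x2), '->', perceptron(w_and, [1, x1, x2]))

    # XOR is not linearly separable: outputs of 1 on (0,1) and (1,0) would
    # force w0 + w2 >= 0 and w0 + w1 >= 0, hence 2*w0 + w1 + w2 >= 0, while
    # outputs of 0 on (0,0) and (1,1) would force w0 < 0 and
    # w0 + w1 + w2 < 0, hence 2*w0 + w1 + w2 < 0 -- a contradiction.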

9 What can perceptrons represent? Linear separability is also possible in more than three dimensions, but it is harder to visualise.

10 Training a perceptron. Aim: find weights that make the perceptron output the correct value for every training example.

11 Training a perceptron. [Diagram: an example unit with inputs x and y, threshold t = 0.0, and weights W = 0.3, W = -0.4, W = 0.5.]
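A sketch of the classic perceptron learning rule; the AND training data and the learning rate eta are assumptions, and the slide's weights are used as the starting point:

    # Perceptron learning rule: w_i <- w_i + eta * (target - output) * x_i,
    # repeated over the training set until the outputs are all correct.

    def step(u):
        return 1 if u >= 0 else 0

    # training data for AND; x[0] = 1 is the bias input (threshold t = 0.0)
    data = [([1, 0, 0], 0), ([1, 0, 1], 0), ([1, 1, 0], 0), ([1, 1, 1], 1)]
    w = [0.3, -0.4, 0.5]         # the slide's example weights as a start
    eta = 0.1                    # assumed learning rate

    for epoch in range(50):
        for x, target in data:
            y = step(sum(wi * xi for wi, xi in zip(w, x)))
            w = [wi + eta * (target - y) * xi for wi, xi in zip(w, x)]

    print(w)                     # weights that now implement AND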

12 Function-learning formulation
- Goal function f
- Training set: (x_i, f(x_i)), i = 1, …, n
- Inductive inference: find a function h that fits the points well
- Same keep-it-simple bias

13 Perceptron: y = g(Σ_{i=1,…,n} w_i x_i). [Diagram: the same unit, with the + and - examples and a new point marked '?' to classify.]

14 Unit (neuron): y = g(Σ_{i=1,…,n} w_i x_i), with the sigmoid activation g(u) = 1/[1 + exp(-α u)], where α is the gain.
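The same unit as a Python sketch, with the sigmoid replacing a hard threshold (the gain is taken as 1.0 here; the slide leaves it symbolic):

    import math

    # Sigmoid unit: y = g(sum of w_i * x_i), g(u) = 1/(1 + exp(-alpha * u)).
    # Unlike the step function, g is differentiable, which gradient-based
    # training (backpropagation) requires.

    def g(u, alpha=1.0):         # alpha is the gain; assumed to be 1.0
        return 1.0 / (1.0 + math.exp(-alpha * u))

    def unit(weights, inputs):
        return g(sum(w * x for w, x in zip(weights, inputs)))

    print(unit([-0.5, 1.0, 1.0], [1, 0.2, 0.4]))  # ~0.52, a smooth value in (0, 1)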

15 Neural network: a network of interconnected neurons. [Diagram: two units, each computing y = g(Σ_{i=1,…,n} w_i x_i), connected together.] Acyclic (feed-forward) vs. recurrent networks.

16 Two-layer feed-forward neural network. [Diagram: inputs feeding a hidden layer, which feeds an output layer.]
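A sketch of one forward pass through such a network (the layer sizes, random weights, use of numpy, and omission of bias inputs are illustrative choices, not from the slides):

    import numpy as np

    # Two-layer feed-forward network: inputs -> hidden layer -> output layer.
    # Each layer multiplies its input by a weight matrix, then applies a sigmoid.

    def sigmoid(u):
        return 1.0 / (1.0 + np.exp(-u))

    rng = np.random.default_rng(0)
    W1 = rng.normal(size=(3, 2))   # hidden layer: 3 units, 2 inputs
    W2 = rng.normal(size=(1, 3))   # output layer: 1 unit, 3 hidden values

    def forward(x):
        h = sigmoid(W1 @ x)        # hidden-layer activations
        return sigmoid(W2 @ h)     # network output

    print(forward(np.array([0.5, -1.0])))  # one output value in (0, 1)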

17 Backpropagation (principle)
- New example: y_k = f(x_k)
- φ_k = output of the network with weights w on inputs x_k
- Error function: E(w) = ||φ_k - y_k||^2
- Gradient step: w_ij(k) = w_ij(k-1) - ε ∂E/∂w_ij
- Backpropagation: update the weights of the inputs to the last layer, then the weights of the inputs to the previous layer, etc.
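A minimal sketch of these updates on the XOR function, which a single perceptron cannot represent; the one-hidden-layer architecture, bias inputs, learning rate, and initialisation are all assumptions, since the slide gives only the general rule:

    import numpy as np

    # Backpropagation with squared error E(w) = ||phi_k - y_k||^2 and
    # gradient steps w_ij <- w_ij - eps * dE/dw_ij, computed for the
    # output layer first and then for the hidden layer.

    def sigmoid(u):
        return 1.0 / (1.0 + np.exp(-u))

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    Y = np.array([[0], [1], [1], [0]], dtype=float)
    Xb = np.hstack([X, np.ones((4, 1))])      # append a bias input of 1

    rng = np.random.default_rng(1)
    W1 = rng.normal(size=(3, 3))              # 2 inputs + bias -> 3 hidden units
    W2 = rng.normal(size=(4, 1))              # 3 hidden + bias -> 1 output unit
    eps = 0.5                                 # assumed learning rate

    for step in range(10000):
        # forward pass
        H = sigmoid(Xb @ W1)                  # hidden activations, shape (4, 3)
        Hb = np.hstack([H, np.ones((4, 1))])  # hidden values plus bias input
        P = sigmoid(Hb @ W2)                  # network outputs phi, shape (4, 1)
        # backward pass: output layer first, then the hidden layer
        # (the constant factor 2 from the squared error is folded into eps)
        d2 = (P - Y) * P * (1 - P)            # error signal at the output unit
        d1 = (d2 @ W2[:3].T) * H * (1 - H)    # error signal at the hidden units
        W2 -= eps * Hb.T @ d2
        W1 -= eps * Xb.T @ d1

    # most initialisations end near [0, 1, 1, 0]; some stall in a local minimum
    print(np.round(P.T, 2))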

18 Comments and issues
- How to choose the size and structure of the network? If the network is too large, there is a risk of over-fitting (the network simply caches the data); if it is too small, the representation may not be rich enough.
- Role of representation: e.g., learning the concept of an odd number.
- Incremental learning

