
1 Radial Basis-Function Networks

2 Outline:
Back-Propagation
Stochastic Back-Propagation Algorithm
Step-by-Step Example
Radial Basis-Function Networks
Gaussian response function
Location of the centers u
Determining sigma
Why does an RBF network work?

3 Back-propagation
The algorithm gives a prescription for changing the weights w_ij in any feed-forward network so that it learns a training set of input-output pairs {x^d, t^d}. We consider a simple two-layer network.

4 [Figure: a two-layer feed-forward network with input units x_1, ..., x_5]

5 Given the pattern x^d, hidden unit j receives the net input
net_j^d = Σ_k w_jk x_k^d
and produces the output
V_j^d = f(net_j^d) = 1/(1 + exp(-net_j^d)).

6 Output unit i thus receives
net_i^d = Σ_j W_ij V_j^d
and produces the final output
o_i^d = f(net_i^d) = 1/(1 + exp(-net_i^d)).

7 In our example the error E becomes
E[w] = 1/2 Σ_d Σ_i (t_i^d - o_i^d)^2.
E[w] is differentiable given that f is differentiable, so gradient descent can be applied.
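A one-line numeric version of this error, as a minimal sketch (the function name is my own):

```python
import numpy as np

def sum_of_squares_error(targets, outputs):
    """E[w] = 1/2 * sum over patterns d and output units i of (t_i^d - o_i^d)^2."""
    return 0.5 * np.sum((np.asarray(targets) - np.asarray(outputs)) ** 2)
```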

8 Consider a network with M layers, m = 1, 2, ..., M.
V_i^m denotes the output of the i-th unit of the m-th layer; V_i^0 is a synonym for the i-th input x_i.
The superscript m indexes layers, not patterns.
w_ij^m denotes the connection from V_j^{m-1} to V_i^m.

9 Stochastic Back-Propagation Algorithm (mostly used)
1. Initialize the weights to small random values.
2. Choose a pattern x^d and apply it to the input layer: V_k^0 = x_k^d for all k.
3. Propagate the signal forward through the network: V_i^m = f(net_i^m) = f(Σ_j w_ij^m V_j^{m-1}).
4. Compute the deltas for the output layer: δ_i^M = f'(net_i^M) (t_i^d - V_i^M).
5. Compute the deltas for the preceding layers for m = M, M-1, ..., 2: δ_i^{m-1} = f'(net_i^{m-1}) Σ_j w_ji^m δ_j^m.
6. Update all connections: w_ij^m ← w_ij^m + η δ_i^m V_j^{m-1}.
7. Go to step 2 and repeat for the next pattern. (A Python sketch of these steps follows below.)
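A minimal sketch of these seven steps in Python, assuming the sigmoid activation and the 5-3-2 architecture of the example that follows; all names are my own:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def backprop_step(x, t, w_hidden, w_out, eta=1.0):
    # Forward pass through the two-layer network
    v = sigmoid(w_hidden @ x)   # hidden outputs V_j = f(sum_k w_jk x_k)
    o = sigmoid(w_out @ v)      # final outputs o_i = f(sum_j W_ij V_j)

    # Deltas; for the sigmoid, f'(net) = f(net) * (1 - f(net))
    delta_out = (t - o) * o * (1 - o)                    # output layer
    delta_hidden = v * (1 - v) * (w_out.T @ delta_out)   # preceding layer

    # Update all connections: w_ij <- w_ij + eta * delta_i * V_j
    w_out += eta * np.outer(delta_out, v)
    w_hidden += eta * np.outer(delta_hidden, x)
    return o, delta_out, delta_hidden

# The 5-3-2 network of the example: all weights initialized to 0.1
w_hidden = np.full((3, 5), 0.1)
w_out = np.full((2, 3), 0.1)
x1, t1 = np.array([1., 1., 0., 0., 0.]), np.array([1., 0.])
o, d_out, d_hid = backprop_step(x1, t1, w_hidden, w_out)
print(o)      # [0.54114 0.54114]
print(d_out)  # [ 0.11394 -0.13437]
print(d_hid)  # three equal values near -5.0568e-04
```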

10 Example
w_1 = {w_11 = 0.1, w_12 = 0.1, w_13 = 0.1, w_14 = 0.1, w_15 = 0.1}
w_2 = {w_21 = 0.1, w_22 = 0.1, w_23 = 0.1, w_24 = 0.1, w_25 = 0.1}
w_3 = {w_31 = 0.1, w_32 = 0.1, w_33 = 0.1, w_34 = 0.1, w_35 = 0.1}
W_1 = {W_11 = 0.1, W_12 = 0.1, W_13 = 0.1}
W_2 = {W_21 = 0.1, W_22 = 0.1, W_23 = 0.1}
x^1 = {1, 1, 0, 0, 0}; t^1 = {1, 0}
x^2 = {0, 0, 0, 1, 1}; t^2 = {0, 1}

11 net_1^1 = 1*0.1 + 1*0.1 + 0*0.1 + 0*0.1 + 0*0.1 = 0.2
V_1^1 = f(net_1^1) = 1/(1 + exp(-0.2)) = 0.54983
V_2^1 = f(net_2^1) = 1/(1 + exp(-0.2)) = 0.54983
V_3^1 = f(net_3^1) = 1/(1 + exp(-0.2)) = 0.54983

12 net_1^2 = 0.54983*0.1 + 0.54983*0.1 + 0.54983*0.1 = 0.16495
o_1 = f(net_1^2) = 1/(1 + exp(-0.16495)) = 0.54114
net_2^2 = 0.54983*0.1 + 0.54983*0.1 + 0.54983*0.1 = 0.16495
o_2 = f(net_2^2) = 1/(1 + exp(-0.16495)) = 0.54114

13 We will use stochastic gradient descent with η = 1.

14 δ_1 = (1 - 0.54114) * (1/(1 + exp(-0.16495))) * (1 - 1/(1 + exp(-0.16495))) = 0.11394
δ_2 = (0 - 0.54114) * (1/(1 + exp(-0.16495))) * (1 - 1/(1 + exp(-0.16495))) = -0.13437

15 [Figure: propagating the deltas back to the hidden layer]

16 δ_1 = 1/(1 + exp(-0.2)) * (1 - 1/(1 + exp(-0.2))) * (0.1*0.11394 + 0.1*(-0.13437)) = -5.0568e-04
δ_2 = -5.0568e-04
δ_3 = -5.0568e-04

17 First adaptation for x^1 (one epoch = one adaptation pass over all training patterns, in our case x^1 and x^2):
Hidden-layer deltas: δ_1 = δ_2 = δ_3 = -5.0568e-04
Output-layer deltas: δ_1 = 0.11394, δ_2 = -0.13437
Inputs: x_1 = 1, x_2 = 1, x_3 = 0, x_4 = 0, x_5 = 0
Hidden outputs: v_1 = v_2 = v_3 = 0.54983
Each connection is updated with Δw = η * δ_i * V_j, as checked in the sketch below.
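A quick numeric check of two representative weight updates, using the values listed above (variable names are my own):

```python
eta = 1.0

# Output-layer weight W_11: fed by hidden output v_1, error signal delta_1
W_11 = 0.1 + eta * 0.11394 * 0.54983      # -> about 0.16265
# Hidden-layer weight w_11: fed by input x_1, hidden error signal delta_1
w_11 = 0.1 + eta * (-5.0568e-04) * 1.0    # -> about 0.09949
print(W_11, w_11)
```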

18 Radial Basis-Function Networks
RBF networks train rapidly: no local-minima problems, no oscillation.
They are universal approximators: they can approximate any continuous function. They share this property with feed-forward networks with a hidden layer of nonlinear neurons (units).
Disadvantage: after training they are generally slower to use.

19 [Figure: architecture of a radial basis-function network]

20 Gaussian response function
Each hidden-layer unit computes
h_i = exp(-||x - u_i||^2 / (2σ_i^2)),
where x is an input vector and u_i is the weight (center) vector of hidden-layer neuron i.
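A direct transcription of this response function as a sketch (the function name is my own):

```python
import numpy as np

def gaussian_rbf(x, u, sigma):
    """Hidden-unit response: 1 at x == u, decaying radially with ||x - u||."""
    d2 = np.sum((np.asarray(x) - np.asarray(u)) ** 2)  # ||x - u||^2
    return np.exp(-d2 / (2.0 * sigma ** 2))
```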

21 The output neuron produces the linear weighted sum
o = Σ_i w_i h_i.
The weights w_i have to be adapted, e.g. with the LMS rule.
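A minimal LMS (delta-rule) sketch for adapting the output weights, assuming the hidden-layer responses have already been computed; all names are my own:

```python
import numpy as np

def lms_train(H, t, eta=0.1, epochs=100):
    """H: one row of hidden-unit responses h_i per training pattern.
    t: target output per pattern. Returns the adapted output weights."""
    w = np.zeros(H.shape[1])
    for _ in range(epochs):
        for h, target in zip(H, t):
            o = w @ h                     # linear weighted sum
            w += eta * (target - o) * h   # LMS update on the error
    return w
```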

22 The operation of the hidden layer. [Figure: Gaussian response for a one-dimensional input]

23 [Figure: Gaussian response for a two-dimensional input]

24 Every hidden neuron has a receptive field defined by the basis function:
At x = u the output is maximal; the output drops as x deviates from u.
The output has a significant response to the input x only over a range of values of x called the receptive field.
The size of the receptive field is defined by σ.
u may be called the mean and σ the standard deviation.
The function is radially symmetric around the mean u.

25 Location of the centers u
The location of the receptive fields is critical.
Apply clustering to the training set; each determined cluster center then corresponds to a center u of the receptive field of a hidden neuron, as sketched below.
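One common choice for this clustering step is k-means; a sketch using scikit-learn (the data here is a hypothetical stand-in for a real training set):

```python
import numpy as np
from sklearn.cluster import KMeans

X = np.random.rand(200, 2)           # hypothetical training inputs

# One receptive-field center u_i per hidden neuron, here 10 neurons
kmeans = KMeans(n_clusters=10, n_init=10).fit(X)
centers = kmeans.cluster_centers_    # row i is the center u_i
```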

26 Determining σ
The objective is to cover the input space with receptive fields as uniformly as possible.
If the spacing between centers is not uniform, it may be necessary for each hidden-layer neuron to have its own σ.
For hidden-layer neurons whose centers are widely separated from others, σ must be large enough to cover the gap.

27 The following heuristic performs well in practice: for each hidden-layer neuron, find the RMS distance between u_i and the centers c_j of its N nearest neighbors, and assign this value to σ_i:
σ_i = sqrt(1/N Σ_j ||u_i - c_j||^2)
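A sketch of this heuristic (the function name and default neighbor count are my own choices):

```python
import numpy as np

def rms_sigmas(centers, n_neighbors=2):
    """For each center u_i, sigma_i = RMS distance to its N nearest
    neighboring centers c_j."""
    diff = centers[:, None, :] - centers[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))   # pairwise center distances
    np.fill_diagonal(dist, np.inf)             # exclude the center itself
    nearest = np.sort(dist, axis=1)[:, :n_neighbors]
    return np.sqrt(np.mean(nearest ** 2, axis=1))
```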

28 [Figure]

29 [Figure]

30 Why does an RBF network work?
The hidden layer applies a nonlinear transformation φ from the input space to the hidden space.
In the hidden space a linear discrimination can be performed.
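A classic illustration of this point (my own example, not from the slides): XOR is not linearly separable in input space, but after a Gaussian hidden layer with centers at (0,0) and (1,1) it is.

```python
import numpy as np

X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
t = np.array([0, 1, 1, 0])                 # XOR targets

u1, u2 = np.zeros(2), np.ones(2)
sigma2 = 0.5                               # sigma^2

def phi(x, u):
    return np.exp(-np.sum((x - u) ** 2) / (2 * sigma2))

H = np.array([[phi(x, u1), phi(x, u2)] for x in X])
print(H)
# Class 0 maps to ~(1.00, 0.14) and ~(0.14, 1.00); class 1 maps twice to
# ~(0.37, 0.37), so the line h1 + h2 = 0.9 separates the classes linearly.
```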

31 [Figure]

32 Summary:
Back-Propagation
Stochastic Back-Propagation Algorithm
Step-by-Step Example
Radial Basis-Function Networks
Gaussian response function
Location of the centers u
Determining sigma
Why does an RBF network work?

33 Bibliography
Wasserman, P. D., Advanced Methods in Neural Computing, New York: Van Nostrand Reinhold, 1993.
Haykin, S., Neural Networks: A Comprehensive Foundation, Second edition, Prentice Hall, 1999.

34 Support Vector Machines

