
1
6. Radial-basis function (RBF) networks
RBF = radial-basis function: a function which depends only on the radial distance from a point.
The XOR problem: not linearly separable, but quadratically separable.

2
So RBFs are functions taking the form

φ(||x − x_i||)

where φ is a nonlinear activation function, x is the input and x_i is the i-th position, prototype, basis or centre vector. The idea is that points near the centres will have similar outputs, i.e. if x ≈ x_i then φ(x) ≈ φ(x_i), since they should have similar properties. Therefore, instead of looking at the data points themselves, characterise the data by their distances from the prototype vectors (similar to kernel density estimation).

3
For example, the simplest form of φ is the identity function, φ(x) = x. With centres x_1 = (0,1) and x_2 = (1,0.5), each point is characterised by its distances d_1, d_2 from the centres:

x       d_1   d_2
(0,0)   1     1.1
(1,1)   1     0.5
(0,1)   0     1.1
(1,0)   1.4   0.5

Now use the distances as the inputs to a network and form a weighted sum of these.

4
Can be viewed as a two-layer network:
- input y; hidden layer computes φ_j(y) = φ(||y − x_j||) for each centre x_j
- output = Σ w_i φ_i(y)
- adjustable parameters are the weights w_j
- number of hidden units = number of prototype vectors
- form of the basis functions decided in advance

5
Use a weighted sum of the outputs from the basis functions for e.g. classification, density estimation etc. The theory can be motivated in many ways (regularisation, Bayesian classification, kernel density estimation, noisy interpolation etc), but all suggest that the basis functions are set so as to represent the data. Thus the centres can be thought of as prototypes of the input data.

MLP vs RBF: an MLP forms a distributed representation, an RBF a local one.

6
E.g. Bayesian interpretation: if we choose the basis functions to model the class-conditional densities, φ_k(x) = p(x|C_k), and choose appropriate weights, then we can interpret the outputs O_k as the posterior probabilities:

O_k = P(C_k|x) ∝ p(x|C_k) P(C_k)

7
Starting point: exact interpolation. Each input pattern x must be mapped onto a target value d.

8
That is, given a set of N vectors x_i and a corresponding set of N real numbers d_i (the targets), find a function F that satisfies the interpolation condition:

F(x_i) = d_i  for i = 1, ..., N

or, more exactly, find

F(x) = Σ_{i=1..N} w_i φ(||x − x_i||)

satisfying F(x_i) = d_i for all i.

9
Example: XOR problem

x       d
(0,0)   0
(1,1)   0
(0,1)   1
(1,0)   1

Exact interpolation: an RBF is placed at the position of each pattern vector, using 1) a linear RBF, φ(r) = r.

10
Network structure: i.e. 4 hidden units in the network, one per pattern vector, combined by the weights w.

11
Results: with the patterns ordered (0,0), (1,1), (0,1), (1,0), the interpolation equation ΦW = D is

| 0   √2  1   1  | |w1|   |0|
| √2  0   1   1  | |w2| = |0|
| 1   1   0   √2 | |w3|   |1|
| 1   1   √2  0  | |w4|   |1|

which gives w1 = w2 = 1 and w3 = w4 = −1/√2.

12
i.e. F(x1, x2) = sqrt(x1² + x2²) + sqrt((x1−1)² + (x2−1)²) − (1/√2) sqrt((x1−1)² + x2²) − (1/√2) sqrt(x1² + (x2−1)²)

and the general solution is:

F(x) = Σ_{i=1..N} w_i φ(||x − x_i||)

13
Equivalently, for N vectors we get the matrix equation ΦW = D:

| φ(||x1 − x1||)  ...  φ(||x1 − xN||) | |w1|   |d1|
|      ...        ...       ...       | |..| = |..|
| φ(||xN − x1||)  ...  φ(||xN − xN||) | |wN|   |dN|

Φ is the interpolation matrix, W the weight vector and D the target vector; φ(||x_i − x_j||) is a scalar function of the distance between the vectors x_i and x_j.

14
If Φ is invertible we have a unique solution of the above equation.

Micchelli's Theorem: let x_i, i = 1, ..., N be a set of distinct points in R^d. Then the N-by-N interpolation matrix Φ, whose ji-th element is φ(||x_i − x_j||), is nonsingular.

So provided φ is one of the suitable functions covered by the theorem (e.g. the Gaussian or multiquadric functions below), the interpolation matrix will have an inverse, and there exist weights achieving exact interpolation.

15
It is easy to see that there is always a solution. For instance, if we take φ(x − y) = 1 if x = y and 0 otherwise (e.g. a Gaussian with very small σ), then setting w_i = d_i solves the interpolation problem. However, this is a bit trivial, as the only general conclusion about the input space is that the training data points are different.

16
To summarize, for a given data set containing N points (x_i, d_i), i = 1, ..., N:
1. Choose an RBF function φ
2. Calculate φ(||x_j − x_i||)
3. Obtain the matrix Φ
4. Solve the linear equation ΦW = D
5. Get the unique solution W = Φ⁻¹D
Done!

Like MLPs, RBFNs can be shown to be able to approximate any function to arbitrary accuracy (using an arbitrarily large number of basis functions). Unlike MLPs, however, they have the best-approximation property, i.e. there exists an RBFN with minimum approximation error.
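The five steps above can be sketched in a few lines of NumPy. This is an illustrative sketch, not the lecture's own code: it assumes the Gaussian RBF with σ = 1 and the XOR data from the later slides, and the names `phi` and `F` are ours.

```python
import numpy as np

# Exact interpolation on the XOR data: one Gaussian RBF (sigma = 1)
# centred on each of the four training points.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])  # inputs = centres
d = np.array([0., 1., 1., 0.])                          # XOR targets

def phi(r, sigma=1.0):
    """Gaussian radial basis function of the distance r."""
    return np.exp(-r**2 / (2 * sigma**2))

# Steps 2-3: interpolation matrix Phi[i, j] = phi(||x_i - x_j||)
Phi = phi(np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1))

# Steps 4-5: solve Phi W = D for the unique weights
w = np.linalg.solve(Phi, d)

def F(x):
    """The interpolant F(x) = sum_i w_i * phi(||x - x_i||)."""
    return w @ phi(np.linalg.norm(X - x, axis=-1))

print(w.round(2))          # two weights near 3.42, two near -3.04
print([F(x) for x in X])   # reproduces the four targets
```

Since Φ is nonsingular by Micchelli's theorem, `np.linalg.solve` returns the unique weight vector, and the network reproduces all four targets exactly.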

17
Other types of RBFs include:
(a) Multiquadrics: φ(r) = (r² + c²)^(1/2) for some c > 0
(b) Inverse multiquadrics: φ(r) = 1/(r² + c²)^(1/2) for some c > 0
(c) Gaussian: φ(r) = exp(−r²/2σ²) for some σ > 0

18
Inverse multiquadric and Gaussian RBFs are both examples of localized functions; multiquadric RBFs are nonlocalized functions. The linear activation function has some undesirable properties, e.g. φ(||x − x_i||) = 0 at the centres. (NB φ is still a non-linear function, as it is only piecewise linear in x.)
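The three profiles above are easy to sketch directly; the comments mark which are localized. A minimal illustration (the function names and the default c = σ = 1 are our choices, not from the slides):

```python
import numpy as np

# The three basis-function profiles from the slide, as functions of distance r.
def multiquadric(r, c=1.0):
    return np.sqrt(r**2 + c**2)            # grows with r: nonlocalized

def inverse_multiquadric(r, c=1.0):
    return 1.0 / np.sqrt(r**2 + c**2)      # decays with r: localized

def gaussian(r, sigma=1.0):
    return np.exp(-r**2 / (2 * sigma**2))  # decays with r: localized

r = np.array([0.0, 1.0, 5.0])
print(multiquadric(r))          # increasing in r
print(inverse_multiquadric(r))  # decreasing in r
print(gaussian(r))              # decreasing in r
```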

19
Localized: as distance from the centre increases the output of the RBF decreases

20
Nonlocalized: as distance from the centre increases the output of the RBF increases

21
Example: XOR problem

x       d
(0,0)   0
(1,1)   0
(0,1)   1
(1,0)   1

Exact interpolation: an RBF is placed at the position of each pattern vector, using 2) a Gaussian RBF with σ = 1.

22
Network structure: i.e. 4 hidden units in the network, one per pattern vector, combined by the weights w.

23
Results: with the patterns ordered (0,0), (0,1), (1,0), (1,1), the interpolation equation ΦW = D is

| exp(0)    exp(-.5)  exp(-.5)  exp(-1)  | |w1|   |0|
| exp(-.5)  exp(0)    exp(-1)   exp(-.5) | |w2| = |1|
| exp(-.5)  exp(-1)   exp(0)    exp(-.5) | |w3|   |1|
| exp(-1)   exp(-.5)  exp(-.5)  exp(0)   | |w4|   |0|

which gives w1 = w4 ≈ −3.04 and w2 = w3 ≈ 3.42.

24
2) f(x1, x2) ≈ −3.04 exp(−(x1² + x2²)/2) + 3.42 exp(−((x1−1)² + x2²)/2) + 3.42 exp(−(x1² + (x2−1)²)/2) − 3.04 exp(−((x1−1)² + (x2−1)²)/2)

1) f(x1, x2) = sqrt(x1² + x2²) − (1/√2) sqrt((x1−1)² + x2²) − (1/√2) sqrt(x1² + (x2−1)²) + sqrt((x1−1)² + (x2−1)²)

25
Large width: σ = 1

26
Small width: σ = 0.2

27
Problems with exact interpolation: it can produce poor generalisation performance, as only the data points constrain the mapping (the overfitting problem).

Bishop (1995) example: the underlying function f(x) = sin(2πx) was sampled randomly at 30 points, with Gaussian noise added to each data point. With 30 data points and 30 hidden RBF units, the network fits all the data points but creates oscillations, due to the added noise and to being unconstrained between the data points.

28
Figures: the exact fit through all data points, vs a smoother fit using only 5 basis functions.

29
To fit an RBF to every data point is very inefficient, due to the computational cost of matrix inversion, and is very bad for generalisation, so: use fewer RBFs than data points, i.e. M < N.

30
In d dimensions, for each RBF we have: a scalar width σ (1 parameter), a centre (d parameters), and a general covariance matrix (d(d+1)/2 parameters).

31
6. Radial-basis function (RBF) networks II: Generalised radial basis function networks
- Exact interpolation is expensive due to the cost of matrix inversion
- Prefer fewer centres (hidden RBF units)
- Centres not necessarily at data points
- Can include biases
- Can have general covariance matrices
- Now no longer exact interpolation, so F(x) = Σ_{i=1..M} w_i φ(||x − x_i||), where M (the number of hidden units) < N

32
Three-layer network:
- input: an n-dimensional vector x = (x1, ..., xN)
- hidden layer of M RBF units computing φ(||x − x_j||)
- output y = Σ w_i φ_i(x)
- adjustable parameters are the weights w_j and the number of hidden units M (< N)

33
Figure: in an RBF network the output F(x) = w1 φ(r1) + w2 φ(r2) depends on the radial distances r1, r2 of x from the two centres, so each hidden unit's response is constant on circles around its centre. In an MLP the output F(x) = w31 sig(w1ᵀx) + w32 sig(w2ᵀx) depends on projections, so each hidden unit's response is constant on the hyperplanes w1ᵀx = k.

34
Comparison of MLP to RBFN

MLP:
- hidden unit outputs are monotonic functions of a weighted linear sum of the inputs => constant on (d−1)-dimensional hyperplanes
- distributed representation, as many hidden units contribute to the network output => interference between units => non-linear training => slow convergence

RBF:
- hidden unit outputs are functions of the distance from a prototype vector (centre) => constant on concentric (d−1)-dimensional hyperellipsoids
- localised hidden units mean that few contribute to the output => lack of interference => faster convergence

35
Comparison of MLP to RBFN

MLP:
- more than one hidden layer
- global supervised learning of all weights
- global approximations to nonlinear mappings

RBF:
- one hidden layer
- hybrid learning, with supervised learning of one set of weights only
- localised approximations to nonlinear mappings

36
Three-layer network:
- input: an n-dimensional vector x = (x1, ..., xN)
- hidden layer of M RBF units computing φ(||x − x_j||)
- output y = Σ w_i φ_i(x)
- adjustable parameters are the weights w_j and the number of hidden units M (< N)

37
Hybrid training of RBFN: a two-stage learning process.

Stage 1: parameterise the hidden layer of RBFs
-- hidden unit number (M)
-- centres/positions (t_i)
-- widths (σ)
Use unsupervised methods (see below), as they are quick and unlabelled data is plentiful. The idea is to estimate the density of the data.

Stage 2: find the weight values between the hidden and output units
-- minimize the sum-of-squares error between the actual outputs and the desired responses
-- invert the matrix Φ if M = N
-- use the pseudoinverse of Φ if M < N

38
Random subset approach: randomly select the centres of the M RBF hidden units from the N data points. The widths of the RBFs are usually common and fixed, to ensure a degree of overlap, based on an average or maximum distance between the RBFs, e.g. σ = d_max/√(2M), where d_max is the maximum distance between the set of M RBF units. The method is efficient and fast, but suboptimal, and it is important to get σ correct.
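The two-stage procedure with random-subset centres can be sketched as below. This is an illustrative sketch on toy data (the noisy sine of Bishop's example, seeded for reproducibility); the names `design` and the choice N = 30, M = 5 are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: noisy samples of sin(2*pi*x), N = 30 points.
N, M = 30, 5
x = rng.uniform(0.0, 1.0, size=N)
d = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.1, size=N)

# Stage 1 (unsupervised): pick M centres as a random subset of the data,
# with a common width from the d_max / sqrt(2M) heuristic on the slide.
centres = rng.choice(x, size=M, replace=False)
d_max = np.max(np.abs(centres[:, None] - centres[None, :]))
sigma = d_max / np.sqrt(2 * M)

# Stage 2 (supervised): least-squares weights via the pseudoinverse of Phi.
def design(xs):
    """N x M design matrix of Gaussian RBF activations."""
    return np.exp(-(xs[:, None] - centres[None, :])**2 / (2 * sigma**2))

w = np.linalg.pinv(design(x)) @ d        # minimises the sum-of-squares error
mse = np.mean((design(x) @ w - d)**2)
print(sigma.round(3), mse.round(3))
```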

39

40
Clustering methods: the K-means algorithm divides the data points into K subgroups based on similarity.

Batch version:
1. Randomly assign each pattern vector x to one of K subsets
2. Compute the mean vector of each subset
3. Reassign each point to the subset with the closest mean vector
4. Until there are no further reassignments, loop back to 2

On-line version:
1. Randomly choose K data points to be the basis centres μ_i
2. As each vector x_n is presented, update the nearest μ_i using: Δμ_i = η(x_n − μ_i)
3. Repeat until no further changes
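The batch version can be sketched as follows. One assumption to flag: instead of the slide's step 1 (randomly partitioning the data), this sketch initialises the means from K randomly chosen data points, as in the on-line version, which avoids empty subsets on toy data; the function name `kmeans_batch` is ours.

```python
import numpy as np

def kmeans_batch(X, K, n_iter=100, seed=0):
    """Batch K-means: assign points to nearest mean, recompute means, repeat."""
    rng = np.random.default_rng(seed)
    # Initialise from K randomly chosen data points (assumption, see lead-in)
    means = X[rng.choice(len(X), size=K, replace=False)].copy()
    for _ in range(n_iter):
        # Step 3: assign each point to the subset with the closest mean
        dists = np.linalg.norm(X[:, None] - means[None, :], axis=-1)
        labels = np.argmin(dists, axis=1)
        # Step 2: recompute the mean vector of each subset
        new_means = np.array([X[labels == k].mean(axis=0) for k in range(K)])
        if np.allclose(new_means, means):   # step 4: stop when nothing moves
            break
        means = new_means
    return means, labels

# Two well-separated blobs: the means should land near the blob centres
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, (20, 2)), rng.normal(3.0, 0.1, (20, 2))])
means, labels = kmeans_batch(X, K=2)
print(np.sort(means[:, 0]).round(1))   # near the two blob centres
```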

41
The covariance matrices (Σ_i) can now be set to the covariance of the data points of each subset.
-- However, note that K must be decided at the start
-- Also, the algorithm can be sensitive to initial conditions
-- Can get problems of no/few points being in a subset: see the competitive learning lecture
-- Might not cover the space accurately

Other unsupervised techniques, such as self-organising maps and Gaussian mixture models, can also be used. Another approach is to use supervised techniques, where the parameters of the basis functions are adaptive and can be optimised. However, this negates the speed and simplicity advantages of the first stage of training.

42
Relationship with probability density function estimation: radial basis functions can be related to the kernel density estimators (Parzen windows) used to estimate probability density functions. E.g. in 2 dimensions the pdf at a point x can be estimated from the fraction of the N training points which fall within a square of side h centred on x:

p(x) = (1/N) · (1/h²) · Σ_n H(x − x_n, h), where H = 1 if x_n lies inside the square (each component of |x_n − x| less than h/2), and 0 otherwise

(1/6 in the slide's six-point example), i.e. estimate the density by the fraction of points within each square. Alternatively, H(|x_n − x|) could be a Gaussian, giving a smoother estimate of the pdf.
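Both the hard-square and the smoother Gaussian-kernel estimates can be sketched directly. The function names `parzen_2d` and `parzen_2d_gauss` and the four-point toy data are ours; the hard version uses the side-h square (each coordinate within h/2 of x), which matches the 1/h² normalisation.

```python
import numpy as np

def parzen_2d(x, data, h):
    """Parzen-window pdf estimate at x: fraction of the points falling in a
    square of side h centred on x, divided by the square's area h^2."""
    inside = np.all(np.abs(data - x) < h / 2, axis=1)   # hard 'top-hat' kernel
    return inside.sum() / (len(data) * h * h)

def parzen_2d_gauss(x, data, h):
    """Smoother estimate: replace the top-hat with a Gaussian kernel."""
    r2 = np.sum((data - x)**2, axis=1)
    return np.mean(np.exp(-r2 / (2 * h**2)) / (2 * np.pi * h**2))

# Three points clustered near the origin plus one far away
data = np.array([[0., 0.], [0.1, 0.], [-0.1, 0.1], [2., 2.]])
print(parzen_2d(np.zeros(2), data, h=0.5))        # high density near the cluster
print(parzen_2d_gauss(np.zeros(2), data, h=0.5))  # smoother version
```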

43
In radial basis function networks, the first stage of training is an attempt to model the density of the data in an unsupervised way. As in kernel density estimation, we try to get an idea of the underlying density by picking some prototypical points, then use the distribution of the data to approximate a prior distribution.

44
Back to stage 2, for a network with M < N basis vectors. Now for each training data vector t_i and corresponding target d_i we want F(t_i) = d_i; that is, we must find a function F that satisfies the interpolation condition F(t_i) = d_i for i = 1, ..., N. Or, more exactly, find

F(x) = w_0 + Σ_{j=1..M} w_j φ(||x − x_j||)

satisfying F(t_i) = d_i for all i.

45
So the interpolation matrix becomes (the leading column of ones multiplies the bias w_0):

| 1  φ(||t_1 − x_1||)  ...  φ(||t_1 − x_M||) | |w_0|   |d_1|
| .        ...         ...        ...        | |w_1| = | . |
| 1  φ(||t_N − x_1||)  ...  φ(||t_N − x_M||) | |...|   |d_N|
                                              |w_M|

which can be written as ΦW = D, where Φ is now a rectangular N-by-(M+1) matrix (not square).

46
To solve this we need to define an error function, such as the least-squares error

E = Σ_i (d_i − F(t_i))²

and minimise it. As the derivative of the least-squares error is a linear function of the weights, it can be solved using linear matrix inversion techniques (usually singular value decomposition; see Press et al., Numerical Recipes). Other error functions can be used, but minimising the error then becomes a non-linear optimisation problem.

47
However, note that the problem is overdetermined: by using N training vectors and only M centres we have M unknowns (the weights) and N pieces of information.

E.g. training vectors (−2, 0) and (1, 0), targets 1 and 2, centre (0, 0), linear RBF:
ΦW = D => w = 0.5 or w = 2 ???

Unless N = M and there are no degenerate (parallel or nearly parallel) data vectors, we cannot simply invert the matrix and must use the pseudoinverse (via singular value decomposition).
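The slide's tiny example can be worked through numerically: with a linear RBF and centre (0, 0), the two equations 2w = 1 and w = 2 have no common solution, and least squares returns the compromise w = 4/5. A minimal sketch:

```python
import numpy as np

# Overdetermined example from the slide: two training vectors, one centre.
# Linear RBF phi(r) = r with centre (0, 0):
#   phi(||(-2, 0)||) = 2,  phi(||(1, 0)||) = 1
Phi = np.array([[2.0], [1.0]])   # N = 2 equations, M = 1 unknown weight
d = np.array([1.0, 2.0])         # row 1 wants w = 0.5, row 2 wants w = 2

# No exact solution exists; the pseudoinverse / least squares picks the
# compromise w = (Phi^T Phi)^-1 Phi^T d = 4/5
w, residuals, rank, sv = np.linalg.lstsq(Phi, d, rcond=None)
print(w)
```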

48
Alternatively, we can view this as an ill-posed problem (Tikhonov). How do we infer the function F which maps X onto y from a finite data set? This can be done if the problem is well-posed:
- existence: each input pattern has an output
- uniqueness: each input pattern maps onto only one output
- continuity: small changes in input pattern space imply small changes in y

In RBFs, however:
- noise can violate the continuity condition
- different output values for the same input pattern violate uniqueness
- insufficient information in the training data may violate the existence condition

49
Ill-posed problem: the finite data set does not yield a unique solution

50
Regularization theory (Tikhonov, 1963): to solve ill-posed problems we need to supplement the finite data set with prior knowledge about the nature of the mapping. It is common to place the constraint that the mapping is smooth (since smoothness implies continuity), by adding a penalty term for non-smooth mappings to the standard sum-of-squares error:

E(F) = E_S(F) + λ E_C(F)

where e.g. E_S(F) = 1/2 Σ_i (d_i − F(x_i))² and E_C(F) = 1/2 ||DF||², and DF could be, say, the first- or second-order derivative of F, etc.

51
λ is called the regularization parameter:
-- λ = 0: unconstrained (smoothness not enforced)
-- λ = infinity: the smoothness constraint dominates, and less account is taken of the training-data error
λ controls the balance (trade-off) between a smooth mapping and fitting the data points exactly.

52
E_C = curvature. Figures: λ = 0 vs λ = 40.

53
Regularization networks: Poggio & Girosi (1990) applied regularization theory to RBF networks. By minimizing the new error function E(F) we obtain (using results from functional analysis)

(Φ + λI) W = D

where I is the identity matrix. Provided E_C is chosen to be quadratic in y, this equation can be solved using the same techniques as for the non-regularised network.
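The effect of λ is easy to see numerically on the Gaussian-RBF XOR network from the earlier slides. An illustrative sketch (λ values are arbitrary choices):

```python
import numpy as np

# Regularised weights for the Gaussian-RBF (sigma = 1) XOR network.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
D = np.array([0., 1., 1., 0.])
Phi = np.exp(-np.linalg.norm(X[:, None] - X[None, :], axis=-1)**2 / 2)

for lam in (0.0, 0.1, 10.0):
    W = np.linalg.solve(Phi + lam * np.eye(len(X)), D)   # (Phi + lambda*I) W = D
    print(lam, np.round(Phi @ W, 3), np.round(np.linalg.norm(W), 2))
# lam = 0 reproduces the targets exactly; larger lam shrinks the weights,
# trading data fit for smoothness.
```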

54
Problems of RBFs:
1. Need to choose the number of basis functions.
2. Due to the local nature of the basis functions, RBFs have problems ignoring noisy input dimensions, unlike MLPs (it helps to use dimensionality reduction such as PCA). E.g. 1-D data needs M RBFs; the same data with an added uncorrelated-noise dimension needs ~M² RBFs.

55
Problems of RBFs 2:
3. The optimal choice of basis function parameters may not be optimal for the output task. E.g. for data from a function h, density estimation may place an RBF at a point a, which gives a bad representation of h; in contrast, one centred at b would be perfect.

56
Problems of RBFs 3:
4. Because of the dependence on distance, if the variation in one input variable is small with respect to the others, it will contribute very little to the outcome: (l + ε)² ≈ l². Therefore, preprocess the data to give zero mean and unit variance via the simple transformation:

x* = (x − μ)/σ

(We could achieve the same effect using general covariance matrices, but this is simpler.)

57
However, this does not take into account correlations in the data. It is better to use whitening (Bishop, 1995).

58
x* = Λ^(−1/2) Uᵀ (x − μ)

where U is a matrix whose columns are the eigenvectors u_i of Σ, the covariance matrix of the data, and Λ is a matrix with the corresponding eigenvalues λ_i on the diagonal, i.e.:

U = (u_1, ..., u_n)  and  Λ = diag(λ_1, ..., λ_n)
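The whitening transform above can be sketched directly with an eigendecomposition of the sample covariance. This is an illustrative sketch on synthetic correlated data; the function name `whiten` and the mixing matrix `A` are ours.

```python
import numpy as np

def whiten(X):
    """Whitening transform x* = Lambda^(-1/2) U^T (x - mu)."""
    mu = X.mean(axis=0)
    Sigma = np.cov(X, rowvar=False)      # covariance matrix of the data
    eigvals, U = np.linalg.eigh(Sigma)   # columns of U are the eigenvectors u_i
    # Rotate onto the eigenvectors, then rescale each direction by 1/sqrt(lambda_i)
    return (X - mu) @ U / np.sqrt(eigvals)

# Correlated 2-D data: after whitening, the sample covariance is the identity
rng = np.random.default_rng(0)
A = np.array([[2.0, 0.0], [1.5, 0.5]])
X = rng.normal(size=(5000, 2)) @ A.T
Xw = whiten(X)
print(np.round(np.cov(Xw, rowvar=False), 2))   # approximately the identity
```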

59
Using RBF nets in practice:
- Choose a functional form (Gaussian generally, but prior knowledge/experience may suggest others)
- Select the type of pre-processing:
  -- reduce dimensionality (techniques to follow in the next few lectures)?
  -- normalise (whiten) the data?
  (There is no way of knowing in advance whether these will help: you may need to try a few combinations.)
- Select a clustering method (e.g. k-means)
- Select the number of basis functions, cluster, and find the basis centres
- Find the weights (via matrix inversion)
- Calculate the performance measure

60
If only life were so simple... How do we choose k? (Similar to the problem of selecting the number of hidden nodes for an MLP.) What type of pre-processing is best? Does the clustering method work for the data? E.g. it might be better to fix σ and try again. There is NO general answer: each choice will be problem-specific. The only information you have is your performance measure.

61
Note the dependence on the performance measure (make sure it's a good one). A good thing about RBF nets is that the training procedure is relatively quick, so lots of combinations can be tried. Idea: try e.g. increasing k until the performance measure stops improving (or reaches a minimum, or something more adventurous), and take that as the optimal k.
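The try-increasing-k idea can be sketched as a simple loop. This is a hypothetical setup, not from the slides: noisy sine data split into training and validation sets, with validation MSE as the performance measure; the names `validation_error` and the fixed σ = 0.2 are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: noisy sine data with a held-out validation set, so the
# performance measure is validation error rather than training error.
x = rng.uniform(0.0, 1.0, size=60)
d = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.1, size=60)
xt, dt, xv, dv = x[:40], d[:40], x[40:], d[40:]

def validation_error(k, sigma=0.2):
    """Fit an RBF net with k random centres; return the validation MSE."""
    centres = xt[rng.choice(len(xt), size=k, replace=False)]
    design = lambda xs: np.exp(-(xs[:, None] - centres[None, :])**2
                               / (2 * sigma**2))
    w = np.linalg.pinv(design(xt)) @ dt      # stage-2 least-squares weights
    return np.mean((design(xv) @ w - dv)**2)

# Sweep k and watch the performance measure
for k in (1, 2, 5, 10, 20):
    print(k, round(float(validation_error(k)), 3))
```

Because the centres are a random subset, rerunning with different seeds (or averaging several runs per k) gives a more reliable picture before committing to a k.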
