Introduction to Radial Basis Function Networks
Lecturer: 虞台文
Content Overview
- The Model of a Function Approximator
- The Radial Basis Function Networks
- RBFN's for Function Approximation
- Learning the Kernels
- Model Selection
Introduction to Radial Basis Function Networks Overview
Typical Applications of NN Pattern Classification Function Approximation Time-Series Forecasting
Function Approximation: an unknown function f is modeled by an approximator f̂.
Supervised Learning: the unknown function and the neural network receive the same inputs; the difference between their outputs (the error) drives the network's learning.
Neural Networks as Universal Approximators
Feedforward neural networks with a single hidden layer of sigmoidal units are capable of approximating uniformly any continuous multivariate function, to any desired degree of accuracy.
Hornik, K., Stinchcombe, M., and White, H. (1989). "Multilayer Feedforward Networks are Universal Approximators," Neural Networks, 2(5), 359-366.
Like feedforward neural networks with a single hidden layer of sigmoidal units, RBF networks can be shown to be universal approximators.
Park, J. and Sandberg, I. W. (1991). "Universal Approximation Using Radial-Basis-Function Networks," Neural Computation, 3(2), 246-257.
Park, J. and Sandberg, I. W. (1993). "Approximation and Radial-Basis-Function Networks," Neural Computation, 5(2), 305-316.
Statistics vs. Neural Networks

  Statistics              Neural Networks
  ----------              ---------------
  model                   network
  estimation              learning
  regression              supervised learning
  interpolation           generalization
  observations            training set
  parameters              (synaptic) weights
  independent variables   inputs
  dependent variables     outputs
  ridge regression        weight decay
Introduction to Radial Basis Function Networks
The Model of a Function Approximator
Linear Models: a weighted sum of fixed basis functions, y(x) = Σ_{j=1}^{m} w_j φ_j(x). Only the weights are learned; the basis functions are fixed.
Linear Models (network view)
- Output unit: the linearly weighted output y = Σ_{j=1}^{m} w_j φ_j(x), with weights w_1, …, w_m.
- Hidden units: the basis functions φ_1, …, φ_m perform decomposition / feature extraction / transformation.
- Inputs: the feature vector x = (x_1, x_2, …, x_n).
Can you name some bases?
Example Linear Models: are they orthogonal bases?
- Polynomial: y(x) = Σ_j w_j x^j
- Fourier series: y(x) = Σ_k (a_k cos kx + b_k sin kx)
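The two example bases can be sketched as fixed-basis linear models fitted by least squares. The target function and basis sizes below are illustrative choices, not from the slides:

```python
import numpy as np

# A sketch of two classical fixed-basis linear models. The target function
# and basis sizes are illustrative; the weights come from plain least squares.

def poly_basis(x, m):
    """Design matrix of the polynomial basis 1, x, x^2, ..., x^(m-1)."""
    return np.vstack([x**j for j in range(m)]).T

def fourier_basis(x, m):
    """Design matrix of a truncated Fourier basis: 1, sin kx, cos kx."""
    cols = [np.ones_like(x)]
    for k in range(1, m):
        cols.append(np.sin(k * x))
        cols.append(np.cos(k * x))
    return np.vstack(cols).T

x = np.linspace(0.0, 2 * np.pi, 50)
d = np.sin(x) + 0.5 * np.cos(2 * x)       # samples of the target function

for Phi in (poly_basis(x, 8), fourier_basis(x, 3)):
    w, *_ = np.linalg.lstsq(Phi, d, rcond=None)   # least-squares weights
    print(f"{Phi.shape[1]} basis functions, "
          f"max error = {np.max(np.abs(Phi @ w - d)):.2e}")
```

In both cases the model is linear in the weights, so fitting reduces to a linear least-squares problem regardless of how nonlinear the basis functions are.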
Single-Layer Perceptrons as Universal Approximators: with a sufficient number of sigmoidal hidden units, a single-hidden-layer network can be a universal approximator.
Radial Basis Function Networks as Universal Approximators: with a sufficient number of radial-basis-function hidden units, the network can also be a universal approximator.
Non-Linear Models: the basis functions themselves (not just the weights) are adjusted by the learning process.
Introduction to Radial Basis Function Networks The Radial Basis Function Networks
Radial Basis Functions: φ_i(x) = φ(‖x − x_i‖). Three ingredients define a radial function: the center x_i, the distance measure r = ‖x − x_i‖, and the shape of φ.
Typical Radial Functions (with r = ‖x − x_i‖):
- Gaussian: φ(r) = exp(−r²/(2σ²)), σ > 0
- Hardy multiquadric: φ(r) = √(r² + c²), c > 0
- Inverse multiquadric: φ(r) = 1/√(r² + c²), c > 0
Gaussian Basis Function (σ = 0.5, 1.0, 1.5)
Inverse Multiquadric
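The radial functions above can be sketched in their standard forms (r is the distance to the center; σ and c are shape parameters):

```python
import numpy as np

# A sketch of the typical radial functions in their standard forms
# (r is the distance to the center; sigma and c are shape parameters).

def gaussian(r, sigma=1.0):
    return np.exp(-r**2 / (2.0 * sigma**2))

def multiquadric(r, c=1.0):               # Hardy's multiquadric
    return np.sqrt(r**2 + c**2)

def inverse_multiquadric(r, c=1.0):
    return 1.0 / np.sqrt(r**2 + c**2)

r = np.linspace(0.0, 3.0, 7)
for sigma in (0.5, 1.0, 1.5):             # the widths shown on the slide
    print(f"gaussian (sigma={sigma}):", np.round(gaussian(r, sigma), 3))
print("multiquadric:        ", np.round(multiquadric(r), 3))
print("inverse multiquadric:", np.round(inverse_multiquadric(r), 3))
```

Note that the Gaussian and inverse multiquadric are localized (they decay with distance), while the multiquadric grows with distance.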
Most General RBF: the basis {φ_i : i = 1, 2, …} is 'nearly' orthogonal.
Properties of RBF's: on-center, off-surround response; analogies with the localized receptive fields found in several biological structures, e.g., the visual cortex and ganglion cells.
The Topology of RBF (as a function approximator): inputs are feature vectors x_1, …, x_n; the hidden units perform a projection; the output units y_1, …, y_m perform interpolation.
The Topology of RBF (as a pattern classifier): inputs are feature vectors x_1, …, x_n; the hidden units represent subclasses; the output units y_1, …, y_m represent classes.
Introduction to Radial Basis Function Networks RBFN’s for Function Approximation
The Idea
1. Start from training data sampled from an unknown function y = f(x).
2. Place basis functions (kernels) over the input space.
3. Learn the weights so that the weighted sum of kernels fits the training data (the function learned).
4. Evaluate the learned function at non-training samples to generalize.
Radial Basis Function Networks as Universal Approximators
Training set: T = {(x_k, d_k) : k = 1, …, p}. Network output: y(x) = Σ_{j=1}^{m} w_j φ_j(x). Goal: y(x_k) = d_k for all k.
Learn the Optimal Weight Vector
Given the training set, find the weights w_1, …, w_m such that y(x_k) = Σ_{j=1}^{m} w_j φ_j(x_k) ≈ d_k for all k.
Regularization: minimize the sum-of-squares error plus a weight penalty,
E(w) = Σ_{k=1}^{p} (d_k − y(x_k))² + λ Σ_{j=1}^{m} w_j².
If regularization is unneeded, set λ = 0. Goal: y(x_k) ≈ d_k for all k.
Learn the Optimal Weight Vector
Minimize E(w) = Σ_{k=1}^{p} (d_k − Σ_{j=1}^{m} w_j φ_j(x_k))² + λ Σ_{j=1}^{m} w_j².
Define the design matrix Φ with entries Φ_kj = φ_j(x_k), the weight vector w = (w_1, …, w_m)ᵀ, and the target vector d = (d_1, …, d_p)ᵀ, so that E(w) = ‖d − Φw‖² + λ‖w‖².
Setting ∇E = 0 gives the normal equations (ΦᵀΦ + λI) w* = Φᵀd, hence
w* = A⁻¹Φᵀd, where A = ΦᵀΦ + λI.
Φ is the design matrix; A⁻¹ is the variance matrix.
Summary: from the training set {(x_k, d_k)}, build the design matrix Φ and compute the optimal weights w* = (ΦᵀΦ + λI)⁻¹Φᵀd.
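The summary above can be sketched directly in code. This is a minimal illustration of the closed-form solution w* = (ΦᵀΦ + λI)⁻¹Φᵀd, assuming Gaussian kernels whose centers and widths have already been fixed; the data and sizes are illustrative:

```python
import numpy as np

# A minimal sketch of the closed-form weight solution
# w* = (Phi^T Phi + lambda*I)^(-1) Phi^T d, assuming Gaussian kernels
# whose centers and widths are fixed. Data and sizes are illustrative.

def design_matrix(X, centers, sigma):
    """Phi[k, j] = exp(-||x_k - mu_j||^2 / (2 sigma^2))."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma**2))

def fit_weights(Phi, d, lam=0.0):
    """Regularized least squares; lam = 0 gives plain least squares."""
    A = Phi.T @ Phi + lam * np.eye(Phi.shape[1])   # A = Phi^T Phi + lambda I
    return np.linalg.solve(A, Phi.T @ d)

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(40, 1))
d = np.sin(3 * X[:, 0])                       # targets of an "unknown" function
centers = np.linspace(-1, 1, 10)[:, None]     # fixed kernel centers
Phi = design_matrix(X, centers, sigma=0.3)
w = fit_weights(Phi, d, lam=1e-6)
print("max training error:", np.max(np.abs(Phi @ w - d)))
```

Solving the normal equations with `np.linalg.solve` avoids forming A⁻¹ explicitly, which is both cheaper and numerically safer than an explicit inverse.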
Introduction to Radial Basis Function Networks Learning the Kernels
RBFN's as Universal Approximators: a network with inputs x_1, …, x_n, kernels φ_1, …, φ_l, weights w_ij, and outputs y_1, …, y_m, trained on the set {(x_k, d_k)}.
What to Learn?
- Weights w_ij
- Centers μ_j of the φ_j's
- Widths σ_j of the φ_j's
- Number of φ_j's (model selection)
One-Stage Learning
One-Stage Learning: the simultaneous update of all three sets of parameters (weights, centers, widths) may be suitable for non-stationary environments or on-line settings.
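One-stage learning can be sketched as batch gradient descent that updates weights, centers, and widths simultaneously. This assumes Gaussian kernels and scalar inputs; the target function, learning rates, and initialization are illustrative choices, not from the slides:

```python
import numpy as np

# A minimal sketch of one-stage learning with Gaussian kernels and scalar
# inputs: batch gradient descent updates the weights w_j, centers mu_j,
# and widths sigma_j simultaneously. All constants are illustrative.

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=60)
d = np.sin(3 * X)                      # samples of an "unknown" function
N = len(X)

m = 8
mu = np.linspace(-1, 1, m)             # kernel centers
sigma = np.full(m, 0.4)                # kernel widths
w = np.zeros(m)                        # output weights
eta_w, eta_k = 0.3, 0.1                # step sizes (weights vs. kernels)

for _ in range(3000):
    diff = X[:, None] - mu             # shape (N, m)
    Phi = np.exp(-diff**2 / (2 * sigma**2))
    e = d - Phi @ w                    # residuals
    # gradients of the mean squared error for each parameter set
    gw = Phi.T @ e / N
    gmu = w * ((Phi * diff).T @ e) / (N * sigma**2)
    gsig = w * ((Phi * diff**2).T @ e) / (N * sigma**3)
    w += eta_w * gw
    mu += eta_k * gmu
    sigma = np.maximum(sigma + eta_k * gsig, 0.05)  # keep widths positive

Phi = np.exp(-(X[:, None] - mu) ** 2 / (2 * sigma**2))
rmse = np.sqrt(np.mean((d - Phi @ w) ** 2))
print("final RMSE:", rmse)
```

The center and width updates follow the chain rule through the Gaussian kernel; clamping the widths away from zero is a practical stabilizer, since the gradient alone can drive them degenerate.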
Two-Stage Training
- Step 1: determine the kernels, i.e., the centers μ_j, the widths σ_j, and the number of the φ_j's.
- Step 2: determine the weights w_ij, e.g., using batch learning.
Train the Kernels
Unsupervised Training
Unsupervised Training methods for placing the kernels:
- Random subset selection
- Clustering algorithms (e.g., k-means)
- Mixture models (e.g., Gaussian mixtures fitted by EM)
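The two-stage procedure can be sketched with one of the unsupervised methods above. Step 1 uses a small k-means implementation to pick the centers and a heuristic width (the mean inter-center spacing); Step 2 solves for the weights by regularized least squares. All data, sizes, and the width rule are illustrative assumptions:

```python
import numpy as np

# A minimal sketch of two-stage training with Gaussian kernels.
# Step 1 (unsupervised): k-means centers + a heuristic common width.
# Step 2 (supervised): regularized least squares for the weights.

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means: alternate assignment and mean-update steps."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(80, 1))
d = np.cos(2 * X[:, 0])

centers = kmeans(X, 10)                                   # Step 1: centers
sigma = np.mean(np.diff(np.sort(centers[:, 0])))          # heuristic width
Phi = np.exp(-((X[:, None] - centers[None]) ** 2).sum(-1) / (2 * sigma**2))
A = Phi.T @ Phi + 1e-8 * np.eye(10)
w = np.linalg.solve(A, Phi.T @ d)                         # Step 2: weights
print("max training error:", np.max(np.abs(Phi @ w - d)))
```

Because Step 1 fixes the kernels, Step 2 is a purely linear problem, which is what makes two-stage training fast compared with one-stage gradient descent.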
The Projection Matrix
With the optimal weights, the fitted values are Φw* = ΦA⁻¹Φᵀd. The error vector between the unknown function's samples d and the fit is
e = d − Φw* = Pd, where P = I − ΦA⁻¹Φᵀ
is the projection matrix (symmetric, and idempotent when λ = 0).
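The projection-matrix identities can be checked numerically. This sketch uses a random full-rank design matrix and λ = 0, so A = ΦᵀΦ; the data are illustrative:

```python
import numpy as np

# Numerical check of the projection-matrix identities with lambda = 0:
# for A = Phi^T Phi, the matrix P = I - Phi A^{-1} Phi^T satisfies
# P = P^T (symmetric), P^2 = P (idempotent), and e = P d.

rng = np.random.default_rng(3)
Phi = rng.standard_normal((20, 5))      # a full-rank design matrix
d = rng.standard_normal(20)

A = Phi.T @ Phi
P = np.eye(20) - Phi @ np.linalg.solve(A, Phi.T)
w = np.linalg.solve(A, Phi.T @ d)       # optimal weights
e = d - Phi @ w                         # error (residual) vector

print(np.allclose(P @ P, P))            # idempotent
print(np.allclose(P, P.T))              # symmetric
print(np.allclose(e, P @ d))            # e = P d
```

Geometrically, P projects the target vector d onto the subspace orthogonal to the span of the basis-function columns of Φ, so Pd is exactly the part of d the network cannot represent.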