1 NEURAL NETWORKS – Radial Basis Function Networks II. Prepared by PROF. DR. YUSUF OYSAL

2 Weight Optimization
Weights are computed by means of the least squares estimation method.
– For an example (x_i, t_i), consider the output of the network: y(x_i) = Σ_j w_j φ_j(||x_i − μ_j||).
– We would like y(x_i) = t_i for each example, that is: Σ_j w_j φ_j(||x_i − μ_j||) = t_i.
– This can be re-written in matrix form for all examples at the same time as Φw = t, where Φ_ij = φ_j(||x_i − μ_j||), whose least squares solution is w = (Φ^T Φ)^(−1) Φ^T t = Φ^+ t.
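As an illustrative sketch of this least squares step (not taken from the slides), assuming Gaussian basis functions φ_j(x) = exp(−||x − μ_j||² / (2σ²)) and a pseudo-inverse solution; the helper names rbf_design_matrix and fit_weights are made up for this example.

import numpy as np

def rbf_design_matrix(X, centers, spread):
    """Gaussian RBF activations: Phi[i, j] = exp(-||x_i - mu_j||^2 / (2 * spread^2))."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * spread ** 2))

def fit_weights(X, t, centers, spread):
    """Least squares solution of Phi w = t via the pseudo-inverse."""
    Phi = rbf_design_matrix(X, centers, spread)
    return np.linalg.pinv(Phi) @ t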

3 Supervised RBF Network Training
Hybrid Learning Process:
– Clustering for finding the centers.
– Spreads chosen by normalization.
– Least Squares Estimation for finding the weights.
or
– Apply the gradient descent method for finding centers, spread and weights, by minimizing the (instantaneous) squared error E = ½ (t_i − y(x_i))², updating each parameter in the negative gradient direction:
– centers: μ_j ← μ_j − η_μ ∂E/∂μ_j
– spread: σ ← σ − η_σ ∂E/∂σ
– weights: w_j ← w_j − η_w ∂E/∂w_j
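As a concrete sketch of the gradient descent alternative (not from the slides), assuming a single output, Gaussian basis functions with one shared spread, and one training example per update; the function name sgd_step and the learning rate value are illustrative.

import numpy as np

def sgd_step(x, t, centers, spread, w, lr=0.01):
    """One stochastic gradient step on E = 0.5 * (t - y(x))**2 for a Gaussian RBF net."""
    diff = x[None, :] - centers                  # (m, d) differences x - mu_j
    d2 = (diff ** 2).sum(axis=1)                 # squared distances ||x - mu_j||^2
    phi = np.exp(-d2 / (2.0 * spread ** 2))      # hidden activations
    y = phi @ w                                  # network output
    e = t - y                                    # error signal

    # Gradients of E with respect to weights, centers and the shared spread.
    grad_w = -e * phi
    grad_c = -e * (w * phi)[:, None] * diff / spread ** 2
    grad_s = -e * np.sum(w * phi * d2) / spread ** 3

    return centers - lr * grad_c, spread - lr * grad_s, w - lr * grad_w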

4 Example: The XOR Problem

5 Example: The XOR Problem
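The worked figures from slides 4 and 5 did not survive the transcript. As a minimal sketch of the classic XOR construction, assuming two Gaussian units centred at (0,0) and (1,1), a spread chosen so that φ(x) = exp(−||x − μ||²), a bias column, and the least squares fit from slide 2; the specific numbers are illustrative.

import numpy as np

# XOR inputs and targets.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
t = np.array([0., 1., 1., 0.])

# Classic choice of centres for the XOR example: mu1 = (0,0), mu2 = (1,1).
centers = np.array([[0., 0.], [1., 1.]])
spread = 1.0 / np.sqrt(2.0)               # assumed spread so that phi = exp(-||x - mu||^2)

d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
Phi = np.exp(-d2 / (2.0 * spread ** 2))
Phi = np.hstack([Phi, np.ones((4, 1))])   # append a bias column

w = np.linalg.pinv(Phi) @ t               # least squares output weights
print(np.round(Phi @ w, 3))               # approximately [0, 1, 1, 0]

The two Gaussian units map the four XOR points into a space where the classes are linearly separable, so a linear output layer can solve the problem exactly.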

6 Comparison of RBF Networks with MLPs
Similarities:
1. They are both non-linear feed-forward networks.
2. They are both universal approximators.
3. They are used in similar application areas.
[Slide diagram: input units → RBF units (non-linear hidden layer, trained unsupervised) → output units (linear, trained supervised).]

7 Comparison of RBF Networks with MLPs
Differences:
1. An RBF network (in its natural form) has a single hidden layer, whereas MLPs can have any number of hidden layers.
2. RBF networks are usually fully connected, whereas it is common for MLPs to be only partially connected.
3. In MLPs the computation nodes (processing units) in different layers share a common neuronal model, though not necessarily the same activation function. In RBF networks the hidden nodes (basis functions) operate very differently, and serve a very different purpose, from the output nodes.
4. In RBF networks, the argument of each hidden unit activation function is the distance between the input and the "weights" (RBF centres), whereas in MLPs it is the inner product of the input and the weights.
5. MLPs are usually trained with a single global supervised algorithm, whereas RBF networks are usually trained one layer at a time, with the first layer unsupervised.
6. MLPs construct global approximations to non-linear input-output mappings with distributed hidden representations, whereas RBF networks tend to use localised non-linearities (Gaussians) at the hidden layer to construct local approximations.
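Difference 4 is easy to see in code. The snippet below is a small illustrative sketch (not from the slides), assuming a Gaussian RBF hidden unit and a tanh MLP hidden unit; the variable names and values are arbitrary.

import numpy as np

x = np.array([0.2, 0.7])          # an input vector
p = np.array([0.5, 0.5])          # "weights": an RBF centre / an MLP weight vector
spread, bias = 1.0, 0.0           # assumed hyperparameters

# RBF hidden unit: activation depends on the DISTANCE between input and centre.
rbf_activation = np.exp(-np.sum((x - p) ** 2) / (2 * spread ** 2))

# MLP hidden unit: activation depends on the INNER PRODUCT of input and weights.
mlp_activation = np.tanh(x @ p + bias)

print(rbf_activation, mlp_activation)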

