Last lecture summary

Multilayer perceptron (MLP), the most famous type of neural network. It consists of an input layer, a hidden layer, and an output layer.

Processing by one neuron: the inputs are multiplied by the weights, summed together with the bias, and the result is passed through an activation function to produce the output.
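A minimal sketch of this computation (NumPy; the inputs, weights, and bias below are made-up values, and a logistic activation is assumed):

import numpy as np

def neuron(x, w, b):
    # weighted sum of inputs plus bias, passed through a logistic activation
    z = np.dot(w, x) + b
    return 1.0 / (1.0 + np.exp(-z))

# hypothetical inputs, weights, and bias, purely for illustration
x = np.array([0.5, -1.2, 3.0])
w = np.array([0.4, 0.1, -0.7])
print(neuron(x, w, b=0.2))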

Linear activation functions: linear and threshold. The threshold unit outputs 1 when w∙x > 0 and 0 when w∙x ≤ 0.

Nonlinear activation functions: logistic (sigmoid, unipolar) and tanh (bipolar).
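For reference, the standard definitions (not spelled out on the slides): the logistic function σ(x) = 1 / (1 + e^(−x)) maps to (0, 1), while tanh(x) = (e^x − e^(−x)) / (e^x + e^(−x)) maps to (−1, 1).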

Backpropagation training algorithm
MLP is trained by backpropagation.
Forward pass:
- present a training sample to the neural network
- calculate the error (MSE) in each output neuron
Backward pass:
- first calculate the gradient for the hidden-to-output weights
- then calculate the gradient for the input-to-hidden weights (the hidden-to-output gradient is needed to calculate the input-to-hidden gradient)
- update the weights in the network
Backpropagation is based on steepest descent; β is the learning rate.
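Written out, the steepest-descent update applied to each weight is the standard rule (stated here for completeness, not taken from the slides): w_new = w_old − β · ∂E/∂w, where E is the mean squared error and β the learning rate.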

The input signal propagates forward; the error propagates backward.

Momentum
Online learning vs. batch learning: in online learning, new patterns must be processed as they are introduced; batch learning improves stability by averaging over the training set.
Another averaging approach providing stability is the momentum (μ). μ (between 0 and 1) indicates the relative importance of the past weight change Δw(m−1) for the new weight increment Δw(m).
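A common way to write this update (standard formulation, using the notation above): Δw(m) = −β · ∂E/∂w + μ · Δw(m−1), so the new step is partly the gradient step and partly a repeat of the previous step.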

Other improvements
Delta-Bar-Delta (Turboprop): each weight has its own learning rate β.
Second-order methods: use the Hessian matrix (how fast does the rate of increase of the function change in a small neighbourhood? → curvature). Examples: QuickProp, Gauss-Newton, Levenberg-Marquardt. They need fewer epochs, but are computationally expensive (Hessian inverse, storage).

Improving generalization of MLP
Flexibility comes from the hidden neurons. Choose a number of hidden neurons such that neither underfitting nor overfitting occurs.
Three most common approaches:
- exhaustive search: stop training after MSE < small_threshold (e.g. 0.001)
- early stopping with a large number of hidden neurons
- regularization: weight decay (see the penalty form below)
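A standard form of the weight-decay penalty (not given on the slides) adds a term to the error function: E_total = MSE + λ · Σ wᵢ², so large weights are penalized and λ controls how strongly the weights decay.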

Number of neurons (figure from Sandhya Samarasinghe, Neural Networks for Applied Sciences and Engineering, 2006).

Network pruning
Keep only essential weights/neurons.
Optimal Brain Damage (OBD): if the saliency si of a weight is small, remove the weight.
Train a flexible network (e.g. with early stopping), then remove weights, retrain the network, and repeat.
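For reference, OBD estimates the saliency from a diagonal approximation of the Hessian (this formula is not on the slides): si ≈ ½ · Hii · wi², where Hii = ∂²E/∂wi²; weights with the smallest si are pruned first.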

Radial Basis Function Networks (new stuff)

Radial Basis Function (RBF) Network
An increasingly popular type of neural network, probably the main rival to the MLP.
It takes a completely different approach, viewing the design of a neural network as an approximation problem in a high-dimensional space.
It uses radial functions as activation functions.

Gaussian RBF
The typical radial function is the Gaussian RBF, whose response monotonically decreases with distance from a central point.
Parameters: center c, width (radius r).
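A common form of the Gaussian unit, consistent with the XOR example later in this lecture: h(x) = exp(−‖x − c‖² / (2r²)), so h(c) = 1 at the center and the response falls off with the squared distance from c.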

Local vs. global units
Local units cover just a certain part of the space, i.e. they are nonzero only in a certain part of the space; the Gaussian is a local function.
Global units: sigmoid, linear.

Classification using global (MLP) and local (RBF) units (figure from Pavel Kordík, Data Mining lecture, FEL, ČVUT, 2009).

RBFN architecture
Each of the n components of the input vector x feeds forward to m basis functions, whose outputs are linearly combined with the weights w (i.e. the dot product of the basis-function outputs with w) into the network output f(x). There are no weights between the input and hidden layers.
Diagram: input layer (x1 … xn) → hidden layer of RBFs (h1 … hm) → output layer f(x), with weights W1 … Wm on the hidden-to-output connections. (Pavel Kordík, Data Mining lecture, FEL, ČVUT, 2009)
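A minimal sketch of this forward pass (Gaussian units assumed; the centers, radii, and weights below are arbitrary illustrative values):

import numpy as np

def rbf_forward(x, centers, radii, w):
    # hidden layer: one Gaussian response per basis function
    h = np.exp(-np.sum((centers - x) ** 2, axis=1) / (2 * radii ** 2))
    # output layer: plain weighted sum of the hidden responses
    return np.dot(w, h)

# hypothetical 2-D example with m = 3 basis functions
centers = np.array([[0.0, 0.0], [1.0, 1.0], [0.5, 0.5]])
radii = np.array([0.7, 0.7, 0.5])
w = np.array([0.3, -0.2, 1.1])
print(rbf_forward(np.array([0.2, 0.4]), centers, radii, w))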

(Figure: 2-D Gaussian units feeding a summation unit Σ; Pavel Kordík, Data Mining lecture, FEL, ČVUT, 2009)

The basic architecture for an RBFN is a 3-layer network. The input layer is simply a fan-out layer and does no processing. The hidden layer performs a non-linear mapping from the input space into a (usually) higher-dimensional space in which the patterns become linearly separable. The output layer performs a simple weighted sum of the hidden-layer outputs. If the RBFN is used for regression, this output is fine. However, if pattern classification is required, a hard-limiter or sigmoid function can be placed on the output neurons to give 0/1 output values.

Clustering The unique feature of the RBF network is the process performed in the hidden layer. The idea is that the patterns in the input space form clusters. If the centres of these clusters are known, then the distance from the cluster centre can be measured.

Furthermore, this distance measure is made non-linear, so that if a pattern is in an area close to a cluster centre it gives a value close to 1. Beyond this area, the value drops dramatically. The notion is that this area is radially symmetrical around the cluster centre, so the non-linear function becomes known as the radial basis function.
(Figure: non-linearly transformed distance vs. distance from the center of the cluster.)

RBFN for classification (figure: Category 1 and Category 2, each with its own summation unit Σ).

RBFN for regression http://diwww.epfl.ch/mantra/tutorial/english/rbf/html/

XOR problem
Inputs (0,0) and (1,1) give 0 as output; inputs (0,1) and (1,0) give 1.

XOR problem
2 inputs x1, x2, 2 hidden units (with outputs φ1, φ2), one output.
The parameters of the two hidden units are set as c1 = <0,0>, c2 = <1,1>; the value of the radius r is chosen such that 2r² = 1.
Hidden-layer outputs for the four input patterns (values rounded):
 x1 x2 | φ1  φ2
 0  0  | 1   0.1
 0  1  | 0.4 0.4
 1  0  | 0.4 0.4
 1  1  | 0.1 1
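A short check of these values (assuming the Gaussian form exp(−‖x − c‖²/(2r²)) with 2r² = 1, as above):

import numpy as np

c1, c2 = np.array([0.0, 0.0]), np.array([1.0, 1.0])

def phi(x, c):
    # Gaussian hidden unit with 2*r^2 = 1, i.e. exp(-||x - c||^2)
    return np.exp(-np.sum((x - c) ** 2))

for p in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    x = np.array(p, dtype=float)
    print(p, round(phi(x, c1), 2), round(phi(x, c2), 2))
# prints 1.0/0.14 for (0,0), 0.37/0.37 for (0,1) and (1,0), 0.14/1.0 for (1,1);
# the slide rounds these values to 1, 0.1 and 0.4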

When mapped into the feature space <φ1, φ2>, the two classes become linearly separable: (0,1) and (1,0) map onto the same point, while (0,0) and (1,1), which give 0 as output, map to distinct points on the other side of a separating line. So a linear classifier with φ1(x) and φ2(x) as inputs can be used to solve the XOR problem. The linear classifier is represented by the output layer.
(Figure: the four patterns plotted in the original <x1, x2> space and in the <φ1, φ2> feature space.)

RBF Learning
Design decision: the number of hidden neurons
- maximum number of neurons = number of input patterns
- minimum number of neurons: needs to be determined
- more neurons → more complex network, smaller tolerance
Parameters to be learnt: centers, radii, and the weights between the hidden and output layers.
A hidden neuron is more sensitive to data points near its center; this sensitivity may be tuned by adjusting the radius:
- smaller radius → fits the training data better (overfitting)
- larger radius → less sensitivity, less overfitting, a network of smaller size, faster execution

Learning can be divided into two independent tasks:
- determination of the centers and radii
- learning of the output-layer weights
Learning strategies for the RBF parameters:
- sample center positions randomly from the training data
- self-organized selection of centers
- both layers are learnt using supervised learning

Select centers at random
Choose the centers randomly from the training set; the radius r is then calculated from the selected centers (see below). The weights are found by means of a numerical linear algebra approach. This strategy requires a large training set for a satisfactory level of performance.
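The formula itself is not reproduced in the transcript; a standard choice (e.g. Haykin) sets a common radius from the maximum distance d_max between the M chosen centers: r = d_max / √(2M), which keeps the Gaussians neither too peaked nor too flat.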

Self-organized selection of centers
- the centers are selected using the k-means clustering algorithm
- the radii are usually found using k-NN: find the k nearest centers; the root-mean-squared distance between the current cluster centre and its k (typically 2) nearest neighbours is calculated, and this is the value chosen for r
- the output layer is learnt using a gradient descent technique
(a rough sketch of this pipeline follows below)
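A rough sketch of this pipeline, assuming Gaussian units and scikit-learn's KMeans; the function name fit_rbf is made up, and for brevity the output weights are solved in closed form with least squares rather than the gradient descent mentioned above:

import numpy as np
from sklearn.cluster import KMeans

def fit_rbf(X, y, m, k=2):
    # 1. centers via k-means clustering
    centers = KMeans(n_clusters=m, n_init=10).fit(X).cluster_centers_
    # 2. radius of each center = RMS distance to its k nearest other centers
    d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)
    radii = np.sqrt(np.mean(np.sort(d, axis=1)[:, :k] ** 2, axis=1))
    # 3. hidden-layer design matrix of Gaussian responses
    dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    H = np.exp(-dist ** 2 / (2 * radii ** 2))
    # 4. output weights: solved here by least squares
    #    (the lecture uses gradient descent for this step instead)
    w, *_ = np.linalg.lstsq(H, y, rcond=None)
    return centers, radii, w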

Supervised learning
Supervised learning of all parameters (centers, radii, weights) using gradient descent.
There are mathematical formulas for updating all of these parameters; they are not shown here, no need to scare you on such a “nice” day. A learning rate is used.

Advantages/disadvantages
- An RBFN trains faster than an MLP.
- Although the RBFN is quick to train, once training is finished and it is being used, it is slower than an MLP.
- RBFNs are essentially well-tried statistical techniques presented as neural networks.
- Learning mechanisms in statistical neural networks are not biologically plausible.
- An RBFN can give an “I don’t know” answer.
- RBFNs construct local approximations to a non-linear I/O mapping; MLPs construct global approximations to a non-linear I/O mapping.