Beyond Linear Separability


Beyond Linear Separability

Limitations of the Perceptron It produces only linear separations, and it converges only for linearly separable data. One solution (SVMs) is to map the data into a feature space where they are linearly separable. Another solution is to use multiple interconnected perceptrons.

Artificial Neural Networks Interconnected networks of simple units (let's call them "artificial neurons") in which each connection has a weight. Weight wij is the weight of the ith input into unit j. NNs have inputs where the feature values are placed, and they compute one or more output values. Depending on the output value we determine the class; if there is more than one output unit, we choose the one with the greatest value. Learning takes place by adjusting the weights in the network so that the desired output is produced whenever a sample from the input data set is presented.

Single Perceptron Unit We start by looking at a simpler kind of "neural-like" unit called a perceptron. Depending on the value of h(x) it outputs one class or the other.

Beyond Linear Separability Since a single perceptron unit can only define a single linear boundary, it is limited to solving linearly separable problems. A problem like the one illustrated by the values of the XOR Boolean function cannot be solved by a single perceptron unit.

Multi-Layer Perceptron Solution: Combine multiple linear separators. The introduction of "hidden" units into NNs makes them much more powerful: they are no longer limited to linearly separable problems. Earlier layers transform the problem into more tractable problems for the later layers.

Example: XOR problem Output: “class 0” or “class 1”

Example: XOR problem

Example: XOR problem The output unit's decision boundary is w23·o2 + w13·o1 + w03 = 0. With w03 = -1/2, w13 = -1, w23 = 1, this becomes o2 - o1 - 1/2 = 0.
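The slide gives only the output-layer weights; a common way to complete the network is to let the hidden units compute AND and OR of the inputs (an assumption, not taken from the slides). A minimal sketch with threshold units:

```python
def step(h):
    """Threshold unit: fires 1 if its total input is positive."""
    return 1 if h > 0 else 0

def xor_net(x1, x2):
    # Hidden-layer weights are an illustrative assumption:
    o1 = step(x1 + x2 - 1.5)   # AND of the inputs
    o2 = step(x1 + x2 - 0.5)   # OR of the inputs
    # Output weights from the slide: w03 = -1/2, w13 = -1, w23 = 1
    return step(1.0 * o2 - 1.0 * o1 - 0.5)
```

Checking the truth table shows this network outputs 1 exactly when the inputs differ, i.e. it computes XOR.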

Multi-Layer Perceptron Any set of training points can be separated by a three-layer perceptron network. "Almost any" set of points is separable by a two-layer perceptron network.

Autonomous Land Vehicle In a Neural Network (ALVINN) ALVINN is an automatic steering system for a car based on input from a camera mounted on the vehicle. Successfully demonstrated in a cross-country trip.

ALVINN The ALVINN neural network is shown here. It has 960 inputs (a 30x32 array derived from the pixels of an image), 4 hidden units and 30 output units (each representing a steering command).

Optional material

Multi-Layer Perceptron Learning However, the presence of the discontinuous threshold in the operation means that there is no simple local search for a good set of weights; one is forced into trying possibilities in a combinatorial way. The limitations of the single-layer perceptron and the lack of a good learning algorithm for multilayer perceptrons essentially killed the field for quite a few years.

Soft Threshold A natural question to ask is whether we could use gradient ascent/descent to train a multi-layer perceptron. The answer is that we can't as long as the output is discontinuous with respect to changes in the inputs and the weights. In a perceptron unit it doesn't matter how far a point is from the decision boundary, we will still get a 0 or a 1. We need a smooth output (as a function of changes in the network weights) if we're to do gradient descent.

Sigmoid Unit The classic "soft threshold" that is used in neural nets is referred to as a "sigmoid" (meaning S-like) and is shown here. The variable z is the "total input" or "activation" of a neuron, that is, the weighted sum of all of its inputs. Note that when the input (z) is 0, the sigmoid's value is 1/2. The sigmoid is applied to the weighted inputs (including the threshold value as before). There are actually many different types of sigmoids that can be (and are) used in neural networks. The sigmoid shown here is actually called the logistic function.
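The logistic function described above can be written directly; a minimal sketch:

```python
import math

def sigmoid(z):
    """Logistic function: squashes the total input z into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

# When the total input z is 0, the sigmoid's value is 1/2,
# as noted on the slide.
```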

Training The key property of the sigmoid is that it is differentiable. This means that we can use gradient based methods of minimization for training. The output of a multi-layer net of sigmoid units is a function of two vectors, the inputs (x) and the weights (w). The output of this function (y) varies smoothly with changes in the input and, importantly, with changes in the weights.

Training

Training Given a set of training points, each of which specifies the net inputs and the desired outputs, we can write an expression for the training error, usually defined as the sum of the squared differences between the actual output (given the weights) and the desired output. The goal of training is to find a weight vector that minimizes the training error. We could also use the mean squared error (MSE), which divides the sum of the squared errors by the number of training points instead of by 2. Since the number of training points is a constant, the weights at which the minimum is attained are not affected.
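The two error measures described above can be sketched as follows (function names are illustrative):

```python
def training_error(outputs, targets):
    """Sum of squared differences between actual and desired outputs."""
    return sum((y - d) ** 2 for y, d in zip(outputs, targets))

def mse(outputs, targets):
    """Mean squared error: same minimizer, just rescaled by the
    (constant) number of training points."""
    return training_error(outputs, targets) / len(outputs)
```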

Training

Gradient Descent We've seen that the simplest method for minimizing a differentiable function is gradient descent (or ascent if we're maximizing). Recall that we are trying to find the weights that lead to a minimum value of training error. Here we see the gradient of the training error as a function of the weights. The descent rule is basically to change the weights by taking a small step (determined by the learning rate η) in the direction opposite to this gradient. Online version: at each step we consider only the error for a single data item.

Gradient Descent – Single Unit Substituting into the equation on the previous slide, we get (for an arbitrary ith element) the delta rule.
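One online delta-rule step for a single sigmoid unit can be sketched as below (a sketch under the convention, used later in the slides, that the bias is weight w[0] paired with a constant -1 input; the learning rate value is an illustrative choice):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def delta_rule_update(w, x, d, eta=0.5):
    """One online delta-rule step for a single sigmoid unit.
    w and x include the bias as w[0] with x[0] = -1."""
    z = sum(wi * xi for wi, xi in zip(w, x))
    y = sigmoid(z)
    delta = (y - d) * y * (1.0 - y)   # dE/dz for E = (y - d)^2 / 2
    return [wi - eta * delta * xi for wi, xi in zip(w, x)]
```

Each step moves the weights a small distance opposite the error gradient, so the unit's output drifts toward the desired value d.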

Derivative of the sigmoid
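The formula on this slide (shown in the image) is the standard identity for the logistic function, σ'(z) = σ(z)(1 - σ(z)); a quick numerical check:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def sigmoid_deriv(z):
    """The sigmoid's derivative, expressed in terms of its own output."""
    s = sigmoid(z)
    return s * (1.0 - s)

# Verify against a central finite difference:
z, h = 0.7, 1e-6
numeric = (sigmoid(z + h) - sigmoid(z - h)) / (2 * h)
assert abs(numeric - sigmoid_deriv(z)) < 1e-8
```

This identity is what makes the backpropagation formulas below so cheap: the derivative comes for free from the unit's output.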

Generalized Delta Rule Now, let's compute δ4. z4 influences E only indirectly, through z5 and z6.

Generalized Delta Rule In general, for a hidden unit j we have δj = yj(1-yj) Σk wjk δk, where the sum runs over the units k that unit j feeds into.

Generalized Delta Rule For an output unit we have δj = yj(1-yj)(yj-dj), where dj is the desired output.

Backpropagation Algorithm
1. Initialize the weights to small random values.
2. Choose a random training sample, say (x^m, y^m).
3. Compute the total input zj and the output yj for each unit (forward propagation).
4. Compute δn for the output layer: δn = yn(1-yn)(yn-yn^m).
5. Compute δj for all preceding layers by the backpropagation rule.
6. Compute the weight changes by the descent rule (repeat for all weights).
Note that each expression involves only data local to a particular unit; we don't have to look around summing things over the whole network. It is for this reason (simplicity, locality and, therefore, efficiency) that backpropagation has become the dominant paradigm for training neural nets.
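The algorithm can be sketched end to end for a tiny 2-2-1 sigmoid network (the architecture, learning rate, step count and random seed are illustrative choices, not taken from the slides):

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

class TinyNet:
    """A 2-2-1 sigmoid network trained by online backpropagation."""

    def __init__(self, rng):
        # Small random weights; index 0 is the bias weight, paired
        # with a constant -1 input as in the slides.
        self.w_hid = [[rng.uniform(-0.5, 0.5) for _ in range(3)]
                      for _ in range(2)]
        self.w_out = [rng.uniform(-0.5, 0.5) for _ in range(3)]

    def forward(self, x):
        # Forward propagation: total inputs z_j and outputs y_j.
        hin = [-1.0, x[0], x[1]]
        self.h = [sigmoid(sum(w * v for w, v in zip(ws, hin)))
                  for ws in self.w_hid]
        oin = [-1.0] + self.h
        self.y = sigmoid(sum(w * v for w, v in zip(self.w_out, oin)))
        return self.y

    def train_step(self, x, d, eta=0.5):
        y = self.forward(x)
        # Delta for the output unit.
        d_out = y * (1.0 - y) * (y - d)
        # Backpropagate the delta to the hidden layer.
        d_hid = [h * (1.0 - h) * d_out * w
                 for h, w in zip(self.h, self.w_out[1:])]
        # Descent rule: every update uses only data local to one unit.
        oin = [-1.0] + self.h
        self.w_out = [w - eta * d_out * v for w, v in zip(self.w_out, oin)]
        hin = [-1.0, x[0], x[1]]
        for j in range(2):
            self.w_hid[j] = [w - eta * d_hid[j] * v
                             for w, v in zip(self.w_hid[j], hin)]

xor_data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
rng = random.Random(1)
net = TinyNet(rng)
for _ in range(10000):
    x, d = rng.choice(xor_data)   # a random training item per step
    net.train_step(x, d)
```

Note that, as with any gradient method, a particular run may settle in a local minimum; the training methodology on the next slides addresses this with random restarts.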

Training Neural Nets Now that we have looked at the basic mathematical techniques for minimizing the training error of a neural net, we should step back and look at the whole approach to training a neural net, keeping in mind the potential problem of overfitting. Here we look at a methodology that attempts to minimize that danger.

Training Neural Nets Given: a data set with desired outputs, and a neural net with m weights. Find a setting for the weights that will give good predictive performance on new data.
1. Split the data set into three subsets: a training set (used for adjusting the weights), a validation set (used to stop training) and a test set (used to evaluate performance).
2. Pick random, small weights as initial values.
3. Perform iterative minimization of the error over the training set (backprop).
4. Stop when the error on the validation set reaches a minimum (to avoid overfitting).
5. Repeat training (from step 2) several times (to avoid local minima).
6. Use the best weights to compute the error on the test set.
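The recipe above can be sketched schematically as follows (the callable interface, names and default counts are illustrative assumptions; a full version would also snapshot the weights at the validation minimum rather than keep the weights one epoch past it):

```python
def train_early_stopping(make_net, train_step, error,
                         train_set, val_set, restarts=3, max_epochs=100):
    """Schematic early-stopping trainer.
    make_net()            -> fresh model with small random weights
    train_step(net, item) -> one backprop update
    error(net, data)      -> total error on a data set"""
    best_net, best_val = None, float("inf")
    for _ in range(restarts):          # repeat training: avoid local minima
        net = make_net()
        prev_val = float("inf")
        for _ in range(max_epochs):
            for item in train_set:     # minimize error over the training set
                train_step(net, item)
            val = error(net, val_set)
            if val > prev_val:         # stop when validation error
                break                  # starts rising (avoid overfitting)
            prev_val = val
        if prev_val < best_val:
            best_net, best_val = net, prev_val
    return best_net
```

The test-set error is computed once, on the returned best network, and is never used to make training decisions.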

Backpropagation Example First do forward propagation: compute the zi's and yi's. [The slide shows a 2-2-1 network diagram: inputs x1 and x2 feed hidden units 1 and 2 (weights w11, w12, w21, w22, with biases w01 and w02 on constant -1 inputs), and the hidden outputs y1 and y2 feed output unit 3 (weights w13, w23, with bias w03 on a constant -1 input), producing z3 and y3.]