Neural Networks Lecture 6: Perceptron Learning (September 23, 2010) — Presentation transcript

Slide 1: Refresher: Perceptron Training Algorithm

Algorithm Perceptron;
Start with a randomly chosen weight vector w_0;
Let k = 1;
while there exist input vectors that are misclassified by w_{k-1}, do
    Let i_j be a misclassified input vector;
    Let x_k = class(i_j) · i_j, implying that w_{k-1} · x_k < 0;
    Update the weight vector to w_k = w_{k-1} + η·x_k;
    Increment k;
end-while;
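A minimal Python sketch of this loop (the function name, the random initialization, and fixing the learning rate η = 1 are illustrative assumptions, not part of the slide):

```python
import numpy as np

def perceptron_train(inputs, classes, rng=np.random.default_rng(0)):
    """Sketch of the perceptron training algorithm above.

    inputs  : array of shape (n, d); each row already includes the offset
              component 1, e.g. (1, i1, i2) for 2-dimensional data.
    classes : array of n labels, each +1 or -1.
    Returns a weight vector w, assuming the data are linearly separable
    (otherwise the loop never terminates).
    """
    w = rng.normal(size=inputs.shape[1])            # randomly chosen w_0
    while True:
        misclassified = [j for j in range(len(inputs))
                         if classes[j] * np.dot(w, inputs[j]) <= 0]
        if not misclassified:                       # nothing misclassified: done
            return w
        j = misclassified[0]                        # pick a misclassified input i_j
        x = classes[j] * inputs[j]                  # x_k = class(i_j) * i_j, so w . x < 0
        w = w + x                                   # w_k = w_{k-1} + eta * x_k, with eta = 1
```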

Slide 2: Another Refresher: Linear Algebra

How can we visualize a straight line defined by an equation such as w_0 + w_1 i_1 + w_2 i_2 = 0?
One possibility is to determine the points where the line crosses the coordinate axes:
i_1 = 0 ⇒ w_0 + w_2 i_2 = 0 ⇒ w_2 i_2 = -w_0 ⇒ i_2 = -w_0/w_2
i_2 = 0 ⇒ w_0 + w_1 i_1 = 0 ⇒ w_1 i_1 = -w_0 ⇒ i_1 = -w_0/w_1
Thus, the line crosses the axes at (0, -w_0/w_2)^T and (-w_0/w_1, 0)^T.
If w_1 or w_2 is 0, it just means that the line is horizontal or vertical, respectively.
If w_0 is 0, the line passes through the origin, and its slope i_2/i_1 is:
w_1 i_1 + w_2 i_2 = 0 ⇒ w_2 i_2 = -w_1 i_1 ⇒ i_2/i_1 = -w_1/w_2
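As a quick check of these formulas, a short Python snippet using the initial weights from the next slide (w = (2, 1, -2)^T; purely illustrative):

```python
# Where the line w0 + w1*i1 + w2*i2 = 0 crosses the coordinate axes.
w0, w1, w2 = 2.0, 1.0, -2.0                   # example weights, matching the next slide
print("i2-axis crossing:", (0.0, -w0 / w2))   # (0, 1)
print("i1-axis crossing:", (-w0 / w1, 0.0))   # (-2, 0)
print("slope if w0 were 0:", -w1 / w2)        # i2/i1 = -w1/w2 = 0.5 for these w1, w2
```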

Slide 3: Perceptron Learning Example

[Figure: five 2-dimensional data points in the (i_1, i_2) plane, labeled class -1 and class 1, with the dividing line.]
We would like our perceptron to correctly classify the five 2-dimensional data points below.
Let the random initial weight vector be w_0 = (2, 1, -2)^T.
Then the dividing line crosses at (0, 1)^T and (-2, 0)^T.
Let us pick the misclassified point (-2, -1)^T for learning:
i = (1, -2, -1)^T (include offset 1)
x_1 = (-1) · (1, -2, -1)^T (i is in class -1)
x_1 = (-1, 2, 1)^T

Slide 4: Perceptron Learning Example

[Figure: the same data points with the updated dividing line.]
w_1 = w_0 + η·x_1 (let us set η = 1 for simplicity)
w_1 = (2, 1, -2)^T + (-1, 2, 1)^T = (1, 3, -1)^T
The new dividing line crosses at (0, 1)^T and (-1/3, 0)^T.
Let us pick the next misclassified point (0, 2)^T for learning:
i = (1, 0, 2)^T (include offset 1)
x_2 = (1, 0, 2)^T (i is in class 1)

Slide 5: Perceptron Learning Example

[Figure: the data points with the final dividing line separating class -1 from class 1.]
w_2 = w_1 + η·x_2
w_2 = (1, 3, -1)^T + (1, 0, 2)^T = (2, 3, 1)^T
Now the line crosses at (0, -2)^T and (-2/3, 0)^T.
With this weight vector, the perceptron achieves perfect classification!
The learning process terminates.
In most cases, many more iterations are necessary than in this example.
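The two updates can be reproduced numerically; the remaining data points in the figure are not recoverable from the transcript, so only the two points used for learning are re-checked here:

```python
import numpy as np

w0 = np.array([2.0, 1.0, -2.0])
x1 = -1 * np.array([1.0, -2.0, -1.0])    # point (-2, -1), class -1, offset 1
w1 = w0 + x1                             # -> (1, 3, -1)
x2 = +1 * np.array([1.0, 0.0, 2.0])      # point (0, 2), class 1, offset 1
w2 = w1 + x2                             # -> (2, 3, 1)
print(w1, w2)

# Both previously misclassified points now lie on the correct side of the line:
print(np.dot(w2, [1, -2, -1]))           # -5 -> negative net input, class -1, correct
print(np.dot(w2, [1, 0, 2]))             #  4 -> positive net input, class 1, correct
```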

Slide 6: Perceptron Learning Results

We proved that the perceptron learning algorithm is guaranteed to find a solution to a classification problem if the problem is linearly separable.
But are those solutions optimal?
One of the reasons why we are interested in neural networks is that they are able to generalize, i.e., give plausible output for new (untrained) inputs.
How well does a perceptron deal with new inputs?

Slide 7: Perceptron Learning Results

Perfect classification of the training samples, but the resulting function may not generalize well to new (untrained) samples.

Slide 8: Perceptron Learning Results

This function is likely to perform better classification on new samples.

Slide 9: Adalines

Idea behind adaptive linear elements (Adalines):
Compute a continuous, differentiable error function between the net input and the desired output (before applying the threshold function).
For example, compute the mean squared error (MSE) between the net input for every training vector and its class (1 or -1).
Then find those weights for which the error is minimal.
With a differentiable error function, we can use the gradient descent technique to find the absolute minimum of the error function.
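A minimal sketch of this idea in Python, assuming batch gradient descent on the MSE of the pre-threshold net input; the function name, learning rate, and epoch count are illustrative assumptions:

```python
import numpy as np

def adaline_train(inputs, targets, eta=0.01, epochs=200):
    """Gradient descent on the mean squared error between the continuous
    net input w . i (no threshold applied) and the desired class (+1 or -1)."""
    w = np.zeros(inputs.shape[1])
    for _ in range(epochs):
        net = inputs @ w                               # continuous net input
        error = targets - net                          # desired output minus net input
        grad = -2.0 / len(inputs) * inputs.T @ error   # gradient of the MSE w.r.t. w
        w -= eta * grad                                # step against the gradient
    return w
```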

Slide 10: Gradient Descent

Gradient descent is a very common technique to find the absolute minimum of a function.
It is especially useful for high-dimensional functions.
We will use it to iteratively minimize the network's (or neuron's) error by finding the gradient of the error surface in weight space and adjusting the weights in the opposite direction.

Slide 11: Gradient Descent

Gradient-descent example: finding the absolute minimum of a one-dimensional error function f(x):
[Figure: f(x) plotted against x, with the slope f'(x_0) at the current point x_0.]
x_1 = x_0 - η·f'(x_0)
Repeat this iteratively until, for some x_i, f'(x_i) is sufficiently close to 0.
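A sketch of this one-dimensional update rule in Python, with an assumed example function f(x) = (x - 3)^2 added purely to illustrate convergence:

```python
def gradient_descent_1d(f_prime, x0, eta=0.1, tol=1e-6, max_iter=10_000):
    """Iterate x_{k+1} = x_k - eta * f'(x_k) until the slope is close to 0."""
    x = x0
    for _ in range(max_iter):
        slope = f_prime(x)
        if abs(slope) < tol:
            break
        x = x - eta * slope
    return x

# Example: f(x) = (x - 3)**2 has f'(x) = 2*(x - 3) and its minimum at x = 3.
print(gradient_descent_1d(lambda x: 2 * (x - 3), x0=0.0))   # approximately 3
```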

Slide 12: Gradient Descent

Gradients of two-dimensional functions:
The two-dimensional function in the left diagram is represented by contour lines in the right diagram, where arrows indicate the gradient of the function at different locations.
Obviously, the gradient is always pointing in the direction of the steepest increase of the function.
In order to find the function's minimum, we should always move against the gradient.
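The same idea in two dimensions, using an assumed function f(w) = w_1^2 + 2·w_2^2 with gradient (2·w_1, 4·w_2) to show that stepping against the gradient lowers the function value:

```python
import numpy as np

def f(w):
    return w[0] ** 2 + 2 * w[1] ** 2       # assumed example error surface

def grad_f(w):
    return np.array([2 * w[0], 4 * w[1]])  # gradient: direction of steepest increase

w = np.array([3.0, -2.0])
for _ in range(5):
    w = w - 0.1 * grad_f(w)                # move against the gradient
    print(f(w))                            # the value decreases toward the minimum at (0, 0)
```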

