
1
Perceptron

2
**Stochastic Approximation to gradient descent**

- Inner product (scalar)
- Perceptron
- Perceptron learning rule
- XOR problem and linearly separable patterns
- Gradient descent
- Stochastic approximation to gradient descent
- LMS, Adaline

3
Inner product: a measure of the projection of one vector onto another.
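A minimal NumPy sketch of this idea, using two arbitrary example vectors: it computes the inner product, the cosine of the angle between the vectors, and the length of the projection of x onto w.

```python
import numpy as np

# Two example vectors (arbitrary values chosen for illustration).
x = np.array([2.0, 1.0])
w = np.array([1.0, 0.0])

dot = np.dot(x, w)                             # inner product  w . x
cos_angle = dot / (np.linalg.norm(x) * np.linalg.norm(w))
projection_length = dot / np.linalg.norm(w)    # length of x projected onto w

print(dot, cos_angle, projection_length)       # 2.0  ~0.894  2.0
```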

4
Example

5
Activation function

6
Sigmoid function
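As a small illustration of the logistic sigmoid (and of the derivative that makes it convenient for gradient-based learning), here is a minimal NumPy sketch; the test inputs are arbitrary.

```python
import numpy as np

def sigmoid(a):
    """Logistic sigmoid: squashes any real activation into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-a))

def sigmoid_deriv(a):
    # The derivative is expressed in terms of the output itself,
    # which is what makes the sigmoid handy for gradient-based learning.
    s = sigmoid(a)
    return s * (1.0 - s)

print(sigmoid(0.0))    # 0.5
print(sigmoid(5.0))    # close to 1
print(sigmoid(-5.0))   # close to 0
```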

7
**Graphical representation**

8
**Similarity to real neurons...**

9
On the order of 10^10 neurons, with 10^4–10^5 connections per neuron

10
**Perceptron: Linear threshold unit (LTU)**

[Diagram: inputs x1 … xn with weights w1 … wn, bias input x0 = 1 with weight w0, and output o]

McCulloch-Pitts model of a neuron

11
The goal of a perceptron is to correctly classify the set of patterns D = {x1, x2, …, xm} into one of the classes C1 and C2. The output for class C1 is o = 1 and for C2 it is o = −1. For n = 2, the decision boundary w0 + w1x1 + w2x2 = 0 is a line in the plane.
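A sketch of this decision rule in Python; the function name and the specific weights are illustrative, not from the slides. The bias is handled by prepending x0 = 1 to each input.

```python
import numpy as np

def perceptron_output(w, x):
    """Linear threshold unit: o = +1 (class C1) if w.x >= 0, else -1 (class C2).
    w includes the bias weight w0 as its first component; x0 = 1 is prepended."""
    x_aug = np.concatenate(([1.0], x))        # x0 = 1 for the bias term
    return 1 if np.dot(w, x_aug) >= 0 else -1

# Example with n = 2: the decision boundary is the line w0 + w1*x1 + w2*x2 = 0.
w = np.array([-1.0, 1.0, 1.0])                # hypothetical weights
print(perceptron_output(w, np.array([2.0, 0.5])))   #  1  -> class C1
print(perceptron_output(w, np.array([0.2, 0.3])))   # -1  -> class C2
```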

12
**Linearly separable patterns**

13
**Perceptron learning rule**

Consider linearly separable problems. How do we find appropriate weights? Check whether the output o belongs to the desired class, i.e. has the desired value d, and adjust the weights by w ← w + η(d − o)x, where η is called the learning rate, 0 < η ≤ 1.
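A minimal sketch of this rule on a toy linearly separable problem (logical AND with ±1 targets, chosen here purely for illustration); η, the epoch count, and the zero initialization are arbitrary choices.

```python
import numpy as np

def train_perceptron(X, d, eta=0.1, epochs=50):
    """Perceptron learning rule: w <- w + eta * (d - o) * x.
    X: patterns (one per row), d: desired outputs in {-1, +1}."""
    X_aug = np.hstack([np.ones((len(X), 1)), X])      # prepend x0 = 1
    w = np.zeros(X_aug.shape[1])
    for _ in range(epochs):
        for x, target in zip(X_aug, d):
            o = 1 if np.dot(w, x) >= 0 else -1
            w += eta * (target - o) * x               # no change when o == target
    return w

# Toy linearly separable problem: logical AND with +-1 coding.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
d = np.array([-1, -1, -1, 1])
print(train_perceptron(X, d))   # a separating weight vector, e.g. [-0.6  0.4  0.2]
```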

14
**(d-o) plays the role of the error signal**

In supervised learning the network has its output compared with known correct answers (learning with a teacher). The difference (d − o) plays the role of the error signal.

15
**Perceptron: the algorithm converges to the correct classification**

if the training data is linearly separable and η is sufficiently small. When assigning a value to η we must keep in mind two conflicting requirements:

- Averaging of past inputs to provide stable weight estimates, which requires a small η
- Fast adaptation with respect to real changes in the underlying distribution of the process responsible for generating the input vector x, which requires a large η

16
Several nodes

18
Constructions

19
Frank Rosenblatt

23
**Rosenblatt's bitter rival and professional nemesis was Marvin Minsky of MIT**

Minsky despised Rosenblatt, hated the concept of the perceptron, and wrote several polemics against him. For years Minsky crusaded against Rosenblatt on a very nasty and personal level, including contacting every group who funded Rosenblatt's research to denounce him as a charlatan, hoping to ruin Rosenblatt professionally and to cut off all funding for his research in neural nets.

24
**XOR problem and Perceptron**

Analyzed by Minsky and Papert in the 1960s (Perceptrons, 1969).
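A quick way to see the problem: run the perceptron learning rule on XOR (using the ±1 target coding from earlier; the η and epoch budget below are arbitrary) and observe that it never classifies all four patterns correctly, since no line separates the two classes.

```python
import numpy as np

# Training a single LTU on XOR never converges: no line
# w0 + w1*x1 + w2*x2 = 0 separates {(0,1),(1,0)} from {(0,0),(1,1)}.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
d = np.array([-1, 1, 1, -1])
X_aug = np.hstack([np.ones((4, 1)), X])

w, eta = np.zeros(3), 0.1
for _ in range(1000):
    for x, target in zip(X_aug, d):
        o = 1 if np.dot(w, x) >= 0 else -1
        w += eta * (target - o) * x

preds = [1 if np.dot(w, x) >= 0 else -1 for x in X_aug]
print("misclassified:", sum(p != t for p, t in zip(preds, d)))  # always >= 1
```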

26
**Gradient Descent: to understand it, consider a simpler linear unit, where o = w0 + w1x1 + … + wnxn**

Let's learn the wi that minimize the squared error E[w] = ½ Σd∈D (td − od)², over the training set D = {(x1,t1), (x2,t2), …, (xd,td), …, (xm,tm)} (t for target).
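A small sketch of this error measure for a linear unit, evaluated on a made-up three-example training set; the weight vectors passed in are arbitrary.

```python
import numpy as np

def linear_output(w, X_aug):
    """Linear unit (no threshold): o = w0 + w1*x1 + ... + wn*xn."""
    return X_aug @ w

def squared_error(w, X_aug, t):
    """E(w) = 1/2 * sum over training examples of (t_d - o_d)^2."""
    o = linear_output(w, X_aug)
    return 0.5 * np.sum((t - o) ** 2)

# Tiny made-up training set: targets roughly t = 2*x + 1 plus a little noise.
X_aug = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])   # first column is x0 = 1
t = np.array([1.1, 2.9, 5.2])
print(squared_error(np.array([0.0, 0.0]), X_aug, t))     # large error
print(squared_error(np.array([1.0, 2.0]), X_aug, t))     # near-zero error
```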

27
**Error for different hypotheses, as a function of w0 and w1 (dim 2)**

28
**We want to move the weight vector in the direction that decreases E**

wi ← wi + Δwi, i.e. w ← w + Δw

29
The gradient: ∇E[w] = [∂E/∂w0, ∂E/∂w1, …, ∂E/∂wn]. The training rule moves against it: Δw = −η ∇E[w], i.e. Δwi = −η ∂E/∂wi.

30
Differentiating E gives ∂E/∂wi = −Σd∈D (td − od) xid.

31
**Update rule for gradient descent**

Δwi = η Σd∈D (td − od) xid

32
**Gradient Descent: Gradient-Descent(training_examples, η)**

Each training example is a pair of the form ⟨(x1, …, xn), t⟩, where (x1, …, xn) is the vector of input values and t is the target output value; η is the learning rate (e.g. 0.1).

- Initialize each wi to some small random value
- Until the termination condition is met, do:
  - Initialize each Δwi to zero
  - For each ⟨(x1, …, xn), t⟩ in training_examples:
    - Input the instance (x1, …, xn) to the linear unit and compute the output o
    - For each linear unit weight wi: Δwi ← Δwi + η (t − o) xi
  - For each linear unit weight wi: wi ← wi + Δwi
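For concreteness, here is a minimal Python rendering of the pseudocode above. The training set (points drawn from t = 3 − 2·x1), η = 0.05, and the epoch count are illustrative choices, not part of the original slides.

```python
import numpy as np

def gradient_descent(training_examples, eta=0.1, epochs=100):
    """Batch gradient descent for a linear unit, following the pseudocode above.
    training_examples: list of (x_vector, t) pairs; x0 = 1 is prepended internally."""
    n = len(training_examples[0][0])
    w = np.random.uniform(-0.05, 0.05, n + 1)          # small random initial weights
    for _ in range(epochs):
        delta_w = np.zeros_like(w)                     # initialize each delta_wi to zero
        for x, t in training_examples:
            x_aug = np.concatenate(([1.0], x))
            o = np.dot(w, x_aug)                       # linear unit output
            delta_w += eta * (t - o) * x_aug           # accumulate over the whole batch
        w += delta_w                                   # wi <- wi + delta_wi
    return w

# Illustrative noiseless data from t = 3 - 2*x1, so w should approach [3, -2].
examples = [(np.array([x]), 3.0 - 2.0 * x) for x in np.linspace(0.0, 1.0, 10)]
print(gradient_descent(examples, eta=0.05, epochs=500))
```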

34
**Stochastic Approximation to gradient descent**

The gradient descent training rule updates the weights after summing the error over all training examples in D. Stochastic gradient descent approximates gradient descent by updating the weights incrementally, calculating the error for each example as it is presented. This is known as the delta rule or LMS (least-mean-square) weight update, and as the Adaline rule; it is used for adaptive filters (Widrow and Hoff, 1960).
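The incremental variant can be sketched the same way: the only change from the batch version above is that the weights are updated after every example. The data and learning rate below are again illustrative.

```python
import numpy as np

def lms_train(training_examples, eta=0.05, epochs=200):
    """Delta rule / LMS: update the weights after *each* example
    instead of summing the error over the whole training set."""
    n = len(training_examples[0][0])
    w = np.random.uniform(-0.05, 0.05, n + 1)
    for _ in range(epochs):
        for x, t in training_examples:
            x_aug = np.concatenate(([1.0], x))
            o = np.dot(w, x_aug)
            w += eta * (t - o) * x_aug          # incremental update: wi <- wi + eta*(t - o)*xi
    return w

examples = [(np.array([x]), 3.0 - 2.0 * x) for x in np.linspace(0.0, 1.0, 10)]
print(lms_train(examples))                      # should again approach [3, -2]
```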

35
**LMS estimate of the weight vector: no steepest descent**

There is no well-defined trajectory in weight space; instead the weights follow a random trajectory (stochastic gradient descent). They converge only asymptotically toward the minimum error, but can approximate gradient descent arbitrarily closely if η is made small enough.

36
**Summary: Perceptron training rule is guaranteed to succeed if**

- Training examples are linearly separable
- The learning rate η is sufficiently small

The linear unit training rule (gradient descent or LMS) is guaranteed to converge to the hypothesis with minimum squared error:

- Given a sufficiently small learning rate η
- Even when the training data contains noise
- Even when the training data is not separable by H

37
**Stochastic Approximation to gradient descent**

- Inner product (scalar)
- Perceptron
- Perceptron learning rule
- XOR problem and linearly separable patterns
- Gradient descent
- Stochastic approximation to gradient descent
- LMS, Adaline

38
**XOR? Multi-Layer Networks**

[Diagram: input layer → hidden layer → output layer]
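As a sketch of why a hidden layer resolves XOR, the snippet below hand-wires a two-layer network of threshold units: one hidden unit computes OR, the other computes AND, and the output unit combines them into XOR. The weights are chosen by hand for illustration, not learned.

```python
import numpy as np

def ltu(w, x):
    """Linear threshold unit with bias weight w[0] (x0 = 1)."""
    return 1 if w[0] + np.dot(w[1:], x) >= 0 else 0

# Hand-chosen weights (not learned): hidden unit h1 computes OR, h2 computes AND,
# and the output unit computes h1 AND (NOT h2), which is exactly XOR.
w_or  = np.array([-0.5, 1.0, 1.0])
w_and = np.array([-1.5, 1.0, 1.0])
w_out = np.array([-0.5, 1.0, -1.0])

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    h = np.array([ltu(w_or, x), ltu(w_and, x)])
    print(x, "->", ltu(w_out, h))    # prints 0, 1, 1, 0
```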
