1 Computation in neural networks (M. Meeter)

2 Perceptron learning problem: calculating a function

Input pattern -> Desired output
[+1, +1, -1, -1] -> [+1, -1, +1]
[-1, -1, +1, +1] -> [+1, +1, -1]
[-1, -1, -1, -1], [-1, -1, +1, -1] -> [-1, -1, -1]
[-1, +1, +1, -1] -> [-1, +1, +1]
[+1, -1, +1, -1] -> (not given)

3 Types of networks & functions

Attractor: completion, autoassociative memory
Feedforward Hebbian, associative (Hebbian): association, associative memory
Feedforward Hebbian, competitive: clustering
Feedforward error-correcting, perceptron: categorization, generalization
Feedforward error-correcting, backprop: the same, but nonlinear

4 Types of networks

Attractor: completion, autoassociative memory
Feedforward Hebbian, associative (Hebbian): association, associative memory
Feedforward Hebbian, competitive: clustering
Feedforward error-correcting, perceptron: categorization, generalization
Feedforward error-correcting, backprop: the same, but nonlinear

5 Classification (figure: assigning an item to category A)

6 Generalization (figure: given the values 76 and 128, predict the missing value "?")

7 Univariate linear regression: prediction of values. Regression = generalization.

8 Clustering

9 Types of networks

Attractor: completion, autoassociative memory
Feedforward Hebbian, associative (Hebbian): association, associative memory
Feedforward Hebbian, competitive: clustering
Feedforward error-correcting, perceptron: categorization, generalization
Feedforward error-correcting, backprop: the same, but nonlinear

10 Perceptron learning problem: classification is discrete

Prototypical input pattern -> Desired output
[+1, +1, -1, -1] -> [+1, -1, +1]
[-1, -1, +1, +1] -> [+1, +1, -1]
[-1, -1, -1, -1], [-1, -1, +1, -1] -> [-1, -1, -1]
[-1, +1, +1, -1] -> [-1, +1, +1]
[+1, -1, +1, -1] -> (not given)


12 Classification in the perceptron (figure: inputs x_1, x_2, …, x_i, …, x_n with weights w_ji feed a summation unit; the unit compares the weighted sum Σ_i w_ji x_i against a threshold θ to produce its output)
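The threshold rule on this slide can be sketched in code. This is a minimal, illustrative implementation, not the deck's own: the patterns, learning rate, and epoch count are invented, and the output is taken as +1 when the weighted sum exceeds the threshold and -1 otherwise, matching the ±1 coding used in the deck's examples.

```python
import numpy as np

def perceptron_output(x, w, threshold=0.0):
    """Classify one pattern: +1 if sum_i w_i * x_i exceeds the threshold, else -1."""
    return 1 if np.dot(w, x) > threshold else -1

def train_perceptron(X, t, epochs=20, lr=0.1):
    """Perceptron learning rule: weights move only when a pattern is misclassified."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x, target in zip(X, t):
            y = perceptron_output(x, w)
            w += lr * (target - y) * x   # zero update when y == target
    return w

# Invented, linearly separable toy patterns (two per class)
X = np.array([[+1, +1, -1, -1],
              [+1, -1, +1, -1],
              [-1, -1, +1, +1],
              [-1, +1, -1, +1]], dtype=float)
t = np.array([+1, +1, -1, -1])

w = train_perceptron(X, t)
print([perceptron_output(x, w) for x in X])  # prints [1, 1, -1, -1]
```

After a few passes, the learned weights classify all four training patterns correctly.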

13 A quick aside: in the perceptron and similar models, a node becomes active when its net input exceeds 0. This is not always what you want: in the continuous versions of the perceptron and backprop, each node therefore has a "bias", an activation that is always added to its input. Effect: it shifts the threshold.

14 Classification in 2 dimensions (figure: the line where input = threshold separates the "+" region from the "-" region; near the boundary the classes are a mixture)

15 Discriminant analysis produces exactly the same result: find the centers of the two categories, connect them with a line, and the perpendicular through its midpoint is the discrimination line.
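The construction above can be sketched as a nearest-center classifier: assigning each point to the closer class mean splits the plane along exactly that perpendicular through the midpoint. The 2-D data below are invented for illustration.

```python
import numpy as np

# Two invented clusters; their means play the role of the category centers.
class_a = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
class_b = np.array([[4.0, 4.0], [5.0, 4.0], [4.0, 5.0]])
mean_a, mean_b = class_a.mean(axis=0), class_b.mean(axis=0)

def classify(p):
    """'A' if p lies on A's side of the perpendicular bisector, i.e. closer to A's mean."""
    return "A" if np.linalg.norm(p - mean_a) < np.linalg.norm(p - mean_b) else "B"

print(classify(np.array([0.5, 0.5])))  # prints A
print(classify(np.array([4.5, 4.5])))  # prints B
```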

16 Univariate linear regression: prediction of values. Generalization = regression.

17 Perceptron with a linear activation function (figure: inputs x_1, …, x_i, …, x_n with weights w_ji, a bias, activation function φ(·), and output y). Net input: v = Σ_i x_i · w_ji. Linear activation: φ(v) = a·v + b. The weights are changed with the δ (delta) rule, minimizing Σ e².
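A minimal sketch of this linear unit trained with the delta rule, assuming invented data and hyperparameters (learning rate, epoch count): the net input is v = Σ_i x_i·w_i plus a bias, and each weight is nudged by lr·e·x_i to reduce the squared error, one pattern at a time as in the talk.

```python
import numpy as np

# Invented noiseless targets: a linear function of 3 inputs plus a bias of 0.3
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + 0.3

w = np.zeros(3)
bias = 0.0
lr = 0.05
for _ in range(200):                      # repeated passes, one pattern at a time
    for x, target in zip(X, y):
        v = x @ w + bias                  # net input v = sum_i x_i * w_i + bias
        e = target - v                    # error for this pattern
        w += lr * e * x                   # delta rule: dw_i = lr * e * x_i
        bias += lr * e                    # bias trained the same way

print(np.round(w, 2), round(bias, 2))     # recovers approximately true_w and 0.3
```

Because the targets are exactly linear, the delta rule converges to the generating weights, which is the sense in which this perceptron performs linear regression.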

18 Multivariate multiple linear regression. Multivariate = multiple independent variables: X = multiple inputs x_1, x_2, …, x_i, …, x_n. Multiple = multiple dependent variables: Y = multiple outputs y_1, y_2.

19 Linear vs. nonlinear regression (figures: a linear fit and a nonlinear fit of y on x). Here the nonlinear fit is quadratic; in general, any "wrinkled" curve can be fitted.

20 Multi-layer perceptron (figure: input vector X = [x_1, x_2, …, x_i, …, x_n], a hidden layer with net input v_j = Σ_i w_ji · x_i, and outputs y_1, y_2). Multi-layer perceptrons can fit any function: they are "universal approximators".
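A concrete illustration of why the hidden layer matters, with hand-chosen (not learned) weights: XOR cannot be computed by any single linear threshold unit, but a two-layer net with v_j = Σ_i w_ji·x_i + bias and a step nonlinearity handles it. The weight values here are assumptions picked for the demonstration.

```python
import numpy as np

step = lambda v: (v > 0).astype(float)   # step nonlinearity on the net input

W_hidden = np.array([[1.0, 1.0],         # hidden unit 1: fires for x1 OR x2
                     [1.0, 1.0]])        # hidden unit 2: fires for x1 AND x2
b_hidden = np.array([-0.5, -1.5])
w_out = np.array([1.0, -1.0])            # OR and not-AND combine to XOR
b_out = -0.5

def mlp(x):
    h = step(W_hidden @ x + b_hidden)    # hidden layer
    return step(w_out @ h + b_out)       # output layer

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, int(mlp(np.array(x, dtype=float))))  # prints 0, 1, 1, 0
```

Learning such weights (rather than setting them by hand) is what backprop adds.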

21 Overfitting (figures: a model that is too simple fits the data badly; a model that is too complex fits the training data but generalizes extremely badly)

22 Clustering: competitive learning (next week); ART.

23 Conclusions
- Neural networks are similar to statistical analyses
- Perceptron -> categorization / generalization
- Backprop -> the same, but nonlinear
- Competitive learning -> clustering
- But: statistics fits the whole data set at once, while networks learn one pattern at a time

24 Feature reduction with PCA

25 Feature extraction with PCA: unsupervised learning, Hebbian learning.
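Feature reduction with PCA can be sketched via the SVD; the 2-D data below are invented, stretched along the (1, 1) direction so the first principal component is easy to check. (Oja's rule, a Hebbian learning scheme, converges to this same first component, which is the link to the slide.)

```python
import numpy as np

rng = np.random.default_rng(2)
# 100 invented samples of 2-D data varying mostly along the (1, 1) direction
t = rng.normal(size=100)
X = np.column_stack([t + 0.1 * rng.normal(size=100),
                     t + 0.1 * rng.normal(size=100)])

Xc = X - X.mean(axis=0)                  # center the data
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
pc1 = Vt[0]                              # first principal component (unit vector)
reduced = Xc @ pc1                       # one feature per sample instead of two

print(np.round(np.abs(pc1), 2))          # approximately [0.71, 0.71]: the (1, 1) direction
```

The projection `reduced` keeps the direction of greatest variance, which is what "feature reduction" means on slide 24.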
