
1 START OF DAY 4 Reading: Chap. 3 & 4

2 Project

3 Topics & Teams Select topics/domains Select teams Deliverables – Description of the problem – Selection of objective(s) – Description of the methods used Data preparation Learning algorithms used – Description of the results Project presentations

4 Perceptron

5 Neural Networks Sub-symbolic approach: – Does not use symbols to denote objects – Views intelligence/learning as arising from the collective behavior of a large number of simple, interacting components Motivated by biological plausibility

6 Natural Neuron

7 Artificial Neuron Captures the essence of the natural neuron (Dendrites) Input values X_i from the environment or other neurons (Synapses) Real-valued weights w_i associated with each input (Soma's chemical reaction) Function F({X_i}, {w_i}) computing activation as a function of input values and weights (Axon) Activation value that may serve as input to other neurons

8 Feedforward Neural Networks Sets of (highly) interconnected artificial neurons (i.e., simple computational units) – Layered organization Characteristics – Massive parallelism – Distributed knowledge representation (i.e., implicit in patterns of interactions) – Graceful degradation (e.g., grandmother cell) – Less susceptible to brittleness – Noise tolerant – Opaque (i.e., black box) There exist other types of NNs

9 FFNN Topology Pattern of interconnections among neurons: primary source of inductive bias Characteristics – Number of layers – Number of neurons per layer – Interconnectivity (fully connected, mesh, etc.)

10 Perceptrons (1958) Simplest class of neural networks Single-layer, i.e., only one set of connection weights between inputs and outputs Boolean activation (aka step function) [Diagram: inputs x_1, x_2, …, x_n with weights w_1, w_2, …, w_n feeding a single output unit z]

11 Learning for Perceptrons Algorithm devised by Rosenblatt in 1958 Given an example (i.e., labeled input pattern): – Compute output – Check output against target – Adapt weights

12 Example (I) Perceptron with weights w_1 = .4, w_2 = −.2 and threshold Θ = .1 Training data (x_1, x_2, t): (.8, .3, 1) and (.4, .1, 0) First example: net = .8*.4 + .3*(−.2) = .26 ≥ Θ, so z = 1 Output matches target

13 Example (II) Same perceptron, second example: net = .4*.4 + .1*(−.2) = .14 ≥ Θ, so z = 1 Output does not match target (t = 0)

14 Learn-Perceptron When should weights be changed? – When the output does not match the target: (t_i − z_i) ≠ 0 How should weights be changed? – By some fixed amount (learning rate): c(t_i − z_i) – Proportional to the input value: c(t_i − z_i)x_i Algorithm (a sketch in Python follows below): – Initialize weights (typically random) – For each new training example: compute the network output, then change the weights: Δw_i = c(t_i − z_i)x_i – Repeat until no change in weights
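A minimal Python sketch of this training loop (the name train_perceptron, the use of NumPy, and the all-zero initialization chosen to match the later worked examples are illustrative choices, not from the slides):

```python
import numpy as np

def train_perceptron(patterns, targets, c=1.0, max_epochs=100):
    """Rosenblatt perceptron learning on bias-augmented 0/1 patterns.

    patterns   : examples that already include the constant bias input of 1
    targets    : 0/1 target outputs
    c          : learning rate
    max_epochs : safety cap on passes through the training set
    """
    X = np.asarray(patterns, dtype=float)
    t = np.asarray(targets, dtype=float)
    w = np.zeros(X.shape[1])                 # zero init to match the worked examples
    for _ in range(max_epochs):
        changed = False
        for x_i, t_i in zip(X, t):
            z_i = 1.0 if np.dot(w, x_i) > 0 else 0.0   # step activation
            if z_i != t_i:
                w += c * (t_i - z_i) * x_i             # delta_w = c(t - z)x
                changed = True
        if not changed:                      # no weight changed in a full pass: done
            break
    return w
```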

15 What About Θ? Original patterns: 1 0 1 -> 0, 1 0 0 -> 1 Augmented version (constant input 1 appended): 1 0 1 1 -> 0, 1 0 0 1 -> 1 Treat the threshold like any other weight. Call it a bias since it biases the output up or down Since we start with random weights anyway, ignore the −Θ notion; just think of the bias as an extra available weight Always use a bias weight
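A small illustration of the augmentation (the helper name is hypothetical): appending a constant 1 to every pattern lets the learned bias weight play the role of −Θ.

```python
import numpy as np

def augment_with_bias(patterns):
    """Append a constant input of 1 to each pattern so the bias is learned
    like any other weight (the -Theta term becomes the bias weight)."""
    X = np.asarray(patterns, dtype=float)
    return np.hstack([X, np.ones((X.shape[0], 1))])

print(augment_with_bias([[1, 0, 1], [1, 0, 0]]))
# [[1. 0. 1. 1.]
#  [1. 0. 0. 1.]]
```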

16 Example Assume a 3-input perceptron (plus bias; outputs 1 if net > 0, else 0) Assume c = 1 and initial weights all 0 Training set: 0 0 1 -> 0, 1 1 1 -> 1, 1 0 1 -> 1, 0 1 1 -> 0

Pattern   Target   Weights    Net   Output   ΔW
0 0 1 1   0        0 0 0 0    0     0         0  0  0  0
1 1 1 1   1        0 0 0 0    0     0         1  1  1  1
1 0 1 1   1        1 1 1 1    3     1         0  0  0  0
0 1 1 1   0        1 1 1 1    3     1         0 -1 -1 -1
0 0 1 1   0        1 0 0 0    0     0         0  0  0  0
1 1 1 1   1        1 0 0 0    1     1         0  0  0  0
1 0 1 1   1        1 0 0 0    1     1         0  0  0  0
0 1 1 1   0        1 0 0 0    0     0         0  0  0  0

Δw_i = c(t_i − z_i)x_i
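Running the illustrative train_perceptron sketch from slide 14 on this training set reproduces the trace and converges to the weights 1 0 0 0:

```python
# Training set from slide 16, already augmented with the bias input of 1.
X = [[0, 0, 1, 1],
     [1, 1, 1, 1],
     [1, 0, 1, 1],
     [0, 1, 1, 1]]
t = [0, 1, 1, 0]

print(train_perceptron(X, t, c=1))   # -> [1. 0. 0. 0.]
```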

17 Another Example Assume a 2-input perceptron (plus bias; outputs 1 if net > 0, else 0) Assume c = 1 and initial weights all 0 Training set: 0 0 -> 0, 1 0 -> 1, 1 1 -> 0, 0 1 -> 1

Pattern   Target   Weights     Net   Output   ΔW
0 0 1     0         0  0  0     0    0         0  0  0
1 0 1     1         0  0  0     0    0         1  0  1
1 1 1     0         1  0  1     2    1        -1 -1 -1
0 1 1     1         0 -1  0    -1    0         0  1  1
0 0 1     0         0  0  1     1    1         0  0 -1
1 0 1     1         0  0  0     0    0         1  0  1
1 1 1     0         1  0  1     2    1        -1 -1 -1
0 1 1     1         0 -1  0    -1    0         0  1  1

Δw_i = c(t_i − z_i)x_i What is happening? Why?

18 Decision Surface Assume a 2-input perceptron – z = 1 if w_1x_1 + w_2x_2 ≥ Θ – z = 0 if w_1x_1 + w_2x_2 < Θ Decision boundary: w_1x_1 + w_2x_2 = Θ – A line with slope −w_1/w_2 and intercept Θ/w_2 – No bias ⇒ the line goes through the origin In general: a hyperplane (i.e., a linear surface)
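Solving the boundary equation for x_2 makes the slope and intercept quoted above explicit:

```latex
w_1 x_1 + w_2 x_2 = \Theta
\quad\Longrightarrow\quad
x_2 = -\frac{w_1}{w_2}\, x_1 + \frac{\Theta}{w_2}
```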

19 Linear Separability Generalization: noise vs. exception Limited functionality?

20 The Plague of Linear Separability The good news is: – Learn-Perceptron is guaranteed to converge to a correct assignment of weights if such an assignment exists The bad news is: – Such an assignment exists only for linearly separable tasks The really bad news is: – There is a large number of non-linearly separable tasks. Let d be the number of (Boolean) inputs: there are 2^(2^d) possible functions of those inputs, and only a vanishing fraction of them is linearly separable – Too many tasks escape the algorithm

21 Are We Stuck? So far we have used net = Σ_i w_i x_i What if we preprocessed the inputs in a non-linear way and used net = Σ_i w_i f_i(x)? To the perceptron algorithm it would look just the same, except with different inputs For example, for a problem with two inputs x and y (plus the bias), we could also add the inputs x², y², and x·y The perceptron would just think it is a 5-dimensional task, and it is linear in those 5 dimensions – But what kind of decision surfaces would it allow for the 2-d input space?

22 Quadric Machine Example All quadratic surfaces (2nd order) – ellipsoid – parabola – etc. For example: [Figure: a 1-d data set plotted along the f_1 axis from −3 to 3, and the same points replotted in the (f_1, f_2) plane] A perceptron with just feature f_1 cannot separate the data Assume we add another feature to our perceptron: f_2 = f_1²
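A sketch of the idea on hypothetical data in the spirit of this slide (the specific points and labels are invented for illustration); it reuses the illustrative train_perceptron sketch from slide 14:

```python
import numpy as np

# Hypothetical data: class 1 sits at both extremes of f1, class 0 in the
# middle, so no single threshold on f1 separates the two classes.
f1 = np.array([-3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0])
t  = np.array([  1,    1,    0,   0,   0,   1,   1 ])

# Adding the quadratic feature f2 = f1**2 makes the task linearly separable
# in (f1, f2) space: class 1 has f2 >= 4 while class 0 has f2 <= 1.
f2 = f1 ** 2
X = np.column_stack([f1, f2, np.ones_like(f1)])     # append the bias input

w = train_perceptron(X, t, c=1.0, max_epochs=1000)  # sketch from slide 14
print((X @ w > 0).astype(int))                      # matches t after convergence
```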

23 Quadric Machine All quadratic surfaces (2nd order) – ellipsoid – parabola – etc. That significantly increases the number of problems that can be solved, but there are still many problems that are not quadrically separable Could go to 3rd- and higher-order features, but the number of possible features grows exponentially Multi-layer neural networks will allow us to discover high-order features automatically from the input space

24 Backpropagation

25 Towards a Solution Main problem: – Learn-Perceptron implements a discrete model of error (i.e., it detects that an error exists and adapts to it, but ignores the magnitude of the error, since the activation is a step function) First thing to do: – Allow nodes to have real-valued activations (amount of error = difference between computed and target output) Second thing to do: – Design a learning rule that adjusts weights based on error Last thing to do: – Use the learning rule to implement a multi-layer algorithm

26 Real-valued Activation Replace the threshold unit (step function) with a linear unit whose output is the net input itself: o = Σ_i w_i x_i For instance d: o_d = Σ_i w_i x_id

27 Defining Error We define the training error of a hypothesis, or weight vector, as the summed squared error over the training set D: E(w) = ½ Σ_{d∈D} (t_d − o_d)² Goal: minimize E – Find the direction of steepest ascent of E (aka the gradient) – Move in the opposite direction (i.e., to decrease E)

28 Minimizing the Error
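The slide's derivation is not in the transcript; a standard gradient computation consistent with the error just defined (and with the delta rule on the next slide) would be:

```latex
\frac{\partial E}{\partial w_i}
  = \frac{\partial}{\partial w_i}\,\frac{1}{2}\sum_{d}(t_d - o_d)^2
  = \sum_{d}(t_d - o_d)\,\frac{\partial}{\partial w_i}\Bigl(t_d - \sum_j w_j x_{jd}\Bigr)
  = -\sum_{d}(t_d - o_d)\,x_{id}

\Delta w_i = -c\,\frac{\partial E}{\partial w_i} = c\sum_{d}(t_d - o_d)\,x_{id}
```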

29 The Delta Rule Gradient descent on the error surface: – Initialize weights to small random values – Repeat until no progress: initialize each Δw_i to 0; for each training example, compute output o for x and, for each weight w_i, Δw_i ← Δw_i + c(t − o)x_i; then, for each weight w_i, w_i ← w_i + Δw_i Stochastic version: for each training example and each weight w_i, w_i ← w_i + c(t − o)x_i Better? Note the change in sign since we minimize E
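A minimal sketch of both variants in Python (function name, defaults, and the random seed are illustrative assumptions):

```python
import numpy as np

def delta_rule(X, t, c=0.01, epochs=100, stochastic=False):
    """Gradient descent on the squared error of a linear unit (delta rule).
    X is assumed to be bias-augmented; names and defaults are illustrative."""
    X = np.asarray(X, dtype=float)
    t = np.asarray(t, dtype=float)
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.1, size=X.shape[1])   # small random initial weights
    for _ in range(epochs):
        if stochastic:
            for x_d, t_d in zip(X, t):           # update after every example
                o = np.dot(w, x_d)
                w += c * (t_d - o) * x_d
        else:
            dw = np.zeros_like(w)                # accumulate over the whole set
            for x_d, t_d in zip(X, t):
                o = np.dot(w, x_d)
                dw += c * (t_d - o) * x_d
            w += dw
    return w
```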

30 Discussion Gradient-descent learning (with linear units) requires more than one pass through the training set The good news is: – Convergence is guaranteed if the problem is solvable The bad news is: – Still produces only linear functions – Even when used in a multi-layer context (composition of linear functions is a linear function) Needs to be further generalized!

31 Non-linear Activation Introduce non-linearity with a sigmoid function: σ(net) = 1 / (1 + e^(−net)) 1. Differentiable (required for gradient descent) 2. Most unstable (i.e., steepest) in the middle

32 Derivative of the Sigmoid σ′(x) = σ(x)(1 − σ(x)) You need only compute the sigmoid; its derivative comes for free!
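A small sketch showing why the derivative "comes for free": once σ(x) has been computed, σ′(x) is just a product of values already in hand (illustrative code, not from the slides):

```python
import numpy as np

def sigmoid(x):
    """Logistic sigmoid: 1 / (1 + e^(-x))."""
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_prime_from_output(s):
    """Derivative expressed via the sigmoid's own output s = sigmoid(x)."""
    return s * (1.0 - s)

s = sigmoid(0.0)
print(s, sigmoid_prime_from_output(s))   # 0.5 0.25  (steepest in the middle)
```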

33 Multi-layer Feed-forward NN [Diagram: input nodes i feeding a layer of hidden nodes j, which feed output nodes k]

34 Backpropagation Learning Repeat – Present a training instance d – Compute error δ_k of the output units – For each hidden layer: compute error δ_j using the error from the next layer – Update all weights: w_pq ← w_pq + Δw_pq, where Δw_pq = c·δ_q·o_p (learning rate times the error of the downstream node q times the activation of the upstream node p) Until stopping criterion Note that BP is stochastic
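The transcript omits the update equations themselves; for the sigmoid units and squared error used in these slides, the standard forms (notation assumed, not copied verbatim from the deck) are:

```latex
\delta_k = o_k\,(1 - o_k)\,(t_k - o_k)                % output unit k
\delta_j = o_j\,(1 - o_j)\sum_{k}\delta_k\, w_{jk}    % hidden unit j
\Delta w_{pq} = c\,\delta_q\, o_p                     % weight from node p to node q
```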

35 Setting Up the Derivation

36 Output Units

37 Hidden Units

38 Putting it all together [Diagram: the full network (input nodes i, hidden nodes j, output nodes k) annotated with the complete set of update equations] The o(1 − o) factor is largest where the sigmoid is most unstable

39 Example (I) Consider a simple network composed of: – 3 inputs: x, y, z – 1 hidden node: h – 2 outputs: q, r Assume c = 0.5, all weights are initialized to 0.2, and weight updates are incremental Consider the training set: – 1 0 1 -> 0 1 – 0 1 1 -> 1 1 4 iterations over the training set
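The slide's numerical trace is not in the transcript; the sketch below sets up this 3-1-2 network under stated assumptions (sigmoid units everywhere and no bias nodes, which the slide leaves unspecified) so the updates can be reproduced:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Tiny 3-1-2 network from slide 39: inputs (x, y, z), one hidden node h,
# outputs (q, r). Assumes sigmoid units and no biases -- an illustration,
# not the slide's exact trace.
c = 0.5
w_hidden = np.full(3, 0.2)          # weights from the 3 inputs to h
w_out = np.full(2, 0.2)             # weights from h to q and r

training_set = [
    (np.array([1.0, 0.0, 1.0]), np.array([0.0, 1.0])),
    (np.array([0.0, 1.0, 1.0]), np.array([1.0, 1.0])),
]

for _ in range(4):                                  # 4 iterations over the set
    for x, t in training_set:                       # incremental weight updates
        h = sigmoid(np.dot(w_hidden, x))            # forward pass
        o = sigmoid(w_out * h)
        delta_o = o * (1 - o) * (t - o)             # output-unit errors
        delta_h = h * (1 - h) * np.dot(delta_o, w_out)  # hidden error (pre-update weights)
        w_out += c * delta_o * h                    # backward pass: weight updates
        w_hidden += c * delta_h * x

print(w_hidden, w_out)
```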

40 Example (II)

41 Local Minima FFNN can get stuck in a local minimum – More common for small networks – For most large networks (many weights), local minima rarely occur in practice Many dimensions of weights ⇒ unlikely to be at a minimum in every dimension simultaneously – there is almost always a way down (e.g., water running down a high-dimensional surface) If needed, can use momentum or train several NNs

42 Momentum Simple speed-up modification Weight update maintains momentum in the direction it has been going – Faster in flats – Could leap past minima (good or bad) – Significant speed-up, common value α ≈ .9 – Effectively increases the learning rate in areas where the gradient is consistently of the same sign (a common approach in adaptive learning rate methods)
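A minimal sketch of the update, assuming a generic gradient-descent setting (helper name and defaults are illustrative):

```python
import numpy as np

def momentum_update(w, grad, velocity, c=0.1, alpha=0.9):
    """One gradient-descent step with momentum.

    The new step blends the current downhill direction with the previous
    step, so consistent gradients accumulate speed (faster in flats) while
    oscillating gradients partly cancel out."""
    velocity = alpha * velocity - c * grad      # remember the direction of travel
    w = w + velocity
    return w, velocity

# Usage sketch with a dummy gradient:
w = np.zeros(3)
v = np.zeros(3)
w, v = momentum_update(w, grad=np.array([1.0, -2.0, 0.5]), velocity=v)
print(w, v)
```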

43 Learning Parameters Connectivity: typically fully connected between layers Number of hidden nodes: – Too many nodes make learning slower and can overfit – Too few will underfit Number of layers: 1 or 2 hidden layers are usually sufficient; with more layers, attenuation of the error signal makes learning very slow – 1 is most common Momentum: (.5 – .99) Most common way to set parameters: a few trial-and-error runs (cross-validation) All of these could be set automatically by the learning algorithm, and there are numerous approaches for doing so

44 Backpropagation Summary Most common neural network approach – Many other different styles of neural networks (RBF, Hopfield, etc.) Excellent empirical results Scaling – the pleasant surprise – Local minima very rare as problem and network complexity increase User defined parameters usually handled by multiple experiments Many variants, such as – Regression – Typically linear output nodes, normal hidden nodes – Adaptive parameters, ontogenic (growing and pruning) learning algorithms – Many different learning algorithm approaches – Recurrent networks – Deep networks – Still an active research area

45 END OF DAY 4 Homework: Decision Tree Learning

