5 Single unit (neuron) of an artificial neural network
6 Activation Functions. Each unit computes ini = Σj Wj,i aj and outputs ai = g(ini); the threshold t is folded into the weights as W0,i = t, with the input a0 = -1 fixed.
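A minimal Python sketch of such a unit (the helper names and example weights are illustrative, not from the slides):

```python
import math

# A single artificial unit: weighted sum of inputs followed by an activation
# function g.  The threshold is folded into the weights as weights[0] = t,
# paired with the fixed "bias input" a0 = -1 described on the slide.
def unit_output(weights, inputs, g):
    activations = [-1.0] + list(inputs)          # prepend the fixed a0 = -1
    in_i = sum(w * a for w, a in zip(weights, activations))
    return g(in_i)

step = lambda x: 1 if x >= 0 else 0              # hard threshold
sigmoid = lambda x: 1.0 / (1.0 + math.exp(-x))   # differentiable alternative

# weights[0] is the threshold t = 0.5; the remaining weights multiply the inputs.
print(unit_output([0.5, 1.0, 1.0], [1, 0], step))   # 1, since 1 + 0 >= 0.5
```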
7 Boolean gates (AND, OR, NOT) can be simulated by single units where g is a step function.
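A sketch of this claim, with weights and thresholds chosen by hand (assuming the same W0,i = t, a0 = -1 convention as above):

```python
# Boolean gates as single threshold units with a step activation.
# weights[0] is the threshold t, paired with the fixed input a0 = -1.
def fire(weights, inputs):
    in_i = sum(w * a for w, a in zip(weights, [-1.0] + list(inputs)))
    return 1 if in_i >= 0 else 0

AND = [1.5, 1.0, 1.0]    # fires only when both inputs are 1
OR  = [0.5, 1.0, 1.0]    # fires when at least one input is 1
NOT = [-0.5, -1.0]       # fires when its single input is 0

for x in (0, 1):
    for y in (0, 1):
        print(f"AND({x},{y})={fire(AND, [x, y])}  OR({x},{y})={fire(OR, [x, y])}")
print(f"NOT(0)={fire(NOT, [0])}  NOT(1)={fire(NOT, [1])}")
```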
8 Topologies: feed-forward vs. recurrent. Recurrent networks have state (activations from previous time steps have to be remembered): short-term memory.
9 Hopfield network: bidirectional symmetric connections (Wi,j = Wj,i); g is the sign function; all units are both input and output units; activations are ±1. "Associative memory": after training on a set of examples, a new stimulus will cause the network to settle into the activation pattern corresponding to the training example that most closely resembles the new stimulus (e.g. recovering a photograph from parts of it). Thrm. Can reliably store about 0.138 × #units training examples.
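A minimal sketch of such an associative memory, using Hebbian storage and synchronous sign updates (the classical formulation updates units asynchronously; the six-unit patterns are made up for illustration):

```python
import numpy as np

# Hopfield-style associative memory over ±1 activations with symmetric weights.
def train_hopfield(patterns):
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:                      # Hebbian storage rule
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)                  # no self-connections
    return W / n

def recall(W, stimulus, steps=10):
    a = stimulus.copy()
    for _ in range(steps):                  # update with the sign function until settled
        a = np.where(W @ a >= 0, 1, -1)
    return a

patterns = np.array([[ 1, 1, -1, -1,  1, -1],
                     [-1, 1,  1, -1, -1,  1]])
W = train_hopfield(patterns)
noisy = np.array([1, 1, -1, -1, -1, -1])    # corrupted version of the first pattern
print(recall(W, noisy))                     # settles back to [ 1  1 -1 -1  1 -1]
```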
10 Boltzmann machine: symmetric weights; each output is 0 or 1; includes units that are neither input units nor output units (hidden units). Stochastic g, i.e. the unit outputs 1 with some probability (a fn of ini). State transitions resemble simulated annealing. Approximates the configuration that best meets the training set.
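A sketch of such a stochastic unit (the sigmoid form and the temperature parameter T are standard for Boltzmann machines but not spelled out on the slide):

```python
import math, random

# Stochastic Boltzmann-machine unit: outputs 1 with a probability that grows
# with its weighted input in_i.  The temperature T is the simulated-annealing
# knob: high T makes the unit nearly random, low T makes it nearly deterministic.
def stochastic_output(in_i, T=1.0):
    p_one = 1.0 / (1.0 + math.exp(-in_i / T))
    return 1 if random.random() < p_one else 0

print([stochastic_output(2.0, T=0.5) for _ in range(10)])   # mostly 1s
print([stochastic_output(2.0, T=10.0) for _ in range(10)])  # close to coin flips
```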
11 Learning in ANNs is the process of tuning the weights; it is a form of nonlinear regression.
12 ANN topology: representation capability vs. overfitting risk. A feed-forward net with one hidden layer can approximate any continuous fn of the inputs; with 2 hidden layers it can approximate any fn at all, but the #units needed in each layer may grow exponentially. Learning the topology: hill-climbing vs. genetic algorithms vs. …; removing vs. adding (nodes/connections); compare candidates via cross-validation.
13 Perceptrons: implementable with one output unit. E.g. the majority fn (output 1 iff more than half of the n inputs are 1) is representable by a single perceptron, whereas a decision tree requires O(2^n) nodes.
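A sketch of the majority function as one perceptron unit: all input weights are 1 and the threshold is n/2 (using the W0,i = t, a0 = -1 convention from the earlier slides):

```python
# Majority of n binary inputs as a single threshold unit.
def majority(inputs):
    n = len(inputs)
    weights = [n / 2.0] + [1.0] * n          # weights[0] is the threshold n/2
    in_i = sum(w * a for w, a in zip(weights, [-1.0] + list(inputs)))
    return 1 if in_i >= 0 else 0

print(majority([1, 1, 0]))   # 1: two of the three inputs are on
print(majority([1, 0, 0]))   # 0: only one of the three inputs is on
```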
14 Representation capability of a perceptron: every input can affect the output in only one direction, independent of the other inputs. E.g. a perceptron is unable to represent WillWait in the restaurant example. Perceptrons can only represent linearly separable fns. For a given problem, does one know in advance whether it is linearly separable?
16 Learning linearly separable functions: the training examples are used over and over (each full pass is an epoch); Err = T - O; a variant of the perceptron learning rule. Thrm. Will learn the linearly separable target fn (if the learning rate α is not too high). Intuition: gradient descent in a search space with no local optima.
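A sketch of the standard perceptron learning rule, Wj ← Wj + α · Err · xj with Err = T - O (the OR training set, learning rate, and epoch count are illustrative choices):

```python
# Train a single threshold unit on a linearly separable target (2-input OR).
def train_perceptron(examples, alpha=0.1, epochs=20):
    w = [0.0, 0.0, 0.0]                       # w[0] is the threshold (input a0 = -1)
    for _ in range(epochs):                   # reuse the same examples every epoch
        for inputs, target in examples:
            a = [-1.0] + list(inputs)
            output = 1 if sum(wi * ai for wi, ai in zip(w, a)) >= 0 else 0
            err = target - output             # Err = T - O
            w = [wi + alpha * err * ai for wi, ai in zip(w, a)]
    return w

or_examples = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
print(train_perceptron(or_examples))   # learned weights separate positives from negatives
```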
17 Encoding for ANNs. E.g. #patrons can be None, Some, or Full. Local encoding: one input unit with None = 0.0, Some = 0.5, Full = 1.0. Distributed encoding: one input unit per value, e.g. None = (1, 0, 0), Some = (0, 1, 0), Full = (0, 0, 1).
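A sketch of both encodings for the #patrons attribute (function names are illustrative):

```python
# Two ways of turning the ternary #patrons attribute into network inputs.
PATRON_VALUES = ["None", "Some", "Full"]

def local_encoding(value):
    # a single input unit carrying 0.0 / 0.5 / 1.0
    return {"None": 0.0, "Some": 0.5, "Full": 1.0}[value]

def distributed_encoding(value):
    # one input unit per value (one-hot)
    return [1.0 if value == v else 0.0 for v in PATRON_VALUES]

print(local_encoding("Some"))        # 0.5
print(distributed_encoding("Some"))  # [0.0, 1.0, 0.0]
```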
20 Multilayer feedforward networks: structural credit assignment problem. Back propagation algorithm (again, Erri = Ti - Oi): first update the weights between hidden & output units, then update the weights between input & hidden units by propagating the error backwards.
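A minimal sketch of the algorithm, assuming one hidden layer of sigmoid units, squared error, and the XOR function as training data (layer sizes, learning rate, and data set are illustrative choices, not from the slides):

```python
import numpy as np

# Back propagation for a 2-input / 4-hidden / 1-output network of sigmoid units,
# trained one example at a time.  Thresholds are folded in as extra weights on
# a fixed -1 input, as on the earlier slides.
rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def with_bias(a):
    return np.append(a, -1.0)                 # fixed a0 = -1 input

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([0, 1, 1, 0], dtype=float)       # XOR targets

W1 = rng.normal(scale=0.5, size=(3, 4))       # (2 inputs + bias) -> 4 hidden
W2 = rng.normal(scale=0.5, size=(5,))         # (4 hidden + bias) -> 1 output
alpha = 0.5

for epoch in range(5000):
    for x, t in zip(X, T):
        xb = with_bias(x)
        h = sigmoid(xb @ W1)                  # forward pass: hidden activations
        hb = with_bias(h)
        o = sigmoid(hb @ W2)                  # forward pass: output activation
        delta_out = (t - o) * o * (1 - o)     # Err * g'(in) at the output unit
        delta_hid = W2[:4] * delta_out * h * (1 - h)   # error propagated backwards
        W2 += alpha * delta_out * hb          # update hidden -> output weights
        W1 += alpha * np.outer(xb, delta_hid) # update input -> hidden weights

for x in X:
    h = sigmoid(with_bias(x) @ W1)
    print(x, round(float(sigmoid(with_bias(h) @ W2)), 2))   # typically close to 0, 1, 1, 0
```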
21 Back propagation (BP) as gradient descent search: a way of localizing the computation of the gradient to individual units.
22 Observations on BP as gradient descent: to minimize the error, move in the opposite direction of the gradient. g needs to be differentiable, so the sign fn or step fn cannot be used; use e.g. the sigmoid, for which g' = g(1 - g). The gradient is taken wrt. one training example at a time.
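A short check of the derivative identity g' = g(1 - g) that BP relies on:

```python
import math

# The sigmoid and its derivative, compared against a numerical derivative.
def g(x):
    return 1.0 / (1.0 + math.exp(-x))

def g_prime(x):
    return g(x) * (1.0 - g(x))     # the closed form used by back propagation

for x in (-2.0, 0.0, 1.5):
    numeric = (g(x + 1e-6) - g(x - 1e-6)) / 2e-6
    print(x, round(g_prime(x), 6), round(numeric, 6))   # the two columns agree
```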
25 Expressiveness of BP networks: 2^n/n hidden units are needed to represent arbitrary Boolean fns of n inputs (such a network has O(2^n) weights, and we need at least 2^n bits to represent an arbitrary Boolean fn). Thrm. Any continuous fn f: [0,1]^n → R^m can be implemented in a 3-layer network with 2n+1 hidden units (the activation fns take a special form) [Kolmogorov].
26 Efficiency of BP: using the trained network is fast; training is slow. An epoch takes time on the order of #examples × #weights, and exponentially many epochs in the #inputs may be needed.
27 More on BP… Generalization: good on fns where the output varies smoothly with the input. Sensitivity to noise: very tolerant of noise, but does not give a degree of certainty in the output. Transparency: a black box. Prior knowledge: hard to "prime". No convergence guarantees.
28 Summary of representation capabilities (model class) of different supervised learning methods: 3-layer feedforward ANN, decision tree, perceptron, k-nearest neighbor, version space.