The Retina
This kind of processing is most common in the preliminary stages of data processing: the retina and the ears.
What is known about the learning process
Activation: every activity leads to the firing of a certain set of neurons.
Habituation: the psychological process in humans and other organisms in which there is a decrease in psychological and behavioral response to a stimulus after repeated exposure to that stimulus over a duration of time.
A spectrum of machine learning tasks
Typical Statistics:
- Low-dimensional data (e.g. fewer than 100 dimensions).
- Lots of noise in the data.
- There is not much structure in the data, and what structure there is can be represented by a fairly simple model.
- The main problem is distinguishing true structure from noise.
Artificial Intelligence:
- High-dimensional data (e.g. more than 100 dimensions).
- The noise is not sufficient to obscure the structure in the data if we process it right.
- There is a huge amount of structure in the data, but the structure is too complicated to be represented by a simple model.
- The main problem is figuring out a way to represent the complicated structure so that it can be learned.
Artificial Neural Networks
Artificial Neural Networks have been applied successfully to:
- speech recognition
- image analysis
- adaptive control
[Diagram: a neuron; the inputs are multiplied by weights W, summed (Σ), and passed through an activation function f(n) to produce the output.]
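As a minimal illustration of the neuron in the diagram (the names, weights, and activation choice below are illustrative assumptions, not from the slides), the unit computes a weighted sum of its inputs and passes it through an activation function:

import numpy as np

def neuron(x, w, b, f=np.tanh):
    """A single artificial neuron: output = f(sum_i w_i * x_i + b)."""
    n = np.dot(w, x) + b   # the weighted sum (the Σ node in the diagram)
    return f(n)            # the activation function applied to the net input

# Example with three inputs and arbitrary weights
print(neuron(np.array([1.0, 0.5, -0.2]), np.array([0.4, -0.1, 0.8]), b=0.1))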
Hebbian Learning
In 1949 Donald Hebb introduced Hebbian learning:
- synchronous activation increases the synaptic strength;
- asynchronous activation decreases the synaptic strength.
When activities were repeated, the connections between those neurons strengthened. This repetition is what leads to the formation of memory.
Update rule: Δw_ij = η · x_i · x_j (strengthen the connection between co-active units i and j).
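A minimal sketch of this rule (the learning rate η and the example activity pattern are illustrative assumptions):

import numpy as np

def hebbian_update(W, x, eta=0.01):
    """One Hebbian step, dW_ij = eta * x_i * x_j: units that fire
    together strengthen the connection between them."""
    return W + eta * np.outer(x, x)

# Repeating an activity in which units 0 and 1 fire together
# strengthens the connection W[0, 1], forming a "memory" of it.
W = np.zeros((3, 3))
for _ in range(100):
    W = hebbian_update(W, np.array([1.0, 1.0, 0.0]))
print(W)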
The simplest model - the Perceptron
The Perceptron was introduced in 1957 by Frank Rosenblatt.
[Diagram: an input layer feeding a single output unit, with output destinations D0, D1, D2.]
Perceptron: y = f(Σ_i w_i x_i + b)
Activation function (step): f(n) = 1 if n > 0, else 0.
Learning (update rule): w_i ← w_i + η (t − y) x_i, where t is the target output.
The simplest model - the Perceptron
The perceptron is a linear classifier: it can only perfectly classify a set of linearly separable data. In particular, it is incapable of computing the Exclusive Or (XOR) function.
How can we learn multiple layers?
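A small sketch of the perceptron learning rule run on the XOR targets (illustrative code; the learning rate and epoch count are assumptions), showing that no weight vector ever classifies all four points:

import numpy as np

def train_perceptron(X, t, epochs=100, eta=0.1):
    """Rosenblatt's rule: w += eta * (t - y) * x after each example."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for x, target in zip(X, t):
            y = float(np.dot(w, x) + b > 0)   # step activation
            w += eta * (target - y) * x
            b += eta * (target - y)
    return w, b

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
w, b = train_perceptron(X, np.array([0.0, 1.0, 1.0, 0.0]))  # XOR targets
print([float(np.dot(w, x) + b > 0) for x in X])
# XOR is not linearly separable, so at least one output is always wrong.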
Second generation neural networks (~1985)
Back Propagation:
- Compare the outputs with the correct answer to get an error signal.
- Back-propagate the error signal to get derivatives for learning.
[Diagram: input vector → hidden layers → outputs.]
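A compact sketch of these two steps for a one-hidden-layer network (the XOR task, learning rate, and initialization are illustrative assumptions, not from the slides):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([[0.0], [1.0], [1.0], [0.0]])     # XOR, now learnable

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)
for _ in range(10000):
    h = sigmoid(X @ W1 + b1)                    # forward pass
    y = sigmoid(h @ W2 + b2)
    d2 = (y - t) * y * (1 - y)                  # error signal at the outputs
    d1 = (d2 @ W2.T) * h * (1 - h)              # error back-propagated to hidden layer
    W2 -= 0.5 * h.T @ d2;  b2 -= 0.5 * d2.sum(0)
    W1 -= 0.5 * X.T @ d1;  b1 -= 0.5 * d1.sum(0)
print(y.round(2))  # usually close to [0, 1, 1, 0]; a bad initialization
                   # can still land in a poor local optimum (see next slide)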
Back Propagation
Advantage: a multi-layer perceptron network can be trained by the back-propagation algorithm to perform any mapping between the input and the output.
What is wrong with back-propagation?
- It requires labeled training data, yet almost all data is unlabeled.
- The learning time does not scale well: it is very slow in networks with multiple hidden layers.
- It can get stuck in poor local optima.
A temporary digression: Vapnik and his co-workers developed a very clever type of perceptron called the Support Vector Machine. In the 1990s, many researchers abandoned neural networks with multiple adaptive hidden layers because Support Vector Machines worked better.
Overcoming the limitations of back-propagation: Restricted Boltzmann Machines
- Keep the efficiency and simplicity of using a gradient method for adjusting the weights, but use it for modeling the structure of the sensory input.
- Adjust the weights to maximize the probability that a generative model would have produced the sensory input.
- Learn p(image), not p(label | image).
Restricted Boltzmann Machines (RBM)
An RBM is a two-layer network of stochastic units: a visible (input) layer and a hidden layer.
- The inference problem: infer the states of the unobserved (hidden) variables.
- The learning problem: adjust the interactions between variables to make the network more likely to generate the observed data.
An RBM is a graphical model.
Graphical models
- MRF (Markov random field): undirected; each edge represents mutual dependencies between nodes.
- Bayesian network (also called a belief network): directed and acyclic; arrows run from hidden variables to data.
- HMM: the simplest Bayesian network.
- Restricted Boltzmann Machine: undirected, with symmetric connections and no intra-layer connections, i.e. a bipartite graph between hidden and visible units.
Stochastic binary units (Bernoulli variables)
These have a state of 1 or 0. The probability of turning on is determined by the weighted input from the other units (plus a bias):
p(s_j = 1) = 1 / (1 + exp(−b_j − Σ_i s_i w_ij))
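A one-line sketch of sampling such a unit (a hypothetical helper, consistent with the logistic rule above):

import numpy as np

def sample_binary_unit(s, w, b, rng=np.random.default_rng()):
    """Turn the unit on (state 1) with probability
    sigma(b + sum_i s_i * w_i), where sigma is the logistic function."""
    p_on = 1.0 / (1.0 + np.exp(-(b + np.dot(s, w))))
    return int(rng.random() < p_on)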
The Energy of a joint configuration (ignoring terms to do with biases)
The energy of the current state: E(v, h) = −Σ_{i,j} v_i h_j w_ij
The joint probability distribution: p(v, h) = e^{−E(v,h)} / Z
Probability distribution over the visible vector v: p(v) = Σ_h e^{−E(v,h)} / Z
Partition function: Z = Σ_{v,h} e^{−E(v,h)}
The derivative of the energy function: −∂E(v,h)/∂w_ij = v_i h_j
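To make the role of the partition function concrete, here is a brute-force computation of Z for a tiny, hypothetical RBM (2 visible, 2 hidden units, arbitrary example weights); the enumeration is exponential in the number of units, which is why Z is intractable for realistic networks:

import itertools
import numpy as np

def energy(v, h, W):
    """E(v, h) = -sum_ij v_i * h_j * w_ij (bias terms ignored, as above)."""
    return -v @ W @ h

W = np.array([[1.0, -0.5],
              [0.3,  0.8]])                    # arbitrary example weights
states = [np.array(c, dtype=float) for c in itertools.product([0, 1], repeat=2)]
Z = sum(np.exp(-energy(v, h, W)) for v in states for h in states)
p_joint = lambda v, h: np.exp(-energy(v, h, W)) / Z       # p(v, h)
p_visible = lambda v: sum(p_joint(v, h) for h in states)  # p(v), summing out h
print(p_visible(np.array([1.0, 1.0])))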
Hinton's method - Contrastive Divergence
The maximum likelihood method minimizes the Kullback-Leibler divergence between the data distribution P_0 and the model's equilibrium distribution P_∞:
KL(P_0 ‖ P_∞) = Σ_v P_0(v) log( P_0(v) / P_∞(v) )
Intuitively: make the model's distribution as close as possible to the distribution of the data.
Contrastive Divergence (CD) method
In 2002 Hinton proposed a new learning procedure. CD follows approximately the difference of two divergences (= "the gradient"):
CD_n = KL(p_0 ‖ p_∞) − KL(p_n ‖ p_∞)
where KL(p ‖ p_∞) is the "distance" of the distribution p from the equilibrium distribution p_∞.
Practically: run the chain for only a small number of steps (actually, one is sufficient). The update formula for the weights becomes:
Δw_ij = η ( ⟨v_i h_j⟩_0 − ⟨v_i h_j⟩_1 )
This greatly reduces both the computation per gradient step and the variance of the estimated gradient. Experiments show good parameter estimation capabilities.
A picture of the maximum likelihood learning algorithm for an RBM
Start with a training vector on the visible units. Then alternate between updating all the hidden units in parallel and updating all the visible units in parallel.
[Diagram: the chain of states at t = 0, 1, 2, …, ∞; the statistics at t = ∞ come from the "fantasy" (i.e. the model).]
Δw_ij = η ( ⟨v_i h_j⟩^0 − ⟨v_i h_j⟩^∞ )
One Gibbs sample (CD): stop the chain after a single update of the hidden and visible units.
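A minimal CD-1 sketch matching the picture above (biases are omitted to keep it short; the learning rate and sampling details are illustrative assumptions):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cd1_step(W, v0, eta=0.1, rng=np.random.default_rng()):
    """One contrastive-divergence update of the weight matrix W:
    dW = eta * (<v h>_0 - <v h>_1), estimated from a single Gibbs step."""
    ph0 = sigmoid(v0 @ W)                      # p(h=1 | v0), the t=0 statistics
    h0 = (rng.random(ph0.shape) < ph0) * 1.0   # sample the hidden states
    pv1 = sigmoid(h0 @ W.T)                    # reconstruct the visible units
    ph1 = sigmoid(pv1 @ W)                     # hidden probabilities at t=1
    return W + eta * (np.outer(v0, ph0) - np.outer(pv1, ph1))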
Multi Layer Network
After Gibbs sampling for sufficiently long, the network reaches thermal equilibrium: the states of the units still change, but the probability of finding the system in any particular configuration does not.
Adding another layer always improves the variational bound on the log-likelihood, unless the top-level RBM is already a perfect model of the data it is trained on.
[Diagram: a stack of layers, data → h1 → h2 → h3.]
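A sketch of the greedy layer-by-layer scheme this implies, reusing the hypothetical cd1_step and sigmoid helpers from the previous sketch; each new RBM is trained on the hidden activities of the layer below it:

import numpy as np

def train_rbm(data, n_hidden, epochs=10):
    """Train a single RBM with CD-1 (cd1_step defined earlier)."""
    rng = np.random.default_rng(0)
    W = 0.01 * rng.normal(size=(data.shape[1], n_hidden))
    for _ in range(epochs):
        for v in data:
            W = cd1_step(W, v)
    return W

def stack_rbms(data, layer_sizes):
    """Greedily train a stack of RBMs: data -> h1 -> h2 -> h3 ..."""
    weights, x = [], data
    for n in layer_sizes:
        W = train_rbm(x, n)
        weights.append(W)
        x = sigmoid(x @ W)   # hidden activities become the next layer's data
    return weights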
The network for the 4 squares task
2 input units → 4 logistic units → 4 labels
Results
The network used to recognize handwritten binary digits from the MNIST database:
28×28 pixels → 500 neurons → 2000 neurons → 10 labels (output vector)
[Figure: Class: new test images from the digit class the model was trained on. Non-class: images from an unfamiliar digit class (the network tries to see every image as a 2).]
Examples of correctly recognized handwritten digits that the neural network had never seen before
Pros:
- Good generalization capabilities.
Cons:
- Only binary values are permitted.
- No invariance (neither translation nor rotation).
How well does it discriminate on the MNIST test set with no extra information about geometric distortions?
- Generative model based on RBM's: %
- Support Vector Machine (Decoste et al.): %
- Backprop with 1000 hiddens (Platt): ~1.6%
- Backprop with >300 hiddens: ~1.6%
- K-Nearest Neighbor: ~3.3%
A non-linear generative model for human motion
CMU Graphics Lab Motion Capture Database:
- Motion sampled from video at 30 Hz.
- Each frame is a 1×60 vector of skeleton parameters (3D joint angles).
- The data does not need to be heavily preprocessed or dimensionality-reduced.
Conditional RBM (cRBM)
A cRBM can model temporal dependencies by treating the visible variables in the past as additional biases.
Add two types of connections from the past n frames of the visible units:
- to the current visible units;
- to the current hidden units.
Given the past n frames, the hidden units at time t are conditionally independent, so we can still use CD for training cRBMs.
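A sketch of the "dynamic biases" this describes (the names A, B and the helper are hypothetical; they stand for the two added connection types):

import numpy as np

def dynamic_biases(v_past, A, B, b_v, b_h):
    """v_past: the past n visible frames, concatenated into one vector.
    A: weights from past visibles to the current visibles;
    B: weights from past visibles to the current hiddens."""
    b_v_hat = b_v + A @ v_past   # extra bias on the current visible units
    b_h_hat = b_h + B @ v_past   # extra bias on the current hidden units
    return b_v_hat, b_h_hat

# With these biases fixed by the past frames, the hidden units at time t
# remain conditionally independent, so CD-1 training works unchanged.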