2 ARTIFICIAL NEURAL NETWORKS
Analogy to biological neural systems, the most robust learning systems we know.
Attempt to understand natural biological systems through computational modeling.
Massive parallelism allows for computational efficiency.
Help understand the “distributed” nature of neural representations (rather than “localist” representations), which allows robustness and graceful degradation.
Intelligent behavior arises as an “emergent” property of a large number of simple units rather than from explicitly encoded symbolic rules and algorithms.
3 Real Neurons
Cell structures:
Cell body
Dendrites
Axon
Synaptic terminals
4 Neural Communication
Electrical potential across the cell membrane exhibits spikes called action potentials.
A spike originates in the cell body, travels down the axon, and causes synaptic terminals to release neurotransmitters.
The chemical diffuses across the synapse to dendrites of other neurons.
Neurotransmitters can be excitatory or inhibitory.
If the net input of neurotransmitters to a neuron from other neurons is excitatory and exceeds some threshold, it fires an action potential.
5 NEURAL SPEED CONSTRAINTS
Neurons have a “switching time” on the order of a few milliseconds, compared to nanoseconds for current computing hardware.
However, neural systems can perform complex cognitive tasks (vision, speech understanding) in tenths of a second.
There is only time to perform about 100 serial steps in this time frame, compared to orders of magnitude more for current computers.
The brain must be exploiting “massive parallelism.”
The human brain has about 10^11 neurons with an average of 10^4 connections each.
6 Real Neural Learning
Synapses change size and strength with experience.
Hebbian learning: when two connected neurons fire at the same time, the strength of the synapse between them increases.
“Neurons that fire together, wire together.”
7 ARTIFICIAL NEURAL NETWORKS
Inputs x_i arrive through pre-synaptic connections.
Synaptic efficacy is modeled using real-valued weights w_i.
The response of the neuron is a nonlinear function f of its weighted inputs.
8 Inputs To Neurons
Inputs arise from other neurons or from outside the network.
Nodes whose inputs arise outside the network are called input nodes and simply copy values.
An input may excite or inhibit the response of the neuron to which it is applied, depending upon the weight of the connection.
9 Weights
Weights represent synaptic efficacy and may be excitatory or inhibitory.
Normally, positive weights are considered excitatory while negative weights are thought of as inhibitory.
Learning is the process of modifying the weights in order to produce a network that performs some function.
10 ARTIFICIAL NEURAL NETWORK LEARNING
Learning approach based on modeling adaptation in biological neural systems.
Perceptron: initial algorithm for learning simple neural networks (single layer), developed in the 1950s.
Backpropagation: more complex algorithm for learning multi-layer neural networks, developed in the 1980s.
11 SINGLE LAYER NETWORKS
Model the network as a graph with cells as nodes and synaptic connections as weighted edges from node i to node j, with weight w_ji.
Model the net input to cell j as:
    net_j = Σ_i w_ji o_i
Cell output is a threshold function (T_j is the threshold for unit j):
    o_j = 1 if net_j > T_j, else o_j = 0
[Figure: unit 1 receives inputs from units 2–6 via weights w_12 … w_16; the output o_j is a step function of net_j, jumping from 0 to 1 at the threshold T_j.]
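As a concrete illustration, the unit above can be written in a few lines of Python. This is a minimal sketch of our own (not from the slides), and all names in it are ours:
```python
def threshold_unit(inputs, weights, threshold):
    """Linear threshold unit: fires (outputs 1) iff the weighted
    sum of its inputs exceeds the threshold T_j."""
    net = sum(w * o for w, o in zip(weights, inputs))  # net_j = sum_i w_ji * o_i
    return 1 if net > threshold else 0
```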
12 Neural Computation
McCulloch and Pitts (1943) showed how single-layer neurons could compute logical functions and be used to construct finite-state machines.
Such units can simulate logic gates (see the sketch below):
AND: let all w_ji be T_j/n, where n is the number of inputs.
OR: let all w_ji be T_j.
NOT: let the threshold be 0, with a single input with a negative weight.
Arbitrary logic circuits, sequential machines, and computers can be built with such gates.
Given negated inputs, a two-layer network can compute any boolean function using a two-level AND-OR network.
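A sketch of these gate constructions, in our own code. Note we let the unit fire when net equals the threshold, since the slide's AND/OR weight choices hit the threshold exactly at the boundary:
```python
def mp_unit(inputs, weights, threshold):
    # McCulloch-Pitts unit; we assume it fires when net >= T,
    # which the slide's AND/OR constructions need at the boundary.
    net = sum(w * x for w, x in zip(weights, inputs))
    return 1 if net >= threshold else 0

T = 1.0
AND = lambda x, y: mp_unit([x, y], [T / 2, T / 2], T)  # all w_ji = T_j/n
OR  = lambda x, y: mp_unit([x, y], [T, T], T)          # all w_ji = T_j
NOT = lambda x:    mp_unit([x], [-1.0], 0.0)           # threshold 0, negative weight

for x in (0, 1):
    for y in (0, 1):
        print(x, y, AND(x, y), OR(x, y), NOT(x))
```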
13 Perceptron Training
Assume supervised training examples giving the desired output for a unit given a set of known input activations.
Learn synaptic weights so that the unit produces the correct output for each example.
The perceptron uses an iterative update algorithm to learn a correct set of weights.
14 Perceptron Learning Rule
Update weights by:
    w_ji = w_ji + η (t_j − o_j) o_i
where η is the “learning rate” and t_j is the teacher-specified output for unit j.
Equivalent to the rules:
If the output is correct, do nothing.
If the output is high, lower the weights on active inputs.
If the output is low, increase the weights on active inputs.
Also adjust the threshold to compensate:
    T_j = T_j − η (t_j − o_j)
15 Perceptron Learning Algorithm
Iteratively update weights until convergence. Each execution of the outer loop is typically called an epoch.
Initialize weights to random values.
Until the outputs of all training examples are correct:
    For each training pair E, do:
        Compute the current output o_j for E given its inputs.
        Compare the current output to the target value t_j for E.
        Update the synaptic weights and threshold using the learning rule.
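A minimal runnable sketch of this loop; the function name, dataset, learning rate, and epoch cap are our own assumptions, not from the slides:
```python
import random

def train_perceptron(examples, n_inputs, eta=0.1, max_epochs=100):
    """Perceptron learning: examples is a list of (inputs, target)
    pairs with target in {0, 1}. Returns weights and threshold."""
    w = [random.uniform(-0.5, 0.5) for _ in range(n_inputs)]
    T = random.uniform(-0.5, 0.5)
    for _ in range(max_epochs):                    # each pass is one epoch
        all_correct = True
        for x, t in examples:
            net = sum(wi * xi for wi, xi in zip(w, x))
            o = 1 if net > T else 0                # current output
            if o != t:
                all_correct = False
                for i in range(n_inputs):          # perceptron learning rule
                    w[i] += eta * (t - o) * x[i]
                T -= eta * (t - o)                 # adjust threshold to compensate
        if all_correct:
            break
    return w, T

# Learning OR, which is linearly separable:
w, T = train_perceptron([((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)], 2)
```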
16 Perceptron as a Linear Separator
Since the perceptron uses a linear threshold function, it is searching for a linear separator that discriminates the classes: the decision boundary is the set of points where net_j = T_j, i.e., Σ_i w_ji o_i = T_j, which is a line in two dimensions or a hyperplane in n-dimensional space.
[Figure: points of two classes in the (o_2, o_3) plane separated by a straight line.]
17 Concept Perceptron Cannot Learn
Cannot learn exclusive-or, or the parity function in general.
[Figure: XOR in the (o_2, o_3) plane: positive points at (0,1) and (1,0), negative points at (0,0) and (1,1); no single straight line separates the two classes.]
18 Perceptron Limits
The system obviously cannot learn concepts it cannot represent.
Minsky and Papert (1969) wrote a book analyzing the perceptron and demonstrating many functions it could not learn.
These results discouraged further research on neural nets, and symbolic AI became the dominant paradigm.
19 Multi-Layer Networks
Multi-layer networks can represent arbitrary functions, but an effective learning algorithm for such networks was thought to be difficult.
A typical multi-layer network consists of an input, hidden, and output layer, each fully connected to the next, with activation feeding forward.
The weights determine the function computed. Given an arbitrary number of hidden units, any boolean function can be computed with a single hidden layer.
[Figure: feed-forward network with input, hidden, and output layers; activation flows from input to output.]
20 Output
The response function is normally nonlinear.
Examples include:
Sigmoid
Piecewise linear
21 Differentiable Output Function
Need a non-linear output function to move beyond linear functions: a multi-layer linear network is still linear.
The standard solution is to use the non-linear, differentiable sigmoidal “logistic” function:
    o_j = 1 / (1 + e^−(net_j − T_j))
Can also use tanh or a Gaussian output function.
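A small sketch (our own code) of the logistic function and the derivative property that the error terms on the following slides rely on, namely f′ = f(1 − f):
```python
import math

def sigmoid(net, T=0.0):
    """Logistic output: o = 1 / (1 + e^-(net - T))."""
    return 1.0 / (1.0 + math.exp(-(net - T)))

def sigmoid_deriv(o):
    """Derivative expressed in terms of the output itself:
    do/dnet = o * (1 - o), which is why the error measures
    below contain the factor o_j (1 - o_j)."""
    return o * (1.0 - o)
```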
22 LEARNING WITH MULTI-LAYER NETWORKS
The BACKPROPAGATION algorithm modifies the weights in a network consisting of several layers by ‘propagating back’ corrections from the output layer toward the input.
23 Error Backpropagation
First calculate the error of the output units and use this to change the top layer of weights.
Example: current output o_j = 0.2, correct output t_j = 1.0.
Error δ_j = o_j (1 − o_j)(t_j − o_j) = 0.2 (1 − 0.2)(1 − 0.2) = 0.128
Update the weights into j.
[Figure: three-layer network; the error at output unit j drives updates to the hidden-to-output weights.]
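The worked number on this slide can be checked directly (a throwaway snippet of ours):
```python
o_j, t_j = 0.2, 1.0
delta_j = o_j * (1 - o_j) * (t_j - o_j)  # 0.2 * 0.8 * 0.8
print(delta_j)                           # ~0.128, up to floating-point rounding
```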
24 Error Backpropagation
Next calculate the error for the hidden units based on the errors of the output units each one feeds into.
[Figure: output-unit errors propagated back to the hidden layer.]
25 Error Backpropagation
Finally update the bottom layer of weights based on the errors calculated for the hidden units.
Update the weights into j.
[Figure: hidden-unit errors drive updates to the input-to-hidden weights.]
26 Backpropagation Learning Rule
Each weight is changed by:
    Δw_ji = η δ_j o_i
where η is a constant called the learning rate, t_j is the correct teacher output for unit j, and δ_j is the error measure for unit j:
    δ_j = o_j (1 − o_j)(t_j − o_j) for output units
    δ_j = o_j (1 − o_j) Σ_k δ_k w_kj for hidden units
27 Backpropagation Training Algorithm
Create the 3-layer network with H hidden units with full connectivity between layers. Set weights to small random real values.
Until all training examples produce the correct value (within ε), or mean squared error ceases to decrease, or other termination criteria:
    Begin epoch
    For each training example d, do:
        Calculate the network output for d’s input values.
        Compute the error between the current output and the correct output for d.
        Update the weights by backpropagating the error and using the learning rule.
    End epoch
28 Backpropagation Preparation
Training set: a collection of input-output patterns that are used to train the network.
Testing set: a collection of input-output patterns that are used to assess network performance.
Learning rate η: a scalar parameter, analogous to step size in numerical integration, used to set the rate of adjustments.
30 A Pseudo-Code Algorithm
Randomly choose the initial weights.
While the error is too large:
    For each training pattern:
        Apply the inputs to the network.
        Calculate the output for every neuron from the input layer, through the hidden layer(s), to the output layer.
        Calculate the error at the outputs.
        Use the output error to compute error signals for pre-output layers.
        Use the error signals to compute weight adjustments.
        Apply the weight adjustments.
    Periodically evaluate the network performance.
31 Apply Inputs From A Pattern
Apply the value of each input parameter to each input node.
Input nodes compute only the identity function.
[Figure: feedforward flow from inputs to outputs.]
32 Calculate Outputs For Each Neuron Based On The Pattern
The output from neuron j for pattern p is O_pj, where
    O_pj = 1 / (1 + e^−net_pj)  and  net_pj = Σ_k W_jk O_pk
k ranges over the input indices and W_jk is the weight on the connection from input k to neuron j.
33 Calculate The Error Signal For Each Output Neuron
The output neuron error signal d_pj is given by:
    d_pj = (T_pj − O_pj) O_pj (1 − O_pj)
T_pj is the target value of output neuron j for pattern p.
O_pj is the actual output value of output neuron j for pattern p.
34 Calculate The Error Signal For Each Hidden Neuron
The hidden neuron error signal d_pj is given by:
    d_pj = O_pj (1 − O_pj) Σ_k d_pk W_kj
where d_pk is the error signal of a post-synaptic neuron k and W_kj is the weight of the connection from hidden neuron j to the post-synaptic neuron k.
35 Calculate And Apply Weight Adjustments
Compute weight adjustments ΔW_ji by:
    ΔW_ji = η d_pj O_pi
Apply weight adjustments according to:
    W_ji = W_ji + ΔW_ji
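The steps on slides 31–35 can be combined into one short program. This is a minimal sketch under our own assumptions (one hidden layer, per-pattern weight updates, bias terms folded into the weights via a constant input of 1); none of the names come from the slides:
```python
import math, random

def sigmoid(net):
    return 1.0 / (1.0 + math.exp(-net))

def train_backprop(patterns, n_in, n_hidden, n_out, eta=0.5, epochs=5000):
    """patterns: list of (inputs, targets) pairs. A bias is simulated
    by appending a constant 1 to each layer's inputs."""
    rnd = lambda: random.uniform(-0.5, 0.5)
    # W_hid[j][k]: weight from input k (plus bias) to hidden neuron j
    W_hid = [[rnd() for _ in range(n_in + 1)] for _ in range(n_hidden)]
    # W_out[j][k]: weight from hidden neuron k (plus bias) to output neuron j
    W_out = [[rnd() for _ in range(n_hidden + 1)] for _ in range(n_out)]

    for _ in range(epochs):
        for x, t in patterns:
            # Forward pass (slides 31-32): O_pj = sigmoid(sum_k W_jk O_pk)
            xs = list(x) + [1.0]
            hid = [sigmoid(sum(w * xi for w, xi in zip(ws, xs))) for ws in W_hid]
            hs = hid + [1.0]
            out = [sigmoid(sum(w * hi for w, hi in zip(ws, hs))) for ws in W_out]
            # Output error signals (slide 33): d_pj = (T_pj - O_pj) O_pj (1 - O_pj)
            d_out = [(tj - oj) * oj * (1 - oj) for tj, oj in zip(t, out)]
            # Hidden error signals (slide 34): d_pj = O_pj (1 - O_pj) sum_k d_pk W_kj
            d_hid = [hj * (1 - hj) * sum(d_out[k] * W_out[k][j] for k in range(n_out))
                     for j, hj in enumerate(hid)]
            # Weight adjustments (slide 35): W_ji = W_ji + eta * d_pj * O_pi
            for j in range(n_out):
                for i in range(n_hidden + 1):
                    W_out[j][i] += eta * d_out[j] * hs[i]
            for j in range(n_hidden):
                for i in range(n_in + 1):
                    W_hid[j][i] += eta * d_hid[j] * xs[i]
    return W_hid, W_out

# Trained on the four XOR patterns, this sketch typically (though not
# always, given random initialization) converges to the kind of
# solution shown on slide 37:
xor = [((0, 0), [0]), ((0, 1), [1]), ((1, 0), [1]), ((1, 1), [0])]
W_hid, W_out = train_backprop(xor, n_in=2, n_hidden=2, n_out=1)
```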
36 Representational Power
Boolean functions: any boolean function can be represented by a two-layer network with a sufficient number of hidden units.
Continuous functions: any bounded continuous function can be approximated with arbitrarily small error by a two-layer network.
Sigmoid functions can act as a set of basis functions for composing more complex functions, like sine waves in Fourier analysis.
Arbitrary functions: any function can be approximated to arbitrary accuracy by a three-layer network.
37 Sample Learned XOR Network
[Figure: learned network over inputs X, Y with hidden units A, B and output O; the learned weights and thresholds are 3.11, 6.96, 7.38, 5.24, 2.03, 3.58, 3.6, 5.57, 5.74.]
Hidden unit A represents: ¬(X ∧ Y)
Hidden unit B represents: ¬(X ∨ Y)
Output O represents: A ∧ ¬B = ¬(X ∧ Y) ∧ (X ∨ Y) = X ⊕ Y
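A quick truth-table check (our own snippet) that the boolean identity this network implements really is XOR:
```python
for X in (0, 1):
    for Y in (0, 1):
        A = not (X and Y)        # hidden unit A: NOT(X AND Y)
        B = not (X or Y)         # hidden unit B: NOT(X OR Y)
        O = A and not B          # output: A AND NOT B
        assert O == bool(X ^ Y)  # equals X XOR Y on all four inputs
```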
38 Hidden Unit Representations
Trained hidden units can be seen as newly constructed features that make the target concept linearly separable in the transformed space.
On many real domains, hidden units can be interpreted as representing meaningful features such as vowel detectors or edge detectors.
However, the hidden layer can also become a distributed representation of the input in which each individual unit is not easily interpretable as a meaningful feature.
39 Successful Applications
Text to speech (NetTalk)
Fraud detection
Financial applications: HNC (eventually bought by Fair Isaac)
Chemical plant control: Pavilion Technologies
Automated vehicles
Game playing: Neurogammon
Handwriting recognition
40 Analysis of ANN Algorithm
Advantages:
Produce good results in complex domains.
Suitable for both discrete and continuous data (especially strong in continuous domains).
Testing is very fast.
Disadvantages:
Training is relatively slow.
Learned results are more difficult for users to interpret than learned rules (compared with decision trees).
Empirical Risk Minimization (ERM) makes an ANN try to minimize training error, which may lead to overfitting.