
1
ICT219 Topic 9: A Neural Network Animat
The Aiming Problem
Perceptron to Learn the Aiming Problem
The Correction Problem
Perceptron to Learn the Correction Problem
Multilayer Networks
Hidden Layer Topology
Other Activation Functions
The Credit Assignment Problem Solved!
Backpropagation Algorithm
Issues for Neural Networks
Reading: Champandard Chapts; link to Neural Networks and Linear Separability on the website

2
The aiming and correction problems

In the last lecture we studied how a perceptron can learn to approximate a function after being shown examples. A neural network is only one of many ways of building a function-approximating 'black box'. Now we will consider how such a function approximator might be used to help an animat play a better FPS game (see Chapter 18).

First, the problem: real people - and realistically simulated animats - cannot do a perfect job of pointing a weapon toward a target, even when the target can be seen clearly. That is because real limbs and weapons have mass (and thus momentum), and limb movements are not perfect. When you make a small motion with a gun, you are likely to undercorrect; when you swing a gun around a large angle, you are likely to overcorrect and have to make a smaller motion back.

3
The aiming problem

We would like to model this for more realism. An explicit model will estimate momentum and position error to throw off the stopping point of the weapon from the target. Can a perceptron be used to create this kind of realistic error? It can if the problem is linear - and there is training data.

[Diagram: undershoot and overshoot - starting from the initial angle, the first attempt stops short of, or swings past, the desired angle toward the target x.]

4
The aiming problem

Let us first understand the physical model creating aiming errors.

[Diagram: desired angle at time t → angular error → friction & momentum → actual angle for time t]

output(t) = (input * β) * α + output(t-1) * (1 - α)

where α is a scaling parameter controlling the amount of error, in the range [0.3, 0.5], and β = 1 + 0.1 * noise(input) is the error factor, in the range [0.9, 1.1]. β is multiplied by the input in the model because the error needs to be proportional to the desired angle. The noise() function returns an unpredictable value in the range [-1, 1].
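The model equation above can be sketched directly in code. This is a minimal illustration, assuming α = 0.4 and a purely random noise() (the function names and the choice of uniform noise are assumptions, not from the slides):

```python
import random

def noise(x):
    """Unpredictable value in [-1, 1]; here it ignores its input entirely."""
    return random.uniform(-1.0, 1.0)

def aim(desired_angle, prev_output, alpha=0.4):
    """One step of the aiming-error model:
    output(t) = (input * beta) * alpha + output(t-1) * (1 - alpha),
    where beta = 1 + 0.1 * noise(input) lies in [0.9, 1.1]."""
    beta = 1.0 + 0.1 * noise(desired_angle)
    return (desired_angle * beta) * alpha + prev_output * (1.0 - alpha)
```

Calling aim() repeatedly with the same desired angle drifts the weapon toward the target while the noise term keeps the stopping point imperfect.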

5
Perceptron to learn to make errors

As an exercise, a perceptron can be used to approximate this error function (more likely the equation itself would be used in a real development). The inputs are the desired angle at time t and the actual angle at t-1; the output is the actual angle for time t.

To make a training set, generate random input angles and compute outputs using the model equation. Keep the t-1 output so it can be used as an input at t. Put a few tens of pairs together into a batch. Then use a simple perceptron with two real-valued inputs and one real-valued output to learn the function.

The problem is (approximately) linear and should be trainable with only a few tens of examples (depending on the initial random weights).
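The batch-building and training steps above can be sketched as follows. This is an illustrative sketch, assuming α = 0.4 and a plain delta-rule perceptron with a linear output; the learning rate, batch size and epoch count are guesses, not values from the slides:

```python
import random

def make_training_batch(n=50, alpha=0.4):
    """Random (desired, previous) input pairs with outputs from the model equation."""
    batch, prev = [], 0.0
    for _ in range(n):
        desired = random.uniform(-180.0, 180.0)
        beta = 1.0 + 0.1 * random.uniform(-1.0, 1.0)   # the model's noise term
        actual = (desired * beta) * alpha + prev * (1.0 - alpha)
        batch.append(((desired, prev), actual))
        prev = actual                                   # keep t-1 output for time t
    return batch

def train_perceptron(batch, rate=1e-5, epochs=300):
    """Two real-valued inputs, one linear output, delta-rule weight updates."""
    w = [random.uniform(-0.1, 0.1) for _ in range(3)]   # two weights plus a bias
    for _ in range(epochs):
        for (x1, x2), target in batch:
            y = w[0] * x1 + w[1] * x2 + w[2]
            err = target - y
            w[0] += rate * err * x1
            w[1] += rate * err * x2
            w[2] += rate * err
    return w
```

Because the problem is nearly linear, the learned weights should settle close to the model's own coefficients (about α for the desired angle and 1 - α for the previous output).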

6
The correction problem

Now we would also like to provide for learning to correct for the aiming error online, so that, like a human, an animat would gradually reduce the amount of under- or overshoot. That is, we want it to learn the inverse function with respect to the previous problem. Question: can a perceptron do this, too?

[Diagram: undershoot and overshoot - the aiming error is the gap between the initial attempt and the desired angle toward the target x.]

7
Perceptron to learn the correction

The perceptron would approximate the inverse function thus:

[Diagram: target selection produces the desired angle; the inverse function turns it into a corrected input angle, which passes through the angular error and friction & momentum stages to give the actual angle.]

In this case the correcting inverse function would simply undo whatever the aiming error function did. The trouble is, inverse functions may in general not be easy to find, or may not even exist at all - this is the problem with models! Can we avoid working out the inverse mathematically? In this exercise, yes - we can simply compare the desired angle with the observed angle in the world. That difference is the needed correction.

8
Practical Matters

With enough random observations, a training set can be built up. For this problem, online learning is the best option - the animat learns over the course of a game, which is realistic.

It is necessary here to scale both the input and the output. For example, a function needs to be applied to expand the possible output values from the perceptron into the possible output angles - without it, the values appearing at the output cannot represent them. Scaling is often required for NN problems, so give some thought to it.

Aiming is a 3D matter, so in a practical implementation of this problem, two networks would be used - one for yaw (pan around), one for pitch (tilt up/down).
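A simple linear rescaling covers both directions - squashing angles into the network's working range on the way in, and expanding network outputs back into angles on the way out. A minimal sketch (the helper name and default ranges are assumptions):

```python
def make_scaler(lo, hi, new_lo=-1.0, new_hi=1.0):
    """Return a linear map from [lo, hi] to [new_lo, new_hi] and its inverse."""
    span, new_span = hi - lo, new_hi - new_lo
    def forward(x):
        return (x - lo) / span * new_span + new_lo
    def inverse(y):
        return (y - new_lo) / new_span * span + lo
    return forward, inverse

# e.g. map yaw angles into [-1, 1] for the network and back out for the weapon
to_net, to_angle = make_scaler(-180.0, 180.0)
```

The same pair of maps would be built once per network (one for yaw, one for pitch), each with its own angle range.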

9
Multi-layer networks

Recall from last lecture that perceptrons could not solve problems that were not linearly separable. This means it is not possible to find a linear boundary (line, plane or hyperplane) in the (n-dimensional) problem space that separates all patterns that must be distinguished. For more details, see the 'Linear Separability' link on the website.

To overcome this limitation, firstly we need to update our architecture to include a hidden layer - thus making it a multi-layer network. Secondly, we need a more sophisticated activation function. Finally, we will need a new training algorithm capable of altering the weights to the hidden neurons, which the perceptron learning algorithm could not do.

10
Hidden layer topology

The function learned by the weights of a network layer can be visualised as a multidimensional decision surface - a stored 'shape' inside the network which is 'landed on' by a pattern and which returns the network's 'answer'. We can add internal, hidden layers to our network to increase the potential complexity of the decision surface, from a roughly uniform slope to an irregular terrain of hills and valleys. Such a network could then approximate more interesting functions.

11
More on network topology

We could add more layers to further increase the approximating power, but for practical purposes there is little reason to go beyond four layers: three layers can handle nearly any continuous function, and four can handle any function at all.

When there is no connection from later layers back to earlier layers, it is called a feed-forward network (as opposed to a recurrent network, which is also interesting). Networks can also be fully connected or sparse, depending on how many neurons in each layer connect to the next one. Connections can even skip layers, but such exotic architectures are more difficult to analyse and understand, so we will leave those to more advanced users.

[Diagram: a fully connected network alongside a sparsely connected one.]

12
A recurrent neural network is one where activations from later layers are fed back to earlier layers, enabling sequences to be generated.

13
Other activation functions

Recall that an activation function is needed to represent the processing of a neuron (a step function in our perceptron). Ideally, for hidden units we would like a function which is continuous, monotonic, non-linear, and which has an easily computable derivative for the error term. The function should also be well-behaved, in that it and its derivative should not become infinite, and it might need to obey certain polarity constraints.

Usually one of a standard set of functions is selected for the hidden units: step, linear threshold, or sigmoid:

sig(x) = 1 / (1 + e^(-βx))
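The sigmoid satisfies every requirement on the list, and its derivative is especially cheap because it can be written in terms of the output itself. A small sketch (function names are illustrative):

```python
import math

def sigmoid(x, beta=1.0):
    """Logistic activation: continuous, monotonic, non-linear, bounded in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-beta * x))

def sigmoid_deriv(y, beta=1.0):
    """Derivative expressed in terms of the output y = sigmoid(x):
    d(sigmoid)/dx = beta * y * (1 - y), so no extra exp() is needed during training."""
    return beta * y * (1.0 - y)
```

This reuse of the forward-pass output is exactly what makes the sigmoid convenient for the error terms in backpropagation.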

14
Multilayer network in recognition mode

To simulate a multilayer network, the recognition algorithm for perceptrons must be modified: all outputs from one layer must be finished before the next layer can be attempted. Assume an initialised vector input. Netsum(neuron[i].weights, current) computes the weighted sum of all inputs to the i-th neuron of the current layer.

current = input                                  // first layer processes the input array
for each layer
    for i = 1 to n neurons in this layer
        s = Netsum(neuron[i].weights, current)   // weighted sum
        output[i] = activation(s)                // store postprocessed result
    end for
    current = output                             // feed this layer's output to the next layer
end for

15
Credit Assignment Problem Solved!

With a perceptron, it was easy to tell which weights were responsible (and to what degree) for a given error on the output during training. With multilayer networks, the problem is more complicated: any one output in a fully connected network will be affected by the weighted sums of the entire layer before it, which in turn will be affected by the layer before that. To compute the adjustments for the weights, the backpropagation algorithm was developed.

For the last layer, the error term is easily available (the difference between the actual and desired output), and so the perceptron method can be used there. For the hidden layers, consider the hidden neurons connected to an output neuron whose error is known. Part of that error must originate from those hidden neurons, so the error can be propagated backwards from the output neuron, distributing credit (blame) to the earlier weighted connections.

16
Backpropagation algorithm

// deriv_activate computes the gradient of the activation at the net sum
for each unit j in the last layer                    // δ_j = σ′(ζ_j) (t_j - y_j)
    delta[j] = deriv_activate(net_sum) * (desired[j] - output[j])
end for
for each layer from last-1 down to the first layer   // δ_j = σ′(ζ_j) Σ_k δ_k w_jk
    for each unit j in layer
        total = 0
        for each unit k in layer+1
            total = total + delta[k] * weight[j,k]
        end for
        delta[j] = deriv_activate(net_sum) * total   // multiply by the derivative, not add
    end for
end for
// use δ_j to adjust the weights by steepest (gradient) descent
for each unit j
    for each unit i feeding unit j
        weight[j,i] += rate * delta[j] * output[i]
    end for
end for
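The pseudocode above can be assembled into a small runnable sketch. Everything concrete here is an assumption for illustration - the sigmoid activation, the 2-3-1 layout, the XOR task (the classic problem a single perceptron cannot learn), the learning rate and the epoch count:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(layers, inputs):
    """Run the network, returning the activations of every layer (inputs first)."""
    acts = [list(inputs)]
    for layer in layers:
        prev = acts[-1]
        acts.append([sigmoid(sum(w * x for w, x in zip(ws, prev)) + ws[-1])
                     for ws in layer])
    return acts

def backprop(layers, inputs, desired, rate=0.5):
    """One online backpropagation step; weight vectors carry the bias last.
    Returns the squared error before the update."""
    acts = forward(layers, inputs)
    # Output layer: delta_j = sigma'(zeta_j) * (t_j - y_j), with sigma' = y * (1 - y)
    deltas = [[y * (1 - y) * (t - y) for y, t in zip(acts[-1], desired)]]
    # Hidden layers: delta_j = sigma'(zeta_j) * sum_k delta_k * w_jk
    for L in range(len(layers) - 2, -1, -1):
        nxt = layers[L + 1]
        deltas.insert(0, [y * (1 - y) * sum(d * ws[j] for d, ws in zip(deltas[0], nxt))
                          for j, y in enumerate(acts[L + 1])])
    # Steepest descent: weight[j,i] += rate * delta[j] * output[i]
    for L, layer in enumerate(layers):
        for j, ws in enumerate(layer):
            for i, out in enumerate(acts[L]):
                ws[i] += rate * deltas[L][j] * out
            ws[-1] += rate * deltas[L][j]            # bias input is fixed at 1
    return sum((t - y) ** 2 for y, t in zip(acts[-1], desired))

random.seed(2)
layers = [[[random.uniform(-1, 1) for _ in range(3)] for _ in range(3)],  # 3 hidden units
          [[random.uniform(-1, 1) for _ in range(4)]]]                    # 1 output unit
data = [([0, 0], [0]), ([0, 1], [1]), ([1, 0], [1]), ([1, 1], [0])]
errors = [sum(backprop(layers, x, t) for x, t in data) for _ in range(3000)]
```

Watching the summed error over the sweeps is a useful sanity check: gradient descent should drive it well below its starting value as the hidden units carve out a non-linear decision surface.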

17
Issues for Neural Networks

Neural networks are in general good at finding optimal solutions to problems when the complexity is not too great. Also, NNs are relatively simple to build (once you know how!) and the solution is quite cheap to compute. The big advantage is that a model of the problem is not really needed - only examples of good solutions. A network can then generalise and solve problems never seen before - provided it is not permitted to overfit the examples. When learning incrementally, one needs to decide what ratio of exploration to exploitation is best.

18
Issues for Neural Networks

Usually the number of input and output units is fairly easy to determine, but what about hidden units? There is no standard formula for this - it is a black art. NNs tend to contain a lot of "adjustable parameters", such as the learning rate η - too many of these and the system's usefulness is questionable. Knowledge is implicit, so there is no easy description of what a NN "knows", and existing knowledge might not be preserved through further training.

An important limitation is the curse of dimensionality: as the number of dimensions grows, the complexity of the decision surface also grows, which can lead to exponential (or worse!) growth in the required number of hidden units. Thus scaling up is a problem - as one tries to represent more complex decision surfaces, the number of units required seems to grow uncontrollably (so how does a real brain do it?)
