1
Yuki Osada, Andrew Cannon

2
• Humans are an intelligent species.
  ◦ One feature is the ability to learn.
• The ability to learn comes down to the brain.
• The brain learns from experience.
  ◦ Research shows that the brain stores information as patterns.
  ◦ This information is stored in neurons.

3
• Neurons do not regenerate, suggesting that these cells are what provide us with the abilities to:
  ◦ remember,
  ◦ think, and
  ◦ apply previous experiences.
• Humans generally have between 80 and 120 billion neurons.
  ◦ Each neuron typically connects with 1,000 to 10,000 other neurons.
  ◦ The human brain is a huge network of neurons - a neural network.

4
• The power of the human mind comes from the sheer number of these neurons and their connections.
  ◦ The individual neurons act as a function of their incoming signals.
  ◦ Although neurons themselves are complicated, they don't exhibit complex behaviour on their own.
• This simplicity of the individual units is the key feature that makes neural networks a viable computational intelligence approach.

5
• Artificial Neural Networks are a computational model inspired by the neural structure of the human brain, a biological neural network.
  ◦ They attempt to replicate only the basic elements of this complicated, versatile, and powerful organ.
  ◦ An artificial neural network consists of an interconnected group of artificial neurons.
  ◦ It learns by changing its structure (typically its connection weights) based on information that flows through the network.
• They are used to model complex relationships between inputs and outputs, or to find patterns in data.

6
• Neurons are the fundamental processing elements of a neural network.
[Image: Jarosz, Q 2009, "Neuron Hand-tuned.svg". Retrieved 10 September 2012 from Wikipedia, Neuron, https://en.wikipedia.org/wiki/Neuron]

7
• A biological neuron basically:
  1. receives inputs from other sources (dendrites),
  2. merges them in some way (soma),
  3. performs an operation on the result (axon), then
  4. outputs the result, possibly to other neurons (axon terminals).
• Artificial neurons follow this basic approach.

8
• The basic structure of an artificial neuron consists of:
  1. input connections (dendrites) with weights,
  2. a summation function or input function (soma),
  3. a transfer function or activation function (axon), and
  4. output connections (axon terminals).
• It has no learning process as such.

9
• The function (see the sketch below):
  1. Input values enter the neuron via the connections.
  2. The inputs are multiplied by the weighting factor of their respective connection.
     • There is often a separate bias connection, which can act as a threshold for the neuron to produce some useful output.
  3. The modified inputs are fed into a summation function.
     • This usually just sums the products.
  4. The result from the summation function is sent to a transfer function.
     • This is usually a step function or a sigmoid function.
  5. The neuron outputs the result of the transfer function to other neurons, or to an outside connection.
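To make the five steps above concrete, here is a minimal Python sketch of a single artificial neuron, with a step transfer function by default and a sigmoid as the alternative; the function and variable names are illustrative, not from the slides.

```python
import numpy as np

def step(s):
    # Step transfer function: fire if the summed input reaches the threshold.
    return 1.0 if s >= 0 else 0.0

def sigmoid(s):
    # Sigmoid transfer function: a smooth alternative to the step.
    return 1.0 / (1.0 + np.exp(-s))

def neuron_output(inputs, weights, bias, transfer=step):
    # Steps 1-2: each input is multiplied by its connection weight.
    # Step 3: the summation function adds the products, plus the bias.
    s = np.dot(weights, inputs) + bias
    # Step 4: the transfer function produces the neuron's output.
    return transfer(s)

# Step 5: the output would feed other neurons or an outside connection.
x = np.array([0.5, -1.0])
w = np.array([0.8, 0.2])
print(neuron_output(x, w, bias=0.1))                     # step output: 1.0
print(neuron_output(x, w, bias=0.1, transfer=sigmoid))   # ~0.574
```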

10
• How can the neurons be clustered together?
  ◦ The structure used in these networks is a layering approach.
  ◦ These layers are connected to each other in a linear fashion.
• It's possible that a neuron may have an output connection to itself.
• How these layers may be connected is generally problem-dependent.

11
• Single-layer networks are the simplest neural networks.
  ◦ Multiple input sources feed into a single set of neurons, which produce the outputs of the network.
  ◦ These are called perceptrons.
  ◦ Perceptrons can only represent linearly-separable functions.
• We can make the system represent more complex functions by adding more layers.

12
• Multi-layered neural networks are more powerful than single-layered neural networks.
  ◦ The cost is that these hidden layers increase the complexity and training time of these networks.
• Networks with a single hidden layer can approximate any continuous function with arbitrary accuracy, given enough hidden neurons.
• Networks with two hidden layers can also represent discontinuous functions.

13
[Image: JokerXtreme 2011, "Artificial_neural_network.svg". Retrieved 10 September 2012 from Wikipedia, Artificial neural network, https://en.wikipedia.org/wiki/Artificial_neural_networks]

14
• There are two main types of multi-layered neural networks:
  1. Feedforward.
    ◦ A simple acyclic structure:
      • Information always moves in one direction; it never goes backwards.
      • Stateless encoding; no information is accumulated.

15
• There are two main types of multi-layered neural networks:
  2. Recurrent.
    ◦ A structure with cyclic feedback loops:
      • Information may be sent to any layer; the network can process arbitrary sequences of inputs and produce more complex results.
      • Stateful encoding; this introduces short-term memory into the system and allows dynamic temporal behaviour (see the sketch below).
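To illustrate the stateless/stateful distinction, here is a minimal sketch: the feedforward unit computes its output from the current input alone, while the recurrent unit feeds its previous output back in as short-term memory. The unit structure and weight values are illustrative assumptions, not from the slides.

```python
import numpy as np

def feedforward_step(x, w=0.5):
    # Stateless: the output depends only on the current input.
    return np.tanh(w * x)

class RecurrentUnit:
    """Stateful: the previous output is fed back in as an extra input,
    giving the unit a short-term memory of the sequence so far."""
    def __init__(self, w_in=0.5, w_back=0.9):
        self.w_in, self.w_back = w_in, w_back
        self.state = 0.0

    def step(self, x):
        self.state = np.tanh(self.w_in * x + self.w_back * self.state)
        return self.state

unit = RecurrentUnit()
# Feeding the same input repeatedly gives different outputs, because the
# internal state evolves; the feedforward unit's output never changes.
print([round(float(unit.step(1.0)), 3) for _ in range(4)])
print(feedforward_step(1.0))  # always the same value for the same input
```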

16
• Artificial neural networks are used to model complex systems that are not well understood by the programmer.
  ◦ We usually don't know how to construct a perfect neural network for a problem by hand.
  ◦ We must train them to produce better results.
  ◦ We can only train certain aspects of a neural network.

17
• Training is the adjustment of parameters with the aim of minimising a measure of error: the cost function (an example follows below).
• What parameters in the artificial neural network do we want to adjust?
  ◦ The weighting factors.
    • The link weights influence the function represented by the neural network.
    • When we have no prior idea of suitable link weights, they might be randomly generated at initialisation.
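The slides do not name a particular cost function; one common choice, shown here as an illustration, is the mean squared error between the network's actual outputs and the expected outputs.

```python
import numpy as np

def mean_squared_error(actual, expected):
    # Average squared difference between the network's actual
    # outputs and the expected outputs over a sample set.
    actual, expected = np.asarray(actual), np.asarray(expected)
    return np.mean((actual - expected) ** 2)

print(mean_squared_error([0.9, 0.2], [1.0, 0.0]))  # 0.025
```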

18
• There are two main approaches to training:
  ◦ Supervised:
    • The user provides sample input and output data. The network adjusts its weights so that its outputs match the expected results.
  ◦ Unsupervised:
    • Only input data is supplied, and the neural network must find patterns in it on its own.

19
• McNeil, G & Anderson, D 1992, 'Artificial Neural Networks Technology', The Data & Analysis Center for Software Technical Report.
• Smith, LS 2008, 'An Introduction to Neural Networks', Centre for Cognitive and Computational Neuroscience, Department of Computing and Mathematics, University of Stirling. Retrieved 10 September 2012, http://www.cs.stir.ac.uk/~lss/NNIntro/InvSlides.html.
• Jarosz, Q 2009, 'Neuron Hand-tuned.svg'. Retrieved 10 September 2012 from Wikipedia, Neuron, https://en.wikipedia.org/wiki/Neuron.
• JokerXtreme 2011, 'Artificial_neural_network.svg'. Retrieved 10 September 2012 from Wikipedia, Artificial neural network, https://en.wikipedia.org/wiki/Artificial_neural_networks.

20
• Language processing
• Character recognition
• Pattern recognition
• Signal processing
• Prediction

21
• Supervised learning
  ◦ Perceptron
  ◦ Feedforward, back-propagation
• Unsupervised learning
  ◦ Self-organising maps

22
• Simplest type of neural network.
• Introduced by Rosenblatt (1958).
[Figure: inputs 1 to n feeding a single output neuron. Adapted from Haykin, SS 2009, p. 48.]


25
• The input is a real vector i = (i_1, i_2, …, i_n).
• Calculate a weighted scalar s from the inputs: s = Σ_j w_j i_j + b.
• Calculate the output: r = sgn(s).

26
• A perceptron categorises input vectors as being in one of two categories.
• A single perceptron can be trained to separate inputs into two linearly-separable categories.
[Figure: a plane split by a line, with points labelled Category 1 on one side and Category 2 on the other.]

27
• We need a training set of input/output pairs.
• Initialise the weights and bias (randomly, or to zero).
• Calculate the output.
• Adjust the weights and bias in proportion to the difference between the actual and expected values.

28
• Repeat until a termination criterion is reached (see the sketch below).
• Rosenblatt (1962) showed that the weights and bias will converge to fixed values after a finite number of iterations, provided the categories are linearly separable.
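A minimal sketch of this training procedure, assuming the sgn output function from slide 25, zero initialisation, and a fixed learning rate; the helper names are illustrative.

```python
import numpy as np

def sgn(s):
    # Note: sgn(0) is taken as +1, matching the worked example below.
    return 1 if s >= 0 else -1

def train_perceptron(samples, lr=0.25, max_epochs=1000):
    """samples: (input_vector, expected_output) pairs, where each input
    vector has a leading 1 so its first weight acts as the bias."""
    w = np.zeros(len(samples[0][0]))          # initialise bias/weights to zero
    for _ in range(max_epochs):
        converged = True
        for x, expected in samples:
            x = np.asarray(x, dtype=float)
            error = expected - sgn(np.dot(w, x))
            if error != 0:
                w += lr * error * x           # adjust in proportion to the error
                converged = False
        if converged:                         # termination: a full pass with no errors
            break
    return w
```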

29
• We want to classify points in R² into those points for which y ≥ x + 1 and those for which y < x + 1.
[Figure: the line y = x + 1 in the x-y plane.]

30
• Initialise the bias/weight vector to (0, 0, 0).
• The input is the point (-1, -1) (below the line), expressed as (1, -1, -1).
• s = 0×1 + 0×(-1) + 0×(-1) = 0.
• The actual output is sgn(0) = +1.
• The expected output is -1 (below the line).

31
• The error (expected - actual) is -2.
• Use a constant learning rate of 0.25.
• So the new weight vector is (0, 0, 0) + 0.25 × (-2) × (1, -1, -1) = (-0.5, 0.5, 0.5).

36
• The new bias/weight vector is (-0.5, 0.5, 0.5).
• The input is the point (0, 2) (above the line), expressed as (1, 0, 2).
• s = -0.5×1 + 0.5×0 + 0.5×2 = 0.5.
• The actual output is sgn(0.5) = +1.
• The expected output is +1 (above the line), so there is no change to the weights.

37
• Eventually, this will converge to a correct answer of the form (-a, -a, a) for some a > 0 (see the sketch below).
• Generally, we won't know the correct answer!
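For illustration, running the train_perceptron sketch above on randomly drawn points for the y ≥ x + 1 problem reproduces the worked example; the data generation here is an assumption, not something shown on the slides.

```python
import numpy as np

rng = np.random.default_rng(0)
points = rng.uniform(-5, 5, size=(200, 2))
# Expected output is +1 when y >= x + 1, else -1;
# prepend a constant 1 to each point for the bias weight.
samples = [((1.0, x, y), 1 if y >= x + 1 else -1) for x, y in points]

w = train_perceptron(samples)
print(w)  # roughly proportional to (-a, -a, a) for some a > 0
```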

38
• A feedforward network has no connections looping backwards.
• The back-propagation algorithm allows for learning.

39
• Back-propagation operates similarly to perceptron learning.
[Figure: a multi-layer network between an input layer and an output layer.]

40
• Inputs are fed forward through the network, layer by layer.
• The final outputs are compared to the expected outputs.

45
• Errors are propagated back through the network, layer by layer.

49
• Adjust the weights based on the errors.

50
• Weights might be updated after each pass, or only after multiple passes (see the sketch below).
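A minimal sketch of back-propagation on a one-hidden-layer feedforward network, assuming sigmoid activations, a squared-error cost, and per-sample weight updates; none of these choices are taken from the slides, and training XOR this way can take thousands of passes or occasionally stall in a poor local minimum.

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda s: 1.0 / (1.0 + np.exp(-s))

# Shape: 2 inputs -> 3 hidden neurons -> 1 output neuron.
W1, b1 = rng.normal(0, 1, (3, 2)), np.zeros(3)
W2, b2 = rng.normal(0, 1, (1, 3)), np.zeros(1)

# XOR: a function no single-layer perceptron can represent.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

lr = 0.5
for _ in range(10000):
    for x, t in zip(X, T):
        # Feed the input forward through the network.
        h = sigmoid(W1 @ x + b1)
        y = sigmoid(W2 @ h + b2)
        # Propagate the error back, using sigmoid'(s) = y * (1 - y).
        delta_out = (y - t) * y * (1 - y)
        delta_hid = (W2.T @ delta_out) * h * (1 - h)
        # Adjust the weights after each pass through one sample.
        W2 -= lr * np.outer(delta_out, h); b2 -= lr * delta_out
        W1 -= lr * np.outer(delta_hid, x); b1 -= lr * delta_hid

for x in X:
    print(x, sigmoid(W2 @ sigmoid(W1 @ x + b1) + b2))  # near 0, 1, 1, 0
```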

51
• We need a comprehensive training set.
• The network cannot be too large for the training set, or it will simply memorise it.
• There are no guarantees that the network will learn.
• The network design and learning strategies impact the speed and effectiveness of learning.

52
• Unsupervised learning is more powerful (if you can make it work).
• There is no external notion of correct/incorrect output; the network uses internal rules to adjust its output in response to inputs.

53
• One or more inputs are connected to a set of outputs.

54
• The output neurons form a lattice, usually in two-dimensional space.

55
• There is a measurable distance d between output neurons on the lattice.

56
• Based on the network weights, each output neuron is excited to a different degree by each input.

57
• Select the best matching unit (BMU): the output neuron most excited by the input.

58
• Identify a neighbourhood around the BMU.

59
• Based on their levels of excitation, adjust the weights of each output neuron in this neighbourhood to more closely match the input (see the sketch below).
• The hope is that the output neurons diverge into stable (and distinct) categories, allowing the input data to be classified.
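A minimal sketch of one self-organising-map training step, assuming Euclidean distance for excitation and a Gaussian neighbourhood around the BMU; these are common choices, not details specified on the slides.

```python
import numpy as np

rng = np.random.default_rng(2)
grid_h, grid_w, dim = 10, 10, 3            # 10x10 output lattice, 3-D inputs
weights = rng.uniform(0, 1, (grid_h, grid_w, dim))

# Lattice coordinates of every output neuron, for distances on the map.
coords = np.stack(np.meshgrid(np.arange(grid_h), np.arange(grid_w),
                              indexing="ij"), axis=-1)

def som_step(x, lr=0.1, radius=2.0):
    # Excitation: how closely each output neuron's weights match the input.
    dists = np.linalg.norm(weights - x, axis=-1)
    bmu = np.unravel_index(np.argmin(dists), dists.shape)  # best matching unit
    # Gaussian neighbourhood around the BMU, measured on the lattice.
    lattice_d2 = np.sum((coords - np.array(bmu)) ** 2, axis=-1)
    influence = np.exp(-lattice_d2 / (2 * radius ** 2))
    # Pull each neighbour's weights towards the input, scaled by influence.
    weights[...] += lr * influence[..., None] * (x - weights)

som_step(rng.uniform(0, 1, dim))  # one training step for one input vector
```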

60
[Example adapted from: AI-Junkie n.d., Kohonen's Self Organizing Feature Maps, viewed 11 September 2012.]

61
• AI-Junkie n.d., Kohonen's Self Organizing Feature Maps, viewed 11 September 2012.
• Bose, NK & Liang, P 1996, Neural network fundamentals with graphs, algorithms, and applications, McGraw-Hill, New York. (Chapters 4, 5 and 9)
• Fausett, LV 1994, Fundamentals of neural networks: architectures, algorithms, and applications, Prentice-Hall, Englewood Cliffs, NJ. (Chapter 6)
• Haykin, SS 2009, Neural networks and learning machines, 3rd edn, Prentice Hall, New York. (Chapters 1, 4 and 9)
• Kartalopoulos, SV 1996, Understanding neural networks and fuzzy logic: basic concepts and applications, IEEE Press, New York. (Sections 3.2, 3.5 and 3.14)
• McNeil, G & Anderson, D 1992, 'Artificial Neural Networks Technology', The Data & Analysis Center for Software Technical Report.
