6 History
- Spiking neural networks
- Vapnik (1990) --- support vector machine
- Broomhead & Lowe (1988) --- radial basis functions (RBF)
- Linsker (1988) --- Infomax principle
- Rumelhart, Hinton & Williams (1986) --- back-propagation
- Kohonen (1982) --- self-organizing maps
- Hopfield (1982) --- Hopfield networks
- Minsky & Papert (1969) --- Perceptrons
- Rosenblatt (1960) --- perceptron
- Minsky (1954) --- neural networks (PhD thesis)
- Hebb (1949) --- The Organization of Behaviour
- McCulloch & Pitts (1943) --- neural networks and artificial intelligence were born

7 History of Neural Networks
- 1943: McCulloch and Pitts - modeling the neuron for parallel distributed processing
- 1958: Rosenblatt - perceptron
- 1969: Minsky and Papert publish limits on the ability of a perceptron to generalize
- 1970s and 1980s: ANN renaissance
- 1986: Rumelhart, Hinton & Williams present backpropagation
- 1989: Tsividis: neural network on a chip

8 William McCulloch

9 Neural Networks: McCulloch & Pitts (1943) are generally recognised as the designers of the first neural network. Many of their ideas are still used today (e.g. many simple units combining to give increased computational power, and the idea of a threshold).

10 Neural Networks: Hebb (1949) developed the first learning rule, on the premise that if two neurons are active at the same time, the strength of the connection between them should be increased.

12 Neural Networks: During the 50s and 60s many researchers worked on the perceptron amidst great excitement. 1969 saw the death of neural network research for about 15 years (Minsky & Papert). Only in the mid 80s (Parker and LeCun) was interest revived; in fact Werbos had discovered the algorithm in 1974.

13 How Does the Brain Work? (1) NEURON: the cell that performs information processing in the brain; the fundamental functional unit of all nervous system tissue.

14 How Does the Brain Work? (2) Each neuron consists of: SOMA, DENDRITES, AXON, and SYNAPSE.

15 Biological neurons (figure: cell body, dendrites, axon, synapse)

16 Neural Networks: We are born with about 100 billion neurons. A neuron may connect to as many as 100,000 other neurons.

17 Biological inspiration (figure: dendrites, soma (cell body), axon)

18 Biological inspiration (figure: dendrites, axon, synapses): information transmission happens at the synapses.

19 Biological inspiration: The spikes travelling along the axon of the pre-synaptic neuron trigger the release of neurotransmitter substances at the synapse. The neurotransmitters cause excitation or inhibition in the dendrite of the post-synaptic neuron. The integration of the excitatory and inhibitory signals may produce spikes in the post-synaptic neuron. The contribution of the signals depends on the strength of the synaptic connection.

20 Biological Neurons
- The human information processing system consists of the brain.
- Neuron: basic building block - a cell that communicates information to and from various parts of the body.
- Simplest model of a neuron: a threshold unit, i.e. a processing element (PE).
- It collects inputs and produces an output if the sum of the inputs exceeds an internal threshold value.

21 Artificial Neural Nets (ANNs)
- Many neuron-like processing elements (PEs): input and output units receive and broadcast signals to the environment, respectively; internal units are called hidden units since they are not in contact with the external environment; units are connected by weighted links (synapses).
- A parallel computation system: signals travel independently on weighted channels and units can update their state in parallel; however, most NNs can be simulated on serial computers.
- A directed graph with weight-labeled edges is typically used to describe the connections among units.

22 Each processing unit has a simple program that: a) computes a weighted sum of the input data it receives from those units which feed into it, and b) outputs a single value, which in general is a non-linear function of that weighted sum; this output then becomes an input to the units into which the original unit feeds. (figure: a node i receiving activations a_j over input links with weights W_j,i, applying the input function in_i and the activation function g, and sending the output a_i = g(in_i) along its output links)
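A minimal Python sketch of such a unit (not from the slides; the weight and input values and the choice of sigmoid for g are illustrative):

    import math

    def node_output(weights, inputs, g):
        # in_i: weighted sum of the activations a_j arriving over links with weights W_j,i
        in_i = sum(w * a for w, a in zip(weights, inputs))
        # a_i = g(in_i): the unit's output, passed on to the units it feeds into
        return g(in_i)

    # example call with made-up values and a sigmoid activation
    a_i = node_output([0.5, -1.0, 2.0], [1.0, 0.3, 0.7],
                      lambda x: 1.0 / (1.0 + math.exp(-x)))
    print(a_i)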

23 Activation functions g for units:
- Step function (linear threshold unit): step(x) = 1 if x >= threshold, 0 if x < threshold
- Sign function: sign(x) = +1 if x >= 0, -1 if x < 0
- Sigmoid function: sigmoid(x) = 1 / (1 + e^(-x))
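The three activation functions written out in Python (a direct transcription of the definitions above; the default threshold of 0 is an illustrative choice):

    import math

    def step(x, threshold=0.0):
        # linear threshold unit: 1 if x >= threshold, else 0
        return 1 if x >= threshold else 0

    def sign(x):
        # +1 if x >= 0, else -1
        return 1 if x >= 0 else -1

    def sigmoid(x):
        # smooth squashing function with values in (0, 1)
        return 1.0 / (1.0 + math.exp(-x))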

24 Real vs artificial neurons (figure: a biological neuron with cell body, dendrites, axon and synapse, next to a threshold unit with inputs x_0 ... x_n, weights w_0 ... w_n, and output o)

25 Artificial neurons: Neurons work by processing information. They receive and provide information in the form of spikes. (figure: the McCulloch-Pitts model, with inputs x_1 ... x_n, weights w_1 ... w_n, and output y)

26 Mathematical representation: The neuron calculates a weighted sum of its inputs and compares it to a threshold. If the sum is higher than the threshold, the output is set to 1, otherwise to -1. This thresholding is the non-linearity.

27 Artificial neurons (figure: inputs x_1 ... x_n with weights w_1 ... w_n feeding a summation, a threshold, and an activation function f)

29 Basic Concepts. Definition of a node: a node is an element which performs the function y = f_H( sum_i (w_i * x_i) + W_b ). (figure: node and connection)

30 Anatomy of an Artificial Neuron (figure: the inputs x_i and a bias input of 1 are combined with the weights w_i by h, then passed through the activation function f to produce the output)

31 Simple Perceptron
- Binary logic application
- f_H(x) = u(x) [linear threshold]
- W_i = random(-1, 1)
- Y = u(W_0 X_0 + W_1 X_1 + W_b)
- Now how do we train it?

32 From experience: examples / training data. The strength of the connection between two neurons is stored as a weight value for that specific connection. Learning the solution to a problem = changing the connection weights. (figures: an artificial neuron and a physical neuron)

33 Mathematical Representation (figure: inputs x_1 ... x_n, weights w_1 ... w_n and bias b are summed to give n = sum_i (w_i * x_i) + b, and the activation produces the output y = f(n))

36 A simple perceptron
- It is a single-unit network.
- Perceptron Learning Rule: change each weight by an amount proportional to the difference between the desired output and the actual output:
  Delta W_i = eta * (D - Y) * I_i
  where eta is the learning rate, D the desired output, Y the actual output, and I_i the input.
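A sketch of the rule in Python (the function name and the default learning rate are illustrative, not from the slides):

    def perceptron_update(weights, inputs, desired, actual, eta=0.1):
        # Delta W_i = eta * (D - Y) * I_i, applied to every weight
        return [w + eta * (desired - actual) * x
                for w, x in zip(weights, inputs)]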

37 Linear Neurons: Obviously, the fact that threshold units can only output the values 0 and 1 restricts their applicability to certain problems. We can overcome this limitation by eliminating the threshold and simply turning f_i into the identity function, so that the unit's output is its net input. With this kind of neuron, we can build networks with m input neurons and n output neurons that compute a function f: R^m -> R^n.

38 Linear Neurons: Linear neurons are quite popular and useful for applications such as interpolation. However, they have a serious limitation: each neuron computes a linear function, and therefore the overall network function f: R^m -> R^n is also linear. This means that if an input vector x results in an output vector y, then for any factor a the input a*x will result in the output a*y. Obviously, many interesting functions cannot be realized by networks of linear neurons.
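This scaling property can be checked directly; a small sketch with made-up weights (assuming NumPy is available):

    import numpy as np

    W = np.array([[0.2, -0.5, 1.0],
                  [0.7,  0.1, -0.3]])   # linear layer: 2 outputs, 3 inputs (made-up weights)
    x = np.array([1.0, 2.0, -1.0])
    a = 3.0

    y = W @ x                  # output for input x
    y_scaled = W @ (a * x)     # output for the scaled input a*x
    print(np.allclose(y_scaled, a * y))   # True: the output is scaled by the same factor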

39 Mathematical Representation (equation figure only)

40 Gaussian Neurons: Another type of neuron overcomes this problem by using a Gaussian activation function. (figure: a bell-shaped curve of f_i(net_i(t)) against net_i(t), ranging between 0 and 1)

41 Gaussian Neurons: Gaussian neurons are able to realize non-linear functions. Therefore, networks of Gaussian units are in principle unrestricted with regard to the functions that they can realize. The drawback of Gaussian neurons is that we have to make sure that their net input does not exceed 1. This adds some difficulty to the learning in Gaussian networks.
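The slide does not give the exact formula, but a common Gaussian activation looks like the following sketch (the width parameter sigma and its default value are assumptions):

    import math

    def gaussian(net, sigma=0.5):
        # peaks at 1 when net = 0 and decays towards 0 as |net| grows
        return math.exp(-net ** 2 / (2.0 * sigma ** 2))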

42 Sigmoidal Neurons: Sigmoidal neurons accept any vectors of real numbers as input, and they output a real number between 0 and 1. Sigmoidal neurons are the most common type of artificial neuron, especially in learning networks. A network of sigmoidal units with m input neurons and n output neurons realizes a network function f: R^m -> (0,1)^n.

43 Sigmoidal Neurons: One parameter controls the slope of the sigmoid function, while another parameter controls the horizontal offset of the function, in a way similar to the threshold neurons. (figure: two sigmoid curves of f_i(net_i(t)) against net_i(t), for slope-parameter values 1 and 0.1)
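A sketch of such a parameterized sigmoid (the extraction dropped the slide's parameter symbols, so the names slope and offset here are assumptions about the intended parameterization):

    import math

    def sigmoid(net, slope=1.0, offset=0.0):
        # offset shifts the curve horizontally; slope stretches or compresses it
        return 1.0 / (1.0 + math.exp(-(net - offset) / slope))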

44 Example: a simple single-unit adaptive network
- The network has 2 inputs and one output. All are binary.
- The output is 1 if W_0*I_0 + W_1*I_1 + W_b > 0, and 0 if W_0*I_0 + W_1*I_1 + W_b <= 0.
- We want it to learn simple OR: output a 1 if either I_0 or I_1 is 1.
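A sketch of training this unit on OR with the perceptron rule from slide 36 (the learning rate, random seed, and epoch limit are illustrative choices):

    import random

    random.seed(0)
    w0, w1, wb = (random.uniform(-1, 1) for _ in range(3))   # W_i = random(-1, 1), as on slide 31
    eta = 0.1
    data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]   # OR truth table

    for epoch in range(100):
        errors = 0
        for (i0, i1), d in data:
            y = 1 if w0 * i0 + w1 * i1 + wb > 0 else 0
            if y != d:
                errors += 1
                w0 += eta * (d - y) * i0    # Delta W_i = eta * (D - Y) * I_i
                w1 += eta * (d - y) * i1
                wb += eta * (d - y)         # the bias input is always 1
        if errors == 0:                     # all four cases classified correctly
            break

    print(w0, w1, wb)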

45 Artificial neurons - the McCulloch-Pitts model:
- Spikes are interpreted as spike rates.
- Synaptic strengths are translated as synaptic weights.
- Excitation means a positive product between the incoming spike rate and the corresponding synaptic weight.
- Inhibition means a negative product between the incoming spike rate and the corresponding synaptic weight.

46 Artificial neurons: Nonlinear generalization of the McCulloch-Pitts neuron: y = f(x, w), where y is the neuron's output, x is the vector of inputs, and w is the vector of synaptic weights. Examples: the sigmoidal neuron and the Gaussian neuron.

47 NNs: Dimensions of a Neural Network
- Knowledge about the learning task is given in the form of examples called training examples.
- A NN is specified by:
  - an architecture: a set of neurons and links connecting neurons, where each link has a weight;
  - a neuron model: the information processing unit of the NN;
  - a learning algorithm: used for training the NN by modifying the weights in order to solve the particular learning task correctly on the training examples.
- The aim is to obtain a NN that generalizes well, that is, that behaves correctly on new instances of the learning task.

48 Neural Network Architectures
Many kinds of structures; the main distinction is between two classes:
a) feed-forward: a directed acyclic graph (DAG); links are unidirectional, no cycles.
b) recurrent: links form arbitrary topologies, e.g. Hopfield networks and Boltzmann machines.
Recurrent networks can be unstable, oscillate, or exhibit chaotic behavior; e.g., given some input values, they can take a long time to compute a stable output, and learning is made more difficult. However, they can implement more complex agent designs and can model systems with state.
We will focus more on feed-forward networks.

58 Single-layer feed-forward network (figure: an input layer of source nodes connected directly to an output layer of neurons)

59 Multi-layer feed-forward network (figure: a 3-4-2 network with an input layer, one hidden layer, and an output layer)
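A sketch of one forward pass through such a 3-4-2 network (weights and inputs are random placeholders, and the sigmoid activation is an assumption, since the slide only shows the layer structure):

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    rng = np.random.default_rng(0)
    W1, b1 = rng.uniform(-1, 1, (4, 3)), rng.uniform(-1, 1, 4)   # input layer -> hidden layer
    W2, b2 = rng.uniform(-1, 1, (2, 4)), rng.uniform(-1, 1, 2)   # hidden layer -> output layer

    x = np.array([0.5, -0.2, 0.8])       # 3 input values
    hidden = sigmoid(W1 @ x + b1)        # 4 hidden-unit activations
    output = sigmoid(W2 @ hidden + b2)   # 2 output-unit activations
    print(output)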

60 Feed-forward networks
Advantage: the lack of cycles means computation proceeds uniformly from input units to output units.
- Activation from the previous time step plays no part in the computation, as it is not fed back to an earlier unit.
- The network simply computes a function of the input values that depends on the weight settings; it has no internal state other than the weights themselves.
- With a fixed structure and a fixed activation function g, the functions representable by a feed-forward network are restricted to a certain parameterized structure.

61 Learning in biological systems: Learning = learning by adaptation. The young animal learns that the green fruits are sour, while the yellowish/reddish ones are sweet. The learning happens by adapting the fruit-picking behavior. At the neural level, the learning happens by changing the synaptic strengths, eliminating some synapses, and building new ones.

62 Learning as optimisation: The objective of adapting the responses on the basis of the information received from the environment is to achieve a better state. E.g., the animal likes to eat many energy-rich, juicy fruits that make its stomach full and make it feel happy. In other words, the objective of learning in biological organisms is to optimise the amount of available resources or happiness, or in general to achieve a state closer to optimal.

63 Synapse concept: The synapse's resistance to the incoming signal can be changed during a "learning" process. [1949] Hebb's Rule: if an input of a neuron repeatedly and persistently causes the neuron to fire, a metabolic change happens in the synapse of that particular input to reduce its resistance.
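In weight terms, Hebb's rule is usually written as strengthening the connection on inputs that fire together with the neuron; a minimal sketch (the rate eta is illustrative):

    def hebbian_update(weights, inputs, output, eta=0.01):
        # strengthen the weight of every input that is active when the output fires
        return [w + eta * x * output for w, x in zip(weights, inputs)]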

64 Neural Network Learning
Objective of neural network learning: given a set of examples, find parameter settings that minimize the error.
- The programmer specifies: the number of units in each layer, and the connectivity between units.
- Unknowns: the connection weights.
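The slide does not fix a particular error measure; a common choice is the mean squared error over the examples, sketched here (network stands for any callable mapping an input vector to an output vector, a hypothetical placeholder):

    def mean_squared_error(network, examples):
        # examples: list of (input_vector, target_vector) pairs
        total = 0.0
        for x, target in examples:
            y = network(x)
            total += sum((t - o) ** 2 for t, o in zip(target, y))
        return total / len(examples)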

65 Supervised Learning in ANNs: In supervised learning, we train an ANN with a set of vector pairs, so-called exemplars. Each pair (x, y) consists of an input vector x and a corresponding output vector y. Whenever the network receives input x, we would like it to provide output y. The exemplars thus describe the function that we want to "teach" our network. Besides learning the exemplars, we would like our network to generalize, that is, give plausible output for inputs that the network has not been trained with.

66 Supervised Learning in ANNs: There is a tradeoff between a network's ability to precisely learn the given exemplars and its ability to generalize (i.e., inter- and extrapolate). This problem is similar to fitting a function to a given set of data points. Let us assume that you want to find a fitting function f: R -> R for a set of three data points. You try to do this with polynomials of degree one (a straight line), two, and nine.

67 Supervised Learning in ANNs: Obviously, the polynomial of degree 2 provides the most plausible fit. (figure: f(x) plotted against x, showing the three data points and the fits of degree 1, degree 2, and degree 9)
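The fitting experiment can be reproduced with NumPy; the three data points below are hypothetical (the slide's actual points are not given):

    import numpy as np

    x = np.array([0.0, 1.0, 2.0])
    y = np.array([1.0, 2.5, 2.0])

    for degree in (1, 2, 9):
        # least-squares fit; degree 2 interpolates the three points exactly,
        # while degree 9 is under-determined (NumPy emits a RankWarning),
        # illustrating the generalization tradeoff described above
        coeffs = np.polyfit(x, y, degree)
        print(degree, np.round(coeffs, 3))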

