2 Neural Networks Kasin Prakobwaitayakit, Department of Electrical Engineering, Chiangmai University. EE459 Neural Networks: The Structure

3 Neural Networks The Structure of Neurones A neurone has a cell body, a branching input structure (the dendrite) and a branching output structure (the axon). –Axons connect to dendrites via synapses. –Electro-chemical signals are propagated from the dendritic input, through the cell body, and down the axon to other neurones.

4 Neural Networks The Structure of Neurones A neurone only fires if its input signal exceeds a certain amount (the threshold) within a short time period. Synapses vary in strength –Good connections allow a large signal –Slight connections allow only a weak signal –Synapses can be either excitatory or inhibitory.

5 Neural Networks A Classic Artificial Neuron (1) (Diagram: inputs a_0, a_1, a_2, …, a_n, together with a constant +1 bias input, feed the neuron through weights w_j0, w_j1, w_j2, …, w_jn; the weighted sum S_j is passed through the activation function f(S_j) to produce the output X_j.)

6 Neural Networks A Classic Artificial Neuron (2) All neurons contain an activation function which determines whether the signal is strong enough to produce an output. The figure shows several functions that could be used as an activation function.

7 Neural Networks Learning When the output has been calculated, the desired output is given to the program so that it can modify the weights. After the modifications are done, the same inputs will produce the desired outputs. Formula: weight_N = weight_N + learning rate * (desired output - actual output) * input_N
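Below is a minimal sketch of this update rule (a form of the delta rule) in Python. The function name, the use of plain lists, and the learning-rate value are illustrative assumptions, not part of the slides.

def update_weights(weights, inputs, desired, actual, learning_rate=0.1):
    # Move each weight in proportion to the output error and to its own input.
    error = desired - actual
    return [w + learning_rate * error * x for w, x in zip(weights, inputs)]

# Example: one learning step for a two-input neuron.
weights = [0.5, -0.3]
inputs = [1.0, 0.2]
print(update_weights(weights, inputs, desired=1.0, actual=0.4))
# Each weight shifts so that these inputs produce a larger (more correct) output.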

8 Neural Networks Tractable Architectures Feedforward Neural Networks –Connections in one direction only –Partial biological justification Complex models with constraints (Hopfield and ART). –Feedback loops included –Complex behaviour, limited by constraining architecture

9 Neural Networks Fig. 1: Multilayer Perceptron (Diagram: input signals (external stimuli) enter at the input layer, pass through adjustable weights, and the output layer produces the output values.)

10 Neural Networks Types of Layer The input layer. –Introduces input values into the network. –No activation function or other processing. The hidden layer(s). –Perform classification of features. –Two hidden layers are sufficient to solve any problem. –More complex features may mean that more layers work better.

11 Neural Networks Types of Layer (continued) The output layer. –Functionally just like the hidden layers –Outputs are passed on to the world outside the neural network.

12 Neural Networks A Simple Model of a Neuron Each neuron has a threshold value. Each neuron has weighted inputs from other neurons. The input signals form a weighted sum. If the activation level exceeds the threshold, the neuron “fires”. (Diagram: inputs y_1, y_2, y_3, …, y_i with weights w_1j, w_2j, w_3j, …, w_ij feed the output O.)
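A minimal sketch of this simple model, assuming a hard threshold: the neuron outputs 1 (fires) only when the weighted sum of its inputs exceeds the threshold. The names used here are illustrative.

def fires(inputs, weights, threshold):
    # Weighted sum of the input signals.
    activation = sum(w * y for w, y in zip(weights, inputs))
    # Fire only if the activation level exceeds the threshold.
    return 1 if activation > threshold else 0

print(fires([1, 0, 1], [0.4, 0.9, 0.3], threshold=0.5))  # 0.7 > 0.5, so the neuron fires (1)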

13 Neural Networks An Artificial Neuron Each hidden or output neuron has weighted input connections from each of the units in the preceding layer. The unit performs a weighted sum of its inputs, and subtracts its threshold value, to give its activation level. The activation level is passed through a sigmoid activation function to determine the output. (Diagram: inputs y_1 … y_i with weights w_1j … w_ij, summed and passed through f(x) to produce O.)
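A sketch of this unit, extending the previous one: the weighted sum minus the threshold is passed through a sigmoid (here the logistic function) rather than a hard limit. Names and the slope value are illustrative assumptions.

import math

def sigmoid(x, a=1.0):
    # Logistic activation function with slope parameter a.
    return 1.0 / (1.0 + math.exp(-a * x))

def unit_output(inputs, weights, threshold):
    # Weighted sum of inputs minus the threshold gives the activation level.
    activation = sum(w * y for w, y in zip(weights, inputs)) - threshold
    return sigmoid(activation)

print(unit_output([1.0, 0.5], [0.8, -0.2], threshold=0.1))  # about 0.65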

14 Neural Networks Mathematical Definition Number all the neurons from 1 up to N. The output of the j'th neuron is o_j. The threshold of the j'th neuron is θ_j. The weight of the connection from unit i to unit j is w_ij. The activation of the j'th unit is a_j. The activation function is written as σ(x).

15 Neural Networks Mathematical Definition Since the activation a_j is given by the sum of the weighted inputs minus the threshold, we can write: o_j = σ(a_j), where a_j = Σ_i (w_ij o_i) - θ_j

16 Neural Networks Activation functions Transforms the neuron's input into its output. Features of activation functions: –A squashing effect is required: it prevents accelerating growth of activation levels through the network. –Simple and easy to calculate. –Monotonically non-decreasing (order-preserving).

17 Neural Networks Standard activation functions The hard-limiting threshold function –Corresponds to the biological paradigm: the neuron either fires or it does not. Sigmoid functions ('S'-shaped curves) –The logistic function: σ(x) = 1 / (1 + e^(-ax)) –The hyperbolic tangent (symmetrical) –Both functions have a simple differential –Only the shape is important
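For concreteness, here is a small sketch of the activation functions named above; the slope parameter a follows the logistic formula on the slide, and the function names are illustrative.

import math

def hard_limit(x, threshold=0.0):
    # Hard-limiting threshold: the unit either fires or it does not.
    return 1.0 if x > threshold else 0.0

def logistic(x, a=1.0):
    # The logistic function sigma(x) = 1 / (1 + e^(-ax)).
    return 1.0 / (1.0 + math.exp(-a * x))

def tanh(x):
    # The hyperbolic tangent: a symmetric sigmoid with outputs in (-1, 1).
    return math.tanh(x)

for x in (-2.0, 0.0, 2.0):
    print(x, hard_limit(x), round(logistic(x), 3), round(tanh(x), 3))
# All three squash large inputs; only the overall 'S' shape matters.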

18 Neural Networks Training Algorithms Adjust neural network weights to map inputs to outputs. Use a set of sample patterns where the desired output (given the inputs presented) is known. The purpose is to learn to generalize –Recognize features which are common to good and bad exemplars

19 Neural Networks Back-Propagation A training procedure which allows multi-layer feedforward Neural Networks to be trained; Can theoretically perform “any” input-output mapping; Can learn to solve linearly inseparable problems.

20 Neural Networks Activation functions and training For feedforward networks: –A continuous activation function can be differentiated, allowing gradient descent. –Back-propagation is an example of a gradient-descent technique. –This is the reason for the prevalence of the sigmoid.
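The slides describe back-propagation only in outline, so the following is a hedged, minimal sketch of gradient descent on a tiny 2-2-1 feedforward network trained on XOR (a linearly inseparable problem). The network size, learning rate, epoch count, and all names are illustrative assumptions; note how the logistic function's simple derivative, o(1 - o), appears in each error term.

import math, random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Each unit's last weight acts as a bias (the negative of its threshold).
w_hidden = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
w_output = [random.uniform(-1, 1) for _ in range(3)]

def forward(x):
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_hidden]
    o = sigmoid(w_output[0] * h[0] + w_output[1] * h[1] + w_output[2])
    return h, o

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]
lr = 0.5

for epoch in range(10000):
    for x, target in data:
        h, o = forward(x)
        # Output error term: uses the sigmoid's simple derivative o * (1 - o).
        delta_o = (target - o) * o * (1 - o)
        # Hidden error terms: the output error propagated back through w_output.
        delta_h = [delta_o * w_output[j] * h[j] * (1 - h[j]) for j in range(2)]
        # Gradient-descent weight updates.
        for j in range(2):
            w_output[j] += lr * delta_o * h[j]
        w_output[2] += lr * delta_o
        for j in range(2):
            for i in range(2):
                w_hidden[j][i] += lr * delta_h[j] * x[i]
            w_hidden[j][2] += lr * delta_h[j]

for x, target in data:
    print(x, target, round(forward(x)[1], 2))  # outputs should approach the XOR targets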

21 Neural Networks Training versus Analysis Understanding how the network is doing what it does Predicting behaviour under novel conditions

