
Artificial Neural Networks


1 Artificial Neural Networks

2 Overview
Motivation & Goals
Perceptron Learning
Gradient Algorithms & the Δ-Rule
Multi-Layer Nets
The Backpropagation Algorithm
Example Application: Recognition of Faces
More Network Architectures
Application Areas of ANNs

3 Model: The Brain
A complex learning system built from simple learning units: the neurons. A network of roughly 10^11 neurons, where each neuron has about 10^4 connections. Switching time of a neuron: about 10^-3 s (speed versus flexibility). Observation: face recognition takes only about 0.1 s → massive parallelism.

4 Goals of ANNs
Learning instead of programming
Learning complex functions with simple learning units
Parallel computation (e.g. layer model)
The network parameters shall be found automatically by a learning algorithm
An ANN behaves like a black box: input → output.

5 When are ANNs used?
Input instances are described as a vector of discrete or real values.
The output of the target function is a single value or a vector of discrete or real-valued attributes.
The input data may contain noise.
The target function is unknown or difficult to describe.

6 The Perceptron (as a NN Unit) (1/2)
A linear unit with a threshold: the unit computes the weighted sum Σ_i w_i x_i of its inputs and outputs 1 if the sum exceeds the threshold, −1 otherwise.

7 The Perceptron (as a NN Unit) (2/2)

8 Geometrical Classification (Decision Surface)
A perceptron can classify only linearly separable training data → we need networks of these units.
(Figure: the OR function is linearly separable, the XOR function is not.)

9 The Perceptron Learning Rule (1/2)
Training a perceptron = learning the best hypothesis, i.e. the one that classifies all training data correctly.
A hypothesis = a vector of weights.

10 The Perceptron Learning Rule (2/2)
Idea:
1. Initialise the weights with random values.
2. Apply the perceptron iteratively to each training example and modify the weights according to the learning rule
   w_i ← w_i + η (t − o) x_i
   where t is the target output, o the actual output, and η the learning rate.
3. Step 2 is repeated for all training examples until all of them are correctly classified.
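The rule on this slide can be sketched in a few lines of Python. Only the update w_i ← w_i + η(t − o)x_i comes from the slide; the function name, the small random initialisation, and the epoch-based stopping rule are illustrative assumptions.

```python
import numpy as np

def train_perceptron(X, t, eta=0.1, max_epochs=100):
    """Perceptron learning rule sketch: X is an (n_examples, n_features)
    array, t a vector of target outputs in {-1, +1}."""
    # Add a constant bias input x0 = 1 to every example.
    X = np.hstack([np.ones((X.shape[0], 1)), X])
    w = np.random.uniform(-0.05, 0.05, X.shape[1])  # small random weights
    for _ in range(max_epochs):
        errors = 0
        for x, target in zip(X, t):
            o = 1 if np.dot(w, x) > 0 else -1        # thresholded linear unit
            if o != target:
                w += eta * (target - o) * x          # w_i <- w_i + eta * (t - o) * x_i
                errors += 1
        if errors == 0:                              # all examples classified correctly
            break
    return w

# Example: the (linearly separable) OR function with inputs in {0, 1}.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
t = np.array([-1, 1, 1, 1])
w = train_perceptron(X, t)
```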

11 The Perceptron Learning Rule: Convergence
The perceptron learning rule converges if: the training examples are linearly separable, and η is chosen small enough (e.g. 0.1). Intuitive explanation: each update changes the weights in the direction that corrects the misclassified example, and with a small η these corrections do not overshoot.

12 The Gradient Descent Algorithm & the Δ-Rule (1/5)
Better: the Δ-rule converges (towards a best-fit approximation) even if the training examples are not linearly separable. Idea: use gradient descent to search the hypothesis space for the best hypothesis, i.e. the one that minimises the squared error ⇒ basis of the backpropagation algorithm.

13 The Gradient Descent Algorithm & the Δ-Rule (2/5)
For the sake of differentiability the Δ-learning rule is applied to an unthresholded linear unit instead of the perceptron. Linear unit: o(x) = w · x = Σ_i w_i x_i. The squared error to be minimised: E(w) = ½ Σ_{d ∈ D} (t_d − o_d)², where D is the set of training examples, t_d the target output of example d, and o_d the computed output of example d.

14 The Gradient Descent Algorithm & the Δ-Rule (3/5)
Geometric interpretation: over the hypothesis space (e.g. a 2-dimensional weight space) the error function forms a surface; gradient descent steps downhill along its steepest slope towards the minimum.

15 The Gradient Descent Algorithm & the Δ-Rule (4/5)
Derivation of the gradient ∇E(w) = [∂E/∂w_0, ∂E/∂w_1, …, ∂E/∂w_n] and the resulting learning rule Δw_i = −η ∂E/∂w_i (the worked-out steps follow below).
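The derivation itself is not legible in the transcript; the following is a reconstruction of the standard derivation for the linear unit and the squared error E(w) defined on slide 13.

```latex
\frac{\partial E}{\partial w_i}
  = \frac{\partial}{\partial w_i} \, \frac{1}{2} \sum_{d \in D} (t_d - o_d)^2
  = \sum_{d \in D} (t_d - o_d) \, \frac{\partial}{\partial w_i}\bigl(t_d - \vec{w}\cdot\vec{x}_d\bigr)
  = \sum_{d \in D} (t_d - o_d)\,(-x_{id})

\Delta w_i = -\eta \, \frac{\partial E}{\partial w_i}
           = \eta \sum_{d \in D} (t_d - o_d)\, x_{id}
```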

16 The Gradient Descent Algorithm & the Δ-Rule (5/5)
Standard (batch) method:
Initialise each w_i to a small random value.
Do until the termination criterion is satisfied:
  Initialise each Δw_i to zero.
  For all training examples (x, t) in D:
    Compute the output o = w · x.
    For all weights w_i: Δw_i ← Δw_i + η (t − o) x_i
  For all weights w_i: w_i ← w_i + Δw_i
(A Python sketch follows below.)
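A minimal Python sketch of the batch procedure above, assuming a fixed number of epochs as the termination criterion; names and constants are illustrative.

```python
import numpy as np

def gradient_descent(X, t, eta=0.05, epochs=200):
    """Batch gradient descent for an unthresholded linear unit."""
    X = np.hstack([np.ones((X.shape[0], 1)), X])   # bias input x0 = 1
    w = np.random.uniform(-0.05, 0.05, X.shape[1])
    for _ in range(epochs):                        # termination: fixed epoch count
        delta_w = np.zeros_like(w)
        for x, target in zip(X, t):
            o = np.dot(w, x)                       # unthresholded linear output
            delta_w += eta * (target - o) * x      # accumulate over the whole set D
        w += delta_w                               # one weight update per pass over D
    return w
```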

17 The Δ-Rule
Stochastic (incremental) method:
Initialise each w_i to a small random value.
Do until the termination criterion is satisfied:
  For all training examples (x, t) in D:
    Compute the output o = w · x.
    For all weights w_i: w_i ← w_i + η (t − o) x_i  ⇐ the Δ-rule
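The stochastic variant differs only in updating the weights after every single example; a sketch under the same assumptions as the batch version above.

```python
import numpy as np

def delta_rule(X, t, eta=0.05, epochs=200):
    """Stochastic (incremental) gradient descent for a linear unit."""
    X = np.hstack([np.ones((X.shape[0], 1)), X])   # bias input x0 = 1
    w = np.random.uniform(-0.05, 0.05, X.shape[1])
    for _ in range(epochs):
        for x, target in zip(X, t):
            o = np.dot(w, x)
            w += eta * (target - o) * x            # the delta rule, applied per example
    return w
```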

18 Remarks
Advantages of the stochastic approximation of the gradient:
⇒ quicker convergence (incremental update of the weights)
⇒ less likely to get stuck in a local minimum.

19 Remarks
Single perceptrons learn only linearly separable training data ⇒ we need multi-layer networks of several 'neurons'. Example: the XOR problem is not linearly separable (no single line in the x1-x2 plane separates the + from the − examples).

20 XOR-Function
(Figure: a two-layer network of threshold units with weights ±1 and thresholds 0.5 that computes XOR; see the sketch below.)
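A sketch of such a two-layer threshold network in Python. The particular weights (±1) and thresholds (0.5) are one standard choice consistent with the values visible in the transcript, not necessarily the exact figure from the slide.

```python
def step(s, threshold=0.5):
    """Threshold unit: fires (1) when the weighted input sum exceeds the threshold."""
    return 1 if s > threshold else 0

def xor_net(x1, x2):
    """Two-layer network of threshold units computing XOR."""
    h1 = step(1.0 * x1 - 1.0 * x2)    # fires for x1 AND NOT x2
    h2 = step(-1.0 * x1 + 1.0 * x2)   # fires for x2 AND NOT x1
    return step(1.0 * h1 + 1.0 * h2)  # OR of the two hidden units

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor_net(a, b))    # prints 0, 1, 1, 0
```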

21 Supervised Learning: Backpropagation NNs
Since 1985 the BP algorithm has become one of the most widespread and successful learning algorithms for NNs. Idea: the minimum of the error function is searched for by descending in the direction of the gradient. The vector of weights that minimises the error of the network is regarded as the solution of the learning problem. Therefore the error function must be differentiable, so that its gradient exists at every point of the weight space.

22 Learning in Backpropagation Networks
The sigmoid unit: o = σ(w · x) with σ(y) = 1 / (1 + e^(−y)). Properties of the sigmoid unit: its output is a smooth, differentiable function of its inputs, and dσ(y)/dy = σ(y) · (1 − σ(y)).
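In code, the sigmoid unit and the derivative property look as follows (a small sketch; the helper names are assumptions):

```python
import numpy as np

def sigmoid(y):
    """The sigmoid (logistic) squashing function sigma(y) = 1 / (1 + e^-y)."""
    return 1.0 / (1.0 + np.exp(-y))

def sigmoid_prime(y):
    """Its derivative, expressed through sigma itself: sigma'(y) = sigma(y) * (1 - sigma(y))."""
    s = sigmoid(y)
    return s * (1.0 - s)

def sigmoid_unit(w, x):
    """A sigmoid unit: squash the weighted sum of the inputs."""
    return sigmoid(np.dot(w, x))
```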

23 Definitions used by the BP Algorithm
x_ji : the input from node i to unit j
w_ji : the weight of the i-th input to unit j
outputs : the set of output units
o_i : the output of unit i
t_i : the target output of unit i
δ_n : the error term of unit n
(Figure: input units, hidden units and output units in a layered network; the error is propagated backwards from the output layer.)

24 The Backpropagation Algorithm
Initialise all weights to small random numbers.
Until the termination criterion is satisfied, do:
  For each training example do:
    1. Propagate the input forward and compute the network's output.
    2. For each output unit k: δ_k ← o_k (1 − o_k)(t_k − o_k)
    3. For each hidden unit h: δ_h ← o_h (1 − o_h) Σ_{k ∈ outputs} w_kh δ_k
    4. Update each network weight: w_ji ← w_ji + Δw_ji, where Δw_ji = η δ_j x_ji
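A compact sketch of one stochastic backpropagation step for a network with a single hidden layer of sigmoid units, following the update rules above; array shapes, names, and the omission of bias inputs are simplifying assumptions.

```python
import numpy as np

def sigmoid(y):
    return 1.0 / (1.0 + np.exp(-y))

def backprop_step(x, t, W_hidden, W_output, eta=0.3):
    """One backpropagation update for a single training example.
    x: input vector, t: target output vector,
    W_hidden: (n_hidden, n_inputs) weights, W_output: (n_outputs, n_hidden) weights.
    Bias inputs are omitted for brevity."""
    # Forward pass.
    o_hidden = sigmoid(W_hidden @ x)             # outputs of the hidden units
    o_out = sigmoid(W_output @ o_hidden)         # outputs of the output units
    # Error terms (delta) for output and hidden units.
    delta_out = o_out * (1 - o_out) * (t - o_out)
    delta_hidden = o_hidden * (1 - o_hidden) * (W_output.T @ delta_out)
    # Weight updates: w_ji <- w_ji + eta * delta_j * x_ji
    W_output += eta * np.outer(delta_out, o_hidden)
    W_hidden += eta * np.outer(delta_hidden, x)
    return W_hidden, W_output
```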

25 Derivation of the BP Algorithm
For each training example d the error is E_d(w) = ½ Σ_{k ∈ outputs} (t_k − o_k)², and the stochastic gradient step is Δw_ji = −η ∂E_d/∂w_ji, where net_j = Σ_i w_ji x_ji is the weighted sum of the inputs for unit j and o_j = σ(net_j).
(Figure: unit j in the hidden or output layer receives the inputs x_ji from units i.)

26 Derivation of the BP Algorithm
Output layer: ∂E_d/∂net_j = −(t_j − o_j) o_j (1 − o_j), so δ_j = o_j (1 − o_j)(t_j − o_j).
Hidden layer: δ_j = o_j (1 − o_j) Σ_{k ∈ Downstream(j)} δ_k w_kj.
And therefore Δw_ji = η δ_j x_ji in both cases.
Downstream(j): the set of units whose immediate inputs include the output of unit j.

27 Derivation of the BP Algorithm (Explanation)

28 Convergence of the BP Algorithm
Generalisation to arbitrary directed acyclic network architectures is straightforward. In practice BP works well, but it sometimes gets stuck in a local rather than the global minimum ⇒ introduction of a momentum term α ("escape routes"); see the update rule below. Disadvantage: the global minimum can also be skipped over by this "jumping"! Training can take thousands of iterations → slow (accelerated by momentum). Over-fitting versus adaptability of the NN has to be balanced.
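The momentum update itself is not spelled out in the transcript; the standard formulation makes the n-th weight update depend on the previous one:

```latex
\Delta w_{ji}(n) = \eta \, \delta_j \, x_{ji} + \alpha \, \Delta w_{ji}(n-1),
\qquad 0 \le \alpha < 1
```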

29 Example: Recognition of Faces
Given: 32 photos of each of 20 persons, taken in different poses:
→ direction of view: right, left, up, or straight ahead
→ with and without sunglasses
→ expression: happy, sad, neutral, ...

30 Example: Recognition of Faces
Goal: classification of the photos with respect to the direction of view.
Preparation of the input:
• Coarse rastering of the photos → acceleration of the learning process.
• Input vector = the greyscale values of the 30 × 32 pixels.
• Output vector = (left, straight, right, up); the predicted direction = max(left, straight, right, up), e.g. o = (0.9, 0.1, 0.1, 0.1) means looking to the left.
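A minimal sketch of the 1-of-4 output encoding and decoding described above; the 0.9/0.1 values come from the slide, while the helper names are assumptions.

```python
import numpy as np

DIRECTIONS = ("left", "straight", "right", "up")

def encode_target(direction):
    """1-of-4 target vector: 0.9 for the true class, 0.1 elsewhere
    (easier for sigmoid outputs to approach than exact 0/1 values)."""
    return np.array([0.9 if d == direction else 0.1 for d in DIRECTIONS])

def decode_output(o):
    """The predicted direction is the output unit with the maximum activation."""
    return DIRECTIONS[int(np.argmax(o))]

print(decode_output(np.array([0.9, 0.1, 0.1, 0.1])))  # -> "left"
```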

31 Recognition of the direction of view

32 Recurrent Neural Networks
Recurrent networks are directed cyclic networks "with memory":
→ outputs at time t become inputs at time t+1
→ the cycles allow results to be fed back into the network.
(+) They are more expressive than acyclic networks.
(−) Training of recurrent networks is expensive; in some cases they can be trained using a variant of the backpropagation algorithm.
Example: forecasting the next stock market price y(t+1) from the current indicator x(t) and the previous indicator x(t−1); a minimal sketch follows below.
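As a rough illustration of the feedback structure described above (not the network from the slide), an Elman-style context loop can be sketched as follows; all sizes, weights, and names are assumptions.

```python
import numpy as np

def sigmoid(y):
    return 1.0 / (1.0 + np.exp(-y))

def recurrent_step(x_t, c_t, W_in, W_ctx, W_out):
    """One time step: the context activation c(t) is fed back in as an
    additional input at the next step, giving the network its 'memory'."""
    c_next = sigmoid(W_in @ x_t + W_ctx @ c_t)   # new context from input + old context
    y_next = W_out @ c_next                      # prediction y(t+1)
    return y_next, c_next

# Unfolding in time: iterate the same step over an input sequence.
rng = np.random.default_rng(0)
W_in, W_ctx, W_out = rng.normal(size=(3, 1)), rng.normal(size=(3, 3)), rng.normal(size=(1, 3))
c = np.zeros(3)
for x in [np.array([0.2]), np.array([0.5]), np.array([0.1])]:
    y, c = recurrent_step(x, c, W_in, W_ctx, W_out)
```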

33 Recurrent NNs
(Figure: three panels comparing a feedforward network, a recurrent network with context feedback c(t) and input x(t) producing y(t+1), and the same recurrent network unfolded in time over the earlier contexts c(t−1), c(t−2) and inputs.)

