
1 Part I: Artificial Neural Networks
Sofia Nikitaki

2 What Is a Neural Network?
- An information-processing paradigm inspired by biological nervous systems
- Key: an information-processing system built from a large number of highly interconnected elements
- Learns by example

3 How Does the Human Brain Learn?
- A neuron collects signals through its dendrites

4 Artificial Neuron
- Many inputs and one output
- Two modes of operation: training and using

5 How Neural Networks Work
Basic features:
- Construction of the network
- Computation functions of the network
- Training of the network

6 Inside the Neural Network
- P_j: inputs
- Σ: weighted sum computed by each neuron
- f: transfer function applied to the neuron's summed input
- a_i = Σ_j (W_{i,j} · P_j) + b_i, and the neuron's output is f(a_i)
- b_i: bias (threshold) of each node
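
A minimal sketch of this computation for a single neuron, assuming a tanh-style transfer function; the function name and the example numbers are illustrative, not taken from the slides:

    import numpy as np

    def neuron_forward(W_row, P, b_i, f=np.tanh):
        # a_i = sum_j(W[i, j] * P[j]) + b_i, then the transfer function f is applied
        a_i = np.dot(W_row, P) + b_i
        return f(a_i)

    P = np.array([0.5, -1.0, 2.0])       # inputs P_j
    W_row = np.array([0.1, 0.4, -0.2])   # weights W_{i,j} into neuron i
    b_i = 0.05                           # bias (threshold) b_i of the node
    print(neuron_forward(W_row, P, b_i))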

7 Inside the Neural Network (cont'd)

8 Training the Network
- Supervised learning: output targets exist; the network is trained with the backpropagation technique
- Unsupervised learning: output targets are unknown; the network learns to produce similar outputs for similar inputs

9 Backpropagation Technique
1. Compute the total weighted input x_i
2. Calculate the activity y_i using the transfer function
3. Compute the error E
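
A sketch of these three steps for one layer, assuming a sigmoid transfer function and a squared-error E (variable names are illustrative):

    import numpy as np

    def forward_and_error(y_prev, W, b, d):
        x = W @ y_prev + b              # 1. total weighted input x_i for each unit
        y = 1.0 / (1.0 + np.exp(-x))    # 2. activity y_i via a sigmoid transfer function (assumed)
        E = 0.5 * np.sum((y - d) ** 2)  # 3. error E as the sum of squared differences
        return x, y, E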

10 Backpropagation Technique (cont'd)
How fast the error E changes as:
1. The activity of an output unit is changed: the error derivative EA, where y_i is the actual activity and d_i the desired activity
2. The total input received by an output unit is changed: the quantity EI is the answer from step 1 multiplied by the rate at which the output of a unit changes as its total input is changed
3. A weight on the connection into an output unit is changed
4. The activity of a unit in the previous layer is changed
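
A sketch of these four derivatives, continuing the sigmoid and squared-error assumptions above; the names EA, EI, EW, and EA_prev mirror the slide's wording, while the code itself is illustrative:

    import numpy as np

    def backprop_derivatives(y, d, y_prev, W):
        EA = y - d                   # 1. how E changes with the activity of an output unit
        EI = EA * y * (1.0 - y)      # 2. step 1 times the rate of change of a sigmoid output
        EW = np.outer(EI, y_prev)    # 3. how E changes with a weight into an output unit
        EA_prev = W.T @ EI           # 4. how E changes with the activity of a previous-layer unit
        return EA, EI, EW, EA_prev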

11 Transfer Functions
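
This slide shows the transfer-function curves as a figure. As a rough sketch, three commonly used transfer functions, including the tan-sigmoid and linear functions used later in these slides (the formulas are standard; the function names are illustrative):

    import numpy as np

    def tansig(x):
        # tan-sigmoid, equivalent to tanh(x); used for the hidden layer later on
        return 2.0 / (1.0 + np.exp(-2.0 * x)) - 1.0

    def logsig(x):
        # log-sigmoid, squashes values into (0, 1)
        return 1.0 / (1.0 + np.exp(-x))

    def purelin(x):
        # linear transfer function; used for the output layer later on
        return x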

12 Gradient Descent Method
- At each step, the algorithm searches locally to find the optimal value and the optimal direction of the weight change
- Training function (TrainSCG): updates the weight and bias values
- Adaption learning function (LearnGDM): gradient descent with momentum, used as the weight and bias learning function
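
A minimal sketch of a gradient-descent-with-momentum weight update, the idea behind an adaption learning function like LearnGDM; the learning rate and momentum constant are assumed values, and this is not the scaled-conjugate-gradient update used by TrainSCG:

    import numpy as np

    def gdm_update(W, dW_prev, grad, lr=0.01, mc=0.9):
        # step against the gradient, plus a momentum term from the previous step
        dW = mc * dW_prev - lr * grad
        return W + dW, dW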

13 Performance Function - MSE
- The network's performance is measured by the mean of squared errors (MSE)
- MSE is the average of the squares of the errors
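
For example, the mean squared error between desired and actual outputs can be computed as (the numbers are made up):

    import numpy as np

    def mse(targets, outputs):
        # average of the squared differences between desired and actual outputs
        return np.mean((np.asarray(targets) - np.asarray(outputs)) ** 2)

    print(mse([1.0, 0.0, 2.0], [0.9, 0.2, 1.7]))  # -> about 0.047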

14 Part II: Neural Networks and Location Sensing for Indoor Environments

15 CLS – Neural Networks
- Input options: deciles of signal-strength values, or raw signal-strength measurements

16 Neural Network – Input: Deciles of Signal Strength Values
- 2 layers
- Input: 80 SS values per cell (8 APs in total)
- 85 neurons with a tan-sigmoid transfer function
- Output (POSITION): 2 values, the x, y coordinates, with a linear transfer function
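
A sketch of this architecture's forward pass; the weights here are random placeholders rather than trained values, and the network on slide 18 has the same shape with 480 inputs and 100 hidden neurons:

    import numpy as np

    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(85, 80)), np.zeros(85)   # 80 SS inputs -> 85 neurons
    W2, b2 = rng.normal(size=(2, 85)), np.zeros(2)     # 85 neurons -> 2 outputs (x, y)

    def predict_position(ss_values):
        hidden = np.tanh(W1 @ ss_values + b1)   # tan-sigmoid layer
        return W2 @ hidden + b2                 # linear output layer: x, y coordinates

    xy = predict_position(rng.normal(size=80))  # placeholder input vector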

17 Results - Location Error
- Median location error: 2.6 meters

18 Neural Network – Input: Signal Strength Values
- 2 layers
- Input: 480 SS values per cell (8 APs in total)
- 100 neurons with a tan-sigmoid transfer function
- Output (POSITION): 2 values, the x, y coordinates, with a linear transfer function

19 Results - Location Error
- Median location error: 1.8 meters

20 CLS – Comparison of All Methods


