Neural Networks: An Overview

There are many different kinds of neural network; to name a few: back-propagation networks, Kohonen (self-organising) networks, Hopfield networks and the Boltzmann machine. This project has used a Kohonen network. I shall explain the properties of such a network and why it is best for a small-scale project such as this. All neural networks use the concept of neurons, which are gradually trained to give a more appropriate response to input.

So what is a Neuron?

Neurons have inputs and outputs. The inputs (x1, x2, x3, ...) are added together and the result, U, is used to generate an output for the neuron. The inputs are generally weighted (by w1, w2, w3, ...) so that some inputs to the neuron may have a greater effect than others. A neuron such as this one is called an Adaptive Linear Element (or Adaline) and may be found on its own, or as part of a larger neural network where the output of one neuron is the input for the next layer.
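The project itself was programmed in Fortran, but the weighted sum is easy to sketch in Python; the numbers here are purely illustrative:

```python
import numpy as np

def adaline_output(x, w):
    """Weighted sum of the inputs: the basic Adaline response."""
    return float(np.dot(x, w))

x = np.array([0.5, 1.0, -0.2])   # inputs x1, x2, x3
w = np.array([0.8, 0.1, 0.3])    # weights w1, w2, w3
print(adaline_output(x, w))      # 0.5*0.8 + 1.0*0.1 + (-0.2)*0.3 = 0.44
```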

What is a Kohonen Network?

A Kohonen network uses these basic principles; however, in a Kohonen network there is only one layer of neurons, which are arranged in a 2D lattice. There is an input layer whose inputs (x1, x2, ...) feed into every neuron.

Each neuron evaluates a sum over the inputs and its weights and uses this as its own output: the squared Euclidean distance of the neuron's weights from the input pattern,

output of neuron k = Σ_i (x_i - w_ik)²

where w_ik is the weight between input i and neuron k, and the sum runs over all inputs i. The neuron with the smallest Euclidean distance is the one which best represents the input pattern, and is known as the winner. The key to the neural network is the weights, as it is they that dictate which neuron will win when the network is presented with a particular input pattern.
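A sketch of the winner search in Python, assuming the lattice weights are stored as a (rows, columns, inputs) array (this layout is an assumption, not the project's actual Fortran data structure):

```python
import numpy as np

def find_winner(weights, pattern):
    """Return (row, col) of the neuron whose weight vector has the
    smallest squared Euclidean distance to the input pattern."""
    d2 = np.sum((weights - pattern) ** 2, axis=-1)  # one distance per neuron
    return np.unravel_index(np.argmin(d2), d2.shape)

rng = np.random.default_rng(0)
weights = rng.random((70, 70, 5)) * 0.1  # 70x70 lattice, 5 inputs per pattern
pattern = rng.random(5)
print(find_winner(weights, pattern))
```

Note that every neuron must be evaluated to find the winner, which is why the lattice size dominates the running time.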

How does training work?

All the weights in the network are initialised to small random values. An input pattern is presented to the network and the winning neuron is found. The weights to the winning node are updated using the following equation:

w_ik(t+1) = w_ik(t) + G(t) (x_i(t) - w_ik(t))

where w_ik(t) is the original weight between input i and neuron k, w_ik(t+1) is the updated weight, x_i(t) is input i, and G(t) is the gain term. The gain term is a value between 0 and 1 which starts high and is decreased as more patterns are presented to the network. This promotes large changes away from the initial state of the network, but smaller changes later when the network is some way through training; this is referred to as coarse mapping and fine mapping.
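The update rule translates directly into code. The linear decay of the gain below is only one simple choice, since the slides say only that the gain starts high and is decreased:

```python
import numpy as np

def update_weights(w_k, pattern, gain):
    """Move a neuron's weight vector towards the input pattern:
    w(t+1) = w(t) + G(t) * (x(t) - w(t))."""
    return w_k + gain * (pattern - w_k)

def gain_at(t, n_runs, g_start=0.9, g_end=0.01):
    """Gain G(t) decaying linearly from g_start to g_end."""
    return g_start + (g_end - g_start) * t / n_runs

w_k = np.array([0.1, 0.2, 0.3, 0.4, 0.5])
pattern = np.array([1.0, 0.0, 1.0, 0.0, 1.0])
print(update_weights(w_k, pattern, gain_at(0, 10000)))     # coarse mapping
print(update_weights(w_k, pattern, gain_at(9999, 10000)))  # fine mapping
```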

The use of a Neighbourhood during training

Usually the weights of more neurons than just the winner are changed: once the winner has been identified, a neighbourhood of a predefined shape and size is drawn around it.

All the neurons in the neighbourhood have their weights updated to better represent the input pattern; this promotes clustering of similar results. The neighbourhood size starts very large (typically spanning the entire network) and is decreased during training in a similar way to the gain term. A real output lattice will typically have hundreds of neurons.
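Putting the pieces together, one training step might look like the sketch below; a circular neighbourhood is assumed here, though the slides say only that it has a predefined shape and size:

```python
import numpy as np

def train_step(weights, pattern, gain, radius):
    """One presentation: find the winner, then pull every neuron inside
    the neighbourhood radius towards the input pattern."""
    d2 = np.sum((weights - pattern) ** 2, axis=-1)
    win = np.unravel_index(np.argmin(d2), d2.shape)
    rows, cols = np.indices(d2.shape)
    inside = (rows - win[0]) ** 2 + (cols - win[1]) ** 2 <= radius ** 2
    weights[inside] += gain * (pattern - weights[inside])
    return weights

rng = np.random.default_rng(1)
weights = rng.random((70, 70, 5)) * 0.1  # small random initial weights
pattern = rng.random(5)
train_step(weights, pattern, gain=0.9, radius=35)  # early: large gain and neighbourhood
train_step(weights, pattern, gain=0.05, radius=2)  # late: small gain and neighbourhood
```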

Over time the lattice of neurons will organise itself into areas of neurons which respond more strongly to certain input patterns. This self-organisation gives the network its name and leads to the lattice of nodes often being referred to as a self-organising map. Then all that is left is to present some input patterns with known real-world values (such as PCO2) to the network, so that the areas of the lattice defined for the individual problem can be found and given real-world values. Now, when an input pattern is presented to the network, the winning neuron should be in or close to a defined area, so a value can be assigned.
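That calibration step can be sketched as labelling winners with their known values; this simplified version keeps only the last value assigned to each neuron:

```python
import numpy as np

def calibrate(weights, patterns, values):
    """Label each known pattern's winning neuron with its real-world
    value (e.g. PCO2); unlabelled neurons stay NaN."""
    labels = np.full(weights.shape[:2], np.nan)
    for p, v in zip(patterns, values):
        d2 = np.sum((weights - p) ** 2, axis=-1)
        win = np.unravel_index(np.argmin(d2), d2.shape)
        labels[win] = v
    return labels
```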

The PCO2 problem on a Kohonen network: The inputs

The aim of this project is to program, implement and test a Kohonen network designed to predict PCO2 levels at the surface of the ocean. The network has been mainly programmed in Fortran and is made up of many console programs, each with a particular function to perform on the network. Wherever possible the programs are kept general, so that properties such as the number of inputs in an input pattern can be set for each network. The later networks have had 5 inputs in each pattern: 2 for the month, 1 for latitude, 1 for longitude and 1 for sea-surface temperature. These were used as they were thought to have the greatest effect on the PCO2 concentration. To give greater emphasis to the month, and to encourage the network to distinguish between months far apart and close together in the calendar, the month was included twice in each pattern: one input is based on the sine of the month and one on the cosine, making the month cyclical.
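A sketch of that encoding (the scaling of latitude, longitude and temperature is not described in the transcript, so the raw values are used here):

```python
import numpy as np

def encode_pattern(month, lat, lon, sst):
    """Build the 5-input pattern. Sine and cosine of the month make the
    calendar cyclical, so December and January encode as neighbours."""
    angle = 2.0 * np.pi * (month - 1) / 12.0
    return np.array([np.sin(angle), np.cos(angle), lat, lon, sst])

print(encode_pattern(1, 55.0, -20.0, 11.5))   # January
print(encode_pattern(12, 55.0, -20.0, 11.5))  # December: close to January
print(encode_pattern(6, 55.0, -20.0, 11.5))   # June: far from both
```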

The PCO2 problem on a Kohonen network: Lattice size

Typically a square lattice of 70x70 neurons was used. Of the 4900 neurons in such a lattice, only 18.06% will be given exact PCO2 values to report when they are found to be the winner. If an unclassified neuron wins, a function must be employed to find the most appropriate PCO2 value. As the network must find the output of every neuron in the lattice to find the winner, the size of the lattice greatly affects the time the network takes to train or run. To illustrate the properties of lattice sizes, several sizes have been tested, using the data as the model. All data is restricted to the North Atlantic drift area.

Lattice size      Classed    Apr   May   Jun   Jul   Aug   Sep   Oct   Nov   Average
175x175 (30625)   6.13%      -     -     -     -     -     -     -     -     -
70x70 (4900)      18.06%     -     -     -     -     -     -     -     -     -
28x28 (784)       46.68%     -     -     -     -     -     -     -     -     -
11x11 (121)       86.78%     -     -     -     -     -     -     -     -     -
4x4 (16)          100.00%    -     -     -     -     -     -     -     -     -

This is a table of Χ, where Χ² = Σ(model - data)² / (number of data points).
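The goodness-of-fit measure follows directly from its definition; a small sketch with made-up numbers:

```python
import numpy as np

def chi(model, data):
    """X, where X² = sum((model - data)²) / (number of data points)."""
    model, data = np.asarray(model), np.asarray(data)
    return float(np.sqrt(np.sum((model - data) ** 2) / data.size))

print(chi([360.0, 355.0], [358.0, 351.0]))  # sqrt((4 + 16) / 2) ≈ 3.16
```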

PCO2 data in the North Atlantic drift area from a neural network

Using Levitus data (which gives a uniform grid of latitude and longitude with temperature values for any month), a map of PCO2 values can be constructed; this was done with MatLab. Each point on the maps has an inside colour set by the PCO2 value given by the network, and an outside colour set by the data the network was trained on. This gives a visual grade of how well the network predicts the PCO2 levels. The network which created these maps had a lattice size of 70x70.

Created by a 70x70 lattice
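The original maps were drawn in MatLab; an equivalent of the inside/outside colouring in Python with matplotlib might look like this (all points and values below are invented for illustration):

```python
import numpy as np
import matplotlib.pyplot as plt

lons = np.array([-30.0, -25.0, -20.0])       # hypothetical grid points
lats = np.array([50.0, 52.0, 54.0])
predicted = np.array([355.0, 348.0, 362.0])  # PCO2 from the network
observed = np.array([353.0, 350.0, 365.0])   # PCO2 from the training data

# Share one colour scale so inner and outer colours are comparable.
lo = min(predicted.min(), observed.min())
hi = max(predicted.max(), observed.max())

plt.scatter(lons, lats, c=observed, s=150, vmin=lo, vmax=hi)  # outside colour
plt.scatter(lons, lats, c=predicted, s=50, vmin=lo, vmax=hi)  # inside colour
plt.colorbar(label="PCO2")
plt.xlabel("Longitude")
plt.ylabel("Latitude")
plt.show()
```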

Created by a 175x175 lattice

Created by a 28x28 lattice

Created by a 11x11 lattice

Created by a 4x4 lattice

The PCO2 problem on a Kohonen network: Training method

As mentioned before, the gain and the neighbourhood size change during the training process. The training program looks for a file which tells it how to change these properties of the network. The file specifies:

The number of runs. This is the number of times a random input pattern is chosen and presented to the network.
Gain. This is expressed as a percentage. The gain goes from the first value given to the second over the number of runs; mostly it is stepped down rather than varied smoothly.
Neighbourhood diameter. This is expressed as a percentage of the lattice width. The size goes from the first value to the second smoothly over the number of runs.

So, usually, the gain is stepped down while the neighbourhood size gets smaller smoothly; the file used in training the networks above followed this pattern. Two more tests were then performed, this time changing the training method.
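The actual training file is not reproduced in the transcript, but schedules matching this description can be sketched; the step boundaries and values below are placeholders, not the project's real settings:

```python
def neighbourhood_diameter(t, n_runs, start_pct, end_pct, lattice_width):
    """Diameter shrinking smoothly (linearly) from start_pct to
    end_pct of the lattice width over the run."""
    pct = start_pct + (end_pct - start_pct) * t / n_runs
    return pct / 100.0 * lattice_width

def stepped_gain(t, n_runs):
    """Gain stepped down in stages (placeholder stages and values)."""
    frac = t / n_runs
    if frac < 0.25:
        return 0.9
    elif frac < 0.75:
        return 0.5
    return 0.1

print(stepped_gain(100, 10000))                        # early in training: 0.9
print(neighbourhood_diameter(100, 10000, 100, 5, 70))  # still close to full width
```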

Long Train and Quick Train

These are designed to do the same as the original training method but with more or fewer runs, i.e. the patterns are presented to the network more or fewer times. The networks both had a 70x70 output lattice, so a comparison with the 70x70 network trained with the original method is valid.

Method     Classed    Apr   May   Jun   Jul   Aug   Sep   Oct   Nov   Average
Long       18.00%     -     -     -     -     -     -     -     -     -
Original   18.06%     -     -     -     -     -     -     -     -     -
Quick      19.84%     -     -     -     -     -     -     -     -     -

This is a table of Χ, where Χ² = Σ(model - data)² / (number of data points).

Created by a 70x70 lattice using the Long method

Created by a 70x70 lattice using the Original method

Created by a 70x70 lattice using the Quick method