Neural networks.

Neural networks

Neural networks Neural networks are made up of many artificial neurons. Each input into a neuron has its own weight associated with it. A weight is simply a floating-point number, and these are the values we adjust when we eventually come to train the network.

Neural networks A neuron can have any number of inputs, from one to n, where n is the total number of inputs. The inputs may therefore be represented as x1, x2, x3, …, xn, and the corresponding weights as w1, w2, w3, …, wn. The neuron's output is the weighted sum a = x1w1 + x2w2 + x3w3 + … + xnwn.
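A minimal sketch of this weighted sum (the function name and the example numbers are illustrative, not from the lecture):

```python
def neuron_output(inputs, weights):
    """Weighted sum a = x1*w1 + x2*w2 + ... + xn*wn for one artificial neuron."""
    assert len(inputs) == len(weights)
    return sum(x * w for x, w in zip(inputs, weights))

a = neuron_output([1.0, 0.0, 1.0], [0.5, -0.2, 0.3])   # a = 0.8
```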

How do we actually use an artificial neuron? In a feedforward network, the neurons in each layer feed their output forward to the next layer until we get the final output from the neural network. There can be any number of hidden layers within a feedforward network, and the number of neurons in each layer can be chosen freely.

Neural Networks by an Example Let's design a neural network that will detect the number '4'. Given a panel made up of a grid of lights which can be either on or off, we want our neural net to let us know whenever it thinks it sees the character '4'. The panel is eight cells square, so the neural net will have 64 inputs, each one representing a particular cell in the panel, and a hidden layer consisting of a number of neurons (more on this later), all feeding their output into just one neuron in the output layer.

Neural Networks by an Example To train it, we initialize the neural net with random weights and feed it a series of inputs which represent, in this example, the different panel configurations. For each configuration we check what its output is and adjust the weights accordingly, so that whenever it sees something looking like a number 4 it outputs a 1, and for everything else it outputs a 0. More: http://www.doc.ic.ac.uk/~nd/surprise_96/journal/vol4/cs11/report.html
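A sketch of that loop, simplified to a single threshold output unit with an error-correction update (illustrative only; the lecture's network also has a hidden layer and is trained with back-propagation, introduced later):

```python
import random

def step(a):
    return 1 if a > 0 else 0

def train_panel_detector(examples, epochs=100, lr=0.1):
    """examples: list of (cells, target) pairs, where cells are the 64 on/off panel
    values and target is 1 for a '4', 0 for everything else. Simplified to a single
    threshold unit just to show 'check the output, then adjust the weights'."""
    weights = [random.uniform(-0.5, 0.5) for _ in range(64)]   # start with random weights
    bias = 0.0
    for _ in range(epochs):
        for cells, target in examples:
            output = step(sum(w * x for w, x in zip(weights, cells)) + bias)
            error = target - output                            # +1, 0 or -1
            weights = [w + lr * error * x for w, x in zip(weights, cells)]
            bias += lr * error
    return weights, bias
```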

Multi-Layer Perceptron (MLP)

We will introduce the MLP and the backpropagation algorithm which is used to train it. 'MLP' is used to describe any general feedforward (no recurrent connections) network; however, we will concentrate on nets with units arranged in layers.

NB: different books refer to such a network as either 4-layer (counting layers of neurons) or 3-layer (counting layers of adaptive weights); we will follow the latter convention. First question: what do the extra layers gain you? Start by looking at what a single layer can't do.

Perceptron Learning Theorem Recap: A perceptron (threshold unit) can learn anything that it can represent (i.e. anything separable with a hyperplane)

The Exclusive OR problem A Perceptron cannot represent Exclusive OR since it is not linearly separable.

Minsky & Papert (1969) offered a solution to the XOR problem by combining perceptron unit responses using a second layer of units: piecewise linear classification using an MLP with threshold (perceptron) units.
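One common way to build XOR from two layers of threshold units (the particular weights and thresholds below are illustrative; the lecture's diagram may use different values):

```python
def step(a):
    """Threshold (perceptron) activation: fires 1 when the weighted sum exceeds 0."""
    return 1 if a > 0 else 0

def xor_net(x1, x2):
    # Hidden layer of two threshold units, each drawing one linear boundary
    h1 = step(1.0 * x1 + 1.0 * x2 - 0.5)   # behaves like OR
    h2 = step(1.0 * x1 + 1.0 * x2 - 1.5)   # behaves like AND
    # Output unit combines the two half-plane decisions: OR but not AND
    return step(1.0 * h1 - 2.0 * h2 - 0.5)

print([xor_net(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 1, 1, 0]
```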

Three-layer networks: input layer, hidden layers, output layer.

Properties of architecture: no connections within a layer; no direct connections between input and output layers; fully connected between layers; often more than 3 layers; the number of output units need not equal the number of input units; the number of hidden units per layer can be more or less than the number of input or output units; each unit is a perceptron; a bias is often included as an extra weight.

What does each of the layers do? The 1st layer draws linear boundaries, the 2nd layer combines the boundaries, and the 3rd layer can generate arbitrarily complex boundaries.

Backpropagation learning algorithm ('BP') A solution to the credit assignment problem in MLPs, popularised by Rumelhart, Hinton and Williams (1986), though actually invented earlier in Werbos's 1974 PhD thesis relating to economics. BP has two phases: the forward pass phase computes the 'functional signal', the feed-forward propagation of the input pattern signals through the network; the backward pass phase computes the 'error signal' and propagates the error backwards through the network, starting at the output units (where the error is the difference between actual and desired output values).

Conceptually: Forward Activity - Backward Error

Forward Propagation of Activity Step 1: initialise the weights at random and choose a learning rate η. Until the network is trained, for each training example (i.e. input pattern and target output(s)): Step 2: do a forward pass through the net (with fixed weights) to produce the output(s), i.e., in the forward direction, layer by layer: the inputs are applied, multiplied by the weights, summed, 'squashed' by the sigmoid activation function, and the output is passed to each neuron in the next layer; repeat until the network output(s) are produced.
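A compact sketch of this forward pass, assuming fully connected layers of sigmoid units (function names and the weight layout are placeholders, not from the lecture):

```python
import math

def sigmoid(a):
    """The 'squashing' activation used in Step 2."""
    return 1.0 / (1.0 + math.exp(-a))

def layer_forward(inputs, weights, biases):
    """One layer: multiply by weights, sum, squash, pass outputs to the next layer.
    weights[j][i] connects input i to neuron j of this layer."""
    return [sigmoid(sum(w * x for w, x in zip(w_row, inputs)) + b)
            for w_row, b in zip(weights, biases)]

def forward_pass(x, layers):
    """layers is a list of (weights, biases) pairs, one per layer of adaptive weights."""
    activations = [x]
    for weights, biases in layers:
        activations.append(layer_forward(activations[-1], weights, biases))
    return activations        # the last entry is the network output
```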

Step 3: Back-propagation of error
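The mathematics on this slide is not reproduced in the transcript. As a reference point, the standard error signals for sigmoid units with a sum-of-squares error, and the resulting weight update, are (assumed standard form, not copied from the slide):

```latex
% Error signals for sigmoid units with a sum-of-squares error
\delta_k = (t_k - o_k)\, o_k (1 - o_k)                 % output units: target t_k, actual output o_k
\delta_j = o_j (1 - o_j) \sum_k w_{jk}\, \delta_k      % hidden units: output errors fed back through the weights
\Delta w_{ij} = \eta\, \delta_j\, o_i                  % generalized delta rule: learning rate times delta times incoming activity
```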

‘Back-prop’ algorithm summary (with Maths!) (Not Examinable)

‘Back-prop’ algorithm summary (with NO Maths!)

MLP/BP: A worked example

Worked example: Forward Pass

Worked example: Forward Pass

Worked example: Backward Pass

Worked example: Update Weights Using Generalized Delta Rule (BP)

Similarly for all the weights wij:

Verification that it works
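The numbers used in the worked example are on the slides and not in this transcript. Below is an illustrative single back-prop update on a tiny 2-2-1 sigmoid network with made-up weights and inputs: a forward pass, a backward pass, weight updates with the generalized delta rule, and a check that the output error shrinks.

```python
import math

def sig(a):
    return 1.0 / (1.0 + math.exp(-a))

eta = 0.5                                   # learning rate (illustrative value)
x = [0.1, 0.7]                              # input pattern (made up, not the lecture's numbers)
t = 1.0                                     # desired (target) output

# Made-up initial weights: w_h[j][i] connects input i to hidden unit j; w_o[j] connects hidden unit j to the output
w_h = [[0.2, -0.1], [0.4, 0.3]]
b_h = [0.0, 0.0]
w_o = [0.3, -0.4]
b_o = 0.0

# Forward pass: weighted sums squashed by the sigmoid, layer by layer
h = [sig(sum(w * xi for w, xi in zip(row, x)) + b) for row, b in zip(w_h, b_h)]
o = sig(sum(w * hj for w, hj in zip(w_o, h)) + b_o)

# Backward pass: error signals (deltas) for sigmoid units with a sum-of-squares error
delta_o = (t - o) * o * (1 - o)
delta_h = [hj * (1 - hj) * w_o[j] * delta_o for j, hj in enumerate(h)]

# Generalized delta rule: each weight changes by eta * delta of the unit it feeds * activity flowing through it
w_o = [w + eta * delta_o * hj for w, hj in zip(w_o, h)]
b_o += eta * delta_o
w_h = [[w + eta * dj * xi for w, xi in zip(row, x)] for row, dj in zip(w_h, delta_h)]
b_h = [b + eta * dj for b, dj in zip(b_h, delta_h)]

# Verification that it works: the same input now produces an output closer to the target
h_new = [sig(sum(w * xi for w, xi in zip(row, x)) + b) for row, b in zip(w_h, b_h)]
o_new = sig(sum(w * hj for w, hj in zip(w_o, h_new)) + b_o)
print(abs(t - o), ">", abs(t - o_new))      # the error after the update is smaller
```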

Training This was a single iteration of back-prop. Training requires many iterations with many training examples, or epochs (one epoch is one entire presentation of the complete training set). It can be slow! Note that computation in an MLP is local (with respect to each neuron), so a parallel implementation is also possible.

Training and testing data How many examples? The more the merrier! Use disjoint training and testing data sets: learn from the training data, but evaluate performance (generalization ability) on unseen test data. Aim: minimize the error on the test data.
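A minimal sketch of holding out a disjoint test set (the 25% split and the helper name are assumptions, not from the lecture):

```python
import random

def train_test_split(examples, test_fraction=0.25, seed=0):
    """Shuffle the examples and keep disjoint training and test sets;
    error on the unseen test set estimates generalization ability."""
    rng = random.Random(seed)
    shuffled = examples[:]
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_fraction)
    return shuffled[n_test:], shuffled[:n_test]        # (training data, test data)
```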

Binary Logic Unit in an example http://www.cs.usyd.edu.au/~irena/ai01/nn/5.html MultiLayer Perceptron Learning Algorithm http://www.cs.usyd.edu.au/~irena/ai01/nn/8.html