Artificial Intelligence Lecture No. 29. Dr. Asad Ali Safi, Assistant Professor, Department of Computer Science, COMSATS Institute of Information Technology (CIIT), Islamabad, Pakistan.

Summary of Previous Lecture
- Supervised Artificial Neural Networks
- Perceptrons

Today’s Lecture
- Single Layer Perceptron
- Multi-Layer Networks
- Example
- Training Multilayer Perceptron

The Parts of a Neuron

Perceptrons

Representational Power of Perceptrons: Perceptrons can represent the logical AND, OR, and NOT functions. We take 1 to represent True and -1 to represent False.
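As a minimal sketch, here are these three functions as single threshold units under the 1/-1 encoding above; the particular weights and thresholds are one standard choice, not values taken from the slides:

```python
def perceptron(weights, bias, inputs):
    # Threshold unit: output 1 (True) if the weighted sum is positive, else -1 (False)
    total = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > 0 else -1

AND = lambda x, y: perceptron([1, 1], -1.5, [x, y])  # fires only when both inputs are 1
OR  = lambda x, y: perceptron([1, 1],  1.5, [x, y])  # fires when at least one input is 1
NOT = lambda x:    perceptron([-1], 0.0, [x])        # inverts its single input

for x in (-1, 1):
    for y in (-1, 1):
        print(f"x={x:2d} y={y:2d}  AND={AND(x, y):2d}  OR={OR(x, y):2d}")
print(f"NOT(-1)={NOT(-1)}  NOT(1)={NOT(1)}")
```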

For a function such as XOR, however, there is no way to draw a single line that separates the "+" (true) values from the "-" (false) values: the function is not linearly separable.

Training a Perceptron: At the start of the experiment, the weights Wi are set to random values. Training then begins, with the objective of teaching the perceptron to differentiate two classes of inputs, I and II. The goal is to have the node output o = 1 if the input is of class I, and o = -1 if the input is of class II. You are free to choose any inputs (Xi) and to designate them as being of class I or II.

Training a Perceptron: If the node outputs the 1 signal when given a class I input and the -1 signal when given a class II input, the weights Wi need no change; when this holds for every training input, the training is complete. If the node outputs the wrong signal, the weights are adjusted in the direction that reduces the error.
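A sketch of this procedure in Python. The learning rate, the epoch limit, and the toy class I / class II data are assumptions; the update is the standard perceptron rule, Wi <- Wi + lr * (d - o) * Xi:

```python
import random

def train_perceptron(samples, lr=0.1, epochs=100):
    # samples: list of (inputs, d) pairs, d = 1 for class I, d = -1 for class II
    n = len(samples[0][0])
    w = [random.uniform(-0.5, 0.5) for _ in range(n)]   # start with random weights
    b = random.uniform(-0.5, 0.5)
    for _ in range(epochs):
        errors = 0
        for x, d in samples:
            o = 1 if b + sum(wi * xi for wi, xi in zip(w, x)) > 0 else -1
            if o != d:                                   # wrong output: adjust weights
                w = [wi + lr * (d - o) * xi for wi, xi in zip(w, x)]
                b += lr * (d - o)
                errors += 1
        if errors == 0:                                  # every input classified correctly:
            break                                        # training is complete
    return w, b

# Hypothetical class I / class II inputs, linearly separable by construction
data = [([2, 1], 1), ([1, 2], 1), ([-1, -2], -1), ([-2, -1], -1)]
print(train_perceptron(data))
```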

Single Layer Perceptron: For a problem which calls for more than two classes, several perceptrons can be combined into a network. A single-layer network can distinguish only linearly separable functions.

Single Layer Perceptron: A single layer with five nodes: 2 inputs and 3 outputs. It recognizes 3 linearly separable classes by means of 2 features.
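The five-node figure itself did not survive, but the layout it describes (2 input nodes fanning out to 3 output perceptrons, one per class) can be sketched as below; the random weights and the argmax read-out are assumptions for illustration:

```python
import numpy as np

n_features, n_classes = 2, 3
rng = np.random.default_rng(0)
W = rng.uniform(-0.5, 0.5, size=(n_classes, n_features))  # one weight row per output node
b = np.zeros(n_classes)

def classify(x):
    # Each output node draws its own line in the 2-feature plane; the input
    # is assigned to the class whose node responds most strongly.
    scores = W @ np.asarray(x) + b
    return int(np.argmax(scores))

print(classify([0.3, -1.2]))
```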

Multi-Layer Networks

A multi-layer perceptron can classify problems that are not linearly separable. A multilayer network has one or more hidden layers.

Multi-layer networks. [Figure: inputs x1, x2, ..., xn (visual input) feed into one or more hidden layers, whose outputs feed the output layer (motor output).]
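A minimal sketch of the forward pass through such a network; the sigmoid activation, the layer sizes, and the random weights are illustrative assumptions, not values from the lecture:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, layers):
    # layers: list of (W, b) pairs, one per hidden/output layer
    a = np.asarray(x, dtype=float)
    for W, b in layers:
        a = sigmoid(W @ a + b)   # each layer's output feeds the next layer
    return a

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 4, 3, 2   # inputs x1..xn, one hidden layer, output layer
layers = [(rng.normal(size=(n_hidden, n_in)),  np.zeros(n_hidden)),
          (rng.normal(size=(n_out, n_hidden)), np.zeros(n_out))]
print(forward([1.0, 0.0, -1.0, 0.5], layers))
```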

EXAMPLE: Logical XOR Function

 X  Y | output
 0  0 |   0
 0  1 |   1
 1  0 |   1
 1  1 |   0

XOR. Activation function: if (input >= threshold), fire; else, don’t fire. All weights are 1, unless otherwise labeled (w13 and w21 in the figure). [Figure: two-layer network computing XOR; the values of the labeled weights w13 and w21 are not recoverable from this transcript.]

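Since the figure with the labeled weights did not survive, the sketch below is one standard two-layer threshold construction of XOR following the slide's rule (if input >= threshold, fire); the thresholds and the single -1 output weight are assumptions standing in for the figure's labeled weights:

```python
def fire(total, threshold):
    return 1 if total >= threshold else 0

def xor(x, y):
    h_or  = fire(x + y, 1)   # hidden node: fires if at least one input fires
    h_and = fire(x + y, 2)   # hidden node: fires only if both inputs fire
    # Output node computes "OR but not AND". The -1 weight stands in for the
    # figure's labeled weight (assumption); all other weights are 1 as stated.
    return fire(h_or - h_and, 1)

for x in (0, 1):
    for y in (0, 1):
        print(f"{x} XOR {y} = {xor(x, y)}")
```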

Training Multilayer Perceptron: The training of multilayer networks raises some important issues: How many layers? How many neurons per layer? Too few neurons make the network unable to learn the desired behavior. Too many neurons increase the complexity of the learning algorithm.

Training Multilayer Perceptron A desired property of a neural network is its ability to generalize from the training set. If there are too many neurons, there is the danger of over fitting.

Problems with training:
- Nets get stuck: not enough degrees of freedom; the hidden layer is too small.
- Training becomes unstable: too many degrees of freedom; the hidden layer is too big, or there are too many hidden layers.
- Over-fitting: the net can find every pattern, but not all of them are significant. If a neural net is "over-fit", it will not generalize well to the testing dataset.
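The lecture names no tool for this, so as a hedged illustration using scikit-learn (an assumption), here is a toy two-class dataset fit with a too-small, a moderate, and a very large hidden layer, comparing training accuracy with test accuracy; exact numbers will vary, but the over-sized net tends to score noticeably better on the training data than on the test data:

```python
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# A small noisy two-class dataset that a single straight line cannot separate
X, y = make_moons(n_samples=200, noise=0.3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for hidden in [(1,), (8,), (200,)]:   # too small, moderate, very large hidden layer
    net = MLPClassifier(hidden_layer_sizes=hidden, max_iter=5000, random_state=0)
    net.fit(X_tr, y_tr)
    print(hidden, "train:", round(net.score(X_tr, y_tr), 2),
                  "test:",  round(net.score(X_te, y_te), 2))
```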

Summary of Today’s Lecture
- Single Layer Perceptron
- Multi-Layer Networks
- Example
- Training Multilayer Perceptron