Neural Networks Lecture 3: Models of Neurons and Neural Networks (September 14, 2010)

Presentation transcript:

Slide 1: Visual Illusions

Visual illusions demonstrate how we perceive an "interpreted version" of the incoming light pattern rather than the exact pattern itself.

Slide 2: Visual Illusions

Here we see that the squares A and B from the previous image actually have the same luminance, but in their visual context they are interpreted differently.

Slide 3: How do NNs and ANNs work?

NNs are able to learn by adapting their connectivity patterns so that the organism improves its behavior in terms of reaching certain (evolutionary) goals.

The strength of a connection, or whether it is excitatory or inhibitory, depends on the state of the receiving neuron's synapses.

The NN achieves learning by appropriately adapting the states of its synapses.

Slide 4: An Artificial Neuron

[Figure: artificial neuron i receiving inputs x_1, x_2, …, x_n through synapses with weights w_{i,1}, w_{i,2}, …, w_{i,n}; the weighted inputs are combined into a net input signal, which determines the neuron's output x_i.]

Slide 5: The Activation Function

One possible choice is a threshold function:

f_i(net_i(t)) = 1 if net_i(t) ≥ θ, and 0 otherwise.

[Figure: graph of this step function, jumping from 0 to 1 at net_i(t) = θ.]
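As a concrete illustration, here is a minimal Python sketch of such a threshold neuron (the function names and sample inputs are my own choices for illustration; the weight and threshold values in the example are taken from slide 10):

def net_input(weights, inputs):
    # Net input: the weighted sum of all inputs after passing the synapses
    return sum(w * x for w, x in zip(weights, inputs))

def activation(net, theta):
    # Threshold (step) activation: output 1 once the net input reaches theta
    return 1 if net >= theta else 0

# Example with w = (1, 2) and theta = 2, as on slide 10:
print(activation(net_input((1, 2), (0.5, 1.0)), 2))  # 0.5*1 + 1.0*2 = 2.5 -> 1
print(activation(net_input((1, 2), (0.5, 0.5)), 2))  # 0.5*1 + 0.5*2 = 1.5 -> 0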

Slide 6: Binary Analogy: Threshold Logic Units (TLUs)

TLUs in technical systems are similar to the threshold neuron model, except that TLUs only accept binary inputs (0 or 1).

[Figure: a worked example of a three-input TLU with inputs x_1, x_2, x_3, weights w_1, w_2, w_3, and threshold θ; the specific values were shown in the original figure and are not recoverable from this transcript.]
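Since the slide's example values were part of the figure, the following Python sketch uses hypothetical ones: with w_1 = w_2 = w_3 = 1 and θ = 3, a TLU computes the conjunction x_1 ∧ x_2 ∧ x_3.

def tlu(weights, theta, inputs):
    # A TLU only accepts binary inputs (0 or 1)
    assert all(x in (0, 1) for x in inputs)
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= theta else 0

# With weights (1, 1, 1) and theta = 3, the TLU fires only when all inputs are 1:
for x1 in (0, 1):
    for x2 in (0, 1):
        for x3 in (0, 1):
            print((x1, x2, x3), "->", tlu((1, 1, 1), 3, (x1, x2, x3)))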

Slide 7: Binary Analogy: Threshold Logic Units (TLUs)

Yet another example: x_1 ⊕ x_2 (XOR). Impossible! TLUs can only realize linearly separable functions.

[Figure: a two-input TLU with weights w_1, w_2 and threshold θ, for which no choice of values realizes XOR.]

Slide 8: Linear Separability

A function f: {0, 1}^n → {0, 1} is linearly separable if the space of input vectors yielding 1 can be separated from those yielding 0 by a linear surface (hyperplane) in n dimensions.

[Figure: two examples in two dimensions, each a 2×2 grid of input points (x_1, x_2) with their output values; in the left grid the 1s can be separated from the 0s by a straight line (linearly separable), in the right grid they cannot (linearly inseparable).]
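To make the definition concrete, here is a small Python sketch (my own illustration, not from the slides) that searches a coarse grid of weight and threshold values for a separating line. It finds one for OR but none for XOR; this is a demonstration rather than a proof.

import itertools

def tlu_output(w1, w2, theta, x1, x2):
    return 1 if w1 * x1 + w2 * x2 >= theta else 0

def separable_on_grid(truth_table):
    # Search a coarse parameter grid for a line separating the 1s from the 0s
    grid = [i / 2 for i in range(-6, 7)]  # -3.0, -2.5, ..., 3.0
    for w1, w2, theta in itertools.product(grid, repeat=3):
        if all(tlu_output(w1, w2, theta, x1, x2) == y
               for (x1, x2), y in truth_table.items()):
            return (w1, w2, theta)
    return None

OR  = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 1}
XOR = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

print("OR :", separable_on_grid(OR))   # finds some (w1, w2, theta)
print("XOR:", separable_on_grid(XOR))  # None: no parameters on the grid work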

Slide 9: Linear Separability

To explain linear separability, let us consider the function f: R^n → {0, 1} with

f(x_1, x_2, …, x_n) = 1 if w_1 x_1 + w_2 x_2 + … + w_n x_n ≥ θ, and 0 otherwise,

where x_1, x_2, …, x_n represent real numbers. This is exactly the function that our threshold neurons use to compute their output from their inputs.

Slide 10: Linear Separability

Input space in the two-dimensional case (n = 2):

[Figure: three plots of the (x_1, x_2) input space, each divided into an output-1 region and an output-0 region by the line w_1 x_1 + w_2 x_2 = θ, for the parameter sets w_1 = 1, w_2 = 2, θ = 2; w_1 = -2, w_2 = 1, θ = 2; and w_1 = -2, w_2 = 1, θ = 1.]
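The dividing lines in these plots can be computed directly. A short sketch (my own, using the parameter values given on the slide): since the boundary satisfies w_1 x_1 + w_2 x_2 = θ, it can be rewritten as x_2 = (θ - w_1 x_1) / w_2 whenever w_2 ≠ 0.

# Slope and intercept of the decision boundary for each parameter set above
for w1, w2, theta in [(1, 2, 2), (-2, 1, 2), (-2, 1, 1)]:
    slope, intercept = -w1 / w2, theta / w2
    print(f"w=({w1},{w2}), theta={theta}: x2 = {slope:+.1f}*x1 {intercept:+.1f}")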

Slide 11: Linear Separability

So by varying the weights and the threshold, we can realize any linear separation of the input space into a region that yields output 1 and another region that yields output 0.

As we have seen, a two-dimensional input space can be divided by any straight line. A three-dimensional input space can be divided by any two-dimensional plane. In general, an n-dimensional input space can be divided by an (n-1)-dimensional plane, or hyperplane. Of course, for n > 3 this is hard to visualize.

Slide 12: Linear Separability

Of course, the same applies to our original function f of the TLU using binary input values; the only difference is the restriction in the input values. Obviously, we cannot find a straight line to realize the XOR function:

[Figure: the four XOR inputs plotted in the (x_1, x_2) plane, with output 1 at (0, 1) and (1, 0) and output 0 at (0, 0) and (1, 1); no single line separates the 1s from the 0s.]

In order to realize XOR with TLUs, we need to combine multiple TLUs into a network.

Slide 13: Multi-Layered XOR Network

[Figure: a two-layer TLU network with inputs x_1 and x_2 that computes x_1 ⊕ x_2; the weights and thresholds of the units were given in the original figure.]
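The exact weights from the slide's figure are not recoverable here, so this Python sketch uses one standard construction: one hidden TLU detects "x_1 and not x_2", another detects "x_2 and not x_1", and the output TLU computes their OR.

def tlu(weights, theta, inputs):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= theta else 0

def xor_network(x1, x2):
    h1 = tlu(( 1, -1), 1, (x1, x2))  # fires iff x1 = 1 and x2 = 0
    h2 = tlu((-1,  1), 1, (x1, x2))  # fires iff x2 = 1 and x1 = 0
    return tlu((1, 1), 1, (h1, h2))  # OR of the two hidden units

for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print((x1, x2), "->", xor_network(x1, x2))  # prints 0, 1, 1, 0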

Slide 14: Capabilities of Threshold Neurons

What can threshold neurons do for us? To keep things simple, let us consider such a neuron with two inputs:

[Figure: neuron i with inputs x_1 and x_2, weights w_{i,1} and w_{i,2}, and output x_i.]

The computation of this neuron can be described as the inner product of the two-dimensional vectors x and w_i, followed by a threshold operation.

Slide 15: The Net Input Signal

The net input signal is the sum of all inputs after passing the synapses:

net_i = Σ_j w_{i,j} x_j

This can be viewed as computing the inner product of the vectors w_i and x:

net_i = w_i · x = |w_i| |x| cos φ,

where φ is the angle between the two vectors.
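A quick numerical check of this identity (a self-contained sketch; the vector values are arbitrary illustrative choices, not from the slides):

import numpy as np

w = np.array([1.0, 2.0])  # illustrative weight vector
x = np.array([2.0, 0.5])  # illustrative input vector

dot = w @ x  # the net input: sum of w_j * x_j
cos_phi = dot / (np.linalg.norm(w) * np.linalg.norm(x))

print(dot)                                               # net input
print(np.linalg.norm(w) * np.linalg.norm(x) * cos_phi)   # |w| |x| cos(phi), same value
print(np.degrees(np.arccos(cos_phi)))                    # angle between w and x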

Slide 16: Capabilities of Threshold Neurons

Let us assume that the threshold θ = 0 and illustrate the function computed by the neuron for sample vectors w_i and x:

[Figure: the vectors w_i and x plotted against the first and second vector components, with a dotted line through the origin perpendicular to w_i.]

Since the inner product is positive for -90° ≤ φ ≤ 90°, in this example the neuron's output is 1 for any input vector x to the right of or on the dotted line, and 0 for any other input vector.

Slide 17: Capabilities of Threshold Neurons

By choosing appropriate weights w_i and threshold θ, we can place the line dividing the input space into regions of output 0 and output 1 in any position and orientation. Therefore, our threshold neuron can realize any linearly separable function R^n → {0, 1}.

Although we only looked at two-dimensional input, our findings apply to any dimensionality n. For example, for n = 3, our neuron can realize any function that divides the three-dimensional input space along a two-dimensional plane.