March 31, 2016 — Introduction to Artificial Intelligence, Lecture 16: Neural Network Paradigms I

… let us move on to … Artificial Neural Networks.

Computers vs. Neural Networks

  "Standard" Computers             Neural Networks
  ------------------------------   ---------------------------
  one CPU / few processing cores   highly parallel processing
  fast processing units            slow processing units
  reliable units                   unreliable units
  static infrastructure            dynamic infrastructure

Why Artificial Neural Networks?

There are two basic reasons why we are interested in building artificial neural networks (ANNs):

Technical viewpoint: Some problems, such as character recognition or the prediction of future states of a system, require massively parallel and adaptive processing.

Biological viewpoint: ANNs can be used to replicate and simulate components of the human (or animal) brain, thereby giving us insight into natural information processing.

Why Artificial Neural Networks?

Why do we need another paradigm than symbolic AI for building "intelligent" machines?

Symbolic AI is well-suited for representing explicit knowledge that can be appropriately formalized.

However, learning in biological systems is mostly implicit – it is an adaptation process based on uncertain information and reasoning.

ANNs are inherently parallel and work extremely efficiently if implemented in parallel hardware.

How do NNs and ANNs work?

The "building blocks" of neural networks are the neurons. In technical systems, we also refer to them as units or nodes.

Basically, each neuron
– receives input from many other neurons,
– changes its internal state (activation) based on the current input,
– sends one output signal to many other neurons, possibly including its input neurons (recurrent network).
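The three-step behavior of a neuron described here can be sketched as a tiny class. This is only an illustration of the receive/update/send cycle; the class design and the choice of a plain sum as the state update are mine, not from the lecture:

```python
class Neuron:
    """Minimal sketch of a simulated neuron: receive, update state, send."""

    def __init__(self):
        self.activation = 0.0  # internal state of the neuron

    def receive(self, inputs):
        # Change the internal state (activation) based on the current input;
        # a plain sum is an arbitrary placeholder for the real update rule.
        self.activation = sum(inputs)

    def send(self):
        # Send one output signal (here simply the current activation).
        return self.activation

n = Neuron()
n.receive([1.0, 2.0, 3.0])
print(n.send())  # 6.0
```

In a network, the value returned by `send` would be fed into the `receive` calls of many other neurons, possibly including the senders themselves (a recurrent network).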

How do NNs and ANNs work?

Information is transmitted as a series of electric impulses, so-called spikes. The frequency and phase of these spikes encode the information.

In biological systems, one neuron can be connected to as many as 10,000 other neurons.

Structure of NNs (and some ANNs)

In biological systems, neurons of similar functionality are usually organized in separate areas (or layers). Often, there is a hierarchy of interconnected layers, with the lowest layer receiving sensory input and neurons in higher layers computing more complex functions.

For example, neurons in macaque visual cortex have been identified that are activated only when there is a face (monkey, human, or drawing) in the macaque's visual field.

"Data Flow Diagram" of Visual Areas in Macaque Brain

[Figure: blue marks the motion perception pathway, green the object recognition pathway.]

[Figure: stimuli in the receptive field of a neuron.]


Structure of NNs (and some ANNs)

In a hierarchy of neural areas such as the visual system, those at the bottom (near the sensory "input" neurons) only "see" local information. For example, each neuron in primary visual cortex only receives input from a small area of the visual field (about 1° in diameter), called its receptive field.

As we move towards higher areas, the responses of neurons become less and less location dependent. In inferotemporal cortex, some neurons respond to face stimuli shown at any position in the visual field.

Receptive Fields in Hierarchical Neural Networks

[Figure: a neuron A in a lower layer and its receptive field.]

Receptive Fields in Hierarchical Neural Networks

[Figure: a neuron B in the top layer and its much larger receptive field in the input layer.]

How do NNs and ANNs work?

NNs are able to learn by adapting their connectivity patterns so that the organism improves its behavior in terms of reaching certain (evolutionary) goals.

The strength of a connection, or whether it is excitatory or inhibitory, depends on the state of a receiving neuron's synapses. The NN achieves learning by appropriately adapting the states of its synapses.

An Artificial Neuron

[Figure: neuron i receives input signals x1, …, xn through synapses with weights W_i,1, …, W_i,n; the weighted inputs form the net input signal, and the neuron produces the output x_i.]

The Net Input Signal

The net input signal is the sum of all inputs after passing the synapses:

  net_i(t) = Σ_j W_i,j · x_j(t)

This can be viewed as computing the inner product of the vectors w_i and x:

  net_i(t) = |w_i| · |x| · cos φ

where φ is the angle between the two vectors.
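The two views of the net input signal (weighted sum vs. inner product |w|·|x|·cos φ) can be checked numerically in a small sketch; the function names are mine, and the 2-D vectors are arbitrary illustration values:

```python
import math

def net_input(w, x):
    # Net input: the sum of all inputs after passing the synapses
    # (i.e. the weighted sum, which equals the inner product w . x).
    return sum(wj * xj for wj, xj in zip(w, x))

# 2-D check of the identity net = |w| * |x| * cos(phi):
w = (1.0, 0.0)   # weight vector along the x1 axis
x = (1.0, 1.0)   # input vector at 45 degrees to w
phi = math.atan2(x[1], x[0]) - math.atan2(w[1], w[0])  # angle between w and x
lhs = net_input(w, x)                                  # weighted-sum view
rhs = math.hypot(*w) * math.hypot(*x) * math.cos(phi)  # |w| |x| cos(phi) view
print(lhs)  # 1.0
print(round(rhs, 10))  # 1.0, agreeing up to rounding
```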

The Activation Function

One possible choice is a threshold function:

  f_i(net_i(t)) = 1 if net_i(t) ≥ θ, and 0 otherwise.

[Graph: a step function that is 0 below the threshold θ and jumps to 1 at net_i(t) = θ.]
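Putting the net input together with the threshold activation gives a complete threshold neuron. A minimal sketch, with function names and the sample weights/threshold chosen by me for illustration:

```python
def threshold(net, theta):
    # Threshold activation: the neuron fires (1) iff net input reaches theta.
    return 1 if net >= theta else 0

def threshold_unit(w, theta, x):
    # A threshold neuron: weighted sum of the inputs, then the step function.
    net = sum(wj * xj for wj, xj in zip(w, x))
    return threshold(net, theta)

# Illustrative values: equal weights of 1 and a threshold of 1.5.
print(threshold_unit([1.0, 1.0], 1.5, [1, 1]))  # 1: net = 2 >= 1.5
print(threshold_unit([1.0, 1.0], 1.5, [1, 0]))  # 0: net = 1 < 1.5
```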

Binary Analogy: Threshold Logic Units

[Figure: a threshold logic unit (TLU) with binary inputs x1, x2, x3, weights w1, w2, w3, and threshold θ, together with an example Boolean function of x1, x2, x3 realized by a particular choice of weights and threshold.]
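The slide's concrete weight and threshold values are not preserved in this transcript, so as one illustrative choice (mine, not necessarily the slide's): weights of 1 on all three inputs and a threshold of 3 make a TLU compute the conjunction x1 AND x2 AND x3.

```python
def tlu(w, theta, x):
    # Threshold logic unit: output 1 iff the weighted input sum reaches theta.
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) >= theta else 0

# w = (1, 1, 1), theta = 3: fires only when all three binary inputs are 1,
# i.e. the TLU realizes x1 AND x2 AND x3.
for x in [(0, 0, 0), (1, 1, 0), (1, 1, 1)]:
    print(x, tlu((1, 1, 1), 3, x))
```

Lowering the threshold to 1 with the same weights would instead realize x1 OR x2 OR x3, which shows how weights and threshold together pick out the Boolean function.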

Networks

[Figure: a single TLU with inputs x1 and x2, weights w1 and w2, and threshold θ.]

Yet another example: x1 XOR x2. Impossible! TLUs can only realize linearly separable functions.

Linear Separability

A function f: {0, 1}^n → {0, 1} is linearly separable if the space of input vectors yielding 1 can be separated from those yielding 0 by a linear surface (hyperplane) in n dimensions.

Examples (two dimensions):

[Figures: two Boolean functions plotted over the corners of the unit square; in the first, the 1-outputs can be separated from the 0-outputs by a straight line (linearly separable), in the second they cannot (linearly inseparable).]
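Linear separability of a two-input Boolean function can be probed with a small brute-force sketch: search a grid of candidate weights and thresholds for a single TLU that reproduces the function on all four inputs. The grid and function names are my choices; AND succeeds, while XOR fails for any single TLU (not just on this grid):

```python
from itertools import product

def tlu(w1, w2, theta, x1, x2):
    # Single threshold unit over two binary inputs.
    return 1 if w1 * x1 + w2 * x2 >= theta else 0

def realizable(f, grid=(-2, -1, 0, 1, 2)):
    # Search a small grid of weights/thresholds for a TLU computing f
    # on all four binary input pairs.
    for w1, w2, theta in product(grid, repeat=3):
        if all(tlu(w1, w2, theta, x1, x2) == f(x1, x2)
               for x1, x2 in product((0, 1), repeat=2)):
            return True
    return False

print(realizable(lambda a, b: a & b))  # True: AND is linearly separable
print(realizable(lambda a, b: a ^ b))  # False: XOR is not
```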

Linear Separability

To explain linear separability, let us consider the function f: R^n → {0, 1} with

  f(x1, x2, …, xn) = 1 if w1·x1 + w2·x2 + … + wn·xn ≥ θ, and 0 otherwise,

where x1, x2, …, xn represent real numbers. This will also be useful for understanding the computations of artificial neural networks.

Linear Separability

Input space in the two-dimensional case (n = 2):

[Figures: three separating lines in the (x1, x2) plane for the parameter choices
  w1 = 1,  w2 = 2, θ = 2;
  w1 = -2, w2 = 1, θ = 2;
  w1 = -2, w2 = 1, θ = 1.]
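For the first parameter set above (w1 = 1, w2 = 2, θ = 2), the separating line is x1 + 2·x2 = 2, and f can be evaluated at a few sample points to see which side of the line yields 1; the function name and the sample points are mine:

```python
def f(w, theta, x):
    # The slide's function: 1 iff the weighted sum w . x reaches theta.
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) >= theta else 0

# First parameter set from the slide: w1 = 1, w2 = 2, theta = 2.
# The decision boundary is the line x1 + 2*x2 = 2.
w, theta = (1, 2), 2
print(f(w, theta, (0, 0)))  # 0: below the line (0 < 2)
print(f(w, theta, (2, 0)))  # 1: on the line (2 >= 2)
print(f(w, theta, (1, 1)))  # 1: above the line (3 >= 2)
```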

Linear Separability

So by varying the weights and the threshold, we can realize any linear separation of the input space into a region that yields output 1 and another region that yields output 0.

As we have seen, a two-dimensional input space can be divided by a straight line. A three-dimensional input space can be divided by a two-dimensional plane. In general, an n-dimensional input space can be divided by an (n-1)-dimensional plane, or hyperplane. Of course, for n > 3 this is hard to visualize.

Linear Separability

Of course, the same applies to our original function f using binary input values. The only difference is the restriction in the input values.

Obviously, we cannot find a straight line to realize the XOR function:

[Figure: the four XOR input points in the (x1, x2) plane; no straight line separates the 1-outputs (0, 1) and (1, 0) from the 0-outputs (0, 0) and (1, 1).]

In order to realize XOR with TLUs, we need to combine multiple TLUs into a network.

Multi-Layered XOR Network

[Figure: a two-layer network of TLUs with inputs x1 and x2 computing x1 XOR x2.]
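One illustrative two-layer construction (the specific weights are mine and need not match the slide's diagram): a hidden layer of two TLUs computes OR and AND of the inputs, and the output TLU fires iff OR is true and AND is false, which is exactly XOR.

```python
def tlu(w, theta, x):
    # Threshold logic unit: 1 iff the weighted input sum reaches theta.
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) >= theta else 0

def xor_net(x1, x2):
    # Hidden layer: OR (threshold 1) and AND (threshold 2) of the inputs.
    h_or  = tlu((1, 1), 1, (x1, x2))   # x1 OR x2
    h_and = tlu((1, 1), 2, (x1, x2))   # x1 AND x2
    # Output layer: fires iff h_or is 1 and h_and is 0.
    return tlu((1, -1), 1, (h_or, h_and))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor_net(a, b))  # matches a XOR b on all four inputs
```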

Capabilities of Threshold Neurons

What can threshold neurons do for us? To keep things simple, let us consider such a neuron with two inputs:

[Figure: a neuron i with inputs x1 and x2, weights W_i,1 and W_i,2, and output x_i.]

The computation of this neuron can be described as the inner product of the two-dimensional vectors x and w_i, followed by a threshold operation.

Capabilities of Threshold Neurons

Let us assume that the threshold θ = 0 and illustrate the function computed by the neuron for sample vectors w_i and x:

[Figure: the weight vector w_i in the input plane, with a dotted line through the origin perpendicular to it.]

Since the inner product is nonnegative for -90° ≤ φ ≤ 90°, in this example the neuron's output is 1 for any input vector x to the right of or on the dotted line, and 0 for any other input vector.
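With θ = 0 the output depends only on the angle φ between w_i and x, which can be demonstrated with a short sketch; the weight vector and the tested angles are arbitrary illustration values of mine:

```python
import math

def threshold_neuron(w, x, theta=0.0):
    # Output 1 iff the inner product w . x reaches theta (here theta = 0).
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) >= theta else 0

w = (1.0, 0.5)                     # an arbitrary weight vector
base = math.atan2(w[1], w[0])      # direction of w in the input plane

# Unit input vectors at various angles phi to w: the output should be 1
# iff |phi| is at most 90 degrees (x on the "right" side of the boundary).
for phi_deg in (0, 45, 89, 91, 180):
    phi = math.radians(phi_deg)
    x = (math.cos(base + phi), math.sin(base + phi))
    print(phi_deg, threshold_neuron(w, x))  # 1 for 0, 45, 89; 0 for 91, 180
```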