Introduction to Artificial Intelligence
Lecture 16: Neural Network Paradigms I
March 31, 2016

Slide 1: … let us move on to… Artificial Neural Networks

Slide 2: Computers vs. Neural Networks

  "Standard" Computers            Neural Networks
  one CPU / few processing cores  highly parallel processing
  fast processing units           slow processing units
  reliable units                  unreliable units
  static infrastructure           dynamic infrastructure

Slide 3: Why Artificial Neural Networks?

There are two basic reasons why we are interested in building artificial neural networks (ANNs):

- Technical viewpoint: Some problems, such as character recognition or the prediction of future states of a system, require massively parallel and adaptive processing.
- Biological viewpoint: ANNs can be used to replicate and simulate components of the human (or animal) brain, thereby giving us insight into natural information processing.

Slide 4: Why Artificial Neural Networks?

Why do we need a paradigm other than symbolic AI for building "intelligent" machines?

- Symbolic AI is well-suited for representing explicit knowledge that can be appropriately formalized.
- However, learning in biological systems is mostly implicit: it is an adaptation process based on uncertain information and reasoning.
- ANNs are inherently parallel and work extremely efficiently if implemented in parallel hardware.

Slide 5: How do NNs and ANNs work?

- The "building blocks" of neural networks are the neurons.
- In technical systems, we also refer to them as units or nodes.
- Basically, each neuron
  - receives input from many other neurons,
  - changes its internal state (activation) based on the current input,
  - sends one output signal to many other neurons, possibly including its input neurons (recurrent network).

Slide 6: How do NNs and ANNs work?

- Information is transmitted as a series of electric impulses, so-called spikes.
- The frequency and phase of these spikes encode the information.
- In biological systems, one neuron can be connected to as many as 10,000 other neurons.

Slide 7: Structure of NNs (and some ANNs)

- In biological systems, neurons of similar functionality are usually organized in separate areas (or layers).
- Often, there is a hierarchy of interconnected layers, with the lowest layer receiving sensory input and neurons in higher layers computing more complex functions.
- For example, neurons in macaque visual cortex have been identified that are activated only when there is a face (monkey, human, or drawing) in the macaque's visual field.

Slide 8: "Data Flow Diagram" of Visual Areas in Macaque Brain

[Figure: diagram of visual areas in the macaque brain. Blue: motion perception pathway. Green: object recognition pathway.]

Slide 9: Stimuli in receptive field of neuron

[Figure: example stimuli presented in the receptive field of a neuron.]

Slide 10: [Figure only; no caption survives in the transcript.]

Slide 11: Structure of NNs (and some ANNs)

- In a hierarchy of neural areas such as the visual system, those at the bottom (near the sensory "input" neurons) only "see" local information.
- For example, each neuron in primary visual cortex receives input only from a small area (about 1° in diameter) of the visual field, called its receptive field.
- As we move towards higher areas, the responses of neurons become less and less location dependent.
- In inferotemporal cortex, some neurons respond to face stimuli shown at any position in the visual field.

Slide 12: Receptive Fields in Hierarchical Neural Networks

[Figure: a neuron A in a low layer and the small receptive field of A in the input layer.]

Slide 13: Receptive Fields in Hierarchical Neural Networks

[Figure: a neuron B in the top layer and the much larger receptive field of B in the input layer.]

Slide 14: How do NNs and ANNs work?

- NNs are able to learn by adapting their connectivity patterns so that the organism improves its behavior in terms of reaching certain (evolutionary) goals.
- The strength of a connection, or whether it is excitatory or inhibitory, depends on the state of a receiving neuron's synapses.
- The NN achieves learning by appropriately adapting the states of its synapses.

Slide 15: An Artificial Neuron

[Figure: inputs x_1, x_2, …, x_n pass through synapses with weights w_{i,1}, w_{i,2}, …, w_{i,n} into neuron i, which computes a net input signal and sends one output x_i.]

Slide 16: The Net Input Signal

The net input signal is the sum of all inputs after passing the synapses:

  net_i(t) = Σ_{j=1..n} w_{i,j}(t) · x_j(t)

This can be viewed as computing the inner product of the vectors w_i and x:

  net_i(t) = |w_i| · |x| · cos φ,

where φ is the angle between the two vectors.
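To make the computation concrete, here is a minimal Python sketch of the net input; the function name net_input and the sample numbers are illustrative, not from the slides:

    # Net input: weighted sum of the inputs, i.e. the inner product w_i . x
    def net_input(weights, inputs):
        return sum(w * x for w, x in zip(weights, inputs))

    # Example: three inputs passing through three synapses.
    print(net_input([0.5, -1.0, 2.0], [1.0, 0.0, 0.5]))  # 1.5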

Slide 17: The Activation Function

One possible choice is a threshold function:

  f_i(net_i(t)) = 1 if net_i(t) ≥ θ, and 0 otherwise.

The graph of this function is a step: it is 0 for net_i(t) < θ and jumps to 1 at net_i(t) = θ.
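Combining the net input with the threshold activation gives a complete threshold neuron; a minimal Python sketch (the names threshold and tlu are illustrative):

    # Threshold activation: fire (output 1) iff the net input reaches theta.
    def threshold(net, theta):
        return 1 if net >= theta else 0

    def tlu(weights, theta, inputs):
        """One threshold neuron: inner product followed by a threshold."""
        net = sum(w * x for w, x in zip(weights, inputs))
        return threshold(net, theta)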

Slide 18: Binary Analogy: Threshold Logic Units

Example: a TLU realizing x1 ∧ x2 ∧ ¬x3 with

  w1 = 1, w2 = 1, w3 = -1, θ = 1.5

[The transcript shows only the values 1, 1, 1.5; the logical operators and the negative third weight were lost and are reconstructed here.]
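Assuming the reconstructed parameters above, a short Python check of the truth table (tlu is the sketch from the previous slide, repeated here so the snippet runs on its own):

    from itertools import product

    def tlu(weights, theta, inputs):  # as sketched on the previous slide
        return 1 if sum(w * x for w, x in zip(weights, inputs)) >= theta else 0

    # Check that w = (1, 1, -1), theta = 1.5 realizes x1 AND x2 AND (NOT x3):
    # only the input (1, 1, 0) reaches a net input of 2 >= 1.5.
    for x1, x2, x3 in product([0, 1], repeat=3):
        expected = 1 if (x1, x2, x3) == (1, 1, 0) else 0
        assert tlu([1, 1, -1], 1.5, [x1, x2, x3]) == expected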

Slide 19: Binary Analogy: Threshold Logic Units (continued)

Yet another example: x1 XOR x2. No choice of w1, w2, and θ works. Impossible! TLUs can only realize linearly separable functions.

Slide 20: Linear Separability

A function f: {0, 1}^n → {0, 1} is linearly separable if the space of input vectors yielding 1 can be separated from those yielding 0 by a linear surface (hyperplane) in n dimensions.

Examples (two dimensions):

[Figure: two 2×2 input grids over x1 and x2. The left grid, with outputs 1, 0, 1, 1, is linearly separable; the right grid, with outputs 1, 0, 0, 1 (an XOR-like pattern), is linearly inseparable.]
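A hedged way to illustrate this with code: a brute-force search over a coarse grid of weights and thresholds finds a TLU for OR but none for XOR. A finite grid is of course only suggestive, not a proof; the grid bounds and step size below are arbitrary illustrative choices.

    from itertools import product

    def tlu(weights, theta, inputs):
        return 1 if sum(w * x for w, x in zip(weights, inputs)) >= theta else 0

    def separable(target):
        """Search a coarse grid for TLU parameters realizing 'target'."""
        grid = [k / 2 for k in range(-6, 7)]  # -3.0, -2.5, ..., 3.0
        for w1, w2, theta in product(grid, repeat=3):
            if all(tlu([w1, w2], theta, [a, b]) == target(a, b)
                   for a, b in product([0, 1], repeat=2)):
                return True
        return False

    print(separable(lambda a, b: a | b))  # True  (OR is separable)
    print(separable(lambda a, b: a ^ b))  # False (XOR is not)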

Slide 21: Linear Separability

To explain linear separability, let us consider the function f: R^n → {0, 1} with

  f(x1, …, xn) = 1 if w1·x1 + … + wn·xn ≥ θ, and 0 otherwise,

where x1, x2, …, xn represent real numbers. This will also be useful for understanding the computations of artificial neural networks.

Slide 22: Linear Separability

Input space in the two-dimensional case (n = 2):

[Figure: three plots of the (x1, x2) plane, each divided by the line w1·x1 + w2·x2 = θ into a region with output 1 and a region with output 0, for the parameter sets (w1 = 1, w2 = 2, θ = 2), (w1 = -2, w2 = 1, θ = 2), and (w1 = -2, w2 = 1, θ = 1).]
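The boundary between the two regions is the line w1·x1 + w2·x2 = θ; a tiny sketch that prints it in slope-intercept form for the three parameter sets above (assuming w2 ≠ 0):

    # Decision boundary of a 2-input TLU: w1*x1 + w2*x2 = theta,
    # i.e. x2 = (theta - w1*x1) / w2 when w2 != 0.
    for w1, w2, theta in [(1, 2, 2), (-2, 1, 2), (-2, 1, 1)]:
        slope = -w1 / w2
        intercept = theta / w2
        print(f"w=({w1},{w2}), theta={theta}: x2 = {slope}*x1 + {intercept}")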

Slide 23: Linear Separability

So by varying the weights and the threshold, we can realize any linear separation of the input space into a region that yields output 1 and another region that yields output 0.

As we have seen, a two-dimensional input space can be divided by any straight line. A three-dimensional input space can be divided by any two-dimensional plane. In general, an n-dimensional input space can be divided by an (n-1)-dimensional plane, or hyperplane. Of course, for n > 3 this is hard to visualize.

Slide 24: Linear Separability

Of course, the same applies to our original function f using binary input values. The only difference is the restriction in the input values. Obviously, we cannot find a straight line to realize the XOR function:

[Figure: the 2×2 XOR input grid with outputs 1, 0, 0, 1; no straight line separates the 1s from the 0s.]

In order to realize XOR with TLUs, we need to combine multiple TLUs into a network.

Slide 25: Multi-Layered XOR Network

[Figure: a two-layer network of TLUs computing x1 XOR x2. The transcript shows the values 1, 0.5, 1, 0.5, 1, 1; in the common construction, two hidden units compute x1 ∧ ¬x2 and ¬x1 ∧ x2 (weights 1 and -1, thresholds 0.5), and an output unit ORs them (weights 1, 1, threshold 0.5). The negative weights appear to have been lost in the transcript.]
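A Python sketch of this construction, under the reconstruction assumed above (hidden units for x1 ∧ ¬x2 and ¬x1 ∧ x2, followed by an OR unit):

    def tlu(weights, theta, inputs):
        return 1 if sum(w * x for w, x in zip(weights, inputs)) >= theta else 0

    def xor_net(x1, x2):
        # Hidden layer: h1 = x1 AND NOT x2, h2 = x2 AND NOT x1.
        h1 = tlu([1, -1], 0.5, [x1, x2])
        h2 = tlu([-1, 1], 0.5, [x1, x2])
        # Output layer: h1 OR h2.
        return tlu([1, 1], 0.5, [h1, h2])

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, xor_net(a, b))  # prints the XOR truth table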

Slide 26: Capabilities of Threshold Neurons

What can threshold neurons do for us? To keep things simple, let us consider such a neuron with two inputs:

[Figure: a neuron i with inputs x1, x2, weights w_{i,1}, w_{i,2}, and output x_i.]

The computation of this neuron can be described as the inner product of the two-dimensional vectors x and w_i, followed by a threshold operation.

Slide 27: Capabilities of Threshold Neurons

Let us assume that the threshold θ = 0 and illustrate the function computed by the neuron for sample vectors w_i and x:

[Figure: the vectors w_i and x drawn in the plane, with axes labeled first vector component and second vector component, and a dotted line through the origin perpendicular to w_i.]

Since the inner product is non-negative for -90° ≤ φ ≤ 90°, where φ is the angle between w_i and x, in this example the neuron's output is 1 for any input vector x to the right of, or on, the dotted line, and 0 for any other input vector.
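A small numerical check of this geometric picture (the weight vector and the sample angles are arbitrary illustrative choices): with θ = 0, the output is 1 exactly when the inner product is non-negative, i.e. when the angle between w_i and x is at most 90°.

    import math

    def output(w, x, theta=0.0):
        net = sum(wi * xi for wi, xi in zip(w, x))
        return 1 if net >= theta else 0

    w = (1.0, 0.0)  # weight vector pointing along the x1-axis
    for deg in (0, 45, 89, 90, 91, 180):
        x = (math.cos(math.radians(deg)), math.sin(math.radians(deg)))
        print(deg, output(w, x))  # 1 for angles up to 90 degrees, 0 beyond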

