Neural Networks Lecture 3: Models of Neurons and Neural Networks (September 14, 2010)


Slide 1: Visual Illusions
Visual illusions demonstrate that we perceive an "interpreted version" of the incoming light pattern rather than the exact pattern itself.

Slide 2: Visual Illusions
Here we see that the squares A and B from the previous image actually have the same luminance (but in their visual context they are interpreted differently).

Slide 3: How do NNs and ANNs work?
NNs are able to learn by adapting their connectivity patterns so that the organism improves its behavior in terms of reaching certain (evolutionary) goals.
The strength of a connection, and whether it is excitatory or inhibitory, depends on the state of the receiving neuron's synapses.
The NN achieves learning by appropriately adapting the states of its synapses.

Slide 4: An Artificial Neuron
(Figure: inputs x_1, ..., x_n reach neuron i through synapses with weights W_i,1, ..., W_i,n; the weighted inputs are summed into the net input signal, and the neuron produces the output x_i.)
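The figure's neuron can be sketched in a few lines of code. This is a minimal illustration of the weighted sum described above; the function name is my own choice, not from the slides.

```python
def net_input(weights, inputs):
    """Net input of a neuron: the sum of all inputs after passing
    the synapses, i.e. the weighted sum of the input signals."""
    return sum(w * x for w, x in zip(weights, inputs))

# Three inputs passing through three synapses:
print(net_input([0.5, -1.0, 2.0], [1.0, 1.0, 0.5]))  # 0.5 - 1.0 + 1.0 = 0.5
```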

Slide 5: The Activation Function
One possible choice is a threshold function:

f_i(net_i(t)) = 1 if net_i(t) >= θ, and 0 otherwise.

The graph of this function is a step that jumps from 0 to 1 at net_i(t) = θ.
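The threshold activation is equally short in code. I assume here, as the reconstructed formula above does, that the step fires at net = θ itself (the "greater or equal" convention):

```python
def threshold(net, theta):
    """Threshold activation: output 1 once the net input reaches theta,
    output 0 below it (step function)."""
    return 1 if net >= theta else 0

print(threshold(0.3, 0.5))  # 0: below the threshold
print(threshold(0.7, 0.5))  # 1: above the threshold
print(threshold(0.5, 0.5))  # 1: exactly at the threshold fires, by convention
```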

Slide 6: Binary Analogy: Threshold Logic Units (TLUs)
TLUs in technical systems are similar to the threshold neuron model, except that TLUs only accept binary inputs (0 or 1).
(Figure: an example three-input TLU with weights w_1, w_2, w_3 and threshold θ realizing a Boolean function of x_1, x_2, x_3; the concrete values are given in the figure.)
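A TLU combines the weighted sum and the threshold step over binary inputs. The particular weights below are an assumption of mine (the figure's values are not fully legible in this transcript); with w = (1, 1, -1) and θ = 1.5 the unit realizes x_1 AND x_2 AND (NOT x_3).

```python
def tlu(weights, theta, inputs):
    """Threshold logic unit: binary inputs, binary output."""
    net = sum(w * x for w, x in zip(weights, inputs))
    return 1 if net >= theta else 0

# Assumed parameters (illustrative, not necessarily the figure's):
# w = (1, 1, -1), theta = 1.5 realizes x1 AND x2 AND (NOT x3).
for x1 in (0, 1):
    for x2 in (0, 1):
        for x3 in (0, 1):
            out = tlu((1, 1, -1), 1.5, (x1, x2, x3))
            assert out == (x1 and x2 and not x3)
```

Negative weights make an input inhibitory: here x_3 = 1 pulls the net input below the threshold.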

Slide 7: Binary Analogy: Threshold Logic Units (TLUs)
Yet another example: x_1 XOR x_2. Impossible! TLUs can only realize linearly separable functions.
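The impossibility claim can be illustrated numerically: a brute-force search over a grid of candidate weights and thresholds finds no single TLU that reproduces XOR on all four input pairs. (This is an illustration over a finite grid, not a proof; the proof is the linear-separability argument on the following slides.)

```python
def tlu(w1, w2, theta, x1, x2):
    """Two-input threshold logic unit."""
    return 1 if w1 * x1 + w2 * x2 >= theta else 0

# Candidate parameters: -4.0, -3.5, ..., 4.0 in steps of 0.5.
values = [v / 2 for v in range(-8, 9)]
found = any(
    all(tlu(w1, w2, t, x1, x2) == (x1 ^ x2)
        for x1 in (0, 1) for x2 in (0, 1))
    for w1 in values for w2 in values for t in values
)
print(found)  # False: no candidate realizes XOR
```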

Slide 8: Linear Separability
A function f: {0, 1}^n → {0, 1} is linearly separable if the space of input vectors yielding 1 can be separated from those yielding 0 by a linear surface (hyperplane) in n dimensions.
(Figure: two two-dimensional examples over the unit square, one linearly separable and one linearly inseparable; the inseparable one corresponds to XOR.)

Slide 9: Linear Separability
To explain linear separability, let us consider the function f: R^n → {0, 1} with

f(x_1, ..., x_n) = 1 if w_1 x_1 + w_2 x_2 + ... + w_n x_n >= θ, and 0 otherwise,

where x_1, x_2, ..., x_n represent real numbers. This is exactly the function that our threshold neurons use to compute their output from their inputs.

Slide 10: Linear Separability
Input space in the two-dimensional case (n = 2). The figure shows three settings, each dividing the plane along the line w_1 x_1 + w_2 x_2 = θ into a region of output 1 and a region of output 0:
w_1 = 1, w_2 = 2, θ = 2
w_1 = -2, w_2 = 1, θ = 2
w_1 = -2, w_2 = 1, θ = 1
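For the first of the three settings above, we can check on which side of the boundary line x_1 + 2 x_2 = 2 a few sample points fall (the sample points are my own choice):

```python
def output(w1, w2, theta, x1, x2):
    """Threshold neuron output for a point (x1, x2) of the input plane."""
    return 1 if w1 * x1 + w2 * x2 >= theta else 0

# First panel: w1 = 1, w2 = 2, theta = 2; boundary line x1 + 2*x2 = 2.
print(output(1, 2, 2, 2, 1))   # 1: net = 2 + 2 = 4 >= 2
print(output(1, 2, 2, 0, 0))   # 0: net = 0 < 2
print(output(1, 2, 2, 2, 0))   # 1: net = 2, point exactly on the line
```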

Slide 11: Linear Separability
So by varying the weights and the threshold, we can realize any linear separation of the input space into a region that yields output 1 and another region that yields output 0.
As we have seen, a two-dimensional input space can be divided by any straight line. A three-dimensional input space can be divided by any two-dimensional plane. In general, an n-dimensional input space can be divided by an (n-1)-dimensional plane, or hyperplane. Of course, for n > 3 this is hard to visualize.

Slide 12: Linear Separability
Of course, the same applies to our original function f of the TLU using binary input values; the only difference is the restriction in the input values. Obviously, we cannot find a straight line to realize the XOR function.
In order to realize XOR with TLUs, we need to combine multiple TLUs into a network.

Slide 13: Multi-Layered XOR Network
(Figure: a two-layer network of TLUs computing x_1 XOR x_2; the inputs x_1 and x_2 feed two hidden threshold units, whose outputs feed a final threshold unit. The weights and thresholds are given in the figure.)
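The figure's exact weights are not fully legible here, so the sketch below uses one standard two-layer construction (an assumption, not necessarily the figure's): a hidden OR unit, a hidden AND unit, and an output unit computing "OR and not AND".

```python
def tlu(weights, theta, inputs):
    """Threshold logic unit: binary inputs, binary output."""
    net = sum(w * x for w, x in zip(weights, inputs))
    return 1 if net >= theta else 0

def xor_net(x1, x2):
    """Two-layer TLU network realizing XOR (one standard construction)."""
    h1 = tlu((1, 1), 0.5, (x1, x2))     # hidden unit 1: x1 OR x2
    h2 = tlu((1, 1), 1.5, (x1, x2))     # hidden unit 2: x1 AND x2
    return tlu((1, -1), 0.5, (h1, h2))  # output: h1 AND (NOT h2)

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, xor_net(x1, x2))  # 1 exactly when the inputs differ
```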

Slide 14: Capabilities of Threshold Neurons
What can threshold neurons do for us? To keep things simple, let us consider such a neuron with two inputs x_1 and x_2, weights W_i,1 and W_i,2, and output x_i.
The computation of this neuron can be described as the inner product of the two-dimensional vectors x and w_i, followed by a threshold operation.

Slide 15: The Net Input Signal
The net input signal is the sum of all inputs after passing the synapses:

net_i(t) = Σ_j w_i,j(t) x_j(t)

This can be viewed as computing the inner product of the vectors w_i and x:

net_i = w_i · x = ||w_i|| ||x|| cos φ,

where φ is the angle between the two vectors.

Slide 16: Capabilities of Threshold Neurons
Let us assume that the threshold θ = 0 and illustrate the function computed by the neuron for sample vectors w_i and x. Since the inner product is nonnegative exactly when -90° ≤ φ ≤ 90°, in this example the neuron's output is 1 for any input vector x to the right of, or on, the dotted line perpendicular to w_i, and 0 for any other input vector.
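The angle criterion can be checked numerically. With θ = 0 the neuron fires exactly when w_i · x ≥ 0, i.e. when the angle between the vectors is at most 90°. The weight vector and sample points below are my own choices for illustration:

```python
import math

def angle_deg(w, x):
    """Angle between two 2-D vectors, in degrees."""
    dot = w[0] * x[0] + w[1] * x[1]
    return math.degrees(math.acos(dot / (math.hypot(*w) * math.hypot(*x))))

def output(w, x):
    """Threshold neuron with theta = 0: fires iff the inner product is >= 0."""
    return 1 if w[0] * x[0] + w[1] * x[1] >= 0 else 0

w = (1.0, 1.0)
samples = [(2.0, 0.5), (-1.0, 2.0), (-2.0, -1.0)]
for x in samples:
    # Output is 1 exactly when the angle between w and x is at most 90 degrees.
    print(output(w, x), round(angle_deg(w, x), 1))
```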

Slide 17: Capabilities of Threshold Neurons
By choosing appropriate weights w_i and threshold θ, we can place the line dividing the input space into regions of output 0 and output 1 in any position and orientation. Therefore, our threshold neuron can realize any linearly separable function R^n → {0, 1}.
Although we only looked at two-dimensional input, our findings apply to any dimensionality n. For example, for n = 3, our neuron can realize any function that divides the three-dimensional input space along a two-dimensional plane.

