
1 Introduction to Neural Networks John Paxton Montana State University Summer 2003

2 Chapter 3: Pattern Association
Aristotle observed that human memory associates
– similar items
– contrary items
– items close in proximity
– items close in succession (a song)

3 Terminology and Issues
– Autoassociative Networks
– Heteroassociative Networks
– Feedforward Networks
– Recurrent Networks
– How many patterns can be stored?

4 Hebb Rule for Pattern Association
Architecture diagram: input units x_1 … x_n fully connected to output units y_1 … y_m through weights w_11 … w_nm.

5 Algorithm
1. set w_ij = 0 for 1 <= i <= n, 1 <= j <= m
2. for each training pair s:t
3.   x_i = s_i
4.   y_j = t_j
5.   w_ij(new) = w_ij(old) + x_i * y_j
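The loop above maps directly to code. A minimal Python sketch, assuming numpy and the training pairs from the next slide (the function name hebb_train is mine, not from the slides):

```python
import numpy as np

def hebb_train(S, T):
    """Hebb rule: accumulate w_ij += x_i * y_j over every training pair s:t."""
    n, m = len(S[0]), len(T[0])
    W = np.zeros((n, m), dtype=int)     # step 1: w_ij = 0
    for s, t in zip(S, T):              # step 2: for each training pair s:t
        for i in range(n):              # steps 3-5
            for j in range(m):
                W[i, j] += s[i] * t[j]
    return W

S = [(1, -1, -1), (-1, 1, 1)]           # s1, s2 from the example slide
T = [(1, -1), (-1, 1)]                  # t1, t2
print(hebb_train(S, T))                 # [[ 2 -2] [-2  2] [-2  2]]
```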

6 Example
s1 = (1 -1 -1), s2 = (-1 1 1)
t1 = (1 -1), t2 = (-1 1)
w_11 = 1*1 + (-1)(-1) = 2
w_12 = 1*(-1) + (-1)*1 = -2
w_21 = (-1)*1 + 1*(-1) = -2
w_22 = (-1)(-1) + 1*1 = 2
w_31 = (-1)*1 + 1*(-1) = -2
w_32 = (-1)(-1) + 1*1 = 2

7 Matrix Alternative
s1 = (1 -1 -1), s2 = (-1 1 1)
t1 = (1 -1), t2 = (-1 1)
W = S^T T:
[ 1 -1 ]   [ 1 -1 ]   [ 2 -2 ]
[-1  1 ] x [-1  1 ] = [-2  2 ]
[-1  1 ]              [-2  2 ]
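The same weight matrix in one line of numpy, a sketch under the assumption that the training vectors are stored as the rows of S and T:

```python
import numpy as np

S = np.array([[1, -1, -1],    # rows are s1, s2
              [-1, 1, 1]])
T = np.array([[1, -1],        # rows are t1, t2
              [-1, 1]])
W = S.T @ T                   # W = S^T T, the sum of outer products
print(W)                      # [[ 2 -2] [-2  2] [-2  2]]
```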

8 Final Network
f(y_in) = 1 if y_in > 0, 0 if y_in = 0, else -1
Network diagram: inputs x1, x2, x3 connected to outputs y1, y2 with the learned weights (w_11 = 2, w_12 = -2, w_21 = -2, w_22 = 2, w_31 = -2, w_32 = 2).
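A quick check of recall with this activation, reusing the weights from the previous slides (np.sign happens to implement the same three-way threshold):

```python
import numpy as np

W = np.array([[2, -2], [-2, 2], [-2, 2]])   # learned weights
f = lambda y_in: np.sign(y_in).astype(int)  # 1 if > 0, 0 if == 0, -1 otherwise

x = np.array([1, -1, -1])                   # training input s1
print(f(x @ W))                             # [ 1 -1]  (the associated target t1)
```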

9 Properties
– Weights exist if the input vectors are linearly independent.
– Orthogonal vectors can be learned perfectly.
– High weights imply strong correlations.

10 Exercises
– What happens if (-1 -1 -1) is tested? This vector has one mistake.
– What happens if (0 -1 -1) is tested? This vector has one piece of missing data.
– Show an example of training data that is not learnable. Show the learned network.

11 Delta Rule for Pattern Association
– Works when patterns are linearly independent but not orthogonal.
– Introduced in the 1960s for ADALINE.
– Produces a least squares solution.

12 Activation Functions
Delta Rule (f' = 1):
w_ij(new) = w_ij(old) + α (t_j - y_j) * x_i * 1
Extended Delta Rule (general f'(y_in.j)):
w_ij(new) = w_ij(old) + α (t_j - y_j) * x_i * f'(y_in.j)
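A minimal sketch of the plain delta-rule update with an identity activation (the learning rate value alpha = 0.1 and the function name are my own choices; the slides only give the update formula):

```python
import numpy as np

def delta_update(W, x, t, alpha=0.1):
    """w_ij(new) = w_ij(old) + alpha * (t_j - y_j) * x_i, with f' = 1."""
    y = x @ W                              # current (linear) output
    return W + alpha * np.outer(x, t - y)  # outer product updates every w_ij at once

W = np.zeros((3, 2))
x = np.array([1.0, -1.0, -1.0])
t = np.array([1.0, -1.0])
for _ in range(50):                        # repeated presentations drive y toward t
    W = delta_update(W, x, t)
print(np.round(x @ W, 2))                  # approximately [ 1. -1.]
```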

13 Heteroassociative Memory Net
Application: associate characters, e.g. A ↔ a, B ↔ b.

14 Autoassociative Net
Architecture diagram: input units x_1 … x_n connected to output units y_1 … y_n through weights w_11 … w_nn.

15 Training Algorithm
Assuming that the training vectors are orthogonal, we can use the Hebb rule algorithm mentioned earlier.
Application: find out whether an input vector is familiar or unfamiliar, for example voice input as part of a security system.

16 Autoassociative Example
s = (1 1 1)
[1]              [1 1 1]    [0 1 1]
[1] [1 1 1]  =   [1 1 1] →  [1 0 1]   (diagonal set to 0)
[1]              [1 1 1]    [1 1 0]
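A small numpy sketch of this construction, zeroing the diagonal as the slide does:

```python
import numpy as np

s = np.array([1, 1, 1])
W = np.outer(s, s)        # s^T s: the all-ones matrix for this vector
np.fill_diagonal(W, 0)    # remove self-connections
print(W)                  # [[0 1 1] [1 0 1] [1 1 0]]
```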

17 Evaluation
– What happens if (1 1 1) is presented?
– What happens if (0 1 1) is presented?
– What happens if (0 0 1) is presented?
– What happens if (-1 1 1) is presented?
– What happens if (-1 -1 1) is presented?
– Why are the diagonals set to 0?

18 Storage Capacity
2 vectors: (1 1 1), (-1 -1 -1). Recall is perfect.
[ 1 -1 ]   [ 1  1  1 ]   [ 0 2 2 ]
[ 1 -1 ] x [-1 -1 -1 ] = [ 2 0 2 ]   (diagonal set to 0)
[ 1 -1 ]                 [ 2 2 0 ]

19 Storage Capacity
3 vectors: (1 1 1), (-1 -1 -1), (1 -1 1). Recall is no longer perfect.
[ 1 -1  1 ]   [ 1  1  1 ]   [ 0 1 3 ]
[ 1 -1 -1 ] x [-1 -1 -1 ] = [ 1 0 1 ]   (diagonal set to 0)
[ 1 -1  1 ]   [ 1 -1  1 ]   [ 3 1 0 ]

20 Theorem
Up to n-1 mutually orthogonal bipolar vectors of n dimensions can be stored in an autoassociative net.

21 Iterative Autoassociative Net
1 vector: s = (1 1 -1)
            [ 0  1 -1 ]
s^T * s  =  [ 1  0 -1 ]   (diagonal set to 0)
            [-1 -1  0 ]
(1 0 0) -> (0 1 -1)
(0 1 -1) -> (2 1 -1) -> (1 1 -1)
(1 1 -1) -> (2 2 -2) -> (1 1 -1)

22 Testing Procedure
1. initialize weights using Hebb learning
2. for each test vector do
3.   set x_i = s_i
4.   calculate t_i
5.   set s_i = t_i
6.   go to step 4 if the s vector is new
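A hedged sketch of this testing loop applied to the (1 1 -1) example above (the helper name iterate and the step bound are mine, not from the slides):

```python
import numpy as np

W = np.array([[ 0,  1, -1],    # Hebb weights for s = (1 1 -1), diagonal zeroed
              [ 1,  0, -1],
              [-1, -1,  0]])

def iterate(x, W, max_steps=10):
    """Feed the thresholded output back in until the vector stops changing."""
    s = np.array(x)
    for _ in range(max_steps):
        t = np.sign(s @ W).astype(int)   # steps 3-4: compute the response t
        if np.array_equal(t, s):         # step 6: stop once s is no longer new
            return t
        s = t                            # step 5: s_i = t_i
    return s

print(iterate([1, 0, 0], W))             # [ 1  1 -1], recovered from two missing entries
```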

23 Exercises
– 1 piece of missing data: (0 1 -1)
– 2 pieces of missing data: (0 0 -1)
– 3 pieces of missing data: (0 0 0)
– 1 mistake: (-1 1 -1)
– 2 mistakes: (-1 -1 -1)

24 Discrete Hopfield Net
– content addressable problems
– pattern association problems
– constrained optimization problems
w_ij = w_ji
w_ii = 0

25 Characteristics
– Only 1 unit updates its activation at a time.
– Each unit continues to receive the external signal.
– An energy (Lyapunov) function can be found that allows the net to converge, unlike the previous system.
– Autoassociative.

26 Architecture
Diagram: three fully interconnected units y1, y2, y3, each receiving an external input x1, x2, x3.

27 Algorithm
1. initialize weights using Hebb rule
2. for each input vector do
3.   y_i = x_i
4.   do steps 5-6 randomly for each y_i
5.     y_in.i = x_i + Σ_j y_j w_ji
6.     calculate f(y_in.i)
7. go to step 2 if the net hasn't converged
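A minimal sketch of this asynchronous update (a direct transcription of the steps; the random seed, step bound, and function name are my own assumptions):

```python
import numpy as np

def hopfield_recall(x, W, max_sweeps=20):
    """Discrete Hopfield recall: update one randomly chosen unit at a time."""
    x = np.asarray(x)
    y = x.copy()                              # step 3: y_i = x_i
    rng = np.random.default_rng(0)
    for _ in range(max_sweeps):
        prev = y.copy()
        for i in rng.permutation(len(y)):     # step 4: visit the units in random order
            y_in = x[i] + y @ W[:, i]         # step 5: y_in.i = x_i + sum_j y_j w_ji
            if y_in > 0:                      # step 6: f(y_in.i); activation is left
                y[i] = 1                      #   unchanged when y_in.i == 0
            elif y_in < 0:
                y[i] = -1
        if np.array_equal(y, prev):           # step 7: stop once the net has converged
            break
    return y

W = np.array([[0, -1], [-1, 0]])              # Hebb weights for training vector (1 -1)
print(hopfield_recall([0, -1], W))            # [ 1 -1]
```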

28 Example
Training vector: (1 -1).
Diagram: two units y1, y2 with external inputs x1, x2; the Hebb rule gives w_12 = w_21 = -1 (see the updates on the next slide).

29 Example
input (0 -1):
  update y_1 = 0 + (-1)(-1) = 1
  update y_2 = -1 + 1(-1) = -2 -> -1
input (1 -1):
  update y_2 = -1 + 1(-1) = -2 -> -1
  update y_1 = 1 + (-1)(-1) = 2 -> 1

30 Hopfield Theorems
– Convergence is guaranteed.
– The number of storable patterns is approximately n / (2 * log n), where n is the dimension of a vector.

31 Bidirectional Associative Memory (BAM)
– Heteroassociative Recurrent Net
– Kosko, 1988
Architecture diagram: an x layer x_1 … x_n bidirectionally connected to a y layer y_1 … y_m.

32 Activation Function
f(y_in) = 1, if y_in > 0
f(y_in) = 0, if y_in = 0
f(y_in) = -1, otherwise

33 Algorithm
1. initialize weights using Hebb rule
2. for each test vector do
3.   present s to the x layer
4.   present t to the y layer
5.   while equilibrium is not reached
6.     compute f(y_in.j)
7.     compute f(x_in.i)
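A hedged sketch of this back-and-forth iteration, using the weights derived on the next slide (bam_recall and the step bound are my own names; np.sign maps 0 to 0, matching the slide's activation):

```python
import numpy as np

def bam_recall(x, W, max_steps=10):
    """Bounce signals between the x and y layers until neither changes."""
    x = np.asarray(x)
    y = np.sign(x @ W).astype(int)            # step 6: compute f(y_in.j)
    for _ in range(max_steps):
        x_new = np.sign(y @ W.T).astype(int)  # step 7: compute f(x_in.i)
        y_new = np.sign(x_new @ W).astype(int)
        if np.array_equal(x_new, x) and np.array_equal(y_new, y):
            break                             # step 5: equilibrium reached
        x, y = x_new, y_new
    return x, y

W = np.array([[2, -2],                        # Hebb weights for s1=(1 1), t1=(1 -1),
              [2, -2]])                       #                  s2=(-1 -1), t2=(-1 1)
print(bam_recall([1, 1], W))                  # (array([1, 1]), array([ 1, -1]))
```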

34 Example
s1 = (1 1), t1 = (1 -1)
s2 = (-1 -1), t2 = (-1 1)
[ 1 -1 ]   [ 1 -1 ]   [ 2 -2 ]
[ 1 -1 ] x [-1  1 ] = [ 2 -2 ]

35 Example
Architecture diagram: x1, x2 connected to y1, y2 with weights w_11 = 2, w_12 = -2, w_21 = 2, w_22 = -2.
present (1 1) to x -> (1 -1)
present (1 -1) to y -> (1 1)

36 Hamming Distance
Definition: the number of corresponding bits that differ between two vectors.
For example, H[(1 -1), (1 1)] = 1; averaged over the 2 components, the Hamming distance is ½.
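A one-line check of that example (illustrative only):

```python
import numpy as np

a, b = np.array([1, -1]), np.array([1, 1])
print(np.sum(a != b))    # Hamming distance H[a, b] = 1
print(np.mean(a != b))   # averaged per component: 0.5
```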

37 About BAMs
– Observation: encoding is better when the average Hamming distance of the inputs is similar to the average Hamming distance of the outputs.
– The memory capacity of a BAM is min(n-1, m-1).

