
1 Hypercubes and Neural Networks, Bill Wolfe, 10/23/2005

20 Modeling

21 Simple Neural Model: a_i = activation, e_i = external input, w_ij = connection strength. Assume w_ij = w_ji ("symmetric" network), so W = (w_ij) is a symmetric matrix.

22 Net Input: net_i = e_i + Σ_j w_ij a_j. Vector format: net = e + W a.

23 Dynamics. Basic idea: da/dt = net (each activation climbs its net input).

24 Energy: E = -(1/2) a^T W a - e^T a, so that -grad(E) = W a + e = net.

26 Lower Energy: da/dt = net = -grad(E), so the dynamics seek lower energy.

27 Problem: Divergence

28 A Fix: Saturation

29 Saturation keeps the activation vector inside the hypercube boundaries and encourages convergence to corners.

30 Summary: The Neural Model. a_i = activation, e_i = external input, w_ij = connection strength; W = (w_ij) is symmetric (w_ij = w_ji).
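A minimal numerical sketch of this model (my own code, not from the slides), Euler-integrating da/dt = net = e + W a while saturation clamps the activations to the unit hypercube:

```python
import numpy as np

def simulate(W, e, a0, dt=0.01, steps=2000):
    """Euler-integrate da/dt = net = e + W a, clamping a to the unit hypercube."""
    a = a0.copy()
    for _ in range(steps):
        net = e + W @ a                      # net input (slide 22)
        a = np.clip(a + dt * net, 0.0, 1.0)  # saturation (slides 28-29)
    return a

def energy(W, e, a):
    """E = -(1/2) a^T W a - e^T a, so -grad(E) = W a + e = net (slide 26)."""
    return -0.5 * a @ W @ a - e @ a

# Example: 3-neuron completely inhibitory network (slide 31); here I assume
# no self-connections, which the slide does not state explicitly.
n = 3
W = -(np.ones((n, n)) - np.eye(n))      # w_ij = -1 for i != j
e = np.array([0.3, 0.5, 0.4])           # arbitrary external inputs
print(simulate(W, e, np.full(n, 0.5)))  # tends toward a corner; largest input wins
```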

31 Example: Inhibitory Networks. Completely inhibitory: w_ij = -1 for all i, j (k-winner networks). Inhibitory grid: neighborhood inhibition.

32 Traveling Salesman Problem. Classic combinatorial optimization problem: find the shortest "tour" through n cities. There are n!/(2n) = (n-1)!/2 distinct tours (for example, 181,440 for n = 10).

33 TSP solution for 15,000 cities in Germany

34 TSP 50 City Example

35 Random

36 Nearest-City

37 2-OPT

38 S. Lin and B. W. Kernighan, "An Effective Heuristic for the Traveling Salesman Problem," Operations Research, 1973. http://www.jstor.org/view/0030364x/ap010105/01a00060/0
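For concreteness, here is a small sketch (my own code; the slides only name the heuristics) of the nearest-city construction from slide 36 and one simplified 2-OPT pass in the spirit of slide 37, treating the tour as an open path:

```python
import math

def nearest_city_tour(cities, start=0):
    """Nearest-city construction (slide 36): hop to the closest unvisited city."""
    unvisited = set(range(len(cities))) - {start}
    tour = [start]
    while unvisited:
        here = cities[tour[-1]]
        nxt = min(unvisited, key=lambda j: math.dist(here, cities[j]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def two_opt_pass(tour, cities):
    """One simplified 2-OPT pass (slide 37) on an open path: reverse a segment
    whenever swapping two edges shortens the tour."""
    d = lambda i, j: math.dist(cities[tour[i]], cities[tour[j]])
    for i in range(len(tour) - 3):
        for j in range(i + 2, len(tour) - 1):
            if d(i, i + 1) + d(j, j + 1) > d(i, j) + d(i + 1, j + 1):
                tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
    return tour
```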

39 Centroid

40 Monotonic

41 Neural Network Approach (figure label: "neuron")

42 Tours as Permutation Matrices. Example tour: CDBA. Permutation matrices correspond to the "feasible" states.

43 Not Allowed

44 Only one city per time stop, and only one time stop per city, so the rows and columns are inhibitory.

45 Distance Connections: inhibit cities at neighboring time stops in proportion to their distances.

46 Putting it all together:
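A sketch of how the pieces might combine (my own code; the penalty constants A, B, C are placeholders, since the slides give no explicit values): one neuron per (city, time stop) pair, with the row, column, and distance inhibition of slides 44-45:

```python
import numpy as np

def tsp_weights(dist, A=1.0, B=1.0, C=0.5):
    """Weight matrix for the TSP network: one neuron per (city i, time stop t),
    flattened as index i*n + t. dist is the symmetric n x n distance matrix."""
    n = len(dist)
    W = np.zeros((n * n, n * n))
    for i in range(n):
        for t in range(n):
            for j in range(n):
                for s in range(n):
                    if (i, t) == (j, s):
                        continue                     # no self-connection
                    w = 0.0
                    if i == j:
                        w -= A                       # one time stop per city (row)
                    if t == s:
                        w -= B                       # one city per time stop (column)
                    if s in ((t + 1) % n, (t - 1) % n):
                        w -= C * dist[i][j]          # distance inhibition (slide 45)
                    W[i * n + t, j * n + s] = w
    return W
```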

47 Research Questions
– Which architecture is best?
– Does the network produce feasible solutions? High-quality solutions? Optimal solutions?
– How do the initial activations affect network performance?
– Is the network similar to "nearest city" or any other traditional heuristic?
– How does the particular city configuration affect network performance?
– Is there a better way to understand the nonlinear dynamics?

48 Typical state of the network before convergence.

49 "Fuzzy Readout": reading a tour out of intermediate activations, with each time stop mapped to an activation-weighted centroid of the cities.

50 Neural Activations / Fuzzy Tour: Initial Phase

52 Neural Activations / Fuzzy Tour: Monotonic Phase

53 Neural Activations / Fuzzy Tour: Nearest-City Phase

54 Fuzzy Tour Lengths (plot of tour length vs. iteration)

55 Average results for n = 10 to n = 70 cities (50 random runs per n), plotted against # cities.

56 DEMO 2: Applet by Darrell Long, http://hawk.cs.csuci.edu/william.wolfe/TSP001/TSP1.html

57 Conclusions
– Neurons stimulate intriguing computational models.
– The models are complex, nonlinear, and difficult to analyze.
– The interaction of many simple processing units is difficult to visualize.
– The neural model for the TSP mimics some of the properties of the nearest-city heuristic.
– Much work remains to be done to understand these models.

58 Three-Neuron Example

59 Brain State: the activation vector a = (a_1, a_2, a_3), a point in the cube.

60 "Thinking": the trajectory of the brain state over time.

61 Binary Model: a_j = 0 or 1; neurons are either "on" or "off".

62 Binary Stability: a_j = 1 and net_j >= 0, or a_j = 0 and net_j <= 0.
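This condition translates directly into a small check (my own helper, not from the slides):

```python
import numpy as np

def is_binary_stable(W, e, a):
    """Slide 62's condition: every 'on' neuron has net >= 0 and every 'off'
    neuron has net <= 0."""
    net = e + W @ a
    return bool(np.all(((a == 1) & (net >= 0)) | ((a == 0) & (net <= 0))))
```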

63 Hypercubes

66 4-Cube

70 5-Cube

74 Hypercube Computer Game: http://www1.tip.nl/~t515027/hypercube.html

75 2-Cube Adjacency Matrix: Hypercube Graph. (Vertices are labeled with binary strings; two vertices are adjacent when their labels differ in exactly one bit.)

76 Recursive Definition: Q_0 = [0], and Q_n consists of two copies of Q_{n-1} joined by an identity block: Q_n = [[Q_{n-1}, I], [I, Q_{n-1}]].
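In code, the recursion might look like this (a sketch under the convention Q_0 = [0]):

```python
import numpy as np

def Q(n):
    """Adjacency matrix of the n-cube: two copies of Q(n-1) joined by identity."""
    if n == 0:
        return np.zeros((1, 1))
    q, I = Q(n - 1), np.eye(2 ** (n - 1))
    return np.block([[q, I], [I, q]])

# Sanity check: the n-cube is n-regular (every vertex has n neighbors).
assert np.all(Q(4).sum(axis=1) == 4)
```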

77 Eigenvectors of the Adjacency Matrix. Theorem 1: If v is an eigenvector of Q_{n-1} with eigenvalue x, then the concatenated vectors [v, v] and [v, -v] are eigenvectors of Q_n with eigenvalues x + 1 and x - 1, respectively.

78 Proof: Using the recursive definition, Q_n [v, v] = [Q_{n-1} v + v, v + Q_{n-1} v] = (x + 1) [v, v], and Q_n [v, -v] = [Q_{n-1} v - v, v - Q_{n-1} v] = (x - 1) [v, -v].

79 Generating Eigenvectors and Eigenvalues: start from the 0-cube eigenvector [1] with eigenvalue 0, and at each of the n steps choose either [v, v] or [v, -v].

80 Walsh Functions for n=1, 2, 3
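A sketch of this generation process (my own code): each Walsh state is determined by n sign choices, and its eigenvalue follows from Theorem 1:

```python
import numpy as np
from itertools import product

def walsh_states(n):
    """Yield (eigenvector, eigenvalue) pairs for the n-cube adjacency matrix,
    built per slide 79: extend by [v, v] (+1) or [v, -v] (-1) at each step."""
    for signs in product([+1, -1], repeat=n):
        v, lam = np.array([1.0]), 0
        for c in signs:
            v = np.concatenate([v, c * v])   # [v, v] or [v, -v]
            lam += c                         # eigenvalue shifts by +1 or -1
        yield v, lam

# Check against the n-cube adjacency matrix (vertices adjacent iff their
# binary labels differ in exactly one bit):
n, N = 3, 8
Q = np.array([[1.0 if bin(i ^ j).count("1") == 1 else 0.0
               for j in range(N)] for i in range(N)])
for v, lam in walsh_states(n):
    assert np.allclose(Q @ v, lam * v)
```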

83 (Table: eigenvector components indexed by the vertex binary labels 000, 001, 010, 011, 100, 101, 110, 111.)

84 n=3

85 Theorem 3: Let k be the number of +1 choices in the recursive construction of the eigenvectors of the n-cube. Then, for k ≠ n, each Walsh state has 2^{n-k-1} non-adjacent subcubes of dimension k labeled +1 on their vertices, and 2^{n-k-1} non-adjacent subcubes of dimension k labeled -1 on their vertices. If k = n, then all the vertices are labeled +1. (Here, "non-adjacent" means the subcubes share no edges or vertices and there are no edges between them.)

86 (Figures: n = 5, k = 3 and n = 5, k = 2.)

87 Inhibitory Hypercube

88 Theorem 5: Each Walsh state with positive, zero, or negative eigenvalue is an unstable, weakly stable, or strongly stable state of the inhibitory hypercube network, respectively.
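A numerical check of Theorem 5 (my own code, under one reading of the model: W = -Q_n, no external input, activations in the [-1, 1] hypercube, so a corner v is pinned in component j when net_j * v_j > 0):

```python
import numpy as np

n = 3
N = 2 ** n
# n-cube adjacency: vertices adjacent iff binary labels differ in one bit.
Q = np.array([[1.0 if bin(i ^ j).count("1") == 1 else 0.0
               for j in range(N)] for i in range(N)])

for s in range(N):
    # Walsh state indexed by s, and its adjacency eigenvalue:
    v = np.array([(-1.0) ** bin(s & i).count("1") for i in range(N)])
    lam = n - 2 * bin(s).count("1")
    margins = (-Q @ v) * v               # net_j * v_j with inhibitory W = -Q
    assert np.allclose(margins, -lam)    # every margin equals -lam
    # lam < 0 -> all margins positive (strongly stable corner)
    # lam = 0 -> all margins zero     (weakly stable)
    # lam > 0 -> all margins negative (unstable)
```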

