Neural Networks for Optimization William J. Wolfe California State University Channel Islands.


1 Neural Networks for Optimization William J. Wolfe California State University Channel Islands

2 Top 10 Best Jobs
MONEY Magazine and Salary.com researched hundreds of jobs, considering their growth, pay, stress levels, and other factors. These careers ranked highest:
1. Software Engineer
2. College Professor
3. Financial Adviser
4. Human Resources Manager
5. Physician Assistant
6. Market Research Analyst
7. Computer IT Analyst
8. Real Estate Appraiser
9. Pharmacist
10. Psychologist
http://money.cnn.com/magazines/moneymag/bestjobs/
By Tara Kalwarski, Daphne Mosher, Janet Paskin and Donna Rosato

3 Neural Models
Simple processing units, and lots of them
Highly interconnected
Variety of connection architectures/strengths
Exchange excitatory and inhibitory signals
Learning: changes in connection strengths
Knowledge: connection strengths/architecture
No central processor: distributed processing

4 Simple Neural Model
a_i  activation
e_i  external input
w_ij connection strength
Assume w_ij = w_ji (a “symmetric” network), so W = (w_ij) is a symmetric matrix.

5 Net Input
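The net-input equation on this slide appears to have been an image that did not survive extraction. The standard net input for this kind of model, consistent with the energy expression on slide 23, is:

```latex
\mathrm{net}_i \;=\; e_i \;+\; \sum_j w_{ij}\, a_j
```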

6 Dynamics Basic idea:
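The equation for slide 6 was likewise lost in extraction. On the standard Hopfield-style reading, and matching slide 9's da/dt = net, the basic idea is:

```latex
\frac{da_i}{dt} \;=\; \mathrm{net}_i \;=\; e_i + \sum_j w_{ij}\, a_j
```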

7 Energy
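The energy expression itself did not survive extraction. The usual energy for a symmetric network of this type, whose negative gradient equals the net input (as slide 9 states), is:

```latex
E \;=\; -\tfrac{1}{2}\sum_i\sum_j w_{ij}\, a_i a_j \;-\; \sum_i e_i a_i,
\qquad
-\frac{\partial E}{\partial a_i} \;=\; \sum_j w_{ij}\, a_j + e_i \;=\; \mathrm{net}_i .
```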

8

9 Lower Energy
da/dt = net = −grad(E) ⇒ the dynamics seek lower energy

10 Problem: Divergence

11 A Fix: Saturation

12 Keeps the activation vector inside the hypercube boundaries
Encourages convergence to corners
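A minimal sketch of the saturated dynamics in Python with NumPy (clipping to the unit hypercube is one common way to model saturation, not necessarily the exact form used in the talk):

```python
import numpy as np

def step(a, W, e, dt=0.1):
    """One Euler step of da/dt = net = W a + e, with the activation
    vector clipped to the hypercube [0, 1]^n to model saturation."""
    return np.clip(a + dt * (W @ a + e), 0.0, 1.0)

# two mutually inhibitory units with equal external input:
W = np.array([[0.0, -1.0],
              [-1.0, 0.0]])
e = np.array([0.5, 0.5])
a = np.array([0.6, 0.4])     # unit 0 starts slightly ahead
for _ in range(100):
    a = step(a, W, e)
print(a)   # converges to the corner [1, 0]: the leading unit wins
```

The small initial advantage of unit 0 is amplified by the mutual inhibition until the state saturates at a corner of the hypercube.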

13 Summary: The Neural Model
a_i  activation
e_i  external input
w_ij connection strength
W = (w_ij), with w_ij = w_ji (symmetric)

14 Example: Inhibitory Networks
Completely inhibitory: w_ij = −1 for all i, j (k-winner networks)
Inhibitory grid: neighborhood inhibition
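The completely inhibitory k-winner case can be sketched as follows (a sketch, not the talk's code: it takes w_ii = 0, clips activations to [0, 1], and uses the external input e = k − 1/2 from slide 36):

```python
import numpy as np

def k_winner(a, k, steps=500, dt=0.05):
    """Completely inhibitory network: w_ij = -1 for i != j, w_ii = 0,
    with external input e = k - 1/2 and activations clipped to [0, 1].
    The k units with the largest initial activations win."""
    n = len(a)
    W = np.eye(n) - np.ones((n, n))     # -1 off-diagonal, 0 on diagonal
    e = k - 0.5
    for _ in range(steps):
        a = np.clip(a + dt * (W @ a + e), 0.0, 1.0)
    return a

print(k_winner(np.array([0.9, 0.2, 0.7, 0.4, 0.1]), k=2))
# the two largest initial activations (indices 0 and 2) go to 1
```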

15 Traveling Salesman Problem
Classic combinatorial optimization problem
Find the shortest “tour” through n cities
n!/(2n) = (n−1)!/2 distinct tours (n! orderings, divided by n rotations of the starting city and 2 travel directions)
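The tour count n!/(2n) can be checked directly:

```python
from math import factorial

def distinct_tours(n):
    """n! orderings of n cities, divided by n rotations of the starting
    city and 2 travel directions: n!/(2n) = (n-1)!/2 distinct tours."""
    return factorial(n) // (2 * n)

print(distinct_tours(5))   # 12
print(distinct_tours(10))  # 181440
```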

16 Neural Network Approach

17 Tours – Permutation Matrices
Example tour: CDBA. Permutation matrices correspond to the “feasible” states.
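A sketch of the feasibility check and tour readout (the convention rows = time stops, columns = cities is an assumption here; the slide does not fix one):

```python
import numpy as np

def is_feasible(a):
    """Feasible states are permutation matrices: 0/1 entries with
    exactly one 1 per row (time stop) and per column (city)."""
    a = np.asarray(a)
    return bool(np.all((a == 0) | (a == 1))
                and np.all(a.sum(axis=0) == 1)
                and np.all(a.sum(axis=1) == 1))

# rows = time stops, columns = cities A..D; this encodes the tour CDBA
tour_CDBA = np.array([[0, 0, 1, 0],
                      [0, 0, 0, 1],
                      [0, 1, 0, 0],
                      [1, 0, 0, 0]])
print(is_feasible(tour_CDBA))    # True
cities = "ABCD"
print("".join(cities[j] for j in tour_CDBA.argmax(axis=1)))   # CDBA
```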

18 Not Allowed

19 Only one city per time stop
Only one time stop per city
⇒ Inhibitory rows and columns

20 Distance Connections: Inhibit the neighboring cities in proportion to their distances.

21 Putting it all together:
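Putting it all together can be sketched as a weight tensor assembled from the terms in the energy expansion on slide 23 (a sketch under that reading, not the talk's exact code):

```python
import numpy as np

def tsp_weights(d):
    """Sketch of the combined TSP weight tensor w[i, x, j, y]
    (i, j = time stops; x, y = cities), matching the energy terms
    on slide 23:
      -d(x, y) between adjacent time stops (distance inhibition),
      -1/n within each column (same city) and each row (same time stop),
      +1/n^2 uniformly."""
    n = len(d)
    w = np.full((n, n, n, n), 1.0 / n**2)
    for i in range(n):
        for x in range(n):
            for j in range(n):
                for y in range(n):
                    # adjacent time stops, wrapping around (mod n)
                    neighbors = int(j == (i + 1) % n) + int(j == (i - 1) % n)
                    w[i, x, j, y] -= d[x][y] * neighbors
                    if x == y:
                        w[i, x, j, y] -= 1.0 / n   # same city
                    if i == j:
                        w[i, x, j, y] -= 1.0 / n   # same time stop
    return w

# three cities on a line at positions 0, 1, 2:
d = np.array([[0, 1, 2],
              [1, 0, 1],
              [2, 1, 0]], dtype=float)
W = tsp_weights(d)
print(W.shape)                                   # (3, 3, 3, 3)
print(np.allclose(W, W.transpose(2, 3, 0, 1)))   # symmetric, as slide 4 assumes
```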

22 Feasible Solutions
Decompose the activation space: R^(n²) = F₀ ⊕ E_c ⊕ E_r ⊕ D
Projection: a_ix^proj = a_ix + avg(a) − avg(row x) − avg(col i)
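The projection formula can be sketched directly. As written it sends every row and column sum to zero, i.e. it projects onto F₀; adding the uniform state (1/n per entry) then yields row and column sums of 1:

```python
import numpy as np

def project(a):
    """The slide's projection: a_ix -> a_ix + avg(a) - rowavg - colavg.
    It is the identity on F0 (matrices whose rows and columns sum to 0)
    and annihilates the row space E_r, the column space E_c, and the
    uniform direction D."""
    return (a + a.mean()
            - a.mean(axis=1, keepdims=True)    # per-row averages
            - a.mean(axis=0, keepdims=True))   # per-column averages

a = np.random.rand(4, 4)
p = project(a)
print(np.allclose(p.sum(axis=1), 0))   # True: every row sums to 0
print(np.allclose(p.sum(axis=0), 0))   # True: every column sums to 0
print(np.allclose(project(p), p))      # True: the projection is idempotent
```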

23 E = −1/2 ∑_i ∑_x ∑_j ∑_y a_ix a_jy w_ixjy
  = −1/2 { ∑_i ∑_x ∑_y (−d(x,y)) a_ix (a_{i+1,y} + a_{i−1,y})
         + ∑_i ∑_x ∑_j (−1/n) a_ix a_jx
         + ∑_i ∑_x ∑_y (−1/n) a_ix a_iy
         + ∑_i ∑_x ∑_j ∑_y (1/n²) a_ix a_jy }

24 Research Questions
Which architecture is best?
Does the network produce feasible solutions? High-quality solutions? Optimal solutions?
How do the initial activations affect network performance?
Is the network similar to “nearest city” or any other traditional heuristic?
How does the particular city configuration affect network performance?
Is there any way to understand the nonlinear dynamics?
References:
– K. A. Smith, “Neural Networks for Combinatorial Optimization: A Review of More Than a Decade of Research,” INFORMS Journal on Computing, Vol. 11, No. 1, Winter 1999.
– A. Gee, S. V. B. Aiyer, and R. Prager, “An Analytical Framework for Optimization Problems,” Neural Networks 6, 79–97, 1993.
– J. J. Hopfield and D. W. Tank, “Neural Computation of Decisions in Optimization Problems,” Biological Cybernetics 52, 141–152, 1985.

25 typical state of the network before convergence

26 “Fuzzy Readout”
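One plausible reading of a “fuzzy readout” (an interpretation, not necessarily the talk's exact rule): before convergence the state is not a permutation matrix, so read off a tour greedily, taking at each time stop the most active city not yet used:

```python
import numpy as np

def fuzzy_readout(a):
    """Greedy readout of a fuzzy (non-binary) activation matrix:
    for each time stop (row), pick the city (column) with the largest
    activation among the cities not already placed in the tour."""
    tour, used = [], set()
    for row in a:
        for city in np.argsort(row)[::-1]:   # columns, most active first
            if int(city) not in used:
                tour.append(int(city))
                used.add(int(city))
                break
    return tour

a = np.array([[0.1, 0.7, 0.3],
              [0.2, 0.6, 0.5],   # city 1 is taken, so city 2 is chosen
              [0.4, 0.2, 0.3]])
print(fuzzy_readout(a))   # [1, 2, 0]
```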

27 DEMO 1

28 Fuzzy Tour Lengths [figure: fuzzy tour length vs. iteration]

29

30 DEMO 2 Applet by Darrell Long http://hawk.cs.csuci.edu/william.wolfe/TSP001/TSP1.html

31 EXTRA SLIDES

32 Brain
Approximately 10^10 neurons
Neurons are relatively simple
Approximately 10^4 fan-out
No central processor
Neurons communicate via excitatory and inhibitory signals
Learning is associated with modifications of connection strengths between neurons

33

34

35 with external input e = 1/2

36 Perfect k-winner performance: e = k − 1/2
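Why e = k − 1/2 gives perfect k-winner performance: at a corner state with exactly k active units, using off-diagonal weights of −1, each winner receives positive net input and each loser negative, so the state is stable:

```latex
\mathrm{net}_{\text{winner}} = e - (k-1) = \tfrac{1}{2} > 0,
\qquad
\mathrm{net}_{\text{loser}} = e - k = -\tfrac{1}{2} < 0 .
```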

37

