2 Simultaneous Recurrent Neural Networks for Static Optimization Problems By: Amol Patwardhan Adviser: Dr. Gursel Serpen August, 1999 The University of Toledo

3 Driving Force for the Research
Drawbacks of conventional computing systems:
- Perform poorly on complex problems
- Lack the required computational power
- Do not utilize the inherent parallelism of problems
Advantages of Artificial Neural Networks:
- Perform well even on complex problems
- Very fast computational cycles if implemented in hardware
- Can take advantage of the inherent parallelism of problems

4 Earlier Efforts to Solve Optimization Problems
Many ANN algorithms with feedforward and recurrent architectures have been used to solve unconstrained and combinatorial optimization problems. The Hopfield network and its derivatives, including the Boltzmann machine and mean-field annealing (MFA), appear to be the most prominent and most extensively applied ANN algorithms for these static optimization problems. However, the Hopfield network and its derivatives do not scale well as the size of the optimization problem increases.

5 Statement of Thesis
Can we use the Simultaneous Recurrent Neural Network, a trainable recurrent ANN, to address the scaling problem that Artificial Neural Network algorithms currently experience on static optimization problems?

6 Research Approach
- A neural network simulator is developed for simulation of the Simultaneous Recurrent Neural Network
- An extensive simulation study is conducted on two well-known static optimization problems:
  - Traveling Salesman Problem
  - Graph Path Search Problem
- Simulation results are analyzed

7 Significance of Research
- A powerful and efficient optimization tool
- The optimizer can solve complex, real-life-sized static optimization problems
- Would require only a fraction of the time if implemented in hardware
- Applications in many fields, such as:
  - Routing in computer networks
  - VLSI circuit design
  - Planning in operational and logistic systems
  - Power distribution systems
  - Wireless and satellite communication systems

8 Hopfield Network and Static Optimization Problems
- Among the most widely used ANN algorithms
- Offers a computationally simple approach for a class of optimization problems
- The HN dynamics minimize a quadratic Lyapunov function
- Employed as a fixed-point attractor
- Performance depends greatly on the constraint weight parameters
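
For reference, a standard form of the quadratic Lyapunov (energy) function minimized by the Hopfield dynamics is sketched below; the notation (connection weights w_ij, node outputs v_i, external biases I_i) is assumed here rather than taken from the slides. The constraint weight parameters of a mapped optimization problem enter through the w_ij and I_i.

```latex
E(\mathbf{v}) \;=\; -\tfrac{1}{2}\sum_{i}\sum_{j} w_{ij}\, v_i\, v_j \;-\; \sum_{i} I_i\, v_i ,
\qquad w_{ij} = w_{ji},\; w_{ii} = 0 .
```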

9 Shortcomings of the Hopfield Network
- Constraint weight parameters are set empirically
- All weights and connections must be specified in advance
- Suitable weights are difficult to guess for large-scale problems
- No mechanism to incorporate the experience gained from prior relaxations
- Solution quality is poor for the large-scale TSP
- Does not scale well as the problem size increases
- An acceptable solution for the Graph Path Search Problem cannot be found

10 Why the Simultaneous Recurrent Neural Network?
- The Hopfield Network does not employ any learning that can benefit from prior relaxations
- A relaxation-based neural search algorithm that can learn from its own experience is needed
- The Simultaneous Recurrent Neural Network:
  - is a recurrent algorithm
  - has relaxation search capability
  - has the ability to learn

11 Simultaneous Recurrent Neural Network
[Block diagram: external inputs X and fed-back outputs Y enter a feedforward mapping parameterized by weights W; the outputs Y are returned to the inputs along a feedback path]
The Simultaneous Recurrent Neural Network is a feedforward network with simultaneous feedback from the outputs of the network to its inputs, without any time delay.
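
In symbols (notation assumed here, not taken from the slide), the SRN iterates the feedforward mapping with the previous outputs fed back, and a relaxed state is a fixed point of that mapping:

```latex
y^{(k+1)} \;=\; f\!\left(x,\; y^{(k)};\, W\right),
\qquad
y^{*} \;=\; f\!\left(x,\; y^{*};\, W\right).
```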

12 Simultaneous Recurrent Neural Network
- Follows a trajectory in the state space to relax to a fixed point
- The network is provided with the external inputs, and the initial outputs are typically chosen at random
- The output of the previous iteration is fed back to the network, along with the external inputs, to compute the output of the next iteration
- The network iterates until it reaches a stable equilibrium point
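
A minimal sketch of this relaxation loop, assuming the feedforward mapping is available as a callable; the function and parameter names are illustrative and are not taken from the thesis simulator (which was written in C).

```python
import numpy as np

def relax_srn(forward, x, n_outputs, max_iters=1000, tol=1e-4, rng=None):
    """Relax an SRN to a fixed point by feeding its outputs back as inputs.

    `forward(x, y)` should compute one feedforward pass of the network for
    external inputs `x` and fed-back outputs `y`.
    """
    rng = np.random.default_rng() if rng is None else rng
    y = rng.random(n_outputs)              # initial outputs chosen at random, as on the slide
    for _ in range(max_iters):
        y_next = forward(x, y)             # feedforward pass with simultaneous feedback
        if np.max(np.abs(y_next - y)) < tol:
            return y_next                  # stable equilibrium (fixed point) reached
        y = y_next
    return y                               # no fixed point within the iteration budget
```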

13 Training of the SRN
Methods available in the literature for training the SRN:
- Backpropagation Through Time (BTT), which requires knowledge of the desired outputs throughout the trajectory
- Error Critic (EC), which has no guarantee of yielding exact results at equilibrium
- Truncation, which did not provide satisfactory results and needs to be tested further
Recurrent Backpropagation requires knowledge of the desired outputs only at the end of the trajectory, and was therefore chosen to train the SRN.
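
For reference, a standard statement of Recurrent Backpropagation (in the Almeida/Pineda sense) is sketched below; the notation is assumed. The error E is defined only at the relaxed fixed point y*; an adjoint system is relaxed to its own fixed point z*, and the weight gradient is read off from z*:

```latex
y^{*} = f\!\left(x,\, y^{*};\, W\right),
\qquad
z^{(k+1)} \;=\; \left.\frac{\partial f}{\partial y}\right|_{y^{*}}^{\!\top} z^{(k)} \;+\; \frac{\partial E}{\partial y^{*}},
\qquad
\frac{\partial E}{\partial W} \;=\; \left.\frac{\partial f}{\partial W}\right|_{y^{*}}^{\!\top} z^{*}.
```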

14 Traveling Salesman Problem

15 Network Topology for the Traveling Salesman Problem
[Diagram: the cost matrix feeds an input layer of N × N nodes, followed by one or more hidden layers of N × N nodes and an output layer of N × N nodes; the output array encodes the path specification]

16 Error Computation for the TSP
Constraints used for training on the TSP (a penalty-style sketch follows below):
- Asymmetry of the path traveled
- Column inhibition
- Row inhibition
- Cost of the path traveled
- Values of the solution matrix
[Diagram: the N × N output matrix, with rows indexed by source cities and columns by destination cities]
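
The exact penalty forms and weights used in the thesis are not given on the slide, so the sketch below only illustrates how constraints of this kind are commonly combined into a single error term; all names (g_r, g_c, and so on) and term shapes are assumptions.

```python
import numpy as np

def tsp_error(Y, C, g_r=1.0, g_c=1.0, g_a=1.0, g_d=1.0, g_v=1.0):
    """Illustrative penalty-style error for an N x N TSP output matrix Y.

    Y[i, j] close to 1 means the tour goes from city i to city j; C is the
    cost matrix. Each term below is an assumed textbook-style penalty, not
    the exact formulation from the thesis.
    """
    row_inhibition = np.sum((Y.sum(axis=1) - 1.0) ** 2)   # one departure per city
    col_inhibition = np.sum((Y.sum(axis=0) - 1.0) ** 2)   # one arrival per city
    asymmetry      = np.sum(Y * Y.T)                      # discourage using i->j and j->i together
    path_cost      = np.sum(C * Y)                        # total cost of the selected edges
    binary_drive   = np.sum(Y * (1.0 - Y))                # push node values toward 0 or 1
    return (g_r * row_inhibition + g_c * col_inhibition +
            g_a * asymmetry + g_d * path_cost + g_v * binary_drive)
```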

17 Graph Path Search Problem
[Diagram: an example graph with a marked source vertex and a marked destination vertex]

18 Network Topology for the Graph Path Search Problem
[Diagram: the cost matrix and the adjacency matrix provide the inputs (N × N nodes each); one or more hidden layers of N × N nodes lead to an output layer of N × N nodes; the output array encodes the path specification]

19 Error Computation for the GPSP
Constraints used for training on the GPSP:
- Asymmetry of the sub-graph
- Column inhibition
- Row inhibition
- Source and target vertex inhibition
- Column/row excitation
- Row/column excitation
- Cost of the solution path
- Number of vertices in the path
[Diagram: the N × N output matrix, with rows indexed by source vertices and columns by destination vertices]

20 Simulation: Software Environment
- Language: C, MATLAB 5.2
- GUI: X Window System (X11)
- Plotting of graphs: C program calling MATLAB plotting functions
Hardware Environment
- Sun Ultra machine, 300 MHz, running SunOS 5.7
- Physical memory (RAM): 1280 MB
- Virtual memory (swap): 1590 MB

21 Simulation: GUI for the Simulator

22 Simulation: Initialization
- Randomly initialize the weights and initial outputs (range: 0.0 to 1.0)
- Randomly initialize the cost matrix for the TSP (range: 0.0 to 1.0)
- Randomly initialize the adjacency matrix (0.0 or 1.0) according to the connectivity-level parameter for the GPSP
- For the TSP, values along the diagonal of the cost matrix are clamped to 1.0 to avoid self-looping
- For the GPSP, values along the diagonals of the adjacency matrix and the cost matrix are clamped to 1.0 to avoid self-looping
- Values of the constraint weight parameters are set depending on the size of the problem
A sketch of this initialization is given below.
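
A minimal sketch of the TSP-side initialization, assuming NumPy and illustrative names; the value ranges and the diagonal clamping follow the slide.

```python
import numpy as np

def init_tsp_instance(n, rng=None):
    """Illustrative initialization for a TSP instance of n cities.

    Costs and initial outputs are drawn uniformly in [0, 1]; the diagonal of
    the cost matrix is clamped to 1.0 to discourage self-loops, as on the slide.
    """
    rng = np.random.default_rng() if rng is None else rng
    cost = rng.random((n, n))
    np.fill_diagonal(cost, 1.0)          # clamp diagonal: no city-to-itself edges
    outputs = rng.random((n, n))         # random initial output matrix
    return cost, outputs
```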

23 Simulation: Initialization for the TSP
[Table: initial values of the constraint weight parameters for the TSP and their increments per 30 relaxations]

24 Simulation: Training
[Plot: error function vs. simulation time for the TSP]

25 Simulation: Results
- The convergence criterion of the network is checked after every 100 relaxations
- Criterion: 95% of the active nodes have a value greater than 0.9 (a sketch of this check follows below)
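
The slide does not define which nodes count as "active", so the check below assumes one active node per row of the output matrix (the row maximum); both that reading and the names are assumptions.

```python
import numpy as np

def tsp_converged(Y, value=0.9, fraction=0.95):
    """Convergence test: at least `fraction` of the active nodes exceed `value`.

    The active node of each row of the N x N output matrix is taken to be the
    row maximum (an assumed reading of the slide's criterion).
    """
    active = Y.max(axis=1)
    return np.mean(active > value) >= fraction
```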

26 Simulation: Results
[Plot: normalized distance between cities after the network converges to an acceptable solution vs. problem size]

27 Simulation: Results
[Plot: number of relaxations required for a solution, and values of the constraint weight parameters g_c and g_r after 300 relaxations, vs. problem size]

28 Simulation: Initialization for the GPSP
[Table: initial values of the constraint weight parameters for the GPSP and their increments per 30 relaxations]

29 Simulation: Results for the GPSP
- The convergence criterion of the network is checked after every 100 relaxations
- Criterion: the active nodes have a value greater than 0.8

30 Simulation: Results for the GPSP
[Plot: number of relaxations required for a solution, and values of the constraint weight parameters g_i after 300 relaxations, vs. problem size]

31 Conclusions
- The SRN with RBP was able to find "good quality" solutions, in the range of 0.25 to 0.35, for the large-scale (40- to 500-city) Traveling Salesman Problem
- Solutions were obtained with acceptable computational effort
- The normalized distance between cities remained nearly constant as the problem size was varied from 40 to 500 cities
- The simulator developed does not require the weights to be predetermined before simulation, as is the case with the HN and its derivatives
- The initial and incremental values of the constraint weight parameters play a very important role in the training of the network

32 Conclusions (continued)
- Computational effort and memory requirements increased in proportion to the square of the problem size
- The SRN with RBP was able to find solutions for the large-scale Graph Path Search Problem in the range of 40 to 500 vertices
- The solutions were obtained with acceptable computational effort and time
- The computational effort required for the GPSP is 1.1 to 1.2 times that required for the TSP
- The number of relaxations required increased with the problem size
- The GPSP was very sensitive to the constraint weight parameters

33 Conclusions (continued)
Thus, the Simultaneous Recurrent Neural Network with the Recurrent Backpropagation training algorithm scaled well, within acceptable computation-effort bounds, to large-scale static optimization problems such as the Traveling Salesman Problem and the Graph Path Search Problem.

34 Recommendations for Future Study
- The feasibility of a hardware implementation of the network and algorithm for the TSP should be investigated
- More simulations should be run for the GPSP to determine the effect of changing each constraint weight parameter on the solution
- The effect of incorporating a stochastic or probabilistic component into the learning or the network dynamics could be studied to find a better approach
- A simulation study of the weighted GPSP should be carried out for more practical use

35 Questions?

