Hypercubes and Neural Networks
Bill Wolfe, 10/23/2005

Modeling

Simple Neural Model
a_i: activation
e_i: external input
w_ij: connection strength
Assume w_ij = w_ji (a "symmetric" network), so W = (w_ij) is a symmetric matrix.

Net Input
net_i = sum_j w_ij a_j + e_i
Vector format: net = W a + e

Dynamics
Basic idea: each activation follows its net input, da_i/dt = net_i.

Energy
E = -1/2 sum_ij w_ij a_i a_j - sum_i e_i a_i = -1/2 a^T W a - e^T a

Lower Energy
da/dt = net = -grad(E), so the dynamics seek lower energy: along trajectories, dE/dt = -|net|^2 <= 0.
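A minimal numerical sketch of these dynamics (the 3-unit weights and inputs below are illustrative, and the quadratic energy E = -1/2 a^T W a - e^T a matches the gradient relation on this slide):

```python
import numpy as np

# Illustrative symmetric weights and external inputs for a 3-unit network.
W = np.array([[ 0.0, -1.0, -1.0],
              [-1.0,  0.0, -1.0],
              [-1.0, -1.0,  0.0]])
e = np.array([0.2, 0.5, 0.1])

def energy(a):
    return -0.5 * a @ W @ a - e @ a   # E = -1/2 a^T W a - e^T a

a = np.random.rand(3)
dt = 0.01
for step in range(500):
    a = a + dt * (W @ a + e)          # Euler step for da/dt = net = -grad(E)
    if step % 100 == 0:
        print(step, energy(a))        # energy decreases along the trajectory
# Note: with no bounds on a, the activations themselves can diverge (see the next slides).
```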

Problem: Divergence

A Fix: Saturation

Keeps the activation vector inside the hypercube boundaries.
Encourages convergence to corners.
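A sketch of one simple saturation scheme, clipping the activations to the unit hypercube [0, 1]^n after each update (the slides may use a different saturating nonlinearity; this is just one way to realize the idea):

```python
import numpy as np

def saturated_step(a, W, e, dt=0.01):
    # Follow the net input, then clip each activation back into [0, 1].
    net = W @ a + e
    return np.clip(a + dt * net, 0.0, 1.0)
```

Iterating saturated_step from any starting point keeps the state inside the hypercube and typically drives it toward a corner.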

Summary: The Neural Model
a_i: activation
e_i: external input
w_ij: connection strength
W = (w_ij) is symmetric (w_ij = w_ji)

Example: Inhibitory Networks
Completely inhibitory: w_ij = -1 for all i, j ("k-winner" network)
Inhibitory grid: neighborhood inhibition
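A sketch of the completely inhibitory ("k-winner") case. It assumes the common choice of a uniform external input near k - 1/2, so that exactly k units can be on at a stable corner; the sizes and parameter values are illustrative, not taken from the slides:

```python
import numpy as np

n, k = 8, 3
W = -(np.ones((n, n)) - np.eye(n))        # w_ij = -1 for i != j, no self-connection
e = (k - 0.5) + 0.1 * np.random.rand(n)   # small perturbations break ties

a = np.random.rand(n)
for _ in range(10000):
    a = np.clip(a + 0.01 * (W @ a + e), 0.0, 1.0)   # saturated dynamics

print(np.round(a))   # typically settles at a corner with about k units on
```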

Traveling Salesman Problem
Classic combinatorial optimization problem: find the shortest "tour" through n cities.
There are n!/(2n) = (n-1)!/2 distinct tours (for n = 10 that is already 181,440).

TSP solution for 15,000 cities in Germany

TSP 50 City Example

Random

Nearest-City
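A sketch of the nearest-city construction heuristic, assuming the cities are given as 2-D coordinates (the function and variable names here are hypothetical, not from the slides):

```python
import numpy as np

def nearest_city_tour(cities, start=0):
    # Greedily visit the closest unvisited city until all cities are used.
    unvisited = set(range(len(cities))) - {start}
    tour = [start]
    while unvisited:
        last = cities[tour[-1]]
        nxt = min(unvisited, key=lambda j: np.linalg.norm(cities[j] - last))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

cities = np.random.rand(50, 2)   # a random 50-city instance, as in the example
tour = nearest_city_tour(cities)
```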

2-OPT
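A simple, unoptimized 2-opt improvement sketch: it keeps reversing tour segments as long as a reversal shortens the tour. It can be applied to the cities array and tour produced by the nearest-city sketch above.

```python
import numpy as np

def tour_length(cities, tour):
    return sum(np.linalg.norm(cities[tour[i]] - cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(cities, tour):
    # Accept any segment reversal that strictly shortens the tour; repeat until none helps.
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                candidate = tour[:i] + tour[i:j][::-1] + tour[j:]
                if tour_length(cities, candidate) < tour_length(cities, tour):
                    tour, improved = candidate, True
    return tour
```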

"An Effective Heuristic Algorithm for the Traveling-Salesman Problem", S. Lin and B. W. Kernighan, Operations Research, 1973.

Centroid

Monotonic

Neural Network Approach: a grid of neurons, one neuron for each (city, time-stop) pair.

Tours as Permutation Matrices
Example tour: CDBA. Permutation matrices correspond to the "feasible" states.

Not Allowed

Only one city per time stop; only one time stop per city. Enforced by inhibitory rows and columns.

Distance Connections: Inhibit the neighboring cities in proportion to their distances.

Putting it all together:
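A sketch of how the pieces might be assembled into one weight matrix, with a neuron for each (city, time-stop) pair: constant inhibition within each row and column, plus distance-weighted inhibition between neighboring time stops. The penalty constants A and D and the indexing scheme are illustrative assumptions, not values from the slides.

```python
import numpy as np

def tsp_weights(dist, A=1.0, D=1.0):
    # dist: symmetric n x n matrix of city-to-city distances.
    # Neuron (i, t) = city i at time stop t, flattened to index i * n + t.
    n = len(dist)
    W = np.zeros((n * n, n * n))
    for i in range(n):
        for t in range(n):
            for j in range(n):
                for s in range(n):
                    if (i, t) == (j, s):
                        continue
                    w = 0.0
                    if i == j:                      # same city, two time stops
                        w -= A
                    if t == s:                      # same time stop, two cities
                        w -= A
                    if s in ((t - 1) % n, (t + 1) % n):
                        w -= D * dist[i, j]         # neighboring time stops: distance inhibition
                    W[i * n + t, j * n + s] = w
    return W
```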

Research Questions
Which architecture is best?
Does the network produce feasible solutions? High-quality solutions? Optimal solutions?
How do the initial activations affect network performance?
Is the network similar to "nearest city" or any other traditional heuristic?
How does the particular city configuration affect network performance?
Is there a better way to understand the nonlinear dynamics?

Typical state of the network before convergence

“Fuzzy Readout”

Neural Activations / Fuzzy Tour: Initial Phase

Neural Activations / Fuzzy Tour: Monotonic Phase

Neural Activations / Fuzzy Tour: Nearest-City Phase

Fuzzy Tour Lengths (figure: tour length vs. iteration)

Average Results for n = 10 to n = 70 cities (50 random runs per n; figure plotted against the number of cities)

DEMO 2: Applet by Darrell Long

Conclusions
Neurons stimulate intriguing computational models.
The models are complex, nonlinear, and difficult to analyze.
The interaction of many simple processing units is difficult to visualize.
The neural model for the TSP mimics some of the properties of the nearest-city heuristic.
Much work remains to be done to understand these models.

3-Neuron Example

Brain State:

“Thinking”

Binary Model
a_j = 0 or 1: neurons are either "on" or "off".

Binary Stability
A binary state is stable when, for every unit j, either a_j = 1 and Net_j >= 0, or a_j = 0 and Net_j <= 0.
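A small sketch of this stability test for a binary state vector (W and e as in the earlier model sketches):

```python
import numpy as np

def is_binary_stable(a, W, e):
    # a is a 0/1 vector; the state is stable when every unit agrees with its net input.
    net = W @ a + e
    return bool(np.all(((a == 1) & (net >= 0)) | ((a == 0) & (net <= 0))))
```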

Hypercubes

4-Cube

5-Cube

Hypercube Computer Game

2-Cube
Adjacency Matrix: Hypercube Graph
Vertices 00, 01, 10, 11, with an edge between vertices that differ in exactly one bit:
Q_2 =
[0 1 1 0]
[1 0 0 1]
[1 0 0 1]
[0 1 1 0]

Recursive Definition: Q_n consists of two copies of Q_{n-1} joined by a perfect matching, so its adjacency matrix is the block matrix [[Q_{n-1}, I], [I, Q_{n-1}]], with Q_0 = [0].
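A sketch of this recursive construction of the n-cube adjacency matrix (two copies of Q_{n-1} joined by identity blocks):

```python
import numpy as np

def hypercube_adjacency(n):
    # Q_0 is a single vertex with no edges; each step doubles the cube.
    Q = np.zeros((1, 1))
    for _ in range(n):
        I = np.eye(len(Q))
        Q = np.block([[Q, I], [I, Q]])
    return Q

print(hypercube_adjacency(2))   # adjacency matrix of the square (2-cube)
```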

Eigenvectors of the Adjacency Matrix
Theorem 1: If v is an eigenvector of Q_{n-1} with eigenvalue x, then the concatenated vectors [v, v] and [v, -v] are eigenvectors of Q_n with eigenvalues x+1 and x-1, respectively.

Proof: using the recursive block form, Q_n [v, v] = [Q_{n-1} v + v, v + Q_{n-1} v] = (x + 1)[v, v], and Q_n [v, -v] = [Q_{n-1} v - v, v - Q_{n-1} v] = (x - 1)[v, -v].
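A quick numerical check of Theorem 1 (the hypercube_adjacency helper from the previous sketch is re-stated here so the snippet runs on its own):

```python
import numpy as np

def hypercube_adjacency(n):
    Q = np.zeros((1, 1))
    for _ in range(n):
        I = np.eye(len(Q))
        Q = np.block([[Q, I], [I, Q]])
    return Q

n = 4
Qprev, Qn = hypercube_adjacency(n - 1), hypercube_adjacency(n)
vals, vecs = np.linalg.eigh(Qprev)
x, v = vals[0], vecs[:, 0]                       # one eigenpair of Q_{n-1}
assert np.allclose(Qn @ np.concatenate([v, v]), (x + 1) * np.concatenate([v, v]))
assert np.allclose(Qn @ np.concatenate([v, -v]), (x - 1) * np.concatenate([v, -v]))
```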

Generating Eigenvectors and Eigenvalues

Walsh Functions for n=1, 2, 3
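A sketch that generates all 2^n Walsh-type eigenvectors of Q_n and their eigenvalues by the recursive construction of Theorem 1 (each step appends [v, v] with eigenvalue x + 1 and [v, -v] with eigenvalue x - 1):

```python
import numpy as np

def walsh_eigensystem(n):
    pairs = [(np.array([1.0]), 0.0)]            # Q_0: eigenvector [1], eigenvalue 0
    for _ in range(n):
        pairs = [p for v, x in pairs
                 for p in ((np.concatenate([v, v]), x + 1),
                           (np.concatenate([v, -v]), x - 1))]
    return pairs                                 # 2^n (eigenvector, eigenvalue) pairs

for v, x in walsh_eigensystem(3):
    print(x, v.astype(int))                      # the Walsh functions for n = 3
```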

Figure: each eigenvector paired with its binary-number label.

n=3

Theorem 3: Let k be the number of +1 choices in the recursive construction of the eigenvectors of the n-cube. Then for k not equal to n, each Walsh state has 2^(n-k-1) non-adjacent subcubes of dimension k that are labeled +1 on their vertices, and 2^(n-k-1) non-adjacent subcubes of dimension k that are labeled -1 on their vertices. If k = n then all the vertices are labeled +1. (Note: here "non-adjacent" means the subcubes share no edges or vertices and there are no edges between them.)

n = 5, k = 3 and n = 5, k = 2 (figures)

Inhibitory Hypercube: the hypercube graph viewed as a neural network, with an inhibitory connection (w_ij = -1) between each pair of adjacent vertices, i.e., W = -Q_n.

Theorem 5: Each Walsh state with positive, zero, or negative eigenvalue is an unstable, weakly stable, or strongly stable state of the inhibitory hypercube network, respectively.
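A numerical illustration of Theorem 5 under two stated assumptions: the inhibitory hypercube weights are W = -Q_n, and a ±1 corner state u counts as stable when every unit's net input pushes it toward its corner value (net_j u_j > 0 strongly stable, >= 0 weakly stable). Both assumptions are reconstructions, not taken verbatim from the slides.

```python
import numpy as np

def hypercube_adjacency(n):
    Q = np.zeros((1, 1))
    for _ in range(n):
        I = np.eye(len(Q))
        Q = np.block([[Q, I], [I, Q]])
    return Q

def walsh_eigensystem(n):
    pairs = [(np.array([1.0]), 0.0)]
    for _ in range(n):
        pairs = [p for v, x in pairs
                 for p in ((np.concatenate([v, v]), x + 1),
                           (np.concatenate([v, -v]), x - 1))]
    return pairs

n = 4
W = -hypercube_adjacency(n)                  # assumed inhibitory hypercube weights
for u, lam in walsh_eigensystem(n):
    push = (W @ u) * u                       # net_j * u_j for every unit j
    if lam > 0:
        assert push.max() < 0                # every unit pushed away from its corner: unstable
    elif lam == 0:
        assert np.allclose(push, 0)          # indifferent: weakly stable
    else:
        assert push.min() > 0                # every unit pushed into its corner: strongly stable
```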