Neural Networks for Optimization William J. Wolfe California State University Channel Islands.

Neural Models: simple processing units; lots of them; highly interconnected; exchange excitatory and inhibitory signals; variety of connection architectures/strengths. "Learning": changes in connection strengths. "Knowledge": connection architecture. No central processor: distributed processing.

Simple Neural Model: a_i = activation, e_i = external input, w_ij = connection strength. Assume w_ij = w_ji ("symmetric" network), so W = (w_ij) is a symmetric matrix.

Net Input
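In the standard Hopfield formulation these slides follow, the net input to unit i is net_i = Σ_j w_ij a_j + e_i. A minimal sketch; the 3-unit example values are illustrative:

```python
import numpy as np

def net_input(W, a, e):
    """Net input to each unit: net_i = sum_j w_ij * a_j + e_i."""
    return W @ a + e

# Tiny symmetric example: three mutually inhibitory units (w_ij = -1, i != j).
W = -(np.ones((3, 3)) - np.eye(3))
a = np.array([0.5, 0.2, 0.0])   # current activations
e = np.array([0.5, 0.5, 0.5])   # external inputs
print(net_input(W, a, e))       # -> [0.3, 0.0, -0.2]
```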

Dynamics Basic idea: each unit's activation moves in the direction of its net input, da_i/dt = net_i.

Energy
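The equation image for this slide is not in the transcript; the standard Hopfield energy consistent with the symmetric W defined above and the gradient dynamics on the next slide is:

```latex
E(a) \;=\; -\tfrac{1}{2}\sum_{i,j} w_{ij}\,a_i a_j \;-\; \sum_i e_i a_i,
\qquad
\frac{\partial E}{\partial a_i} \;=\; -\sum_j w_{ij}\,a_j \;-\; e_i \;=\; -\mathrm{net}_i,
```

so that da/dt = net = -grad(E), as the next slide states.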

Lower Energy: da/dt = net = -grad(E), so the dynamics seek lower energy.

Problem: Divergence

A Fix: Saturation

Saturation keeps the activation vector inside the hypercube boundaries and encourages convergence to corners.
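A minimal sketch of the saturated dynamics, assuming simple Euler integration with activations clipped to [0, 1]; the step size, weights, and inputs are illustrative:

```python
import numpy as np

def run_network(W, e, a0, dt=0.1, steps=500):
    """Euler-integrate da/dt = W a + e, clipping a to the unit hypercube."""
    a = a0.copy()
    for _ in range(steps):
        a = np.clip(a + dt * (W @ a + e), 0.0, 1.0)
    return a

# Four mutually inhibitory units; the largest initial activation wins
# and the state converges to a corner of the hypercube.
W = -(np.ones((4, 4)) - np.eye(4))
e = np.full(4, 0.5)
a0 = np.array([0.30, 0.25, 0.20, 0.15])
print(run_network(W, e, a0))  # converges to the corner [1, 0, 0, 0]
```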

Summary: The Neural Model. a_i = activation, e_i = external input, w_ij = connection strength; W = (w_ij) is symmetric (w_ij = w_ji).

Example: Inhibitory Networks. Completely inhibitory: w_ij = -1 for all i, j (k-winner behavior). Inhibitory grid: neighborhood inhibition.

Traveling Salesman Problem: Classic combinatorial optimization problem. Find the shortest "tour" through n cities. There are n!/(2n) = (n-1)!/2 distinct tours.
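The tour count follows from fixing the starting city (divide by n) and the direction of travel (divide by 2), giving n!/(2n) = (n-1)!/2. For example:

```python
from math import factorial

def distinct_tours(n):
    """Distinct undirected tours of n cities: n!/(2n) = (n-1)!/2, for n >= 3."""
    return factorial(n - 1) // 2

print(distinct_tours(5))   # -> 12
print(distinct_tours(10))  # -> 181440
```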

TSP 50 City Example

Random

Nearest-City
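A minimal sketch of the nearest-city heuristic shown on this slide; the coordinates below are illustrative, not the 50-city instance from the talk:

```python
import math

def nearest_city_tour(cities, start=0):
    """Greedy tour: repeatedly visit the closest unvisited city."""
    unvisited = set(range(len(cities))) - {start}
    tour = [start]
    while unvisited:
        last = cities[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(last, cities[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

cities = [(0, 0), (1, 0), (2, 0), (2, 2), (0, 1.5)]
print(nearest_city_tour(cities))  # -> [0, 1, 2, 3, 4]
```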

2-OPT
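2-OPT improves a tour by reversing a segment whenever the reversal shortens the total length; a compact sketch (one O(n^2) scan per pass, repeated until no improvement):

```python
import math

def tour_length(tour, cities):
    """Total length of the closed tour."""
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(tour, cities):
    """Reverse segments until no reversal shortens the tour."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
                if tour_length(cand, cities) < tour_length(tour, cities) - 1e-12:
                    tour, improved = cand, True
    return tour

cities = [(0, 0), (2, 0), (0, 1), (2, 1)]
tour = two_opt([0, 3, 1, 2], cities)    # initial tour has crossing edges
print(tour, tour_length(tour, cities))  # crossing removed, length 6.0
```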

Centroid

Monotonic

Neural Network Approach (figure: one neuron per city/time-stop pair)

Tours – Permutation Matrices. Example tour: CDBA. Permutation matrices correspond to the "feasible" states.

Not Allowed

Only one city per time stop, and only one time stop per city ⇒ inhibitory rows and columns.

Distance Connections: Inhibit the neighboring cities in proportion to their distances.

putting it all together:
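One common way to realize "putting it all together" as a single weight matrix: unit (i, s) represents city i at time stop s; row/column terms inhibit duplicate cities and stops, and adjacent-stop terms inhibit in proportion to distance. The penalty coefficients A and B below are illustrative tuning parameters, not values from the slides:

```python
import numpy as np

def tsp_weights(d, A=2.0, B=1.0):
    """Symmetric weights over n*n units; unit (city i, stop s) -> index i*n + s."""
    n = len(d)
    W = np.zeros((n * n, n * n))
    for i in range(n):
        for s in range(n):
            for j in range(n):
                for t in range(n):
                    u, v = i * n + s, j * n + t
                    if u == v:
                        continue  # no self-connection
                    if i == j or s == t:                 # same city or same stop
                        W[u, v] -= A
                    if t in ((s + 1) % n, (s - 1) % n):  # neighboring stops
                        W[u, v] -= B * d[i][j]
    return W

d = np.array([[0, 1, 2, 1],
              [1, 0, 1, 2],
              [2, 1, 0, 1],
              [1, 2, 1, 0]], dtype=float)
W = tsp_weights(d)
print(np.allclose(W, W.T))  # -> True (the network stays symmetric)
```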

Research Questions Which architecture is best? Does the network produce: –feasible solutions? –high quality solutions? –optimal solutions? How do the initial activations affect network performance? Is the network similar to “nearest city” or any other traditional heuristic? How does the particular city configuration affect network performance? Is there any way to understand the nonlinear dynamics?

typical state of the network before convergence

“Fuzzy Readout”
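A hedged reading of the "fuzzy readout": before convergence, place time stop s at the activation-weighted centroid of the city positions, giving a "fuzzy tour" whose length can be tracked over iterations. The function names and example values below are illustrative:

```python
import numpy as np

def fuzzy_readout(a, cities):
    """a[i, s]: activation of city i at stop s; stop s -> weighted centroid."""
    w = a / a.sum(axis=0, keepdims=True)   # normalize each stop's column
    return w.T @ cities                    # (n_stops, 2) fuzzy positions

def fuzzy_tour_length(points):
    """Length of the closed tour through the fuzzy stop positions."""
    diffs = np.diff(np.vstack([points, points[:1]]), axis=0)
    return float(np.linalg.norm(diffs, axis=1).sum())

cities = np.array([(0, 0), (2, 0), (2, 2), (0, 2)], dtype=float)
print(fuzzy_tour_length(fuzzy_readout(np.eye(4), cities)))        # -> 8.0
print(fuzzy_tour_length(fuzzy_readout(np.ones((4, 4)), cities)))  # -> 0.0
```

At a permutation-matrix corner the fuzzy tour coincides with the actual tour; for a uniform activation matrix every stop collapses to the overall centroid.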

Neural Activations / Fuzzy Tour: Initial Phase

Neural Activations / Fuzzy Tour: Monotonic Phase

Neural Activations / Fuzzy Tour: Nearest-City Phase

Fuzzy Tour Lengths (plot: tour length vs. iteration)

Average Results for n = 10 to n = 70 cities, 50 random runs per n (plot: tour length vs. number of cities)

DEMO 2 Applet by Darrell Long

Conclusions: Neurons stimulate intriguing computational models. The models are complex, nonlinear, and difficult to analyze. The interaction of many simple processing units is difficult to visualize. The neural model for the TSP mimics some of the properties of the nearest-city heuristic. Much work remains to be done to understand these models.

EXTRA SLIDES

Brain: Approximately 10^11 neurons. Neurons are relatively simple. Approximately 10^4 fan-out. No central processor. Neurons communicate via excitatory and inhibitory signals. Learning is associated with modifications of connection strengths between neurons.

Fuzzy Tour Lengths (plot: tour length vs. iteration)

Average Results for n = 10 to n = 70 cities, 50 random runs per n (plot: tour length vs. number of cities)

with external input e = 1/2

Perfect k-winner performance: e = k - 1/2
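The claim e = k - 1/2 can be checked directly: in a completely inhibitory network, this external input makes exactly the k units with the largest initial activations win. A sketch with illustrative values:

```python
import numpy as np

def k_winner(a0, k, dt=0.1, steps=1000):
    """Completely inhibitory network (w_ij = -1, i != j) with e = k - 1/2."""
    n = len(a0)
    W = -(np.ones((n, n)) - np.eye(n))
    e = np.full(n, k - 0.5)
    a = a0.copy()
    for _ in range(steps):
        a = np.clip(a + dt * (W @ a + e), 0.0, 1.0)
    return a

a0 = np.array([0.9, 0.1, 0.6, 0.3, 0.5])
print(k_winner(a0, k=2))  # units 0 and 2 (the two largest) converge to 1
```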