A Membrane Algorithm for the Min Storage Problem. Dipartimento di Informatica, Sistemistica e Comunicazione, Università degli Studi di Milano – Bicocca. WMC 2006.


A Membrane Algorithm for the Min Storage Problem
Dipartimento di Informatica, Sistemistica e Comunicazione, Università degli Studi di Milano – Bicocca
WMC 2006, Leiden, Lorentz Center, 21 July 2006
Alberto Leporati, Dario Pagani

Talk outline
- The ConsComp and Min Storage problems
- Computational properties of Min Storage (NP-hardness, 2-approximability)
- Nishida's membrane algorithm for TSP
- A membrane algorithm for Min Storage:
  - first try: crossover and mutation
  - second try: simple local optimization
- Some traditional heuristics for Min Storage
- Computer experiments:
  - generation of random instances
  - comparison with traditional heuristics

Conservative computations
Let:
- G be an n-input/m-output gate (function)
- x_1, x_2, …, x_k be a sequence of input values
- G(x_1), G(x_2), …, G(x_k) be the corresponding sequence of output values
For i = 1, 2, …, k, we define the energy function e_i as the difference between the energy of the input x_i and the energy of the output G(x_i). If e_1 + e_2 + … + e_k = 0, then we say the computation is conservative.

Gates with storage unit
In a conservative computation it may be e_i > 0 or e_i < 0 for some i ∈ {1, 2, …, k}. We assume gates equipped with a storage unit, able to store a limited amount C of energy. We call C the capacity of the gate.
Remark: without loss of generality, we can assume that all e_i's are integer values.

Stored energy during a computation
If G(x_1), G(x_2), …, G(x_k) are computed in this order, then the energy stored in the gate after step i is the prefix sum E_i = e_1 + e_2 + … + e_i. For the computation to be C-feasible, it must hold that 0 ≤ E_i ≤ C for each i ∈ {1, 2, …, k}.
Question: is there a permutation π ∈ S_k such that the computation of G(x_π(1)), G(x_π(2)), …, G(x_π(k)) is C-feasible?

The ConsComp decision problem
Instance:
- a set {e_1, e_2, …, e_k} of integer numbers such that e_1 + e_2 + … + e_k = 0
- an integer number C > 0
Question: is there a permutation π ∈ S_k such that 0 ≤ e_π(1) + e_π(2) + … + e_π(i) ≤ C (the i-th prefix sum) for each i ∈ {1, 2, …, k}?
Theorem: ConsComp is NP-complete.
Proof: reduction from Partition (ask for details!).
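The feasibility check in the Question above is a direct prefix-sum scan; a minimal sketch (function and parameter names are mine, not from the slides):

```python
def is_c_feasible(energies, perm, capacity):
    """Return True iff consuming the energies in the order given by
    perm keeps every prefix sum within [0, capacity]."""
    stored = 0
    for i in perm:
        stored += energies[i]
        if stored < 0 or stored > capacity:
            return False
    return True
```

For example, with energies [3, -2, 4, -5] the identity order has prefix sums 3, 1, 5, 0, so it is 5-feasible but not 4-feasible.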

Complexity of ConsComp
Theorem: ConsComp is NP-complete in the strong sense.
Proof: reduction from (3-)Partition.
We conjecture that the problem remains NP-complete even with a constant number of distinct element values in the instance. Notice that Bin Packing and 3-Partition with a constant number of element types are polynomial; of course, the proofs for these problems do not work for ConsComp.

The Min Storage optimization problem
The optimization problem whose natural decision version is ConsComp:
- name: Min Storage
- instance: a set {e_1, e_2, …, e_k} of integer numbers such that e_1 + e_2 + … + e_k = 0
- solution: a permutation π ∈ S_k such that e_π(1) + e_π(2) + … + e_π(i) ≥ 0 for each i ∈ {1, 2, …, k}
- measure: the maximum, over i ∈ {1, 2, …, k}, of the i-th prefix sum e_π(1) + e_π(2) + … + e_π(i)

Complexity of Min Storage
- Min Storage ∈ NPO
- Since ConsComp is strongly NP-complete, Min Storage is NP-hard in the strong sense
- Trivial bounds for the optimal solution:
  - upper bound: the sum of the positive elements
  - lower bound: max_i |e_i|
- It is easily shown that the problem is 2-approximable; this is the best known upper bound
- Open problem: is it 3/2-approximable?

Complexity of Min Storage
There is a 2-approximation algorithm for Min Storage. There is no FPTAS for Min Storage.
Open problem: is there a PTAS?
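The slide's pseudo-code did not survive the transcript, but one strategy consistent with the 2-approximability claim can be sketched as follows (an illustration, not necessarily the authors' algorithm): since opt ≥ m = max_i |e_i|, an ordering that emits a positive element only while the stored energy is below m never stores 2m or more.

```python
def two_approx_order(energies):
    """Sketch of a 2-approximation: the maximum stored energy stays
    below 2*m, where m = max |e_i| is a lower bound on the optimum."""
    if not any(energies):          # all-zero instance: any order works
        return list(energies)
    m = max(abs(e) for e in energies)
    remaining = list(energies)
    order, stored = [], 0
    while remaining:
        if stored < m:
            # prefer a positive element; if none remains, the remaining
            # elements are all <= 0 and sum to -stored, so each one
            # keeps the stored energy non-negative
            pick = next((e for e in remaining if e > 0), remaining[0])
        else:
            # stored >= m > 0, so a negative element must remain, and
            # any negative e >= -m keeps the stored energy >= 0
            pick = next(e for e in remaining if e < 0)
        remaining.remove(pick)
        order.append(pick)
        stored += pick
    return order
```

The invariant: a positive element (each at most m) is added only when the stored energy is below m, so every prefix sum is strictly below 2m ≤ 2·opt, and never negative.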

Membrane algorithms
Introduced by Nishida, and applied to TSP. Inspired by membrane computing. Composed of three components:
- a linear collection of regions separated by nested membranes
- local optimization sub-algorithms that work on a small number of candidate solutions
- a transport mechanism that moves candidate solutions to the immediately inner or outer region
Notation: M = number of nested membranes; D = number of iterations before halting.


Nishida's algorithm for TSP
- The local optimization subalgorithm for region 0 is tabu search: it exchanges two nodes in the candidate solution (and throws away previously considered solutions)
- In the other regions: edge-exchange crossover, which combines two solutions to produce two new solutions, and mutation, performed in region i with probability i/M
- Halting condition: a prefixed number of iterations

Nishida's algorithm for TSP
- 20 tests on eil51, with D = …; the optimal solution is 426
- tests on kroA100, with D = …; the optimal solution is 21282

Membrane alg. for Min Storage
First try: MA4MS
- Fitness measure: the Min Storage measure, i.e., the maximum prefix sum of the permuted sequence
- Same structure as Nishida's algorithm for TSP (membrane structure, number of solutions)
- Local optimization subalgorithms:
  - region 0: LocalSearch4MS
  - other regions: Partially Matched Crossover (PMX), followed by Mutation

LocalSearch4MS
- The neighborhood Neigh(σ, j) of a candidate solution σ, w.r.t. a position j, is the set of k−1 solutions obtained from σ by exchanging the elements in positions i and j
- LocalSearch4MS looks for the best (i.e., lowest-fitness) solution in Neigh(σ, j)
- j is chosen as the first position for which the prefix sum is negative (if any); if all prefix sums are non-negative, j is chosen at random
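A sketch of this step (names are mine; as one reasonable way to make "lowest fitness" well defined, infeasible orders, i.e., those with a negative prefix sum, are ranked worst):

```python
import random

def prefix_sums(seq):
    """Running prefix sums of a sequence."""
    sums, s = [], 0
    for e in seq:
        s += e
        sums.append(s)
    return sums

def fitness(sol):
    # Min Storage measure: maximum prefix sum; orders with a negative
    # prefix sum are treated as infinitely bad
    sums = prefix_sums(sol)
    return max(sums) if min(sums) >= 0 else float('inf')

def local_search_step(sol):
    """One LocalSearch4MS-style move: pick the pivot position j as the
    first position with a negative prefix sum (random if none), then
    return the best among sol and its k-1 swap neighbours."""
    sums = prefix_sums(sol)
    negatives = [i for i, s in enumerate(sums) if s < 0]
    j = negatives[0] if negatives else random.randrange(len(sol))
    best, best_fit = sol, fitness(sol)
    for i in range(len(sol)):
        if i == j:
            continue
        cand = list(sol)
        cand[i], cand[j] = cand[j], cand[i]
        f = fitness(cand)
        if f < best_fit:
            best, best_fit = cand, f
    return best
```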

Partially Matched Crossover
(figure: PMX applied to two permutations, with cut points l and r)
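PMX, in its usual formulation, can be sketched like this (the slide's figure showing the cut points l and r did not survive; this is the textbook operator, producing one child; the second child is obtained by swapping the roles of the parents):

```python
def pmx(p1, p2, l, r):
    """Partially Matched Crossover: the child inherits the segment
    p2[l:r]; all other positions are copied from p1, with duplicates
    repaired by following the segment's value mapping."""
    k = len(p1)
    child = [None] * k
    child[l:r] = p2[l:r]
    segment = set(p2[l:r])
    mapping = {p2[i]: p1[i] for i in range(l, r)}
    for i in list(range(l)) + list(range(r, k)):
        v = p1[i]
        while v in segment:      # resolve conflicts via the mapping
            v = mapping[v]
        child[i] = v
    return child
```

The mapping chain always terminates because p1 and p2 are permutations. For Min Storage, where energy values may repeat, the operator would act on permutations of the indices 1…k rather than on the values themselves.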

Mutation
- After applying PMX, we apply Mutation to each of the new solutions
- In region i, the probability of applying Mutation is i/M
- Mutation simply chooses two positions at random (with uniform distribution) and exchanges the two elements

Generation of instances
Problem: generate k-tuples (X_1, X_2, …, X_k) such that X_1 + X_2 + … + X_k = 0 in a uniform way, where X_1, X_2, …, X_k are discrete, independent random variables uniformly distributed over the set of integers {−M, …, M}.
First idea: extract a k-tuple and discard it if X_1 + … + X_k ≠ 0. Approximating the distribution of X_1 + … + X_k with a normal distribution of mean 0 and variance kM(M+1)/3, the acceptance probability is of the order of 10^−7 for k = 100 and M = 10^6.

Generation of instances
Alternative idea: extract a (k−1)-tuple and discard it if |X_1 + … + X_{k−1}| > M; otherwise set X_k = −(X_1 + … + X_{k−1}). Approximating the distribution of X_1 + … + X_{k−1} with a normal of mean 0 and variance (k−1)M(M+1)/3, the acceptance probability is roughly 0.14 for k = 100 and M = 10^6.
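The alternative scheme above is plain rejection sampling; a sketch:

```python
import random

def random_instance(k, M):
    """Draw k-1 values uniformly from {-M, ..., M} and retry until
    their sum s satisfies |s| <= M; the last element is then -s,
    so the whole instance sums to zero."""
    while True:
        xs = [random.randint(-M, M) for _ in range(k - 1)]
        s = sum(xs)
        if abs(s) <= M:
            return xs + [-s]
```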

Error of approximation
The probability distribution of the last element X_k = −(X_1 + … + X_{k−1}), conditioned on acceptance, is almost uniform over {−M, …, M}; for k = 100 and M = 10^6 the deviation from uniformity is very small.

Coefficient of approximation
Given a heuristic A and an instance I of Min Storage, let:
- c_A(I) be the value returned by A
- opt(I) be the value of the optimal solution
The performance ratio (coefficient of approximation) of A over I is app_A(I) = c_A(I) / opt(I). Note that app_A(I) ≥ 1, and the closer it is to 1 the better c_A(I) is.

Coefficient of approximation
For large instances we are not able to compute opt(I); we use the trivial lower bound max_i |e_i| instead, and thus obtain upper bounds on the coefficient of approximation. We actually compute the average coefficient of approximation, (1/N) Σ_{j=1}^{N} app_A(I_j), where N is the number of tests.

Some experiments with MA4MS
- tests with M = 10 and D = …
- tests with M = 30 and D = 150

MA4MS with local search only
Second try: MA4MS_LS
- We use LocalSearch4MS as the local optimization subalgorithm in every region
- tests with M = 30 and D = 150
In the following, we will use only MA4MS_LS!

System's parameters
We have tried to make M and D grow together…

System's parameters
…as well as one at a time. Average coefficient of approximation for M varying between 10 and 100, with D = 150.

System's parameters
Average coefficient of approximation for D varying between 50 and 500, with M = 80.

Some heuristics for Min Storage
- The Greedy algorithm (time: Θ(k²))
- Some Θ(k log k) heuristics: Min, Max, MaxMinMax, MinMaxMin, MinMaxMinMax, MaxMinMaxMin
- The Best Fit algorithm (time: Θ(k²))
Some assumptions:
- all lists are implemented through arrays
- removal of a generic element from a list requires constant time
- sorting requires Θ(k log k) steps
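The Greedy pseudo-code itself did not survive in this transcript; a plausible Θ(k²) greedy rule (the selection rule here is an assumption of mine) is to repeatedly pick the remaining element that minimizes the stored energy while keeping it non-negative:

```python
def greedy_order(energies):
    """Greedy sketch: at each step take the feasible element (one that
    keeps the stored energy non-negative) with the smallest value.
    Scanning the remaining elements at each step gives Theta(k^2)."""
    remaining = list(energies)
    order, stored = [], 0
    while remaining:
        # a feasible element always exists: the remaining elements sum
        # to -stored, so if all are negative each one is >= -stored
        pick = min(e for e in remaining if stored + e >= 0)
        remaining.remove(pick)
        order.append(pick)
        stored += pick
    return order
```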


First experiment
- 100 instances of 12 elements from the set {−10^6, …, 10^6}
- Goal: comparison between optimal solutions and coefficients of approximation

Second experiment
- 10^5 instances of 100 elements from the set {−10^6, …, 10^6}
- Performance ratios have been computed using the trivial lower bound instead of opt(I)

Third experiment
- 100 instances of 100 elements from the set {−10^6, …, 10^6}
- Initial energy ranging between 0 and … with steps of 100

Fourth experiment
The same as the third experiment, but with initial energy ranging between 0 and … with steps of 100.

Fifth experiment
- 10^5 instances of 100 elements from the set {−10^6, …, 10^6}
- Initial energy fixed to …

Conclusions
- Min Storage seems to be easy to solve on (almost uniformly) randomly chosen instances
- MA4MS_LS is among the best heuristics (the best for the relaxed problem)
- Best Fit performs better when the initial energy is set to 0, but not when the initial energy is > 0
- MA4MS_LS seems to be stable
- Comparison with analogous algorithms for further problems?

Last slide!
Thanks for your attention!