Lagrange + Gomory = ?
Matteo Fischetti and Domenico Salvagnin, University of Padova
Aussois, 4-8/1/2010

Cutting plane methods
Cutting plane methods are widely used in convex optimization and to provide bounds for Mixed-Integer Programs (MIPs).
They consist of two equally important components:
– (i) the separation procedure (oracle) that produces the cut(s) used to tighten the current relaxation, and
– (ii) the overall search framework that actually uses the generated cuts and determines the next point to cut.
In the last 50 years, considerable research effort has been devoted to the study of (i): families of cuts, cut selection criteria, etc.
The search component (ii) is much less studied by the MIP community → the standard approach is to always cut an optimal LP vertex.

The problem
Consider a generic MIP
  z(MIP) := min { c^T x : x ∈ X }
and its LP relaxation
  z(LP) := min { c^T x : x ∈ P }
where P := { x : Ax ≤ b } and X := { x ∈ P : x_j integer for j ∈ J }.
We are also given a convex set P_1 with conv(X) ⊆ P_1 ⊆ P (e.g., the first GMI closure), described only implicitly through a separation function: oracle(y) returns a valid linear inequality for P_1 violated by y (if any).
We want to (approximately) compute z_1 := min { c^T x : x ∈ P_1 }.
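To make the oracle-based setting concrete, here is a minimal Python sketch of the separation interface assumed in the rest of these notes; the names (Cut, SeparationOracle, separate) are illustrative and not part of the original slides.

```python
from dataclasses import dataclass
from typing import Optional, Sequence


@dataclass
class Cut:
    """A valid linear inequality a^T x <= b for P_1."""
    a: Sequence[float]
    b: float

    def violation(self, y: Sequence[float]) -> float:
        """Positive value means y violates the cut."""
        return sum(ai * yi for ai, yi in zip(self.a, y)) - self.b


class SeparationOracle:
    """Implicit description of P_1: given y, return a violated cut, or None."""

    def separate(self, y: Sequence[float]) -> Optional[Cut]:
        raise NotImplementedError  # e.g., a rank-1 GMI separator would go here
```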

Kelley’s cutting plane method
A classical search scheme (J. E. Kelley, "The cutting plane method for solving convex programs", Journal of the SIAM, 8(4):703-712, 1960):
1. Let P’ := { x ∈ P : x satisfies all cuts generated so far }
2. Find an optimal vertex x* of the current LP min { c^T x : x ∈ P’ }
3. Invoke oracle(x*) and repeat (if a violated cut is found)
– Practically satisfactory only if the oracle is able to find “deep” cuts
– Very ineffective in case shallow cuts are generated
– May induce a dangerous correlation between x* and the returned cut (e.g., when the cuts are read from the LP tableau)
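A minimal Python sketch of Kelley’s loop, assuming the SeparationOracle/Cut interface above and a hypothetical solve_lp helper that returns an optimal vertex of the current relaxation (neither is from the original slides).

```python
def kelley(c, P_constraints, oracle, solve_lp, max_iter=1000):
    """Kelley's search: always cut an optimal vertex of the current LP.

    P_constraints: list of (a, b) pairs describing P = { x : a^T x <= b }.
    solve_lp(c, constraints): assumed helper returning an optimal vertex x*.
    """
    cuts = []  # cuts generated so far; P' = P intersected with them
    x_star = None
    for _ in range(max_iter):
        x_star = solve_lp(c, P_constraints + [(cut.a, cut.b) for cut in cuts])
        violated = oracle.separate(x_star)
        if violated is None:       # x* is (approximately) optimal over P_1: stop
            break
        cuts.append(violated)      # tighten P' and re-optimize
    return x_star, cuts
```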

Kelley and Gomory: a problematic marriage
Exponential determinant growth → unstable system!
[Plot: bound evolution on an example with LP bound = 5 and ILP optimum = 8]

But… is it all Gomory’s fault?
Experiment: take a given LP problem (e.g., the root-node relaxation of a MIP)
  min { c^T x : A’x ≤ b’, A’’x = b’’, l ≤ x ≤ u }
and define:
– P := { x : A’’x = b’’, l ≤ x ≤ u }
– the constraint list A’x ≤ b’ can only be accessed through the separation oracle
Three cut selection criteria are implemented for separation; for a given x*, the oracle returns:
A) the deepest violated cut in the list (Euclidean distance) → the “best” facet
B) a convex combination of the deepest cut and of the (at most) first 10 violated or tight cuts encountered when scanning the list
C) the cut is first defined as in case B, but then its rhs is weakened so as to halve the degree of violation
(a sketch of the three variants is given below)
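A minimal Python sketch of the three oracle variants, reusing the Cut dataclass from the earlier sketch; the equal-weight convex combination and the tolerances are illustrative assumptions rather than details taken from the slides.

```python
import math

TOL = 1e-6


def deepest_cut(cuts, y):
    """Variant A: the violated cut at largest Euclidean distance from y."""
    def distance(cut):
        norm = math.sqrt(sum(ai * ai for ai in cut.a))
        return cut.violation(y) / norm if norm > 0 else 0.0
    violated = [c for c in cuts if c.violation(y) > TOL]
    return max(violated, key=distance, default=None)


def combined_cut(cuts, y, k=10):
    """Variant B: combine the deepest cut with the first k violated-or-tight cuts."""
    deepest = deepest_cut(cuts, y)
    if deepest is None:
        return None
    pool = [c for c in cuts if c.violation(y) > -TOL][:k] + [deepest]
    n = len(pool)
    a = [sum(c.a[j] for c in pool) / n for j in range(len(y))]
    b = sum(c.b for c in pool) / n
    return Cut(a, b)


def weakened_cut(cuts, y):
    """Variant C: as B, but weaken the rhs so that the violation is halved."""
    cut = combined_cut(cuts, y)
    if cut is None:
        return None
    return Cut(cut.a, cut.b + 0.5 * cut.violation(y))
```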

Output with Kelley’s search (std)
[Plot: bound profiles of the three variants, legend: std-A, std-B, std-C]

But other search schemes work better… e.g., yo-yo search
[Plot: bound profiles comparing yoyo-A, yoyo-B, yoyo-C with std-A, std-B, std-C]

Lessons learned
– Kelley’s method is intrinsically nonrobust
– Other search methods can survive even with weak cuts (analytic center, yo-yo search, etc.)
– As to GMI cuts, reading both the vertex x* to cut and the cuts themselves from the same tableau soon creates a dangerous feedback (an unstable dynamic system)
– Kelley + Gomory is problematic in the long run, but… it is not all Gomory’s fault!
– In fact, Fischetti and Lodi (2007) report very good bounds by separating (fractional) Gomory cuts of rank 1 by means of an external MIP solver → a key ingredient was the new search scheme that decoupled optimization and separation
– Results confirmed for GMI cuts by Balas & Saxena and by Dash, Günlük & Lodi

Reading GMIs from LP bases
If you insist on reading GMI cuts from an LP basis… at least don’t use an optimal one!
Steps in this direction: given an optimal LP vertex x* of the “large LP” (original + cuts) and the associated optimal basis B*:
– Balas and Perregaard (2003): perform a sequence of pivots leading to a (possibly non-optimal or even infeasible) basis of the large LP that yields a deeper cut w.r.t. the given x*
– Dash and Goycoolea (2009): heuristically look for a basis B of the original LP that is “close to B*”, in the hope of cutting the given x* with rank-1 GMI cuts associated with B
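For reference (not on the original slides), this is the standard form of the GMI cut read from a tableau row of a basis, assuming all nonbasic variables are nonnegative and at value zero; it makes explicit what “GMI cuts associated with B” means above.

```latex
% Tableau row of a basic integer variable x_i (NB = nonbasic indices, J = integer indices):
%   x_i + \sum_{j \in NB} \bar a_{ij}\, x_j = \bar b_i, \qquad
%   f_0 = \bar b_i - \lfloor \bar b_i \rfloor \in (0,1), \quad
%   f_j = \bar a_{ij} - \lfloor \bar a_{ij} \rfloor .
% The GMI cut generated from this row is
\sum_{\substack{j \in NB \cap J \\ f_j \le f_0}} \frac{f_j}{f_0}\, x_j
+ \sum_{\substack{j \in NB \cap J \\ f_j > f_0}} \frac{1-f_j}{1-f_0}\, x_j
+ \sum_{\substack{j \in NB \setminus J \\ \bar a_{ij} \ge 0}} \frac{\bar a_{ij}}{f_0}\, x_j
+ \sum_{\substack{j \in NB \setminus J \\ \bar a_{ij} < 0}} \frac{-\bar a_{ij}}{1-f_0}\, x_j
\;\ge\; 1 .
```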

Back to Lagrangia
Forget about Kelley: optimizing over the first GMI closure reads
  min { c^T x : x ∈ P, all rank-1 GMI cuts satisfied }
Dualize (in a Lagrangian way) the GMI cuts, i.e., …
… solve a sequence of Lagrangian subproblems
  min { c(λ)^T x : x ∈ P }
on the original LP, but using the Lagrangian cost vector c(λ).
Subgradient s at λ: s_i = violation of the i-th GMI cut w.r.t. x*(λ) := argmin { c(λ)^T x : x ∈ P }
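Spelled out (this block is a restatement, not from the slides), with the i-th rank-1 GMI cut written in the original x space as g_i^T x ≤ g_{i0}, the dualization and the subgradient used above are:

```latex
% First-closure bound:
%   z_1 = \min \{\, c^T x : x \in P,\; g_i^T x \le g_{i0} \ \forall i \,\}.
% Dualizing the cuts with multipliers \lambda \ge 0 gives the Lagrangian subproblem
L(\lambda) \;=\; \min_{x \in P} \Big( c^T x + \sum_i \lambda_i \big( g_i^T x - g_{i0} \big) \Big)
          \;=\; \min_{x \in P} c(\lambda)^T x \;-\; \sum_i \lambda_i\, g_{i0},
\qquad c(\lambda) = c + \sum_i \lambda_i\, g_i .
% A subgradient of L at \lambda has components
%   s_i = g_i^T x^*(\lambda) - g_{i0}   (the violation of the i-th cut at x^*(\lambda)),
% and weak duality gives \max_{\lambda \ge 0} L(\lambda) \le z_1.
```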

Back to Lagrangia
– During the Lagrangian dual optimization process, a large number of bases of the original LP is traced → rounds of rank-1 GMI cuts can easily be generated “on the fly” and stored
– A cut pool is used to explicitly store the generated cuts; it is needed to compute the (approximate) subgradients used by the Lagrangian optimization
– Warning: new GMI cuts are added on the fly → possible convergence issues due to the imperfect nature of the computed “subgradients”… as the separation oracle does not return the list of all violated GMI cuts, the subgradient is truncated somehow…
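A minimal Python sketch of the resulting scheme, combining a plain subgradient update with an on-the-fly cut pool; the step-size rule and the solve_lp / gmi_cuts_from_basis helpers are illustrative assumptions, not the authors’ implementation. Cuts are Cut objects (a^T x ≤ b) as in the earlier sketches.

```python
def lagromory_subgradient(c, P, solve_lp, gmi_cuts_from_basis,
                          n_iter=10000, step0=1.0):
    """Lagrangian treatment of rank-1 GMI cuts generated on the fly.

    solve_lp(costs, P): assumed helper, returns (x_star, basis) for min costs^T x over P.
    gmi_cuts_from_basis(basis, P): assumed helper reading rank-1 GMI cuts off that basis.
    """
    pool, lam = [], []                 # cut pool and one multiplier per pooled cut
    best_bound = float("-inf")

    for it in range(n_iter):
        # Lagrangian costs c(lambda) = c + sum_i lam_i * a_i
        costs = [c[j] + sum(l * cut.a[j] for l, cut in zip(lam, pool))
                 for j in range(len(c))]
        x_star, basis = solve_lp(costs, P)

        # Lagrangian bound L(lambda) and (truncated) subgradient s_i = a_i^T x* - b_i
        bound = sum(cj * xj for cj, xj in zip(costs, x_star)) \
            - sum(l * cut.b for l, cut in zip(lam, pool))
        best_bound = max(best_bound, bound)
        subgrad = [cut.violation(x_star) for cut in pool]

        # Plain subgradient step with diminishing step size, projected onto lam >= 0
        step = step0 / (it + 1)
        lam = [max(0.0, l + step * s) for l, s in zip(lam, subgrad)]

        # Harvest new rank-1 GMI cuts from the basis of the original LP just visited;
        # they enter the pool with multiplier 0 and only influence later iterations
        for cut in gmi_cuts_from_basis(basis, P):
            pool.append(cut)
            lam.append(0.0)

    return best_bound, pool
```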

Lagrange + Gomory = LaGromory
– Generate cuts and immediately dualize them (don’t wait until they become wild!)
– No fractional point x* to cut: separation is used to find (approximate) subgradients
– Lagrangian optimization acts as a stabilization filter in the closed-loop system → GMI cuts are loosely correlated with the underlying large LP (original + previously generated GMI cuts), as they don’t even see the large tableau
– The method can add rank-1 GMI cuts on top of nonlinear constraints and of any other classes of cuts (Cplex cuts, etc.), including branching conditions → just dualize them!
– Key to success: resist the “Kelley temptation” of reading the GMIs from the optimal tableau of the large LP!

Experiments with LaGromory cuts
Three preliminary implementations:
– lagr: naïve Held-Karp subgradient optimization scheme (10,000 iterations)
– hybr: as before, but every 1,000 subgradient iterations we solve the “large LP”, just to recompute the optimal Lagrangian multipliers for all the cuts in the current pool
– fast: as before, but faster: only 10 large LPs solved, each followed by just 50 subgradient iterations with a very small step size → 10 short walks around the Lagrangian dual optimum to collect bases of the original LP, each made of 50 small steps that perturb the Lagrangian costs (see the sketch below)
Comparison with std (one round of GMI cuts)
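A hedged Python sketch of the “fast” variant’s control flow, reusing the ideas from lagromory_subgradient above; the solve_large_lp helper (which recenters the multipliers via the duals of the large LP) and the default parameters are assumptions for illustration only.

```python
def fast_lagromory(c, P, solve_lp, solve_large_lp, gmi_cuts_from_basis,
                   n_walks=10, steps_per_walk=50, step_size=1e-3):
    """'fast' scheme: short subgradient walks around the Lagrangian dual optimum.

    solve_large_lp(c, P, pool): assumed helper solving the original LP plus all
    pooled cuts and returning the optimal dual multipliers of the pooled cuts.
    """
    pool, lam = [], []
    for _ in range(n_walks):
        # Recenter: solve the large LP to get (near-)optimal multipliers for the pool
        lam = solve_large_lp(c, P, pool) if pool else []

        for _ in range(steps_per_walk):
            # Small perturbation of the Lagrangian costs around the current center
            costs = [c[j] + sum(l * cut.a[j] for l, cut in zip(lam, pool))
                     for j in range(len(c))]
            x_star, basis = solve_lp(costs, P)

            # Tiny subgradient step on the existing multipliers ...
            lam = [max(0.0, l + step_size * cut.violation(x_star))
                   for l, cut in zip(lam, pool)]
            # ... then collect rank-1 GMI cuts from the basis of the original LP
            new_cuts = gmi_cuts_from_basis(basis, P)
            pool.extend(new_cuts)
            lam.extend(0.0 for _ in new_cuts)

    return pool
```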

Preliminary computational results
Testbed: 32 instances from MIPLIB 3.0 and 2003; CPU seconds on a standard PC (1 GB memory allowed)

Rank 1 GMI cuts
          %gap closed   Time (sec.s)
  std         24.4%          0.01
  lagr        63.6%         46.69
  hybr        69.6%         36.97
  fast        59.1%          1.52

Rank 2 GMI cuts
          %gap closed   Time (sec.s)
  std         33.1%          0.02
  lagr        68.9%         72.64
  hybr        73.4%           —
  fast        69.3%          4.79

Fast with different parameters
Rank 1 GMI cuts, %gap closed (std = 24.4% in 0.01 sec.s)

                        1 walk   5 walks   10 walks   15 walks   20 walks   25 walks
  100 steps per walk    37.00%   56.60%    60.60%     62.00%     62.60%     63.20%
   50 steps per walk    36.50%   55.60%    59.10%     61.00%     61.80%     62.40%

… thank you for your attention!

Lagrange + Gomory = ?
Matteo Fischetti and Domenico Salvagnin, University of Padova