Topics in Algorithms Lap Chi Lau.

Totally Unimodular Matrices, Simplex Algorithm, Ellipsoid Algorithm. Lecture 11: Feb 23

Integer Linear Programs
Method: model a combinatorial problem as a linear program.
Goal: prove that the linear program has an integer optimal solution for every objective function; equivalently, every vertex (basic) solution is an integer vector.
Consequences: the combinatorial problem (even the weighted case) is polynomial-time solvable, and min-max theorems for the combinatorial problem follow from the LP-duality theorem.

Method 1: Convex Combination
A point y in R^n is a convex combination of points x_1, ..., x_k if y lies in the convex hull of {x_1, ..., x_k}.
Fact: a vertex solution is not a convex combination of other points of the polytope.
Method 1: show that any non-integer solution is a convex combination of some other points of the polytope, and hence cannot be a vertex.
Examples: bipartite matching polytope, stable matching polytope.
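
In symbols (a standard restatement of the definition; the displayed formula is not reproduced in the transcript):

\[
y = \sum_{i=1}^{k} \lambda_i x_i, \qquad \lambda_i \ge 0 \ \text{ for all } i, \qquad \sum_{i=1}^{k} \lambda_i = 1 .
\]

A vertex admits no such representation using other points of the polytope, which is exactly what Method 1 exploits.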

Method 2: Linear Independence
Tight inequalities: inequalities achieved as equalities. Vertex solution: the unique solution of n linearly independent tight inequalities (think of the 3D picture: a vertex is where three planes meet).
Method 2: show that any set of n linearly independent tight inequalities must contain some tight inequality of the form x(e) = 0 or x(e) = 1, and proceed by induction.
Examples: bipartite matching polytope, general matching polytope.

Method 3: Totally Unimodular Matrices
The LP has m constraints and n variables. A vertex solution is the unique solution of n linearly independent tight inequalities. These tight inequalities can be rewritten as a square system A'x = b', where A' is the n x n submatrix of A formed by the tight constraints and b' consists of the corresponding entries of b; that is, x = (A')^{-1} b'.

Method 3: Totally Unimodular Matrices
Assume all entries of A and b are integral. When does Ax = b have an integral solution x? Apply Cramer's rule, where A_i is the matrix in which each column equals the corresponding column of A except that the i-th column is replaced by b. Then x is integral whenever det(A) is equal to +1 or -1.
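
Written out (the displayed formula is missing from the transcript), Cramer's rule gives

\[
x_i \;=\; \frac{\det(A_i)}{\det(A)}, \qquad i = 1, \dots, n,
\]

so when A and b are integral, each det(A_i) is an integer, and |det(A)| = 1 forces every x_i to be an integer.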

Method 3: Totally Unimodular Matrices A matrix is totally unimodular if the determinant of each square submatrix of is 0, -1, or +1. Theorem 1: If A is totally unimodular, then every vertex solution of is integral. Proof (follows from previous slides): a vertex solution is defined by a set of n linearly independent tight inequalities. Let A’ denote the (square) submatrix of A which corresponds to those inequalities. Then A’x = b’, where b’ consists of the corresponding entries in b. Since A is totally unimodular, det(A) = 1 or -1. By Cramer’s rule, x is integral.

Example of Totally Unimodular Matrices
A totally unimodular matrix must have every entry equal to +1, 0, or -1, since each single entry is itself a 1x1 submatrix. Solving the tight system by Gaussian elimination, we see that x must be an integral solution.

Example of Totally Unimodular Matrices is not a totally unimodular matrix, as its determinant is equal to 2. x is not necessarily an integral solution.

Method 3: Totally Unimodular Matrices
Consider a primal program with constraint matrix A and its dual, whose constraint matrix is the transpose of A.
Theorem 2: If A is totally unimodular, and the primal and dual are both feasible, then both have integer optimal solutions.
Proof: if A is totally unimodular, then so is its transpose, so Theorem 1 applies to both programs.
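
The primal/dual pair the slide refers to is presumably the standard symmetric form (the displayed programs are not in the transcript):

\[
\text{(P)}\ \ \max\{\, c^{\top}x : Ax \le b,\ x \ge 0 \,\},
\qquad
\text{(D)}\ \ \min\{\, b^{\top}y : A^{\top}y \ge c,\ y \ge 0 \,\},
\]

so the constraint matrix of (D) is the transpose of A, and Theorem 1 applies to both programs when A is totally unimodular and b, c are integral.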

Application 1: Bipartite Graphs
Let A be the incidence matrix of a bipartite graph. Each row i represents a vertex v(i), and each column j represents an edge e(j). A(ij) = 1 if and only if edge e(j) is incident to v(i).
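
A small sketch of this definition in code (the graph and function are my own illustration, not from the slides):

```python
# Build the vertex-edge incidence matrix of a graph from its edge list:
# rows are indexed by vertices, columns by edges.
import numpy as np

def incidence_matrix(vertices, edges):
    index = {v: i for i, v in enumerate(vertices)}
    A = np.zeros((len(vertices), len(edges)), dtype=int)
    for j, (u, v) in enumerate(edges):
        A[index[u], j] = 1   # edge j is incident to u ...
        A[index[v], j] = 1   # ... and to v
    return A

# A bipartite graph with sides {u1, u2} and {v1, v2}:
V = ["u1", "u2", "v1", "v2"]
E = [("u1", "v1"), ("u1", "v2"), ("u2", "v1")]
print(incidence_matrix(V, E))
```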

Application 1: Bipartite Graphs
We'll prove that the incidence matrix A of a bipartite graph is totally unimodular. Consider an arbitrary square submatrix A' of A. Our goal is to show that A' has determinant -1, 0, or +1.
Case 1: A' has a column of all zeros. Then det(A') = 0.
Case 2: A' has a column with exactly one 1. Expand the determinant along that column; by induction, the smaller submatrix A'' has determinant -1, 0, or +1, and so does A'.

Application 1: Bipartite Graphs Case 3: Each column of A’ has exactly two 1’s. +1 We can write -1 Since the graph is bipartite, each column has one 1 in Aup and one -1 in Adown So, by multiplying by +1 the rows in Aup and by -1 those in Adown, we get that the rows are linearly dependent, and thus det(A’)=0, and we’re done. +1 -1

Application 1: Bipartite Graphs
Maximum bipartite matching. The constraint matrix of this linear program is the incidence matrix of a bipartite graph, hence totally unimodular, which gives another proof that this LP has integral optimal solutions.
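
The linear program in question is the standard fractional matching relaxation (my transcription; the slide's displayed LP is not in the transcript):

\[
\max \sum_{e \in E} x_e
\quad \text{subject to} \quad
\sum_{e \in \delta(v)} x_e \le 1 \ \ \forall v \in V,
\qquad x_e \ge 0 \ \ \forall e \in E,
\]

whose constraint matrix is exactly the vertex-edge incidence matrix of the graph.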

Application 1: Bipartite Graphs
Maximum general matching. The linear program for general matching does not come from a totally unimodular matrix, and this is why Edmonds' result is regarded as a major breakthrough.

Application 1: Bipartite Graphs
Theorem 2: If A is totally unimodular, then both the primal and the dual program have integer optimal solutions.
Maximum matching <= maximum fractional matching <= minimum fractional vertex cover <= minimum vertex cover.
Theorem 2 shows that the first and the last inequalities are equalities. The LP-duality theorem shows that the second inequality is an equality. And so we have maximum matching = minimum vertex cover.

Application 2: Directed Graphs
Let A be the incidence matrix of a directed graph. Each row i represents a vertex v(i), and each column j represents an edge e(j). A(ij) = +1 if vertex v(i) is the tail of edge e(j), A(ij) = -1 if vertex v(i) is the head of edge e(j), and A(ij) = 0 otherwise.
The incidence matrix A of a directed graph is totally unimodular.
Consequences: The max-flow problem (even min-cost flow) is polynomial-time solvable, and the max-flow-min-cut theorem follows from the LP-duality theorem.
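
As a concrete illustration (my own example, not from the slides), the sketch below solves a tiny max-flow instance as a linear program with SciPy. Because the constraint matrix is an incidence matrix (totally unimodular) and the capacities are integral, the vertex solution returned by the solver is integral.

```python
# Max-flow on a small digraph solved as an LP.
# Edges: (s,a) cap 3, (s,b) cap 2, (a,b) cap 1, (a,t) cap 2, (b,t) cap 3.
# Variables f0..f4 are the flows on these edges; we maximize the flow leaving s.
from scipy.optimize import linprog

c = [-1, -1, 0, 0, 0]                    # maximize f0 + f1  ->  minimize -(f0 + f1)
A_eq = [[1, 0, -1, -1, 0],               # flow conservation at a: f0 = f2 + f3
        [0, 1,  1,  0, -1]]              # flow conservation at b: f1 + f2 = f4
b_eq = [0, 0]
bounds = [(0, 3), (0, 2), (0, 1), (0, 2), (0, 3)]   # capacity constraints

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
print("max flow value:", -res.fun)       # 5.0
print("edge flows:", res.x)              # an integral vertex solution
```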

The LP-solver as a black box: the combinatorial problem is formulated as an LP in polynomial time, the LP-solver returns a vertex solution, and since the vertex solution is integral it can be read off as a solution of the original problem.

Simplex Method
Simplex method: a simple and effective approach to solving linear programs in practice, with a nice geometric interpretation.
Idea: Focus only on vertex solutions, since no matter what the objective function is, there is always a vertex which attains optimality.

Simplex Method
Simplex Algorithm: Start from an arbitrary vertex. Move to one of its neighbours which improves the cost. Iterate.
Key: local minimum = global minimum. At a non-optimal vertex (the "we are here" point in the slide's figure) there is always a neighbour which improves the cost, obtained by moving along an edge of the polytope towards the global minimum.

Simplex Method
Simplex Algorithm: Start from an arbitrary vertex. Move to one of its neighbours which improves the cost. Iterate. Which neighbour? There are many different rules for choosing a neighbour, but so far every rule has a counterexample on which it takes exponential time to reach an optimal vertex.
MAJOR OPEN PROBLEM: Is there a polynomial-time simplex algorithm?

Simplex Method
For combinatorial problems, we know that vertex solutions correspond to combinatorial objects like matchings, stable matchings, flows, etc. So the simplex algorithm actually defines a combinatorial algorithm for these problems. For example, if you consider the bipartite matching polytope and run the simplex algorithm, you get the augmenting path algorithm. The key is to show that two adjacent vertices differ by an augmenting path. Recall that a vertex solution is the unique solution of n linearly independent tight inequalities. So moving along an edge of the polytope means replacing one tight inequality by another: dropping one leaves one degree of freedom, which corresponds to moving along an edge.
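
For concreteness, here is a standard augmenting-path algorithm for maximum bipartite matching, i.e. the kind of combinatorial algorithm that the simplex method specializes to on the bipartite matching polytope (the code and the example graph are my own sketch, not from the slides).

```python
# Augmenting-path (Kuhn's) algorithm for maximum bipartite matching.
def max_bipartite_matching(adj, n_left, n_right):
    """adj[u] lists the right-side neighbours of left vertex u."""
    match_right = [-1] * n_right          # match_right[v] = left vertex matched to v, or -1

    def try_augment(u, visited):
        for v in adj[u]:
            if not visited[v]:
                visited[v] = True
                # v is free, or the left vertex matched to v can be re-matched elsewhere:
                if match_right[v] == -1 or try_augment(match_right[v], visited):
                    match_right[v] = u
                    return True
        return False

    matching_size = 0
    for u in range(n_left):
        if try_augment(u, [False] * n_right):
            matching_size += 1
    return matching_size, match_right

# Left vertices 0,1,2 and right vertices 0,1,2, with adjacency lists:
adj = [[0, 1], [0], [1, 2]]
print(max_bipartite_matching(adj, 3, 3))   # (3, [1, 0, 2])
```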

Ellipsoid Method
Goal: Given a bounded convex set P, find a point x in P. Key: show that the volume of the ellipsoids decreases fast enough.
Ellipsoid Algorithm: Start with a big ellipsoid which contains P. Test if the centre c is inside P. If not, there is a linear inequality ax <= b of P that c violates (i.e. ac > b). Find a minimum-volume ellipsoid which contains the intersection of the previous ellipsoid and the half-space ax <= b. Continue the process with the new (smaller) ellipsoid.
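
Here is a minimal sketch of the method (the central-cut update formulas are standard; the code, tolerances, and example are my own and not from the slides). The ellipsoid is represented as {x : (x - c)^T P^{-1} (x - c) <= 1}.

```python
import numpy as np

def ellipsoid_update(c, P, a):
    """One central-cut step: smallest ellipsoid containing the half where a^T x <= a^T c."""
    n = len(c)
    Pa = P @ a
    g = Pa / np.sqrt(a @ Pa)                       # scaled direction of the cut
    c_new = c - g / (n + 1)
    P_new = (n * n / (n * n - 1.0)) * (P - (2.0 / (n + 1)) * np.outer(g, g))
    return c_new, P_new

def ellipsoid_feasibility(A, b, radius=10.0, max_iters=1000):
    """Find a point of P = {x : Ax <= b}, assuming P lies in a ball of the given radius."""
    A, b = np.asarray(A, float), np.asarray(b, float)
    n = A.shape[1]
    c, P = np.zeros(n), radius**2 * np.eye(n)      # start with a big ball containing P
    for _ in range(max_iters):
        violated = np.nonzero(A @ c > b + 1e-9)[0] # separation step: check each inequality
        if len(violated) == 0:
            return c                               # the centre is feasible
        c, P = ellipsoid_update(c, P, A[violated[0]])
    return None                                    # gave up: P may be empty (or very small)

# Feasible region: x1 <= 3, x2 <= 3, x1 + x2 >= 4.
print(ellipsoid_feasibility([[1, 0], [0, 1], [-1, -1]], [3, 3, -4]))
```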

Ellipsoid Method
Goal: Given a bounded convex set P, find a point x in P. Why is it enough to test whether P contains a point? Because the optimization problem can be reduced to this feasibility problem: combine the primal constraints, the dual constraints, and the equation "primal objective = dual objective" into one new system. Any point which satisfies this new system is an optimal solution of the original program.
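
One standard reduction, presumably the "new system" the slide refers to (the displayed system is not in the transcript), stacks the primal constraints, the dual constraints, and the strong-duality equation into a single feasibility problem:

\[
Ax \le b, \quad x \ge 0, \qquad A^{\top}y \ge c, \quad y \ge 0, \qquad c^{\top}x = b^{\top}y .
\]

By weak duality c^T x <= b^T y holds for every primal-dual feasible pair, so any (x, y) satisfying this system has both x and y optimal.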

Ellipsoid Method
Important property: We just need to know the previous ellipsoid and one violated inequality. This can help solve some LPs with exponentially many constraints, provided we have a separation oracle.
Separation oracle: given a point x, decide in polynomial time whether x is in P, or else output a violated inequality.

Application of the Ellipsoid Method
Maximum matching. The matching LP has one constraint for each odd set S, so there are exponentially many constraints. To solve this linear program, we need a separation oracle: given a fractional solution x, it must determine whether x is a feasible solution, or else output a violated inequality. For this problem, it turns out that we can design a polynomial-time separation oracle by using a minimum cut algorithm!
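
The LP in question is presumably Edmonds' description of the matching polytope (the slide's formulation is not reproduced in the transcript):

\[
\max \sum_{e \in E} x_e
\quad \text{s.t.} \quad
\sum_{e \in \delta(v)} x_e \le 1 \ \ \forall v \in V, \qquad
\sum_{e \subseteq S} x_e \le \frac{|S|-1}{2} \ \ \forall S \subseteq V,\ |S| \text{ odd}, \qquad
x \ge 0 .
\]

Separating the exponentially many odd-set constraints reduces to a minimum odd cut computation (the Padberg-Rao procedure), which is where the minimum cut algorithm comes in.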

What We Have Learnt (or Heard)
Stable matchings, bipartite matchings, minimum spanning trees, general matchings, maximum flows, shortest paths, minimum cost flows, submodular flows, linear programming.

What We Have Learnt (or Heard)
How to model a combinatorial problem as a linear program, and the geometric interpretation of linear programming.
How to prove that a linear program gives integer optimal solutions? Prove that every vertex solution is integral: by the convex combination method, by linear independence of tight inequalities, by totally unimodular matrices, by iterative rounding (to be discussed), or by randomized rounding (to be discussed).

What We Have Learnt (or Heard)
How to obtain min-max theorems for combinatorial problems? The LP-duality theorem, e.g. max-flow = min-cut and max-matching = min-vertex-cover.
How to see combinatorial algorithms coming from the simplex algorithm, which even gives an explanation for why they work (local minimum = global minimum).
We've seen how results from the combinatorial approach follow from results in linear programming. Later we'll see many results where linear programming is the only approach we know of!