TU/e Algorithms (2IL15) – Lecture 12: Linear Programming

Summary of previous lecture
 ρ-approximation algorithm: algorithm for which the computed solution is within factor ρ of OPT
 to prove an approximation ratio we usually need a lower bound on OPT (or, for maximization, an upper bound)
 PTAS = polynomial-time approximation scheme = algorithm with two parameters, the input instance and ε > 0, such that
– approximation ratio is 1 + ε
– running time is polynomial in n for constant ε
 FPTAS = PTAS whose running time is also polynomial in 1/ε
 some problems are even hard to approximate

Today: linear programming (LP)
 the most used and most widely studied optimization method
 can be solved in polynomial time (input size measured in bits)
 can be used to model many problems
 also used in many approximation algorithms (integer LP + rounding)
We will only have a very brief look at LP …
 what is LP? what are integer LP and 0/1-LP?
 how can we model problems as an LP?
… and we will not study algorithms to solve LPs.

Example problem: running a chocolate factory

Assortment: Pure Black, Creamy Milk, Hazelnut Delight, Super Nuts
[table: fraction of cacao, milk and hazelnuts per product, plus retail price (100 g); most entries garbled in transcription]

Cost and availability of ingredients:
ingredient    cost (kg)    available (kg)
cacao         2.1          50
milk          0.35         50
hazelnuts     1.9          30

How much should we produce of each product to maximize our profit?

Modeling the chocolate-factory problem

Variables we want to determine: the production (kg) of each product
b = production of Pure Black
m = production of Creamy Milk
h = production of Hazelnut Delight
s = production of Super Nuts

Modeling the chocolate-factory problem (cont’d)

Profits (per kg) of the products:
Pure Black: 19.9 − 2.1 = 17.8
Creamy Milk: 14.9 − (0.6 × 2.1 + 0.4 × 0.35) = 13.5
Hazelnut Delight: 15.19
Super Nuts: …

total profit: 17.8 b + 13.5 m + 15.19 h + … s

Modeling the chocolate-factory problem (cont’d)

We want to maximize the total profit 17.8 b + 13.5 m + 15.19 h + … s under the constraints
b + 0.6 m + … ≤ 50 (cacao availability)
0.4 m + … ≤ 50 (milk availability)
0.2 h + … ≤ 30 (hazelnut availability)

This is a linear program: optimize a linear function subject to a set of linear constraints.
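Once all coefficients are known, an LP like this can be handed to an off-the-shelf solver. The sketch below uses SciPy’s linprog (assuming SciPy is available); every value marked HYPOTHETICAL is a placeholder of mine for an entry lost in the transcript, not a value from the slides.

```python
# Sketch: solving the chocolate-factory LP with SciPy's linprog.
# The profit coefficient for Super Nuts and several recipe fractions did not
# survive transcription, so the values marked HYPOTHETICAL are made up
# purely for illustration.
from scipy.optimize import linprog

# variables: [b, m, h, s] = kg produced of each product
profit = [17.8, 13.5, 15.19, 15.0]     # 15.0 for Super Nuts is HYPOTHETICAL

# ingredient use per kg of product (rows: cacao, milk, hazelnuts);
# entries other than 1.0, 0.6, 0.4, 0.2 are HYPOTHETICAL
A_ub = [
    [1.0, 0.6, 0.5, 0.3],   # cacao
    [0.0, 0.4, 0.3, 0.3],   # milk
    [0.0, 0.0, 0.2, 0.4],   # hazelnuts
]
b_ub = [50, 50, 30]         # available kg of cacao, milk, hazelnuts

# linprog minimizes, so negate the profits to maximize
res = linprog(c=[-p for p in profit], A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None)] * 4)

print(res.x, -res.fun)      # optimal production plan and total profit
```

The returned plan is only as meaningful as the placeholder coefficients, but the modeling pattern (negate the objective, one ≤ row per ingredient, non-negative production) carries over directly once the real numbers are filled in.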

Linear programming

Find values of real variables x, y such that
 x − 3y is maximized
 subject to the constraints
−2x + y ≥ −4
x + y ≥ 3
−½x + y ≤ 2
y ≥ 0

objective function: must be a linear function in the variables; goal: maximize (or minimize)
m constraints: each of the form linear function ≤ constant (or ≥ or =); strict inequalities > and < are not allowed
n variables; here n = 2, in the chocolate example n = 4, but often n is large

Linear programming

Find values of real variables x, y such that
 x − 3y is maximized
 subject to the constraints
−2x + y ≥ −4
x + y ≥ 3
−½x + y ≤ 2
y ≥ 0

feasible region = region containing the feasible solutions = region containing the solutions satisfying all constraints; the feasible region is a convex polytope in n-dimensional space
[figure: the feasible region bounded by the lines y = −x + 3, y = 2x − 4, y = ½x + 2 and y = 0]
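This two-variable LP is small enough to check with a solver. Below is a sketch using SciPy’s linprog (assuming SciPy is available): since linprog minimizes and expects ≤ constraints, the objective is negated and the ≥ constraints are flipped, the same rewriting used for standard form later in the lecture.

```python
# Solving the example LP: maximize x − 3y subject to
#   −2x + y ≥ −4,  x + y ≥ 3,  −½x + y ≤ 2,  y ≥ 0   (x unrestricted)
from scipy.optimize import linprog

# linprog minimizes c·x with A_ub @ x <= b_ub, so rewrite:
#   −2x + y ≥ −4  →   2x − y ≤  4
#    x + y ≥  3   →  −x − y ≤ −3
#   −½x + y ≤ 2 stays as is
c = [-1, 3]                          # minimize −(x − 3y)
A_ub = [[2, -1], [-1, -1], [-0.5, 1]]
b_ub = [4, -3, 2]
bounds = [(None, None), (0, None)]   # x free, y ≥ 0

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
x, y = res.x
print(x, y, -res.fun)    # optimum at x = 7/3, y = 2/3 with value 1/3
```

The optimum sits at the vertex where x + y = 3 and −2x + y = −4 intersect, matching the claim that a unique optimal solution is a vertex of the feasible region.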

Linear programming: find values of real variables x1, …, xn such that
 a given linear function c1x1 + c2x2 + … + cnxn is maximized (or: minimized)
 and the given linear constraints on the variables are satisfied
constraints: equalities, or inequalities using ≥ or ≤; strict inequalities (> and <) cannot be used

Possible outcomes:
 no solution: the feasible region is empty
 unique optimal solution: a vertex of the feasible region
 bounded optimal solution, but not unique
 unbounded optimal solution

Linear programming: standard form

Maximize c1x1 + c2x2 + … + cnxn
Subject to
a1,1 x1 + a1,2 x2 + … + a1,n xn ≤ b1
a2,1 x1 + a2,2 x2 + … + a2,n xn ≤ b2
⋮
am,1 x1 + am,2 x2 + … + am,n xn ≤ bm
x1 ≥ 0, x2 ≥ 0, …, xn ≥ 0

In matrix notation: maximize c∙x subject to Ax ≤ b and non-negativity constraints on all xi, where c and x are n-dimensional vectors, b is an m-dimensional vector, and A is an m × n matrix; c, A and b are the input, x must be computed.
Standard form: maximization (not minimization), only “≤” constraints (no “=” and no “≥”), and a non-negativity constraint for each variable.

Lemma: Any LP with n variables and m constraints can be rewritten as an equivalent LP in standard form with 2n variables and 2n + 2m constraints.

Proof. The LP may fail to be in standard form because
 it is a minimization instead of a maximization
− negate the objective function: minimize 2x1 − x2 + 4x3  maximize −2x1 + x2 − 4x3
 some constraints are ≥ or = instead of ≤
− getting rid of =: replace 3x1 + x2 − x3 = 5 by the pair 3x1 + x2 − x3 ≤ 5 and 3x1 + x2 − x3 ≥ 5
− changing ≥ into ≤: negate the constraint: 3x1 + x2 − x3 ≥ 5  −3x1 − x2 + x3 ≤ −5

Proof (cont’d). The LP may also fail to be in standard form because
 some variables have no non-negativity constraint
− for each such variable xi introduce two new variables ui and vi
− replace each occurrence of xi by (ui − vi)
− add the non-negativity constraints ui ≥ 0 and vi ≥ 0

Proof (cont’d). The new problem is equivalent to the original problem: for any solution of the one there is a solution of the other with the same value.
− given an original solution: if xi ≥ 0 then set ui = xi and vi = 0, otherwise set ui = 0 and vi = −xi
− given a new solution: set xi = ui − vi
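The three rewriting steps of the proof are entirely mechanical. The helper below is an illustrative sketch of my own (the name to_standard_form and its input format are assumptions, not from the slides); it returns the rewritten objective and the ≤ constraint system, with the non-negativity constraints left implicit as standard form prescribes.

```python
# Sketch of the lemma's transformation: rewrite a general LP
#   optimize c·x  s.t. rows (a, rel, rhs) with rel in {"<=", ">=", "="},
#   free = indices of variables WITHOUT a non-negativity constraint
# into standard form: maximize c'·x' subject to A'x' <= b', x' >= 0.
# Each free variable x_i is split into u_i − v_i (one extra column per split).

def to_standard_form(c, rows, free, minimize=False):
    if minimize:                      # min c·x  ==  max (−c)·x
        c = [-ci for ci in c]
    A, b = [], []
    for a, rel, rhs in rows:
        if rel in ("<=", "="):        # "=" becomes a pair "<=" and ">="
            A.append(list(a)); b.append(rhs)
        if rel in (">=", "="):        # a·x >= rhs  ==  (−a)·x <= −rhs
            A.append([-ai for ai in a]); b.append(-rhs)
    # split each free variable x_i into u_i − v_i: the original column i
    # plays the role of u_i, and a negated copy (v_i) is appended
    split = sorted(free)
    c2 = c + [-c[i] for i in split]
    A2 = [row + [-row[i] for i in split] for row in A]
    return c2, A2, b

# Example: minimize 2x1 − x2  s.t.  x1 + x2 = 5,  x1 >= 0,  x2 free
c2, A2, b2 = to_standard_form([2, -1], [([1, 1], "=", 5)], free={1},
                              minimize=True)
print(c2)   # [-2, 1, -1]   (maximize −2x1 + u2 − v2)
print(A2)   # [[1, 1, -1], [-1, -1, 1]]
print(b2)   # [5, -5]
```

Counting also the implicit xi ≥ 0 constraints reproduces the lemma’s bound: at most 2n variables and 2n + 2m constraints.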

Instead of standard form, we can also use the so-called slack form:
− a non-negativity constraint for each variable
− all other constraints are equalities (=), not ≥ or ≤
 Standard form (or slack form) is convenient for developing LP algorithms.
 When modeling a problem: just use the general form.

Algorithms for solving LPs
 simplex method
− worst-case running time is exponential
− fast in practice
 interior-point methods
− worst-case running time is polynomial in the input size in bits
− some are slow in practice, others are competitive with the simplex method
 LP when the dimension (= number of variables) is constant
− can be solved in linear time (see the course Advanced Algorithms)

Modeling a problem as an LP
 decide what the variables are (what are the choices to be made?)
 write the objective function to be optimized (should be linear)
 write the constraints on the variables (should be linear)

Example: Max Flow

Flow: a function f : V × V → R satisfying
 capacity constraint: 0 ≤ f(u,v) ≤ c(u,v) for all nodes u, v
 flow conservation: for all nodes u ≠ s, t we have flow in = flow out: ∑ v in V f(v,u) = ∑ v in V f(u,v)
value of a flow: |f| = ∑ v in V f(s,v) − ∑ v in V f(v,s)
[figure: a flow network with source s and sink t; each edge is labeled “flow / capacity”, e.g. flow = 1, capacity = 5]

TU/e Algorithms (2IL15) – Lecture Modeling Max Flow as an LP  decide what the variables are (what are the choices to be made?) for each edge (u,v) introduce variable x uv ( x uv represents f(u,v) )  write the objective function to be optimized (should be linear) maximize ∑ v in V x sv − ∑ v in V x vs (note: linear function)  write the constraints on the variables (should be linear) x uv ≥ 0 for all pairs of nodes u,v x uv ≤ c(u,v) for all pairs of nodes u,v ∑ v in V x vu − ∑ v in V x uv = 0 for all nodes u ≠ s, t (note: linear functions)

Modeling Max Flow as an LP

Now write it down nicely:
maximize ∑ v in V x_sv − ∑ v in V x_vs
subject to x_uv ≥ 0 for all pairs of nodes u, v
x_uv ≤ c(u,v) for all pairs of nodes u, v
∑ v in V x_vu − ∑ v in V x_uv = 0 for all nodes u ≠ s, t

Conclusion: Max Flow can trivially be written as an LP (but dedicated max-flow algorithms are faster than using LP algorithms).
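As a sanity check, the LP above can be instantiated on a tiny network and fed to an LP solver. The sketch below assumes SciPy is available and uses an example graph of my own whose max-flow value is 5 (the min cut consists of the two edges into t, with capacities 2 and 3).

```python
# Sketch: the Max Flow LP on a tiny network, solved with SciPy's linprog.
# Example graph (my own): s→a (cap 3), s→b (cap 2), a→b (cap 1),
# a→t (cap 2), b→t (cap 3); max-flow value 5.
from scipy.optimize import linprog

edges = [("s", "a", 3), ("s", "b", 2), ("a", "b", 1), ("a", "t", 2), ("b", "t", 3)]
idx = {(u, v): i for i, (u, v, _) in enumerate(edges)}

# objective: maximize flow out of s (no edges into s here), i.e. x_sa + x_sb
c = [0.0] * len(edges)
c[idx[("s", "a")]] = -1.0      # linprog minimizes, so negate
c[idx[("s", "b")]] = -1.0

# flow conservation (one equality per node other than s, t): in − out = 0
A_eq, b_eq = [], [0.0, 0.0]
for node in ("a", "b"):
    row = [0.0] * len(edges)
    for (u, v, _cap) in edges:
        if v == node: row[idx[(u, v)]] += 1.0   # flow into node
        if u == node: row[idx[(u, v)]] -= 1.0   # flow out of node
    A_eq.append(row)

# capacity constraints 0 <= x_uv <= c(u,v), expressed as variable bounds
bounds = [(0, cap) for (_u, _v, cap) in edges]

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(-res.fun)   # maximum flow value: 5.0
```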

Example: Shortest Paths

Shortest paths

weighted, directed graph G = (V,E)
 weight (or: length) of a path = sum of its edge weights
 δ(u,v) = distance from u to v = minimum weight of any path from u to v
 shortest path from u to v = any path from u to v of weight δ(u,v)
[figure: a weighted, directed graph on v1, …, v7 with a path of weight 2 illustrating δ(v1,v5) = 2]

Is δ(u,v) always well defined? No, not if there are negative-weight cycles.

Modeling single-source single-target shortest path as an LP
Problem: compute the distance δ(s,t) from a given source s to a given target t
 decide what the variables are (what are the choices to be made?)
for each vertex v introduce a variable x_v (x_v represents δ(s,v))
 write the objective function to be optimized (should be linear)
maximize x_t (note: maximize, not minimize!)
 write the constraints on the variables (should be linear)
x_v ≤ x_u + w(u,v) for all edges (u,v) in E
x_s = 0

Modeling single-source single-target shortest path as an LP

variables: for each vertex v we have a variable x_v
LP: maximize x_t
subject to x_v ≤ x_u + w(u,v) for all edges (u,v) in E
x_s = 0

Lemma: the optimal solution to the LP equals δ(s,t).
Proof. (assume for simplicity that δ(s,t) is bounded)
≥: consider the solution where we set x_v = δ(s,v) for all v
− this solution is feasible and has value δ(s,t)  opt solution ≥ δ(s,t)
≤: consider an optimal solution, and a shortest path s = v0, v1, …, vk, vk+1 = t
− prove by induction that x_vi ≤ δ(s,vi)  opt solution ≤ δ(s,t)
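The same LP can be tried on a small example. The sketch below assumes SciPy is available and uses a three-vertex graph of my own with δ(s,t) = 3; note that the objective really is maximization, exactly as the lemma requires.

```python
# Sketch: the shortest-path LP (maximize x_t subject to
# x_v <= x_u + w(u,v) for all edges and x_s = 0) via SciPy's linprog.
# Example graph (my own): s→a (w=1), a→t (w=2), s→t (w=5); δ(s,t) = 3.
from scipy.optimize import linprog

vertices = ["s", "a", "t"]
vid = {v: i for i, v in enumerate(vertices)}
edges = [("s", "a", 1), ("a", "t", 2), ("s", "t", 5)]

c = [0.0] * len(vertices)
c[vid["t"]] = -1.0             # linprog minimizes, so maximize x_t via −x_t

# one constraint x_v − x_u <= w(u,v) per edge
A_ub, b_ub = [], []
for (u, v, w) in edges:
    row = [0.0] * len(vertices)
    row[vid[v]] += 1.0
    row[vid[u]] -= 1.0
    A_ub.append(row)
    b_ub.append(float(w))

# x_s = 0 enforced via its bounds; the other variables are free
bounds = [(0, 0) if v == "s" else (None, None) for v in vertices]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(-res.fun)   # δ(s,t) = 3.0
```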

Example: Vertex Cover

G = (V,E) is an undirected graph
vertex cover in G: a subset C ⊆ V such that for each edge (u,v) in E we have u in C or v in C (or both)

Vertex Cover (optimization version)
Input: undirected graph G = (V,E)
Problem: compute a vertex cover for G with a minimum number of vertices

 Vertex Cover is NP-hard.
 There is a 2-approximation algorithm running in linear time.

Modeling Vertex Cover as an LP
 decide what the variables are (what are the choices to be made?)
for each vertex v introduce a variable x_v (idea: x_v = 1 if v is in the cover, x_v = 0 if v is not in the cover)
 write the objective function to be optimized (should be linear)
minimize ∑ v in V x_v (note: a linear function)
 write the constraints on the variables (should be linear)
− for each edge (u,v) write the constraint x_u + x_v ≥ 1 (NB: linear)
− for each vertex v write the constraint x_v in {0,1}: not a linear constraint!

integrality constraint: “x_i must be integral”
0/1-constraint: “x_i must be 0 or 1”
integer LP: an LP where all variables have an integrality constraint
0/1-LP: an LP where all variables have a 0/1-constraint
(of course there are also mixed versions)

Theorem: 0/1-LP is NP-hard.
Proof. Consider the decision problem: is there a feasible solution to a given 0/1-LP?
Which problem do we use in the reduction? We already saw a reduction from Vertex Cover; let’s do another one: 3-SAT.
We need to transform a 3-SAT formula into an instance of 0/1-LP, for example
(x1 V x2 V ¬x3) Λ (x2 V ¬x4 V ¬x5) Λ (¬x2 V x3 V x5)
Introduce a variable yi for each Boolean variable xi, with yi = 1 if xi = TRUE and yi = 0 if xi = FALSE:
maximize y1 (not relevant for the decision problem, pick an arbitrary objective function)
subject to y1 + y2 + (1 − y3) ≥ 1
y2 + (1 − y4) + (1 − y5) ≥ 1
(1 − y2) + y3 + y5 ≥ 1
yi in {0,1} for all i

If a problem can be modeled as a “normal” LP
 the problem can be solved using LP algorithms
 so the problem can be solved efficiently
If a problem can be modeled as an integer LP (or 0/1-LP)
 the problem can be solved using integer-LP (or 0/1-LP) algorithms
 this does not mean that the problem can be solved efficiently (sometimes one can get approximation algorithms by relaxation and rounding; see the course Advanced Algorithms)
 there are solvers (software) for integer LPs that are quite efficient in practice
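Relaxation and rounding, mentioned above, can be sketched for Vertex Cover: solve the LP relaxation (replace x_v in {0,1} by 0 ≤ x_v ≤ 1), then take every vertex with x_v ≥ ½. Each edge constraint x_u + x_v ≥ 1 forces max(x_u, x_v) ≥ ½, so the rounded set is a cover, and its size is at most twice the LP optimum, hence at most 2·OPT. The sketch below assumes SciPy is available; the triangle graph is an example of my own.

```python
# Sketch of LP relaxation + rounding for Vertex Cover, the classic
# 2-approximation: solve the relaxation, keep every v with x_v >= 1/2.
from scipy.optimize import linprog

# example graph (my own): a triangle on vertices 0, 1, 2 (OPT cover size 2)
n = 3
edges = [(0, 1), (1, 2), (0, 2)]

# minimize sum x_v  s.t.  x_u + x_v >= 1 per edge, i.e. −x_u − x_v <= −1
c = [1.0] * n
A_ub = []
for (u, v) in edges:
    row = [0.0] * n
    row[u] = row[v] = -1.0
    A_ub.append(row)
b_ub = [-1.0] * len(edges)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * n)

# round: keep v iff x_v >= 1/2 (small tolerance against float noise)
cover = [v for v in range(n) if res.x[v] >= 0.5 - 1e-9]

print(res.fun)   # LP optimum 1.5 (the unique optimum has all x_v = 1/2)
print(cover)     # rounded cover: [0, 1, 2], size 3 <= 2 · 1.5
```

On the triangle the LP optimum is fractional (all x_v = ½, value 1.5), so rounding is genuinely needed; the rounded cover has size 3 ≤ 2 · 1.5, within the promised factor of the integral optimum 2.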

Summary
 what is an LP? what are integer LP and 0/1-LP?
 any LP can be written in standard form (or in slack form)
 a normal (that is, not integer) LP can be solved in polynomial time (with input size measured in bits)
 integer LP and 0/1-LP are NP-hard
 when modeling a problem as an LP
− define the variables and how they relate to the problem
− describe the objective function (should be linear)
− describe the constraints (should be linear; strict inequalities are not allowed)
− no need to use standard or slack form, just use the general form