
A polynomial time primal network simplex algorithm for minimum cost flows. James B. Orlin. Presented by Tal Kaminker.

Reminder – minimum cost flow: $n$ nodes, each with a supply ($b_i \ge 0$) or a demand ($b_i \le 0$). $c_{ij}$ is the cost of moving one unit of flow across arc $(i,j)$, $u_{ij}$ is the capacity of arc $(i,j)$, $x_{ij}$ is the flow on arc $(i,j)$, and $r_{ij}$ is the residual capacity of arc $(i,j)$. The objective is to minimize $\sum_{(i,j)} c_{ij} x_{ij}$.
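
To make the notation concrete, here is a minimal Python sketch of one possible arc representation for a minimum cost flow instance. The names (Arc, tail, head, cost, cap, flow, total_cost) are illustrative assumptions, not anything taken from the paper.

from dataclasses import dataclass

@dataclass
class Arc:
    tail: int          # node i
    head: int          # node j
    cost: float        # c_ij: cost of one unit of flow on (i, j)
    cap: float         # u_ij: capacity of (i, j)
    flow: float = 0.0  # x_ij: current flow on (i, j)

    def residual(self) -> float:
        # r_ij: residual capacity of the forward arc (i, j)
        return self.cap - self.flow

def total_cost(arcs):
    # objective value: sum of c_ij * x_ij over all arcs
    return sum(a.cost * a.flow for a in arcs)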

Overview: decomposition of a flow into $(T,L,U)$; the regular network simplex algorithm; the premultiplier algorithm; a cost-scaling version of the premultiplier algorithm; a proof that the cost-scaling version is a polynomial-time algorithm.

Decomposition of a flow into $(T,L,U)$: decompose each feasible flow into $(T,L,U)$, where $T$ is the set of arcs $(i,j)$ with $0 < x_{ij} < u_{ij}$, $L$ is the set of arcs $(i,j)$ with $x_{ij} = 0$, and $U$ is the set of arcs $(i,j)$ with $x_{ij} = u_{ij}$. If $T$ is a spanning tree, the flow can be recovered from the decomposition $(T,L,U)$; in that case $(T,L,U)$ is called a basis structure.
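
The partition step itself is straightforward; here is a small sketch reusing the hypothetical Arc class above, with a tolerance argument added as an assumption for floating-point flows.

def decompose(arcs, tol=1e-9):
    # Split the arcs of a feasible flow into the sets T, L, U.
    T, L, U = [], [], []
    for a in arcs:
        if a.flow <= tol:                # x_ij = 0
            L.append(a)
        elif a.flow >= a.cap - tol:      # x_ij = u_ij
            U.append(a)
        else:                            # 0 < x_ij < u_ij
            T.append(a)
    return T, L, U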

Non-degeneracy assumption: we will assume that every basic feasible flow is non-degenerate; that is, the $T$ of the $(T,L,U)$-decomposition is a spanning tree. To guarantee this, add a virtual node 1 with arcs to and from all other nodes, each with a huge cost and $u_{1i} = \infty$, $u_{i1} = \infty$. Set $b_1 = \varepsilon$ and decrease every other $b_i$ by $\varepsilon/(n-1)$.

Notation: $G(x)$ is the residual network of the flow $x$, and $G^*(x)$ is the subgraph of $G(x)$ in which all arcs of $T$ and their reversals have been deleted. The basic cycle corresponding to an arc $(k,l)$ is the cycle formed by the tree together with the non-tree arc $(k,l)$.

The regular network simplex algorithm. Algorithm network simplex: find an initial feasible basis structure $(T,L,U)$ and let $x$ be the basic feasible flow derived from $(T,L,U)$; while $x$ is not optimal, find an arc $(k,l) \in G^*(x)$ for which the corresponding basic cycle $W$ has a negative cost, send $\delta = \min(r_{ij} : (i,j) \in W)$ units of flow around $W$, find the arc $(p,q)$ that must be removed from $T$, and update $x$ and $(T,L,U)$.

Potential - reminder. A potential is a vector $\pi$ of size $n$. $c^{\pi}_{ij} = c_{ij} - \pi_i + \pi_j$ is the reduced cost of the arc $(i,j)$. For a cycle $W$, $c^{\pi}(W) = \sum_{(i,j) \in W} c^{\pi}_{ij} = c(W)$. Thus, looking for a negative cycle with respect to $\pi$ is the same as looking for a negative cycle with respect to the original costs.
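
A short sketch of the reduced-cost computation, again under the assumed Arc representation; pi is a list of potentials indexed by node.

def reduced_cost(arc, pi):
    # c^pi_ij = c_ij - pi[i] + pi[j]
    return arc.cost - pi[arc.tail] + pi[arc.head]

def cycle_cost(cycle, pi):
    # On a directed cycle the potentials telescope away, so the
    # reduced cost of the cycle equals its original cost c(W).
    return sum(reduced_cost(a, pi) for a in cycle)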

Rooted in-tree: for any tree $T$ and root node $v$, we denote by $T(v)$ the subgraph of $G(x)$ that is the directed spanning in-tree obtained from $T$ by directing all arcs towards $v$. Costs and capacities of arcs in $T(v)$ are defined as in $G(x)$.

Premultipliers. Definition: a vector $\pi$ of node potentials is a set of premultipliers with respect to the rooted in-tree $T(v)$ if $c^{\pi}_{ij} \le 0$ for every arc $(i,j) \in T(v)$.

Premultipliers. Lemma: suppose that $T$ is a tree and $\pi$ is a set of premultipliers with respect to $T(v)$. Then $\pi$ is also a set of premultipliers with respect to $T(i)$ if and only if each arc of $T$ on the path from node $v$ to node $i$ has a reduced cost of 0.

Premultipliers. Definition: let $T$ denote a tree and let $\pi$ denote a vector of premultipliers with respect to $T(v)$ for some node $v$. We say that node $i$ is eligible if $\pi$ is a vector of premultipliers with respect to $T(i)$. Definition: an arc $(i,j) \in G^*(x)$ is eligible if node $i$ is eligible and $c^{\pi}_{ij} < 0$.
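
One possible way to test eligibility in code, assuming $T(v)$ is stored as parent pointers toward the root v and tree_rc[j] holds the reduced cost of the in-tree arc (j, parent[j]); this representation is my assumption, not the paper's.

def node_is_eligible(i, v, parent, tree_rc, tol=1e-9):
    # Node i is eligible iff every in-tree arc on the tree path between
    # i and the root v has reduced cost 0 (those arcs are <= 0 by the
    # premultiplier property, so we only need to rule out strictly negative).
    while i != v:
        if tree_rc[i] < -tol:
            return False
        i = parent[i]
    return True

def arc_is_eligible(k, rc_kl, v, parent, tree_rc):
    # Arc (k, l) in G*(x) is eligible iff node k is eligible and c^pi_kl < 0.
    return rc_kl < 0 and node_is_eligible(k, v, parent, tree_rc)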

Premultipliers. Lemma: let $T$ be a tree, and suppose that $\pi$ is a set of premultipliers with respect to the rooted in-tree $T(v)$. Then the basic cycle induced by each eligible arc has a negative cost.

The premultiplier network simplex algorithm: choose an initial basic feasible structure $(T,L,U)$ and a vector $\pi$ of premultipliers with respect to $T(v)$. While $(T,L,U)$ is not optimal: if there is an eligible arc, select an eligible arc $(k,l)$ and call simplex-pivot$(k,l)$; otherwise call modify-premultipliers.

Modify-premultipliers. Procedure modify-premultipliers: let $S$ denote the set of eligible nodes; $Q := \{(i,j) \in T(v) : i \notin S,\ j \in S\}$; $\Delta := \min(-c^{\pi}_{ij} : (i,j) \in Q)$ {observe that $\Delta > 0$ whenever $S \ne N$}; for each node $j \in S$, increase $\pi_j$ by $\Delta$.
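
A sketch of this procedure under the assumptions above (reduced_cost from the earlier snippet, eligible as a predicate on nodes, tree_arcs as the arcs of $T(v)$); it mutates pi in place.

def modify_premultipliers(nodes, tree_arcs, pi, eligible):
    S = {i for i in nodes if eligible(i)}                  # eligible nodes
    Q = [a for a in tree_arcs if a.tail not in S and a.head in S]
    delta = min(-reduced_cost(a, pi) for a in Q)           # > 0 when S != N
    for j in S:                                            # raise all of S by delta
        pi[j] += delta
    return delta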

Simplex-pivot$(k,l)$. Procedure simplex-pivot$(k,l)$: let $W$ denote the basic cycle containing $(k,l)$ ($W$ is $(k,l)$ plus the path from $l$ to $k$ in $T(k)$); let $\delta = \min(r_{ij} : (i,j) \in W)$; send $\delta$ units of flow around $W$; let $(p,q)$ denote the arc of $W$ that is pivoted out; reset the root of the tree to be $p$.
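
A sketch of the pivot step under the Arc representation above. The cycle is assumed to be given as (arc, direction) pairs, with direction +1 when $W$ traverses the arc forward and -1 when it uses the reversal; ties for the blocking arc are broken arbitrarily here.

def simplex_pivot(cycle):
    def residual(a, d):
        # residual capacity of the arc in the traversal direction
        return a.cap - a.flow if d == +1 else a.flow

    delta = min(residual(a, d) for a, d in cycle)      # flow to push around W
    blocking, new_root = None, None
    for a, d in cycle:
        if blocking is None and residual(a, d) == delta:
            blocking = a                               # the arc (p, q) pivoted out
            new_root = a.tail if d == +1 else a.head   # p, its tail within W
        a.flow += d * delta                            # augment along W
    return blocking, new_root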

Lemma: the premultiplier algorithm maintains a legal vector of premultipliers at every step. Proof: simplex-pivot maintains a legal tree.

Proof (cont.): modify-premultipliers maintains a legal vector of premultipliers. Consider an arc $(i,j)$; let $\pi^{old}$ be the old premultipliers and $\pi^{new}$ the new ones, and recall that $S$ is the set of eligible nodes. If $i \notin S$ and $j \notin S$, then $\pi^{new}_i = \pi^{old}_i$ and $\pi^{new}_j = \pi^{old}_j$, so $c^{\pi^{new}}_{ij} = c^{\pi^{old}}_{ij}$. If $i \in S$ and $j \in S$, then $\pi^{new}_i = \pi^{old}_i + \Delta$ and $\pi^{new}_j = \pi^{old}_j + \Delta$, so $c^{\pi^{new}}_{ij} = c^{\pi^{old}}_{ij} - \Delta + \Delta = c^{\pi^{old}}_{ij}$.

Proof (cont.): if $i \notin S$ and $j \in S$ (the set of these tree arcs is exactly $Q$), then $\pi^{new}_j = \pi^{old}_j + \Delta$ and $\pi^{new}_i = \pi^{old}_i$, so $c^{\pi^{new}}_{ij} = c^{\pi^{old}}_{ij} + \Delta$. $\Delta$ was chosen so that $c^{\pi^{new}}_{ij} \le 0$ for all these arcs and $c^{\pi^{new}}_{ij} = 0$ for at least one of them. Finally, arcs of $T(v)$ with $i \in S$ and $j \notin S$ do not exist: if node $i$ is eligible, then $j$, the next node on the path from $i$ to $v$, is eligible as well.

The premultiplier network simplex algorithm. Lemma: each call of modify-premultipliers strictly increases the number of eligible nodes. Theorem: the premultiplier algorithm is a special case of the network simplex algorithm; as such, it solves the minimum cost flow problem in a finite number of iterations.

Cost-scaling version of the premultiplier algorithm. We will show a cost-scaling version of the algorithm with total running time $O(nm^2 \log(nC))$, where $C = \max_{(i,j)} |c_{ij}|$. The paper gives a version with running time $O(\min\{n^2 m \log(nC),\ n^2 m^2 \log n\})$; later improvements achieve $O(\min\{nm \log(nC) \log n,\ nm^2 \log^2 n\})$.

Cost-scaling algorithm. Definition: a vector of premultipliers $\pi$ with respect to a flow $x$ is a vector of $\varepsilon$-premultipliers if $c^{\pi}_{ij} \ge -\varepsilon$ for all arcs $(i,j) \in G(x)$. The cost-scaling algorithm: start with a basic feasible flow $x$; choose some $\varepsilon$-premultipliers $\pi$ with a large $\varepsilon$; run $\varepsilon$-scaling phases, each reducing $\varepsilon$ by a factor of 2; continue until $\varepsilon$ is small enough that $c^{\pi}_{ij} \ge -\varepsilon$ actually implies $c^{\pi}_{ij} \ge 0$. If all costs are integral, it suffices to run $\varepsilon$-scaling phases until $\varepsilon < 1/n$.

Algorithm scaling premultiplier: choose an initial basic feasible solution $x$ and a vector $\pi$ of premultipliers with respect to $T(v)$; $\varepsilon := \max\{-c^{\pi}_{ij} : c^{\pi}_{ij} \le 0,\ (i,j) \in G(x)\}$; while $x$ is not optimal: improve-approximation$(x,\varepsilon,\pi)$ {$\varepsilon$ is decreased by at least a factor of 2 at each stage}.
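
A sketch of this outer loop. Here residual_arcs, improve_approximation and the node count n are assumed interfaces, and the stopping rule eps < 1/n is the integral-cost criterion from the previous slide rather than an explicit optimality test.

def scaling_premultiplier(x, pi, residual_arcs, improve_approximation, n):
    # initial eps: magnitude of the most negative reduced cost in G(x)
    eps = max((-reduced_cost(a, pi) for a in residual_arcs
               if reduced_cost(a, pi) <= 0), default=0.0)
    while eps >= 1.0 / n:          # with integral costs, eps < 1/n means optimal
        improve_approximation(x, eps, pi)
        eps /= 2.0                 # each phase reduces eps by at least a factor of 2
    return x, pi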

Definition: the set $N^*$ denotes the subset of nodes whose multipliers have yet to change during the $\varepsilon$-scaling phase. That is, if $\pi^0$ denotes the multipliers at the beginning of the current scaling phase and $\pi$ the current multipliers, then $N^* = \{i : \pi_i = \pi^0_i\}$. An arc $(k,l)$ in $G^*(x)$ is called admissible for the $\varepsilon$-scaling phase if node $k$ is eligible and $c^{\pi}_{kl} \le -\varepsilon/4$.

Procedure improve-approximation$(x,\varepsilon,\pi)$: $N^* := N$; while $N^* \ne \emptyset$: if there is an admissible arc, select an admissible arc $(k,l)$ {recall that $c^{\pi}_{kl} \le -\varepsilon/4$} and call simplex-pivot$(k,l)$; otherwise call modify-$\varepsilon$-premultipliers.

Procedure modify-$\varepsilon$-premultipliers: $S :=$ the set of eligible nodes; $N^* := N^* - S$; if $N^* = \emptyset$ then terminate improve-approximation$(x,\varepsilon,\pi)$; $\Delta := \min\{-c^{\pi}_{ij} : (i,j) \in T(v),\ i \notin S,\ j \in S\}$; increase $\pi_i$ by $\min(\Delta, \varepsilon/4)$ for each $i \in S$ {observe that $\Delta > 0$}.
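
A sketch under the same assumed data structures as before; it returns True when $N^*$ becomes empty, which is how the caller (improve-approximation) would know the phase is over.

def modify_eps_premultipliers(N_star, nodes, tree_arcs, pi, eps, eligible):
    S = {i for i in nodes if eligible(i)}            # eligible nodes
    N_star -= S                                      # S leaves N*
    if not N_star:
        return True                                  # terminate the phase
    Q = [a for a in tree_arcs if a.tail not in S and a.head in S]
    delta = min(-reduced_cost(a, pi) for a in Q)     # delta > 0
    for i in S:
        pi[i] += min(delta, eps / 4.0)               # bounded increase, at most eps/4
    return False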

Correctness. Theorem: the algorithm stops after a finite number of iterations and yields an optimal flow at termination. Proof: almost the same as for the regular premultiplier algorithm.

Differences between the two algorithms: the procedure that updates $\pi$ is not allowed to change $\pi_i$ by too much ($\pi^{new}_i \le \pi^{old}_i + \varepsilon/4$); we only pivot on arcs that are negative enough ($c^{\pi}_{ij} \le -\varepsilon/4$); and we maintain $N^*$, which tells us when we have updated enough and need to move to the next value of $\varepsilon$.

Outline of the analysis: each arc is pivoted in at most $8n$ times during a scaling phase, so the total number of pivots per scaling phase is $O(nm)$; in each scaling phase there are $O(nm)$ updates of the vector $\pi$ by the subroutine modify-$\varepsilon$-premultipliers; and after each phase $\varepsilon$ is reduced by at least a factor of 2, so the number of scaling phases is $O(\log(nC))$.

Lemma: suppose that $\pi$ is a vector of $\varepsilon$-premultipliers obtained during the $\varepsilon$-scaling phase, and let $i \notin N^*$. Let $\pi'$ be the vector of $\varepsilon$-premultipliers immediately prior to the most recent execution of modify-$\varepsilon$-premultipliers at which node $i$ was eligible ($\pi'$ is taken immediately prior to the last time the potential of $i$ was updated). Then $0 < \pi_i - \pi'_i \le \varepsilon/4$. Proof: modify-$\varepsilon$-premultipliers increases $\pi'_i$ by $\min(\Delta, \varepsilon/4)$, hence $\pi_i - \pi'_i \le \varepsilon/4$.

Theorem: suppose that $x$ is a basic feasible flow and that $\pi$ is a vector of $\varepsilon$-premultipliers obtained during the $\varepsilon$-scaling phase. For all $(i,j) \in G(x)$, if $i \notin N^*$, then $c^{\pi}_{ij} \ge -\varepsilon/2$. In particular, if $N^* = \emptyset$, then $\pi$ is a vector of $\varepsilon/2$-premultipliers. Proof: let $\pi'$ be the vector of $\varepsilon$-premultipliers immediately prior to the most recent execution of modify-$\varepsilon$-premultipliers at which node $i$ was eligible. At that time the arc $(i,j)$ was not admissible, so one of three cases held: it was a tree arc, or it had $c^{\pi'}_{ij} \ge -\varepsilon/4$, or it had no residual capacity.

Theorem: during the $\varepsilon$-scaling phase, $\pi_i$ is increased by at most $2n\varepsilon$. Proof: first, a lemma stated without proof. Lemma: let $x$ and $x'$ be two distinct flows. Then for any pair of nodes $i$ and $j$, there is a path $P$ in $G(x)$ from node $i$ to node $j$ such that the reversal of $P$ is a path from node $j$ to node $i$ in $G(x')$.

Proof (cont.): let $x$ denote the basic flow and $\pi$ the premultipliers at the beginning of the $\varepsilon$-scaling phase, and let $x'$ denote the basic flow and $\pi'$ the premultipliers at some point during the phase. Select a node $j \in N^*$ (so $\pi_j = \pi'_j$). Let $P$ be a path from $j$ to $i$ in $G(x)$ such that its reversal $P^r$ is a path in $G(x')$. Then $c^{\pi}(P) = c(P) - \pi_j + \pi_i \ge -(n-1)\varepsilon$ and $c^{\pi'}(P^r) = c(P^r) - \pi'_i + \pi'_j \ge -(n-1)\varepsilon$. Adding the two inequalities, and using $c(P^r) = -c(P)$ and $\pi_j = \pi'_j$, gives $\pi'_i \le \pi_i + 2(n-1)\varepsilon$.

Theorem: the algorithm performs at most $8nm$ pivots per scaling phase. Proof: let $\pi$ be the premultipliers when $(i,j)$ or $(j,i)$ was pivoted in, and let $\pi'$ be the premultipliers at the next iteration in which $(i,j)$ or $(j,i)$ was pivoted in. We will prove that $(\pi'_i - \pi_i) + (\pi'_j - \pi_j) \ge \varepsilon/2$.

Proof (cont.): note that if $(\pi'_i - \pi_i) + (\pi'_j - \pi_j) \ge \varepsilon/2$ holds, then every time the arc is re-pivoted the quantity $\pi_i + \pi_j$ must have increased by at least $\varepsilon/2$. Since $(\pi^{final}_i - \pi_i) + (\pi^{final}_j - \pi_j) \le 2n\varepsilon + 2n\varepsilon = 4n\varepsilon$, we conclude that there are at most $8n$ pivots per arc and hence at most $8nm$ pivots per phase (the proof of the key inequality is given on the board).
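
Spelling out the counting, with the per-node bound of $2n\varepsilon$ from the earlier theorem, the number of pivots involving $(i,j)$ or $(j,i)$ in one scaling phase is at most
\[
\frac{(\pi^{final}_i - \pi_i) + (\pi^{final}_j - \pi_j)}{\varepsilon/2}
\;\le\; \frac{2n\varepsilon + 2n\varepsilon}{\varepsilon/2}
\;=\; 8n ,
\]
and summing over the $m$ arcs gives the stated bound of $8nm$ pivots per phase.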

Theorem: the subroutine modify-$\varepsilon$-premultipliers is executed $O(nm)$ times per scaling phase. Proof: if no pivots take place, then after at most 4 executions of the subroutine some arc's reduced cost becomes 0. An arc can have its reduced cost zeroed again only if the arc is pivoted out and pivoted in again, which can happen only $O(n)$ times per arc; thus there are $O(nm)$ executions in total.

Thank you Questions?