Min Cost Flow: Polynomial Algorithms

Overview
Recap: Min Cost Flow, Residual Network, Potential and Reduced Cost
Polynomial Algorithms Approach
Capacity Scaling: Successive Shortest Path Algorithm recap, incorporating scaling
Cost Scaling: Preflow/Push Algorithm recap, incorporating scaling
Double Scaling Algorithm - idea

Min Cost Flow - Recap [figure: example network with nodes v1–v5, arcs labeled with capacity and cost]

Min Cost Flow - Recap: Compute a feasible flow of minimum cost.
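For reference, here is one standard way to state the problem as a linear program; the notation (supplies b(i), capacities u_ij, costs c_ij) is assumed here rather than taken verbatim from the slides:

```latex
\min_{x} \sum_{(i,j)\in E} c_{ij}\, x_{ij}
\quad \text{s.t.} \quad
\sum_{j:(i,j)\in E} x_{ij} - \sum_{j:(j,i)\in E} x_{ji} = b(i) \;\; \forall i \in V,
\qquad 0 \le x_{ij} \le u_{ij} \;\; \forall (i,j) \in E.
```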

Residual Network - Recap

Reduced Cost - Recap
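As a reminder, with node potentials π the reduced cost of an arc (i, j) is

```latex
c^{\pi}_{ij} = c_{ij} - \pi(i) + \pi(j),
```

and the reduced cost optimality condition states that a feasible flow x is optimal if and only if there exist potentials π with c^π_ij ≥ 0 for every arc (i, j) of the residual network G(x).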

Min Cost Flow: Polynomial Algorithms

Approach: We have seen several algorithms for the MCF problem, but none polynomial in log U and log C. Idea: scaling! Scale the capacity/flow values, the costs, or both. Next week: algorithms whose running time is independent of log U and log C (strongly polynomial), which can also solve problems with irrational data.

Capacity Scaling

Successive Shortest Path - Recap

Algorithm complexity: Assuming integrality, there are at most nU iterations. Each iteration computes shortest paths; using Dijkstra (on reduced costs) this is bounded by O(m + n log n) per iteration.
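A minimal runnable sketch of the successive shortest path idea recapped above, assuming integer capacities and costs. For brevity it uses Bellman-Ford (SPFA) on the residual costs instead of Dijkstra with reduced costs, so it illustrates the algorithm rather than the O(m + n log n) per-iteration bound; all names are ad hoc, not from the slides.

```python
from collections import deque

class MinCostFlow:
    """Successive shortest paths; Bellman-Ford/SPFA for shortest paths on residual costs."""

    def __init__(self, n):
        self.n = n
        # graph[v] holds edges as mutable lists: [to, residual_capacity, cost, index_of_reverse_edge]
        self.graph = [[] for _ in range(n)]

    def add_edge(self, u, v, cap, cost):
        self.graph[u].append([v, cap, cost, len(self.graph[v])])
        self.graph[v].append([u, 0, -cost, len(self.graph[u]) - 1])

    def min_cost_flow(self, s, t, maxf):
        """Send up to maxf units from s to t; returns the total cost of the flow sent."""
        total_cost = 0
        while maxf > 0:
            # shortest path from s on residual arcs (SPFA handles the negative reverse costs)
            dist = [float('inf')] * self.n
            in_queue = [False] * self.n
            prev = [None] * self.n            # prev[v] = (predecessor node, edge index at predecessor)
            dist[s] = 0
            q = deque([s])
            while q:
                v = q.popleft()
                in_queue[v] = False
                for i, (to, cap, cost, _) in enumerate(self.graph[v]):
                    if cap > 0 and dist[v] + cost < dist[to]:
                        dist[to] = dist[v] + cost
                        prev[to] = (v, i)
                        if not in_queue[to]:
                            in_queue[to] = True
                            q.append(to)
            if dist[t] == float('inf'):
                break                          # no augmenting path left
            # bottleneck along the shortest path, capped by the remaining demand
            d = maxf
            v = t
            while v != s:
                u, i = prev[v]
                d = min(d, self.graph[u][i][1])
                v = u
            # augment
            v = t
            while v != s:
                u, i = prev[v]
                self.graph[u][i][1] -= d
                self.graph[v][self.graph[u][i][3]][1] += d
                v = u
            maxf -= d
            total_cost += d * dist[t]
        return total_cost


# Example: minimum cost of sending 2 units from node 0 to node 3.
mcf = MinCostFlow(4)
mcf.add_edge(0, 1, 2, 1)
mcf.add_edge(0, 2, 1, 2)
mcf.add_edge(1, 3, 1, 3)
mcf.add_edge(1, 2, 1, 1)
mcf.add_edge(2, 3, 2, 1)
print(mcf.min_cost_flow(0, 3, 2))   # prints 6 (paths 0-1-2-3 and 0-2-3)
```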

Capacity Scaling - Scheme: The Successive Shortest Path Algorithm may push very little flow in each iteration. Fix idea: use scaling. Modify the algorithm to push Δ units of flow at a time, ignoring edges with residual capacity < Δ, until there is no node with excess of at least Δ or no node with deficit of at least Δ. Then decrease Δ by a factor of 2 and repeat, until Δ < 1. A schematic sketch of this loop follows.
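A schematic skeleton of the Δ-scaling outer loop just described; run_delta_phase is a hypothetical callback standing in for "saturate violating arcs, then augment Δ units along shortest paths in G(x, Δ)":

```python
def capacity_scaling(U, run_delta_phase):
    """Outer loop of the Delta-scaling scheme: start with a large Delta, halve until Delta < 1."""
    delta = 1
    while 2 * delta <= U:
        delta *= 2                 # start with Delta = 2^floor(log2 U)
    while delta >= 1:
        run_delta_phase(delta)     # saturate violating arcs, then augment Delta units along
                                   # shortest paths in G(x, Delta) while S(Delta), T(Delta) nonempty
        delta //= 2

# Stub usage: report which phases would run for U = 13.
capacity_scaling(13, lambda d: print("Delta-phase with Delta =", d))   # 8, 4, 2, 1
```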

Definitions: G(x) – the residual network of the current pseudoflow x.

Definitions: G(x, 3) – the sub-network of G(x) containing only arcs with residual capacity at least 3 (an example with Δ = 3).
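The definitions these two slides presumably illustrate, spelled out in the standard Δ-scaling notation (e(i) denotes the excess of node i; this spelling-out is an assumption, not taken verbatim from the slides):

```latex
S(\Delta) = \{\, i \in V : e(i) \ge \Delta \,\}, \qquad
T(\Delta) = \{\, i \in V : e(i) \le -\Delta \,\}, \qquad
G(x,\Delta) = \text{the sub-network of } G(x) \text{ restricted to arcs with } r_{ij} \ge \Delta .
```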

Main Observation in Algorithm. Observation: An augmentation of Δ units must start at a node in S(Δ), follow a path in G(x, Δ), and end at a node in T(Δ). In the Δ-phase we find shortest paths in G(x, Δ) and augment along them. Thus, the edges of G(x, Δ) satisfy the reduced cost optimality conditions. Edges with smaller residual capacity will be considered later.

Initializing Δ-phases [figure: an arc (i, j)]

Capacity Scaling Algorithm

Initial values: the zero pseudoflow and zero potentials (optimal!), and Δ large enough (Δ = 2^⌊log U⌋).

Capacity Scaling Algorithm: At the beginning of the Δ-phase, fix the optimality condition on the new edges, those with residual capacity Δ ≤ r_ij < 2Δ, by saturating the ones that violate it.

Capacity Scaling Algorithm: Augment Δ units along a shortest path in G(x, Δ) from a node in S(Δ) to a node in T(Δ).

Capacity Scaling Algorithm - Correctness

Capacity Scaling Algorithm - Assumption: We assume that a path from k to l exists in G(x, Δ), and that we can compute shortest distances from k to the rest of the nodes. Quick fix: initially add a dummy node D with artificial edges (1, D) and (D, 1) of infinite capacity and very large cost.

Capacity Scaling Algorithm – Complexity The algorithm has O(log U) phases. We analyze each phase separately.

Capacity Scaling Algorithm – Phase Complexity

Capacity Scaling Algorithm – Phase Complexity – Cont.

Capacity Scaling Algorithm – Complexity

Cost Scaling

Approximate Optimality
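The ε-optimality condition recapped here is the standard one: a pseudoflow x is ε-optimal with respect to potentials π if

```latex
c^{\pi}_{ij} \ge -\varepsilon \qquad \text{for every arc } (i,j) \text{ of } G(x).
```

With integer costs, any flow is C-optimal (take π = 0), and an ε-optimal feasible flow with ε < 1/n is optimal; this is what makes O(log(nC)) halvings of ε suffice.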

Approximate Optimality Properties

Algorithm Strategy

Preflow Push Recap

Distance Labels. Distance labels satisfy: d(t) = 0, d(s) = n, and d(v) ≤ d(w) + 1 whenever r(v,w) > 0. Consequently, d(v) is at most the distance from v to t in the residual network, and s must be disconnected from t in the residual network.
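A tiny helper, not from the slides and purely illustrative, that checks these conditions on an explicit residual-capacity matrix:

```python
def labels_valid(d, residual, s, t):
    """Check d(t) = 0, d(s) = n, and d(v) <= d(w) + 1 for every residual arc (v, w)."""
    n = len(residual)
    if d[t] != 0 or d[s] != n:
        return False
    return all(d[v] <= d[w] + 1
               for v in range(n) for w in range(n)
               if residual[v][w] > 0)
```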

Terms: Nodes with positive excess are called active. An arc (v, w) of the residual graph is admissible if d(v) = d(w) + 1.

The preflow push algorithm:
While there is an active node {
    pick an active node v and push/relabel(v)
}
Push/relabel(v) {
    if there is an admissible arc (v, w) then
        push δ = min{e(v), r(v,w)} units of flow from v to w
    else
        d(v) := min{ d(w) + 1 : r(v,w) > 0 }    (relabel)
}
A runnable version is sketched below.
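A compact runnable version of the pseudocode above for maximum flow (the generic push-relabel variant, with no particular node-selection rule and an adjacency-matrix representation for clarity; the function and variable names are ad hoc):

```python
def preflow_push_max_flow(capacity, s, t):
    """Generic push-relabel max flow; capacity is an n x n matrix, returns the max flow value."""
    n = len(capacity)
    flow = [[0] * n for _ in range(n)]
    excess = [0] * n
    d = [0] * n
    d[s] = n                                    # distance labels: d(s) = n, d(t) = 0
    for v in range(n):                          # saturate every arc leaving the source
        if capacity[s][v] > 0:
            flow[s][v] = capacity[s][v]
            flow[v][s] = -capacity[s][v]
            excess[v] += capacity[s][v]
            excess[s] -= capacity[s][v]

    def r(v, w):                                # residual capacity
        return capacity[v][w] - flow[v][w]

    active = [v for v in range(n) if v not in (s, t) and excess[v] > 0]
    while active:
        v = active[-1]
        pushed = False
        for w in range(n):
            if excess[v] > 0 and r(v, w) > 0 and d[v] == d[w] + 1:   # admissible arc
                delta = min(excess[v], r(v, w))                      # push delta = min{e(v), r(v,w)}
                flow[v][w] += delta
                flow[w][v] -= delta
                excess[v] -= delta
                excess[w] += delta
                pushed = True
                if w not in (s, t) and w not in active:
                    active.append(w)
        if not pushed:
            d[v] = min(d[w] + 1 for w in range(n) if r(v, w) > 0)    # relabel
        if excess[v] == 0:
            active.remove(v)
    return sum(flow[s][v] for v in range(n))

# Example: max flow from node 0 to node 3 in a small network (value 5).
cap = [
    [0, 3, 2, 0],
    [0, 0, 1, 2],
    [0, 0, 0, 3],
    [0, 0, 0, 0],
]
print(preflow_push_max_flow(cap, 0, 3))   # prints 5
```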

Running Time: The number of relabelings is (2n−1)(n−2) < 2n². The number of saturating pushes is at most 2nm. The number of nonsaturating pushes is at most 4n²m – shown using the potential Φ = Σ_{v active} d(v).

Back to Min Cost Flow…

Applying Preflow Push's technique

Initialization [figure: an arc (v, w) with label −10]

Push/relabel until no active nodes exist

Correctness. Lemma 1: Let x be a pseudoflow and x' a feasible flow. Then, for every node v with excess in x, there exists a path P in G(x) ending at a node w with deficit, and the reversal of P is a path in G(x'). Proof: Look at the difference x' − x and observe the underlying graph (edges with negative difference are reversed).

Lemma 1: Proof (cont.): In the underlying graph of x' − x, some node with deficit must be reachable from v; otherwise we contradict the feasibility of x'. [figure: nodes v, w and a set S]

Correctness (cont). Corollary: There is an outgoing residual arc incident to every active vertex. Corollary: So we can push/relabel as long as there is an active vertex.

Correctness – Cont.

Correctness (cont)

Complexity. Lemma 2: A node is relabeled at most 3n times.

Lemma 2 – Cont.

Complexity Analysis (Cont.). Lemma: The number of saturating pushes is O(nm). Proof: same as for Preflow Push.

Complexity Analysis – Nonsaturating Pushes. Def: The admissible network is the graph of admissible edges. Lemma: The admissible network remains acyclic throughout the algorithm. Proof: by induction.

Complexity Analysis – Nonsaturating Pushes – Cont. Lemma: The number of nonsaturating pushes is O(n²m). Proof: Let g(i) be the number of nodes reachable from i in the admissible network, and let Φ = Σ_{i active} g(i).

Complexity Analysis – Nonsaturating Pushes – Cont. Φ = Σ_{i active} g(i). By acyclicity, Φ decreases (by at least one) with every nonsaturating push on an arc (i, j): the push deactivates i, and g(j) ≤ g(i) − 1 since j is reachable from i but not vice versa.

Complexity Analysis – Nonsaturating Pushes – Cont. Φ = Σ_{i active} g(i). Initially g(i) = 1. Φ increases by at most n with each saturating push: total increase O(n²m). Φ increases by at most n with each relabeling (no incoming edges become admissible): total increase O(n³). Hence there are O(n²m) nonsaturating pushes.
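Putting these bounds together (for a connected network, m ≥ n − 1):

```latex
\Phi \le n \ \text{initially}, \qquad
\text{total increase of } \Phi \;\le\; \underbrace{O(nm)\cdot n}_{\text{saturating pushes}} \;+\; \underbrace{O(n^2)\cdot n}_{\text{relabelings}} \;=\; O(n^2 m),
```

and since every nonsaturating push decreases Φ by at least one, there are at most O(n²m) nonsaturating pushes.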

Cost Scaling Summary: Total complexity O(n²m log(nC)). This can be improved using the same ideas used to improve preflow push. A schematic outer loop is sketched below.
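A schematic skeleton of the ε-scaling outer loop that this summary refers to; improve_approximation is a hypothetical callback standing in for the push/relabel phase described above:

```python
def cost_scaling(n, C, improve_approximation):
    """Outer loop of cost scaling: halve epsilon until an epsilon-optimal flow with epsilon < 1/n is optimal."""
    eps = float(C)                    # the zero flow with zero potentials is C-optimal
    while eps >= 1.0 / n:
        improve_approximation(eps)    # make the current flow eps-optimal (it is at most 2*eps-optimal on entry)
        eps /= 2.0

# Stub usage: report the scaling phases for n = 4, C = 8.
cost_scaling(4, 8, lambda e: print("phase with eps =", e))   # 8.0, 4.0, 2.0, 1.0, 0.5, 0.25
```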

Double Scaling (If Time Permits)

Double Scaling Idea

Network Transformation: each arc (i, j) with cost c_ij, capacity u_ij and flow x_ij, between nodes i and j with supplies b(i) and b(j), is replaced by a new node (i, j) with supply −u_ij, an arc from i to (i, j) with cost c_ij carrying x_ij, and an arc from j to (i, j) with cost 0 carrying r_ij. The resulting network is uncapacitated.

Improve Approximation – Initialization [figure: the bipartite network with sides N1 and N2]

Capacity Scaling - Scheme

Double Scaling - Correctness: Assuming the algorithm terminates, correctness is immediate, since we augment along admissible paths (a residual path from an excess node to a deficit node always exists – see the cost scaling algorithm), and we relabel only when there are indeed no outgoing admissible edges.

Double Scaling - Complexity: There are O(log U) phases. In each phase, every augmentation clears a node from S(Δ) and does not introduce a new one, so there are O(m) augmentations per phase.

Double Scaling Complexity – Cont.: In each augmentation we find a path of length at most 2n (the admissible network is acyclic and bipartite). We also need to account for retreats.

Double Scaling Complexity – Cont.: In each retreat we relabel. By the lemma above, the potential cannot grow by more than O(l), where l is the length of the path to a deficit node; since the graph is bipartite, l = O(n). So, over all augmentations, there are O(n(m + n)) = O(mn) retreats.

Double Scaling Complexity – Cont. To sum up: we implemented improve-approximation using capacity scaling, giving O(log U) phases, O(m) augmentations per phase, O(n) nodes per path, and O(mn) node retreats in total. Total complexity: O(nm log U log(nC)).

Thank You