Ford-Fulkerson Recap
Residual Flow Network

[figure: a flow network G, edges labeled c(e) (f(e)), alongside its residual graph Gf]

For each edge e of G with capacity c(e) and flow f(e), the residual graph Gf contains:
- a forward edge with residual capacity c(e) - f(e)
- a backward edge with residual capacity f(e)

Delete all edges with residual capacity 0!
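The construction above can be sketched in code. This is an illustrative Python sketch, not code from the slides; the dictionary representation of edges and the small example network are assumptions.

```python
def residual_graph(capacity, flow):
    """capacity, flow: dicts mapping (u, v) -> number.
    Returns a dict mapping (u, v) -> residual capacity, with
    zero-capacity residual edges deleted."""
    gf = {}
    for (u, v), c in capacity.items():
        f = flow.get((u, v), 0)
        if c - f > 0:          # forward edge: residual capacity c(e) - f(e)
            gf[(u, v)] = gf.get((u, v), 0) + (c - f)
        if f > 0:              # backward edge: residual capacity f(e)
            gf[(v, u)] = gf.get((v, u), 0) + f
    return gf

# Small illustrative example: the edge v->t is saturated (3 of 3),
# so it contributes only a backward edge to Gf.
capacity = {('s', 'v'): 5, ('s', 'w'): 2, ('v', 't'): 3}
flow = {('s', 'v'): 3, ('s', 'w'): 0, ('v', 't'): 3}
print(residual_graph(capacity, flow))
# → {('s', 'v'): 2, ('v', 's'): 3, ('s', 'w'): 2, ('t', 'v'): 3}
```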
Special cases:
- A saturated edge u→v in G, e.g. 6 (6), contributes only a backward edge of capacity 6 to Gf.
- An empty edge u→v, e.g. 6 (0), contributes only a forward edge of capacity 6.
- A partially used edge u→v, e.g. 6 (4), contributes a forward edge of capacity 2 and a backward edge of capacity 4.

[figure: anti-parallel edges u→v with 6 (4) and v→u with 3 (2) in G, and the resulting residual edges in Gf]
Ford-Fulkerson Algorithm

Start: f(e) = 0 for all edges e. Compute Gf.
While Gf contains an s-t path:
    Let P be any simple s-t path in Gf.
    Let Δ(P) = min_{e ∈ P} c_f(e).
    For each edge e = (u, v) on P:
        If e is a forward edge, increase f(e) by Δ(P).
        Otherwise, let e' = (v, u) be the edge of G corresponding to e; reduce f(e') by Δ(P).
    Recompute Gf.

Iterate: improve the flow, then recompute Gf. Each flow update pushes Δ(P) flow units along P.
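The loop above can be made runnable as follows. This is a minimal Python sketch assuming integer capacities; the nested-dict graph representation and the depth-first path search are implementation choices, not prescribed by the slides.

```python
def ford_fulkerson(capacity, s, t):
    """capacity: dict mapping u -> {v: c(u, v)}. Returns the max-flow value."""
    # Residual capacities; backward edges start at 0 since f(e) = 0 initially.
    res = {u: dict(nbrs) for u, nbrs in capacity.items()}
    for u, nbrs in capacity.items():
        for v in nbrs:
            res.setdefault(v, {}).setdefault(u, 0)

    def find_path():
        # Any simple s-t path in Gf (here found by depth-first search).
        stack, parent = [s], {s: None}
        while stack:
            u = stack.pop()
            if u == t:
                path = []
                while parent[u] is not None:
                    path.append((parent[u], u))
                    u = parent[u]
                return path[::-1]
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    stack.append(v)
        return None

    value = 0
    while (path := find_path()) is not None:
        delta = min(res[u][v] for u, v in path)   # bottleneck Δ(P)
        for u, v in path:
            res[u][v] -= delta                    # push Δ(P) along P
            res[v][u] += delta                    # grow the anti-parallel edge
        value += delta
    return value

# Illustrative example network (not from the slides):
capacity = {'s': {'v': 5, 'w': 2}, 'v': {'t': 3}, 'w': {'t': 3}, 't': {}}
print(ford_fulkerson(capacity, 's', 't'))   # → 5
```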
We Saw:
- The flow is always feasible and integral.
- Each iteration takes O(m) time.
- The flow value increases by at least 1 in each iteration, so the number of iterations is at most the sum of the capacities; in particular, the algorithm terminates.

Still missing:
- Correctness proof: optimality.
- Running time: we want something faster.
Is the Flow Optimal?

[figures: small example networks with edges labeled c(e) (f(e)), e.g. 5 (1), 2 (2), 3 (3), 2 (2) and 1 (1), 100 (2), where it is not obvious whether the flow shown is optimal]
Recall Ford-Fulkerson: in each iteration, P is *any* simple s-t path in Gf.
Edmonds-Karp Algorithm

Start: f(e) = 0 for all edges e. Compute Gf.
While Gf contains an s-t path:
    Let P be a shortest simple s-t path in Gf.
    Let Δ(P) = min_{e ∈ P} c_f(e).
    For each edge e = (u, v) on P:
        If e is a forward edge, increase f(e) by Δ(P).
        Otherwise, let e' = (v, u) be the edge of G corresponding to e; reduce f(e') by Δ(P).
    Recompute Gf.
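The only change from Ford-Fulkerson is how P is found: breadth-first search returns a shortest s-t path in Gf. A runnable Python sketch, under the same illustrative nested-dict representation as before (not code from the slides):

```python
from collections import deque

def edmonds_karp(capacity, s, t):
    """capacity: dict mapping u -> {v: c(u, v)}. Returns the max-flow value."""
    res = {u: dict(nbrs) for u, nbrs in capacity.items()}
    for u, nbrs in capacity.items():
        for v in nbrs:
            res.setdefault(v, {}).setdefault(u, 0)

    def shortest_path():
        # BFS in Gf, so the returned path is a shortest s-t path.
        parent, q = {s: None}, deque([s])
        while q:
            u = q.popleft()
            if u == t:
                path = []
                while parent[u] is not None:
                    path.append((parent[u], u))
                    u = parent[u]
                return path[::-1]
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        return None

    value = 0
    while (path := shortest_path()) is not None:
        delta = min(res[u][v] for u, v in path)
        for u, v in path:
            res[u][v] -= delta
            res[v][u] += delta
        value += delta
    return value

# Illustrative example (assumed, not from the slides):
capacity = {'s': {'v': 1, 'w': 100}, 'v': {'w': 1, 't': 100}, 'w': {'t': 1}, 't': {}}
print(edmonds_karp(capacity, 's', 't'))   # → 2
```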
Changes to Gf

[figure: Gf with augmenting path P, and Gf' after augmenting along P]

Claim: after augmenting along P,
- only edges of P may disappear;
- at least one of them (a bottleneck edge of P) has to disappear;
- the only new edges are anti-parallel to edges of P.
Claim: the distance from s to t in the residual graph never decreases.
BFS from s in Gf

[figure: BFS layering of Gf from s, with t at distance d; layers 0, 1, ..., d, d+1]

Let d be the length of the augmenting path, i.e. the s-t distance in Gf. Run BFS from s and group the vertices into layers by distance. Every edge of Gf is one of:
- a forward-looking edge, from some layer i to layer i+1 (a "shortcut" edge jumping two or more layers ahead cannot exist, by definition of BFS);
- a sideways-looking edge, staying within a layer;
- a backward-looking edge, going to an earlier layer.

A shortest augmenting path P uses only forward-looking edges, so the new edges created by the augmentation (anti-parallel to edges of P) are all backward-looking. We do not create shortcut edges! Hence the s-t distance cannot decrease.
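The layering argument can be checked mechanically: compute BFS levels from s and classify each residual edge. A small illustrative Python sketch (the example graph is assumed, not from the slides); the assertion encodes the fact that a BFS layering admits no shortcut edges.

```python
from collections import deque

def bfs_levels(gf, s):
    """gf: dict mapping u -> list of neighbors. Returns BFS distances from s."""
    level, q = {s: 0}, deque([s])
    while q:
        u = q.popleft()
        for v in gf.get(u, []):
            if v not in level:
                level[v] = level[u] + 1
                q.append(v)
    return level

def classify(gf, s):
    """Label each edge as forward-, sideways-, or backward-looking."""
    level = bfs_levels(gf, s)
    kinds = {}
    for u, nbrs in gf.items():
        for v in nbrs:
            d = level[v] - level[u]
            assert d <= 1, "a BFS layering cannot contain shortcut edges"
            kinds[(u, v)] = ('forward-looking' if d == 1 else
                             'sideways-looking' if d == 0 else
                             'backward-looking')
    return kinds

# Levels here: s=0, v=1, w=1, t=2.
gf = {'s': ['v', 'w'], 'v': ['w', 't'], 'w': ['s'], 't': []}
print(classify(gf, 's'))
```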
Claim: a phase (a maximal run of iterations during which the s-t distance in Gf stays equal to some value d) has no more than m iterations.
BFS from s in Gf at the Start of the Phase

[figure: BFS layering of Gf at the start of the phase; layers 0 through d, with t in layer d, then d+1]
BFS from s in Gf at the Start of the Phase

- We never create shortcut edges, so we do not create new forward-looking edges during a phase (by induction over the iterations of the phase).
- As long as the augmenting path has length d, it has to visit every layer in turn, i.e. it uses only forward-looking edges of the layering fixed at the start of the phase.
- At least one forward-looking edge (a bottleneck edge of P) is deleted in each iteration.

Conclusion: at most m iterations per phase! Since the s-t distance never decreases and strictly increases between phases, there are at most n phases, so Edmonds-Karp performs O(nm) iterations overall.