CS38 Introduction to Algorithms Lecture 10 May 1, 2014

Outline
Dynamic programming design paradigm:
– longest common subsequence
– edit distance / string alignment
– shortest paths revisited: Bellman-Ford
– detecting negative cycles in a graph
– all-pairs shortest paths
* some slides from Kevin Wayne

Dynamic programming
“programming” = “planning”; “dynamic” = “over time”
basic idea:
– identify subproblems
– express the solution to a subproblem in terms of other “smaller” subproblems
– build the solution bottom-up by filling in a table
defining the subproblem is the hardest part

Dynamic programming summary
identify subproblems:
– present in the recursive formulation, or
– reason about what residual problem needs to be solved after a simple choice
find an order in which to fill in the table
running time = (size of table) × (time per cell)
optimize space by keeping only part of the table
store extra info to reconstruct the solution

Longest common subsequence
Two strings:
– x = x_1 x_2 … x_m
– y = y_1 y_2 … y_n
Goal: find the longest string z that occurs as a subsequence of both.
e.g. x = gctatcgatctagcttata
     y = catgcaagcttgcactgatctcaaa
     z = tattctcta

Longest common subsequence
Two strings:
– x = x_1 x_2 … x_m
– y = y_1 y_2 … y_n
structure of an LCS: let z_1 z_2 … z_k be an LCS of x_1 x_2 … x_m and y_1 y_2 … y_n
– if x_m = y_n, then z_k = x_m = y_n, and z_1 z_2 … z_{k-1} is an LCS of x_1 x_2 … x_{m-1} and y_1 y_2 … y_{n-1}

Longest common subsequence
Two strings:
– x = x_1 x_2 … x_m
– y = y_1 y_2 … y_n
structure of an LCS: let z_1 z_2 … z_k be an LCS of x_1 x_2 … x_m and y_1 y_2 … y_n
– if x_m ≠ y_n, then
  z_k ≠ x_m ⇒ z is an LCS of x_1 x_2 … x_{m-1} and y_1 y_2 … y_n
  z_k ≠ y_n ⇒ z is an LCS of x_1 x_2 … x_m and y_1 y_2 … y_{n-1}

Longest common subsequence
Two strings:
– x = x_1 x_2 … x_m
– y = y_1 y_2 … y_n
Subproblems: a prefix of x and a prefix of y
OPT(i, j) = length of an LCS of x_1 x_2 … x_i and y_1 y_2 … y_j
using the structure of an LCS:
OPT(i, j) = 0                                 if i = 0 or j = 0
OPT(i, j) = OPT(i-1, j-1) + 1                 if x_i = y_j
OPT(i, j) = max{ OPT(i, j-1), OPT(i-1, j) }   if x_i ≠ y_j

Longest common subsequence
what order to fill in the table?

LCS-length(x, y: strings)
  OPT(i, 0) = 0 for all i
  OPT(0, j) = 0 for all j
  for i = 1 to m
    for j = 1 to n
      if x_i = y_j then OPT(i, j) = OPT(i-1, j-1) + 1
      elseif OPT(i-1, j) ≥ OPT(i, j-1) then OPT(i, j) = OPT(i-1, j)
      else OPT(i, j) = OPT(i, j-1)
  return OPT(m, n)

running time? O(mn)
space? O(mn) — can be improved to O(min{n, m}) by keeping only the current and previous rows
reconstruct the LCS? store which of the 3 cases was taken in each cell, then trace back from cell (m, n)
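To make the table-filling and the reconstruction concrete, here is a minimal Python sketch (an illustration, not the lecture's reference code; instead of storing which case was taken in each cell, it re-derives the case from the finished table, which is equivalent):

def lcs(x: str, y: str) -> str:
    """Table-filling LCS with reconstruction; O(mn) time and space."""
    m, n = len(x), len(y)
    # opt[i][j] = length of an LCS of x[:i] and y[:j]; row 0 and column 0 are base cases
    opt = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                opt[i][j] = opt[i - 1][j - 1] + 1
            else:
                opt[i][j] = max(opt[i - 1][j], opt[i][j - 1])
    # walk backwards through the table, replaying which case produced each cell
    out, i, j = [], m, n
    while i > 0 and j > 0:
        if x[i - 1] == y[j - 1]:
            out.append(x[i - 1])
            i, j = i - 1, j - 1
        elif opt[i - 1][j] >= opt[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))

# the slides' example; prints an LCS such as "tattctcta"
print(lcs("gctatcgatctagcttata", "catgcaagcttgcactgatctcaaa"))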

Edit distance
How similar are two strings? e.g. ocurrance and occurrence

ocurrance–
occurrence   (6 mismatches, 1 gap)

oc–urrance
occurrence   (1 mismatch, 1 gap)

oc–urr–ance
occurre–nce  (0 mismatches, 3 gaps)

Edit distance
Edit distance between two strings:
– gap penalty δ
– mismatch penalty α_pq
– distance = sum of gap and mismatch penalties
– many variations, many applications
e.g.:
CT–GACCTACG
CTGGACGAACG
cost = δ + α_CG + α_TA

String alignment
Given two strings:
– x = x_1 x_2 … x_m
– y = y_1 y_2 … y_n
alignment = sequence of pairs (x_i, y_j) such that
– each symbol is in at most one pair
– no crossings: if (x_i, y_j) and (x_i′, y_j′) are both in M with i < i′, then j < j′
– cost(M) = sum of α_{x_i y_j} over the pairs (x_i, y_j) in M, plus δ for each unmatched symbol
e.g.:
CTACC–G
–TACATG
M = {(x_2, y_1), (x_3, y_2), (x_4, y_3), (x_5, y_4), (x_6, y_6)}

String alignment
Given two strings:
– x = x_1 x_2 … x_m
– y = y_1 y_2 … y_n
alignment = sequence of pairs (x_i, y_j)
– cost(M) = sum of mismatch penalties α over matched pairs, plus δ per unmatched symbol
Goal: find a minimum-cost alignment

String alignment
subproblem: OPT(i, j) = minimum cost of aligning prefixes x_1 x_2 … x_i and y_1 y_2 … y_j
– case 1: x_i matched with y_j
  cost = α_{x_i y_j} + OPT(i-1, j-1)
– case 2: x_i unmatched
  cost = δ + OPT(i-1, j)
– case 3: y_j unmatched
  cost = δ + OPT(i, j-1)

String alignment
subproblem: OPT(i, j) = minimum cost of aligning prefixes x_1 x_2 … x_i and y_1 y_2 … y_j
conclude:
OPT(i, 0) = i·δ, OPT(0, j) = j·δ
OPT(i, j) = min{ α_{x_i y_j} + OPT(i-1, j-1), δ + OPT(i-1, j), δ + OPT(i, j-1) } for i, j ≥ 1

String alignment

STRING-ALIGNMENT(m, n, x_1, …, x_m, y_1, …, y_n, δ, α)
  for i = 0 to m
    M[i, 0] ← i·δ
  for j = 0 to n
    M[0, j] ← j·δ
  for i = 1 to m
    for j = 1 to n
      M[i, j] ← min { α[x_i, y_j] + M[i-1, j-1],
                      δ + M[i-1, j],
                      δ + M[i, j-1] }
  return M[m, n]

running time? O(nm)
space? O(nm) — can improve to O(n + m) (how?)
can recover the alignment (how?)
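For illustration, a small Python version of STRING-ALIGNMENT. The slides leave δ and α abstract, so the unit penalties in the demo call (δ = 1, mismatch cost 1, which makes this plain edit distance) are assumptions of this sketch:

def alignment_cost(x, y, delta=1.0, alpha=None):
    """Minimum-cost alignment via the recurrence above; O(mn) time and space."""
    if alpha is None:
        # assumed penalties: 0 for a match, 1 for a mismatch
        alpha = lambda a, b: 0.0 if a == b else 1.0
    m, n = len(x), len(y)
    M = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        M[i][0] = i * delta                    # x_1 … x_i all unmatched
    for j in range(1, n + 1):
        M[0][j] = j * delta                    # y_1 … y_j all unmatched
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            M[i][j] = min(alpha(x[i - 1], y[j - 1]) + M[i - 1][j - 1],  # case 1
                          delta + M[i - 1][j],                          # case 2
                          delta + M[i][j - 1])                          # case 3
    return M[m][n]

print(alignment_cost("ocurrance", "occurrence"))  # 2.0: one gap + one mismatch, as above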

Shortest paths (again)
Given a directed graph G = (V, E) with (possibly negative) edge weights.
Find the shortest path from node s to node t.
[figure: example digraph with source s and destination t; the edge costs along the highlighted path sum to 18]

Shortest paths
Didn’t we do that with Dijkstra?
– Dijkstra can fail if there are negative weights
Idea: add a constant to every edge to make all weights nonnegative?
– no: comparable paths may have different numbers of edges, so the shift distorts their relative costs
[figure: small examples on nodes s, t, u, v, w illustrating both failures]

Shortest paths
negative cycle = a directed cycle such that the sum of its edge weights is negative
[figure: a negative cycle W]

Shortest paths
Lemma: If some path from v to t contains a negative cycle, then there does not exist a shortest path from v to t.
Proof: go around the cycle repeatedly to make the path length arbitrarily small.
[figure: a v ↝ t path through a cycle W with c(W) < 0]

Shortest paths
Lemma: If G has no negative cycles, then there exists a shortest path from v to t that is simple (has ≤ n – 1 edges).
Proof:
– consider a cheapest v ↝ t path P
– if P contains a cycle W, then c(W) ≥ 0, so we can remove the portion of P corresponding to W without increasing the cost
[figure: a v ↝ t path through a cycle W with c(W) ≥ 0]

Shortest paths
Shortest-path problem: given a digraph with edge weights c_vw and no negative cycles, find a cheapest v ↝ t path for each node v.
Negative-cycle problem: given a digraph with edge weights c_vw, find a negative cycle (if one exists).
[figure: a shortest-paths tree into t, and a digraph containing a negative cycle]

Shortest paths
subproblem: OPT(i, v) = cost of the shortest v ↝ t path that uses ≤ i edges
– case 1: the shortest such path uses ≤ i – 1 edges
  OPT(i, v) = OPT(i – 1, v)
– case 2: the shortest such path uses exactly i edges
  it is an edge (v, w) followed by a shortest w ↝ t path using ≤ i – 1 edges, so
  OPT(i, v) = min over edges (v, w) ∈ E of c_vw + OPT(i – 1, w)
base case: OPT(0, t) = 0 and OPT(0, v) = ∞ for v ≠ t

Shortest paths
subproblem: OPT(i, v) = cost of the shortest v ↝ t path that uses ≤ i edges
OPT(n – 1, v) = cost of the shortest v ↝ t path overall, if there are no negative cycles. Why?
– by the earlier lemma, we can assume the path is simple, hence has ≤ n – 1 edges

Shortest paths

SHORTEST-PATHS(V, E, c, t)
  foreach node v ∈ V
    M[0, v] ← ∞
  M[0, t] ← 0
  for i = 1 to n – 1
    foreach node v ∈ V
      M[i, v] ← M[i – 1, v]
      foreach edge (v, w) ∈ E
        M[i, v] ← min { M[i, v], M[i – 1, w] + c_vw }

running time? O(nm)
space? O(n²) — can improve to O(n) (how?)
can recover the path (how?)
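A Python sketch of SHORTEST-PATHS (the edge-dictionary input format is an assumption of the sketch). Keeping only the current and previous rows of the table already realizes the O(n) space improvement asked about above:

import math

def shortest_paths_to(t, nodes, edges):
    """edges maps (v, w) -> cost c_vw; assumes no negative cycles.
    Returns M with M[v] = cost of a shortest v ~> t path; O(nm) time, O(n) space."""
    M = {v: math.inf for v in nodes}
    M[t] = 0.0
    for _ in range(len(nodes) - 1):    # simple paths have <= n - 1 edges
        prev = dict(M)                 # row i - 1 of the table
        for (v, w), c in edges.items():
            M[v] = min(M[v], prev[w] + c)
    return M

print(shortest_paths_to("t", ["s", "u", "t"],
                        {("s", "u"): 1, ("u", "t"): 2, ("s", "t"): 5}))
# {'s': 3.0, 'u': 2.0, 't': 0.0}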

Shortest paths
Space optimization: two n-element arrays
– d(v) = cost of the shortest v ↝ t path found so far
– successor(v) = next node on the current v ↝ t path
Performance optimization:
– if d(w) was not updated in iteration i – 1, then there is no reason to consider edges entering w in iteration i

Bellman-Ford

BELLMAN-FORD(V, E, c, t)
  foreach node v ∈ V
    d(v) ← ∞
    successor(v) ← null
  d(t) ← 0
  for i = 1 to n – 1                                   ← one pass
    foreach node w ∈ V
      if d(w) was updated in the previous iteration
        foreach edge (v, w) ∈ E
          if d(v) > d(w) + c_vw
            d(v) ← d(w) + c_vw
            successor(v) ← w
    if no d(w) value changed in iteration i, STOP      ← early stopping rule
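One possible Python rendering of this optimized version (a sketch under assumptions, not the official implementation): it expects an incoming-edge map in_edges so that “edges entering w” can be enumerated cheaply, and it revisits only nodes whose d-value changed in the previous pass:

import math

def bellman_ford(t, nodes, in_edges):
    """in_edges[w] = list of (v, c_vw) pairs, one per edge v -> w.
    Returns (d, successor); assumes no negative cycles."""
    d = {v: math.inf for v in nodes}
    successor = {v: None for v in nodes}
    d[t] = 0.0
    updated = {t}                      # nodes whose d-value changed in the last pass
    for _ in range(len(nodes) - 1):    # one pass per iteration
        changed = set()
        for w in updated:              # skip w if d(w) did not change last pass
            for v, c in in_edges.get(w, ()):
                if d[w] + c < d[v]:    # relax edge (v, w)
                    d[v] = d[w] + c
                    successor[v] = w
                    changed.add(v)
        if not changed:                # early stopping rule
            break
        updated = changed
    return d, successor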

Bellman-Ford
notice that the algorithm is well suited to a distributed, “local” implementation:
– n iterations/passes
– each time, node v updates M(v) based only on the M(w) values of its neighbors
this is an important property exploited in routing protocols
Dijkstra, by contrast, is “global” (e.g., it must maintain the set S)

Bellman-Ford
Is this correct?
Attempt: after the i-th pass, d(v) = cost of the shortest v ↝ t path using at most i edges.
Counterexample:
[figure: nodes v, w, t with edge costs 4 (v → t), 1 (v → w), 2 (w → t); d(t) = 0 and d(w) = 2, and if node w is considered before node v, then d(v) = 3 already after 1 pass]

Bellman-Ford
Lemma: Throughout the algorithm, d(v) is the cost of some v ↝ t path; after the i-th pass, d(v) is no larger than the cost of the shortest v ↝ t path using ≤ i edges.
Proof (induction on i):
– assume the claim holds after the i-th pass
– let P be any v ↝ t path with i + 1 edges
– let (v, w) be the first edge on P and let P′ be the subpath from w to t
– by the inductive hypothesis, d(w) ≤ c(P′), since P′ is a w ↝ t path with i edges
– after considering v in pass i + 1:
  d(v) ≤ c_vw + d(w) ≤ c_vw + c(P′) = c(P)
Theorem: Given a digraph with no negative cycles, the algorithm computes the cost of the shortest v ↝ t paths in O(mn) time and O(n) space.

Bellman-Ford: analysis
Claim: throughout the Bellman-Ford algorithm, following the successor(v) pointers gives a directed path from v to t of cost d(v).
Counterexample: the claim is false!
– the cost of the successor v ↝ t path may be strictly lower than d(v)
[figure, two frames, nodes considered in order t, 1, 2, 3: at first d(t) = 0, d(1) = 10, d(3) = 1, with s(1) = t, s(2) = 1, s(3) = t; a later update in the same pass lowers d(1) to 2 with s(1) = 3, so the successor path from node 2 now costs less than d(2)]

Bellman-Ford: analysis
Claim: throughout the Bellman-Ford algorithm, following the successor(v) pointers gives a directed path from v to t of cost d(v).
Counterexample: the claim is false!
– the cost of the successor v ↝ t path may be strictly lower than d(v)
– the successor graph may even contain cycles
[figure, two frames, nodes considered in order t, 1, 2, 3, 4: at first d(t) = 0, d(1) = 5, d(2) = 8, d(3) = 10, d(4) = 11; after d(1) drops to 3, the successor pointers form a cycle]

Bellman-Ford
Lemma: If the successor graph contains a directed cycle W, then W is a negative cycle.
Proof:
– if successor(v) = w, we must have d(v) ≥ d(w) + c_vw
  (LHS and RHS are equal when successor(v) is set; d(w) can only decrease; and d(v) decreases only when successor(v) is reset)
– let v_1 → v_2 → … → v_k be the nodes along the cycle W
– assume that (v_k, v_1) is the last edge added to the successor graph
– just prior to that:
  d(v_1) ≥ d(v_2) + c(v_1, v_2)
  d(v_2) ≥ d(v_3) + c(v_2, v_3)
  ⋮
  d(v_{k-1}) ≥ d(v_k) + c(v_{k-1}, v_k)
  d(v_k) > d(v_1) + c(v_k, v_1)   (strict inequality, since we are updating d(v_k))
– adding the inequalities and cancelling the d terms:
  c(v_1, v_2) + c(v_2, v_3) + … + c(v_{k-1}, v_k) + c(v_k, v_1) < 0

Bellman-Ford
Theorem: Given a digraph with no negative cycles, the algorithm finds the shortest s ↝ t paths in O(mn) time and O(n) space.
Proof:
– the successor graph cannot have a cycle (previous lemma)
– thus, following the successor pointers from s yields a directed path to t
– let s = v_1 → v_2 → … → v_k = t be the nodes along this path P
– upon termination, if successor(v) = w, we must have d(v) = d(w) + c_vw
  (LHS and RHS are equal when successor(v) is set, and d(·) did not change afterwards)
– thus:
  d(v_1) = d(v_2) + c(v_1, v_2)
  d(v_2) = d(v_3) + c(v_2, v_3)
  ⋮
  d(v_{k-1}) = d(v_k) + c(v_{k-1}, v_k)
– adding the equations yields d(s) = d(t) + c(v_1, v_2) + c(v_2, v_3) + … + c(v_{k-1}, v_k), where d(s) is the minimum cost of any s ↝ t path, d(t) = 0, and the sum is the cost of path P
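The negative-cycle problem stated earlier falls out of the same machinery: with no negative cycle the d-values converge within n – 1 passes, so any improvement still possible afterwards betrays a negative cycle. A minimal Python sketch of that check; initializing every d(v) to 0 (equivalent to assuming a zero-cost edge from every node to a fresh sink) is a choice made for this sketch:

def has_negative_cycle(nodes, edges):
    """edges maps (v, w) -> cost c_vw. Settles for n - 1 passes, then tests
    whether one more pass could still improve some d-value."""
    d = {v: 0.0 for v in nodes}        # every node starts at 0, so all nodes participate
    for _ in range(max(len(nodes) - 1, 0)):
        for (v, w), c in edges.items():
            if d[w] + c < d[v]:
                d[v] = d[w] + c
    # one extra pass: any further improvement must come from a negative cycle
    return any(d[w] + c < d[v] for (v, w), c in edges.items())

print(has_negative_cycle(["a", "b", "c"],
                         {("a", "b"): 1, ("b", "c"): -2, ("c", "a"): -1}))  # True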