
Slide 1 - Unit 4: Dynamic Programming (Spring 2013)
․ Course contents:
  - Assembly-line scheduling
  - Rod cutting
  - Matrix-chain multiplication
  - Longest common subsequence
  - Optimal binary search trees
  - Applications: optimal polygon triangulation, flip-chip routing, technology mapping for logic synthesis
․ Reading: Chapter 15

Slide 2 - Dynamic Programming (DP) vs. Divide-and-Conquer
․ Both solve problems by combining the solutions to subproblems.
․ Divide-and-conquer algorithms
  - Partition a problem into independent subproblems, solve the subproblems recursively, and then combine their solutions to solve the original problem.
  - Inefficient if they solve the same subproblem more than once.
․ Dynamic programming (DP)
  - Applicable when the subproblems are not independent.
  - DP solves each subproblem just once.

Slide 3 - Assembly-Line Scheduling
․ An auto chassis enters each assembly line, has parts added at stations, and a finished auto exits at the end of the line.
  - S_{i,j}: the jth station on line i
  - a_{i,j}: the assembly time required at station S_{i,j}
  - t_{i,j}: the transfer time from station S_{i,j} to station j+1 of the other line
  - e_i (x_i): the time to enter (exit) line i
[Figure: two parallel assembly lines; the chassis enters, passes through stations S_{i,1}..S_{i,n} with assembly times a_{i,j}, may transfer between lines with times t_{i,j}, and exits.]

Slide 4 - Optimal Substructure
․ Objective: Determine which stations to choose to minimize the total manufacturing time for one auto.
  - Brute force: Θ(2^n). Why? Each of the n stations is taken from either line 1 or line 2.
  - The problem is linearly ordered and cannot be rearranged => dynamic programming?
․ Optimal substructure: If the fastest way through station S_{i,j} is through S_{1,j-1}, then the chassis must have taken a fastest way from the starting point through S_{1,j-1}.
[Figure: the same two-line assembly diagram as on Slide 3.]

Slide 5 - Overlapping Subproblems: Recurrence
․ Overlapping subproblems: The fastest way through station S_{1,j} is either through S_{1,j-1} and then S_{1,j}, or through S_{2,j-1}, then a transfer to line 1, and then S_{1,j}.
․ f_i[j]: fastest time from the starting point through S_{i,j}
  - f_1[1] = e_1 + a_{1,1}, and for j >= 2:
    f_1[j] = min(f_1[j-1] + a_{1,j}, f_2[j-1] + t_{2,j-1} + a_{1,j})   (symmetrically for f_2[j])
․ The fastest time all the way through the factory: f* = min(f_1[n] + x_1, f_2[n] + x_2)
[Figure: the same two-line assembly diagram as on Slide 3.]

Slide 6 - Computing the Fastest Time
․ l_i[j]: the line number whose station j-1 is used in a fastest way through S_{i,j}

Fastest-Way(a, t, e, x, n)
 1. f1[1] = e1 + a_{1,1}
 2. f2[1] = e2 + a_{2,1}
 3. for j = 2 to n
 4.   if f1[j-1] + a_{1,j} <= f2[j-1] + t_{2,j-1} + a_{1,j}
 5.     f1[j] = f1[j-1] + a_{1,j}
 6.     l1[j] = 1
 7.   else f1[j] = f2[j-1] + t_{2,j-1} + a_{1,j}
 8.     l1[j] = 2
 9.   if f2[j-1] + a_{2,j} <= f1[j-1] + t_{1,j-1} + a_{2,j}
10.     f2[j] = f2[j-1] + a_{2,j}
11.     l2[j] = 2
12.   else f2[j] = f1[j-1] + t_{1,j-1} + a_{2,j}
13.     l2[j] = 1
14. if f1[n] + x1 <= f2[n] + x2
15.   f* = f1[n] + x1
16.   l* = 1
17. else f* = f2[n] + x2
18.   l* = 2

Running time? Linear time!
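
A minimal Python sketch of the same computation, assuming 0-indexed lists: a[i][j] is the assembly time at station j of line i, t[i][j] the transfer time after station j, and e[i] / x[i] the entry / exit times (i in {0, 1}).

def fastest_way(a, t, e, x):
    n = len(a[0])
    f = [[0.0] * n for _ in range(2)]
    l = [[0] * n for _ in range(2)]          # l[i][j]: line used at station j-1
    f[0][0] = e[0] + a[0][0]
    f[1][0] = e[1] + a[1][0]
    for j in range(1, n):
        for i in range(2):
            other = 1 - i
            stay = f[i][j - 1] + a[i][j]
            switch = f[other][j - 1] + t[other][j - 1] + a[i][j]
            if stay <= switch:
                f[i][j], l[i][j] = stay, i
            else:
                f[i][j], l[i][j] = switch, other
    if f[0][n - 1] + x[0] <= f[1][n - 1] + x[1]:
        best, line = f[0][n - 1] + x[0], 0
    else:
        best, line = f[1][n - 1] + x[1], 1
    # Recover the chosen stations by walking the l[][] table backwards
    # (this is exactly what Print-Station on the next slide does).
    path = [(line, n - 1)]
    for j in range(n - 1, 0, -1):
        line = l[line][j]
        path.append((line, j - 1))
    return best, list(reversed(path))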

Slide 7 - Constructing the Fastest Way

Print-Station(l, n)
 1. i = l*
 2. print "line " i ", station " n
 3. for j = n downto 2
 4.   i = l_i[j]
 5.   print "line " i ", station " j-1

․ Example output:
  line 1, station 6
  line 2, station 5
  line 2, station 4
  line 1, station 3
  line 2, station 2
  line 1, station 1
[Figure: the two-line, 6-station example instance.]

Slide 8 - An Example
[Figure: a 6-station worked example; the f_1[j], f_2[j], l_1[j], l_2[j] tables are filled in step by step.]
․ Result: f* = 38, l* = 1.

Slide 9 - Dynamic Programming (DP)
․ Typically applies to optimization problems.
․ Generic approach
  - Calculate the solutions to all subproblems.
  - Proceed from the small subproblems to the larger subproblems.
  - Compute a subproblem based on previously computed results for smaller subproblems.
  - Store the solution to a subproblem in a table and never recompute it.
․ Development of a DP algorithm
  1. Characterize the structure of an optimal solution.
  2. Recursively define the value of an optimal solution.
  3. Compute the value of an optimal solution bottom-up.
  4. Construct an optimal solution from the computed information (omitted if only the optimal value is required).

Slide 10 - When to Use Dynamic Programming (DP)
․ DP computes a recurrence efficiently by storing partial results, so it is efficient only when the number of distinct partial results is small.
․ Hopeless configurations: the n! permutations of an n-element set, the 2^n subsets of an n-element set, etc.
․ Promising configurations: the contiguous substrings of an n-character string, the n(n+1)/2 possible subtrees of a binary search tree, etc.
․ DP works best on objects that are linearly ordered and cannot be rearranged!
  - Linear assembly lines, matrices in a chain, characters in a string, points around the boundary of a polygon, points on a line/circle, the left-to-right order of leaves in a search tree, etc.
  - Objects are ordered left-to-right => smell DP?

Slide 11 - Rod Cutting
․ Cut steel rods into pieces to maximize the revenue.
  - Each cut is free; rod lengths are integral numbers of units.
․ Input: A length n and a table of prices p_i, for i = 1, 2, ..., n.
․ Output: The maximum revenue obtainable for rods whose lengths sum to n, computed as the sum of the prices of the individual pieces.
․ Price table (the CLRS sample instance):
  length i :  1  2  3  4  5  6  7  8  9 10
  price p_i:  1  5  8  9 10 17 17 20 24 30
  - Example: for length 4, the best is two pieces of length 2, so max revenue r_4 = 5 + 5 = 10.
․ Objects are linearly ordered (and cannot be rearranged)??

Slide 12 - Optimal Rod Cutting
․ If p_n is large enough, an optimal solution might require no cuts at all, i.e., just leave the rod n units long.
․ Maximum revenue r_i for each length i (same price table as Slide 11):
  r_1 = 1   from solution 1 = 1 (no cuts)
  r_2 = 5   from solution 2 = 2 (no cuts)
  r_3 = 8   from solution 3 = 3 (no cuts)
  r_4 = 10  from solution 4 = 2 + 2
  r_5 = 13  from solution 5 = 2 + 3
  r_6 = 17  from solution 6 = 6 (no cuts)
  r_7 = 18  from solution 7 = 1 + 6 or 7 = 2 + 2 + 3
  r_8 = 22  from solution 8 = 2 + 6

Slide 13 - Optimal Substructure
․ Optimal substructure: To solve the original problem, solve subproblems of smaller sizes. The optimal solution to the original problem incorporates optimal solutions to the subproblems.
  - We may solve the subproblems independently.
․ After making a cut, we have two subproblems.
  - Max revenue r_7: r_7 = 18 = r_4 + r_3 = (r_2 + r_2) + r_3, or r_1 + r_6.
․ Decomposition with only one subproblem: some cut gives a first piece of length i on the left and a remaining piece of length n - i on the right, so
  r_n = max over 1 <= i <= n of (p_i + r_{n-i}),  with r_0 = 0.

Slide 14 - Recursive Top-Down "Solution"
․ Inefficient solution: Cut-Rod calls itself repeatedly, even on subproblems it has already solved!

Cut-Rod(p, n)
 1. if n == 0
 2.   return 0
 3. q = -infinity
 4. for i = 1 to n
 5.   q = max(q, p[i] + Cut-Rod(p, n - i))
 6. return q

․ The recursion tree for Cut-Rod(p, 4) re-solves Cut-Rod(p, 0), Cut-Rod(p, 1), ... over and over: T(n) = 2^n calls.
․ Overlapping subproblems?
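
A minimal sketch of the naive recursion in Python, assuming p[i] is the price of a piece of length i (p[0] unused); the call counter just illustrates how quickly the same subproblems get re-solved.

calls = 0

def cut_rod(p, n):
    global calls
    calls += 1
    if n == 0:
        return 0
    q = float("-inf")
    for i in range(1, n + 1):
        q = max(q, p[i] + cut_rod(p, n - i))
    return q

p = [0, 1, 5, 8, 9, 10, 17, 17, 20]   # sample prices for lengths 1..8
print(cut_rod(p, 8), calls)           # revenue 22, 2^8 = 256 calls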

Slide 15 - Top-Down DP: Cut-Rod with Memoization
․ Complexity: O(n^2) time.
  - Each subproblem is solved just once, for sizes 0, 1, ..., n; to solve a subproblem of size n, the for loop iterates n times.

Memoized-Cut-Rod(p, n)
 1. let r[0..n] be a new array
 2. for i = 0 to n
 3.   r[i] = -infinity
 4. return Memoized-Cut-Rod-Aux(p, n, r)

Memoized-Cut-Rod-Aux(p, n, r)
 1. if r[n] >= 0
 2.   return r[n]            // each overlapping subproblem is solved just once!
 3. if n == 0
 4.   q = 0
 5. else q = -infinity
 6.   for i = 1 to n
 7.     q = max(q, p[i] + Memoized-Cut-Rod-Aux(p, n - i, r))
 8. r[n] = q
 9. return q
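
A minimal Python sketch of the memoized version, assuming the same 1-indexed price list p as above; the dictionary r plays the role of the memo table.

def memoized_cut_rod(p, n, r=None):
    if r is None:
        r = {}
    if n in r:                      # each overlapping subproblem is solved once
        return r[n]
    if n == 0:
        q = 0
    else:
        q = max(p[i] + memoized_cut_rod(p, n - i, r) for i in range(1, n + 1))
    r[n] = q
    return q

p = [0, 1, 5, 8, 9, 10, 17, 17, 20]
print(memoized_cut_rod(p, 8))       # 22, in O(n^2) time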

Slide 16 - Bottom-Up DP: Cut-Rod
․ Complexity: O(n^2) time.
  - Sort the subproblems by size and solve the smaller ones first.
  - When solving a subproblem, we have already solved all the smaller subproblems we need.

Bottom-Up-Cut-Rod(p, n)
 1. let r[0..n] be a new array
 2. r[0] = 0
 3. for j = 1 to n
 4.   q = -infinity
 5.   for i = 1 to j
 6.     q = max(q, p[i] + r[j - i])
 7.   r[j] = q
 8. return r[n]
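
A direct Python transcription of Bottom-Up-Cut-Rod, with the same assumed 1-indexed price list p; r[j] holds the best revenue for a rod of length j.

def bottom_up_cut_rod(p, n):
    r = [0] * (n + 1)
    for j in range(1, n + 1):
        r[j] = max(p[i] + r[j - i] for i in range(1, j + 1))
    return r[n]

print(bottom_up_cut_rod([0, 1, 5, 8, 9, 10, 17, 17, 20], 8))   # 22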

Slide 17 - Bottom-Up DP with Solution Construction
․ Extend the bottom-up approach to record not just the optimal values but also the optimal choices.
․ s[i] saves the size of the first piece cut off in an optimal solution for a rod of length i.

Extended-Bottom-Up-Cut-Rod(p, n)
 1. let r[0..n] and s[0..n] be new arrays
 2. r[0] = 0
 3. for j = 1 to n
 4.   q = -infinity
 5.   for i = 1 to j
 6.     if q < p[i] + r[j - i]
 7.       q = p[i] + r[j - i]
 8.       s[j] = i
 9.   r[j] = q
10. return r and s

Print-Cut-Rod-Solution(p, n)
 1. (r, s) = Extended-Bottom-Up-Cut-Rod(p, n)
 2. while n > 0
 3.   print s[n]
 4.   n = n - s[n]

[Table: the r[i] and s[i] values for the sample instance.]
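
A minimal Python sketch combining Extended-Bottom-Up-Cut-Rod and Print-Cut-Rod-Solution: s[j] records the size of the first piece in an optimal cut of a length-j rod (same assumed 1-indexed price list p).

def extended_bottom_up_cut_rod(p, n):
    r = [0] * (n + 1)
    s = [0] * (n + 1)
    for j in range(1, n + 1):
        q = float("-inf")
        for i in range(1, j + 1):
            if q < p[i] + r[j - i]:
                q = p[i] + r[j - i]
                s[j] = i
        r[j] = q
    return r, s

def cut_rod_solution(p, n):
    r, s = extended_bottom_up_cut_rod(p, n)
    pieces = []
    while n > 0:
        pieces.append(s[n])
        n -= s[n]
    return r[-1], pieces

print(cut_rod_solution([0, 1, 5, 8, 9, 10, 17, 17, 20], 7))   # (18, [1, 6])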

Slide 18 - DP Example: Matrix-Chain Multiplication
․ If A is a p x q matrix and B is a q x r matrix, then C = AB is a p x r matrix; time complexity: O(pqr).

Matrix-Multiply(A, B)
 1. if A.columns != B.rows
 2.   error "incompatible dimensions"
 3. else let C be a new A.rows x B.columns matrix
 4.   for i = 1 to A.rows
 5.     for j = 1 to B.columns
 6.       c_ij = 0
 7.       for k = 1 to A.columns
 8.         c_ij = c_ij + a_ik * b_kj
 9. return C
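
A minimal Python sketch of the O(pqr) triple loop, assuming dense matrices given as lists of rows.

def matrix_multiply(A, B):
    p, q, r = len(A), len(B), len(B[0])
    if len(A[0]) != q:
        raise ValueError("incompatible dimensions")
    C = [[0] * r for _ in range(p)]
    for i in range(p):
        for j in range(r):
            for k in range(q):
                C[i][j] += A[i][k] * B[k][j]
    return C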

Slide 19 - DP Example: Matrix-Chain Multiplication
․ The matrix-chain multiplication problem
  - Input: a chain <A_1, A_2, ..., A_n> of n matrices, where matrix A_i has dimension p_{i-1} x p_i.
  - Objective: parenthesize the product A_1 A_2 ... A_n to minimize the number of scalar multiplications.
․ Example: dimensions A_1: 4 x 2, A_2: 2 x 5, A_3: 5 x 1.
  - (A_1 A_2) A_3: total multiplications = 4x2x5 + 4x5x1 = 40 + 20 = 60.
  - A_1 (A_2 A_3): total multiplications = 2x5x1 + 4x2x1 = 10 + 8 = 18.
․ So the order of multiplications can make a big difference!

Slide 20 - Matrix-Chain Multiplication: Brute Force
․ A = A_1 A_2 ... A_n: how do we evaluate A using the minimum number of multiplications?
․ Brute force: check all possible orders?
  - P(n): the number of ways to fully parenthesize a product of n matrices.
    P(1) = 1;  P(n) = sum over k = 1..n-1 of P(k) P(n-k) for n >= 2,
    which grows as Omega(4^n / n^{3/2}), i.e., exponential in n.
․ Any efficient solution?
  - The matrix chain is linearly ordered and cannot be rearranged!
  - Smell dynamic programming?

Slide 21 - Matrix-Chain Multiplication
․ m[i, j]: the minimum number of scalar multiplications needed to compute the matrix A_{i..j} = A_i A_{i+1} ... A_j, 1 <= i <= j <= n.
  - m[1, n]: the cheapest cost to compute A_{1..n}.
  - Recurrence: m[i, i] = 0, and for i < j,
    m[i, j] = min over i <= k < j of ( m[i, k] + m[k+1, j] + p_{i-1} p_k p_j ).
․ Applicability of dynamic programming
  - Optimal substructure: an optimal solution contains within it optimal solutions to subproblems.
  - Overlapping subproblems: a recursive algorithm revisits the same subproblem over and over again; there are only Θ(n^2) distinct subproblems.
․ (Recall that matrix A_i has dimension p_{i-1} x p_i.)

Slide 22 - Bottom-Up DP: Matrix-Chain-Order

Matrix-Chain-Order(p)
 1. n = p.length - 1
 2. let m[1..n, 1..n] and s[1..n-1, 2..n] be new tables
 3. for i = 1 to n
 4.   m[i, i] = 0
 5. for l = 2 to n            // l is the chain length
 6.   for i = 1 to n - l + 1
 7.     j = i + l - 1
 8.     m[i, j] = infinity
 9.     for k = i to j - 1
10.       q = m[i, k] + m[k+1, j] + p_{i-1} p_k p_j
11.       if q < m[i, j]
12.         m[i, j] = q
13.         s[i, j] = k
14. return m and s

․ (A_i has dimension p_{i-1} x p_i.)
[Figure: the filled-in m and s tables for a running example.]
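
A minimal Python sketch of Matrix-Chain-Order, assuming p is the list of dimensions [p0, p1, ..., pn] (matrix A_i is p[i-1] x p[i]); it returns the cost table m and the split table s, both 0-indexed over chain positions.

def matrix_chain_order(p):
    n = len(p) - 1
    m = [[0] * n for _ in range(n)]
    s = [[0] * n for _ in range(n)]
    for l in range(2, n + 1):                 # l = chain length
        for i in range(0, n - l + 1):
            j = i + l - 1
            m[i][j] = float("inf")
            for k in range(i, j):
                q = m[i][k] + m[k + 1][j] + p[i] * p[k + 1] * p[j + 1]
                if q < m[i][j]:
                    m[i][j] = q
                    s[i][j] = k
    return m, s

m, s = matrix_chain_order([4, 2, 5, 1])       # A1: 4x2, A2: 2x5, A3: 5x1
print(m[0][2])                                # 18, matching Slide 19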

Slide 23 - Constructing an Optimal Solution
․ s[i, j]: the value of k such that the optimal parenthesization of A_i A_{i+1} ... A_j splits between A_k and A_{k+1}.
․ Optimal multiplication of A_{1..n}: A_{1..s[1,n]} A_{s[1,n]+1..n}.
․ Example: calling Print-Optimal-Parens(s, 1, 6) prints ((A_1 (A_2 A_3)) ((A_4 A_5) A_6)).

Print-Optimal-Parens(s, i, j)
 1. if i == j
 2.   print "A"_i
 3. else print "("
 4.   Print-Optimal-Parens(s, i, s[i, j])
 5.   Print-Optimal-Parens(s, s[i, j] + 1, j)
 6.   print ")"
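
A small companion to the matrix_chain_order sketch above: rebuild the parenthesization string from the 0-indexed split table s.

def optimal_parens(s, i, j):
    if i == j:
        return f"A{i + 1}"
    k = s[i][j]
    return "(" + optimal_parens(s, i, k) + optimal_parens(s, k + 1, j) + ")"

m, s = matrix_chain_order([4, 2, 5, 1])
print(optimal_parens(s, 0, 2))     # (A1(A2A3))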

Slide 24 - Top-Down, Recursive Matrix-Chain Order
․ Time complexity: Omega(2^n), since T(n) >= 1 + sum over k = 1..n-1 of (T(k) + T(n-k) + 1).

Recursive-Matrix-Chain(p, i, j)
 1. if i == j
 2.   return 0
 3. m[i, j] = infinity
 4. for k = i to j - 1
 5.   q = Recursive-Matrix-Chain(p, i, k) + Recursive-Matrix-Chain(p, k+1, j) + p_{i-1} p_k p_j
 6.   if q < m[i, j]
 7.     m[i, j] = q
 8. return m[i, j]

Slide 25 - Top-Down DP: Matrix-Chain Order with Memoization
․ Complexity: O(n^2) space for the m[] table and O(n^3) time to fill in its O(n^2) entries (each takes O(n) time).

Memoized-Matrix-Chain(p)
 1. n = p.length - 1
 2. let m[1..n, 1..n] be a new table
 3. for i = 1 to n
 4.   for j = i to n
 5.     m[i, j] = infinity
 6. return Lookup-Chain(m, p, 1, n)

Lookup-Chain(m, p, i, j)
 1. if m[i, j] < infinity
 2.   return m[i, j]
 3. if i == j
 4.   m[i, j] = 0
 5. else for k = i to j - 1
 6.   q = Lookup-Chain(m, p, i, k) + Lookup-Chain(m, p, k+1, j) + p_{i-1} p_k p_j
 7.   if q < m[i, j]
 8.     m[i, j] = q
 9. return m[i, j]
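
A minimal memoized (top-down) variant in Python, assuming the same dimension list p as matrix_chain_order above; lru_cache plays the role of the m[] table.

from functools import lru_cache

def memoized_matrix_chain(p):
    @lru_cache(maxsize=None)
    def lookup(i, j):                       # cost of A_{i..j}, 0-indexed
        if i == j:
            return 0
        return min(lookup(i, k) + lookup(k + 1, j) + p[i] * p[k + 1] * p[j + 1]
                   for k in range(i, j))
    return lookup(0, len(p) - 2)

print(memoized_matrix_chain((4, 2, 5, 1)))  # 18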

Slide 26 - Two Approaches to DP
1. Bottom-up iterative approach
  - Start with the recursive divide-and-conquer algorithm.
  - Find the dependencies between the subproblems (whose solutions are needed for computing a subproblem).
  - Solve the subproblems in the correct order.
2. Top-down recursive approach (memoization)
  - Start with the recursive divide-and-conquer algorithm.
  - Keep the top-down structure of the original algorithm.
  - Save solutions to subproblems in a table (possibly a lot of storage).
  - Recurse on a subproblem only if its solution is not already available in the table.
․ If all subproblems must be solved at least once, bottom-up DP is better due to less overhead for recursion and for maintaining tables.
․ If many subproblems need not be solved, top-down DP is better since it computes only those that are required.

Slide 27 - Longest Common Subsequence
․ Problem: Given X = <x_1, x_2, ..., x_m> and Y = <y_1, y_2, ..., y_n>, find a longest common subsequence (LCS) of X and Y.
․ Example: DNA sequencing
  - S1 = ACCGGTCGAGATGCAG; S2 = GTCGTTCGGAATGCAT; LCS S3 = CGTCGGATGCA.
․ Brute-force method:
  - Enumerate all subsequences of X and check whether each appears in Y.
  - Each subsequence of X corresponds to a subset of the indices {1, 2, ..., m} of the elements of X.
  - There are 2^m subsequences of X. Why? Each index is either included in the subset or not.

Slide 28 - Optimal Substructure for LCS
․ Let X = <x_1, ..., x_m> and Y = <y_1, ..., y_n> be sequences, and let Z = <z_1, ..., z_k> be an LCS of X and Y.
  1. If x_m = y_n, then z_k = x_m = y_n and Z_{k-1} is an LCS of X_{m-1} and Y_{n-1}.
  2. If x_m != y_n, then z_k != x_m implies Z is an LCS of X_{m-1} and Y.
  3. If x_m != y_n, then z_k != y_n implies Z is an LCS of X and Y_{n-1}.
․ c[i, j]: length of an LCS of the prefixes X_i and Y_j
  - Recurrence: c[i, j] = c[i-1, j-1] + 1 if x_i = y_j; otherwise c[i, j] = max(c[i-1, j], c[i, j-1]).
․ c[m, n]: length of an LCS of X and Y
․ Basis: c[0, j] = 0 and c[i, 0] = 0

Slide 29 - Top-Down DP for LCS
․ c[i, j]: length of an LCS of X_i and Y_j, where X_i = <x_1, ..., x_i> and Y_j = <y_1, ..., y_j>.
․ c[m, n]: length of an LCS of X and Y.
․ Basis: c[0, j] = 0 and c[i, 0] = 0.
․ Top-down dynamic programming: initialize c[i, 0] = c[0, j] = 0 and c[i, j] = NIL elsewhere, then call TD-LCS(m, n).

TD-LCS(i, j)
 1. if c[i, j] == NIL
 2.   if x_i == y_j
 3.     c[i, j] = TD-LCS(i-1, j-1) + 1
 4.   else c[i, j] = max(TD-LCS(i, j-1), TD-LCS(i-1, j))
 5. return c[i, j]

Slide 30 - Bottom-Up DP for LCS
․ Find the right order in which to solve the subproblems.
․ To compute c[i, j], we need c[i-1, j-1], c[i-1, j], and c[i, j-1].
․ b[i, j]: points to the table entry corresponding to the optimal subproblem solution chosen when computing c[i, j].

LCS-Length(X, Y)
 1. m = X.length
 2. n = Y.length
 3. let b[1..m, 1..n] and c[0..m, 0..n] be new tables
 4. for i = 1 to m
 5.   c[i, 0] = 0
 6. for j = 0 to n
 7.   c[0, j] = 0
 8. for i = 1 to m
 9.   for j = 1 to n
10.     if x_i == y_j
11.       c[i, j] = c[i-1, j-1] + 1
12.       b[i, j] = "↖"
13.     elseif c[i-1, j] >= c[i, j-1]
14.       c[i, j] = c[i-1, j]
15.       b[i, j] = "↑"
16.     else c[i, j] = c[i, j-1]
17.       b[i, j] = "←"
18. return c and b
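
A minimal Python sketch of LCS-Length, assuming X and Y are plain strings; c[i][j] is the LCS length of the prefixes X[:i] and Y[:j].

def lcs_length(X, Y):
    m, n = len(X), len(Y)
    c = [[0] * (n + 1) for _ in range(m + 1)]
    b = [[""] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1
                b[i][j] = "↖"
            elif c[i - 1][j] >= c[i][j - 1]:
                c[i][j] = c[i - 1][j]
                b[i][j] = "↑"
            else:
                c[i][j] = c[i][j - 1]
                b[i][j] = "←"
    return c, b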

Slide 31 - Example of LCS
․ LCS time and space complexity: O(mn).
[Figure: the filled-in c and b tables for a small example X, Y; following the arrows from b[m, n] yields an LCS.]

Slide 32 - Constructing an LCS
․ Trace back from b[m, n] to b[1, 1], following the arrows: O(m + n) time.

Print-LCS(b, X, i, j)
 1. if i == 0 or j == 0
 2.   return
 3. if b[i, j] == "↖"
 4.   Print-LCS(b, X, i-1, j-1)
 5.   print x_i
 6. elseif b[i, j] == "↑"
 7.   Print-LCS(b, X, i-1, j)
 8. else Print-LCS(b, X, i, j-1)
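
A companion to the lcs_length sketch above: follow the b arrows back from (m, n) to reassemble one LCS as a string.

def backtrack_lcs(b, X, i, j):
    if i == 0 or j == 0:
        return ""
    if b[i][j] == "↖":
        return backtrack_lcs(b, X, i - 1, j - 1) + X[i - 1]
    if b[i][j] == "↑":
        return backtrack_lcs(b, X, i - 1, j)
    return backtrack_lcs(b, X, i, j - 1)

X, Y = "ACCGGTCGAGATGCAG", "GTCGTTCGGAATGCAT"
c, b = lcs_length(X, Y)
print(c[-1][-1], backtrack_lcs(b, X, len(X), len(Y)))   # length and one LCS, per the Slide 27 example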

Slide 33 - Optimal Binary Search Tree
․ Given a sequence K = <k_1, k_2, ..., k_n> of n distinct keys in sorted order (k_1 < k_2 < ... < k_n), with probability p_i of searching for key k_i, and probabilities Q = <q_0, q_1, ..., q_n> for unsuccessful searches (corresponding to dummy keys D = <d_0, d_1, ..., d_n>, where d_i represents all values between k_i and k_{i+1}), construct a binary search tree whose expected search cost is smallest.
[Figure: two binary search trees over keys k_1..k_5 with dummy leaves d_0..d_5; internal nodes carry probabilities p_i and leaves carry probabilities q_i.]

Slide 34 - An Example
․ Key and dummy-key probabilities:
  i  :    0     1     2     3     4     5
  p_i:    -   0.15  0.10  0.05  0.10  0.20
  q_i:  0.05  0.10  0.05  0.05  0.05  0.10
․ A node at depth d contributes probability x (d + 1) to the expected search cost (e.g., 0.10 x 1 for a root with probability 0.10 at depth 0).
․ One tree has expected cost 2.80; another has expected cost 2.75, which is optimal.
[Figure: the two search trees over k_1..k_5 with dummy leaves d_0..d_5 and their per-node cost contributions.]

Slide 35 - Optimal Substructure
․ If an optimal binary search tree T has a subtree T' containing keys k_i, ..., k_j, then this subtree T' must also be optimal for the subproblem with keys k_i, ..., k_j and dummy keys d_{i-1}, ..., d_j.
  - Given keys k_i, ..., k_j with k_r (i <= r <= j) as the root, the left subtree contains the keys k_i, ..., k_{r-1} (and dummy keys d_{i-1}, ..., d_{r-1}) and the right subtree contains the keys k_{r+1}, ..., k_j (and dummy keys d_r, ..., d_j).
  - For the subtree with keys k_i, ..., k_j rooted at k_i, the left subtree contains the keys k_i, ..., k_{i-1} (no key at all), with only the dummy key d_{i-1}.
[Figure: the optimal tree from Slide 34.]

Slide 36 - Overlapping Subproblems: Recurrence
․ e[i, j]: expected cost of searching an optimal binary search tree containing the keys k_i, ..., k_j.
  - We want e[1, n].
  - e[i, i-1] = q_{i-1} (only the dummy key d_{i-1}).
․ Let w(i, j) = sum over l = i..j of p_l + sum over l = i-1..j of q_l. If k_r (i <= r <= j) is the root of an optimal subtree containing keys k_i, ..., k_j, then
  e[i, j] = p_r + (e[i, r-1] + w(i, r-1)) + (e[r+1, j] + w(r+1, j)) = e[i, r-1] + e[r+1, j] + w(i, j).
․ Recurrence:
  e[i, j] = q_{i-1}                                                    if j = i - 1
  e[i, j] = min over i <= r <= j of { e[i, r-1] + e[r+1, j] + w(i, j) }  if i <= j
․ Intuition: when two subtrees are merged under a new root, every node's depth increases by 1, and so each subtree's cost increases by its total probability w.

Slide 37 - Computing the Optimal Cost
․ Need a table e[1..n+1, 0..n] for e[i, j] (why the extra entries e[1, 0] and e[n+1, n]? They hold the empty, dummy-key-only subtrees needed when r = i or r = j).
․ Apply the recurrence w[i, j] = w[i, j-1] + p_j + q_j to compute w(i, j) incrementally (why? It avoids re-summing the probabilities for every entry).
․ root[i, j]: the index r for which k_r is the root of an optimal search tree containing keys k_i, ..., k_j.

Optimal-BST(p, q, n)
 1. let e[1..n+1, 0..n], w[1..n+1, 0..n], and root[1..n, 1..n] be new tables
 2. for i = 1 to n + 1
 3.   e[i, i-1] = q_{i-1}
 4.   w[i, i-1] = q_{i-1}
 5. for l = 1 to n
 6.   for i = 1 to n - l + 1
 7.     j = i + l - 1
 8.     e[i, j] = infinity
 9.     w[i, j] = w[i, j-1] + p_j + q_j
10.     for r = i to j
11.       t = e[i, r-1] + e[r+1, j] + w[i, j]
12.       if t < e[i, j]
13.         e[i, j] = t
14.         root[i, j] = r
15. return e and root
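
A minimal Python sketch of Optimal-BST, assuming p[1..n] and q[0..n] are given as lists with p[0] unused, and using the Slide 34 probabilities as test data; it returns the expected-cost table e and the root table, with 1-based key indices as in the pseudocode.

def optimal_bst(p, q, n):
    e = [[0.0] * (n + 1) for _ in range(n + 2)]     # e[i][j], j from i-1 to n
    w = [[0.0] * (n + 1) for _ in range(n + 2)]
    root = [[0] * (n + 1) for _ in range(n + 1)]
    for i in range(1, n + 2):
        e[i][i - 1] = q[i - 1]
        w[i][i - 1] = q[i - 1]
    for l in range(1, n + 1):
        for i in range(1, n - l + 2):
            j = i + l - 1
            e[i][j] = float("inf")
            w[i][j] = w[i][j - 1] + p[j] + q[j]
            for r in range(i, j + 1):
                t = e[i][r - 1] + e[r + 1][j] + w[i][j]
                if t < e[i][j]:
                    e[i][j] = t
                    root[i][j] = r
    return e, root

p = [0, 0.15, 0.10, 0.05, 0.10, 0.20]   # probabilities from Slide 34
q = [0.05, 0.10, 0.05, 0.05, 0.05, 0.10]
e, root = optimal_bst(p, q, 5)
print(round(e[1][5], 2))                 # 2.75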

Slide 38 - Example
․ Using the probabilities from Slide 34, the e, w, and root tables are filled in diagonal by diagonal. For instance:
  e[1, 1] = e[1, 0] + e[2, 1] + w(1, 1) = 0.05 + 0.10 + 0.30 = 0.45
  e[1, 5] = e[1, 1] + e[3, 5] + w(1, 5) = 0.45 + 1.30 + 1.00 = 2.75
[Figure: the filled-in e, w, and root tables and the resulting optimal tree.]

Slide 39 - Appendix A: Optimal Polygon Triangulation
․ Terminology: polygon, interior, exterior, boundary, convex polygon, triangulation?
․ The optimal polygon triangulation problem: Given a convex polygon P = <v_0, v_1, ..., v_n> and a weight function w defined on triangles, find a triangulation that minimizes the sum of w(Δ) over all triangles Δ in the triangulation.
․ One possible weight function on a triangle: w(Δ v_i v_j v_k) = |v_i v_j| + |v_j v_k| + |v_k v_i|, where |v_i v_j| is the Euclidean distance from v_i to v_j.

Slide 40 - Optimal Polygon Triangulation (cont'd)
․ Correspondence between a full parenthesization, a full binary tree (parse tree), and a triangulation:
  - full parenthesization <=> full binary tree
  - full binary tree (n-1 leaves) <=> triangulation (n sides)
․ t[i, j]: weight of an optimal triangulation of the polygon <v_{i-1}, v_i, ..., v_j>.

Slide 41 - Pseudocode: Optimal Polygon Triangulation
․ Matrix-Chain-Order is a special case of the optimal polygon triangulation problem.
․ Only Line 9 of Matrix-Chain-Order needs to be modified (replace the p_{i-1} p_k p_j term with the triangle weight).
․ Complexity: runs in Θ(n^3) time and uses Θ(n^2) space.

Optimal-Polygon-Triangulation(P)
 1. n = P.length
 2. for i = 1 to n
 3.   t[i, i] = 0
 4. for l = 2 to n
 5.   for i = 1 to n - l + 1
 6.     j = i + l - 1
 7.     t[i, j] = infinity
 8.     for k = i to j - 1
 9.       q = t[i, k] + t[k+1, j] + w(Δ v_{i-1} v_k v_j)
10.       if q < t[i, j]
11.         t[i, j] = q
12.         s[i, j] = k
13. return t and s
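
A minimal Python sketch under the stated assumptions: vertices is a list of (x, y) points v_0 .. v_n of a convex polygon, and the triangle weight is the perimeter weight |v_i v_j| + |v_j v_k| + |v_k v_i| suggested on Slide 39.

from math import dist

def optimal_polygon_triangulation(vertices):
    def weight(a, b, c):
        return dist(vertices[a], vertices[b]) + dist(vertices[b], vertices[c]) \
             + dist(vertices[c], vertices[a])

    n = len(vertices) - 1                      # sides v_{i-1} v_i, i = 1..n
    t = [[0.0] * (n + 1) for _ in range(n + 1)]
    for l in range(2, n + 1):
        for i in range(1, n - l + 2):
            j = i + l - 1
            t[i][j] = min(t[i][k] + t[k + 1][j] + weight(i - 1, k, j)
                          for k in range(i, j))
    return t[1][n]

# Example: a unit square v_0..v_3 (n = 3); either diagonal gives the same cost.
print(round(optimal_polygon_triangulation([(0, 0), (1, 0), (1, 1), (0, 1)]), 3))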

Slide 42 - Tree Decomposition
A tree decomposition of a graph G:
  - A tree with a vertex set, called a bag, associated with every node.
  - For every edge {v, w} of G: there is a bag containing both v and w.
  - For every vertex v of G: the nodes whose bags contain v form a connected subtree.
[Figure: an example graph on vertices a through g and one of its tree decompositions.]

Slides 43-44 - Tree Decomposition (cont'd)
[Figure: the same example, highlighting the edge-coverage and connected-subtree conditions of the definition above.]

Slide 45 - Treewidth (Definition)
․ Width of a tree decomposition: the maximum bag size minus 1.
․ Treewidth of a graph G: tw(G) = the minimum width over all tree decompositions of G.
[Figure: the example graph on vertices a through g and a tree decomposition achieving its treewidth.]

Slide 46 - Dynamic Programming Algorithms on Bounded Treewidth
․ Many NP-hard (and some PSPACE-hard or #P-hard) graph problems become polynomial-time or even linear-time solvable when restricted to graphs of bounded treewidth (or pathwidth or branchwidth).
  - Well-known problems such as Independent Set, Hamiltonian Circuit, and Graph Coloring
  - Frequency assignment
  - TSP

Slide 47 - Dynamic Programming with Tree Decompositions
․ For each bag, a table is computed:
  - It contains information on the subgraph formed by the vertices in the bag and in the bags below it.
  - Bags are separators => the size of each table is often bounded by a function of the width.
․ Computing a table needs only the tables of the children and local information.
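
A minimal sketch of the per-bag-table idea for the simplest case: Maximum Independent Set on a tree (a graph of treewidth 1), assuming the tree is given as an adjacency list and rooted at node 0. Each node keeps a 2-entry table (best size with the node excluded / included), and a child's table is folded into its parent's bottom-up, mirroring how bag tables are combined in the general algorithm.

import sys

def max_independent_set_tree(adj, root=0):
    sys.setrecursionlimit(10000)

    def solve(v, parent):
        exclude, include = 0, 1
        for u in adj[v]:
            if u == parent:
                continue
            ex_u, in_u = solve(u, v)
            exclude += max(ex_u, in_u)   # child free to be in or out
            include += ex_u              # child must be out if v is in
        return exclude, include

    return max(solve(root, -1))

# A 5-node path 0-1-2-3-4: the optimal independent set {0, 2, 4} has size 3.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(max_independent_set_tree(adj))   # 3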

