
Slide 1: TU/e Algorithms (2IL15) – Lecture 4: DYNAMIC PROGRAMMING II

Slide 2: Techniques for optimization
Optimization problems typically involve making choices.
- Backtracking: just try all solutions. Can be applied to almost all problems, but gives very slow algorithms. Try all options for the first choice; for each option, recursively make the other choices.
- Greedy algorithms: construct the solution iteratively, always making the choice that seems best. Can be applied to few problems, but gives fast algorithms. Only try the option that seems best for the first choice (the greedy choice), then recursively make the other choices.
- Dynamic programming: in between. Not as fast as greedy, but works for more problems.

Slide 3: Five steps in designing dynamic-programming algorithms
1. Define subproblems. [#subproblems]
2. Guess the first choice. [#choices]
3. Give a recurrence for the value of an optimal solution. [time per subproblem, treating recursive calls as Θ(1)]
   - define the subproblem in terms of a few parameters
   - define a variable m[..] = value of an optimal solution for the subproblem
   - relate subproblems by giving a recurrence for m[..]
4. Algorithm: fill in the table for m[..] in a suitable order (or recurse & memoize).
5. Solve the original problem.
Running time: #subproblems × time per subproblem.
Correctness: (i) correctness of the recurrence: relate OPT to the recurrence; (ii) correctness of the algorithm: induction using (i).
Today more examples: longest common subsequence and optimal binary search trees.

Slide 4: The five steps for matrix-chain multiplication
1. Define subproblems: given i, j with 1 ≤ i ≤ j ≤ n, compute A_i ⋯ A_j in the cheapest way. [Θ(n²) subproblems]
2. Guess the first choice: the final multiplication. [Θ(n) choices]
3. Give a recurrence for the value of an optimal solution: [Θ(n) time per subproblem]
     m[i,j] = 0                                                        if i = j
     m[i,j] = min_{i ≤ k < j} { m[i,k] + m[k+1,j] + p_{i-1} p_k p_j }  if i < j
4. Algorithm: fill in the table for m[..] in a suitable order (or recurse & memoize).
5. Solve the original problem: use the split markers s[i,j].
Running time: #subproblems × time per subproblem = Θ(n³).
(Blackboard example.)
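The recurrence on this slide can be sketched with memoized recursion. The following Python sketch is not from the slides (the function name matrix_chain_cost and the dimension array p are illustrative, with A_i of size p[i-1] × p[i]):

```python
from functools import lru_cache

def matrix_chain_cost(p):
    """Cheapest way to compute A_1 ... A_n, where A_i is a p[i-1] x p[i] matrix.
    Memoized recursion over the recurrence for m[i,j]."""
    n = len(p) - 1

    @lru_cache(maxsize=None)
    def m(i, j):
        if i == j:
            return 0          # a single matrix needs no multiplication
        # try every split k: the final multiplication is (A_i..A_k)(A_{k+1}..A_j)
        return min(m(i, k) + m(k + 1, j) + p[i - 1] * p[k] * p[j]
                   for k in range(i, j))

    return m(1, n)
```

For example, with dimensions p = [10, 100, 5, 50], computing (A_1 A_2) first costs 10·100·5 + 10·5·50 = 7500, which is what the recursion finds.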

Slide 5: Longest Common Subsequence

Slide 6: Comparing DNA sequences
DNA: a string of characters, where A = adenine, C = cytosine, G = guanine, T = thymine.
Problem: measure the similarity of two given DNA sequences
X = A C C G T A A T C G A C G
Y = A C G A T T G C A C T G
How to measure similarity?
- edit distance: the number of changes needed to transform X into Y
- the length of a longest common subsequence (LCS) of X and Y
- …

Slide 7: Longest common subsequence
Subsequence of a string: a string that can be obtained by omitting zero or more characters.
String: A C C A T T G. Example subsequences: A C A T G, or C A T T G, or …
LCS(X,Y) = a longest common subsequence of strings X and Y = a longest string that is both a subsequence of X and a subsequence of Y.
X = A C C G T A A T C G A C G
Y = A C G A T T G C A C T G
LCS(X,Y) = ACGTTGACG (or ACGTTCACG, or …)

Slide 8: The LCS problem
Input: strings X = x_1, x_2, …, x_n and Y = y_1, y_2, …, y_m.
Output: (the length of) a longest common subsequence of X and Y.
So we have:
valid solution = common subsequence
profit = length of the subsequence (to be maximized)

Slide 9: Structure of an optimal solution
Let's study the structure of an optimal solution.
- What are the choices that need to be made?
- What are the subproblems that remain? Do we have optimal substructure?
First choice: is the last character of X matched to the last character of Y? Or is the last character of X unmatched? Or is the last character of Y unmatched?
If x_n is matched to y_m, the rest of an optimal solution is an LCS for x_1 … x_{n-1} and y_1 … y_{m-1}; so we have optimal substructure.
Is there a greedy choice? Are there overlapping subproblems?

Slide 10: Five steps in designing dynamic-programming algorithms
1. Define subproblems.
2. Guess the first choice.
3. Give a recurrence for the value of an optimal solution.
4. Algorithm: fill in the table for c[..] in a suitable order (or recurse & memoize).
5. Solve the original problem.

Slide 11: Step 3: give a recursive formula for the value of an optimal solution
- Define the subproblem in terms of a few parameters: find the LCS of X_i := x_1 … x_i and Y_j := y_1 … y_j, for parameters i, j with 0 ≤ i ≤ n and 0 ≤ j ≤ m. (X_0 is the empty string.)
- Define a variable c[..] = value of an optimal solution for the subproblem: c[i,j] = length of the LCS of X_i and Y_j.
- Give a recursive formula for c[..].

Slide 12: The recurrence for c[i,j]
X_i := x_1 … x_i and Y_j := y_1 … y_j, for 0 ≤ i ≤ n and 0 ≤ j ≤ m.
We want a recursive formula for c[i,j] = length of the LCS of X_i and Y_j.
Lemma:
  c[i,j] = 0                           if i = 0 or j = 0
  c[i,j] = c[i-1,j-1] + 1              if i,j > 0 and x_i = y_j
  c[i,j] = max { c[i-1,j], c[i,j-1] }  if i,j > 0 and x_i ≠ y_j

Slide 13: Proof of the lemma
Proof:
- i = 0 or j = 0: trivial (at least one of X_i and Y_j is empty).
- i,j > 0 and x_i = y_j: we can extend LCS(X_{i-1}, Y_{j-1}) by appending x_i, so c[i,j] ≥ c[i-1,j-1] + 1. On the other hand, by deleting the last character from LCS(X_i, Y_j) we obtain a common subsequence of X_{i-1} and Y_{j-1}. Hence c[i-1,j-1] ≥ c[i,j] - 1. It follows that c[i,j] = c[i-1,j-1] + 1.
- i,j > 0 and x_i ≠ y_j: if LCS(X_i, Y_j) does not end with x_i, then c[i,j] ≤ c[i-1,j] ≤ max {..}. Otherwise LCS(X_i, Y_j) cannot end with y_j, so c[i,j] ≤ c[i,j-1] ≤ max {..}. Obviously c[i,j] ≥ max {..}. We conclude that c[i,j] = max {..}.

Slide 14: (Recap of the five steps, as on slide 10; next: step 4, fill in the table for c[..] in a suitable order.)

Slide 15: Step 4: fill in the table for c[..] in a suitable order
[Figure: the table for c with rows i = 0,…,n and columns j = 0,…,m; row i = 0 and column j = 0 are filled with zeros, and the entry c[n,m] is the solution to the original problem.]
  c[i,j] = 0                           if i = 0 or j = 0
  c[i,j] = c[i-1,j-1] + 1              if i,j > 0 and x_i = y_j
  c[i,j] = max { c[i-1,j], c[i,j-1] }  if i,j > 0 and x_i ≠ y_j

Slide 16: Step 4: fill in the table for c[..] in a suitable order

LCS-Length(X,Y)
1. n ← length[X]; m ← length[Y]
2. for i ← 0 to n do c[i,0] ← 0
3. for j ← 0 to m do c[0,j] ← 0
4. for i ← 1 to n
5.    do for j ← 1 to m
6.       do if X[i] = Y[j]
7.          then c[i,j] ← c[i-1,j-1] + 1
8.          else c[i,j] ← max { c[i-1,j], c[i,j-1] }
9. return c[n,m]

Slide 17: Analysis of the running time
Line 2: Θ(n); line 3: Θ(m); each iteration of lines 6–8: Θ(1).
Lines 4–8: ∑_{1≤i≤n} ∑_{1≤j≤m} Θ(1) = Θ(nm), which dominates the total running time.
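The LCS-Length pseudocode translates almost line for line into Python (0-indexed strings; the function name lcs_length is illustrative):

```python
def lcs_length(X, Y):
    """Length of a longest common subsequence of X and Y,
    filling the table c bottom-up as in LCS-Length."""
    n, m = len(X), len(Y)
    c = [[0] * (m + 1) for _ in range(n + 1)]   # row 0 and column 0 stay 0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if X[i - 1] == Y[j - 1]:            # case x_i = y_j
                c[i][j] = c[i - 1][j - 1] + 1
            else:
                c[i][j] = max(c[i - 1][j], c[i][j - 1])
    return c[n][m]
```

On the example strings from the slides it returns 9, the length of ACGTTGACG.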

Slide 18: Step 5: solve the original problem
(Recap of the five steps.)
Approaches for going from the optimum value to an optimum solution:
i.  store the choices that led to the optimum (e.g. s[i,j] for matrix-chain multiplication)
ii. retrace/deduce the sequence of solutions that led to the optimum backwards (next slide)

Slide 19: Step 5: compute an actual optimal solution
Extend the algorithm so that it computes an actual solution that is optimal, and not just the value of an optimal solution. Usual approach: an extra table to remember the choices that lead to an optimal solution. It is also possible to do without the extra table:

Print-LCS(c,X,Y,i,j)
1. if i = 0 or j = 0
2.    then skip
3. else if X[i] = Y[j]
4.    then Print-LCS(c,X,Y,i-1,j-1); print x_i
5. else if c[i-1,j] > c[i,j-1]
6.    then Print-LCS(c,X,Y,i-1,j)
7.    else Print-LCS(c,X,Y,i,j-1)

Initial call: Print-LCS(c,X,Y,n,m)
Advantage of avoiding the extra table: it saves memory. Disadvantage: extra work to construct the solution.
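A Python sketch of this table-free retracing (the names _lcs_table and backtrace_lcs are illustrative; it walks the table iteratively rather than recursively, but follows the same three cases as Print-LCS):

```python
def _lcs_table(X, Y):
    """Fill the table c bottom-up, as in LCS-Length (0-indexed strings)."""
    n, m = len(X), len(Y)
    c = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if X[i - 1] == Y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1
            else:
                c[i][j] = max(c[i - 1][j], c[i][j - 1])
    return c

def backtrace_lcs(X, Y):
    """Recover one actual LCS by retracing the table backwards from c[n][m]."""
    c = _lcs_table(X, Y)
    i, j, out = len(X), len(Y), []
    while i > 0 and j > 0:
        if X[i - 1] == Y[j - 1]:         # matched character: part of the LCS
            out.append(X[i - 1])
            i, j = i - 1, j - 1
        elif c[i - 1][j] > c[i][j - 1]:  # optimum came from the row above
            i -= 1
        else:                            # ... or from the column to the left
            j -= 1
    return "".join(reversed(out))
```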

Slide 20: (Recap of the five steps and of the two approaches for going from the optimum value to an optimum solution, as on slide 18.)

Slide 21: Optimal Binary Search Trees

Slide 22: A balanced tree is not always best
Observation: a balanced binary search tree does not necessarily have the best average search time if certain keys are searched for more often than others.
Example: a dictionary for automatic translation of texts (keys are words).
[Figure: two search trees on the words "is", "porcupine", "the": a balanced tree, and an unbalanced tree with the frequently searched words closer to the root, which has a better average search time.]

Slide 23: The problem of constructing optimal binary search trees
Input:
- keys k_1, k_2, …, k_n with k_1 < k_2 < … < k_n
- probabilities p_1, p_2, …, p_n, with p_i = probability that the query key is k_i
- dummy keys d_0, d_1, …, d_n
- probabilities q_0, q_1, …, q_n, with q_i = probability that the query key lies between k_i and k_{i+1} (where k_0 = -∞ and k_{n+1} = ∞), that is, that the search ends in dummy d_i
Note: ∑ p_i + ∑ q_i = 1.
[Figure: a search tree with the keys k_1, …, k_n as internal nodes, the dummies d_0, …, d_n as leaves, and example probabilities at the nodes.]

Slide 24: What is an optimal tree?
Output: a tree that minimizes the expected search time (expected = weighted average; the root has depth 0, its children depth 1, and so on).
expected cost of a query
  = ∑ p_i · (1 + depth of k_i) + ∑ q_i · (1 + depth of d_i)
  = 1 + ∑ p_i · (depth of k_i) + ∑ q_i · (depth of d_i)
This is what we want to minimize.
[Figure: two search trees on k_1, …, k_4 with example probabilities and different expected costs.]
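The cost formula above is easy to evaluate for a concrete tree. The following helper is not from the slides (its name and argument layout are illustrative); it sums probability times (1 + depth) over all keys and dummies:

```python
def expected_search_cost(p, q, key_depth, dummy_depth):
    """Expected query cost of a given BST.
    p[i-1], key_depth[i-1] belong to key k_i; q[i], dummy_depth[i] to dummy d_i.
    The root has depth 0."""
    cost = sum(pi * (1 + d) for pi, d in zip(p, key_depth))
    cost += sum(qi * (1 + d) for qi, d in zip(q, dummy_depth))
    return cost
```

For instance, a single key k_1 at the root (depth 0) with dummies d_0, d_1 as its children (depth 1) and p_1 = 0.5, q_0 = q_1 = 0.25 gives cost 0.5·1 + 0.25·2 + 0.25·2 = 1.5.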

Slide 25: Structure of an optimal solution
Let's study the structure of an optimal solution.
- What are the choices that need to be made?
- What are the subproblems that remain? Do we have optimal substructure?
[Figure: if k_r is the root, the left subtree is an optimal tree for d_0, k_1, …, d_{r-1} and the right subtree is an optimal tree for d_r, k_{r+1}, …, d_n.]

Slide 26: (Recap of the five steps; next: step 3, give a recurrence for the value of an optimal solution.)

Slide 27: Step 3: give a recursive formula for the value of an optimal solution
- Define the subproblem in terms of a few parameters: find the tree for d_{i-1}, k_i, …, d_j that minimizes the expected search cost, for parameters i, j with 0 ≤ i-1 ≤ j ≤ n.
- Define a variable m[..] = value of an optimal solution for the subproblem: m[i,j] = expected search cost of an optimal tree for d_{i-1}, k_i, …, d_j.
- Give a recursive formula for m[..].
[Figure: a root k_r with left subtree on d_{i-1}, k_i, …, d_{r-1} and right subtree on d_r, k_{r+1}, …, d_j.]

Slide 28: The recurrence for m[i,j]
Recursive formula for m[i,j] = expected search cost of an optimal tree for d_{i-1}, k_i, …, d_j.
Define w(i,j) = ∑_{i≤s≤j} p_s + ∑_{i-1≤s≤j} q_s.
If k_r is the root, then
expected search cost
  = p_r + (cost of left subtree) + (cost of right subtree)
  = p_r + ( m[i,r-1] + ∑_{i≤s≤r-1} p_s + ∑_{i-1≤s≤r-1} q_s ) + ( m[r+1,j] + ∑_{r+1≤s≤j} p_s + ∑_{r≤s≤j} q_s )
  = m[i,r-1] + m[r+1,j] + ∑_{i≤s≤j} p_s + ∑_{i-1≤s≤j} q_s
  = m[i,r-1] + m[r+1,j] + w(i,j)
Lemma:
  m[i,j] = q_{i-1}                                           if j = i-1
  m[i,j] = min_{i ≤ r ≤ j} { m[i,r-1] + m[r+1,j] + w(i,j) }  if j > i-1
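The lemma can again be sketched with memoized recursion. This Python sketch is not from the slides (the name optimal_bst_cost is illustrative; p[0] is a placeholder so that p[i] and q[i] match the slide's indexing, and w is computed on the fly rather than precomputed):

```python
from functools import lru_cache

def optimal_bst_cost(p, q):
    """Expected search cost of an optimal BST for keys k_1..k_n and
    dummies d_0..d_n, following the recurrence for m[i,j]."""
    n = len(p) - 1

    def w(i, j):
        # total probability mass of the subproblem d_{i-1}, k_i, ..., d_j
        return sum(p[i:j + 1]) + sum(q[i - 1:j + 1])

    @lru_cache(maxsize=None)
    def m(i, j):
        if j == i - 1:
            return q[i - 1]   # empty subtree: only the dummy d_{i-1}
        # try every key k_r as the root of the subtree
        return min(m(i, r - 1) + m(r + 1, j) + w(i, j) for r in range(i, j + 1))

    return m(1, n)
```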

Slide 29: (Recap of the five steps; next: step 4, fill in the table for m[..] in a suitable order.)

Slide 30: Step 4: fill in the table for m[..] in a suitable order

OptimalBST-Cost(p, q)
// p and q: arrays with probabilities p_1,…,p_n and q_0,…,q_n
1. n ← length[p]
2. for i ← 1 to n+1
3.    do m[i,i-1] ← q_{i-1}
4. for i ← n downto 1
5.    do for j ← i to n
6.       do m[i,j] ← min_{i≤r≤j} { m[i,r-1] + m[r+1,j] + w(i,j) }
7. return m[1,n]

(The values w(i,j) can be precomputed.)

Slide 31: Analysis of the running time
Θ(n³): the analysis is similar to that of the algorithm for matrix-chain multiplication (Θ(n²) table entries, Θ(n) time per entry when the values w(i,j) are precomputed).

Slide 32: (Recap of the five steps.)

Slide 33: (Recap of the five steps.) Step 5, solving the original problem: try it!

Slide 34: Recap and an exercise
(Recap of the five steps, the running-time analysis, and the correctness argument, as on slide 3.)
A problem to think about: knapsack with integer weights. Given items with values v_1, …, v_n and integer weights w_1, …, w_n, select items to pack of total weight < W, maximizing the total value.
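For checking your own answer to this exercise: one possible DP indexes subproblems by capacity, with best[w] = maximum value achievable with total weight at most w. This sketch and the name knapsack_max_value are not from the slides; it packs up to a capacity of at most W, so pass W-1 if the strict bound above is intended:

```python
def knapsack_max_value(values, weights, W):
    """0/1 knapsack with integer weights: maximize total value with
    total weight at most W, each item used at most once."""
    best = [0] * (W + 1)
    for v, wt in zip(values, weights):
        # go through capacities downwards so each item is counted at most once
        for w in range(W, wt - 1, -1):
            best[w] = max(best[w], best[w - wt] + v)
    return best[W]
```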

Slide 35: Summary of part I of the course
Optimization problems: we want to find a valid solution minimizing cost (or maximizing profit).
Techniques for optimization:
- backtracking: just try all solutions (pruning when possible)
- greedy algorithms: make the choice that seems best, recursively solve the remaining subproblem
- dynamic programming: give a recursive formula, fill in a table
Always start by investigating the structure of an optimal solution.

