TU/e Algorithms (2IL15) – Lecture 4 1 DYNAMIC PROGRAMMING II

TU/e Algorithms (2IL15) – Lecture 4 2 Techniques for optimization

optimization problems typically involve making choices

backtracking: just try all solutions
 • can be applied to almost all problems, but gives very slow algorithms
 • try all options for the first choice; for each option, recursively make the other choices

greedy algorithms: construct the solution iteratively, always make the choice that seems best
 • can be applied to few problems, but gives fast algorithms
 • only try the option that seems best for the first choice (greedy choice), recursively make
   the other choices

dynamic programming
 • in between: not as fast as greedy, but works for more problems

TU/e Algorithms (2IL15) – Lecture 4 3 5 steps in designing dynamic-programming algorithms

 1. define subproblems   [#subproblems]
 2. guess first choice   [#choices]
 3. give recurrence for the value of an optimal solution   [time/subproblem, treating recursive
    calls as Θ(1)]
     • define subproblem in terms of a few parameters
     • define variable m[..] = value of optimal solution for subproblem
     • relate subproblems by giving recurrence for m[..]
 4. algorithm: fill in table for m[..] in suitable order (or recurse & memoize)
 5. solve original problem

Running time: #subproblems * time/subproblem
Correctness: (i) correctness of recurrence: relate OPT to recurrence
             (ii) correctness of algorithm: induction using (i)

Today more examples: longest common subsequence and optimal binary search trees

TU/e Algorithms (2IL15) – Lecture 4 4 5 steps: Matrix chain multiplication

 1. define subproblems: given i, j with 1 ≤ i ≤ j ≤ n, compute A_i ··· A_j in the cheapest way   [Θ(n²) subproblems]
 2. guess first choice: the final multiplication   [Θ(n) choices]
 3. give recurrence for the value of an optimal solution   [Θ(n) time/subproblem]

      m[i,j] = 0                                                        if i = j
      m[i,j] = min_{i ≤ k < j} { m[i,k] + m[k+1,j] + p_{i-1} p_k p_j }  if i < j

 4. algorithm: fill in table for m[..] in suitable order (or recurse & memoize)
 5. solve original problem: split markers s[i,j]

Running time: #subproblems * time/subproblem = Θ(n³)

Blackboard example
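To make the "recurse & memoize" option concrete, here is a minimal Python sketch of this
recurrence (not from the slides; the function name and the convention that matrix A_i has
dimensions p[i-1] × p[i] are illustrative):

    from functools import lru_cache

    def matrix_chain_cost(p):
        """Minimum number of scalar multiplications to compute A_1 ... A_n,
        where A_i is a p[i-1] x p[i] matrix."""
        n = len(p) - 1

        @lru_cache(maxsize=None)                 # memoization: each m(i,j) is computed once
        def m(i, j):
            if i == j:                           # a single matrix: nothing to multiply
                return 0
            # guess the final multiplication (A_i..A_k)(A_{k+1}..A_j)
            return min(m(i, k) + m(k + 1, j) + p[i - 1] * p[k] * p[j]
                       for k in range(i, j))

        return m(1, n)

    # example: A_1 is 10x30, A_2 is 30x5, A_3 is 5x60
    print(matrix_chain_cost([10, 30, 5, 60]))    # 4500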

TU/e Algorithms (2IL15) – Lecture 4 5 Longest Common Subsequence

TU/e Algorithms (2IL15) – Lecture 4 6 Comparing DNA sequences

DNA: string of characters   A = adenine, C = cytosine, G = guanine, T = thymine

Problem: measure the similarity of two given DNA sequences
  X = A C C G T A A T C G A C G
  Y = A C G A T T G C A C T G

How to measure similarity?
 • edit distance: number of changes needed to transform X into Y
 • length of a longest common subsequence (LCS) of X and Y
 • …

TU/e Algorithms (2IL15) – Lecture 4 7 Longest common subsequence

subsequence of a string: a string that can be obtained by omitting zero or more characters
  string: A C C A T T G
  example subsequences: A C C A T T G itself, or C A T G, or A T T, …

LCS(X,Y) = a longest common subsequence of strings X and Y
         = a longest string that is a subsequence of X and a subsequence of Y

  X = A C C G T A A T C G A C G
  Y = A C G A T T G C A C T G
  LCS(X,Y) = A C G T T G A C G   (or A C G T T C A C G, …)
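As a quick sanity check of the definition, a small hypothetical Python helper (not part of the
lecture) that tests whether s is a subsequence of t by scanning t once, left to right:

    def is_subsequence(s, t):
        """True iff s can be obtained from t by omitting zero or more characters."""
        it = iter(t)
        return all(ch in it for ch in s)   # 'in' advances the iterator past each match

    print(is_subsequence("ACGTTGACG", "ACCGTAATCGACG"))   # True
    print(is_subsequence("GCA", "ACG"))                   # False: order matters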

TU/e Algorithms (2IL15) – Lecture 4 8 The LCS problem

Input: strings X = x_1 x_2 … x_n and Y = y_1 y_2 … y_m
Output: (the length of) a longest common subsequence of X and Y

So we have
  valid solution = common subsequence
  profit = length of the subsequence (to be maximized)

TU/e Algorithms (2IL15) – Lecture 4 9

  X = A C C G T A A T C G A C G
  Y = A C G A T T G C A C T G

Let's study the structure of an optimal solution
 • what are the choices that need to be made?
 • what are the subproblems that remain? do we have optimal substructure?

first choice: is the last character of X matched to the last character of Y?
  or is the last character of X unmatched? or is the last character of Y unmatched?
greedy choice? overlapping subproblems?

if x_n and y_m are matched to each other, then the rest of an optimal solution is an LCS of
x_1 … x_{n-1} and y_1 … y_{m-1}; so: optimal substructure

TU/e Algorithms (2IL15) – Lecture 4 10 5 steps in designing dynamic-programming algorithms

 1. define subproblems
 2. guess first choice
 3. give recurrence for the value of an optimal solution
 4. algorithm: fill in table for c[..] in suitable order (or recurse & memoize)
 5. solve original problem

TU/e Algorithms (2IL15) – Lecture 4 11 3. Give recursive formula for the value of an optimal solution

  X = A C C G T A A T C G A C G
  Y = A C G A T T G C A C T G

 • define subproblem in terms of a few parameters:
   find an LCS of X_i := x_1 … x_i and Y_j := y_1 … y_j, for parameters i, j with
   0 ≤ i ≤ n and 0 ≤ j ≤ m   (X_0 and Y_0 are the empty string)
 • define variable c[..] = value of optimal solution for subproblem:
   c[i,j] = length of an LCS of X_i and Y_j
 • give recursive formula for c[..]

TU/e Algorithms (2IL15) – Lecture 4 12

  X_i := x_1 … x_i and Y_j := y_1 … y_j, for 0 ≤ i ≤ n and 0 ≤ j ≤ m
  we want a recursive formula for c[i,j] = length of an LCS of X_i and Y_j

Lemma:
  c[i,j] = 0                              if i = 0 or j = 0
  c[i,j] = c[i-1,j-1] + 1                 if i,j > 0 and x_i = y_j
  c[i,j] = max { c[i-1,j], c[i,j-1] }     if i,j > 0 and x_i ≠ y_j
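The lemma translates directly into the "recurse & memoize" variant of step 4. A minimal Python
sketch (names are illustrative; the 1-indexed x_i of the slides corresponds to X[i-1] here):

    from functools import lru_cache

    def lcs_length(X, Y):
        """Length of a longest common subsequence of X and Y, following the lemma."""
        @lru_cache(maxsize=None)
        def c(i, j):                              # c(i,j) = LCS length of X[:i] and Y[:j]
            if i == 0 or j == 0:                  # one of the prefixes is empty
                return 0
            if X[i - 1] == Y[j - 1]:              # x_i = y_j
                return c(i - 1, j - 1) + 1
            return max(c(i - 1, j), c(i, j - 1))  # x_i != y_j
        return c(len(X), len(Y))

    print(lcs_length("ACCGTAATCGACG", "ACGATTGCACTG"))   # 9, e.g. ACGTTGACG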

TU/e Algorithms (2IL15) – Lecture 4 13

Lemma:
  c[i,j] = 0                              if i = 0 or j = 0
  c[i,j] = c[i-1,j-1] + 1                 if i,j > 0 and x_i = y_j
  c[i,j] = max { c[i-1,j], c[i,j-1] }     if i,j > 0 and x_i ≠ y_j

Proof:
 • i = 0 or j = 0: trivial (at least one of X_i and Y_j is empty).
 • i,j > 0 and x_i = y_j: We can extend LCS(X_{i-1}, Y_{j-1}) by appending x_i, so
   c[i,j] ≥ c[i-1,j-1] + 1. On the other hand, by deleting the last character from LCS(X_i, Y_j)
   we obtain a common subsequence of X_{i-1} and Y_{j-1}. Hence c[i-1,j-1] ≥ c[i,j] − 1.
   It follows that c[i,j] = c[i-1,j-1] + 1.
 • i,j > 0 and x_i ≠ y_j: If LCS(X_i, Y_j) does not end with x_i, then it is a common subsequence
   of X_{i-1} and Y_j, so c[i,j] ≤ c[i-1,j] ≤ max {..}. Otherwise LCS(X_i, Y_j) cannot end with
   y_j, so c[i,j] ≤ c[i,j-1] ≤ max {..}. Obviously c[i,j] ≥ max {..}. We conclude that
   c[i,j] = max {..}.

TU/e Algorithms (2IL15) – Lecture 4 14 5 steps in designing dynamic-programming algorithms

 1. define subproblems
 2. guess first choice
 3. give recurrence for the value of an optimal solution
 4. algorithm: fill in table for c[..] in suitable order (or recurse & memoize)
 5. solve original problem

TU/e Algorithms (2IL15) – Lecture 4 15 4. Algorithm: fill in table for c[..] in suitable order

  c[i,j] = 0                              if i = 0 or j = 0
  c[i,j] = c[i-1,j-1] + 1                 if i,j > 0 and x_i = y_j
  c[i,j] = max { c[i-1,j], c[i,j-1] }     if i,j > 0 and x_i ≠ y_j

[figure: the (n+1) × (m+1) table, rows i = 0..n, columns j = 0..m; each entry depends only on its
left, top, and top-left neighbors, so the table can be filled row by row; the entry c[n,m] is the
solution to the original problem]

TU/e Algorithms (2IL15) – Lecture 4 16 4. Algorithm: fill in table for c[..] in suitable order

  c[i,j] = 0                              if i = 0 or j = 0
  c[i,j] = c[i-1,j-1] + 1                 if i,j > 0 and x_i = y_j
  c[i,j] = max { c[i-1,j], c[i,j-1] }     if i,j > 0 and x_i ≠ y_j

  LCS-Length(X,Y)
  1. n ← length[X]; m ← length[Y]
  2. for i ← 0 to n do c[i,0] ← 0
  3. for j ← 0 to m do c[0,j] ← 0
  4. for i ← 1 to n
  5.    do for j ← 1 to m
  6.         do if X[i] = Y[j]
  7.              then c[i,j] ← c[i-1,j-1] + 1
  8.              else c[i,j] ← max { c[i-1,j], c[i,j-1] }
  9. return c[n,m]
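For reference, a direct Python transcription of LCS-Length (a sketch, not from the slides;
Python strings are 0-indexed, so X[i] in the pseudocode is X[i-1] here):

    def lcs_table(X, Y):
        """Fill the (n+1) x (m+1) table c row by row; c[n][m] is the LCS length."""
        n, m = len(X), len(Y)
        c = [[0] * (m + 1) for _ in range(n + 1)]             # lines 2-3: borders set to 0
        for i in range(1, n + 1):                             # lines 4-5: row by row
            for j in range(1, m + 1):
                if X[i - 1] == Y[j - 1]:                      # line 6
                    c[i][j] = c[i - 1][j - 1] + 1             # line 7
                else:
                    c[i][j] = max(c[i - 1][j], c[i][j - 1])   # line 8
        return c

    c = lcs_table("ACCGTAATCGACG", "ACGATTGCACTG")
    print(c[-1][-1])   # 9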

TU/e Algorithms (2IL15) – Lecture 4 17 Analysis of running time

  LCS-Length(X,Y)
  1. n ← length[X]; m ← length[Y]                      Θ(1)
  2. for i ← 0 to n do c[i,0] ← 0                      Θ(n)
  3. for j ← 0 to m do c[0,j] ← 0                      Θ(m)
  4. for i ← 1 to n
  5.    do for j ← 1 to m
  6.         do if X[i] = Y[j]
  7.              then c[i,j] ← c[i-1,j-1] + 1
  8.              else c[i,j] ← max { c[i-1,j], c[i,j-1] }
  9. return c[n,m]

  Lines 4-8: ∑_{1≤i≤n} ∑_{1≤j≤m} Θ(1) = Θ(nm)

Total running time: Θ(nm)

TU/e Algorithms (2IL15) – Lecture 4 18 5 steps in designing dynamic-programming algorithms

 1. define subproblems
 2. guess first choice
 3. give recurrence for the value of an optimal solution
 4. algorithm: fill in table for c[..] in suitable order (or recurse & memoize)
 5. solve original problem

Approaches for going from the optimum value to an optimum solution:
 i.  store the choices that led to the optimum (e.g. s[i,j] for matrix multiplication)
 ii. retrace/deduce the sequence of solutions that led to the optimum backwards (next slide)

TU/e Algorithms (2IL15) – Lecture 4 19

5. Extend the algorithm so that it computes an actual solution that is optimal, and not just the
value of an optimal solution. Usual approach: an extra table to remember the choices that lead to
an optimal solution. It is also possible to do without the extra table:

  Print-LCS(c,X,Y,i,j)
  1. if i = 0 or j = 0
  2.    then skip
  3. else if X[i] = Y[j]
  4.    then Print-LCS(c,X,Y,i-1,j-1); print x_i
  5. else if c[i-1,j] > c[i,j-1]
  6.    then Print-LCS(c,X,Y,i-1,j)
  7.    else Print-LCS(c,X,Y,i,j-1)

  Initial call: Print-LCS(c,X,Y,n,m)

Advantage of avoiding the extra table: saves memory
Disadvantage: extra work to reconstruct the solution
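The same backward reconstruction in Python, reusing the lcs_table sketch from above (again an
illustrative transcription, not the official lecture code):

    def print_lcs(c, X, Y, i, j):
        """Print one LCS of X[:i] and Y[:j] by retracing the table c."""
        if i == 0 or j == 0:                  # lines 1-2: empty prefix, nothing to print
            return
        if X[i - 1] == Y[j - 1]:              # lines 3-4: matched character is in the LCS
            print_lcs(c, X, Y, i - 1, j - 1)
            print(X[i - 1], end="")
        elif c[i - 1][j] > c[i][j - 1]:       # lines 5-7: deduce where the optimum came from
            print_lcs(c, X, Y, i - 1, j)
        else:
            print_lcs(c, X, Y, i, j - 1)

    X, Y = "ACCGTAATCGACG", "ACGATTGCACTG"
    print_lcs(lcs_table(X, Y), X, Y, len(X), len(Y))   # prints a longest common subsequence
    print()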

TU/e Algorithms (2IL15) – Lecture 4 20 5 steps in designing dynamic-programming algorithms

 1. define subproblems
 2. guess first choice
 3. give recurrence for the value of an optimal solution
 4. algorithm: fill in table for c[..] in suitable order (or recurse & memoize)
 5. solve original problem

Approaches for going from the optimum value to an optimum solution:
 i.  store the choices that led to the optimum (e.g. s[i,j] for matrix multiplication)
 ii. retrace/deduce the sequence of solutions that led to the optimum backwards

TU/e Algorithms (2IL15) – Lecture 4 21 Optimal Binary Search Trees

TU/e Algorithms (2IL15) – Lecture 4 22

Observation: a balanced binary search tree does not necessarily have the best average search
time, if certain keys are searched for more often than others.

Example: dictionary for automatic translation of texts (keys are words)

[figure: two BSTs on the keys "is", "porcupine", "the". The balanced tree has "porcupine" at the
root with "is" and "the" as children; a skewed tree that puts the frequent words "the" and "is"
near the root and the rare word "porcupine" at the bottom has a better average search time.]

TU/e Algorithms (2IL15) – Lecture 4 23 The problem of constructing optimal binary search trees

Input:
 • keys k_1, k_2, …, k_n with k_1 < k_2 < … < k_n
 • probabilities p_1, p_2, …, p_n with p_i = probability that the query key is k_i
 • dummy keys d_0, d_1, …, d_n
 • probabilities q_0, q_1, …, q_n with q_i = probability that the query key lies between k_i and
   k_{i+1} (where k_0 = −∞ and k_{n+1} = +∞), that is, that the search ends in dummy d_i

Note: ∑ p_i + ∑ q_i = 1

[figure: a binary search tree with the keys k_1, …, k_n in internal nodes and the dummy keys
d_0, …, d_n in the leaves]

TU/e Algorithms (2IL15) – Lecture 4 24

Output: a tree that minimizes the expected search time (expected = weighted average)

expected cost of a query
  = ∑ p_i ∙ (1 + depth of k_i) + ∑ q_i ∙ (1 + depth of d_i)
  = 1 + ∑ p_i ∙ (depth of k_i) + ∑ q_i ∙ (depth of d_i)

this is what we want to minimize

[figure: two example trees on k_1, …, k_4 and d_0, …, d_4 of different shapes; the root has
depth 0, its children depth 1, and so on]
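The cost formula is easy to evaluate on a concrete tree. The following sketch uses a hypothetical
tuple encoding of trees, not from the slides: ('k', i, left, right) for an internal node with key
k_i, and ('d', i) for a leaf with dummy key d_i.

    def expected_cost(tree, p, q, depth=0):
        """Expected search cost: sum of probability * (1 + depth) over all nodes."""
        if tree[0] == 'd':                       # leaf holding dummy key d_i
            return q[tree[1]] * (depth + 1)
        _, i, left, right = tree                 # internal node holding key k_i
        return (p[i] * (depth + 1)
                + expected_cost(left, p, q, depth + 1)
                + expected_cost(right, p, q, depth + 1))

    # k_1 at the root, k_2 as its right child (p[0] is unused padding)
    t = ('k', 1, ('d', 0), ('k', 2, ('d', 1), ('d', 2)))
    print(expected_cost(t, p=[0, 0.3, 0.2], q=[0.2, 0.1, 0.2]))   # ≈ 2.0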

TU/e Algorithms (2IL15) – Lecture 4 25

Let's study the structure of an optimal solution
 • what are the choices that need to be made?
 • what are the subproblems that remain? do we have optimal substructure?

[figure: if key k_r is placed at the root, then the left subtree must be an optimal tree for
d_0, k_1, …, d_{r-1} and the right subtree an optimal tree for d_r, k_{r+1}, …, d_n]

TU/e Algorithms (2IL15) – Lecture 4 26 5 steps in designing dynamic-programming algorithms

 1. define subproblems
 2. guess first choice
 3. give recurrence for the value of an optimal solution
 4. algorithm: fill in table for m[..] in suitable order (or recurse & memoize)
 5. solve original problem

TU/e Algorithms (2IL15) – Lecture 4 27 3. Give recursive formula for the value of an optimal solution

 • define subproblem in terms of a few parameters:
   find a tree for d_{i-1}, k_i, …, d_j that minimizes the expected search cost,
   for parameters i, j with 0 ≤ i−1 ≤ j ≤ n
 • define variable m[..] = value of optimal solution for subproblem:
   m[i,j] = expected search cost of an optimal tree for d_{i-1}, k_i, …, d_j
 • give recursive formula for m[..]

[figure: root k_r with left subtree for d_{i-1}, k_i, …, d_{r-1} and right subtree for
d_r, k_{r+1}, …, d_j]

TU/e Algorithms (2IL15) – Lecture 4 28

recursive formula for m[i,j] = expected search cost of an optimal tree for d_{i-1}, k_i, …, d_j

suppose the root of the tree is k_r; hanging a subtree one level deeper increases the depth of
each of its nodes by 1, which adds the subtree's total probability mass to the cost, so

  expected search cost
    = p_r + (cost of left subtree) + (cost of right subtree)
    = p_r + ( m[i,r-1] + ∑_{i≤s≤r-1} p_s + ∑_{i-1≤s≤r-1} q_s )
          + ( m[r+1,j] + ∑_{r+1≤s≤j} p_s + ∑_{r≤s≤j} q_s )
    = m[i,r-1] + m[r+1,j] + ∑_{i≤s≤j} p_s + ∑_{i-1≤s≤j} q_s

define w(i,j) = ∑_{i≤s≤j} p_s + ∑_{i-1≤s≤j} q_s

Lemma:
  m[i,j] = q_{i-1}                                           if j = i−1
  m[i,j] = min_{i≤r≤j} { m[i,r-1] + m[r+1,j] + w(i,j) }      if j > i−1

[figure: root k_r, left subtree for d_{i-1}, k_i, …, d_{r-1}, right subtree for d_r, k_{r+1}, …, d_j]

TU/e Algorithms (2IL15) – Lecture 4 29 5 steps in designing dynamic-programming algorithms

 1. define subproblems
 2. guess first choice
 3. give recurrence for the value of an optimal solution
 4. algorithm: fill in table for m[..] in suitable order (or recurse & memoize)
 5. solve original problem

TU/e Algorithms (2IL15) – Lecture 4 30 4. Algorithm: fill in table for m[..] in suitable order

  m[i,j] = q_{i-1}                                           if j = i−1
  m[i,j] = min_{i≤r≤j} { m[i,r-1] + m[r+1,j] + w(i,j) }      if j > i−1

  OptimalBST-Cost(p, q)
  // p and q: arrays with probabilities p_1,…,p_n and q_0,…,q_n
  1. n ← length[p]
  2. for i ← 1 to n+1
  3.    do m[i,i-1] ← q_{i-1}
  4. for i ← n downto 1
  5.    do for j ← i to n
  6.         do m[i,j] ← min_{i≤r≤j} { m[i,r-1] + m[r+1,j] + w(i,j) }
  7. return m[1,n]

(the values w(i,j) can be precomputed, using w(i,j) = w(i,j-1) + p_j + q_j)

TU/e Algorithms (2IL15) – Lecture 4 31 Analysis of running time

Running time of OptimalBST-Cost: Θ(n³), similar to the algorithm for matrix-chain multiplication:
Θ(n²) table entries, and computing entry m[i,j] takes O(n) time (the minimum over i ≤ r ≤ j).
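Putting the recurrence, the w-values, and the fill order together: a minimal Python sketch (the
function name and array layout are illustrative; w is built incrementally via
w(i,j) = w(i,j-1) + p_j + q_j rather than in a separate precomputation pass):

    def optimal_bst_cost(p, q):
        """Expected search cost of an optimal BST.
        p[1..n]: key probabilities (p[0] is unused padding); q[0..n]: dummy probabilities."""
        n = len(p) - 1
        w = [[0.0] * (n + 1) for _ in range(n + 2)]   # w[i][j] = sum p_i..p_j + q_{i-1}..q_j
        m = [[0.0] * (n + 1) for _ in range(n + 2)]
        for i in range(1, n + 2):                     # base case j = i-1: only dummy d_{i-1}
            w[i][i - 1] = q[i - 1]
            m[i][i - 1] = q[i - 1]
        for i in range(n, 0, -1):                     # fill in order of increasing j - i
            for j in range(i, n + 1):
                w[i][j] = w[i][j - 1] + p[j] + q[j]
                m[i][j] = min(m[i][r - 1] + m[r + 1][j] + w[i][j]   # try every root k_r
                              for r in range(i, j + 1))
        return m[1][n]

    # the example from CLRS Section 15.5; its optimal tree has expected cost 2.75
    print(optimal_bst_cost(p=[0, .15, .10, .05, .10, .20],
                           q=[.05, .10, .05, .05, .05, .10]))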

TU/e Algorithms (2IL15) – Lecture 4 32 5 steps in designing dynamic-programming algorithms

 1. define subproblems
 2. guess first choice
 3. give recurrence for the value of an optimal solution
 4. algorithm: fill in table for m[..] in suitable order (or recurse & memoize)
 5. solve original problem

TU/e Algorithms (2IL15) – Lecture 4 33 5 steps in designing dynamic-programming algorithms

 1. define subproblems
 2. guess first choice
 3. give recurrence for the value of an optimal solution
 4. algorithm: fill in table for m[..] in suitable order (or recurse & memoize)
 5. solve original problem

Try it!

TU/e Algorithms (2IL15) – Lecture 4 34 5 steps in designing dynamic-programming algorithms

 1. define subproblems   [#subproblems]
 2. guess first choice   [#choices]
 3. give recurrence for the value of an optimal solution   [time/subproblem, treating recursive
    calls as Θ(1)]
     • define subproblem in terms of a few parameters
     • define variable m[..] = value of optimal solution for subproblem
     • relate subproblems by giving recurrence for m[..]
 4. algorithm: fill in table for m[..] in suitable order (or recurse & memoize)
 5. solve original problem

Running time: #subproblems * time/subproblem
Correctness: (i) correctness of recurrence: relate OPT to recurrence
             (ii) correctness of algorithm: induction using (i)

A problem to think about: Knapsack with integer weights
  items with values v_1, …, v_n and integer weights w_1, …, w_n
  select items to pack of total weight < W, maximizing total value
  (one possible solution sketch follows below, if you want to check your answer)
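One possible bottom-up sketch for the exercise (my formulation, not the official solution; it
uses the common "total weight at most W" variant, and since the weights are integers a strict
bound "< W" is obtained by simply packing with capacity W-1):

    def knapsack(v, w, W):
        """m[i][cap] = maximum value using items 1..i with weight budget cap."""
        n = len(v)
        m = [[0] * (W + 1) for _ in range(n + 1)]       # m[0][*] = 0: no items yet
        for i in range(1, n + 1):
            for cap in range(W + 1):
                m[i][cap] = m[i - 1][cap]               # first choice: leave item i out
                if w[i - 1] <= cap:                     # or pack it, if it fits
                    m[i][cap] = max(m[i][cap],
                                    m[i - 1][cap - w[i - 1]] + v[i - 1])
        return m[n][W]

    print(knapsack(v=[60, 100, 120], w=[10, 20, 30], W=50))   # 220 (items 2 and 3)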

TU/e Algorithms (2IL15) – Lecture 4 35 Summary of part I of the course

optimization problems: we want to find a valid solution minimizing cost (or maximizing profit)

techniques for optimization
 • backtracking: just try all solutions (pruning when possible)
 • greedy algorithms: make the choice that seems best, recursively solve the remaining subproblem
 • dynamic programming: give a recursive formula, fill in a table

always start by investigating the structure of an optimal solution