Algorithm Design and Analysis (ADA): 7. Dynamic Programming

Objective
o introduce DP, its two hallmarks, and two major programming techniques
o look at two examples: the Fibonacci series and LCS

Overview
1. Dynamic Programming (DP)
2. Fibonacci Series
3. Features of DP Code
4. Longest Common Subsequence (LCS)
5. Towards a Better LCS Algorithm
6. Recursive Definition of c[]
7. Is the c[] Algorithm Optimal?
8. Repeated Subproblems?
9. LCS() as a DP Problem
10. Bottom-up Examples
11. Finding the LCS
12. Space and Bottom-up

1. Dynamic Programming (DP)
The "programming" in DP predates computing, and means "using tables to store information".
DP is a recursive programming technique:
o the problem is recursively described in terms of similar but smaller subproblems

What makes DP Different?
Hallmark #1: Optimal solution from optimal parts
o an optimal (best) solution to the problem is a composition of optimal (best) subproblem solutions
Hallmark #2: Repeated (overlapping) subproblems
o the problem contains the same subproblems, repeated many times

Some Famous DP Examples
Unix diff for comparing two files
Smith-Waterman for genetic sequence alignment
o determines similar regions between two protein sequences (strings)
Bellman-Ford for shortest path routing in networks
Cocke-Kasami-Younger for parsing context-free grammars

2. Fibonacci Series
1, 1, 2, 3, 5, 8, 13, 21, 34, ...

Series defined by:
a_0 = 1
a_1 = 1
a_n = a_(n-1) + a_(n-2)

Recursive algorithm:
fib(n)
1. if n == 0 or n == 1, then
2.   return 1
3. else
4.   a = fib(n-1)
5.   b = fib(n-2)
6.   return a+b

Running time? O(2^n)
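As a concrete reference, here is the recursive pseudocode above rendered as runnable Java (a minimal sketch; the class name NaiveFib is ours):

public class NaiveFib {
    // Direct translation of fib(n): each call spawns two more calls, so O(2^n).
    static int fib(int n) {
        if (n == 0 || n == 1)
            return 1;
        return fib(n - 1) + fib(n - 2);
    }

    public static void main(String[] args) {
        System.out.println(fib(10));   // 89, the 11th value of the series above
    }
}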

Computing Fibonacci Faster
Fibonacci can be viewed as a DP problem:
o the optimal solution (fib(n)) is a combination of optimal sub-solutions (fib(n-1) and fib(n-2))
o there are a lot of repeated subproblems
  - look at the execution graph (next slide)
  - about 2^n subproblems in the call tree, but only n different ones

Execution graph of fib(5) (figure): lots of repeated work, which only gets worse for bigger n.

3. Features of DP Code
Memoization: values returned by recursive calls are stored in a table (array) so they do not need to be recalculated
o this can save an enormous amount of running time
Bottom-up algorithms: simple solutions are computed first (e.g. fib(0), fib(1), ...), leading to larger solutions (e.g. fib(23), fib(24), ...)
o this means that the solutions are added to the table in a fixed order, which may allow them to be calculated faster and the table to be smaller

Memoized fib() Algorithm

fib(n)
1. if (m[n] == 0) then
2.   m[n] = fib(n − 1) + fib(n − 2)
3. return m[n]

(the table m[] starts filled with 0's, except for the base cases m[0] = m[1] = 1)

Running time is linear = O(n)
Requires extra space for the m[] table = O(n)
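A runnable Java sketch of the memoized algorithm (our own class and setup code; like the pseudocode, it uses 0 to mark an empty entry, so the base cases m[0] = m[1] = 1 must be filled in before the first call):

public class MemoFib {
    static int[] m;   // the memo table; m[i] == 0 means "not yet computed"

    static int fib(int n) {
        if (m[n] == 0)
            m[n] = fib(n - 1) + fib(n - 2);   // compute once, reuse afterwards
        return m[n];
    }

    public static void main(String[] args) {
        int n = 40;
        m = new int[n + 1];
        m[0] = 1;   // base cases, so the recursion bottoms out
        m[1] = 1;
        System.out.println(fib(n));   // linear time, O(n) extra space
    }
}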

Bottom-up fib() Algorithm

int fib(int n)
{
    if (n == 0)
        return 1;
    else {
        int prev = 1;   // fib(i-1)
        int curr = 1;   // fib(i)
        int temp;
        for (int i = 1; i < n; i++) {
            temp = prev + curr;   // fib(i+1) = fib(i) + fib(i-1)
            prev = curr;
            curr = temp;
        }
        return curr;
    }
}

Running time = O(n)
Space requirement is 3 variables = O(1)!
o this has nothing to do with changing from recursion to a loop, but with changing from top-down to bottom-up execution, which in this case is easier to write as a loop

4. Longest Common Subsequence (LCS)
Given two sequences x[1..m] and y[1..n], find a longest subsequence common to them both.

x: ABCBDAB
y: BDCABA

BCBA = LCS(x, y); BDAB and BCAB are also longest common subsequences

Brute-force LCS Algorithm
Check every subsequence of x[1..m] to see if it is also a subsequence of y[1..n].
Analysis:
o checking time for each subsequence is O(n)
o there are 2^m subsequences of x[] (each element of x can be used or not used)
o worst-case running time = O(n * 2^m), exponential time
SLOW == BAD
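To make the analysis concrete, here is a brute-force sketch in Java (our own code, not from the slides): it enumerates the 2^m subsequences of x with a bitmask and checks each one against y in O(n) time.

public class BruteForceLCS {
    // Is s a subsequence of y? A single O(|y|) scan.
    static boolean isSubsequence(String s, String y) {
        int i = 0;
        for (int j = 0; j < y.length() && i < s.length(); j++)
            if (s.charAt(i) == y.charAt(j))
                i++;
        return i == s.length();
    }

    static String lcs(String x, String y) {
        String best = "";
        for (int mask = 0; mask < (1 << x.length()); mask++) {   // 2^m subsequences
            StringBuilder sb = new StringBuilder();
            for (int i = 0; i < x.length(); i++)
                if ((mask & (1 << i)) != 0)
                    sb.append(x.charAt(i));
            String s = sb.toString();
            if (s.length() > best.length() && isSubsequence(s, y))
                best = s;
        }
        return best;
    }

    public static void main(String[] args) {
        System.out.println(lcs("ABCBDAB", "BDCABA"));   // a longest common subsequence, length 4
    }
}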

5. Towards a Better LCS Algorithm
Simplify the problem:
1. find the length of an LCS
2. we'll extend the algorithm later to find the LCS itself

Prefixes
If x = ABCBDAB then a prefix is x[1..4] == ABCB
o we abbreviate this as x_4
Also, x_0 is the empty sequence

Creating a Table of Lengths
c[] is a table (2D array) for storing LCS lengths:
c[i, j] = | LCS(x[1..i], y[1..j]) |
o | s | is the length of a sequence s
Since x is of length m, and y is of length n:
o c[m, n] = | LCS(x, y) |

Calculating LCS Lengths
Since X_0 and Y_0 are empty strings, their LCS is always empty (i.e. c[0, 0] == 0)
The LCS of an empty string and any other string is empty, so for every i and j:
c[0, j] == c[i, 0] == 0

Initial c[]
(figure: the c[] table with row 0 and column 0 filled with 0's)

6. Recursive Definition of c[]

c[i, j] = 0                             if i == 0 or j == 0
c[i, j] = c[i-1, j-1] + 1               if i, j > 0 and x[i] == y[j]
c[i, j] = max( c[i-1, j], c[i, j-1] )   if i, j > 0 and x[i] != y[j]

The first line of this definition fills the top row and first column of c[] with 0's, as in the non-recursive approach.

When we calculate c[i, j], there are two cases.
First case: x[i] == y[j]
o one more symbol in strings X and Y matches, so the length of LCS(X_i, Y_j) equals the length of the LCS of the smaller strings X_(i-1) and Y_(j-1), plus 1

Second case: x[i] != y[j]
o as the symbols don't match, our solution is not improved, and the length of LCS(X_i, Y_j) is the same as the bigger of the two earlier solutions (i.e. the max of LCS(X_i, Y_(j-1)) and LCS(X_(i-1), Y_j))

7. Is the c[] Algorithm Optimal?
One advantage of a recursive definition for c[] is that it makes it easy to show that c[] is optimal, by induction:
o c[i, j] either increases the size of a sub-solution (line 2) or uses the bigger of two sub-solutions (line 3)
o assuming that the smaller c[] entries are optimal, a larger c[] entry is optimal
o when combined with the base cases, which are optimal, c[] is an optimal solution for all entries

Is LCS() Optimal?
c[] is an optimal way to calculate the length of an LCS using smaller optimal solutions (Hallmark #1)
The c[] algorithm can be used to return the LCS itself (see later), so LCS also has Hallmark #1

LCS Length as Recursive Code

LCS(x, y, i, j)
  if i == 0 or j == 0 then
    c[i, j] ← 0
  else if x[i] == y[j] then
    c[i, j] ← LCS(x, y, i–1, j–1) + 1
  else
    c[i, j] ← max( LCS(x, y, i–1, j), LCS(x, y, i, j–1) )
  return c[i, j]

The recursive definition of c[] has been changed into an LCS() function which fills in c[] and returns c[i, j].
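A direct Java rendering of this recursive version (a sketch with our own names; the c[] stores are dropped since nothing is reused yet, which is exactly why the next slides show it is exponential):

public class RecursiveLCS {
    // Length of LCS(x[1..i], y[1..j]); subproblems are recomputed every time.
    static int lcs(String x, String y, int i, int j) {
        if (i == 0 || j == 0)
            return 0;
        else if (x.charAt(i - 1) == y.charAt(j - 1))   // Java strings are 0-indexed
            return lcs(x, y, i - 1, j - 1) + 1;
        else
            return Math.max(lcs(x, y, i - 1, j), lcs(x, y, i, j - 1));
    }

    public static void main(String[] args) {
        String x = "ABCB", y = "BDCAB";
        System.out.println(lcs(x, y, x.length(), y.length()));   // 3
    }
}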

8. Repeated Subproblems?
Does the LCS() algorithm have many repeating (overlapping) subproblems?
o i.e. does it have DP Hallmark #2?
Consider the worst-case execution:
o x[i] ≠ y[j], in which case the algorithm evaluates two subproblems, each with only one parameter decremented

Recursion Tree (in the worst case)
(figure: the recursion tree) Its height = m + n, so the total work is exponential, but we're repeating lots of subproblems.

Dynamic Programming Hallmark #2
The number of distinct LCS subproblems for two strings of lengths m and n is only m*n
o a lot less than the 2^(m+n) total number of subproblems

9. LCS() as a DP Problem
LCS has both DP hallmarks, and so will benefit from the DP programming techniques:
o recursion
o memoization
o bottom-up execution

9.1. Memoization

LCS(x, y, i, j)
  if c[i, j] is empty then   // calculate only if not already in c[i, j]
    if i == 0 or j == 0 then
      c[i, j] ← 0
    else if x[i] == y[j] then
      c[i, j] ← LCS(x, y, i–1, j–1) + 1
    else
      c[i, j] ← max( LCS(x, y, i–1, j), LCS(x, y, i, j–1) )
  return c[i, j]

Time = Θ(m*n) == constant work per table entry
Space = Θ(m*n)
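A runnable Java sketch of the memoized LCS() (our own rendering; it uses -1 to mark an "empty" table entry, since 0 is a legitimate length):

public class MemoLCS {
    static int[][] c;
    static String x, y;

    static int lcs(int i, int j) {
        if (c[i][j] == -1) {   // calculate only if not already in c[i, j]
            if (i == 0 || j == 0)
                c[i][j] = 0;
            else if (x.charAt(i - 1) == y.charAt(j - 1))
                c[i][j] = lcs(i - 1, j - 1) + 1;
            else
                c[i][j] = Math.max(lcs(i - 1, j), lcs(i, j - 1));
        }
        return c[i][j];
    }

    public static void main(String[] args) {
        x = "ABCBDAB"; y = "BDCABA";
        c = new int[x.length() + 1][y.length() + 1];
        for (int[] row : c)
            java.util.Arrays.fill(row, -1);   // mark every entry as empty
        System.out.println(lcs(x.length(), y.length()));   // 4
    }
}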

9.2. Bottom-up Execution
The memoized algorithm works top-down:
o start with the large subsequences, and calculate the smaller subsequences from them
Let's switch to bottom-up execution:
o calculate the small subsequences first, then move to larger ones

LCS Length Bottom-up

LCS-Length(X, Y)
 1. m = length(X)               // get the # of symbols in X
 2. n = length(Y)               // get the # of symbols in Y
 3. for i = 1 to m: c[i,0] = 0  // special case: Y_0
 4. for j = 1 to n: c[0,j] = 0  // special case: X_0
 5. for i = 1 to m              // for all X_i
 6.   for j = 1 to n            // for all Y_j
 7.     if ( X_i == Y_j )
 8.       c[i,j] = c[i-1,j-1] + 1
 9.     else c[i,j] = max( c[i-1,j], c[i,j-1] )
10. return c

This uses the same recursive definition of c[] as before.
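A Java rendering of LCS-Length (a sketch; Java strings are 0-indexed, so the slide's X_i is x.charAt(i - 1)):

public class LCSLength {
    static int[][] lcsLength(String x, String y) {
        int m = x.length(), n = y.length();
        int[][] c = new int[m + 1][n + 1];   // row 0 and column 0 stay 0 (Java default)
        for (int i = 1; i <= m; i++)
            for (int j = 1; j <= n; j++)
                if (x.charAt(i - 1) == y.charAt(j - 1))
                    c[i][j] = c[i - 1][j - 1] + 1;
                else
                    c[i][j] = Math.max(c[i - 1][j], c[i][j - 1]);
        return c;
    }

    public static void main(String[] args) {
        int[][] c = lcsLength("ABCB", "BDCAB");
        System.out.println(c[4][5]);   // 3, matching the worked example
    }
}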

10. Bottom-up Examples
We'll see how a bottom-up LCS works on:
X = ABCB
Y = BDCAB
LCS(X, Y) = BCB
LCS-length(X, Y) = 3

LCS Example 1
X = ABCB; m = |X| = 4
Y = BDCAB; n = |Y| = 5
Allocate the (m+1) x (n+1) table c[0..4, 0..5]

(figures: the next slides fill in c[] one entry at a time)
First, row 0 and column 0 are set to 0:
for i = 1 to m: c[i,0] = 0
for j = 1 to n: c[0,j] = 0
Then each remaining entry is filled, row by row, using:
if ( X_i == Y_j )
  c[i,j] = c[i-1,j-1] + 1
else
  c[i,j] = max( c[i-1,j], c[i,j-1] )

Running Time
The bottom-up LCS algorithm calculates the value of each entry of the array c[m, n].
So what is the running time? O(m*n)
o since each c[i, j] is calculated in constant time, and there are m*n elements in the array

Example 2
(figure: the filled c[] table for the earlier example strings, with m elements in x[1..m] and n elements in y[1..n])
Where the symbols match (B == B), the entry is the diagonal entry plus 1.

11. Finding the LCS
So far, we have found the length of an LCS. We now want to modify this algorithm so it calculates the LCS of X and Y.
Each c[i, j] depends on c[i-1, j] and c[i, j-1], or on c[i-1, j-1]
For each c[i, j] we can trace back how it was calculated:
o for example, here (figure) c[i, j] = c[i-1, j-1] + 1 = 2 + 1 = 3

So we can start from c[m, n] (the bottom right of c[]) and move backwards.
Whenever x[i] == y[j] (so that c[i, j] == c[i-1, j-1] + 1), record x[i] as part of the LCS.
When i == 0 or j == 0 we have reached the beginning, so we can stop.
Remember the recursive definition of c[] from section 6.

Finding LCS Example 1
(figures: tracing back through the c[] table for X = ABCB and Y = BDCAB, from c[4, 5] to the top-left)
LCS: BCB
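A Java sketch of the traceback (our own code; it rebuilds the c[] table with the same bottom-up rule, then walks back from c[m][n]):

public class LCSTraceback {
    // Build the c[] table bottom-up (same rule as LCS-Length).
    static int[][] lcsLength(String x, String y) {
        int[][] c = new int[x.length() + 1][y.length() + 1];
        for (int i = 1; i <= x.length(); i++)
            for (int j = 1; j <= y.length(); j++)
                c[i][j] = (x.charAt(i - 1) == y.charAt(j - 1))
                        ? c[i - 1][j - 1] + 1
                        : Math.max(c[i - 1][j], c[i][j - 1]);
        return c;
    }

    // Walk back from c[m][n]: a diagonal step means x[i] is part of the LCS.
    static String traceback(int[][] c, String x, String y) {
        StringBuilder sb = new StringBuilder();
        int i = x.length(), j = y.length();
        while (i > 0 && j > 0) {
            if (x.charAt(i - 1) == y.charAt(j - 1)) {
                sb.append(x.charAt(i - 1));   // record the matching symbol
                i--; j--;                     // move diagonally
            } else if (c[i - 1][j] >= c[i][j - 1]) {
                i--;                          // value came from above
            } else {
                j--;                          // value came from the left
            }
        }
        return sb.reverse().toString();       // symbols were collected back to front
    }

    public static void main(String[] args) {
        String x = "ABCB", y = "BDCAB";
        System.out.println(traceback(lcsLength(x, y), x, y));   // BCB
    }
}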

Finding LCS for Example 2
Reconstruct the LCS by tracing backwards (start at the bottom-right, follow the numbers and the lines back to the top-left).
LCS() = BCBA

LCS() Offers Choices
There may be several paths back through the table, which represent different answers for LCS()
o e.g. LCS() = BDAB
All of them have the LCS-length of 4

12. Space and Bottom-up
With this bottom-up approach, the space requirements can be reduced from O(m*n) to O(min(m, n)). Why?
o the calculation of each element in the current row only depends on 3 elements (2 from the previous row)
o alternatively, a column can be calculated using the previous column
So the calculation space only requires 1 row or 1 column, whichever is smaller (see the sketch below).
Note that with only one row or column kept, the table can no longer be traced backwards, so this trick computes the LCS length, not the LCS itself.
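A Java sketch of this space saving (our own code): only the previous and current rows of c[] are kept, so the length calculation needs O(min(m, n)) space.

public class TwoRowLCS {
    static int lcsLength(String x, String y) {
        if (y.length() > x.length()) {          // make y the shorter string,
            String t = x; x = y; y = t;         // so rows are as short as possible
        }
        int[] prev = new int[y.length() + 1];   // row i-1 of c[]
        int[] curr = new int[y.length() + 1];   // row i of c[]
        for (int i = 1; i <= x.length(); i++) {
            for (int j = 1; j <= y.length(); j++)
                if (x.charAt(i - 1) == y.charAt(j - 1))
                    curr[j] = prev[j - 1] + 1;
                else
                    curr[j] = Math.max(prev[j], curr[j - 1]);
            int[] t = prev; prev = curr; curr = t;   // current row becomes previous
        }
        return prev[y.length()];                 // prev holds the last computed row
    }

    public static void main(String[] args) {
        System.out.println(lcsLength("ABCBDAB", "BDCABA"));   // 4
    }
}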