Dynamic Programming
Slides © 2004 Goodrich, Tamassia

Matrix Chain-Products (not in book)

Dynamic Programming is a general algorithm design paradigm. Rather than giving the general structure first, let us start with a motivating example: matrix chain-products.

Review: matrix multiplication.
- C = A*B, where A is d × e and B is e × f, so C is d × f.
- Computing each entry C[i,j] takes e multiplications, so the whole product takes O(d·e·f) time.
(The slide's diagram of A, B, and C with entry C[i,j] highlighted is not reproduced here.)
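To make the O(d·e·f) bound concrete, here is a minimal Python sketch (not from the slides) of the standard triple-loop multiplication; the names A, B, C, d, e, f mirror the slide:

    # A is d x e, B is e x f; the triple loop does exactly d*e*f scalar multiplications,
    # one for each choice of row i, column j, and inner index k.
    def mat_mult(A, B):
        d, e, f = len(A), len(B), len(B[0])
        C = [[0] * f for _ in range(d)]
        for i in range(d):
            for j in range(f):
                for k in range(e):
                    C[i][j] += A[i][k] * B[k][j]
        return C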

Matrix Chain-Products

Matrix Chain-Product: compute A = A_0*A_1*…*A_{n-1}, where A_i is a d_i × d_{i+1} matrix.
Problem: how do we parenthesize the product?
Example:
- B is 3 × 100, C is 100 × 5, D is 5 × 5
- (B*C)*D takes 1500 + 75 = 1575 ops
- B*(C*D) takes 2500 + 1500 = 4000 ops
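The arithmetic on this slide is easy to check; a quick sketch (not from the slides), using the rule that multiplying a p × q matrix by a q × r matrix costs p·q·r operations:

    # B is 3x100, C is 100x5, D is 5x5.
    cost_BC_first = 3 * 100 * 5 + 3 * 5 * 5      # (B*C)*D: 1500 + 75 = 1575
    cost_CD_first = 100 * 5 * 5 + 3 * 100 * 5    # B*(C*D): 2500 + 1500 = 4000
    print(cost_BC_first, cost_CD_first)          # 1575 4000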

An Enumeration Approach

Matrix Chain-Product algorithm:
- Try all possible ways to parenthesize A = A_0*A_1*…*A_{n-1}.
- Calculate the number of ops for each one.
- Pick the one that is best.
Running time: the number of parenthesizations equals the number of full binary trees with n leaves (one leaf per matrix). This is exponential! It is the Catalan number, which grows almost as fast as 4^n. This is a terrible algorithm!
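For reference (an addition, not on the slide), the exact count of parenthesizations of n matrices is the (n-1)-st Catalan number:

$$ P(n) = C_{n-1} = \frac{1}{n}\binom{2(n-1)}{n-1} = \Theta\!\left(\frac{4^{n}}{n^{3/2}}\right) $$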

A Greedy Approach

Idea #1: repeatedly select the product that uses (up) the most operations.
Counter-example:
- A is 10 × 5, B is 5 × 10, C is 10 × 5, D is 5 × 10
- Greedy idea #1 gives (A*B)*(C*D), which takes 500 + 500 + 1000 = 2000 ops
- A*((B*C)*D) takes 250 + 250 + 500 = 1000 ops

Another Greedy Approach

Idea #2: repeatedly select the product that uses the fewest operations.
Counter-example:
- A is 101 × 11, B is 11 × 9, C is 9 × 100, D is 100 × 99
- Greedy idea #2 gives A*((B*C)*D), which takes 9900 + 108900 + 109989 = 228789 ops
- (A*B)*(C*D) takes 9999 + 89100 + 89991 = 189090 ops
The greedy approach does not give us the optimal value.

A "Recursive" Approach

Define subproblems:
- Find the best parenthesization of A_i*A_{i+1}*…*A_j.
- Let N[i,j] denote the minimum number of operations done by this subproblem.
- The optimal solution for the whole problem is N[0,n-1].
Subproblem optimality: the optimal solution can be defined in terms of optimal subproblems.
- There has to be a final multiplication (the root of the expression tree) in the optimal solution; say the final multiply is at index i: (A_0*…*A_i)*(A_{i+1}*…*A_{n-1}).
- Then the optimal solution N[0,n-1] is the sum of two optimal subproblems, N[0,i] and N[i+1,n-1], plus the cost of the last multiply.
- If the global optimum did not use optimal solutions to these subproblems, we could substitute them and get an even better "optimal" solution.
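Translating this definition directly into code gives an exponential-time recursion; a minimal sketch (not from the slides), where d is the list of dimensions d_0, …, d_n so that A_i is d[i] × d[i+1]:

    # Naive recursion over the subproblem definition; without memoization it
    # recomputes overlapping subproblems and takes exponential time.
    def chain_cost(i, j, d):
        if i == j:
            return 0                        # a single matrix needs no multiplications
        return min(chain_cost(i, k, d) + chain_cost(k + 1, j, d) + d[i] * d[k + 1] * d[j + 1]
                   for k in range(i, j))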

A Characterizing Equation

The global optimum has to be defined in terms of optimal subproblems, depending on where the final multiply is. Let us consider all possible places for that final multiply. Recall that A_i is a d_i × d_{i+1} matrix. A characterizing equation for N[i,j] is:

    N[i,i] = 0
    N[i,j] = min over i ≤ k < j of { N[i,k] + N[k+1,j] + d_i*d_{k+1}*d_{j+1} }

Note that the subproblems are not independent; the subproblems overlap.

A Dynamic Programming Algorithm

Since subproblems overlap, we don't use plain recursion. Instead, we construct optimal subproblems "bottom-up":
- The N[i,i]'s are easy, so start with them.
- Then do the length-2, length-3, … subproblems, and so on.
Running time: O(n^3).

Algorithm matrixChain(S):
    Input: a sequence S of n matrices to be multiplied
    Output: the number of operations in an optimal parenthesization of S
    for i ← 0 to n-1 do
        N[i,i] ← 0
    for b ← 1 to n-1 do
        for i ← 0 to n-b-1 do
            j ← i+b
            N[i,j] ← +infinity
            for k ← i to j-1 do
                N[i,j] ← min{ N[i,j], N[i,k] + N[k+1,j] + d_i*d_{k+1}*d_{j+1} }
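A direct Python rendering of this pseudocode (a sketch, assuming the dimensions are given as a list d of length n+1 so that A_i is d[i] × d[i+1]):

    # Bottom-up matrix chain DP: O(n^3) time, O(n^2) space.
    def matrix_chain(d):
        n = len(d) - 1                          # number of matrices
        N = [[0] * n for _ in range(n)]         # N[i][i] = 0 by construction
        for b in range(1, n):                   # subproblems of length b+1
            for i in range(n - b):
                j = i + b
                N[i][j] = min(N[i][k] + N[k + 1][j] + d[i] * d[k + 1] * d[j + 1]
                              for k in range(i, j))
        return N[0][n - 1]

For the earlier example, matrix_chain([3, 100, 5, 5]) returns 1575, the cost of (B*C)*D.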

A Dynamic Programming Algorithm Visualization

(The slide shows the n × n table N being filled in by diagonals, with the answer in entry N[0,n-1].)
- The bottom-up construction fills in the N array by diagonals.
- N[i,j] gets its value from previous entries in row i and column j.
- Filling in each entry of the N table takes O(n) time, so the total running time is O(n^3).
- The actual parenthesization can be obtained by remembering the best "k" for each N entry, as sketched below.
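One way to remember "k" (a sketch extending matrix_chain above; the split table K and the helper expr are illustrative names, not from the slides):

    # Record the best split point k for each (i, j), then rebuild the parenthesization.
    def matrix_chain_with_splits(d):
        n = len(d) - 1
        N = [[0] * n for _ in range(n)]
        K = [[0] * n for _ in range(n)]
        for b in range(1, n):
            for i in range(n - b):
                j = i + b
                N[i][j], K[i][j] = min(
                    (N[i][k] + N[k + 1][j] + d[i] * d[k + 1] * d[j + 1], k)
                    for k in range(i, j))
        def expr(i, j):                          # print A_i ... A_j with optimal parentheses
            if i == j:
                return "A%d" % i
            k = K[i][j]
            return "(%s*%s)" % (expr(i, k), expr(k + 1, j))
        return N[0][n - 1], expr(0, n - 1)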

The General Dynamic Programming Technique

Applies to a problem that at first seems to require a lot of time (possibly exponential), provided we have:
- Simple subproblems: the subproblems can be defined in terms of a few variables, such as j, k, l, m, and so on.
- Subproblem optimality: the global optimum value can be defined in terms of optimal subproblems.
- Subproblem overlap: the subproblems are not independent, but instead they overlap (hence they should be constructed bottom-up).

Subsequences (§ )

A subsequence of a character string x_0 x_1 x_2 … x_{n-1} is a string of the form x_{i_1} x_{i_2} … x_{i_k}, where i_j < i_{j+1} (the chosen positions are strictly increasing). This is not the same as a substring!
Example, with the string ABCDEFGHIJK:
- Subsequence: ACEGIJK
- Subsequence: DFGHK
- Not a subsequence: DAGH
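A subsequence test is a single left-to-right scan; a small Python sketch (not on the slide):

    # True iff s is a subsequence of t: scan t once, consuming characters of s in order.
    def is_subsequence(s, t):
        i = 0
        for c in t:
            if i < len(s) and c == s[i]:
                i += 1
        return i == len(s)

    # is_subsequence("ACEGIJK", "ABCDEFGHIJK")  -> True
    # is_subsequence("DAGH",    "ABCDEFGHIJK")  -> False (A cannot appear after D)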

The Longest Common Subsequence (LCS) Problem

Given two strings X and Y, the longest common subsequence (LCS) problem is to find a longest subsequence common to both X and Y.
- Has applications to DNA similarity testing (the alphabet is {A, C, G, T}).
- Example: ABCDEFG and XZACKDFWGH have ACDFG as a longest common subsequence.

A Poor Approach to the LCS Problem

A brute-force solution:
- Enumerate all subsequences of X.
- Test which ones are also subsequences of Y.
- Pick the longest one.
Analysis: if X is of length n, then it has 2^n subsequences, so this is an exponential-time algorithm!

A Dynamic-Programming Approach to the LCS Problem

Define L[i,j] to be the length of the longest common subsequence of X[0..i] and Y[0..j]. Allow -1 as an index, so L[-1,k] = 0 and L[k,-1] = 0, to indicate that the null part of X or Y has no match with the other. Then we can define L[i,j] in the general case as follows:
1. If x_i = y_j, then L[i,j] = L[i-1,j-1] + 1 (we can add this match).
2. If x_i ≠ y_j, then L[i,j] = max{L[i-1,j], L[i,j-1]} (we have no match here).
(The slide's two diagrams illustrating Case 1 and Case 2 are not reproduced here.)
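Putting the boundary and the two cases together (an added summary, not a separate slide):

$$ L[i,j] = \begin{cases} 0 & \text{if } i = -1 \text{ or } j = -1,\\ L[i-1,j-1] + 1 & \text{if } x_i = y_j,\\ \max\{L[i-1,j],\; L[i,j-1]\} & \text{if } x_i \neq y_j. \end{cases} $$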

An LCS Algorithm

Algorithm LCS(X, Y):
    Input: strings X and Y with n and m elements, respectively
    Output: for i = 0,…,n-1 and j = 0,…,m-1, the length L[i,j] of a longest string that is a subsequence of both X[0..i] = x_0 x_1 x_2 … x_i and Y[0..j] = y_0 y_1 y_2 … y_j
    for i ← -1 to n-1 do
        L[i,-1] ← 0
    for j ← 0 to m-1 do
        L[-1,j] ← 0
    for i ← 0 to n-1 do
        for j ← 0 to m-1 do
            if x_i = y_j then
                L[i,j] ← L[i-1,j-1] + 1
            else
                L[i,j] ← max{ L[i-1,j], L[i,j-1] }
    return array L
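A Python rendering of this algorithm (a sketch; lcs_length is an illustrative name, and it uses an (n+1) × (m+1) table in which row 0 and column 0 play the role of the -1 indices, so L[i+1][j+1] here corresponds to the slide's L[i,j]):

    # Bottom-up LCS table, O(n*m) time and space.
    def lcs_length(X, Y):
        n, m = len(X), len(Y)
        L = [[0] * (m + 1) for _ in range(n + 1)]   # extra row/column of zeros = the -1 boundary
        for i in range(n):
            for j in range(m):
                if X[i] == Y[j]:
                    L[i + 1][j + 1] = L[i][j] + 1
                else:
                    L[i + 1][j + 1] = max(L[i][j + 1], L[i + 1][j])
        return L

    # lcs_length("ABCDEFG", "XZACKDFWGH")[-1][-1] == 5, the length of ACDFG.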

Visualizing the LCS Algorithm

(The slide's visualization of the filled-in L table is not reproduced in this transcript.)

Analysis of the LCS Algorithm

We have two nested loops:
- The outer one iterates n times.
- The inner one iterates m times.
- A constant amount of work is done inside each iteration of the inner loop.
Thus, the total running time is O(nm). The answer is contained in L[n-1,m-1], and the subsequence itself can be recovered from the L table, as sketched below.
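Recovering an actual LCS from the table (a sketch building on lcs_length above; walk back from the bottom-right corner, following whichever case produced each entry):

    def lcs_string(X, Y):
        L = lcs_length(X, Y)
        i, j, out = len(X), len(Y), []
        while i > 0 and j > 0:
            if X[i - 1] == Y[j - 1]:              # case 1: a match, keep the character
                out.append(X[i - 1]); i -= 1; j -= 1
            elif L[i - 1][j] >= L[i][j - 1]:      # case 2: move toward the larger neighbor
                i -= 1
            else:
                j -= 1
        return "".join(reversed(out))

    # lcs_string("ABCDEFG", "XZACKDFWGH") == "ACDFG"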