CS420 Lecture 9 Dynamic Programming

Optimization Problems In optimization problems a set of choices must be made to arrive at an optimum, and subproblems are encountered along the way. This often leads to a recursive definition of the solution. However, the recursive algorithm is often inefficient because it solves the same subproblem many times. Dynamic programming avoids this repetition by solving the problem bottom up and storing the solutions to subproblems.
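
As a minimal illustration (not from the lecture slides), the following Python sketch contrasts naive recursion, which re-solves the same subproblems over and over, with a bottom-up version that stores each subsolution once; Fibonacci numbers stand in for a generic recursively defined problem.

def fib_naive(n):
    # Re-solves fib(n-1) and fib(n-2) from scratch: exponentially many calls.
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

def fib_bottom_up(n):
    # Solves subproblems in increasing order and stores them: O(n) time.
    if n < 2:
        return n
    table = [0, 1]
    for i in range(2, n + 1):
        table.append(table[i - 1] + table[i - 2])
    return table[n]

print(fib_bottom_up(30))   # 832040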

Dynamic vs Greedy Compared to a greedy algorithm, dynamic programming does not commit to a predetermined locally best choice of a subsolution; instead it computes a set of alternatives and picks the best one.

Dynamic Programming Dynamic programming has the following steps:
- Characterize the structure of the problem
- Recursively define the optimum
- Compute the optimum bottom up, storing values of subsolutions
- Construct the optimum from the stored data

Assembly line scheduling A factory has two assembly lines with the same stations S1,j and S2,j, j = 1..n. Each station has its own assembly time Ai,j; it takes ei time to enter assembly line i and xi time to leave assembly line i. For rush jobs the product can switch lines to go through faster. Switching away from station Si,j costs ti,j time.

[Figure: two assembly lines with stations S1,1, S1,2, S1,3 and S2,1, S2,2, S2,3 between start and finish, annotated with assembly, entry, exit, and transfer times.] Fastest way to get through?

[Same figure.] Fastest way to get through: S1,1, S2,2, and S1,3.

Structure of the Optimal Solution The fastest way to get through S1,j is: for j = 1 there is only one way; for j > 1 there are two ways: from S1,j-1 with no extra cost, or from S2,j-1 with the switch cost. A similar argument holds for S2,j.

Optimal Substructure The fastest way to get through S1,j is built from the fastest ways to get through S1,j-1 and S2,j-1; otherwise a faster way could be constructed (contradiction). This property is called optimal substructure, and it indicates that a recursive definition of the optimum, based on optimal solutions of subproblems, can be formulated.

Recursive definition The fastest way through the assembly lines:
f* = min(f1(n) + x1, f2(n) + x2)
f1(j) = e1 + A1,1 if j = 1
      = min(f1(j-1) + A1,j, f2(j-1) + t2,j-1 + A1,j) if j > 1
f2(j) = e2 + A2,1 if j = 1
      = min(f2(j-1) + A2,j, f1(j-1) + t1,j-1 + A2,j) if j > 1
Draw a call tree: how many times is fi(j) called?

Recursive definition The fastest way through the assembly lines:
f* = min(f1(n) + x1, f2(n) + x2)
f1(j) = e1 + A1,1 if j = 1
      = min(f1(j-1) + A1,j, f2(j-1) + t2,j-1 + A1,j) if j > 1
f2(j) = e2 + A2,1 if j = 1
      = min(f2(j-1) + A2,j, f1(j-1) + t1,j-1 + A2,j) if j > 1
Draw a call tree: how many times is fi(j) called? 2^(n-j) times, so the plain recursion has exponential complexity.

Bottom Up Approach In the recursive definition we define f1(n) top down in terms of f1(n-1) and f2(n-1). When we instead compute fi(j) bottom up, in increasing j order, we can reuse the previously computed fi(j-1) values in the computation of fi(j).

[A sequence of figure slides works the example bottom up: the times at the end of S1,1 and S2,1; the best time and source line at the end of Si,2; the best time and source line at the end of Si,3; and finally the best finish time and its source.]

FastLine(A, t, e, x, n) {
  // base case: time through the first station of each line
  f1[1] = e[1] + A[1,1]; f2[1] = e[2] + A[2,1]
  // fill the tables bottom up; minSource1/2 set f1[j]/f2[j] and return the source line
  for j = 2 to n {
    l1[j] = minSource1(j)
    l2[j] = minSource2(j)
  }
  // pick the better exit
  if (f1[n] + x[1] <= f2[n] + x[2]) { fstar = f1[n] + x[1]; lstar = 1 }
  else { fstar = f2[n] + x[2]; lstar = 2 }
}

minSource1(j) {
  // fastest way to the end of S1,j: stay on line 1, or switch over from line 2
  if (f1[j-1] + A[1,j] <= f2[j-1] + t[2,j-1] + A[1,j]) {
    f1[j] = f1[j-1] + A[1,j]
    return 1
  } else {
    f1[j] = f2[j-1] + t[2,j-1] + A[1,j]
    return 2
  }
}

minSource2(j) {
  // fastest way to the end of S2,j: stay on line 2, or switch over from line 1
  if (f2[j-1] + A[2,j] <= f1[j-1] + t[1,j-1] + A[2,j]) {
    f2[j] = f2[j-1] + A[2,j]
    return 2
  } else {
    f2[j] = f1[j-1] + t[1,j-1] + A[2,j]
    return 1
  }
}

The arrays l1 and l2 keep track of which source station is used: li[j] is the number of the line whose station j-1 is used in the fastest way through Si,j. The stations on the fastest way are printed in reverse order:

i = lstar
print "line" i ", station" n
for j = n downto 2 {
  i = li[j]
  print "line" i ", station" j-1
}
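
A runnable Python sketch of the whole computation (not from the slides; it uses 0-based, lower-case variants of the names in the pseudocode, and the sample instance at the bottom is made up for illustration):

def fastest_way(a, t, e, x, n):
    # a[i][j]: assembly time at station j of line i (i in {0,1}, j in 0..n-1)
    # t[i][j]: time to switch away from line i after station j (j in 0..n-2)
    # e[i], x[i]: entry and exit times of line i
    f = [[0] * n for _ in range(2)]
    line = [[0] * n for _ in range(2)]   # line[i][j]: line used at station j-1
    f[0][0] = e[0] + a[0][0]
    f[1][0] = e[1] + a[1][0]
    for j in range(1, n):
        stay0 = f[0][j - 1] + a[0][j]
        switch0 = f[1][j - 1] + t[1][j - 1] + a[0][j]
        f[0][j], line[0][j] = min((stay0, 0), (switch0, 1))
        stay1 = f[1][j - 1] + a[1][j]
        switch1 = f[0][j - 1] + t[0][j - 1] + a[1][j]
        f[1][j], line[1][j] = min((stay1, 1), (switch1, 0))
    # choose the better exit
    fstar, lstar = min((f[0][n - 1] + x[0], 0), (f[1][n - 1] + x[1], 1))
    # trace back the stations used, from last to first
    path = []
    i = lstar
    for j in range(n - 1, -1, -1):
        path.append((i + 1, j + 1))      # 1-based (line, station) for printing
        if j > 0:
            i = line[i][j]
    path.reverse()
    return fstar, path

# Example usage with made-up numbers:
a = [[7, 9, 3], [8, 5, 6]]
t = [[2, 3], [2, 1]]
e = [2, 4]
x = [3, 2]
print(fastest_way(a, t, e, x, 3))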

Optimal substructure Dynamic programming works when a problem has optimal substructure. Not all problems have optimal substructure.

Shortest Paths Find a shortest path, in terms of number of edges, from u to v (u ≠ v) in an unweighted graph. In lecture 7 we saw that breadth-first search finds such shortest paths, and that these paths have no cycles. If w is an intermediate vertex on a shortest path from u to v (w can be u or v), then that path consists of a shortest path from u to w followed by a shortest path from w to v. Why?

Shortest Paths Find a shortest path, in terms of number of edges, from u to v (u ≠ v) in an unweighted graph. In lecture 7 we saw that breadth-first search finds such shortest paths, and that these paths have no cycles. If w is an intermediate vertex on a shortest path from u to v (w can be u or v), then that path consists of a shortest path from u to w followed by a shortest path from w to v. If not, a shorter shortest path could be constructed (contradiction). So?

Shortest Paths... has optimal substructure.
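
As an aside (not from the slides), a minimal BFS sketch in Python that computes these edge-count distances from a source vertex; the adjacency-list example graph is made up:

from collections import deque

def bfs_distances(adj, source):
    # Breadth-first search: the first time a vertex is reached, its distance
    # (in number of edges) from the source is final.
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

adj = {"u": ["a", "b"], "a": ["u", "v"], "b": ["u", "v"], "v": ["a", "b"]}
print(bfs_distances(adj, "u"))   # {'u': 0, 'a': 1, 'b': 1, 'v': 2}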

Longest simple paths Simple means no cycles; again in an unweighted graph. Suppose again that w is on a path from u to v. Then the longest simple path from u to v need not consist of the longest simple paths from u to w and from w to v.

[Figure: a graph on the vertices Q, R, T with edges Q-R, R-T, and Q-T.] Longest simple path from Q to T?

[Same figure.] Longest simple path from Q to T: Q → R → T. Longest simple path from R to T?

[Same figure.] Longest simple path from Q to T: Q → R → T. Longest simple path from R to T: R → Q → T. The longest simple path from Q to T is Q → R → T, but it does not consist of the longest simple paths from Q to R and from R to T. The longest simple path problem has no optimal substructure!
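
A small brute-force check of this counterexample in Python (not from the slides; the triangle on Q, R, T is an assumption based on the figure):

from itertools import permutations

# Undirected edges of the assumed example graph.
edges = {("Q", "R"), ("R", "T"), ("Q", "T")}

def connected(a, b):
    return (a, b) in edges or (b, a) in edges

def longest_simple_path(s, t, vertices=("Q", "R", "T")):
    # Try every ordering of intermediate vertices and keep the longest valid path.
    best = None
    for k in range(len(vertices)):
        for mid in permutations(set(vertices) - {s, t}, k):
            path = (s, *mid, t)
            if all(connected(a, b) for a, b in zip(path, path[1:])):
                if best is None or len(path) > len(best):
                    best = path
    return best

print(longest_simple_path("Q", "T"))   # ('Q', 'R', 'T')
print(longest_simple_path("Q", "R"))   # ('Q', 'T', 'R')
print(longest_simple_path("R", "T"))   # ('R', 'Q', 'T')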

Subsequence A sequence X = <x1, x2, ..., xm> is a subsequence of a sequence Y = <y1, y2, ..., yn> if there is a strictly increasing sequence of indices I = <i1, i2, ..., im> such that for all j = 1, 2, ..., m we have X[j] = Y[ij]. For example, X = <B, D, B> is a subsequence of Y = <A, B, C, B, D, A, B> with I = <2, 5, 7>. How many subsequences does Y have?

Subsequence A sequence X = <x1, x2, ..., xm> is a subsequence of a sequence Y = <y1, y2, ..., yn> if there is a strictly increasing sequence of indices I = <i1, i2, ..., im> such that for all j = 1, 2, ..., m we have X[j] = Y[ij]. For example, X = <B, D, B> is a subsequence of Y = <A, B, C, B, D, A, B> with I = <2, 5, 7>. How many subsequences does Y have? 2^|Y| (each element is either included or excluded).
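
A small Python sketch (not from the slides): a linear-time subsequence test, plus a brute-force count over index sets confirming the 2^|Y| total for a short Y.

from itertools import combinations

def is_subsequence(x, y):
    # Scan y left to right, consuming elements; x must appear in order.
    it = iter(y)
    return all(c in it for c in x)

y = "ABCBD"
# Counting subsequences as choices of index sets: every subset of positions works.
count = sum(1 for r in range(len(y) + 1) for _ in combinations(range(len(y)), r))
print(is_subsequence("ABD", y), count == 2 ** len(y))   # True True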

Longest Common Subsequence (LCS) A sequence Z is a common subsequence of X and Y if it is a subsequence of X and of Y. The longest common subsequence problem (LCS) is to find, well yes, the longest common subsequence of two sequences X and Y. Trial and error takes exponential time because there are exponentially many subsequences.

LCS has optimal substructure Xi is the length-i prefix of X. Cormen et al. prove that for two sequences X = <x1, ..., xm> and Y = <y1, ..., yn> and any LCS Z = <z1, ..., zk> of X and Y:
1. if xm = yn then zk = xm and Zk-1 is an LCS of Xm-1 and Yn-1
2. if xm ≠ yn then zk ≠ xm ⇒ Z is an LCS of Xm-1 and Y
3. if xm ≠ yn then zk ≠ yn ⇒ Z is an LCS of Yn-1 and X

Recursive algorithm for LCS
1. if xm = yn then zk = xm and Zk-1 is an LCS of Xm-1 and Yn-1
2. if xm ≠ yn then zk ≠ xm ⇒ Z is an LCS of Xm-1 and Y
3. if xm ≠ yn then zk ≠ yn ⇒ Z is an LCS of Yn-1 and X
This gives rise to a recursive formulation. LCS(Xm, Yn): if xm = yn, find the LCS of Xm-1 and Yn-1 and append xm to it; else find the LCSs of Xm-1 and Y and of Yn-1 and X and pick the longer one. Complexity? (Draw the call tree.)
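
The plain recursion branches twice on every mismatch, so it is exponential. A top-down memoized sketch in Python (not the lecture's code) solves each prefix pair only once, giving the same O(mn) behaviour as the bottom-up table developed next:

from functools import lru_cache

def lcs_length(x, y):
    # Top-down version of the recursive formulation; memoization ensures each
    # prefix pair (i, j) is solved only once.
    @lru_cache(maxsize=None)
    def rec(i, j):
        if i == 0 or j == 0:
            return 0
        if x[i - 1] == y[j - 1]:
            return rec(i - 1, j - 1) + 1
        return max(rec(i - 1, j), rec(i, j - 1))
    return rec(len(x), len(y))

print(lcs_length("bab", "ababa"))   # 3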

Length of LCS Let C[i,j] be the length of an LCS of Xi and Yj:
C[i,j] = 0 if i = 0 or j = 0
       = C[i-1,j-1] + 1 if i,j > 0 and xi = yj
       = max(C[i,j-1], C[i-1,j]) if i,j > 0 and xi ≠ yj
This can be computed in dynamic-programming fashion. We build C bottom up, and we also build a table B. B[i,j] points at the entry chosen in the recurrence for C: ↖ for C[i-1,j-1], ← for C[i,j-1], ↑ for C[i-1,j]. In which order?

Length of LCS Let C[i,j] be the length of an LCS of Xi and Yj:
C[i,j] = 0 if i = 0 or j = 0
       = C[i-1,j-1] + 1 if i,j > 0 and xi = yj
       = max(C[i,j-1], C[i-1,j]) if i,j > 0 and xi ≠ yj
This can be computed in dynamic-programming fashion. We build C bottom up, and we also build a table B. B[i,j] points at the entry chosen in the recurrence for C: ↖ for C[i-1,j-1], ← for C[i,j-1], ↑ for C[i-1,j]. In which order? Definition before use ⇒ row-major order is OK.

LCS algorithm
for i = 1 to m
  C[i,0] = 0
for j = 0 to n
  C[0,j] = 0
for i = 1 to m
  for j = 1 to n
    if (X[i] == Y[j]) { C[i,j] = C[i-1,j-1] + 1; B[i,j] = ↖ }
    else if (C[i-1,j] >= C[i,j-1]) { C[i,j] = C[i-1,j]; B[i,j] = ↑ }
    else { C[i,j] = C[i,j-1]; B[i,j] = ← }

Try it on X = bab, Y = ababa, filling in the C and B tables.

Finding LCS By backtracking through B, starting at B[m,n], we can print the LCS. A ↖ at B[i,j] indicates that X[i] = Y[j] is the last element of the part of the LCS still to be printed.

Print LCS
Print-LCS(i, j)
  if (i == 0 or j == 0) return
  if (B[i,j] == ↖) { Print-LCS(i-1, j-1); print(X[i]) }
  else if (B[i,j] == ↑) Print-LCS(i-1, j)
  else Print-LCS(i, j-1)
The initial call is Print-LCS(m, n).
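
For reference, a runnable Python sketch of the same construction (not the lecture's code): it builds C and B bottom up as in the LCS algorithm above and then backtracks through B, returning the LCS as a string rather than printing element by element.

def lcs_table(x, y):
    # Build C (lengths) and B (back-pointers) bottom up, row by row.
    m, n = len(x), len(y)
    c = [[0] * (n + 1) for _ in range(m + 1)]
    b = [[""] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1
                b[i][j] = "diag"      # corresponds to the ↖ arrow
            elif c[i - 1][j] >= c[i][j - 1]:
                c[i][j] = c[i - 1][j]
                b[i][j] = "up"        # ↑
            else:
                c[i][j] = c[i][j - 1]
                b[i][j] = "left"      # ←
    return c, b

def lcs(x, y):
    # Backtrack through B from (m, n), collecting matched characters.
    c, b = lcs_table(x, y)
    out, i, j = [], len(x), len(y)
    while i > 0 and j > 0:
        if b[i][j] == "diag":
            out.append(x[i - 1])
            i, j = i - 1, j - 1
        elif b[i][j] == "up":
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))

print(lcs("bab", "ababa"))   # bab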

LCS The B table is not necessary; C alone can be used in Print-LCS. The creation of C and B in LCS takes O(mn) time and space; Print-LCS takes O(m+n) time.