Dynamic Programming. Adapted from Algorithm Design by Kleinberg and Tardos.

2 Weighted Activity Selection
Weighted activity selection problem (generalization of CLR 17.1).
- Job requests 1, 2, ..., N.
- Job j starts at s_j, finishes at f_j, and has weight w_j.
- Two jobs compatible if they don't overlap.
- Goal: find maximum weight subset of mutually compatible jobs.
[Figure: jobs A through H drawn as intervals on a time axis starting at 0.]

3 Activity Selection: Greedy Algorithm
Recall: the greedy algorithm works if all weights are 1.

Greedy Activity Selection Algorithm (S = jobs selected):
Sort jobs by increasing finish times so that f_1 ≤ f_2 ≤ ... ≤ f_N.
S = ∅
FOR j = 1 to N
    IF (job j compatible with S)
        S = S ∪ {j}
RETURN S
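A minimal Python sketch of this greedy rule, assuming jobs are given as (start, finish) pairs; the function and variable names are illustrative, not from the slides.

def greedy_activity_selection(jobs):
    """Unweighted interval scheduling: jobs is a list of (start, finish) pairs.
    Returns the indices of a maximum-size set of mutually compatible jobs."""
    order = sorted(range(len(jobs)), key=lambda j: jobs[j][1])   # sort by finish time
    selected, last_finish = [], float("-inf")
    for j in order:
        s, f = jobs[j]
        if s >= last_finish:           # compatible with everything selected so far
            selected.append(j)
            last_finish = f
    return selected

# Example: greedy_activity_selection([(0, 3), (2, 5), (4, 7)]) returns [0, 2].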

4 Weighted Activity Selection
Notation.
- Label jobs by finishing time: f_1 ≤ f_2 ≤ ... ≤ f_N.
- Define q_j = largest index i < j such that job i is compatible with j.
  – Example: q_7 = 3, q_2 = 0.
[Figure: timeline of the jobs, illustrating q_j.]

5 Weighted Activity Selection: Structure
Let OPT(j) = value of optimal solution to the problem consisting of job requests {1, 2, ..., j}.
- Case 1: OPT selects job j.
  – can't use incompatible jobs {q_j + 1, q_j + 2, ..., j-1}
  – must include optimal solution to problem consisting of remaining compatible jobs {1, 2, ..., q_j}
- Case 2: OPT does not select job j.
  – must include optimal solution to problem consisting of remaining jobs {1, 2, ..., j-1}
Hence OPT(j) = max(w_j + OPT(q_j), OPT(j-1)), with OPT(0) = 0.

6 Weighted Activity Selection: Brute Force

Recursive Activity Selection:
INPUT: N, s_1,...,s_N, f_1,...,f_N, w_1,...,w_N
Sort jobs by increasing finish times so that f_1 ≤ f_2 ≤ ... ≤ f_N.
Compute q_1, q_2, ..., q_N

r-compute(j) {
    IF (j = 0)
        RETURN 0
    ELSE
        RETURN max(w_j + r-compute(q_j), r-compute(j-1))
}

7 Dynamic Programming Subproblems
Spectacularly redundant subproblems ⇒ exponential algorithms.
[Figure: recursion tree of r-compute(8), in which subproblems such as {1, 2, 3} and {1, 2, 3, 4, 5} are recomputed many times.]

8 Divide-and-Conquer Subproblems
Independent subproblems ⇒ efficient algorithms.
[Figure: balanced recursion tree that splits {1, ..., 8} into disjoint halves {1, ..., 4} and {5, ..., 8}, and so on.]

9 Weighted Activity Selection: Memoization

Memoized Activity Selection:
INPUT: N, s_1,...,s_N, f_1,...,f_N, w_1,...,w_N
Sort jobs by increasing finish times so that f_1 ≤ f_2 ≤ ... ≤ f_N.
Compute q_1, q_2, ..., q_N
Global array OPT[0..N]
FOR j = 0 to N
    OPT[j] = "empty"

m-compute(j) {
    IF (j = 0)
        OPT[0] = 0
    ELSE IF (OPT[j] = "empty")
        OPT[j] = max(w_j + m-compute(q_j), m-compute(j-1))
    RETURN OPT[j]
}
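A hedged Python rendering of the memoized recursion above. It assumes q has already been computed as on slide 4; the 1-based arrays (index 0 unused) and the function names are mine.

from functools import lru_cache

def max_weight_schedule(weights, q):
    """weights[1..N] and q[1..N] as on the slides (index 0 unused).
    Returns the optimal total weight, memoizing OPT(j) automatically."""
    @lru_cache(maxsize=None)
    def opt(j):
        if j == 0:
            return 0
        return max(weights[j] + opt(q[j]), opt(j - 1))
    return opt(len(weights) - 1)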

10 Weighted Activity Selection: Running Time
Claim: memoized version of algorithm takes O(N log N) time.
- Ordering by finish time: O(N log N).
- Computing q_1, ..., q_N: O(N log N) via binary search.
- m-compute(j): each invocation takes O(1) time and either
  – (i) returns an existing value of OPT[], or
  – (ii) fills in one new entry of OPT[] and makes two recursive calls.
- Progress measure Φ = # nonempty entries of OPT[].
  – Initially Φ = 0; throughout, Φ ≤ N.
  – Case (ii) increases Φ by 1 ⇒ at most 2N recursive calls.
- Overall running time of m-compute(N) is O(N).

11 Weighted Activity Selection: Finding a Solution
m-compute(N) determines the value of the optimal solution.
- Modify to obtain the optimal solution itself.
- # of recursive calls ≤ N ⇒ O(N).

Finding an Optimal Set of Activities:
ARRAY: OPT[0..N]
Run m-compute(N)

find-sol(j) {
    IF (j = 0)
        output nothing
    ELSE IF (w_j + OPT[q_j] > OPT[j-1])
        print j
        find-sol(q_j)
    ELSE
        find-sol(j-1)
}

12 Weighted Activity Selection: Bottom-Up
Unwind the recursion in the memoized algorithm.

Bottom-Up Activity Selection:
INPUT: N, s_1,...,s_N, f_1,...,f_N, w_1,...,w_N
Sort jobs by increasing finish times so that f_1 ≤ f_2 ≤ ... ≤ f_N.
Compute q_1, q_2, ..., q_N
ARRAY: OPT[0..N]
OPT[0] = 0
FOR j = 1 to N
    OPT[j] = max(w_j + OPT[q_j], OPT[j-1])
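Putting the pieces together, here is a small end-to-end Python sketch: the bottom-up table above, the q_j computed by binary search as on slide 10, and the find-sol traceback of slide 11. All names are illustrative.

import bisect

def weighted_interval_scheduling(jobs):
    """jobs: list of (start, finish, weight). Returns (optimal value, chosen job indices)."""
    order = sorted(range(len(jobs)), key=lambda j: jobs[j][1])     # sort by finish time
    start  = [jobs[j][0] for j in order]
    finish = [jobs[j][1] for j in order]
    weight = [jobs[j][2] for j in order]
    n = len(jobs)
    # q[j] = number of jobs (in sorted order) that finish no later than job j starts
    q = [bisect.bisect_right(finish, start[j]) for j in range(n)]

    OPT = [0] * (n + 1)
    for j in range(1, n + 1):
        OPT[j] = max(weight[j - 1] + OPT[q[j - 1]], OPT[j - 1])

    chosen, j = [], n                         # traceback, as in find-sol
    while j > 0:
        if weight[j - 1] + OPT[q[j - 1]] > OPT[j - 1]:
            chosen.append(order[j - 1])       # report the original index
            j = q[j - 1]
        else:
            j -= 1
    return OPT[n], sorted(chosen)

# Example: weighted_interval_scheduling([(0, 3, 5), (2, 5, 6), (4, 7, 5)]) returns (10, [0, 2]).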

13 Dynamic Programming Overview
Dynamic programming.
- Similar to divide-and-conquer.
  – solves a problem by combining solutions to subproblems
- Different from divide-and-conquer.
  – subproblems are not independent
  – solutions to repeated subproblems are saved in a table
  – the solution usually has a natural left-to-right ordering
Recipe.
- Characterize the structure of the problem.
  – optimal substructure property
- Recursively define the value of an optimal solution.
- Compute the value of an optimal solution.
- Construct an optimal solution from the computed information.
Top-down vs. bottom-up: different people have different intuitions.

14 Least Squares
Least squares.
- Foundational problem in statistics and numerical analysis.
- Given N points in the plane (x_1, y_1), (x_2, y_2), ..., (x_N, y_N), find a line y = ax + b that minimizes the sum of the squared error
  SSE = Σ_{i=1}^{N} (y_i − a·x_i − b)^2.
- Calculus ⇒ the minimum error is achieved when
  a = (N·Σ_i x_i y_i − (Σ_i x_i)(Σ_i y_i)) / (N·Σ_i x_i^2 − (Σ_i x_i)^2)  and  b = (Σ_i y_i − a·Σ_i x_i) / N.
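As a quick check of these formulas, here is a hedged Python helper that computes a and b directly; it assumes the x_i are not all equal (so the denominator is nonzero), and the name fit_line is mine.

def fit_line(points):
    """Least-squares line y = a*x + b through a list of (x, y) points."""
    n = len(points)
    sx  = sum(x for x, _ in points)
    sy  = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # assumes the x_i are not all equal
    b = (sy - a * sx) / n
    return a, b

# Example: fit_line([(1, 2), (2, 4), (3, 6)]) returns (2.0, 0.0).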

15 Segmented Least Squares
Segmented least squares.
- Points lie roughly on a sequence of several lines (in the example figure, 3 lines fit better than one).
- Given N points in the plane p_1, p_2, ..., p_N, find a sequence of lines that minimizes a tradeoff between:
  – the sum E of the sums of squared errors in each segment, and
  – the number of lines L.
- Tradeoff function: E + c·L, for some constant c > 0.

16 Segmented Least Squares: Structure
Notation.
- OPT(j) = minimum cost for points p_1, p_2, ..., p_j.
- e(i, j) = minimum sum of squared errors for a single line through points p_i, p_{i+1}, ..., p_j.
Optimal solution:
- The last segment uses points p_i, p_{i+1}, ..., p_j for some i.
- Cost = e(i, j) + c + OPT(i−1).
New dynamic programming technique.
- Weighted activity selection: binary choice.
- Segmented least squares: multi-way choice.

17 Segmented Least Squares: Algorithm

Bottom-Up Segmented Least Squares:
INPUT: N, p_1,...,p_N, c
ARRAY: OPT[0..N]
OPT[0] = 0
FOR j = 1 to N
    FOR i = 1 to j
        compute the least-squares error e[i, j] for the segment p_i, ..., p_j
    OPT[j] = min over 1 ≤ i ≤ j of (e[i, j] + c + OPT[i−1])
RETURN OPT[N]

Running time:
- Bottleneck = computing e(i, j) for O(N^2) pairs, O(N) per pair using the formula from the previous slide.
- O(N^3) overall.
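A direct Python transcription of this O(N^3) version, recomputing each segment error from scratch with the fit_line helper sketched earlier; it assumes the points are sorted by x-coordinate with distinct x values, and the names are mine.

def segment_error(points):
    """Sum of squared errors of the best single line through the given points."""
    if len(points) < 2:
        return 0.0
    a, b = fit_line(points)
    return sum((y - a * x - b) ** 2 for x, y in points)

def segmented_least_squares(points, c):
    """points sorted by x-coordinate; c = cost per segment. Returns OPT(N)."""
    n = len(points)
    OPT = [0.0] * (n + 1)
    for j in range(1, n + 1):
        OPT[j] = min(segment_error(points[i - 1:j]) + c + OPT[i - 1]
                     for i in range(1, j + 1))
    return OPT[n]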

18 Segmented Least Squares: Improved Algorithm
A quadratic algorithm.
- Bottleneck = computing e(i, j).
- O(N^2) preprocessing + O(1) per e(i, j) computation.
Preprocessing: precompute cumulative sums of x_i, y_i, x_i^2, x_i·y_i, and y_i^2, so that every sum appearing in the closed-form formulas for a, b, and the segment error can be read off in O(1) time.
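One standard way to realize this preprocessing, sketched here in Python on my own initiative (the slide's exact formulas are not reproduced above): keep prefix sums and use the identity SSE = Σy^2 − a·Σxy − b·Σy, which holds at the optimal a, b. Degenerate segments (a single point, or all x equal) are treated as zero error for simplicity.

from itertools import accumulate

def segment_error_table(points):
    """Precompute prefix sums so that e(i, j) (1-based, inclusive) costs O(1) per query."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    Sx  = [0] + list(accumulate(xs))
    Sy  = [0] + list(accumulate(ys))
    Sxx = [0] + list(accumulate(x * x for x in xs))
    Sxy = [0] + list(accumulate(x * y for x, y in points))
    Syy = [0] + list(accumulate(y * y for y in ys))

    def e(i, j):
        n = j - i + 1
        sx,  sy  = Sx[j]  - Sx[i - 1],  Sy[j]  - Sy[i - 1]
        sxx, sxy = Sxx[j] - Sxx[i - 1], Sxy[j] - Sxy[i - 1]
        syy = Syy[j] - Syy[i - 1]
        denom = n * sxx - sx * sx
        if n < 2 or denom == 0:
            return 0.0                            # degenerate segment: treat as exact fit
        a = (n * sxy - sx * sy) / denom
        b = (sy - a * sx) / n
        return max(0.0, syy - a * sxy - b * sy)   # SSE at the optimal a, b (clamped for rounding)
    return e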

19 Knapsack Problem
Knapsack problem.
- Given N objects and a "knapsack."
- Item i weighs w_i > 0 Newtons and has value v_i > 0.
- Knapsack can carry weight up to W Newtons.
- Goal: fill knapsack so as to maximize total value.
[Table: five example items with their values and weights; W = 11. The optimal value is 40, achieved by items {3, 4}. The greedy strategy that takes items in decreasing order of v_i / w_i achieves only 35, with items {5, 2, 1}.]

20 Knapsack Problem: Structure
OPT(n, w) = max profit subset of items {1, ..., n} with weight limit w.
- Case 1: OPT selects item n.
  – new weight limit = w − w_n
  – OPT selects best of {1, 2, ..., n−1} using this new weight limit
- Case 2: OPT does not select item n.
  – OPT selects best of {1, 2, ..., n−1} using weight limit w
New dynamic programming technique.
- Weighted activity selection: binary choice.
- Segmented least squares: multi-way choice.
- Knapsack: adding a new variable.

21 Knapsack Problem: Bottom-Up

Bottom-Up Knapsack:
INPUT: N, W, w_1,...,w_N, v_1,...,v_N
ARRAY: OPT[0..N, 0..W]
FOR w = 0 to W
    OPT[0, w] = 0
FOR n = 1 to N
    OPT[n, 0] = 0
    FOR w = 1 to W
        IF (w_n > w)
            OPT[n, w] = OPT[n-1, w]
        ELSE
            OPT[n, w] = max(OPT[n-1, w], v_n + OPT[n-1, w − w_n])
RETURN OPT[N, W]
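A hedged Python version of the bottom-up table, plus a traceback to recover the chosen items; the 0-based indexing, the traceback, and the example item set below are mine (the example is merely illustrative, with W = 11 and an optimum of 40 as in the earlier slide's summary).

def knapsack(values, weights, W):
    """0/1 knapsack: returns (best value, list of chosen item indices)."""
    n = len(values)
    OPT = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(W + 1):
            if weights[i - 1] > w:
                OPT[i][w] = OPT[i - 1][w]
            else:
                OPT[i][w] = max(OPT[i - 1][w],
                                values[i - 1] + OPT[i - 1][w - weights[i - 1]])
    chosen, w = [], W                       # traceback: item i was taken iff it changed the entry
    for i in range(n, 0, -1):
        if OPT[i][w] != OPT[i - 1][w]:
            chosen.append(i - 1)
            w -= weights[i - 1]
    return OPT[n][W], sorted(chosen)

# Example: knapsack([1, 6, 18, 22, 28], [1, 2, 5, 6, 7], 11) returns (40, [2, 3]).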

22 Knapsack Algorithm
[Table: the (N + 1) × (W + 1) array OPT[n, w] for the running example, with one row per item prefix ∅, {1}, {1, 2}, ..., {1, 2, 3, 4, 5} and one column per weight limit 0, 1, ..., W; row 0 and column 0 are all zeros, and each remaining entry is filled in from the row above using the recurrence. The item value/weight table appears alongside.]

23 Knapsack Problem: Running Time
Knapsack algorithm runs in time O(NW).
- Not polynomial in the input size, since W is encoded using only about log W bits!
- "Pseudo-polynomial."
- The decision version of knapsack is NP-complete.
- The optimization version is NP-hard.
Knapsack approximation algorithm.
- There exists a polynomial-time algorithm that produces a feasible solution with value within 0.01% of the optimum.
- Stay tuned.

24 Sequence Alignment
How similar are two strings? Example: ocurrance vs. occurrence.
[Figure: three alignments of ocurrance and occurrence, with 5 mismatches and 1 gap, 1 mismatch and 1 gap, and 0 mismatches and 3 gaps, respectively.]

25 Industrial Application

26 Sequence Alignment: Applications
Applications.
- Spell checkers / web dictionaries.
  – ocurrance vs. occurrence
- Computational biology.
  – ctgacctacct vs. cctgactacat
Edit distance [Needleman-Wunsch 1970].
- Gap penalty δ.
- Mismatch penalty α_pq for aligning symbols p and q.
- Cost = sum of gap and mismatch penalties.
[Figure: two example alignments of the DNA strings above, with their costs written as sums of gap penalties δ and mismatch penalties such as α_CA.]

27 Sequence Alignment
Problem.
- Input: two strings X = x_1 x_2 ... x_M and Y = y_1 y_2 ... y_N.
- Notation: {1, 2, ..., M} and {1, 2, ..., N} denote positions in X and Y.
- Matching: a set of ordered pairs (i, j) such that each position occurs in at most one pair.
- Alignment: a matching with no crossing pairs.
  – if (i, j) and (i', j') are both in the alignment and i < i', then j < j'.
- Example: CTACCG vs. TACATG.
  – alignment = { (2,1), (3,2), (4,3), (5,4), (6,6) }
- Goal: find an alignment of minimum cost.

28 Sequence Alignment: Problem Structure
OPT(i, j) = min cost of aligning the strings x_1 x_2 ... x_i and y_1 y_2 ... y_j.
- Case 1: OPT matches (i, j).
  – pay the mismatch cost for (x_i, y_j) plus the min cost of aligning x_1 x_2 ... x_{i−1} and y_1 y_2 ... y_{j−1}
- Case 2a: OPT leaves x_i unmatched.
  – pay the gap cost for i plus the min cost of aligning x_1 x_2 ... x_{i−1} and y_1 y_2 ... y_j
- Case 2b: OPT leaves y_j unmatched.
  – pay the gap cost for j plus the min cost of aligning x_1 x_2 ... x_i and y_1 y_2 ... y_{j−1}

29 Sequence Alignment: Algorithm
O(MN) time and space.

Bottom-Up Sequence Alignment:
INPUT: M, N, x_1 x_2 ... x_M, y_1 y_2 ... y_N, δ, α
ARRAY: OPT[0..M, 0..N]
FOR i = 0 to M
    OPT[i, 0] = i·δ
FOR j = 0 to N
    OPT[0, j] = j·δ
FOR i = 1 to M
    FOR j = 1 to N
        OPT[i, j] = min(α[x_i, y_j] + OPT[i−1, j−1],
                        δ + OPT[i−1, j],
                        δ + OPT[i, j−1])
RETURN OPT[M, N]
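A runnable Python sketch of this table. The penalties are illustrative edit-distance choices (gap δ = 1, mismatch 1, match 0); the slides leave δ and α general.

def alignment_cost(x, y, delta=1, alpha=lambda p, q: 0 if p == q else 1):
    """Minimum alignment cost of strings x and y (Needleman-Wunsch style)."""
    M, N = len(x), len(y)
    OPT = [[0] * (N + 1) for _ in range(M + 1)]
    for i in range(M + 1):
        OPT[i][0] = i * delta
    for j in range(N + 1):
        OPT[0][j] = j * delta
    for i in range(1, M + 1):
        for j in range(1, N + 1):
            OPT[i][j] = min(alpha(x[i - 1], y[j - 1]) + OPT[i - 1][j - 1],   # match/mismatch
                            delta + OPT[i - 1][j],                          # x_i unmatched
                            delta + OPT[i][j - 1])                          # y_j unmatched
    return OPT[M][N]

# Example: alignment_cost("ocurrance", "occurrence") returns 2 (1 mismatch + 1 gap).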

30 Sequence Alignment: Linear Space
Straightforward dynamic programming takes Θ(MN) time and space.
- English words or sentences ⇒ may not be a problem.
- Computational biology ⇒ huge problem.
  – M = N = 100,000
  – 10 billion ops OK, but a 10-gigabyte array?
Optimal value in O(M + N) space and O(MN) time.
- Only need to remember row OPT(i−1, ·) to compute row OPT(i, ·).
- Not clear how to recover the optimal alignment itself.
Optimal alignment in O(M + N) space and O(MN) time.
- Clever combination of divide-and-conquer and dynamic programming.

31 Sequence Alignment: Linear Space
Consider the following directed graph (conceptually); note that it takes Θ(MN) space to write the graph down.
Let f(i, j) be the length of the shortest path from (0, 0) to (i, j). Then f(i, j) = OPT(i, j).
[Figure: the (M + 1) × (N + 1) grid of nodes (i, j), with diagonal edges of cost α[x_i, y_j] and horizontal/vertical edges of cost δ.]

32 Sequence Alignment: Linear Space
Let f(i, j) be the length of the shortest path from (0, 0) to (i, j). Claim: f(i, j) = OPT(i, j).
- Base case: f(0, 0) = OPT(0, 0) = 0.
- Inductive step: assume f(i', j') = OPT(i', j') for all (i', j') with i' + j' < i + j.
- The last edge on a shortest path to (i, j) comes from (i−1, j−1), (i−1, j), or (i, j−1), so f(i, j) satisfies the same recurrence as OPT(i, j).

33 Sequence Alignment: Linear Space
Let g(i, j) be the length of the shortest path from (i, j) to (M, N).
- g(i, j) can be computed for all (i, j) in O(MN) time by reversing the arc orientations and swapping the roles of (0, 0) and (M, N).

34 Sequence Alignment: Linear Space
Observation 1: the cost of the shortest path from (0, 0) to (M, N) that uses (i, j) is f(i, j) + g(i, j).
[Figure: the alignment graph with a path from (0, 0) to (M, N) passing through node (i, j).]

35 Sequence Alignment: Linear Space
Observation 1: the cost of the shortest path that uses (i, j) is f(i, j) + g(i, j).
Observation 2: let q be an index that minimizes f(q, N/2) + g(q, N/2). Then the shortest path from (0, 0) to (M, N) uses (q, N/2).
[Figure: the alignment graph with the middle column j = N/2 highlighted.]

36 Sequence Alignment: Linear Space
Divide: find the index q that minimizes f(q, N/2) + g(q, N/2) using DP.
Conquer: recursively compute an optimal alignment in each "half."
[Figure: the alignment graph split at column N/2, with the crossing point (q, N/2) marked.]
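To make the divide-and-conquer step concrete, here is a hedged Python sketch in the spirit of Hirschberg's algorithm: it splits Y at N/2, computes f(·, N/2) and g(·, N/2) in linear space, and recurses on the two halves. The function names, base-case handling, and default penalties are mine, not from the slides.

def align_costs(x, y, delta, alpha):
    """Forward pass in linear space: returns F with F[q] = min cost of aligning x[:q] with all of y."""
    prev = [q * delta for q in range(len(x) + 1)]                  # y empty
    for j in range(1, len(y) + 1):
        cur = [j * delta] + [0] * len(x)
        for q in range(1, len(x) + 1):
            cur[q] = min(alpha(x[q - 1], y[j - 1]) + prev[q - 1],  # match x_q with y_j
                         delta + cur[q - 1],                       # x_q unmatched
                         delta + prev[q])                          # y_j unmatched
        prev = cur
    return prev

def hirschberg(x, y, delta=1, alpha=lambda p, q: 0 if p == q else 1, i0=0, j0=0):
    """Matched pairs (i, j), 1-based, of one optimal alignment of x and y, in linear space."""
    if not x or not y:
        return []
    if len(y) == 1:
        # Either y_1 is matched to its cheapest partner in x, or it is left unmatched.
        best = min(range(len(x)), key=lambda i: alpha(x[i], y[0]))
        if alpha(x[best], y[0]) + (len(x) - 1) * delta <= (len(x) + 1) * delta:
            return [(i0 + best + 1, j0 + 1)]
        return []
    mid = len(y) // 2
    F = align_costs(x, y[:mid], delta, alpha)                        # f(q, N/2)
    B = align_costs(x[::-1], y[mid:][::-1], delta, alpha)            # g(q, N/2), via reversed strings
    q = min(range(len(x) + 1), key=lambda k: F[k] + B[len(x) - k])   # best crossing row
    return (hirschberg(x[:q], y[:mid], delta, alpha, i0, j0) +
            hirschberg(x[q:], y[mid:], delta, alpha, i0 + q, j0 + mid))

# Example: hirschberg("ocurrance", "occurrence") returns the matched positions of one optimal
# alignment; its total cost agrees with the quadratic-space table on slide 29.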

37 Sequence Alignment: Linear Space
T(m, n) = max running time of the algorithm on strings of length m and n.
Theorem. T(m, n) = O(mn).
- O(mn) work to compute f(·, n/2) and g(·, n/2).
- O(m + n) work to find the best index q.
- T(q, n/2) + T(m − q, n/2) work to run recursively.
- Choose a constant c so that T(m, n) ≤ cmn + T(q, n/2) + T(m − q, n/2) and the base cases hold.
- Base cases: m = 2 or n = 2.
- Inductive hypothesis: T(m, n) ≤ 2cmn.
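Filling in the induction sketched above (my own write-up of the standard argument, assuming the recurrence just stated):

T(m, n) ≤ cmn + T(q, n/2) + T(m − q, n/2)
        ≤ cmn + 2c·q·(n/2) + 2c·(m − q)·(n/2)    (inductive hypothesis)
        = cmn + cqn + c(m − q)n
        = 2cmn.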