Dynamic Programming Part One HKOI Training Team 2004

2 Recurrences
Function defined in terms of itself
– M(X) = X * M(X-1)
– F(n+2) = F(n+1) + F(n)
– K(n+1) = max{K(n), K(n-1)}
One or more base cases are needed
– M(0) = 1
– F(0) = 1, F(1) = 1
– K(10) = 100, K(11) = 177

3 Evaluating recurrences
Combination:
– C(n, r) = C(n-1, r) + C(n-1, r-1)
– C(n, 0) = C(n, n) = 1
Algorithm:
  function C(n, r)
    if (r = 0) or (r = n) then return 1
    else return C(n-1, r) + C(n-1, r-1)
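To make the slide's pseudocode concrete, here is a minimal plain-recursion sketch in C++ (the function name and the test value in main are illustrative only):

  #include <iostream>

  // Plain recursion, directly following the recurrence above.
  // Exponential time: the same C(n, r) values are recomputed many
  // times, as the call trees on the next slides show.
  long long C(int n, int r) {
      if (r == 0 || r == n) return 1;
      return C(n - 1, r) + C(n - 1, r - 1);
  }

  int main() {
      std::cout << C(5, 2) << "\n";  // prints 10
  }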

4 Slow...
[Recursion tree for C(5, 2): every recursive call is expanded, down to the base cases C(1, 0), C(1, 1), C(2, 0), C(2, 2)]

5 A table...
[The distinct subproblems of C(5, 2) arranged as a table: each value C(n, r) appears only once]

6 Redundant calculations
[The same recursion tree for C(5, 2), with the repeated subtrees such as C(3, 1) and C(2, 1) highlighted]

7 Fast...
[The recursion tree for C(5, 2) again, but each repeated subtree is replaced by a single table lookup]

8 Dynamic programming
Abbreviation: DP / DyP
Not particularly related to programming
– "Programming" here is used as in "Linear Programming"!
An approach (paradigm) for solving certain kinds of problems
– Not a particular algorithm
Usually applied to recurrences

9 Principles of DP
Evaluating recurrences naively may sometimes be slow (exponential time)
Accelerate by avoiding redundant calculations!
– Memoize (remember) previously calculated values
Usually applied to problems with:
– a recurrence relation
– overlapping subproblems
– solutions exhibiting optimal substructure

10 Why DP?
Reduce runtime
– In the previous computation of C(n, k), how many times is the function invoked?
  Slow algorithm: 2·C(n, k) − 1 calls [exponential in n]
  Fast algorithm: O(nk) calls [polynomial in n]
Tradeoff: memory
– In the C(n, k) problem, the memory requirement is O(n) using pure recursion (the call stack); O(nk) is required for memoization
– In fact, the O(nk) bound can be improved
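The memoized ("fast") variant can be sketched in C++ as follows; the table size and the -1 sentinel are arbitrary choices for illustration. Each C(n, r) is now computed at most once, which is what brings the number of calls down to O(nk):

  #include <cstring>
  #include <iostream>

  long long memo[51][51];  // memo[n][r] = C(n, r), or -1 if not yet computed

  long long C(int n, int r) {
      if (r == 0 || r == n) return 1;
      if (memo[n][r] != -1) return memo[n][r];          // reuse a stored value
      return memo[n][r] = C(n - 1, r) + C(n - 1, r - 1);
  }

  int main() {
      std::memset(memo, -1, sizeof(memo));              // mark everything as "not computed"
      std::cout << C(30, 15) << "\n";                    // 155117520, computed in O(nk) calls
  }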

11 Classification of problems
Induction
– Evaluation of a certain function is the main concern
– Examples: the N-th Fibonacci number; the number of different binary trees with N nodes
Optimization
– Maximization or minimization of a certain function is the objective
– Examples: shortest paths; activity scheduling (maximize the number of activities)

12 Triangle (IOI '94)
Given a triangle of numbers with N levels (shown as a figure on the slide), find a path with maximum sum from the top to the bottom
Only the sum is required, not the path itself

13 Triangle (analysis)
Exhaustion?
– How many paths are there in total? (2^(N-1) for N levels)
Greedy?
– It doesn't work. Why?
Graph problem?
– Possible, but not simple enough

14 Triangle (formulation)
Let A[i][j] denote the number in the i-th row and j-th column, i.e. at position (i, j)
Let F[i][j] denote the sum of the numbers on a maximum-sum path from (1, 1) to (i, j)
Answer = maximum of F[N][1], F[N][2], …, F[N][N]

15 Triangle (formulation)
Base case: F[1][1] = A[1][1]
Progress (i > 1):
– F[i][1] = F[i-1][1] + A[i][1]
– F[i][i] = F[i-1][i-1] + A[i][i]
– F[i][j] = max{F[i-1][j-1], F[i-1][j]} + A[i][j]   for 1 < j < i
[Diagram: cell (i, j) depends on cells (i-1, j-1) and (i-1, j)]

16 Triangle (order of calc.)
F[i][*] depends only on F[i-1][*]
Compute F row by row, from top to bottom

17 Triangle (algorithm)
Algorithm:
  F[1][1] ← A[1][1]
  for i ← 2 to N do
    F[i][1] ← F[i-1][1] + A[i][1]
    F[i][i] ← F[i-1][i-1] + A[i][i]
    for j ← 2 to i-1 do
      F[i][j] ← max{F[i-1][j-1], F[i-1][j]} + A[i][j]
  answer ← max{F[N][1], …, F[N][N]}

18 Triangle (complexity)
Number of array entries to be computed: about N(N-1)/2
Time for computing one entry: O(1)
Thus the total time complexity is O(N^2)
Memory complexity: O(N^2)
– This can be reduced to O(N) since F[i][*] only depends on F[i-1][*]
– We can discard some previous entries after starting on a new row
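As one way to realize the O(N) memory reduction, here is a bottom-up C++ sketch that keeps only a single row of F; the input format (N, then the triangle row by row) is an assumption:

  #include <algorithm>
  #include <iostream>
  #include <vector>

  int main() {
      int N;
      std::cin >> N;                         // number of levels

      std::vector<long long> F(N + 2, 0);    // F[j] = best path sum ending at column j of the current row
      std::vector<long long> A(N + 2, 0);    // current row of the triangle

      for (int i = 1; i <= N; ++i) {
          for (int j = 1; j <= i; ++j) std::cin >> A[j];
          // Update right to left so F[j-1] still holds the previous row's value.
          for (int j = i; j >= 1; --j) {
              long long best;
              if (j == 1)      best = F[1];                    // leftmost cell has one parent
              else if (j == i) best = F[j - 1];                // rightmost cell has one parent
              else             best = std::max(F[j - 1], F[j]);
              F[j] = best + A[j];
          }
      }

      std::cout << *std::max_element(F.begin() + 1, F.begin() + N + 1) << "\n";
  }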

19 Longest Common Subsequence
Given two strings A and B of length n and m respectively, find their longest common subsequence (NOT substring)
Example
– A: aabcaabcaadyyyefg
– B: cdfehgjaefazadxex
– LCS: caaade
– Explanation:
  A: a a b c a a b c a a d y y y e f g
  B: c d f e h g j a e f a z a d x e x

20 LCS (analysis)
Exhaustion
– Number of subsequences of A = 2^n
– Number of subsequences of B = 2^m
– Exponential time!
Observation
– If an LCS of A and B is nonempty then it must have a last character (trivial)
A: a a b c a a b c a a d y y y e f g
B: c d f e h g j a e f a z a d x e x

21 LCS (analysis)
Observation
– Suppose the last LCS character corresponds to A[i] and B[j]
– Removing the last LCS character results in an LCS of A[1..i-1] and B[1..j-1]
[Diagram: the two strings again, with the matched last character and the remaining prefixes A[1..i-1] and B[1..j-1] highlighted]

22 LCS (formulation)
Thus it is reasonable to define F[i][j] as the length of an LCS of A[1..i] and B[1..j]
Base cases:
– F[i][0] = 0 for all i
– F[0][j] = 0 for all j
Progress: (0 < i ≤ n, 0 < j ≤ m)
– F[i][j] = F[i-1][j-1] + 1            if A[i] = B[j]
– F[i][j] = max{F[i-1][j], F[i][j-1]}  otherwise

23 LCS (order of calc.)
F[i][j] depends on entries F[a][b] with a ≤ i and b ≤ j
Order: from top to bottom, from left to right
– There are some other possible orders!

24 LCS (complexity)
Time complexity: O(nm)
Memory complexity: O(nm)
– Again, this can be reduced to O(n+m), but why and how?
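A direct bottom-up rendering of the recurrence from slide 22 in C++ (0-based strings mapped to the 1-based table; the helper name is illustrative, and this version keeps the full O(nm) table):

  #include <algorithm>
  #include <iostream>
  #include <string>
  #include <vector>

  // Length of an LCS of a and b, filling F row by row as in the recurrence.
  int lcsLength(const std::string& a, const std::string& b) {
      int n = a.size(), m = b.size();
      std::vector<std::vector<int>> F(n + 1, std::vector<int>(m + 1, 0));
      for (int i = 1; i <= n; ++i)
          for (int j = 1; j <= m; ++j)
              F[i][j] = (a[i - 1] == b[j - 1])
                            ? F[i - 1][j - 1] + 1
                            : std::max(F[i - 1][j], F[i][j - 1]);
      return F[n][m];
  }

  int main() {
      // The slides' example; prints 6, the length of "caaade".
      std::cout << lcsLength("aabcaabcaadyyyefg", "cdfehgjaefazadxex") << "\n";
  }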

25 Coins
Given an unlimited supply of $2, $3 and $5 coins, find the number of different ways to form an amount of $N
Example:
– N = 10
– {2, 2, 2, 2, 2}, {2, 2, 3, 3}, {2, 3, 5}, {5, 5} — 4 ways
Solve this induction problem by yourself

26 Matrix Chain Multiplication
A matrix is a rectangle of numbers
The number of "usual" (scalar) multiplications needed to multiply an n×m matrix and an m×r matrix is nmr
Matrix multiplication is associative
– (AB)C = A(BC)
[Figure: a matrix of size 2×3]

27 Matrix Chain Multiplication
Given a sequence of matrices A_1, A_2, …, A_n, find an order of multiplications such that the total number of "usual" multiplications is minimized
Example:
– A: 3×7, B: 7×5, C: 5×2
– (AB)C takes 3×7×5 + 3×5×2 = 135 multiplications
– A(BC) takes 7×5×2 + 3×7×2 = 112 multiplications

28 MCM (analysis)
How many different orders of multiplications are there?
– This counting problem can itself be solved by DP!
Does greedy work?
Let's have a look at the brackets
– (A((BC)((DE)F)))(G((HI)J))
– There must be one last multiplication — here it is the outermost one, between (A((BC)((DE)F))) and (G((HI)J))
– Every subexpression with more than one matrix has one last multiplication

29 MCM (analysis)
Optimal substructure
– If the last multiplication is (A_1 A_2 … A_i)(A_{i+1} … A_N) and the multiplication scheme is optimal, then the multiplication schemes for A_1 A_2 … A_i and A_{i+1} … A_N must also be optimal

30 MCM (formulation)
Let r[i] and c[i] be the number of rows and the number of columns of A_i respectively
Let F[i][j] be the minimum number of "usual" multiplications required to compute A_i A_{i+1} … A_j
Base case: F[i][i] = 0 for all i
Progress: for all i < j
  F[i][j] = min{ F[i][k] + F[k+1][j] + r[i]*c[k]*c[j] : i ≤ k < j }

31 MCM (order of calc.)
F with fewer matrices first
[Diagram: the F table indexed by i and j, filled diagonal by diagonal, from subchains with fewer matrices ("first") to the whole chain ("last")]

32 MCM (alternative)
Let G[i][j] be the minimum number of "usual" multiplications required to compute A_i A_{i+1} … A_{i+j-1}
Base case: G[i][1] = 0 for all i
Progress: for all 1 < j ≤ N-i+1
  G[i][j] = min{ G[i][k] + G[i+k][j-k] + r[i]*r[i+k]*c[i+j-1] : 1 ≤ k < j }
Explain!!

33 MCM (alternative)
Still, G with fewer matrices first
[Diagram: the G table indexed by i and j, filled from subchains with fewer matrices ("first") to the whole chain ("last")]

34 MCM (algorithm)
Bottom-up algorithm (algorithm 2):
  for i ← 1 to N do
    G[i][1] ← 0
  for j ← 2 to N do
    for i ← 1 to N-j+1 do
      G[i][j] ← min{ G[i][k] + G[i+k][j-k] + r[i]*r[i+k]*c[i+j-1] : 1 ≤ k < j }
  answer ← G[1][N]

35 MCM (algorithm)
Top-down algorithm (algorithm 1):
  function MCM_DP(i, j)
    if F[i][j] is not ∞ then return F[i][j]
    m ← ∞
    for k ← i to j-1 do
      m ← min{m, MCM_DP(i, k) + MCM_DP(k+1, j) + r[i]*c[k]*c[j]}
    F[i][j] ← m
    return F[i][j]

36 MCM (algorithm)
Top-down algorithm (main):
  initialize everything in F to ∞
  for i ← 1 to N do
    F[i][i] ← 0
  answer ← MCM_DP(1, N)
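Putting slides 35 and 36 together, a memoized C++ sketch might look like the following; the global layout and the dimension vectors r and c are assumptions, with r[i] and c[i] the row and column counts of A_i (so c[i] = r[i+1]):

  #include <algorithm>
  #include <iostream>
  #include <vector>

  const long long INF = 1e18;
  int N;
  std::vector<long long> r, c;             // dimensions of A_1..A_N (1-based)
  std::vector<std::vector<long long>> F;   // F[i][j] = min cost of computing A_i ... A_j

  long long MCM_DP(int i, int j) {
      if (F[i][j] != INF) return F[i][j];  // base case or already computed
      long long m = INF;
      for (int k = i; k < j; ++k)
          m = std::min(m, MCM_DP(i, k) + MCM_DP(k + 1, j) + r[i] * c[k] * c[j]);
      return F[i][j] = m;
  }

  int main() {
      // The example from slide 27: A is 3x7, B is 7x5, C is 5x2.
      N = 3;
      r = {0, 3, 7, 5};
      c = {0, 7, 5, 2};
      F.assign(N + 1, std::vector<long long>(N + 1, INF));
      for (int i = 1; i <= N; ++i) F[i][i] = 0;
      std::cout << MCM_DP(1, N) << "\n";   // prints 112, the cost of A(BC)
  }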

37 MCM (complexity)
Number of array entries to be computed: about N(N-1)/2
Number of references to other entries when computing one single entry:
– It varies (it is the number of ways to split the chain)
– On average, about N/3
Thus the total time complexity is O(N^3)

38 Fishing
There are N fish ponds and you are going to spend M minutes on fishing
Given the time-reward relationship of each pond, determine the time you should spend at each pond in order to get the biggest reward

  time / pond   1        2        3
  1 minute      0 fish   2 fish   1 fish
  2 minutes     3 fish   2 fish   …
  3 minutes     3 fish   4 fish   …

39 Fishing (example)
For example, if N=3, M=3 and the relationships are as given in the previous slide, then the optimal schedule is
– Pond 1: 2 minutes
– Pond 2: 1 minute
– Pond 3: 0 minutes
– Reward: 3 + 2 + 0 = 5 fish

40 Fishing (analysis)
You can think of yourself visiting ponds 1, 2, 3, …, N in order
– Why?
Suppose in an optimal schedule you spend K minutes fishing at pond 1
So you have M-K minutes to spend at the remaining N-1 ponds
– The problem is reduced
But how can we know what K is?
– We don't, so try all possible values!

41 Fishing (formulation)
Let R[i][t] be the reward from spending t minutes at pond i (with R[i][0] = 0)
Let F[i][j] be the maximum reward you can get by spending j minutes at the first i ponds
Base cases: (0 ≤ i ≤ N, 0 ≤ j ≤ M)
  F[i][0] = 0
  F[0][j] = 0
Progress: (1 ≤ i ≤ N, 1 ≤ j ≤ M)
  F[i][j] = max{ F[i-1][k] + R[i][j-k] : 0 ≤ k ≤ j }
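One possible bottom-up sketch of this recurrence in C++; the input format (N, M, then the reward table R row by row per pond) is an assumption:

  #include <algorithm>
  #include <iostream>
  #include <vector>

  int main() {
      int N, M;
      std::cin >> N >> M;

      // R[i][t] = reward from spending t minutes at pond i; R[i][0] = 0.
      std::vector<std::vector<long long>> R(N + 1, std::vector<long long>(M + 1, 0));
      for (int i = 1; i <= N; ++i)
          for (int t = 1; t <= M; ++t) std::cin >> R[i][t];

      // F[i][j] = maximum reward from spending j minutes at the first i ponds.
      std::vector<std::vector<long long>> F(N + 1, std::vector<long long>(M + 1, 0));
      for (int i = 1; i <= N; ++i)
          for (int j = 1; j <= M; ++j)
              for (int k = 0; k <= j; ++k)  // k = minutes given to the first i-1 ponds
                  F[i][j] = std::max(F[i][j], F[i - 1][k] + R[i][j - k]);

      std::cout << F[N][M] << "\n";         // 5 for the example on slide 39
  }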

42 Brackets
A balanced-bracket expression (BBE) is defined as follows
– The empty string is a BBE
– If X and Y are BBEs then (X) and XY are BBEs
– Nothing else is a BBE
For example, (), (()), ()(()())(()) are BBEs while ((), )(, (()()))()() are not
Find the number of different BBEs with exactly N pairs of brackets

43 Brackets (analysis)
Obviously this is an induction problem
Listing out all BBEs with N pairs of brackets is not a good idea
What makes a bracket expression not a BBE?
– the numbers of opening and closing brackets do not match, or
– the number of closing brackets exceeds the number of opening brackets at some point during a scan from left to right

44 Brackets (analysis)
Clearly only the second rule matters in this problem (we always use exactly N pairs)
Intuitively it is easy to construct a BBE from left to right
We call a bracket expression a BBE-prefix (BP) if the number of closing brackets never exceeds the number of opening brackets during a scan from left to right

45 Brackets (formulation)
Let F[i][j] be the number of BPs with i opening brackets and j closing brackets (thus of length i+j)
Base cases:
  F[i][0] = 1 for all 0 ≤ i ≤ N
  F[i][j] = 0 for all 0 ≤ i < j ≤ N
Progress: (0 < j ≤ i ≤ N)
  F[i][j] = F[i-1][j] + F[i][j-1]
Answer: F[N][N]
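A small bottom-up sketch of this table in C++; the answer F[N][N] is the number of BBEs with N pairs (the N-th Catalan number):

  #include <iostream>
  #include <vector>

  int main() {
      int N;
      std::cin >> N;  // number of bracket pairs

      // F[i][j] = number of BBE-prefixes with i opening and j closing brackets.
      std::vector<std::vector<long long>> F(N + 1, std::vector<long long>(N + 1, 0));
      for (int i = 0; i <= N; ++i) F[i][0] = 1;   // only "(((...(" is possible
      for (int i = 1; i <= N; ++i)
          for (int j = 1; j <= i; ++j)            // F[i][j] stays 0 when j > i
              F[i][j] = F[i - 1][j] + F[i][j - 1];

      std::cout << F[N][N] << "\n";  // N = 3 gives 5: ((())), (()()), (())(), ()(()), ()()()
  }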

46 Polygon Triangulation
Given an N-sided convex polygon A, find a triangulation scheme (a set of non-crossing diagonals cutting A into triangles) with minimum total cut length

47 Polygon (analysis)
Every edge of A belongs to exactly one triangle resulting from the triangulation
We get two (or one) smaller polygons after deleting a triangle

48 Polygon (analysis)
The order of cutting does not matter
Optimal substructure
– If the cutting scheme for A is optimal, then the cutting schemes for the two smaller pieces B and C (obtained by deleting one triangle) must also be optimal
[Figure: polygon A split by one triangle into two smaller polygons B and C]

49 Polygon (formulation)
Take this problem as an exercise
A small hint: similar to Matrix Chain Multiplication
Nothing more

50 Summary
Remember, DP is just a technique, not a particular algorithm
The problems we have discussed are quite straightforward, so you should be able to tell that they can be solved by DP with little inspection
The DP problems in NOI and IOI are much harder
– They are well disguised
Working through a wide variety of DP problems seems to be the only way to master DP

51 Looking forward…
Hopefully the trainer responsible for Advanced DP II will talk about DP on non-rectangular structures, such as trees and graphs
The problems discussed will be more complicated than those you have just seen, so please be prepared

52 The end
The last thing…
LET'S HAVE LUNCH!!!