Dynamic Programming

- Decomposes a problem into a series of subproblems
- Builds up correct solutions to larger and larger subproblems
- Examples of dynamic programming:
  - Coins
  - Binomial coefficients
  - Scheduling
  - Knapsack problem
  - Viterbi algorithm
  - Sequence alignment

- DP is a method for solving certain kinds of problems
- DP can be applied when the solution of a problem includes solutions to subproblems
- We need to find a recursive formula for the solution
- We can recursively solve subproblems, starting from the trivial case, and save their solutions in memory
- In the end we'll get the solution to the whole problem

- Simple subproblems
  - We should be able to break the original problem into smaller subproblems that have the same structure
- Optimal substructure
  - The solution to the problem must be a composition of subproblem solutions
- Subproblem overlap
  - Optimal solutions to unrelated subproblems can contain subproblems in common

- How can we find the minimum number of coins to make up a specific amount?
- We could use the greedy method to find the minimum number of US coins to make any amount
  - At each step, just choose the largest coin that does not overshoot the desired amount: 31¢ = 25¢ + 5¢ + 1¢ (3 coins)
- What if the nickel (5¢) did not exist?
  - The greedy method would produce 7 coins (25¢ + 1¢ + 1¢ + 1¢ + 1¢ + 1¢ + 1¢)
  - The optimal solution is actually 4 coins (10¢ + 10¢ + 10¢ + 1¢)
- How can we find the minimum number of coins for any given coin set?
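A minimal sketch of the greedy method just described, in Python (greedy_coins is a name invented here, not a library call):

```python
def greedy_coins(amount, coins):
    """Repeatedly take the largest coin that does not overshoot the amount."""
    count = 0
    for c in sorted(coins, reverse=True):
        count += amount // c   # take as many of this coin as fit
        amount %= c            # continue with what is left
    return count

print(greedy_coins(31, [1, 5, 10, 25]))  # -> 3 coins (25 + 5 + 1)
print(greedy_coins(31, [1, 10, 25]))     # -> 7 coins, though 4 (10+10+10+1) is optimal
```

The second call shows the failure mode from the slide: once the nickel is removed, greedy is no longer optimal.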

- For the following examples, we will assume coins in the following denominations: 1¢, 5¢, 10¢, 21¢, 25¢
- We'll use 63¢ as our goal

To make K cents:
- If there is a K-cent coin, then that one coin is the minimum
- Otherwise, for each value i < K:
  - Find the minimum number of coins needed to make i cents
  - Find the minimum number of coins needed to make K - i cents
  - Choose the i that minimizes this sum
- This algorithm can be viewed as divide-and-conquer, or as brute force
- This solution is very recursive
- It requires exponential work
- It is infeasible to solve for 63¢
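That recursion translates directly into Python; this is only a sketch, assuming a 1¢ coin exists so every amount can be made (min_coins_split is a name invented here):

```python
def min_coins_split(k, coins):
    # If there is a k-cent coin, that single coin is the minimum.
    if k in coins:
        return 1
    # Otherwise try every split of k into i and k - i cents: exponential work.
    return min(min_coins_split(i, coins) + min_coins_split(k - i, coins)
               for i in range(1, k))

print(min_coins_split(13, [1, 5, 10, 21, 25]))  # -> 4 (10 + 1 + 1 + 1); 63 is already infeasible
```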

- We can reduce the problem recursively by choosing the first coin, and solving for the amount that is left
- For 63¢:
  - One 1¢ coin plus the best solution for 62¢
  - One 5¢ coin plus the best solution for 58¢
  - One 10¢ coin plus the best solution for 53¢
  - One 21¢ coin plus the best solution for 42¢
  - One 25¢ coin plus the best solution for 38¢
- Choose the best solution from among the 5 given above
- Instead of solving 62 recursive problems, we solve 5
- This is still a very expensive algorithm
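The same idea as a sketch, under the same 1¢-coin assumption (min_coins_first is again an invented name); the branching factor drops to the number of denominations, but without saved answers it is still exponential:

```python
def min_coins_first(k, coins):
    # Choose the first coin, then solve recursively for the amount that is left.
    if k == 0:
        return 0
    return 1 + min(min_coins_first(k - c, coins) for c in coins if c <= k)

print(min_coins_first(13, [1, 5, 10, 21, 25]))  # -> 4; still far too slow for 63¢
```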

- Idea: solve first for one cent, then two cents, then three cents, etc., up to the desired amount
  - Save each answer in an array!
- For each new amount N, compute all the possible pairs of previous answers which sum to N
- For example, to find the solution for 13¢:
  - First, solve for all of 1¢, 2¢, 3¢, ..., 12¢
  - Next, choose the best solution among:
    - Solution for 1¢ + solution for 12¢
    - Solution for 2¢ + solution for 11¢
    - Solution for 3¢ + solution for 10¢
    - Solution for 4¢ + solution for 9¢
    - Solution for 5¢ + solution for 8¢
    - Solution for 6¢ + solution for 7¢

- Suppose coins are 1¢, 3¢, and 4¢
- There's only one way to make 1¢ (one coin)
- To make 2¢, try 1¢ + 1¢ (one coin + one coin = 2 coins)
- To make 3¢, just use the 3¢ coin (one coin)
- To make 4¢, just use the 4¢ coin (one coin)
- To make 5¢, try:
  - 1¢ + 4¢ (1 coin + 1 coin = 2 coins)
  - 2¢ + 3¢ (2 coins + 1 coin = 3 coins)
  - The first solution is better, so the best solution is 2 coins
- To make 6¢, try:
  - 1¢ + 5¢ (1 coin + 2 coins = 3 coins)
  - 2¢ + 4¢ (2 coins + 1 coin = 3 coins)
  - 3¢ + 3¢ (1 coin + 1 coin = 2 coins) - best solution
- Etc.

- The first algorithm is recursive, with a branching factor of up to 62
  - Possibly the average branching factor is somewhere around half of that (31)
  - The algorithm takes exponential time, with a large base
- The second algorithm is much better: it has a branching factor of 5
  - This is still exponential time, with base 5
- The dynamic programming algorithm is O(N*K), where N is the desired amount and K is the number of different kinds of coins
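A minimal bottom-up sketch of the O(N*K) algorithm (min_coins is an invented name; it uses the one-coin-at-a-time recurrence rather than the pairwise sums above, but fills the same array of saved answers, and again assumes a 1¢ coin):

```python
def min_coins(amount, coins):
    # best[a] = fewest coins that make a cents; filled for a = 1, 2, ..., amount.
    INF = float("inf")
    best = [0] + [INF] * amount
    for a in range(1, amount + 1):
        for c in coins:                              # K coins tried per amount: O(N*K)
            if c <= a and best[a - c] + 1 < best[a]:
                best[a] = best[a - c] + 1
    return best[amount]

print(min_coins(63, [1, 5, 10, 21, 25]))  # -> 3 (21 + 21 + 21)
print(min_coins(6, [1, 3, 4]))            # -> 2 (3 + 3), matching the worked example
```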

- (x + y)^2 = x^2 + 2xy + y^2; coefficients are 1, 2, 1
- (x + y)^3 = x^3 + 3x^2y + 3xy^2 + y^3; coefficients are 1, 3, 3, 1
- (x + y)^4 = x^4 + 4x^3y + 6x^2y^2 + 4xy^3 + y^4; coefficients are 1, 4, 6, 4, 1
- (x + y)^5 = x^5 + 5x^4y + 10x^3y^2 + 10x^2y^3 + 5xy^4 + y^5; coefficients are 1, 5, 10, 10, 5, 1
- The n+1 coefficients of (x + y)^n can be computed according to the formula C(n, x) = n! / (x! * (n - x)!)
- The repeated computation of all the factorials gets to be expensive
- We can use dynamic programming to save and reuse the coefficients as we go

n | C(n,0)  C(n,1)  C(n,2)  C(n,3)  C(n,4)  C(n,5)  C(n,6)
--+--------------------------------------------------------
0 |   1
1 |   1       1
2 |   1       2       1
3 |   1       3       3       1
4 |   1       4       6       4       1
5 |   1       5      10      10       5       1
6 |   1       6      15      20      15       6       1

- Each row depends only on the preceding row
- Only linear space and quadratic time are needed
- This table is known as Pascal's Triangle
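A short sketch of this row-by-row scheme (binomial is an invented name); keeping only the previous row gives the linear space and quadratic time noted above:

```python
def binomial(n, k):
    # Pascal's triangle: C(n, k) = C(n-1, k-1) + C(n-1, k).
    row = [1]                                  # row 0
    for _ in range(n):
        row = [1] + [row[i - 1] + row[i] for i in range(1, len(row))] + [1]
    return row[k]

print(binomial(5, 2))  # -> 10, matching the 1, 5, 10, 10, 5, 1 row
```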

- Dynamic programming is a technique for finding an optimal solution
- The principle of optimality applies if the optimal solution to a problem always contains optimal solutions to all subproblems
- Example: consider the problem of making N¢ with the fewest coins
  - Either there is an N¢ coin, or
  - The set of coins making up an optimal solution for N¢ can be divided into two nonempty subsets, n1¢ and n2¢
  - If either subset, n1¢ or n2¢, could be made with fewer coins, then clearly N¢ could be made with fewer coins

- The principle of optimality holds if:
  - Every optimal solution to a problem contains optimal solutions to all subproblems
- The principle of optimality does not say:
  - If you have optimal solutions to all subproblems, then you can combine them to get an optimal solution
- Example: in US coinage,
  - The optimal solution to 7¢ is 5¢ + 1¢ + 1¢, and
  - The optimal solution to 6¢ is 5¢ + 1¢, but
  - The optimal solution to 13¢ is not 5¢ + 1¢ + 1¢ + 5¢ + 1¢
  - However, there is some way of dividing up 13¢ into subsets with optimal solutions (say, 11¢ + 2¢) that will give an optimal solution for 13¢
- Hence, the principle of optimality holds for this problem

- Dynamic programming relies on working "from the bottom up" and saving the results of solving simpler problems
  - These solutions to simpler problems are then used to compute the solution to more complex problems
- Dynamic programming solutions can often be quite complex and tricky
- Dynamic programming is used for optimization problems, especially ones that would otherwise take exponential time
  - Only problems that satisfy the principle of optimality are suitable for dynamic programming solutions
- Since exponential time is unacceptable for all but the smallest problems, dynamic programming is sometimes essential

- Problem:
  - Scheduling a lecture hall
  - Each request has a start time, an end time, and a value
  - Goal: maximize the total value
- Basic idea:
  - Sort requests by end time
  - Calculate p(request): the latest previous request that does not overlap
  - Optimal schedule for requests 1..n:
    - If n is part of the optimal solution: optimal schedule for p(n), plus n
    - Else: optimal schedule for requests 1..n-1
  - Use sub-solutions to build up the complete solution

- Weighted interval scheduling problem:
  - Given a collection of intervals I1, ..., In with weights w1, ..., wn, choose a maximum-weight set of non-overlapping intervals

[Figure: eight weighted intervals, labeled a through h, laid out along a time axis]

- Weighted interval scheduling problem:
  - Given a collection of intervals I1, ..., In with weights w1, ..., wn, choose a maximum-weight set of non-overlapping intervals
- p(j) = largest index i < j such that job i is compatible with j
- In the running example: p(1) = 0, p(2) = 0, p(3) = 1, p(4) = 0, p(5) = 3, p(6) = 3, with weights w1 = 2, w2 = 4, w3 = 4, w4 = 7, w5 = 2, w6 = 1

- Assume we know the optimal solution O: it either contains or doesn't contain the last request n
- Opt(j) = max(wj + Opt(p(j)), Opt(j-1))

- Opt(j) = max(wj + Opt(p(j)), Opt(j-1))
- Opt(6) = max(w6 + Opt(3), Opt(5)) = max(1 + 6, 8) = 8
- Opt(3) = max(w3 + Opt(1), Opt(2)) = max(4 + 2, 4) = 6
- Opt(5) = max(w5 + Opt(3), Opt(4)) = max(2 + 6, 7) = 8
- Opt(1) = max(w1 + Opt(0), Opt(0)) = max(2 + 0, 0) = 2
- Opt(2) = max(w2 + Opt(0), Opt(1)) = max(4 + 0, 2) = 4
- Opt(4) = max(w4 + Opt(0), Opt(3)) = max(7 + 0, 6) = 7
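A bottom-up sketch of this recurrence in Python, fed with the p(j) and weight values from the example (wis is an invented name; the arrays are padded so indices match the 1-based job numbers):

```python
def wis(w, p):
    # opt[j] = maximum total weight achievable using jobs 1..j.
    n = len(w) - 1
    opt = [0] * (n + 1)
    for j in range(1, n + 1):
        opt[j] = max(w[j] + opt[p[j]],  # take job j, then the best up to p(j)
                     opt[j - 1])        # or skip job j
    return opt[n]

w = [0, 2, 4, 4, 7, 2, 1]  # w[1..6] from the example (index 0 unused)
p = [0, 0, 0, 1, 0, 3, 3]  # p[1..6] from the example
print(wis(w, p))           # -> 8, matching Opt(6)
```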

- A thief is robbing a house
- He has a bag to fill with items; the bag has a maximum weight capacity of W
- Each item he is considering stealing has a value vi and a weight wi
- His objective is to steal the items with maximum total value while staying within the weight limit of his bag

- In this case, what would a subproblem be? Does a solution for items 1..n include the solution for items 1..n-1?
- Assume we know the optimal solution O: it either contains or doesn't contain the last item n
- If it does:
  - Add the value of item n (vn) to the total value
  - O must contain an optimal solution to the problem consisting of items 1..n-1 and total weight W - wn
- If it doesn't:
  - O is equal to the optimal solution to the problem consisting of items 1..n-1 and total weight W
- Recurrence: if w < wi then Opt(i, w) = Opt(i - 1, w); otherwise Opt(i, w) = max(Opt(i - 1, w), vi + Opt(i - 1, w - wi))

- i = n = 4, w = W = 5
  - 5 < 5? No, so Opt(4, 5) = max(Opt(3, 5), 6 + Opt(3, 0))
- i = 3, w = 5
  - 5 < 4? No, so Opt(3, 5) = max(Opt(2, 5), 5 + Opt(2, 1))
- i = 3, w = 0
  - 0 < 4? Yes, so Opt(3, 0) = Opt(2, 0)
- ...

- Knapsack problem:
  - Given n objects and a "knapsack"
  - Item i weighs wi > 0 kilograms and has value vi > 0
  - The knapsack has a capacity of W kilograms
  - Goal: fill the knapsack so as to maximize total value
- Example instance (W = 11):

  Item   Value   Weight
    1       1        1
    2       6        2
    3      18        5
    4      22        6
    5      28        7

- Ex: { 3, 4 } has value 40

- Def. OPT(i) = max-profit subset of items 1, ..., i
- Case 1: OPT does not select item i
  - OPT selects the best of { 1, 2, ..., i-1 }
- Case 2: OPT selects item i
  - Accepting item i does not immediately imply that we will have to reject other items
  - Without knowing what other items were selected before i, we don't even know if we have enough room for i
- Conclusion: we need more subproblems!

- Def. OPT(i, w) = max-profit subset of items 1, ..., i with weight limit w
- Case 1: OPT does not select item i
  - OPT selects the best of { 1, 2, ..., i-1 } using weight limit w
- Case 2: OPT selects item i
  - New weight limit = w - wi
  - OPT selects the best of { 1, 2, ..., i-1 } using this new weight limit
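A sketch of this two-variable table in Python, run on the W = 11 instance above (knapsack is an invented name):

```python
def knapsack(values, weights, W):
    n = len(values)
    # opt[i][w] = max value using items 1..i with weight limit w.
    opt = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        vi, wi = values[i - 1], weights[i - 1]
        for w in range(W + 1):
            if wi > w:
                opt[i][w] = opt[i - 1][w]                 # item i does not fit
            else:
                opt[i][w] = max(opt[i - 1][w],            # skip item i
                                vi + opt[i - 1][w - wi])  # or take item i
    return opt[n][W]

print(knapsack([1, 6, 18, 22, 28], [1, 2, 5, 6, 7], 11))  # -> 40, items { 3, 4 }
```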

Filling the table row by row for the W = 11 instance (columns are weight limits w = 0..11):

                   w:  0   1   2   3   4   5   6   7   8   9  10  11
  { }                  0   0   0   0   0   0   0   0   0   0   0   0
  { 1 }                0   1   1   1   1   1   1   1   1   1   1   1
  { 1, 2 }             0   1   6   7   7   7   7   7   7   7   7   7
  { 1, 2, 3 }          0   1   6   7   7  18  19  24  25  25  25  25
  { 1, 2, 3, 4 }       0   1   6   7   7  18  22  24  28  29  29  40
  { 1, 2, 3, 4, 5 }    0   1   6   7   7  18  22  28  29  34  35  40

OPT: { 3, 4 }, value = 18 + 22 = 40