1 Dynamic Programming Topic 07 Asst. Prof. Dr. Bunyarit Uyyanonvara IT Program, Image and Vision Computing Lab. School of Information and Computer Technology.


1 Dynamic Programming Topic 07 Asst. Prof. Dr. Bunyarit Uyyanonvara IT Program, Image and Vision Computing Lab. School of Information and Computer Technology Sirindhorn International Institute of Technology Thammasat University X 2005 ITS033 – Programming & Algorithms

2 ITS033
Topic 01 - Problems & Algorithmic Problem Solving
Topic 02 - Algorithm Representation & Efficiency Analysis
Topic 03 - State Space of a Problem
Topic 04 - Brute Force Algorithm
Topic 05 - Divide and Conquer
Topic 06 - Decrease and Conquer
Topic 07 - Dynamic Programming
Topic 08 - Transform and Conquer
Topic 09 - Graph Algorithms
Topic 10 - Minimum Spanning Tree
Topic 11 - Shortest Path Problem
Topic 12 - Coping with the Limitations of Algorithm Power, and Midterm

3 This Week: Overview of Dynamic Programming, Fibonacci Series, Knapsack Problem, Memory Function

4 Dynamic Programming: Introduction Topic 07.1 Asst. Prof. Dr. Bunyarit Uyyanonvara IT Program, Image and Vision Computing Lab. School of Information and Computer Technology Sirindhorn International Institute of Technology Thammasat University X 2005 ITS033 – Programming & Algorithms

5 Introduction Dynamic programming is a technique for solving problems with overlapping sub-problems. Typically, these sub-problems arise from a recurrence relating a solution to a given problem with solutions to its smaller sub-problems of the same type.

6 Introduction Rather than solving overlapping sub-problems again and again, dynamic programming suggests solving each of the smaller sub-problems only once and recording the results in a table from which we can then obtain a solution to the original problem.

7 Introduction The Fibonacci numbers are the elements of the sequence 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, ...

Algorithm fib(n)
    if n = 0 or n = 1
        return n
    return fib(n − 1) + fib(n − 2)

The original problem, F(n), is defined in terms of F(n − 1) and F(n − 2).

8 Introduction If we try to use this recurrence directly to compute the n-th Fibonacci number F(n), we have to recompute the same values of the function many times. In fact, we can avoid using an extra array to accomplish this task by recording the values of just the last two elements of the Fibonacci sequence.

9 Introduction Notice that if we call, say, fib(5), we produce a call tree that calls the function on the same value many different times:

fib(5)
= fib(4) + fib(3)
= (fib(3) + fib(2)) + (fib(2) + fib(1))
= ((fib(2) + fib(1)) + (fib(1) + fib(0))) + ((fib(1) + fib(0)) + fib(1))
= (((fib(1) + fib(0)) + fib(1)) + (fib(1) + fib(0))) + ((fib(1) + fib(0)) + fib(1))

If we try to use the recurrence directly to compute the n-th Fibonacci number F(n), we would have to recompute the same values of this function many times.

10 Introduction Certain algorithms compute the n-th Fibonacci number without computing all the preceding elements of this sequence. Such an algorithm is typical of the classic bottom-up dynamic programming approach; a top-down variation of it exploits so-called memory functions. The crucial step in designing such an algorithm remains the same: deriving a recurrence relating a solution to the problem's instance to solutions of its smaller (and overlapping) subinstances.

11 Introduction Dynamic programming usually takes one of two approaches:

Bottom-up approach: all subproblems that might be needed are solved in advance and then used to build up solutions to larger problems. This approach is slightly better in stack space and number of function calls, but it is sometimes not intuitive to figure out all the subproblems needed for solving the given problem.

Top-down approach: the problem is broken into subproblems, and these subproblems are solved and the solutions remembered, in case they need to be solved again. This is recursion and the memory function combined.

12 Bottom Up In the bottom-up approach we calculate the smaller values of Fibo first, then build larger values from them:

Algorithm Fibo(n)
    previousFib = 0, currentFib = 1
    repeat n − 1 times
        newFib = previousFib + currentFib
        previousFib = currentFib
        currentFib = newFib
    return currentFib

This method uses linear (O(n)) time, since it contains a loop that repeats n − 1 times. In both this and the top-down version, we calculate fib(2) only once, and then use it to calculate both fib(4) and fib(3), instead of computing it every time either of them is evaluated.
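A direct Python rendering of the bottom-up pseudocode (a sketch; the function name and the extra guard for n = 0, which the pseudocode does not handle, are my own):

```python
# Bottom-up Fibonacci: keep only the last two values, as on this slide.
def fib_bottom_up(n):
    if n == 0:
        return 0
    previous_fib, current_fib = 0, 1
    for _ in range(n - 1):            # repeat n - 1 times
        previous_fib, current_fib = current_fib, previous_fib + current_fib
    return current_fib

print([fib_bottom_up(n) for n in range(10)])  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```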

13 Top-Down Suppose we have a simple map object, m, which maps each value of Fibo that has already been calculated to its result, and we modify our function to use and update it. The resulting function requires only O(n) time instead of exponential time:

m[0] = 0
m[1] = 1

Algorithm Fibo(n)
    if map m does not contain key n
        m[n] := Fibo(n − 1) + Fibo(n − 2)
    return m[n]

This technique of saving values that have already been calculated is called a memory function (memoization); this is the top-down approach, since we first break the problem into subproblems and then calculate and store the values.
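The same idea in Python (a sketch; the names are my own, and the map m is a plain dictionary):

```python
# Top-down Fibonacci with a memory function: m caches every value
# already computed, so each fib(k) is evaluated only once.
m = {0: 0, 1: 1}

def fib_top_down(n):
    if n not in m:                    # "if map m does not contain key n"
        m[n] = fib_top_down(n - 1) + fib_top_down(n - 2)
    return m[n]

print(fib_top_down(50))  # 12586269025, computed in O(n) time
```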

14 Dynamic Programming: Knapsack Problem Topic 07.2 Asst. Prof. Dr. Bunyarit Uyyanonvara IT Program, Image and Vision Computing Lab. School of Information and Computer Technology Sirindhorn International Institute of Technology Thammasat University X 2005 ITS033 – Programming & Algorithms

Knapsack problem Given a knapsack with maximum capacity W, and a set S consisting of n items. Each item i has some weight w_i and benefit value b_i (all w_i, b_i, and W are integer values). Problem: how do we pack the knapsack to achieve the maximum total value of packed items?

Knapsack problem: W = 20 [Figure: a knapsack with max weight W = 20, and a set of items, each with a weight w_i and a benefit value b_i]

Knapsack problem The problem, in other words, is to find a subset T ⊆ S that maximizes the total benefit Σ_{i ∈ T} b_i subject to the weight constraint Σ_{i ∈ T} w_i ≤ W. The problem is called a "0-1" problem because each item must be entirely accepted or rejected. Another version of this problem is the "fractional knapsack problem", where we can take fractions of items (see greedy algorithms).

Knapsack problem: brute-force approach Since there are n items, there are 2^n possible combinations of items. We go through all combinations and find the one with the greatest total value whose total weight is less than or equal to W. The running time will be O(2^n).
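A minimal brute-force sketch in Python, assuming items are given as (weight, benefit) pairs (the names are my own; the sample data is taken from the worked example later in these slides):

```python
from itertools import combinations

# Brute-force 0-1 knapsack: try all 2^n subsets of items.
def knapsack_brute_force(items, W):
    best = 0
    for r in range(len(items) + 1):
        for subset in combinations(items, r):
            weight = sum(w for w, b in subset)
            value = sum(b for w, b in subset)
            if weight <= W and value > best:
                best = value
    return best

print(knapsack_brute_force([(2, 3), (3, 4), (4, 5), (5, 6)], 5))  # 7
```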

Knapsack problem Can we do better? Yes, with an algorithm based on dynamic programming. We need to carefully identify the subproblems.

20 The Knapsack Problem To design a dynamic programming algorithm, we need to derive a recurrence relation that expresses a solution to an instance of the knapsack problem in terms of solutions to its smaller sub-instances.

21 Recursive Formula for subproblems Recursive formula for subproblems:

B[k, w] = B[k − 1, w]                                  if w_k > w
B[k, w] = max(B[k − 1, w], B[k − 1, w − w_k] + b_k)    otherwise

It means that the best subset of S_k that has total weight w is one of the two: 1) the best subset of S_{k−1} that has total weight w, or 2) the best subset of S_{k−1} that has total weight w − w_k, plus item k.

22 Recursive Formula The best subset of S_k that has total weight w either contains item k or not. First case: w_k > w. Item k can't be part of the solution, since if it were, the total weight would be > w, which is unacceptable. Second case: w_k <= w. Then item k can be in the solution, and we choose the case with the greater value.

Knapsack Algorithm

for w = 0 to W
    B[0, w] = 0
for i = 0 to n
    B[i, 0] = 0
for i = 1 to n
    for w = 1 to W
        if w_i <= w                       // item i can be part of the solution
            if b_i + B[i−1, w−w_i] > B[i−1, w]
                B[i, w] = b_i + B[i−1, w−w_i]
            else
                B[i, w] = B[i−1, w]
        else
            B[i, w] = B[i−1, w]           // w_i > w
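The algorithm can be rendered in Python as follows (a sketch; the names are my own, and Python's 0-based lists mean item i's data lives at index i − 1):

```python
# Dynamic-programming 0-1 knapsack, filling the B[i][w] table row by row.
def knapsack_dp(weights, values, W):
    n = len(weights)
    B = [[0] * (W + 1) for _ in range(n + 1)]   # row 0 and column 0 start at 0
    for i in range(1, n + 1):
        for w in range(1, W + 1):
            if weights[i - 1] <= w and values[i - 1] + B[i - 1][w - weights[i - 1]] > B[i - 1][w]:
                B[i][w] = values[i - 1] + B[i - 1][w - weights[i - 1]]
            else:
                B[i][w] = B[i - 1][w]           # item i not taken (or does not fit)
    return B

B = knapsack_dp([2, 3, 4, 5], [3, 4, 5, 6], 5)
print(B[4][5])  # 7 -- the maximum benefit for the example data
```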

24 Example Let's run our algorithm on the following data: n = 4 (number of items), W = 5 (max weight), items (weight, benefit): (2,3), (3,4), (4,5), (5,6)

25 Example (2) Initialize the top row: for w = 0 to W, B[0, w] = 0.

26 Example (3) Initialize the first column: for i = 0 to n, B[i, 0] = 0.

27 Example (4)–(8) Fill row i = 1 (item 1: b_1 = 3, w_1 = 2). For w = 1, w − w_1 = −1, so item 1 does not fit and B[1, 1] = B[0, 1] = 0. For w = 2..5, w_1 <= w and b_1 + B[0, w − w_1] = 3 > B[0, w] = 0, so B[1, w] = 3. Row 1 becomes: 0, 0, 3, 3, 3, 3.

32 Example (9)–(13) Fill row i = 2 (item 2: b_2 = 4, w_2 = 3). For w = 1, 2, item 2 does not fit and row 1 is copied. For w = 3, 4, taking item 2 gives 4 + B[1, w − 3] = 4 > B[1, w] = 3. For w = 5, 4 + B[1, 2] = 7 > 3. Row 2 becomes: 0, 0, 3, 4, 4, 7.

37 Example (14)–(15) Fill row i = 3 (item 3: b_3 = 5, w_3 = 4). For w = 1..3, item 3 does not fit and row 2 is copied. For w = 4, 5 + B[2, 0] = 5 > B[2, 4] = 4. For w = 5, 5 + B[2, 1] = 5 < B[2, 5] = 7. Row 3 becomes: 0, 0, 3, 4, 5, 7.

40 Example (16)–(17) Fill row i = 4 (item 4: b_4 = 6, w_4 = 5). For w = 1..4, item 4 does not fit and row 3 is copied. For w = 5, 6 + B[3, 0] = 6 < B[3, 5] = 7, so B[4, 5] = 7. Row 4 becomes: 0, 0, 3, 4, 5, 7.

The completed table (items: 1: (2,3), 2: (3,4), 3: (4,5), 4: (5,6)):

i\w   0   1   2   3   4   5
0     0   0   0   0   0   0
1     0   0   3   3   3   3
2     0   0   3   4   4   7
3     0   0   3   4   5   7
4     0   0   3   4   5   7

42 Comments This algorithm only finds the maximum possible value that can be carried in the knapsack. To know which items make up this maximum value, an addition to this algorithm is necessary: we can trace back through the table we built to extract this data.
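One possible traceback sketch in Python (the names are my own): rebuild the table with the same recurrence, then walk it from B[n][W] downwards; whenever B[i][w] differs from B[i−1][w], item i must have been taken.

```python
# 0-1 knapsack returning both the maximum benefit and the chosen items.
def knapsack_with_items(weights, values, W):
    n = len(weights)
    B = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(1, W + 1):
            take = values[i - 1] + B[i - 1][w - weights[i - 1]] if weights[i - 1] <= w else -1
            B[i][w] = max(B[i - 1][w], take)
    # Trace back: a changed entry means item i is in the optimal subset.
    chosen, w = [], W
    for i in range(n, 0, -1):
        if B[i][w] != B[i - 1][w]:
            chosen.append(i)          # item i (1-based, as in the slides)
            w -= weights[i - 1]
    return B[n][W], sorted(chosen)

print(knapsack_with_items([2, 3, 4, 5], [3, 4, 5, 6], 5))  # (7, [1, 2])
```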

43 Running time What is the running time of this algorithm? The two initialization loops take O(W) and O(n). The main nested loop does O(W) work per item and is repeated n times, so the total running time is O(n*W). Remember that the brute-force algorithm takes O(2^n).

44 Example Build a dynamic programming table for this knapsack problem: capacity W = 5, items (weight, value): 1: (2, $12), 2: (1, $10), 3: (3, $20), 4: (2, $15)

45 Example – Dynamic Programming Table (capacity W = 5)

i\w   0    1    2    3    4    5
0     0    0    0    0    0    0
1     0    0   12   12   12   12
2     0   10   12   22   22   22
3     0   10   12   22   30   32
4     0   10   15   25   30   37

46 Example Thus, the maximal value is V[4, 5] = $37. We can find the composition of an optimal subset by tracing back the computations of this entry in the table. Since V[4, 5] is not equal to V[3, 5], item 4 was included in an optimal solution, along with an optimal subset for filling the 5 − 2 = 3 remaining units of the knapsack capacity.

47 Example The next entry to examine is V[3, 3]. Here V[3, 3] = V[2, 3], so item 3 is not included. V[2, 3] ≠ V[1, 3], so item 2 is included.

48 Example The next entry is V[1, 2]. V[1, 2] ≠ V[0, 2], so item 1 is included. The solution is {item 1, item 2, item 4}: total weight is 5, total value is $37.

49 The Knapsack Problem The time efficiency and space efficiency of this algorithm are both in Θ(nW). The time needed to find the composition of an optimal solution is in O(n + W).

50 Dynamic Programming: Memory Function Lecture 07.3 Asst. Prof. Dr. Bunyarit Uyyanonvara IT Program, Image and Vision Computing Lab. School of Information and Computer Technology Sirindhorn International Institute of Technology Thammasat University X 2005 ITS033 – Programming & Algorithms

51 Memory Function The classic dynamic programming approach fills a table with solutions to all smaller subproblems, each of them solved only once. An unsatisfying aspect of this approach is that solutions to some of these smaller subproblems are often not necessary for getting a solution to the given problem.

52 Memory Function Since this drawback is not present in the top-down approach, it is natural to try to combine the strengths of the top-down and bottom-up approaches. The goal is to get a method that solves only the subproblems that are necessary, and solves each of them only once. Such a method exists; it is based on using memory functions.

53 Memory Function Initially, all the table’s entries are initialized with a special “null” symbol to indicate that they have not yet been calculated. Thereafter, whenever a new value needs to be calculated, the method checks the corresponding entry in the table first: if this entry is not “null,” it is simply retrieved from the table; otherwise, it is computed by the recursive call whose result is then recorded in the table.

54 Memory Function for solving Knapsack Problem

55 Memory Function for solving Knapsack Problem
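A Python sketch of the memory-function idea applied to the knapsack problem (the names are my own; None plays the role of the "null" symbol described above):

```python
# Memory-function 0-1 knapsack (top-down): table entries start as None
# and each F[i][j] is computed by the recursion at most once.
def mf_knapsack(weights, values, W):
    n = len(weights)
    F = [[None] * (W + 1) for _ in range(n + 1)]
    for j in range(W + 1):
        F[0][j] = 0                   # no items -> value 0
    for i in range(n + 1):
        F[i][0] = 0                   # capacity 0 -> value 0

    def solve(i, j):
        if F[i][j] is None:           # not yet calculated
            if weights[i - 1] > j:
                F[i][j] = solve(i - 1, j)
            else:
                F[i][j] = max(solve(i - 1, j),
                              values[i - 1] + solve(i - 1, j - weights[i - 1]))
        return F[i][j]

    return solve(n, W)

print(mf_knapsack([2, 1, 3, 2], [12, 10, 20, 15], 5))  # 37
```

Only the entries actually reached by the recursion are filled in; the rest of the table stays None.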

56 Memory Function In general, we cannot expect more than a constant-factor gain from using the memory function method for the knapsack problem, because its time efficiency class is the same as that of the bottom-up algorithm. A memory function method may also be less space-efficient than a space-efficient version of a bottom-up algorithm.

57 Conclusion Dynamic programming is a useful technique for solving certain kinds of problems. When the solution can be recursively described in terms of partial solutions, we can store these partial solutions and re-use them as necessary. Running time (dynamic programming algorithm vs. naïve algorithm) for the 0-1 knapsack problem: O(W*n) vs. O(2^n).

58 ITS033
Topic 01 - Problems & Algorithmic Problem Solving
Topic 02 - Algorithm Representation & Efficiency Analysis
Topic 03 - State Space of a Problem
Topic 04 - Brute Force Algorithm
Topic 05 - Divide and Conquer
Topic 06 - Decrease and Conquer
Topic 07 - Dynamic Programming
Topic 08 - Transform and Conquer
Topic 09 - Graph Algorithms
Topic 10 - Minimum Spanning Tree
Topic 11 - Shortest Path Problem
Topic 12 - Coping with the Limitations of Algorithm Power, and Midterm

59 End of Chapter 7 Thank you!