Dynamic Programming Sequence of decisions. Problem state.

Dynamic Programming Sequence of decisions. Problem state. Principle of optimality. Dynamic Programming Recurrence Equations. Solution of recurrence equations.

Sequence Of Decisions As in the greedy method, the solution to a problem is viewed as the result of a sequence of decisions. Unlike the greedy method, decisions are not made in a greedy and binding manner.

Dynamic Programming (DP) A large class of seemingly exponential problems has a polynomial solution (“only”) via DP. DP ≈ “controlled brute force”. DP ≈ recursion + re-use. Source: http://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-006-introduction-to-algorithms-spring-2008/lecture-notes/lec19.pdf

0/1 Knapsack Problem Let xi = 1 when item i is selected and let xi = 0 when item i is not selected.
maximize Σ (i = 1 to n) pi xi
subject to Σ (i = 1 to n) wi xi <= c
and xi = 0 or 1 for all i.
All profits and weights are positive.
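
As a concrete, purely illustrative instance (all numbers below are made up, not from the slides), the following C++ sketch evaluates the objective and the constraint for one candidate selection x:

#include <iostream>

int main()
{  // Hypothetical instance: n = 3 items, capacity c = 6 (illustrative values only).
   int n = 3, c = 6;
   int w[] = {0, 2, 3, 4};   // 1-based: w[i] = weight of item i (index 0 unused)
   int p[] = {0, 3, 4, 5};   // 1-based: p[i] = profit of item i
   int x[] = {0, 1, 0, 1};   // candidate solution: select items 1 and 3
   int profit = 0, weight = 0;
   for (int i = 1; i <= n; i++) { profit += p[i] * x[i]; weight += w[i] * x[i]; }
   std::cout << "profit = " << profit                 // 3 + 5 = 8
             << ", feasible = " << (weight <= c)      // weight 2 + 4 = 6 <= 6, so feasible
             << std::endl;
   return 0;
}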

Sequence Of Decisions Decide the xi values in the order x1, x2, x3, …, xn. Decide the xi values in the order xn, xn-1, xn-2, …, x1. Decide the xi values in the order x1, xn, x2, xn-1, … Or any other order.

Problem State The state of the 0/1 knapsack problem is given by the weights and profits of the available items and the capacity of the knapsack. When a decision on one of the xi values is made, the problem state changes: item i is no longer available and the remaining knapsack capacity may be less.

Problem State Suppose that decisions are made in the order x1, x2, x3, …, xn. The initial state of the problem is described by the pair (1, c). Items 1 through n are available (the weights, profits and n are implicit). The available knapsack capacity is c. Following the first decision the state becomes one of the following: (2, c) … when the decision is to set x1= 0. (2, c-w1) … when the decision is to set x1= 1.

Problem State Suppose that decisions are made in the order xn, xn-1, xn-2, …, x1. The initial state of the problem is described by the pair (n, c). Items 1 through n are available (the weights, profits and first item index are implicit). The available knapsack capacity is c. Following the first decision the state becomes one of the following: (n-1, c) … when the decision is to set xn= 0. (n-1, c-wn) … when the decision is to set xn= 1.
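
As a purely hypothetical illustration (not from the slides), a problem state can be represented as a pair:

struct State     // state (i, y): items i through n are available, capacity y remains
{  int i;        // index of the first still-available item
   int y;        // remaining knapsack capacity
};
// Deciding in the order x1, ..., xn, the initial state is {1, c}; setting x1 = 0
// leads to {2, c}, while setting x1 = 1 (possible only if w1 <= c) leads to {2, c - w1}.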

Principle Of Optimality An optimal solution satisfies the following property: An optimal policy has the property that whatever the initial state and initial decision are, the remaining decisions must constitute an optimal policy with regard to the state resulting from the first decision. A problem that can be broken down this way has “Optimal substructure” Source: Bellman, 1957, Chap. III.3. https://en.wikipedia.org/wiki/Bellman_equation Dynamic programming may be used only when the principle of optimality holds.

Reminder Optimal substructure: an optimal solution to a problem contains optimal solutions to subproblems.

Methods Typically, if a problem has optimal substructure and the greedy choice can be proven optimal at each step (by induction) → greedy algorithm. Otherwise, if it exhibits optimal substructure AND overlapping subproblems → dynamic programming. If a problem can be solved by combining optimal solutions to non-overlapping sub-problems → divide and conquer. Else, a lengthy straightforward search of the solution space. Source: https://en.wikipedia.org/wiki/Optimal_substructure Source: https://en.wikipedia.org/wiki/Dynamic_programming Example of an overlapping subproblem: the Fibonacci sequence (see the sketch below).
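
To make the overlapping-subproblems point concrete with the Fibonacci example just mentioned, here is a hedged C++ sketch (function and variable names are illustrative, not from the slides): plain recursion recomputes the same values many times, while storing each computed value lets it be re-used.

#include <vector>

long long fibSlow(int k)               // plain recursion: fib(k-2) is recomputed many times
{  if (k < 2) return k;
   return fibSlow(k - 1) + fibSlow(k - 2);
}

std::vector<long long> memo;           // memo[k] = fib(k), or -1 if not yet computed
long long fibFast(int k)               // recursion + re-use: each fib(k) is computed once
{  if (k < 2) return k;
   if (memo[k] >= 0) return memo[k];
   return memo[k] = fibFast(k - 1) + fibFast(k - 2);
}
// Usage sketch: memo.assign(50, -1); then fibFast(45) makes O(k) recursive calls,
// whereas fibSlow(45) makes an exponential number of calls.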

0/1 Knapsack Problem Suppose that decisions are made in the order x1, x2, x3, …, xn. Let x1= a1, x2 = a2, x3 = a3, …, xn = an be an optimal solution. If a1 = 0, then following the first decision the state is (2, c). a2, a3, …, an must be an optimal solution to the knapsack instance given by the state (2,c).

x1 = a1 = 0
a2, a3, …, an must be an optimal solution to the instance
maximize Σ (i = 2 to n) pi xi
subject to Σ (i = 2 to n) wi xi <= c
and xi = 0 or 1 for all i.
If not, this instance has a better solution b2, b3, …, bn with Σ (i = 2 to n) pi bi > Σ (i = 2 to n) pi ai. So 0, b2, b3, …, bn is a better solution than a1, a2, …, an. But a1, …, an is optimal.

x1 = a1 = 0 x1= a1, x2 = b2, x3 = b3, …, xn = bn is a better solution to the original instance than is x1= a1, x2 = a2, x3 = a3, …, xn = an. So x1= a1, x2 = a2, x3 = a3, …, xn = an cannot be an optimal solution … a contradiction with the assumption that it is optimal.

x1 = a1 = 1 Next, consider the case a1 = 1. Following the first decision the state is (2, c-w1). a2, a3, …, an must be an optimal solution to the knapsack instance given by the state (2,c -w1).

x1 = a1 = 1
a2, a3, …, an must be an optimal solution to the instance
maximize Σ (i = 2 to n) pi xi
subject to Σ (i = 2 to n) wi xi <= c - w1
and xi = 0 or 1 for all i.
If not, this instance has a better solution b2, b3, …, bn with Σ (i = 2 to n) pi bi > Σ (i = 2 to n) pi ai. So 1, b2, b3, …, bn is a better solution than a1, a2, …, an. But a1, …, an is optimal.

x1 = a1 = 1 x1= a1, x2 = b2, x3 = b3, …, xn = bn is a better solution to the original instance than is x1= a1, x2 = a2, x3 = a3, …, xn = an. So x1= a1, x2 = a2, x3 = a3, …, xn = an cannot be an optimal solution … a contradiction with the assumption that it is optimal.

0/1 Knapsack Problem Therefore, no matter what the first decision, the remaining decisions are optimal with respect to the state that results from this decision. The principle of optimality holds and dynamic programming may be applied.

Dynamic Programming Recurrence Let f(i,y) be the profit value of the optimal solution to the knapsack instance defined by the state (i,y). Items i through n are available. Available capacity is y. For the time being, assume that we wish to determine only the value of the best solution. Later we will worry about determining the xi values that yield this maximum value. Under this assumption, our task is to determine f(1,c). Define a recurrence equation on some function of the problem state.

Dynamic Programming Recurrence f(n,y) is the value of the optimal solution to the knapsack instance defined by the state (n,y). Only item n is available. Available capacity is y. If wn <= y, f(n,y) = pn. If wn > y, f(n,y) = 0.

Dynamic Programming Recurrence Suppose that i < n. f(i,y) is the value of the optimal solution to the knapsack instance defined by the state (i,y). Items i through n are available. Available capacity is y. Suppose that in the optimal solution for the state (i,y), the first decision is to set xi= 0. From the principle of optimality (we have shown that this principle holds for the knapsack problem), it follows that f(i,y) = f(i+1,y).

Dynamic Programming Recurrence The only other possibility for the first decision is xi= 1. The case xi= 1 can arise only when y >= wi. From the principle of optimality, it follows that f(i,y) = f(i+1,y-wi) + pi. Combining the two cases, we get f(i,y) = f(i+1,y) whenever y < wi. f(i,y) = max{f(i+1,y), f(i+1,y-wi) + pi}, y >= wi.

Recursive Code
int f(int i, int y)
{// Return f(i, y).
   if (i == n) return (y < w[n]) ? 0 : p[n];
   if (y < w[i]) return f(i + 1, y);
   return max(f(i + 1, y), f(i + 1, y - w[i]) + p[i]);
}
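
A minimal, hedged driver for the recursive code above; the globals and the small instance are assumptions added for illustration (the body of f itself follows the slides' code):

#include <algorithm>
#include <iostream>
using std::max;

// Illustrative instance (1-based arrays; index 0 unused): these values are assumptions.
int n = 4, c = 10;
int w[] = {0, 3, 5, 6, 2};     // w[i] = weight of item i
int p[] = {0, 12, 25, 30, 9};  // p[i] = profit of item i

int f(int i, int y)
{// Return f(i, y) by direct recursion (exponential time in the worst case).
   if (i == n) return (y < w[n]) ? 0 : p[n];
   if (y < w[i]) return f(i + 1, y);
   return max(f(i + 1, y), f(i + 1, y - w[i]) + p[i]);
}

int main()
{  std::cout << "f(1, c) = " << f(1, c) << std::endl;   // prints 46 for this instance
   return 0;
}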

Recursion Tree [Figure: the recursion tree rooted at f(1,c). Each call f(i,y) branches into f(i+1, y) (xi = 0) and f(i+1, y - wi) (xi = 1); e.g., f(1,c) calls f(2,c) and f(2,c-w1), f(2,c) calls f(3,c) and f(3,c-w2), and so on down to calls such as f(5, c-w1-w3-w4).]

Time Complexity Let t(n) be the time required when n items are available. t(0) = t(1) = a, where a is a constant. When n > 1, t(n) <= 2t(n-1) + b, where b is a constant. So t(n) = O(2^n). Solving dynamic programming recurrences recursively can be hazardous to run time.
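
A short expansion of this recurrence, written in LaTeX as a worked sketch (not part of the slides), shows where the O(2^n) bound comes from:

\[
t(n) \le 2t(n-1) + b \le 4t(n-2) + 3b \le \cdots \le 2^{n-1}t(1) + (2^{n-1}-1)b
     = 2^{n-1}a + (2^{n-1}-1)b = O(2^n).
\]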

Reducing Run Time [Same recursion tree as on the previous slide.] Several f(i,y)s are computed more than once. Use a dictionary that stores already computed f(i,y)s. When the weights (and hence c) are integers, the number of different y values on a level is at most c+1.

Time Complexity Level i of the recursion tree has up to 2^(i-1) nodes. At each such node an f(i,y) is computed. Several nodes may compute the same f(i,y). We can save time by not recomputing already computed f(i,y)s. Save computed f(i,y)s in a dictionary. Key is the (i, y) value. f(i, y) is computed recursively only when (i,y) is not in the dictionary. Otherwise, the dictionary value is used.

Integer Weights Assume that each weight is an integer. The knapsack capacity c may also be assumed to be an integer. Only f(i,y)s with 1 <= i <= n and 0 <= y <= c are of interest. Even though level i of the recursion tree has up to 2^(i-1) nodes, at most c+1 represent different f(i,y)s.

Integer Weights Dictionary Use an array fArray[][] as the dictionary. fArray[1:n][0:c] fArray[i][y] = -1 iff f(i,y) not yet computed. This initialization is done before the recursive method is invoked. The initialization takes O(cn) time.
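
A minimal sketch of that initialization (the use of std::vector here is an assumption for illustration; the slides only specify an fArray[1:n][0:c] filled with -1 in O(cn) time):

#include <vector>

int n, c;                                   // set from the problem instance before use
std::vector<std::vector<int>> fArray;       // fArray[i][y] = f(i, y), or -1 if not yet computed

void initFArray()
{  // Mark every entry fArray[i][y], 1 <= i <= n and 0 <= y <= c, as not yet computed.
   fArray.assign(n + 1, std::vector<int>(c + 1, -1));   // O(cn) time and space
}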

No Recomputation Code
int f(int i, int y)
{
   if (fArray[i][y] >= 0) return fArray[i][y];
   if (i == n)
   {fArray[i][y] = (y < w[n]) ? 0 : p[n];
    return fArray[i][y];}
   if (y < w[i]) fArray[i][y] = f(i + 1, y);
   else fArray[i][y] = max(f(i + 1, y), f(i + 1, y - w[i]) + p[i]);
   return fArray[i][y];
}
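
Putting the pieces together, a hedged, self-contained sketch of the memoized version (the instance values and the use of std::vector are assumptions for illustration; the body of f follows the slides' code):

#include <algorithm>
#include <iostream>
#include <vector>
using std::max;
using std::vector;

// Same illustrative instance as in the earlier driver sketch (values are assumptions).
int n = 4, c = 10;
int w[] = {0, 3, 5, 6, 2};
int p[] = {0, 12, 25, 30, 9};
vector<vector<int>> fArray;                 // dictionary: fArray[i][y] = f(i, y), -1 if unknown

int f(int i, int y)
{  if (fArray[i][y] >= 0) return fArray[i][y];
   if (i == n) return fArray[i][y] = (y < w[n]) ? 0 : p[n];
   if (y < w[i]) fArray[i][y] = f(i + 1, y);
   else fArray[i][y] = max(f(i + 1, y), f(i + 1, y - w[i]) + p[i]);
   return fArray[i][y];
}

int main()
{  fArray.assign(n + 1, vector<int>(c + 1, -1));        // O(cn) initialization
   std::cout << "f(1, c) = " << f(1, c) << std::endl;   // prints 46, now in O(cn) total time
   return 0;
}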

Time Complexity t(n) = O(cn). Analysis done in text. Good when cn is small relative to 2^n. Example: n = 3, c = 1010101, w = [100102, 1000321, 6327], p = [102, 505, 5]. Here 2^n = 8 while cn = 3030303, so for this instance the simple recursive version is faster: the O(cn) array initialization alone dominates.