CSC401 – Analysis of Algorithms
Chapter 5.3: Dynamic Programming


Objectives:
–Present the Dynamic Programming paradigm
–Review the Matrix Chain-Product problem
–Discuss the General Technique
–Solve the 0-1 Knapsack Problem
–Introduce pseudo-polynomial-time algorithms

Dynamic Programming
Dynamic Programming is a general algorithm design technique, invented by the American mathematician Richard Bellman in the 1950s to solve optimization problems. "Programming" here means "planning."
Main idea:
–solve several smaller (overlapping) subproblems
–record solutions in a table so that each subproblem is solved only once
–the final state of the table will be (or contain) the solution

Example: Fibonacci numbers
Recall the definition of the Fibonacci numbers:
f(0) = 0
f(1) = 1
f(n) = f(n-1) + f(n-2)
Computing the n-th Fibonacci number recursively (top-down) expands a tree of calls: f(n) calls f(n-1) and f(n-2), which in turn call f(n-2), f(n-3), f(n-3), f(n-4), and so on, recomputing the same values many times.
Time efficiency: T(n) = T(n-1) + T(n-2) + 1, which is in O(2^n).

Example: Fibonacci numbers
Computing the n-th Fibonacci number using bottom-up iteration:
f(0) = 0
f(1) = 1
f(2) = 0+1 = 1
f(3) = 1+1 = 2
f(4) = 1+2 = 3
f(5) = 2+3 = 5
…
f(n) = f(n-1) + f(n-2)
Time efficiency: O(n)
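The two slides above translate directly into code. Below is a minimal Python sketch (Python is my choice here; the course slides otherwise use pseudocode) contrasting the exponential top-down recursion with the linear bottom-up table:

    def fib_naive(n):
        # Top-down recursion: recomputes overlapping subproblems, O(2^n) time.
        if n < 2:
            return n
        return fib_naive(n - 1) + fib_naive(n - 2)

    def fib_dp(n):
        # Bottom-up dynamic programming: each subproblem solved once, O(n) time.
        if n < 2:
            return n
        table = [0] * (n + 1)
        table[1] = 1
        for i in range(2, n + 1):
            table[i] = table[i - 1] + table[i - 2]
        return table[n]

    assert fib_naive(10) == fib_dp(10) == 55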

Examples of Dynamic Programming Algorithms
–Computing binomial coefficients
–Optimal chain matrix multiplication
–Constructing an optimal binary search tree
–Warshall's algorithm for transitive closure
–Floyd's algorithm for all-pairs shortest paths
–Some instances of difficult discrete optimization problems: travelling salesman, knapsack

Optimal chain matrix multiplication
Review: Matrix Multiplication
–C = A×B
–A is d × e and B is e × f, so C is d × f; entry C[i,j] is the dot product of row i of A and column j of B
–O(d·e·f) time
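As a concrete reference, here is a plain Python sketch of the triple-loop product (my own illustration, not code from the slides); the three nested loops make the O(d·e·f) operation count visible:

    def mat_mult(A, B):
        # A is d x e, B is e x f; C = A x B is d x f.
        d, e, f = len(A), len(B), len(B[0])
        C = [[0] * f for _ in range(d)]
        for i in range(d):
            for j in range(f):
                for k in range(e):          # d*e*f scalar multiplications in total
                    C[i][j] += A[i][k] * B[k][j]
        return C

    print(mat_mult([[1, 2]], [[3], [4]]))  # [[11]]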

Matrix Chain-Products
Matrix Chain-Product:
–Compute A = A_0 * A_1 * … * A_(n-1)
–A_i is d_i × d_(i+1)
–Problem: How to parenthesize?
Example:
–B is 3 × 100
–C is 100 × 5
–D is 5 × 5
–(B*C)*D takes 3·100·5 + 3·5·5 = 1500 + 75 = 1575 operations
–B*(C*D) takes 100·5·5 + 3·100·5 = 2500 + 1500 = 4000 operations
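To make the operation counts in the example explicit, here is a small hedged Python sketch (the helper name mult_cost is mine, not from the slides) that tallies scalar multiplications for the two parenthesizations:

    def mult_cost(a, b):
        # a and b are (rows, cols) dimension pairs; returns (cost, result dims).
        (d, e1), (e2, f) = a, b
        assert e1 == e2, "inner dimensions must match"
        return d * e1 * f, (d, f)

    B, C, D = (3, 100), (100, 5), (5, 5)

    c1, BC = mult_cost(B, C)   # 3*100*5 = 1500
    c2, _ = mult_cost(BC, D)   # 3*5*5   = 75
    print(c1 + c2)             # (B*C)*D: 1575 operations

    c3, CD = mult_cost(C, D)   # 100*5*5 = 2500
    c4, _ = mult_cost(B, CD)   # 3*100*5 = 1500
    print(c3 + c4)             # B*(C*D): 4000 operations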

A Brute-force Approach
Matrix Chain-Product algorithm:
–Try all possible ways to parenthesize A = A_0 * A_1 * … * A_(n-1)
–Calculate the number of operations for each one
–Pick the best one
Running time:
–The number of parenthesizations is equal to the number of binary trees with n nodes
–This is exponential! It is the Catalan number, which grows almost as fast as 4^n.
–This is a terrible algorithm!

Greedy Approaches
Idea #1: repeatedly select the product that uses (up) the most operations.
Counter-example:
–A is 10 × 5
–B is 5 × 10
–C is 10 × 5
–D is 5 × 10
–Greedy idea #1 gives (A*B)*(C*D), which takes 500 + 500 + 1000 = 2000 operations
–A*((B*C)*D) takes 250 + 250 + 500 = 1000 operations
Idea #2: repeatedly select the product that uses the fewest operations.
Counter-example:
–A is 101 × 11
–B is 11 × 9
–C is 9 × 100
–D is 100 × 99
–Greedy idea #2 gives A*((B*C)*D), which takes 9900 + 108900 + 109989 = 228789 operations
–(A*B)*(C*D) takes 9999 + 89100 + 89991 = 189090 operations
Neither greedy approach gives the optimal value.

A "Recursive" Approach
Define subproblems:
–Find the best parenthesization of A_i * A_(i+1) * … * A_j.
–Let N[i,j] denote the number of operations done by this subproblem.
–The optimal solution for the whole problem is N[0,n-1].
Subproblem optimality: the optimal solution can be defined in terms of optimal subproblems.
–There has to be a final multiplication (the root of the expression tree) for the optimal solution.
–Say the final multiply is at index i: (A_0 * … * A_i) * (A_(i+1) * … * A_(n-1)).
–Then the optimal solution N[0,n-1] is the sum of two optimal subproblems, N[0,i] and N[i+1,n-1], plus the time for the last multiply.
–If the global optimum did not have these optimal subproblems, we could define an even better "optimal" solution.

A Characterizing Equation
The global optimum has to be defined in terms of optimal subproblems, depending on where the final multiply is. Let us consider all possible places for that final multiply:
–Recall that A_i is a d_i × d_(i+1) dimensional matrix.
–So, a characterizing equation for N[i,j] is the following:
N[i,i] = 0
N[i,j] = min over i ≤ k < j of { N[i,k] + N[k+1,j] + d_i · d_(k+1) · d_(j+1) }   for i < j
Note that the subproblems are not independent: the subproblems overlap.

A Dynamic Programming Algorithm
Since the subproblems overlap, we don't use recursion. Instead, we construct optimal subproblems "bottom-up":
–the N[i,i]'s are easy, so start with them
–then do the length-2, length-3, … subproblems, and so on
Running time: O(n^3)

Algorithm matrixChain(S):
   Input: sequence S of n matrices to be multiplied
   Output: number of operations in an optimal parenthesization of S
   for i ← 0 to n-1 do
      N[i,i] ← 0
   for b ← 1 to n-1 do
      for i ← 0 to n-b-1 do
         j ← i+b
         N[i,j] ← +∞
         for k ← i to j-1 do
            N[i,j] ← min{ N[i,j], N[i,k] + N[k+1,j] + d_i · d_(k+1) · d_(j+1) }

A Dynamic Programming Algorithm
The bottom-up construction fills in the N array by diagonals. N[i,j] gets its values from previous entries in the i-th row and j-th column. Filling in each entry in the N table takes O(n) time, so the total running time is O(n^3). The actual parenthesization can be recovered by remembering the best "k" for each N entry. (The slide's figure shows the n × n table filled in diagonal by diagonal, with the answer in entry N[0,n-1].)
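For readers who want to execute the algorithm, here is a minimal runnable Python version of the matrixChain pseudocode (Python, the function name matrix_chain, and the dimension-list input convention are my choices, not from the slides); it also remembers the best split index k so the optimal parenthesization can be recovered:

    def matrix_chain(d):
        # d is the dimension list: matrix A_i is d[i] x d[i+1], so n = len(d) - 1.
        n = len(d) - 1
        N = [[0] * n for _ in range(n)]        # N[i][j]: min operations for A_i..A_j
        best_k = [[0] * n for _ in range(n)]   # split index achieving the minimum
        for b in range(1, n):                  # subproblem "length" b = j - i
            for i in range(n - b):
                j = i + b
                N[i][j] = float("inf")
                for k in range(i, j):
                    cost = N[i][k] + N[k + 1][j] + d[i] * d[k + 1] * d[j + 1]
                    if cost < N[i][j]:
                        N[i][j], best_k[i][j] = cost, k
        return N[0][n - 1], best_k

    # The B, C, D example from the earlier slide: 3x100, 100x5, 5x5.
    ops, _ = matrix_chain([3, 100, 5, 5])
    print(ops)  # 1575, achieved by (B*C)*D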

The General Dynamic Programming Technique
Applies to a problem that at first seems to require a lot of time (possibly exponential), provided we have:
–Simple subproblems: the subproblems can be defined in terms of a few variables, such as j, k, l, m, and so on.
–Subproblem optimality: the global optimum value can be defined in terms of optimal subproblems.
–Subproblem overlap: the subproblems are not independent, but instead they overlap (hence, they should be constructed bottom-up).

The 0/1 Knapsack Problem
Given: a set S of n items, with each item i having
–b_i, a positive benefit
–w_i, a positive weight
Goal: choose items with maximum total benefit but with total weight at most W. If we are not allowed to take fractional amounts, then this is the 0/1 knapsack problem.
–In this case, we let T denote the set of items we take
–Objective: maximize Σ_{i∈T} b_i
–Constraint: Σ_{i∈T} w_i ≤ W

Example
A "knapsack" of capacity 9 in and five items:
Item:     1      2      3      4      5
Weight:   4 in   2 in   2 in   6 in   2 in
Benefit:  $20    $3     $6     $25    $80
Goal: choose items with maximum total benefit but with weight at most 9 in.
Solution: items 5 (2 in), 3 (2 in), and 1 (4 in), for total weight 8 in and total benefit $106.

A 0/1 Knapsack Algorithm, First Attempt
S_k: the set of items numbered 1 to k. Define B[k] = the best selection from S_k.
Problem: this definition does not have subproblem optimality.
–Consider S = {(3,2), (5,4), (8,5), (4,3), (10,9)} of weight-benefit pairs: the best selection from S_4 is not necessarily contained in the best selection from S_5, so B[5] cannot be obtained by extending B[4]. (The slide's figures of the best selections for S_4 and S_5 are omitted.)

A 0/1 Knapsack Algorithm, Second Attempt
S_k: the set of items numbered 1 to k. Define B[k,w] = the best selection from S_k with weight exactly equal to w.
Good news: this definition does have subproblem optimality. That is, the best subset of S_k with weight exactly w is either the best subset of S_(k-1) with weight w, or the best subset of S_(k-1) with weight w-w_k plus item k.

The 0/1 Knapsack Algorithm
Recall the definition of B[k,w]:
B[k,w] = B[k-1,w]                              if w_k > w
B[k,w] = max{ B[k-1,w], B[k-1,w-w_k] + b_k }   otherwise
Since B[k,w] is defined in terms of B[k-1,*], we can reuse the same array.
Running time: O(nW). This is not a polynomial-time algorithm if W is large; it is a pseudo-polynomial time algorithm.

Algorithm 01Knapsack(S, W):
   Input: set S of items with benefit b_i and weight w_i; maximum weight W
   Output: benefit of the best subset with weight at most W
   for w ← 0 to W do
      B[w] ← 0
   for k ← 1 to n do
      for w ← W downto w_k do
         if B[w-w_k] + b_k > B[w] then
            B[w] ← B[w-w_k] + b_k
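The one-dimensional loop above can be checked with a short runnable Python sketch (Python and the function name knapsack_01 are my choices, not from the slides; the item data comes from the earlier example slide):

    def knapsack_01(items, W):
        # items is a list of (benefit, weight) pairs; W is the knapsack capacity.
        B = [0] * (W + 1)  # B[w]: best benefit achievable with weight at most w
        for b_k, w_k in items:
            # Iterate w downward so each item is used at most once (0/1, not unbounded).
            for w in range(W, w_k - 1, -1):
                B[w] = max(B[w], B[w - w_k] + b_k)
        return B[W]

    # Items 1..5 from the example slide: (benefit in $, weight in inches).
    items = [(20, 4), (3, 2), (6, 2), (25, 6), (80, 2)]
    print(knapsack_01(items, 9))  # 106, from items 5, 3, and 1

Note that the inner loop runs from W downto w_k, exactly as in the pseudocode; iterating upward would instead solve the unbounded knapsack problem, where each item may be taken repeatedly.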

Pseudo-Polynomial Time Algorithms
An algorithm is a pseudo-polynomial time algorithm if its running time depends on the magnitude of a number given in the input, not on its encoding size.
–Example: the 0/1 Knapsack algorithm depends on W; the capacity takes only O(log W) bits to encode, yet the table has W+1 entries.
–It is not a true polynomial-time algorithm, but it is usually much better than the brute-force algorithm.