Optimization Problems Problems in which a set of choices must be made in order to arrive at an optimal (minimum or maximum) solution, subject to some constraints. (There may be several solutions that achieve the optimal value.) Two common techniques: Dynamic Programming (global) and Greedy Algorithms (local).

Dynamic Programming Like divide-and-conquer, it breaks a problem down into smaller sub-problems that are solved recursively. In contrast to divide-and-conquer, DP is applicable when the sub-problems are not independent, i.e. when sub-problems share sub-sub-problems. It solves every sub-sub-problem just once and saves the result in a table, avoiding duplicate computation.
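To make the table idea concrete, here is a minimal Python sketch of memoization in general (an illustration, not taken from these slides): the decorated function stores each sub-problem's answer the first time it is computed and reuses it afterwards.

```python
from functools import lru_cache

@lru_cache(maxsize=None)   # the table: each sub-problem is solved once
def fib(n):
    # naive recursion would recompute fib(k) exponentially many times;
    # the cache looks up previously solved sub-problems instead
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(50))  # 12586269025, computed from only 51 distinct sub-problems
```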

Elements of DP Algorithms Sub-structure: decompose the problem into smaller sub-problems, and express the solution of the original problem in terms of solutions to the smaller ones. Table structure: store the answers to the sub-problems in a table, because sub-problem solutions may be used many times. Bottom-up computation: combine solutions of smaller sub-problems to solve larger ones, eventually arriving at a solution to the complete problem.

Applicability to Optimization Problems Optimal sub-structure (principle of optimality): for the global problem to be solved optimally, each sub-problem must be solved optimally. This is often violated when sub-problems interact: by being "less optimal" on one sub-problem, we may save a great deal on another. Small number of sub-problems: many NP-hard problems can be formulated as DP problems, but such formulations are not efficient because the number of sub-problems is exponentially large. Ideally, the number of sub-problems should be at most polynomial.

Optimized Chain Operations Determine the optimal sequence for performing a series of operations. (This general class of problem is important in compiler design for code optimization and in databases for query optimization.) For example: given a series of matrices A_1 … A_n, we can "parenthesize" the expression however we like, since matrix multiplication is associative (but not commutative). Multiplying a p x q matrix A by a q x r matrix B yields a p x r matrix C. (The number of columns of A must equal the number of rows of B.)

Matrix Multiplication In particular, for 1 ≤ i ≤ p and 1 ≤ j ≤ r,
C[i, j] = Σ_{k=1..q} A[i, k] · B[k, j]
Observe that there are p·r total entries in C and each takes O(q) time to compute; thus the total time to multiply the two matrices is proportional to p·q·r.
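A short sketch of this cost model (my own illustration, assuming matrices stored as Python lists of lists):

```python
def multiply(A, B):
    """Multiply a p x q matrix A by a q x r matrix B (lists of lists)."""
    p, q, r = len(A), len(B), len(B[0])
    C = [[0] * r for _ in range(p)]
    for i in range(p):
        for j in range(r):          # p*r entries in C ...
            for k in range(q):      # ... each costing q multiplications
                C[i][j] += A[i][k] * B[k][j]
    return C                        # total: p*q*r scalar multiplications

print(multiply([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```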

Chain Matrix Multiplication Given a sequence of matrices A_1 A_2 … A_n and dimensions p_0, p_1, …, p_n, where A_i is of dimension p_{i-1} x p_i, determine the multiplication sequence that minimizes the number of scalar multiplications. This algorithm does not perform the multiplications; it just determines the best order in which to perform them.

Example: CMM Consider 3 matrices: A_1 is 5 x 4, A_2 is 4 x 6, and A_3 is 6 x 2.
Mult[(A_1 A_2) A_3] = (5 x 4 x 6) + (5 x 6 x 2) = 180
Mult[A_1 (A_2 A_3)] = (4 x 6 x 2) + (5 x 4 x 2) = 88
Even for this small example, considerable savings can be achieved by reordering the evaluation sequence.
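These numbers are easy to reproduce with a small helper (a hypothetical function of mine, not part of the slides):

```python
def mult_cost(p, q, r):
    """Scalar multiplications for a p x q times q x r product."""
    return p * q * r

# (A1 A2) A3: a 5x4-by-4x6 product, then 5x6 by 6x2
print(mult_cost(5, 4, 6) + mult_cost(5, 6, 2))  # 180
# A1 (A2 A3): a 4x6-by-6x2 product, then 5x4 by 4x2
print(mult_cost(4, 6, 2) + mult_cost(5, 4, 2))  # 88
```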

Naive Algorithm If we have just 1 item, there is only one way to parenthesize. If we have n items, there are n-1 places where the outermost pair of parentheses can break the list: just after the first item, just after the second item, …, and just after the (n-1)-th item. Splitting just after the k-th item creates two sub-lists to be parenthesized, one with k items and the other with n-k items. We then consider all ways of parenthesizing these: if there are L ways to parenthesize the left sub-list and R ways to parenthesize the right sub-list, the total number of possibilities is L · R.

Cost of Naive Algorithm The number of different ways of parenthesizing n items is
P(1) = 1
P(n) = Σ_{k=1..n-1} P(k) · P(n-k), for n ≥ 2
This is related to the Catalan numbers (which in turn count the number of different binary trees on n nodes). Specifically, P(n) = C(n-1), where
C(n) = (1/(n+1)) · C(2n, n) = Ω(4^n / n^{3/2})
and C(2n, n) denotes the number of ways to choose n items out of 2n items total.
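The recurrence for P(n) is easy to evaluate directly; a short memoized sketch (mine, not from the slides) makes the Catalan growth visible:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def P(n):
    """Number of ways to fully parenthesize a chain of n matrices."""
    if n == 1:
        return 1
    return sum(P(k) * P(n - k) for k in range(1, n))

print([P(n) for n in range(1, 11)])
# [1, 1, 2, 5, 14, 42, 132, 429, 1430, 4862] -- the Catalan numbers C(n-1)
```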

DP Solution (I) Let A_{i…j} be the product of matrices i through j. A_{i…j} is a p_{i-1} x p_j matrix. At the highest level, we are multiplying two matrices together. That is, for any k, 1 ≤ k ≤ n-1,
A_{1…n} = (A_{1…k})(A_{k+1…n})
The problem of determining the optimal sequence of multiplications breaks into two parts:
Q: How do we decide where to split the chain (which k)? A: Consider all possible values of k.
Q: How do we parenthesize the sub-chains A_{1…k} and A_{k+1…n}? A: Solve them by recursively applying the same scheme.
NOTE: this problem satisfies the "principle of optimality." Next, we store the solutions to the sub-problems in a table and build the table in a bottom-up manner.

DP Solution (II) For 1 ≤ i ≤ j ≤ n, let m[i, j] denote the minimum number of multiplications needed to compute A_{i…j}. Example: the minimum number of multiplications for A_{3…7}. In terms of the p_i, the product A_{3…7} has dimensions p_2 x p_7.

DP Solution (III) The optimal cost can be described as follows: if i = j, the sequence contains only 1 matrix, so m[i, j] = 0. If i < j, the chain can be split at each k, i ≤ k < j, as A_{i…k} (p_{i-1} x p_k) times A_{k+1…j} (p_k x p_j). This suggests the following recursive rule for computing m[i, j]:
m[i, i] = 0
m[i, j] = min_{i ≤ k < j} (m[i, k] + m[k+1, j] + p_{i-1} p_k p_j) for i < j
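The recursive rule translates directly into a top-down memoized function. The sketch below is my rendering of the recurrence (the slides instead build the table bottom-up, as shown on the following slides):

```python
from functools import lru_cache

def matrix_chain_topdown(p):
    n = len(p) - 1          # number of matrices; A_i is p[i-1] x p[i]

    @lru_cache(maxsize=None)            # memo table: each m(i, j) solved once
    def m(i, j):
        if i == j:
            return 0                    # a single matrix: no multiplications
        return min(m(i, k) + m(k + 1, j) + p[i - 1] * p[k] * p[j]
                   for k in range(i, j))

    return m(1, n)

print(matrix_chain_topdown([5, 4, 6, 2]))  # 88, matching the example above
```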

Computing m[i, j] For a specific k,
(A_i … A_k)(A_{k+1} … A_j)
= A_{i…k} (A_{k+1} … A_j)   (m[i, k] mults)
= A_{i…k} A_{k+1…j}   (m[k+1, j] mults)
= A_{i…j}   (p_{i-1} p_k p_j mults)
For the solution, evaluate for all k and take the minimum:
m[i, j] = min_{i ≤ k < j} (m[i, k] + m[k+1, j] + p_{i-1} p_k p_j)

Matrix-Chain-Order(p)
    n ← length[p] − 1                   // p = ⟨p_0, …, p_n⟩ holds n+1 dimensions
    for i ← 1 to n                      // initialization: O(n) time
        m[i, i] ← 0
    for L ← 2 to n                      // L = length of sub-chain
        for i ← 1 to n − L + 1
            j ← i + L − 1
            m[i, j] ← ∞
            for k ← i to j − 1          // try each possible split point
                q ← m[i, k] + m[k+1, j] + p_{i-1} p_k p_j
                if q < m[i, j]
                    then m[i, j] ← q
                         s[i, j] ← k
    return m and s
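For reference, a runnable Python translation of this pseudocode (the 1-based table layout follows the slides; the Python rendering itself is a sketch):

```python
import math

def matrix_chain_order(p):
    """Bottom-up table fill; p is the dimension list p_0..p_n,
    so there are n = len(p) - 1 matrices. Returns (m, s), 1-indexed."""
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]
    s = [[0] * (n + 1) for _ in range(n + 1)]
    for L in range(2, n + 1):            # L = length of sub-chain
        for i in range(1, n - L + 2):
            j = i + L - 1
            m[i][j] = math.inf
            for k in range(i, j):        # try each possible split point
                q = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if q < m[i][j]:
                    m[i][j] = q
                    s[i][j] = k          # remember the best split
    return m, s
```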

Analysis The array s[i, j] is used to extract the actual multiplication sequence (see next). There are 3 nested loops, each of which iterates at most n times, so the total running time is Θ(n^3).

Extracting Optimum Sequence Leave a split marker indicating where the best split is, i.e. the value of k leading to the minimum value of m[i, j]. We maintain a parallel array s[i, j] in which we store the value of k providing the optimal split. If s[i, j] = k, the best way to multiply the sub-chain A_{i…j} is to first multiply the sub-chain A_{i…k}, then the sub-chain A_{k+1…j}, and finally multiply them together. Intuitively, s[i, j] tells us which multiplication to perform last. We only need s[i, j] when j > i, i.e. when there are at least 2 matrices.

Mult(A, i, j)
    if j > i
        then k ← s[i, j]
             X ← Mult(A, i, k)          // X = A[i] … A[k]
             Y ← Mult(A, k+1, j)        // Y = A[k+1] … A[j]
             return X * Y               // multiply X and Y
        else return A[i]                // return the i-th matrix
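Alongside the pseudocode, here is a small Python companion (my sketch, not from the slides) that walks the same split table s built by matrix_chain_order above, but returns the parenthesization as a string instead of performing the products:

```python
def optimal_parens(s, i, j):
    """Recover the best order from the split table s."""
    if i == j:
        return f"A{i}"                  # a single matrix: no parentheses
    k = s[i][j]                         # the multiplication performed last
    return f"({optimal_parens(s, i, k)}{optimal_parens(s, k + 1, j)})"
```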

Example: DP for CMM The initial dimensions are p = (5, 4, 6, 2, 7): we are multiplying A_1 (5 x 2), wait: A_1 (5 x 4) times A_2 (4 x 6) times A_3 (6 x 2) times A_4 (2 x 7). The optimal sequence is (A_1 (A_2 A_3)) A_4, which uses 158 scalar multiplications.
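Running the matrix_chain_order and optimal_parens sketches above on these dimensions reproduces this answer:

```python
p = [5, 4, 6, 2, 7]               # A1: 5x4, A2: 4x6, A3: 6x2, A4: 2x7
m, s = matrix_chain_order(p)
print(m[1][4])                    # 158 scalar multiplications
print(optimal_parens(s, 1, 4))    # ((A1(A2A3))A4)
```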

Finding a Recursive Solution Figure out the "top-level" choice you have to make (e.g., where to split the list of matrices). List the options for that decision; each option should require smaller sub-problems to be solved. The recursive function is the minimum (or maximum) over all the options:
m[i, j] = min_{i ≤ k < j} (m[i, k] + m[k+1, j] + p_{i-1} p_k p_j)