1 Chapter 15-2: Dynamic Programming II

2 Matrix Multiplication Let A be a matrix of dimension p x q and B be a matrix of dimension q x r. Then, if we multiply A and B, we obtain a resulting matrix C = AB whose dimension is p x r. We can obtain each entry of C using q multiplications  pqr operations in total
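As an illustration (this sketch is not from the original slides; the names are mine), the standard triple loop makes exactly p·q·r scalar multiplications:

    def mat_mul(A, B):
        # A is p x q, B is q x r; returns C = AB of size p x r.
        p, q, r = len(A), len(B), len(B[0])
        C = [[0] * r for _ in range(p)]
        for i in range(p):          # p choices of row
            for j in range(r):      # r choices of column
                for k in range(q):  # q multiplications per entry of C
                    C[i][j] += A[i][k] * B[k][j]
        return C  # p * q * r scalar multiplications in total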

3 Matrix Multiplication Example: how to obtain c[1,2]? It is the dot product of row 1 of A and column 2 of B: c[1,2] = a[1,1]·b[1,2] + a[1,2]·b[2,2] + … + a[1,q]·b[q,2]. (The illustrating figure is omitted here.)

4 Matrix Multiplication In fact, ((A1A2)A3) = (A1(A2A3)), so matrix multiplication is associative  any way to write down the parentheses gives the same result. E.g., (((A1A2)A3)A4) = ((A1A2)(A3A4)) = (A1((A2A3)A4)) = ((A1(A2A3))A4) = (A1(A2(A3A4)))

5 Matrix Multiplication Question: why do we bother with this? Because different computation sequences may use different numbers of operations! E.g., let the dimensions of A1, A2, A3 be 1 x 100, 100 x 1, 1 x 100, respectively. #operations to get ((A1A2)A3) = 100 + 100 = 200. #operations to get (A1(A2A3)) = 10,000 + 10,000 = 20,000

6 Optimal Substructure (allows recursion) Lemma: Suppose that to multiply B1, B2, …, Bn, the way with the minimum #operations is to: (i) first, obtain B1 B2 … Bx; (ii) then, obtain Bx+1 … Bn; (iii) finally, multiply the matrices of part (i) and part (ii). Then the matrices in part (i) and part (ii) must themselves be obtained with the minimum #operations

7 Optimal Substructure Let f(i, j) denote the min #operations to obtain the product Ai Ai+1 … Aj  f(i, i) = 0. Let r_k and c_k denote the #rows and #cols of Ak. Then, we have: Lemma: For any j > i, f(i, j) = min over x with i ≤ x < j of { f(i, x) + f(x+1, j) + r_i · c_x · c_j }

8 Recursive-Matrix-Chain Define a function Compute_F(i, j) as follows:

Compute_F(i, j)  /* Finding f(i, j) */
1. if (i == j) return 0;
2. m = ∞;
3. for (x = i, i+1, …, j-1) {
       g = Compute_F(i, x) + Compute_F(x+1, j) + r_i · c_x · c_j;
       if (g < m) m = g;
   }
4. return m;
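A direct Python transcription of this recursion (a sketch of mine, assuming a list dims in which matrix Ak has dimensions dims[k-1] x dims[k], so r_k = dims[k-1] and c_k = dims[k]):

    def compute_f(dims, i, j):
        # Min #scalar multiplications to compute Ai ... Aj,
        # where Ak has dimensions dims[k-1] x dims[k].
        if i == j:
            return 0
        best = float('inf')
        for x in range(i, j):
            g = (compute_f(dims, i, x) + compute_f(dims, x + 1, j)
                 + dims[i - 1] * dims[x] * dims[j])
            best = min(best, g)
        return best

    # Example from slide 5 (dimensions 1x100, 100x1, 1x100):
    # compute_f([1, 100, 1, 100], 1, 3) == 200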

9 The recursion tree for the computation of RECURSIVE-MATRIX-CHAIN(p, 1, 4) (figure omitted)

10 Time Complexity Question: how much time does it take to get Compute_F(1, n)? By the substitution method, we can show that: Running time = Ω(3^n)

11 Counting the number of parenthesizations Remark: on the other hand, the #operations for each possible way of writing the parentheses is computed at most once  Running time = O( C(2n−2, n−1) / n ), the (n−1)-st Catalan number

12 Overlapping Subproblems Here, we can see that Compute_F(i, j) and Compute_F(i, j+1) have many COMMON subproblems: Compute_F(i, i+1), …, Compute_F(i, j−1). So, our recursive algorithm performs many redundant computations! Question: can we avoid this?

13 Bottom-Up Approach We notice that f(i, j) depends only on f(x, y) with y − x < j − i. Let us create a 2D table F to store all f(i, j) values once they are computed. Then, compute f(i, j) for j − i = 1, 2, …, n−1

14 Bottom-Up Approach

BottomUp_F( )  /* Finding min #operations */
1. for j = 1, 2, …, n, set F[j, j] = 0;
2. for (length = 1, 2, …, n−1) {
       Compute F[i, i+length] for all i;
       // Based on F[x, y] with |x − y| < length
   }
3. return F[1, n];

Running Time = Θ(n³)
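Filling in step 2 concretely, here is a runnable Python sketch of the bottom-up table (my own code, same dims convention as the sketch above):

    def bottom_up_f(dims):
        n = len(dims) - 1  # number of matrices
        # F[i][j] = min #multiplications for Ai ... Aj (1-indexed)
        F = [[0] * (n + 1) for _ in range(n + 1)]
        for length in range(1, n):          # j - i = 1, 2, ..., n-1
            for i in range(1, n - length + 1):
                j = i + length
                F[i][j] = min(
                    F[i][x] + F[x + 1][j] + dims[i - 1] * dims[x] * dims[j]
                    for x in range(i, j)
                )
        return F[1][n]

    # bottom_up_f([1, 100, 1, 100]) == 200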

15 Example: (a worked example of the bottom-up computation was shown here as a figure)

16 The m and s tables computed by MATRIX-CHAIN-ORDER for n = 6 (tables shown as a figure in the original, for the CLRS example p = (30, 35, 15, 5, 10, 20, 25)). Optimal Solution: ((A1(A2A3))((A4A5)A6)), with m[1,6] = 15,125

17 Example m[2,5] = min {
  m[2,2] + m[3,5] + p1·p2·p5 = 0 + 2,500 + 35·15·20 = 13,000,
  m[2,3] + m[4,5] + p1·p3·p5 = 2,625 + 1,000 + 35·5·20 = 7,125,
  m[2,4] + m[5,5] + p1·p4·p5 = 4,375 + 0 + 35·10·20 = 11,375
} = 7,125

18 Remarks Again, a slight change in the algorithm allows us to get the exact sequence of steps (or the parentheses) that achieves the minimum number of operations. Also, we can make minor changes to the recursive algorithm and obtain a memoized version (whose running time is O(n³))

19 MATRIX_CHAIN_ORDER

MATRIX_CHAIN_ORDER(p)
  n ← length[p] − 1
  for i ← 1 to n
      do m[i, i] ← 0
  for l ← 2 to n
      do for i ← 1 to n − l + 1
             do j ← i + l − 1
                m[i, j] ← ∞
                for k ← i to j − 1
                    do q ← m[i, k] + m[k+1, j] + p[i−1]·p[k]·p[j]
                       if q < m[i, j]
                           then m[i, j] ← q
                                s[i, j] ← k
  return m and s
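For reference, a Python sketch of the same procedure that also records the split table s and reconstructs an optimal parenthesization (function and variable names are mine, not from the slides):

    def matrix_chain_order(p):
        # p has length n+1; matrix Ai has dimensions p[i-1] x p[i].
        n = len(p) - 1
        m = [[0] * (n + 1) for _ in range(n + 1)]
        s = [[0] * (n + 1) for _ in range(n + 1)]
        for l in range(2, n + 1):            # chain length
            for i in range(1, n - l + 2):
                j = i + l - 1
                m[i][j] = float('inf')
                for k in range(i, j):
                    q = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                    if q < m[i][j]:
                        m[i][j] = q
                        s[i][j] = k          # best split point
        return m, s

    def print_optimal_parens(s, i, j):
        # Rebuild the optimal parenthesization from the split table s.
        if i == j:
            return "A%d" % i
        k = s[i][j]
        return "(" + print_optimal_parens(s, i, k) + print_optimal_parens(s, k + 1, j) + ")"

    # For the CLRS example p = [30, 35, 15, 5, 10, 20, 25]:
    # m[1][6] == 15125 and the optimal order is ((A1(A2A3))((A4A5)A6)).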

20 When should we apply DP? Optimal substructure: an optimal solution to the problem contains optimal solutions to subproblems. –Example: the matrix-chain multiplication problem. Overlapping subproblems: a recursive algorithm revisits the same subproblem over and over again.

21 Common pattern in discovering optimal substructure 1. Show that a solution to the problem consists of making a choice, which leaves one or more subproblems to solve. 2. Suppose that, for a given problem, you are given the choice that leads to an optimal solution. You do not concern yourself with how to determine this choice; you just assume that it has been given to you. 3. Given this choice, determine which subproblems ensue and how to best characterize the resulting space of subproblems.

22 Common pattern in discovering optimal substructure 4. Show that the solutions to the subproblems used within the optimal solution must themselves be optimal. This usually uses a “cut-and-paste” argument.

23 Optimal substructure Optimal substructure varies across problem domains in two ways: 1. how many subproblems are used in an optimal solution to the original problem, and 2. how many choices we have in determining which subproblem(s) to use in an optimal solution. Informally, running time depends on (# of subproblems overall) x (# of choices).

24 Refinement One should be careful not to assume that optimal substructure applies when it does not. Consider the following two problems, in which we are given a directed graph G = (V, E) and vertices u, v ∈ V. –Unweighted shortest path: find a path from u to v consisting of the fewest edges. Good for dynamic programming. –Unweighted longest simple path: find a simple path from u to v consisting of the most edges. Not good for dynamic programming.

25 Shortest path Shortest path has optimal substructure. Suppose p is a shortest path u -> v. Let w be any vertex on p, and let p1 be the portion of p going u -> w. Then p1 is a shortest path u -> w: otherwise, splicing a shorter u -> w path into p would give a shorter path u -> v (the cut-and-paste argument).

26 Longest simple path Does longest simple path have optimal substructure? Consider q -> r -> t, a longest simple path q -> t (in the example graph from the slides). Are its subpaths longest simple paths? No! The longest simple path q -> r is q -> s -> t -> r, and the longest simple path r -> t is r -> q -> s -> t. Not only is there no optimal substructure, but we can’t even assemble a legal (simple) solution from solutions to subproblems.

27 Overlapping subproblems These occur when a recursive algorithm revisits the same subproblem over and over. Solutions: 1. Bottom-up. 2. Memoization (memoize the natural, but inefficient, top-down recursion).

28 Top-down approach

RECURSIVE_MATRIX_CHAIN(p, i, j)   /* abbreviated RMC below */
  if i = j
      then return 0
  m[i, j] ← ∞
  for k ← i to j − 1
      do q ← RMC(p, i, k) + RMC(p, k+1, j) + p[i−1]·p[k]·p[j]
         if q < m[i, j]
             then m[i, j] ← q
  return m[i, j]

29 The recursion tree for the computation of RECURSIVE-MATRIX-CHAIN(p, 1, 4) (figure omitted)

30 Memoization Alternative approach to dynamic programming: “Store, don’t recompute.” Make a table indexed by subproblem. When solving a subproblem, look it up in the table: if the answer is there, use it; else, compute the answer, then store it. In bottom-up dynamic programming, we go one step further: we determine in what order we’d want to access the table, and fill it in that way.

31 MEMOIZED_MATRIX_CHAIN

MEMOIZED_MATRIX_CHAIN(p)
  n ← length[p] − 1
  for i ← 1 to n
      do for j ← 1 to n
             do m[i, j] ← ∞
  return LOOKUP_CHAIN(m, p, 1, n)

32 LOOKUP_CHAIN

LOOKUP_CHAIN(m, p, i, j)
  if m[i, j] < ∞
      then return m[i, j]
  if i = j
      then m[i, j] ← 0
      else for k ← i to j − 1
               do q ← LOOKUP_CHAIN(m, p, i, k) + LOOKUP_CHAIN(m, p, k+1, j) + p[i−1]·p[k]·p[j]
                  if q < m[i, j]
                      then m[i, j] ← q
  return m[i, j]

Time Complexity: O(n³)
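The same memoized scheme in Python, using functools.lru_cache as the lookup table (a sketch of mine, not part of the original slides):

    from functools import lru_cache

    def memoized_matrix_chain(p):
        # p has length n+1; matrix Ai has dimensions p[i-1] x p[i].
        n = len(p) - 1

        @lru_cache(maxsize=None)          # the memo table: (i, j) -> m[i, j]
        def lookup(i, j):
            if i == j:
                return 0
            return min(
                lookup(i, k) + lookup(k + 1, j) + p[i - 1] * p[k] * p[j]
                for k in range(i, j)
            )

        return lookup(1, n)

    # memoized_matrix_chain([30, 35, 15, 5, 10, 20, 25]) == 15125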

33 Homework Problem: 15.2. Practice at home: (the exercise numbers did not survive in the transcript)