Algorithms CSCI 235, Spring 2019 Lecture 27 Dynamic Programming II

The Problem
Problem: Find the best way to multiply n matrices, where "best" means the order that uses the minimum number of scalar multiplications.
Practical uses: computer graphics (long chains of matrix transformations); digital signal processing (applying chains of filters to signals), etc.
Formally: Given a chain <A1, A2, A3, ..., An> of n matrices, fully parenthesize the product A1A2A3...An in a way that minimizes the number of scalar multiplications.
Exhaustive search takes prohibitively long: the number of distinct full parenthesizations of n matrices is the (n-1)st Catalan number, which grows as Ω(4^n / n^(3/2)).
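To make the "prohibitively long" claim concrete, here is a short sketch (in Python, used only for illustration; the function name is ours) that counts the number of full parenthesizations P(n) directly from the recurrence P(1) = 1, P(n) = sum over split points k of P(k)·P(n-k):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def num_parenthesizations(n):
    # P(1) = 1; otherwise sum over the top-level split point k.
    if n == 1:
        return 1
    return sum(num_parenthesizations(k) * num_parenthesizations(n - k)
               for k in range(1, n))

# The sequence 1, 1, 2, 5, 14, 42, ... is the Catalan numbers,
# which grow exponentially in n.
```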

Number of Scalar Multiplications
Multiplying two matrices A1 and A2, where A1 is p x q and A2 is q x r, takes p·q·r scalar multiplications.
Example: multiplying a 2x3 matrix by a 3x1 matrix takes 2·3·1 = 6 scalar multiplications.
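A minimal sketch (Python, with an illustrative helper name) that multiplies the 2x3 by 3x1 example with the schoolbook triple loop and counts the scalar multiplications as it goes:

```python
def multiply_count(A, B):
    # Multiply a p x q matrix A by a q x r matrix B,
    # counting the scalar multiplications performed.
    p, q, r = len(A), len(B), len(B[0])
    assert len(A[0]) == q, "inner dimensions must match"
    C = [[0] * r for _ in range(p)]
    count = 0
    for i in range(p):
        for j in range(r):
            for k in range(q):
                C[i][j] += A[i][k] * B[k][j]
                count += 1
    return C, count

A = [[1, 2, 3], [4, 5, 6]]   # 2x3
B = [[1], [0], [2]]          # 3x1
C, count = multiply_count(A, B)
# count == 2*3*1 == 6
```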

1. Characterize the structure of an optimal solution
Notation: Ai..j = AiAi+1Ai+2...Aj
An optimal solution splits the product at some k: A1..n = (A1A2...Ak)(Ak+1Ak+2...An), 1 <= k < n.
Cost = cost of computing A1..k + cost of computing Ak+1..n + cost of multiplying the two results together.
For the solution to be optimal, we must first find the optimal solutions to the subproblems A1..k and Ak+1..n. (Why must they be optimal solutions?)

Example
The three ways to split A1A2A3A4 at the top level:
k=1: A1(A2A3A4)
k=2: (A1A2)(A3A4)
k=3: (A1A2A3)A4
Note: As we compute the optimal solutions for subproblems, we do the same computation repeatedly (e.g. A2A3). To avoid repeated computations, we will store each value as it is computed.

2. Recursively define the value of the optimal solution
For each subproblem Ai..j, 1 <= i <= j <= n, compute m[i, j] = the minimum number of scalar multiplications needed to compute Ai..j.
More notation: matrix Ai has dimensions p[i-1] x p[i] (rows x columns), so p[i] is the number of columns of the ith matrix.
Example: A1 is p[0] x p[1] = 3x3, A2 is p[1] x p[2] = 3x1, A3 is p[2] x p[3] = 1x4.

Number of multiplications
If A1 is p[0] x p[1] and A2 is p[1] x p[2], then A1A2 is p[0] x p[2].
If A3 is p[2] x p[3], then A1A2A3 is p[0] x p[3]. In general, (A1A2...Ak) is p[0] x p[k].
M1 = (AiAi+1...Ak) is p[i-1] x p[k]
M2 = (Ak+1...Aj) is p[k] x p[j]
Therefore, the number of scalar multiplications to compute M1M2 is p[i-1]·p[k]·p[j].

Defining m[i, j] recursively
If i = j, no multiplication is needed, so m[i, i] = 0.
If i < j and we split at k, then m[i, j] = m[i, k] + m[k+1, j] + p[i-1]·p[k]·p[j].
Example: for m[2, 6] with k = 4, the split is (A2A3A4)(A5A6). We will work this out in class.

Determining the Best Split
We must check each value of k to determine which gives the minimum cost. The minimum cost of Ai..j is:
m[i, j] = 0                                                          if i = j
m[i, j] = min over i <= k < j of { m[i, k] + m[k+1, j] + p[i-1]·p[k]·p[j] }   if i < j
Keep track of the minimum cost in table m[i, j], and of the k value that achieves it in table s[i, j] = k.
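The recurrence can be sketched top-down with memoization (Python; matrix_chain_cost is an illustrative name, not from the slides), returning just the minimum cost m[1, n]:

```python
from functools import lru_cache

def matrix_chain_cost(p):
    # p has length n+1; matrix A_i is p[i-1] x p[i], for i = 1..n.
    n = len(p) - 1

    @lru_cache(maxsize=None)
    def m(i, j):
        # Minimum number of scalar multiplications to compute A_i..j.
        if i == j:
            return 0
        return min(m(i, k) + m(k + 1, j) + p[i - 1] * p[k] * p[j]
                   for k in range(i, j))

    return m(1, n)
```

Memoization turns the exponential recursion into one table entry per (i, j) pair, matching the bottom-up table that the next step fills in explicitly.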

3. Compute the value of the optimal solution
Step 3: Compute the value of the optimal solution from the bottom up.
We start with the smallest subproblems, i = j, where m[i, i] = 0.
Next we compute the cost of chains of length 2: m[i, i+1].
Third, we compute the cost of chains of length 3 (using the values for length 2): m[i, i+2], and so on.
Total number of subproblems: one for each pair i, j with 1 <= i <= j <= n, i.e. C(n, 2) + n = Θ(n²) subproblems.

Matrix Chain Order Code

Matrix-Chain-Order(p)                 // p[0..n] holds the dimensions: Ai is p[i-1] x p[i]
  n = p.length - 1                    // number of matrices in the chain
  let m[1..n, 1..n] and s[1..n-1, 2..n] be new tables
  for i = 1 to n
    m[i, i] = 0                       // a single matrix takes 0 multiplications
  for b = 2 to n                      // b is the length of the chain
    for i = 1 to n - b + 1            // all possible starting indices for length b
      j = i + b - 1                   // ending index of a chain of length b
      m[i, j] = ∞                     // start large so any real cost is smaller
      for k = i to j - 1              // try all possible splits of this chain
        q = m[i, k] + m[k+1, j] + p[i-1]·p[k]·p[j]   // smaller chains are already computed
        if q < m[i, j]                // if this is the minimum so far, store it
          m[i, j] = q
          s[i, j] = k
  return m and s
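The pseudocode above translates almost line for line into Python; this is a sketch, assuming 1-indexed tables padded with an unused row/column 0:

```python
import math

def matrix_chain_order(p):
    # Bottom-up DP. p has length n+1; matrix A_i is p[i-1] x p[i].
    n = len(p) - 1
    # Row/column 0 of m and s are unused, to mirror the 1-indexed pseudocode.
    m = [[0] * (n + 1) for _ in range(n + 1)]
    s = [[0] * (n + 1) for _ in range(n + 1)]
    for b in range(2, n + 1):             # b is the length of the chain
        for i in range(1, n - b + 2):     # all starting indices for length b
            j = i + b - 1                 # ending index of the chain
            m[i][j] = math.inf            # start large to find the minimum
            for k in range(i, j):         # try all possible splits
                q = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if q < m[i][j]:           # best split so far: store it
                    m[i][j] = q
                    s[i][j] = k
    return m, s
```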

Example
Find the best parenthesization for multiplying the chain A1A2A3A4, where A1 is 3x3, A2 is 3x2, A3 is 2x2, and A4 is 2x4, so p = <3, 3, 2, 2, 4>.
(Blank 4x4 tables for m[i, j] and s[i, j], rows i = 1..4 and columns j = 1..4, to be filled in in class.)
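As a check on the in-class exercise, a sketch that runs the algorithm on p = <3, 3, 2, 2, 4> and rebuilds the optimal parenthesization from the split table s (the helpers below re-implement Matrix-Chain-Order in Python; the names are ours):

```python
import math

def matrix_chain_order(p):
    # Bottom-up DP; p has length n+1, matrix A_i is p[i-1] x p[i].
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]
    s = [[0] * (n + 1) for _ in range(n + 1)]
    for b in range(2, n + 1):
        for i in range(1, n - b + 2):
            j = i + b - 1
            m[i][j] = math.inf
            for k in range(i, j):
                q = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if q < m[i][j]:
                    m[i][j], s[i][j] = q, k
    return m, s

def parenthesize(s, i, j):
    # Rebuild the optimal parenthesization of A_i..j from the split table s.
    if i == j:
        return f"A{i}"
    k = s[i][j]
    return f"({parenthesize(s, i, k)}{parenthesize(s, k + 1, j)})"

m, s = matrix_chain_order([3, 3, 2, 2, 4])
# m[1][4] == 54 scalar multiplications,
# achieved by the parenthesization ((A1(A2A3))A4)
```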