UMass Lowell Computer Science 91.503 Analysis of Algorithms Prof. Karen Daniels Spring, 2009 Design Patterns for Optimization Problems Dynamic Programming Lecture 1 (Part 2)

Algorithmic Paradigm Context: the paradigms differ in subproblem solution order. A greedy algorithm makes its choice first, then solves the resulting subproblem(s); dynamic programming solves the subproblem(s) first, then makes its choice.

Dynamic Programming Approach to Optimization Problems 1. Characterize structure of an optimal solution. 2. Recursively define value of an optimal solution. 3. Compute value of an optimal solution in bottom-up fashion. 4. Construct an optimal solution from computed information. source: textbook Cormen, et al.

Dynamic Programming Matrix Parenthesization

Example: Matrix Parenthesization Definitions • Given a "chain" of n matrices A_1, A_2, …, A_n • Compute the product A_1 A_2 … A_n efficiently • Minimize "cost" = number of scalar multiplications • Multiplication order matters! source: textbook Cormen, et al.
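For instance, with A_1 of size 10×100, A_2 of size 100×5, and A_3 of size 5×50, the order ((A_1 A_2) A_3) costs 10·100·5 + 10·5·50 = 7,500 scalar multiplications, while (A_1 (A_2 A_3)) costs 100·5·50 + 10·100·50 = 75,000: a factor of 10 more for the same product.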

Example: Matrix Parenthesization Step 1: Characterizing an Optimal Solution Observation: Any parenthesization of A_i A_{i+1} … A_j must split it between A_k and A_{k+1} for some k. THM: Optimal Matrix Parenthesization: If an optimal parenthesization of A_i A_{i+1} … A_j splits at k, then the parenthesization of the prefix A_i A_{i+1} … A_k must itself be an optimal parenthesization. Why? If there existed a less costly way to parenthesize the prefix, then substituting that parenthesization would yield a less costly way to parenthesize A_i A_{i+1} … A_j, contradicting the optimality of the original parenthesization. source: textbook Cormen, et al. common DP proof technique: "cut-and-paste" proof by contradiction

Example: Matrix Parenthesization Step 2: A Recursive Solution • Recursive definition of minimum parenthesization cost, where each matrix A_i has dimensions p_{i-1} × p_i: m[i,j] = 0 if i = j; m[i,j] = min over i ≤ k < j of { m[i,k] + m[k+1,j] + p_{i-1} p_k p_j } if i < j. How many distinct subproblems? (Θ(n²): one per pair i ≤ j.) source: textbook Cormen, et al.
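A minimal bottom-up sketch of this recurrence in Python (the names matrix_chain_order, m, and s follow the slides; the function itself is illustrative, not from the slides):

import math

def matrix_chain_order(p):
    # p is the dimension list: matrix A_i is p[i-1] x p[i], so n = len(p) - 1.
    # Returns (m, s): m[i][j] = minimum scalar multiplications for A_i..A_j,
    # s[i][j] = split point k that achieves m[i][j]. Tables are 1-indexed.
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]
    s = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):            # length of the chain A_i..A_j
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = math.inf
            for k in range(i, j):             # try every split point
                q = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if q < m[i][j]:
                    m[i][j] = q
                    s[i][j] = k
    return m, s

For the three-matrix example above, matrix_chain_order([10, 100, 5, 50]) gives m[1][3] = 7500 and s[1][3] = 2.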

Example: Matrix Parenthesization Step 3: Computing Optimal Costs [figure: the m and s tables computed bottom-up; sample m entries include 0, 1,000, 2,500, and 2,625] s: value of k that achieves optimal cost in computing m[i, j] source: textbook Cormen, et al.

Example: Matrix Parenthesization Step 4: Constructing an Optimal Solution
PRINT-OPTIMAL-PARENS(s, i, j)
  if i = j
    then print "A"_i
    else print "("
         PRINT-OPTIMAL-PARENS(s, i, s[i, j])
         PRINT-OPTIMAL-PARENS(s, s[i, j]+1, j)
         print ")"
source: textbook Cormen, et al.
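The same reconstruction in Python, assuming the s table from the matrix_chain_order sketch earlier (optimal_parens is an illustrative name; it returns the parenthesization as a string rather than printing it):

def optimal_parens(s, i, j):
    # Rebuild the optimal parenthesization of A_i..A_j from the split table s.
    if i == j:
        return "A" + str(i)
    k = s[i][j]                               # optimal split recorded in Step 3
    return "(" + optimal_parens(s, i, k) + optimal_parens(s, k + 1, j) + ")"

With the dimensions [10, 100, 5, 50], optimal_parens(s, 1, 3) returns "((A1A2)A3)".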

Example: Matrix Parenthesization Memoization • Provides Dynamic Programming efficiency • But with a top-down strategy • Uses recursion • Fills in the table "on demand" • Example: a memoized version of RECURSIVE-MATRIX-CHAIN: source: textbook Cormen, et al.
MEMOIZED-MATRIX-CHAIN(p)
  n ← length[p] − 1
  for i ← 1 to n
    do for j ← i to n
      do m[i,j] ← ∞
  return LOOKUP-CHAIN(p, 1, n)
LOOKUP-CHAIN(p, i, j)
  if m[i,j] < ∞
    then return m[i,j]
  if i = j
    then m[i,j] ← 0
    else for k ← i to j−1
      do q ← LOOKUP-CHAIN(p, i, k) + LOOKUP-CHAIN(p, k+1, j) + p_{i−1} p_k p_j
         if q < m[i,j]
           then m[i,j] ← q
  return m[i,j]
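A top-down Python counterpart (a sketch; functools.lru_cache plays the role of the m table, filling entries on demand exactly as LOOKUP-CHAIN does):

from functools import lru_cache

def memoized_matrix_chain(p):
    # Same recurrence as the bottom-up version, but computed recursively;
    # the cache guarantees each subproblem (i, j) is solved only once.
    @lru_cache(maxsize=None)
    def lookup(i, j):
        if i == j:
            return 0
        return min(lookup(i, k) + lookup(k + 1, j) + p[i - 1] * p[k] * p[j]
                   for k in range(i, j))
    return lookup(1, len(p) - 1)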

Dynamic Programming Longest Common Subsequence

Example: Longest Common Subsequence (LCS): Motivation • Strand of DNA: string over the finite set {A,C,G,T} • each element of the set is a base: adenine, guanine, cytosine, or thymine • Compare DNA similarities • S_1 = ACCGGTCGAGTGCGCGGAAGCCGGCCGAA • S_2 = GTCGTTCGGAATGCCGTTGCTCTGTAAA • One measure of similarity: find the longest string S_3 containing bases that also appear (not necessarily consecutively) in S_1 and S_2 • S_3 = GTCGTCGGAAGCCGGCCGAA source: textbook Cormen, et al.

Example: LCS Definitions • Sequence Z = ⟨z_1, z_2, …, z_k⟩ is a subsequence of X = ⟨x_1, x_2, …, x_m⟩ if there exists a strictly increasing sequence ⟨i_1, i_2, …, i_k⟩ of indices of X such that x_{i_j} = z_j for all j • example: ⟨B,C,D,B⟩ is a subsequence of ⟨A,B,C,B,D,A,B⟩ with index sequence ⟨2,3,5,7⟩ • Z is a common subsequence of X and Y if Z is a subsequence of both X and Y • example, for X = ⟨A,B,C,B,D,A,B⟩ and Y = ⟨B,D,C,A,B,A⟩: ⟨B,C,A⟩ is a common subsequence but not longest; ⟨B,C,B,A⟩ is a common subsequence. Longest? Longest Common Subsequence Problem: Given 2 sequences X, Y, find a maximum-length common subsequence Z. source: textbook Cormen, et al.

Example: LCS Step 1: Characterize an LCS THM 15.1: Optimal LCS Substructure Given sequences X = ⟨x_1, …, x_m⟩ and Y = ⟨y_1, …, y_n⟩, let Z = ⟨z_1, …, z_k⟩ be any LCS of X and Y: 1 if x_m = y_n, then z_k = x_m = y_n and Z_{k-1} is an LCS of X_{m-1} and Y_{n-1} 2 if x_m ≠ y_n and z_k ≠ x_m, then Z is an LCS of X_{m-1} and Y 3 if x_m ≠ y_n and z_k ≠ y_n, then Z is an LCS of X and Y_{n-1} PROOF: based on producing contradictions 1 a) Suppose z_k ≠ x_m = y_n. Appending x_m = y_n to Z contradicts the longest nature of Z. b) To establish the longest nature of Z_{k-1}, suppose a common subsequence W of X_{m-1} and Y_{n-1} has length > k−1. Appending x_m = y_n to W yields a common subsequence of length > k, a contradiction. 2 A common subsequence W of X_{m-1} and Y of length > k would also be a common subsequence of X_m and Y, contradicting the longest nature of Z. 3 Similar to the proof of (2). source: textbook Cormen, et al.

Example: LCS Step 2: A Recursive Solution • Implications of Theorem 15.1: is x_m = y_n? yes: Find LCS(X_{m-1}, Y_{n-1}); then LCS(X, Y) = LCS(X_{m-1}, Y_{n-1}) + x_m. no: Find LCS(X_{m-1}, Y) and LCS(X, Y_{n-1}); then LCS(X, Y) = max(LCS(X_{m-1}, Y), LCS(X, Y_{n-1})).

Example: LCS Step 2: A Recursive Solution (continued) • Overlapping subproblem structure: conditions of the problem can exclude some subproblems! • Recurrence for the length of an optimal solution: c[i,j] = 0 if i = 0 or j = 0; c[i,j] = c[i-1,j-1] + 1 if i,j > 0 and x_i = y_j; c[i,j] = max(c[i,j-1], c[i-1,j]) if i,j > 0 and x_i ≠ y_j. Θ(mn) distinct subproblems. source: textbook Cormen, et al.
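A minimal bottom-up Python sketch of this recurrence (lcs_length is an illustrative name; it returns the full c table so an LCS can be reconstructed from it later):

def lcs_length(x, y):
    # c[i][j] = length of an LCS of the prefixes x[:i] and y[:j];
    # row 0 and column 0 hold the i = 0 / j = 0 base cases.
    m, n = len(x), len(y)
    c = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1
            else:
                c[i][j] = max(c[i][j - 1], c[i - 1][j])
    return c

For the sequences from the definitions slide, lcs_length("ABCBDAB", "BDCABA")[7][6] == 4.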

Example: LCS Step 3: Compute Length of an LCS [figure: the c table for the example sequences, with arrows standing in for the b table] What is the asymptotic worst-case time complexity? (Θ(mn): each of the mn entries is filled in constant time.) source: textbook Cormen, et al.

Example: LCS Step 4: Construct an LCS source: textbook Cormen, et al.
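The reconstruction figure does not survive in this transcript, so here is a Python sketch of Step 4 (reconstruct_lcs is an illustrative name; it recovers one LCS by walking back from c[m][n], which also previews the next slide's point that a separate b table is unnecessary):

def reconstruct_lcs(c, x, y):
    # Walk back from c[len(x)][len(y)]; O(m+n) steps total.
    i, j = len(x), len(y)
    out = []
    while i > 0 and j > 0:
        if x[i - 1] == y[j - 1]:
            out.append(x[i - 1])      # matched character belongs to the LCS
            i, j = i - 1, j - 1
        elif c[i - 1][j] >= c[i][j - 1]:
            i -= 1                    # value came from the row above
        else:
            j -= 1                    # value came from the column to the left
    return "".join(reversed(out))

For example, reconstruct_lcs(lcs_length("ABCBDAB", "BDCABA"), "ABCBDAB", "BDCABA") returns "BCBA".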

Example: LCS Improve the Code • Can eliminate the b table: c[i,j] depends only on 3 other c table entries: c[i-1,j-1], c[i-1,j], c[i,j-1] • given the value of c[i,j], can pick the right one in O(1) time • reconstruct an LCS in O(m+n) time, similar to PRINT-LCS • same Θ(mn) space, but Θ(mn) time was needed anyway... • Asymptotic space reduction • leverage: need only 2 rows of the c table at a time: the row being computed and the previous row • can also do it with space for ~1 row of the c table • but that does not preserve the LCS reconstruction data source: textbook Cormen, et al.
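A sketch of the two-row space reduction just described (illustrative names; note it preserves only the length, not the reconstruction data):

def lcs_length_two_rows(x, y):
    # Keep only the previous and current rows of c: O(n) extra space.
    m, n = len(x), len(y)
    prev = [0] * (n + 1)
    for i in range(1, m + 1):
        curr = [0] * (n + 1)
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                curr[j] = prev[j - 1] + 1
            else:
                curr[j] = max(curr[j - 1], prev[j])
        prev = curr
    return prev[n]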

Dynamic Programming …leading to a Greedy Algorithm… Activity Selection

Activity Selection Optimization Problem • Problem Instance: Set S = {1, 2, ..., n} of n activities • Each activity i has: start time s_i and finish time f_i • Activities i, j are compatible iff they are non-overlapping: s_i ≥ f_j or s_j ≥ f_i • Objective: select a maximum-sized set of mutually compatible activities source: textbook Cormen, et al.

Activity Selection [figure: sample activities drawn as time intervals; axes are activity number vs. time duration]

Activity Selection A solution to S_ij that includes a_k produces 2 subproblems: 1) S_ik (activities that start after a_i finishes and finish before a_k starts) 2) S_kj (activities that start after a_k finishes and finish before a_j starts) c[i,j] = size of a maximum-size subset of mutually compatible activities in S_ij. source: textbook Cormen, et al.
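A Python sketch of this DP formulation (illustrative names; sentinel activities a_0 and a_{n+1} are added so that S_{0,n+1} is the original problem, and start times are assumed nonnegative):

import math

def max_compatible_activities(activities):
    # activities: list of (start, finish) pairs.
    # c[i][j] = size of a maximum set of mutually compatible activities in S_ij.
    acts = sorted(activities, key=lambda a: a[1])             # sort by finish time
    acts = [(-math.inf, 0)] + acts + [(math.inf, math.inf)]   # sentinels a_0, a_{n+1}
    n = len(acts)
    c = [[0] * n for _ in range(n)]                           # empty S_ij -> 0
    for length in range(2, n):
        for i in range(0, n - length):
            j = i + length
            for k in range(i + 1, j):
                # a_k is in S_ij iff it starts after a_i finishes
                # and finishes before a_j starts
                if acts[k][0] >= acts[i][1] and acts[k][1] <= acts[j][0]:
                    c[i][j] = max(c[i][j], c[i][k] + c[k][j] + 1)
    return c[0][n - 1]

This previews the greedy insight from the divider slide: always choosing the compatible activity with the earliest finish time yields the same answer without the table.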