
Dynamic Programming
Louis Siu
13.9.2007

What is Dynamic Programming (DP)?
- Not a single algorithm
- A technique for speeding up algorithms (by making use of memory)
- Closely related to recursion

Why learn DP?
- DP problems appear in most, if not all, programming contests.
- Learning a few common ways of applying DP is easy but often sufficient.
- DP programs are usually easy to code.
- Good at DP => good at recursion.

Dynamic Programming – Idea

Fibonacci number f(n):
    f(0) = 0
    f(1) = 1
    f(n) = f(n–1) + f(n–2), n > 1

Code segment for finding f(n):

    int f(int n){
        if (n == 0) return 0;
        if (n == 1) return 1;
        return f(n-1) + f(n-2);
    }

Computing f(4)

[Figure: recursion tree of f(4), expanding f(3) and f(2); f(2), f(1) and f(0) each appear several times.]

f(0), f(1), f(2) are computed multiple times – finding f(n) this way takes exponential time!

Modification

Memoize computed results:

    int table[N];                      // initialized to -1 ("not yet computed")
    int f(int n){
        if (table[n] != -1) return table[n];
        if (n == 0) return 0;
        if (n == 1) return 1;
        return table[n] = f(n-1) + f(n-2);
    }

Computing f(4)

[Figure: memoized recursion tree of f(4) – each subtree is expanded only once.]

Each f(i) is computed once only – linear time for finding f(n)!

Dynamic Programming – Idea
- Look for redundancy (re-computation) in the algorithm.
- Memoize intermediate results.

Example 1 – Tiling

Tiling (UVa #10359)
- Given a board of size 2×n.
- Available tiles: 2×1 and 2×2.
- Find the number of ways to completely fill the board.
- E.g. when n = 3, there are 5 ways.

Recurrence

Consider a board of width n: how can we fill the right-most column?

Recurrence

Let f(n) be the number of ways of filling a board of size n×2. The right-most column is covered either by one vertical 2×1 tile (leaving a board of width n–1), by two stacked horizontal 2×1 tiles, or by one 2×2 tile (each leaving a board of width n–2):

    f(n) = f(n–1) + f(n–2) + f(n–2)

Base Cases

When n = 0, f(n) = 1 (the empty board can be filled in exactly one way). When n = 1, f(n) = 1 (a single vertical 2×1 tile).

Overlapping Subproblems

Both the f(n–1) and f(n–2) subproblems eventually reach a board of width n–3; memoization avoids re-computing f(n–3).

Implementation 1

Bottom-up approach:

    int ways[251];
    int main(){
        ways[0] = 1;
        ways[1] = 1;
        for (int i = 2; i <= 250; ++i)
            ways[i] = ways[i-1] + 2 * ways[i-2];
    }

Implementation 2

Top-down approach:

    int ways[251];
    int f(int n){
        if (ways[n] != -1) return ways[n];
        return (ways[n] = f(n-1) + 2*f(n-2));
    }
    int main(){
        ways[0] = 1;
        ways[1] = 1;
        for (int i = 2; i <= 250; ++i)
            ways[i] = -1;                  // mark as "not yet computed"
        // cout << f(250) << '\n';
    }

Bottom-up vs. Top-down

                           Bottom-up   Top-down
    Order of evaluation    Important   Not important
    Function calls         No          Yes
    Space requirement      Low         High
    Redundant computation  Possible    Never

Remarks
- Solving the recurrence gives f(n) = (2^(n+1) + (–1)^n) / 3, which grows exponentially. Therefore, any algorithm that counts the solutions one by one is exponential.
- With memoization, each f(i) is computed once only, in constant time – the time complexity of our DP solution is O(n).
- Exercise: solve "Tiling" with big integers (f(250) overflows built-in integer types).
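Since only addition is needed here, big integers can be handled with a simple digit vector. A minimal sketch (the digit representation and the read-until-EOF input loop are assumptions, not from the slides):

    #include <iostream>
    #include <vector>
    using namespace std;

    typedef vector<int> Big;   // decimal digits, least significant first

    // Big-integer addition: returns a + b, digit by digit with carry.
    Big add(const Big& a, const Big& b){
        Big c;
        int carry = 0;
        for (size_t i = 0; i < a.size() || i < b.size() || carry; ++i){
            int d = carry;
            if (i < a.size()) d += a[i];
            if (i < b.size()) d += b[i];
            c.push_back(d % 10);
            carry = d / 10;
        }
        return c;
    }

    int main(){
        static Big f[251];
        f[0] = f[1] = Big(1, 1);                        // f(0) = f(1) = 1
        for (int i = 2; i <= 250; ++i)
            f[i] = add(f[i-1], add(f[i-2], f[i-2]));    // f(i) = f(i-1) + 2*f(i-2)
        int n;
        while (cin >> n){
            for (int i = (int)f[n].size() - 1; i >= 0; --i) cout << f[n][i];
            cout << '\n';
        }
    }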

Example 2 – Tri Tiling

Tri Tiling (UVa #10918)
- Given a board of size 3×n.
- Available tiles: 2×1.
- Find the number of ways to completely fill the board.
- Let f(n) be the number of ways to fill a board of size n×3.
- What is the relationship between f(n) and f(n–1), f(n–2), f(n–3), …?

Example

[Figure: a filling of the board with no vertical line splitting it into two fully tiled parts.] Such a filling cannot be counted using f(n–1), f(n–2), f(n–3), … alone.

Recurrence

How can we fill the last column? Let g(n) be the number of ways to fill a board of size n×3 with one square at the corner already filled. Then

    f(n) = 2·g(n–1) + f(n–2),  with f(0) = 1, f(1) = 0

Recurrence for g(n)

How can we fill the last column?

    g(n) = f(n–1) + g(n–2),  with g(1) = 1, g(2) = 0
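Putting the two recurrences together, a bottom-up sketch in the style of Implementation 1 (the bound n ≤ 30 and the sentinel-terminated input are assumptions about the problem, not from the slides):

    #include <iostream>
    using namespace std;

    int main(){
        // f[i]: ways to fill a 3×i board; g[i]: same board with one corner square pre-filled
        long long f[31], g[31];
        f[0] = 1; f[1] = 0;
        g[0] = 0; g[1] = 1;               // g[0] = 0 reproduces the slide's g(2) = 0
        for (int i = 2; i <= 30; ++i){
            g[i] = f[i-1] + g[i-2];
            f[i] = 2*g[i-1] + f[i-2];     // e.g. f[2] = 3, f[4] = 11, f[6] = 41
        }
        int n;
        while (cin >> n && n >= 0)        // assumed: a negative value ends the input
            cout << f[n] << '\n';
    }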

Remarks
- A brute-force algorithm requires exponential time.
- With memoization, each f(i) and g(i) is computed once only, in constant time – the time complexity of our DP solution is O(n).
- The auxiliary count g(n) is the key to formulating the recurrence.

Example 3 – Little Stone Road (modified)

Little Stone Road (mini-contest)
- Input: an n×m grid with positive integers c(i, j).
- Output: the minimum cost of any path from any square in the top row to any square in the bottom row (allowed moves: down, left, right).
- [Figure: example grid with its minimum-cost path and minimum cost.]

Attempt 1

Observations:
- Many paths overlap significantly.
- There are only 3 possible incoming squares that lead to square (i, j): top, left and right.

Let f(i, j) = the minimum cost of any path from the top row to square (i, j). (E.g. f(1, 1) = 16, the green path in the slide's figure.)

    f(i, j) = min{ f(i–1, j), f(i, j–1), f(i, j+1) } + c(i, j)
    Base cases: f(0, j) = c(0, j)
    Answer = min{ f(n–1, 0), f(n–1, 1), …, f(n–1, m–1) }

Problems?

Attempt 1 – Problems

    f(i, j) = min{ f(i–1, j), f(i, j–1), f(i, j+1) } + c(i, j)
    Base cases: f(0, j) = c(0, j)

f(i, j) queries f(i, j±1), but these are problems of the same size – there is no progress towards the base cases, so the recursion never bottoms out.

Attempt 2

Observation: the paths that end at square (i, j) can be classified into m groups, according to the column at which they leave row i–1:

    f(i, j) = min{ f(i–1, 0) + c(i, 0) + c(i, 1) + … + c(i, j),
                   f(i–1, 1) + c(i, 1) + c(i, 2) + … + c(i, j),
                   f(i–1, 2) + c(i, 2) + c(i, 3) + … + c(i, j),
                   …,
                   f(i–1, j) + c(i, j),
                   f(i–1, j+1) + c(i, j) + c(i, j+1),
                   … }
    Base cases: f(0, j) = c(0, j)

Time complexity: O(nm²)

Attempt 3

Observation: the costs of horizontal moves are re-computed many times in Attempt 2.

Let L(i, j) = the minimum cost of reaching square (i, j) "from the left or from above", and similarly R(i, j) from the right or from above:

    L(i, j) = min{ L(i–1, j), R(i–1, j), L(i, j–1) } + c(i, j)
    R(i, j) = min{ L(i–1, j), R(i–1, j), R(i, j+1) } + c(i, j)
    Answer = min{ L(n–1, 0), …, L(n–1, m–1), R(n–1, 0), …, R(n–1, m–1) }

Fill in the base cases yourself!
Time complexity: O(nm)
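A bottom-up sketch of Attempt 3 (the row-major grid layout, the INF sentinel for missing neighbours, and the sample grid in main are assumptions for illustration):

    #include <algorithm>
    #include <iostream>
    #include <vector>
    using namespace std;

    // Minimum cost from any top-row square to any bottom-row square,
    // moving down, left or right: one L sweep and one R sweep per row.
    int minPathCost(const vector<vector<int>>& c){
        int n = c.size(), m = c[0].size();
        const int INF = 1000000000;
        vector<int> L(m), R(m);
        for (int j = 0; j < m; ++j) L[j] = R[j] = c[0][j];  // base cases: row 0
        for (int i = 1; i < n; ++i){
            vector<int> up(m);                              // cheapest arrival from row i-1
            for (int j = 0; j < m; ++j) up[j] = min(L[j], R[j]);
            for (int j = 0; j < m; ++j)                     // L: sweep left to right
                L[j] = min(up[j], j > 0 ? L[j-1] : INF) + c[i][j];
            for (int j = m - 1; j >= 0; --j)                // R: sweep right to left
                R[j] = min(up[j], j + 1 < m ? R[j+1] : INF) + c[i][j];
        }
        int best = INF;
        for (int j = 0; j < m; ++j) best = min(best, min(L[j], R[j]));
        return best;
    }

    int main(){
        vector<vector<int>> grid = {{3, 1, 2}, {4, 2, 5}, {1, 6, 1}};  // hypothetical example
        cout << minPathCost(grid) << '\n';                             // prints 8
    }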

Summary
- The recurrence must reduce big problems to strictly smaller problems.
- There can be more than one DP formulation, with different performance.

Example 4 – Make Palindrome

Make Palindrome (UVa #10453)
- A palindrome is a string that is unchanged when reversed.
- Example of a palindrome: MADAM. Example of a non-palindrome: ADAM.
- By inserting 'M' into "ADAM", we get the palindrome "MADAM".
- Find the minimum number of insertions required to make a string a palindrome, and show one palindrome produced using the minimum number of insertions.

Idea

Let S[1..|S|] be the string. Observation: if S[1] and S[|S|] do not match, then an insertion either at the beginning or at the end is required. E.g. we MUST add 'M' at the front or 'A' at the end to bring "ABCM" closer to a palindrome ("MABCM" or "ABCMA"). The remaining problem is then to make "ABC" or "BCM" a palindrome.

Algorithm

f(i, j) = minimum number of insertions required to make S[i..j] a palindrome.

    int table[1000][1000];   // initialized to -1 ("not yet computed")
    int f(int i, int j){
        if (i >= j) return 0;
        if (table[i][j] != -1) return table[i][j];
        if (S[i] == S[j]) return table[i][j] = f(i+1, j-1);
        return table[i][j] = min(f(i, j-1), f(i+1, j)) + 1;
    }

Backtrack – getting back the resulting palindrome

Let S = "ABCB", and let table[i][j] be the minimum number of insertions required to make S[i..j] a palindrome:

    table[i][j] = table[i+1][j-1]                          if S[i] = S[j]
    table[i][j] = min{ table[i+1][j]+1, table[i][j-1]+1 }  otherwise

[Figure: the filled table for "ABCB", indexed by i (rows) and j (columns).]

Here table[1][4] = table[2][4] + 1 while table[1][4] ≠ table[1][3] + 1, so the optimal first step copies S[1] = 'A' to the end (giving "ABCBA") rather than copying S[4] = 'B' to the front.

    int table[1000][1000];
    string S;

    void backtrack(int i, int j){
        if (i > j) return;
        if (i == j) cout << S[i];
        else if (S[i] == S[j]){
            cout << S[i]; backtrack(i+1, j-1); cout << S[j];
        } else if (table[i+1][j]+1 == table[i][j]){
            cout << S[i]; backtrack(i+1, j); cout << S[i];   // insert S[i] at the end
        } else {
            cout << S[j]; backtrack(i, j-1); cout << S[j];   // insert S[j] at the front
        }
    }
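A small driver tying f and backtrack together (an assumed glue layer, not from the slides; note the indices here are 0-based, while the slide text writes S[1..|S|]):

    #include <cstring>   // memset

    int main(){
        cin >> S;
        memset(table, -1, sizeof table);    // -1 marks "not yet computed"
        int n = S.size();
        cout << f(0, n-1) << '\n';          // minimum number of insertions
        backtrack(0, n-1);                  // one palindrome achieving it
        cout << '\n';
    }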

Remarks
- Backtracking is done by checking how the values in the DP table were derived.
- Further improvement: for S = "ABAB", f(1, 2) and f(3, 4) refer to the same substring "AB" but are computed separately. Store f("A"), f("B"), f("AB"), f("BA"), f("ABA"), f("BAB"), f("ABAB") instead – this needs an efficient data structure (see the sketch below).
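One way to realize this improvement is to key the memo table by the substring itself. A sketch (the unordered_map is an illustration, not necessarily the data structure the slides had in mind; each lookup copies and hashes the substring, so a more compact keying scheme would be needed in practice):

    #include <algorithm>
    #include <iostream>
    #include <string>
    #include <unordered_map>
    using namespace std;

    unordered_map<string, int> memo;   // keyed by substring content, not by (i, j)

    int f(const string& s){
        if (s.size() <= 1) return 0;
        auto it = memo.find(s);
        if (it != memo.end()) return it->second;   // "AB" is computed once, wherever it occurs
        int r;
        if (s.front() == s.back())
            r = f(s.substr(1, s.size() - 2));
        else
            r = min(f(s.substr(0, s.size() - 1)), f(s.substr(1))) + 1;
        return memo[s] = r;
    }

    int main(){
        cout << f("ABAB") << '\n';   // 1, e.g. "ABABA"
    }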

Example 5 – Traveling Salesman Problem

Traveling Salesman Problem (TSP) (UVa #10944)
- There are n cities, and the cost of traveling between any pair of cities i and j, c(i, j), is given.
- The salesman wants to find the minimum cost of visiting all cities exactly once and getting back to the starting city.
- TSP is NP-complete.

Recurrence

Let f(S, i) = the minimum cost of visiting all cities in {1} ∪ S ⊆ C, starting from city 1 and ending at city i, where C is the set of all cities. This is the standard Held–Karp formulation:

    f(S, i) = min{ f(S \ {i}, j) + c(j, i) : j ∈ S, j ≠ i },  with f({i}, i) = c(1, i)
    Answer = f(C, 1)

Time complexity: O(2^n · n²) – much better than exhausting all (n–1)! possible circuits.
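A bitmask implementation sketch of this recurrence (cities renumbered 0..n–1 with city 0 as the start; the 4-city cost matrix in main is a hypothetical example):

    #include <algorithm>
    #include <iostream>
    #include <vector>
    using namespace std;

    // Held–Karp over bitmasks: dp[S][i] = minimum cost of a path that starts at
    // city 0, visits exactly the cities in S (with 0 in S and i in S), and ends at i.
    int tsp(const vector<vector<int>>& c){
        int n = c.size();
        const int INF = 1000000000;
        vector<vector<int>> dp(1 << n, vector<int>(n, INF));
        dp[1][0] = 0;                              // only city 0 visited so far
        for (int S = 1; S < (1 << n); ++S){
            if (!(S & 1)) continue;                // every reachable state contains city 0
            for (int i = 0; i < n; ++i){
                if (dp[S][i] == INF) continue;
                for (int j = 0; j < n; ++j){       // extend the path to an unvisited city j
                    if (S & (1 << j)) continue;
                    dp[S | (1 << j)][j] = min(dp[S | (1 << j)][j], dp[S][i] + c[i][j]);
                }
            }
        }
        int best = INF;
        for (int i = 1; i < n; ++i)                // close the tour back at city 0
            best = min(best, dp[(1 << n) - 1][i] + c[i][0]);
        return best;
    }

    int main(){
        vector<vector<int>> c = {{0, 2, 9, 10}, {1, 0, 6, 4},
                                 {15, 7, 0, 8}, {6, 3, 12, 0}};
        cout << tsp(c) << '\n';                    // prints 21 (tour 0 -> 2 -> 3 -> 1 -> 0)
    }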

Summary and Conclusion
- DP is a powerful technique for improving the running time of algorithms.
- The key idea of DP is avoiding re-computation by memoization.
- To make use of DP: identify redundancy, define the substructure (the most difficult part), and formulate a recurrence.
- Sometimes more than one DP formulation is possible.
- DP solutions can be implemented top-down or bottom-up.
- Recover the optimal solution by looking at how the entries in the DP table are derived.
- Alternatives to DP include brute force with pruning, and greedy algorithms.
- Practice is important.

Suggested Problems

Easy: 116, 147, 231, 847, 926, 988, 10036, 10081, 10192, 10198, 10304, 10337, 10359, 10400, 10404, 10453, 10532, 10617, 10651, 10702, 10759, 10891, 10912, 10970
Medium: 562, 607, 711, 714, 882, 10003, 10130, 10944, 10890, 10981, 11002, 11003
Hard: 757, 10020, 10564, 10663, 10859, 10911, 10934, 10940, 11045, 11081