DP (not Daniel Park's dance party)

Dynamic programming can speed up many problems. Basically, it's like magic. :D
Overlapping subproblems
  o The number of distinct subproblems needs to be small.
  o Don't get confused with divide and conquer (NON-overlapping subproblems).
Optimal substructure
  o Solve a larger subproblem using the solutions to smaller subproblems.
On USACO, if your naive method's runtime is exponential, then the problem is likely to be DP.

Two Approaches
Top-Down (systematic recursion)
  o If the subproblem has been solved already, return the saved answer.
  o If it hasn't been solved, solve it and save the answer (also called "memoizing" - no 'r').
  o "Easier" to code :((
  o Russian way
Bottom-Up
  o Solve all subproblems, starting from the most trivial ones.
  o Useful when trying to optimize memory (ex: sliding window, heap, etc.).
Pick what is easier for you. It doesn't really matter.

Fibonacci
Classic first example: find the nth Fibonacci number.
  o f(n) = f(n-1) + f(n-2), with f(1) = f(2) = 1.
Straightforward recursion takes exponential time:

    int fib(int n) {
        if (n == 1 || n == 2) return 1;
        return fib(n - 1) + fib(n - 2);
    }

  o fib(5) = fib(4) + fib(3)
           = (fib(3) + fib(2)) + fib(3)
           = (fib(2) + fib(1) + fib(2)) + fib(3)
           = fib(2) + fib(1) + fib(2) + fib(2) + fib(1)
  o fib(1) and fib(2) are computed many, many times (in fact, fib(5) times in total).
  o This algorithm would take years to compute just fib(1000).

Fibonacci: Two Faster Approaches
Bottom-Up Way:
  o Keep an array fib[1...n].
  o fib[1] = fib[2] = 1
  o fib[n] = fib[n-1] + fib[n-2]
  o Loop from 3 to n. Only takes linear time.
  o Only store the last 2 values to reduce memory usage (see the sketch below).
Top-Down Way (Recursive): just modify the original code.

    int[] dp; // big array (allocate new int[n+1]; Java zero-initializes it)
    int fib(int n) {
        if (n == 1 || n == 2) return 1;
        if (dp[n] > 0) return dp[n]; // already know the answer
        return dp[n] = fib(n - 1) + fib(n - 2);
    }
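Here is a minimal Java sketch (with an illustrative method name) of the bottom-up version that keeps only the last two values:

    // Bottom-up Fibonacci in O(n) time and O(1) memory.
    // Values overflow long past fib(92); contest versions usually
    // ask for the answer modulo some number instead.
    static long fibBottomUp(int n) {
        if (n <= 2) return 1;
        long prev = 1, cur = 1;          // fib(1) and fib(2)
        for (int i = 3; i <= n; i++) {
            long next = prev + cur;      // fib(i) = fib(i-1) + fib(i-2)
            prev = cur;
            cur = next;
        }
        return cur;
    }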

Number Triangles
Given a triangle of numbers, find a path from the top to the bottom such that the sum of the numbers on the path is maximized.
Top-Down approach!
  o Define a state, find the recursion. A subproblem is finding the maximum-sum path from any location in the triangle to the bottom.
  o Our state is the location: row and column, f(r, c).
  o Our recursion: we have two choices, go down-left or down-right, and we pick the larger.
     f(r, c) = array[r][c] + max(f(r+1, c), f(r+1, c+1))
  o We want to find f(0, 0). [0-based indexing]
  o Don't forget to memoize when implementing! (See the sketch below.)
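A possible memoized implementation, assuming the triangle is stored in a jagged array tri (all names here are illustrative):

    // Memoized top-down solution for the number triangle.
    // tri[r][c] is the number at row r, column c (row r has r+1 entries).
    // 'seen' marks which dp entries are filled in, since a valid answer
    // could be 0 or negative.
    static int[][] tri, dp;
    static boolean[][] seen;

    static int best(int r, int c) {
        if (r == tri.length - 1) return tri[r][c];  // bottom row: path ends here
        if (seen[r][c]) return dp[r][c];
        seen[r][c] = true;
        // f(r, c) = tri[r][c] + max(f(r+1, c), f(r+1, c+1))
        return dp[r][c] = tri[r][c] + Math.max(best(r + 1, c), best(r + 1, c + 1));
    }

Allocate dp and seen with the same shape as tri, then call best(0, 0).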

Knapsack
You have n items, each with weight w_i and value v_i, and a knapsack that can hold at most a total weight C. You need to get the maximum total value that fits.
Variation 1: Unlimited amount of each item
  o dp[0...C], where dp[i] is the max value for total weight i.
  o dp[i] = max(dp[i - w[j]] + v[j]) over all items j with w[j] <= i.
Variation 2: One of each item
  o dp[1...n][0...C], where dp[i][j] is the max value using only the first i items, with total weight at most j.
  o dp[i][j] = max(dp[i-1][j], dp[i-1][j - w[i]] + v[i])
Knapsack problems are seen a lot. (Sketches of both variations follow below.)
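Hedged Java sketches of the two recurrences (method names are made up; items are 0-indexed in w[] and v[]):

    // Variation 1: unlimited copies of each item.
    static int unboundedKnapsack(int[] w, int[] v, int C) {
        int[] dp = new int[C + 1];          // dp[i] = best value with total weight <= i
        for (int i = 1; i <= C; i++) {
            dp[i] = dp[i - 1];              // allowing weight i is at least as good as i-1
            for (int j = 0; j < w.length; j++)
                if (w[j] <= i)
                    dp[i] = Math.max(dp[i], dp[i - w[j]] + v[j]);
        }
        return dp[C];
    }

    // Variation 2: at most one copy of each item (0-1 knapsack).
    static int zeroOneKnapsack(int[] w, int[] v, int C) {
        int n = w.length;
        int[][] dp = new int[n + 1][C + 1]; // dp[i][j] = best value, first i items, weight <= j
        for (int i = 1; i <= n; i++)
            for (int j = 0; j <= C; j++) {
                dp[i][j] = dp[i - 1][j];    // skip item i
                if (w[i - 1] <= j)          // or take it
                    dp[i][j] = Math.max(dp[i][j], dp[i - 1][j - w[i - 1]] + v[i - 1]);
            }
        return dp[n][C];
    }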

Interval DP
Matrix chain multiplication: find the minimum number of scalar multiplications needed to compute a product of matrices. Matrix multiplication is associative, but the order makes a huge difference in the # of multiplications needed.
Example: A, B, C are 10 x 100, 100 x 5, and 5 x 50 matrices.
  o Multiply A and B first - 7,500 multiplications (10*100*5 + 10*5*50).
  o Multiply B and C first - 75,000 multiplications (100*5*50 + 10*100*50).
Let dp[i, j] = min multiplications for matrices i...j. Must divide into two parts and multiply each:
  dp[i, j] = min(dp[i, k] + dp[k+1, j] + cost(i, k, j)) for k = i...j-1
  cost(i, k, j) = rows(i) * cols(k) * cols(j), the cost to multiply the product A_(i...k) with A_(k+1...j).
Note that at each point you keep track of an interval and divide it into subintervals. (A sketch follows below.)
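A hedged Java sketch of the interval DP above (the dims convention and method name are illustrative):

    // Matrix chain multiplication by interval DP.
    // dims has length n+1: matrix i (1-based) is dims[i-1] x dims[i].
    static int matrixChain(int[] dims) {
        int n = dims.length - 1;                 // number of matrices
        int[][] dp = new int[n + 1][n + 1];      // dp[i][j] = min cost for matrices i..j
        for (int len = 2; len <= n; len++)       // interval length
            for (int i = 1; i + len - 1 <= n; i++) {
                int j = i + len - 1;
                dp[i][j] = Integer.MAX_VALUE;
                for (int k = i; k < j; k++)      // split into (i..k) * (k+1..j)
                    dp[i][j] = Math.min(dp[i][j],
                        dp[i][k] + dp[k + 1][j] + dims[i - 1] * dims[k] * dims[j]);
            }
        return dp[1][n];
    }

For the example above, matrixChain(new int[]{10, 100, 5, 50}) returns 7500.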

Using bitset states
Only when the size is small and 2^n is OK.
Example: Planting
  o Farmer John has a field 9 by 250.
  o Some of the squares have trees in them, and you can't plant there.
  o Farmer John also cannot plant crops in adjacent squares.
  o How many ways are there to plant?
Note that 2^9 is small.
Let dp[i = 1...250][j = 0...2^9 - 1] mean: the first i rows are filled, and j is a number describing row i, where each 1 bit means crop and each 0 bit means no crop.
dp[i][j] = sum(dp[i-1][k]) over all k where the pair is valid (no crops on trees, no two adjacent crops within row i, and no crop in row i directly above a crop in row i-1). (A sketch follows below.)
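A hedged Java sketch of this bitmask DP (treeMask and the method name are assumptions about the input format; the real problem likely asks for the count modulo some number, which is omitted here):

    // Bitmask DP over rows: treeMask[i] has a 1 bit wherever row i has a tree.
    static long countPlantings(int[] treeMask, int rows) {
        final int W = 9, FULL = 1 << W;          // 2^9 patterns per row
        long[] dp = new long[FULL];
        for (int j = 0; j < FULL; j++)           // base case: row 0
            if ((j & (j << 1)) == 0 && (j & treeMask[0]) == 0)
                dp[j] = 1;
        for (int i = 1; i < rows; i++) {
            long[] ndp = new long[FULL];
            for (int j = 0; j < FULL; j++) {     // pattern for row i
                if ((j & (j << 1)) != 0) continue;      // adjacent crops in the row
                if ((j & treeMask[i]) != 0) continue;   // crop on a tree
                for (int k = 0; k < FULL; k++)   // pattern for row i-1
                    if ((j & k) == 0)            // no vertically adjacent crops
                        ndp[j] += dp[k];
            }
            dp = ndp;
        }
        long total = 0;
        for (int j = 0; j < FULL; j++) total += dp[j];
        return total;
    }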

Other DPs for your enjoyment
  o DP + data structure (BIT/segment tree)
  o DP + math
  o DP + pre-computation

POTW: Factoring Numbers
Daniel Park is playing a game. He starts out by writing down a single number, N.
  o Each turn, he can erase a number x he has written and replace it with two numbers a and b such that a * b = x (a, b > 1).
  o In the end, he remembers all the numbers he's ever written and sums up their digits.
  o Because Daniel Park likes big numbers, what is the largest possible sum he can make?
Example
  o N = 24. Answer: 27.
     Split 24 into 8, 3.
     Split 8 into 4, 2.
     Split 4 into 2, 2.
  o Numbers written: 24, 8, 3, 4, 2, 2, 2.
  o Sum = (2 + 4) + 8 + 3 + 4 + 2 + 2 + 2 = 27.

POTW: CONSTRAINTS
  o 10 POINTS: N <=
  o POINTS: N <= 1,000,
  o POINTS: N <= 1,000,000,000 (hashtable-dp)
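One way to read the "hashtable-dp" hint: memoize over only the numbers that can actually appear, using a hash map instead of an array. A hypothetical Java sketch (not the official solution; all names are made up):

    import java.util.HashMap;

    // f(n) = max digit sum reachable starting from n.
    // Splitting n into a * b adds digitSum(a) + digitSum(b) > 0 to the total,
    // so it is always worth splitting a composite number.
    static HashMap<Long, Long> memo = new HashMap<>();

    static long digitSum(long n) {
        long s = 0;
        for (; n > 0; n /= 10) s += n % 10;
        return s;
    }

    static long f(long n) {
        Long cached = memo.get(n);
        if (cached != null) return cached;
        long best = digitSum(n);                 // n stays as-is (e.g. n is prime)
        for (long a = 2; a * a <= n; a++)
            if (n % a == 0)                      // split n into a * (n/a)
                best = Math.max(best, digitSum(n) + f(a) + f(n / a));
        memo.put(n, best);
        return best;
    }

f(24) returns 27, matching the example.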