
1 DP (not Daniel Park's dance party)

2 Dynamic programming
Can speed up many problems. Basically, it's like magic. :D
Overlapping subproblems
 o The number of distinct subproblems needs to be small.
 o Don't confuse this with divide and conquer (NON-overlapping subproblems).
Optimal substructure
 o Solve a larger subproblem using the solutions to smaller subproblems.
On USACO, if your naive method's runtime is exponential, the problem is likely to be DP.

3 Two Approaches
Top-Down (systematic recursion)
 o If the problem has been solved already, return the answer.
 o If it hasn't been solved, solve it and save the answer.
   - Also called "memoizing" (no 'r').
 o "Easier" to code :((
 o Russian way
Bottom-Up
 o Solve all subproblems, starting from the most trivial ones.
 o Useful when trying to optimize memory (ex: sliding window, heap, etc.)
Pick whichever is easier for you. It doesn't really matter.

4 Fibonacci
Classic first example.
Find the nth Fibonacci number.
 o f(n) = f(n-1) + f(n-2)
Straightforward recursion takes exponential time:

int fib(int n) {
    if (n == 1 || n == 2) return 1;
    return fib(n - 1) + fib(n - 2);
}

 o fib(5) = fib(4) + fib(3)
          = ( fib(3) + fib(2) ) + fib(3)
          = ( fib(2) + fib(1) + fib(2) ) + fib(3)
          = fib(2) + fib(1) + fib(2) + fib(2) + fib(1)
 o fib(1) and fib(2) are computed many, many times (in fact, fib(5) times).
 o This algorithm will take years to compute just fib(1000).

5 Fibonacci: Two Faster Approaches
Bottom-Up Way:
 o Keep an array fib[1...n].
 o fib[1] = fib[2] = 1
 o fib[n] = fib[n-1] + fib[n-2]
 o Loop from 3 to n.
   - Only takes linear time.
 o Only store the last 2 values to reduce memory usage (see the sketch below).
Top-Down Way (Recursive): Just modify the original code.

int[] dp; // big array initialized to 0
int fib(int n) {
    if (n == 1 || n == 2) return 1;
    if (dp[n] > 0) return dp[n]; // already know the answer
    return dp[n] = fib(n - 1) + fib(n - 2);
}
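
A minimal sketch of the bottom-up way storing only the last two values, as described above (the method name fibBottomUp and the use of long are assumptions; real Fibonacci numbers overflow long around n = 93, so a contest would typically ask for the answer modulo some number):

// Bottom-up Fibonacci keeping only the last two values.
long fibBottomUp(int n) {
    if (n == 1 || n == 2) return 1;
    long prev2 = 1, prev1 = 1;        // fib(1), fib(2)
    for (int i = 3; i <= n; i++) {
        long cur = prev1 + prev2;     // fib(i) = fib(i-1) + fib(i-2)
        prev2 = prev1;
        prev1 = cur;
    }
    return prev1;
}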

6 Number Triangles
Given a triangle of numbers, find a path from the top to the bottom such that the sum of the numbers on the path is maximized.
Top-Down approach!
 o Define a state, find the recursion.
A subproblem is finding the maximum path from any location in the triangle to the bottom.
 o Our state is the location.
   - Row and column: f(r, c)
 o Our recursion: we have two choices.
   - Go down-left or down-right; pick the larger.
   - f(r, c) = array[r][c] + max( f(r+1, c), f(r+1, c+1) )
 o Want to find: f(0, 0). [0-based indexing]
 o Don't forget to memoize when implementing (see the sketch below)!
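
A minimal memoized sketch of f(r, c) above, assuming the triangle is stored in tri[r][c] (row r has r + 1 entries, n rows total); the names tri, memo, and seen are illustrative:

int n;               // number of rows
int[][] tri;         // the triangle
int[][] memo;        // memo[r][c] = best sum from (r, c) down to the bottom
boolean[][] seen;    // whether (r, c) has been solved already

int f(int r, int c) {
    if (r == n - 1) return tri[r][c];     // bottom row: the path ends here
    if (seen[r][c]) return memo[r][c];    // already solved this state
    seen[r][c] = true;
    // go down-left or down-right, take the larger
    return memo[r][c] = tri[r][c] + Math.max(f(r + 1, c), f(r + 1, c + 1));
}
// Answer: f(0, 0)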

7 Knapsack
You have n items, each with weight w_i and value v_i, and a knapsack that can hold at most a total weight C. Maximize the total value you carry.
Variation 1: Unlimited amount of each item
 o dp[0...C], where dp[i] is the max value achievable with total weight i.
 o dp[i] = max( dp[i - w[j]] + v[j] ) over all items j.
Variation 2: One of each item
 o dp[1...n][0...C], where dp[i][j] is the max value with total weight j using only the first i items.
 o dp[i][j] = max( dp[i-1][j], dp[i-1][j - w[i]] + v[i] )
Knapsack problems show up a lot (see the sketches below).
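
Minimal sketches of both variations, assuming 1-indexed items in arrays w[] and v[] (the array and method names are illustrative):

// Variation 1: unlimited amount of each item.
// dp[i] = max value achievable with total weight at most i.
int unboundedKnapsack(int n, int C, int[] w, int[] v) {
    int[] dp = new int[C + 1];
    for (int i = 1; i <= C; i++)
        for (int j = 1; j <= n; j++)
            if (w[j] <= i) dp[i] = Math.max(dp[i], dp[i - w[j]] + v[j]);
    return dp[C];
}

// Variation 2: one of each item.
// dp[i][j] = max value using only the first i items, total weight at most j.
int zeroOneKnapsack(int n, int C, int[] w, int[] v) {
    int[][] dp = new int[n + 1][C + 1];
    for (int i = 1; i <= n; i++)
        for (int j = 0; j <= C; j++) {
            dp[i][j] = dp[i - 1][j];                    // skip item i
            if (w[i] <= j)
                dp[i][j] = Math.max(dp[i][j], dp[i - 1][j - w[i]] + v[i]);  // take item i
        }
    return dp[n][C];
}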

8 Interval DP
Matrix chain multiplication: find the minimum number of scalar multiplications needed.
Matrix multiplication is associative, but the order makes a huge difference in the # of multiplications needed.
Example: 10 x 100, 100 x 5, 5 x 50 matrices (A, B, C)
 o Multiply A and B first - 7,500 multiplications total
 o Multiply B and C first - 75,000 multiplications total
Let dp[i, j] = min multiplications for matrices i...j.
Must divide the chain into two parts and multiply each:
 o dp[i, j] = min( dp[i, k] + dp[k+1, j] + cost(i, k, j) ) for k = i...j-1
 o cost(i, k, j) = rows(i) * cols(k) * cols(j) - the cost to multiply the product A_(i...k) with A_(k+1...j)
Note that at each point you keep track of an interval and divide it into subintervals (see the sketch below).
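
A minimal sketch of the interval DP above, assuming matrix i (0-indexed) has dimensions rows[i] x cols[i] with cols[i] == rows[i + 1]; the function and array names are illustrative:

// dp[i][j] = minimum multiplications to compute the product of matrices i..j.
int matrixChain(int n, int[] rows, int[] cols) {
    int[][] dp = new int[n][n];                  // intervals of length 1 cost 0
    for (int len = 2; len <= n; len++) {         // length of the interval
        for (int i = 0; i + len - 1 < n; i++) {
            int j = i + len - 1;
            dp[i][j] = Integer.MAX_VALUE;
            for (int k = i; k < j; k++) {        // split point: (i..k) * (k+1..j)
                int cost = dp[i][k] + dp[k + 1][j] + rows[i] * cols[k] * cols[j];
                dp[i][j] = Math.min(dp[i][j], cost);
            }
        }
    }
    return dp[0][n - 1];
}

On the 10 x 100, 100 x 5, 5 x 50 example this returns 7,500, matching the slide.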

9 Using bitset states
Only when the size is small and 2^n is OK.
Example: Planting
 o Farmer John has a field 9 by 250.
 o Some of the squares have trees in them, and you can't plant there.
 o Farmer John also cannot plant crops in adjacent squares.
 o How many ways are there to plant?
Note that 2^9 is small.
Let dp[i = 1...250][j = 0...2^9 - 1] refer to the ith row, where j is a bitmask in which a 1 bit means crop and a 0 bit means no crop in that column.
dp[i][j] = sum( dp[i-1][k] ) over all valid k (no crops where trees are, and no crop adjacent to a crop in the current row). See the sketch below.
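
A minimal sketch of the bitmask DP above, assuming trees[i] is a bitmask of the tree squares in row i (rows 1..R, W columns); the names and the absence of a modulus are assumptions - a contest version would likely ask for the count modulo some number:

long countPlantings(int R, int W, int[] trees) {
    int MASKS = 1 << W;                              // 2^W possible row patterns
    long[][] dp = new long[R + 1][MASKS];
    dp[0][0] = 1;                                    // empty "row 0" above the field
    for (int i = 1; i <= R; i++) {
        for (int j = 0; j < MASKS; j++) {
            if ((j & (j << 1)) != 0) continue;       // two crops adjacent within the row
            if ((j & trees[i]) != 0) continue;       // crop on a tree square
            for (int k = 0; k < MASKS; k++) {
                if ((j & k) != 0) continue;          // crop directly above another crop
                dp[i][j] += dp[i - 1][k];            // invalid k simply contribute 0
            }
        }
    }
    long total = 0;
    for (int j = 0; j < MASKS; j++) total += dp[R][j];
    return total;
}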

10 Other DPs for your enjoyment
 o DP + data structure (BIT / segment tree)
 o DP + math
 o DP + pre-computation

11 POTW: Factoring Numbers
Daniel Park is playing a game. He starts out by writing down a single number, N.
 o Each turn, he can erase a number he has written and replace it with two numbers a and b whose product is that number. (a, b > 1)
 o In the end, he remembers all the numbers he's ever written and sums up their digits.
 o Because Daniel Park likes big numbers, what is the largest possible sum he can make?
Example
 o N = 24.
 o Answer: 27
   - Split 24 into 8, 3.
   - Split 8 into 4, 2.
   - Split 4 into 2, 2.
 o Sum = 2 + 2 + 2 + 4 + 8 + 3 + (2 + 4) = 27.

12 POTW: CONSTRAINTS
 o 10 points: N <= 50
 o 20 points: N <= 1,000,000
 o 25 points: N <= 1,000,000,000 (hashtable-dp; see the sketch below)
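
One possible shape for the hashtable-dp hinted at above (a sketch, not the official solution; the names best, digitSum, and memo are illustrative): best(x) = digitSum(x) + max(0, best(a) + best(b)) over all factorizations x = a * b with a, b > 1, memoized in a HashMap so the sparse set of reachable values fits in memory.

import java.util.HashMap;

HashMap<Long, Long> memo = new HashMap<>();

long digitSum(long x) {
    long s = 0;
    while (x > 0) { s += x % 10; x /= 10; }
    return s;
}

long best(long x) {
    if (memo.containsKey(x)) return memo.get(x);
    long ans = digitSum(x);                  // x itself is always written down
    for (long a = 2; a * a <= x; a++) {
        if (x % a == 0) {
            // splitting x into a and x / a lets both parts be split further
            ans = Math.max(ans, digitSum(x) + best(a) + best(x / a));
        }
    }
    memo.put(x, ans);
    return ans;
}

On the example above, best(24) = 27.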

