Algorithm Design and Analysis
LECTURE 13: Dynamic Programming (Fibonacci, Weighted Interval Scheduling)
Adam Smith, 9/10/10; based on slides by E. Demaine, C. Leiserson, S. Raskhodnikova, K. Wayne

Design Techniques So Far
Greedy. Build up a solution incrementally, myopically optimizing some local criterion.
Recursion / divide & conquer. Break up a problem into independent subproblems, solve each subproblem, and combine the solutions.
Dynamic programming. Break up a problem into overlapping subproblems, and build up solutions to larger and larger subproblems.

Dynamic Programming History
Bellman [1950s]. Pioneered the systematic study of dynamic programming.
Etymology.
– Dynamic programming = planning over time.
– The Secretary of Defense was hostile to mathematical research.
– Bellman sought an impressive name to avoid confrontation: "it's impossible to use dynamic in a pejorative sense"; "something not even a Congressman could object to."
Reference: Bellman, R. E., Eye of the Hurricane: An Autobiography.

Dynamic Programming Applications
Areas.
– Bioinformatics.
– Control theory.
– Information theory.
– Operations research.
– Computer science: theory, graphics, AI, compilers, systems, …
Some famous dynamic programming algorithms.
– Unix diff for comparing two files.
– Viterbi for hidden Markov models / decoding convolutional codes.
– Smith-Waterman for genetic sequence alignment.
– Bellman-Ford for shortest-path routing in networks.
– Cocke-Kasami-Younger for parsing context-free grammars.

Fibonacci Sequence
Sequence defined by a_1 = 1, a_2 = 1, and a_n = a_{n-1} + a_{n-2}: 1, 1, 2, 3, 5, 8, 13, 21, 34, …
How should you compute the Fibonacci sequence? Recursive algorithm:

Fib(n)
  If n = 1 or n = 2, then return 1
  Else
    a = Fib(n-1)
    b = Fib(n-2)
    return a + b

Running time?
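A direct Python transcription of this pseudocode (a sketch; the function name is ours) makes the redundancy easy to see:

def fib(n):
    # Base cases: the first two Fibonacci numbers are both 1.
    if n == 1 or n == 2:
        return 1
    # Each call spawns two more calls; the same subproblems are
    # recomputed over and over, so the running time is exponential.
    return fib(n - 1) + fib(n - 2)

For instance, fib(10) returns 55 almost instantly, while fib(40) already takes noticeable time.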

Computing Fibonacci Sequence Faster
Observation: lots of redundancy! The recursive algorithm only encounters n - 1 different subproblems.
"Memoization": store the values returned by recursive calls in a table.
Resulting algorithm (written bottom-up): linear time, if integer operations take constant time.

Fib(n)
  If n = 1 or n = 2, then return 1
  Else
    f[1] = 1; f[2] = 1
    For i = 3 to n
      f[i] = f[i-1] + f[i-2]
    return f[n]
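For contrast with the bottom-up loop above, here is a top-down memoized sketch in Python (the cache argument and names are illustrative): each distinct subproblem is solved once and then looked up, so the work is linear in n.

def fib_memo(n, cache=None):
    # cache maps n to a_n; seeded with the two base cases.
    if cache is None:
        cache = {1: 1, 2: 1}
    if n not in cache:
        # Solve each subproblem once; later calls hit the cache.
        cache[n] = fib_memo(n - 1, cache) + fib_memo(n - 2, cache)
    return cache[n]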

Computing Fibonacci Sequence Faster
Observation: the Fibonacci recurrence is linear, so it can be written in matrix form: with A = [[1, 1], [1, 0]], we have (a_{n+1}, a_n) = A (a_n, a_{n-1}), and hence A^n = [[a_{n+1}, a_n], [a_n, a_{n-1}]].
Can compute A^n using only O(log n) matrix multiplications; each one takes O(1) integer multiplications and additions.
Total running time? O(log n) integer operations. Exponential improvement!
Exercise: how big an improvement if we count bit operations? Multiplying k-bit numbers takes O(k log k) time. How many bits are needed to write down a_n?
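A sketch of the repeated-squaring idea in Python (helper names are ours; mat_mult hard-codes 2x2 matrices):

def mat_mult(X, Y):
    # Multiply two 2x2 matrices.
    return [[X[0][0]*Y[0][0] + X[0][1]*Y[1][0],
             X[0][0]*Y[0][1] + X[0][1]*Y[1][1]],
            [X[1][0]*Y[0][0] + X[1][1]*Y[1][0],
             X[1][0]*Y[0][1] + X[1][1]*Y[1][1]]]

def mat_pow(X, n):
    # Repeated squaring: O(log n) matrix multiplications.
    result = [[1, 0], [0, 1]]  # 2x2 identity
    while n > 0:
        if n % 2 == 1:
            result = mat_mult(result, X)
        X = mat_mult(X, X)
        n //= 2
    return result

def fib_fast(n):
    # A^n = [[a_{n+1}, a_n], [a_n, a_{n-1}]], so read off entry (0, 1).
    return mat_pow([[1, 1], [1, 0]], n)[0][1]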

Weighted Interval Scheduling
Weighted interval scheduling problem.
– Job j starts at s_j, finishes at f_j, and has weight or value v_j.
– Two jobs are compatible if they don't overlap.
– Goal: find a maximum-weight subset of mutually compatible jobs.
[Figure: jobs a through h drawn as intervals on a time axis.]

Unweighted Interval Scheduling Review
Recall. The greedy algorithm works if all weights are 1.
– Consider jobs in ascending order of finish time.
– Add a job to the subset if it is compatible with previously chosen jobs.
Observation. The greedy algorithm can fail spectacularly with weights.
[Figure: two overlapping jobs, b with weight 999 and a with weight 1; a finishes first, so greedy by finish time picks a and misses b.]

Weighted Interval Scheduling
Notation. Label jobs by finishing time: f_1 ≤ f_2 ≤ … ≤ f_n.
Def. p(j) = largest index i < j such that job i is compatible with j = largest i such that f_i ≤ s_j.
Ex: p(8) = 5, p(7) = 3, p(2) = 0.
[Figure: eight jobs on a time axis illustrating p(j).]

Dynamic Programming: Binary Choice
Notation. OPT(j) = value of an optimal solution to the problem consisting of job requests 1, 2, …, j.
– Case 1: OPT selects job j.
  – Collect profit v_j.
  – Can't use the incompatible jobs { p(j) + 1, p(j) + 2, …, j - 1 }.
  – Must include an optimal solution to the problem consisting of the remaining compatible jobs 1, 2, …, p(j). (Optimal substructure.)
– Case 2: OPT does not select job j.
  – Must include an optimal solution to the problem consisting of the remaining compatible jobs 1, 2, …, j - 1.
Hence OPT(0) = 0 and OPT(j) = max( v_j + OPT(p(j)), OPT(j-1) ) for j ≥ 1.

Weighted Interval Scheduling: Brute Force
Brute force algorithm.

Input: n, s_1,…,s_n, f_1,…,f_n, v_1,…,v_n
Sort jobs by finish times so that f_1 ≤ f_2 ≤ … ≤ f_n.
Compute p(1), p(2), …, p(n).

Compute-Opt(j) {
  if (j = 0)
    return 0
  else
    return max(v_j + Compute-Opt(p(j)), Compute-Opt(j-1))
}

Worst case: T(n) = T(n-1) + T(n-2) + Θ(1), which grows like the Fibonacci numbers, i.e., exponentially.
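A Python rendering of Compute-Opt (a sketch: it assumes the jobs are already sorted by finish time, p is the precomputed array from the slide, and index 0 of v and p is unused padding so indices match the 1-based pseudocode):

def compute_opt(j, v, p):
    if j == 0:
        return 0
    # Binary choice: take job j (and jump to p[j]) or skip it.
    return max(v[j] + compute_opt(p[j], v, p),
               compute_opt(j - 1, v, p))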

Weighted Interval Scheduling: Brute Force
Observation. The recursive algorithm fails spectacularly because of redundant subproblems ⇒ exponential algorithms.
Ex. The number of recursive calls for the family of "layered" instances (where p(1) = 0 and p(j) = j - 2) grows like the Fibonacci sequence.

Weighted Interval Scheduling: Memoization
Memoization. Store the result of each subproblem in a table; look it up as needed.

Input: n, s_1,…,s_n, f_1,…,f_n, v_1,…,v_n
Sort jobs by finish times so that f_1 ≤ f_2 ≤ … ≤ f_n.
Compute p(1), p(2), …, p(n).
for j = 1 to n
  M[j] = empty        (M is a global array)
M[0] = 0

M-Compute-Opt(j) {
  if (M[j] is empty)
    M[j] = max(v_j + M-Compute-Opt(p(j)), M-Compute-Opt(j-1))
  return M[j]
}
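Putting the pieces together, here is a runnable Python sketch of the whole pipeline: sorting, computing p(·) by binary search, and the memoized recursion. Jobs are assumed to be (start, finish, value) triples, and all names are illustrative.

import bisect

def weighted_interval_scheduling(jobs):
    jobs = sorted(jobs, key=lambda job: job[1])   # sort by finish time
    n = len(jobs)
    # 1-indexed arrays so the code matches the slides; index 0 unused.
    s = [None] + [job[0] for job in jobs]
    f = [None] + [job[1] for job in jobs]
    v = [0]    + [job[2] for job in jobs]
    # p[j] = largest i < j with f[i] <= s[j]; binary search over the
    # sorted finish times f[1..j-1].
    p = [0] * (n + 1)
    for j in range(1, n + 1):
        p[j] = bisect.bisect_right(f, s[j], 1, j) - 1
    # Memoization table: M[j] = OPT(j), or None if not yet computed.
    M = [None] * (n + 1)
    M[0] = 0
    def m_compute_opt(j):
        if M[j] is None:
            M[j] = max(v[j] + m_compute_opt(p[j]), m_compute_opt(j - 1))
        return M[j]
    return m_compute_opt(n)

For example, weighted_interval_scheduling([(0, 3, 2), (1, 5, 4), (4, 6, 4)]) returns 6: the first and third jobs are compatible, for value 2 + 4.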

Weighted Interval Scheduling: Running Time
Claim. The memoized version of the algorithm takes O(n log n) time.
– Sort by finish time: O(n log n).
– Computing p(·): O(n log n) via repeated binary search.
– M-Compute-Opt(j): each invocation takes O(1) time and either (i) returns an existing value M[j], or (ii) fills in one new entry M[j] and makes two recursive calls.
– Case (ii) occurs at most n times ⇒ at most 2n recursive calls overall.
– So the overall running time of M-Compute-Opt(n) is O(n). ▪
Remark. O(n) overall if jobs are pre-sorted by start and finish times.

Equivalent Algorithm: Bottom-Up
Bottom-up dynamic programming. Unwind the recursion.

Input: n, s_1,…,s_n, f_1,…,f_n, v_1,…,v_n
Sort jobs by finish times so that f_1 ≤ f_2 ≤ … ≤ f_n.
Compute p(1), p(2), …, p(n).

Iterative-Compute-Opt {
  M[0] = 0
  for j = 1 to n
    M[j] = max(v_j + M[p(j)], M[j-1])
}

How fast? Total time = (sorting) + O(n) = O(n log n).
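Continuing the sketch above (same 1-indexed v and p arrays), the recursion unwinds into a single loop in Python:

def iterative_compute_opt(n, v, p):
    M = [0] * (n + 1)
    for j in range(1, n + 1):
        # M[p[j]] and M[j-1] are already filled in when we need them.
        M[j] = max(v[j] + M[p[j]], M[j - 1])
    return M   # M[n] is the optimal value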

Weighted Interval Scheduling: Finding a Solution
Q. The dynamic programming algorithm computes the optimal value. What if we want the solution itself?
A. Do some post-processing: run M-Compute-Opt(n), then run Find-Solution(n).

Find-Solution(j) {
  if (j = 0)
    output nothing
  else if (v_j + M[p(j)] > M[j-1])
    print j
    Find-Solution(p(j))
  else
    Find-Solution(j-1)
}

Number of recursive calls ≤ n ⇒ O(n).
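In Python, an iterative version of Find-Solution might look like this (same v, p, and filled table M as in the sketches above; names are illustrative):

def find_solution(n, v, p, M):
    chosen = []
    j = n
    while j > 0:
        if v[j] + M[p[j]] > M[j - 1]:
            chosen.append(j)   # job j is in this optimal solution
            j = p[j]
        else:
            j = j - 1          # job j can safely be skipped
    return chosen[::-1]        # report jobs in increasing order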

Knapsack Problem
Given n objects and a "knapsack."
– Item i weighs w_i > 0 kilograms and has value v_i > 0.
– The knapsack has a capacity of W kilograms.
– Goal: fill the knapsack so as to maximize total value.

  Item  Value  Weight
   1      1      1
   2      6      2
   3     18      5
   4     22      6
   5     28      7
  W = 11

Ex: { 3, 4 } has value 40.
Many "packing" problems fit this model:
– assigning production jobs to factories;
– deciding which jobs to run on a single processor with bounded time;
– deciding which problems to attempt on an exam.

Knapsack Problem
Given n objects and a "knapsack."
– Item i weighs w_i > 0 kilograms and has value v_i > 0.
– The knapsack has a capacity of W kilograms.
– Goal: fill the knapsack so as to maximize total value.
Ex: { 3, 4 } has value 40 (items as in the table above; W = 11).
Greedy: repeatedly add the item with the maximum ratio v_i / w_i.
Example: { 5, 2, 1 } achieves only value = 35 ⇒ greedy is not optimal.


Dynamic Programming: Attempt 1
Definition: OPT(i) = value of a maximum-profit subset of items 1, …, i.
– Case 1: OPT does not select item i. Then OPT selects the best of { 1, 2, …, i-1 }.
– Case 2: OPT selects item i. But without knowing which other items were selected before i, we don't even know whether we have enough room for i.
Conclusion. Need more subproblems!

Adding a New Variable
Definition: OPT(i, w) = max profit of a subset of items 1, …, i with weight limit w.
– Case 1: OPT does not select item i. OPT selects the best of { 1, 2, …, i-1 } with weight limit w.
– Case 2: OPT selects item i. The new weight limit is w - w_i, and OPT selects the best of { 1, 2, …, i-1 } with the new weight limit.
Hence OPT(i, w) = 0 if i = 0; OPT(i-1, w) if w_i > w; and max( OPT(i-1, w), v_i + OPT(i-1, w - w_i) ) otherwise.

Bottom-Up Algorithm
Fill up an n-by-W array.

Input: n, W, w_1,…,w_n, v_1,…,v_n
for w = 0 to W
  M[0, w] = 0
for i = 1 to n
  for w = 1 to W
    if (w_i > w)
      M[i, w] = M[i-1, w]
    else
      M[i, w] = max {M[i-1, w], v_i + M[i-1, w-w_i]}
return M[n, W]
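A runnable Python sketch of this fill, using the example items from the table above:

def knapsack(values, weights, W):
    n = len(values)
    # M[i][w] = best value achievable with items 1..i and weight limit w.
    M = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        vi, wi = values[i - 1], weights[i - 1]
        for w in range(1, W + 1):
            if wi > w:
                M[i][w] = M[i - 1][w]   # item i doesn't fit
            else:
                M[i][w] = max(M[i - 1][w], vi + M[i - 1][w - wi])
    return M[n][W]

Here knapsack([1, 6, 18, 22, 28], [1, 2, 5, 6, 7], 11) returns 40, matching OPT = { 3, 4 } on the next slide.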

Knapsack Table
[Figure: the filled table M for the example, with rows for item sets φ, { 1 }, { 1, 2 }, { 1, 2, 3 }, { 1, 2, 3, 4 }, { 1, 2, 3, 4, 5 } and columns w = 0, …, 11.]
OPT: { 3, 4 }, value = 18 + 22 = 40.

How do we turn this into a proof of correctness?
Dynamic programming (like divide and conquer) lends itself directly to induction.
– Base cases: OPT(i, 0) = 0 and OPT(0, w) = 0 (no items!).
– Inductive step: explain the correctness of the recursive formula. If the following values are correctly computed:
  – OPT(0, w-1), OPT(1, w-1), …, OPT(n, w-1),
  – OPT(0, w), …, OPT(i-1, w),
then the recursive formula computes OPT(i, w) correctly.
– Case 1: …, Case 2: … (from the previous slide).

About Proofs
A proof is a rigorous argument for a mathematical statement's truth.
– It should convince you.
– It should not feel like a shot in the dark.
– What makes a proof "good enough" is social.
– (Truly rigorous, machine-readable proofs exist but are too painful for human use.)
In real life, you have to check your own work.
– Ideal: all problems should have grades of 100% or 20%.

Time and Space Complexity
Given n objects and a "knapsack."
– Item i weighs w_i > 0 kilograms and has value v_i > 0.
– The knapsack has a capacity of W kilograms.
– Goal: fill the knapsack so as to maximize total value.
What is the input size? In words? In bits?
[Table: the same five example items; W = 11.]

Time and Space Complexity
Time and space: Θ(nW).
– Not polynomial in the input size! W is written with only log W bits, so nW can be exponential in the length of the input.
– The space can be made O(W) (exercise!).
– Such algorithms are called "pseudo-polynomial."
– The decision version of Knapsack is NP-complete. [KT, Chapter 8]
Knapsack approximation algorithm. There is a poly-time algorithm that produces a solution with value within 0.01% of the optimum. [KT, Section 11.8]
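One possible approach to the space exercise (a sketch, not the only solution): row i of the table depends only on row i-1, so it suffices to keep two rows of length W + 1.

def knapsack_low_space(values, weights, W):
    prev = [0] * (W + 1)               # row i-1 of the table
    for vi, wi in zip(values, weights):
        cur = prev[:]                  # start from "don't take item i"
        for w in range(wi, W + 1):
            cur[w] = max(prev[w], vi + prev[w - wi])
        prev = cur
    return prev[W]                     # optimal value, O(W) space

(Recovering the chosen set itself within O(W) space takes more care.)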