Dynamic Programming
Prepared by Chen & Po-Chuan, 2016/03/29

Basic Idea
One implicitly explores the space of all possible solutions by:
- carefully decomposing the problem into a series of sub-problems
- building up correct solutions to larger and larger sub-problems
Similar in spirit to "Divide & Conquer".

Weighted Interval Scheduling
Given: a set of n intervals with start/finish times and weights (values)
Find: a subset S of mutually compatible intervals with maximum total value

For Unit-Weighted Cases
We can use a greedy algorithm (repeatedly pick the compatible interval with the earliest finish time), but this no longer works in the weighted version.

A Recursive Solution
Sort the intervals by finish times.
p(j) is the largest index i < j such that intervals i and j do not overlap.

A Recursive Solution
O_j = the optimal solution for intervals 1..j
OPT(j) = the value of the optimal solution for intervals 1..j
OPT(j) = max{ v_j + OPT(p(j)), OPT(j-1) }

Example
O_6 = ? Include interval 6 or not?
O_6 = {6} ∪ O_3, or O_6 = O_5
OPT(6) = max{ v_6 + OPT(3), OPT(5) }

Implementation
// Preprocessing:
// 1. Sort intervals by finish times
// 2. Compute p(1), p(2), ..., p(n)
Compute-Opt(j)
  if j = 0 then
    return 0
  else
    return max{ v_j + Compute-Opt(p(j)), Compute-Opt(j-1) }
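A direct Python transcription of this recursion (a minimal sketch: the (start, finish, value) tuple encoding and the helper name compute_p are illustrative choices, not from the slides):

import bisect

def compute_p(intervals):
    """p[j] = largest index i < j (1-based) with finish_i <= start_j,
    or 0 if no such interval. Assumes intervals sorted by finish time."""
    finishes = [f for (_, f, _) in intervals]
    p = [0] * (len(intervals) + 1)
    for j, (s, _, _) in enumerate(intervals, 1):
        # the count of earlier intervals with finish <= s is exactly
        # the 1-based index of the last compatible interval
        p[j] = bisect.bisect_right(finishes, s, 0, j - 1)
    return p

def compute_opt(j, intervals, p):
    """Plain recursion from the slide; exponential time, since the same
    subproblems are recomputed many times."""
    if j == 0:
        return 0
    v_j = intervals[j - 1][2]
    return max(v_j + compute_opt(p[j], intervals, p),
               compute_opt(j - 1, intervals, p))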

Recursion Tree

Memoization: Top-Down
The tree of recursive calls widens very quickly, producing many redundant calls. Store each computed value so later calls can reuse it and eliminate the redundancy.
M-Opt(j)
  if j = 0 then
    return 0
  else if M[j] is not empty then
    return M[j]
  else
    return M[j] = max{ v_j + M-Opt(p(j)), M-Opt(j-1) }
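The memoized version in Python (same sketch and interval encoding as above; a dict plays the role of the table M):

def m_opt(j, intervals, p, M):
    """Memoized recursion: each OPT(j) is computed at most once,
    so there are only O(n) non-trivial calls."""
    if j == 0:
        return 0
    if j not in M:
        v_j = intervals[j - 1][2]
        M[j] = max(v_j + m_opt(p[j], intervals, p, M),
                   m_opt(j - 1, intervals, p, M))
    return M[j]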

Iteration: Bottom-Up
We can also compute the array M[j] with an iterative algorithm.
I-Opt
  M[0] = 0
  for j = 1, 2, ..., n do
    M[j] = max{ v_j + M[p(j)], M[j-1] }
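The iterative version in Python, with a small usage example (the interval data is invented for illustration):

def i_opt(intervals, p):
    """Bottom-up: fill M[0..n] in order of increasing j.
    O(n) once intervals are sorted and p is precomputed."""
    n = len(intervals)
    M = [0] * (n + 1)
    for j in range(1, n + 1):
        v_j = intervals[j - 1][2]
        M[j] = max(v_j + M[p[j]], M[j - 1])
    return M[n]

# Example: (start, finish, value) tuples, sorted by finish time.
ivs = sorted([(0, 3, 2), (1, 5, 4), (4, 6, 4), (2, 8, 7)], key=lambda t: t[1])
print(i_opt(ivs, compute_p(ivs)))  # 7 -- the single interval (2, 8, 7) wins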

Keys for DP
Dynamic programming can be used if the problem satisfies the following properties:
- There are only a polynomial number of sub-problems
- The solution to the original problem can be easily computed from the solutions to the sub-problems
- There is a natural ordering on sub-problems from "smallest" to "largest," together with an easy-to-compute recurrence

Keys for DP
DP works best on objects that are linearly ordered and cannot be rearranged.
Elements of DP:
- Optimal sub-structure
- Overlapping sub-problems

Fibonacci Sequence
fib(n)
  if n ≤ 1 then
    return n
  return fib(n-1) + fib(n-2)

The Solutions
Top-down:
Fibonacci(n, f)
  // assumes f[0] = 0 and f[1] = 1 are pre-filled
  if f[n] not found then
    f[n] = Fibonacci(n-1, f) + Fibonacci(n-2, f)
  return f[n]
Bottom-up:
fib(n)
  f[0] = 0; f[1] = 1
  for i = 2 to n do
    f[i] = f[i-1] + f[i-2]
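Both styles in Python (a sketch; the explicit memo dict mirrors the slide's table f):

def fib_top_down(n, f=None):
    """Top-down: recurse, but cache every result in f."""
    if f is None:
        f = {0: 0, 1: 1}   # base cases pre-filled, as the slide assumes
    if n not in f:
        f[n] = fib_top_down(n - 1, f) + fib_top_down(n - 2, f)
    return f[n]

def fib_bottom_up(n):
    """Bottom-up: build the table from the base cases upward."""
    f = [0] * (n + 1)
    if n >= 1:
        f[1] = 1
    for i in range(2, n + 1):
        f[i] = f[i - 1] + f[i - 2]
    return f[n]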

Maze Routing
Given S, T, and some obstacles, find the shortest path from S to T.

The Solution
Bottom-up dynamic programming: induction on path length.
Procedure:
- Wave propagation
- Retrace

Maze Routing
- Guaranteed to find a connection between the two terminals if one exists
- Guaranteed to find a minimum-length path
- Both memory and time complexity are O(MN) for an M × N grid, so it needs large memory and runs slowly on big grids
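A compact sketch of wave propagation plus retrace on an M × N grid (the encoding 0 = free cell, 1 = obstacle, and cells as (row, col) tuples, are assumptions for illustration):

from collections import deque

def maze_route(grid, S, T):
    """Lee-style maze routing: a BFS wave from S labels every reachable
    cell with its distance; the retrace walks from T back to S along
    strictly decreasing labels. Returns a shortest path or None."""
    M, N = len(grid), len(grid[0])
    dist = [[None] * N for _ in range(M)]
    dist[S[0]][S[1]] = 0
    q = deque([S])
    while q:                                   # wave propagation
        r, c = q.popleft()
        if (r, c) == T:
            break
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < M and 0 <= nc < N
                    and grid[nr][nc] == 0 and dist[nr][nc] is None):
                dist[nr][nc] = dist[r][c] + 1
                q.append((nr, nc))
    if dist[T[0]][T[1]] is None:
        return None                            # S and T are not connected
    path = [T]                                 # retrace
    r, c = T
    while (r, c) != S:
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < M and 0 <= nc < N
                    and dist[nr][nc] == dist[r][c] - 1):
                r, c = nr, nc
                break
        path.append((r, c))
    return path[::-1]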

The Subset Sum Problem
Given: a set of n items (with weights) and a knapsack (with capacity W)
Fill the knapsack so as to maximize the total weight.
A greedy algorithm doesn't work here.

The Recursion
OPT(i) = the total weight of the optimal solution for items 1, ..., i
OPT(i) depends not only on items {1, ..., i} but also on the capacity w still available, so we recurse on two variables:
OPT(i, w) =
  0                                            if i = 0 or w = 0
  OPT(i-1, w)                                  if w_i > w
  max{ OPT(i-1, w), w_i + OPT(i-1, w-w_i) }    otherwise
Running time: O(nW)

The Implementation
Subset-Sum(n, w_1, ..., w_n, W)
  initialize M to 0
  for i = 1, 2, ..., n do
    for w = 1, 2, ..., W do
      if w_i > w then
        M[i, w] = M[i-1, w]
      else
        M[i, w] = max{ M[i-1, w], w_i + M[i-1, w-w_i] }
  return M[n, W]
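A direct Python rendering of this table (a sketch; 0-based lists stand in for the 1-based pseudocode indices):

def subset_sum(weights, W):
    """M[i][w] = max total weight achievable with items 1..i and capacity w.
    O(n*W) time and space, matching the pseudocode above."""
    n = len(weights)
    M = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        w_i = weights[i - 1]
        for w in range(1, W + 1):
            if w_i > w:
                M[i][w] = M[i - 1][w]
            else:
                M[i][w] = max(M[i - 1][w], w_i + M[i - 1][w - w_i])
    return M[n][W]

print(subset_sum([3, 5, 8], 10))  # 8: either {3, 5} or {8} reaches weight 8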

Demonstration

The Knapsack Problem
Same as the subset sum problem, but each item also has a value.
Fill the knapsack so as to maximize the total value.
A greedy algorithm doesn't work here.

The Solution
Very similar to the subset sum problem:
OPT(i, w) =
  0                                            if i = 0 or w = 0
  OPT(i-1, w)                                  if w_i > w
  max{ OPT(i-1, w), v_i + OPT(i-1, w-w_i) }    otherwise
The only change is from w_i to v_i in the "take item i" term.
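The same Python sketch with values added; only the term inside the max changes:

def knapsack(items, W):
    """items = list of (weight, value) pairs; returns the maximum value.
    Identical table to subset_sum, with v_i replacing w_i in the max."""
    n = len(items)
    M = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        w_i, v_i = items[i - 1]
        for w in range(1, W + 1):
            if w_i > w:
                M[i][w] = M[i - 1][w]
            else:
                M[i][w] = max(M[i - 1][w], v_i + M[i - 1][w - w_i])
    return M[n][W]

print(knapsack([(2, 3), (3, 4), (4, 5)], 5))  # 7: take the (2,3) and (3,4) items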