Dynamic Programming (II)


Dynamic Programming (II) 2012/11/27

15.1 Assembly-line scheduling

An automobile chassis enters each assembly line, has parts added to it at a number of stations, and a finished auto exits at the end of the line. Each assembly line has n stations, numbered j = 1, 2, ..., n. We denote the jth station on line i (where i is 1 or 2) by Si,j. The jth station on line 1 (S1,j) performs the same function as the jth station on line 2 (S2,j).

The stations were built at different times and with different technologies, however, so the time required at each station varies, even between stations at the same position on the two different lines. We denote the assembly time required at station Si,j by ai,j. As the following figure shows, a chassis enters station 1 of one of the assembly lines and progresses from each station to the next. There is also an entry time ei for the chassis to enter assembly line i and an exit time xi for the completed auto to exit assembly line i.

Figure: a manufacturing problem, finding the fastest way through a factory.

Normally, once a chassis enters an assembly line, it passes through that line only. The time to go from one station to the next within the same assembly line is negligible. Occasionally a special rush order comes in, and the customer wants the automobile to be manufactured as quickly as possible. For rush orders, the chassis still passes through the n stations in order, but the factory manager may switch the partially completed auto from one assembly line to the other after any station.

The time to transfer a chassis away from assembly line i after having gone through station Si,j is ti,j, where i = 1, 2 and j = 1, 2, ..., n-1 (since after the nth station, assembly is complete). The problem is to determine which stations to choose from line 1 and which to choose from line 2 in order to minimize the total time through the factory for one auto.

Figure: an instance of the assembly-line problem with costs.

Step 1: The structure of the fastest way through the factory

The fastest way through station S1,j is either the fastest way through station S1,j-1 and then directly through station S1,j, or the fastest way through station S2,j-1, a transfer from line 2 to line 1, and then through station S1,j. By symmetric reasoning, the fastest way through station S2,j is either the fastest way through station S2,j-1 and then directly through station S2,j, or the fastest way through station S1,j-1, a transfer from line 1 to line 2, and then through station S2,j.

Step 2: A recursive solution
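The recurrence on this slide was an image in the original transcript; the standard form, consistent with the FASTEST-WAY pseudocode below, is:

$$f_1[j] = \begin{cases} e_1 + a_{1,1} & \text{if } j = 1, \\ \min\bigl(f_1[j-1] + a_{1,j},\; f_2[j-1] + t_{2,j-1} + a_{1,j}\bigr) & \text{if } j \ge 2, \end{cases}$$

and symmetrically for $f_2[j]$; the overall fastest time is $f^* = \min(f_1[n] + x_1,\, f_2[n] + x_2)$.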

Step 3: Computing the fastest times

Let ri(j) be the number of references made to fi[j] in a recursive algorithm:
r1(n) = r2(n) = 1
r1(j) = r2(j) = r1(j+1) + r2(j+1)
The total number of references to all fi[j] values is Θ(2^n). We can do much better if we compute the fi[j] values in a different order from the recursive way. Observe that for j ≥ 2, each value of fi[j] depends only on the values of f1[j-1] and f2[j-1].

FASTEST-WAY procedure

FASTEST-WAY(a, t, e, x, n)
 1  f1[1] ← e1 + a1,1
 2  f2[1] ← e2 + a2,1
 3  for j ← 2 to n
 4      do if f1[j-1] + a1,j ≤ f2[j-1] + t2,j-1 + a1,j
 5            then f1[j] ← f1[j-1] + a1,j
 6                 l1[j] ← 1
 7            else f1[j] ← f2[j-1] + t2,j-1 + a1,j
 8                 l1[j] ← 2
 9         if f2[j-1] + a2,j ≤ f1[j-1] + t1,j-1 + a2,j
10            then f2[j] ← f2[j-1] + a2,j
11                 l2[j] ← 2
12            else f2[j] ← f1[j-1] + t1,j-1 + a2,j
13                 l2[j] ← 1
14  if f1[n] + x1 ≤ f2[n] + x2
15     then f* ← f1[n] + x1
16          l* ← 1
17     else f* ← f2[n] + x2
18          l* ← 2

Step 4: Constructing the fastest way through the factory

PRINT-STATIONS(l, n)
1  i ← l*
2  print "line" i ", station" n
3  for j ← n downto 2
4     do i ← li[j]
5        print "line" i ", station" j - 1

Output:
line 1, station 6
line 2, station 5
line 2, station 4
line 1, station 3
line 2, station 2
line 1, station 1
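For concreteness, here is a minimal runnable Python sketch of the two procedures above, using 0-based indexing; the instance data reproduces the textbook example from memory, so treat the numbers as illustrative:

```python
def fastest_way(a, t, e, x, n):
    """a[i][j]: assembly time at station j of line i; t[i][j]: transfer time
    after station j of line i; e, x: entry/exit times (all 0-indexed)."""
    f = [[0] * n for _ in range(2)]   # f[i][j]: fastest time through S_{i,j}
    l = [[0] * n for _ in range(2)]   # l[i][j]: line used at station j-1
    f[0][0] = e[0] + a[0][0]
    f[1][0] = e[1] + a[1][0]
    for j in range(1, n):
        # Reach S_{1,j} by staying on line 1 or transferring from line 2.
        stay, switch = f[0][j-1] + a[0][j], f[1][j-1] + t[1][j-1] + a[0][j]
        f[0][j], l[0][j] = (stay, 0) if stay <= switch else (switch, 1)
        # Symmetrically for S_{2,j}.
        stay, switch = f[1][j-1] + a[1][j], f[0][j-1] + t[0][j-1] + a[1][j]
        f[1][j], l[1][j] = (stay, 1) if stay <= switch else (switch, 0)
    if f[0][n-1] + x[0] <= f[1][n-1] + x[1]:
        return f[0][n-1] + x[0], 0, l
    return f[1][n-1] + x[1], 1, l

def print_stations(l, l_star, n):
    """Walk the line choices backwards, as in PRINT-STATIONS."""
    i = l_star
    print(f"line {i+1}, station {n}")
    for j in range(n - 1, 0, -1):
        i = l[i][j]
        print(f"line {i+1}, station {j}")

# Illustrative instance (shaped like the CLRS example).
a = [[7, 9, 3, 4, 8, 4], [8, 5, 6, 4, 5, 7]]
t = [[2, 3, 1, 3, 4], [2, 1, 2, 2, 1]]
e, x, n = [2, 4], [3, 2], 6
f_star, l_star, l = fastest_way(a, t, e, x, n)
print("fastest time:", f_star)   # 38 for this instance
print_stations(l, l_star, n)
```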

Optimal binary search trees

Figure: two binary search trees for the same key distribution; one has expected search cost 2.75 (optimal!), the other 2.80.

Expected cost

The expected cost of a search in T is (the formula itself was an image in the original; this is the standard expression, where the pi are the key probabilities and the qi the dummy-key probabilities):

$$\mathrm{E}[\text{search cost in } T] = \sum_{i=1}^{n} (\mathrm{depth}_T(k_i)+1)\,p_i + \sum_{i=0}^{n} (\mathrm{depth}_T(d_i)+1)\,q_i = 1 + \sum_{i=1}^{n} \mathrm{depth}_T(k_i)\,p_i + \sum_{i=0}^{n} \mathrm{depth}_T(d_i)\,q_i.$$

For a given set of probabilities, our goal is to construct a binary search tree whose expected search cost is smallest. We call such a tree an optimal binary search tree.

Step 1: The structure of an optimal binary search tree

Consider any subtree of a binary search tree. It must contain keys in a contiguous range ki, ..., kj, for some 1 ≤ i ≤ j ≤ n. In addition, a subtree that contains keys ki, ..., kj must also have as its leaves the dummy keys di-1, ..., dj. If an optimal binary search tree T has a subtree T' containing keys ki, ..., kj, then this subtree T' must be optimal as well for the subproblem with keys ki, ..., kj and dummy keys di-1, ..., dj.

Step 2: A recursive solution

Define $w(i, j) = \sum_{l=i}^{j} p_l + \sum_{l=i-1}^{j} q_l$, which can be computed incrementally as $w[i, j] = w[i, j-1] + p_j + q_j$.

If the optimal tree containing $k_i, \ldots, k_j$ is rooted at $k_r$, then

$$e[i, j] = p_r + (e[i, r-1] + w(i, r-1)) + (e[r+1, j] + w(r+1, j)).$$

Since $w(i, j) = w(i, r-1) + p_r + w(r+1, j)$, this gives the recurrence

$$e[i, j] = \begin{cases} q_{i-1} & \text{if } j = i-1, \\ \min_{i \le r \le j} \{\, e[i, r-1] + e[r+1, j] + w(i, j) \,\} & \text{if } i \le j. \end{cases}$$

Step 3: Computing the expected search cost of an optimal binary search tree

OPTIMAL-BST(p, q, n)
 1  for i ← 1 to n + 1
 2      e[i, i-1] ← qi-1
 3      w[i, i-1] ← qi-1
 4  for l ← 1 to n
 5      for i ← 1 to n - l + 1
 6          j ← i + l - 1
 7          e[i, j] ← ∞
 8          w[i, j] ← w[i, j-1] + pj + qj
 9          for r ← i to j
10              t ← e[i, r-1] + e[r+1, j] + w[i, j]
11              if t < e[i, j]
12                  then e[i, j] ← t
13                       root[i, j] ← r
14  return e and root

The OPTIMAL-BST procedure takes Θ(n³) time.
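A direct Python transcription of OPTIMAL-BST (a sketch; indices follow the pseudocode, with unused 0 slots so that p[1..n] and q[0..n] line up). The sample distribution is the textbook one behind the 2.75-cost tree shown earlier:

```python
import math

def optimal_bst(p, q, n):
    """e[i][j]: expected search cost of an optimal BST on keys k_i..k_j;
    p[1..n] are key probabilities, q[0..n] dummy-key probabilities."""
    e = [[0.0] * (n + 1) for _ in range(n + 2)]
    w = [[0.0] * (n + 1) for _ in range(n + 2)]
    root = [[0] * (n + 1) for _ in range(n + 1)]
    for i in range(1, n + 2):
        e[i][i - 1] = q[i - 1]
        w[i][i - 1] = q[i - 1]
    for l in range(1, n + 1):            # l = size of the key range
        for i in range(1, n - l + 2):
            j = i + l - 1
            e[i][j] = math.inf
            w[i][j] = w[i][j - 1] + p[j] + q[j]
            for r in range(i, j + 1):    # try every key k_r as the root
                t = e[i][r - 1] + e[r + 1][j] + w[i][j]
                if t < e[i][j]:
                    e[i][j] = t
                    root[i][j] = r
    return e, root

# Sample distribution (n = 5): p[0] is a placeholder so p[1..5] match the slides.
p = [0, 0.15, 0.10, 0.05, 0.10, 0.20]
q = [0.05, 0.10, 0.05, 0.05, 0.05, 0.10]
e, root = optimal_bst(p, q, 5)
print(e[1][5])   # expected cost of the optimal tree: 2.75
```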

Figure: the tables e[i, j], w[i, j], and root[i, j] computed by OPTIMAL-BST on the sample key distribution.

0/1 knapsack problem

n items, with weights $w_1, w_2, \ldots, w_n$, values (profits) $v_1, v_2, \ldots, v_n$, and a knapsack of capacity W:

maximize $\sum_{i=1}^{n} v_i x_i$ subject to $\sum_{i=1}^{n} w_i x_i \le W$, with $x_i = 0$ or $1$ for $1 \le i \le n$.

(We let $T = \{i \mid x_i = 1\} \subseteq S = \{1, \ldots, n\}$.)

Brute-force solution

The 0/1 knapsack problem is an optimization problem. We may try all 2^n possible subsets of S to solve it. Question: is any solution better than brute force? Answer: yes, dynamic programming (DP)! We trade space for time.
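A sketch of the brute-force enumeration, for contrast with the DP that follows (the helper name is mine, not from the slides):

```python
from itertools import combinations

def knapsack_brute_force(v, w, W):
    """Try all 2^n subsets: exponential time, but a handy correctness check."""
    n, best = len(v), 0
    for k in range(n + 1):
        for subset in combinations(range(n), k):
            if sum(w[i] for i in subset) <= W:
                best = max(best, sum(v[i] for i in subset))
    return best
```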

Table (array)

We construct an array V[0..n, 0..W]. The entry V[i, w], for 1 ≤ i ≤ n and 0 ≤ w ≤ W, will store the maximum (combined) value of any subset of items 1, 2, …, i of (combined) weight at most w. If we can compute all the entries of this array, then the array entry V[n, W] will contain the maximum value of any subset of items 1, 2, …, n that can fit into the knapsack of capacity W.

Recursive relation

Initial settings: V[0, w] = 0 for 0 ≤ w ≤ W.
V[i, w] = max(V[i-1, w], v_i + V[i-1, w - w_i]) for 1 ≤ i ≤ n and 0 ≤ w ≤ W, provided w_i ≤ w; otherwise, V[i, w] = V[i-1, w].
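A minimal Python sketch of this recurrence, filled bottom-up (the item data is illustrative, not from the slides):

```python
def knapsack_dp(v, w, W):
    """V[i][c]: max value using the first i items with capacity c."""
    n = len(v)
    V = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for c in range(W + 1):
            V[i][c] = V[i - 1][c]                 # skip item i
            if w[i - 1] <= c:                     # or take it, if it fits
                V[i][c] = max(V[i][c], v[i - 1] + V[i - 1][c - w[i - 1]])
    return V[n][W]

# Illustrative data: the optimum here takes items 2 and 3 for value 220.
print(knapsack_dp([60, 100, 120], [10, 20, 30], 50))   # 220
```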

0/1 knapsack problem in multistage representation

n objects, with weights $W_1, W_2, \ldots, W_n$, profits $P_1, P_2, \ldots, P_n$, and capacity M:

maximize $\sum_{i=1}^{n} P_i x_i$ subject to $\sum_{i=1}^{n} W_i x_i \le M$, with $x_i = 0$ or $1$ for $1 \le i \le n$. E.g.:

0/1 knapsack problem in multistage representation. Figure: the multistage graph for the example.

The resource allocation problem

Given m resources and n projects, with profit p(i, j) when i resources are allocated to project j, the goal is to maximize the total profit. E.g. (i = # of resources):

              i = 1   i = 2   i = 3
Project 1       2       8       9
Project 2       5       6       7
Project 3       4       4       4
Project 4       2       4       5

Multistage graph. (i, j): the state where i resources have been allocated to projects 1, 2, …, j. E.g., node H = (3, 2): 3 resources have been allocated to projects 1 and 2. Figure: the resource allocation problem described as a multistage graph.

Multistage graph. Find the longest path from S to T: S → C → H → L → T, with length 8 + 5 = 13:
2 resources allocated to project 1
1 resource allocated to project 2
0 resources allocated to projects 3 and 4
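The same answer falls out of a stage-by-stage DP over the profit table, without building the graph explicitly. A sketch, assuming that allocating 0 resources to a project yields profit 0 (implicit in the multistage graph):

```python
def allocate(profit, m):
    """profit[j][i]: profit of giving i resources (0..m) to project j.
    best[i]: max total profit over the projects seen so far using i resources."""
    best = [0] * (m + 1)
    for row in profit:   # one DP stage per project
        best = [max(best[i - k] + row[k] for k in range(i + 1))
                for i in range(m + 1)]
    return best[m]

# Profit table from the example, with a profit-0 column for 0 resources.
profit = [
    [0, 2, 8, 9],   # project 1
    [0, 5, 6, 7],   # project 2
    [0, 4, 4, 4],   # project 3
    [0, 2, 4, 5],   # project 4
]
print(allocate(profit, 3))   # 13, matching the S -> C -> H -> L -> T path
```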

Comment

If a problem can be described by a multistage graph, then it can be solved by dynamic programming.

Q&A