
1 1 Set # 3 Dr. LEE Heung Wing Joseph Email : majlee@polyu.edu.hk Phone: 2766 6951 Office : HJ639

2 2 Recall Dynamic Programming and 1 || Σ T_j, Algorithm 3.4.4. Dynamic programming procedure: the optimal solution for a job set S starting at time t is determined recursively from the optimal solutions to subproblems defined by job subsets S* ⊆ S with start times t* ≥ t. J(j, l, k) contains all the jobs in the set {j, j+1, ..., l} with processing time ≤ p_k. V(J(j, l, k), t) is the total tardiness of this subset under an optimal sequence when the subset starts at time t.

3 3 Initial conditions: V(∅, t) = 0 and V({j}, t) = max(0, t + p_j - d_j). Recursive condition (jobs numbered in EDD order): V(J(j, l, k), t) = min over δ = 0, 1, ..., l - k' of { V(J(j, k'+δ, k'), t) + max(0, C_k'(δ) - d_k') + V(J(k'+δ+1, l, k'), C_k'(δ)) }, where k' is such that p_k' = max( p_j' | j' ∈ J(j, l, k) ) and C_k'(δ) = t + Σ_{j' ∈ J(j, k'+δ, k')} p_j' + p_k'. The optimal value is obtained as V({1, ..., n}, 0).
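To make the recursion concrete, here is a minimal Python sketch (the function name and the tiny test instance are illustrative, not from the slides). It assumes the jobs are already indexed in EDD order with distinct processing times, and it recurses on explicit job sets, so it illustrates the decomposition but does not attain the O(n^4 Σ p_j) bound, which requires indexing the states by (j, l, k) and t.

```python
from functools import lru_cache

def total_tardiness_dp(p, d):
    """Sketch of the decomposition DP for 1 || sum T_j.

    Assumes jobs are indexed in EDD order (d[0] <= ... <= d[n-1]) with
    distinct processing times. Returns V({1,...,n}, 0), the minimum total tardiness.
    """
    n = len(p)

    @lru_cache(maxsize=None)
    def V(jobs, t):
        # jobs: tuple of job indices (a set J(j, l, k)); t: its start time.
        if not jobs:
            return 0                                   # V(empty, t) = 0
        if len(jobs) == 1:
            j = jobs[0]
            return max(0, t + p[j] - d[j])             # V({j}, t)
        kp = max(jobs, key=lambda i: p[i])             # k': largest processing time
        l = jobs[-1]
        best = float("inf")
        # Job k' completes right after position k' + delta; try every split.
        for split in range(kp, l + 1):
            before = tuple(i for i in jobs if i <= split and i != kp)
            after = tuple(i for i in jobs if i > split)
            C = t + sum(p[i] for i in before) + p[kp]  # completion time of k'
            best = min(best, V(before, t) + max(0, C - d[kp]) + V(after, C))
        return best

    return V(tuple(range(n)), 0)

# Tiny illustrative instance (not from the slides): p = (4, 3, 5), d = (3, 6, 7).
print(total_tardiness_dp([4, 3, 5], [3, 6, 7]))        # -> 7
```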

4 4 Recall Example

5 5 V(J(1, 4, 3), 0) = 0, achieved with the sequences 1, 2, 4 and 2, 1, 4.

6 6

7 7 Rough estimation of the worst-case computation. The worst-case computation time required by this algorithm can "roughly" be established as follows. There are at most O(n³) subsets J(j, l, k): first choose job k out of the n jobs, then choose a pair (j, l) with j < l out of the remaining n - 1 jobs, which can be done in (n-1)(n-2)/2 ways, giving n(n-1)(n-2)/2 subsets. There are at most Σ p_j points in time t. There are therefore at most O(n³ Σ p_j) recursive equations to be solved in the dynamic programming algorithm.

8 8 As each recursive equation takes O(n) time, the overall running time of the algorithm is bounded by O(n⁴ Σ p_j). This bound is polynomial in n and Σ p_j but not in the size of the input, so the algorithm is pseudopolynomial. Suppose there are two instances of 1 || Σ T_j with the same number of jobs n and the same due dates. Does it make sense to say that the one with the larger total processing time Σ p_j (and hence larger makespan) has the larger upper bound on computation time?

9 9 Lemma. Consider the problem 1 || Σ T_j with n jobs. The jobs can be scheduled with zero total tardiness if and only if the EDD schedule results in zero total tardiness. Proof. Since the smallest possible value Σ T_j can take is zero, if the EDD schedule results in zero total tardiness, then these n jobs can be scheduled with zero total tardiness. Conversely, suppose, without loss of generality, that d_1 ≤ d_2 ≤ ... ≤ d_n, and that the jobs can be scheduled so that Σ T_j = 0, i.e. T_j = 0 for all j = 1, 2, ..., n. Let j < k, so d_j ≤ d_k.

10 10 Tardiness of jobs k and j: suppose job k is scheduled before job j. So,

11 11 but, therefore, if we swap jobs j and k, the total tardiness would still be 0.

12 12 So, we can keep swapping any pair of jobs j and k with k > j and k before j in the schedule (so that job j is processed before k) until we have the EDD schedule, without any increase in the (zero) total tardiness. □ Alternatively, we can use Lawler's Algorithm for 1 || h_max and Theorem 3.2.2 to get more insight. Since h_max = max( h_1(C_1), ..., h_n(C_n) ), where the h_j are non-decreasing cost functions, let h_j = T_j = max(C_j - d_j, 0). Thus h_max is the maximum tardiness T_max. It can be shown that Lawler's Algorithm then results in the EDD rule.

13 13 Recall Lawler's Algorithm for 1 || h_max. Step 1. Set J = ∅, J^C = {1, ..., n}, k = n. Step 2. Let j* ∈ J^C be such that h_j*( Σ_{j ∈ J^C} p_j ) = min_{j ∈ J^C} h_j( Σ_{j ∈ J^C} p_j ); place j* in J in the k-th position and delete j* from J^C. Step 3. If J^C = ∅ then STOP; else set k = k - 1 and go to Step 2.
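A minimal Python sketch of these three steps (function name and test instance are illustrative, not from the slides): the sequence is built backwards, and at each step the remaining job that is cheapest when completing at the current total remaining processing time is placed in the last open position.

```python
def lawler_hmax(p, h):
    """Lawler's backward rule for 1 || h_max (sketch, no precedence constraints).

    p: processing times; h: list of nondecreasing cost functions h[j](C_j).
    Returns a sequence of 0-based job indices minimizing max_j h[j](C_j).
    """
    remaining = set(range(len(p)))
    t = sum(p)                      # completion time of whatever job goes last
    sequence = [None] * len(p)
    for k in range(len(p) - 1, -1, -1):
        j_star = min(remaining, key=lambda j: h[j](t))   # Step 2
        sequence[k] = j_star
        remaining.remove(j_star)
        t -= p[j_star]
    return sequence

# With h_j(C) = max(C - d_j, 0) the rule reduces to EDD, as claimed on slide 14:
p, d = [4, 3, 5], [3, 6, 7]                              # hypothetical instance
h = [lambda C, dj=dj: max(C - dj, 0) for dj in d]
print(lawler_hmax(p, h))                                 # -> [0, 1, 2] (EDD order)
```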

14 14 1 || T_max is the special case of 1 | prec | h_max in which there are no precedence constraints and h_j = T_j = max(C_j - d_j, 0). Lawler's algorithm then results in the schedule that orders the jobs in increasing order of their due dates, the earliest due date first (EDD) rule. Thus the EDD rule minimizes the maximum tardiness T_max. Therefore, if some schedule has zero total tardiness Σ T_j = 0, it has zero maximum tardiness T_max = 0; since EDD minimizes T_max, the EDD schedule then also has T_max = 0 and hence zero total tardiness.

15 15 Lemma. Suppose Proof. Let T_j(EDD) be the tardiness of job j under the EDD schedule, T_max(EDD) = max_j { T_j(EDD) }, and let T_j(opt) be the tardiness of job j under the optimal schedule in the sense of 1 || Σ T_j. Let k be a job for which, under EDD, T_k(EDD) = T_max(EDD), and let λ be the last job of the (opt) schedule.

16 16 Three cases to consider. Case I: λ ≤ k. Case II: k < λ, and there exists δ (δ < k) such that, under (opt), job δ is scheduled after job k but before λ; let δ* be the last such job in the (opt) schedule, so that every job scheduled after δ* has a job number larger than k. Case III: k < λ, and every job scheduled after job k in (opt) has a job number larger than k.

17 17 Case I: If λ ≤ k, so that d_λ ≤ d_k, then

18 18 Case II: k < λ, and there exists δ (δ < k) such that, under (opt), job δ is scheduled after job k but before λ. Let δ* be the last such job in the (opt) schedule, so that every job scheduled after δ* has a job number larger than k.

19 19 Case III: k < λ, and every job scheduled after job k in (opt) has a job number larger than k.

20 20 Lemma. Let T_j(EDD) be the tardiness of job j under the EDD schedule, T_max(EDD) = max_j { T_j(EDD) }, and let T_j(opt) be the tardiness of job j under the optimal schedule in the sense of 1 || Σ T_j.

21 21

22 22

23 23

24 24 Lemma. Suppose sequence S minimizes the problem 1 || Σ T_j with processing times p_j and due dates d_j. Then sequence S also minimizes the rescaled scheduling problem with processing times Kp_j and due dates Kd_j, for any positive constant K. Proof. Consider the total tardiness of the rescaled problem: Σ_j max(KC_j - Kd_j, 0) = K Σ_j max(C_j - d_j, 0) = K Σ_j T_j. Clearly, minimizing Σ T_j and minimizing K Σ T_j are equivalent.
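A quick numerical check of the scaling argument on a small hypothetical instance (helper name illustrative):

```python
def total_tardiness(seq, p, d):
    """Total tardiness of a given sequence of 0-based job indices."""
    t, total = 0, 0
    for j in seq:
        t += p[j]
        total += max(0, t - d[j])
    return total

p, d, K = [4, 3, 5], [3, 6, 7], 10
seq = [0, 1, 2]
print(total_tardiness(seq, p, d))                                    # 7
print(total_tardiness(seq, [K * x for x in p], [K * x for x in d]))  # 70 = K * 7
```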

25 25 Lemma. Consider two scheduling problems of minimizing total tardiness. The first problem has processing times p_i and due dates d_i, whereas the second problem has processing times q_i and the same due dates d_i. Let p_i ≤ q_i for all i. Then the optimal total tardiness of the first problem is less than or equal to the optimal total tardiness of the second problem. Proof. Consider the total tardiness of the optimal solution to the second problem:

26 26 Note that, if the optimal sequence for the second problem is applied to the first problem, every completion time can only decrease (since p_i ≤ q_i), and hence every tardiness term max(C_i - d_i, 0) can only decrease; the optimal sequence for the first problem can only do better still.

27 27

28 28

29 29

30 30 Lemma (Exercise 4.11). Consider the problem 1 | d_j = d | Σ E_j + Σ T_j, where E_j = max{d_j - C_j, 0} is the earliness of job j. In an optimal schedule the n jobs are processed without interruption (i.e. there is no unforced idleness between the processing of the jobs). Proof. Suppose sequence S is optimal but there is a time gap between two jobs j and k, where job k is processed immediately after the gap and job j immediately before it. Three cases to consider:

31 31 Case I: the time gap is beyond d. Let J_1 be the set of jobs processed before the time gap, and J_2 be the set of jobs processed after the time gap. Let t_0 be the time at which the first job starts, and t_1 be the length of the time gap.

32 32 Case II: the time gap is before d.

33 33 Case III: the time gap covers (includes) d.

34 34 Case I: the time gap is beyond d. Cost for jobs processed before the time gap; cost for jobs processed after the time gap. Since the time gap is beyond d, reducing t_1 to zero reduces the total cost. So the original sequence S is not optimal.

35 35 Cost for jobs processed after the time gap.

36 36 Case II: the time gap is before d. Cost for jobs processed before the time gap; cost for jobs processed after the time gap. Since the time gap is before d, increasing t_0 to t_0 + t_1 and reducing t_1 to zero reduces the total cost. So the original sequence S is not optimal.

37 37 Cost for jobs processed before the time gap (decreased); cost for jobs processed after the time gap (remains unchanged).

38 38 Case III: the time gap covers (includes) d. Let α be the part of the time gap before d and β the part after d.

39 39 Hence, increasing t_0 to t_0 + α would make the remaining gap lie entirely beyond d; then, as in Case I, reducing the gap to zero reduces the total cost. So the original sequence S is not optimal. By contradiction in all three cases, the original sequence S is not optimal. □

40 40 Lemma. Consider the problem 1 | d_j = d | Σ E_j + Σ T_j. In an optimal schedule for n ≥ 3 there are two sets of jobs: the set of early jobs, J_1 ≠ ∅, and the set of late jobs, J_2 ≠ ∅. Proof. Suppose that in an optimal sequence all the jobs are scheduled early. Let j be the earliest (first) job. Total cost:

41 41 Suppose d - t_0 > 2p_j. Thus |t_0 + p_j - d| > |p_j|. Now re-schedule job j to start at d, with the starting times of all other jobs unchanged. For the remaining jobs, J_1 becomes J_1 \ {j}, and t_0 becomes t_0 + p_j.

42 42 Total cost: So, by contradiction, the original sequence in which all jobs are early is not optimal. What if d - t_0 < 2p_j?

43 43 Similarly, we can show by contradiction that a sequence in which all the jobs are late is not optimal. □

44 44 Lemma 4.2.1. Consider the problem 1 | d_j = d | Σ E_j + Σ T_j. In an optimal schedule, the early jobs, set J_1, are scheduled according to LPT, and the late jobs, set J_2, are scheduled according to SPT. Proof (Exercise 4.12). Observe that the jobs in J_1 do not contribute to Σ T_j and the jobs in J_2 do not contribute to Σ E_j. Let |J_1| be the number of jobs in J_1 and |J_2| the number of jobs in J_2. Also observe that for J_1 the only cost contribution is Σ E_j = Σ max{d_j - C_j, 0} = |J_1| d - Σ_{j ∈ J_1} C_j, since all these jobs are early and d_j = d.

45 45 Similarly, for J_2 the only cost contribution is Σ T_j = Σ max{C_j - d_j, 0} = Σ_{j ∈ J_2} C_j - |J_2| d, since all these jobs are late. Thus, for the late jobs (the jobs in J_2) we try to minimize the total flow time Σ C_j; hence, among the jobs in J_2, we use SPT (Theorem 3.1.1). What is left is to show that among the early jobs (the jobs in J_1) the LPT rule minimizes -Σ C_j, the negative total flow time, i.e. LPT maximizes the total flow time. This is left as an exercise. A small sketch of the resulting LPT/SPT schedule shape follows.
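A minimal sketch (names and instance are illustrative, not from the text) that, for a given split into an early set and a late set, builds the schedule suggested by Lemmas 4.2.1 and 4.2.2, i.e. the early set in LPT order finishing exactly at d and the late set in SPT order starting at d, and returns its cost. It assumes d is large enough for the early block to start at a nonnegative time.

```python
def partition_cost(p, d, early, late):
    """Sum of earliness + tardiness for 1 | d_j = d | sum E_j + sum T_j,
    given a partition into early jobs (LPT, last one finishing at d) and
    late jobs (SPT, starting at d). Assumes d >= sum of early processing times."""
    early = sorted(early, key=lambda j: p[j], reverse=True)   # LPT
    late = sorted(late, key=lambda j: p[j])                   # SPT
    t = d - sum(p[j] for j in early)    # start time of the early block
    cost = 0
    for j in early:
        t += p[j]
        cost += d - t                   # earliness E_j = d - C_j
    for j in late:
        t += p[j]
        cost += t - d                   # tardiness T_j = C_j - d
    return cost

p, d = [2, 5, 3, 4], 9                  # hypothetical instance
print(partition_cost(p, d, early=[1, 2], late=[0, 3]))   # -> 11
```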

46 46 Lemma 4.2.2. Consider 1 | d_j = d | Σ E_j + Σ T_j. There exists an optimal schedule in which one job is completed exactly at time d. Proof. Suppose there is no such optimal schedule. Then in an optimal schedule there is one job that starts its processing before d and completes its processing after d; call this job j*. Let α be the part of j* processed before d and β the part processed after d.

47 47 Let |J_1| be the number of early jobs and |J_2| the number of late jobs. If |J_1| < |J_2|, then shift the entire schedule to the left (by β) so that job j* completes its processing exactly at time d.

48 48 The total tardiness is decreased by β [ |J_2| - 1 ]; the total earliness is increased by β |J_1|. But |J_1| < |J_2|, i.e. |J_1| ≤ |J_2| - 1, thus β |J_1| ≤ β [ |J_2| - 1 ], so the total cost does not increase. If |J_1| > |J_2|, then shift the entire schedule to the right (by α) so that job j* starts its processing exactly at time d.

49 49 The total tardiness is increased by α |J_2|; the total earliness is decreased by α [ |J_1| - 1 ]. But |J_1| > |J_2|, i.e. |J_1| - 1 ≥ |J_2|, thus α [ |J_1| - 1 ] ≥ α |J_2|, so the total cost does not increase. If |J_1| = |J_2|, then there are many optimal schedules, of which only two satisfy the property stated in the Lemma. Why? This is left as an exercise. Assuming

50 50 Assuming

51 51

52 52 Assuming

53 53 The length of the blue band depicts Δ1 and the length of the yellow band depicts Δ2 (which can be negative).

54 54

55 55

56 56

57 57

58 58

59 59

60 60

61 61

62 62

63 63

64 64

65 65 Clearly, Let

66 66 Earliness: decreased by Tardiness: increased by Total cost: decreased by

67 67

68 68

69 69

70 70

71 71

72 72 Sequence-Dependent Setup Problems. Goal: an algorithm which gives an optimal schedule, i.e. one with minimum makespan, under sequence-dependent setup times: 1 | s_jk | C_max. On a single machine with r_j = 0 and no sequence-dependent setup times, the makespan is simply Σ p_j for every sequence. However, solving 1 | s_jk | C_max in general is NP-hard.

73 73 The machine state is described by a single real variable. For job j: to start the job the machine must be in state a_j, and at the completion of the job the machine state is b_j. If job k follows job j, the setup time is s_jk = |a_k - b_j|. This is a Travelling Salesman Problem with n + 1 cities j_0, j_1, …, j_n, j_0; k = φ(j) means the travelling salesman leaves city j for city k.

74 74 Feasible: {0, 1, 2, 3} → {2, 3, 1, 0}, i.e. φ(0) = 2, φ(1) = 3, φ(2) = 1, φ(3) = 0, which forms a single tour 0 → 2 → 1 → 3 → 0. Not feasible: {0, 1, 2, 3} → {2, 1, 3, 0}, which splits into the subtour 0 → 2 → 3 → 0 and the fixed point 1.

75 75 The cost of going from 1 to φ(1) is |a_φ(1) - b_1|. Let A and B be the permutation matrices of the feasible and the infeasible mapping above, respectively. It can easily be shown that AAAA = I, where I is the 4x4 identity matrix. Also, BBB = I, but BBBB = B ≠ I. So, what can you say about testing the feasibility of a permutation mapping? See also "Hamiltonian Circuit" (Definition C.2.8 in Appendix C).
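A small check of these identities, assuming (the slide's definition of A and B was in a figure) that A and B are the 4x4 permutation matrices of the feasible mapping {2, 3, 1, 0} and the infeasible mapping {2, 1, 3, 0}, with P[i, φ(i)] = 1:

```python
import numpy as np

def perm_matrix(phi):
    """Permutation matrix with P[i, phi(i)] = 1."""
    P = np.zeros((len(phi), len(phi)), dtype=int)
    for i, k in enumerate(phi):
        P[i, k] = 1
    return P

A = perm_matrix([2, 3, 1, 0])   # feasible: single 4-cycle
B = perm_matrix([2, 1, 3, 0])   # not feasible: a 3-cycle plus a fixed point
I = np.eye(4, dtype=int)

print(np.array_equal(np.linalg.matrix_power(A, 4), I))   # True:  A^4 = I
print(any(np.array_equal(np.linalg.matrix_power(A, k), I) for k in (1, 2, 3)))  # False
print(np.array_equal(np.linalg.matrix_power(B, 3), I))   # True:  B^3 = I
print(np.array_equal(np.linalg.matrix_power(B, 4), B))   # True:  B^4 = B != I
```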

76 76 In other texts, the setup times s_jk for the TSP are often presented as a matrix, and the diagonal elements are left undefined, since a job cannot be followed by itself. How do we convert such a matrix to our form? |a_2 - b_1| = 2, |a_3 - b_1| = 3, |a_1 - b_2| = 3, |a_3 - b_2| = 2, |a_1 - b_3| = 5, |a_2 - b_3| = 3. We have 6 equations.

77 77 Let b_1 = x. From |a_2 - b_1| = 2, we have a_2 = x+2 or x-2. From |a_3 - b_1| = 3, we have a_3 = x+3 or x-3. If a_2 = x+2, then from |a_2 - b_3| = 3 we have b_3 = x-1 or x+5; if a_2 = x-2, then b_3 = x-5 or x+1. So b_3 = x±1 or x±5. If a_3 = x+3, then from |a_3 - b_2| = 2 we have b_2 = x+1 or x+5; if a_3 = x-3, then b_2 = x-1 or x-5. So b_2 = x±1 or x±5. From |a_1 - b_3| = 5, we have a_1 = x or x±4 or x±6 or x±10. From |a_1 - b_2| = 3, we have a_1 = x±2 or x±4 or x±8. Thus a_1 = x±4. Choose a_1 = x+4, and thus b_2 = x+1, b_3 = x-1, a_2 = x+2, a_3 = x+3.

78 78 To keep all entries positive, we can choose b_1 = x = 2, giving b_1 = 2, b_2 = x+1 = 3, b_3 = x-1 = 1 and a_1 = x+4 = 6, a_2 = x+2 = 4, a_3 = x+3 = 5.
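A quick check that these values reproduce the six given off-diagonal setup times s_jk = |a_k - b_j|:

```python
# b_1 = x, b_2 = x+1, b_3 = x-1 and a_1 = x+4, a_2 = x+2, a_3 = x+3, with x = 2.
b = {1: 2, 2: 3, 3: 1}
a = {1: 6, 2: 4, 3: 5}
for j in b:
    for k in a:
        if j != k:
            print(f"s_{j}{k} = |a_{k} - b_{j}| = {abs(a[k] - b[j])}")
# Prints s_12 = 2, s_13 = 3, s_21 = 3, s_23 = 2, s_31 = 5, s_32 = 3.
```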

79 79 Swap I(j, k) applied to a permutation φ produces another permutation φ′: φ′(k) = φ(j), φ′(j) = φ(k), φ′(l) = φ(l) for l ≠ j, k. Consider the swap I(0, 1): it turns {0, 1, 2, 3} → {2, 1, 3, 0} into {0, 1, 2, 3} → {1, 2, 3, 0}.
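A one-function sketch of the swap operation, reproducing the example just given:

```python
def swap(phi, j, k):
    """Return the permutation obtained from phi by exchanging phi(j) and phi(k)."""
    phi = list(phi)
    phi[j], phi[k] = phi[k], phi[j]
    return phi

print(swap([2, 1, 3, 0], 0, 1))   # [1, 2, 3, 0], as on the slide
```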

80 80 Swap I(j, k) has the effect of creating two subtours out of one if j and k belong to the same subtour. Conversely, if j and k belong to different subtours, it combines those two subtours into one.

81 81 Change in cost due to swap I(j, k). Lemma 4.5.1. If the swap causes two arrows that did not cross earlier to cross, then the cost of the tour increases, and vice versa. The cost change is C_φ I(j, k) = || [b_j, b_k] ∩ [a_φ(j), a_φ(k)] ||.

82 82 Lemma 4.5.2. An optimal permutation mapping φ* is obtained if b_j ≤ b_k implies a_φ*(j) ≤ a_φ*(k). (Illustration: b_1 = 1, b_2 = 3, b_3 = 15 and a_2 = 4, a_1 = 7, a_3 = 16.) If we arrange the b_j in order of size and renumber the jobs so that b_1 ≤ b_2 ≤ ... ≤ b_n, then an optimal permutation mapping φ* is obtained by arranging the a_φ*(j) in order of size as well. The permutation mapping is φ*(j) = k, where k is such that a_k is the j-th smallest of the a's.

83 83 G_φ: the undirected graph whose arcs link the j-th and φ(j)-th nodes, j = 1, …, n. Spanning tree: a minimal set of additional arcs that connects an unconnected graph.

84 84

85 85 Data: b_1 = 1, b_2 = 6, b_3 = 10 and a_φ(1) = 2, a_φ(2) = 4, a_φ(3) = 11. C_φ I(1, 2) = || [1, 6] ∩ [2, 4] || = 2 (4 - 2) = 4. C_φ I(2, 3) = || [6, 10] ∩ [4, 11] || = 2 (10 - 6) = 8. Lower bound: sum of the increases in cost of the individual swaps = 4 + 8 = 12.

86 86 I(1, 2) then I(2, 3): [C_φ I(1, 2)] I(2, 3) = || [6, 10] ∩ [2, 11] || = 2 (10 - 6) = 8; total increase in cost = 4 + 8 = 12. I(2, 3) then I(1, 2): [C_φ I(2, 3)] I(1, 2) = || [1, 6] ∩ [2, 11] || = 2 (6 - 2) = 8, CHANGED! Total increase in cost = 8 + 8 = 16.

87 87 A node is of Type 1 if b_j ≤ a_φ(j) (arrow points up); a node is of Type 2 if b_j > a_φ(j) (arrow points down). A swap is of Type 1 if its lower node is of Type 1, and of Type 2 if its lower node is of Type 2. Swaps of Type 1 arcs can be performed starting from the top and going down, so that the total increase in cost due to the swaps equals the sum of the increases in cost of the individual swaps, as the overlap of intervals does not change. (Figure: individual swaps, both Type 1; two swaps top first, overlap of intervals unchanged; two swaps bottom first, overlap of intervals changed.)

88 88 Swaps of Type 2 arcs can be performed starting from the bottom and going up, so that the total increase in cost due to the swaps equals the sum of the increases in cost of the individual swaps, as the overlap of intervals does not change. (Figure: individual swaps, both Type 2; two swaps bottom first, overlap of intervals unchanged; two swaps top first, overlap of intervals changed.) If swaps of Type 1 are performed in decreasing order of the node indices, and swaps of Type 2 in increasing order of the node indices, then the total increase in cost involved in the swaps equals the sum of the increases in cost of the individual swaps.

89 89 Swaps of Type 1 arcs can be performed before swaps of Type 2 arcs so that the total increase in cost due to the swaps equals the sum of the increases in cost of the individual swaps. (Figure: individual swaps, Type 1 and Type 2; two swaps Type 1 first, overlap of intervals unchanged; two swaps Type 2 first, overlap of intervals changed.) Hence, if the swaps of Type 1 are performed in decreasing order of the node indices, followed by the swaps of Type 2 in increasing order of the node indices, then a single tour is obtained and the total increase in cost involved in the swaps equals the sum of the increases in cost of the individual swaps.

90 90 Algorithm 4.5.5 + Example 4.5.6 (7 jobs). Step 1. Arrange the b_j in order of size and renumber the jobs so that b_1 ≤ b_2 ≤ ... ≤ b_n. Arrange the a_j in order of size. The permutation mapping φ* is defined by φ*(j) = k, where k is such that a_k is the j-th smallest of the a's.

91 91 b_1 = 1, b_2 = 3, b_3 = 15, b_4 = 19, b_5 = 26, b_6 = 31, b_7 = 40; a_1 = 7, a_2 = 4, a_3 = 16, a_4 = 45, a_5 = 22, a_6 = 34, a_7 = 18.

92 92 Step 2. Form an undirected graph with n nodes and undirected arcs connecting the j-th and φ*(j)-th nodes, j = 1, …, n. If the current graph has only one component then STOP; otherwise go to Step 3.

93 93 Step 3. Compute the swap costs C_φ* I(j, j+1) for j = 1, …, n-1: C_φ* I(j, j+1) = 2 max( min(b_{j+1}, a_φ*(j+1)) - max(b_j, a_φ*(j)), 0 ). C_φ* I(1, 2) = 2 max( 3 - 4, 0 ) = 0; C_φ* I(2, 3) = 2 max( 15 - 7, 0 ) = 16; C_φ* I(3, 4) = 2 max( 18 - 16, 0 ) = 4; C_φ* I(4, 5) = 2 max( 22 - 19, 0 ) = 6; C_φ* I(5, 6) = 2 max( 31 - 26, 0 ) = 10; C_φ* I(6, 7) = 2 max( 40 - 34, 0 ) = 12.
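A short Python sketch reproducing Step 1 and the swap costs of Step 3 on this data (variable names are illustrative):

```python
# Jobs already renumbered so that b_1 <= ... <= b_7; 1-based indices via dicts.
b = {1: 1, 2: 3, 3: 15, 4: 19, 5: 26, 6: 31, 7: 40}
a = {1: 7, 2: 4, 3: 16, 4: 45, 5: 22, 6: 34, 7: 18}

# Step 1: pi*(j) = k, where a_k is the j-th smallest of the a's.
order = sorted(a, key=a.get)                  # jobs by increasing a_k
pi_star = {j: order[j - 1] for j in b}        # {1:2, 2:1, 3:3, 4:7, 5:5, 6:6, 7:4}

# Step 3: swap costs
#   C I(j, j+1) = 2 * max( min(b_{j+1}, a_{pi*(j+1)}) - max(b_j, a_{pi*(j)}), 0 ).
for j in range(1, 7):
    c = 2 * max(min(b[j + 1], a[pi_star[j + 1]]) - max(b[j], a[pi_star[j]]), 0)
    print(f"C I({j},{j+1}) = {c}")            # prints 0, 16, 4, 6, 10, 12
```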

94 94 Step 4. Select the smallest value C_φ* I(j, j+1) such that j is in one component and j+1 in another. In case of a tie for smallest, choose any. Insert the undirected arc R_{j, j+1} into the graph. Repeat this step until all the components of the undirected graph are connected. (In the example, the arcs R_{3,4}, R_{4,5}, R_{5,6} and R_{2,3}, with costs 4, 6, 10 and 16, are inserted.)

95 95 Step 5. Divide the arcs added in Step 4 into two groups: those R_{j, j+1} for which b_j ≤ a_φ*(j) go in group 1, those for which b_j > a_φ*(j) go in group 2. Step 6. Find the largest index j_1 such that R_{j_1, j_1+1} is in group 1, then the second largest, and so on, up to j_l, assuming there are l arcs in the group. Find the smallest index k_1 such that R_{k_1, k_1+1} is in group 2, then the second smallest, and so on, up to k_m, assuming there are m arcs in the group. In the example: j_1 = 3, j_2 = 2, k_1 = 4, k_2 = 5.

96 96 Step 7. The optimal tour φ** is constructed by applying the following sequence of swaps to the permutation φ*: φ** = φ* I(j_1, j_1+1) I(j_2, j_2+1) … I(j_l, j_l+1) I(k_1, k_1+1) I(k_2, k_2+1) … I(k_m, k_m+1). In the example: φ** = φ* I(3,4) I(2,3) (Type 1 swaps) I(4,5) I(5,6) (Type 2 swaps).

97 97 φ* I(3,4); then φ* I(3,4) I(2,3) (same b and a values as before).

98 98 φ* I(3,4) I(2,3) I(4,5); then φ** = φ* I(3,4) I(2,3) I(4,5) I(5,6).

99 99 φ** = φ* I(3,4) I(2,3) I(4,5) I(5,6). The optimal tour is 1 → 2 → 7 → 4 → 5 → 6 → 3 → 1. The cost of the tour is 3 + 15 + 5 + 3 + 8 + 15 + 8 = 57.
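Continuing the earlier sketch (it reuses b, a and pi_star from the snippet after Step 3), applying the swaps of Step 7 and evaluating the tour cost Σ_j |a_φ(j) - b_j| reproduces the value 57:

```python
def swap_map(phi, j, k):
    """Exchange phi(j) and phi(k) in a permutation given as a dict."""
    phi = dict(phi)
    phi[j], phi[k] = phi[k], phi[j]
    return phi

phi = dict(pi_star)                            # pi* from the Step 1/3 sketch
for (j, k) in [(3, 4), (2, 3), (4, 5), (5, 6)]:
    phi = swap_map(phi, j, k)

print(phi)                                     # {1: 2, 2: 7, 3: 1, 4: 5, 5: 6, 6: 3, 7: 4}
print(sum(abs(a[phi[j]] - b[j]) for j in b))   # 57
```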

100 100 Summary: * For 1 | s_jk | C_max with s_jk = |a_k - b_j|, a polynomial-time algorithm is given. * The general 1 | s_jk | C_max problem is NP-hard.

