1 Set # 3. Dr. LEE Heung Wing Joseph. Phone: 2766 6951. Office: HJ639.



2 Recall Dynamic Programming and the 1 || Σ T_j Algorithm
Dynamic programming procedure: the optimal solution for some job set S starting at time t is determined recursively from the optimal solutions to subproblems defined by job subsets S* ⊆ S with start times t* ≥ t.
J(j, l, k) contains all the jobs in the set {j, j+1, ..., l} with processing time ≤ p_k.
V(J(j, l, k), t) is the total tardiness of this subset under an optimal sequence when the subset starts at time t.

3 Initial conditions: V(∅, t) = 0, V({j}, t) = max(0, t + p_j − d_j).
Recursive conditions:
V(J(j, l, k), t) = min over δ of [ V(J(j, k'+δ, k'), t) + max(0, C_{k'}(δ) − d_{k'}) + V(J(k'+δ+1, l, k'), C_{k'}(δ)) ]
where k' is such that p_{k'} = max( p_{j'} | j' ∈ J(j, l, k) ), and C_{k'}(δ) = t + Σ_{j' ∈ J(j, k'+δ, k')} p_{j'} + p_{k'} is the completion time of job k' when the jobs of J(j, k'+δ, k') precede it.
The optimal value is obtained as V({1, ..., n}, 0).
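Before trusting the recursion, it helps to be able to evaluate the objective directly. A minimal sketch (with made-up processing times and due dates, not the lecture's example data): total tardiness of a fixed sequence, plus a brute-force search that is only viable for tiny n, which is exactly why the dynamic program matters.

```python
from itertools import permutations

def total_tardiness(seq, p, d):
    """Total tardiness of the given job sequence (0-indexed jobs)."""
    t, total = 0, 0
    for j in seq:
        t += p[j]                     # completion time C_j of job j
        total += max(0, t - d[j])     # tardiness T_j = max(0, C_j - d_j)
    return total

# Hypothetical 4-job instance (illustration only).
p = [4, 2, 6, 3]
d = [5, 6, 8, 14]

# Brute force over all n! sequences; the DP avoids this blow-up.
best = min(permutations(range(4)), key=lambda s: total_tardiness(s, p, d))
print(best, total_tardiness(best, p, d))   # -> (0, 1, 2, 3) 5
```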

4 Recall Example

5 V(J(1, 4, 3), 0) = 0, achieved with the sequences 1, 2, 4 and 2, 1, 4.

6

7 Rough estimation of the worst-case computation
The worst-case computation time required by this algorithm can be roughly established as follows:
There are at most O(n^3) subsets J(j, l, k). { Choose the index k out of the n jobs first. Then choose a pair (j, l) with j < l out of the remaining n − 1 jobs; there are (n − 1)(n − 2)/2 ways. So, at most n(n − 1)(n − 2)/2 subsets. }
There are at most Σ p_j points in time t.
There are therefore at most O(n^3 Σ p_j) recursive equations to be solved in the dynamic programming algorithm.

8 As each recursive equation takes O(n) time, the overall running time of the algorithm is bounded by O(n^4 Σ p_j). This bound is polynomial in n and Σ p_j, i.e. pseudopolynomial: it depends on the magnitudes of the processing times, not only on the number of jobs.
Suppose there are two scheduling problems 1 || Σ T_j with the same number of jobs n and the same due dates. Does it make sense to say that the one with the larger total processing time Σ p_j has a larger upper bound on the computation time?

9 Lemma: Consider the problem 1 || Σ T_j with n jobs. The jobs can be scheduled with zero total tardiness if and only if the EDD schedule results in zero total tardiness.
Proof: Since the smallest possible value Σ T_j can take is zero, if the EDD schedule results in zero total tardiness, then these n jobs can be scheduled with zero total tardiness.
Conversely, suppose, without loss of generality, that d_1 ≤ d_2 ≤ ... ≤ d_n, and that the jobs can be scheduled so that Σ T_j = 0, i.e. T_j = 0 for all j = 1, 2, ..., n. Let j < k, so d_j ≤ d_k.

10 Tardiness of jobs k and j: T_k = max(C_k − d_k, 0) = 0 and T_j = max(C_j − d_j, 0) = 0. Suppose job k is scheduled before job j. So C_k ≤ C_j ≤ d_j,

11 but d_j ≤ d_k. Therefore, if we swap jobs j and k, job k now completes at the old C_j ≤ d_j ≤ d_k, job j completes no later than before, and the total tardiness is still 0.

12 So, we can keep swapping any pair of jobs j and k with k > j and k before j in the schedule (so that job j is processed before k) until we have the EDD schedule, without any increase in the (zero) total tardiness. □
Alternatively, we can use Lawler's Algorithm for 1 || h_max and its accompanying theorem to get more insight. Since h_max = max(h_1(C_1), ..., h_n(C_n)), where the h_j are non-decreasing cost functions, let h_j = T_j = max(C_j − d_j, 0). Thus, h_max is the maximum tardiness T_max. It can be shown that Lawler's Algorithm then results in the EDD rule.
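The lemma yields a simple feasibility test, sketched below with hypothetical data: sort the jobs by due date and check whether any job is tardy under EDD.

```python
def edd_zero_tardiness(p, d):
    """True iff the jobs can be scheduled with zero total tardiness.

    By the lemma, it suffices to test the EDD (earliest due date) schedule.
    """
    t = 0
    for pj, dj in sorted(zip(p, d), key=lambda job: job[1]):  # EDD order
        t += pj                 # completion time of this job under EDD
        if t > dj:              # job is tardy under EDD
            return False
    return True

print(edd_zero_tardiness([2, 3, 4], [2, 5, 9]))   # -> True
print(edd_zero_tardiness([2, 3, 4], [2, 4, 9]))   # -> False (second job tardy)
```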

13 Recall Lawler's Algorithm for 1 || h_max
Step 1. J = ∅, J^C = {1, ..., n}, k = n.
Step 2. Let j* be such that h_{j*}( Σ_{i ∈ J^C} p_i ) = min_{j ∈ J^C} h_j( Σ_{i ∈ J^C} p_i ). Place j* in J in the k-th order position. Delete j* from J^C.
Step 3. If J^C = ∅ then STOP; else set k = k − 1 and go to Step 2.
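The backward procedure above can be sketched as follows (a minimal version without precedence constraints; the cost functions h_j are passed in as callables, and the instance data at the bottom is hypothetical):

```python
def lawler_hmax(p, h):
    """Lawler's backward algorithm for 1 || h_max.

    p: processing times; h: list of non-decreasing cost functions h[j](C).
    Returns a job sequence minimizing max_j h_j(C_j).
    """
    n = len(p)
    jc = set(range(n))          # J^C: jobs not yet placed
    makespan = sum(p)           # completion time of the k-th position
    seq = [None] * n
    for k in range(n - 1, -1, -1):
        # j* minimizes h_j evaluated at the current makespan
        j_star = min(jc, key=lambda j: h[j](makespan))
        seq[k] = j_star         # place j* in the k-th order position
        jc.remove(j_star)
        makespan -= p[j_star]
    return seq

# With h_j = T_j = max(C - d_j, 0), Lawler reduces to the EDD rule.
p = [3, 2, 4]
d = [9, 4, 6]
print(lawler_hmax(p, [lambda C, dj=dj: max(C - dj, 0) for dj in d]))  # -> [1, 2, 0]
```

The returned sequence [1, 2, 0] is exactly the EDD order of the due dates (4, 6, 9), in line with the remark on the previous slide.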

14 1 || T_max is the special case of 1 | prec | h_max where h_j = T_j = max(C_j − d_j, 0) and the precedence constraints are empty. Lawler's algorithm then results in the schedule that orders jobs in increasing order of their due dates: the earliest due date first (EDD) rule. Thus, the EDD rule minimizes the maximum tardiness T_max.
Therefore, if zero total tardiness Σ T_j = 0 is achievable, then zero maximum tardiness T_max = 0 is achievable, and since EDD minimizes T_max, the EDD rule results in zero tardiness.

15 Lemma: Suppose
Proof: Let T_j(EDD) be the tardiness of job j under the EDD schedule, T_max(EDD) = max_j {T_j(EDD)}, and let T_j(opt) be the tardiness of job j under the optimal schedule in the sense of 1 || Σ T_j. Let k be the job that, under EDD, attains T_k(EDD) = T_max(EDD). Let λ be the last job of the (opt) schedule.

16 Three cases to consider:
Case I: λ ≤ k.
Case II: k < λ, and there exists δ (δ < k) such that, under (opt), job δ is scheduled after job k but before λ. Let δ* be the last such job in the (opt) schedule, so that no other job with a job number larger than k is scheduled after δ*.
Case III: k < λ, and no job with a job number larger than k is scheduled after job k in (opt).

17 Case I: If λ ≤ k, so d_λ ≤ d_k, then

18 Case II: k < λ, and there exists δ (δ < k) such that, under (opt), job δ is scheduled after job k but before λ. Let δ* be the last such job in the (opt) schedule, so that no other job with a job number larger than k is scheduled after δ*.

19 Case III: k < λ, and no job with a job number larger than k is scheduled after job k in (opt).

20 Lemma: Let T_j(EDD) be the tardiness of job j under the EDD schedule, T_max(EDD) = max_j {T_j(EDD)}, and let T_j(opt) be the tardiness of job j under the optimal schedule in the sense of 1 || Σ T_j.

21

22

23

24 Lemma: Suppose sequence S minimizes the problem 1 || Σ T_j with processing times p_j and due dates d_j. Then sequence S also minimizes the rescaled scheduling problem with processing times Kp_j and due dates Kd_j, for any positive constant K.
Proof: Consider the total tardiness of the rescaled problem: Σ max(KC_j − Kd_j, 0) = K Σ max(C_j − d_j, 0). Clearly, minimizing the total tardiness and minimizing K times the total tardiness are equivalent.

25 Lemma: Consider two scheduling problems of minimizing total tardiness. The first problem has processing times p_i and due dates d_i, whereas the second problem has processing times q_i and due dates d_i. Let p_i ≤ q_i for all i. Then the optimal total tardiness of the first problem is less than or equal to the optimal total tardiness of the second problem.
Proof: Consider the total tardiness of an optimal solution to the second problem:

26 Note that,

27

28

29

30 Lemma (Exercise 4.11): Consider the problem 1 | d_j = d | Σ E_j + Σ T_j, where E_j = max{d_j − C_j, 0} is the earliness of job j. In an optimal schedule, the n jobs are processed without interruption, i.e. there is no unforced idleness between the processing of the jobs.
Proof: Suppose sequence S is optimal but there is a processing gap between two jobs j and k, where job k is processed immediately after the time gap that follows job j. Three cases to consider:

31 Case I: The time gap is beyond d.
Let J_1 be the set of jobs processed before the time gap, and J_2 be the set of jobs processed after the time gap. Let t_0 be the time at which the first job starts, and t_1 be the length of the time gap.

32 Case II: The time gap is before d.

33 Case III: The time gap covers (includes) d.

34 Case I: The time gap is beyond d.
Cost for jobs processed before the time gap:
Cost for jobs processed after the time gap:
Since the time gap is beyond d, reducing t_1 to zero reduces the total cost. So the original sequence S is not optimal.

35 Cost for jobs processed after the time gap:

36 Case II: The time gap is before d.
Cost for jobs processed before the time gap:
Cost for jobs processed after the time gap:
Since the time gap is before d, increasing t_0 to (t_0 + t_1) and reducing t_1 to zero will reduce the total cost. So the original sequence S is not optimal.

37 Cost for jobs processed before the time gap (decreased); cost for jobs processed after the time gap (remains unchanged).

38 Case III: The time gap covers (includes) d, with a part α of the gap before d and a part β after d.

39 Therefore, increasing t_0 to (t_0 + α) and reducing t_1 to zero will reduce the total cost. So the original sequence S is not optimal.
By the contradiction reached in all three cases, a sequence with a time gap is not optimal. □

40 Lemma: Consider the problem 1 | d_j = d | Σ E_j + Σ T_j. In an optimal schedule with n ≥ 3 jobs, there are two non-empty sets of jobs: the set of early jobs J_1 ≠ ∅ and the set of late jobs J_2 ≠ ∅.
Proof: Suppose that in an optimal sequence all the jobs are scheduled early. Let j be the earliest job. Total cost: Σ E_j = Σ (d − C_j), since every job is early.

41 Suppose d − t_0 > 2p_j. Thus, |t_0 + p_j − d| > |p_j|. Now re-schedule job j to start at d, with the starting times of all other jobs unchanged. So, for the remaining jobs, J_1 becomes J_1 \ {j}, and t_0 becomes t_0 + p_j.

42 Total cost: job j previously contributed an earliness of d − t_0 − p_j > p_j; after the re-scheduling it contributes a tardiness of only p_j, and no other cost changes. So the total cost decreases, and by contradiction the original sequence, in which all jobs are early, is not optimal. What if d − t_0 < 2p_j?

43 Similarly, we can show by contradiction that a sequence in which all jobs are late is not optimal. □

44 Lemma: Consider the problem 1 | d_j = d | Σ E_j + Σ T_j. In an optimal schedule, the early jobs, set J_1, are scheduled according to LPT, and the late jobs, set J_2, are scheduled according to SPT.
Proof (Exercise 4.12): Observe that the jobs in J_1 do not contribute to Σ T_j and the jobs in J_2 do not contribute to Σ E_j. Let |J_1| be the number of jobs in J_1 and |J_2| the number of jobs in J_2. Also observe that for J_1, the only cost contribution is Σ E_j = Σ max{d_j − C_j, 0} = |J_1| d − Σ C_j, since all of these jobs are early.

45 Similarly, for J_2, the only cost contribution is Σ T_j = Σ max{C_j − d_j, 0} = Σ C_j − |J_2| d, since all of these jobs are late. Thus, for the late jobs (all jobs in J_2) we try to minimize the total flow time Σ C_j; hence, among the jobs in J_2, we use SPT (Theorem 3.1.1). What is left is to show that, among the early jobs (all jobs in J_1), the LPT rule minimizes −Σ C_j, the negative total flow time, i.e. LPT maximizes the total flow time. This is left as an exercise.
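The structure above (LPT for the early set, SPT for the late set, with the last early job completing at d) can be sketched as follows. The partition into J_1 and J_2 is taken as given, and the 4-job instance at the bottom is hypothetical:

```python
def early_tardy_cost(J1, J2, p, d):
    """Cost of a schedule for 1 | d_j = d | sum E_j + sum T_j, given a partition:
    jobs in J1 run in LPT order with the last of them completing at d,
    jobs in J2 run in SPT order immediately after time d."""
    early = sorted(J1, key=lambda j: -p[j])   # LPT for the early set
    late = sorted(J2, key=lambda j: p[j])     # SPT for the late set
    c = d - sum(p[j] for j in early)          # start time of the first early job
    cost = 0
    for j in early:
        c += p[j]
        cost += d - c                          # earliness E_j = d - C_j
    for j in late:
        c += p[j]
        cost += c - d                          # tardiness T_j = C_j - d
    return cost

# Hypothetical instance: jobs 0, 1 early, jobs 2, 3 late, common due date 10.
print(early_tardy_cost([0, 1], [2, 3], [4, 2, 3, 1], 10))   # -> 7
```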

46 Lemma: Consider 1 | d_j = d | Σ E_j + Σ T_j. There exists an optimal schedule in which one job completes exactly at time d.
Proof: Suppose there is no such optimal schedule. Then there exists one job that starts its processing before d and completes its processing after d. Call this job j*.

47 Let |J_1| be the number of early jobs and |J_2| the number of late jobs. If |J_1| < |J_2|, then shift the entire schedule to the left so that job j* completes its processing exactly at time d.

48 The total tardiness decreases by β (|J_2| − 1) and the total earliness increases by β |J_1|. But |J_1| < |J_2|, i.e. |J_1| ≤ |J_2| − 1; thus β |J_1| ≤ β (|J_2| − 1), so the total cost does not increase. If |J_1| > |J_2|, then shift the entire schedule to the right so that job j* starts its processing exactly at time d.

49 The total tardiness increases by α |J_2| and the total earliness decreases by α (|J_1| − 1). But |J_1| > |J_2|, i.e. |J_1| − 1 ≥ |J_2|; thus α (|J_1| − 1) ≥ α |J_2|, so the total cost does not increase. If |J_1| = |J_2|, then there are many optimal schedules, of which only two satisfy the property stated in the Lemma. Why? This is left as an exercise.

50 Assuming

51

52 Assuming

53 The length of the blue band depicts the first quantity and the length of the yellow band the second; these quantities can be negative.

54

55 d

56

57

58

59 j+1 j

60

61

62

63

64

65 Clearly, Let

66 Earliness: decreased. Tardiness: increased. Total cost: decreased.

67

68

69

70

71

72 Sequence-Dependent Setup Problems
An algorithm which gives an optimal schedule with minimum makespan under sequence-dependent setup times: 1 | s_jk | C_max.
On a single machine with r_j = 0 and no sequence-dependent setup times, the makespan is the same for every sequence. However, solving 1 | s_jk | C_max in general is NP-hard.

73 The state of the machine is described by a single real variable. For each job j: to start the job, the machine must be in state a_j; at the completion of the job, the machine state is b_j. If job k follows job j, the setup time is s_jk = |a_k − b_j|.
This is a Travelling Salesman Problem with n + 1 cities j_0, j_1, ..., j_n (returning to j_0): k = π(j) means the travelling salesman leaves city j for city k.

74 Feasible: {0, 1, 2, 3} → {2, 3, 1, 0}, i.e. π(0) = 2, π(1) = 3, π(2) = 1, π(3) = 0: a single tour 0 → 2 → 1 → 3 → 0.
Not feasible: {0, 1, 2, 3} → {2, 1, 3, 0}: this splits into the subtours 0 → 2 → 3 → 0 and 1 → 1.
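A permutation is feasible as a tour exactly when it consists of one cycle visiting every city. A small sketch of that check, using the two examples above:

```python
def is_single_tour(perm):
    """True iff the permutation perm (as a list) is one cycle through
    every city, i.e. a feasible travelling-salesman tour."""
    n = len(perm)
    seen, city = 1, perm[0]
    while city != 0:            # follow the tour starting from city 0
        seen += 1
        city = perm[city]
    return seen == n            # did the cycle through 0 cover all cities?

print(is_single_tour([2, 3, 1, 0]))   # -> True  (0 -> 2 -> 1 -> 3 -> 0)
print(is_single_tour([2, 1, 3, 0]))   # -> False (subtour 1 -> 1)
```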

75 The cost of going from city 1 to city π(1) is |a_π(1) − b_1|.
Let A and B be the permutation matrices of the feasible and the infeasible mapping on the previous slide, respectively. It can easily be shown that AAAA = I, where I is the 4×4 identity matrix. Also, BBB = I, but BBBB = B ≠ I. So, what can you say about testing the feasibility of a permutation mapping? Also see "Hamiltonian Circuit" (Definition C.2.8 in Appendix C).

76 In some other texts, the setup times s_jk for the TSP are presented in the form of a matrix. Moreover, the diagonal elements of the matrix are often undefined, since the same job cannot be repeated. How do we convert such a matrix to our form?
|a_2 − b_1| = 2, |a_3 − b_1| = 3, |a_1 − b_2| = 3, |a_3 − b_2| = 2, |a_1 − b_3| = 5, |a_2 − b_3| = 3.
We have 6 equations.

77 Let b_1 = x.
From |a_2 − b_1| = 2, we have a_2 = x + 2 or x − 2.
If a_2 = x + 2, from |a_2 − b_3| = 3, we have b_3 = x − 1 or x + 5.
If a_2 = x − 2, from |a_2 − b_3| = 3, we have b_3 = x − 5 or x + 1.
So, b_3 = x ± 1 or x ± 5.
From |a_3 − b_1| = 3, we have a_3 = x + 3 or x − 3.
If a_3 = x + 3, from |a_3 − b_2| = 2, we have b_2 = x + 1 or x + 5.
If a_3 = x − 3, from |a_3 − b_2| = 2, we have b_2 = x − 1 or x − 5.
So, b_2 = x ± 1 or x ± 5.
From |a_1 − b_3| = 5, we have a_1 = x or x ± 4 or x ± 6 or x ± 10.
From |a_1 − b_2| = 3, we have a_1 = x ± 2 or x ± 4 or x ± 8.
Thus, a_1 = x ± 4.
Choose a_1 = x + 4; then b_2 = x + 1, b_3 = x − 1, a_2 = x + 2, a_3 = x + 3.

78 To keep all entries positive, we can choose b_1 = x = 2. Then b_2 = x + 1 = 3, b_3 = x − 1 = 1, a_1 = x + 4 = 6, a_2 = x + 2 = 4, a_3 = x + 3 = 5.
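The reconstructed machine states can be checked against all six setup-time equations at once; a quick verification with x = 2:

```python
# Machine states reconstructed above (x = 2).
b = {1: 2, 2: 3, 3: 1}            # b_1 = x, b_2 = x+1, b_3 = x-1
a = {1: 6, 2: 4, 3: 5}            # a_1 = x+4, a_2 = x+2, a_3 = x+3

# s_jk = |a_k - b_j| for j != k (diagonal undefined: a job cannot repeat).
s = {(j, k): abs(a[k] - b[j]) for j in b for k in a if j != k}
print(s)
```

The printed dictionary reproduces the six given values, e.g. s[(1, 2)] = |4 − 2| = 2 and s[(3, 1)] = |6 − 1| = 5.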

79 A swap I(j, k) applied to a permutation π produces another permutation π':
π'(k) = π(j), π'(j) = π(k), π'(l) = π(l) for l ≠ j, k.
Consider the swap I(0, 1): {0, 1, 2, 3} → {2, 1, 3, 0} becomes {0, 1, 2, 3} → {1, 2, 3, 0}.

80 Swap I(j, k) has the effect of creating two subtours out of one if j and k belong to the same subtour. Conversely, it combines the two subtours to which j and k belong otherwise.
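This splitting/merging behaviour can be sketched directly, using the I(0, 1) example from the previous slide:

```python
def swap(perm, j, k):
    """Apply swap I(j, k): exchange the images of j and k."""
    out = list(perm)
    out[j], out[k] = out[k], out[j]
    return out

def cycle_count(perm):
    """Number of cycles (subtours) in the permutation."""
    unseen, cycles = set(range(len(perm))), 0
    while unseen:
        c = unseen.pop()            # start a new cycle
        cycles += 1
        while perm[c] in unseen:    # follow it until it closes
            c = perm[c]
            unseen.remove(c)
    return cycles

pi = [2, 1, 3, 0]                   # two subtours: 0 -> 2 -> 3 -> 0 and 1 -> 1
print(cycle_count(pi))              # -> 2
print(cycle_count(swap(pi, 0, 1)))  # I(0, 1) merges them into one tour -> 1
```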

81 Change in cost due to swap I(j, k): CπI(j, k) = || [b_j, b_k] ∩ [a_π(j), a_π(k)] ||.
Lemma: If the swap causes two arrows that did not cross earlier to cross, then the cost of the tour increases, and vice versa.

82 Lemma: An optimal permutation mapping π* is obtained if b_j ≤ b_k implies a_π*(j) ≤ a_π*(k).
(Figure example: b_1 = 1, b_2 = 3, b_3 = 15 matched with a_2 = 4, a_1 = 7, a_3 = 16.)
If we arrange the b_j in order of size, renumber the jobs so that b_1 ≤ b_2 ≤ ... ≤ b_n, and then arrange the a's in order of size, an optimal permutation mapping π* is obtained: π*(j) = k, where k is such that a_k is the j-th smallest of the a's.

83 G_π: the undirected graph in which arcs link the j-th and π(j)-th nodes, j = 1, ..., n.
Spanning tree: a minimal set of additional arcs that connects a disconnected graph.

85 Example: b_1 = 1, b_2 = 6, b_3 = 10; a_π(1) = 2, a_π(2) = 4, a_π(3) = 11.
CπI(1, 2) = || [1, 6] ∩ [2, 4] || = 2 (4 − 2) = 4
CπI(2, 3) = || [6, 10] ∩ [4, 11] || = 2 (10 − 6) = 8
Lower bound, the sum of the increases in cost of the individual swaps: 4 + 8 = 12.

86 I(1, 2) then I(2, 3): [CπI(1, 2)] I(2, 3) = || [6, 10] ∩ [2, 11] || = 2 (10 − 6) = 8. Total increase in cost = 4 + 8 = 12.
I(2, 3) then I(1, 2): [CπI(2, 3)] I(1, 2) = || [1, 6] ∩ [2, 11] || = 2 (6 − 2) = 8. CHANGED! Total increase in cost = 8 + 8 = 16.

87 A node is of Type 1 if b_j ≤ a_π(j) (arrow points up). A node is of Type 2 if b_j > a_π(j) (arrow points down). A swap is of Type 1 if its lower node is of Type 1; a swap is of Type 2 if its lower node is of Type 2.
Swaps of Type 1 arcs can be performed starting from the top going down, so that the total increase in cost due to the swaps equals the sum of the increases of the individual swaps, as the overlap of intervals does not change.
(Figures: individual swaps, both Type 1; two swaps top first, overlap of intervals unchanged; two swaps bottom first, overlap of intervals changed.)

88 Swaps of Type 2 arcs can be performed starting from the bottom going up, so that the total increase in cost due to the swaps equals the sum of the increases of the individual swaps, as the overlap of intervals does not change.
(Figures: individual swaps, both Type 2; two swaps bottom first, overlap unchanged; two swaps top first, overlap changed.)
If swaps of Type 1 are performed in decreasing order of the node indices, and swaps of Type 2 in increasing order of the node indices, then the total increase in cost of the swaps equals the sum of the increases of the individual swaps.

89 If swaps of Type 1 are performed in decreasing order of the node indices, followed by swaps of Type 2 in increasing order of the node indices, then a single tour is obtained, with the total increase in cost equal to the sum of the increases of the individual swaps.
Swaps of Type 1 arcs can be performed before swaps of Type 2 arcs, so that the total increase in cost due to the swaps equals the sum of the increases of the individual swaps.
(Figures: individual swaps, Type 1 and Type 2; two swaps Type 1 first, overlap unchanged; two swaps Type 2 first, overlap changed.)

90 Algorithm Example (7 jobs)
Step 1. Arrange the b_j in order of size and renumber the jobs so that b_1 ≤ b_2 ≤ ... ≤ b_n. Arrange the a_j in order of size. The permutation mapping π* is defined by π*(j) = k, k being such that a_k is the j-th smallest of the a's.

91 b_1 = 1, b_2 = 3, b_3 = 15, b_4 = 19, b_5 = 26, b_6 = 31, b_7 = 40;
a_2 = 4, a_1 = 7, a_3 = 16, a_7 = 18, a_5 = 22, a_6 = 34, a_4 = 45 (in increasing order), so π* = (2, 1, 3, 7, 5, 6, 4).

92 Step 2. Form an undirected graph with n nodes and undirected arcs connecting the j-th and π*(j)-th nodes, j = 1, ..., n. If the current graph has only one component, then STOP; otherwise go to Step 3.

93 Step 3. Compute the swap costs Cπ*I(j, j+1) for j = 1, ..., n − 1:
Cπ*I(j, j+1) = 2 max( min(b_{j+1}, a_π*(j+1)) − max(b_j, a_π*(j)), 0 )
Cπ*I(1, 2) = 2 max( (3 − 4), 0 ) = 0
Cπ*I(2, 3) = 2 max( (15 − 7), 0 ) = 16
Cπ*I(3, 4) = 2 max( (18 − 16), 0 ) = 4
Cπ*I(4, 5) = 2 max( (22 − 19), 0 ) = 6
Cπ*I(5, 6) = 2 max( (31 − 26), 0 ) = 10
Cπ*I(6, 7) = 2 max( (40 − 34), 0 ) = 12
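Step 3 on the 7-job example can be reproduced as follows (b already renumbered so it is sorted, and a_π*(j) the j-th smallest of the a's; 0-indexed here):

```python
# 7-job example data from the slides (0-indexed).
b    = [1, 3, 15, 19, 26, 31, 40]    # b_1 <= ... <= b_7
a_pi = [4, 7, 16, 18, 22, 34, 45]    # a_pi*[j] = j-th smallest of the a's

def swap_cost(j):
    """C_pi* I(j, j+1) = 2 max( min(b_{j+1}, a_pi*(j+1)) - max(b_j, a_pi*(j)), 0 )."""
    return 2 * max(min(b[j + 1], a_pi[j + 1]) - max(b[j], a_pi[j]), 0)

print([swap_cost(j) for j in range(6)])   # -> [0, 16, 4, 6, 10, 12]
```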

94 Step 4. Select the smallest value C  * I(j, j+1) such that j is in one component and j+1 in another. In case of a tie for smallest, choose any. Insert the undirected arc R j, j+1 into the graph. Repeat this step until all the components in the undirected graph are connected

95 Step 5. Divide the arcs added in Step 4 into two groups: those R_{j, j+1} for which b_j ≤ a_π(j) go in group 1, and those for which b_j > a_π(j) go in group 2.
Step 6. Find the largest index j_1 such that R_{j_1, j_1+1} is in group 1, then the second largest, and so on, up to j_l, assuming there are l elements in the group. Find the smallest index k_1 such that R_{k_1, k_1+1} is in group 2, then the second smallest, and so on, up to k_m, assuming there are m elements in the group.
In the example: j_1 = 3, j_2 = 2, k_1 = 4, k_2 = 5.

96 Step 7. The optimal tour π** is constructed by applying the following sequence of swaps to the permutation π*:
π** = π* I(j_1, j_1+1) I(j_2, j_2+1) … I(j_l, j_l+1) I(k_1, k_1+1) I(k_2, k_2+1) … I(k_m, k_m+1)
In the example: π** = π* I(3,4) I(2,3) I(4,5) I(5,6), where I(3,4) and I(2,3) are the Type 1 swaps and I(4,5) and I(5,6) are the Type 2 swaps.

97 (Figures: the matchings of the b_j and a_π(j) after applying π* I(3,4), and then after π* I(3,4) I(2,3).)

98 (Figures: the matchings after π* I(3,4) I(2,3) I(4,5), and the final π** = π* I(3,4) I(2,3) I(4,5) I(5,6).)

99 π** = π* I(3,4) I(2,3) I(4,5) I(5,6)
The optimal tour is: 1 → 2 → 7 → 4 → 5 → 6 → 3 → 1.
The cost of the tour is 3 + 15 + 5 + 3 + 8 + 15 + 8 = 57.
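The tour cost can be verified directly from the job data, summing s_jk = |a_k − b_j| along the tour:

```python
# Job data from the 7-job example (jobs renumbered so b_1 <= ... <= b_7).
b = {1: 1, 2: 3, 3: 15, 4: 19, 5: 26, 6: 31, 7: 40}
a = {1: 7, 2: 4, 3: 16, 4: 45, 5: 22, 6: 34, 7: 18}

tour = [1, 2, 7, 4, 5, 6, 3, 1]    # optimal tour, returning to job 1
# Each leg j -> k costs s_jk = |a_k - b_j|.
cost = sum(abs(a[k] - b[j]) for j, k in zip(tour, tour[1:]))
print(cost)   # -> 57
```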

100 Summary
* For 1 | s_jk | C_max with s_jk = |a_k − b_j|, a polynomial-time algorithm is given.
* The general 1 | s_jk | C_max is NP-hard.