
1
Seçil Sözüer

2
1) Introduction
2) Problem definition
3) Single machine subproblem (P_m)
4) Cost lower bounds for a partial schedule (for B&B and BS)
5) Initial solution (IS): a heuristic for finding an initial solution for B&B
6) B&B algorithm (exact algorithm)
7) Beam search algorithm (BS): used when B&B is not computationally efficient
8) Improvement search heuristic (ISH): improves any given feasible schedule
9) Recovering beam search (RBS)
10) Computational results

3
Turning (metal cutting) operations on non-identical parallel CNC machines. Processing times are controllable in practice (a smaller processing time is attained by increasing the cutting speed or feed rate). Decide on:
◦ Processing times of the jobs
◦ Machine/job allocation
Bicriteria problem (two objectives): COST and TIME, i.e., total manufacturing cost and makespan. Converting the bicriteria problem to a single-criterion problem with the e-constraint approach: minimize total manufacturing cost subject to makespan ≤ K (upper limit). The decision maker interactively specifies and modifies K and analyzes the influence of the changes on the solutions.
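The e-constraint conversion above can be sketched on a toy trade-off curve: among candidate schedules, each with a total manufacturing cost and a makespan, pick the cheapest one whose makespan does not exceed the limit K. The candidate (cost, makespan) points below are hypothetical, not data from the paper.

```python
# Hedged sketch of the e-constraint approach: minimize cost s.t. makespan <= K.
def e_constraint_best(schedules, K):
    """Return the min-cost schedule whose makespan does not exceed K."""
    feasible = [s for s in schedules if s["makespan"] <= K]
    if not feasible:
        return None  # no schedule meets the makespan limit
    return min(feasible, key=lambda s: s["cost"])

# Hypothetical (cost, makespan) trade-off points
candidates = [
    {"cost": 120.0, "makespan": 10.0},
    {"cost": 95.0,  "makespan": 14.0},
    {"cost": 80.0,  "makespan": 20.0},
]

# Tightening K forces the decision maker to accept a higher cost
assert e_constraint_best(candidates, K=15.0)["cost"] == 95.0
assert e_constraint_best(candidates, K=25.0)["cost"] == 80.0
```

Re-solving with different K values is exactly how the decision maker explores the cost/makespan trade-off interactively.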

4
Controllable processing times: pioneered by Vickson (1980). Problem: total processing cost and total weighted completion time on a single machine. Trick (1994): linear processing cost function, non-identical parallel machines, NP-hard problem. Problem: minimize total processing cost subject to makespan ≤ K. (This paper considers a nonlinear convex manufacturing cost function.) Kayan and Aktürk (2005): determining upper and lower bounds for the processing times and the manufacturing cost function.

5
Parameters: N: number of jobs, card(J) = N; M: number of machines, m = 1..M. Each job differs in its length and diameter, depth of cut, maximum allowable surface roughness, and cutting tool type. Each job must be performed on a single machine without preemption. Each machine differs in its maximum applicable cutting power H_m and its unit operating cost C_m ($/minute). Operating cost + tooling cost is a convex function of the processing time and is minimized at a job- and machine-specific processing time.
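The convex cost structure above can be illustrated numerically. The power-law tooling term and all constants below are illustrative assumptions (the paper's exact turning-cost formula is not reproduced in this transcript): operating cost grows linearly in the processing time p at rate C_m, while tooling cost grows as p is compressed.

```python
# Hedged sketch of a convex per-job manufacturing cost: operating + tooling.
# The functional form and constants are assumptions for illustration only.
def manuf_cost(p, C_m=2.0, tool_coeff=8.0, tool_exp=1.5):
    return C_m * p + tool_coeff * p ** (-tool_exp)  # operating + tooling

# The unconstrained minimizer of C*p + k*p^(-a) is p* = (a*k/C)^(1/(a+1)).
p_star = (1.5 * 8.0 / 2.0) ** (1.0 / 2.5)

# f is convex: it does not decrease when we move away from p_star either way
assert manuf_cost(p_star) <= manuf_cost(p_star * 0.9)
assert manuf_cost(p_star) <= manuf_cost(p_star * 1.1)
```

Below p*, compressing a job further buys makespan at an increasing marginal cost; this is the region the single-machine subproblem operates in.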

6
Decision variables and model: non-convex objective function and non-convex feasible region, i.e., a non-convex mixed-integer nonlinear program (MINLP). Non-convexities are the major difficulty in finding a global optimal solution. Remedy: EXPLOITING THE STRUCTURE.

7
With a given X_jm (machine/job allocation), we can find the optimal p_jm by solving P_m for each machine m. P_m is convex, hence a local optimum is a global optimum, and Lemma 1 is sufficient for optimality. P_m has a nonlinear convex objective function over a convex feasible region; J_m is the set of jobs assigned to machine m. The Lagrangian dual variable for the makespan constraint (5) is ≤ 0, since f_jm is non-increasing on the interval (6). For the proof of Lemma 1: substitute (6) and use the KKT complementary-slackness conditions.
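The KKT structure above suggests a simple solution scheme, sketched here under an assumed cost form f_j(p) = C*p + k_j*p^(-a) (an illustration, not the paper's exact model): stationarity gives each job's processing time in closed form for a given dual price, so the single-machine subproblem reduces to a one-dimensional search on the multiplier of the makespan constraint.

```python
# Hedged sketch of solving P_m by bisection on the dual price of sum(p_j) <= K.
# Cost form, constants, and bounds are assumptions for illustration.
def solve_Pm(tool_coeffs, C=2.0, a=1.5, p_lo=0.5, p_hi=10.0, K=5.0):
    def p_of(lam):  # stationarity: C + lam = a*k*p^(-a-1), clipped to bounds
        return [min(p_hi, max(p_lo, (a * k / (C + lam)) ** (1 / (a + 1))))
                for k in tool_coeffs]
    if sum(p_of(0.0)) <= K:          # makespan constraint slack: multiplier 0
        return p_of(0.0), 0.0
    lo, hi = 0.0, 1.0
    while sum(p_of(hi)) > K:         # grow bracket until feasible
        hi *= 2.0
    for _ in range(100):             # bisect on the dual price
        mid = 0.5 * (lo + hi)
        if sum(p_of(mid)) > K:
            lo = mid
        else:
            hi = mid
    return p_of(hi), hi

p, lam = solve_Pm([8.0, 4.0], K=3.0)
assert abs(sum(p) - 3.0) < 1e-6 and lam > 0.0
```

The returned multiplier plays the role of the optimal dual price the later slides reuse when evaluating moves and lower bounds.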

8
Immediate extension of Lemma 1 to the non-identical parallel machine case (Corollary 1).

9
S^p: partial schedule. J^p: subset of jobs assigned to machines. J_m^p: subset of jobs assigned to machine m. The optimal p_jm decisions were made by solving P_m. We assume that when we add an unscheduled job to S^p, the processing times of previously scheduled jobs may change, but the machine/job assignments in S^p stay the same. Adding an unscheduled job j to machine m must not violate the makespan constraint (1). Since the processing time decisions for the jobs in J^p were made by solving P_m for each machine m, we have at hand the optimal dual price for each machine m.

10
lb_jm: lower bound on the cost change (increase) of adding job j to machine m. p_jm^ub satisfies Lemma 1 and p_jm^* ≤ p_jm^ub, so the additional cost of adding job j to m is ≥ f_jm(p_jm^ub). Compressing the jobs on machine m: the marginal cost of decreasing the makespan is the negative of the dual price. IP: using lb_jm, a lower bound on the cost increase of adding all unscheduled jobs (forming a complete schedule starting from S^p) is the sum of lb_jm over the possible assignments of unscheduled jobs to the machines. Constraint (7): makespan constraint. Constraint (8): assign unscheduled jobs to machines.

11
LB_IP: lower bound found by solving the IP. If the IP for S^p turns out to be infeasible, no complete schedule can be achieved from S^p. LB_LP: relax the integrality constraints (9). LB_R: additionally relax the makespan constraint (7).
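The trade-off between the bounds can be seen on a toy instance. LB_R, with the makespan coupling relaxed, lets each unscheduled job take its cheapest machine independently; LB_IP enforces the capacity coupling (here by brute-force enumeration rather than an IP solver). The lb[j][m] values and slacks below are hypothetical.

```python
# Hedged sketch contrasting LB_R (makespan relaxed) and LB_IP (enforced).
from itertools import product

def lb_R(lb):
    return sum(min(row) for row in lb)       # ignore the makespan coupling

def lb_IP(lb, time, slack):
    """Cheapest assignment whose added load fits the remaining slack per machine."""
    best = float("inf")
    M = len(lb[0])
    for choice in product(range(M), repeat=len(lb)):
        load = [0.0] * M
        for j, m in enumerate(choice):
            load[m] += time[j][m]
        if all(load[m] <= slack[m] for m in range(M)):
            best = min(best, sum(lb[j][m] for j, m in enumerate(choice)))
    return best

lb = [[1.0, 4.0], [2.0, 3.0]]
time = [[3.0, 1.0], [3.0, 1.0]]
assert lb_R(lb) == 3.0
assert lb_IP(lb, time, slack=[2.0, 2.0]) == 7.0   # both jobs forced onto machine 1
assert lb_R(lb) <= lb_IP(lb, time, slack=[2.0, 2.0])
```

LB_R is always the weakest (and cheapest to compute) of the three, LB_IP the tightest but NP-hard, with LB_LP in between; this ordering drives the CPU-time comparisons in the computational results.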

12
IS: will be the upper-bound solution for B&B. Schedule the minimum-cost job first! Each iteration adds a new job to the schedule by choosing the best machine (the one that gives the minimum cost increase). Performance: N × M iterations.
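The IS heuristic above can be sketched as follows. Fixed per-(job, machine) costs and times stand in for the controllable-processing-time model, a deliberate simplification; all numbers are hypothetical.

```python
# Hedged sketch of the IS heuristic: min-cost job first, min-cost-increase machine.
def initial_solution(cost, time, K):
    """cost[j][m], time[j][m]; returns machine index per job, or None."""
    M = len(cost[0])
    order = sorted(range(len(cost)), key=lambda j: min(cost[j]))  # min-cost first
    load, assign = [0.0] * M, {}
    for j in order:
        feas = [m for m in range(M) if load[m] + time[j][m] <= K]
        if not feas:
            return None                      # heuristic found no feasible schedule
        m = min(feas, key=lambda m: cost[j][m])   # min cost increase
        assign[j], load[m] = m, load[m] + time[j][m]
    return assign

sched = initial_solution(cost=[[3, 5], [4, 2], [6, 6]],
                         time=[[2, 1], [2, 2], [2, 2]], K=4.0)
assert sched is not None and len(sched) == 3
```

The resulting cost seeds UB_C for the branch and bound; any later feasible complete schedule with lower cost replaces it.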

13
The major difficulty in a B&B algorithm for a non-convex MINLP problem is computing a lower bound at a node of the B&B tree. Each node is a partial schedule (level 0: all jobs unscheduled; level k: jobs (j_1, ..., j_k) scheduled; level N: all jobs scheduled). Reducing the size of the tree: first, prune by infeasibility. If feasible, solve P_m for the node to obtain the cost F^P of the partial schedule and the optimal p_jm. Compute a cost-increase lower bound LB^P (LB_IP, LB_LP, or LB_R) and again prune by infeasibility. LB_C = LB^P + F^P is a lower bound on any complete schedule achievable from the node. If LB_C ≥ UB_C, prune by optimality. UB_C is the cost upper bound, initially found by IS and updated whenever a feasible complete schedule is found.

14
Modified depth-first search: among the child nodes, select the one with the minimum lower bound as the new parent node.
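The pruning rules and the min-lb-first search order combine as sketched below. Fixed costs and times replace the convex subproblem P_m, and a simple additive tail bound replaces LB_IP/LB_LP/LB_R; the instance data is hypothetical.

```python
# Hedged sketch of the B&B: prune by infeasibility (load > K) and by
# optimality (LB_C >= UB_C); explore children min-lower-bound first.
def branch_and_bound(cost, time, K):
    N, M = len(cost), len(cost[0])
    # tail[k]: cheapest possible cost of jobs k..N-1 (a simple LB_R-style bound)
    tail = [sum(min(cost[j]) for j in range(k, N)) for k in range(N + 1)]
    best = [float("inf"), None]              # UB_C and incumbent assignment

    def dfs(j, load, partial_cost, assign):
        if j == N:
            if partial_cost < best[0]:
                best[0], best[1] = partial_cost, assign[:]
            return
        kids = []
        for m in range(M):
            if load[m] + time[j][m] > K:     # prune by infeasibility
                continue
            lb = partial_cost + cost[j][m] + tail[j + 1]
            if lb < best[0]:                 # otherwise prune by optimality
                kids.append((lb, m))
        for _, m in sorted(kids):            # min-lb child becomes next parent
            load[m] += time[j][m]
            dfs(j + 1, load, partial_cost + cost[j][m], assign + [m])
            load[m] -= time[j][m]

    dfs(0, [0.0] * M, 0.0, [])
    return best[0], best[1]

ub, sol = branch_and_bound(cost=[[3, 5], [4, 2], [6, 6]],
                           time=[[2, 1], [2, 2], [2, 2]], K=4.0)
assert sol is not None and ub == 11
```

Because every pruned child has lb ≥ some valid upper bound, the incumbent at termination is optimal for this toy model.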

15
The problem is NP-hard, and the size of the search tree for the B&B algorithm increases exponentially as N and M increase. For higher levels of N, M, and K, we use the BS algorithm. BS is a polynomial-time algorithm with complexity O(m·n·b). BS is a fast B&B-like method: it keeps the best b nodes at each level of the tree and eliminates the rest. The nodes to be saved are chosen by LB_LP.
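A sketch of that beam search on the same toy model: the tree is as in B&B, but at each level only the b children with the smallest lower bound survive (the paper scores nodes with LB_LP; a simple additive tail bound is used here as a stand-in).

```python
# Hedged sketch of beam search with beam width b: O(N*M*b) node evaluations.
def beam_search(cost, time, K, b=2):
    N, M = len(cost), len(cost[0])
    tail = [sum(min(cost[j]) for j in range(k, N)) for k in range(N + 1)]
    beam = [([0.0] * M, 0.0, [])]            # (loads, cost so far, assignment)
    for j in range(N):
        children = []
        for load, c, assign in beam:
            for m in range(M):
                if load[m] + time[j][m] <= K:      # feasibility check
                    nl = load[:]
                    nl[m] += time[j][m]
                    children.append((nl, c + cost[j][m], assign + [m]))
        if not children:
            return None                      # beam died out: no feasible extension
        children.sort(key=lambda n: n[1] + tail[j + 1])
        beam = children[:b]                  # keep only the b best-scored nodes
    return min(beam, key=lambda n: n[1])

node = beam_search(cost=[[3, 5], [4, 2], [6, 6]],
                   time=[[2, 1], [2, 2], [2, 2]], K=4.0, b=2)
assert node is not None and node[1] <= 13
```

Unlike B&B, the elimination is irrevocable, which is exactly the weakness the recovering step of RBS is designed to mitigate.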

16
Start with an initial schedule that satisfies the optimality condition in Corollary 1, so we may assume P_m is solved for each machine m (we have at hand the optimal dual price for each machine). Define two moves to describe the neighborhood of a solution. 1-move: move a job j from its current machine m_1 to another machine m_2. 2-swap: exchange job j_1 on machine m_1 with another job j_2 on machine m_2. The estimated cost change of a move combines: the cost of job j on m_1, a lower bound on the cost change from expanding the processing times of the jobs remaining after removing job j from m_1, and the cost of adding job j to m_2.

17
If the estimated cost change is < 0, then it is a promising move (though the objective value may not actually improve, since the estimate uses lower bounds).
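The improvement search over these two neighborhoods can be sketched as a first-improvement loop. Fixed costs and times replace the dual-price-based move estimates of the paper, so every accepted move here is a true improvement rather than merely a promising one; all data is hypothetical.

```python
# Hedged sketch of the ISH: repeat 1-moves and 2-swaps while they reduce cost
# and keep every machine load within the makespan limit K.
def improvement_search(cost, time, K, assign):
    N, M = len(cost), len(cost[0])
    def loads(a):
        ld = [0.0] * M
        for j, m in enumerate(a):
            ld[m] += time[j][m]
        return ld
    total = lambda a: sum(cost[j][a[j]] for j in range(N))
    improved = True
    while improved:
        improved = False
        for j in range(N):
            for m in range(M):                 # 1-move: relocate job j to m
                if m == assign[j]:
                    continue
                trial = assign[:]
                trial[j] = m
                if max(loads(trial)) <= K and total(trial) < total(assign):
                    assign, improved = trial, True
        for j1 in range(N):                    # 2-swap: exchange machines
            for j2 in range(j1 + 1, N):
                if assign[j1] == assign[j2]:
                    continue
                trial = assign[:]
                trial[j1], trial[j2] = assign[j2], assign[j1]
                if max(loads(trial)) <= K and total(trial) < total(assign):
                    assign, improved = trial, True
    return assign, total(assign)

sol, c = improvement_search(cost=[[3, 5], [4, 2], [6, 6]],
                            time=[[2, 1], [2, 2], [2, 2]], K=4.0,
                            assign=[1, 0, 1])
assert c <= 13
```

Since each accepted move strictly decreases the total cost over a finite solution set, the loop terminates at a local optimum of the 1-move/2-swap neighborhood.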

18
RBS: BS + local search techniques (2-swap moves). The idea of the recovering step is to prevent the elimination of promising nodes (nodes that could lead to optimal or near-optimal solutions) due to errors in the node evaluation step of beam search, by applying local search to obtain better partial solutions at each level of the beam search algorithm. RBS complexity: O(m·n²·b). In Step 3 of BS, we generate child nodes for a given level of the search tree; the child nodes are generated at level l.
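The recovery kernel applied to each surviving node can be sketched as follows (only the per-node recovering step, not the full RBS loop; fixed cost[j][m]/time[j][m] data is a hypothetical stand-in for the paper's dual-price-based estimates).

```python
# Hedged sketch of the RBS recovering step: improve a partial assignment by
# first-improving 2-swaps that respect the makespan limit K.
def recover(assign, cost, time, K):
    """Improve a partial assignment {job: machine} via 2-swaps across machines."""
    jobs = list(assign)
    for i, j1 in enumerate(jobs):
        for j2 in jobs[i + 1:]:
            m1, m2 = assign[j1], assign[j2]
            if m1 == m2:
                continue
            delta = (cost[j1][m2] + cost[j2][m1]) - (cost[j1][m1] + cost[j2][m2])
            if delta >= 0:
                continue                      # not a promising swap
            trial = dict(assign)
            trial[j1], trial[j2] = m2, m1
            load = {}
            for j, m in trial.items():
                load[m] = load.get(m, 0.0) + time[j][m]
            if max(load.values()) <= K:
                assign = trial                # recovered a better partial node
    return assign

part = recover({0: 1, 1: 0}, cost=[[3, 5], [4, 2]],
               time=[[2, 1], [2, 2]], K=4.0)
assert part == {0: 0, 1: 1}
```

Running this on every beam node at every level adds the extra factor of n that turns the O(m·n·b) BS complexity into O(m·n²·b) for RBS.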

19
Selecting K, the makespan limit (to see the effect of K, each replication of the problem is solved for 5 different levels of K). To find a proper K: first, solve the makespan minimization problem for each replication in the fixed-processing-times case (processing times fixed for each j and m): NP-hard, so a polynomial-time algorithm finds a feasible makespan level K_DJ. Then K = k × K_DJ with k = 0.6, 0.8, 1, 1.2, 1.4. Each B&B is solved with LB_IP, LB_LP, and LB_R. If B&B finds the optimal solution, the problem is also solved with BS and RBS (using LB_LP).

20
Critical step in B&B: deciding on the job ordering (j_1, ..., j_k). Reduce the tree size of B&B via lower bounds. Scheduling higher-processing-time jobs earlier catches infeasibility of schedules earlier. Comparing scheduling lower-cost jobs earlier with scheduling higher-cost jobs earlier: we can reach the optimum by traversing only a small percentage of the tree.

21
The effect of increasing N and M on the running time of B&B: calculating LB_R is faster. As N and M increase, the CPU time required by LB_R approaches that of LB_LP, so we may expect LB_LP to have shorter CPU times for larger problem sizes. If we check the CPU time ratio LB_R/LB_IP, we observe that as N increases, the performance of LB_R gets closer to that of LB_IP, but as M increases, we observe the opposite. This is because computing LB_IP is itself an NP-hard problem and requires much more time as M increases. There is a big gap between «Min» and «Max» for each lower bounding method, due to the different K levels.

22
Sizes of the eliminated and traversed node sets for different K. Solution quality results of IS, BS, and RBS (RBS gives the best quality and time). R_A: % deviation from the optimum with algorithm A: R_A = [(cost achieved by A) − (optimal cost from B&B)] / (optimal cost from B&B). As K increases, the size of the traversed tree increases, since fewer nodes are eliminated due to infeasibility; hence CPU time increases too.

23
ISH achieves a significant improvement for all 3 cases. R_{A+ISH}: start from the solution of algorithm A and improve it through ISH (compare Tables 5 and 6).

24
The solution quality of BS and ISH improves as K is increased. Hence, when B&B is not efficient, BS and ISH can achieve solutions close to the optimum. This is due to the shape of the manufacturing cost function: when K is increased, we have higher processing times, where the manufacturing cost function is flatter.

25
Testing IS, RBS, and ISH for 50-100 jobs and 2, 3, 4 machines. I_RBS: % deviation of RBS from the initial solution achieved by IS. We cannot solve these instances to optimality due to the CPU time requirements; therefore, we compare the results achieved by the RBS algorithm with the initial results given by the IS algorithm. The required CPU times are still reasonable even for the large problem instances.

26
Our computational results show that B&B can solve the problems by traversing just 5% of the maximal possible B&B tree size (Slide 20, Table 2), and the proposed lower bounding methods can eliminate up to 80% of the search tree (Table 2). For the cases where B&B is not computationally efficient, our BS and improvement search algorithms achieved solutions within 1% of the optimum on average, in very short computation time.
