Presentation on theme: "Metaheuristics in Optimization" (Panos M. Pardalos, University of Florida) — Presentation transcript:

1 Metaheuristics in Optimization. Panos M. Pardalos, University of Florida, ISE Dept., Florida, USA. Workshop of the European Chapter on Metaheuristics and Large Scale Optimization, Vilnius, Lithuania, May 19–21, 2005.

2 Outline (joint work with Mauricio Resende and Claudio Meneses). 1. Quadratic Assignment & GRASP; 2. Classical Metaheuristics; 3. Parallelization of Metaheuristics; 4. Evaluation of Metaheuristics; 5. Success Stories; 6. Concluding Remarks.

3 Metaheuristics. Metaheuristics are high-level procedures that coordinate simple heuristics, such as local search, to find solutions of better quality than those found by the simple heuristics alone. Examples: simulated annealing, genetic algorithms, tabu search, scatter search, variable neighborhood search, and GRASP.

4 Quadratic assignment problem (QAP). Given N facilities f_1, f_2, …, f_N and N locations l_1, l_2, …, l_N. Let A = (a_{i,j}) be an N×N positive real matrix where a_{i,j} is the flow between facilities f_i and f_j. Let B = (b_{i,j}) be an N×N positive real matrix where b_{i,j} is the distance between locations l_i and l_j.

7 Quadratic assignment problem (QAP). Let p: {1,2,…,N} → {1,2,…,N} be an assignment of the N facilities to the N locations. Define the cost of assignment p to be c(p) = Σ_{i=1}^{N} Σ_{j=1}^{N} a_{i,j} · b_{p(i),p(j)}. QAP: find a permutation vector p ∈ Π_N that minimizes the assignment cost: min c(p) subject to p ∈ Π_N.
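To make the objective concrete, here is a minimal Python sketch (not part of the original talk; 0-indexed, with A and B as plain nested lists) of this cost function:

```python
def qap_cost(A, B, p):
    """QAP cost c(p) = sum over all ordered pairs (i, j) of
    A[i][j] * B[p[i]][p[j]], where p[i] is the location of facility i."""
    n = len(p)
    return sum(A[i][j] * B[p[i]][p[j]]
               for i in range(n) for j in range(n))
```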

10 Quadratic assignment problem (QAP): example. Three locations l_1, l_2, l_3 with distances 10, 30, 40 and three facilities f_1, f_2, f_3 with flows 1, 5, 10; pairings reconstructed from the slide arithmetic: d(l_1,l_2)=10, d(l_1,l_3)=30, d(l_2,l_3)=40 and a(f_1,f_2)=1, a(f_1,f_3)=10, a(f_2,f_3)=5. Assigning f_i → l_i gives cost 10×1 + 30×10 + 40×5 = 510.

11 Quadratic assignment problem (QAP): example (cont.). Swap the locations of facilities f_2 and f_3 (giving f_1→l_1, f_3→l_2, f_2→l_3): cost = 10×10 + 30×1 + 40×5 = 330.

12 Quadratic assignment problem (QAP): example (cont.). Now swap the locations of facilities f_1 and f_3 (giving f_3→l_1, f_1→l_2, f_2→l_3): cost = 10×10 + 30×5 + 40×1 = 290. Optimal!
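For an instance this small the optimum can be verified by brute force. A sketch reusing qap_cost from above, with the flow/distance data reconstructed from the slides' arithmetic (since the matrices are symmetric, the double sum counts every unordered pair twice, so the slide figures are half of what qap_cost returns):

```python
from itertools import permutations

# Flows: a(f1,f2)=1, a(f1,f3)=10, a(f2,f3)=5.
A = [[0,  1, 10],
     [1,  0,  5],
     [10, 5,  0]]
# Distances: d(l1,l2)=10, d(l1,l3)=30, d(l2,l3)=40.
B = [[0, 10, 30],
     [10, 0, 40],
     [30, 40, 0]]

for p in sorted(permutations(range(3)), key=lambda q: qap_cost(A, B, q)):
    print(p, qap_cost(A, B, p) // 2)
# Best: p = (1, 2, 0), i.e. f1 -> l2, f2 -> l3, f3 -> l1, with cost 290.
```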

13 GRASP for QAP. GRASP is a multi-start metaheuristic: greedy randomized construction followed by local search (Feo & Resende, 1989, 1995; Festa & Resende, 2002; Resende & Ribeiro, 2003). GRASP for QAP: Li, Pardalos, & Resende (1994): GRASP for QAP; Resende, Pardalos, & Li (1996): Fortran subroutines for dense QAPs; Pardalos, Pitsoulis, & Resende (1997): Fortran subroutines for sparse QAPs; Fleurent & Glover (1999): memory mechanism in construction.

15 GRASP for QAP: main loop.
    repeat {
        x = GreedyRandomizedConstruction(α);
        x = LocalSearch(x);
        save x as x* if best so far;
    }
    return x*;
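In Python, the same loop might be sketched as follows (a sketch, not the authors' code; the construction and local-search routines are passed in, and all names are illustrative):

```python
import random

def grasp(construct, local_search, cost, max_iterations=1000, seed=0):
    """Generic GRASP main loop: multi-start construction + local search,
    keeping the best solution found over all iterations."""
    rng = random.Random(seed)
    x_best = None
    for _ in range(max_iterations):
        alpha = rng.random()          # RCL parameter, drawn per iteration
        x = construct(alpha, rng)     # greedy randomized construction
        x = local_search(x)           # descend to a local optimum
        if x_best is None or cost(x) < cost(x_best):
            x_best = x
    return x_best
```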

16 Construction. Stage 1: make two assignments {f_i → l_k; f_j → l_l}. Stage 2: make the remaining N−2 assignments of facilities to locations, one facility/location pair at a time.

18 Stage 1 construction. Sort distances b_{i,j} in increasing order: b_{i(1),j(1)} ≤ b_{i(2),j(2)} ≤ … ≤ b_{i(N),j(N)}. Sort flows a_{k,l} in decreasing order: a_{k(1),l(1)} ≥ a_{k(2),l(2)} ≥ … ≥ a_{k(N),l(N)}. Form the products a_{k(1),l(1)}·b_{i(1),j(1)}, a_{k(2),l(2)}·b_{i(2),j(2)}, …, a_{k(N),l(N)}·b_{i(N),j(N)}. Among the smallest products, select a_{k(q),l(q)}·b_{i(q),j(q)} at random, corresponding to the assignments {f_{k(q)} → l_{i(q)}; f_{l(q)} → l_{j(q)}}.
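A sketch of this stage in Python (the fraction of "smallest products" kept as candidates is an illustrative parameter, not specified on the slide):

```python
def stage1(A, B, rng, frac=0.1):
    """Pair the q-th largest flow with the q-th smallest distance, sort the
    products, and pick one of the smallest at random; returns the two
    resulting facility -> location assignments as a dict."""
    n = len(A)
    dists = sorted((B[i][j], (i, j)) for i in range(n) for j in range(i + 1, n))
    flows = sorted(((A[k][l], (k, l)) for k in range(n) for l in range(k + 1, n)),
                   reverse=True)
    prods = sorted((f * d, locs, facs)
                   for (d, locs), (f, facs) in zip(dists, flows))
    _, (i, j), (k, l) = rng.choice(prods[:max(1, int(frac * len(prods)))])
    return {k: i, l: j}   # f_k -> l_i, f_l -> l_j
```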

22 Stage 2 construction. If Ω = {(i_1,k_1), (i_2,k_2), …, (i_q,k_q)} are the q assignments made so far, then the cost of assigning f_j → l_l is c_{j,l} = Σ_{(i,k)∈Ω} a_{i,j} · b_{k,l}, the interaction of the candidate pair with the assignments already made. Of all possible assignments, one is selected at random from the assignments having the smallest costs and is added to Ω. This step was sped up in Pardalos, Pitsoulis, & Resende (1997) for QAPs with sparse A or B matrices.
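The incremental cost above can be written directly (a sketch; omega maps already-placed facilities to their locations, and the formula follows the reconstruction given on the slide):

```python
def stage2_cost(A, B, omega, j, l):
    """Cost of tentatively assigning facility j to location l: interaction
    of (j, l) with every assignment (i -> k) already in omega."""
    return sum(A[i][j] * B[k][l] for i, k in omega.items())
```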

25 Swap-based local search. (a) For all pairs of assignments {f_i → l_k; f_j → l_l}, test whether the swapped assignment {f_i → l_l; f_j → l_k} improves the solution. (b) If so, make the swap and return to step (a). Repeat (a)–(b) until no swap improves the current solution.
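A sketch of this local search, reusing qap_cost from the earlier sketch (full recomputation keeps the code short; real implementations evaluate swaps incrementally):

```python
def swap_local_search(A, B, p):
    """2-swap local search on the permutation p (a list): keep swapping
    pairs of facilities' locations while an improving swap exists."""
    best = qap_cost(A, B, p)
    improved = True
    while improved:
        improved = False
        for i in range(len(p)):
            for j in range(i + 1, len(p)):
                p[i], p[j] = p[j], p[i]       # tentatively swap f_i and f_j
                c = qap_cost(A, B, p)
                if c < best:
                    best, improved = c, True  # keep the improving swap
                else:
                    p[i], p[j] = p[j], p[i]   # undo
    return p
```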

27 Path-relinking. An intensification strategy exploring trajectories connecting elite solutions: Glover (1996). Originally proposed in the context of tabu search and scatter search. Paths in the solution space leading to other elite solutions are explored in the search for better solutions, by selecting moves that introduce attributes of the guiding solution into the current solution.

30 Path-relinking: exploration of trajectories that connect high-quality (elite) solutions. (Figure: a path in the neighborhood of solutions, from an initial solution to a guiding solution.)

31 Path-relinking. The path is generated by selecting moves that introduce attributes of the guiding solution into the initial solution. At each step, all moves that incorporate attributes of the guiding solution are evaluated and the best move is selected. (Figure: initial solution, guiding solution.)

33 Path-relinking: combine solutions x and y. Δ(x,y): symmetric difference between x and y.
    while ( |Δ(x,y)| > 0 ) {
        evaluate moves corresponding to Δ(x,y);
        make best move;
        update Δ(x,y);
    }
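For permutations, one way to realize this loop is to make each move a swap that fixes one more position to agree with the guiding solution (a sketch, reusing qap_cost from above; each accepted move shrinks the symmetric difference, so the walk terminates):

```python
def path_relinking(A, B, s, t):
    """Walk from s toward t; at each step try every swap that puts t[i] into
    position i, make the cheapest one, and return the best solution seen."""
    cur = list(s)
    best, best_cost = list(cur), qap_cost(A, B, cur)
    while True:
        diff = [i for i in range(len(t)) if cur[i] != t[i]]
        if not diff:
            return best
        moves = []
        for i in diff:
            j = cur.index(t[i])                  # position holding t[i] now
            cur[i], cur[j] = cur[j], cur[i]
            moves.append((qap_cost(A, B, cur), i, j))
            cur[i], cur[j] = cur[j], cur[i]      # undo the trial swap
        c, i, j = min(moves)
        cur[i], cur[j] = cur[j], cur[i]          # make the best move
        if c < best_cost:
            best, best_cost = list(cur), c
```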

34 GRASP with path-relinking. Originally used by Laguna and Martí (1999). Maintains a set of elite solutions found during GRASP iterations. After each GRASP iteration (construction and local search): use the GRASP solution as the initial solution; select an elite solution uniformly at random as the guiding solution; perform path-relinking between these two solutions.

37 GRASP with path-relinking. Repeat for Max_Iterations: construct a greedy randomized solution; use local search to improve the constructed solution; apply path-relinking to further improve the solution; update the pool of elite solutions; update the best solution found.

38 Path-relinking for QAP (permutation vectors). (Figure only.)

39 Path-relinking for QAP. If a swap improves the solution, local search is applied; if the resulting local minimum improves the incumbent, it is saved. (Figure: initial solution, guiding solution, local minima along the path.)

40 Path-relinking for QAP: result S*. Relinking S to T produces the intermediate solutions S_0 = S, S_1, S_2, …, S_N = T. If c(S*) < min{c(S), c(T)} and c(S*) ≤ c(S_i) for i = 1,…,N, i.e. S* is the best solution in the path, then S* is returned.

41 Path-relinking for QAP (cont.). S_i is a local minimum with respect to PR if c(S_i) < c(S_{i−1}) and c(S_i) < c(S_{i+1}). If path-relinking does not improve on (S,T), then: if some S_i is a best local minimum w.r.t. PR, return S* = S_i; if no local minimum exists, return S* = argmin{c(S), c(T)}.

42 PR pool management. S* is a candidate for inclusion in the pool of elite solutions P. If c(S*) < c(S_e) for all S_e ∈ P, then S* is put in P. Else, if c(S*) < max{c(S_e) : S_e ∈ P} and |Δ(S*, S_e)| ≥ 3 for all S_e ∈ P, then S* is put in P. If the pool is full, remove argmin{|Δ(S*, S_e)| : S_e ∈ P such that c(S_e) ≥ c(S*)}.
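These rules translate almost line by line (a sketch; cost and delta stand for the objective and symmetric-difference functions, and the pool size and difference threshold mirror the values quoted in these slides):

```python
def try_insert(pool, s, cost, delta, max_size=30, min_diff=3):
    """Apply the pool rules: accept s if it beats every elite solution, or if
    it beats the worst and differs from all members by at least min_diff."""
    cs = cost(s)
    beats_all = all(cs < cost(e) for e in pool)
    accept = beats_all or (pool
                           and cs < max(cost(e) for e in pool)
                           and all(delta(s, e) >= min_diff for e in pool))
    if not accept:
        return pool
    if len(pool) >= max_size:
        # evict the member most similar to s among those no better than s
        worse = [e for e in pool if cost(e) >= cs]
        pool.remove(min(worse, key=lambda e: delta(s, e)))
    pool.append(s)
    return pool
```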

46 PR pool management (cont.). S is the initial solution for path-relinking: favor the choice of a target solution T with large symmetric difference from S. This leads to longer paths in path-relinking. Probability of choosing S_e ∈ P: proportional to the symmetric difference, i.e. p(S_e) = |Δ(S, S_e)| / Σ_{S'∈P} |Δ(S, S')|.
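A sketch of this biased selection (weights proportional to the symmetric difference):

```python
import random

def pick_guiding(pool, s, delta, rng=random):
    """Choose the guiding solution with probability proportional to its
    symmetric difference from the initial solution s."""
    weights = [delta(s, e) for e in pool]
    return rng.choices(pool, weights=weights, k=1)[0]
```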

47 Experimental results. Compare GRASP with and without path-relinking. A new GRASP code in C outperforms the old Fortran codes; the same code is used to compare the two algorithms. All QAPLIB (Burkard, Karisch, & Rendl, 1991) instances of size N ≤ 40. 100 independent runs of each algorithm, recording the CPU time to find the best known solution for the instance.

51 Experimental setup. SGI Challenge computer (28 × 196 MHz R10000 processors, 7 GB memory); a single processor used for each run. GRASP RCL parameter α chosen at random in the interval [0,1] at each GRASP iteration. Size of elite set: 30. Path-relinking done in both directions (S to T and T to S). Care taken to ensure that GRASP and GRASP with path-relinking iterations are in sync.

57 Time-to-target-value plots. The random variable time-to-target-solution-value fits a two-parameter exponential distribution (Aiex, Resende, & Ribeiro, 2002). Sort the times such that t_1 ≤ t_2 ≤ … ≤ t_100 and plot the points (t_i, p_i), for i = 1,…,100, where p_i = (i − 0.5)/100.
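Computing the plotted points is a few lines per run set (a sketch):

```python
def ttt_points(times):
    """Empirical time-to-target points: sort the run times and pair the
    i-th smallest time with probability p_i = (i - 0.5) / n."""
    ts = sorted(times)
    n = len(ts)
    return [(t, (i + 0.5) / n) for i, t in enumerate(ts)]
```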

58 Time-to-target-value plots (example). In 80% of the trials the target solution is found in less than 1.4 s; the probability of finding the target solution in less than 1 s is about 70%.

59 Time-to-target-value plots: comparing two algorithms. For a given time, compare the probabilities of finding the target solution in at most that time; for a given probability, compare the times required to find the target with that probability. We say ALG 1 is faster than ALG 2 when, for any given time, its probability of having found the target solution is at least as high. (Plot: cumulative TTT curves for ALG 1 and ALG 2.)

60 C.E. Nugent, T.E. Vollmann and J. Ruml [1968]. (TTT plots for nug12, nug20, nug25, nug30.)

61 E.D. Taillard [1991, 1994]. (TTT plots for tai15a, tai17a, tai20a, tai25a.)

62 Y. Li and P.M. Pardalos [1992]. (TTT plots for lipa20a, lipa30a, lipa40a.)

63 U.W. Thonemann and A. Bölte [1994]. (TTT plots for tho30, tho40.)

64 L. Steinberg [1961]. (TTT plots for ste36a, ste36b, ste36c.)

65 M. Scriabin and R.C. Vergin [1975]. (TTT plots for scr12, scr15, scr20.)

66 S.W. Hadley, F. Rendl and H. Wolkowicz [1992]. (TTT plots for had14, had16, had18, had20.)

67 R.E. Burkard and J. Offermann [1977]. (TTT plots for bur26a, bur26b, bur26c, bur26d.)

68 N. Christofides and E. Benavent [1989]. (TTT plots for chr18a, chr20a, chr22a, chr25a.)

69 C. Roucairol [1987]. (TTT plots for rou12, rou15, rou20.)

70 J. Krarup and P.M. Pruzan [1978]. (TTT plots for kra30a, kra30b.)

71 B. Eschermann and H.J. Wunderlich [1990]. (TTT plots for esc32a, esc32b, esc32d, esc32h.)

72 B. Eschermann and H.J. Wunderlich [1990]. (TTT plots for esc32c, esc32e, esc32f, esc32g.)

73 Remarks. A new heuristic for the QAP has been described. Path-relinking is shown to improve the performance of GRASP on almost all instances. Experimental results and code are available at http://www.research.att.com/~mgcr/exp/gqapspr

76 Classical metaheuristics: simulated annealing, genetic algorithms, memetic algorithms, tabu search, GRASP, variable neighborhood search, etc. (see Handbook of Applied Optimization, P.M. Pardalos and M.G. Resende, Oxford University Press, 2002).

77 Simulated Annealing.
Input: a problem instance. Output: a (sub-optimal) solution.
1. Generate an initial solution at random and initialize the temperature T.
2. While (T > 0) do:
   (a) While (thermal equilibrium not reached) do:
       (i) Generate a neighbor state at random and evaluate the change in energy level ΔE.
       (ii) If ΔE < 0, update the current state with the new state.
       (iii) If ΔE ≥ 0, update the current state with the new state with probability exp(−ΔE/T).
   (b) Decrease the temperature T according to the annealing schedule.
3. Output the solution having the lowest energy.
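A compact runnable sketch of this scheme (geometric cooling stands in for the unspecified annealing schedule; all parameters and names are illustrative):

```python
import math
import random

def simulated_annealing(initial, neighbor, energy,
                        t0=1.0, cooling=0.95, steps=100, t_min=1e-3, seed=0):
    """SA loop: accept improving moves always, worsening moves with
    probability exp(-dE / T); return the lowest-energy state seen."""
    rng = random.Random(seed)
    x = initial()
    best = x
    t = t0
    while t > t_min:
        for _ in range(steps):                 # crude 'thermal equilibrium'
            y = neighbor(x, rng)
            d_e = energy(y) - energy(x)
            if d_e < 0 or rng.random() < math.exp(-d_e / t):
                x = y
                if energy(x) < energy(best):
                    best = x
        t *= cooling                           # annealing schedule
    return best
```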

78 Genetic Algorithms.
Input: a problem instance. Output: a (sub-optimal) solution.
1. t = 0; initialize the population P(t); evaluate the fitness of the individuals in P(t).
2. While (termination condition is not satisfied) do:
   (i) t = t + 1
   (ii) Select P(t), recombine P(t), and evaluate P(t).
3. Output the best solution among the whole population as the (sub-optimal) solution.

79 Memetic Algorithms.
Input: a problem instance. Output: a (sub-optimal) solution.
1. t = 0; initialize the population P(t); evaluate the fitness of the individuals in P(t).
2. While (termination condition is not satisfied) do:
   (i) t = t + 1
   (ii) Select P(t), recombine P(t), perform local search on each individual of P(t), and evaluate P(t).
3. Output the best solution among the whole population as the (sub-optimal) solution.

80 Tabu Search.
Input: a problem instance. Output: a (sub-optimal) solution.
1. Initialization:
   (i) Generate an initial solution x and set x* = x.
   (ii) Initialize the tabu list T = Ø.
   (iii) Set iteration counters k = 0 and m = 0.
2. While (N(x)\T ≠ Ø) do:
   (i) k = k + 1; m = m + 1
   (ii) Select x as the best solution from the set N(x)\T.
   (iii) If f(x) < f(x*), update x* = x and set m = 0.
   (iv) If k = k_max or m = m_max, go to step 3.
3. Output the best solution found, x*.
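A sketch of this scheme (whole solutions are stored on the tabu list for simplicity; practical codes usually store move attributes instead):

```python
from collections import deque

def tabu_search(x0, neighbors, f, tenure=7, k_max=1000, m_max=100):
    """Tabu loop: always move to the best non-tabu neighbor; stop after
    k_max iterations or m_max iterations without improvement."""
    x = best = x0
    tabu = deque([x0], maxlen=tenure)          # fixed-tenure tabu list
    k = m = 0
    while k < k_max and m < m_max:
        k += 1
        m += 1
        candidates = [y for y in neighbors(x) if y not in tabu]
        if not candidates:                     # N(x) \ T is empty
            break
        x = min(candidates, key=f)
        tabu.append(x)
        if f(x) < f(best):
            best, m = x, 0
    return best
```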

81 GRASP.
Input: a problem instance. Output: a (sub-optimal) solution.
1. Repeat for Max_Iterations:
   (i) Construct a greedy randomized solution.
   (ii) Use local search to improve the constructed solution.
   (iii) Update the best solution found.
2. Output the best solution found as the (sub-optimal) solution.

82 VNS (Variable Neighborhood Search).
Input: a problem instance. Output: a (sub-optimal) solution.
1. Initialization:
   (i) Select the set of neighborhood structures N_k, k = 1,…,k_max, that will be used in the search.
   (ii) Find an initial solution x.
   (iii) Choose a stopping condition.
2. Repeat until the stopping condition is met:
   (i) k = 1
   (ii) While (k ≤ k_max) do:
       (a) Shaking: generate a point y at random from N_k(x).
       (b) Local search: apply some local search method with y as the initial solution; let z be the local optimum found.
       (c) Move or not: if z is better than the incumbent, move there (x = z) and set k = 1; otherwise set k = k + 1.
3. Output the incumbent solution.
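The same loop as a Python sketch (shake draws a random point from N_k(x); a fixed restart count stands in for the unspecified stopping condition, and all names are illustrative):

```python
import random

def vns(x0, shake, local_search, f, k_max=5, max_restarts=100, seed=0):
    """VNS loop: shake in neighborhood k, descend, and reset to k = 1
    whenever the incumbent improves; otherwise widen the neighborhood."""
    rng = random.Random(seed)
    x = x0
    for _ in range(max_restarts):              # stand-in stopping condition
        k = 1
        while k <= k_max:
            y = shake(x, k, rng)               # random point in N_k(x)
            z = local_search(y)
            if f(z) < f(x):
                x, k = z, 1                    # move there and restart at k=1
            else:
                k += 1
    return x
```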

83 GRASP in more detail. Construction phase: greediness + randomization. Builds a feasible solution: use greediness to build the restricted candidate list and randomness to select an element from the list, or use randomness to build the restricted candidate list and greediness to select an element from it. Local search: search the current neighborhood until a local optimum is found. Solutions generated by the construction procedure are not necessarily optimal: the effectiveness of local search depends on the neighborhood structure, the search strategy, and fast evaluation of neighbors, but also on the construction procedure itself.

84 Construction phase. Greedy randomized construction:
    Solution ← Ø
    Evaluate incremental costs of candidate elements
    While Solution is not complete do:
        Build the restricted candidate list (RCL)
        Select an element s from the RCL at random
        Solution ← Solution ∪ {s}
        Reevaluate the incremental costs
    endwhile

85 Construction phase (minimization problem). Basic construction procedure: the greedy function c(e) is the incremental cost associated with incorporating element e into the current partial solution under construction; c_min (resp. c_max) is the smallest (resp. largest) incremental cost; the RCL is made up of the elements with the smallest incremental costs.

86 Construction phase (cont.). Cardinality-based construction: the p elements with the smallest incremental costs. Quality-based construction: a parameter α defines the quality of the elements in the RCL; the RCL contains the elements with incremental cost c_min ≤ c(e) ≤ c_min + α(c_max − c_min). α = 0: pure greedy construction; α = 1: pure randomized construction. Select at random from the RCL using a uniform probability distribution.
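The value-based RCL rule above in code (a sketch; inc_cost stands for the greedy function c(e)):

```python
def make_rcl(candidates, inc_cost, alpha):
    """Quality-based RCL: keep elements whose incremental cost is within
    c_min + alpha * (c_max - c_min); alpha=0 is greedy, alpha=1 is random."""
    costs = {e: inc_cost(e) for e in candidates}
    c_min, c_max = min(costs.values()), max(costs.values())
    threshold = c_min + alpha * (c_max - c_min)
    return [e for e in candidates if costs[e] <= threshold]
```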

87 Illustrative results: RCL parameter. (Plot: weighted MAX-SAT instance; solution quality as the RCL parameter α varies from random to greedy. SGI Challenge 196 MHz.)

88 Illustrative results: RCL parameter. (Plot: weighted MAX-SAT instance with 100 variables and 850 clauses; best and average solution values, and time in seconds for 1000 iterations, as α varies from random to greedy. SGI Challenge 196 MHz.)

93 GRASP: 3-index assignment (AP3). Complete tripartite graph: each triangle made up of three distinctly colored nodes has a cost (the figure shows triangles of cost 10 and cost 5). AP3: find a set of triangles such that each node appears in exactly one triangle and the sum of the costs of the triangles is minimized.

94 3-index assignment (AP3). Construction: the solution is built by selecting n triplets, one at a time, biased by triplet costs. Local search: explores an O(n²) neighborhood of the current solution, moving to a better solution if one is found. Aiex, Pardalos, Resende, & Toraldo (2003).

95 3-index assignment (AP3). Path-relinking is done between an initial solution S = {(1, j_1^S, k_1^S), (2, j_2^S, k_2^S), …, (n, j_n^S, k_n^S)} and a guiding solution T = {(1, j_1^T, k_1^T), (2, j_2^T, k_2^T), …, (n, j_n^T, k_n^T)}.

96 GRASP with path-relinking (recap). Originally used by Laguna and Martí (1999). Maintains a set of elite solutions found during GRASP iterations. After each GRASP iteration (construction and local search): use the GRASP solution as the initial solution; select an elite solution uniformly at random as the guiding solution (it may also be selected with probability proportional to the symmetric difference w.r.t. the initial solution); perform path-relinking between these two solutions.

98 GRASP with path-relinking: variants. Trade-offs between computation time and solution quality: explore different trajectories (e.g. backward, forward): it is better to start from the best solution, since the neighborhood of the initial solution is explored more fully. Explore both trajectories: twice the time, often with only marginal improvement. Do not apply PR at every iteration, but only periodically: similar to filtering during local search. Truncate the search rather than follow the full trajectory. PR may also be applied as a post-optimization step to all pairs of elite solutions.

99 GRASP with path-relinking: successful applications. 1) Prize-collecting minimum Steiner tree problem: Canuto, Resende, & Ribeiro (2001) (improved all solutions found by the approximation algorithm of Goemans & Williamson). 2) Minimum Steiner tree problem: Ribeiro, Uchoa, & Werneck (2002) (best known results for open problems in series dv640 of the SteinLib). 3) p-median: Resende & Werneck (2002) (best known solutions for problems in the literature).

100 Successful applications (cont'd). 4) Capacitated minimum spanning tree: Souza, Duhamel, & Ribeiro (2002) (best known results for the largest problems, with 160 nodes). 5) 2-path network design: Ribeiro & Rosseti (2002) (better solutions than the greedy heuristic). 6) Max-Cut: Festa, Pardalos, Resende, & Ribeiro (2002) (best known results for several instances). 7) Quadratic assignment: Oliveira, Pardalos, & Resende (2003).

101 Successful applications (cont'd). 8) Job-shop scheduling: Aiex, Binato, & Resende (2003). 9) Three-index assignment problem: Aiex, Resende, Pardalos, & Toraldo (2003). 10) PVC routing: Resende & Ribeiro (2003). 11) Phylogenetic trees: Ribeiro & Vianna (2003).

102 GRASP with path-relinking: pool membership. P is a set (pool) of elite solutions. Each of the first |P| GRASP iterations adds one solution to P (if different from the others). After that, solution x is promoted to P if: x is better than the best solution in P; or x is not better than the best solution in P, but is better than the worst and is sufficiently different from all solutions in P.

104 Parallelization of GRASP. GRASP is easy to implement in parallel: parallelization by problem decomposition — Feo, R., & Smith (1994); iteration parallelization — Pardalos, Pitsoulis, & R. (1995); Pardalos, Pitsoulis, & R. (1996); Alvim (1998); Martins & Ribeiro (1998); Murphey, Pardalos, & Pitsoulis (1998); R. (1998); Martins, R., & Ribeiro (1999); Aiex, Pardalos, R., & Toraldo (2000). (R. = Resende.)

105 Parallel independent implementation. Parallelism in metaheuristics: robustness — Cung, Martins, Ribeiro, & Roucairol (2001). Multiple-walk independent-thread strategy: p processors available; iterations evenly distributed over the p processors; each processor keeps a copy of the data and algorithms; one processor acts as the master, handling seeds, data, and the iteration counter, besides performing GRASP iterations; each processor performs Max_Iterations/p iterations.

106 Parallel independent implementation. (Figure: processors 1,…,p each run GRASP with their own seed(i); the best solution found is sent to the master, which keeps the elite set.)

107 Parallel cooperative implementation. Multiple-walk cooperative-thread strategy: p processors available; iterations evenly distributed over p−1 processors; each processor has a copy of the data and algorithms; one processor acts as the master, handling seeds, data, and the iteration counter, and manages the pool of elite solutions, but does not perform GRASP iterations; each of the other processors performs Max_Iterations/(p−1) iterations.

108 Parallel cooperative implementation. (Figure: elite solutions are stored in a centralized pool held by the master; slave processes 1,…,p−1 perform the GRASP iterations and exchange solutions with it.)

109 Cooperative vs. independent strategies (for AP3). Same instance: 15 runs with different seeds, 3200 iterations. The pool is poorer when fewer GRASP iterations are done per processor, and solution quality deteriorates:

    procs | indep. best | indep. avg. | coop. best | coop. avg.
        1 |     673     |    678.6    |     –      |     –
        2 |     676     |    680.4    |    676     |   681.6
        4 |     680     |    685.1    |    673     |   681.2
        8 |     687     |    690.3    |    676     |   683.1
       16 |     692     |    699.1    |    674     |   682.3
       32 |     702     |    708.5    |    678     |   684.8

110 Speedup on 3-index assignment (AP3): instance bs24. (Plot; SGI Challenge 196 MHz.)

111 Evaluation of heuristics. Experimental design: problem instances; problem characteristics of interest (e.g., instance size, density, etc.); upper/lower/optimal values.

112 Evaluation of heuristics (cont.). Sources of test instances: real data sets (not always easy to obtain); random variants of real data sets — the structure of the instance (e.g., the graph) is preserved, but details (e.g., distances, costs) are changed.

113 Evaluation of heuristics (cont.). Test problem libraries: test problem collections with "best known" solutions; test problem generators with known optimal solutions (e.g., generators for QAP, maximum clique, Steiner tree problems, etc.).

114 Evaluation of heuristics (cont.). Test problem generators with known optimal solutions (cont.):
C.A. Floudas, P.M. Pardalos, C.S. Adjiman, W.R. Esposito, Z. Gumus, S.T. Harding, J.L. Klepeis, C.A. Meyer, and C.A. Schweiger, Handbook of Test Problems for Local and Global Optimization, Kluwer Academic Publishers, 1999.
C.A. Floudas and P.M. Pardalos, A Collection of Test Problems for Constrained Global Optimization Algorithms, Lecture Notes in Computer Science 455, Springer-Verlag, 1990.
J. Hasselberg, P.M. Pardalos and G. Vairaktarakis, Test case generators and computational results for the maximum clique problem, Journal of Global Optimization 3 (1993), pp. 463–482.
B. Khoury, P.M. Pardalos and D.-Z. Du, A test problem generator for the Steiner problem in graphs, ACM Transactions on Mathematical Software 19(4) (1993), pp. 509–522.
Y. Li and P.M. Pardalos, Generating quadratic assignment test problems with known optimal permutations, Computational Optimization and Applications 1(2) (1992), pp. 163–184.
P.M. Pardalos, Construction of test problems in quadratic bivalent programming, ACM Transactions on Mathematical Software 17(1) (1991), pp. 74–87.
P.M. Pardalos, Generation of large-scale quadratic programs for use as global optimization test problems, ACM Transactions on Mathematical Software 13(2) (1987), pp. 133–137.

115 Evaluation of heuristics (cont.). Randomly generated instances: the quickest and easiest way to obtain a supply of test instances.

116 Evaluation of heuristics (cont.). Performance measurement: time (most used, but difficult to assess due to differences among computers); solution quality.

117 Evaluation of heuristics (cont.). Solution quality: exact solutions of small instances — for "small" instances, verify results with exact algorithms; lower and upper bounds — in many cases the problem of finding good bounds is as difficult as solving the original problem.

118 Metaheuristics in Optimization118 Evaluation of Heuristics (cont.) Space covering techniques

119 Success stories. The success of metaheuristics can be seen in the numerous applications to which they have been applied. Examples: scheduling, routing, logic, partitioning, location, graph-theoretic problems, QAP and other assignment problems, and miscellaneous problems.

120 Concluding remarks. Metaheuristics have been shown to perform well in practice. Many times the globally optimal solution is found, but there is no "certificate of optimality". Large problem instances can be solved by implementing metaheuristics in parallel; this seems to be the most practical way to deal with massive data sets.

121 References. Handbook of Applied Optimization, edited by Panos M. Pardalos and Mauricio G.C. Resende, Oxford University Press, 2002. Handbook of Massive Data Sets, Series: Massive Computing, Vol. 4, edited by J. Abello, P.M. Pardalos, and M.G. Resende, Kluwer Academic Publishers, 2002.

122 THANK YOU ALL.
