
Metaheuristics in Optimization
Panos M. Pardalos, University of Florida, ISE Dept., Florida, USA
Workshop on the European Chapter on Metaheuristics and Large Scale Optimization, Vilnius, Lithuania, May 19-21, 2005

Outline
1. Quadratic Assignment & GRASP
2. Classical Metaheuristics
3. Parallelization of Metaheuristics
4. Evaluation of Metaheuristics
5. Success Stories
6. Concluding Remarks
(joint work with Mauricio Resende and Claudio Meneses)

Metaheuristics
Metaheuristics are high-level procedures that coordinate simple heuristics, such as local search, to find solutions of better quality than those found by the simple heuristics alone. Examples: simulated annealing, genetic algorithms, tabu search, scatter search, variable neighborhood search, and GRASP.

Quadratic assignment problem (QAP)
Given N facilities f_1, f_2, …, f_N and N locations l_1, l_2, …, l_N:
– Let A = (a_ij) be an N×N positive real matrix, where a_ij is the flow between facilities f_i and f_j.
– Let B = (b_ij) be an N×N positive real matrix, where b_ij is the distance between locations l_i and l_j.

Quadratic assignment problem (QAP)
Let p: {1,2,…,N} → {1,2,…,N} be an assignment of the N facilities to the N locations. Define the cost of the assignment p to be

    c(p) = sum over i=1..N, j=1..N of a_ij × b_p(i),p(j)

QAP: Find a permutation vector p ∈ Π_N that minimizes the assignment cost: min c(p) subject to p ∈ Π_N.

Quadratic assignment problem (QAP)
[Figure: three locations l1, l2, l3 with pairwise distances, and three facilities f1, f2, f3 with pairwise flows.]
Cost of assignment: 10×1 + 30×10 + 40×5 = 510

Quadratic assignment problem (QAP)
Swap the locations of facilities f2 and f3.
Cost of assignment: 10×10 + 30×1 + 40×5 = 330

Quadratic assignment problem (QAP)
Swap the locations of facilities f1 and f3.
Cost of assignment: 10×10 + 30×5 + 40×1 = 290. Optimal!
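As a sanity check on the arithmetic above, the following minimal Python sketch evaluates the three assignments. The flow and distance matrices are an assumption, reconstructed from the three cost computations on the slides (the figure itself did not survive transcription); each unordered facility pair is counted once, matching the slide arithmetic.

    # Hypothetical reconstruction of the 3x3 example instance.
    A = [[0, 10, 30],   # flows between facilities f1, f2, f3 (assumed)
         [10, 0, 40],
         [30, 40, 0]]
    B = [[0, 1, 10],    # distances between locations l1, l2, l3 (assumed)
         [1, 0, 5],
         [10, 5, 0]]

    def qap_cost(p, A, B):
        """Cost of assigning facility i to location p[i] (0-indexed),
        counting each unordered pair once, as in the slide arithmetic."""
        n = len(p)
        return sum(A[i][j] * B[p[i]][p[j]]
                   for i in range(n) for j in range(i + 1, n))

    print(qap_cost([0, 1, 2], A, B))  # 510: f1->l1, f2->l2, f3->l3
    print(qap_cost([0, 2, 1], A, B))  # 330: after swapping f2 and f3
    print(qap_cost([2, 0, 1], A, B))  # 290: the optimal assignment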

GRASP for QAP
GRASP: a multi-start metaheuristic with greedy randomized construction followed by local search (Feo & Resende, 1989, 1995; Festa & Resende, 2002; Resende & Ribeiro, 2003).
GRASP for QAP:
– Li, Pardalos, & Resende (1994): GRASP for QAP
– Resende, Pardalos, & Li (1996): Fortran subroutines for dense QAPs
– Pardalos, Pitsoulis, & Resende (1997): Fortran subroutines for sparse QAPs
– Fleurent & Glover (1999): memory mechanism in construction

GRASP for QAP
    repeat {
        x = GreedyRandomizedConstruction(α);
        x = LocalSearch(x);
        save x as x* if best so far;
    }
    return x*;

Construction
Stage 1: make two assignments {f_i → l_k; f_j → l_l}.
Stage 2: make the remaining N−2 assignments of facilities to locations, one facility/location pair at a time.

Stage 1 construction
– Sort the distances b_ij in increasing order: b_i(1),j(1) ≤ b_i(2),j(2) ≤ … ≤ b_i(N),j(N).
– Sort the flows a_kl in decreasing order: a_k(1),l(1) ≥ a_k(2),l(2) ≥ … ≥ a_k(N),l(N).
– Form the products a_k(1),l(1) × b_i(1),j(1), a_k(2),l(2) × b_i(2),j(2), …, a_k(N),l(N) × b_i(N),j(N).
– Among the smallest products, select a_k(q),l(q) × b_i(q),j(q) at random; it corresponds to the assignments {f_k(q) → l_i(q); f_l(q) → l_j(q)}.
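A small Python sketch of this stage-1 procedure, under the assumption that "among the smallest products" means a fixed-size candidate list (here the smallest quartile); the slides do not pin down the cutoff.

    import random

    def stage1(A, B):
        """Stage 1 of the GRASP construction: sort distances increasing,
        flows decreasing, pair them up, and pick one of the smallest
        flow-by-distance products at random. Returns two (facility,
        location) assignments, 0-indexed. A sketch, not the authors' code."""
        n = len(A)
        dists = sorted((B[i][j], i, j)
                       for i in range(n) for j in range(i + 1, n))
        flows = sorted(((A[k][l], k, l)
                        for k in range(n) for l in range(k + 1, n)),
                       reverse=True)
        prods = sorted(zip(flows, dists), key=lambda fd: fd[0][0] * fd[1][0])
        cutoff = max(1, len(prods) // 4)       # assumed size of "smallest products"
        (_, k, l), (_, i, j) = random.choice(prods[:cutoff])
        return [(k, i), (l, j)]                # f_k -> l_i and f_l -> l_j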

Stage 2 construction
If Ω = {(i_1,k_1), (i_2,k_2), …, (i_q,k_q)} are the q assignments made so far, then the cost of assigning f_j → l_l is its interaction with the already-placed pairs, i.e. c_jl = sum over (i,k) ∈ Ω of a_ij × b_kl.
Of all possible assignments, one is selected at random from among those having the smallest costs and is added to Ω.
Sped up in Pardalos, Pitsoulis, & Resende (1997) for QAPs with sparse A or B matrices.

Swap-based local search
(a) For all pairs of assignments {f_i → l_k; f_j → l_l}, test whether the swapped assignment {f_i → l_l; f_j → l_k} improves the solution.
(b) If so, make the swap and return to step (a).
Repeat (a)-(b) until no swap improves the current solution.
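In Python, this swap neighborhood search might look as follows; it reuses qap_cost from the earlier sketch and re-evaluates the full cost at each swap for clarity (efficient implementations compute the cost change incrementally).

    def swap_local_search(p, A, B):
        """Steps (a)-(b) of the slide: scan all pairwise location swaps,
        apply any improving swap, and repeat until no swap improves."""
        p = list(p)
        best = qap_cost(p, A, B)
        n = len(p)
        improved = True
        while improved:
            improved = False
            for i in range(n):
                for j in range(i + 1, n):
                    p[i], p[j] = p[j], p[i]        # tentative swap
                    c = qap_cost(p, A, B)
                    if c < best:
                        best = c                    # keep the swap
                        improved = True
                    else:
                        p[i], p[j] = p[j], p[i]    # undo
        return p, best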

Path-relinking
– Intensification strategy exploring trajectories connecting elite solutions: Glover (1996).
– Originally proposed in the context of tabu search and scatter search.
– Paths in the solution space leading to other elite solutions are explored in the search for better solutions: selection of moves that introduce attributes of the guiding solution into the current solution.

Path-relinking
Exploration of trajectories that connect high-quality (elite) solutions.
[Figure: a path in the neighborhood of solutions from an initial solution to a guiding solution.]

Path-relinking
A path is generated by selecting moves that introduce into the initial solution attributes of the guiding solution. At each step, all moves that incorporate attributes of the guiding solution are evaluated and the best move is selected.
[Figure: initial solution, guiding solution.]

Path-relinking
Combine solutions x and y. Δ(x,y): symmetric difference between x and y.
    while (|Δ(x,y)| > 0) {
        evaluate moves corresponding to Δ(x,y)
        make best move
        update Δ(x,y)
    }
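Specialized to QAP permutations, one hedged reading of this loop is the sketch below: each move swaps two positions so that one more position of the current solution agrees with the guide, and the best such move is taken. It reuses qap_cost from above.

    def path_relinking(x, y, A, B):
        """Walk from x toward the guiding solution y, one best move at a
        time, and return the best solution visited. A sketch of the loop
        on the slide, not the authors' implementation."""
        x = list(x)
        best, best_cost = list(x), qap_cost(x, A, B)
        delta = [i for i in range(len(x)) if x[i] != y[i]]
        while delta:
            moves = []
            for i in delta:
                j = x.index(y[i])            # position currently holding y[i]
                x[i], x[j] = x[j], x[i]      # make position i agree with y
                moves.append((qap_cost(x, A, B), i, j))
                x[i], x[j] = x[j], x[i]      # undo
            cost, i, j = min(moves)          # best move in Delta(x,y)
            x[i], x[j] = x[j], x[i]
            if cost < best_cost:
                best, best_cost = list(x), cost
            delta = [i for i in range(len(x)) if x[i] != y[i]]
        return best, best_cost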

GRASP with path-relinking
Originally used by Laguna and Martí (1999). Maintains a set of elite solutions found during GRASP iterations. After each GRASP iteration (construction and local search):
– Use the GRASP solution as the initial solution.
– Select an elite solution uniformly at random: the guiding solution.
– Perform path-relinking between these two solutions.

GRASP with path-relinking
Repeat for Max_Iterations:
– Construct a greedy randomized solution.
– Use local search to improve the constructed solution.
– Apply path-relinking to further improve the solution.
– Update the pool of elite solutions.
– Update the best solution found.
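Combining the earlier sketches gives a minimal GRASP with path-relinking driver. The construction step is stubbed with a random permutation plus local search, and the pool update simply keeps the best distinct solutions; both are simplifying assumptions rather than the scheme on the slides. It reuses A, B, qap_cost, swap_local_search, and path_relinking from above.

    import random

    def grasp_pr(A, B, max_iters=100, pool_size=10):
        n = len(A)
        pool = []                               # (cost, solution) elite pairs
        best, best_cost = None, float('inf')
        for _ in range(max_iters):
            x = list(range(n))
            random.shuffle(x)                   # stand-in for greedy randomized construction
            x, c = swap_local_search(x, A, B)
            if pool:
                guide = random.choice(pool)[1]  # guiding elite solution
                y, cy = path_relinking(x, guide, A, B)
                if cy < c:
                    x, c = y, cy
            if all(x != s for _, s in pool):    # simplified pool update
                pool.append((c, list(x)))
                pool.sort(key=lambda t: t[0])
                del pool[pool_size:]
            if c < best_cost:
                best, best_cost = list(x), c
        return best, best_cost

    print(grasp_pr(A, B, max_iters=50))         # e.g. ([2, 0, 1], 290) on the 3x3 example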

Path-relinking for QAP (permutation vectors)
[Figure: path-relinking between two permutation vectors.]

Path-relinking for QAP
If a swap improves the solution, local search is applied. If the resulting local minimum improves the incumbent, it is saved.
[Figure: path from the initial solution to the guiding solution through a local minimum.]

Path-relinking for QAP
Result of path-relinking: S*. Along the path S = S_0, S_1, S_2, …, S_N = T, if c(S*) < min{c(S), c(T)} and c(S*) ≤ c(S_i) for i = 1,…,N, i.e. S* is the best solution in the path, then S* is returned.
[Figure: path in the neighborhood of solutions from S to T.]

Path-relinking for QAP
S_i is a local minimum w.r.t. PR if c(S_i) < c(S_i−1) and c(S_i) < c(S_i+1). If path-relinking does not improve on (S,T), then: if some S_i is a best local minimum w.r.t. PR, return S* = S_i; if no local minimum exists, return S* = argmin{c(S), c(T)}.

PR pool management
S* is a candidate for inclusion in the pool of elite solutions P.
– If c(S*) < c(S_e) for all S_e ∈ P, then S* is put in P.
– Else, if c(S*) < max{c(S_e): S_e ∈ P} and |Δ(S*,S_e)| ≥ 3 for all S_e ∈ P, then S* is put in P.
– If the pool is full, remove argmin{|Δ(S*,S_e)|: S_e ∈ P such that c(S_e) ≥ c(S*)}.

PR pool management
S is the initial solution for path-relinking: favor the choice of a target solution T with large symmetric difference from S. This leads to longer paths in path-relinking. Probability of choosing S_e ∈ P: proportional to the symmetric difference, i.e. p(S_e) = |Δ(S,S_e)| / Σ over S_j ∈ P of |Δ(S,S_j)|.
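A Python sketch of these pool rules; the thresholds (difference ≥ 3, the removal rule) are as stated above, while tie-breaking and the exact selection formula are assumptions.

    import random

    def sym_diff(x, y):
        """|Delta(x,y)|: number of positions where permutations differ."""
        return sum(1 for a, b in zip(x, y) if a != b)

    def try_insert(pool, s, cost, capacity=30, min_diff=3):
        """Candidate S* enters the pool if it beats every elite solution,
        or beats the worst while differing in at least min_diff positions
        from all of them. pool is a list of (cost, solution) pairs."""
        if pool and cost >= min(c for c, _ in pool):
            if cost >= max(c for c, _ in pool):
                return False
            if any(sym_diff(s, e) < min_diff for _, e in pool):
                return False
        if len(pool) >= capacity:
            # drop the most similar elite solution among those no better than S*
            worse = [(sym_diff(s, e), c, e) for c, e in pool if c >= cost]
            if not worse:
                return False
            _, c, e = min(worse)
            pool.remove((c, e))
        pool.append((cost, list(s)))
        return True

    def pick_guide(pool, s):
        """Favor guides far from s: probability proportional to |Delta(s, S_e)|."""
        weights = [sym_diff(s, e) for _, e in pool]
        return random.choices([e for _, e in pool], weights=weights, k=1)[0]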

Experimental results
– Compare GRASP with and without path-relinking.
– New GRASP code in C outperforms the old Fortran codes; we use the same code to compare the algorithms.
– All QAPLIB (Burkard, Karisch, & Rendl, 1991) instances of size N ≤ 40.
– 100 independent runs of each algorithm, recording the CPU time to find the best known solution for the instance.

Experimental results
– SGI Challenge computer (28 196-MHz R10000 processors, 7 GB memory); a single processor used for each run.
– GRASP RCL parameter α chosen at random in the interval [0,1] at each GRASP iteration.
– Size of the elite set: 30.
– Path-relinking done in both directions (S to T and T to S).
– Care taken to ensure that GRASP and GRASP with path-relinking iterations are in sync.

Time-to-target-value plots
The random variable time-to-target-solution-value fits a two-parameter exponential distribution (Aiex, Resende, & Ribeiro, 2002). Sort the times so that t_1 ≤ t_2 ≤ … ≤ t_100 and plot the points (t_i, p_i), for i = 1,…,100, where p_i = (i − 0.5)/100.
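Computing the plotted points is a one-liner; a sketch with made-up run times:

    def ttt_points(times):
        """Empirical time-to-target points (t_i, p_i), p_i = (i - 0.5)/n."""
        ts = sorted(times)
        n = len(ts)
        return [(t, (i - 0.5) / n) for i, t in enumerate(ts, start=1)]

    for t, p in ttt_points([0.8, 1.3, 0.5, 2.1, 0.9]):   # illustrative times (s)
        print(f"{t:5.2f}  {p:.2f}")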

Time-to-target-value plots
[Plot of the empirical distribution.] In 80% of the trials the target solution is found in less than 1.4 s; the probability of finding the target solution in less than 1 s is about 70%.

Time-to-target-value plots
For a given time, compare the probabilities of finding the target solution within that time; for a given probability, compare the times required to find the target solution with that probability. We say ALG 1 is faster than ALG 2 when, for any given time, ALG 1 has the higher probability of finding the target solution within that time.
[Figure: time-to-target plots for ALG 1 and ALG 2.]

Time-to-target plots on QAPLIB instances:
– C.E. Nugent, T.E. Vollmann and J. Ruml [1968]: nug12, nug20, nug25, nug30
– E.D. Taillard [1991, 1994]: tai15a, tai17a, tai20a, tai25a
– Y. Li and P.M. Pardalos [1992]: lipa20a, lipa30a, lipa40a
– U.W. Thonemann and A. Bölte [1994]: tho30, tho40
– L. Steinberg [1961]: ste36a, ste36b, ste36c
– M. Scriabin and R.C. Vergin [1975]: scr12, scr15, scr20
– S.W. Hadley, F. Rendl and H. Wolkowicz [1992]: had14, had16, had18, had20
– R.E. Burkard and J. Offermann [1977]: bur26a, bur26b, bur26c, bur26d
– N. Christofides and E. Benavent [1989]: chr18a, chr20a, chr22a, chr25a
– C. Roucairol [1987]: rou12, rou15, rou20
– J. Krarup and P.M. Pruzan [1978]: kra30a, kra30b
– B. Eschermann and H.J. Wunderlich [1990]: esc32a, esc32b, esc32c, esc32d, esc32e, esc32f, esc32g, esc32h

Remarks
– A new heuristic for the QAP is described.
– Path-relinking is shown to improve the performance of GRASP on almost all instances.
– Experimental results and code are available online.

Classical Metaheuristics
– Simulated Annealing
– Genetic Algorithms
– Memetic Algorithms
– Tabu Search
– GRASP
– Variable Neighborhood Search
– etc. (see Handbook of Applied Optimization, P.M. Pardalos and M.G. Resende, eds., Oxford University Press, 2002)

Simulated Annealing
Input: a problem instance. Output: a (sub-optimal) solution.
1. Generate an initial solution at random and initialize the temperature T.
2. While (T > 0) do:
   (a) While (thermal equilibrium not reached) do:
       (i) Generate a neighbor state at random and evaluate the change in energy level ΔE.
       (ii) If ΔE < 0, update the current state with the new state.
       (iii) If ΔE ≥ 0, update the current state with the new state with probability e^(−ΔE/T).
   (b) Decrease the temperature T according to the annealing schedule.
3. Output the solution having the lowest energy.
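A generic, runnable rendering of this pseudocode; the geometric cooling schedule and its parameters are illustrative assumptions, since the slide leaves the schedule abstract.

    import math
    import random

    def simulated_annealing(x0, energy, neighbor, t0=10.0, alpha=0.95,
                            steps_per_t=100, t_min=1e-3):
        """Accept improving moves always and worsening moves with
        probability exp(-dE/T); cool T geometrically."""
        x, e = x0, energy(x0)
        best, best_e = x, e
        t = t0
        while t > t_min:
            for _ in range(steps_per_t):       # "thermal equilibrium" loop
                y = neighbor(x)
                de = energy(y) - e
                if de < 0 or random.random() < math.exp(-de / t):
                    x, e = y, e + de
                    if e < best_e:
                        best, best_e = x, e
            t *= alpha                          # annealing schedule
        return best, best_e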

Genetic Algorithms
Input: a problem instance. Output: a (sub-optimal) solution.
1. t = 0; initialize P(t); evaluate the fitness of the individuals in P(t).
2. While (termination condition is not satisfied) do:
   (i) t = t + 1
   (ii) Select P(t), recombine P(t), and evaluate P(t).
3. Output the best solution among all the population as the (sub-optimal) solution.

Memetic Algorithms
Input: a problem instance. Output: a (sub-optimal) solution.
1. t = 0; initialize P(t); evaluate the fitness of the individuals in P(t).
2. While (termination condition is not satisfied) do:
   (i) t = t + 1
   (ii) Select P(t), recombine P(t), perform local search on each individual of P(t), and evaluate P(t).
3. Output the best solution among all the population as the (sub-optimal) solution.

Tabu Search
Input: a problem instance. Output: a (sub-optimal) solution.
1. Initialization:
   (i) Generate an initial solution x and set x* = x.
   (ii) Initialize the tabu list T = Ø.
   (iii) Set the iteration counters k = 0 and m = 0.
2. While (N(x)\T ≠ Ø) do:
   (i) k = k + 1; m = m + 1
   (ii) Select x as the best solution from the set N(x)\T.
   (iii) If f(x) < f(x*), then update x* = x and set m = 0.
   (iv) If k = k_max or m = m_max, go to step 3.
3. Output the best solution found, x*.
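A generic Python rendering of this scheme; the tabu list stores move attributes with a fixed tenure, and the aspiration criterion is omitted, as on the slide. Here neighbors and move_of are problem-specific hooks the caller supplies.

    from collections import deque

    def tabu_search(x0, f, neighbors, move_of, k_max=1000, m_max=100, tenure=10):
        """Move to the best non-tabu neighbor each iteration; stop after
        k_max iterations or m_max iterations without improvement."""
        x, best, best_f = x0, x0, f(x0)
        tabu = deque(maxlen=tenure)            # fixed-length tabu list T
        k = m = 0
        while k < k_max and m < m_max:
            k += 1
            m += 1
            cands = [(f(y), y) for y in neighbors(x)
                     if move_of(x, y) not in tabu]
            if not cands:                      # N(x)\T is empty
                break
            fy, y = min(cands, key=lambda t: t[0])
            tabu.append(move_of(x, y))
            x = y
            if fy < best_f:
                best, best_f, m = y, fy, 0
        return best, best_f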

GRASP
Input: a problem instance. Output: a (sub-optimal) solution.
1. Repeat for Max_Iterations:
   (i) Construct a greedy randomized solution.
   (ii) Use local search to improve the constructed solution.
   (iii) Update the best solution found.
2. Output the best solution found as the (sub-optimal) solution.

VNS (Variable Neighborhood Search)
Input: a problem instance. Output: a (sub-optimal) solution.
1. Initialization:
   (i) Select the set of neighborhood structures N_k, k = 1,…,k_max, that will be used in the search.
   (ii) Find an initial solution x.
   (iii) Choose a stopping condition.
2. Repeat until the stopping condition is met:
   (i) k = 1
   (ii) While (k ≤ k_max) do:
       (a) Shaking: generate a point y at random from N_k(x).
       (b) Local search: apply some local search method with y as the initial solution; let z be the local optimum.
       (c) Move or not: if z is better than the incumbent, move there (x = z) and set k = 1; otherwise set k = k + 1.
3. Output the incumbent solution.
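The same scheme in runnable form; shake and local_search are problem-specific hooks, and the round-count stopping condition is an assumption standing in for step 1(iii).

    def vns(x0, f, shake, local_search, k_max=3, max_rounds=100):
        """Basic VNS: shake in N_k, improve by local search, move and
        reset k on improvement, otherwise enlarge the neighborhood."""
        x, fx = x0, f(x0)
        for _ in range(max_rounds):             # stand-in stopping condition
            k = 1
            while k <= k_max:
                y = shake(x, k)                 # random point of N_k(x)
                z = local_search(y)             # local optimum from y
                fz = f(z)
                if fz < fx:                     # move or not
                    x, fx, k = z, fz, 1
                else:
                    k += 1
        return x, fx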

GRASP in more detail
Construction phase: greediness + randomization. Builds a feasible solution:
– use greediness to build a restricted candidate list and apply randomness to select an element from the list, or
– use randomness to build the restricted candidate list and apply greediness to select an element from the list.
Local search: search the current neighborhood until a local optimum is found.
– Solutions generated by the construction procedure are not necessarily locally optimal. The effectiveness of local search depends on the neighborhood structure, the search strategy, and fast evaluation of neighbors, but also on the construction procedure itself.

Construction phase
Greedy Randomized Construction:
    Solution ← Ø
    Evaluate the incremental costs of the candidate elements
    while Solution is not complete do:
        Build the restricted candidate list (RCL)
        Select an element s from the RCL at random
        Solution ← Solution ∪ {s}
        Reevaluate the incremental costs
    endwhile

Construction phase
Minimization problem. Basic construction procedure:
– Greedy function c(e): the incremental cost associated with incorporating element e into the current partial solution.
– c_min (resp. c_max): smallest (resp. largest) incremental cost.
– The RCL is made up of the elements with the smallest incremental costs.

Construction phase
Cardinality-based construction: the p elements with the smallest incremental costs.
Quality-based construction:
– The parameter α defines the quality of the elements in the RCL.
– The RCL contains the elements with incremental cost c_min ≤ c(e) ≤ c_min + α(c_max − c_min).
– α = 0: pure greedy construction; α = 1: pure randomized construction.
– Select at random from the RCL using a uniform probability distribution.
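The quality-based rule translates directly into code; a sketch in which cost(e, sol) is the caller-supplied incremental cost function.

    import random

    def greedy_randomized_construction(candidates, cost, alpha=0.3):
        """Repeatedly form the RCL {e : c(e) <= c_min + alpha*(c_max - c_min)}
        over the remaining candidates and pick a member uniformly at random.
        alpha = 0 is pure greedy, alpha = 1 pure random."""
        sol, remaining = [], list(candidates)
        while remaining:
            costs = {e: cost(e, sol) for e in remaining}
            c_min, c_max = min(costs.values()), max(costs.values())
            threshold = c_min + alpha * (c_max - c_min)
            rcl = [e for e in remaining if costs[e] <= threshold]
            e = random.choice(rcl)
            sol.append(e)
            remaining.remove(e)
        return sol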

Illustrative results: RCL parameter
[Plot: solution quality as a function of the RCL parameter α, from greedy to random, on a weighted MAX-SAT instance; SGI Challenge, 196 MHz.]

Illustrative results: RCL parameter
[Plot: best and average solution values, and time (seconds) for 1000 iterations, as a function of the RCL parameter α, from random to greedy; weighted MAX-SAT instance with 100 variables and 850 clauses; SGI Challenge, 196 MHz.]

Path-relinking
– Intensification strategy exploring trajectories connecting elite solutions: Glover (1996).
– Originally proposed in the context of tabu search and scatter search.
– Paths in the solution space leading to other elite solutions are explored in the search for better solutions: selection of moves that introduce attributes of the guiding solution into the current solution.

Path-relinking
Exploration of trajectories that connect high-quality (elite) solutions.
[Figure: a path in the neighborhood of solutions from an initial solution to a guiding solution.]

Path-relinking
A path is generated by selecting moves that introduce into the initial solution attributes of the guiding solution. At each step, all moves that incorporate attributes of the guiding solution are evaluated and the best move is selected.
[Figure: initial solution, guiding solution.]

Path-relinking
Elite solutions x and y. Δ(x,y): symmetric difference between x and y.
    while (|Δ(x,y)| > 0) {
        evaluate moves corresponding to Δ(x,y)
        make best move
        update Δ(x,y)
    }

GRASP: 3-index assignment (AP3)
Complete tripartite graph: each triangle made up of three distinctly colored nodes has a cost.
AP3: find a set of triangles such that each node appears in exactly one triangle and the sum of the costs of the triangles is minimized.
[Figure: a complete tripartite graph with two example triangles of cost 10 and cost 5.]

3-index assignment (AP3)
Construction: the solution is built by selecting n triplets, one at a time, biased by triplet costs.
Local search: explores an O(n²)-size neighborhood of the current solution, moving to a better solution if one is found.
Aiex, Pardalos, Resende, & Toraldo (2003)

3-index assignment (AP3)
Path-relinking is done between:
– the initial solution S = {(1, j_1^S, k_1^S), (2, j_2^S, k_2^S), …, (n, j_n^S, k_n^S)}, and
– the guiding solution T = {(1, j_1^T, k_1^T), (2, j_2^T, k_2^T), …, (n, j_n^T, k_n^T)}.

GRASP with path-relinking
Originally used by Laguna and Martí (1999). Maintains a set of elite solutions found during GRASP iterations. After each GRASP iteration (construction and local search):
– Use the GRASP solution as the initial solution.
– Select an elite solution uniformly at random: the guiding solution (it may also be selected with probability proportional to the symmetric difference w.r.t. the initial solution).
– Perform path-relinking between these two solutions.

GRASP with path-relinking
Repeat for Max_Iterations:
– Construct a greedy randomized solution.
– Use local search to improve the constructed solution.
– Apply path-relinking to further improve the solution.
– Update the pool of elite solutions.
– Update the best solution found.

GRASP with path-relinking
Variants: trade-offs between computation time and solution quality.
– Explore different trajectories (e.g. backward, forward): it is better to start from the best solution, since the neighborhood of the initial solution is the one explored most fully.
– Explore both trajectories: twice the time, often with only marginal improvements.
– Do not apply PR at every iteration, but only periodically: similar to filtering during local search.
– Truncate the search: do not follow the full trajectory.
– PR may also be applied as a post-optimization step to all pairs of elite solutions.

GRASP with path-relinking
Successful applications:
1) Prize-collecting minimum Steiner tree problem: Canuto, Resende, & Ribeiro (2001) (e.g. improved all solutions found by the approximation algorithm of Goemans & Williamson)
2) Minimum Steiner tree problem: Ribeiro, Uchoa, & Werneck (2002) (e.g. best known results for open problems in series dv640 of the SteinLib)
3) p-median: Resende & Werneck (2002) (e.g. best known solutions for problems in the literature)

GRASP with path-relinking
Successful applications (cont'd):
4) Capacitated minimum spanning tree: Souza, Duhamel, & Ribeiro (2002) (e.g. best known results for the largest problems, with 160 nodes)
5) 2-path network design: Ribeiro & Rosseti (2002) (better solutions than the greedy heuristic)
6) Max-Cut: Festa, Pardalos, Resende, & Ribeiro (2002) (e.g. best known results for several instances)
7) Quadratic assignment: Oliveira, Pardalos, & Resende (2003)

GRASP with path-relinking
Successful applications (cont'd):
8) Job-shop scheduling: Aiex, Binato, & Resende (2003)
9) Three-index assignment problem: Aiex, Resende, Pardalos, & Toraldo (2003)
10) PVC routing: Resende & Ribeiro (2003)
11) Phylogenetic trees: Ribeiro & Vianna (2003)

GRASP with path-relinking
P is a set (pool) of elite solutions. Each of the first |P| GRASP iterations adds one solution to P (if different from the others). After that, solution x is promoted to P if:
– x is better than the best solution in P, or
– x is not better than the best solution in P, but is better than the worst and is sufficiently different from all solutions in P.


Parallelization of GRASP
GRASP is easy to implement in parallel:
– parallelization by problem decomposition: Feo, R., & Smith (1994)
– iteration parallelization: Pardalos, Pitsoulis, & R. (1995); Pardalos, Pitsoulis, & R. (1996); Alvim (1998); Martins & Ribeiro (1998); Murphey, Pardalos, & Pitsoulis (1998); R. (1998); Martins, R., & Ribeiro (1999); Aiex, Pardalos, R., & Toraldo (2000)

Parallel independent implementation
Parallelism in metaheuristics: robustness (Cung, Martins, Ribeiro, & Roucairol, 2001). Multiple-walk independent-thread strategy:
– p processors available
– iterations evenly distributed over the p processors
– each processor keeps a copy of the data and algorithms
– one processor acts as the master, handling seeds, data, and the iteration counter, besides performing GRASP iterations
– each processor performs Max_Iterations/p iterations

Parallel independent implementation
[Figure: p processors with seeds seed(1), …, seed(p); each keeps its own elite set, and the best solution is sent to the master.]

Parallel cooperative implementation
Multiple-walk cooperative-thread strategy:
– p processors available
– iterations evenly distributed over p−1 processors
– each processor has a copy of the data and algorithms
– one processor acts as the master, handling seeds, data, and the iteration counter and managing the pool of elite solutions, but does not perform GRASP iterations
– each processor performs Max_Iterations/(p−1) iterations

Metaheuristics in Optimization108 Parallel cooperative implementation 2 Elite 1 p 3 Elite solutions are stored in a centralized pool. Master Slave

Cooperative vs. independent strategies (for AP3)
Same instance: 15 runs with different seeds, 3200 iterations. The pool is poorer when fewer GRASP iterations are done, and solution quality deteriorates.
[Table: best and average solution values by number of processors, for the independent and cooperative strategies.]

3-index assignment (AP3)
[Plot: speedup on 3-index assignment instance bs24; SGI Challenge, 196 MHz.]

Evaluation of Heuristics
Experimental design:
– problem instances
– problem characteristics of interest (e.g. instance size, density, etc.)
– upper/lower/optimal values

Evaluation of Heuristics (cont.)
Sources of test instances:
– Real data sets: it is easy to obtain real data sets.
– Random variants of real data sets: the structure of the instance (e.g. the graph) is preserved, but details (e.g. distances, costs) are changed.

Evaluation of Heuristics (cont.)
– Test problem libraries
– Test problem collections with "best known" solutions
– Test problem generators with known optimal solutions (e.g. QAP generators, Maximum Clique, Steiner Tree problems, etc.)

Evaluation of Heuristics (cont.)
Test problem generators with known optimal solutions (cont.):
– C.A. Floudas, P.M. Pardalos, C.S. Adjiman, W.R. Esposito, Z. Gumus, S.T. Harding, J.L. Klepeis, C.A. Meyer, and C.A. Schweiger, Handbook of Test Problems for Local and Global Optimization, Kluwer Academic Publishers, 1999.
– C.A. Floudas and P.M. Pardalos, A Collection of Test Problems for Constrained Global Optimization Algorithms, Lecture Notes in Computer Science 455, Springer-Verlag, 1990.
– J. Hasselberg, P.M. Pardalos and G. Vairaktarakis, Test case generators and computational results for the maximum clique problem, Journal of Global Optimization 3 (1993).
– B. Khoury, P.M. Pardalos and D.-Z. Du, A test problem generator for the Steiner problem in graphs, ACM Transactions on Mathematical Software 19(4) (1993).
– Y. Li and P.M. Pardalos, Generating quadratic assignment test problems with known optimal permutations, Computational Optimization and Applications 1(2) (1992).
– P.M. Pardalos, Construction of test problems in quadratic bivalent programming, ACM Transactions on Mathematical Software 17(1) (1991).
– P.M. Pardalos, Generation of large-scale quadratic programs for use as global optimization test problems, ACM Transactions on Mathematical Software 13(2) (1987).

Evaluation of Heuristics (cont.)
– Randomly generated instances (the quickest and easiest way to obtain a supply of test instances)

Evaluation of Heuristics (cont.)
Performance measurement:
– Time (most used, but difficult to assess due to differences among computers)
– Solution quality

Evaluation of Heuristics (cont.)
Solution quality:
– Exact solutions of small instances: for "small" instances, verify results with exact algorithms.
– Lower and upper bounds: in many cases the problem of finding good bounds is as difficult as solving the original problem.

Evaluation of Heuristics (cont.)
Space covering techniques

Success Stories
The success of metaheuristics can be seen in the numerous applications to which they have been applied. Examples: scheduling, routing, logic, partitioning, location, graph-theoretic problems, QAP and other assignment problems, and miscellaneous problems.

Concluding Remarks
– Metaheuristics have been shown to perform well in practice.
– Many times the globally optimal solution is found, but there is no "certificate of optimality".
– Large problem instances can be solved by implementing metaheuristics in parallel.
– This seems to be the most practical way to deal with massive data sets.

References
– Handbook of Applied Optimization, edited by Panos M. Pardalos and Mauricio G. C. Resende, Oxford University Press, 2002.
– Handbook of Massive Data Sets, Series: Massive Computing, Vol. 4, edited by J. Abello, P.M. Pardalos, and M.G. Resende, Kluwer Academic Publishers, 2002.

THANK YOU ALL.