
1 Nuno Abreu, Zafeiris Kokkinogenis, Behdad Bozorg, Muhammad Ajmal

 Introduction to TS  Parameters of TS  Basic concepts of TS  TS vs. other Meta-heuristics  Memory in TS  Use of Memory in TS  Tabus  Aspiration criteria  Stop condition  TS basic algorithm  Search Process – 1  Search Process – 2  Flowchart of a Standard TS Algorithm  Example  Pros and Cons 2

 TS is an iterative procedure designed for the solution of optimization problems  Invented by Glover (1986)  Uses a neighborhood search procedure to iteratively move from a solution x to a solution x* in the neighborhood of x  Uses memory structures so that the algorithm does not revisit solutions it has already seen  Tabu Search Benefits  Cycle avoidance, which also saves time  Guides the search to promising regions of the search space 3

 Space search procedure  Neighborhood structure  Short-term memory: Tabu list  Types of moves  Addition of a Tabu move  Maximum size of Tabu list  Aspiration conditions  Stopping rule 4

 Saves information gathered during the exploration process  This information is used to limit the moves through the neighborhood  The structure of the neighborhood of the solutions varies from iteration to iteration  Infeasible solutions can be accepted and evaluated to escape local minima  To prevent cycling, recent moves are forbidden  A tabu list records forbidden moves, which are referred to as tabu moves  A tabu move can still be accepted through the aspiration criteria  Allows exploitation of good solutions and exploration of unvisited regions of the search space 5

 Traditional descent methods cannot allow non-improving moves; TS can.  SA and GA do not have memory; TS does.  SA uses randomness to escape local minima; TS uses forbidden moves.  TS claims that a bad strategic choice can yield more information than a random choice. 6

 TS uses mainly two types of memory:  Short-term memory  Recent solutions  Structure where tabu moves are stored  Avoids cycling  Long-term memory (frequency-based)  Number of iterations that “solution components” have been present in the current solution 7

 Use of memory leads to learning  Prevents the search from repeating moves  Explores unvisited areas of the solution space  By using memory to avoid certain moves, TS can be seen as a global optimizer rather than a local one. 8

 Tabus are one of the distinctive elements of TS compared to local search  They prevent cycling when moving away from a local minimum through non-improving moves  They are stored in the short-term memory – the tabu list  The tabu tenure is the number of iterations a move remains tabu  The tabu list can be of fixed length or dynamically varying 9

 If a tabu move can lead to a better solution, its tabu status should be overruled  This is done using aspiration level conditions  Aspiration criterion: accept an improving solution even if it is generated by a tabu move  A tabu move becomes admissible if it yields a solution that is better than an aspiration value  A tabu move becomes admissible if the direction of the search (improving or non-improving) does not change 10

 Step 1: Choose an initial solution i in S. Set i*=i and k=0.  Step 2: Set k=k+1 and generate a subset V of solutions in N(i,k) such that the Tabu conditions are not violated or the aspiration conditions hold.  Step 3: Choose a best j in V and set i=j.  Step 4: If f(i) < f(i*) then set i* = i.  Step 5: Update Tabu and aspiration conditions.  Step 6: If a stopping condition is met then stop. Else go to Step 2. 11
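The six steps above translate almost directly into code. Below is a minimal, generic sketch (not the authors' implementation): neighborhood, cost and move_of are hypothetical, user-supplied callables, the tabu tenure is fixed, and the stopping rule is a plain iteration limit.

```python
# Minimal sketch of the basic TS loop (Steps 1-6). `neighborhood(i)` yields candidate
# solutions, `cost(i)` evaluates them, and `move_of(i, j)` returns a hashable
# description of the move from i to j; all three are assumed to be supplied by the user.
def tabu_search(initial, neighborhood, cost, move_of, tenure=7, max_iter=1000):
    current = best = initial                      # Step 1: i* = i, k = 0
    tabu = {}                                     # short-term memory: move -> expiry iteration
    for k in range(1, max_iter + 1):              # Step 2 / Step 6 (iteration limit)
        candidates = [j for j in neighborhood(current)
                      if tabu.get(move_of(current, j), 0) < k   # tabu condition not violated
                      or cost(j) < cost(best)]                  # aspiration condition holds
        if not candidates:                        # neighborhood effectively empty: stop
            break
        nxt = min(candidates, key=cost)           # Step 3: best admissible neighbor (may be worse)
        tabu[move_of(nxt, current)] = k + tenure  # Step 5: forbid the reverse move for `tenure` iterations
        current = nxt
        if cost(current) < cost(best):            # Step 4: update the incumbent i*
            best = current
    return best
```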

 N(i, k+1) = ∅ (no feasible solution in the neighborhood of solution i)  k is larger than the maximum number of iterations allowed  The number of iterations since the last improvement of i* is larger than a specified number  Evidence can be given that an optimal solution has been obtained 12

 Intensification:  searches solutions similar to the current solution  intensively explores known promising areas of the search space  creates solutions using the most attractive components of the best solutions in memory (recency memory)  Another technique consists in changing the neighborhood’s structure by allowing different moves  Diversification:  Examines unvisited regions of the search space  Generates different solutions  Using components rarely present in the current solution  Biasing the evaluation of a move by adding to the objective function a term related to component frequencies (sketched below)  Intensification and diversification phases alternate during the search 13
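As a small illustration of the frequency-based bias in the last diversification bullet, the move evaluation could be penalized as sketched below; the component representation, the freq counter and the weight are assumptions, not the slides' exact scheme.

```python
# Sketch of frequency-based diversification: bias the objective with a term that grows
# with how often the solution's components have appeared so far (long-term memory).
def diversified_cost(components, base_cost, freq, weight=0.5):
    penalty = sum(freq.get(c, 0) for c in components)  # rarely used components cost nothing
    return base_cost + weight * penalty                # frequently used components are discouraged
```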

 Allowing infeasible solutions  The constraints defining the search space can lead the process to mediocre solutions  Induces diversification  By dropping some constraints (relaxation) a larger space can be explored  The objective is penalized for the violation (see the sketch below)  A well-known technique: strategic oscillation  Surrogate objective  Evaluates neighbors using a simpler function than the objective in order to spot promising candidates (intensification)  Auxiliary objective  Used when the objective function cannot drive the search to more interesting areas  Orients the search by measuring desirable attributes of solutions 14
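The relaxation-plus-penalty idea can be sketched as below; the penalty weight and its doubling/halving update are illustrative assumptions, loosely in the spirit of strategic oscillation rather than a specific published scheme.

```python
# Evaluate relaxed (possibly infeasible) solutions by charging constraint violations.
def penalized_cost(base_cost, violation_amount, penalty_weight):
    return base_cost + penalty_weight * violation_amount

# Oscillate the penalty weight so the search keeps crossing the feasibility boundary:
# relax it after feasible solutions, tighten it after infeasible ones.
def update_penalty_weight(penalty_weight, last_solution_feasible, factor=2.0):
    return penalty_weight / factor if last_solution_feasible else penalty_weight * factor
```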

Flowchart of a standard TS algorithm: generate an initial solution and initialize the memory structures (heuristic procedure; short- and long-term memory) → construct a modified neighborhood and select the best neighbor (tabu restrictions, candidate lists, aspiration criteria, elite solutions) → execute specialized procedures (modified choice rules for diversification or intensification; restarting; strategic oscillation) → update the best solution → update the memory structures → if more iterations are needed, repeat; otherwise stop. 15

Minimum spanning tree problem with constraints. Objective: connect all nodes (A, B, C, D, E) at minimum cost. (Figure: the edge costs and an optimal solution without considering the constraints.) Constraint 1: link AD can be included only if link DE is also included (penalty: 100). Constraint 2: at most one of the three links AD, CD, and AB can be included (penalty of 100 if two of the three are selected, 200 if all three are selected). 16
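The penalty part of the example's objective follows directly from the two constraints above; a small sketch (the link-set representation is an assumption, and the edge costs themselves are not listed here, so only the penalty term is shown):

```python
# Penalty terms of the example: Constraint 1 (AD requires DE) and Constraint 2
# (at most one of AD, CD, AB).
def constraint_penalty(links):
    penalty = 0
    if "AD" in links and "DE" not in links:        # Constraint 1 violated
        penalty += 100
    chosen = len({"AD", "CD", "AB"} & set(links))  # Constraint 2
    if chosen == 2:
        penalty += 100
    elif chosen == 3:
        penalty += 200
    return penalty

# e.g. constraint_penalty({"AD", "CD", "CE", "BE"}) == 200 (both constraints violated)
```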

Iteration 1 (cost includes the constraint penalties; the figure marks the link to delete and the link to add). The move gives a new cost of 75 at iteration 2, which is a local optimum. 17

Iteration 2, cost = 75. Tabu list: DE. (Figure: one link marked tabu, one to delete, one to add.) The accepted move is non-improving: new cost = 85 at iteration 3, escaping the local optimum. * A tabu move will be considered only if it would result in a better solution than the best trial solution found previously (aspiration condition). 18

Iteration 3, cost = 85. Tabu list: DE & BE. (Figure: one link marked tabu, one to delete, one to add with cost 25.) The aspiration condition overrides the tabu status: new cost = 70 at iteration 4. * A tabu move will be considered only if it would result in a better solution than the best trial solution found previously (aspiration condition). 19

 Optimal Solution  Cost = 70  Additional iterations only find inferior solutions (Figure: the final spanning tree on nodes A, B, C, D, E.)

 Pros:  Allows non-improving solutions to be accepted in order to escape from a local optimum  The use of a tabu list prevents cycling  For larger and more difficult problems, tabu search can beat other approaches  Cons:  Too many parameters have to be determined  The number of iterations can be very large  The global optimum may not be found; this depends on the parameter settings  Can be too complex 21

 Glover, F., Kelly, J. P., and Laguna, M. Genetic Algorithms and Tabu Search: Hybrids for Optimization. Computers and Operations Research, Vol. 22, No. 1, pp. 111–134.  Glover, F. and Laguna, M. Tabu Search. Norwell, MA: Kluwer Academic Publishers.  Hanafi, S. On the Convergence of Tabu Search. Journal of Heuristics, Vol. 7, pp. 47–58.  Hertz, A., Taillard, E. and de Werra, D. A Tutorial on Tabu Search. Accessed on April 14, 2005:  Gendreau, M. An Introduction to Tabu Search. Centre de Recherche sur les Transports et Département d’informatique et de Recherche opérationnelle, Université de Montréal.  Hillier, F.S. and Lieberman, G.J. Introduction to Operations Research. New York, NY: McGraw-Hill, 8th Ed. 22


 Introduction  Tabu Search  Basic Approach – 1  Basic Approach – 2  The TS framework for multi-dimensional bin packing  Search methods  Computational test - Dataset  Computational test  Sample of results  Conclusions 24

 The paper deals with multi-dimensional cutting and packing algorithms  It assumes that items are packed with a fixed orientation (no rotation)  Lodi, Martello and Vigo present an effective bin packing (BP) Tabu Search framework  Its main characteristic is the adoption of a search scheme and a neighborhood that are independent of the packing problem to be solved 25

 The goal is to empty a specified target bin  Given a current solution, the moves modify it by changing the packing of a set S of items  The target bin is the one minimizing a filling function φ(.)  φ(.) measures how easy the bin is to empty  The idea is to favor target bins that pack a small area and a relatively large number of items (see the sketch below) 26
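The slides do not give the exact filling function, but one with the stated behavior (small values for bins that pack a small area and relatively many items) might look like the sketch below; alpha, the normalization and the item representation are assumptions.

```python
# Sketch of a filling function phi: lower values mean the bin is easier to empty,
# so the bin minimizing it is chosen as the target.
def filling_function(bin_items, bin_width, bin_height, n_total_items, alpha=20.0):
    packed_area = sum(w * h for (w, h) in bin_items)      # items as (width, height) pairs
    area_ratio = packed_area / (bin_width * bin_height)   # small when little area is packed
    item_ratio = len(bin_items) / n_total_items           # large when the bin holds many items
    return alpha * area_ratio - item_ratio
```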

 Select the target bin  Include one item, j, from the target bin, and the contents of k other bins, in a set S  Execute a greedy-type heuristic A on S  The size of the neighborhood, k, is automatically updated  If the items of S are packed into k (or fewer) bins, item j is removed from the target bin. Otherwise, S is changed by selecting a different set of k bins, or a different item from the target bin  Then a new item is selected, a new set S is defined and a new move is performed  The execution is halted as soon as a proven optimal solution is found, or a time limit is reached (a rough sketch of one move follows) 27
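A rough sketch of a single move of this scheme, under assumed data structures: each bin is a plain list of items and greedy_pack stands in for the greedy-type heuristic A (it takes a list of items and returns a list of bins).

```python
from itertools import combinations

# Try to shrink the target bin: move one of its items, together with the contents of
# k other bins, into at most k bins repacked by the heuristic A.
def try_to_empty(bins, target_index, greedy_pack, k):
    target = bins[target_index]
    others = [i for i in range(len(bins)) if i != target_index]
    for item in list(target):                      # try each item j of the target bin
        for chosen in combinations(others, k):     # try each set S built from k other bins
            S = [item] + [it for i in chosen for it in bins[i]]
            repacked = greedy_pack(S)              # run heuristic A on S
            if len(repacked) <= k:                 # item j fits into k (or fewer) bins
                target.remove(item)                # the target bin gets one item lighter
                rest = [b for i, b in enumerate(bins)
                        if i != target_index and i not in chosen]
                return rest + repacked + ([target] if target else [])
    return None   # no accepting combination for this k; the caller may enlarge k
```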

 If the algorithm gets stuck  If the target bin is not emptied, the neighborhood is enlarged by increasing the value of k up to a prefixed upper limit  The tabu list stores, for each forbidden move, the sum of the filling function values of the k+1 involved bins  Variable Neighborhood Search strategy  Small values of k = small neighborhoods, fast to explore  Accepting moves with increased k = enlarged neighborhood, more chances to improve the solution  By changing the size of the neighborhood, the algorithm alternates between “intensification” and “diversification”  The execution is halted as soon as a proven optimal solution is found, or a time limit is reached 28
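Continuing the previous sketch, the neighborhood enlargement can be wrapped around the move as below; k_max stands for the prefixed upper limit and the function name is an assumption.

```python
# Try progressively larger neighborhoods: small k is fast to explore, larger k gives
# more chances to improve the solution (the intensification/diversification trade-off).
def empty_target(bins, target_index, greedy_pack, k_max):
    for k in range(1, k_max + 1):
        result = try_to_empty(bins, target_index, greedy_pack, k)
        if result is not None:
            return result
    return None   # the target bin could not be emptied; diversify instead
```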


 The intensification procedure explores the neighborhood of the current solution (inner loop)  Two types of diversification are defined:  “soft” diversification – select as target bin the one having the second smallest filling function value  “hard” diversification – re-pack into separate bins the items currently packed in the z/2 bins (z being the number of bins in the current solution) with the smallest filling function values (sketched below)  The tabu list stores, for each forbidden move:  the sum of the filling function values  the last moves 30
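The "hard" diversification step could be sketched as follows, reusing the list-of-lists bin representation from the earlier sketches; filling_value stands in for the filling function φ and is an assumption.

```python
# Dismantle the z/2 bins with the smallest filling-function values and re-pack each of
# their items into its own separate bin.
def hard_diversification(bins, filling_value):
    z = len(bins)
    ranked = sorted(range(z), key=lambda i: filling_value(bins[i]))
    broken = set(ranked[: z // 2])
    kept = [b for i, b in enumerate(bins) if i not in broken]
    singles = [[item] for i in broken for item in bins[i]]
    return kept + singles
```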

 The benchmark consists of 500 random instances with n ∈ {20, 40, 60, 80, 100}  Ten different classes of instances were used  The widths wi and heights hi were generated from uniform distributions over varying intervals 31

 Measures the improvement brought by TS compared to three greedy-type heuristics:  Hybrid Best-Fit algorithm (HBP)  Knapsack Packing (KP)  Alternate Directions (AD)  The results show that TS gives a good improvement in the quality of the solutions obtained by the greedy-type heuristics 32


 The main idea of the framework:  Isolate the information concerning the problem  Let a greedy-type heuristic take care of the structure and construct feasible solutions  Tabu Search is then used to drive the search through the solution space by:  re-combining the packed items,  exploring a neighborhood by alternating between “intensification” and “diversification”  This results in a very general algorithm for multidimensional bin packing 34