Xin-She Yang, Nature-Inspired Optimization Algorithms, Elsevier, 2014


4 Simulated Annealing

Simulated annealing (SA) is a trajectory-based, random search technique for global optimization. It mimics the annealing process in materials processing, in which a metal cools and freezes into a crystalline state with minimum energy and larger crystal sizes, so as to reduce the defects in the metallic structure.

The annealing process involves the careful control of temperature and its cooling rate, often called the annealing schedule.

4.1 Annealing and Boltzmann Distribution

Simulated annealing was proposed by Kirkpatrick et al. in 1983. The metaphor of SA came from the annealing characteristics in metal processing.
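
For reference (this is standard statistical mechanics rather than text quoted from the slides), the Boltzmann distribution named in this section's title gives the probability of a state with energy E at temperature T as p(E) ∝ exp(−E/(k_B T)), where k_B is Boltzmann's constant. In SA, the energy plays the role of the objective value, and k_B is usually absorbed into the temperature parameter.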

Gradient-based methods and other deterministic search methods have the disadvantage of becoming trapped in local minima; SA's main advantage is its ability to avoid this trap. In fact, it has been proved that simulated annealing will converge to the global optimum if enough randomness is used in combination with sufficiently slow cooling.

Metaphorically speaking, this is equivalent to dropping bouncing balls over a landscape: as the balls bounce and lose energy, they settle into local minima. If the balls are allowed to bounce enough times and lose energy slowly enough, some of them will eventually fall into the globally lowest locations, and hence the global minimum can be reached.

The basic idea of the SA algorithm is to use a random search in the form of a Markov chain that not only accepts changes that improve the objective function but also keeps some changes that are not ideal. In a minimization problem, for example, any better moves or changes that decrease the value of the objective function f will be accepted; however, some changes that increase f will also be accepted with a probability p.
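
In the common Boltzmann-type formulation, this acceptance probability is p = exp(−Δf/T). A minimal Python sketch of the acceptance rule (the function name and signature here are illustrative, not from the text):

```python
import math
import random

def accept_move(delta_f: float, T: float) -> bool:
    """Metropolis-style acceptance rule for a minimization problem.

    delta_f : change in objective value, f(new) - f(current)
    T       : current temperature (must be positive)
    """
    if delta_f <= 0:  # improving (or neutral) move: always accept
        return True
    # worsening move: accept with probability p = exp(-delta_f / T)
    return random.random() < math.exp(-delta_f / T)
```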

4.2 Parameters

The choice of the right initial temperature is crucially important. For a given change Δf, if T is too high (T → ∞), then p → 1, which means almost all changes will be accepted and the search behaves like a random walk. If T is too low (T → 0), then any Δf > 0 (a worse solution) will rarely be accepted, since p → 0, and the search behaves like a gradient-based (descent-only) method.

If T is too high, the system is at a high-energy state on the topological landscape and the minima are not easily reached. If T is too low, the system may be trapped in a local minimum (not necessarily the global minimum), and there is not enough energy for the system to jump out of the local minimum to explore other minima, including the global minimum. So a proper initial temperature should be calculated.
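
One common heuristic for this calculation, given here as an illustrative sketch rather than the book's prescription, samples a few random trial moves and chooses T0 so that a worsening move of average magnitude ⟨Δf⟩ is accepted with a target probability p0, i.e. T0 = −⟨Δf⟩ / ln(p0):

```python
import math

def estimate_initial_temperature(sample_deltas, p0=0.8):
    """Heuristic T0 such that a worsening move of average size is
    accepted with probability p0: exp(-mean_delta / T0) = p0.

    sample_deltas : objective increases observed on random trial moves
    p0            : desired initial acceptance probability (0 < p0 < 1)
    """
    worse = [d for d in sample_deltas if d > 0]
    if not worse:
        raise ValueError("need at least one worsening trial move")
    mean_delta = sum(worse) / len(worse)
    return -mean_delta / math.log(p0)  # math.log(p0) < 0, so T0 > 0

# Example: four trial moves increased f by these amounts
print(estimate_initial_temperature([0.5, 1.2, 0.8, 2.0]))  # ~= 5.04
```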

Another important issue is how to control the annealing or cooling process so that the system cools down gradually from a higher temperature to ultimately freeze to a global minimum state. There are many ways of controlling the cooling rate or the decrease of the temperature.
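
Two widely used schedules, sketched here for illustration (the parameter names and default values are assumptions, not from the text), are geometric cooling, T_{k+1} = αT_k with 0 < α < 1, and linear cooling, T_k = T_0 − kβ:

```python
def geometric_cooling(T0: float, alpha: float = 0.95, T_min: float = 1e-8):
    """Yield T0, alpha*T0, alpha^2*T0, ... until the temperature
    drops below T_min (0 < alpha < 1)."""
    T = T0
    while T > T_min:
        yield T
        T *= alpha

def linear_cooling(T0: float, beta: float, T_min: float = 1e-8):
    """Yield T0, T0 - beta, T0 - 2*beta, ... until below T_min."""
    T = T0
    while T > T_min:
        yield T
        T -= beta
```

Geometric cooling never reaches T = 0 exactly, which is why a stopping temperature T_min is needed in practice.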

In addition, for a given temperature, multiple evaluations of the objective function are needed. If there are too few evaluations, there is a danger that the system will not stabilize and subsequently will not converge to the global optimum. If there are too many evaluations, the run becomes time-consuming and the system will usually converge too slowly.

There are two major ways to set the number of iterations: fixed or varied. The first uses a fixed number of iterations at each temperature. The second intends to increase the number of iterations at lower temperatures so that the local minima can be fully explored.
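
Putting the pieces together, a minimal SA loop with a fixed number of iterations per temperature might look like the following sketch (the objective, the one-dimensional Gaussian neighborhood move, and all parameter values are illustrative assumptions):

```python
import math
import random

def simulated_annealing(f, x0, T0=1.0, alpha=0.95, iters_per_T=100, T_min=1e-4):
    """Minimal SA for minimizing f over the reals, with geometric
    cooling and a fixed number of iterations at each temperature."""
    x, fx = x0, f(x0)
    best_x, best_f = x, fx
    T = T0
    while T > T_min:
        for _ in range(iters_per_T):            # fixed evaluations per temperature
            x_new = x + random.gauss(0.0, 1.0)  # illustrative neighborhood move
            f_new = f(x_new)
            delta = f_new - fx
            # accept improvements always, worsenings with p = exp(-delta/T)
            if delta <= 0 or random.random() < math.exp(-delta / T):
                x, fx = x_new, f_new
                if fx < best_f:
                    best_x, best_f = x, fx
        T *= alpha                               # geometric cooling step
    return best_x, best_f

# Example: minimize a simple quadratic; the result should be near x = 3
print(simulated_annealing(lambda x: (x - 3.0) ** 2, x0=0.0))
```

The varied scheme mentioned above would replace the fixed iters_per_T with a count that grows as T decreases, for example int(iters_per_T * T0 / T), so that lower temperatures are explored more thoroughly.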

4.3 SA Algorithm