
Topic Outline

Black-Box Optimization. The optimization algorithm is only allowed to evaluate f (direct search): it submits a decision vector x to the objective function (e.g. a simulation model) and receives back the objective vector f(x).

So what is the general problem? The multiobjective optimization problem (MOP) can be defined as the problem of finding [Osyczka 1985] a vector of decision variables which satisfies constraints and optimizes a vector function whose elements represent the objective functions. Hence, the term "optimize" means finding a solution that gives values of all the objective functions acceptable to the designer.
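Written out, a sketch of the standard formulation (assuming k objective functions, m inequality constraints, and p equality constraints):

```latex
\min_{\mathbf{x}\in\Omega}\; \mathbf{F}(\mathbf{x}) = \bigl(f_1(\mathbf{x}), f_2(\mathbf{x}), \ldots, f_k(\mathbf{x})\bigr)
\quad\text{subject to}\quad
g_j(\mathbf{x}) \le 0,\; j = 1,\ldots,m, \qquad
h_l(\mathbf{x}) = 0,\; l = 1,\ldots,p.
```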

What does optimum mean here? Having several objective functions implies that we are trying to find a good compromise rather than a single optimal solution. Francis Ysidro Edgeworth first proposed a meaning for "optimum" in 1881, which was generalized in 1896 by Vilfredo Pareto.

Mapping from decision space to objective space, assuming maximization (figure).

Dominance

Pareto Optimal Fronts

Issues in EMO: convergence and diversity.
How to maintain a diverse Pareto set approximation? Density estimation.
How to prevent nondominated solutions from being lost? Environmental selection.
How to guide the population towards the Pareto set? Fitness assignment.

General Diversity Preservation. Various techniques are available for maintaining diversity in a MOEA, such as: the weight vector approach, fitness sharing/niching, crowding/clustering, restricted mating, and relaxed dominance.

A graphical illustration of fitness sharing

Density estimation.
Kernel approach: the density estimator is based on the sum of F values, where F is a function of the distance (vector), measured either in genotypic or in phenotypic space (e.g., MOGA and the NPGA).
Nearest neighbor approach: the density estimator is based on the volume of the hyper-rectangle defined by the nearest neighbors (e.g., NSGA-II and SPEA2).
Histogram approach: the density estimator is based on the number of solutions that lie within the same hyper-box (e.g., PAES and PESA).
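As an illustration of the nearest-neighbor idea, here is a minimal sketch of an NSGA-II-style crowding-distance estimator; it assumes a front is given as a list of objective vectors, and the function and variable names are illustrative rather than taken from any particular library:

```python
def crowding_distance(front):
    """Estimate how crowded each solution on one front is (larger = less crowded).

    front: list of objective vectors (lists/tuples of equal length).
    Returns a list of crowding distances aligned with `front`.
    """
    n = len(front)
    if n == 0:
        return []
    k = len(front[0])
    dist = [0.0] * n
    for m in range(k):                      # treat each objective separately
        order = sorted(range(n), key=lambda i: front[i][m])
        f_min, f_max = front[order[0]][m], front[order[-1]][m]
        dist[order[0]] = dist[order[-1]] = float('inf')   # boundary solutions are always kept
        if f_max == f_min:
            continue
        for j in range(1, n - 1):           # interior solutions: normalized gap between neighbors
            i = order[j]
            dist[i] += (front[order[j + 1]][m] - front[order[j - 1]][m]) / (f_max - f_min)
    return dist
```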

Classifying EMOO (Evolutionary Multi-Objective Optimization) approaches.
First-generation techniques: non-Pareto approaches and Pareto approaches.
Second-generation techniques: PAES, SPEA, NSGA-II, MOMGA, micro-GA.

Non-Pareto Techniques. These are methods that do not use information about Pareto fronts explicitly. They can be incapable of producing certain portions of the Pareto front. They are efficient and easy to implement, but appropriate for handling only a few objectives.

Aggregate Objective Model (weighted sum method). Aggregated fitness functions are basically just a weighted sum of the objective functions; this is what we did in landscape smoothing. The weighted sum creates a single objective function from the multi-objective fitness function. Determining the weights to use in this sum is nontrivial and is almost always problem dependent.

Aggregate Function. The weighted sum basically has the following form: F(x) = w_1 f_1(x) + w_2 f_2(x) + ... + w_k f_k(x), where the w_i represent the weights; often we assume w_i >= 0 and w_1 + ... + w_k = 1.

Weighted Sum Method. Construct a weighted sum of the objectives and optimize it; the user supplies the weight vector w.
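A minimal sketch of the weighted-sum scalarization (assuming minimization; the function name is illustrative):

```python
def weighted_sum(objectives, weights):
    """Collapse a vector of objective values into one scalar using weights w_i >= 0, sum w_i = 1."""
    assert len(objectives) == len(weights)
    return sum(w * f for w, f in zip(weights, objectives))

# Example: two objectives with equal weights
# f(x) = (f1(x), f2(x)) = (3.0, 1.5)  ->  scalar value 2.25
print(weighted_sum([3.0, 1.5], [0.5, 0.5]))
```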

Disadvantages: the weight vector w must be known in advance; the Pareto-optimal solutions obtained may be non-uniformly distributed; and some Pareto-optimal solutions cannot be found at all.

Vector Evaluated Genetic Algorithm (VEGA). This work was performed by J. D. Schaffer in 1985 and can be found in the paper Schaffer, J.D., "Multiple Objective Optimization with Vector Evaluated Genetic Algorithms". In this method, appropriate fractions of the next generation, or subpopulations, were selected from the whole of the old generation according to each of the objectives separately. Crossover and mutation were applied as usual after combining the subpopulations.

Schematic of VEGA selection (figure): step 1, select n subgroups using each objective in turn; step 2, shuffle; step 3, apply the genetic operators, turning the parents of generation t into generation t+1.
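A minimal sketch of the VEGA selection step, under the assumptions that `evaluate(ind)` returns an individual's objective vector, all objectives are minimized, and per-objective proportional selection is simplified to truncation selection; all names are illustrative:

```python
import random

def vega_select(population, evaluate, num_objectives):
    """Select parents: one subgroup per objective, each chosen by that objective alone."""
    pop_size = len(population)
    group = pop_size // num_objectives
    parents = []
    for m in range(num_objectives):
        # proportional-style selection simplified here to truncation on objective m
        ranked = sorted(population, key=lambda ind: evaluate(ind)[m])
        parents.extend(ranked[:group])
    # top up if pop_size is not divisible by the number of objectives
    while len(parents) < pop_size:
        parents.append(random.choice(population))
    random.shuffle(parents)          # step 2: shuffle before applying the genetic operators
    return parents
```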

Advantages and Disadvantages. VEGA is efficient and easy to implement, but it does not have an explicit mechanism to maintain diversity, and it doesn't necessarily produce non-dominated vectors.

Lexicographic Ordering. Here the user is asked to rank the objectives in order of importance. The optimal solution is then obtained by minimizing the objective functions, starting with the most important one and proceeding according to the assigned order.
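For a finite set of candidate solutions, a minimal sketch of lexicographic optimization (assuming minimization and that `objective_fns` is already ordered from most to least important; the names are illustrative):

```python
def lexicographic_best(candidates, objective_fns, tol=1e-9):
    """Keep candidates optimal in objective 1, break ties with objective 2, and so on."""
    survivors = list(candidates)
    for f in objective_fns:              # objectives in decreasing order of importance
        best = min(f(x) for x in survivors)
        survivors = [x for x in survivors if f(x) <= best + tol]
    return survivors
```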

Target Vector Approaches. Define a set of goals (or targets) that we wish to achieve for each objective function. The EA is then set up to minimize the differences between the current solution and these goals. These can also be considered aggregating approaches, but in this case concave portions of the Pareto front can be obtained.

Advantages and Disadvantages. Target vector approaches are efficient and easy to implement, but the definition of goals may be difficult in some cases, and some methods have been known to introduce misleading selection pressure under certain circumstances. The goals must lie in the feasible region so that the solutions generated are members of the Pareto optimal set.

Pareto-based Techniques. Suggested by Goldberg (1989) to solve the problems with Schaffer's VEGA. They use non-dominated ranking and selection to move the population towards the Pareto front. This requires a ranking procedure and a technique to maintain diversity in the population (otherwise, the GA will tend to converge to a single solution).

Dominance-Based Ranking. Types of information:
dominance rank: by how many individuals is an individual dominated (+1)?
dominance count: how many individuals does an individual dominate?
dominance depth: at which front is an individual located?
Examples: MOGA and NPGA use dominance rank; NSGA/NSGA-II use dominance depth; SPEA/SPEA2 use dominance count + rank.

Dominance rank with grouping of equal ranks for sorting.

Dominance count with grouping of equal counts for sorting.

MOGA, NPGA: dominance rank. NSGA/NSGA-II: dominance depth. SPEA/SPEA2: dominance count and dominance rank. MOMGA/MOMGA-II: dominance rank.

Multi-Objective Genetic Algorithm (MOGA). Proposed by Fonseca and Fleming (1993); see "Genetic Algorithms for Multiobjective Optimization: Formulation, Discussion and Generalization". This approach consists of a scheme in which the rank of an individual corresponds to the number of individuals in the current population by which it is dominated. It uses fitness sharing and mating restrictions.

MOGA Ranking. A vector X = (u_1, u_2, ..., u_n) is superior to (dominates) another vector Y = (v_1, v_2, ..., v_n) if u_i <= v_i for every i = 1, ..., n and there exists some i such that u_i < v_i. If X is superior to Y, then Y is inferior to X. Let x be an individual in population t; then rank(x, t) = 1 + p(x), where p(x) is the number of individuals in population t to which x is inferior. Note that if x is a Pareto point then it is inferior to no one, hence its rank is 1.
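A minimal sketch of the dominance test and the MOGA rank computation defined above (assuming minimization, with objective vectors given as tuples; names are illustrative):

```python
def dominates(u, v):
    """u dominates v (minimization): u <= v in every objective and < in at least one."""
    return all(ui <= vi for ui, vi in zip(u, v)) and any(ui < vi for ui, vi in zip(u, v))

def moga_ranks(objs):
    """rank(x) = 1 + number of population members that dominate x (Pareto points get rank 1)."""
    return [1 + sum(dominates(other, x) for other in objs) for x in objs]
```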

MOGA Ranking: assigning fitness according to rank. Sort the population according to rank (note that some rank values may not be represented). Assign fitnesses to individuals by interpolation from the best (rank 1) to the worst in the usual way, according to some function, usually linear. Average the fitnesses of individuals with the same rank, so that all of them will be sampled at the same rate. Note that this procedure keeps the global population fitness constant while maintaining appropriate selective pressure, as defined by the function used.

Ranking example. Suppose that we have 10 individuals in the population with ranks 1, 2, 3, 1, 1, 2, 5, 3, 2, 5. Since the ranks present are 1, 2, 3, and 5, we can create a roulette wheel by assigning a fitness to each rank. Sort the ranks, obtaining 1, 1, 1, 2, 2, 2, 3, 3, 5, 5. Map each rank to a fitness via a function, say f(x) = 6 - x, giving fitnesses 5, 5, 5, 4, 4, 4, 3, 3, 1, 1. The pie is then broken into 35 slices, with the first three individuals getting 5 slices each, the next three getting 4 each, and so on.
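The same example, worked as a short script (using the f(x) = 6 - x mapping from the slide):

```python
ranks = [1, 2, 3, 1, 1, 2, 5, 3, 2, 5]
fitness = [6 - r for r in sorted(ranks)]      # sort the ranks, then map each rank to a fitness
print(sorted(ranks))   # [1, 1, 1, 2, 2, 2, 3, 3, 5, 5]
print(fitness)         # [5, 5, 5, 4, 4, 4, 3, 3, 1, 1]
print(sum(fitness))    # 35 slices in the roulette wheel
```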

Advantages and Disadvantages. MOGA is efficient and relatively easy to implement, but its performance depends on the appropriate selection of the sharing factor. MOGA was the most popular first-generation MOEA and it normally outperformed all of its contemporary competitors.

Niched-Pareto Genetic Algorithm (NPGA). Proposed by Horn et al. (1993, 1994). It uses a tournament selection scheme based on Pareto dominance: two randomly chosen individuals are compared against a subset of the entire population (typically around 10%). When both competitors are either dominated or non-dominated (i.e., a tie), the result of the tournament is decided through fitness sharing in the objective domain.
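A minimal sketch of the Pareto-domination tournament, reusing the `dominates` helper sketched earlier; note that the fitness-sharing tie-break is simplified here to a niche count within a fixed radius `sigma_share`, which is an assumption rather than the exact scheme of the original paper:

```python
import random

def npga_tournament(population, objs, comparison_size=None, sigma_share=0.5):
    """Pick two random candidates; compare each against a random comparison set (~10% of the population)."""
    n = len(population)
    if comparison_size is None:
        comparison_size = max(1, n // 10)
    i, j = random.sample(range(n), 2)
    comparison = random.sample(range(n), comparison_size)
    i_dominated = any(dominates(objs[k], objs[i]) for k in comparison)
    j_dominated = any(dominates(objs[k], objs[j]) for k in comparison)
    if i_dominated and not j_dominated:
        return population[j]
    if j_dominated and not i_dominated:
        return population[i]
    # tie: both dominated or both non-dominated -> prefer the less crowded one
    def niche_count(x):
        return sum(1 for y in objs
                   if sum((a - b) ** 2 for a, b in zip(objs[x], y)) ** 0.5 < sigma_share)
    return population[i] if niche_count(i) <= niche_count(j) else population[j]
```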

Niched Pareto GA (NPGA)

Advantages and Disadvantages. NPGA is easy to implement and efficient, because it does not apply Pareto ranking to the entire population, and it seems to have a good overall performance. However, besides requiring a sharing factor, it requires another parameter (the tournament size).

Non-dominated Sorting Genetic Algorithm (NSGA). Proposed by Srinivas and Deb (1994). It uses classification layers: layer 1 is the set of non-dominated individuals; layer 2 is the set of individuals that are non-dominated once layer 1 is removed; and so on. Sharing is performed within each layer using dummy fitnesses for that layer. Sharing spreads out the search over each classification layer, and the high fitness of the upper layers implies that the Pareto front is heavily searched.
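A minimal sketch of the layer-peeling classification (a straightforward O(kM^2)-per-layer version, again reusing the `dominates` helper; this covers only the sorting step, not the sharing step):

```python
def nondominated_sort(objs):
    """Split the indices of `objs` into classification layers (fronts)."""
    remaining = set(range(len(objs)))
    layers = []
    while remaining:
        # a solution belongs to the current layer if nothing still remaining dominates it
        front = [i for i in remaining
                 if not any(dominates(objs[j], objs[i]) for j in remaining if j != i)]
        layers.append(front)
        remaining -= set(front)
    return layers
```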

NSGA

Research questions at this time were: Are aggregating functions really doomed to fail when the Pareto front is non-convex? Can we find ways to maintain diversity in the population without using niches, which require O(M^2) work, where M is the population size? If we assume that there is no way to reduce the O(kM^2) complexity required to perform Pareto ranking, how can we design a more efficient MOEA? Do we have appropriate test functions and metrics to evaluate an MOEA quantitatively? Will somebody develop theoretical foundations for MOEAs?

Generation 2 (Elitism). A new generation of algorithms came about with the introduction of the notion of elitism. Elitism (in this context) refers to the use of an external population (archive) to retain the non-dominated individuals. Design issues include: How does the external file interact with the main population? What do we do when the external file is full? Do we impose additional criteria for entering the file instead of just using Pareto dominance?

Archiving + elitism of chromosome population

Multiobjective archiving + elitism (figure): the population is evaluated; the archive is updated with new non-dominated solutions and truncated when it becomes too large; parents are sampled and varied to produce the new population and the new archive.
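A minimal sketch of the archive-update cycle suggested by the figure, reusing the `dominates` and `crowding_distance` helpers sketched earlier and assuming the archive is truncated by crowding when it exceeds `max_size` (the truncation rule is an illustrative choice, not the one used by any specific algorithm):

```python
def update_archive(archive, new_points, max_size):
    """Add new objective vectors, drop dominated ones, then truncate the most crowded entries."""
    merged = archive + new_points
    nondom = [p for p in merged
              if not any(dominates(q, p) for q in merged if q != p)]
    # remove exact duplicates while preserving order
    seen, unique = set(), []
    for p in nondom:
        key = tuple(p)
        if key not in seen:
            seen.add(key)
            unique.append(p)
    if len(unique) <= max_size:
        return unique
    dist = crowding_distance(unique)                 # keep the least crowded entries
    order = sorted(range(len(unique)), key=lambda i: dist[i], reverse=True)
    return [unique[i] for i in order[:max_size]]
```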

Second-generation algorithms include: the Strength Pareto Evolutionary Algorithm (SPEA), Zitzler and Thiele (1999); the Strength Pareto Evolutionary Algorithm 2 (SPEA2), Zitzler, Laumanns and Thiele (2001); the Pareto Archived Evolution Strategy (PAES), Knowles and Corne (2000); the Nondominated Sorting Genetic Algorithm II (NSGA-II), Deb et al. (2002); and the Niched Pareto Genetic Algorithm 2 (NPGA 2), Erickson et al. (2001).

A quick look at the Pareto Archived Evolution Strategy (PAES). (1+1)-PAES is made up of 3 parts: the candidate solution generator, which is basically simple random-mutation hillclimbing (it maintains a single current solution and, at each iteration, produces a single new candidate via random mutation); the candidate solution acceptance function; and the Nondominated-Solutions (NDS) archive.

PAES (1+1) pseudocode:
1. Generate an initial random solution c and add it to the archive.
2. Mutate c to produce m, and evaluate m.
3. If (c dominates m), discard m;
   else if (m dominates c), replace c with m, and add m to the archive;
   else if (m is dominated by any member of the archive), discard m;
   else apply test(c, m, archive) to determine which becomes the new current solution and whether to add m to the archive.
4. Until a termination criterion has been reached, return to line 2.

Test(c, m, archive):
if the archive is not full:
    add m to the archive
    if (m is in a less crowded region of the archive than c), accept m as the new current solution;
    else maintain c as the current solution
else:
    if (m is in a less crowded region of the archive than x, for some member x of the archive):
        add m to the archive, and remove a member of the archive from the most crowded region
    if (m is in a less crowded region of the archive than c), accept m as the new current solution;
    else maintain c as the current solution

The Adaptive Grid. PAES uses a new crowding procedure based on recursively dividing up the d-dimensional objective space. This is done to minimize cost and to avoid setting a niche-size parameter. Phenotype space is divided into hypercubes, which have a width of d_r / 2^k in each dimension, where d_r is the range (maximum minus minimum) of values in objective d of the solutions currently in the archive, and k is the subdivision parameter.

Example grid for d = 2 objectives. If we use 5 levels with 2 objectives we basically have a quad-tree structure: each level has 4 times the number of cells of the previous level (1, 4, 16, 64, 256, 1024). Hence we have 1024 regions of size (max - min)/2^5 in each dimension. For the simple case of k = 3, the figure shows the grid location of the indicated cell, both as an integer and in binary.

So how do we find the grid location of X? Recursively (for each dimension) go down the tree, left (0) or right (1), creating a binary number; this requires k comparisons per dimension. Then concatenate the binary strings, creating a single binary number. Note that the grid location in the previous 1024-cell example is just a 10-bit string. Converting this 10-bit string to an integer gives an index into an array Count[1024] that can be used to store the crowding number.
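A minimal sketch of this grid-location computation, assuming `lo[d]` and `hi[d]` hold the current minimum and maximum archive values in objective d and k is the subdivision parameter (names are illustrative):

```python
def grid_location(point, lo, hi, k):
    """Return an integer grid index by recursive bisection, k bits per objective dimension."""
    bits = []
    for d, value in enumerate(point):
        left, right = lo[d], hi[d]
        for _ in range(k):               # k comparisons per dimension
            mid = (left + right) / 2.0
            if value <= mid:             # go left -> bit 0
                bits.append(0)
                right = mid
            else:                        # go right -> bit 1
                bits.append(1)
                left = mid
    index = 0
    for b in bits:                       # concatenate the per-dimension bit strings into one integer
        index = (index << 1) | b
    return index                         # e.g. an index into a Count[2 ** (k * len(point))] array
```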