1
Techniques for Improving Online and Offline History-assisted Evolutionary Algorithms
Yang Lou, Centre for Chaos and Complex Networks, Department of Electronic Engineering, City University of Hong Kong
2
Contents
Introduction
Online: Non-revisiting Genetic Algorithm (cNrGA) with Constant Memory
Offline & Online: Sequential Learnable Evolutionary Algorithm (SLEA); Hierarchical Fitness Based Evolving Benchmark Generator; A Modified SLEA
Conclusions
3
Introduction
Optimization is of great practical importance:
finding the best investment to obtain the maximum benefit; finding the shortest route to deliver all the mail; Shinkansen N700 Series: designing the best nose shape (and other parts) that satisfies a number of criteria. [Figures from Internet]
4
Introduction
Global optimization aims at finding the optimal solution(s) among all feasible solutions, where optimal = {minimal; maximal; extremal, as defined by the criteria}.
[Figures: a randomly generated Gaussian landscape; the landscape of the Rastrigin function]
5
Introduction
In practice, the landscape is never known. Evolutionary Computation (EC) requires neither the mathematical form of the problem nor differentiability, smoothness, or continuity.
6
Introduction – Online Search History
The online search history {(x1, f1), (x2, f2), ..., (xn, fn)} can be stored in a (tree-structured) database and used to avoid revisiting and to drive adaptive operators.
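As a minimal illustration of the history-database idea (an exact-match memoization cache, not the BSP-tree database that NrGA actually uses for continuous sub-regions), every evaluated pair (x, f) can be archived so a revisited solution costs no extra function evaluation:

```python
# Minimal sketch: archive the online search history and serve exact revisits
# from memory (the real NrGA uses a tree-structured database over sub-regions).
history = {}  # maps a solution (as a tuple) to its fitness value

def evaluate(x, objective):
    """Return the fitness of x, reusing the archived value if x was seen before."""
    key = tuple(round(v, 12) for v in x)   # hashable key for a real-valued vector
    if key in history:
        return history[key]                # revisit: no extra function evaluation
    f = objective(x)
    history[key] = f
    return f
```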
7
Introduction – Offline Search History
8
Contents
Introduction
Online: Non-revisiting Genetic Algorithm (cNrGA) with Constant Memory
Offline & Online: Sequential Learnable Evolutionary Algorithm (SLEA); Hierarchical Fitness Based Evolving Benchmark Generator; A Modified SLEA
Conclusions
9
Non-revisiting (Nr) Stochastic Search
[Diagram: stochastic search methods (GA, DE, PSO, ES, EP, etc.) produce newly generated solutions; a non-revisit scheme backed by a BSP tree, KD tree, etc. filters them so that only non-revisited solutions are evaluated.]
10
Nr Genetic Algorithm (NrGA)
11
An Example of NrGA in 2D Space
[Figure: the Binary Space Partitioning (BSP) tree and the corresponding partitioning of the search space, showing the insertion of nodes (solutions) a and b, followed by node (solution) c.]
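The sketch below shows how a BSP tree can partition a 2D search space as solutions are inserted. The split rule (midpoint along the dimension where the two solutions differ most) is an assumption for illustration; the cNrGA implementation may choose the cut differently.

```python
class Node:
    """One BSP-tree node; a leaf stores one solution and covers one sub-region."""
    def __init__(self, lower, upper, solution=None):
        self.lower, self.upper = list(lower), list(upper)  # sub-region bounds
        self.solution = solution       # solution stored in this leaf (if any)
        self.dim = self.cut = None     # split dimension and cut position
        self.left = self.right = None

    def insert(self, x):
        x = list(x)
        if self.left is not None:                  # internal node: descend
            child = self.left if x[self.dim] <= self.cut else self.right
            child.insert(x)
            return
        if self.solution is None:                  # empty leaf: just store x
            self.solution = x
            return
        old = self.solution
        if x == old:                               # exact revisit: nothing to add
            return
        # Split along the dimension where the two solutions differ the most.
        d = max(range(len(x)), key=lambda i: abs(x[i] - old[i]))
        cut = 0.5 * (x[d] + old[d])
        left_upper = self.upper[:];  left_upper[d] = cut
        right_lower = self.lower[:]; right_lower[d] = cut
        self.left = Node(self.lower, left_upper)
        self.right = Node(right_lower, self.upper)
        self.dim, self.cut, self.solution = d, cut, None
        for s in (old, x):
            (self.left if s[d] <= cut else self.right).insert(s)

# Insertion of three solutions (playing the roles of a, b, c in the figure):
root = Node(lower=[0.0, 0.0], upper=[1.0, 1.0])
for point in ([0.2, 0.7], [0.8, 0.3], [0.6, 0.9]):
    root.insert(point)
```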
12
NrGA Summary
To Keep: use of the historical information and the parameter-less adaptive mutation.
To Overcome: memory usage grows with the number of function evaluations [Memory Usage ∝ (Max)FEs]; conventionally, MaxFEs = 40, [Tradeoff: Memory Usage vs. Performance]
Objective: record less information; record only useful information; or record everything but prune what is useless.
13
NrGA Pruning Methods
Least Recently Used (LRU) Pruning: time stamps are attached to nodes when they are created; recently used nodes mark the regions of the search space that the GA currently suggests searching, whereas LRU nodes indicate regions that have rarely been searched recently (non-promising regions), so they are pruned first.
Random (R) Pruning: randomly prune some tree nodes by selecting the left or right child uniformly at each level (probability 0.5 at the first level, 0.25 at the second, and so on), so deeper nodes have a smaller chance of being pruned.
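A rough sketch of the two pruning rules, reusing the Node structure from the earlier BSP example; the leaf bookkeeping (a `.timestamp` attribute and a list of current leaves) is assumed for illustration and may differ from the cNrGA code.

```python
import random

def random_prune_target(root):
    """Random (R) pruning: walk down from the root, choosing the left or right child
    uniformly at each level, and return the leaf reached. A leaf at depth k is chosen
    with probability 1/2**k (0.5, 0.25, ...), so deeper nodes are pruned less often."""
    node = root
    while node.left is not None:
        node = random.choice((node.left, node.right))
    return node            # the caller removes this record and merges its sub-region

def lru_prune_target(leaves):
    """LRU pruning: return the leaf whose sub-region was least recently visited,
    assuming each leaf carries a .timestamp set when it was created or last touched."""
    return min(leaves, key=lambda leaf: leaf.timestamp)
```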
14
cNrGA Insert & Prune
[Figure: the BSP tree and the partitioned search space (sub-regions z1–z7). Warning: memory usage has reached its maximum size, so a node must be pruned before the new solution can be inserted.]
15
cNrGA Insert & Prune
[Figure: node C and its sub-region z3 form the least recently used (LRU) part of the search space, so node C is selected for pruning.]
16
cNrGA Insert & Prune
[Figure: the BSP tree and the search space after node C is pruned and its sub-region is merged back.]
17
cNrGA Insert & Prune
[Figure: after pruning, the BSP tree is ready to insert the new solution.]
18
cNrGA Insert & Prune
[Figure: the new node H is inserted into the BSP tree, the new solution z8 is allocated its own sub-region, and the search continues.]
19
Experimental Results
Problems: CEC 2013 Benchmark Test Suite (28 problems).
Information loss:
Max FEs:    2×10^4  3×10^4  4×10^4  5×10^4  6×10^4  7×10^4  8×10^4  9×10^4  1×10^5
Max Memory: 1×10^4
Info Loss:   0.50    0.67    0.75    0.80    0.83    0.86    0.88    0.89    0.90
Total trials: 28 problems × 9 different MaxFEs = 252 trials.
Average rank: cNrGA 1.92, cNrGA/CM/LRU 1.98, cNrGA/CM/R 2.10.
U test: 244 trials show no significant difference; in the remaining 8 the difference is not one-sided.
J. J. Liang, et al., "Problem Definitions and Evaluation Criteria for the CEC 2013 Special Session and Competition on Real-Parameter Optimization," Technical Report, Zhengzhou University, Nanyang Technological University, (2013).
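The loss figures above are consistent with capping the archive at 10^4 stored records; the relation below is an inference from the table, not a formula stated on the slide:

```latex
\text{Info Loss} \;=\; 1-\frac{\text{Max Memory}}{\text{Max FEs}},\qquad
\text{e.g.}\quad 1-\frac{10^{4}}{2\times 10^{4}} = 0.50,\qquad
1-\frac{10^{4}}{10^{5}} = 0.90 .
```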
20
Contents
Introduction
Online: Non-revisiting Genetic Algorithm (cNrGA) with Constant Memory
Offline & Online: Sequential Learnable Evolutionary Algorithm (SLEA); Hierarchical Fitness Based Evolving Benchmark Generator; A Modified SLEA
Conclusions
21
SLEA
Which evolutionary algorithm should I choose? GA, DE, ES, EP, PSO, CMA-ES, JADE, CoDE, SaDE, NrGA, SPSO, CLPSO, ...
[Diagram: the Sequential Learnable Evolutionary Algorithm performs algorithm selection using an offline search history (knowledge base).]
22
SLEA
Target: black-box continuous design optimization problems, where only one optimized design is needed (e.g., only one nose shape), as opposed to repetitive optimization problems, where stable performance is required. Strategy: pick the best of a sequence of solutions.
A.E. Eiben and J.E. Smith, Introduction to Evolutionary Computing, 2nd ed. Springer-Verlag Berlin Heidelberg, (2015).
23
SLEA
Rice's algorithm selection framework:
[Diagram: problems in the problem space are mapped by feature extraction into the feature space; an algorithm from the algorithm space is applied and its performance is measured in the performance space; the goal is the mapping that maximizes the performance measure. Example features: dimensionality, number of local optima, sizes of the areas of attraction, separability, symmetry, and so on.]
J.R. Rice, "The algorithm selection problem," Advances in Computers, vol. 15, pp. 65–118, (1976).
24
SLEA
Algorithm-problem feature: the entire convergence history. The knowledge base is trained offline; feature matching and algorithm selection are performed online.
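A minimal sketch of the online phase under simple assumptions: the algorithm-problem feature is taken to be a fixed-length, normalized best-so-far fitness curve, the offline knowledge base maps each training problem's feature to its best algorithm, and similarity is measured by Euclidean distance. The actual SLEA feature and similarity measure may differ; the knowledge-base entries below are placeholders.

```python
import numpy as np

# Hypothetical knowledge base: one (feature, best algorithm) record per training
# problem; each feature is a best-so-far fitness curve sampled at 20 checkpoints.
knowledge_base = [
    (np.linspace(1.0, 0.0, 20) ** 2, "CMA-ES"),   # placeholder convergence curves
    (np.linspace(1.0, 0.1, 20),      "CoDE"),
]

def normalize(curve):
    """Scale a best-so-far fitness history into [0, 1] so problems are comparable."""
    c = np.asarray(curve, dtype=float)
    span = c.max() - c.min()
    return (c - c.min()) / span if span > 0 else np.zeros_like(c)

def select_algorithm(observed_history):
    """Match the online convergence history (20 checkpoints here) against the
    knowledge base and return the algorithm that was best on the closest problem."""
    feature = normalize(observed_history)
    distances = [np.linalg.norm(feature - normalize(f)) for f, _ in knowledge_base]
    return knowledge_base[int(np.argmin(distances))][1]
```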
25
Experimental Settings
Algorithms before training: A_init = {A1, ..., A9} = {ABC, CLPSO, CMA-ES, CoDE, JADE, jDE, SPSO2011, RGA, SaDE}
Algorithms after training: A_final = {A1, A3, A4, A5, A6, A7, A9} = {ABC, CMA-ES, CoDE, JADE, jDE, SaDE, SPSO2011}
Training problems: CEC 2013 Benchmark Test Suite
Testing problems: CEC 2013 & CEC 2011 Benchmark Test Suites
J. J. Liang, et al., "Problem definitions and evaluation criteria for the CEC 2013 special session and competition on real-parameter optimization," Technical Report, Zhengzhou University, Nanyang Technological University, (2013).
S. Das and P.N. Suganthan, "Problem definitions and evaluation criteria for CEC 2011 competition on testing evolutionary algorithms on real world optimization problems," Technical Report, Kolkata, India; Singapore, (2011).
26
Experimental Results
Comparison of average rank:
                  CEC 2013   CEC 2011
multi-ABC           6.95       2.21
multi-CLPSO         9.54       4.93
multi-CMA-ES        5.14       3.48
multi-CoDE          4.02       6.74
multi-JADE          3.86       6.45
multi-jDE           5.50       5.64
multi-SPSO2011      7.30       5.93
multi-RGA           4.09       7.31
multi-SaDE          6.04       9.86
SLEA                2.57       2.45
SLEA identifying the problem correctly on CEC 2013: %. SLEA finding the best EA(s) on CEC 2011: %.
27
Contents
Introduction
Online: Non-revisiting Genetic Algorithm (cNrGA) with Constant Memory
Offline & Online: Sequential Learnable Evolutionary Algorithm (SLEA); Hierarchical Fitness Based Evolving Benchmark Generator; A Modified SLEA
Conclusions
28
Hierarchical Fitness based Evolving Benchmark Generator (HFEBG)
Purpose of HFEBG: to enrich the training data for SLEA.
Hierarchical fitness assignment: a problem instance either (i) does not meet any criterion, (ii) meets part of the criteria, or (iii) is satisfactory.
Mann–Whitney U test: provides a statistical guarantee on the performance comparison between algorithms.
Note: to the best of our knowledge, this is the first time the hierarchical-fitness and U-test approaches are employed for generating benchmark problems.
29
HFEBG – An Example
Three EAs: S_A = {A1, A2, A3}. Target EA: A_T (e.g., A1). B_A = S_A \ {A_T} = {A2, A3}, denoted B_A = {B1, B2}.
Objective: to find a problem instance that is uniquely easy for the target EA A_T.
Hierarchical fitness levels: A_T outperforms no EA in B_A; A_T outperforms some but not all EAs in B_A; A_T outperforms all EAs in B_A.
U test: when algorithm X appears to outperform algorithm Y, is the difference statistically significant?
30
HFEBG – An Example: An Easy Approximation
Take the best result of B_A = {B1, B2} at each run (the black line in the figure) and denote it as a metaheuristic A_M. Comparing A_T against {B1, B2} is then approximated by the single comparison A_T vs. A_M.
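A sketch of how one candidate problem instance could be rated, assuming each EA is run repeatedly on it, errors are minimized, and scipy's Mann-Whitney U test supplies the significance check. The A_M construction follows the "easy approximation" above; the exact fitness encoding inside HFEBG may differ.

```python
from scipy.stats import mannwhitneyu

def metaheuristic_am(others_results):
    """The 'easy approximation': per independent run, keep the best (smallest) result
    achieved by any EA in B_A, giving the result sequence of the virtual method A_M."""
    return [min(run_results) for run_results in zip(*others_results)]

def hierarchical_fitness(target_results, others_results, alpha=0.05):
    """Hierarchical rating of a candidate problem instance for the target EA A_T
    (errors are to be minimized): 2 if A_T significantly beats every EA in B_A,
    1 if it beats some but not all, 0 if it beats none."""
    wins = 0
    for other in others_results:
        _, p = mannwhitneyu(target_results, other, alternative="less")  # U test
        if p < alpha:
            wins += 1
    if wins == len(others_results):
        return 2
    return 1 if wins > 0 else 0
```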
31
Experimental Settings & Results
Input: a tunable benchmark generator (max-set of Gaussians, MSG); S_A = {ABC, CoDE, SPSO2011, NBIPOP-aCMAES}.
Objective: to find a uniquely easy problem instance for each EA.
Output (ranks of the four EAs, in the order ABC, CoDE, SPSO2011, NBIPOP-aCMAES, on each uniquely easy instance):
Ranks: UE-ABC: 1, 3, 2, 4; Avg. Rank: 2.00, 2.25, 2.75, 3.00.
Test ranks: UE-ABC: 1, 3.5, 2, …; UE-CoDE: 4, 3, …; Avg. Rank: 2.00, 2.625, 2.50, 2.88.
M. Gallagher and B. Yuan, "A general-purpose tunable landscape generator," IEEE Transactions on Evolutionary Computation, 10(5), pp. 590–603, (2006).
32
Contents
Introduction
Online: Non-revisiting Genetic Algorithm (cNrGA) with Constant Memory
Offline & Online: Sequential Learnable Evolutionary Algorithm (SLEA); Hierarchical Fitness Based Evolving Benchmark Generator; A Modified SLEA
Conclusions
33
SLEA
Collecting the algorithm-problem feature → identifying the most similar problem → suggesting the best algorithm.
34
The Modified SLEA – An Example
Algorithms: A = {A1, A2, A3}. Problems: P = {P1, P2, P3, P4, P5}.
Training: A1 ranks #1 on P1, P2, P3; A2 ranks #1 on P4, P5; A3 ranks #1 on Φ (no problem), so A_SLEA = {A1, A2}.
A problem instance P6 is generated using HFEBG such that A3 ranks #1 on P6, giving A'_SLEA = {A1, A2, A3}.
By HFEBG, P6 is guaranteed to be uniquely easy for A3, whereas P1–P5 have no such property.
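In code form, the effect of this example can be summarized as follows; it is only a sketch with the illustrative names from the slide, whereas the real knowledge base stores features and results rather than bare labels.

```python
# Rank-1 algorithm per training problem (hypothetical names from the example).
rank1 = {"P1": "A1", "P2": "A1", "P3": "A1", "P4": "A2", "P5": "A2"}

# Only algorithms that win at least one training problem enter the SLEA portfolio.
A_SLEA = sorted(set(rank1.values()))            # ['A1', 'A2'] -- A3 is left out

# HFEBG generates P6 to be uniquely easy for A3, which removes the training bias.
rank1["P6"] = "A3"
A_SLEA_modified = sorted(set(rank1.values()))   # ['A1', 'A2', 'A3']
```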
35
Experimental Settings & Results
Training problems: CEC 2013 (F2, F9, & F21) & 1 MSG instance. Testing problems: CEC 2013 (the remaining 25) & 12 other MSG instances.
Training (problem associated with each algorithm): ABC: f21; CMA-ES: f2; CoDE: f9; SPSO2011: N/A for SLEA, MSG-SPSO for the Modified SLEA.
Results (testing):
                                        SLEA    Modified SLEA
Avg. rank on CEC 2013 (28 problems)     1.56    1.44 (3+, 1−)
Avg. rank on 12 random MSG problems     1.63    1.38 (3+, 0−)
36
Conclusions
Online: the online search history is efficiently managed by the two proposed pruning strategies, the LRU (least recently used) pruning and the R (random) pruning.
Offline & Online: a sequential algorithm selection framework (SLEA) is proposed, which uses an algorithm-problem feature to perform algorithm selection. The knowledge base is trained offline, while the algorithm-problem feature comparison is performed online.
37
Conclusions
Offline & Online: a novel benchmark problem generator is designed for generating uniquely easy and uniquely difficult problem instances, such that each algorithm is able to find a problem instance that favours it. A modified SLEA is proposed whose knowledge base is trained in an unbiased way.
38
Thank you !