Scientific Research Group in Egypt (SRGE)


Swarm Intelligence (4): Artificial Bee Colony (ABC) Dr. Ahmed Fouad Ali, Suez Canal University, Dept. of Computer Science, Faculty of Computers and Informatics, Member of the Scientific Research Group in Egypt (SRGE)

Scientific Research Group in Egypt www.egyptscience.net

Meta-heuristic techniques

Outline
1. Artificial Bee Colony (ABC): history and main idea
2. Artificial Bee Colony (ABC) algorithm
3. ABC control parameters
4. Advantages and disadvantages
5. Example
6. References

Artificial Bee Colony (ABC) (History) The Artificial Bee Colony (ABC) algorithm was introduced by Dervis Karaboga in 2005, motivated by the intelligent foraging behavior of honey bees. Since 2005, D. Karaboga and his research group have studied the ABC algorithm and its applications to real-world problems.

Artificial Bee Colony (ABC) (Main idea) The ABC algorithm is a swarm-based meta-heuristic algorithm. It is based on the foraging behavior of honey bee colonies. The artificial bee colony contains three groups of bees: scouts, onlookers and employed bees.

Artificial Bee Colony (ABC) Algorithm The ABC algorithm generates a randomly distributed initial population of SN solutions (food source positions), where SN denotes the size of the population. Each solution xi (i = 1, 2, ..., SN) is a D-dimensional vector. After initialization, the population of positions (solutions) is subjected to repeated cycles, C = 1, 2, ..., MCN, of the search processes of the employed bees, the onlooker bees and the scout bees.
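The initialization step above can be sketched as follows (a minimal illustration; SN, D and the bounds are the values used in the worked example later in the slides):

```python
import random

SN = 3               # number of food sources (solutions)
D = 2                # problem dimension
LB, UB = -5.0, 5.0   # search-space bounds from the worked example

# Each food source x_i is a D-dimensional vector drawn uniformly
# at random from [LB, UB].
population = [[random.uniform(LB, UB) for _ in range(D)] for _ in range(SN)]
```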

Artificial Bee Colony (ABC) Algorithm An employed bee produces a modification of the position (solution) in her memory and checks the nectar amount (fitness value) of the new source (new solution). Provided that the nectar amount of the new one is higher than that of the previous one, the bee memorizes the new position and forgets the old one. After all employed bees complete the search process, they share the nectar and position information of the food sources with the onlooker bees in the dance area.

Artificial Bee Colony (ABC) Algorithm An onlooker bee evaluates the nectar information taken from all employed bees and chooses a food source with a probability related to its nectar amount. As in the case of the employed bee, she produces a modification of the position in her memory and checks the nectar amount of the candidate source. Provided that its nectar is higher than that of the previous one, the bee memorizes the new position and forgets the old one.

Artificial Bee Colony (ABC) Algorithm An artificial onlooker bee chooses a food source depending on the probability value pi associated with that food source: pi = fiti / Σn=1..SN fitn, where fiti is the fitness value of solution i and SN is the number of food sources, which is equal to the number of employed bees (BN).
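In code, the onlooker selection probabilities are a simple normalization of the fitness values (a sketch; the fitness vector used here is the one that appears after the employed-bee phase of the worked example):

```python
def selection_probabilities(fitnesses):
    # p_i = fit_i / sum_n fit_n: each onlooker picks food source i
    # with probability proportional to its fitness.
    total = sum(fitnesses)
    return [f / total for f in fitnesses]

# Fitness values from the worked example (after the employed-bee phase)
p = selection_probabilities([0.1045, 0.3047, 0.4828])
```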

Artificial Bee Colony (ABC) Algorithm In order to produce a candidate food position from the old one in memory, the ABC algorithm uses the expression vi,j = xi,j + φi,j (xi,j - xk,j), where k ∈ {1, 2, ..., SN} and j ∈ {1, 2, ..., D} are randomly chosen indexes. k is determined randomly but has to be different from i, and φi,j is a random number in the range [-1, 1].
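A sketch of this neighbor-search step (the bounds and the clamping of the candidate back into the search range are common implementation choices, not stated on the slide):

```python
import random

def produce_candidate(x, i, lb=-5.0, ub=5.0):
    # v_{i,j} = x_{i,j} + phi_{i,j} * (x_{i,j} - x_{k,j}),
    # with a random partner k != i and a single random dimension j.
    k = random.choice([n for n in range(len(x)) if n != i])
    j = random.randrange(len(x[i]))
    phi = random.uniform(-1.0, 1.0)
    v = list(x[i])
    v[j] = x[i][j] + phi * (x[i][j] - x[k][j])
    v[j] = min(max(v[j], lb), ub)  # clamp to the search range
    return v

# The initial population from the worked example
x = [[1.4112, -2.5644], [0.4756, 1.4338], [-0.1824, -1.0323]]
v = produce_candidate(x, 0)
```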

Artificial Bee Colony (ABC) Algorithm The food source whose nectar is abandoned by the bees is replaced with a new food source by the scouts. In ABC, if a position cannot be improved further within a predetermined number of cycles, called the "limit", then that food source is assumed to be abandoned.

Artificial Bee Colony(ABC ) Algorithm
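The flowchart on this slide did not survive the transcript. The overall loop it depicts can be sketched as follows (a minimal illustration, assuming the fitness transformation fit = 1/(1 + f) for non-negative objectives, as used in the worked example, and random restarts for abandoned sources):

```python
import random

def abc_minimize(f, D, lb, ub, SN=3, limit=6, MCN=50, seed=0):
    """Minimal sketch of the ABC loop: employed-bee, onlooker-bee
    and scout-bee phases in every cycle C = 1..MCN."""
    rng = random.Random(seed)
    X = [[rng.uniform(lb, ub) for _ in range(D)] for _ in range(SN)]
    fit = [1.0 / (1.0 + f(x)) for x in X]
    trial = [0] * SN
    b = max(range(SN), key=lambda i: fit[i])
    best_x, best_fit = list(X[b]), fit[b]

    def neighbour(i):
        # v_{i,j} = x_{i,j} + phi * (x_{i,j} - x_{k,j}), k != i
        k = rng.choice([n for n in range(SN) if n != i])
        j = rng.randrange(D)
        v = list(X[i])
        v[j] += rng.uniform(-1.0, 1.0) * (X[i][j] - X[k][j])
        v[j] = min(max(v[j], lb), ub)
        return v

    def greedy(i, v):
        nonlocal best_x, best_fit
        vfit = 1.0 / (1.0 + f(v))
        if vfit > fit[i]:            # better: memorize, reset counter
            X[i], fit[i], trial[i] = v, vfit, 0
        else:                        # worse: keep old, count the failure
            trial[i] += 1
        if fit[i] > best_fit:
            best_x, best_fit = list(X[i]), fit[i]

    for _ in range(MCN):
        for i in range(SN):                  # employed-bee phase
            greedy(i, neighbour(i))
        total = sum(fit)
        for _ in range(SN):                  # onlooker-bee phase
            r, acc, i = rng.uniform(0, total), 0.0, 0
            for n in range(SN):              # roulette-wheel pick by p_i
                acc += fit[n]
                if r <= acc:
                    i = n
                    break
            greedy(i, neighbour(i))
        for i in range(SN):                  # scout-bee phase
            if trial[i] > limit:             # abandoned source: random restart
                X[i] = [rng.uniform(lb, ub) for _ in range(D)]
                fit[i], trial[i] = 1.0 / (1.0 + f(X[i])), 0
    return best_x, best_fit

# The worked example's problem: minimize x1^2 + x2^2 on [-5, 5]^2
best_x, best_fit = abc_minimize(lambda x: x[0]**2 + x[1]**2, D=2, lb=-5, ub=5)
```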

ABC control parameters
- Swarm size
- Employed bees (50% of swarm)
- Onlookers (50% of swarm)
- Scouts (1)
- Limit
- Dimension

Advantages and disadvantages
Advantages:
- Few control parameters
- Fast convergence
- Both exploration and exploitation
Disadvantages:
- The search space is limited by the initial solutions (a normally distributed sample should be used in the initialization step)

Example Consider the following optimization problem: minimize f(x) = x1^2 + x2^2, subject to -5 ≤ x1, x2 ≤ 5. The control parameters of the ABC algorithm are set as: colony size CS = 6, limit for scout L = (CS*D)/2 = 6, dimension of the problem D = 2.

Example First, we initialize the positions of the 3 food sources (CS/2) of the employed bees randomly, using a uniform distribution in the range (-5, 5):
x = [  1.4112  -2.5644
       0.4756   1.4338
      -0.1824  -1.0323 ]
The f(x) values are: 8.5678, 2.2820, 1.0990

Example The initial fitness vector is: (0.1045, 0.3047, 0.4764)
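The fitness values in this example follow the standard ABC fitness transformation for non-negative objective values, fit_i = 1/(1 + f_i), which reproduces the vector exactly:

```python
# fit_i = 1 / (1 + f_i) applied to the three initial objective values
f_values = [8.5678, 2.2820, 1.0990]
fitness = [round(1.0 / (1.0 + f), 4) for f in f_values]
```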

Example The maximum fitness value is 0.4764, the quality of the best food source. Cycle = 1. Employed bees phase: the 1st employed bee produces a new solution with the neighbor-search formula, using k = 1 (a randomly selected index, different from i) and j = 0 (a randomly selected dimension).

Example Φ = 0.8050 Φ is randomly produced number in the range [-1, 1]. υ0= 2.1644 -2.5644 Calculate f(υ0) and the fitness of υ0. f(υ0) = 11.2610 and the fitness value is 0.0816. Apply greedy selection between x0 and υ0 0.0816 < 0.1045, the solution 0 couldn’t be improved, increase its trial counter.

Example The 2nd employed bee produces a new solution with the neighbor-search formula, using k = 2 (a randomly selected solution in the neighborhood of i) and j = 1 (a randomly selected dimension of the problem). Φ = 0.0762 is a randomly produced number in the range [-1, 1]. υ1 = (0.4756, 1.6217). Calculate f(υ1) and the fitness of υ1: f(υ1) = 2.8560 and the fitness value is 0.2593. Apply greedy selection between x1 and υ1: since 0.2593 < 0.3047, solution 1 could not be improved, so its trial counter is increased.

Example The 3rd employed bee produces a new solution with the neighbor-search formula, using k = 0 (a randomly selected solution in the neighborhood of i) and j = 0 (a randomly selected dimension of the problem). Φ = -0.0671 is a randomly produced number in the range [-1, 1]. υ2 = (-0.0754, -1.0323). Calculate f(υ2) and the fitness of υ2: f(υ2) = 1.0714 and the fitness value is 0.4828. Apply greedy selection between x2 and υ2: since 0.4828 > 0.4764, solution 2 was improved, so its trial counter is reset to 0 and the solution x2 is replaced with υ2.

Example
x = [  1.4112  -2.5644
       0.4756   1.4338
      -0.0754  -1.0323 ]
The f(x) values are: 8.5678, 2.2820, 1.0714
The fitness vector is: (0.1045, 0.3047, 0.4828)

Example Calculate the probability values pi for the solutions xi from their fitness values using the formula above: p = (0.1172, 0.3416, 0.5412)

Example Onlooker bees phase: produce new solutions υi for the onlookers from the solutions xi, selected depending on pi, and evaluate them. 1st onlooker bee: i = 2, υ2 = (-0.0754, -2.2520). Calculate f(υ2) and the fitness of υ2: f(υ2) = 5.0772 and the fitness value is 0.1645. Apply greedy selection between x2 and υ2: since 0.1645 < 0.4828, solution 2 could not be improved, so its trial counter is increased.

Example 2nd onlooker bee: i = 1, υ1 = (0.1722, 1.4338). Calculate f(υ1) and the fitness of υ1: f(υ1) = 2.0855 and the fitness value is 0.3241. Apply greedy selection between x1 and υ1: since 0.3241 > 0.3047, solution 1 was improved, so its trial counter is reset to 0 and the solution x1 is replaced with υ1.

Example
x = [  1.4112  -2.5644
       0.1722   1.4338
      -0.0754  -1.0323 ]
The f(x) values are: 8.5678, 2.0855, 1.0714
The fitness vector is: (0.1045, 0.3241, 0.4828)

Example 3rd onlooker bee: i = 2, υ2 = (0.0348, -1.0323). Calculate f(υ2) and the fitness of υ2: f(υ2) = 1.0669 and the fitness value is 0.4838. Apply greedy selection between x2 and υ2: since 0.4838 > 0.4828, solution 2 was improved, so its trial counter is reset to 0 and the solution x2 is replaced with υ2.

Example
x = [  1.4112  -2.5644
       0.1722   1.4338
       0.0348  -1.0323 ]
The f(x) values are: 8.5678, 2.0855, 1.0669
The fitness vector is: (0.1045, 0.3241, 0.4838)

Example Memorize the best solution: Best = (0.0348, -1.0323). Scout bee phase: the largest trial counter is 1, so there is no abandoned solution, since L = 6. If there were an abandoned solution (one whose trial counter value is higher than L = 6), a new solution would be generated randomly to replace it. Cycle = Cycle + 1. The procedure is continued until the termination criterion is attained.

References D. Karaboga and B. Basturk, "Artificial Bee Colony (ABC) Optimization Algorithm for Solving Constrained Optimization Problems", IFSA 2007, LNAI 4529, pp. 789–798, Springer-Verlag Berlin Heidelberg, 2007.

Thank you Ahmed_fouad@ci.suez.edu.eg http://www.egyptscience.net