
1 Particle Swarm Optimization by Dr. Shubhajit Roy Chowdhury Centre for VLSI and Embedded Systems Technology, IIIT Hyderabad

2 What is Optimization? Optimization can be defined as the art of obtaining the best policies to satisfy certain objectives while at the same time satisfying fixed requirements. - Gotfried
Unconstrained optimization. Example: Maximize Z, where Z = x1^2 x2 - x2^2 x1 - 2 x1 x2

3 Unconstrained & Constrained Optimization Unconstrained Approach: Unconstrained Approach: Set δZ/ δx 1 = 0, & Set δZ/ δx 1 = 0, & δZ/ δx 2 = 0. δZ/ δx 2 = 0. Constrained Optimization Constrained Optimization Example: Design a box with maximum volume and minimum surface area. Example: Design a box with maximum volume and minimum surface area.

4 Constrained Optimization (Contd.) Approach: let
L(x, y, z, λ) = xyz - λ [2(xy + yz + zx)]
(volume, Lagrange multiplier, surface area).
Set dL/dx = 0, dL/dy = 0, dL/dz = 0, and dL/dλ = 0.
Solution: x = y = z.
In most engineering systems, the solution returns numerical values of the variables.
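The box problem can be worked the same way. A minimal sketch (again sympy; the fixed surface-area constant S is an assumption I add, since the slide's Lagrangian leaves the constraint value implicit):

import sympy as sp

# Maximize volume xyz subject to a fixed surface area 2(xy + yz + zx) = S.
x, y, z, lam, S = sp.symbols('x y z lam S', positive=True)
L = x*y*z - lam * (2*(x*y + y*z + z*x) - S)

# Stationarity: dL/dx = dL/dy = dL/dz = dL/dlam = 0.
eqs = [sp.diff(L, v) for v in (x, y, z, lam)]
sol = sp.solve(eqs, [x, y, z, lam], dict=True)
print(sol)   # x = y = z = sqrt(S/6): the optimal box is a cube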

5 Numerical Approach to Optimization: Steepest Descent
Problem: minimize f(x1, x2, ..., xn).
Approach: x1 := x1 - η df/dx1
x2 := x2 - η df/dx2
...
xn := xn - η df/dxn
Loop until df/dxi = 0 for all i.
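A numeric sketch of the loop above (NumPy; the quadratic test function, the step size η = 0.05, and the stopping tolerance are illustrative choices of mine, not values from the slides):

import numpy as np

def grad_f(x):
    # Gradient of the test function f(x1, x2) = x1^2 + 10*x2^2.
    return np.array([2 * x[0], 20 * x[1]])

x = np.array([5.0, -3.0])    # starting point
eta = 0.05                   # step size (the symbol lost from the slide)
while np.linalg.norm(grad_f(x)) > 1e-8:   # until all df/dxi are ~0
    x = x - eta * grad_f(x)               # x_i := x_i - eta * df/dx_i
print(x)                     # converges to the minimizer [0, 0]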

6 But what about multi-modal, noisy, or even discontinuous functions? Gradient-based methods get trapped in a local minimum, or the function itself may be non-differentiable. How can a single agent find the global optimum by following gradient descent?

7 Way Out: Multi-Agent Optimization in Continuous Space [Figure: randomly initialized agents scattered over the search space]

8 [Figure: after convergence, most agents are near the global optimum]

9 Particle Swarm Optimization (PSO) (Kennedy and Eberhart, 1995)

10 Principles of Particle Swarm Optimization [Figure: a particle's current direction of motion, the directions towards the local and global maxima, and the resulting direction of motion]

11 The PSO Algorithm
1. Initialize the positions and velocities of n particles randomly.
2. Evaluate the fitness of all particles.
3. Adapt the velocity of each particle, taking into account its current velocity and the global best and local best positions experienced so far.
4. Compute each particle's new position from its current position and velocity.
5. Update the local best and global best positions based on the fitness of the new positions.
6. Repeat from (2) until most of the particles cease to move.
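A minimal end-to-end sketch of steps 1-6 (NumPy; the fitness function is slide 16's f(x) = x(x - 8), while the swarm size, iteration budget, and random seed are illustrative choices of mine):

import numpy as np

def f(x):                       # fitness to minimize: slide 16's example
    return x * (x - 8)

rng = np.random.default_rng(0)
n, iters = 20, 100              # swarm size and iteration budget
phi, c1, c2 = 0.5, 2.0, 2.0     # inertia and acceleration coefficients (slide 18)

x = rng.uniform(-10, 10, n)     # 1. random initial positions ...
v = rng.uniform(-1, 1, n)       #    ... and velocities
p_lb = x.copy()                 # local best position of each particle
p_gb = x[np.argmin(f(x))]       # 2. global best after the first evaluation

for _ in range(iters):
    r1, r2 = rng.random(n), rng.random(n)
    v = phi*v + c1*r1*(p_lb - x) + c2*r2*(p_gb - x)   # 3. adapt velocities
    x = x + v                                         # 4. new positions
    improved = f(x) < f(p_lb)                         # 5. update local bests ...
    p_lb[improved] = x[improved]
    p_gb = p_lb[np.argmin(f(p_lb))]                   #    ... and the global best

print(p_gb, f(p_gb))            # approaches x = 4, where f(4) = -16

Step 6's stopping rule (most particles cease to move) is replaced here by a fixed iteration budget for simplicity.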

12 PSO: Starting Situation [Figure: particles scattered randomly over the fitness landscape, with randomly oriented velocities]

13 [Figure: situation after a few iterations; all particles in close vicinity of the global optimum, the best particle conquering the peak]

14 Definitions [Figure: the best position found so far by each particle, and the globally best position found by the swarm]

15 Particle Swarm Optimization, Kennedy & Eberhart (1995)
V_i(t+1) = φ*V_i(t) + C1*rand(0,1)*(P_lb - X_i(t)) + C2*rand(0,1)*(P_gb - X_i(t))
X_i(t+1) = X_i(t) + V_i(t+1)
where P_lb is the best position found by the particle so far, P_gb is the globally best position found so far, X_i(t) is the current position, and V_i(t+1) is the resultant velocity. The second and third terms pull the particle towards P_lb and P_gb respectively.

16 Example: f(x) = x(x - 8). Consider a small swarm of particles for this one-dimensional function. Initial positions and velocities at time t = 0, randomly initialized in the range (-10, 10):

Particle   Position x(0)   Velocity v(0)   f(x)
1           7               3              -7
2          -2               5              20
3           9               6               9
4          -6              -4              84

So the fittest particle is particle 1, and we set P_gb = 7; each particle's P_lb is its own initial position X_i(0).

17 Initial Distribution of the Particles over the fitness landscape

18 Change in position of the particles in the next iteration. For this small-scale PSO problem we set C1 = C2 = 2.0 and φ = 0.5, and apply slide 15's update rules V_i(t+1) = φ*V_i(t) + C1*rand(0,1)*(P_lb - X_i(t)) + C2*rand(0,1)*(P_gb - X_i(t)) and X_i(t+1) = X_i(t) + V_i(t+1).

Particle 1: V_1(1) = 0.5*3 + 2*0.6*(7 - 7) + 2*0.4*(7 - 7) = 1.5
X_1(1) = 7 + 1.5 = 8.5, fitness f(X_1(1)) = 4.25

Particle 2: V_2(1) = 0.5*5 + 2*0.3*(-2 - (-2)) + 2*0.4*(7 - (-2)) = 9.7
X_2(1) = -2 + 9.7 = 7.7, fitness f(X_2(1)) = -2.31

19 Particle 3: V_3(1) = 0.5*6 + 2*0.8*(9 - 9) + 2*0.95*(7 - 9) = -0.8
X_3(1) = 9 + (-0.8) = 8.2, fitness f(X_3(1)) = 1.64

Particle 4: V_4(1) = 0.5*(-4) + 2*0.38*(-6 - (-6)) + 2*0.45*(7 - (-6)) = 9.7
X_4(1) = -6 + 9.7 = 3.7, fitness f(X_4(1)) = -15.91

Here we go for the next iteration (previous t = 0 values in parentheses):

Particle   Position at t=1   Velocity at t=1   f(x) at t=1    P_lb for t=2   P_gb for t=2
1           8.5 (7)           1.5 (3)           4.25 (-7)      7              3.7
2           7.7 (-2)          9.7 (5)          -2.31 (20)      7.7            3.7
3           8.2 (9)          -0.8 (6)           1.64 (9)       8.2            3.7
4           3.7 (-6)          9.7 (-4)        -15.91 (84)      3.7            3.7
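The table above can be reproduced mechanically. A small script (my own, not from the slides) that applies slide 15's update rule to the t = 0 table, using the random draws stated on slides 18-19:

phi, c1, c2 = 0.5, 2.0, 2.0
f = lambda x: x * (x - 8)

x = [7, -2, 9, -6]                       # positions at t = 0
v = [3, 5, 6, -4]                        # velocities at t = 0
r = [(0.6, 0.4), (0.3, 0.4), (0.8, 0.95), (0.38, 0.45)]
p_lb, p_gb = list(x), 7                  # each particle starts at its own best

for i in range(4):
    r1, r2 = r[i]
    v[i] = phi*v[i] + c1*r1*(p_lb[i] - x[i]) + c2*r2*(p_gb - x[i])
    x[i] = x[i] + v[i]
    print(f"particle {i+1}: v = {v[i]:.2f}, x = {x[i]:.2f}, f = {f(x[i]):.2f}")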

20 [Figures: distribution of the particles over the fitness landscape at t = 1 and at t = 5] Best particle at t = 5: P_gb = 3.95, f(P_gb) = -15.99

21 Optimization by PSO: Eggcrate Function. Minimize f(x1, x2). [Figure: surface plot of f over the (x1, x2) plane] Known global minimum at [0, 0], with optimum function value 0.
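The formula itself does not survive in the transcript; the standard form of the eggcrate function (an assumption on my part) is shown below, and it matches the stated minimum of 0 at [0, 0]:

import numpy as np

def eggcrate(x1, x2):
    # Standard eggcrate function: a bowl-shaped x1^2 + x2^2 term plus
    # a sinusoidal term that creates many local minima.
    return x1**2 + x2**2 + 25 * (np.sin(x1)**2 + np.sin(x2)**2)

print(eggcrate(0.0, 0.0))   # 0.0 at the known global minimum [0, 0]

Minimizing it with PSO only requires swapping this f into a 2-D version of the sketch under slide 11.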

22 Eggcrate Function Optimization by PSO [Figure: positions of the particles in the 2-D parameter space at different instants]

23 To know more. THE site: Particle Swarm Central, http://www.particleswarm.net
Clerc, M. and Kennedy, J., "The Particle Swarm - Explosion, Stability, and Convergence in a Multidimensional Complex Space", IEEE Transactions on Evolutionary Computation, vol. 6, no. 1, pp. 58-73, 2002.

24 Problem with PSO: if the fitness function is too wavy and irregular, the particles can get trapped in a local minimum, and the result is a suboptimal solution.

25 Perceptive Particle Swarm Optimization (PPSO)
- The particles fly around in an (n+1)-dimensional search space for an n-dimensional optimization problem: they fly over a physical fitness landscape, observing its crests and troughs from afar.
- Particles observe the search space within their perception range by sampling a fixed number of directions and a finite number of points along each direction: each particle attempts to observe the landscape at several sampled distances from its position in each direction (sketched in code below). If a sampled point is within the landscape, the particle perceives the height of the landscape at that point.
- Particles can also observe neighboring particles within their perception range. A particle randomly chooses which neighboring particles will influence it to move towards them; the position of the chosen neighbor is used as the local best position of the particle.
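A sketch of the perception step described above (this is my reading of the slide, not code from it; the direction count, sample count, and 2-D setting are illustrative choices):

import numpy as np

def perceive(pos, fitness, radius, n_dirs=8, n_samples=3):
    # Sample a fixed number of directions, and a finite number of points
    # along each direction out to the perception radius; return the most
    # promising perceived point and the landscape height there.
    best_point, best_val = pos, fitness(pos)
    for k in range(n_dirs):
        theta = 2 * np.pi * k / n_dirs
        d = np.array([np.cos(theta), np.sin(theta)])
        for s in range(1, n_samples + 1):
            p = pos + d * (radius * s / n_samples)
            if fitness(p) < best_val:       # perceive the height at that point
                best_point, best_val = p, fitness(p)
    return best_point, best_val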

26 PPSO Illustration

27 Adaptive Perceptive Particle Swarm Optimization (APPSO) In the APPSO algorithm, if the local best position of the particle at the current iteration does not improve the particle's performance, then one or more of the following things are done (see the sketch below):
(1) the spacing between sample points along each direction within the perception radius is reduced;
(2) the number of sampling directions is increased;
(3) the perception radius is reduced.
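A compact sketch of the three adjustments (the function name, the trigger flag, and the adjustment factors are mine; the slides give no concrete values):

def adapt_perception(improved, spacing, n_dirs, radius):
    # If the particle's local best did not improve this iteration,
    # refine its perception via the three adjustments above.
    if not improved:
        spacing *= 0.9   # (1) reduce spacing between sample points
        n_dirs += 1      # (2) increase the number of sampling directions
        radius *= 0.9    # (3) reduce the perception radius
    return spacing, n_dirs, radius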

28 APPSO Illustration

29 Different types of APPSO

Algorithm        Perception radius   No. of directions   No. of sample points
APPSO1 (PPSO)    Fixed               Fixed               Fixed
APPSO2           Fixed               Fixed               Variable
APPSO3           Fixed               Variable            Fixed
APPSO4           Fixed               Variable            Variable
APPSO5           Variable            Fixed               Fixed
APPSO6           Variable            Fixed               Variable
APPSO7           Variable            Variable            Fixed
APPSO8           Variable            Variable            Variable

30 THANK YOU

