
1

2
An Optimized Lifetime Enhancement Scheme for Data Gathering in Wireless Sensor Networks
Ayon Chakraborty 1, Kaushik Chakraborty 1, Swarup Kumar Mitra 2, M. K. Naskar 3
1 Department of Computer Science and Engineering
2 Department of ECE, MCKV Institute of Engineering
3 Department of Electronics and Telecommunications Engineering
Jadavpur University

3
Contents
- Wireless Sensor Networks
- Design Challenges in Wireless Sensor Networks
- Data Gathering Algorithms
- Proposed Algorithm
- Simulation Results
- Conclusion

4
DESIGN CHALLENGES
OBJECTIVE OF DEPLOYING SENSOR NODES: collect data / information from the sensor field.
- Ad-hoc nature of WSNs
- Typically severely energy constrained: limited energy sources (e.g., batteries); trade-off between performance and lifetime
- Self-organizing and self-healing: remote deployments
- Scalable: arbitrarily large number of nodes
POINTS: sensor nodes lose power while transmitting or receiving data at the time of data gathering.
GOAL: lifetime enhancement of sensor nodes.
SOLUTION: develop an efficient algorithm for data gathering.

5
DIFFERENT DATA GATHERING SCHEMES
[Figure: node deployment scenario with base station (BS) and clusters C0-C3]
LEACH / PEGASIS PHILOSOPHY: distribute the energy dissipated by the sensor nodes at the time of data gathering equally around the network. The LEACH protocol randomizes the selection of cluster heads for equal energy dissipation; the PEGASIS protocol uses a greedy chain to the sink.
Optimized Lifetime Enhancement (OLE) Scheme PHILOSOPHY: increase network performance by ensuring a sub-optimal energy dissipation of the individual nodes despite their random deployment, using modern heuristic techniques.
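For context, a minimal sketch of PEGASIS-style greedy chain construction (not from the paper; the node coordinates and the start-from-the-farthest-node rule are assumptions based on the standard PEGASIS description):

```python
import math

def greedy_chain(nodes, base_station):
    """Build a PEGASIS-style chain: start from the node farthest from the
    base station, then repeatedly append the nearest unvisited node."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    remaining = set(nodes)
    current = max(remaining, key=lambda n: dist(n, base_station))
    chain = [current]
    remaining.remove(current)
    while remaining:
        current = min(remaining, key=lambda n: dist(n, current))
        chain.append(current)
        remaining.remove(current)
    return chain

# Example: five nodes on a 50x50 field, base station at (25, 150)
nodes = [(5, 10), (40, 8), (22, 30), (10, 45), (48, 40)]
print(greedy_chain(nodes, (25, 150)))
```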

6
Particle Swarm Optimization (PSO): Kennedy and Eberhart, 1995. Particles are initially scattered randomly over the fitness landscape, with randomly oriented velocities.

7
Situation after a few iterations: the best particle conquers the peak, and all particles lie in a close vicinity of the global optimum.

8
PSO (2) - Visually in 2D
[Figure: a particle in 2D with position x(t), velocity v(t), personal best p(t) and global best g(t) combining into the new velocity v(t+1) and position x(t+1)]
A Close Look at Velocity Update
v_id = w·v_id (inertia) + c1·r1·(p_id − x_id) (cognitive learning) + c2·r2·(p_gd − x_id) (social learning)
Update position: x_id = x_id + v_id
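A minimal sketch of this update in Python (the inertia weight w = 0.7 and acceleration constants c1 = c2 = 1.5 are illustrative assumptions, not values from the paper):

```python
import random

def update_particle(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One PSO step: new velocity from inertia, cognitive and social terms,
    then move the particle to its new position."""
    new_v = [
        w * v[d]
        + c1 * random.random() * (pbest[d] - x[d])   # cognitive learning
        + c2 * random.random() * (gbest[d] - x[d])   # social learning
        for d in range(len(x))
    ]
    new_x = [x[d] + new_v[d] for d in range(len(x))]
    return new_x, new_v
```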

9
Flow-Chart: PSO algorithm
pbest = the personal best solution (fitness) a particle has achieved so far.
gbest = the global best solution of all particles.
1. Start
2. Initialize particles with random positions and zero velocities
3. Evaluate fitness values
4. Compare fitness values with pbest and gbest, and update them
5. Meet stopping criterion? If NO, update velocities and positions and return to step 3; if YES, end.
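Tying the flow chart together, a minimal continuous-domain PSO loop (the swarm size, iteration count, search range and sphere-function objective are illustrative assumptions; the scheme in this paper operates on chain permutations instead):

```python
import random

def pso(fitness, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal PSO loop following the flow chart above (minimization)."""
    xs = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]        # zero initial velocities
    pbests = [x[:] for x in xs]
    gbest = min(pbests, key=fitness)                      # best of all personal bests
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):                          # velocity, then position update
                vs[i][d] = (w * vs[i][d]
                            + c1 * random.random() * (pbests[i][d] - xs[i][d])
                            + c2 * random.random() * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            if fitness(xs[i]) < fitness(pbests[i]):       # compare & update pbest
                pbests[i] = xs[i][:]
        gbest = min(pbests, key=fitness)                  # update gbest
    return gbest

print(pso(lambda x: sum(t * t for t in x), dim=3))
```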

10
Evolutionary Algorithms

11
Simulated Annealing
- Based upon an analogy with the simulation of the annealing of solids
- Uses a more complex evaluation function: sometimes accepts candidates with higher cost to escape from a local optimum
- Adapts the parameters of this evaluation function during execution

12
Analogy
- Slowly cool down a heated solid, so that all particles arrange in the ground energy state
- At each temperature, wait until the solid reaches its thermal equilibrium
- Probability of being in a state with energy E:
  Pr{E = E} = (1/Z(T)) · exp(−E / (k_B·T))
  where E = energy, T = temperature, k_B = Boltzmann constant, Z(T) = normalization factor (temperature dependent)

13
Metropolis Acceptance
At a fixed temperature T:
- Perturb (randomly) the current state to a new state; ΔE is the difference in energy between the current and the new state
- If ΔE < 0 (the new state has lower energy), accept the new state as the current state
- If ΔE ≥ 0, accept the new state with probability Pr(accepted) = exp(−ΔE / (k_B·T))
- Eventually the system evolves into thermal equilibrium at temperature T; then the formula mentioned before holds
- When equilibrium is reached, the temperature T can be lowered and the process repeated
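A minimal sketch of the Metropolis acceptance rule (absorbing k_B into the temperature is a common convention and an assumption here):

```python
import math
import random

def metropolis_accept(delta_e, temperature):
    """Accept a perturbation: always if it lowers the energy, otherwise
    with probability exp(-delta_e / T)."""
    if delta_e < 0:
        return True
    return random.random() < math.exp(-delta_e / temperature)
```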

14
Simulated Annealing in Combinatorial Optimization (S. Kirkpatrick et al.)
The same algorithm can be used for combinatorial optimization problems:
- Energy E corresponds to the cost function C
- Temperature T corresponds to the control parameter c
Pr{configuration = i} = (1/Q(c)) · exp(−C(i) / c)
where C = cost, c = control parameter, Q(c) = normalization factor (not important)

15
OUR APPROACH TO SOLUTION
PROBLEM MODEL
The total number of nodes is n. The solution space U is the collection of arrangements of {1, 2, 3, …, n}; every arrangement C_i represents a chain, where U = {C_i | C_i is a permutation of (1, 2, …, n)}.
DATA GATHERING SCHEME USING PSO
Consider an n-dimensional system, where C_i denotes the i-th particle. The energy function for C_i gives ∆f = f(C_new) − f(C_old) = ∆E.
PROBABILITY FUNCTION P
P = 1 if ∆E ≤ 0
P = exp(−∆f / Ө) if ∆E > 0
If P > rand(0, 1), accept the solution; else reject it.
COOLING SCHEDULE
The control parameter Ө is called the annealing temperature.
PROPERTIES OF Ө
- Decremented every time the system of particles approaches a better solution (or a lower energy state)
- Ө_i = initial temperature, Ө_f = final temperature, t = cooling time (here, the number of iterations)
- Ө(t) = Ө_f + (Ө_i − Ө_f)·α^t, where α is the rate of cooling, usually 0.7 ≤ α < 1.0
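A minimal sketch of this cooling schedule and acceptance probability (the example values Ө_i = 100, Ө_f = 1 and α = 0.8 are illustrative assumptions):

```python
import math
import random

def temperature(t, theta_i=100.0, theta_f=1.0, alpha=0.8):
    """Annealing temperature after t iterations: theta_f + (theta_i - theta_f) * alpha^t."""
    return theta_f + (theta_i - theta_f) * alpha ** t

def accept(delta_f, theta):
    """P = 1 if the new chain is no worse, else exp(-delta_f / theta)."""
    if delta_f <= 0:
        return True
    return math.exp(-delta_f / theta) > random.random()
```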

16
PROPOSED ALGORITHM
Step 1: Initialization
Initialize the m particles C_1, C_2, C_3, …, C_m, where C_i = {node[1], node[2], …, node[n]}. Initialize the parameters α, Ө_i, Ө_f.
Step 2: Finding a local best chain
At a temperature Ө and for L iterations, the local best chain is searched for by random binary swapping:
- C_old = {n_1, n_2, …, n_i, …, n_j, …, n_n}
- Select two nodes n_i, n_j at random and swap them: C_new = {n_1, n_2, …, n_j, …, n_i, …, n_n}
- Calculate P for the new chain C_new
- If P > rand(0, 1), a random number between 0 and 1, then C_old = C_new
- C_ilbest = local best solution of particle C_i
Step 3: Updating the pbest and gbest values
C_ipbest = best solution for particle i:
C_ipbest = C_ilbest if f(C_ilbest) − f(C_ipbest) < 0
C_ipbest = C_ipbest if f(C_ilbest) − f(C_ipbest) ≥ 0
Comparing all the C_ipbest values, the global best solution is C_gbest = min{f(C_ipbest)}.
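A minimal sketch of the Step 2 swap-based local search for one particle, reusing the accept() helper from the sketch above (the chain_cost function is a hypothetical stand-in for the paper's energy function f):

```python
import random

def local_best_chain(chain, chain_cost, theta, L=50):
    """Random binary swapping with annealing-style acceptance at temperature theta."""
    current = chain[:]
    best = chain[:]
    for _ in range(L):
        i, j = random.sample(range(len(current)), 2)      # pick two node positions
        candidate = current[:]
        candidate[i], candidate[j] = candidate[j], candidate[i]
        delta_f = chain_cost(candidate) - chain_cost(current)
        if accept(delta_f, theta):                        # accept() from the earlier sketch
            current = candidate
            if chain_cost(current) < chain_cost(best):
                best = current[:]                         # local best solution so far
    return best
```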

17
PROPOSED ALGORITHM (continued)
Step 4: Formation of a new chain
A new chain is formed from C_ipbest and C_gbest by a crossover technique. Suppose C_ipbest = {4,5,2,3,6,1} and C_gbest = {5,2,1,4,3,6}. The slot {2,1,4} is randomly chosen from C_gbest and inserted at the same position in C_ipbest, and the node ids that are repeated are deleted, giving C_inew = {5,2,1,4,3,6}.
Step 5: Stopping criterion
The temperature Ө(t) is calculated. If its value is less than or equal to Ө_f, or the total number of iterations so far exceeds the value of t, the algorithm halts. The best chain formed is C_gbest.
Leader selection phase
After formation of the sub-optimal chain, the leader is chosen by max[E_resi / D^4], where E_resi denotes the residual energy of an individual node before starting a data gathering round and D is the distance of the base station from that node. The node with the maximum value of E_resi / D^4 becomes the leader. Here we consider the multipath fading (distance^4 power loss) channel model, as the leader is concerned with communicating with the distant base station.
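A minimal sketch of the leader selection rule (the node records with residual_energy and dist_to_bs fields are assumed data structures for illustration, not from the paper):

```python
def select_leader(nodes):
    """Leader = node maximizing residual energy / (distance to base station)^4,
    reflecting the d^4 multipath power-loss model for the long-haul link."""
    return max(nodes, key=lambda n: n["residual_energy"] / n["dist_to_bs"] ** 4)

# Example with two hypothetical nodes
nodes = [
    {"id": 1, "residual_energy": 0.8, "dist_to_bs": 120.0},
    {"id": 2, "residual_energy": 0.5, "dist_to_bs": 90.0},
]
print(select_leader(nodes)["id"])
```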

18
Simulation Results
[Table: number of data gathering rounds for various schemes versus percentage of dead nodes]

19
Simulation Results
Performance analysis of different protocols with energy/node = 1 J and base station at (25, 150).
TOSSIM radio loss model based on empirical data: the mean packet loss rate versus distance is shown, with error bars indicating one standard deviation from the mean. The model is highly variable at intermediate distances.

20
Simulation Results
[Figures: greedy chain versus chain formed by the OLE scheme]

21
CONCLUSION
Optimal energy utilization occurs, thereby increasing network lifetime, as validated by the simulation results. PSO combined with Simulated Annealing helps enhance the performance of our scheme. Two major advantages: (i) development time is much shorter than with more traditional approaches; (ii) the systems are very robust, being relatively insensitive to noisy and/or missing data. Moreover, the OLE scheme has been coded in nesC, which shows it is feasible on real motes. We have also used the TOSSIM interference model while simulating packet loss rates for the various schemes. Our future goal is to study the problem using Genetic Algorithms and compare the result to the OLE scheme.

22
REFERENCES
[1] Clare, Pottie, and Agre, "Self-Organizing Distributed Sensor Networks", SPIE Conference on Unattended Ground Sensor Technologies and Applications, pp. 229-237, Apr. 1999.
[2] Yunxia Chen and Qing Zhao, "On the Lifetime of Wireless Sensor Networks", IEEE Communications Letters, Vol. 9, Issue 11, pp. 976-978, DOI 10.1109/LCOMM.2005.11010, Nov. 2005.
[3] S. Lindsey, C. S. Raghavendra and K. Sivalingam, "Data Gathering in Sensor Networks using energy*delay metric", Proceedings of the 15th International Parallel and Distributed Processing Symposium, pp. 188-200, 2001.
[4] W. Heinzelman, A. Chandrakasan, H. Balakrishnan, "Energy-Efficient Communication Protocol for Wireless Microsensor Networks", IEEE Proc. of the Hawaii International Conf. on System Sciences, pp. 1-10, Jan. 2000.
[5] S. Lindsey, C. S. Raghavendra, "PEGASIS: Power Efficient Gathering in Sensor Information Systems", Proceedings of IEEE ICC 2001, pp. 1125-1130, June 2001.
[6] Ayan Acharya, Anand Seetharam, Abhishek Bhattacharyya, Mrinal Kanti Naskar, "Balancing Energy Dissipation in Data Gathering Wireless Sensor Networks Using Ant Colony Optimization", 10th International Conference on Distributed Computing and Networking (ICDCN 2009), pp. 437-443, January 3-6, 2009.
[7] R. C. Eberhart, J. Kennedy, "A new optimizer using particle swarm theory", 1995.
[8] S. Kirkpatrick, "Simulated Annealing", Science, Vol. 220, 1983.
[9] David Gay, Philip Levis, David Culler, Eric Brewer, nesC 1.1 Language Reference Manual, May 2003.
[10] Philip Levis, TinyOS Programming, June 28, 2006.
[11] P. Levis, N. Lee, M. Welsh, and D. Culler, "TOSSIM: Accurate and Scalable Simulation of Entire TinyOS Applications".
[12] N. Metropolis et al., J. Chem. Phys., Vol. 21, p. 1087, 1953.
[13] Zhi-Feng Hao, Zhi-Gang Wang, Han Huang, "A Particle Swarm Optimization Algorithm with Crossover Operator", International Conference on Machine Learning and Cybernetics 2007, pp. 19-22, Aug. 2007.

23
The Particle Swarm Optimization Algorithm
The Particle Swarm Optimization (PSO) algorithm was developed in 1995 by James Kennedy and Russ Eberhart. It was inspired by the social behavior of bird flocking and fish schooling. PSO applies the concept of social interaction to problem solving.

24
Homogeneous Algorithm
initialize;
REPEAT
  REPEAT
    perturb (config. i → config. j, ΔC_ij);
    IF ΔC_ij < 0 THEN accept
    ELSE IF exp(−ΔC_ij / c) > random[0,1) THEN accept;
    IF accept THEN update(config. j);
  UNTIL equilibrium is approached sufficiently closely;
  c := next_lower(c);
UNTIL system is frozen or stop criterion is reached

25
Inhomogeneous Algorithm
The previous algorithm is the homogeneous variant: c is kept constant in the inner loop and is only decreased in the outer loop.
The alternative is the inhomogeneous variant: there is only one loop; c is decreased each time through the loop, but only very slightly.

26
Parameters
- Choose the start value of c so that in the beginning nearly all perturbations are accepted (exploration), but not so big as to cause long run times.
- The function next_lower in the homogeneous variant is generally a simple function to decrease c, e.g. a fixed fraction (80%) of the current c.
- At the end c is so small that only a very small number of the perturbations is accepted (exploitation).
- If possible, always remember explicitly the best solution found so far; the algorithm itself can leave its best solution and not find it again.
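A minimal sketch of the homogeneous outer loop with a geometric next_lower and explicit best-so-far tracking (the 80% decay, starting temperature, cut-off and generic energy/perturb callbacks are illustrative assumptions):

```python
import math
import random

def anneal(initial, energy, perturb, c=10.0, c_min=0.01, inner_iters=100):
    """Homogeneous SA: c fixed in the inner loop, c := 0.8 * c in the outer loop.
    The best configuration seen so far is remembered explicitly."""
    current = initial
    best = initial
    while c > c_min:                                   # outer loop: cool down
        for _ in range(inner_iters):                   # inner loop: fixed temperature
            candidate = perturb(current)
            delta = energy(candidate) - energy(current)
            if delta < 0 or math.exp(-delta / c) > random.random():
                current = candidate
                if energy(current) < energy(best):
                    best = current                     # remember best solution so far
        c = 0.8 * c                                    # next_lower(c)
    return best
```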

27
Pitfalls of PSO
- Particles tend to cluster, i.e., converge too fast and get stuck at a local optimum, especially in gbest PSO: premature convergence.
- The movement of a particle may carry it into an infeasible region: unnecessary loss of computational power.
- Inappropriate mapping of particle space into solution space.

28
Other Names
- Monte Carlo Annealing
- Statistical Cooling
- Probabilistic Hill Climbing
- Stochastic Relaxation
- Probabilistic Exchange Algorithm
