Distributed Optimization in Sensor Networks


1 Distributed Optimization in Sensor Networks
Mike Rabbat & Rob Nowak Monday, April 26, 2004 IPSN’04, Berkeley, CA

2 A Motivating Example
A destination node asks the network “What is µ(x)?” and receives the computed value in reply from sensors x1, x2, …, xn.

3 Two Extreme Approaches
n sensors distributed uniformly over a square region; the goal is to compute a function of the data x1, …, xn. 1) Transmit Data: send all raw measurements to the destination. 2) Transmit A Result: compute in-network and send only the answer.

4 Energy-Accuracy Tradeoffs
Multi-hop communication: b(n) = total number of bits, h(n) = avg number of hops per bit, e(n) = avg energy per hop. Total energy consumption ≈ b(n) · h(n) · e(n). Consider two situations: 1) sensors transmit data, and the result is computed at the destination; 2) sensors process in-network, and only the result is transmitted to the destination.
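The tradeoff above can be sketched numerically. A minimal sketch, assuming illustrative values for b(n), h(n), and e(n) (the √n hop scaling and all constants are assumptions, not the talk's figures):

```python
import math

def total_energy(bits, hops_per_bit, energy_per_hop):
    """Total energy consumption = b(n) * h(n) * e(n)."""
    return bits * hops_per_bit * energy_per_hop

n = 1000          # sensors on the square (illustrative)
b_per_msg = 32    # bits per message (assumption)
e = 1.0           # energy per hop, normalized
h = math.sqrt(n)  # multi-hop distance to the destination, assumed ~ sqrt(n) hops

# 1) every sensor sends its raw data multi-hop to the destination
e_raw = total_energy(n * b_per_msg, h, e)

# 2) in-network: K cycles, each passing the estimate one hop between neighbors
K = 25
e_dist = total_energy(K * n * b_per_msg, 1, e)

# e_dist < e_raw exactly when K < h(n): few cycles beat shipping raw data
```

With these assumed numbers the distributed scheme wins because K = 25 is below h(n) ≈ 31.6 hops.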

5 Distributed Iterative Optimization
n sensors arranged in a cycle. Minimize Σi fi(µ; xi) with respect to µ; e.g., fi(µ; xi) = (µ – xi)². Strategy: cycle over each sensor, updating the estimate using the previous value and the local data.

6 Distributed Iterative Optimization
n sensors arranged in a cycle. Minimize Σi fi(µ; xi) with respect to µ. Strategy: cycle over each sensor, updating the estimate using the previous value and the local data. Sensor 1 computes the first update µ1(1). Energy used: the estimate travels one hop to sensor 2.

7 Distributed Iterative Optimization
Cycling continues: sensor 2 computes µ2(1) from µ1(1) and its local data x2, and the estimate travels another hop. Energy used: 2 hops.

8 Distributed Iterative Optimization
The estimate passes around the ring until sensor n computes µn(1), completing the first cycle. Energy used: n one-hop transmissions.

9 Distributed Iterative Optimization
Repeat for K cycles, ending with µn(K). Energy used: K·n one-hop transmissions. Compared with: every sensor transmitting its raw data multi-hop to the destination.
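The cycling strategy above can be sketched for the quadratic example fi(µ; xi) = (µ – xi)², where the incremental updates drive the estimate toward the sample mean; the step size and readings below are illustrative assumptions:

```python
def incremental_cycles(data, alpha=0.05, K=25):
    """Pass one running estimate sensor to sensor for K full cycles."""
    u = 0.0                           # initial estimate at sensor 1
    for _ in range(K):                # one pass around the network per cycle
        for x in data:                # sensor i uses only its local reading x_i
            u = u - alpha * 2 * (u - x)   # gradient of (u - x)^2 is 2(u - x)
    return u

readings = [4.0, 6.0, 5.5, 4.5]       # hypothetical sensor data
est = incremental_cycles(readings)
# est settles near the sample mean (5.0) for small alpha and enough cycles
```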

10 Incremental Subgradient Methods
Have data xi at sensor i, for i = 1, 2, …, n. Find the µ that minimizes f(µ) = Σi fi(µ; xi). Distributed iterative procedure: µi(k) = µi−1(k) − α ∇fi(µi−1(k); xi), where α is a small positive step size.

11 Incremental Subgradient Methods
Have data xi at sensor i, for i = 1, 2, …, n. Find the µ that minimizes f(µ) = Σi fi(µ; xi). Distributed iterative procedure: use a subgradient of fi when fi is non-differentiable.
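For a non-differentiable fi, any subgradient can stand in for the gradient. A sketch using the absolute loss fi(µ; xi) = |µ – xi| as the non-differentiable cost (this loss, the step size, and the data are assumptions for illustration, not the talk's example):

```python
def subgrad_abs(u, x):
    """A subgradient of |u - x|: +/-1 away from the kink, 0 at it."""
    if u > x:
        return 1.0
    if u < x:
        return -1.0
    return 0.0        # 0 lies in the subdifferential [-1, 1] at u = x

def incremental_subgradient(data, alpha=0.01, K=200):
    u = 0.0
    for _ in range(K):
        for x in data:
            u = u - alpha * subgrad_abs(u, x)
    return u

# Minimizing sum_i |u - x_i| drives u toward the sample median,
# so the single outlier 100.0 barely affects the estimate.
est = incremental_subgradient([1.0, 2.0, 3.0, 100.0])
```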

12 Convergence Theorem (Nedić & Bertsekas, ’01):
Assume the fi are convex on a convex set Θ, µ* ∈ Θ, and ‖∇fi(µ)‖ ≤ C. Define D = diam(Θ). Then after K cycles, with step size α = D/(nC√K), we are guaranteed that min1≤k≤K f(µ(k)) − f(µ*) ≤ DnC/√K.

13 Comparing Resource Usage
1) Transmit Data: all measurements x1, …, xn travel multi-hop to the destination. 2) Transmit A Result: the estimate is passed sensor to sensor, and only the final value travels to the destination.

14 Energy Savings
When does distributed processing use less energy? Compare the total energy of transmitting all raw data with that of K in-network cycles.

15 Robust Estimation
Estimate the mean using the squared-error loss, or form a robust estimate using a robust loss function.
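The contrast above can be sketched with a Huber-style loss, whose derivative clips large residuals (this particular loss, the step size, and the readings are illustrative assumptions, not the talk's exact setup):

```python
def huber_grad(r, delta=1.0):
    """Derivative of the Huber loss: clipped to +/-delta outside [-delta, delta]."""
    if abs(r) <= delta:
        return r
    return delta if r > 0 else -delta

def incremental_estimate(data, grad, alpha=0.01, K=200):
    u = 0.0
    for _ in range(K):
        for x in data:
            u = u - alpha * grad(u - x)
    return u

readings = [5.1, 4.9, 5.0, 5.2] * 5 + [50.0, 60.0]   # two "damaged" sensors
mean_like  = incremental_estimate(readings, lambda r: r)   # squared-error loss
robust_est = incremental_estimate(readings, huber_grad)    # robust loss
# robust_est stays near the bulk of the readings (~5), while the
# squared-error estimate is dragged far off by the two bad sensors
```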

16 Robust Estimates and “Bad Sensors”
E.g., monitoring ozone levels, with µ = the average level. A normal sensor measures µ ± 1; a damaged sensor measures µ ± 10. Setup: 200 sensors, 10 measurements each, 10% “bad”. Energy used (K = 25 iterations) is compared for the robust loss and the squared-error loss.

17 Source Localization Isotropic energy source located at µ
Sensor i, at location ri, measures the received signal strength. Local cost functions fi penalize the mismatch between the strength predicted for a source at µ and sensor i’s measurement.

18 Source Localization
100 sensors in a 50 × 50 square, 10 measurements each. Avg SNR = 3 dB. Converged in 45 cycles. Compare with the cost of transmitting all raw measurements to the destination.

19 In Conclusion
When is in-network processing more energy efficient? Incremental subgradient optimization is simple to implement, applies to a general class of problems, and has an analyzable rate of convergence. Distributed in-network processing uses less energy when the cost of K cycles of one-hop transmissions falls below that of transmitting all raw data.

20 Ongoing & Future Work
Contact: rabbat@cae.wisc.edu

