

Presentation transcript: "Distributed Inference: High Dimensional Consensus"

1 Distributed Inference: High Dimensional Consensus. José M. F. Moura; joint work with Usman A. Khan (UPenn) and Soummya Kar (Princeton). The Australian National University, RSISE Systems and Control Series Seminar, Canberra, Australia, July 30, 2010. Acknowledgements: AFOSR grant FA95501010291, NSF grant CCF1011903, ONR MURI N000140710747.

2 Outline  Motivation for networked systems and distributed algorithms  Identify the main characteristics of networked systems and distributed algorithms  Consensus algorithms and emerging behavior  Example: localization  Conclusions

3 Motivation  Networked systems: agents, sensors  Applications: inference (detection, estimation, filtering, …)  Distributed algorithms:  consensus  more general algorithms: high dimensional consensus  Realistic conditions:  randomness: infrastructure (link failures), random protocols (gossip), communication noise  quantization effects  measurement updates  Issues: convergence — design the topology to speed up convergence; prove convergence  Applications: localization

4 Networked Systems: Consensus  Example: 3-node network (figure: nodes 1, 2, 3)  Each node replaces its value by a weighted average over its neighborhood, with W_ij = 0 if link (i, j) is not available  W is symmetric and sparse; W reflects the topology of the network  In matrix form, consensus is x(t+1) = W x(t)  Consensus is linear and iterative; the issues are convergence and rate of convergence (see the sketch below)
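
A minimal sketch of the iteration, assuming the standard average-consensus update x(t+1) = W x(t); the 3-node path graph and its weights are illustrative, not the deck's:

```python
import numpy as np

# Symmetric, sparse weight matrix for a 3-node path graph 1-2-3:
# node 2 links to both ends; no direct link 1-3, so W[0,2] = 0.
W = np.array([[0.5, 0.5, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.5]])

x = np.array([1.0, 5.0, 9.0])   # initial node values
for _ in range(100):            # consensus iteration x(t+1) = W x(t)
    x = W @ x

print(x)  # every entry approaches the average, 5.0
```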

5 Consensus: Optimization  Consensus: x(t+1) = W x(t)  Convergence and limit: x(t) → ((1/N) 1 1ᵀ) x(0), i.e., every node converges to the average of the initial values  Spectral condition: convergence holds iff the spectral radius ρ(W − (1/N) 1 1ᵀ) < 1, and this radius sets the rate (figure: 3-node network). Kar & Moura, IEEE Transactions on Signal Processing, vol. 56, no. 6, June 2008
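
The spectral condition is easy to check numerically; a short sketch, reusing the illustrative W from above:

```python
import numpy as np

W = np.array([[0.5, 0.5, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.5]])
N = W.shape[0]
J = np.ones((N, N)) / N          # averaging projector (1/N) 1 1^T
rho = max(abs(np.linalg.eigvals(W - J)))
print(rho)  # < 1, so x(t) -> average consensus; smaller rho = faster
```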

6 Topology Design  Speed up convergence by making ρ(W − (1/N) 1 1ᵀ) small  Choose where the nonzero entries of W are (the topology) and the actual values of those nonzero entries (the weights) (figure: two alternative 3-node topologies). Kar & Moura, IEEE Transactions on Signal Processing, vol. 56, no. 6, June 2008

7 Topology Design  Design the graph Laplacian to minimize the convergence factor  Equal weights: every edge carries the same weight α (Xiao and Boyd, CDC, Dec 2003); a sketch of this design follows below  Graph design: subject to constraints, e.g., number of edges M, structure of the graph. Kar & Moura, IEEE Transactions on Signal Processing, vol. 56, no. 6, June 2008
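
A sketch of the equal-weight design with W = I − αL. The closed-form best constant weight α* = 2/(λ_max(L) + λ_2(L)), with λ_2 the smallest nonzero Laplacian eigenvalue, is Xiao and Boyd's result; the 3-node path graph is my illustrative example, not the deck's:

```python
import numpy as np

def best_constant_weight(L):
    """Best constant edge weight for W = I - alpha*L (Xiao & Boyd):
    alpha* = 2 / (lambda_max(L) + lambda_2(L))."""
    lam = np.sort(np.linalg.eigvalsh(L))  # ascending; lam[0] = 0
    return 2.0 / (lam[-1] + lam[1])

# Laplacian of the 3-node path graph 1-2-3 (illustrative).
L = np.array([[ 1, -1,  0],
              [-1,  2, -1],
              [ 0, -1,  1]], dtype=float)

alpha = best_constant_weight(L)           # = 0.5 here
W = np.eye(3) - alpha * L
J = np.ones((3, 3)) / 3
print(alpha, max(abs(np.linalg.eigvals(W - J))))  # weight, convergence factor
```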

8 Topology Design  Nonrandom topology (topology static or fixed):  Class 1: noiseless communication  Class 2: noisy communication  Random topology:  Class 3: links may fail intermittently  Class 4: random topology with communication costs and a budget constraint:  communication on link (i, j) has a cost  link (i, j) fails with some probability  average communication budget constraint per iteration on the network. Kar & Moura, IEEE Transactions on Signal Processing, vol. 56, no. 6, June 2008

9 Topology Design – Class 1: Ramanujan (LPS) graphs. Fig. 1: a non-bipartite LPS Ramanujan graph with degree k = 6 and N = 62 vertices (figure constructed with the software Pajek). Constructions by Lubotzky, Phillips, Sarnak (LPS, 1988) and Margulis (1988); named after Srinivasa Ramanujan (22/12/1887 – 26/4/1920). Kar & Moura, IEEE Transactions on Signal Processing, vol. 56, no. 6, June 2008

10 Comparison Studies  Performance metric: speed of convergence, compared across four graph classes (see the sketch below):  LPS Ramanujan: we use a non-bipartite Ramanujan graph construction from LPS and call it LPS-II  Regular Ring Lattice (RRL): highly structured regular graphs with nearest-neighbor connectivity  Erdős–Rényi (ER): random networks  Watts–Strogatz (WS-I): small-world networks using the Watts–Strogatz construction
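
A rough, hedged way to run this kind of comparison with standard generators, measuring speed by the spectral radius of W − (1/N)11ᵀ under the best constant weight. networkx has no LPS Ramanujan builder, so a random regular graph — near-Ramanujan with high probability — stands in for LPS-II:

```python
import numpy as np
import networkx as nx

def convergence_factor(G):
    """Spectral radius of W - (1/N)*11^T for W = I - alpha*L with the
    best constant weight alpha = 2/(lam_max + lam_2). Smaller is faster."""
    L = nx.laplacian_matrix(G).toarray().astype(float)
    lam = np.sort(np.linalg.eigvalsh(L))
    alpha = 2.0 / (lam[-1] + lam[1])
    W = np.eye(len(G)) - alpha * L
    J = np.ones_like(W) / len(G)
    return max(abs(np.linalg.eigvals(W - J)))

N, k = 62, 6  # matches the deck's N = 62, degree k = 6 example
graphs = {
    "regular ring lattice": nx.watts_strogatz_graph(N, k, p=0.0),
    "Watts-Strogatz":       nx.watts_strogatz_graph(N, k, p=0.3, seed=1),
    "Erdos-Renyi":          nx.gnm_random_graph(N, N * k // 2, seed=1),
    "random regular":       nx.random_regular_graph(k, N, seed=1),
}
for name, G in graphs.items():
    if nx.is_connected(G):
        print(f"{name:22s} {convergence_factor(G):.3f}")
```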

11 LPS Ramanujan vs Regular Ring Lattice (RRL). (Figure: ratio of convergence speeds, Ramanujan vs regular graph.) Kar & Moura, IEEE Transactions on Signal Processing, vol. 56, no. 6, June 2008

12 LPS-II vs Erdős–Rényi (ER)  The top blue line corresponds to the LPS-II graphs; the LPS-II graphs perform much better than the best ER graphs.  The relative performance of the LPS-II graphs over the ER graphs increases steadily with increasing N. Kar & Moura, IEEE Transactions on Signal Processing, vol. 56, no. 6, June 2008

13 LPS-II vs Watts–Strogatz (WS-I). Kar & Moura, IEEE Transactions on Signal Processing, vol. 56, no. 6, June 2008

14 Topology Design – Class 4: Communication Costs  Communication on link (i, j) has a cost  Link (i, j) fails with some probability  Average communication budget constraint per iteration on the network  The design problem is a convex optimization (SDP); a hedged sketch follows below. Kar & Moura, IEEE Transactions on Signal Processing, 56:7, July 2008
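
This is my reconstruction of the flavor of the SDP, not the paper's exact formulation: choose link activation probabilities to make the expected iteration matrix as contractive as possible, subject to an average-cost budget. The topology, costs `c`, `budget`, and fixed edge weight `w` are all illustrative:

```python
import numpy as np
import cvxpy as cp

N = 4
edges = [(0, 1), (1, 2), (2, 3), (0, 3), (0, 2)]  # candidate links
c = np.array([1.0, 1.0, 1.0, 2.0, 2.0])           # cost of using each link
budget = 4.0                                      # average budget per iteration
w = 0.3                                           # fixed edge weight

p = cp.Variable(len(edges))                       # link activation probabilities

# Expected Laplacian E[L] = sum_e p_e * B_e, with B_e the one-edge Laplacian.
EL = 0
for e, (i, j) in enumerate(edges):
    B = np.zeros((N, N))
    B[i, i] = B[j, j] = 1.0
    B[i, j] = B[j, i] = -1.0
    EL = EL + p[e] * B

J = np.ones((N, N)) / N
# Minimize the spectral norm of E[W] - J, E[W] = I - w*E[L]: an SDP.
objective = cp.Minimize(cp.norm(np.eye(N) - w * EL - J, 2))
constraints = [p >= 0, p <= 1, c @ p <= budget]
cp.Problem(objective, constraints).solve()
print(p.value, objective.value)
```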

15 Random Topology with Communication Costs. (Fig. 4: per-step convergence gain S_g; N = 80 and |E| = 9N = 720.) Kar & Moura, IEEE Transactions on Signal Processing, 56:7, July 2008

16 High Dimensional Consensus (HDC)  LOCAL INTERACTIONS: the local updates are linear combinations of the neighbors' states; anchor nodes hold their states fixed, while sensor nodes update from their neighborhoods  GLOBAL BEHAVIOR: under what conditions does HDC converge, and what is the limit, for an appropriate function w_l of the anchor states? (A minimal sketch follows below.) Khan, Kar, Moura, ICASSP '09, '10, Asilomar '09, IEEE TSP '10
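
A minimal sketch of the HDC iteration under the anchors/sensors split the deck uses later for localization; the weights and states below are illustrative:

```python
import numpy as np

# Partitioned HDC iterate: anchors (states u) stay fixed; sensors update
# as linear combinations of anchor and sensor states.
K, M = 2, 3                      # anchors, sensors
B = np.array([[0.3, 0.2],        # sensor weights on anchor states
              [0.1, 0.4],
              [0.2, 0.2]])
P = np.array([[0.3, 0.2, 0.0],   # sensor weights on sensor states;
              [0.0, 0.3, 0.2],   # rho(P) < 1 is needed for convergence
              [0.2, 0.0, 0.4]])

u = np.array([1.0, 2.0])         # anchor states (fixed)
x = np.zeros(M)                  # sensor states
for _ in range(200):
    x = B @ u + P @ x            # HDC local update

# Fixed point: x* = (I - P)^{-1} B u
print(x, np.linalg.solve(np.eye(M) - P, B @ u))
```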

17 Distributed Localization  Localize M sensors with unknown locations in m-dimensional Euclidean space [1] (figure: the m = 2 case, a 2-D plane)  Minimal number, n = m+1, of anchors with known locations  Sensors only communicate within a neighborhood  Only local distances in the neighborhood are known to each sensor  There is no central fusion center. [1] Khan, Kar, Moura, "Distributed Sensor Localization in Random Environments using Minimal Number of Anchor Nodes," IEEE Transactions on Signal Processing, 57(5), pp. 2000-2016, May 2009.

18 Distributed Sensor Localization  Assumptions:  sensors lie in the convex hull of the anchors  anchors do not lie on a hyperplane  each sensor finds m+1 neighbors such that it lies in their convex hull (TRIANGULATION)  only local distances are available  Distributed localization (DILOC) algorithm:  each sensor updates its position estimate as a convex linear combination of n = m+1 neighbors  the weights of the combination are the barycentric coordinates  barycentric coordinates: ratios of generalized volumes, computed from Cayley-Menger determinants of the local distances

19 Barycentric Coordinates & Cayley-Menger Determinants. (Figure: node l inside the triangle formed by nodes 1, 2, 3.)  Barycentric coordinates of l with respect to its neighbors  Example in 2D: the weight on neighbor k is the ratio of the area of the triangle formed by l and the other two neighbors to the area of the full triangle  Cayley-Menger determinants give these areas (generalized volumes) from the pairwise distances alone (a sketch follows below)
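
A sketch of the 2-D case, using the standard Cayley-Menger determinant for a triangle's area from its squared side lengths. Coordinates appear only to generate test distances; the computation itself uses distances alone:

```python
import numpy as np

def cm_area(d2_ab, d2_ac, d2_bc):
    """Triangle area from squared side lengths via the Cayley-Menger
    determinant: det of the 4x4 bordered matrix equals -16*A^2."""
    D = np.array([[0, 1,     1,     1    ],
                  [1, 0,     d2_ab, d2_ac],
                  [1, d2_ab, 0,     d2_bc],
                  [1, d2_ac, d2_bc, 0    ]], dtype=float)
    return np.sqrt(-np.linalg.det(D) / 16.0)

# Point l inside triangle (1, 2, 3); only pairwise distances are used.
pts = {1: np.array([0.0, 0.0]), 2: np.array([4.0, 0.0]),
       3: np.array([0.0, 3.0]), 'l': np.array([1.0, 1.0])}
d2 = lambda a, b: float(np.sum((pts[a] - pts[b]) ** 2))

A_total = cm_area(d2(1, 2), d2(1, 3), d2(2, 3))
# Barycentric coordinate of l w.r.t. vertex k = area of the triangle
# formed by l and the other two vertices, over the total area.
w1 = cm_area(d2('l', 2), d2('l', 3), d2(2, 3)) / A_total
w2 = cm_area(d2('l', 1), d2('l', 3), d2(1, 3)) / A_total
w3 = cm_area(d2('l', 1), d2('l', 2), d2(1, 2)) / A_total
print(w1, w2, w3, w1 + w2 + w3)            # sums to 1 inside the hull
print(w1*pts[1] + w2*pts[2] + w3*pts[3])   # recovers l's position (1, 1)
```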

20 Set-up Phase: Triangulation  Test to find a triangulation set for each sensor l  Convex hull inclusion test, based on the following observation: if l lies inside the triangle (1, 2, 3), the areas of the sub-triangles satisfy A(l,2,3) + A(1,l,3) + A(1,2,l) = A(1,2,3); if l lies outside, the sum of the sub-triangle areas exceeds A(1,2,3)  The test compares these areas, all computed from local distances (figures: l inside and outside the triangle 1, 2, 3; a code version follows below)
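
The inclusion test in code, reusing `cm_area` and `d2` from the sketch above (a minimal version; the deck's exact tolerance handling is not shown):

```python
def in_convex_hull_2d(d2, l, tri, tol=1e-9):
    """True if node l lies in the triangle tri = (a, b, c), using only
    squared pairwise distances d2(x, y): inside iff the three
    sub-triangle areas sum to the total area."""
    a, b, c = tri
    A_total = cm_area(d2(a, b), d2(a, c), d2(b, c))
    A_sub = (cm_area(d2(l, b), d2(l, c), d2(b, c)) +
             cm_area(d2(a, l), d2(a, c), d2(l, c)) +
             cm_area(d2(a, b), d2(a, l), d2(b, l)))
    return A_sub <= A_total * (1 + tol)

print(in_convex_hull_2d(d2, 'l', (1, 2, 3)))  # True for the setup above
```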

21 Distributed Localization  Distributed localization algorithm (DILOC): K anchors and M sensors (K + M = N) in m dimensions  Each sensor updates its position estimate as the convex combination of its triangulation set's estimates, with the barycentric coordinates as weights; anchors keep their (known) positions  Matrix form: x(t+1) = [I 0; B P] x(t), where the identity block corresponds to the anchors and (B, P) collect the sensors' weights on anchors and on other sensors

22 Distributed Localization: DILOC  DILOC: x(t+1) = [I 0; B P] x(t)  Assume: each sensor lies in the convex hull of its triangulation set (triangulation), with weights given by the barycentric coordinates. Theorem [Convergence]: under the above assumptions: 1. The underlying Markov chain with transition matrix [I 0; B P] is absorbing (the anchors are the absorbing states). 2. DILOC converges to the exact sensor coordinates: the sensor states tend to (I − P)^{-1} B times the anchor positions. (An end-to-end toy run follows below.)
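
An end-to-end toy run of DILOC under the stated assumptions. The geometry, triangulation sets, and barycentric weights are hand-built for illustration; in the algorithm proper the weights come from Cayley-Menger determinants of measured distances:

```python
import numpy as np

# Anchors with known positions (m = 2, so n = m+1 = 3 anchors).
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])

# Two sensors with true positions (2,2) and (4,3) -- unknown to the algorithm.
# Triangulation sets (each sensor lies in its set's convex hull):
#   sensor 1 -> {anchor 1, anchor 3, sensor 2}, barycentric (0.45, 0.05, 0.5)
#   sensor 2 -> {anchor 1, anchor 2, anchor 3}, barycentric (0.3, 0.4, 0.3)
B = np.array([[0.45, 0.0, 0.05],   # sensor weights on anchor positions
              [0.30, 0.4, 0.30]])
P = np.array([[0.0, 0.5],          # sensor 1 also averages over sensor 2
              [0.0, 0.0]])

x = np.zeros((2, 2))               # arbitrary initial position estimates
for _ in range(60):
    x = B @ anchors + P @ x        # DILOC: x(t+1) = B u + P x(t)

print(x)  # -> [[2, 2], [4, 3]], the exact sensor coordinates
```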

23 Distributed Localization: Simulations  N = 7 node network in the 2-D plane: M = 4 sensors, K = m+1 = 3 anchors  A larger network: M = 497 sensors

24 Convergence of DILOC  Theorem [Convergence]: under a random network (links fail intermittently but the network is connected on average), noisy communication, errors in the intersensor distances, and a persistence condition on the weights, the distributed distance-based localization algorithm converges. Khan, Kar, and Moura, "DILAND: An Algorithm for Distributed Sensor Localization with Noisy Distance Measurements," IEEE Transactions on Signal Processing, 58:3, pp. 1940-1947, March 2010

25 Proof of Theorem  Proof sketch: standard stochastic approximation techniques cannot be used, because the weights are a function of past measurements and the iteration is therefore non-Markovian  Instead, study the path behavior of the error process with respect to the idealized (noise-free) update:  define the error process as the difference between the actual iterate and the idealized iterate  write the dynamics of this error process  show the error goes to zero

26 Conclusion  High Dimensional Consensus  Optimization: topology design  Distributed localization (DILOC):  linear, iterative  local communications  barycentric coordinates  Cayley-Menger determinants  Convergence:  deterministic networks (protocols): standard Markov chain arguments  random networks: structural (link) failures, noisy communication, quantized data — standard stochastic approximation algorithms are not sufficient to prove convergence

27 Bibliography  Soummya Kar, Saeed Aldosari, and José M. F. Moura, "Topology for Distributed Inference on Graphs," IEEE Transactions on Signal Processing, 56(6), pp. 2609-2613, June 2008.  Soummya Kar and José M. F. Moura, "Sensor Networks with Random Links: Topology Design for Distributed Consensus," IEEE Transactions on Signal Processing, 56(7), pp. 3315-3326, July 2008.  U. A. Khan, S. Kar, and J. M. F. Moura, "Distributed Sensor Localization in Random Environments using Minimal Number of Anchor Nodes," IEEE Transactions on Signal Processing, 57(5), pp. 2000-2016, May 2009; DOI: 10.1109/TSP.2009.2014812.  U. A. Khan, S. Kar, and J. M. F. Moura, "DILAND: An Algorithm for Distributed Sensor Localization with Noisy Distance Measurements," IEEE Transactions on Signal Processing, 58(3), pp. 1940-1947, March 2010.  U. A. Khan, S. Kar, and J. M. F. Moura, "Higher Dimensional Consensus: Learning in Large-Scale Networks," IEEE Transactions on Signal Processing, 58(5), pp. 2836-2849, May 2010.


