
1 Robust Network Compressive Sensing
Lili Qiu, UT Austin
NSF Workshop, Nov. 12, 2014

2 Network Matrices and Applications
Network matrices:
– Traffic matrix
– Loss matrix
– Delay matrix
– Channel State Information (CSI) matrix
– RSS matrix

3 Q: How to fill in missing values in a matrix?

4 Missing Values: Why Bother?
Applications need complete network matrices:
– Traffic engineering
– Spectrum sensing
– Channel estimation
– Localization
– Multi-access channel design
– Network coding, wireless video coding
– Anomaly detection
– Data aggregation
– …
[Figure: example matrices over time, e.g. a flow-by-time traffic matrix from a 3-router topology (links 1-3 carrying flows 1-3), a subcarrier-by-time CSI matrix, and a frequency/location-by-time spectrum matrix with vacant slots]

5 The Problem
Interpolation: fill in missing values from incomplete, erroneous, and/or indirect measurements.
[Figure: a measurement matrix containing an anomalous entry, columns of future values, and a missing entry x_{1,3}]

6 State of the Art
Existing works exploit the low-rank nature of network matrices.
But many factors contribute to network matrices:
– Anomalies, measurement errors, and noise
– These factors may destroy the low-rank structure and spatio-temporal locality
– They limit the effectiveness of existing works

7 Network Matrices

Network             Date      Duration     Size (flows/links x #timeslots)
3G traffic          11/2010   1 day        472 x 144
WiFi traffic        1/2013    1 day        50 x 118
Abilene traffic     4/2003    1 week       121 x 1008
GEANT traffic       4/2005    1 week       529 x 672
1-channel CSI       2/2009    15 min.      90 x 9000
Multi-channel CSI   2/2014    15 min.      270 x 5000
Cister RSSI         11/2010   4 hours      16 x 10000
CU RSSI             8/2007    500 frames   895 x 500
Umich RSS           4/2006    30 min.      182 x 3127
UCSB Meshnet        4/2006    3 days       425 x 1527

8 Rank Analysis
[Figure: rank of each trace matrix, without and with injected anomalies]
Adding anomalies increases the rank in all cases.
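
To make this concrete, here is a minimal numpy sketch (our illustration, not the talk's code) of how injecting sparse anomalies inflates the number of singular values needed to capture most of a matrix's energy:

```python
import numpy as np

def effective_rank(M, energy=0.9):
    """Smallest k such that the top-k singular values capture
    `energy` fraction of the total squared singular value mass."""
    s = np.linalg.svd(M, compute_uv=False)
    cum = np.cumsum(s ** 2) / np.sum(s ** 2)
    return int(np.searchsorted(cum, energy) + 1)

rng = np.random.default_rng(0)
m, n, r = 100, 500, 5
X = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # low rank

# Inject large-magnitude anomalies into 1% of the entries.
Y = np.zeros((m, n))
idx = rng.choice(m * n, size=(m * n) // 100, replace=False)
Y.ravel()[idx] = 10 * X.std() * rng.standard_normal(idx.size)

print(effective_rank(X))      # ~5: the true rank
print(effective_rank(X + Y))  # noticeably larger: anomalies inflate the rank
```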

9 LENS Decomposition: Basic Formulation
D = X + Y + Z
[Input] D: original m x n matrix (entries d_{i,j}, some of them missing)
[Output] X: a low-rank matrix (r « m, n), expressible as the product of an m x r factor and an r x n factor
[Output] Y: a sparse anomaly matrix (a few nonzero entries such as y_{1,3} and y_{3,n})
[Output] Z: a small noise matrix

10 LENS Decomposition: Basic Formulation
Formulate it as a convex optimization problem:

min: α‖X‖_* + β‖Y‖_1 + σ‖Z‖_F^2
subject to: X + Y + Z = D (on the observed entries)

[Input] D: original matrix
[Output] X: a low-rank matrix (the nuclear norm ‖·‖_* promotes low rank)
[Output] Y: a sparse anomaly matrix (the ℓ1 norm promotes sparsity)
[Output] Z: a small noise matrix (the squared Frobenius norm penalizes noise energy)
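
As a toy illustration (our sketch, with assumed sizes and weights rather than anything from the talk), the following builds a synthetic D = X + Y + Z and evaluates the three objective terms:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 50, 200, 3

X = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))   # low rank
Y = np.zeros((m, n))                                            # sparse anomalies
Y.ravel()[rng.choice(m * n, size=20, replace=False)] = 15.0
Z = 0.01 * rng.standard_normal((m, n))                          # small noise
D = X + Y + Z

alpha, beta, sigma = 1.0, 0.1, 10.0   # assumed weights, for illustration only

nuclear = np.linalg.svd(X, compute_uv=False).sum()   # ||X||_*  (low rank)
l1      = np.abs(Y).sum()                            # ||Y||_1  (sparsity)
frob2   = (Z ** 2).sum()                             # ||Z||_F^2 (noise energy)

print(alpha * nuclear + beta * l1 + sigma * frob2)   # objective value
```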

11 LENS Decomposition: Support Indirect Measurement
The matrix of interest may not be directly observable (e.g., traffic matrices):
– AX + BY + CZ + W = D
– A: routing matrix
– B: an over-complete anomaly profile matrix
– C: noise profile matrix
[Figure: 3-router topology with links 1-3 carrying flows 1-3]
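
For the 3-router picture, a small sketch of what indirect measurement means; the flow-to-link mapping below is assumed for illustration, since the actual routing is defined by the slide's figure:

```python
import numpy as np

# A[l, f] = 1 if flow f traverses link l (assumed mapping for illustration).
A = np.array([
    [1, 0, 1],   # link 1 carries flows 1 and 3
    [1, 1, 0],   # link 2 carries flows 1 and 2
    [0, 1, 1],   # link 3 carries flows 2 and 3
])

X = np.array([           # flows x time: the matrix we want but cannot observe
    [10.0, 12.0, 11.0],  # flow 1
    [ 5.0,  5.5,  6.0],  # flow 2
    [ 2.0,  2.1,  1.9],  # flow 3
])

D = A @ X   # links x time: what link-level counters actually measure
print(D)
```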

12 LENS Decomposition: Account for Domain Knowledge
Domain knowledge:
– Temporal stability
– Spatial locality
– Initial solution
[Formulation: the basic objective augmented with domain-knowledge penalty terms, weighted by γ, under the same constraints]

13 Optimization Algorithm
One of many challenges in the optimization:
– X and Y appear in multiple places in the objective and constraints
– This coupling makes the optimization hard
Reformulate the problem by introducing auxiliary variables (copies of X and Y, such as X_k and Y_0, constrained to agree with the originals), which decouples the terms.
[Formulation: the reformulated objective and constraints]

14 Optimization Algorithm
Alternating Direction Method (ADM):
– In each iteration, alternately minimize the augmented Lagrangian function with respect to each of X, X_k, Y, Y_0, Z, W, M, M_k, N while fixing the other variables
– Improve efficiency through approximate SVD
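
As a sketch of the idea, here is a simplified alternating-direction solver we wrote for the basic formulation only (D = X + Y + Z, with assumed weights, no indirect measurements or domain-knowledge terms; the talk's actual algorithm juggles many more variables). Each step has a closed form: singular value thresholding for X, soft thresholding for Y, and a scaling for Z:

```python
import numpy as np

def lens_basic_admm(D, alpha=1.0, beta=0.1, sigma=10.0, rho=1.0, iters=200):
    """Alternating-direction sketch for
       min alpha*||X||_* + beta*||Y||_1 + sigma*||Z||_F^2  s.t. X + Y + Z = D."""
    X = np.zeros_like(D); Y = np.zeros_like(D); Z = np.zeros_like(D)
    U = np.zeros_like(D)                       # scaled dual variable
    for _ in range(iters):
        # X-update: singular value thresholding of the residual.
        R = D - Y - Z + U
        u, s, vt = np.linalg.svd(R, full_matrices=False)
        X = (u * np.maximum(s - alpha / rho, 0.0)) @ vt
        # Y-update: elementwise soft thresholding.
        R = D - X - Z + U
        Y = np.sign(R) * np.maximum(np.abs(R) - beta / rho, 0.0)
        # Z-update: ridge-style shrinkage toward the residual.
        Z = rho * (D - X - Y + U) / (rho + 2.0 * sigma)
        # Dual update on the constraint X + Y + Z = D.
        U = U + (D - X - Y - Z)
    return X, Y, Z
```

The approximate SVD mentioned on the slide would replace the full np.linalg.svd call, which dominates the per-iteration cost, with a truncated or randomized variant.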

15 Setting Parameters
α, β, and σ in the objective are set from the problem dimensions, where:
– (m_X, n_X) is the size of X and (m_Y, n_Y) is the size of Y
– η(D) is the fraction of entries that are neither missing nor erroneous
– θ is a control parameter that limits the contamination of dense measurement noise
[Formulas: closed-form settings of α, β, σ in terms of (m_X, n_X), (m_Y, n_Y), η(D), and θ]

16 Setting Parameters (Cont.)
γ reflects the importance of domain knowledge
– e.g., temporal stability varies across traces
Self-tuning algorithm (see the sketch below):
– Drop additional entries from the matrix
– Quantify the error on the entries that were present in the matrix but intentionally dropped during the search
– Pick the γ that gives the lowest error
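
A minimal sketch of that self-tuning loop, assuming a hypothetical solve(D, mask, gamma=...) interpolation routine with this interface:

```python
import numpy as np

def self_tune_gamma(D, observed, solve, gammas=(0.0, 0.1, 1.0, 10.0),
                    frac=0.1, seed=0):
    """Pick gamma by hiding a fraction of the observed entries and
    scoring how well each candidate gamma recovers them."""
    rng = np.random.default_rng(seed)
    obs = np.flatnonzero(observed)
    held = rng.choice(obs, size=max(1, int(frac * obs.size)), replace=False)
    train = observed.copy()
    train.ravel()[held] = False          # intentionally dropped entries
    best_g, best_err = None, np.inf
    for g in gammas:
        est = solve(D, train, gamma=g)   # interpolate with entries hidden
        err = np.abs(est.ravel()[held] - D.ravel()[held]).sum() \
              / max(np.abs(D.ravel()[held]).sum(), 1e-12)
        if err < best_err:
            best_g, best_err = g, err
    return best_g
```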

17 Algorithms Compared

Algorithm             Description
Baseline              Baseline estimate via rank-2 approximation
SVD-base              SRSVD with baseline removal
SVD-base+KNN          Apply KNN after SVD-base
SRMF [SIGCOMM09]      Sparsity Regularized Matrix Factorization
SRMF+KNN [SIGCOMM09]  Hybrid of SRMF and KNN
LENS                  Robust network compressive sensing

18 Self-Learned γ
[Figure: interpolation error vs. γ on three traces; the best γ is 0, 1, and 10 respectively]
No single γ works for all traces. Self-tuning allows us to automatically select the best γ.

19 Interpolation under anomalies
[Figure: interpolation error on the CU RSSI trace, with anomalies]
LENS performs the best under anomalies.

20 Interpolation without anomalies
[Figure: interpolation error on the CU RSSI trace, without anomalies]
LENS performs the best even without anomalies.

21 Conclusion
Main contributions:
– Showed the important impact of anomalies on matrix interpolation
– Decompose a matrix into a low-rank matrix, a sparse anomaly matrix, and a dense but small noise matrix
– An efficient optimization algorithm
– A self-learning algorithm to automatically tune the parameters
Future work:
– Applying it to spectrum sensing, channel estimation, localization, etc.

22 Thank you!

23 Evaluation Methodology
Metric:
– Normalized Mean Absolute Error (NMAE) for missing values; a sketch of the metric follows below
– Report the average of 10 random runs
Anomaly generation:
– Inject anomalies into a varying fraction of entries with varying sizes
– Different dropping models for deciding which entries go missing
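
A minimal sketch of the metric (normalizing the total absolute error on missing entries by their total true magnitude, which is the usual NMAE convention):

```python
import numpy as np

def nmae(true, est, missing):
    """Normalized Mean Absolute Error over the missing entries:
    sum(|est - true|) / sum(|true|), both restricted to `missing`."""
    t = true[missing]
    return np.abs(est[missing] - t).sum() / max(np.abs(t).sum(), 1e-12)

# Usage: `missing` is a boolean mask of the entries that were dropped.
rng = np.random.default_rng(0)
D = rng.random((4, 5))
est = D + 0.1 * rng.standard_normal(D.shape)
missing = rng.random(D.shape) < 0.3
print(nmae(D, est, missing))
```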

24 Summary of Other Results
– The improvement of LENS grows with the size and number of anomalies.
– LENS consistently performs the best under different dropping models.
– LENS yields the lowest prediction error.
– LENS achieves higher anomaly detection accuracy.

