
1
Fast Bayesian Matching Pursuit
(Authors: Philip Schniter, Lee C. Potter, and Justin Ziniel)
Presenter: Changchun Zhang, ECE / CMR, Tennessee Technological University
Reading Group, November 12, 2010

2
Outline
■ Introduction
■ Signal Model
■ Estimation of Basis and Parameters
■ Numerical Results
■ Conclusions
11/5/10

3
Introduction
■ Linear regression model
□ Unknown sparse parameter vector x
□ Unit-norm columns in the regressor matrix A
□ Additive noise
■ Rough survey of existing approaches
□ Greedy approaches
□ Penalized least-squares solutions
■ In the literature, the primary focus is placed on detecting the few significant entries of the sparse parameter vector.
■ In contrast, this paper adopts an MMSE (minimum mean-squared error) estimation formulation and focuses on accurately inferring x from the noisy observations y.

4
Signal Model
■ Observing y = Ax + w, a noisy linear combination of the parameters in x.
Assumptions:
1. The noise w is white Gaussian with variance σ², i.e., w ~ N(0, σ²I).
2. The columns of A are taken to be unit-norm.
3. The parameters x are generated from a Gaussian mixture density:
a) The covariance R(s) is determined by a discrete random vector s of mixture parameters.
b) R(s) is taken to be diagonal with entries σ²_{s_n}, implying that the x_n are independent with x_n ~ N(0, σ²_{s_n}).
c) The s_n are Bernoulli(p1); to model sparse x, choose σ²_0 = 0 and p1 << 1.
4. N >> M, i.e., the system is underdetermined.
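The generative model above can be sketched as follows. This is a minimal simulation, not code from the paper; the dimensions and values of p1, σ²_1, and σ² here are hypothetical choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 256, 64                         # N >> M (underdetermined system)
p1, sig2_1, sig2_w = 0.04, 1.0, 0.01   # hypothetical model parameters

# Regressor matrix with unit-norm columns (assumption 2)
A = rng.standard_normal((M, N))
A /= np.linalg.norm(A, axis=0)

# Bernoulli-Gaussian parameters: active taps are N(0, sig2_1),
# inactive taps are exactly zero since sigma_0^2 = 0 (assumption 3)
s = rng.random(N) < p1
x = np.where(s, rng.normal(0.0, np.sqrt(sig2_1), N), 0.0)

# Noisy observation y = Ax + w with white Gaussian noise (assumption 1)
y = A @ x + rng.normal(0.0, np.sqrt(sig2_w), M)
```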

5
Signal Model (continued)
■ From the previous assumptions, it can be seen that y conditioned on s is zero-mean Gaussian:
y | s ~ N(0, Φ(s)), where Φ(s) = A R(s) A^T + σ²I.
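The conditional covariance Φ(s) is straightforward to form. A small sketch, with the function name `Phi` my own; with no active parameters it reduces to the noise covariance σ²I, as the model implies.

```python
import numpy as np

def Phi(A, s, sig2_1, sig2_w):
    """Covariance of y given s: Phi(s) = A R(s) A^T + sig2_w * I,
    where R(s) = diag(sig2_1 * s) under sigma_0^2 = 0."""
    M = A.shape[0]
    # (A * r) scales column n of A by r[n], so (A * r) @ A.T == A @ diag(r) @ A.T
    return (A * (sig2_1 * s)) @ A.T + sig2_w * np.eye(M)
```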

6
Estimation of Basis and Parameters
■ Basis Selection Metric
□ The non-zero locations in s specify which of the basis elements (i.e., columns of A) are "active". Thus, basis selection reduces to estimation of s.
□ Estimation of s determines not only which basis configurations are most likely, but also how likely those bases are. The latter is accomplished by estimating the dominant posteriors p(s|y), which can be written via Bayes' rule as
p(s|y) = p(y|s) p(s) / p(y), where p(y) = Σ_{s'} p(y|s') p(s').
The estimation is thus reduced to computing the joint densities p(y|s) p(s) for the dominant s.

7
Estimation of Basis and Parameters
■ Basis Selection Metric (continued)
□ The size of the configuration space (2^N possible vectors s) makes it impractical to compute p(s|y) for every s; instead, the set S* occupying the dominant posteriors is used for simplicity.
□ Working in the log domain, it is found that
μ(s) = ln p(y, s) = ln p(y|s) + ln p(s),
which is referred to as the basis selection metric.
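A direct evaluation of the basis selection metric can be sketched as below, assuming the Gaussian-mixture model from the earlier slides (y|s ~ N(0, Φ(s)), i.i.d. Bernoulli(p1) activity). The function name `mu` and the parameter values in the test are my own.

```python
import numpy as np

def mu(y, A, s, p1, sig2_1, sig2_w):
    """Basis selection metric mu(s) = ln p(y|s) + ln p(s)."""
    M = A.shape[0]
    Phi = (A * (sig2_1 * s)) @ A.T + sig2_w * np.eye(M)
    _, logdet = np.linalg.slogdet(Phi)       # stable log-determinant
    log_lik = -0.5 * (M * np.log(2 * np.pi) + logdet
                      + y @ np.linalg.solve(Phi, y))
    log_prior = np.sum(np.where(s > 0, np.log(p1), np.log(1 - p1)))
    return log_lik + log_prior
```

Since exp(μ(s)) is proportional to p(s|y), comparing metrics across candidate supports compares their posterior probabilities.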

8
Estimation of Basis and Parameters
■ MMSE Parameter Estimation
The MMSE estimate of x from y is
x_mmse = E[x|y] = Σ_s p(s|y) E[x|y, s].
Due to the 2^N terms in the sum, the MMSE estimate is closely approximated using only the dominant posteriors:
x_mmse ≈ Σ_{s ∈ S*} p(s|y) E[x|y, s].
Likewise, the covariance of the corresponding estimation error can be closely approximated by the analogous sum over S*.
The primary challenge becomes that of obtaining p(s|y) and E[x|y, s] for each s belonging to S*.
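For the jointly Gaussian model above, E[x|y, s] has the closed form R(s) A^T Φ(s)^{-1} y, and the approximate MMSE estimate is a posterior-weighted average over the dominant supports. A sketch under those assumptions; the helper names are mine, and the posterior weights are recovered from the metrics μ(s) by a softmax (since μ(s) = ln p(y, s) and p(s|y) ∝ p(y, s)).

```python
import numpy as np

def cond_mean(y, A, s, sig2_1, sig2_w):
    """E[x | y, s] = R(s) A^T Phi(s)^{-1} y; zero at inactive taps."""
    r = sig2_1 * s
    Phi = (A * r) @ A.T + sig2_w * np.eye(A.shape[0])
    return r * (A.T @ np.linalg.solve(Phi, y))

def approx_mmse(y, A, cands, metrics, sig2_1, sig2_w):
    """x_hat ~= sum over dominant s of p(s|y) E[x|y, s].
    `metrics` holds mu(s) = ln p(y, s) for each candidate support."""
    w = np.exp(np.asarray(metrics) - np.max(metrics))
    w /= w.sum()                              # normalized posterior weights
    return sum(wk * cond_mean(y, A, s, sig2_1, sig2_w)
               for wk, s in zip(w, cands))
```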

9
Estimation of Basis and Parameters
■ Bayesian Matching Pursuit
An efficient means of determining S*, the set of mixture parameter vectors s yielding the dominant values of p(s|y) or, equivalently, the dominant values of μ(s).
Steps:
1. Start from s = 0 and turn on one mixture parameter at a time, yielding a set of N binary vectors s.
2. Compute the metrics for these vectors and collect the D largest metrics into S1*.
3. Based on S1*, all locations of a second active mixture parameter are considered, yielding ND - D(D+1)/2 unique vectors.
4. Among these, again collect the D largest metrics into S2*.
5. Repeat the process until SP* is obtained, with P chosen so that the probability of more than P active parameters is very small.
6. The union of the surviving sets constitutes the final estimate of S*.
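The steps above can be sketched as a D-survivor tree search. This is a naive illustration that re-evaluates the metric from scratch at each candidate (the fast updates on the next slides avoid exactly that cost); function names and test values are mine.

```python
import numpy as np

def fbmp_search(y, A, p1, sig2_1, sig2_w, D=4, P=2):
    """Greedy D-survivor search over supports (steps 1-6, naive version).
    Returns the D surviving s vectors after P activation stages."""
    M, N = A.shape

    def mu(s):  # basis selection metric ln p(y, s), evaluated directly
        Phi = (A * (sig2_1 * s)) @ A.T + sig2_w * np.eye(M)
        _, logdet = np.linalg.slogdet(Phi)
        return (-0.5 * (M * np.log(2 * np.pi) + logdet
                        + y @ np.linalg.solve(Phi, y))
                + np.sum(np.where(s > 0, np.log(p1), np.log(1 - p1))))

    survivors = [np.zeros(N)]
    for _ in range(P):                        # stages with 1, 2, ..., P active taps
        seen, cands = set(), []
        for s in survivors:
            for n in np.flatnonzero(s == 0):  # activate one inactive parameter
                t = s.copy(); t[n] = 1.0
                key = tuple(np.flatnonzero(t))
                if key not in seen:           # deduplicate shared children
                    seen.add(key); cands.append(t)
        cands.sort(key=lambda t: -mu(t))
        survivors = cands[:D]                 # keep the D largest metrics
    return survivors
```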

10
Estimation of Basis and Parameters
■ Fast Metric Update
A fast metric update computes the change in μ(·) that results from the activation of a single mixture parameter. That is, compute
Δ_n(s) = μ(s_n) - μ(s),
in which s_n is the vector identical to s except for the nth coefficient, which is active in s_n but inactive in s.
Property of Φ: activating the nth coefficient gives the rank-one update Φ(s_n) = Φ(s) + σ²_1 a_n a_n^T, where a_n is the nth column of A.

11
Estimation of Basis and Parameters
■ Fast Metric Update (continued)
This update quantifies the change in the basis selection metric due to the activation of the nth tap of s. Computing μ(s_n) from its nearest neighbor s, rather than from scratch, greatly simplifies the computation of the basis selection metric.
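The rank-one structure of Φ(s_n) = Φ(s) + σ²_1 a_n a_n^T lets the metric change be written in terms of two scalars, c_n = a_n^T Φ(s)^{-1} a_n and b_n = a_n^T Φ(s)^{-1} y, via the matrix inversion lemma and the determinant identity ln det Φ(s_n) = ln det Φ(s) + ln(1 + σ²_1 c_n). The sketch below derives Δ_n that way and is checked against the direct metric difference; it is my own numerical verification of this property, not the paper's code, and all parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 10, 30
p1, sig2_1, sig2_w = 0.05, 1.0, 0.01
A = rng.standard_normal((M, N))
A /= np.linalg.norm(A, axis=0)
y = rng.standard_normal(M)

def mu(s):
    """Direct evaluation of the basis selection metric ln p(y, s)."""
    Phi = (A * (sig2_1 * s)) @ A.T + sig2_w * np.eye(M)
    _, logdet = np.linalg.slogdet(Phi)
    return (-0.5 * (M * np.log(2 * np.pi) + logdet
                    + y @ np.linalg.solve(Phi, y))
            + np.sum(np.where(s > 0, np.log(p1), np.log(1 - p1))))

# Current support {3}; activate index n = 7 without forming Phi(s_n).
s = np.zeros(N); s[3] = 1.0
n = 7
Phi_inv = np.linalg.inv((A * (sig2_1 * s)) @ A.T + sig2_w * np.eye(M))
a = A[:, n]
c = a @ Phi_inv @ a                   # a_n^T Phi(s)^{-1} a_n
b = a @ Phi_inv @ y                   # a_n^T Phi(s)^{-1} y
delta = (-0.5 * np.log1p(sig2_1 * c)
         + (sig2_1 * b**2) / (2 * (1 + sig2_1 * c))
         + np.log(p1 / (1 - p1)))     # metric change from two scalars

s_n = s.copy(); s_n[n] = 1.0          # for comparison against mu directly
```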

12
Estimation of Basis and Parameters
■ Fast Bayesian Matching Pursuit
Here the complexity of computing the metric updates can be reduced further, to linear in M, by exploiting the structure of Φ(s)^{-1}. Say that T contains the indices of the active elements in s. The quantities entering the metric update can then be tracked recursively as each index is activated, so they need only be recomputed for the surviving indices. This trick reduces the number of multiplications required by the algorithm accordingly.

13
Numerical Results
■ FBMP behavior
□ Parameters considered:
▪ N
▪ M
▪ SNR
▪ p1
▪ P, chosen from the target value P0 = 0.00005 for the probability of more than P active coefficients
▪ D
□ Results:
▪ Figure 1 - Normalized MSE vs. M, for several D
▪ Figure 2 - Average number of missed coefficients vs. M, for several D
▪ Figure 3 - Normalized MSE vs. number of active coefficients, for several D
▪ Figure 4 - Normalized MSE vs. SNR, for several D
▪ Figure 5 - Normalized MSE vs. SNR, for MMSE and MAP cases
▪ Figure 6 - Average FBMP runtime vs. D

14-19
(Slides 14-19: result figures for the experiments listed on slide 13; figures not reproduced.)

20
Numerical Results (continued)
■ Comparison to Other Algorithms
□ Algorithms used for comparison:
▪ SparseBayes
▪ OMP
▪ StOMP
▪ GPSR-Basic
▪ BCS
□ Measurements for comparison:
▪ Normalized MSE vs. M
▪ Normalized MSE vs. SNR
▪ Average runtime vs. M

21-23
(Slides 21-23: comparison figures; figures not reproduced.)

24
Conclusion
■ Brief Review of the Process
1. FBMP models each unknown coefficient as either inactive or active (with prior probability p1).
2. A Gaussian distribution (zero mean, variance σ²_1) is assigned to the active values.
3. The observation y is modeled as an AWGN-corrupted version of the unknown coefficients mixed by a known matrix A.
4. FBMP searches the active/inactive configurations s to find the subset S* with dominant posterior probability.
5. The parameter D is used to control the tradeoff between complexity and accuracy.
■ Numerical results show that FBMP estimation outperforms (in NMSE) other popular algorithms by several dB in certain situations.

25
Thank you!
