
Adaptive & Array Signal Processing AASP


1 Adaptive & Array Signal Processing AASP
Prof. Dr.-Ing. João Paulo C. Lustosa da Costa
University of Brasília (UnB)
Department of Electrical Engineering (ENE)
Laboratory of Array Signal Processing
PO Box 4386, Brasília - DF

2 Mathematical Background: Linear Algebra (1)
Eigenvalue Decomposition (EVD) Comparison to the Fourier Transform (FT): for the Fourier Transform, the data is projected onto complex exponential functions (oscillations), and each vector of the FT maps to a certain frequency. The vectors of the FT do not take the structure of the data into account! The Karhunen-Loève Transform (KLT), also known as the Hotelling Transform or Eigenvector Transform, does take the data structure into account. Definition of an eigenvector: R v = λ v, where λ is the eigenvalue and v is the eigenvector.
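As a quick numerical check (a minimal sketch with an arbitrary 2 x 2 matrix), the definition can be verified in MATLAB:

R = [2 1; 1 2];                 % arbitrary symmetric example matrix
[V, D] = eig(R);                % columns of V: eigenvectors; diag(D): eigenvalues
disp(R*V(:,1) - D(1,1)*V(:,1))  % R*v - lambda*v: numerically zero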

3 Mathematical Background: Linear Algebra (2)
Eigenvalue Decomposition (EVD) To find the eigenvalues, solve the characteristic equation det(R - λI) = 0. Once the eigenvalues are computed, the eigenvectors are given by the relation (R - λ_i I) v_i = 0, substituting each eigenvalue in turn. Once the eigenvalues and the eigenvectors are computed, the matrix can be written as R = Q Λ Q^H, where the columns of Q are the eigenvectors and Λ holds the eigenvalues on its diagonal.

4 Mathematical Background: Linear Algebra (3)
Eigenvalue Decomposition (EVD): Example Compute the eigenvalues and eigenvectors of the following matrix Computing the eigenvalues

5 Mathematical Background: Linear Algebra (4)
Eigenvalue Decomposition (EVD): Example Compute the eigenvectors. The eigenvector is normalized to unit norm! The same procedure is applied to the next eigenvector.

6 Mathematical Background: Linear Algebra (5)
Eigenvalue Decomposition (EVD): Example Compute the eigenvectors

7 Mathematical Background: Linear Algebra (6)
Eigenvalue Decomposition (EVD): Example What is the physical meaning of the EVD? (Slides 7 to 12 answer this question graphically; the figures are not included in the transcript.)
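One way to see it (a sketch with hypothetical data): for a correlated 2-D data cloud, the eigenvectors of the sample correlation matrix point along the principal axes of the cloud, and the eigenvalues measure the spread along them, which is exactly the structure the KLT exploits:

N = 1000;
T = [2 1; 0 1];      % hypothetical mixing that correlates the data
x = T*randn(2, N);   % correlated 2-D data cloud
R = (x*x')/N;        % sample correlation matrix
[V, D] = eig(R);
disp(V)              % columns: principal directions of the cloud
disp(diag(D).')      % spread (variance) along those directions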


13 Mathematical Background: Linear Algebra (12)
Eigenvalue Decomposition (EVD): Example Eigenvalues of the white noise correlation matrix: one eigenvalue (the noise power) with multiplicity M, and M different eigenvectors. Eigenvalues of the correlation matrix of a complex sinusoid (a rank-one matrix): one eigenvalue equal to M, and all the other eigenvalues equal to zero.
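Both cases can be reproduced numerically (a minimal sketch; the spatial frequency of the sinusoid is an arbitrary choice):

M = 8; sigma2 = 1;
Rn = sigma2*eye(M);                     % white noise correlation matrix
disp(eig(Rn).')                         % one eigenvalue sigma2, multiplicity M
a = exp(1j*2*pi*0.3*(0:M-1).');         % unit-amplitude complex sinusoid
Rs = a*a';                              % rank-one correlation matrix
disp(sort(real(eig(Rs)),'descend').')   % one eigenvalue equal to M, rest zero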

14 Mathematical Background: Linear Algebra (13)
Eigenvalue Decomposition (EVD): Properties The eigenvalues of the correlation matrix R raised to the power k are the eigenvalues of R raised to the power k: if R v = λ v, then R^k v = λ^k v.

15 Mathematical Background: Linear Algebra (14)
Eigenvalue Decomposition (EVD): Properties The eigenvectors are linearly independent: the equation c_1 v_1 + ... + c_M v_M = 0 has only the trivial solution, i.e., all constants are zero. Since there is no other solution, the eigenvectors are linearly independent.

16 Mathematical Background: Linear Algebra (15)
Eigenvalue Decomposition (EVD): Other properties The eigenvalues of the correlation matrix are real and nonnegative, since the correlation matrix is Hermitian and positive semidefinite. The eigenvectors are orthogonal to each other. The eigenvectors diagonalize the correlation matrix: Q^H R Q = Λ. The eigenvectors can be used to build a projection matrix. The trace of the correlation matrix is equal to the sum of the eigenvalues.
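These properties can be checked numerically (a sketch assuming white Gaussian data as input):

M = 4; N = 1000;
X = (randn(M,N) + 1j*randn(M,N))/sqrt(2);  % assumed white Gaussian data
R = (X*X')/N;  R = (R + R')/2;             % sample correlation matrix (Hermitian)
[Q, L] = eig(R);
disp(real(eig(R)).')           % eigenvalues: real and nonnegative
disp(norm(Q'*Q - eye(M)))      % orthonormal eigenvectors: ~0
disp(norm(Q*L*Q' - R))         % diagonalization: ~0
disp(trace(R) - sum(diag(L)))  % trace equals the sum of the eigenvalues: ~0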

17 Mathematical Background: Linear Algebra (16)
Singular Value Decomposition (SVD) Example using the EVD: for a matrix A of size M x N, we can form R = A A^H and also R' = A^H A. If A is full rank, then the rank of A is min{M, N}. The EVD of R is given by R = U Λ U^H, and the EVD of R' is given by R' = V Λ' V^H; the nonzero eigenvalues of R and R' coincide.

18 Mathematical Background: Linear Algebra (17)
Singular Value Decomposition (SVD) Therefore, we can represent the SVD as A = U S V^H, where U contains the left singular vectors, V contains the right singular vectors, and S is a diagonal matrix with the singular values.
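The connection between the SVD of A and the EVDs of A A^H and A^H A can be checked with a small sketch (random A, arbitrary sizes):

M = 4; N = 6;
A = randn(M,N) + 1j*randn(M,N);
[U, S, V] = svd(A);                       % A = U*S*V'
disp(norm(U*S*V' - A))                    % ~0
disp(sort(real(eig(A*A')),'descend').')   % eigenvalues of R = A*A' ...
disp((diag(S).').^2)                      % ... equal the squared singular values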

19 Mathematical Background: Linear Algebra (18)
Model order selection Measurements or data come from several applications, for instance, MIMO channels, EEG, stock markets, chemistry, pharmacology, medical imaging, radar, and sonar. Processing chain: the measurements are checked for colored noise; if the noise is colored, subspace prewhitening is applied before parameter estimation.
Model order selection: estimation of the number of main components (total number of parameters), often assumed known in the literature.
Parameter estimation techniques: extraction of the parameters from the main components.
Subspace prewhitening schemes: application of the noise statistics to improve the parameter estimation.

20 Mathematical Background: Linear Algebra (19)
Example: Model Order Selection Matrix data model: X = A S + N, where the columns of A correspond to the d components, S contains their weights, and N is the noise. Our objective is to estimate d from the noisy observations X.

21 Mathematical Background: Linear Algebra (20)
The eigenvalues of the sample covariance matrix (noise-free case, d = 2, M = 8): d nonzero signal eigenvalues and M - d zero eigenvalues.

22 Mathematical Background: Linear Algebra (21)
The eigenvalues of the sample covariance matrix (d = 2, M = 8, SNR = 0 dB): d signal-plus-noise eigenvalues and M - d equal noise eigenvalues, following the asymptotic theory of the noise [1]. This is the assumption in AIC and MDL. [1]: T. W. Anderson, “Asymptotic theory for principal component analysis”, Annals of Mathematical Statistics, vol. 34, no. 1, 1963.

23 Mathematical Background: Linear Algebra (22)
The eigenvalues of the sample covariance matrix (d = 2, M = 8, SNR = 0 dB, N = 10): for finite SNR and finite N, the M - d noise eigenvalues follow a Wishart distribution, and d eigenvalues contain signal plus noise.
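A simulation sketch of this effect, assuming a matrix data model X = A S + N with random factors (all values below are arbitrary except the slide parameters):

d = 2; M = 8; N = 10; SNRdB = 0;                      % parameters from the slide
A = (randn(M,d) + 1j*randn(M,d))/sqrt(2);             % hypothetical mixing matrix
S = (randn(d,N) + 1j*randn(d,N))/sqrt(2);             % source symbols
sigma2 = 10^(-SNRdB/10);                              % noise power for SNR = 0 dB
Noise = sqrt(sigma2/2)*(randn(M,N) + 1j*randn(M,N));
X = A*S + Noise;
Rhat = (X*X')/N;                                      % sample covariance matrix
disp(sort(real(eig(Rhat)),'descend').')               % d dominant eigenvalues; the
                                                      % M - d noise eigenvalues spread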

24 Mathematical Background: Linear Algebra (23)
MOS approach | Year | Classification   | Scenario             | Gaussian noise | Outperforms
EDC [2]      | 1986 | Eigenvalue based | -                    | White          | AIC, MDL
ESTER [3]    | 2004 | Subspace based   | M = 128; N = 128     | White/Colored  | EDC, AIC, MDL
RADOI [4]    | 2004 |                  | M = 4; N = 16        |                | Gerschgorin Disk Estimator (GDE), AIC, MDL
EFT [5,6]    | 2007 |                  | M = 5; N = 6         |                | AIC, MDL, MDLB, PDL
SAMOS [7]    | 2007 |                  | M = 65; N = 65       |                | ESTER
NEMO [8]     | 2008 |                  | N = 8*M (various)    |                |
SURE [9]     | 2008 |                  | M = 64; N = [96,128] |                | Laplace, BIC
The subspace based MOS schemes require that the matrix A has Vandermonde structure.

25 Mathematical Background: Linear Algebra (24)
[2]: P. R. Krishnaiah, L. C. Zhao, and Z. D. Bai, “On detection of the number of signals in presence of white noise”, Journal of Multivariate Analysis, vol. 20, pp. 1-25, 1986.
[3]: R. Badeau, B. David, and G. Richard, “Selecting the modeling order for the ESPRIT high resolution method: an alternative approach”, in Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2004), Montreal, Canada, May 2004.
[4]: E. Radoi and A. Quinquis, “A new method for estimating the number of harmonic components in noise with application in high resolution RADAR”, EURASIP Journal on Applied Signal Processing, 2004.
[5]: J. Grouffad, P. Larzabal, and H. Clergeot, “Some properties of ordered eigenvalues of a Wishart matrix: application in detection test and model order selection”, in Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 1996), May 1996, vol. 5.
[6]: A. Quinlan, J.-P. Barbot, P. Larzabal, and M. Haardt, “Model order selection for short data: An Exponential Fitting Test (EFT)”, EURASIP Journal on Applied Signal Processing, 2007, Special Issue on Advances in Subspace-based Techniques for Signal Processing and Communications.
[7]: J.-M. Papy, L. De Lathauwer, and S. Van Huffel, “A shift invariance-based order-selection technique for exponential data modeling”, IEEE Signal Processing Letters, vol. 14, July 2007.
[8]: R. R. Nadakuditi and A. Edelman, “Sample eigenvalue based detection of high-dimensional signals in white noise using relatively few samples”, IEEE Transactions on Signal Processing, vol. 56, July 2008.
[9]: M. O. Ulfarsson and V. Solo, “Rank selection in noisy PCA with SURE and random matrix theory”, in Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2008), Las Vegas, USA, Apr. 2008.

26 Mathematical Background: Linear Algebra (25)
Kullback-Leibler Distance (or Divergence), also known as relative entropy: it measures how close a certain probability distribution p is to a model probability distribution q, D(p||q) = Σ_x p(x) log( p(x) / q(x) ).
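A minimal sketch for two discrete distributions (the values are arbitrary):

p = [0.5 0.3 0.2];
q = [0.4 0.4 0.2];
D = sum(p .* log(p ./ q));  % D(p||q) >= 0, zero iff p and q coincide
disp(D)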

27 Mathematical Background: Linear Algebra (26)
KLD and the mutual information X and Y are random variables, p(x,y) is the joint PDF, and p(x) and p(y) are the marginal PDFs. The mutual information is the KLD between the joint PDF and the product of the marginals: I(X;Y) = Σ_x Σ_y p(x,y) log( p(x,y) / (p(x) p(y)) ). Example: if X and Y are independent, knowing X does not help to find Y, and the mutual information is zero.
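The zero-information case can be checked directly (a sketch with a hypothetical joint pmf of independent variables):

Pxy = [0.2 0.2; 0.3 0.3];                  % joint pmf with X and Y independent
Px = sum(Pxy, 2); Py = sum(Pxy, 1);        % marginal pmfs
I = sum(sum(Pxy .* log(Pxy ./ (Px*Py))));  % mutual information I(X;Y)
disp(I)                                    % zero, since X and Y are independent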

28 Mathematical Background: Linear Algebra (27)
Maximum likelihood estimation (MLE) The basis for many parameter estimators; used to determine the parameters that maximize the probability (likelihood) of the sample data. Given a random variable x with PDF p(x; θ), where θ collects the d parameters to be estimated, and N samples x_1, ..., x_N, the likelihood function is defined as L(θ) = Π_{n=1}^{N} p(x_n; θ) (note that the independence of the temporal samples is assumed).

29 Mathematical Background: Linear Algebra (28)
Maximum likelihood estimation (MLE) We define the log likelihood function as ℓ(θ) = log L(θ). To estimate the parameters, we can maximize either the likelihood function or the log likelihood function, since the logarithm is monotonically increasing.

30 Mathematical Background: Linear Algebra (29)
Maximum likelihood estimation (MLE): Example Given N samples of certain data, estimate the mean and the variance, considering that the samples are modeled by a normal distribution. The log likelihood function is given by ℓ(μ, σ²) = -(N/2) log(2πσ²) - (1/(2σ²)) Σ_n (x_n - μ)². Computing the partial derivative with respect to the mean and setting it to zero yields μ̂ = (1/N) Σ_n x_n.

31 Mathematical Background: Linear Algebra (30)
Maximum likelihood estimation (MLE): Example Computing the partial derivative with respect to the variance and setting it to zero yields σ̂² = (1/N) Σ_n (x_n - μ̂)².
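A numeric sketch comparing these ML formulas with simulated data (the true mean and variance are arbitrary choices):

N = 1000;
x = 3 + 2*randn(N, 1);             % samples with true mean 3 and true variance 4
mu_hat  = sum(x)/N;                % ML estimate of the mean
var_hat = sum((x - mu_hat).^2)/N;  % ML estimate of the variance (1/N, biased)
disp([mu_hat var_hat])             % close to [3 4] for large N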

32 Mathematical Background: Linear Algebra (31)

33 Mathematical Background: Linear Algebra (32)
Assuming N i.i.d. Gaussian samples and the model order equal to k

34 Mathematical Background: Linear Algebra (33)
Assuming that the samples are i.i.d. Gaussian, and applying the Eigenvalue Decomposition to the covariance matrix.

35 Mathematical Background: Linear Algebra (34)
Computing the log likelihood function and removing the constant terms. The trace of a scalar is equal to the scalar itself; therefore, the quadratic form can be rewritten as x^H R^(-1) x = tr( R^(-1) x x^H ).

36 Mathematical Background: Linear Algebra (35)
The trace operator allows cyclic permutation of the matrices inside it: tr(AB) = tr(BA). Using the definition of the sample covariance matrix, R̂ = (1/N) Σ_{n=1}^{N} x_n x_n^H.

37 Mathematical Background: Linear Algebra (36)
Code to check the previous identity in MATLAB for N = 5 and M = 3:

N = 5; M = 3;
x = 0; y = 0;
for kk = 1:N
    a(:,kk) = randn(M,1);             % draw a random M-by-1 vector
    x = a(:,kk)*a(:,kk)' + x;         % accumulate the outer products
    y = trace(a(:,kk)*a(:,kk)') + y;  % accumulate the traces
end
y          % sum of the traces
trace(x)   % trace of the sum: both results agree

38 Mathematical Background: Linear Algebra (37)
Replacing the matrices by the eigenvalues

39 Mathematical Background: Linear Algebra (38)
Replacing the matrices by the eigenvalues. We can add a constant, since adding or removing constant terms does not change the expression to be minimized.

40 Mathematical Background: Linear Algebra (39)
More algebra… Replacing…

41 Mathematical Background: Linear Algebra (40)
More algebra…

42 Mathematical Background: Linear Algebra (41)
Computing the degrees of freedom

43 Mathematical Background: Linear Algebra (42)
AIC In terms of the eigenvalues, the criterion takes the Wax and Kailath form AIC(k) = -2N(M - k) log( g(k) / a(k) ) + 2k(2M - k), where g(k) and a(k) are the geometric and the arithmetic mean of the M - k smallest eigenvalues of the sample covariance matrix; the model order estimate is the k that minimizes AIC(k).
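A sketch of the criterion evaluated over all candidate orders (the eigenvalues below are hypothetical, sorted in descending order):

lam = [5.1 3.2 1.05 1.00 0.97 0.94 0.91 0.88];  % hypothetical sorted eigenvalues
N = 100; M = length(lam);
AIC = zeros(1, M);
for k = 0:M-1
    g = exp(mean(log(lam(k+1:M))));  % geometric mean of the M-k smallest eigenvalues
    a = mean(lam(k+1:M));            % arithmetic mean
    AIC(k+1) = -2*N*(M-k)*log(g/a) + 2*k*(2*M-k);
end
[~, idx] = min(AIC);
d_hat = idx - 1                      % estimated model order (here: 2)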

44 Mathematical Background: Linear Algebra (43)
It was shown in [5,6] that, in the noise-only case, the ordered eigenvalues of the sample covariance matrix exhibit an exponentially decaying profile.

45 Mathematical Background: Linear Algebra (44)
When the observation is a superposition of noise and signal, the noise eigenvalues still exhibit the exponential profile; hence the scheme is called the Exponential Fitting Test (EFT). We can predict the profile of the noise eigenvalues to find the “breaking point”. Let P denote the number of candidate noise eigenvalues: choose the largest P such that the P noise eigenvalues can be fitted with a decaying exponential. (d = 3, M = 8, SNR = 20 dB, N = 10)

46 Mathematical Background: Linear Algebra (45)
Finding the breaking point, for P = 2: predict eigenvalue M - 2 based on eigenvalues M - 1 and M, and compute the relative distance. (d = 3, M = 8, SNR = 20 dB, N = 10)

47 Mathematical Background: Linear Algebra (46)
Finding the breaking point, for P = 3: predict eigenvalue M - 3 based on eigenvalues M - 2, M - 1, and M, and compute the relative distance. (d = 3, M = 8, SNR = 20 dB, N = 10)

48 Mathematical Background: Linear Algebra (47)
Finding the breaking point, for P = 4: predict eigenvalue M - 4 based on eigenvalues M - 3, M - 2, M - 1, and M, and compute the relative distance. (d = 3, M = 8, SNR = 20 dB, N = 10)

49 Mathematical Background: Linear Algebra (48)
Finding the breaking point, for P = 5: predict eigenvalue M - 5 based on eigenvalues M - 4, M - 3, M - 2, M - 1, and M, and compute the relative distance. The relative distance becomes very large: we have found the breaking point. (d = 3, M = 8, SNR = 20 dB, N = 10)

50 Mathematical Background: Linear Algebra (49)
(1) Set the number of candidate noise eigenvalues to P = 1. (2) Estimation step: estimate noise eigenvalue M - P from the eigenvalues M - P + 1, ..., M. (3) Comparison step: compare the estimate with the observation; if the observed eigenvalue fits the predicted exponential profile (relative distance below the threshold), set P = P + 1 and go to (2). (4) The final estimate is d̂ = M - P (second modification w.r.t. the original EFT).
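A simplified sketch of this search (the exact prediction formula and the thresholds come from [5,6]; here a straight-line fit of the log eigenvalues stands in for the prediction, and the threshold value is an assumption):

lam = [4.8 3.9 3.1 0.30 0.26 0.22 0.19 0.16];  % hypothetical eigenvalues, d = 3
M = length(lam); eta = 0.5;                    % assumed threshold
P = 1;
while P < M
    idx = M-P+1:M;                             % the P candidate noise eigenvalues
    if P == 1
        pred = lam(M);                         % one point: predict a flat profile
    else
        c = polyfit(idx, log(lam(idx)), 1);    % exponential profile = line in log domain
        pred = exp(polyval(c, M-P));           % predicted eigenvalue M - P
    end
    if abs(lam(M-P) - pred)/pred > eta         % breaking point found
        break
    end
    P = P + 1;                                 % the eigenvalue fits: extend the noise set
end
d_hat = M - P                                  % estimated model order (here: 3)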

51 Mathematical Background: Linear Algebra (50)
Determining the threshold coefficients: for every P, vary the threshold η_P and determine numerically the probability of detecting a signal in noise-only data (the false alarm probability P_fa). Then choose η_P such that the desired P_fa is met.

52 Mathematical Background: Linear Algebra (51)
Comparing the state-of-the-art model order selection techniques. (Slides 52 to 75 present the simulation results graphically; the figures are not included in the transcript.)


76 Mathematical Background: Linear Algebra (53)
“Full SVD”: A = U S V^H, with U of size M x M and V of size N x N. “Economy size SVD”: only the first min{M, N} singular values and vectors are kept. Low-rank approximation (truncated SVD): only the d dominant singular values and the corresponding singular vectors are kept, A ≈ U_d S_d V_d^H.
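A sketch of the three variants in MATLAB (the sizes and the rank d are arbitrary):

M = 6; N = 4; d = 2;
A = randn(M, N);
[U, S, V]    = svd(A);                   % full SVD: U is M-by-M
[Ue, Se, Ve] = svd(A, 'econ');           % economy size: Ue is M-by-min(M,N)
Ad = Ue(:,1:d)*Se(1:d,1:d)*Ve(:,1:d)';   % truncated SVD: best rank-d approximation
disp(norm(A - Ad) - Se(d+1,d+1))         % ~0: error equals the (d+1)-th singular value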

77 Mathematical Background: Linear Algebra (54)
Truncated SVD. (Slides 77 to 82 illustrate the truncated SVD graphically; the figures are not included in the transcript.)


