Presentation on theme: "Independent Component Analysis Zhen Wei, Li Jin, Yuxue Jin Department of Statistics Stanford University An Introduction."— Presentation transcript:

1 Independent Component Analysis Zhen Wei, Li Jin, Yuxue Jin Department of Statistics Stanford University An Introduction

2 Outline Introduction: history, motivation and problem formulation. Algorithms: stochastic gradient algorithm, FastICA, ordering algorithm. Applications. Concluding remarks.

3 Introduction Independent Component Analysis (ICA), first introduced as a tool for separating blind sources in a mixed signal, has been widely discussed in signal processing, neural computation and finance. The basic idea of ICA is to reconstruct, from the observed sequences, the hypothesized independent original source sequences.

4 ICA versus PCA Similarities: feature extraction; dimension reduction. Differences: PCA uses only up to second-order moments of the data to produce uncorrelated components, while ICA strives to generate components that are as statistically independent as possible.

5 Motivation - Blind Source Separation Suppose that there are k unknown independent sources s(t) = (s_1(t), ..., s_k(t))^T. A data vector x(t) is observed at each time point t, such that x(t) = A s(t), where A is a full-rank mixing matrix.
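As a small illustration of this generative model (not part of the original slides), the sketch below simulates two independent non-gaussian sources and mixes them with a full-rank matrix; the particular source distributions and matrix entries are arbitrary choices.

```python
import numpy as np

# A toy instance of the mixing model x(t) = A s(t): two independent,
# non-gaussian sources mixed by a full-rank matrix A (values are illustrative).
rng = np.random.default_rng(0)
T = 10_000
s = np.vstack([
    rng.laplace(0.0, 1.0, T),                 # super-gaussian source
    rng.uniform(-np.sqrt(3), np.sqrt(3), T),  # sub-gaussian source
])
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])                    # full-rank mixing matrix
X = A @ s                                     # observed sequences x(t), shape (2, T)
```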

6 Blind source separation (diagram): the independent source components pass through the mixing process A to give the observed sequences; the de-mixing process W maps the observed sequences to the recovered independent components.

7 Problem formulation The goal of ICA is to find a linear mapping W such that the unmixed sequences u(t) = W x(t) are maximally statistically independent. Find some W such that W A = P C, where C is a diagonal matrix and P is a permutation matrix; the sources are then recovered up to scaling and ordering.
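To make this indeterminacy concrete, here is a hypothetical helper (not from the slides) that tests whether a product such as W A is approximately a permutation matrix times a diagonal matrix; the function name and tolerance are illustrative.

```python
import numpy as np

def is_scaled_permutation(M, tol=1e-6):
    # True if M is (approximately) P @ C for a permutation matrix P and a
    # diagonal matrix C: each row and each column has exactly one nonzero entry.
    mask = np.abs(M) > tol
    return bool(np.all(mask.sum(axis=0) == 1) and np.all(mask.sum(axis=1) == 1))
```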

8 Principle of ICA: Nongaussianity The fundamental restriction in ICA is that the independent components must be nongaussian for ICA to be possible. This is because the joint distribution of gaussian components is invariant under orthogonal transformations, which makes the mixing matrix A unidentifiable when the independent components are gaussian.

9 Measures of nongaussianity (1) Kurtosis: kurt(y) = E[y^4] - 3 (E[y^2])^2, the fourth cumulant, which is zero for a gaussian variable. Kurtosis can be very sensitive to outliers when its value has to be estimated from a measured sample.
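A minimal sample estimate of this kurtosis might look as follows; the helper name is ours.

```python
import numpy as np

def kurtosis(y):
    # Sample version of kurt(y) = E[y^4] - 3 (E[y^2])^2 (the fourth cumulant);
    # approximately zero for gaussian data, nonzero for most nongaussian data.
    y = np.asarray(y) - np.mean(y)
    return np.mean(y**4) - 3.0 * np.mean(y**2) ** 2
```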

10 Measures of nongaussianity (2) Negentropy. A gaussian variable has the largest entropy among all random variables of equal variance. Definition: J(y) = H(y_gauss) - H(y), where H denotes (differential) entropy and y_gauss is a gaussian random variable with the same covariance matrix as y.
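Exact negentropy requires the unknown density of y, so in practice it is approximated with contrast functions. The sketch below uses the common log cosh contrast, J(y) roughly proportional to (E[G(y)] - E[G(nu)])^2 with nu standard gaussian; the Monte Carlo baseline and the function name are our own choices, not from the slides.

```python
import numpy as np

def negentropy_logcosh(y, n_gauss=100_000, seed=0):
    # J(y) is approximated, up to a positive constant, by (E[G(y)] - E[G(nu)])^2
    # with G(u) = log cosh(u) and nu a standard gaussian variable.
    rng = np.random.default_rng(seed)
    y = np.asarray(y, dtype=float)
    y = (y - y.mean()) / y.std()              # standardize to zero mean, unit variance
    nu = rng.standard_normal(n_gauss)         # Monte Carlo baseline for E[G(nu)]
    G = lambda u: np.log(np.cosh(u))
    return (G(y).mean() - G(nu).mean()) ** 2
```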

11 Measures of nongaussianity (3) Mutual information. Definition: I(y_1, ..., y_n) = sum_i H(y_i) - H(y). Mutual information is a natural measure of the dependence between random variables. It is always non-negative, and zero if and only if the variables are statistically independent.
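For two scalar variables, a crude histogram-based estimate of I(X; Y) = H(X) + H(Y) - H(X, Y) can illustrate the definition; the bin count is an arbitrary choice, and this is not the estimator used inside the ICA algorithms below.

```python
import numpy as np

def mutual_information_hist(x, y, bins=32):
    # Binned estimate of I(X; Y) = H(X) + H(Y) - H(X, Y), in nats.
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    h = lambda p: -np.sum(p[p > 0] * np.log(p[p > 0]))   # discrete entropy
    return h(px) + h(py) - h(pxy)
```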

12 Relation between negentropy and Mutual Information If we constrain the y_i to be uncorrelated and of unit variance, then I(y_1, ..., y_n) = C - sum_i J(y_i), where C is a constant that does not depend on W. This shows that finding an invertible transformation W that minimizes the mutual information is equivalent to finding directions in which the negentropy is maximized.

13 Algorithms Maximum likelihood (Bell and Sejnowski, 1995): maximum entropy, minimum mutual information. Low-Complexity Coding and Decoding (LOCOCODE): Sepp Hochreiter et al. (1998). Neuro-mimetic approach.

14 Maximum Likelihood The log-likelihood is L(W) = sum_{t=1}^{T} sum_{i=1}^{n} log f_i(w_i^T x(t)) + T log |det W|, where the f_i are the density functions of the s_i. Connection to mutual information: if the f_i were equal to the true distributions of the w_i^T x, maximizing the likelihood would be equivalent to minimizing the mutual information of the components.
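A sketch of evaluating this log-likelihood, assuming (hypothetically) logistic densities f_i for the sources, whose log-density is log f(u) = -u - 2 log(1 + e^{-u}); the density choice and function name are ours.

```python
import numpy as np

def ica_log_likelihood(W, X):
    # L(W) = sum_t sum_i log f_i(w_i^T x(t)) + T log |det W|, with X of shape (n, T).
    # Here f_i is taken to be the logistic density, log f(u) = -u - 2 log(1 + e^{-u}).
    U = W @ X                                   # candidate unmixed components
    T = X.shape[1]
    log_f = -U - 2.0 * np.log1p(np.exp(-U))
    return np.sum(log_f) + T * np.log(np.abs(np.linalg.det(W)))
```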

15 Stochastic Gradient Algorithm Initialize the weight matrix W. Iteration: W <- W + eta (I + g(u) u^T) W with u = W x, where eta is the learning rate and g is a nonlinear function applied componentwise, e.g. g(u) = 1 - 2/(1 + e^{-u}). Repeat until W converges. The ICAs are the components of u = W x.
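A minimal sketch of this stochastic natural-gradient update with the logistic nonlinearity; the learning rate, epoch count and initialization are illustrative choices, not values from the slides.

```python
import numpy as np

def stochastic_gradient_ica(X, eta=0.01, n_epochs=50, seed=0):
    # Natural-gradient update W <- W + eta (I + g(u) u^T) W with u = W x and
    # g(u) = 1 - 2 / (1 + exp(-u)), applied one (centered) observation at a time.
    n, T = X.shape
    rng = np.random.default_rng(seed)
    W, I = np.eye(n), np.eye(n)
    for _ in range(n_epochs):
        for t in rng.permutation(T):
            x = X[:, t:t + 1]                  # single column vector, shape (n, 1)
            u = W @ x
            g = 1.0 - 2.0 / (1.0 + np.exp(-u))
            W = W + eta * (I + g @ u.T) @ W
    return W                                    # the ICAs are the rows of U = W @ X
```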

16 FastICA - Preprocessing Centering: make the observed variables x zero-mean, x <- x - E[x]. Whitening: transform the centered vector x linearly so that its covariance is the identity, z = E D^{-1/2} E^T x, where E[x x^T] = E D E^T is the eigenvalue decomposition of the covariance matrix. One can show that E[z z^T] = I, so the transformed mixing matrix E D^{-1/2} E^T A is orthogonal.
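The two preprocessing steps can be written compactly as below; the function name is ours.

```python
import numpy as np

def center_and_whiten(X):
    # Centering: subtract the mean of each observed sequence (X has shape (n, T)).
    Xc = X - X.mean(axis=1, keepdims=True)
    # Whitening: Z = E D^{-1/2} E^T Xc with cov(X) = E D E^T, so that cov(Z) = I.
    d, E = np.linalg.eigh(np.cov(Xc))
    V = E @ np.diag(1.0 / np.sqrt(d)) @ E.T     # whitening matrix
    return V @ Xc, V
```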

17 FastICA algorithm Initialize the weight matrix W. Iteration: for each row w of W, w <- E[z g(w^T z)] - E[g'(w^T z)] w, where z is the whitened data and g is a nonlinearity such as g(u) = tanh(u); then re-orthonormalize the rows of W. Repeat until convergence. The ICAs are the components of u = W z.
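A sketch of the symmetric (all-units) FastICA iteration with g(u) = tanh(u) on whitened data; the convergence test and iteration limits are illustrative.

```python
import numpy as np

def fastica(Z, n_iter=200, tol=1e-6, seed=0):
    # Symmetric FastICA on whitened data Z (shape (n, T)) with g(u) = tanh(u):
    # update every row by w <- E[z g(w^T z)] - E[g'(w^T z)] w, then re-orthonormalize
    # all rows at once via W <- (W W^T)^{-1/2} W (symmetric decorrelation).
    n, T = Z.shape
    rng = np.random.default_rng(seed)

    def sym_decorrelate(W):
        d, E = np.linalg.eigh(W @ W.T)
        return E @ np.diag(1.0 / np.sqrt(d)) @ E.T @ W

    W = sym_decorrelate(rng.standard_normal((n, n)))
    for _ in range(n_iter):
        U = W @ Z
        G, G_prime = np.tanh(U), 1.0 - np.tanh(U) ** 2
        W_new = sym_decorrelate((G @ Z.T) / T - np.diag(G_prime.mean(axis=1)) @ W)
        if np.max(np.abs(np.abs(np.diag(W_new @ W.T)) - 1.0)) < tol:
            return W_new
        W = W_new
    return W                                    # the ICAs are the rows of U = W @ Z
```

Combined with the preprocessing sketch above, Z, V = center_and_whiten(X) followed by W = fastica(Z) gives U = W @ Z as the estimated independent components, up to scaling and ordering.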

18 Ordering of the ICAs Unlike PCA, which has a well-defined and intuitive ordering of its components given by the eigenvalues of the covariance matrix, ICA has no such ordering readily at hand, so this particular problem deserves further investigation. We follow a heuristic scheme called testing-and-acceptance (TNA).

19 Ordering Algorithm

20 Applications (1) Feature extraction: recognize the pattern of excess returns of mutual funds in the financial market of China. Data: the time series of excess returns of four mutual funds in the financial market of China.

21 ICA components

22 ICA reconstruction

23 Applications (2) Image de-noising: ICA and sparse code shrinkage. The example is taken from (Hyvarinen, 1999).

24 Image de-noising (1) Suppose a noisy image model holds: z = x + n, where x is the noise-free image data and n is uncorrelated noise. Transform the data as y = W z, where W is an orthogonal matrix that is the best orthogonal approximation of the inverse of the ICA mixing matrix.

25 Image de-noising (2) Sparse code shrinkage transformation: apply a componentwise shrinkage s_hat = g(W z) and reconstruct x_hat = W^T g(W z). The function g(.) is zero close to the origin and linear beyond a cutoff value that depends on the parameters of the Laplacian density and the Gaussian noise density.
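Assuming a Laplacian density for the sparse components, this shrinkage reduces to a soft-thresholding rule; the sketch below is our own, and the parameter names noise_sigma and laplace_std are hypothetical.

```python
import numpy as np

def sparse_code_shrinkage(Z, W, noise_sigma, laplace_std):
    # Transform with the orthogonal W, soft-threshold each component at
    # t = sqrt(2) * noise_sigma**2 / laplace_std (zero near the origin, roughly
    # linear beyond the cutoff), then map back: x_hat = W^T g(W z).
    Y = W @ Z
    t = np.sqrt(2.0) * noise_sigma**2 / laplace_std
    Y_shrunk = np.sign(Y) * np.maximum(np.abs(Y) - t, 0.0)
    return W.T @ Y_shrunk
```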

26 1. Original image. 2. Corrupted with noise. 3. Recovered by ICA and sparse code shrinkage. 4. Recovered by classical Wiener filtering.

27 Concluding Remarks ICA is a very flexible and widely applicable tool that searches for a linear transformation of the observed data into statistically maximally independent components. It is also interesting to note that the methods for computing ICA (maximum negentropy, minimum mutual information, maximum likelihood) are equivalent to each other, at least in the statistical sense. There is also a resemblance between the forms of the gradient descent (Newton-Raphson) algorithm and the FastICA algorithm. Other application prospects: audio (signal) processing, image processing, telecommunications, finance, education.

28 References [1] Amari, S., Cichocki, A., and Yang, H. (1996). A new learning algorithm for blind signal separation. Advances in Neural Information Processing Systems 8, pages 757-763. [2] Bell, A. J. and Sejnowski, T. J. (1995). An information-maximization approach to blind separation and blind deconvolution. Neural Computation, 7:1129-1159. [3] Cardoso, J.-F. and Souloumiac, A. (1993). Blind beamforming for non-Gaussian signals. IEE Proceedings-F, 140(6):362-370. [4] Chatfield, C. (1989). The Analysis of Time Series: An Introduction, Fourth Edition. London: Chapman and Hall.

29 References continued [5] Moulines, E., Cardoso, J.-F., and Gassiat, E. (1997). Maximum likelihood for blind separation and deconvolution of noisy signals using mixture models. Proc. ICASSP'97, volume 5, pages 3617-3620, Munich. [6] Nadal, J.-P. and Parga, N. (1997). Redundancy reduction and independent component analysis: Conditions on cumulants and adaptive approaches. Neural Computation, 9:1421-1456. [7] Xu, L., Cheung, C., Yang, H., and Amari, S. (1997). Maximum equalization by entropy maximization and mixture of cumulative distribution functions. Proc. of ICNN'97, pages 1821-1826, Houston. [8] Yang, H., Amari, S., and Cichocki, A. (1997). Information back-propagation for blind separation of sources from non-linear mixtures. Proc. of ICNN'97, pages 2141-2146, Houston.

