Blind Source Separation: PCA & ICA

Blind Source Separation: PCA & ICA
HST 6.555
Gari D. Clifford, gari@mit.edu, http://www.mit.edu/~gari
© G. D. Clifford 2005

What is BSS?
Assume an observation (signal) is a linear mix of more than one unknown independent source signal.
The mixing (not the signals) is stationary.
We have as many observations as unknown sources.
To find the sources in the observations, we need to define a suitable measure of independence.
For example, the cocktail party problem (the sources are the speakers and the background noise):

The cocktail party problem: find Z.
(Figure: N sources z1 … zN are mixed by the matrix A into N observations x1 … xN, i.e. X^T = A Z^T.)

Formal statement of the problem
N independent sources: Z (M x N).
Linear square mixing: A (N x N), i.e. #sources = #sensors.
This produces a set of observations: X (M x N), with X^T = A Z^T.

Formal statement of the solution
'Demix' the observations X^T (N x M) into Y^T = W X^T, where Y^T (N x M) ≈ Z^T and W (N x N) ≈ A^-1.
How do we recover the independent sources? (We are trying to estimate W ≈ A^-1.)
We require a measure of independence!
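
As a concrete illustration of this set-up, here is a minimal NumPy sketch (the source waveforms, sampling rate and matrix values are invented for illustration) that builds the sources Z, mixes them with a square matrix A to form the observations X^T = A Z^T, and states the demixing goal:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 2000, 3                       # M samples, N sources (= number of sensors)

t = np.arange(M) / 250.0
Z = np.column_stack([                # Z is M x N: one independent source per column
    np.sin(2 * np.pi * 1.0 * t),                 # periodic 'signal' source
    np.sign(np.sin(2 * np.pi * 0.3 * t + 1.0)),  # square-wave source
    0.5 * rng.laplace(size=M),                   # super-Gaussian 'noise' source
])

A = np.array([[1.0, 0.5, 0.3],       # the 'unknown' square mixing matrix (N x N)
              [0.2, 1.0, -0.4],
              [-0.3, 0.6, 1.0]])
X = (A @ Z.T).T                      # observations: X^T = A Z^T, so X is M x N

# The BSS goal: estimate W ~ A^-1 so that Y^T = W X^T recovers Z
# (only ever up to a permutation and scaling of the columns).
```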

(Figure: example traces of the 'signal' source and 'noise' sources Z^T, the observed mixtures X^T = A Z^T, and the source estimates Y^T = W X^T.)

X = A Z,   Y = W X

The Fourier Transform (independence between the components is assumed)

Recap: non-causal Wiener filtering
x[n] - observation, y[n] - ideal signal, d[n] - noise component.
(Figure: power spectra versus frequency f of the observation Sx(f), the ideal signal Sy(f) and the noise Sd(f).)
Filtered signal: Sfilt(f) = Sx(f) · H(f).

BSS is a transform?
Like Fourier, we decompose into components by transforming the observations into another vector space, one which maximises the separation between the interesting (signal) and the unwanted (noise) components.
Unlike Fourier, the separation is not based on frequency; it is based on independence.
Sources can have the same frequency content.
No assumptions are made about the signals (other than that they are independent and linearly mixed).
So you can filter/separate in-band noise and signals with BSS.

Principal Component Analysis
Assumes that second-order decorrelation = independence.
Find a set of orthogonal axes in the data (the independence metric is variance).
Project the data onto these axes to decorrelate them.
Independence is forced onto the data through the orthogonality of the axes.
A conventional noise/signal separation technique.

Singular Value Decomposition
Decompose the observation X = A Z into X = U S V^T.
S is a diagonal matrix of singular values, with its elements arranged in descending order of magnitude (the singular spectrum).
The columns of V are the eigenvectors of C = X^T X (an orthogonal subspace: dot(vi, vj) = 0 for i ≠ j); they 'demix' or rotate the data.
U is the matrix of projections of X onto the eigenvectors of C: the 'source' estimates.

Eigenspectrum of the decomposition
S is the singular matrix: zero except on the leading diagonal.
The diagonal elements S_ii are the square roots of the eigenvalues of C = X^T X, placed in order of descending magnitude.
They correspond to the magnitude of the projected data along each eigenvector.
The eigenvectors are the axes of maximal variation in the data.
Eigenspectrum = plot of the eigenvalues [stem(diag(S).^2) in MATLAB].
Variance = power (analogous to the Fourier components in a power spectrum).
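
The following NumPy sketch (on made-up data) mirrors this slide: it computes X = U S V^T, forms the eigenspectrum from the squared singular values (what stem(diag(S).^2) would plot in MATLAB), and checks that the columns of V diagonalise C = X^T X:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((2000, 3)) @ np.array([[1.0, 0.5, 0.3],
                                               [0.2, 1.0, -0.4],
                                               [-0.3, 0.6, 1.0]])   # toy M x N observations

# Economy-size SVD: X = U @ np.diag(s) @ Vt, with s sorted in descending order
U, s, Vt = np.linalg.svd(X, full_matrices=False)

eigenspectrum = s**2                     # variance (power) captured along each eigenvector
print("singular values:", np.round(s, 2))
print("eigenspectrum  :", np.round(eigenspectrum, 2))

# The rows of Vt (columns of V) are eigenvectors of C = X^T X,
# so V^T C V should be (approximately) diagonal:
C = X.T @ X
print(np.round(Vt @ C @ Vt.T, 1))
```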

SVD noise/signal separation
To perform SVD filtering of a signal, use a truncated SVD decomposition (using only the first p eigenvectors): Y = U S_p V^T.
[Reduce the dimensionality of the data by setting the noise-related singular values to zero (S_noise = 0), then reconstruct the data from the signal subspace alone.]
Most of the signal is contained in the first few principal components. Discarding the remaining components and projecting back into the original observation space effects a noise filtering, or a noise/signal separation.
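
A sketch of this truncated reconstruction on synthetic data (the rank-1 'signal', the noise level and the choice p = 1 are all illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(2000) / 250.0
clean = np.outer(np.sin(2 * np.pi * t), [1.0, 0.5, -0.8])   # a rank-1 'signal' seen on 3 channels
X = clean + 0.2 * rng.standard_normal(clean.shape)          # add broadband noise

U, s, Vt = np.linalg.svd(X, full_matrices=False)

p = 1                                   # keep only the first p components (the signal subspace)
s_trunc = np.zeros_like(s)
s_trunc[:p] = s[:p]                     # S_noise = 0 for the discarded components
X_filt = U @ np.diag(s_trunc) @ Vt      # project back into the original observation space

print("MSE vs clean, before:", np.mean((X - clean) ** 2))
print("MSE vs clean, after :", np.mean((X_filt - clean) ** 2))
```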

Two dimensional example

Real Data

Independent Component Analysis
Like PCA, we are looking for N different vectors onto which we can project our observations to give a set of N maximally independent signals (sources).
The dimensionality of the output data (the discovered sources) equals the dimensionality of the observations.
Instead of using variance as our independence measure (i.e. decorrelating), as we do in PCA, we use a measure of how statistically independent the sources are.

The basic idea ...
Assume the underlying source signals (Z) are independent.
Assume a linear mixing matrix (A): X^T = A Z^T.
To find Y (≈ Z), find W (≈ A^-1): Y^T = W X^T.
How? Initialise W and iteratively update W to minimise or maximise a cost function that measures the (statistical) independence between the columns of Y^T.
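
In practice this iterative search is usually delegated to an off-the-shelf routine. A minimal sketch using scikit-learn's FastICA on synthetic mixtures (the sources, mixing matrix and parameters are invented; FastICA is only one of several ICA algorithms, not necessarily the one used for these slides):

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(3)
t = np.arange(2000) / 250.0
Z = np.column_stack([np.sin(2 * np.pi * t),                  # periodic source
                     np.sign(np.sin(2 * np.pi * 0.31 * t)),  # square-wave source
                     rng.laplace(size=t.size)])              # super-Gaussian 'noise' source
A = np.array([[1.0, 0.5, 0.3],
              [0.2, 1.0, -0.4],
              [-0.3, 0.6, 1.0]])                             # the 'unknown' mixing matrix
X = Z @ A.T                                                  # observations, X^T = A Z^T

ica = FastICA(n_components=3, random_state=0)
Y = ica.fit_transform(X)      # source estimates: Y^T = W X^T, up to order and scale
W = ica.components_           # the estimated unmixing matrix (W ~ A^-1 up to permutation/scale)
```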

Non-Gaussianity → statistical independence?
From the Central Limit Theorem: add enough independent signals together and you approach a Gaussian PDF (κ = 3).
(Figure: the sources Z have PDFs P(Z) with κ < 3 (here 1.8); the mixtures X^T = A Z^T have a PDF P(X) that is closer to Gaussian, with κ = 3.4.)
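
A quick numerical illustration of this Central Limit Theorem argument (purely synthetic): summing more and more independent uniform variables (each with κ = 1.8) drives the kurtosis of the result towards the Gaussian value of 3.

```python
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(4)
for k in (1, 2, 5, 20):
    # sum of k independent uniform variables, each strongly sub-Gaussian (kappa = 1.8)
    mix = rng.uniform(-1, 1, size=(100_000, k)).sum(axis=1)
    print(f"{k:2d} summed sources -> kurtosis = {kurtosis(mix, fisher=False):.2f}")
# the kurtosis approaches 3 (Gaussian) as more independent sources are added
```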

Recap: Moments of a distribution

Higher Order Moments (3rd: skewness)

Higher Order Moments (4th: kurtosis)
Gaussians are mesokurtic, with κ = 3.
(Figure: sub-Gaussian (κ < 3) and super-Gaussian (κ > 3) PDFs compared against a Gaussian.)
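
A short numerical check of these definitions (the example distributions are illustrative): a uniform variable is sub-Gaussian, a Laplacian is super-Gaussian, and a Gaussian sits at κ = 3.

```python
import numpy as np
from scipy.stats import kurtosis, skew

rng = np.random.default_rng(5)
samples = {
    "gaussian (mesokurtic)":    rng.standard_normal(200_000),
    "uniform (sub-Gaussian)":   rng.uniform(-1, 1, 200_000),
    "laplace (super-Gaussian)": rng.laplace(size=200_000),
}
for name, x in samples.items():
    # fisher=False gives the 'raw' kurtosis, so a Gaussian scores ~3
    print(f"{name:26s} skew = {skew(x):+.2f}   kurtosis = {kurtosis(x, fisher=False):.2f}")
```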

Entropy-based cost function
Kurtosis is highly sensitive to small changes in the distribution tails. A more robust measure of Gaussianity is based on differential entropy H(y): the negentropy J(y) = H(y_gauss) − H(y), where y_gauss is a Gaussian variable with the same covariance matrix as y.
J(y) can be approximated from kurtosis.
Entropy is a measure of randomness; for a given variance, Gaussians are maximally random.
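
One widely used moment-based approximation of negentropy (after standardising y to zero mean and unit variance) is J(y) ≈ E[y³]²/12 + kurt(y)²/48, where kurt is the excess kurtosis; a sketch under that assumption:

```python
import numpy as np
from scipy.stats import kurtosis

def negentropy_approx(y):
    """Moment-based approximation J(y) ~ E[y^3]^2 / 12 + kurt(y)^2 / 48.

    y is standardised to zero mean and unit variance; kurt() is the *excess*
    kurtosis (0 for a Gaussian), so J ~ 0 for Gaussian data and grows as
    the distribution of y becomes less Gaussian.
    """
    y = (y - y.mean()) / y.std()
    return np.mean(y**3) ** 2 / 12.0 + kurtosis(y, fisher=True) ** 2 / 48.0

rng = np.random.default_rng(6)
print("gaussian:", negentropy_approx(rng.standard_normal(100_000)))   # ~ 0
print("laplace :", negentropy_approx(rng.laplace(size=100_000)))      # clearly > 0
```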

Minimising Mutual Information
The mutual information (MI) between two vectors x and y, I(x, y) = H(x) + H(y) − H(x, y), is always non-negative and is zero only if the variables are independent; therefore we want to minimise MI.
MI can be re-written in terms of negentropy, up to a constant c: it differs from negentropy by a constant and a sign change.
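
A rough histogram-based estimate of I(x, y) = H(x) + H(y) − H(x, y), just to make the definition concrete (the bin count and test data are arbitrary, and practical MI estimators take more care over binning bias):

```python
import numpy as np

def mutual_information(x, y, bins=64):
    """Estimate I(x, y) = H(x) + H(y) - H(x, y) in bits from a 2-D histogram."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()                  # joint probability estimate
    px = pxy.sum(axis=1)                   # marginal of x
    py = pxy.sum(axis=0)                   # marginal of y

    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    return entropy(px) + entropy(py) - entropy(pxy.ravel())

rng = np.random.default_rng(7)
a = rng.standard_normal(100_000)
b = rng.standard_normal(100_000)
print("independent:", mutual_information(a, b))            # close to 0 (binning bias aside)
print("dependent  :", mutual_information(a, a + 0.3 * b))  # clearly positive
```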

Measures of Statistical Independence
In general we require a measure of statistical independence which we maximise between each of the N components.
Non-Gaussianity is one approximation, but it is sensitive to small changes in the distribution tails.
Other measures include: mutual information I, entropy (negentropy, J), and maximum (log) likelihood.

Independent source discovery using Maximum Likelihood
Generative latent variable modelling: N observables X arise from N sources z_i through a linear mapping W = [w_ij].
The latent variables are assumed to be independently distributed.
Find the elements of W by gradient ascent: iteratively update W ← W + η ∂L/∂W, where η is some (constant) learning rate and L is our objective cost function, the log likelihood.
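
The update formula itself was in the slide figure; one common concrete form is the natural-gradient maximum-likelihood (Infomax) rule ΔW = η (I − φ(y) yᵀ) W with φ(y) = tanh(y), which corresponds to a super-Gaussian source prior. A sketch under those assumptions (the sources, mixing matrix, learning rate and iteration count are all invented):

```python
import numpy as np

rng = np.random.default_rng(8)
M, N = 5000, 2
Z = np.column_stack([rng.laplace(size=M),             # super-Gaussian source 1
                     rng.standard_normal(M) ** 3])    # super-Gaussian source 2
A = np.array([[1.0, 0.6],
              [-0.4, 1.2]])                           # 'unknown' mixing matrix
X = Z @ A.T                                           # observations, X^T = A Z^T
X -= X.mean(axis=0)

W = np.eye(N)                                         # initial unmixing matrix estimate
eta = 1e-3                                            # learning rate

for _ in range(5000):
    Y = X @ W.T                                       # current source estimates, Y^T = W X^T
    phi = np.tanh(Y)                                  # score function for super-Gaussian sources
    W += eta * (np.eye(N) - (phi.T @ Y) / M) @ W      # natural-gradient ascent on the log likelihood

# If training has converged, W @ A is close to a scaled permutation matrix,
# i.e. the sources are recovered up to order and scale.
print(np.round(W @ A, 2))
```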

Weight updates
(Figure: the elements w_ij converging as a function of iterations/100, for W = [1 3; -2 -1].)

The Cocktail Party Problem Revisited … some practical examples using ICA

Observations
Separation of the mixed observations into source estimates is excellent, apart from: the order of the sources has changed, and the signals have been scaled.
Why? In X^T = A Z^T, insert a permutation matrix B: X^T = A B B^-1 Z^T. Then B^-1 Z^T is the set of sources with a different column order, and the sources change by a scaling as A → A B.
ICA solutions are order and scale independent because κ is dimensionless and is measured between the sources; it is a relative measure between the outputs.
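
This ambiguity is easy to demonstrate numerically: correlating each recovered component with each true source gives a matrix with one dominant entry per row, but its position (order) and sign (scale/inversion) are arbitrary. A sketch using scikit-learn's FastICA on made-up sources:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(9)
t = np.arange(5000) / 250.0
Z = np.column_stack([np.sin(2 * np.pi * t),
                     np.sign(np.sin(2 * np.pi * 0.31 * t)),
                     rng.laplace(size=t.size)])               # true independent sources
A = np.array([[1.0, 0.5, 0.3],
              [0.2, 1.0, -0.4],
              [-0.3, 0.6, 1.0]])
X = Z @ A.T                                                   # mixed observations

Y = FastICA(n_components=3, random_state=0).fit_transform(X)  # recovered components

# Correlation between each estimated component (rows) and each true source (columns):
corr = np.corrcoef(Y.T, Z.T)[:3, 3:]
print(np.round(corr, 2))  # one entry near +/-1 per row, in an arbitrary order and sign
```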

Separation of sources in the ECG

The ECG has a 1/f frequency scaling; the Gaussian source is broadband noise, as expected; EM is highly peaked near DC; and MA is more evenly distributed between 10 and 20 Hz. The sine is a peak in frequency, and its pdf is weighted towards the tails; this latter fact gives a negative excess kurtosis (κ < 3). Some plots are bi-modal, so ICA might try to separate these out.
(The per-source skewness and kurtosis values annotated on the figure are omitted here.)

Transformation inversion for filtering
Problem: we can never know if the sources are really reflective of the actual source generators (there is no gold standard), and de-mixing might alter the clinical relevance of the ECG features.
Solution: identify the unwanted sources, set the corresponding (p) columns in W^-1 to zero (giving W_p^-1), then multiply back through to remove the 'noise' sources and transform back into the original observation space.
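
A sketch of this inversion step on synthetic data (scikit-learn's FastICA exposes its estimate of the mixing matrix, i.e. of W^-1, as mixing_, and the removed channel means as mean_; picking the unwanted component by its kurtosis is just an illustrative heuristic):

```python
import numpy as np
from scipy.stats import kurtosis
from sklearn.decomposition import FastICA

rng = np.random.default_rng(10)
t = np.arange(5000) / 250.0
Z = np.column_stack([np.sin(2 * np.pi * t),        # the 'clinical' signal source
                     rng.laplace(size=t.size)])    # an unwanted, spiky 'noise' source
A = np.array([[1.0, 0.4],
              [-0.3, 1.0]])
X = Z @ A.T                                        # observed mixtures

ica = FastICA(n_components=2, random_state=0)
Y = ica.fit_transform(X)                           # source estimates
A_est = ica.mixing_                                # estimate of W^-1 (the mixing matrix)

p = int(np.argmax(kurtosis(Y, axis=0)))            # pick the spikiest component as 'noise'
A_p = A_est.copy()
A_p[:, p] = 0.0                                    # zero the corresponding column of W^-1

X_filt = Y @ A_p.T + ica.mean_                     # back into the observation space, noise removed
```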

Transformation inversion for filtering

Real Data
(Figure panels: the observations X, the ICA estimates Y = W X, the underlying sources Z, and the filtered reconstruction X_filt = W_p^-1 Y.)

ICA results
(Figure panels: X, Y = W X, Z, and X_filt = W_p^-1 Y.)

Summary
PCA is good for Gaussian noise separation; ICA is good for non-Gaussian 'noise' separation.
PCs have an obvious meaning: they are the highest-energy components.
ICA-derived sources have arbitrary scaling/inversion and ordering, so an energy-independent heuristic is needed to identify signals vs noise.
The order of the ICs changes because the IC space is derived from the data; the PC space only changes if the SNR changes.
ICA assumes a linear mixing matrix and stationary mixing.
De-mixing performance is a function of lead position.
ICA requires as many sensors (ECG leads) as sources.
Filtering: discard certain dimensions, then invert the transformation.
In-band noise can be removed - unlike with Fourier!