Presentation on theme: "Presented at the Albany Chapter of the ASA, February 25, 2004, Washington DC."— Presentation transcript:

1 Presented at the Albany Chapter of the ASA, February 25, 2004, Washington DC

2 Magnetocardiography at CardioMag Imaging, Inc., with Bolek Szymanski and Karsten Sternickel

3 Left: Filtered and averaged temporal MCG traces for one cardiac cycle in 36 channels (the 6x6 grid). Right Upper: Spatial map of the cardiac magnetic field, generated at an instant within the ST interval. Right Lower: T3-T4 sub-cycle in one MCG signal trace

4 Classical (Linear) Regression Analysis: predict y from X. The prediction model is y ≈ X_nm w, and the weight vector w follows from the pseudo-inverse, w = (X^T X)^-1 X^T y. Can we apply wisdom to data and forecast them right? Here n = 19 and m = 7: 19 data records with 7 attributes (and 1 response). A minimal sketch follows below.
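A minimal sketch of this pseudo-inverse solution; the 19-by-7 matrix below is synthetic, not the slide's data set:

```python
import numpy as np

# Synthetic stand-in for 19 records with 7 attributes (illustrative only).
rng = np.random.default_rng(0)
n, m = 19, 7
X = rng.normal(size=(n, m))
y = X @ rng.normal(size=m) + 0.1 * rng.normal(size=n)

# Classical linear regression y ~ X w, solved with the Moore-Penrose pseudo-inverse;
# equivalent to (X^T X)^{-1} X^T y when X has full column rank.
w = np.linalg.pinv(X) @ y
y_hat = X @ w
print("residual norm:", np.linalg.norm(y - y_hat))
```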

5 Fundamental Machine Learning Paradox
Learning occurs because of redundancy (patterns) in the data.
Machine Learning Paradox: if the data contain redundancies, then (i) we can learn from the data, but (ii) the "feature kernel matrix" K_F is ill-conditioned.
How to resolve the Machine Learning Paradox?
- (i) fix the rank deficiency of K_F with principal components (PCA)
- (ii) regularization: use K_F + λI instead of K_F (ridge regression)
- (iii) local learning
A small illustration of the ill-conditioning and the ridge fix follows below.
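A sketch of why redundancy hurts: a nearly duplicated attribute makes K_F = X^T X ill-conditioned, and a modest ridge restores a workable condition number (the data and the ridge value are assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(19, 7))
X[:, 6] = X[:, 5] + 1e-8 * rng.normal(size=19)   # redundant (nearly duplicate) attribute

K_F = X.T @ X                                    # feature kernel (m x m)
lam = 0.1                                        # assumed ridge parameter
print("cond(K_F)          = %.2e" % np.linalg.cond(K_F))
print("cond(K_F + lam*I)  = %.2e" % np.linalg.cond(K_F + lam * np.eye(7)))
```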

6 Principal Component Regression (PCR): replace X_nm by T_nh, where T_nh is the projection of the (n) data records onto the (h) "most important" eigenvectors of the feature kernel K_F. A minimal sketch follows below.
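A minimal PCR sketch along these lines; the choice of h = 3 retained components and the data are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(19, 7))
y = rng.normal(size=19)
h = 3                                     # number of retained principal components (assumed)

K_F = X.T @ X                             # feature kernel
eigvals, eigvecs = np.linalg.eigh(K_F)    # eigenvalues in ascending order
V_h = eigvecs[:, -h:]                     # the h "most important" eigenvectors
T = X @ V_h                               # T_nh: scores of the n records on h components
beta = np.linalg.pinv(T) @ y              # ordinary regression in the reduced space
y_hat = T @ beta
```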

7 Ridge Regression in Data Space
The "wisdom" (weight vector) is now obtained from the right-hand or Penrose inverse: w = X^T (K_D + λI)^-1 y, where K_D = X X^T is the data kernel.
The ridge term λI is added to resolve the learning paradox.
Prediction needs kernels only; a sketch follows below.
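A minimal sketch of ridge regression in data space; only the (linear) data kernel K_D = X X^T is needed, and the ridge parameter and data below are assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
X_train = rng.normal(size=(19, 7))
X_test = rng.normal(size=(5, 7))
y_train = rng.normal(size=19)
lam = 0.1                                            # assumed ridge parameter

K_train = X_train @ X_train.T                        # data kernel K_D (n x n)
alpha = np.linalg.solve(K_train + lam * np.eye(19), y_train)
K_test = X_test @ X_train.T                          # kernel between test and training records
y_pred = K_test @ alpha                              # prediction needs kernels only
```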

8 Implementing Direct Kernel Methods
Linear model:
- PCA model
- PLS model
- Ridge Regression
- Self-Organizing Map
- ...

9 What have we learned so far?
- There is a "learning paradox" because of redundancies in the data.
- We resolved this paradox by "regularization": in the case of PCA we used the eigenvectors of the feature kernel; in the case of ridge regression we added a ridge to the data kernel.
- So far the prediction models involved only linear algebra, i.e. strictly linear models.
What is in a kernel? The data kernel contains linear similarity measures (correlations) between data records: k_ij = x_i · x_j.

10 Kernels
What is a kernel?
- The data kernel expresses a similarity measure between data records.
- So far the kernel contains linear similarity measures, k_ij = x_i · x_j, i.e. a linear kernel.
- We can actually make up nonlinear similarity measures as well, based on the distance or difference between records: the (nonlinear) Radial Basis Function kernel, sketched below.
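A minimal sketch of such a nonlinear similarity measure, the RBF (Gaussian) kernel; the kernel width σ is an assumed parameter and the data are synthetic:

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    """RBF similarity between every row of A and every row of B."""
    d2 = (np.sum(A**2, axis=1)[:, None]
          + np.sum(B**2, axis=1)[None, :]
          - 2.0 * A @ B.T)                 # squared Euclidean distances
    return np.exp(-d2 / (2.0 * sigma**2))

rng = np.random.default_rng(4)
X = rng.normal(size=(19, 7))
K = rbf_kernel(X, X)                       # symmetric, entries in (0, 1], diagonal = 1
```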

11 Review: What is in a Kernel?
A kernel can be considered as a (nonlinear) data transformation.
- Many different choices for the kernel are possible.
- The Radial Basis Function (RBF) or Gaussian kernel is an effective nonlinear kernel.
The RBF or Gaussian kernel is a symmetric matrix.
- Entries reflect nonlinear similarities amongst the data descriptions.
- As defined by: K_ij = exp( -||x_i - x_j||^2 / (2σ^2) ).

12 Direct Kernel Methods for Nonlinear Regression/Classification
Consider the kernel as a (nonlinear) data transformation.
- This is the so-called "kernel trick" (Hilbert, early 1900s).
- The Radial Basis Function (RBF) or Gaussian kernel is an efficient nonlinear kernel.
Linear regression models can be "tricked" into nonlinear models by applying them to kernel-transformed data (a sketch follows below):
- PCA → DK-PCA
- PLS → DK-PLS (Partial Least Squares Support Vector Machines)
- (Direct) Kernel Ridge Regression → Least Squares Support Vector Machines
- Direct Kernel Self-Organizing Maps (DK-SOM)
These methods work in the same space as SVMs:
- DK models can usually also be derived from an optimization formulation (similar to SVMs).
- Unlike the original SVMs, DK methods are not sparse (i.e., all data are support vectors).
- Unlike SVMs, there is no patent on direct kernel methods.
- Performance on hundreds of benchmark problems compares favorably with SVMs.
Classification can be considered as a special case of regression.
Data pre-processing: the data are usually Mahalanobis scaled first.
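A hedged sketch of this direct-kernel recipe, using kernel ridge regression (the least-squares-SVM flavour named above); the RBF width, the ridge value, and the data are assumptions:

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    d2 = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2.0 * A @ B.T
    return np.exp(-d2 / (2.0 * sigma**2))

rng = np.random.default_rng(5)
X_train, y_train = rng.normal(size=(19, 7)), rng.normal(size=19)
X_test = rng.normal(size=(5, 7))

# "Trick" a linear method into a nonlinear one: kernel-transform the data first,
# then apply plain ridge regression to the kernel entries.
K_train = rbf_kernel(X_train, X_train)
K_test = rbf_kernel(X_test, X_train)
lam = 0.1                                              # assumed ridge parameter
alpha = np.linalg.solve(K_train + lam * np.eye(len(X_train)), y_train)
y_pred = K_test @ alpha                                # direct kernel ridge regression
```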

13 Nonlinear PCA in Kernel Space
Like PCA, but consider a nonlinear data kernel transformation up front: Data → Kernel.
Derive the principal components for that kernel (e.g. with NIPALS); a sketch follows below.
Examples:
- Haykin's Spiral
- Cherkassky's nonlinear function model
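A hedged sketch of this idea: build an RBF kernel up front and extract its leading components with a NIPALS-style power iteration. The kernel width, the simple column centering, and the iteration counts are illustrative assumptions, and the data below are synthetic (not Haykin's spiral):

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    d2 = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2.0 * A @ B.T
    return np.exp(-d2 / (2.0 * sigma**2))

def nipals_components(K, n_components=2, n_iter=200):
    """Leading principal-component scores of K via NIPALS-style iteration."""
    E = K.copy()
    scores = []
    for _ in range(n_components):
        t = E[:, 0].copy()                       # arbitrary starting score vector
        for _ in range(n_iter):
            p = E.T @ t / (t @ t)                # loading vector
            p /= np.linalg.norm(p)
            t = E @ p                            # updated score vector
        scores.append(t)
        E = E - np.outer(t, E.T @ t) / (t @ t)   # deflate the extracted component
    return np.column_stack(scores)

rng = np.random.default_rng(6)
X = rng.normal(size=(30, 2))
K = rbf_kernel(X, X)
K = K - K.mean(axis=0)                           # simple column centering (see slide 17)
T = nipals_components(K, n_components=2)         # nonlinear principal components
```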

14 PCA Example: Haykin's Spiral (demo: haykin1)

15 Linear PCR Example: Haykin’s Spiral (demo: haykin2)

16 K-PCR Example: Haykin's Spiral, with 3 vs. 12 principal components (demo: haykin3)

17 Scaling, centering, and making the test kernel centering consistent
[Flow diagram: the training data are Mahalanobis scaled, kernel transformed, and centered into a centered direct kernel; the test data re-use the training Mahalanobis scaling factors and the training kernel's vertical centering factors to produce a consistently centered test kernel. A sketch follows below.]
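A hedged sketch of one common way to keep the test-set treatment consistent with the training set; the exact centering recipe in the deck may differ, and the RBF width is an assumed parameter:

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    d2 = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2.0 * A @ B.T
    return np.exp(-d2 / (2.0 * sigma**2))

rng = np.random.default_rng(7)
X_train, X_test = rng.normal(size=(19, 7)), rng.normal(size=(5, 7))

# 1) Mahalanobis (autoscale) the test data with the TRAINING scaling factors.
mu, sd = X_train.mean(axis=0), X_train.std(axis=0)
Xtr, Xte = (X_train - mu) / sd, (X_test - mu) / sd

# 2) Kernel-transform both sets against the training records.
Ktr, Kte = rbf_kernel(Xtr, Xtr), rbf_kernel(Xte, Xtr)

# 3) Center the test kernel with the TRAINING kernel's (vertical) centering factors.
col_mean = Ktr.mean(axis=0)                    # vertical kernel centering factors
Ktr_c = Ktr - col_mean - Ktr.mean(axis=1)[:, None] + Ktr.mean()
Kte_c = Kte - col_mean - Kte.mean(axis=1)[:, None] + Ktr.mean()
```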

18 36 MCG T3-T4 Traces
Preprocessing (a hedged sketch follows below):
- horizontal Mahalanobis scaling
- D4 wavelet transform
- vertical Mahalanobis scaling (features and response)
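A hedged sketch of this preprocessing chain on synthetic traces; it assumes PyWavelets is available and that 'db2' (the 4-tap Daubechies filter) is an acceptable stand-in for the deck's D4 wavelet:

```python
import numpy as np
import pywt  # PyWavelets (assumed available); 'db2' is the 4-tap Daubechies wavelet

rng = np.random.default_rng(8)
traces = rng.normal(size=(36, 64))               # synthetic stand-in for 36 T3-T4 traces

# Horizontal Mahalanobis scaling: each trace to zero mean and unit standard deviation.
traces = (traces - traces.mean(axis=1, keepdims=True)) / traces.std(axis=1, keepdims=True)

# Wavelet transform of each trace; the concatenated coefficients become the descriptors.
coeffs = np.array([np.concatenate(pywt.wavedec(t, 'db2', level=3)) for t in traces])

# Vertical Mahalanobis scaling: each descriptor column to zero mean and unit std.
coeffs = (coeffs - coeffs.mean(axis=0)) / coeffs.std(axis=0)
```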

19 [Result plots comparing predictions from SVMLib, Linear PCA, and Direct Kernel PLS]

20 Direct Kernel PLS with 3 Latent Variables

21 Predictions on Test Cases with K-PLS

22 Benchmark Predictions on Test Cases

23 Direct Kernel with Robert Bress and Thanakorn Naenna

