Enhancing Tensor Subspace Learning by Element Rearrangement


1 Enhancing Tensor Subspace Learning by Element Rearrangement
Dong Xu, School of Computer Engineering, Nanyang Technological University

2 Outline
- Summary of our recent works on tensor (or bilinear) subspace learning
- Element rearrangement for tensor (or bilinear) subspace learning

3 What Is a Tensor?
Tensors are arrays of numbers that transform in certain ways under coordinate transformations; examples are vectors (1st-order), matrices (2nd-order), and 3rd-order tensors.
- Bilinear (or 2D) subspace learning: each image is represented as a 2nd-order tensor (i.e., a matrix).
- Tensor subspace learning (the more general case): each image is represented as a higher-order tensor.
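As a concrete illustration (in NumPy; not part of the original slides), the three representations differ only in their number of modes:

```python
import numpy as np

# A vector is a 1st-order tensor, a matrix a 2nd-order tensor,
# and a stack of matrices a 3rd-order tensor.
vector = np.zeros(10000)             # e.g. a vectorized image
matrix = np.zeros((100, 100))        # e.g. a 100x100 gray-level image
tensor3 = np.zeros((100, 100, 40))   # e.g. 40 Gabor-filtered images

print(vector.ndim, matrix.ndim, tensor3.ndim)  # 1 2 3
```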

4 Definition of Mode-k Product
The mode-k product multiplies a tensor along its k-th mode by a projection matrix, generalizing the product of two matrices (new matrix = projection matrix × original matrix) to tensors: new tensor = original tensor ×_k projection matrix.
- Projection: high-dimensional space -> low-dimensional space
- Reconstruction: low-dimensional space -> high-dimensional space
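A minimal NumPy sketch of the mode-k product (the function name and shapes below are illustrative): move mode k to the front, flatten, apply the matrix to every mode-k fiber, and fold back:

```python
import numpy as np

def mode_k_product(tensor, matrix, k):
    """Multiply `tensor` by `matrix` along mode k (0-indexed):
    each mode-k fiber of the tensor is mapped through `matrix`."""
    t = np.moveaxis(tensor, k, 0)
    shape = t.shape
    flat = t.reshape(shape[0], -1)       # mode-k flattening
    out = matrix @ flat                  # linear map on every mode-k fiber
    out = out.reshape((matrix.shape[0],) + shape[1:])
    return np.moveaxis(out, 0, k)        # fold back into a tensor

# Projection: a 10x12x8 tensor mapped to a 10x12x3 tensor along mode 2.
X = np.random.rand(10, 12, 8)
U = np.random.rand(3, 8)                 # projection matrix for mode 2
Y = mode_k_product(X, U, 2)
print(Y.shape)  # (10, 12, 3)
```

Applying `mode_k_product(Y, U.T, 2)` maps the low-dimensional tensor back to the original 10x12x8 shape, illustrating the projection/reconstruction pair on the slide.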

5 Definition of Mode-k Flattening
Mode-k flattening unfolds a tensor into a matrix whose columns are the tensor's mode-k fibers.
Implicit assumption in previous tensor-based subspace learning:
- Intra-tensor correlations: correlations along the column vectors of the mode-k flattened matrices.
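A small NumPy sketch of mode-k flattening (column-ordering conventions vary across papers; this version simply moves mode k to the front, which is enough to talk about correlations among the columns):

```python
import numpy as np

def mode_k_flatten(tensor, k):
    """Mode-k flattening: the mode-k fibers of the tensor become
    the columns of the resulting matrix."""
    return np.moveaxis(tensor, k, 0).reshape(tensor.shape[k], -1)

X = np.arange(24).reshape(2, 3, 4)
print(mode_k_flatten(X, 0).shape)  # (2, 12)
print(mode_k_flatten(X, 1).shape)  # (3, 8)
print(mode_k_flatten(X, 2).shape)  # (4, 6)
```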

6 Data Representation in Dimensionality Reduction
- Representations, from low to high order: vector (gray-level image), matrix (filtered image), 3rd-order tensor (video sequence)
- Vector-based methods: PCA, LDA
- Tensor-based examples: Rank-1 Decomposition (A. Shashua and A. Levin, 2001); Tensorface (M. Vasilescu and D. Terzopoulos, 2002)
- Our work: Xu et al., 2005; Yan et al., 2005

7 What Are Gabor Features?
Gabor features can improve recognition performance in comparison to grayscale features (Chengjun Liu, T-IP, 2002).
- Input: grayscale image
- Gabor wavelet kernels: five scales × eight orientations
- Output: 40 Gabor-filtered images
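A hedged sketch of such a filter bank in NumPy/SciPy; the kernel size, wavelengths, and sigma values below are illustrative choices, not the parameters used by Liu (2002):

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(size, wavelength, theta, sigma, gamma=0.5):
    """Real part of a Gabor kernel: a Gaussian envelope times a cosine
    carrier at orientation `theta` and the given `wavelength`."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * xr / wavelength)

# Five scales (wavelengths) and eight orientations -> a bank of 40 kernels.
wavelengths = [4, 6, 8, 12, 16]                  # illustrative values
orientations = [k * np.pi / 8 for k in range(8)]
bank = [gabor_kernel(15, w, t, sigma=0.56 * w)
        for w in wavelengths for t in orientations]

image = np.random.rand(32, 32)                   # stand-in grayscale image
filtered = np.stack([convolve2d(image, k, mode='same') for k in bank], axis=-1)
print(filtered.shape)  # (32, 32, 40) -- a 3rd-order tensor
```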

8 Why Represent Image Objects as Tensors instead of Vectors?
- Natural representation: gray-level images (2D structure), videos (3D structure), Gabor-filtered images (3D structure)
- Enhanced learnability in real applications:
  - Curse of dimensionality: a 100×100×40 Gabor-filtered image vectorizes to 400,000 dimensions
  - Small-sample-size problem: common face databases contain fewer than 5,000 images
- Reduced computation cost

9 Concurrent Subspace Analysis (CSA) as an Example (Criterion: Optimal Reconstruction)
Dimensionality reduction projects each input sample into a low-dimensional space; reconstruction maps the low-dimensional sample back, giving the reconstructed sample. The objective is to find the projection matrices that minimize the total reconstruction error over all samples.
D. Xu, S. Yan, Lei Zhang, H. Zhang et al., CVPR 2005 and T-CSVT 2008
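For 2nd-order tensors, this criterion can be sketched as alternating eigen-updates of the two projection matrices (an illustrative reimplementation, not the authors' code; with V fixed, minimizing the reconstruction error is equivalent to maximizing tr(U^T M1 U), so U is taken as top eigenvectors):

```python
import numpy as np

def csa(samples, d1, d2, n_iter=10):
    """Sketch of Concurrent Subspace Analysis for 2nd-order tensors:
    alternately update U1 and U2 so that U1 U1.T X U2 U2.T best
    reconstructs each sample X."""
    m, n = samples[0].shape
    U2 = np.eye(n)[:, :d2]                       # simple initialization
    for _ in range(n_iter):
        # Fix U2: U1 spans the top eigenvectors of sum_i X U2 U2.T X.T
        M1 = sum(X @ U2 @ U2.T @ X.T for X in samples)
        U1 = np.linalg.eigh(M1)[1][:, -d1:]
        # Fix U1: the symmetric update for U2
        M2 = sum(X.T @ U1 @ U1.T @ X for X in samples)
        U2 = np.linalg.eigh(M2)[1][:, -d2:]
    return U1, U2

rng = np.random.default_rng(0)
samples = [rng.standard_normal((20, 15)) for _ in range(30)]
U1, U2 = csa(samples, d1=5, d2=4)
low_dim = U1.T @ samples[0] @ U2      # projection: 20x15 -> 5x4
recon = U1 @ low_dim @ U2.T           # reconstruction back to 20x15
```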

10 Tensorization - New Research Direction: Other Related Works
- Discriminant Analysis with Tensor Representation (DATER): CVPR 2005 and T-IP 2007
- Coupled Kernel-based Subspace Learning (CKDA): CVPR 2005
- Rank-one Projections with Adaptive Margins (RPAM): CVPR 2006 and T-SMC-B 2007
- Enhancing Tensor Subspace Learning by Element Rearrangement: CVPR 2007 and T-PAMI 2009
- Discriminant Locally Linear Embedding with High-Order Tensor Data (DLLE/T): T-SMC-B 2008
- Convergent 2D Subspace Learning with Null Space Analysis (NS2DLDA): T-CSVT 2008
- Semi-supervised Bilinear Subspace Learning: T-IP 2009
- Applications in human gait recognition: CSA+DATER (T-CSVT 2006); Tensor Marginal Fisher Analysis (TMFA, T-IP 2007)
Note: Other researchers have also published several papers along this direction.

11 Tensorization - New Research Direction: Tensorization in the Graph Embedding Framework
Google citations: 174 (as of 15-Sep-2009)
- Direct graph embedding (original formulation): PCA & LDA, ISOMAP, LLE, Laplacian Eigenmap
- Linearization: PCA, LDA, LPP, MFA
- Kernelization: KPCA, KDA, KMFA
- Tensorization: CSA, DATER, TMFA
S. Yan, D. Xu, H. Zhang et al., CVPR 2005 and T-PAMI 2007

12 Element Rearrangement: Motivations
- The success of tensor-based subspace learning relies on redundancy among the column vectors of the unfolded matrices.
- However, such correlation/redundancy is usually not strong for real data.
- Our solution: element rearrangement is employed as a preprocessing step to increase the intra-tensor correlations for existing tensor subspace learning methods.

13 Motivations - Continued
Intra-tensor correlations: correlations among the features within certain tensor dimensions, such as rows, columns, and Gabor features.
Element rearrangement groups sets of highly correlated pixels into columns, converting a low-correlation layout into a high-correlation one.

14 Problem Definition
The task of enhancing correlation/redundancy among 2nd-order tensors is to search for a pixel rearrangement operator R, together with projection matrices U and V, that minimizes the reconstruction error of the rearranged samples: min over R, U, V of sum_n || R(X_n) - U U^T R(X_n) V V^T ||^2, where R(X_n) is the rearranged matrix from sample X_n.
- The column numbers of U and V are predefined.
- After the pixel rearrangement, we can use the rearranged tensors as input for existing tensor subspace learning algorithms!

15 Solution to the Pixel Rearrangement Problem
An alternating procedure:
1. Initialize U_0, V_0.
2. Compute the reconstructed matrices under the current U and V.
3. Optimize the rearrangement operator R.
4. Optimize U and V; set n = n + 1 and return to step 2 until convergence.
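The loop above can be sketched as follows; for simplicity the R-step here uses sorted matching, which is optimal for an unconstrained squared-error rearrangement, as a stand-in for the paper's EMD formulation, and all names are illustrative:

```python
import numpy as np

def rearrange_to_match(x_flat, y_flat):
    """Optimal unconstrained rearrangement under squared error:
    match sorted values of x to sorted positions of y
    (1-D optimal transport)."""
    out = np.empty_like(x_flat)
    out[np.argsort(y_flat)] = np.sort(x_flat)
    return out

def rearrange_and_learn(samples, d1, d2, n_outer=5):
    """Alternating loop (illustrative sketch, not the authors' code):
    fit projection matrices U, V to the current rearranged samples,
    then re-permute each sample's pixels toward its reconstruction."""
    samples = [s.copy() for s in samples]
    m, n = samples[0].shape
    V = np.eye(n)[:, :d2]                         # initialize V_0
    for _ in range(n_outer):
        # U-/V-step: CSA-style eigen-updates on the current samples
        U = np.linalg.eigh(sum(X @ V @ V.T @ X.T for X in samples))[1][:, -d1:]
        V = np.linalg.eigh(sum(X.T @ U @ U.T @ X for X in samples))[1][:, -d2:]
        # R-step: rearrange each sample toward its own reconstruction
        for i, X in enumerate(samples):
            recon = U @ U.T @ X @ V @ V.T
            samples[i] = rearrange_to_match(X.ravel(), recon.ravel()).reshape(m, n)
    return samples, U, V

rng = np.random.default_rng(1)
originals = [rng.standard_normal((8, 6)) for _ in range(10)]
rearranged, U, V = rearrange_and_learn(originals, d1=3, d2=3)
```

Since each R-step can only lower the reconstruction error for fixed U and V, and each U-/V-step does the same for fixed R, the objective is non-increasing across iterations.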

16 Step for Optimizing R
Optimizing R is an integer programming problem: the original matrix acts as a sender and the reconstructed matrix as a receiver, as in a transportation problem.
- The linear programming problem underlying the Earth Mover's Distance (EMD) has an integer solution.
- For speedup, we constrain the rearrangement within a spatially local neighborhood or a feature-based neighborhood.
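For small matrices, one such R-step can equivalently be posed as an assignment problem and solved with the Hungarian method (a dense stand-in for the EMD linear program; `optimize_rearrangement` is a hypothetical helper, not the paper's implementation):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def optimize_rearrangement(original, reconstructed):
    """One R-step as an assignment problem: move each pixel value of
    `original` to the position where it best matches `reconstructed`.
    The paper instead solves an EMD-style linear program, restricted
    to local neighborhoods for speed."""
    x = original.ravel()
    y = reconstructed.ravel()
    cost = (x[:, None] - y[None, :]) ** 2   # cost of sending pixel i to slot j
    rows, cols = linear_sum_assignment(cost)
    rearranged = np.empty_like(y)
    rearranged[cols] = x[rows]
    return rearranged.reshape(original.shape)

X = np.array([[3.0, 1.0], [0.0, 2.0]])      # sender: original matrix
Y = np.array([[0.0, 1.0], [2.0, 3.0]])      # receiver: reconstructed matrix
print(optimize_rearrangement(X, Y))         # X's pixels permuted to match Y
```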

17 Convergence Speed

18 Rearrangement Results

19 Reconstruction Visualization

20 Reconstruction Visualization

21 Classification Accuracy

22 Summary
- Our papers published in CVPR 2005 are the first works to address dimensionality reduction with image objects represented as tensors of arbitrary order.
- Those papers opened a new research direction, and we have since published a series of works along it.
- Element rearrangement can further improve data compression performance and classification accuracy.

