Enhancing Tensor Subspace Learning by Element Rearrangement


Enhancing Tensor Subspace Learning by Element Rearrangement
Dong XU
School of Computer Engineering, Nanyang Technological University
http://www.ntu.edu.sg/home/dongxu
dongxu@ntu.edu.sg

Outline
- Summary of our recent works on Tensor (or Bilinear) Subspace Learning
- Element Rearrangement for Tensor (or Bilinear) Subspace Learning

What is a Tensor? Tensors are arrays of numbers that transform in certain ways under coordinate transformations. A vector is a 1st-order tensor, a matrix is a 2nd-order tensor, and higher-order arrays are higher-order tensors. Bilinear (or 2D) subspace learning represents each image as a 2nd-order tensor (i.e., a matrix); tensor subspace learning (the more general case) represents each image as a higher-order tensor.
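A minimal numpy illustration of the three representations (the shapes are illustrative, not from the slides):

```python
import numpy as np

vector = np.zeros(10000)            # 1st-order tensor: a 100x100 gray image, flattened
matrix = np.zeros((100, 100))       # 2nd-order tensor: the same image kept as a matrix
tensor3 = np.zeros((100, 100, 40))  # 3rd-order tensor: e.g., a stack of 40 filtered images
```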

Definition of Mode-k Product
The mode-k product generalizes the product of two matrices (original matrix times projection matrix gives a new matrix) to tensors. For a tensor $\mathcal{X} \in \mathbb{R}^{m_1 \times \cdots \times m_n}$ and a projection matrix $U \in \mathbb{R}^{m_k' \times m_k}$, the mode-k product $\mathcal{Y} = \mathcal{X} \times_k U$ is the new tensor defined elementwise by $\mathcal{Y}_{i_1 \cdots i_{k-1}\, j\, i_{k+1} \cdots i_n} = \sum_{i_k=1}^{m_k} \mathcal{X}_{i_1 \cdots i_k \cdots i_n} U_{j i_k}$, or equivalently $Y_{(k)} = U X_{(k)}$ in terms of mode-k flattened matrices. With $m_k' < m_k$ this is a projection (high-dimensional space -> low-dimensional space); multiplying back by $U^T$ gives the reconstruction (low-dimensional space -> high-dimensional space).
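A minimal numpy sketch of the mode-k product as defined above (shapes illustrative):

```python
import numpy as np

def mode_k_product(X, U, k):
    """Mode-k product X x_k U. If X has shape (m_1, ..., m_n) and U has
    shape (p, m_k), the result replaces mode k's dimension m_k by p."""
    # Contract U's second axis with X's k-th axis; tensordot puts the
    # new axis first, so move it back into position k.
    Y = np.tensordot(U, X, axes=([1], [k]))
    return np.moveaxis(Y, 0, k)

# Projection to a low-dimensional space and reconstruction back:
X = np.random.rand(100, 100, 40)
U = np.linalg.qr(np.random.rand(100, 20))[0].T   # 20x100, orthonormal rows
low = mode_k_product(X, U, 0)        # shape (20, 100, 40)  -- projection
rec = mode_k_product(low, U.T, 0)    # shape (100, 100, 40) -- reconstruction
```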

Definition of Mode-k Flattening
Mode-k flattening rearranges a tensor $\mathcal{X} \in \mathbb{R}^{m_1 \times \cdots \times m_n}$ into a matrix $X_{(k)} \in \mathbb{R}^{m_k \times (\prod_{j \neq k} m_j)}$ whose columns are the mode-k fibers of the tensor. Implicit assumption in previous tensor-based subspace learning -- intra-tensor correlations: correlations along the column vectors of the mode-k flattened matrices.
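A minimal sketch of mode-k flattening in numpy (the exact column ordering convention varies across papers; any fixed convention works here):

```python
import numpy as np

def mode_k_flatten(X, k):
    """Mode-k flattening: a matrix whose rows are indexed by mode k and
    whose columns enumerate all remaining modes (the mode-k fibers)."""
    return np.moveaxis(X, k, 0).reshape(X.shape[k], -1)

X = np.random.rand(100, 100, 40)
X1 = mode_k_flatten(X, 1)   # shape (100, 4000); columns are mode-1 fibers
```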

Data Representation in Dimensionality Reduction
Examples: a gray-level image as a vector or a matrix; Gabor-filtered images or a video sequence as a 3rd-order tensor. In each case, the goal is to map the high-dimensional representation to a low-dimensional one.
- Vector: PCA, LDA
- Matrix: Rank-1 Decomposition (A. Shashua and A. Levin, 2001); our work (Xu et al., 2005; Yan et al., 2005; ...)
- 3rd-order Tensor: Tensorface (M. Vasilescu and D. Terzopoulos, 2002)

What are Gabor Features? A grayscale input image is convolved with Gabor wavelet kernels at five scales and eight orientations, producing 40 Gabor-filtered images as output. Gabor features can improve recognition performance in comparison to grayscale features (Chengjun Liu, T-IP, 2002).
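A sketch of such a 5-scale x 8-orientation filter bank using OpenCV; the wavelength/sigma parameterization below is illustrative, not the exact one in Liu's T-IP 2002 paper:

```python
import cv2
import numpy as np

def gabor_bank(image, n_scales=5, n_orientations=8):
    """Convolve a grayscale image with a Gabor filter bank,
    yielding n_scales * n_orientations filtered images (a 3rd-order tensor)."""
    responses = []
    for s in range(n_scales):
        lambd = 4.0 * (2 ** (s / 2.0))          # wavelength per scale (illustrative)
        for o in range(n_orientations):
            theta = o * np.pi / n_orientations  # orientation in [0, pi)
            kernel = cv2.getGaborKernel(
                ksize=(31, 31), sigma=lambd / 2.0,
                theta=theta, lambd=lambd, gamma=0.5)
            responses.append(cv2.filter2D(image, cv2.CV_32F, kernel))
    return np.stack(responses, axis=-1)         # shape (H, W, 40)

img = np.float32(np.random.rand(100, 100))      # stand-in for a face image
gabor_tensor = gabor_bank(img)                  # 100 x 100 x 40 tensor
```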

Why Represent Image Objects as Tensors instead of Vectors?
- Natural representation: gray-level images (2D structure), videos (3D structure), Gabor-filtered images (3D structure)
- Enhanced learnability in real applications: avoids the curse of dimensionality (a 100*100*40 Gabor-filtered image flattens to a 400,000-dimensional vector) and the small-sample-size problem (common face databases contain fewer than 5,000 images)
- Reduced computation cost

Concurrent Subspace Analysis (CSA) as an Example (Criterion: Optimal Reconstruction)
Dimensionality reduction maps an input sample $\mathcal{X}_i$ to a sample in the low-dimensional space, from which a reconstructed sample is computed. The projection matrices $U_k$ are chosen to minimize the total reconstruction error:
$(U_k^*)|_{k=1}^{n} = \arg\min_{U_k} \sum_i \| \mathcal{X}_i \times_1 (U_1 U_1^T) \times_2 \cdots \times_n (U_n U_n^T) - \mathcal{X}_i \|^2$
D. Xu, S. Yan, Lei Zhang, H. Zhang et al., CVPR 2005 and T-CSVT 2008
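A minimal sketch of the bilinear (2nd-order) case of this criterion, solved by alternating eigendecompositions; the published CSA algorithm handles tensors of arbitrary order and differs in details:

```python
import numpy as np

def csa_2d(samples, p, q, n_iters=10):
    """Alternating optimization for the 2D case: find column-orthonormal
    U (m x p) and V (n x q) minimizing sum_i ||X_i - U U^T X_i V V^T||_F^2."""
    m, n = samples[0].shape
    V = np.eye(n, q)                      # simple initialization
    for _ in range(n_iters):
        # Fix V, update U: top-p eigenvectors of sum_i X_i V V^T X_i^T
        M = sum(X @ V @ V.T @ X.T for X in samples)
        U = np.linalg.eigh(M)[1][:, -p:]
        # Fix U, update V: top-q eigenvectors of sum_i X_i^T U U^T X_i
        N = sum(X.T @ U @ U.T @ X for X in samples)
        V = np.linalg.eigh(N)[1][:, -q:]
    return U, V

samples = [np.random.rand(100, 100) for _ in range(50)]
U, V = csa_2d(samples, p=20, q=20)
low = [U.T @ X @ V for X in samples]      # 20 x 20 low-dimensional representations
```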

Tensorization - A New Research Direction: Other Related Works
- Discriminant Analysis with Tensor Representation (DATER): CVPR 2005 and T-IP 2007
- Coupled Kernel-based Subspace Learning (CKDA): CVPR 2005
- Rank-one Projections with Adaptive Margins (RPAM): CVPR 2006 and T-SMC-B 2007
- Enhancing Tensor Subspace Learning by Element Rearrangement: CVPR 2007 and T-PAMI 2009
- Discriminant Locally Linear Embedding with High Order Tensor Data (DLLE/T): T-SMC-B 2008
- Convergent 2D Subspace Learning with Null Space Analysis (NS2DLDA): T-CSVT 2008
- Semi-supervised Bilinear Subspace Learning: T-IP 2009
- Applications in human gait recognition -- CSA+DATER: T-CSVT 2006; Tensor Marginal Fisher Analysis (TMFA): T-IP 2007
Note: other researchers have also published several papers along this direction.

Tensorization - A New Research Direction
Tensorization in the Graph Embedding Framework (S. Yan, D. Xu, H. Zhang et al., CVPR 2005 and T-PAMI 2007; Google citations: 174 as of 15-Sep-2009). The framework organizes methods by the type of formulation:
- Direct graph embedding: PCA & LDA, ISOMAP, LLE, Laplacian Eigenmap
- Linearization: PCA, LDA, LPP, MFA
- Kernelization: KPCA, KDA, KMFA
- Tensorization: CSA, DATER, TMFA
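For concreteness, a sketch of the linearization case of the framework: with the projection $y = X^T w$, the embedding reduces to a generalized eigenproblem. The Laplacian construction and regularization term below are standard assumptions, not details from the slide:

```python
import numpy as np
from scipy.linalg import eigh

def linearized_graph_embedding(X, W, B, d):
    """Linearization in the graph embedding framework:
    min_w  w^T X L X^T w  s.t.  w^T X B X^T w = 1,
    where L = D - W is the Laplacian of the intrinsic graph and
    B is the constraint (penalty) matrix.
    X: (features x samples); W: (samples x samples) affinity."""
    L = np.diag(W.sum(axis=1)) - W
    A = X @ L @ X.T
    C = X @ B @ X.T + 1e-6 * np.eye(X.shape[0])  # small ridge for stability
    vals, vecs = eigh(A, C)                      # generalized eigenproblem
    return vecs[:, :d]                           # d smallest eigenvectors
```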

Element Rearrangement: Motivations
The success of tensor-based subspace learning relies on redundancy among the unfolded vectors. However, such correlation/redundancy is usually not strong for real data. Our solution: element rearrangement is employed as a preprocessing step to increase the intra-tensor correlations for existing tensor subspace learning methods.

Motivations - Continued
Intra-tensor correlations are correlations among the features within certain tensor dimensions, such as rows, columns, and Gabor features. Element rearrangement moves sets of highly correlated pixels into common columns, converting low intra-tensor correlation into high intra-tensor correlation.

Problem Definition
The task of enhancing correlation/redundancy within 2nd-order tensors is to search for a pixel rearrangement operator $R$ such that the rearranged samples are optimally reconstructable by a bilinear projection:
$\min_{R, U, V} \sum_i \| R(X_i) - U U^T R(X_i) V V^T \|^2$
where $R(X_i)$ is the rearranged matrix from sample $X_i$, and the column numbers of $U$ and $V$ are predefined. After the pixel rearrangement, we can use the rearranged tensors as input for tensor subspace learning algorithms!

Solution to the Pixel Rearrangement Problem (alternating optimization; see the sketch below):
1. Initialize $U_0$, $V_0$.
2. Compute the reconstructed matrices with the current $U$, $V$.
3. Optimize the operator $R$.
4. Optimize $U$ and $V$ on the rearranged samples.
5. Set $n = n + 1$ and repeat steps 2-4 until convergence.
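A sketch of this alternating loop, modeling $R$ as a global pixel permutation; `csa_2d` is the CSA sketch given earlier, and `optimize_R` is sketched under the next slide. This is an illustrative reading of the flowchart, not the authors' implementation:

```python
import numpy as np

def rearrangement_learning(samples, p, q, n_outer=5):
    """Alternate between (a) fitting U, V on the currently rearranged
    samples and (b) re-optimizing the rearrangement operator R."""
    perm = np.arange(samples[0].size)                # R as a permutation; start with identity
    for _ in range(n_outer):
        rearranged = [X.ravel()[perm].reshape(X.shape) for X in samples]
        U, V = csa_2d(rearranged, p, q)              # optimize U and V
        recon = [U @ U.T @ X @ V @ V.T for X in rearranged]
        perm_update = optimize_R(rearranged, recon)  # optimize R (see next slide)
        perm = perm[perm_update]                     # compose permutations
    return perm, U, V

# Small example (kept tiny: the assignment step scales with n_pixels^2):
samples = [np.random.rand(20, 20) for _ in range(30)]
perm, U, V = rearrangement_learning(samples, p=5, q=5)
```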

Step for Optimizing R
Optimizing $R$ is an integer programming problem: each pixel of the original matrix (the sender) must be assigned to a position of the reconstructed matrix (the receiver). It can be cast as an Earth Mover's Distance (EMD) transportation problem, and the linear programming relaxation of EMD is guaranteed to have an integer solution. For speedup, we constrain the rearrangement within a spatially local neighborhood or a feature-based neighborhood.
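With unit mass per pixel, this EMD instance specializes to a linear assignment problem, which scipy solves exactly with integer output. A sketch under that assumption (the published method adds the neighborhood constraints described above):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def optimize_R(samples, recons):
    """One R step as a linear assignment: send each original pixel to the
    target position whose reconstructed values it matches best, summed
    over all samples. cost[a, b] = sum_i (x_i[a] - r_i[b])^2."""
    src = np.stack([X.ravel() for X in samples])    # (n_samples, n_pixels)
    dst = np.stack([R.ravel() for R in recons])
    # Expanded squared distance; quadratic in n_pixels, so feasible only
    # for small images -- hence the paper's local-neighborhood constraint.
    cost = (src**2).sum(0)[:, None] - 2 * src.T @ dst + (dst**2).sum(0)[None, :]
    rows, cols = linear_sum_assignment(cost)        # exact integer solution
    perm = np.empty_like(cols)
    perm[cols] = rows      # perm[b] = source pixel placed at position b
    return perm
```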

[Figure slide: Convergence Speed]

[Figure slide: Rearrangement Results]

[Figure slides: Reconstruction Visualization]

[Figure slide: Classification Accuracy]

Summary
- Our papers published in CVPR 2005 were the first works to address dimensionality reduction with image objects represented as tensors of arbitrary order.
- Those papers opened a new research direction, along which we have published a series of works.
- Element rearrangement can further improve both data compression performance and classification accuracy.