Representative Previous Work


1 Representative Previous Work
PCA, LDA
ISOMAP: geodesic distance preserving (J. Tenenbaum et al., 2000)
LLE: local neighborhood relationship preserving (S. Roweis & L. Saul, 2000)
LE/LPP: local similarity preserving (M. Belkin, P. Niyogi et al., 2001, 2003)

2 Dimensionality Reduction Algorithms
Hundreds of algorithms exist: statistics-based (PCA/KPCA, LDA/KDA) and geometry-based (ISOMAP, LLE, LE/LPP), in both matrix and tensor forms.
Is there a common perspective to understand and explain these dimensionality reduction algorithms? Is there a unified formulation that they share? Is there a general tool to guide the development of new dimensionality reduction algorithms?

3 Our Answers
Type and examples:
Direct graph embedding: original PCA & LDA, ISOMAP, LLE, Laplacian Eigenmap
Linearization: PCA, LDA, LPP
Kernelization: KPCA, KDA
Tensorization: CSA, DATER
S. Yan, D. Xu, H. Zhang et al., CVPR 2005; T-PAMI 2007

4 Direct Graph Embedding
Intrinsic graph S and penalty graph S^P: similarity matrices whose edges encode similarity in the high-dimensional space.
L, B: Laplacian matrices derived from S and S^P.
Data live in the high-dimensional space and are embedded in a low-dimensional space (assumed one-dimensional here).

5 Direct Graph Embedding -- Continued
Intrinsic graph S and penalty graph S^P: similarity matrices; L, B: Laplacian matrices derived from S and S^P.
Data in the high-dimensional space and in the low-dimensional space (assumed one-dimensional here).
Criterion: preserve the graph similarity in the low-dimensional space (written out below).
Special case: B is the identity matrix (scale normalization).
Problem: direct graph embedding cannot handle new test data, since it provides no mapping for out-of-sample points.
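One way to write the graph-preserving criterion (a reconstruction from the definitions above, with W the similarity matrix of S, L its Laplacian, and B the constraint matrix built from the penalty graph or from scale normalization):

y^* = \arg\min_{y^\top B y = c} \sum_{i \neq j} \| y_i - y_j \|^2 W_{ij} = \arg\min_{y^\top B y = c} y^\top L y, \qquad L = D - W, \; D_{ii} = \sum_{j \neq i} W_{ij}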

6 Linearization
Intrinsic graph and penalty graph as before, with a linear mapping function from the data to the low-dimensional representation.
Objective function in linearization: the graph-preserving criterion is optimized over the linear projection instead of over free coordinates (see the formula below).
Problem: a linear mapping function may not be enough to preserve the real nonlinear structure.
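Assuming the linear map y = X^\top w (X the data matrix, w the projection vector), the linearized objective can be written as:

w^* = \arg\min_{w^\top X B X^\top w = c} \; w^\top X L X^\top w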

7 Kernelization
Intrinsic graph and penalty graph as before.
Nonlinear mapping: maps the original input space to a higher-dimensional Hilbert space.
Constraint and kernel matrix are defined in the mapped space.
Objective function in kernelization: the graph-preserving criterion is optimized over the expansion coefficients in the Hilbert space (see the formula below).
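Assuming the expansion w = \sum_i \alpha_i \phi(x_i) in the mapped samples, with kernel matrix K_{ij} = \phi(x_i) \cdot \phi(x_j), the kernelized objective can be written as:

\alpha^* = \arg\min_{\alpha^\top K B K \alpha = c} \; \alpha^\top K L K \alpha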

8 Tensorization
Intrinsic graph and penalty graph as before.
Each sample is represented as a tensor, and the low-dimensional representation is obtained by projecting the sample tensor along every mode.
Objective function in tensorization: the graph-preserving criterion is optimized over the per-mode projection vectors (see the sketch below).
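A sketch of the tensorized objective, assuming each sample is a tensor X_i projected by one vector w^k per mode (\times_k denotes the mode-k product):

y_i = X_i \times_1 w^1 \times_2 w^2 \cdots \times_n w^n, \qquad (w^1, \ldots, w^n)^* = \arg\min \sum_{i \neq j} \| y_i - y_j \|^2 W_{ij},

subject to the analogous normalization constraint defined by B.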

9 Common Formulation
Intrinsic graph S and penalty graph S^P: similarity matrices; L, B: Laplacian matrices derived from S and S^P.
Direct graph embedding, linearization, kernelization, and tensorization all share the same graph-preserving criterion; they differ only in how the low-dimensional representation is parameterized (free coordinates, linear projection, kernel expansion, or multilinear projection). A minimal solver sketch follows.
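In all four cases the optimum reduces to a generalized eigenvalue problem L v = lambda B v. A minimal sketch of the direct graph embedding case, assuming dense NumPy similarity matrices W and W_penalty (illustrative code, not the authors' implementation):

    import numpy as np
    from scipy.linalg import eigh

    def laplacian(W):
        """Graph Laplacian L = D - W, where D is the diagonal degree matrix."""
        return np.diag(W.sum(axis=1)) - W

    def direct_graph_embedding(W, W_penalty, dim):
        """Minimize y^T L y subject to y^T B y = const via L v = lambda B v."""
        L = laplacian(W)
        # A small ridge keeps B positive definite so eigh accepts it;
        # use B = np.eye(len(W)) for plain scale normalization instead.
        B = laplacian(W_penalty) + 1e-6 * np.eye(len(W))
        eigvals, eigvecs = eigh(L, B)      # generalized symmetric eigenproblem
        return eigvecs[:, :dim]            # directions with the smallest eigenvalues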

10 A General Framework for Dimensionality Reduction
Algorithm and embedding type (each algorithm is specified by its S & B definition together with the embedding type):
PCA / KPCA / CSA: L / K / T
LDA / KDA / DATER: L / K / T
ISOMAP: D
LLE: D
LE / LPP: D / L (with B = D)
D: direct graph embedding; L: linearization; K: kernelization; T: tensorization.

11 New Dimensionality Reduction Algorithm: Marginal Fisher Analysis
Important information for face recognition: 1) label information; 2) local manifold structure (neighborhood or margin).
Intrinsic graph: W_ij = 1 if x_i is among the k1 nearest neighbors of x_j in the same class; 0 otherwise.
Penalty graph: W^P_ij = 1 if the pair (i, j) is among the k2 shortest pairs across the data set (margin pairs between different classes); 0 otherwise.
A construction sketch follows.
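A hedged sketch of how the intrinsic and penalty graphs of MFA could be built (illustrative helper, not the authors' code; assumes Euclidean distances and integer class labels):

    import numpy as np

    def mfa_graphs(X, labels, k1, k2):
        """Intrinsic graph: same-class k1-nearest-neighbor pairs.
        Penalty graph: the k2 shortest between-class pairs (margin pairs)."""
        X, labels = np.asarray(X), np.asarray(labels)
        n = len(X)
        D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
        W = np.zeros((n, n))
        Wp = np.zeros((n, n))
        for i in range(n):
            same = np.where(labels == labels[i])[0]
            same = same[same != i]
            nn = same[np.argsort(D[i, same])[:k1]]
            W[i, nn] = W[nn, i] = 1                    # intrinsic edges
        between = labels[:, None] != labels[None, :]   # between-class mask
        order = np.argsort(D[between])[:2 * k2]        # each pair appears twice
        for i, j in np.argwhere(between)[order]:
            Wp[i, j] = Wp[j, i] = 1                    # penalty (margin) edges
        return W, Wp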

12 Marginal Fisher Analysis: Advantage
No Gaussian distribution assumption on the data of each class, unlike LDA.

13 Experiments: Face Recognition
ORL (G3/P7 / G4/P6):
PCA+LDA (Linearization): 87.9% / 88.3%
PCA+MFA (Ours): 89.3% / 91.3%
KDA (Kernelization): 87.5% / 91.7%
KMFA (Ours): 88.6% / 93.8%
DATER-2 (Tensorization): 92.0%
TMFA-2 (Ours): 95.0% / 96.3%

PIE-1 (G3/P7 / G4/P6):
PCA+LDA (Linearization): 65.8% / 80.2%
PCA+MFA (Ours): 71.0% / 84.9%
KDA (Kernelization): 70.0% / 81.0%
KMFA (Ours): 72.3% / 85.2%
DATER-2 (Tensorization): 80.0% / 82.3%
TMFA-2 (Ours): 82.1%

14 Summary
An optimization framework (graph embedding) that unifies previous dimensionality reduction algorithms as special cases.
A new dimensionality reduction algorithm derived from the framework: Marginal Fisher Analysis.

15 Event Recognition in News Video
Motivation: online and offline video search.
56 events are defined in LSCOM, e.g., Airplane Flying, Exiting Car, Riot.
Challenges: geometric and photometric variation, cluttered backgrounds, complex camera motion and object motion, and the events themselves are highly diverse.

16 Earth Mover’s Distance in Temporal Domain (T-MM, Under Review)
Key frames of two video clips from the class "riot".
EMD can efficiently utilize the information from multiple frames when comparing two clips; a minimal sketch follows.
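A minimal sketch of a temporal EMD between two clips, assuming each clip is summarized by key-frame feature vectors with uniform weights and using the POT library; this is only an illustration of the idea, not the formulation in the paper under review:

    import numpy as np
    import ot  # POT: Python Optimal Transport

    def clip_distance(F1, F2):
        """Earth Mover's Distance between two clips.
        F1: (m, d) key-frame features of clip 1; F2: (n, d) features of clip 2."""
        a = np.full(len(F1), 1.0 / len(F1))      # uniform key-frame weights
        b = np.full(len(F2), 1.0 / len(F2))
        M = ot.dist(F1, F2, metric='euclidean')  # pairwise ground distances
        return ot.emd2(a, b, M)                  # optimal flow cost = EMD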

17 Multi-level Pyramid Matching (CVPR 2007, Under Review)
One clip = several sub-clips (stages of event evolution), e.g., smoke followed by fire.
There is no prior knowledge about the number of stages in an event, and videos of the same event may include only a subset of the stages.
Solution: multi-level pyramid matching in the temporal domain, with level 0 matching whole clips and level 1 matching sub-clips (see the sketch below).
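A simplified sketch of the temporal pyramid idea, assuming the sub-clips at level l are the 2^l equal temporal segments and that aligned segments are compared with the clip_distance sketch above; the actual matching scheme in the paper may differ:

    import numpy as np

    def pyramid_distance(F1, F2, levels=2):
        """Average EMD over a temporal pyramid: level 0 = whole clips,
        level l = 2**l equal sub-clips matched in temporal order."""
        total = 0.0
        for level in range(levels):
            parts = 2 ** level
            pieces = zip(np.array_split(F1, parts), np.array_split(F2, parts))
            total += sum(clip_distance(s1, s2) for s1, s2 in pieces) / parts
        return total / levels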

18 Other Publications & Professional Activities
Kernel-based learning: Coupled Kernel-based Subspace Analysis, CVPR 2005; Fisher+Kernel Criterion for Discriminant Analysis, CVPR 2005.
Manifold learning: Nonlinear Discriminant Analysis on Embedding Manifold, T-CSVT (accepted).
Face verification: Face Verification with Balanced Thresholds, T-IP (accepted).
Multimedia: Insignificant Shadow Detection for Video Segmentation, T-CSVT 2005; Anchorperson Extraction for Picture-in-Picture News Video, PRL 2005.
Guest editor: special issue on Video Analysis, Computer Vision and Image Understanding; special issue on Video-based Object and Event Analysis, Pattern Recognition Letters.
Book editor: Semantic Mining Technologies for Multimedia Databases, Idea Group Inc.

19 Future Work
Research areas: machine learning, computer vision, pattern recognition, multimedia.
Directions: event recognition, biometrics, multimedia content analysis, web search.

20 Acknowledgement
Shuicheng Yan, UIUC
Steve Lin, Microsoft
Lei Zhang, Microsoft
Hong-Jiang Zhang, Microsoft
Zhengkai Liu, USTC
Xuelong Li, UK
Shih-Fu Chang, Columbia
Xiaoou Tang, Hong Kong

21 Thank You very much!

22 What Are Gabor Features?
Gabor features can improve recognition performance compared with grayscale features (Chengjun Liu, T-IP, 2002).
Gabor wavelet kernels at five scales and eight orientations.
Input: grayscale image. Output: 40 Gabor-filtered images. A filter-bank sketch follows.
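A sketch of a 5-scale, 8-orientation Gabor filter bank as described above, using OpenCV; the kernel size and wavelength values here are illustrative assumptions, not the parameters used in the cited work:

    import cv2
    import numpy as np

    def gabor_features(gray, scales=5, orientations=8):
        """Convolve a grayscale image with a 5 x 8 Gabor bank -> 40 response maps."""
        responses = []
        for s in range(scales):
            lambd = 4.0 * (2 ** (0.5 * s))             # assumed wavelengths
            for o in range(orientations):
                theta = o * np.pi / orientations       # orientation angle
                kernel = cv2.getGaborKernel((31, 31), sigma=0.5 * lambd,
                                            theta=theta, lambd=lambd,
                                            gamma=1.0, psi=0)
                responses.append(cv2.filter2D(gray, cv2.CV_32F, kernel))
        return np.stack(responses)                      # shape: (40, H, W)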

23 How to Utilize More Correlations?
Potential assumption in previous tensor-based subspace learning: intra-tensor correlations, i.e., correlations among the features within certain tensor dimensions, such as rows, columns, and Gabor features.
Pixel rearrangement groups sets of highly correlated pixels into columns so that these correlations can be exploited.

24 Tensor Representation: Advantages
1. Enhanced learnability
2. Appreciable reduction in computational cost
3. Large number of available projection directions
4. Utilizes the structure information
(Comparison of PCA and CSA in terms of feature dimension, sample number, and computational complexity; a small parameter-count example follows.)
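A tiny example of why the tensor (CSA-style) projection needs far fewer parameters than vectorized PCA, with assumed sizes:

    import numpy as np

    X = np.random.rand(64, 64)     # a 64 x 64 image kept as a 2nd-order tensor
    U1 = np.random.rand(64, 8)     # mode-1 (row) projection matrix
    U2 = np.random.rand(64, 8)     # mode-2 (column) projection matrix
    Y = U1.T @ X @ U2              # 8 x 8 low-dimensional representation
    # PCA on the vectorized image would need a 4096 x 64 projection
    # (262,144 parameters) versus 2 x (64 x 8) = 1,024 parameters here.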

25 Connection to Previous Work: Tensorface (M. Vasilescu and D. Terzopoulos, 2002)
From an algorithmic or mathematical view, CSA and Tensorface are both variants of the Rank-(R1, R2, ..., Rn) decomposition.
Tensorface vs. CSA:
Motivation: characterize external factors vs. characterize internal factors.
Input (gray-level image): vector vs. matrix.
Input (Gabor-filtered image / video sequence): not addressed vs. 3rd-order tensor.
When equal to PCA: when the number of images per person is one or a prime number vs. never.
Number of images per person for training: many images per person vs. one image per person.

