Graph Embedding: A General Framework for Dimensionality Reduction
Dong Xu, School of Computer Engineering, Nanyang Technological University

What is Dimensionality Reduction?
Examples: mapping a 2D space to a 1D space (illustrated with PCA and LDA).

What is Dimensionality Reduction?
Example: mapping a 3D space to a 2D space.
ISOMAP: geodesic distance preserving (J. Tenenbaum et al., 2000).

Why Conduct Dimensionality Reduction?
- Uncover the intrinsic structure of the data (e.g., pose and expression variation in face images; LPP, He et al., 2003)
- Visualization
- Feature extraction
- Computational efficiency
Broad applications: face recognition, human gait recognition, and content-based image retrieval (CBIR).

Representative Previous Work
- ISOMAP: geodesic distance preserving (J. Tenenbaum et al., 2000)
- LLE: local neighborhood relationship preserving (S. Roweis & L. Saul, 2000)
- LE/LPP: local similarity preserving (M. Belkin, P. Niyogi et al., 2001, 2003)
- Linear baselines: PCA and LDA
The three geometry-based methods are sketched in code below.
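As a quick illustration (not part of the original talk), all three methods are available in scikit-learn; the swiss-roll data set and parameter values here are arbitrary choices for demonstration:

```python
import numpy as np
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap, LocallyLinearEmbedding, SpectralEmbedding

# A 3-D manifold data set, reduced to 2-D by each method.
X, _ = make_swiss_roll(n_samples=1000, random_state=0)

Y_isomap = Isomap(n_neighbors=10, n_components=2).fit_transform(X)                # geodesic distances
Y_lle = LocallyLinearEmbedding(n_neighbors=10, n_components=2).fit_transform(X)   # local reconstructions
Y_le = SpectralEmbedding(n_components=2, n_neighbors=10).fit_transform(X)         # Laplacian eigenmaps
```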

Dimensionality Reduction Algorithms
- Is there a common perspective from which to understand and explain these dimensionality reduction algorithms, or a unified formulation shared by them?
- Is there a general tool to guide the development of new dimensionality reduction algorithms?
Hundreds of algorithms exist, spanning statistics-based methods (PCA/KPCA, LDA/KDA, ...), geometry-based methods (ISOMAP, LLE, LE/LPP, ...), and both matrix and tensor data representations.

Our Answers
The graph embedding framework (S. Yan, D. Xu, H. Zhang et al., CVPR 2005 and T-PAMI 2007; 174 Google citations as of 15-Sep-2009):

Type                      Example algorithms
Direct graph embedding    Original PCA & LDA, ISOMAP, LLE, Laplacian Eigenmap
Linearization             PCA, LDA, LPP
Kernelization             KPCA, KDA
Tensorization             CSA, DATER

Direct Graph Embedding
Data in the high-dimensional space and the low-dimensional space (assumed to be 1D here): $X = [x_1, x_2, \ldots, x_N]$ and $y = [y_1, y_2, \ldots, y_N]^\top$.
Intrinsic graph: similarity matrix $S$, whose entries $S_{ij}$ (graph edge weights) measure the similarity of $x_i$ and $x_j$ in the high-dimensional space.
Penalty graph: similarity matrix $S^p$, whose edges encode relationships to be suppressed.
$L$, $B$: Laplacian matrices computed from $S$ and $S^p$, e.g., $L = D - S$ with $D_{ii} = \sum_j S_{ij}$.

Direct Graph Embedding (continued)
Criterion to preserve graph similarity:
$y^* = \arg\min_{y^\top B y = d} \sum_{i \ne j} \|y_i - y_j\|^2 S_{ij} = \arg\min_{y^\top B y = d} y^\top L y$,
where $L$ and $B$ are the Laplacian matrices of the intrinsic graph $S$ and the penalty graph $S^p$.
Special case: $B$ is the identity matrix (scale normalization).
Problem: direct graph embedding produces an embedding only for the training samples; it cannot handle new test data.
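A minimal sketch of the direct formulation, assuming a precomputed similarity matrix S; the function name and the dense eigensolver are illustrative choices, not from the talk:

```python
import numpy as np
from scipy.linalg import eigh

def direct_graph_embedding(S, B=None, dim=1):
    """Solve min_y y^T L y s.t. y^T B y = const as the generalized
    eigenproblem L y = lambda B y (B must be positive definite here)."""
    D = np.diag(S.sum(axis=1))
    L = D - S                    # Laplacian of the intrinsic graph
    if B is None:
        B = np.eye(S.shape[0])   # scale-normalization special case B = I
    vals, vecs = eigh(L, B)      # eigenvalues returned in ascending order
    return vecs[:, 1:dim + 1]    # skip the trivial constant eigenvector
```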

Linearization
Linear mapping function: $y_i = w^\top x_i$.
Objective function under linearization:
$w^* = \arg\min_{w^\top X B X^\top w = d} w^\top X L X^\top w$,
with the same intrinsic graph $L$ and penalty graph $B$ as before.
Problem: a linear mapping function may not be sufficient to preserve the real nonlinear structure of the data.
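Continuing the sketch above (names again illustrative), linearization only changes the matrices in the eigenproblem; the small ridge term is an assumption I add to keep the constraint matrix invertible:

```python
import numpy as np
from scipy.linalg import eigh

def linear_graph_embedding(X, S, B=None, dim=1, reg=1e-6):
    """X holds one sample per column; returns a projection W with y = W^T x."""
    N = X.shape[1]
    L = np.diag(S.sum(axis=1)) - S
    if B is None:
        B = np.eye(N)
    A = X @ L @ X.T
    C = X @ B @ X.T + reg * np.eye(X.shape[0])  # ridge keeps C positive definite
    vals, vecs = eigh(A, C)                     # X L X^T w = lambda X B X^T w
    return vecs[:, :dim]                        # applies to unseen test data too
```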

Kernelization
Map the original input space to a higher-dimensional Hilbert space via a nonlinear mapping $\phi: x \mapsto \mathcal{F}$.
Kernel matrix: $K_{ij} = k(x_i, x_j) = \phi(x_i)^\top \phi(x_j)$.
Constraint: the projection lies in the span of the mapped data, $w = \sum_i \alpha_i \phi(x_i)$.
Objective function under kernelization:
$\alpha^* = \arg\min_{\alpha^\top K B K \alpha = d} \alpha^\top K L K \alpha$,
again with intrinsic graph $L$ and penalty graph $B$.
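A corresponding sketch of the kernelized solver; the RBF kernel and the `gamma`/`reg` parameters are my illustrative choices:

```python
import numpy as np
from scipy.linalg import eigh

def kernel_graph_embedding(X, S, B=None, dim=1, gamma=1.0, reg=1e-6):
    """X holds one sample per column; returns the embedded training data."""
    N = X.shape[1]
    sq = ((X[:, :, None] - X[:, None, :]) ** 2).sum(axis=0)
    K = np.exp(-gamma * sq)                 # RBF kernel matrix K_ij = k(x_i, x_j)
    L = np.diag(S.sum(axis=1)) - S
    if B is None:
        B = np.eye(N)
    A = K @ L @ K
    C = K @ B @ K + reg * np.eye(N)         # ridge keeps C positive definite
    vals, alphas = eigh(A, C)               # K L K a = lambda K B K a
    return K @ alphas[:, :dim]              # y = K alpha
```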

Tensorization
The low-dimensional representation of a tensor sample $\mathcal{X}_i$ is obtained as:
$y_i = \mathcal{X}_i \times_1 w^{(1)} \times_2 w^{(2)} \cdots \times_n w^{(n)}$.
Objective function under tensorization:
$(w^{(1)}, \ldots, w^{(n)})^* = \arg\min \sum_{i \ne j} \|\mathcal{X}_i \times_1 w^{(1)} \cdots \times_n w^{(n)} - \mathcal{X}_j \times_1 w^{(1)} \cdots \times_n w^{(n)}\|^2 S_{ij}$,
subject to the corresponding constraint defined by $B$ (intrinsic graph $S$, penalty graph $S^p$).
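For second-order tensors (images as matrices, so $y_i = w^{(1)\top} X_i w^{(2)}$), the objective is commonly optimized by alternating over the two modes, each step reducing to the linearized problem. This sketch reuses the `linear_graph_embedding` helper above; the initialization and iteration count are arbitrary, and in general the procedure reaches only a local optimum:

```python
import numpy as np

def tensor_graph_embedding_2d(Xs, S, B=None, n_iter=10):
    """Xs: list of (m, n) sample matrices; returns mode-1 and mode-2 projections."""
    m, n = Xs[0].shape
    w1 = np.ones(m) / np.sqrt(m)   # simple uniform initialization
    w2 = np.ones(n) / np.sqrt(n)
    for _ in range(n_iter):
        # Fix w2: each sample collapses to the vector X_i w2; solve for w1.
        X1 = np.stack([Xi @ w2 for Xi in Xs], axis=1)    # shape (m, N)
        w1 = linear_graph_embedding(X1, S, B, dim=1)[:, 0]
        # Fix w1: each sample collapses to X_i^T w1; solve for w2.
        X2 = np.stack([Xi.T @ w1 for Xi in Xs], axis=1)  # shape (n, N)
        w2 = linear_graph_embedding(X2, S, B, dim=1)[:, 0]
    return w1, w2
```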

Common Formulation
All four types share one graph-preserving criterion, differing only in the search space:
- Direct graph embedding: $y^* = \arg\min_{y^\top B y = d} y^\top L y$
- Linearization: $w^* = \arg\min_{w^\top X B X^\top w = d} w^\top X L X^\top w$
- Kernelization: $\alpha^* = \arg\min_{\alpha^\top K B K \alpha = d} \alpha^\top K L K \alpha$
- Tensorization: $(w^{(1)}, \ldots, w^{(n)})^*$ minimizes the same criterion over multilinear projections
where $S$, $S^p$ are the similarity matrices of the intrinsic and penalty graphs, and $L$, $B$ are the Laplacian matrices computed from $S$ and $S^p$.

A General Framework for Dimensionality Reduction

Algorithm        S & B definition                                                              Embedding type
PCA/KPCA/CSA     $S_{ij} = 1/N$ ($i \ne j$); $B = I$                                           L/K/T
LDA/KDA/DATER    $S_{ij} = \delta_{c_i, c_j}/n_{c_i}$; penalty graph $S^p_{ij} = 1/N$          L/K/T
ISOMAP           $S$ derived from the geodesic distance matrix; $B = I$                        D
LLE              $L = (I - W)^\top (I - W)$ from reconstruction weights $W$; $B = I$           D
LLE/LPP          $S_{ij} = \exp(-\|x_i - x_j\|^2/t)$ if $x_i$, $x_j$ are neighbors; $B = D$    D/L

(D: direct graph embedding; L: linearization; K: kernelization; T: tensorization.)
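As one concrete instance of the table, here is a sketch of the LE/LPP row, building $S$ and $B = D$; feeding them to the `linear_graph_embedding` helper above gives LPP, and to `direct_graph_embedding` gives Laplacian Eigenmaps (parameter values illustrative):

```python
import numpy as np

def lpp_graph(X, k=5, t=1.0):
    """X holds one sample per column; returns (S, B) for the LE/LPP row."""
    N = X.shape[1]
    dist2 = ((X[:, :, None] - X[:, None, :]) ** 2).sum(axis=0)
    S = np.zeros((N, N))
    for i in range(N):
        nn = np.argsort(dist2[i])[1:k + 1]          # k nearest neighbors, skip self
        S[i, nn] = S[nn, i] = np.exp(-dist2[i, nn] / t)
    B = np.diag(S.sum(axis=1))                      # B = D, the degree matrix
    return S, B
```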

New Dimensionality Reduction Algorithm: Marginal Fisher Analysis
Important information for face recognition:
1) Label information
2) Local manifold structure (neighborhood or margin)
Intrinsic graph: $S_{ij} = 1$ if $x_i$ is among the $k_1$ nearest neighbors of $x_j$ in the same class; $0$ otherwise.
Penalty graph: $S^p_{ij} = 1$ if the pair $(i, j)$ is among the $k_2$ shortest between-class pairs in the data set; $0$ otherwise.
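A sketch of the two MFA graph constructions, assuming samples as columns of X with a 1-D integer label array; reading the $k_2$ shortest pairs as each sample's $k_2$ nearest between-class neighbors is my interpretation, not a definitive implementation:

```python
import numpy as np

def mfa_graphs(X, labels, k1=5, k2=10):
    """Return (S, Sp): intrinsic and penalty graph adjacency matrices."""
    N = X.shape[1]
    dist = np.linalg.norm(X[:, :, None] - X[:, None, :], axis=0)
    S = np.zeros((N, N))   # intrinsic graph: same-class neighbors
    Sp = np.zeros((N, N))  # penalty graph: marginal between-class pairs
    for i in range(N):
        same = np.where(labels == labels[i])[0]
        diff = np.where(labels != labels[i])[0]
        nn_same = same[np.argsort(dist[i, same])][1:k1 + 1]  # skip self
        nn_diff = diff[np.argsort(dist[i, diff])][:k2]
        S[i, nn_same] = S[nn_same, i] = 1
        Sp[i, nn_diff] = Sp[nn_diff, i] = 1
    return S, Sp
```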

Marginal Fisher Analysis: Advantage
No Gaussian distribution assumption: unlike LDA, inter-class separability is characterized by the margin between classes rather than by class means and scatter.

Experiments: Face Recognition
Recognition accuracy on the PIE-1 and ORL face databases (Gx/Py: x gallery and y probe images per subject):

PIE-1                      G3/P7    G4/P6
PCA+LDA (Linearization)    65.8%    80.2%
PCA+MFA (Ours)             71.0%    84.9%
KDA (Kernelization)        70.0%    81.0%
KMFA (Ours)                72.3%    85.2%
DATER-2 (Tensorization)    80.0%    82.3%
TMFA-2 (Ours)              82.1%    85.2%

ORL                        G3/P7    G4/P6
PCA+LDA (Linearization)    87.9%    88.3%
PCA+MFA (Ours)             89.3%    91.3%
KDA (Kernelization)        87.5%    91.7%
KMFA (Ours)                88.6%    93.8%
DATER-2 (Tensorization)    89.3%    92.0%
TMFA-2 (Ours)              95.0%    96.3%

Summary
- An optimization framework, graph embedding, that unifies previous dimensionality reduction algorithms as special cases.
- A new dimensionality reduction algorithm derived from the framework: Marginal Fisher Analysis.