General Tensor Discriminant Analysis and Gabor Features for Gait Recognition by D. Tao, X. Li, and J. Maybank, TPAMI 2007 Presented by Iulian Pruteanu.


General Tensor Discriminant Analysis and Gabor Features for Gait Recognition by D. Tao, X. Li, and J. Maybank, TPAMI 2007 Presented by Iulian Pruteanu Duke University Machine Learning Group Friday, June 8 th, 2007

Outline
1. Introduction
2. Gabor representation
3. Linear discriminant analysis
4. General tensor discriminant analysis
5. Results (from the paper)
6. ISA vs. Gabor features
7. Results on video analysis
8. Conclusions

1. Introduction
1. The undersampled problem (USP): the dimensionality of the feature space is much higher than the number of training samples.
2. General tensor discriminant analysis (GTDA) as a preprocessing step for LDA has benefits compared with PCA or plain LDA: the USP is reduced and the discriminative information in the training tensors is preserved.
3. Gabor functions are used as a preprocessing step for feature extraction in image representation.
4. LDA is used for classification, combined with a dissimilarity measure: the distance between the gallery sequence and the probe sequence.

2. Gabor representation
A Gabor function is the product of an elliptical Gaussian envelope and a complex plane wave:

ψ(x, k) = (‖k‖² / σ²) exp(−‖k‖² ‖x‖² / (2σ²)) [exp(i k·x) − exp(−σ²/2)]

where x is the variable in the spatial domain and k is the frequency vector, which determines the scale and direction of the Gabor function.

[Figure: the real part of the Gabor functions (5 scales, 8 directions).]
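A minimal sketch of the 5-scale, 8-orientation filter bank, assuming the standard Gabor-wavelet form with the conventional parameter choices σ = 2π, k_max = π/2, and spacing factor √2 (the paper's exact constants may differ):

```python
import numpy as np

def gabor_kernel(size, scale, orientation, sigma=2 * np.pi,
                 k_max=np.pi / 2, f=np.sqrt(2)):
    """Gabor kernel: elliptical Gaussian envelope times a complex plane wave.

    scale in 0..4 and orientation in 0..7 select the frequency vector
    k = (k_max / f**scale) * exp(i * pi * orientation / 8).
    """
    k = (k_max / f ** scale) * np.exp(1j * np.pi * orientation / 8)
    kx, ky = k.real, k.imag
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    k2, r2 = kx ** 2 + ky ** 2, x ** 2 + y ** 2
    envelope = (k2 / sigma ** 2) * np.exp(-k2 * r2 / (2 * sigma ** 2))
    # Subtracting exp(-sigma^2 / 2) makes the plane wave DC-free.
    wave = np.exp(1j * (kx * x + ky * y)) - np.exp(-sigma ** 2 / 2)
    return envelope * wave

# Bank of 40 filters: 5 scales x 8 orientations.
bank = [gabor_kernel(31, u, v) for u in range(5) for v in range(8)]
```

Convolving each gait image with all 40 kernels and keeping the magnitudes yields the Gabor feature tensor used later.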

2. Gabor representation (contd.)

3. Linear discriminant analysis
Given a number of training samples x_ij in known classes, where i = 1, …, c is the class index and j = 1, …, n_i is the sample index within class i, the aim of LDA is to find a projection W of the samples which is optimal for separating the different classes in a low-dimensional space.

We define two scatter matrices (m_i is the mean of class i, m is the overall mean):

between-class: S_b = Σ_i n_i (m_i − m)(m_i − m)^T
within-class: S_w = Σ_i Σ_j (x_ij − m_i)(x_ij − m_i)^T

The projection is chosen such that

W* = argmax_W |W^T S_b W| / |W^T S_w W|
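The scatter matrices and the maximizing projection above can be sketched directly in NumPy; this is a generic LDA fit, not the paper's implementation (a least-squares solve stands in for inverting S_w for numerical safety):

```python
import numpy as np

def lda(X, y, dim):
    """Return a (d, dim) projection W maximizing between/within scatter ratio.

    X: (n_samples, n_features) data matrix; y: integer class labels.
    """
    X = np.asarray(X, float)
    mean = X.mean(axis=0)
    d = X.shape[1]
    Sb = np.zeros((d, d))
    Sw = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sb += len(Xc) * np.outer(mc - mean, mc - mean)  # between-class scatter
        Sw += (Xc - mc).T @ (Xc - mc)                   # within-class scatter
    # Solve the generalized eigenproblem Sb w = lambda Sw w.
    evals, evecs = np.linalg.eig(np.linalg.lstsq(Sw, Sb, rcond=None)[0])
    order = np.argsort(evals.real)[::-1]
    return evecs[:, order[:dim]].real
```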

3. Linear discriminant analysis (contd.)
If c = 2, LDA reduces to the Fisher linear discriminant, and the solution w corresponds to the largest eigenvalue of the following equation:

S_b w = λ S_w w

i.e. the leading eigenvector of S_w⁻¹ S_b.
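In the two-class case the leading eigenvector has the well-known closed form w ∝ S_w⁻¹(m₁ − m₂), so no eigensolver is needed; a small sketch:

```python
import numpy as np

def fisher_direction(X1, X2):
    """Two-class Fisher discriminant direction: w proportional to Sw^-1 (m1 - m2)."""
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    # Pooled within-class scatter of the two classes.
    Sw = (X1 - m1).T @ (X1 - m1) + (X2 - m2).T @ (X2 - m2)
    w = np.linalg.solve(Sw, m1 - m2)
    return w / np.linalg.norm(w)
```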

4. General tensor discriminant analysis
General tensor discriminant analysis (GTDA) allows us to choose the optimal reduction of the feature space. The projection matrix has a number of columns calculated in order to get the best performance: if we want to extract d features, we estimate the projection U from the eigenvectors associated with the d largest eigenvalues of S_b − ζ S_w, where ζ is a tuning parameter.
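The eigenvector selection for one mode can be sketched as follows, assuming S_b and S_w are the (symmetric) mode scatter matrices and ζ is given:

```python
import numpy as np

def gtda_projection(Sb, Sw, dim, zeta):
    """Top-`dim` eigenvectors of (Sb - zeta * Sw), the differential
    scatter criterion used by GTDA for one tensor mode."""
    evals, evecs = np.linalg.eigh(Sb - zeta * Sw)
    order = np.argsort(evals)[::-1]  # largest eigenvalues first
    return evecs[:, order[:dim]]
```

Because S_b − ζ S_w is symmetric, `eigh` returns orthonormal eigenvectors, so U has orthonormal columns by construction.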

4. General tensor discriminant analysis (contd.)
Alternating projection optimization procedure for GTDA:
Step 1: fix the projection matrices of all modes but one, and compute that mode's between-class and within-class scatter matrices from the partially projected training tensors.
Step 2: update that mode's projection matrix from the eigenvectors of the largest eigenvalues of S_b − ζ S_w, and cycle through the modes.
Convergence: stop when the projection matrices change by less than a threshold between consecutive steps, where c is the number of classes and t is the current step. The tuning parameter ζ and the dimension of the output tensors are determined automatically.
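A hedged sketch of the alternating procedure for the two-mode (matrix-sample) case, with a fixed ζ and a fixed iteration count rather than the paper's automatic tuning and convergence test:

```python
import numpy as np

def mode_scatters(mats, y):
    """Between/within scatter of a list of same-shape matrix unfoldings."""
    A = np.stack(mats)            # (n_samples, d, r)
    mean = A.mean(axis=0)
    d = A.shape[1]
    Sb = np.zeros((d, d))
    Sw = np.zeros((d, d))
    for c in np.unique(y):
        Ac = A[y == c]
        mc = Ac.mean(axis=0)
        Sb += len(Ac) * (mc - mean) @ (mc - mean).T
        for M in Ac:
            Sw += (M - mc) @ (M - mc).T
    return Sb, Sw

def top_eigvecs(S, k):
    evals, evecs = np.linalg.eigh(S)
    return evecs[:, np.argsort(evals)[::-1][:k]]

def gtda_2mode(samples, labels, dims, zeta=1.0, n_iter=5):
    """Alternate between the row-mode and column-mode projections."""
    h, w = samples[0].shape
    y = np.asarray(labels)
    U1, U2 = np.eye(h)[:, :dims[0]], np.eye(w)[:, :dims[1]]
    for _ in range(n_iter):
        # Mode 1: project columns with U2, then solve the row-mode eigenproblem.
        Sb, Sw = mode_scatters([X @ U2 for X in samples], y)
        U1 = top_eigvecs(Sb - zeta * Sw, dims[0])
        # Mode 2: project rows with U1, then solve the column-mode eigenproblem.
        Sb, Sw = mode_scatters([X.T @ U1 for X in samples], y)
        U2 = top_eigvecs(Sb - zeta * Sw, dims[1])
    return U1, U2
```

The same pattern extends to higher-order tensors by unfolding along each mode in turn.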

5. Results (from the paper)
The experiments are carried out on the USF HumanID outdoor gait database (1,870 sequences from 122 subjects). For algorithm training, the database provides a gallery containing all 122 subjects, collected at a separate moment in time. For testing, the dissimilarity measure is used: the distance between the gallery sequence and the probe sequence.
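A simplified sketch of that test protocol: each probe is assigned the identity of the nearest gallery entry in feature space. The identity labels here are illustrative, and a single Euclidean distance per sequence stands in for the paper's sequence-to-sequence dissimilarity:

```python
import numpy as np

def classify_probe(probe_feat, gallery_feats, gallery_ids):
    """Assign the probe the identity of the gallery entry whose
    feature vector is closest in Euclidean distance."""
    dists = [np.linalg.norm(probe_feat - g) for g in gallery_feats]
    return gallery_ids[int(np.argmin(dists))]
```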

6. ISA vs. Gabor features
Gabor functions: we use Gabor functions with five different scales and eight different orientations, making a total of forty Gabor functions.
GTDA + Gabor features:
- original Gabor features: dim = 80 x 60 x 5 x 8
- GaborSD features: dim = 80 x 60
- GTDA + GaborSD features: dim = 10 x 6
GTDA + ISA features:
- original ISA features: dim = 40
- GTDA + ISA features: dim = 32

[Figure: comparison of GTDA + Gabor features (dim = 60), GTDA + ISA features (dim = 32), and ISA features (dim = 40).]

[Figure: further comparison of GTDA + Gabor features, GTDA + ISA features, and ISA features.]

Conclusions
1. Gabor functions and general tensor discriminant analysis have been introduced for visual information processing and recognition.
2. The tensor gait representation is introduced to represent the Gabor features.
3. To further take feature selection into account, the size of the tensor gait is reduced by GTDA.
4. Gabor features and ISA are compared for abnormal event detection.