Robust Incremental Linear Discriminant Analysis Learning by Autonomous Outlier Detection

Martina Uray, Heinz Mayer (Joanneum Research Graz, Institute of Digital Image Processing)
Horst Bischof (Graz University of Technology, Institute for Computer Graphics and Vision)

2 Overview

- Motivation
- Standard LDA
  - How is it defined?
  - What is its purpose?
  - Where are the drawbacks?
- Robust Incremental LDA
  - What is the idea?
  - How is it constructed?
  - Why does it work?
- Results
  - What type of noise can be handled?
  - How is the classification performance?
  - Does the data itself influence the efficiency?
- Conclusion

3 Motivation

- Goal
  - Find an appropriate description of images
  - Enable recognition of trained objects
- Properties of LDA
  - Stores only image representations
  - Allows the original images to be discarded
  - Enables simple classification
- Incorporate new information
  - Update instead of constructing from scratch
  - Discard images after addition
- Handle noisy data (occlusions)

4 Development

- Idea
  - Incorporate some reconstructive information
  - Adapt robust PCA approaches
- Deviation
  - Automatic outlier detection
  - Subspace update from inliers only
- Final achievement
  - A reliable subspace

[Figure: training and test data, with non-robust vs. robust results]

5 Linear Discriminant Analysis

Within-class scatter: $S_W = \sum_{j=1}^{c} \sum_{x_i \in C_j} (x_i - \mu_j)(x_i - \mu_j)^T$
Between-class scatter: $S_B = \sum_{j=1}^{c} n_j (\mu_j - \mu)(\mu_j - \mu)^T$
where $\mu$ is the data mean and $\mu_j$ the mean of class $C_j$.

Minimizing the Fisher criterion $|W^T S_W W| \, / \, |W^T S_B W|$ leads to the generalized eigenvalue problem $S_B w = \lambda S_W w$, whose leading eigenvectors form the LDA projection (a code sketch follows).
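As a concrete illustration, here is a minimal NumPy/SciPy sketch of the scatter matrices and the generalized eigenproblem above; the function name, the data layout, and the small ridge added to $S_W$ for numerical stability are our assumptions, not part of the original slides.

```python
# Minimal sketch: LDA scatter matrices and the generalized
# eigenproblem Sb w = lambda Sw w (names are illustrative).
import numpy as np
from scipy.linalg import eigh

def lda_basis(X, y):
    """X: (n_samples, n_features), y: integer class labels."""
    classes = np.unique(y)
    d = X.shape[1]
    mu = X.mean(axis=0)              # data mean
    Sw = np.zeros((d, d))            # within-class scatter
    Sb = np.zeros((d, d))            # between-class scatter
    for c in classes:
        Xc = X[y == c]
        mu_c = Xc.mean(axis=0)       # class mean
        Sw += (Xc - mu_c).T @ (Xc - mu_c)
        diff = (mu_c - mu)[:, None]
        Sb += len(Xc) * (diff @ diff.T)
    # Small ridge keeps Sw positive definite (our addition).
    evals, evecs = eigh(Sb, Sw + 1e-6 * np.eye(d))
    order = np.argsort(evals)[::-1]  # largest eigenvalues first
    return evecs[:, order[: len(classes) - 1]]
```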

6 The Augmented Basis I

- LDA efficiently separates the data → good classification
- PCA approximates the data → good reconstruction
- Combine the reconstructive model with the discriminative classifier: embed LDA learning and classification into a PCA framework (Fidler et al., PAMI'06)
- The first k << n principal vectors contain most of the visual variability

7 The Augmented Basis II

- Augment the k-dimensional PCA subspace with c-1 additional basis vectors
- This keeps the discriminative information (one possible construction is sketched below)
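A hedged sketch of one way such an augmented basis could be built: take the first k principal vectors and append the LDA directions, orthogonalized against them. The exact construction in Fidler et al. (PAMI'06) may differ in detail; all names here are illustrative.

```python
# Hedged sketch: augment the first k principal vectors with c-1
# extra vectors that retain the LDA directions.
import numpy as np

def augmented_basis(U_pca, W_img, k):
    """U_pca: (d, n) principal vectors; W_img: (d, c-1) LDA
    directions lifted to image space; returns (d, k + c-1) basis."""
    U_k = U_pca[:, :k]                     # reconstructive part
    resid = W_img - U_k @ (U_k.T @ W_img)  # part U_k does not span
    Q, _ = np.linalg.qr(resid)             # orthonormalize it
    return np.hstack([U_k, Q])
```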

8 Robust Incremental LDA

- Robust learning
  - The basis for reliable recognition
  - Two types of outliers (due to non-optimal conditions during data acquisition):
    - Global noise: outlying images
    - Local noise: outlying pixels (non-Gaussian noise)
- Incremental learning
  - No recalculation of the model: build representations incrementally (update the representation)
  - Not all images are given in advance
  - Keep only image representations (discard the training images)

9 Outlier Detection I

A) Creation of N hypotheses
1. Choose a subset of pixels from the input image
2. Iterate (until the error is small):
   - Calculate the aPCA coefficient from the current subset
   - Reconstruct the image
   - Remove the pixels with the largest reconstruction error
3. Calculate the aPCA coefficient

B) Selection of the best hypothesis (both stages are sketched below)
1. Calculate the reconstruction error for each hypothesis
2. Count the values exceeding a predefined threshold
3. Select the hypothesis with the smallest error count
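A possible implementation of this hypothesize-and-select stage, assuming flattened image vectors, an augmented basis `B` with mean image `mu`, and least-squares coefficient estimation; the subset size, iteration count, and drop fraction are illustrative assumptions.

```python
# Sketch of steps A and B (hypothesize-and-select).
import numpy as np

def make_hypothesis(x, B, mu, n_iter=10, drop_frac=0.1, rng=None):
    rng = np.random.default_rng(rng)
    # A1: random pixel subset (subset size is our assumption)
    idx = rng.choice(len(x), size=3 * B.shape[1], replace=False)
    for _ in range(n_iter):
        # A2: coefficients from the current subset, via least squares
        a, *_ = np.linalg.lstsq(B[idx], x[idx] - mu[idx], rcond=None)
        err = np.abs(B[idx] @ a + mu[idx] - x[idx])
        # drop the pixels with the largest reconstruction error
        idx = idx[err.argsort()[: int(len(idx) * (1 - drop_frac))]]
    # A3: final coefficient estimate for this hypothesis
    a, *_ = np.linalg.lstsq(B[idx], x[idx] - mu[idx], rcond=None)
    return a

def select_hypothesis(x, B, mu, hypotheses, thresh):
    # B1-B3: count pixels whose reconstruction error exceeds the
    # threshold; keep the hypothesis with the fewest violations.
    counts = [np.sum(np.abs(B @ a + mu - x) > thresh) for a in hypotheses]
    return hypotheses[int(np.argmin(counts))]
```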

10 Outlier Detection II

C) Iterate (until the error is small):
1. Reconstruct the image
2. Remove the pixels with the largest reconstruction error
3. Recalculate the aPCA coefficient

D) Finalize (see the sketch after this list):
1. Reconstruct the image
2. Calculate the reconstruction error
3. Determine the outliers using the predefined threshold
4. Replace the missing pixels
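A matching sketch of this refinement and finalization stage: re-estimate the coefficients on a shrinking inlier set, flag outlier pixels against the threshold, and impute them from the reconstruction. The stopping rule and parameter names are our assumptions.

```python
# Sketch of steps C and D: refine, then flag and inpaint outliers.
import numpy as np

def refine_and_inpaint(x, B, mu, a, thresh, n_iter=10, drop_frac=0.05):
    inliers = np.arange(len(x))
    for _ in range(n_iter):
        err = np.abs(B[inliers] @ a + mu[inliers] - x[inliers])
        if err.max() < thresh:          # error already small: stop
            break
        # C2: remove the pixels with the largest reconstruction error
        inliers = inliers[err.argsort()[: int(len(inliers) * (1 - drop_frac))]]
        # C3: recalculate the coefficient from the remaining pixels
        a, *_ = np.linalg.lstsq(B[inliers], x[inliers] - mu[inliers],
                                rcond=None)
    recon = B @ a + mu                      # D1: full reconstruction
    outliers = np.abs(recon - x) > thresh   # D2-D3: outlier mask
    return np.where(outliers, recon, x), outliers  # D4: inpaint
```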

11 Update I

1. Project the outlier-free image into the current eigenspace
2. The difference between the reconstruction and the input image is orthogonal to the current eigenspace
3. Enlarge the eigenspace with the normalized residual

(These three steps are sketched below.)
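A minimal sketch of these three steps, assuming a column-orthonormal basis `U` and mean image `mu`; the orthogonality in step 2 is what lets the residual be appended directly.

```python
# Sketch of steps 1-3: append the normalized projection residual.
import numpy as np

def enlarge_eigenspace(x_clean, U, mu):
    a = U.T @ (x_clean - mu)        # 1. project into the eigenspace
    r = (x_clean - mu) - U @ a      # 2. residual, orthogonal to U
    norm = np.linalg.norm(r)
    if norm > 1e-10:                # skip if the image is fully spanned
        U = np.hstack([U, (r / norm)[:, None]])  # 3. enlarge the basis
        a = np.append(a, norm)
    return U, a
```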

12 Update II

4. Perform PCA on the current representation → new principal axes
5. Project the coefficient vectors into the new eigenspace
6. Rotate the current basis vectors to match the new basis vectors
7. Perform LDA on the coefficient vectors in the new eigenspace

In short: update the PCA space, then perform LDA (sketched below).
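A sketch of steps 4-7, reusing `lda_basis` from the earlier sketch; `A` stacks the stored coefficient vectors column-wise. A full implementation would also update the subspace mean, which this sketch omits.

```python
# Sketch of steps 4-7: rotate the basis and rerun LDA.
import numpy as np

def update_subspace(U, A, labels):
    A_mean = A.mean(axis=1, keepdims=True)
    # 4. PCA on the coefficient representation -> new axes R
    _, _, Vt = np.linalg.svd((A - A_mean).T, full_matrices=False)
    R = Vt.T
    A_new = R.T @ (A - A_mean)   # 5. coefficients in the new eigenspace
    U_new = U @ R                # 6. rotate the current basis vectors
    W = lda_basis(A_new.T, labels)  # 7. LDA on the new coefficients
    return U_new, A_new, W
```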

13 Experiments

- Aim
  - Autonomously detect wrong pixels
  - Achieve recognition rates approaching those obtained from clean data
  - Compare to the non-robust approach and to known missing pixels (MPL)
- Datasets
  1. Faces: Sheffield Face Database (Graham & Allinson, Face Recognition: From Theory to Applications, 1998)
  2. Objects: COIL-20 (Nene et al., TR, Columbia University, 1996)

14 Facts

- Start with a reliable basis
  - COIL-20: 4 images
  - SFD: 2 images
- Test several occlusions (one way to generate them is sketched below)
  - Salt-and-pepper
  - Horizontal bar
  - Vertical bar
  - Square
- Test several intensities
  - White
  - Black
  - (Random) gray
- Parameters

[Figure: example occlusions, showing a black square, a gray vertical bar, salt-and-pepper noise, and a white horizontal bar]
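For reproducibility, a small sketch of how such occlusions might be synthesized on images scaled to [0, 1]; the sizes, positions, and noise fraction are our assumptions, not the values used in the experiments.

```python
# Sketch of synthetic occlusions (parameters are illustrative).
import numpy as np

def occlude(img, kind, intensity=0.5, frac=0.2, size=10, rng=None):
    rng = np.random.default_rng(rng)
    out = img.copy()
    h, w = out.shape
    if kind == "salt_and_pepper":
        mask = rng.random(out.shape) < frac
        out[mask] = rng.choice([0.0, 1.0], size=int(mask.sum()))
    elif kind == "horizontal_bar":
        r = rng.integers(0, h - size)
        out[r:r + size, :] = intensity   # white=1, black=0, gray=0.5
    elif kind == "vertical_bar":
        c = rng.integers(0, w - size)
        out[:, c:c + size] = intensity
    elif kind == "square":
        r, c = rng.integers(0, h - size), rng.integers(0, w - size)
        out[r:r + size, c:c + size] = intensity
    return out
```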

15 Results: Salt-and-Pepper

16 Results: Black Square

17 Results: White Horizontal Bar

18 Results: Gray Vertical Bar

19 Conclusion and Future Work

- The augmented PCA subspace combines:
  - Reconstructive information, which allows for incremental and robust learning
  - Discriminative information, which allows for efficient classification
- It enables:
  - Incremental learning, with recognition rates similar to batch learning
  - Outlier detection, with reliable occlusion handling (a clear improvement over the non-robust approach)
- Room for improvement:
  - Optimization of the parameters
  - Handling labelling noise

20 Thanks for your attention!

21 Maintain Subspace Size

The model grows by one dimension when a new image is added, so truncate it by one, analogously to how the basis was augmented (a sketch follows below).
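One plausible reading of this step, sketched below: after each update, drop the basis direction whose coefficients carry the least variance. Whether the original method protects the c-1 discriminative directions from truncation is not stated on the slide, so treat this purely as an illustration.

```python
# Illustrative truncation: drop the weakest basis direction so the
# subspace size stays constant after each update.
import numpy as np

def truncate_by_one(U, A):
    var = A.var(axis=1)                 # variance per basis direction
    keep = np.argsort(var)[::-1][:-1]   # all but the weakest one
    return U[:, keep], A[keep]
```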

22 Classification

- Classification is a two-step procedure (sketched below):
  1. Project the novel image into the PCA space
  2. Project the obtained coefficient vector into the LDA space
- The classification function
  - Remains unchanged
  - Preserves all discriminative information
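A minimal sketch of this two-step classification, with a nearest-class-mean decision in LDA space; the decision rule and the stored `class_means` are our assumptions.

```python
# Sketch: project to PCA space, then to LDA space, then assign
# the nearest class mean (class_means is (c, c-1), stored at training).
import numpy as np

def classify(x, U, mu, W, class_means, class_labels):
    a = U.T @ (x - mu)   # step 1: project into the PCA space
    g = W.T @ a          # step 2: project into the LDA space
    dists = np.linalg.norm(class_means - g, axis=1)
    return class_labels[int(np.argmin(dists))]
```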