
1 Mining Discriminative Components With Low-Rank and Sparsity Constraints for Face Recognition
Qiang Zhang, Baoxin Li
Computer Science and Engineering, Arizona State University, Tempe, AZ 85281
{qzhang53, baoxin.li}@asu.edu

2 Problem Description In many applications, we acquire multiple copies of signals from the same source (an ensemble of signals); Signals in an ensemble may be very similar (sharing a common source), but may also have distinctive differences (e.g., very different acquisition conditions) plus other unique but small variations (e.g., sensor noise).

3 Examples Common sources

4 Examples Different acquisition conditions

5 Examples Sensor noise

6 Examples Signals in an ensemble

7 Signals in an Ensemble The decomposition of the signal has several benefits: – Obtaining a better compression rate, e.g., distributed compressed sensing [Duarte], the joint sparsity model [Duarte 2005]; – Extracting more relevant features, e.g., a compressive sensing approach for expression-invariant face recognition [Nagesh 2009];

8 Example: Face Images Given face images of the same subjects

9 Example: Face Images Can we identify a “clean” image for them?

10 Example: Face Images And their illumination conditions?

11

12 Example: Face Image Set A face image can be represented as a variable parameterized by subject and imaging condition, e.g., illumination, expression, etc.; The images of the same subject share a common component; The innovation components account for the imaging conditions. – An innovation component may not necessarily be sparse, e.g., one due to illumination variations.

13 Example: Decomposing Face Images AR dataset: expression variation as a sparse component (X = C + E).

14 Example: Decomposing Face Images (X = C + A)

15 Proposed Model

16 Proposed Model Cont’d

17 Examples X = C + A + E
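As a toy illustration of this three-way structure (all shapes and values here are hypothetical, not from the paper), an ensemble matrix X can be built from a shared common component C, a low-rank part A, and a sparse part E:

```python
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_images = 64, 10

# Common component: the same column repeated for every image of a subject.
C = np.tile(rng.standard_normal((n_pixels, 1)), (1, n_images))

# Low-rank component: a rank-1 outer product (illumination-like variation).
A = rng.standard_normal((n_pixels, 1)) @ rng.standard_normal((1, n_images))

# Sparse component: a few large entries (occlusion- or noise-like).
E = np.zeros((n_pixels, n_images))
mask = rng.random((n_pixels, n_images)) < 0.05
E[mask] = 5.0 * rng.standard_normal(mask.sum())

X = C + A + E  # the observed ensemble
```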

18 Solving the Decomposition

19 Comparison with Other Models

20

21 Decomposition Algorithms

22 Decomposition Algorithms Cont’d
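The transcript omits the slide's update equations; purely as a hedged sketch of the kind of proximal updates used for low-rank-plus-sparse decompositions (the thresholds, the column-mean estimate of C, and the alternating order are assumptions, not the paper's exact algorithm):

```python
import numpy as np

def svt(M, tau):
    """Singular-value thresholding: proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft(M, tau):
    """Elementwise soft thresholding: proximal operator of the l1 norm."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def decompose(X, lam=0.1, mu=0.05, n_iter=100):
    """Alternate between a common component C (here: the column mean),
    a low-rank component A, and a sparse component E."""
    A = np.zeros_like(X)
    E = np.zeros_like(X)
    for _ in range(n_iter):
        C = (X - A - E).mean(axis=1, keepdims=True)  # shared across columns
        A = svt(X - C - E, mu)
        E = soft(X - C - A, lam)
    return C, A, E
```

Note that after the final soft-thresholding step, the residual X - C - A - E is bounded elementwise by lam, which gives a quick sanity check on any run.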

23 Parameter Selection

24 Convergence Property With proper selection of parameters, the proposed algorithm converges to the optimal solution. In experiments, the algorithm converges within 100 iterations.

25 Examples: Decomposition Process Examples of how the algorithm converges.

26 Experiment We use three experiments to evaluate the proposed model and algorithm: – Decomposing synthetic images; – Decomposing images from the extended YaleB dataset; – Applying the decomposed components to classification tasks.

27 Decomposing the Synthetic Images We create training images by mixing the images with low-rank background images. In addition, we add some sparse-supported noise.

28 Decomposing the Synthetic Images The decomposition result. From top to bottom: common components, low-rank components and sparse components.

29 Decomposition: Robustness over Missing Training Instances We randomly remove 20% of the training images and test the robustness of the decomposition algorithm. From top to bottom: training images, common components, and low-rank components.

30 Decomposing the Extended YaleB Dataset We use all 2432 images of the extended YaleB dataset, which includes 38 subjects and 64 illumination conditions; – Common components capture information unique to each subject; – Low-rank components capture the illumination conditions of the images; – Sparse components capture sparse-supported noise and shadows.

31 Decomposing the Extended YaleB Dataset Left: the common components; right: the low-rank components.

32 Applying Decomposed Component to Face Recognition

33 Face Recognition: Subspaces Reconstructing an image with the common component of Subject 1 and the low-rank components: (a) coefficients of the reconstruction, (b) the input image, and (c) the reconstructed image. Right: reconstruction with an incorrect common component (Subject 2).
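The reconstruction idea on this slide can be sketched as a minimum-residual classifier: fit the test image with each subject's common component plus that subject's low-rank basis, and pick the subject with the smallest reconstruction error (the function name and interface below are my own, hypothetical, and not the paper's exact formulation):

```python
import numpy as np

def classify(x, commons, bases):
    """Assign x to the subject whose common component plus low-rank
    subspace reconstructs it with the smallest least-squares residual."""
    best, best_err = None, np.inf
    for subject, (c, B) in enumerate(zip(commons, bases)):
        D = np.column_stack([c, B])                   # dictionary: common + basis
        coef, *_ = np.linalg.lstsq(D, x, rcond=None)  # reconstruction coefficients
        err = np.linalg.norm(x - D @ coef)
        if err < best_err:
            best, best_err = subject, err
    return best
```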

34 Face Recognition: Measuring the Distance of Subspaces

35 Face Recognition: Experiment We test the algorithm on the extended YaleB and Multi-PIE datasets; – Randomly split each set into training and testing sets; – To test robustness over missing training instances, we randomly remove some of the training instances and keep "#train per subject" training instances for each subject; – The performance is compared with SRC, Volterrafaces and SUN.
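The per-subject split protocol described above can be sketched as follows (helper name and interface are hypothetical):

```python
import numpy as np

def split_per_subject(labels, n_train, rng):
    """Randomly keep n_train training indices per subject; the rest go to test."""
    labels = np.asarray(labels)
    train, test = [], []
    for s in np.unique(labels):
        idx = rng.permutation(np.where(labels == s)[0])
        train.extend(idx[:n_train])
        test.extend(idx[n_train:])
    return np.array(train), np.array(test)
```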

36 Face Recognition: Extended YaleB Dataset The extended YaleB dataset includes 64 illumination conditions and 38 subjects.

#train per subject | 32          | 24          | 16          | 8
Proposed           | 99.78±0.24% | 99.54±0.04% | 99.18±0.14% | 95.15±1.03%
SRC                | 96.48±0.44% | 95.29±0.52% | 91.90±0.94% | 78.65±1.81%
Volterrafaces      | 99.95±0.06% | 99.80±0.26% | 99.48±0.49% | 90.22±11.84%
SUN                | 89.61±1.85% | 87.64±2.80% | 76.91±3.71% | 60.17±2.09%

#train per subject | 16          | 12          | 8           | 4
Proposed           | 99.56±0.00% | 99.33±0.23% | 98.32±0.03% | 80.03±2.17%
SRC                | 89.14±0.00% | 87.88±0.44% | 81.02±0.13% | 58.54±1.26%
Volterrafaces      | 99.25±0.34% | 99.17±0.39% | 96.27±4.03% | 91.03±2.43%
SUN                | 79.22±0.00% | 76.75±0.00% | 68.86±0.00% | 51.60±0.00%

37 Face Recognition: Multi-PIE Dataset We test the performance on face images of frontal poses (P27), which include 68 subjects and 45 illumination variations.

#train per subject | 20          | 15          | 10          | 5
Proposed           | 100±0.00%   | 100±0.00%   | 99.65±0.37% | 97.49±0.21%
SRC                | 99.88±0.07% | 99.88±0.07% | 99.73±0.14% | 97.73±0.54%
Volterrafaces      | 100±0.00%   | 100±0.00%   | 100±0.00%   | 95.83±4.16%
SUN                | 100%        | 99.84±0.11% | 99.45±0.43% | 95.75±0.49%

#train per subject | 12          | 9           | 6           | 3
Proposed           | 100±0.00%   | 99.96±0.08% | 99.17±0.15% | 94.70±0.20%
SRC                | 99.91±0.16% | 98.89±1.74% | 96.90±3.73% | 87.18±1.78%
Volterrafaces      | 100±0.00%   | 100±0.00%   | 99.54±0.31% | 94.30±4.72%
SUN                | 100±0.00%   | 99.84±0.05% | 98.53±0.29% | 88.75±4.72%

38 Face Recognition: Robustness over Pose Variations We use all the images from 5 near-frontal poses (C05, C07, C09, C27, C29), which gives 153 conditions for each subject. We randomly pick M = 40 illumination conditions for training and use the remaining for testing.

#train per subject | 20          | 15          | 10          | 5
Proposed           | 99.98±0.03% | 99.92±0.06% | 99.24±0.06% | 90.95±0.70%
SRC                | 99.98±0.03% | 99.45±0.03% | 96.79±0.28% | 86.98±0.16%
Volterrafaces      | 99.60±0.22% | 98.37±0.47% | 97.63±0.28% | 89.72±1.45%
SUN                | 99.93±0.05% | 99.38±0.14% | 97.89±0.30% | 88.29±0.02%

39 Variation Recognition

40 We test the proposed algorithm on the AR dataset, which contains 100 subjects and 2 sessions, with 13 variations per session. We use the first session for training and the second session for testing. We show the confusion matrix, which presents the results in percentages.
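A confusion matrix in percentages, as shown on this slide, is simply the count matrix with each row normalized to 100; a minimal sketch (hypothetical helper, not from the paper):

```python
import numpy as np

def confusion_percent(y_true, y_pred, n_classes):
    """Count (true, predicted) pairs, then normalize each row to percent."""
    M = np.zeros((n_classes, n_classes))
    for t, p in zip(y_true, y_pred):
        M[t, p] += 1
    return 100.0 * M / M.sum(axis=1, keepdims=True)
```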

41 Conclusions We proposed a novel decomposition of a set of face images of multiple subjects, each with multiple images; It facilitates explicit modeling of typical challenges in face recognition, such as illumination conditions and large occlusions; For future work, we plan to extend the current algorithm with a step that estimates a mapping matrix for assigning a condition label to each image during the optimization iterations.

