
1 Introduction to several works and Some Ideas Songcan Chen 2012.9.4

2 Outline
- Introduction to several works
- Some ideas, motivated in part by sparsity awareness

3 Introduction to several works
1. A Least-Squares Framework for Component Analysis (CA) [1]
2. On the convexity of log-sum-exp functions with positive definite matrices [2]

4 Some ideas
- Motivated by the CA framework [1]
- Motivated by Log-Sum-Exp [2]
- Motivated by sparsity awareness [3][4]

5 CA framework

6 Proposes a unified least-squares framework, called least-squares weighted kernel reduced-rank regression (LS-WKRRR), that formulates many CA methods; as a result, PCA, LDA, CCA, SC (spectral clustering), LE (Laplacian eigenmaps), and their kernel versions become its special cases. LS-WKRRR's benefits: (1) it provides a clean connection between many CA techniques; (2) it yields efficient numerical schemes to solve CA techniques; (3) it overcomes the small-sample-size (SSS) problem; (4) it provides a framework to easily extend CA methods, for example to weighted generalizations of PCA, LDA, SC, and CCA, and to several new CA techniques.

7 The LS-WKRRR problem minimizes the expression
E(A, B) = ||W_r (Γ − B Aᵀ Υ) W_c||_F²   (1)
where the factors are A and B, the weights are the row and column weighting matrices W_r and W_c, and the data matrices are Γ and Υ (cf. the noise model Γ = B Aᵀ Υ + E + O on slide 37).
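A minimal numpy sketch of the objective above, assuming the unkernelized form with identity weights; the symbols G, Y, Wr, Wc stand for Γ, Υ, W_r, W_c, and all shapes are illustrative rather than taken from [1]:

```python
import numpy as np

def ls_wkrrr_objective(G, Y, A, B, Wr, Wc):
    """LS-WKRRR cost ||Wr (G - B A^T Y) Wc||_F^2, with G, Y the data matrices."""
    R = G - B @ A.T @ Y              # reduced-rank regression residual
    return np.linalg.norm(Wr @ R @ Wc, 'fro') ** 2

# illustrative shapes: G is dg x n, Y is dy x n, factors have rank k
dg, dy, n, k = 5, 4, 100, 2
rng = np.random.default_rng(0)
G, Y = rng.standard_normal((dg, n)), rng.standard_normal((dy, n))
B, A = rng.standard_normal((dg, k)), rng.standard_normal((dy, k))
Wr, Wc = np.eye(dg), np.eye(n)       # identity weights: the unweighted case
print(ls_wkrrr_objective(G, Y, A, B, Wr, Wc))
```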

8 Solutions to A and B: the optimal A solves a generalized eigenvalue problem (GEP); given A, the optimal B follows in closed form by weighted least squares (see [1] for the explicit matrices).

9 Computational aspects: subspace iteration; alternating least squares (ALS); gradient descent and second-order methods. Important to note: both the ALS and the gradient-based algorithms effectively handle the SSS problem, unlike methods that directly solve the GEP.
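A hedged ALS sketch for the unweighted case min ||G − B Aᵀ Y||_F²; the update formulas below are the standard alternating least-squares steps, written here for illustration rather than copied from [1], with a small ridge term for numerical safety:

```python
import numpy as np

def als_rrr(G, Y, k, n_iter=100, ridge=1e-8):
    """Alternating least squares for min_{A,B} ||G - B A^T Y||_F^2 (weights omitted)."""
    dg, dy = G.shape[0], Y.shape[0]
    rng = np.random.default_rng(0)
    A = rng.standard_normal((dy, k))
    B = np.zeros((dg, k))
    for _ in range(n_iter):
        Z = A.T @ Y                                    # k x n latent codes
        # B-step: ordinary least squares of G on Z
        B = G @ Z.T @ np.linalg.inv(Z @ Z.T + ridge * np.eye(k))
        # A-step: for fixed B, A^T = (B^T B)^-1 B^T G Y^T (Y Y^T)^-1
        M = np.linalg.solve(B.T @ B + ridge * np.eye(k), B.T @ G @ Y.T)
        A = np.linalg.solve(Y @ Y.T + ridge * np.eye(dy), M.T)
    return A, B

rng = np.random.default_rng(1)
Y = rng.standard_normal((4, 200))
G = rng.standard_normal((5, 4)) @ Y + 0.01 * rng.standard_normal((5, 200))
A, B = als_rrr(G, Y, k=2)
print(np.linalg.norm(G - B @ A.T @ Y) / np.linalg.norm(G))  # relative residual
```

Because each step only inverts small (k x k or dy x dy) matrices, ALS avoids the rank-deficient scatter matrices that make the direct GEP ill-posed in the SSS regime.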

10 PCA, KPCA and weighted extensions. PCA: in (1), set Γ = Υ = D (the centered data matrix) with identity weights W_r and W_c. An alternative formulation factorizes the data directly as D ≈ BC (see [1]).
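A small numpy check of this special case: with Γ = Υ = D and identity weights, the optimal rank-k product B Aᵀ D is the best rank-k approximation of D, i.e., the PCA reconstruction (a sketch based on the Eckart-Young theorem, not code from [1]):

```python
import numpy as np

rng = np.random.default_rng(1)
D = rng.standard_normal((5, 200))
D -= D.mean(axis=1, keepdims=True)    # center the data, as PCA assumes

k = 2
U, s, Vt = np.linalg.svd(D, full_matrices=False)
B = U[:, :k]                          # top-k principal directions

# With G = Y = D, the optimal factors reproduce the PCA projection:
recon = B @ B.T @ D                   # B A^T D with A = B in this case
best = (U[:, :k] * s[:k]) @ Vt[:k]    # best rank-k approximation of D
print(np.allclose(recon, best))       # True
```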

11 KPCA and weighted extensions. KPCA: replace the data by its feature-space map φ(D) in (1); weighted PCA: choose non-identity weighting matrices W_r and/or W_c (see [1]).

12 LDA, KLDA and weighted extensions. LDA: in (1), set Γ = G, where G is the label matrix using one-of-c encoding for c classes, and Υ = D (see [1] for the accompanying weights).
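For concreteness, a tiny helper that builds the one-of-c label matrix G mentioned above, using the common c x n convention (a hypothetical helper, not from [1]):

```python
import numpy as np

def one_of_c(labels, c):
    """G[i, j] = 1 if sample j belongs to class i, else 0 (one-of-c encoding)."""
    G = np.zeros((c, len(labels)))
    G[labels, np.arange(len(labels))] = 1.0
    return G

print(one_of_c(np.array([0, 2, 1, 0]), c=3))
```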

13 CCA, KCCA and weighted extensions. CCA: in (1), set Γ and Υ to the two data sets (views), with the weights chosen to whiten each view (see [1]).
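To make this special case concrete, a standard direct computation of CCA via the SVD of the whitened cross-covariance; this is the textbook algorithm, shown as a sketch, not the numerical scheme of [1]:

```python
import numpy as np

def cca(X, Y, k, eps=1e-8):
    """Top-k canonical correlations of data X (dx x n) and Y (dy x n)."""
    X = X - X.mean(axis=1, keepdims=True)
    Y = Y - Y.mean(axis=1, keepdims=True)
    n = X.shape[1]
    Cxx = X @ X.T / n + eps * np.eye(X.shape[0])
    Cyy = Y @ Y.T / n + eps * np.eye(Y.shape[0])
    Cxy = X @ Y.T / n
    # whiten both views, then take the SVD of the cross-covariance
    Wx = np.linalg.inv(np.linalg.cholesky(Cxx))
    Wy = np.linalg.inv(np.linalg.cholesky(Cyy))
    s = np.linalg.svd(Wx @ Cxy @ Wy.T, compute_uv=False)
    return s[:k]                      # canonical correlations

rng = np.random.default_rng(2)
Z = rng.standard_normal((1, 500))                  # shared signal across views
X = np.vstack([Z, rng.standard_normal((2, 500))])
Y = np.vstack([Z, rng.standard_normal((2, 500))])
print(cca(X, Y, k=2))                # first correlation close to 1
```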

14 The relations to LLE, LE, etc.: please refer to [1].

15 On the convexity of log-sum-exp functions with positive definite (PD) matrices [2]

16 Log-Sum-Exp (LSE) function. One of the fundamental functions in convex analysis is the LSE function, whose convexity is the core ingredient in the methodology of geometric programming (GP), which has made considerable impact in different fields, e.g., power control in communication theory. This paper extends these results and considers the convexity of the log-determinant of a sum of rank-one PD matrices with scalar exponential weights.

17 LSE function (convex): f(x₁, …, xₙ) = log(e^{x₁} + ⋯ + e^{xₙ}).
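A short numpy illustration: a numerically stable evaluation of LSE via the standard max-shift trick, plus a midpoint spot-check of convexity on random points (illustrative only, not from [2]):

```python
import numpy as np

def lse(x):
    """log(sum(exp(x))), computed stably by shifting out the maximum."""
    m = np.max(x)
    return m + np.log(np.sum(np.exp(x - m)))

rng = np.random.default_rng(3)
x0, x1 = rng.standard_normal(5), rng.standard_normal(5)
# convexity at the midpoint: f((x0+x1)/2) <= (f(x0)+f(x1))/2
print(lse((x0 + x1) / 2) <= (lse(x0) + lse(x1)) / 2)   # True
```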

18 Extending convexity from vector functions to matrix variables (PD matrices). A general convexity definition: f(q_t) ≤ (1 − t) f(q_0) + t f(q_1) for t ∈ [0, 1], where q_t is a path between any two points q_0 and q_1 in the domain (the straight line q_t = (1 − t) q_0 + t q_1 recovers the usual definition).

19 Several Definitions:

20 More generally, the log-determinant of an exponentially weighted sum of rank-one PD matrices, F(q) = log det(Σ_i e^{q_i} a_i a_iᵀ), is convex in q [2].
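A numeric spot-check of this matrix LSE convexity, assuming the form F(q) = log det(Σ_i e^{q_i} a_i a_iᵀ) described above (a midpoint test on random data, not a proof):

```python
import numpy as np

def mat_lse(q, A):
    """F(q) = log det(sum_i exp(q_i) * a_i a_i^T); columns of A are the a_i."""
    S = (A * np.exp(q)) @ A.T          # scales column i of A by exp(q_i)
    return np.linalg.slogdet(S)[1]

rng = np.random.default_rng(4)
A = rng.standard_normal((3, 10))       # ten rank-one terms in R^3
q0, q1 = rng.standard_normal(10), rng.standard_normal(10)
# midpoint convexity inequality, as in the vector LSE check above
print(mat_lse((q0 + q1) / 2, A) <= (mat_lse(q0, A) + mat_lse(q1, A)) / 2)
```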

21 Applications
- Robust covariance estimation
- Kronecker structured covariance estimation
- Hybrid robust Kronecker model

22 Robust covariance estimation. Assume the samples follow a compound-Gaussian model, x_i ~ N(0, q_i Σ), with unknown positive scales q_i. The ML objective minimizes Σ_i [log det(q_i Σ) + x_iᵀ (q_i Σ)^{-1} x_i] over Σ ≻ 0 and the scales {q_i}.

23 The objective is convex in 1/q_i, and its minimizers are q̂_i = x_iᵀ Σ^{-1} x_i / p (p being the dimension). Plugging this solution back into the objective results in a cost in Σ alone. A key lemma (Lemma 4 in [2]) then gives the minimizing Σ in closed form.

24 Applying this lemma to (37) yields the optimal Σ; plugging it back into the objective yields a convex problem of the matrix LSE type analyzed above.
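The resulting estimator is computed by a fixed-point iteration. Below is a sketch of the classical Tyler-style iteration that this robust ML problem leads to, assuming the compound-Gaussian model x_i ~ N(0, q_i Σ) above; the exact scheme in [2] may differ in details:

```python
import numpy as np

def tyler_fixed_point(X, n_iter=50):
    """Tyler-style iteration: Sigma <- (p/n) sum_i x_i x_i^T / (x_i^T Sigma^-1 x_i)."""
    p, n = X.shape
    Sigma = np.eye(p)
    for _ in range(n_iter):
        Si = np.linalg.inv(Sigma)
        w = p / np.einsum('ij,jk,ik->i', X.T, Si, X.T)   # p / (x_i^T Si x_i)
        Sigma = (X * w) @ X.T / n                        # reweighted sample covariance
        Sigma /= np.trace(Sigma) / p                     # fix the scale ambiguity
    return Sigma

rng = np.random.default_rng(5)
X = rng.standard_normal((4, 1000)) * rng.gamma(2.0, size=1000)  # heavy-tailed scales
print(np.round(tyler_fixed_point(X), 2))
```

The per-sample weights w_i downweight samples with large Mahalanobis norm, which is exactly what the ML scale estimates q̂_i accomplish.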

25 To avoid ill-conditioning, regularize (37) and minimize the penalized objective.

26 Other priors can be added if available: 1) bounded peak values; 2) bounded second moment; 3) smoothness; 4) sparsity.

27 Kronecker structured covariance estimation. The basic Kronecker model assumes the covariance factorizes as the Kronecker product of two smaller PD matrices. The ML objective is the Gaussian likelihood under this structure.

28 Using this structure, problem (58) turns into a coupled problem in the two Kronecker factors (see [2]).
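The Kronecker ML problem is commonly solved by alternating between the two factors (the "flip-flop" algorithm). A minimal sketch under the matrix-normal assumption X_i ~ N(0, A ⊗ B); this is the standard algorithm, stated for illustration and not necessarily the exact scheme analyzed in [2]:

```python
import numpy as np

def flip_flop(Xs, n_iter=30):
    """Matrix-normal MLE: alternate the column (A) and row (B) covariance updates."""
    n, p, q = Xs.shape
    A, B = np.eye(q), np.eye(p)
    for _ in range(n_iter):
        Ai = np.linalg.inv(A)
        B = sum(X @ Ai @ X.T for X in Xs) / (n * q)   # row covariance given A
        Bi = np.linalg.inv(B)
        A = sum(X.T @ Bi @ X for X in Xs) / (n * p)   # column covariance given B
        A /= A[0, 0]                                  # resolve the A <-> B scale ambiguity
    return A, B

rng = np.random.default_rng(6)
Xs = rng.standard_normal((200, 3, 2))   # 200 samples of 3x2 matrices
A, B = flip_flop(Xs)
print(np.round(A, 2), np.round(B, 2), sep='\n')
```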

29 Hybrid robust Kronecker model. The ML objective combines the robust and Kronecker models; solving for Σ > 0 again via Lemma 4 yields a reduced objective.

30 The problem (73) then reduces to (75); solve (75) using the fixed-point iteration, where an arbitrary PD matrix can be used as the initial iterate.

31 Some ideas
- Motivated by the CA framework [1]
- Motivated by Log-Sum-Exp [2]
- Motivated by sparsity awareness [3][4]

32 Motivated by the CA framework [1]. Recall the LS-WKRRR objective (1).

33

34 Motivated by Log-Sum-Exp [2]. 1) Metric learning (ML): ML&CL, relative-distance constraints, LMNN-like formulations, …; 2) Classification learning: predictive function f(X) = tr(WᵀX) + b, with an LSE-type objective.

35 ML across heterogeneous domains, along two lines: 1) Line 1: a metric with symmetry and PSD constraints; 2) Line 2 (for ML&CL): an indefinite measure ({U_i} is the base and {α_i} is sparsified). This implies that the two lines can be unified into a common indefinite metric learning formulation.

36 Motivated by sparsity awareness [3][4]. Noise model: x_ci = m_c + U_c s_ci + e_ci + o_ci, where c indexes the class or cluster, e_ci is dense noise, and o_ci is an outlier term with ||o_ci|| ≠ 0 if sample i is an outlier and o_ci = 0 otherwise. Discussion:
1) U_c = 0, o_ci = 0; e_ci ~ N(0, dI) → means; ~ Lap(0, dI) → medians; other priors → other statistics.
2) U_c ≠ 0, o_ci = 0; e_ci ~ N(0, dI) → PCA; ~ Lap(0, dI) → L1-PCA; other priors → other PCAs;

37
3) U_c = 0, o_ci ≠ 0; e_ci ~ N(0, dI) → robust (k-)means; ~ Lap(0, dI) → (k-)medians (a sketch of this case is given below);
4) Subspace U_c ≠ 0, o_ci ≠ 0; e_ci ~ N(0, dI) → robust k-subspaces;
5) m_c = 0 ……
6) Robust (semi-)NMF ……
7) Robust CA ……, where the noise model becomes Γ = B Aᵀ Υ + E + O.
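A minimal sketch of case 3): robust mean estimation with outlier-sparsity regularization, alternating a mean update with a group soft-threshold on the o_i. The group-lasso update is the standard one used in [3][4]-style formulations; variable names and the penalty weight are illustrative:

```python
import numpy as np

def robust_mean(X, lam=3.0, n_iter=50):
    """min_{m, o_i} sum_i ||x_i - m - o_i||^2 + lam * sum_i ||o_i||_2."""
    O = np.zeros_like(X)
    for _ in range(n_iter):
        m = (X - O).mean(axis=1, keepdims=True)       # mean of outlier-cleaned data
        R = X - m                                     # residuals per sample (column)
        norms = np.linalg.norm(R, axis=0, keepdims=True)
        shrink = np.maximum(0.0, 1.0 - lam / (2 * np.maximum(norms, 1e-12)))
        O = R * shrink                                # group soft-threshold: o_i = 0 for inliers
    return m.ravel(), O

rng = np.random.default_rng(7)
X = rng.standard_normal((2, 100)) * 0.1               # inliers near the origin
X[:, :5] += 10.0                                      # five gross outliers
m, O = robust_mean(X)
print(np.round(m, 3), int((np.linalg.norm(O, axis=0) > 0).sum()))  # mean ~ 0, 5 outliers
```

The group penalty λΣ_i ||o_i||_2 zeroes out o_i for inliers while letting it absorb gross corruptions, which is exactly the ||o_ci|| ≠ 0 iff outlier behavior postulated in the noise model.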

38 References
[1] Fernando De la Torre. A Least-Squares Framework for Component Analysis. IEEE TPAMI, 34(6), 2012: 1041-1055.
[2] Ami Wiesel. On the convexity of log-sum-exp functions with positive definite matrices. Available at http://www.cs.huji.ac.il/~amiw/
[3] Gonzalo Mateos & Georgios B. Giannakis. Robust PCA as Bilinear Decomposition with Outlier-Sparsity Regularization. Available at the homepage of Georgios B. Giannakis.
[4] Pedro A. Forero, Vassilis Kekatos & Georgios B. Giannakis. Robust Clustering Using Outlier-Sparsity Regularization. Available at the homepage of Georgios B. Giannakis.

39 Thanks! Q&A

