
Slide 1: Semi-nonnegative INDSCAL analysis
Ahmad Karfoul (1), Julie Coloigner (2,3), Laurent Albera (2,3), Pierre Comon (4,5)
(1) Faculty of Mech. & Elec. Engineering, University AL-Baath, Syria
(2) Laboratory LTSI - INSERM U642, France
(3) University of Rennes 1, France
(4) Laboratory I3S - CNRS, France
(5) University of Nice Sophia-Antipolis, France

Slide 2: Outline
- Preliminaries and problem formulation
- Optimization methods
- Global line search
- A compact matrix form of derivatives
- Numerical results
- Conclusion

Slide 3: Outer product (Preliminaries and problem formulation)
Example, order 3: $(u \circ v \circ w)_{ijk} = u_i v_j w_k$.
Example, order q: $(u^{(1)} \circ u^{(2)} \circ \cdots \circ u^{(q)})_{i_1 i_2 \cdots i_q} = u^{(1)}_{i_1} u^{(2)}_{i_2} \cdots u^{(q)}_{i_q}$.
The outer product of q vectors is a rank-one q-th order tensor.
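As a concrete illustration (NumPy code, not from the slides; the vectors u, v, w are arbitrary examples), a rank-one tensor of any order can be built by chaining outer products:

```python
import numpy as np

# Rank-one 3rd-order tensor T = u o v o w, i.e. T[i, j, k] = u[i] * v[j] * w[k]
u, v, w = np.array([1., 2.]), np.array([3., 4., 5.]), np.array([6., 7.])
T = np.einsum('i,j,k->ijk', u, v, w)
assert T.shape == (2, 3, 2) and np.isclose(T[1, 2, 0], u[1] * v[2] * w[0])

# Order-q generalization: the outer product of q vectors is a rank-one
# q-th order tensor.
def rank_one(*vectors):
    T = vectors[0]
    for x in vectors[1:]:
        T = np.multiply.outer(T, x)
    return T

assert np.allclose(rank_one(u, v, w), T)
```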

Slide 4: Preliminaries and problem formulation
T_(i): tensor-to-rectangular-matrix transformation (unfolding according to the i-th mode).
vec(T): tensor-to-vector transformation.
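A minimal sketch of these two transformations; note that unfolding conventions differ across the literature, so the layout below (mode-i fibers as rows) is one common choice, not necessarily the authors' exact convention:

```python
import numpy as np

def unfold(T, mode):
    # Mode-`mode` unfolding: bring the chosen axis first, flatten the rest.
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def vec(T):
    # Tensor-to-vector transformation (here, row-major flattening).
    return T.reshape(-1)

T = np.arange(24.).reshape(2, 3, 4)
print(unfold(T, 1).shape)  # (3, 8)
print(vec(T).shape)        # (24,)
```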

Slide 5: CANonical Decomposition (CAND) (Preliminaries and problem formulation)
[Hitchcock 1927], [Carroll & Chang 1970], [Harshman 1970]
$T = \sum_{p=1}^{P} \lambda_p \, a_p \circ b_p \circ c_p$
CAND: linear combination of a minimal number of rank-1 terms.

Slide 6: INDSCAL decomposition [Carroll & Chang 1970] (Preliminaries and problem formulation)
$T = \sum_{p=1}^{P} \lambda_p \, a_p \circ a_p \circ c_p$
The same factor matrix $A = [a_1, \ldots, a_P]$ appears in the first two modes.

Slide 7: CAND vs. INDSCAL (Preliminaries and problem formulation)
CAND: $T = \sum_{p=1}^{P} \lambda_p \, a_p \circ b_p \circ c_p$
INDSCAL: $T = \sum_{p=1}^{P} \lambda_p \, a_p \circ a_p \circ c_p$
INDSCAL = CAND of a third-order tensor that is symmetric in two of its three modes.
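A small sketch (with made-up dimensions) of this structural difference: INDSCAL reuses A in the first two modes, so every frontal slice of the resulting tensor is symmetric:

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, P = 4, 5, 3
A = rng.standard_normal((N, P))   # shared factor (modes 1 and 2)
C = rng.standard_normal((K, P))   # third-mode factor
lam = rng.standard_normal(P)      # weights lambda_1 ... lambda_P

# INDSCAL: T = sum_p lam_p a_p o a_p o c_p  (CAND would use b_p != a_p)
T = np.einsum('p,ip,jp,kp->ijk', lam, A, A, C)

# Each frontal slice T[:, :, k] = A diag(lam * C[k]) A^T is symmetric
assert all(np.allclose(T[:, :, k], T[:, :, k].T) for k in range(K))
```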

Slide 8: Example: (semi-)nonnegative INDSCAL decomposition for (semi-)nonnegative BSS (Preliminaries and problem formulation)
Model: x = A s, where A is the (N × P) mixing matrix and s is a zero-mean random vector of P statistically independent components.
Diagonalizing a set of covariance matrices: each covariance matrix of x has the form $C_x = A\, C_s\, A^{\mathsf T}$ with $C_s$ diagonal, so a set of such matrices stacks into a third-order tensor with INDSCAL structure.
Case 1: nonnegative INDSCAL decomposition.
Case 2: semi-nonnegative INDSCAL decomposition.
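A hedged simulation of this setup (the window-wise source variances are an assumption made purely for illustration): sample covariances of x = A s are close to A diag(var) A^T, so stacking them yields a tensor with the INDSCAL structure of slide 7:

```python
import numpy as np

rng = np.random.default_rng(1)
N, P, K, T_len = 4, 3, 5, 20000
A = rng.random((N, P))                 # nonnegative (N x P) mixing matrix

slices = []
for k in range(K):
    var = rng.random(P) + 0.1          # source variances on window k (assumed)
    s = rng.standard_normal((P, T_len)) * np.sqrt(var)[:, None]
    x = A @ s                          # observations x = A s
    Cx = (x @ x.T) / T_len             # sample covariance ~ A diag(var) A^T
    slices.append(Cx)

T = np.stack(slices, axis=-1)          # (N x N x K) tensor, INDSCAL structure
```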

Slide 9: Problem at hand (Preliminaries and problem formulation)
Problem 1 (constrained): given T, find its INDSCAL decomposition subject to the nonnegativity constraint on A.
Problem 2 (unconstrained): given T, find its INDSCAL decomposition with $A = B \odot B$, where $\odot$ is the Hadamard product (element-wise product).
Parametrizing the nonnegativity constraint in this way turns the constrained problem into an unconstrained one [Chu et al. 04].
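A minimal illustration of this change of variables: the Hadamard square makes nonnegativity automatic, so the optimization runs over an unconstrained B. The chain-rule factor in the comment is standard calculus, not a formula quoted from the slides:

```python
import numpy as np

rng = np.random.default_rng(2)
B = rng.standard_normal((4, 3))   # unconstrained variable
A = B * B                         # Hadamard square: A >= 0 automatically
assert (A >= 0).all()

# Chain rule: for any cost psi(A) with A = B * B,
# d psi / d B = 2 * B * (d psi / d A)   (element-wise product)
```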

Slide 10: Solution (Preliminaries and problem formulation)
Minimize the cost function $\psi(A, C) = \| T_{(3)} - C\,(A \diamond A)^{\mathsf T} \|_F^2$, where $\diamond$ is the Khatri-Rao product (column-wise Kronecker product).
Some iterative algorithms: Steepest Descent, Newton, Levenberg-Marquardt. All of them require the first and second order derivatives of ψ.
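Under a common mode-3 unfolding convention (an assumption, as is the helper khatri_rao), the cost can be evaluated as follows; a tensor built exactly from (A, C) gives zero cost, which sanity-checks the layout:

```python
import numpy as np

def khatri_rao(A, B):
    # Column-wise Kronecker product: column p is kron(A[:, p], B[:, p])
    I, P = A.shape
    J, _ = B.shape
    return np.einsum('ip,jp->ijp', A, B).reshape(I * J, P)

def cost(T, A, C):
    # psi(A, C) = || T_(3) - C (A <> A)^T ||_F^2, where the k-th row of the
    # mode-3 unfolding T_(3) is vec(T[:, :, k])  (assumed convention).
    N, _, K = T.shape
    T3 = T.reshape(N * N, K).T
    return np.linalg.norm(T3 - C @ khatri_rao(A, A).T) ** 2

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 3)); C = rng.standard_normal((5, 3))
T = np.einsum('ip,jp,kp->ijk', A, A, C)   # weights absorbed into C
assert np.isclose(cost(T, A, C), 0.0)
```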

Slide 11: Global line search (1/2) (Optimization methods)
Update rules: $A \leftarrow A + \mu_A\, G_A$ and $C \leftarrow C + \mu_C\, G_C$, where $\mu_A, \mu_C$ are the learning steps and $G_A, G_C$ are the directions given by the iterative algorithm with respect to A and C, respectively.
Goal: find the global optimum in the given direction.

Slide 12: Global line search (2/2) (Optimization methods)
For a third-order tensor symmetric in two modes, minimizing ψ with respect to the learning steps reduces to finding the stationary points of polynomials: a quadratic polynomial, a 10-th degree polynomial, or a 24-th degree polynomial, depending on the variable (C or A) and on whether the nonnegativity parametrization is used. Rooting the corresponding polynomial gives the global optimum in the considered direction for A and for C.
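The key point is that ψ restricted to a line is a polynomial in the step μ, so its global minimizer on that line comes from rooting the derivative polynomial. A generic hedged sketch (the degrees and coefficients below are placeholders, not the slides' exact polynomials):

```python
import numpy as np

def global_line_search(coeffs):
    # coeffs: coefficients (highest degree first) of mu -> psi(X + mu * G)
    dcoeffs = np.polyder(np.asarray(coeffs, dtype=float))
    roots = np.roots(dcoeffs)
    real = roots[np.abs(roots.imag) < 1e-10].real   # real stationary points
    candidates = np.append(real, 0.0)
    values = np.polyval(coeffs, candidates)
    return candidates[np.argmin(values)]            # global optimum on the line

# Illustrative quartic: psi(mu) = mu^4 - 3 mu^2 + mu + 7
mu_star = global_line_search([1, 0, -3, 1, 7])
```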

Slide 13: Steepest Descent (SD) (Optimization methods)
Optimization by searching for stationary points of ψ based on a first-order approximation (i.e. the gradient).
Update rules: $A \leftarrow A - \mu_A \nabla_A \psi$ and $C \leftarrow C - \mu_C \nabla_C \psi$, where $\mu_A, \mu_C$ are the learning steps and $\nabla_A \psi, \nabla_C \psi$ are the gradients of ψ with respect to A and C, respectively.
In this work: the learning steps are optimal (global line search), giving the global optimum in the considered direction, and the gradients are given in a compact matrix form.
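Since ψ is quadratic in C for fixed A, the gradient with respect to C has the simple closed form below; the matching compact form for A is the subject of slides 14-15, so only the C-update is sketched (step size mu would come from the global line search; here it is an abstract input):

```python
import numpy as np

def khatri_rao(A, B):
    I, P = A.shape; J, _ = B.shape
    return np.einsum('ip,jp->ijp', A, B).reshape(I * J, P)

def sd_step_C(T3, A, C, mu):
    # psi = ||T3 - C K^T||_F^2 with K = A <> A
    # => grad_C psi = -2 (T3 - C K^T) K
    K = khatri_rao(A, A)
    grad_C = -2.0 * (T3 - C @ K.T) @ K
    return C - mu * grad_C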

Slide 14: Steepest Descent (SD) (Optimization methods)
Computing the differential of ψ, the gradients with respect to A and C follow immediately. They are then rewritten in a compact matrix form, whose building blocks are given on the next slide.

Slide 15: Gradient computation of ψ(A, C) (A compact matrix form of derivatives)
The compact matrix form of the derivatives is built from:
- a commutation matrix of size (IP × IP),
- $\mathbf{1}_N$: the N-dimensional vector of ones,
- $I_N$: the identity matrix of size (N × N).
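The commutation matrix appearing here is the permutation K with K vec(X) = vec(X^T); a small constructor for generic sizes (m, n), as a sketch of the building block rather than the slides' exact expression:

```python
import numpy as np

def commutation_matrix(m, n):
    # Permutation matrix K of size (mn x mn) with K @ vec(X) = vec(X^T),
    # where vec stacks columns (column-major / Fortran order).
    K = np.zeros((m * n, m * n))
    for i in range(m):
        for j in range(n):
            K[j + i * n, i + j * m] = 1.0
    return K

X = np.arange(6.).reshape(2, 3)
K = commutation_matrix(2, 3)
assert np.allclose(K @ X.flatten('F'), X.T.flatten('F'))
```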

Slide 16: Newton (Optimization methods)
Optimization including the second-order approximation to accelerate convergence.
Update rules: $A \leftarrow A - \mu_A H_A^{-1} \nabla_A \psi$ and $C \leftarrow C - \mu_C H_C^{-1} \nabla_C \psi$, where $H_A, H_C$ are the Hessians of ψ with respect to A and C, respectively.
In this work: the learning steps are also computed optimally (global line search), and the Hessians are given in a compact matrix form.

Slide 17: Newton: EVD-based regularization (Optimization methods)
Convergence requirement: the Hessians must be positive definite.
Problem: lack of positive definiteness leads to lack of convergence and slowness.
Solution: regularization, e.g. an Eigen-Value Decomposition (EVD) based technique. Write $H = U \Sigma U^{\mathsf T}$, with U the matrix of eigenvectors and $\Sigma = \mathrm{diag}\{\lambda_1, \ldots, \lambda_{NP}\}$ the diagonal matrix of eigenvalues.
- mNewton 1: replace all negative eigenvalues by one.
- mNewton 2: compute the ratio ...; if ...
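A minimal sketch of the mNewton 1 variant described above: eigendecompose the symmetric Hessian and replace every negative eigenvalue by one, restoring positive definiteness (the mNewton 2 ratio test is not reproduced, as its condition is not spelled out here):

```python
import numpy as np

def regularize_hessian(H):
    # mNewton 1: EVD-based regularization of a symmetric Hessian.
    # Replace all negative eigenvalues by one to enforce positive definiteness.
    lam, U = np.linalg.eigh(H)            # H = U diag(lam) U^T
    lam = np.where(lam < 0, 1.0, lam)
    return U @ np.diag(lam) @ U.T

H = np.array([[2., 0.], [0., -3.]])
H_reg = regularize_hessian(H)             # eigenvalues become {2, 1}
assert np.all(np.linalg.eigvalsh(H_reg) > 0)
```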

Slide 18: Levenberg-Marquardt (LM) (Optimization methods)
Based on a linear approximation to the components of the residual in the neighborhood of A / C.
Update rules: damped Gauss-Newton steps of the form $-(J^{\mathsf T} J + \lambda I)^{-1} J^{\mathsf T} r$, where J is the Jacobian of the residual in A (respectively C), r is the residual, and λ is a damping parameter influencing both the direction and the size of the step [Madsen et al. 2004].
The Jacobians are computed from the differential of the residual.
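A generic damped step in the spirit of Madsen et al. 2004; the Jacobian J and residual r are abstract inputs here, not the paper's specific compact-form Jacobians:

```python
import numpy as np

def lm_step(J, r, damping):
    # Solve (J^T J + damping * I) delta = -J^T r. Large damping pushes the
    # step toward (short) steepest descent; small damping toward Gauss-Newton.
    n = J.shape[1]
    return np.linalg.solve(J.T @ J + damping * np.eye(n), -J.T @ r)

# Usage sketch: x_new = x + lm_step(J, r, damping); increase `damping` when
# the cost goes up, decrease it when the step is accepted.
```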

Slide 19: Convergence speed vs. SNR (Numerical results)
Noise-free random third-order tensor T. Noisy 3-way array: $T_{\text{noisy}} = T + \sigma N$, where N is a zero-mean normally distributed noise tensor and σ is a scalar controlling the noise level.
Results are averaged over 200 Monte Carlo realizations.
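A sketch of this experimental setup: a noise-free random INDSCAL tensor plus scaled Gaussian noise, with σ chosen to hit a target SNR. The SNR definition below (20 log10 of the Frobenius-norm ratio) is an assumption:

```python
import numpy as np

rng = np.random.default_rng(4)
N, K, P = 4, 5, 3
A = rng.standard_normal((N, P))
C = rng.standard_normal((K, P))
T = np.einsum('ip,jp,kp->ijk', A, A, C)   # noise-free random 3rd-order tensor

noise = rng.standard_normal(T.shape)       # zero-mean normally distributed
snr_db = 15.0                              # e.g. 0, 15 or 30 dB as in the slides
sigma = np.linalg.norm(T) / (np.linalg.norm(noise) * 10 ** (snr_db / 20))
T_noisy = T + sigma * noise                # SNR(T, sigma * noise) = snr_db
```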

Slide 20: Convergence speed vs. SNR, SNR = 0 dB (Numerical results; figure)

Slide 21: Convergence speed vs. SNR, SNR = 15 dB (Numerical results; figure)

Slide 22: Convergence speed vs. SNR, SNR = 30 dB (Numerical results; figure)

Slide 23: Conclusion
- The differential concept is a powerful tool for deriving compact matrix forms.
- Global line search in the symmetric case yields the global optimum in the considered direction.
- Iterative algorithms with global line search take a suitable step toward the global optimum.
- An algebraic method combined with an iterative method using global line search reaches the global optimum.
- The semi-nonnegative INDSCAL problem is solved as an unconstrained problem.

