Presentation transcript:

Semi-nonnegative INDSCAL analysis
Ahmad Karfoul (1), Julie Coloigner (2,3), Laurent Albera (2,3), Pierre Comon (4,5)
(1) Faculty of Mechanical & Electrical Engineering, AL-Baath University, Syria
(2) Laboratory LTSI, INSERM U642, France
(3) University of Rennes 1, France
(4) Laboratory I3S, CNRS, France
(5) University of Nice Sophia Antipolis, France

Outline
- Preliminaries and problem formulation
- Optimization methods
  - Global line search
  - A compact matrix form of derivatives
- Numerical results
- Conclusion

Preliminaries and problem formulation: outer product
The outer product of q vectors gives a rank-one q-th order tensor.
Example, order 3: (u ∘ v ∘ w)_ijk = u_i v_j w_k.
Example, order q: (u(1) ∘ … ∘ u(q))_{i1…iq} = u(1)_{i1} … u(q)_{iq}.
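A minimal NumPy sketch of the order-3 case (names and sizes are illustrative):

```python
import numpy as np

# Outer product of three vectors -> rank-one third-order tensor:
# T[i, j, k] = u[i] * v[j] * w[k]
u, v, w = np.random.randn(4), np.random.randn(5), np.random.randn(6)
T = np.einsum('i,j,k->ijk', u, v, w)

print(T.shape)                            # (4, 5, 6)
print(np.linalg.matrix_rank(T[:, :, 0]))  # 1: every slice of a rank-one tensor has rank <= 1
```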

Preliminaries and problem formulation: notation
Tensor-to-matrix transformation: unfolding of the tensor along its i-th mode.
Tensor-to-vector transformation: vectorization of the tensor.
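A small sketch of both transformations in NumPy; the exact ordering of the flattened indices is convention-dependent, so this shows only one possible choice:

```python
import numpy as np

T = np.random.randn(3, 4, 5)   # third-order tensor of size I1 x I2 x I3

def unfold(T, mode):
    """Mode-`mode` unfolding: rows indexed by the chosen mode, the remaining
    modes flattened into columns (column ordering is convention-dependent)."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

T1 = unfold(T, 0)   # 3 x 20
T3 = unfold(T, 2)   # 5 x 12
t  = T.reshape(-1)  # vectorization of the tensor
print(T1.shape, T3.shape, t.shape)
```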

CANonical Decomposition (CAND) [Hitchcock 1927], [Carroll & Chang 1970], [Harshman 1970]
CAND: linear combination of a minimal number of rank-one terms,
T = λ_1 a_1 ∘ b_1 ∘ c_1 + … + λ_P a_P ∘ b_P ∘ c_P (third-order case).
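A minimal NumPy sketch of the third-order CAND model (illustrative names and sizes):

```python
import numpy as np

I, J, K, P = 4, 5, 6, 3
A = np.random.randn(I, P)          # factor matrices
B = np.random.randn(J, P)
C = np.random.randn(K, P)
lam = np.random.rand(P)            # weights lambda_1, ..., lambda_P

# CAND / CP model: T = sum_p lambda_p * a_p o b_p o c_p
T = np.einsum('p,ip,jp,kp->ijk', lam, A, B, C)
print(T.shape)                     # (4, 5, 6)
```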

INDSCAL decomposition [Carroll & Chang 1970]
T = λ_1 a_1 ∘ a_1 ∘ c_1 + … + λ_P a_P ∘ a_P ∘ c_P: the same factor appears in two of the three modes.

CAND vs. INDSCAL
INDSCAL = CAND of a third-order tensor that is symmetric in two of its three modes.
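A small NumPy sketch of the resulting symmetry (illustrative names):

```python
import numpy as np

N, K, P = 5, 7, 3
A = np.random.randn(N, P)          # shared factor (modes 1 and 2)
C = np.random.randn(K, P)
lam = np.random.rand(P)

# INDSCAL model: T = sum_p lambda_p * a_p o a_p o c_p
T = np.einsum('p,ip,jp,kp->ijk', lam, A, A, C)

# Symmetry in the first two modes: T[i, j, k] == T[j, i, k]
print(np.allclose(T, T.transpose(1, 0, 2)))  # True
```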

Example: (semi-)nonnegative INDSCAL decomposition for (semi-)nonnegative BSS
Diagonalizing a set of covariance matrices, where:
- s: zero-mean random vector of P statistically independent components,
- A: the (N × P) mixing matrix,
- each covariance matrix is of the form A D A^T with D diagonal.
Case 1: nonnegative INDSCAL decomposition.
Case 2: semi-nonnegative INDSCAL decomposition.
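A small sketch of how such a tensor arises, assuming each slice is a covariance-like matrix A D_k A^T (the standard model for x = A s with statistically independent components); all names and sizes are illustrative:

```python
import numpy as np

N, P, K = 6, 3, 8
A = np.abs(np.random.randn(N, P))        # nonnegative mixing matrix

# Each slice is a covariance-like matrix R_k = A D_k A^T,
# with D_k diagonal (variances of the independent sources).
T = np.empty((N, N, K))
for k in range(K):
    D = np.diag(np.random.rand(P))
    T[:, :, k] = A @ D @ A.T

# The stacked slices form a third-order tensor symmetric in its first two modes.
print(np.allclose(T, T.transpose(1, 0, 2)))  # True
```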

Problem at hand
Problem 1 (constrained problem): given T, find its INDSCAL decomposition subject to the element-wise nonnegativity constraint on the shared factor A.
Problem 2 (unconstrained problem): given T, find its INDSCAL decomposition with A = B ⊡ B, where ⊡ denotes the Hadamard product (element-wise product); this parametrization of the nonnegativity constraint [Chu et al. 04] turns the constrained problem into an unconstrained one.
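A one-line illustration of the parametrization (names are illustrative):

```python
import numpy as np

N, P = 5, 3
B = np.random.randn(N, P)      # unconstrained variable

# Hadamard square A = B * B: optimizing freely over B
# guarantees A >= 0 element-wise.
A = B * B
print((A >= 0).all())          # True
```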

Solution: minimize the cost function
ψ(A, C) = || T_(3) − C (A ⊙ A)^T ||²_F,
where T_(3) is the third-mode unfolding of T and ⊙ denotes the Khatri-Rao product.
Some iterative algorithms: Steepest Descent, Newton, Levenberg-Marquardt; these require the first- and second-order derivatives of ψ.
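A sketch of this cost in NumPy, assuming the unfolding convention T_(3)[k, iN + j] = T[i, j, k] (the exact convention on the slides may differ); the function names are illustrative:

```python
import numpy as np

def khatri_rao(X, Y):
    """Column-wise Kronecker (Khatri-Rao) product of X (I x P) and Y (J x P)."""
    I, P = X.shape
    J, _ = Y.shape
    return (X[:, None, :] * Y[None, :, :]).reshape(I * J, P)

def cost(T, A, C):
    """psi(A, C) = || T_(3) - C (A kr A)^T ||_F^2, with the unfolding
    convention T_(3)[k, i*N + j] = T[i, j, k]."""
    N, _, K = T.shape
    T3 = np.moveaxis(T, 2, 0).reshape(K, N * N)   # third-mode unfolding
    return np.linalg.norm(T3 - C @ khatri_rao(A, A).T, 'fro') ** 2

# Sanity check on an exact INDSCAL tensor: the cost is zero.
N, K, P = 4, 6, 2
A, C = np.random.randn(N, P), np.random.randn(K, P)
T = np.einsum('ip,jp,kp->ijk', A, A, C)
print(np.isclose(cost(T, A, C), 0.0))             # True
```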

Global line search (1/2)
Looking for the global optimum in a given direction.
Update rules: A ← A + μ_A D_A and C ← C + μ_C D_C, where μ_A and μ_C are the learning steps and D_A and D_C are the directions given by the iterative algorithm with respect to A and C, respectively.

Global line search (2/2)
For a third-order tensor symmetric in two modes, the cost restricted to the search direction is polynomial in the learning steps, so the optimal steps are found among stationary points:
- a stationary point of a quadratic polynomial for one step,
- a stationary point of a 10th-degree or a 24th-degree polynomial, depending on the case, for the other step,
which gives the global optimum in the considered direction.
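A simplified sketch of the idea: along the direction, the cost is a polynomial in the step, so its global minimizer lies among the real roots of its derivative. Here the coefficients are recovered by sampling, whereas the talk derives them in closed form:

```python
import numpy as np

def global_step(phi, degree):
    """Global line search for a cost phi(mu) that is a polynomial of the given
    degree in the step mu: recover the polynomial from degree+1 samples,
    differentiate, and keep the real root with the smallest cost."""
    P = np.polynomial.polynomial
    mus = np.linspace(-1.0, 1.0, degree + 1)
    coeffs = P.polyfit(mus, [phi(m) for m in mus], degree)   # exact interpolation
    roots = P.polyroots(P.polyder(coeffs))
    real_roots = roots[np.abs(roots.imag) < 1e-10].real
    return 0.0 if real_roots.size == 0 else min(real_roots, key=phi)

phi = lambda mu: (mu - 0.3) ** 2 * ((mu + 1) ** 2 + 1)   # toy quartic cost
print(global_step(phi, degree=4))                        # ~0.3, the global minimizer on the line
```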

Steepest Descent (SD)
Optimization by searching for stationary points of ψ based on a first-order approximation (i.e. the gradient).
Update rules: A ← A − μ_A ∇_A ψ and C ← C − μ_C ∇_C ψ, where μ_A and μ_C are the learning steps and ∇_A ψ, ∇_C ψ are the gradients of ψ with respect to A and C, respectively.
In this work: the learning steps are optimal (global line search), so the global optimum is reached in the considered direction, and the gradients are given in a compact matrix form.
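A rough sketch of one SD update; the slides give compact closed-form gradients, which are replaced here by crude finite-difference gradients purely for illustration:

```python
import numpy as np

def num_grad(f, X, eps=1e-6):
    """Finite-difference gradient of a scalar function f at matrix X
    (illustrative stand-in for the closed-form gradients)."""
    G = np.zeros_like(X)
    for idx in np.ndindex(X.shape):
        E = np.zeros_like(X)
        E[idx] = eps
        G[idx] = (f(X + E) - f(X - E)) / (2 * eps)
    return G

def sd_step(psi, A, C, mu_A=1e-3, mu_C=1e-3):
    """One steepest-descent update on (A, C) with fixed learning steps;
    in the talk the steps are chosen by the global line search instead."""
    gA = num_grad(lambda X: psi(X, C), A)
    gC = num_grad(lambda X: psi(A, X), C)
    return A - mu_A * gA, C - mu_C * gC
```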

Steepest Descent (SD): compact matrix form of the gradient
Computing the differential of ψ, the gradients with respect to A and C are immediate; a compact matrix form of ∇_A ψ and ∇_C ψ then follows.

Gradient computation of ψ(A, C)
Compact matrix form of the derivatives, using the following notation:
- the commutation matrix of size (IP × IP),
- the N-dimensional vector of ones,
- the identity matrix of size (N × N).
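For concreteness, a commutation matrix can be built as below (a standard construction, assuming column-major vectorization):

```python
import numpy as np

def commutation_matrix(I, P):
    """Matrix of size (IP x IP) mapping vec(X) to vec(X^T) for X of size I x P
    (column-major vec, as usual in matrix differential calculus)."""
    K = np.zeros((I * P, I * P))
    for i in range(I):
        for p in range(P):
            # column-major: vec(X)[i + p*I] = X[i, p], vec(X^T)[p + i*P] = X[i, p]
            K[p + i * P, i + p * I] = 1.0
    return K

I, P = 3, 2
X = np.random.randn(I, P)
U = commutation_matrix(I, P)
print(np.allclose(U @ X.flatten('F'), X.T.flatten('F')))  # True
```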

Newton
Optimization including the second-order approximation (the Hessian) to accelerate convergence.
Update rules: based on the Hessians of ψ with respect to A and C, respectively, combined with a learning step.
In this work: the learning steps are also computed optimally (global line search), and the Hessians are given in a compact matrix form.
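For reference, a generic Newton direction on a vectorized variable (a sketch; the slides work with the compact matrix Hessians of ψ):

```python
import numpy as np

def newton_direction(grad_vec, hess):
    """Newton direction -H^{-1} g; the update then applies a learning step,
    chosen in the talk by the global line search."""
    return -np.linalg.solve(hess, grad_vec)

# On a quadratic psi(x) = 0.5 x^T H x - b^T x, one Newton step reaches the minimizer.
H = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
x = np.zeros(2)
x = x + newton_direction(H @ x - b, H)
print(np.allclose(H @ x, b))  # True
```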

Newton: regularization
Convergence requirement: the Hessians must be positive definite. A lack of positive definiteness leads to lack of convergence or slow convergence, so regularization is needed, e.g. an Eigen-Value Decomposition (EVD)-based technique.
EVD-based regularization: write the Hessian as U Σ U^T, with U the matrix of eigenvectors and Σ = diag{λ_1, …, λ_NP} the diagonal matrix of eigenvalues.
- mNewton 1: replace all negative eigenvalues by one.
- mNewton 2: a second variant based on the ratio of the eigenvalues.
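A sketch of the mNewton 1 regularization described above (negative eigenvalues replaced by one):

```python
import numpy as np

def regularize_hessian(H):
    """EVD-based regularization (mNewton 1): replace every negative eigenvalue
    by one so that the modified Hessian is positive definite."""
    lam, U = np.linalg.eigh(H)          # H = U diag(lam) U^T (symmetric Hessian)
    lam = np.where(lam < 0, 1.0, lam)
    return U @ np.diag(lam) @ U.T

H = np.array([[1.0, 2.0], [2.0, -3.0]])     # indefinite
Hr = regularize_hessian(H)
print(np.all(np.linalg.eigvalsh(Hr) > 0))   # True
```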

Levenberg-Marquardt (LM)
Based on a linear approximation to the components of the residual in the neighborhood of A (respectively C).
Update rules: solve (J^T J + μ I) Δ = −J^T r, where J is the Jacobian of the residual with respect to A (respectively C), and μ is a damping parameter influencing both the direction and the size of the step [Madsen et al. 2004]. The Jacobians are computed from the differentials of ψ.
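A generic damped step of this kind, as in [Madsen et al. 2004] (a sketch; the Jacobians used in the talk are the compact matrix ones for ψ):

```python
import numpy as np

def lm_step(J, r, damping):
    """Levenberg-Marquardt step for a residual r with Jacobian J:
    solve (J^T J + damping * I) dx = -J^T r. The damping parameter
    influences both the direction and the size of the step."""
    JtJ = J.T @ J
    return np.linalg.solve(JtJ + damping * np.eye(JtJ.shape[0]), -J.T @ r)

# With a tiny damping, one step on a linear residual r(x) = A x - b from x = 0
# essentially solves the least-squares problem.
A = np.random.randn(10, 3)
b = np.random.randn(10)
x = np.zeros(3)
x = x + lm_step(A, A @ x - b, damping=1e-12)
print(np.allclose(A.T @ A @ x, A.T @ b))     # normal equations satisfied
```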

Convergence speed vs. SNR
Noise-free random third-order tensor T; noisy 3-way array T + σ N, where N is zero-mean normally distributed noise and σ is a scalar controlling the noise level.
Results averaged over 200 Monte Carlo realizations.
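One common way to generate the noisy array at a prescribed SNR; the exact SNR definition used in the experiments is not given in the transcript, so this convention is an assumption:

```python
import numpy as np

def add_noise(T, snr_db):
    """Return T + sigma * N with N zero-mean Gaussian noise, sigma chosen so
    that ||T|| / ||sigma * N|| matches the requested SNR in dB."""
    N = np.random.randn(*T.shape)
    sigma = np.linalg.norm(T) / (np.linalg.norm(N) * 10 ** (snr_db / 20))
    return T + sigma * N

T = np.random.randn(4, 4, 6)
print(np.linalg.norm(T) / np.linalg.norm(add_noise(T, 15.0) - T))  # about 10**(15/20)
```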

Convergence speed vs. SNR, SNR = 0 dB (plot).

Convergence speed vs. SNR, SNR = 15 dB (plot).

Convergence speed vs. SNR, SNR = 30 dB (plot).

Conclusion
- The differential concept is a powerful tool for deriving compact matrix forms of the derivatives.
- Global line search for the symmetric case: global optimum in the considered direction.
- Iterative algorithms with global line search: a suitable step to reach the global optimum.
- Algebraic method + iterative method with global line search: global optimum.
- The semi-nonnegative INDSCAL problem is solved as an unconstrained problem.