Global Optimality of the Successive MaxBet Algorithm (Mohamed HANAFI and Jos M.F. TEN BERGE)

Presentation transcript:

Global Optimality of the Successive MaxBet Algorithm. Mohamed HANAFI, USC, ENITIAA de Nantes, France, and Jos M.F. TEN BERGE, Department of Psychology, University of Groningen, The Netherlands.

Global Optimality of the Successive MaxBet Algorithm. Summary: 1. The Successive MaxBet Problem (SMP). 2. The MaxBet Algorithm. 3. Global Optimality: Motivation/Problems. 4. Conclusions and Open Questions.

1. The Successive MaxBet Problem (S.M.P.). Data: K blocks; matrix: symmetric positive semi-definite (s.p.s.d.).
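The block-matrix definition on this slide is an image that did not survive the transcript. A common setup for the SMP, assuming K data blocks X_1, ..., X_K (the block notation X_k is an assumption, not taken from the slide), is the K x K grid of cross-product blocks:

$$
A=\begin{pmatrix} A_{11} & \cdots & A_{1K}\\ \vdots & & \vdots\\ A_{K1} & \cdots & A_{KK} \end{pmatrix},
\qquad A_{kl}=X_k^{\top}X_l ,
$$

which is symmetric positive semi-definite (s.p.s.d.), as stated on the slide.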

1. The Successive MaxBet Problem (S.M.P.), order 1.
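The order-1 problem itself is shown only as an image; its usual statement in the MaxBet literature (Ten Berge, 1988) maximizes the sum of all within- and between-block quadratic forms over unit-length block vectors (the symbol φ for the criterion follows the "function value" wording of a later slide):

$$
\max_{u_1,\dots,u_K}\ \varphi(u_1,\dots,u_K)=\sum_{k=1}^{K}\sum_{l=1}^{K} u_k^{\top}A_{kl}\,u_l
\qquad\text{subject to}\qquad u_k^{\top}u_k=1,\quad k=1,\dots,K .
$$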

1. The Successive MaxBet Problem (S.M.P.), order s.
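The order-s formulation is likewise lost; a plausible reconstruction, assuming each new solution is required to be orthogonal, within every block, to the solutions of lower order, is:

$$
\max_{u_1^{(s)},\dots,u_K^{(s)}}\ \sum_{k=1}^{K}\sum_{l=1}^{K} u_k^{(s)\top}A_{kl}\,u_l^{(s)}
\qquad\text{s.t.}\qquad u_k^{(s)\top}u_k^{(s)}=1,\quad u_k^{(s)\top}u_k^{(j)}=0,\ \ j<s,\ \ k=1,\dots,K .
$$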

2. The Successive MaxBet Algorithm, Ten Berge (1986, 1988). Order 1. 1. Take arbitrary initial unit-length vectors. 2. Compute v_k. 3. Rescale v_k to unit length and set u_k = v_k. 4. Repeat steps 2 and 3 until convergence.
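The formula in step 2 is an image in the original; in the MaxBet literature the update is v_k = Σ_l A_kl u_l. A minimal NumPy sketch of the order-1 iteration under that assumption (the function name and the block-list data structure are illustrative, not from the slides):

```python
import numpy as np

def maxbet_order1(A_blocks, tol=1e-10, max_iter=1000, rng=None):
    """Order-1 MaxBet iteration (sketch).

    A_blocks[k][l] is the (p_k x p_l) block A_kl of a symmetric positive
    semi-definite block matrix.  Returns the unit-length vectors u_1..u_K
    and the final criterion value.
    """
    rng = np.random.default_rng() if rng is None else rng
    K = len(A_blocks)
    # Step 1: arbitrary initial unit-length vectors, one per block.
    u = [rng.standard_normal(A_blocks[k][k].shape[0]) for k in range(K)]
    u = [x / np.linalg.norm(x) for x in u]

    def criterion(u):
        return sum(u[k] @ A_blocks[k][l] @ u[l] for k in range(K) for l in range(K))

    f_old = criterion(u)
    for _ in range(max_iter):
        for k in range(K):  # cyclic update; a simultaneous update of all blocks is also used
            # Step 2 (assumed form of the missing formula): v_k = sum_l A_kl u_l.
            v = sum(A_blocks[k][l] @ u[l] for l in range(K))
            # Step 3: rescale v_k to unit length and set u_k = v_k.
            u[k] = v / np.linalg.norm(v)
        f_new = criterion(u)
        # Step 4: repeat until the criterion stops increasing.
        if f_new - f_old < tol:
            break
        f_old = f_new
    return u, criterion(u)
```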

2. The Successive MaxBet Algorithm, Ten Berge (1986, 1988). Order s. 1. Take arbitrary initial unit-length vectors. 2. Compute v_k. 3. Rescale v_k to unit length and set u_k = v_k. 4. Repeat steps 2 and 3 until convergence.
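The order-s iteration differs from the order-1 one only through the constraints. One common way to honor block-wise orthogonality to the previously found solutions is to project v_k before rescaling; this is an assumption about the missing formula, not a verified transcription of it. A sketch, reusing the block-list structure of the previous function:

```python
import numpy as np

def maxbet_order_s(A_blocks, prev_u, tol=1e-10, max_iter=1000, rng=None):
    """Order-s MaxBet iteration (sketch).

    prev_u[k] is a (p_k x (s-1)) matrix whose orthonormal columns are the
    lower-order solutions already found for block k; the new u_k is kept
    orthogonal to them via the projector P_k = I - Q_k Q_k^T.
    """
    rng = np.random.default_rng() if rng is None else rng
    K = len(A_blocks)
    # Orthogonal projectors onto the complement of the earlier solutions.
    P = [np.eye(Q.shape[0]) - Q @ Q.T for Q in prev_u]
    u = [P[k] @ rng.standard_normal(P[k].shape[0]) for k in range(K)]
    u = [x / np.linalg.norm(x) for x in u]
    for _ in range(max_iter):
        u_old = [x.copy() for x in u]
        for k in range(K):
            # Projected update, then rescaling to unit length.
            v = P[k] @ sum(A_blocks[k][l] @ u[l] for l in range(K))
            u[k] = v / np.linalg.norm(v)
        if max(np.linalg.norm(u[k] - u_old[k]) for k in range(K)) < tol:
            break
    return u
```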

Property 1 : Convergence of the MaxBet Algorithm
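The property itself is an image; what is established for the MaxBet algorithm, and presumably what this slide states, is monotone convergence of the criterion:

$$
\varphi\bigl(u_1^{(t+1)},\dots,u_K^{(t+1)}\bigr)\ \ge\ \varphi\bigl(u_1^{(t)},\dots,u_K^{(t)}\bigr),\qquad t=0,1,2,\dots
$$

so the sequence of criterion values is non-decreasing and, being bounded on the product of unit spheres, converges.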

Property 2 : Necessary Condition of Convergence
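The condition is likewise an image; the first-order (stationarity) condition satisfied at any fixed point of the iteration, which is presumably what this slide shows, is:

$$
\sum_{l=1}^{K} A_{kl}\,u_l=\lambda_k\,u_k,\qquad \lambda_k=u_k^{\top}\!\sum_{l=1}^{K} A_{kl}\,u_l,\qquad k=1,\dots,K .
$$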

3. Motivation and results. 1. The MaxBet algorithm depends on the starting vector. 2. The MaxBet algorithm does not guarantee computation of the global solution of the SMP.

Motivation and results : an example

[Example table: starting vector, solution vector, and the function values φ(u) and φ(v).]

3. Motivation and results: Two Questions. Q1. How can we know whether the solution computed by the MaxBet algorithm is global? Q2. When the solution is not global, how can we use it to reach the global solution?

3. Motivation and Results: Proceeding. The global solution of the SMP is studied through the spectral properties (eigenvalues and eigenvectors) of an associated matrix.
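The matrix whose spectrum is meant here appears only as an image. In the published account of this work (Hanafi and Ten Berge, 2003), the global solution is characterized through the residual of the block matrix with respect to the stationary multipliers; treating that as the intended object, the matrix would be

$$
A-\Lambda(u),\qquad \Lambda(u)=\operatorname{blockdiag}\bigl(\lambda_1 I_{p_1},\dots,\lambda_K I_{p_K}\bigr),
$$

with the λ_k of Property 2; the results below concern when this matrix is negative semi-definite.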

Result 1.

ELEMENTS OF PROOF (Result 1)

3. Motivation and Results: (the matrix is negative semi-definite).

Result 2: … then the matrix is negative semi-definite.

ELEMENTS OF PROOF (Result 2). Suppose the matrix has a positive eigenvalue, with associated eigenvector w. Cases: 1. w is a block-normed vector. 2. w is not a block-normed vector: 2.1. w is not block-orthogonal to u; 2.2. w is block-orthogonal to u.

1. w is a block-normed vector: w is a better solution than u.

2. w is not a block-normed vector. 2.1. w is not block-orthogonal to u: v is a better solution than u.

2. w is not a block-normed vector. 2.2. w is block-orthogonal to u.

Result 3: … then the matrix is negative semi-definite.

ELEMENTS OF PROOF (Result 3). Suppose the matrix has a positive eigenvalue.

ELEMENTS OF PROOF (Result 3). u has all elements of the same sign; w has all elements of the same sign.

Result 4 (the matrix is negative semi-definite).

ELEMENTS OF PROOF (Result 4)

ELEMENTS OF PROOF (Result 4). Random search with starting vectors: φ(u) = 0.48; u = […].

4. General Conclusions. Possible application in statistics: multivariate methods (analysis of K sets of data). 1. Generalized canonical correlation analysis: Horst (1961). 2. Rotation methods (MaxDiff, MaxBet, generalized Procrustes analysis): Gower (1975); Van de Geer (1984); Ten Berge (1986, 1988). 3. Soft modeling approach, estimation of latent variables under mode B: Wold (1984); Hanafi (2001).

4. Perspective and a Small Open Question. Is there a necessary condition for the case K = 3 when the matrix A does not have all elements of the same sign?

Motivation: Illustration 1. The MaxBet algorithm depends on the starting vector.

The Successive MaxBet Problem (S.M.P) and Multivariate Methods

Some multivariate methods: generalized canonical correlation methods; rotation methods (agreement methods); the soft modeling approach.

Rotation methods. SMP = MaxBet method: Van de Geer (1984), Ten Berge (1986, 1988). SMP = MaxDiff method: Van de Geer (1984), Ten Berge (1986, 1988). SMP = generalized Procrustes analysis: Gower (1975), Ten Berge (1986, 1988).

Generalized canonical correlation methods. SMP = Horst's method (1961), via SVD. SMP = soft modeling approach, mode B (Hanafi, 2001).

Multivariate Eigenvalue Problem: Watterson and Chu (1993).