Algorithms for a large sparse nonlinear eigenvalue problem Yusaku Yamamoto Dept. of Computational Science & Engineering Nagoya University.


Outline
- Introduction
- Algorithms: multivariate Newton's method; use of the linear eigenvalue of smallest modulus; use of the signed smallest singular value
- Numerical experiments
- Conclusion

Introduction
Nonlinear eigenvalue problem: Let A(λ) be an n-by-n matrix whose elements depend on a scalar parameter λ. In the nonlinear eigenvalue problem, we seek a value of λ for which there exists a nonzero vector x such that A(λ)x = 0. λ and x are called the (nonlinear) eigenvalue and eigenvector, respectively.
Examples:
- A(λ) = A − λI : linear eigenvalue problem
- A(λ) = λ²M + λC + K : quadratic eigenvalue problem
- A(λ) = (e^λ − 1)A₁ + λA₂ − A₃ : general nonlinear eigenvalue problem

Applications of the nonlinear eigenvalue problem
- Structural mechanics. Damped system: (λ²M + λC + K)x = 0, where M is the mass matrix, C the damping matrix and K the stiffness matrix.
- Electronic structure calculation. Kohn-Sham equation H(ρ)ψ = εψ, where the Kohn-Sham Hamiltonian H(ρ) itself depends on the electron density built from ψ.
- Theoretical fluid dynamics. Computation of the scaling exponent in turbulent flow.

Solution of a quadratic eigenproblem
Transformation to a linear generalized eigenproblem: the quadratic eigenproblem (λ²M + λC + K)x = 0 can be transformed into a linear generalized eigenproblem of twice the size. In general, a polynomial eigenproblem of degree k can be transformed into a linear generalized eigenproblem of k times the size, so efficient algorithms for linear eigenproblems (QR, Krylov subspace methods, etc.) can be applied.
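As a concrete sketch of this linearization step (not from the slides; the helper name and the 1×1 toy matrices are hypothetical), the quadratic eigenproblem can be rewritten as Az = λBz with z = [x; λx]:

```python
import numpy as np

def linearize_qep(M, C, K):
    """Companion linearization of (lam^2 M + lam C + K) x = 0
    as A z = lam B z with z = [x; lam x], twice the original size."""
    n = M.shape[0]
    I, Z = np.eye(n), np.zeros((n, n))
    A = np.block([[Z, I], [-K, -C]])
    B = np.block([[I, Z], [Z, M]])
    return A, B

# Hypothetical 1x1 example: lam^2 + 3 lam + 2 = 0 has roots -2 and -1.
M, C, K = np.eye(1), 3 * np.eye(1), 2 * np.eye(1)
A, B = linearize_qep(M, C, K)
# B is invertible here, so the generalized problem reduces to an ordinary one.
lams = np.sort(np.linalg.eigvals(np.linalg.solve(B, A)).real)
print(lams)  # -> approximately [-2. -1.]
```

The first block row of A z = λ B z just states λx = λx; the second reproduces λ²Mx + λCx + Kx = 0.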

Our target problem
Computation of the scaling exponent of a passive scalar field in a turbulent flow:
- governing equation of the passive scalar,
- n-point correlation function ψₙ,
- scaling exponent ζₙ, defined by ψₙ(sx₁, sx₂, …, sxₙ) = s^ζₙ ψₙ(x₁, x₂, …, xₙ).
We are interested in computing ζₙ in the case n = 4.
(Figure: turbulent flow.)

Our target problem (cont'd)
The PDE satisfied by ψ₄ is Fψ₄ = 0. Folding the (unbounded) domain of (x₁, x₂, x₃, x₄) into a bounded one using the scaling law ψ₄(s, sξ₁, sξ₂, …, sξ₅) = ξ₁^ζ₄ ψ₄(s/ξ₁, s, sξ₂/ξ₁, …, sξ₅/ξ₁) and substituting the scaling form ψ₄ ∝ s^ζ₄ yields (ζ₄(ζ₄ − 1)A + ζ₄B + C)ψ₄ = 0, a quadratic eigenproblem in ζ₄. The nonlinearity comes from this folding of the domain.

Our target problem (cont'd)
Problem characteristics:
- A(λ) is large (n ~ 10⁵), sparse and nonsymmetric.
- The dependence of A(λ) on λ is fully nonlinear (it includes both exponential and polynomial terms in λ), but is analytic.
- Computation of A(λ) takes a long time.
- The smallest positive eigenvalue is sought.

Solution of a general nonlinear eigenproblem
Necessary and sufficient condition: there exists x ≠ 0 with A(λ)x = 0 if and only if det(A(λ)) = 0.
A simple approach (Gat et al., 1997; the case n = 3):
- Find the solution λ* of det(A(λ)) = 0 by Newton's method.
- Find the eigenvector x as the null space of the constant matrix A(λ*).
Difficulty: computing det(A(λ)) requires roughly the same cost as an LU decomposition of A(λ), so this is not efficient when A(λ) is large and sparse.
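The determinant-based root finding can be sketched as follows (a hypothetical toy example, not the authors' code; a secant update is used here instead of Newton's method so that no derivative of the determinant is needed):

```python
import numpy as np

def det_root_secant(A, lam0, lam1, tol=1e-12, maxit=50):
    """Find a root of f(lam) = det(A(lam)). Each evaluation costs an
    LU-sized factorization of A(lam), which is why this approach is
    inefficient for large sparse matrices."""
    f0, f1 = np.linalg.det(A(lam0)), np.linalg.det(A(lam1))
    for _ in range(maxit):
        lam0, lam1 = lam1, lam1 - f1 * (lam1 - lam0) / (f1 - f0)
        f0, f1 = f1, np.linalg.det(A(lam1))
        if abs(f1) < tol:
            break
    return lam1

# Hypothetical linear test case A(lam) = A0 - lam*I: det vanishes at the
# ordinary eigenvalues of A0, here (5 +/- sqrt(5))/2.
A0 = np.array([[2.0, 1.0], [1.0, 3.0]])
lam = det_root_secant(lambda t: A0 - t * np.eye(2), 1.0, 1.5)
```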

Approach based on the multivariate Newton's method
Basic idea: regard A(λ)x = 0 as a system of nonlinear simultaneous equations in the n + 1 variables λ and x, and solve it by Newton's method. Since there are only n equations, we add a normalization condition vᵀx = 1 using some vector v.
Equations to be solved: A(λ)x = 0 and vᵀx = 1.
Iteration formula: at each step, solve the bordered linear system with coefficient matrix [A(λ⁽ˡ⁾), A′(λ⁽ˡ⁾)x⁽ˡ⁾; vᵀ, 0] and right-hand side [−A(λ⁽ˡ⁾)x⁽ˡ⁾; 1 − vᵀx⁽ˡ⁾] for the update (Δx, Δλ).

Multivariate Newton's method (cont'd)
Advantages:
- Each iteration consists of solving a system of linear simultaneous equations, which is much cheaper than computing det(A(λ)).
- Convergence is quadratic if the initial values λ⁽⁰⁾ and x⁽⁰⁾ are sufficiently close to the solution.
Disadvantages:
- The iterates may converge to unwanted eigenpairs, or fail to converge, unless both λ⁽⁰⁾ and x⁽⁰⁾ are sufficiently good.
- It is in general difficult to find a good initial value x⁽⁰⁾ for the eigenvector.
- A′(λ) is necessary in addition to A(λ).
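A minimal dense sketch of this Newton iteration (the function names, the 2×2 test matrix and the initial guesses are hypothetical; the slides' matrices are large and sparse, so a sparse solver would replace the dense solve):

```python
import numpy as np

def nonlinear_eig_newton(A, dA, lam0, x0, v, tol=1e-12, maxit=50):
    """Multivariate Newton on F(x, lam) = [A(lam) x; v^T x - 1] = 0.
    A and dA are callables returning A(lam) and A'(lam)."""
    lam, x = lam0, x0 / (v @ x0)          # enforce v^T x = 1 initially
    for _ in range(maxit):
        Al = A(lam)
        F = np.concatenate([Al @ x, [v @ x - 1.0]])
        if np.linalg.norm(F) < tol:
            break
        # Bordered Jacobian [[A(lam), A'(lam) x], [v^T, 0]]
        J = np.block([[Al, (dA(lam) @ x)[:, None]],
                      [v[None, :], np.zeros((1, 1))]])
        d = np.linalg.solve(J, -F)
        x, lam = x + d[:-1], lam + d[-1]
    return lam, x

# Hypothetical test case: the linear problem A(lam) = A0 - lam*I, whose
# nonlinear eigenvalues are just the ordinary eigenvalues of A0.
A0 = np.array([[2.0, 1.0], [1.0, 3.0]])
lam, x = nonlinear_eig_newton(lambda t: A0 - t * np.eye(2),
                              lambda t: -np.eye(2),
                              lam0=1.0, x0=np.array([1.0, 0.0]),
                              v=np.array([1.0, 1.0]))
```

Note that the Jacobian explicitly contains A′(λ)x, illustrating the last disadvantage above.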

Approaches based on the linear eigenvalue / singular value of smallest modulus
Definitions:
- For a fixed λ, we call μ a linear eigenvalue of A(λ) if there exists a nonzero vector y such that A(λ)y = μy.
- For a fixed λ, we call σ > 0 a linear singular value of A(λ) if σ² is a linear eigenvalue of A(λ)ᵀA(λ).
A linear eigenvalue / singular value is simply an eigenvalue / singular value of A(λ) viewed as a constant matrix; μ and σ are functions of λ.
Necessary and sufficient conditions: there exists x ≠ 0 with A(λ)x = 0 ⇔ det(A(λ)) = 0 ⇔ A(λ) has a zero linear eigenvalue ⇔ A(λ) has a zero linear singular value.

Approaches based on the linear eigenvalue / singular value of smallest modulus (cont'd)
A possible approach:
- Let μ(λ) be the linear eigenvalue of smallest modulus of A(λ), and σ(λ) the smallest linear singular value of A(λ).
- Find the solution λ* of μ(λ) = 0 or σ(λ) = 0.
- Find the eigenvector x as the null space of the constant matrix A(λ*).
Advantages:
- Only an initial value for λ is required.
- μ(λ) and σ(λ) can be computed much more cheaply than det(A(λ)), using the Lanczos, Arnoldi, or Jacobi-Davidson methods.
- A′(λ) is not necessary if the secant method is used to solve for λ.

Approach based on the linear eigenvalue of smallest modulus
Algorithm based on the secant method:
- Set two initial values λ⁽⁰⁾ and λ⁽¹⁾.
- Repeat the secant update λ⁽ˡ⁺¹⁾ = λ⁽ˡ⁾ − μ(λ⁽ˡ⁾)(λ⁽ˡ⁾ − λ⁽ˡ⁻¹⁾)/(μ(λ⁽ˡ⁾) − μ(λ⁽ˡ⁻¹⁾)) until |μ(λ⁽ˡ⁾)| becomes sufficiently small.
- Find the eigenvector x as the null space of the constant matrix A(λ⁽ˡ⁾).
Difficulty: when A(λ) is nonsymmetric, computing μ(λ) is expensive, though still much less expensive than computing det(A(λ)).
(Figure: secant iterates λ⁽ˡ⁻²⁾, λ⁽ˡ⁻¹⁾, λ⁽ˡ⁾ on the graph of μ(λ).)
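The secant iteration on μ(λ) can be sketched as follows (hypothetical dense toy example; a real implementation of this algorithm would obtain μ(λ) from a sparse eigensolver such as Arnoldi, not from dense eigvals):

```python
import numpy as np

def smallest_modulus_eig(A):
    """mu(lam): the linear eigenvalue of A of smallest modulus."""
    w = np.linalg.eigvals(A)
    return w[np.argmin(np.abs(w))]

def secant_on_mu(A, lam0, lam1, tol=1e-10, maxit=50):
    """Secant iteration driving the smallest-modulus linear eigenvalue
    of A(lam) to (numerical) zero."""
    f0, f1 = smallest_modulus_eig(A(lam0)), smallest_modulus_eig(A(lam1))
    for _ in range(maxit):
        lam0, lam1 = lam1, lam1 - f1 * (lam1 - lam0) / (f1 - f0)
        f0, f1 = f1, smallest_modulus_eig(A(lam1))
        if abs(f1) < tol:
            break
    return lam1

# Hypothetical test: A(lam) = A0 - lam*I, root at (5 - sqrt(5))/2 ~ 1.381966.
A0 = np.array([[2.0, 1.0], [1.0, 3.0]])
lam = secant_on_mu(lambda t: A0 - t * np.eye(2), 1.0, 1.6)
```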

Approach based on the smallest linear singular value
Possible advantage: for nonsymmetric matrices, singular values can be computed much more easily than eigenvalues.
Problem: the smallest linear singular value σ(λ) of A(λ) is defined as the positive square root of a linear eigenvalue of A(λ)ᵀA(λ). Hence σ(λ) is not smooth at σ(λ) = 0, and the secant method cannot be applied.
Solution: modify the definition of σ(λ) so that it is smooth near σ(λ) = 0, using either the analytical singular value or the signed smallest singular value.
(Figure: σ(λ) has a kink where it touches zero.)

Analytical singular value decomposition
Theorem 1 (Bunse-Gerstner et al., 1991): Let the elements of A(λ) be analytic functions of λ. Then there exist orthogonal matrices U′(λ) and V′(λ) and a diagonal matrix Σ′(λ) = diag(σ₁′(λ), σ₂′(λ), …, σₙ′(λ)) whose elements are analytic functions of λ and which satisfy A(λ) = U′(λ)Σ′(λ)V′(λ)ᵀ. This is called the analytical singular value decomposition of A(λ).
Notes:
- Analytical singular values may be negative.
- In general, σ₁′(λ) > σ₂′(λ) > … > σₙ′(λ) does not hold.
- Analytical singular values are expensive to compute, since they require the solution of ODEs starting from some initial point λ₀.
(Figure: graphs of σ₁′(λ), σ₂′(λ), σ₃′(λ), some crossing zero.)

Signed smallest singular value
Definition: Let vₙ and uₙ be the right and left singular vectors of A(λ) corresponding to the smallest linear singular value σₙ. Then we call τₙ = σₙ sgn(vₙᵀuₙ) the signed smallest singular value of A(λ).
Theorem 2: Assume that σₙ(λ) is simple and |vₙ(λ)ᵀuₙ(λ)| ≠ 0 in an interval. Then the signed smallest singular value τₙ(λ) = σₙ(λ) sgn(vₙ(λ)ᵀuₙ(λ)) is an analytic function of λ in this interval.
Proof: From the uniqueness of the SVD, uₙ(λ) = ±uₙ′(λ) and vₙ(λ) = ±vₙ′(λ), with the same sign. Hence, using A(λ)vₙ(λ) = σₙ(λ)uₙ(λ),
τₙ(λ) = σₙ(λ) sgn(vₙᵀ(λ)uₙ(λ)) = σₙ(λ)vₙᵀ(λ)uₙ(λ) / |vₙᵀ(λ)uₙ(λ)| = vₙᵀ(λ)A(λ)vₙ(λ) / |vₙᵀ(λ)uₙ(λ)| = vₙ′ᵀ(λ)A(λ)vₙ′(λ) / |vₙ′ᵀ(λ)uₙ′(λ)|.
The right-hand side is clearly analytic when |vₙ(λ)ᵀuₙ(λ)| ≠ 0. ∎

Approach based on the signed smallest singular value
Characteristics of the signed smallest singular value τₙ(λ) = σₙ(λ) sgn(vₙ(λ)ᵀuₙ(λ)):
- τₙ(λ) = 0 ⇔ σₙ(λ) = 0.
- τₙ(λ) is an analytic function of λ under suitable assumptions.
- It is easy to compute, requiring only σₙ(λ), vₙ(λ) and uₙ(λ).
Algorithm based on the secant method:
- Set two initial values λ⁽⁰⁾ and λ⁽¹⁾.
- Repeat the secant update on τₙ(λ) until |τₙ(λ⁽ˡ⁾)| becomes sufficiently small.
- Find the eigenvector x as the null space of the constant matrix A(λ⁽ˡ⁾).
(Figure: secant iterates λ⁽ˡ⁻²⁾, λ⁽ˡ⁻¹⁾, λ⁽ˡ⁾ on the graph of τₙ(λ).)
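A sketch of this signed-smallest-singular-value iteration (hypothetical dense toy example; names are illustrative, and a large sparse problem would use an iterative SVD or Lanczos-type solver instead of a full dense SVD):

```python
import numpy as np

def signed_smallest_sv(A):
    """sigma_n * sgn(v_n^T u_n): the smallest singular value of A, signed
    so that it crosses zero smoothly instead of bouncing off it."""
    U, s, Vt = np.linalg.svd(A)
    u, v = U[:, -1], Vt[-1, :]     # singular vectors for the smallest sigma
    return s[-1] * np.sign(v @ u)

def secant_on_signed_sv(A, lam0, lam1, tol=1e-10, maxit=50):
    f0, f1 = signed_smallest_sv(A(lam0)), signed_smallest_sv(A(lam1))
    for _ in range(maxit):
        lam0, lam1 = lam1, lam1 - f1 * (lam1 - lam0) / (f1 - f0)
        f0, f1 = f1, signed_smallest_sv(A(lam1))
        if abs(f1) < tol:
            break
    return lam1

# Hypothetical test: A(lam) = A0 - lam*I, root at (5 - sqrt(5))/2 ~ 1.381966.
A0 = np.array([[2.0, 1.0], [1.0, 3.0]])
lam = secant_on_signed_sv(lambda t: A0 - t * np.eye(2), 1.0, 1.6)
# The eigenvector is the null-space vector of A(lam), i.e. the right
# singular vector for the (now essentially zero) smallest singular value.
x = np.linalg.svd(A0 - lam * np.eye(2))[2][-1]
```

Note that sgn(vₙᵀuₙ) is invariant under the simultaneous sign flip of uₙ and vₙ allowed by the SVD, so the function being driven to zero is well defined.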

Numerical experiments
Test problem: computation of the scaling exponent in turbulent flow.
- Matrix sizes: 35,000 and 100,000.
- We seek the smallest positive (nonlinear) eigenvalue; it is known to lie in [0, 4], but no estimate of the (nonlinear) eigenvector is available.
Computational environment: Fujitsu PrimePower HPC2500 (16 CPUs).

Algorithm I: approach based on the linear eigenvalue of smallest modulus
Result for n = 35,000:
- Nonlinear eigenvalue:
- Computational time: 35,520 sec. for each value of λ.
- Secant iterations: 4.
(Figure: graph of μ(λ) with the computed iterates.)

Algorithm II: approach based on the signed smallest singular value
Result for n = 35,000:
- Nonlinear eigenvalue:
- Computational time: 2,005 sec. for each value of λ (1/18 of Algorithm I).
- Secant iterations: 4.
Result for n = 100,000:
- Computational time: 16,200 sec. for each value of λ.
- Could not be computed with Algorithm I because the computational time was too long.
(Figure: graph of τₙ(λ) with the computed iterates.)

Conclusion
Summary of this study: we proposed an algorithm for large sparse nonlinear eigenproblems based on the signed smallest singular value. The algorithm proved much faster than the approach based on the linear eigenvalue of smallest modulus.
Future work: application of the algorithm to various other nonlinear eigenproblems.