On the Use of Sparse Direct Solver in a Projection Method for Generalized Eigenvalue Problems Using Numerical Integration Takamitsu Watanabe and Yusaku Yamamoto Dept. of Computational Science & Engineering Nagoya University

Outline Background Objective of our study Projection method for generalized eigenvalue problems using numerical integration Application of the sparse direct solver Numerical results Conclusion

Background Generalized eigenvalue problems arise in quantum chemistry and structural engineering: given A, B ∈ R^(n×n), find λ ∈ R and x ∈ R^n (x ≠ 0) such that Ax = λBx. Problem characteristics: A and B are large and sparse; A is real symmetric and B is s.p.d., so the eigenvalues are real; eigenvalues in a specified interval (e.g., around the HOMO and LUMO levels) are often needed. [Figure: eigenvalues on the real axis and the specified interval.]
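To make the problem statement concrete, here is a minimal SciPy sketch (illustrative values only, not taken from the slides) of a small symmetric-definite generalized eigenvalue problem in which only the eigenvalues inside a specified interval are requested; the interval endpoints are arbitrary, and subset_by_value needs SciPy 1.5 or later.

```python
import numpy as np
from scipy.linalg import eigh

# Small symmetric A and s.p.d. B standing in for the large sparse matrices
# of the application; the values are arbitrary illustrative data.
rng = np.random.default_rng(0)
M = rng.standard_normal((6, 6))
A = (M + M.T) / 2                   # real symmetric
B = M @ M.T + 6 * np.eye(6)         # symmetric positive definite

# All eigenvalues of A x = lambda B x (real, since A = A^T and B is s.p.d.)
lam_all = eigh(A, B, eigvals_only=True)

# Only the eigenvalues lying in a specified interval [a, b]
a, b = -0.5, 0.5
lam_in = eigh(A, B, eigvals_only=True, subset_by_value=[a, b])
print(lam_all)
print(lam_in)
```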

Background (cont'd) A projection method using numerical integration: Sakurai and Sugiura, "A projection method for generalized eigenvalue problems using numerical integration", J. Comput. Appl. Math. (2003). The method reduces the original problem to a small generalized eigenvalue problem associated with a specified region in the complex plane; by solving the small problem, the eigenvalues lying in the region are obtained. The main part of the computation is solving multiple independent linear simultaneous equations, so the method is well suited for parallel computation. [Figure: the original problem is reduced to a small generalized eigenvalue problem within the region.]

Objective of our study Previous approach: solve the linear simultaneous equations by an iterative method. The number of iterations needed for convergence differs from one set of equations to another, which brings about load imbalance among the processors and decreases parallel efficiency. Our study: solve the linear simultaneous equations by a sparse direct solver without pivoting. Load balance is improved, since the computational time is the same for every set of equations.

Projection method for generalized eigenvalue problems using numerical integration Suppose that the pencil (A, B) has n distinct eigenvalues λ_1, …, λ_n and that we need the m eigenvalues λ_1, …, λ_m that lie inside a closed curve Γ. Using two arbitrary complex vectors u and v, define the complex function f(z) = u^H (zB − A)^(−1) v. Then f(z) can be expanded as f(z) = Σ_{k=1..n} ν_k / (z − λ_k) + g(z), where g(z) is a polynomial in z. [Figure: eigenvalues λ_1, …, λ_m inside the closed curve Γ; λ_{m+1}, λ_{m+2}, … outside.]

Projection method for generalized eigenvalue problems using numerical integration (cont'd) Further define the moments μ_k = (1/(2πi)) ∮_Γ (z − γ)^k f(z) dz, k = 0, 1, …, 2m−1, where γ is a fixed point (taken below as the center of the circular path), and the two m×m Hankel matrices H_m = [μ_{i+j−2}]_{i,j=1..m} and H_m^< = [μ_{i+j−1}]_{i,j=1..m}. Theorem: λ_1 − γ, …, λ_m − γ are the m eigenvalues of the pencil (H_m^<, H_m), i.e., the roots of det(H_m^< − ζ H_m) = 0. The original problem has thus been reduced to a small m×m problem through a contour integral.

Projection method for generalized eigenvalue problems using numerical integration (cont'd) Path of integration: set the path of integration Γ to a circle with center γ and radius ρ, and approximate the contour integral by the trapezoidal rule with N equally spaced sample points z_j on the circle. Computation of the moments: the function values f(z_j) = u^H (z_j B − A)^(−1) v have to be computed for each z_j, so the solution of N independent linear simultaneous equations (z_j B − A) y_j = v is necessary (N = the number of sample points on the circle).
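As a concrete illustration of the procedure described on the last three slides, here is a dense toy sketch in Python (an assumption-laden illustration following the standard form of the Sakurai-Sugiura method, not the authors' implementation): it samples f(z) on a circle with the trapezoidal rule, forms the moments and Hankel matrices, and solves the small pencil. The solve inside the loop is exactly where the sparse direct solver (or, previously, an iterative solver) is applied.

```python
import numpy as np
from scipy.linalg import eigvals, eigh

def ss_eigenvalues(A, B, gamma, rho, m, N=64, seed=None):
    """Toy contour-integral sketch: approximate the eigenvalues of
    A x = lambda B x inside the circle |z - gamma| = rho, assuming exactly
    m eigenvalues lie inside and using N quadrature points."""
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    u = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    v = rng.standard_normal(n) + 1j * rng.standard_normal(n)

    # Quadrature points on the circle: z_j = gamma + rho * exp(i * theta_j).
    theta = 2.0 * np.pi * (np.arange(N) + 0.5) / N
    z = gamma + rho * np.exp(1j * theta)

    # f(z_j) = u^H (z_j B - A)^{-1} v : one linear system per quadrature point.
    # This solve is where a sparse direct solver would be used.
    f = np.empty(N, dtype=complex)
    for j in range(N):
        y = np.linalg.solve(z[j] * B - A, v)
        f[j] = np.vdot(u, y)

    # Moments mu_k ~ (1/N) * sum_j (z_j - gamma)^(k+1) * f(z_j), k = 0..2m-1.
    w = z - gamma
    mu = np.array([np.mean(w ** (k + 1) * f) for k in range(2 * m)])

    # Hankel matrices H = [mu_{i+j}] and Hs = [mu_{i+j+1}], i, j = 0..m-1.
    H = np.array([[mu[i + j] for j in range(m)] for i in range(m)])
    Hs = np.array([[mu[i + j + 1] for j in range(m)] for i in range(m)])

    # Eigenvalues of the small pencil (Hs, H) approximate lambda_k - gamma.
    return gamma + eigvals(Hs, H)

# Tiny self-check on random dense matrices (illustrative values only):
# put the circle around the three eigenvalues ref[5], ref[6], ref[7].
rng = np.random.default_rng(1)
M = rng.standard_normal((20, 20))
A = (M + M.T) / 2
B = M @ M.T + 20 * np.eye(20)
ref = eigh(A, B, eigvals_only=True)              # reference eigenvalues, sorted
lo, hi = 0.5 * (ref[4] + ref[5]), 0.5 * (ref[7] + ref[8])
approx = ss_eigenvalues(A, B, gamma=0.5 * (lo + hi), rho=0.5 * (hi - lo), m=3, seed=2)
print(ref[5:8])
print(np.sort(approx.real))
```

The agreement with the reference values improves with N and with the distance of the remaining eigenvalues from the contour.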

Application of the sparse direct solver For a sparse s.p.d. matrix, a sparse direct solver provides an efficient way of solving the linear simultaneous equations. We adopt this approach by extending the sparse direct solver to deal with complex symmetric matrices, because the coefficient matrix z_j B − A is a sparse complex symmetric matrix: A and B are sparse real symmetric matrices and z_j is a complex number.

The sparse direct solver Characteristics: it reduces the computational work and memory requirements of the Cholesky factorization by exploiting the sparsity of the matrix; stability is guaranteed when the matrix is s.p.d.; and efficient parallelization techniques are available. The solver proceeds in four phases: (1) ordering: find a permutation of the rows/columns that reduces the computational work and memory requirements; (2) symbolic factorization: estimate the computational work and memory requirements and prepare the data structures to store the Cholesky factor; (3) numerical (Cholesky) factorization; (4) triangular solution.
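The effect of the ordering phase can be illustrated with SciPy's SuperLU interface. Caveats: SuperLU performs an LU factorization with partial pivoting rather than the pivot-free Cholesky-type factorization described above, and the column orderings named below are SuperLU's options (one of them a minimum-degree variant), not the MD/ND orderings used in the experiments; the matrix is random illustrative data.

```python
from scipy.sparse import random as sprandom, identity
from scipy.sparse.linalg import splu

# Random sparse symmetric test matrix (illustrative only).
n = 500
R = sprandom(n, n, density=0.01, random_state=0, format="csc")
A = (R + R.T + 20.0 * identity(n)).tocsc()

# Fill-in (nonzeros in the computed factors) with and without
# a fill-reducing ordering.
for perm in ("NATURAL", "MMD_AT_PLUS_A", "COLAMD"):
    lu = splu(A, permc_spec=perm)
    print(perm, lu.L.nnz + lu.U.nnz)
```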

Extension of the sparse direct solver to complex symmetric matrices Algorithm: the extension is straightforward, using the Cholesky factorization for complex symmetric matrices; the advantages of reduced computational work, reduced memory requirements and parallelizability carry over. Accuracy and stability: theoretically, pivoting is necessary when factorizing complex symmetric matrices. Since our algorithm does not incorporate pivoting, accuracy and stability are not guaranteed. We therefore examine them experimentally by comparing the results with those obtained using Gaussian elimination with partial pivoting (GEPP).
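As a dense illustration of the factorization being extended (a minimal sketch assuming a well-conditioned matrix, not the sparse implementation itself): the complex symmetric "Cholesky" factorization A = L L^T uses the plain transpose instead of the conjugate transpose and requires complex square roots; no pivoting is performed, which is exactly why accuracy has to be checked experimentally.

```python
import numpy as np
from scipy.linalg import solve_triangular

def complex_symmetric_cholesky(A):
    """Factor a complex symmetric matrix as A = L @ L.T.

    Plain transpose (no conjugation) and no pivoting, so stability is not
    guaranteed in general."""
    A = np.asarray(A, dtype=complex)
    n = A.shape[0]
    L = np.zeros((n, n), dtype=complex)
    for j in range(n):
        s = A[j, j] - np.sum(L[j, :j] ** 2)      # squares, not |.|^2
        L[j, j] = np.sqrt(s)                     # complex square root
        L[j + 1:, j] = (A[j + 1:, j] - L[j + 1:, :j] @ L[j, :j]) / L[j, j]
    return L

def solve_complex_symmetric(A, b):
    """Solve A x = b via L L^T and two triangular solves."""
    L = complex_symmetric_cholesky(A)
    y = solve_triangular(L, b, lower=True)
    return solve_triangular(L.T, y, lower=False)

# Toy check on a matrix of the form z*B - A with real symmetric A, s.p.d. B
# and a non-real shift z; for such shifts every leading principal minor of
# z*B - A is nonzero, so the factorization cannot hit a zero pivot.
rng = np.random.default_rng(0)
M = rng.standard_normal((8, 8))
A = (M + M.T) / 2
B = M @ M.T + 8 * np.eye(8)
z = 0.3 + 0.7j
C = z * B - A                                    # complex symmetric
b = rng.standard_normal(8) + 1j * rng.standard_normal(8)
x = solve_complex_symmetric(C, b)
print(np.linalg.norm(C @ x - b))                 # residual; should be small
```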

Numerical results Matrices used in the experiments:

matrix      N      NNZ      explanation
BCSSTK12    1,473  17,857   ore car, consistent mass (Harwell-Boeing library)
BCSSTK13    2,003  42,943   fluid flow, generalized eigenvalues (Harwell-Boeing library)
FMO         …      …,030    fragment molecular orbital method

For each matrix, we solve the equations with the sparse direct solver (with MD and ND ordering) and with GEPP, and compare the computational time and the accuracy of the eigenvalues.
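A rough sketch of how a comparable timing experiment could be run with SciPy is given below. Assumptions to note: the file path is hypothetical (the matrix would have to be downloaded, e.g. in Matrix Market format from the SuiteSparse Matrix Collection), B is replaced by the identity for simplicity, and SciPy's SuperLU (pivoted LU) stands in for both the pivot-free sparse solver and the MD/ND orderings of the original experiments, so the numbers will not reproduce the table on the next slide.

```python
import time
import numpy as np
from scipy.io import mmread
from scipy.sparse import identity
from scipy.sparse.linalg import splu

# Hypothetical local path to BCSSTK12 in Matrix Market format.
A = mmread("bcsstk12.mtx").tocsc()
n = A.shape[0]

# One shifted system z*B - A, with B taken to be the identity for simplicity.
z = 0.3 + 0.7j
C = (z * identity(n, format="csc") - A).tocsc()
b = np.ones(n, dtype=complex)

t0 = time.perf_counter()
x_dense = np.linalg.solve(C.toarray(), b)        # dense LAPACK solve (GEPP)
t1 = time.perf_counter()
x_sparse = splu(C).solve(b)                      # sparse direct factorize + solve
t2 = time.perf_counter()

print("dense LAPACK :", t1 - t0, "s")
print("sparse direct:", t2 - t1, "s")
print("max difference:", np.abs(x_dense - x_sparse).max())
```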

Computational time Computational time (sec.) for one set of linear simultaneous equations, and speedup relative to LAPACK (PowerPC G5, 2.0 GHz):

matrix      LAPACK (GEPP)   sparse solver (MD)   sparse solver (ND)
BCSSTK12    … (1x)          0.017 (144x)         0.021 (116x)
BCSSTK13    … (1x)          0.36 (17x)           0.43 (14x)
FMO         5.86 (1x)       2.93 (2.0x)          3.51 (1.7x)

The sparse direct solver is two to over one hundred times faster than GEPP, depending on the nonzero structure.

Accuracy of the eigenvalues (BCSSTK12) Example of an interval containing 4 eigenvalues. [Figures: distribution of the eigenvalues and the specified interval; relative errors in the eigenvalues for each algorithm (N = 64).] The errors were of the same order for all three solvers. Also, the growth factor for the sparse solver was O(1).

Accuracy of the eigenvalues (BCSSTK13) Example of an interval containing 3 eigenvalues. [Figures: distribution of the eigenvalues and the specified interval; relative errors in the eigenvalues for each algorithm (N = 64).] The errors were of the same order for all three solvers.

Accuracy of the eigenvalues (FMO) Example of an interval containing 4 eigenvalues. [Figures: distribution of the eigenvalues and the specified interval; relative errors in the eigenvalues for each algorithm (N = 64).] The errors were of the same order for all three solvers.

Conclusion Summary of this study We applied a complex symmetric version of the sparse direct solver to a projection method for generalized eigenvalue problems using numerical integration. The sparse solver succeeded in solving the linear simultaneous equations stably and accurately, producing eigenvalues that are as accurate as those obtained by GEPP. Future work Apply our algorithm to larger matrices arising from quantum chemistry applications. Construct a hybrid method that uses an iterative solver when the growth factor becomes too large. Parallelize the sparse solver to enable more than N processors to be used.