Accurate symmetric rank revealing and eigen decompositions of structured matrices
Froilán M. Dopico, Universidad Carlos III de Madrid (Spain)
Joint work with Plamen Koev and Juan M. Molera
IWASEP VI, May 2006

Slide 2: Rank Revealing Decompositions (RRD)
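The formulas on this slide are sketched below under assumed standard notation (following Demmel et al. (1999)); the exact notation of the original slide may differ.

```latex
% Assumed standard definition of a rank revealing decomposition (RRD):
% a factorization
\[
  A \;=\; X\, D\, Y^{T}, \qquad
  A \in \mathbb{R}^{m \times n}, \quad
  X \in \mathbb{R}^{m \times r}, \quad
  D \in \mathbb{R}^{r \times r}, \quad
  Y \in \mathbb{R}^{n \times r}, \quad r = \operatorname{rank}(A),
\]
% where D is diagonal and nonsingular and X, Y are well conditioned
% (their condition numbers are moderate); D may be arbitrarily ill scaled.
```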

Slide 3: RRDs computed with high relative accuracy
The computed factors must satisfy relative error bounds of order ε, where ε is the machine precision.
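The accuracy requirements on the computed factors, written out under the same assumed notation (these are the standard conditions used by Demmel et al. (1999); the hats denote computed quantities):

```latex
% Computed factors \hat{X}, \hat{D}, \hat{Y} of an RRD A = X D Y^T are called
% accurate if (epsilon = machine precision):
\[
  \frac{\lVert \hat{X} - X \rVert}{\lVert X \rVert} = O(\epsilon), \qquad
  \frac{\lVert \hat{Y} - Y \rVert}{\lVert Y \rVert} = O(\epsilon), \qquad
  \frac{\lvert \hat{d}_{ii} - d_{ii} \rvert}{\lvert d_{ii} \rvert} = O(\epsilon)
  \quad \text{for all } i .
\]
```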

Slide 4: Main application of accurate RRDs
Given an accurate RRD, there are O(n^3) algorithms to compute the SVD to high relative accuracy (Demmel et al. (1999)), where the error bounds depend only on ε and the condition numbers of the well conditioned factors X and Y, not on the condition number of A.
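A minimal sketch of the structure of such an algorithm, assuming an RRD A = X·diag(d)·Yᵀ. The function name is illustrative, and np.linalg.svd is used as a stand-in for the one-sided Jacobi SVD that the actual Demmel et al. algorithm requires in order to preserve high relative accuracy.

```python
# Hedged sketch (not the talk's code): the overall structure of an
# RRD-to-SVD algorithm in the spirit of Demmel et al. (1999).
import numpy as np
from scipy.linalg import qr

def svd_from_rrd(X, d, Y):
    """Given an RRD A = X @ np.diag(d) @ Y.T (square nonsingular case),
    return (U, sigma, V) with A ~ U @ np.diag(sigma) @ V.T."""
    # Step 1: QR with column pivoting of X*diag(d); pivoting keeps R graded.
    Q, R, piv = qr(X * d, pivoting=True)          # (X*d)[:, piv] = Q @ R
    # Step 2: form W = R @ P.T @ Y.T explicitly; P.T @ Y.T == Y[:, piv].T.
    W = R @ Y[:, piv].T
    # Step 3: SVD of W (one-sided Jacobi in the accurate algorithm).
    Uw, sigma, Vt = np.linalg.svd(W)
    # A = Q @ W = (Q @ Uw) @ diag(sigma) @ Vt, so:
    return Q @ Uw, sigma, Vt.T

# Illustrative usage with hypothetical well conditioned factors and a
# strongly graded diagonal:
rng = np.random.default_rng(0)
n = 6
X = np.eye(n) + 0.1 * rng.standard_normal((n, n))
Y = np.eye(n) + 0.1 * rng.standard_normal((n, n))
d = 10.0 ** -np.arange(0, 3 * n, 3)               # 1, 1e-3, ..., 1e-15
U, sigma, V = svd_from_rrd(X, d, Y)
```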

Slide 5: How to compute accurate RRDs
Rank revealing property: in practice, RRDs are obtained from Gaussian elimination with complete pivoting (GECP); guaranteed RRDs by Pan (2000) and by Miranian and Gu (2003).
Accuracy: only for matrices with special structures.
Demmel: Cauchy, scaled Cauchy, Vandermonde.
Demmel and Veselić: well scaled positive definite matrices.
Demmel and Koev: M-matrices, polynomial Vandermonde.
Demmel and Gragg: acyclic matrices (which include bidiagonal)…
ACCURACY IN A FINITE COMPUTATION (GECP) GUARANTEES ACCURACY IN THE SVD.
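A minimal sketch of GECP organized so that its output is returned in RRD form, assuming a square nonsingular input; the function name and interface are illustrative, not from the talk.

```python
# Hedged sketch: Gaussian elimination with complete pivoting (GECP), with the
# output organized as A[np.ix_(p1, p2)] ~ L @ np.diag(d) @ U, which in
# practice is rank revealing because L and U are unit triangular with
# offdiagonal entries bounded by 1 in modulus.
import numpy as np

def gecp_rrd(A):
    """GECP of a square nonsingular A, returned as (p1, L, d, U, p2)."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    p1 = np.arange(n)                      # accumulated row permutation
    p2 = np.arange(n)                      # accumulated column permutation
    for k in range(n - 1):
        # Complete pivoting: bring the largest remaining entry to (k, k).
        i, j = np.unravel_index(np.argmax(np.abs(A[k:, k:])), (n - k, n - k))
        A[[k, k + i], :] = A[[k + i, k], :]
        A[:, [k, k + j]] = A[:, [k + j, k]]
        p1[[k, k + i]] = p1[[k + i, k]]
        p2[[k, k + j]] = p2[[k + j, k]]
        # Elimination: store multipliers in column k, update Schur complement.
        A[k + 1:, k] /= A[k, k]
        A[k + 1:, k + 1:] -= np.outer(A[k + 1:, k], A[k, k + 1:])
    d = np.diag(A).copy()
    L = np.tril(A, -1) + np.eye(n)               # unit lower triangular
    U = np.triu(A, 1) / d[:, None] + np.eye(n)   # unit upper triangular
    return p1, L, d, U, p2

# Quick check on a random nonsingular matrix:
rng = np.random.default_rng(1)
B = rng.standard_normal((5, 5))
p1, L, d, U, p2 = gecp_rrd(B)
print(np.allclose(B[np.ix_(p1, p2)], L @ np.diag(d) @ U))   # expected: True
```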

Slide 6: Our goal: compute accurate symmetric RRDs
For symmetric matrices, compute a symmetric factorization A = X D X^T with X well conditioned and D diagonal and nonsingular.
We consider four classes of symmetric matrices:
Symmetric Cauchy and scaled Cauchy
Symmetric Vandermonde
Symmetric totally nonnegative
Symmetric graded matrices

Slide 7: Remarks on computing symmetric RRDs
For symmetric indefinite matrices, GECP may not preserve the symmetry.
Given an accurate symmetric RRD, eigenvalues and eigenvectors are computed with high relative accuracy by the O(n^3) symmetric J-orthogonal algorithm (Veselić-Slapničar).
The same accuracy can be obtained, given an accurate nonsymmetric RRD of a symmetric matrix, using the O(n^3) SSVD algorithm (FMD, Molera and Moro).
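A sketch of the implicit formulation the J-orthogonal approach starts from, assuming a symmetric RRD A = X D X^T with D diagonal and nonsingular (notation assumed):

```latex
% Assumed starting point (standard for the Veselic-Slapnicar method): from a
% symmetric RRD A = X D X^T, form
\[
  G = X\,\lvert D\rvert^{1/2}, \qquad
  J = \operatorname{diag}\bigl(\operatorname{sign}(d_{11}),\dots,\operatorname{sign}(d_{nn})\bigr),
  \qquad A = G\,J\,G^{T}.
\]
% One-sided hyperbolic (J-)Jacobi rotations applied to G then deliver the
% eigenvalues and eigenvectors of A with accuracy governed by the
% conditioning of X rather than that of A.
```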

Slide 8: Symmetric RRDs of Scaled Cauchy Matrices
Algorithm:
1. Compute accurate Schur complements (Gohberg, Kailath, Olshevsky) and (Demmel).
2. Use the diagonal pivoting method with the Bunch-Parlett complete pivoting strategy.
3. Orthogonal diagonalization of the 2 x 2 pivots.
CAREFUL ERROR ANALYSIS IS NEEDED, BUT IT IS GENERAL IF ACCURATE SCHUR COMPLEMENTS ARE AVAILABLE.
This approach has been used by M. J. Peláez and J. Moro to compute accurate SYMMETRIC RRDs of DSTU and TSC matrices.
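One way to see why accurate Schur complements are available in the Cauchy case: for an ordinary Cauchy matrix (notation assumed below; the scaled Cauchy case adds diagonal scalings) the Schur complement has an explicit, cancellation-free expression in the original data.

```latex
% Schur complement of an ordinary Cauchy matrix C_{ij} = 1/(x_i + y_j) after
% eliminating its first row and column (Gohberg-Kailath-Olshevsky style
% update; notation assumed here):
\[
  S_{ij} \;=\; C_{ij} - \frac{C_{i1}\,C_{1j}}{C_{11}}
        \;=\; \frac{(x_i - x_1)(y_j - y_1)}{(x_i + y_1)(x_1 + y_j)}\; C_{ij},
  \qquad i, j \ge 2 .
\]
% The Schur complement is again a (scaled) Cauchy matrix, and its entries are
% products and quotients of sums and differences of the original data, so
% each elimination step can be carried out with high relative accuracy.
```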

Slide 9: RRDs of Symmetric Vandermonde Matrices (I)
Let a be a real number and consider the symmetric Vandermonde matrix A with entries A_ij = a^((i-1)(j-1)).
Remarks:
1. The NONSYMMETRIC approach by Demmel does NOT respect the symmetry: SCALED CAUCHY = VANDERMONDE x DFT.
2. Schur complements of Vandermonde matrices are NOT Vandermonde.
3. Symmetric pivoting DESTROYS the symmetric Vandermonde structure.
4. We avoid pivoting and use EXACT formulas for A = LDL^T and its converse.

Slide 10: RRDs of Symmetric Vandermonde Matrices (II)
The LDL^T factorization can be accurately computed in 2n^2 flops.
CONDITION NUMBER OF L?

Slide 11: RRDs of Symmetric Vandermonde Matrices (III)
[Plots/tables for the condition number of L in the cases a large and a small.]

Slide 12: RRDs of Symmetric Vandermonde Matrices (IV)
… ? At present, only a NONSYMMETRIC approach is available.

Slide 13: RRDs of Totally Nonnegative Matrices (I)
Matrices with all minors nonnegative are called totally nonnegative (TN). We consider only nonsingular matrices.
TN matrices can be parametrized as products of nonnegative bidiagonal matrices.
THE BIDIAGONAL FACTORIZATION NEED NOT BE AN RRD OF A.
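The shape of this parametrization, as used in Koev's work, is sketched below; the exact zero pattern of each bidiagonal factor is assumed and not spelled out.

```latex
% Assumed shape of the bidiagonal factorization BD(A) of a nonsingular TN
% matrix (as in Koev's work); each L^{(k)} is unit lower bidiagonal and each
% U^{(k)} is unit upper bidiagonal, with all parameters (offdiagonal entries
% and the diagonal of D) nonnegative:
\[
  A \;=\; L^{(1)} L^{(2)} \cdots L^{(n-1)}\; D\; U^{(n-1)} \cdots U^{(2)} U^{(1)} .
\]
% BD(A) denotes this set of O(n^2) nonnegative parameters; it determines A
% very accurately but, as the slide stresses, it need not itself be an RRD of A.
```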

Slide 14: Very Good Numerical Virtues of BD(A)
P. Koev has shown that, given the bidiagonal factorization BD(A) of a TN matrix, it is possible to compute accurately and efficiently, in O(n^3) flops:
1. Eigenvalues of nonsymmetric TN matrices.
2. Singular values and vectors of TN matrices.
3. The inverse.
4. BDs of products of TN matrices.
5. LDU decompositions.
6. BDs of Schur complements.
7. BDs of submatrices and minors.
8. Application of Givens rotations: BD(A) → BD(GA).
9. The QR factorization…

Slide 15: RRDs of Totally Nonnegative Matrices (II)
Our goal: compute accurate RRDs given BD(A),
1. in O(n^3) time,
2. using a finite process,
3. and respecting the symmetry.
This is nontrivial because PIVOTING STRATEGIES DESTROY THE TN STRUCTURE.
Two algorithms: nonsymmetric and symmetric. Why?

Slide 16: Accurate RRDs of nonsymmetric TNs
1. BD(A) → BD(B) by applying Givens rotations, with B bidiagonal (Golub-Kahan): A = Q B H^T.
2. B = P1 L D U P2 using accurate GECP for acyclic matrices (Demmel et al.), so that A = (Q P1 L) D (U P2 H^T).
COST: 14 n^3 flops.

Slide 17: Accurate RRDs of Symmetric TNs (I)
A symmetric nonsingular TN matrix A is POSITIVE DEFINITE.
For positive definite matrices, COMPLETE PIVOTING IS DIAGONAL PIVOTING, AND IT PRESERVES THE SYMMETRY.

May, 2006IWASEP VI18 1. BD(A) BD(T), by applying Givens rotations, and T tridiagonal A = Q T Q T Accurate RRDs of Symmetric TNs (II) 2.Compute a symmetric RRD of T T = P L D L T P T a)GECP (DIAGONAL SYMMETRIC PIVOTING). b)ELEMENTS OF D AND L COMPUTED AS SIGNED QUOTIENTS OF MINORS OF T. c)MINORS OF T COST O(n) FLOPS. d)SUBTRACTION FREE APPROACH 3.Multiply to get: A = Q TQ T = (Q P L) D (Q P L) T COST: 21 n 3

Slide 19: RRDs of graded matrices
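The grading model is assumed here to follow the usual convention in this literature, which is consistent with the questions posed on the next two slides:

```latex
% Assumed model of grading: a nonsymmetric graded matrix is written as
\[
  A \;=\; D_1\, B\, D_2 ,
\]
% with D_1, D_2 diagonal (possibly wildly scaled) and B well conditioned;
% in the symmetric case, A = D B D with B symmetric and well conditioned.
```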

Slide 20: What is known for RRDs of NONSYMMETRIC graded matrices?
Under which conditions does GECP applied on A compute an accurate RRD of A?
Notice that … implies …

Slide 21: What is known for RRDs of NONSYMMETRIC graded matrices? (continued)
… COMPUTED WITH GECP … JUST FOR THEORY … Demmel et al. (1999)

Slide 22: Accurate RRDs of SYMMETRIC graded matrices
Under which conditions does the DIAGONAL PIVOTING METHOD with the BUNCH-PARLETT COMPLETE PIVOTING strategy compute an accurate SYMMETRIC RRD of A?
If … is small …

Slide 23: Numerical Example: Symmetric Vandermonde for a = 1/2
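A minimal sketch reproducing the setting of this example, assuming the symmetric Vandermonde matrix A_ij = a^((i-1)(j-1)) with a = 1/2 and an illustrative size n = 15; it only shows how extremely graded and ill conditioned the matrix is (and hence why conventional eigensolvers lose relative accuracy in the tiny eigenvalues), not the talk's own algorithm or its results.

```python
# Hedged illustration of the example's setting (not the talk's algorithm):
# the symmetric Vandermonde matrix A[i, j] = a**(i*j) (0-based) with a = 1/2.
import numpy as np

a, n = 0.5, 15
idx = np.arange(n)
A = a ** np.outer(idx, idx)              # symmetric Vandermonde, nodes a**k

eigvals = np.linalg.eigvalsh(A)          # conventional symmetric eigensolver
print("condition number     :", np.linalg.cond(A))
print("largest eigenvalue   :", eigvals[-1])
print("smallest |eigenvalue|:", np.min(np.abs(eigvals)))
# Eigenvalues computed this way carry absolute errors of order eps*||A||, so
# the smallest ones have no correct digits; RRD-based algorithms aim for
# small *relative* errors in every eigenvalue.
```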

Slide 24: CONCLUSIONS
If accurate RRDs can be computed for a nonsymmetric class of matrices, accurate symmetric RRDs can (almost always) be computed for the symmetric counterparts. Non-trivial algorithms may be required.
Different classes of matrices require very different approaches: there is no UNIVERSAL approach.
If accurate Schur complements are available, the DIAGONAL PIVOTING METHOD WITH THE BUNCH-PARLETT STRATEGY will do the job.
Nonsymmetric RRDs plus the SSVD algorithm compute accurate eigenvalues and eigenvectors.