Numerical Linear Algebra
By: David McQuilling and Jesus Caban

Solving Linear Equations: Ax = b

Gaussian Elimination
Start with the augmented matrix of Ax = b, formed by appending the right-hand-side vector b to the matrix A. Now perform Gaussian elimination, using row reductions to transform the matrix [A b] into an upper triangular matrix that contains the simplified equations. Find the first multiplier by dividing the first entry of the second row by the first entry of the first row; subtracting that multiple of the first row turns the first entry of the second row into a zero, reducing the system of equations.

Gaussian Elimination (cont.)
Multiply the first row by the multiplier (4 / 2 = 2) and subtract the result from the second row to obtain the new, reduced equation.
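The slide's example matrix is an image that did not survive the transcript; as a minimal assumed 2x2 system consistent with the stated multiplier 4 / 2 = 2, take:

    [ 2  1 |  5 ]   row2 := row2 - 2*row1   [ 2  1 | 5 ]
    [ 4  3 | 11 ]   -------------------->   [ 0  1 | 1 ]

The reduced second equation reads x2 = 1, and the multiplier 2 becomes the (2,1) entry of L on the next slide: L = [ 1 0 ; 2 1 ], U = [ 2 1 ; 0 1 ].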

A = LU
The matrix resulting from the Gaussian elimination steps is actually U, concatenated with the new right-hand side c. L is obtained from the multipliers used in the Gaussian elimination steps.

Solving Ux = c by back substitution
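A minimal Python sketch of back substitution (my code, not the slide's): given an upper triangular U and right-hand side c, solve Ux = c from the bottom row up.

    def back_substitute(U, c):
        # Solve U x = c for an upper triangular U, from the last row up.
        n = len(c)
        x = [0.0] * n
        for i in range(n - 1, -1, -1):
            # subtract the contributions of the already-solved unknowns
            s = sum(U[i][j] * x[j] for j in range(i + 1, n))
            x[i] = (c[i] - s) / U[i][i]
        return x

For the assumed example above, back_substitute([[2, 1], [0, 1]], [5, 1]) returns [2.0, 1.0].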

Solution to Ax = b
The solution vector x of Ux = c is also the solution to the original problem Ax = b.

Gaussian algorithm to find LU
After this algorithm completes, the diagonal and upper triangle of A contain U, while the strict lower triangle contains the multipliers of L. The algorithm runs in O(n^3) time.
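The algorithm itself appears only as an image in the original slides; a sketch of the usual in-place variant (my code, assuming no pivoting is required) is:

    def lu_in_place(A):
        # Overwrite A: the upper triangle (with diagonal) becomes U, the
        # strict lower triangle stores the multipliers of L (unit diagonal
        # implied). Assumes every pivot A[k][k] is nonzero (no row swaps).
        n = len(A)
        for k in range(n):
            for i in range(k + 1, n):
                A[i][k] /= A[k][k]                    # multiplier l_ik
                for j in range(k + 1, n):
                    A[i][j] -= A[i][k] * A[k][j]      # eliminate entry (i, j)
        return A

The three nested loops account for the O(n^3) running time.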

Gram-Schmidt Factorization
Yields the factorization A = QR.
 Starts with n linearly independent vectors, the columns of A
 Constructs n orthonormal vectors
 A set of vectors is orthonormal if the vectors are mutually orthogonal and each has unit length (x^T x = ||x||^2 = 1)
 Any vector is made into a unit vector in the same direction, i.e. of length 1, by dividing it by its length ||x||

Composition of Q and R
 Q is the matrix whose n orthonormal columns were obtained from the n linearly independent columns of A.
 Orthonormal columns imply that Q^T Q = I.
 R is obtained by multiplying both sides of A = QR by Q^T: the equation becomes Q^T A = Q^T Q R = I R = R.
 R is an upper triangular matrix because later q's are orthogonal to earlier a's, i.e. their dot products are zero; this creates the zeros in the entries below the main diagonal.

Obtaining Q from A
Start with the first column vector of A and use it as the first vector q_1 for Q (it must be made a unit vector before adding it to Q). To obtain the second vector q_2, subtract from the second column of A, a_2, its projection along the previous q_i vectors.
 The projection of a vector y onto a vector x is defined as (x^T y / x^T x) x.
Once all q_i have been found this way, divide each by its length so that they are unit vectors. These orthonormal vectors make up the columns of Q; a sketch follows.
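A sketch of classical Gram-Schmidt with NumPy (my code; in practice modified Gram-Schmidt or Householder QR is preferred numerically):

    import numpy as np

    def gram_schmidt(A):
        # Return Q whose orthonormal columns span the columns of A
        # (the columns of A are assumed linearly independent).
        A = np.asarray(A, dtype=float)
        m, n = A.shape
        Q = np.zeros((m, n))
        for k in range(n):
            q = A[:, k].copy()
            for i in range(k):
                # subtract the projection onto the earlier q_i; since q_i is
                # a unit vector, (x^T y / x^T x) x reduces to (q_i^T a_k) q_i
                q -= (Q[:, i] @ A[:, k]) * Q[:, i]
            Q[:, k] = q / np.linalg.norm(q)   # normalize to unit length
        return Q

R can then be computed as Q.T @ A, as the next slide describes.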

Obtaining R from Q^T A
Once Q has been found, simply multiply A by the transpose of Q to find R. Overall, the Gram-Schmidt algorithm is O(mn^2) for an m x n matrix A.

Eigenvalues and Eigenvectors
Eigenvectors are special vectors that, when multiplied by a matrix A, do not change direction, only magnitude: Ax = λx. Eigenvalues are the scalar multipliers of the eigenvectors, i.e. the λ's.

Solving for eigenvalues of A
Solving for the eigenvalues involves solving det(A - λI) = 0, where det(A) is the determinant of the matrix A. For a 2x2 matrix the determinant yields a quadratic equation; for larger matrices the equation grows in complexity, so let's solve a 2x2. Two interesting properties of eigenvalues: their product equals det(A), and their sum equals the trace of A, which is the sum of the entries on the main diagonal of A.

Solving det(A - λI) = 0
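The worked matrix is an image in the original; an assumed 2x2 example consistent with the eigenvalues λ = 4 and λ = 1 used on the following slides is:

    A = [ 3  2 ]
        [ 1  2 ]

    det(A - λI) = (3 - λ)(2 - λ) - 2·1
                = λ^2 - 5λ + 4
                = (λ - 4)(λ - 1) = 0   =>   λ = 4 or λ = 1

As a check of the properties above, λ1·λ2 = 4 = det(A) and λ1 + λ2 = 5 = trace(A).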

Solving for the eigenvectors
Once you have the eigenvalues λ, you can solve for the eigenvectors by solving (A - λI)x = 0 for each λ_i. In our example we have two λ's, so we need to solve for two eigenvectors.

Now solve (A - λI)x = 0 for λ = 4

Solve (A - λI)x = 0 for λ = 1
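Continuing the assumed example matrix A = [ 3 2 ; 1 2 ] from above:

    λ = 4:  A - 4I = [ -1   2 ]   =>   -x1 + 2x2 = 0   =>   x = (2, 1)
                     [  1  -2 ]

    λ = 1:  A - I  = [  2   2 ]   =>   x1 + x2 = 0     =>   x = (1, -1)
                     [  1   1 ]

Check: A(2, 1) = (8, 4) = 4·(2, 1) and A(1, -1) = (1, -1) = 1·(1, -1).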

Positive definite matrices
A matrix is positive definite when:
 all eigenvalues are positive
 all pivots are positive
 x^T A x > 0 for every x except x = 0
Applications:
 Ellipses: finding the axes of a tilted ellipse

Cholesky Factorization
Cholesky factorization factors an N x N, symmetric, positive-definite matrix into the product of a lower triangular matrix and its transpose: A = L L^T.
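A minimal sketch of the factorization in Python (my code; in practice one would call a library routine such as numpy.linalg.cholesky):

    import math

    def cholesky(A):
        # Return lower triangular L with A = L L^T.
        # A must be symmetric positive definite.
        n = len(A)
        L = [[0.0] * n for _ in range(n)]
        for i in range(n):
            for j in range(i + 1):
                s = sum(L[i][k] * L[j][k] for k in range(j))
                if i == j:
                    L[i][j] = math.sqrt(A[i][i] - s)  # positive since A is SPD
                else:
                    L[i][j] = (A[i][j] - s) / L[j][j]
        return L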

Least Squares
It often happens that Ax = b has no solution:
 too many equations (more equations than unknowns)
When the length of the error e = b - Ax is as small as possible, x is a least-squares solution.
Example: find the closest straight line to the three points (0,6), (1,0), and (2,0).
 Line: b = C + Dt
 We are asking for two numbers C and D that satisfy three equations
 No straight line goes through all three points, so choose the one that minimizes the error
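Working the slide's example through the normal equations A^T A x = A^T b:

    A = [ 1  0 ]        b = [ 6 ]
        [ 1  1 ]            [ 0 ]
        [ 1  2 ]            [ 0 ]

    A^T A = [ 3  3 ]    A^T b = [ 6 ]
            [ 3  5 ]            [ 0 ]

    3C + 3D = 6
    3C + 5D = 0   =>   2D = -6   =>   D = -3,  C = 5

The closest line is b = 5 - 3t, with error vector e = b - Ax = (1, -2, 1).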

Matrix Multiplication

Matrix multiplication is commonly used in
 graph theory
 numerical algorithms
 computer graphics
 signal processing
Multiplying large matrices requires a lot of computation time, since the standard algorithm's complexity is O(n^3), where n is the dimension of the matrix. Most parallel matrix multiplication algorithms use a matrix decomposition based on the number of processors available.

Matrix Multiplication Example (a_11):

Sequential Matrix Multiplication
The product C of two matrices A and B is defined by

    c_ij = Σ_k a_ik b_kj

where a_ij, b_ij, and c_ij are the elements in the ith row and jth column of the matrices A, B, and C respectively.
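The definition translates directly into the classic triple loop (a Python sketch):

    def matmul(A, B):
        # Naive O(n^3) product of two n x n matrices.
        n = len(A)
        C = [[0.0] * n for _ in range(n)]
        for i in range(n):
            for j in range(n):
                for k in range(n):
                    C[i][j] += A[i][k] * B[k][j]  # c_ij = sum_k a_ik * b_kj
        return C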

Strassen's Algorithm
Strassen's algorithm is a sequential algorithm that reduces the complexity of matrix multiplication to O(n^2.81). In this algorithm, the n x n matrix is divided into four n/2 x n/2 sub-matrices, and the multiplication is done recursively on n/2 x n/2 matrices.
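For reference, the seven half-size products (standard Strassen detail, not shown on the slide): splitting A and B into quadrants A11..A22 and B11..B22,

    M1 = (A11 + A22)(B11 + B22)        M5 = (A11 + A12) B22
    M2 = (A21 + A22) B11               M6 = (A21 - A11)(B11 + B12)
    M3 = A11 (B12 - B22)               M7 = (A12 - A22)(B21 + B22)
    M4 = A22 (B21 - B11)

    C11 = M1 + M4 - M5 + M7            C12 = M3 + M5
    C21 = M2 + M4                      C22 = M1 - M2 + M3 + M6

Seven multiplications instead of eight give the recurrence T(n) = 7T(n/2) + O(n^2), hence O(n^log2(7)) ≈ O(n^2.81).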

Parallel Matrix Partitioning
Parallel computers can be used to process large matrices. In order to process a matrix in parallel, it is necessary to partition it so that the different partitions can be mapped to different processors. Partitions:
 Block row-wise/column-wise striping
 Cyclic row-wise/column-wise striping
 Block checkerboard striping
 Cyclic checkerboard striping

Striped Partitioning
The matrix is divided into groups of complete rows or columns, and each processor is assigned one such group.
 Block-striped: each processor is assigned contiguous rows or columns
 Cyclic-striped: rows or columns are assigned sequentially to processors in a wraparound manner
The two index maps are contrasted in the sketch below.
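The two stripings differ only in their row-to-processor map; a Python sketch (the function names are mine) for n rows on p processors, assuming p divides n:

    def block_owner(row, n, p):
        # Block striping: contiguous chunks of n/p rows per processor.
        return row // (n // p)

    def cyclic_owner(row, n, p):
        # Cyclic striping: rows dealt out in a wraparound manner.
        return row % p

    # For n = 8 rows and p = 4 processors:
    #   block:  owners are [0, 0, 1, 1, 2, 2, 3, 3]
    #   cyclic: owners are [0, 1, 2, 3, 0, 1, 2, 3]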

Block Striping

Checkerboard Partitioning
The matrix is divided into smaller square or rectangular blocks (sub-matrices) that are distributed among processors.
 Block-checkerboard: the partitioning splits both the rows and the columns of the matrix, so no processor is assigned any complete row or column
 Cyclic-checkerboard: the blocks are assigned to processors in a wraparound manner

Checkerboard Partitioning

Parallel Matrix-Vector Product
Algorithm, for each processor i:
 Broadcast x(i)
 Compute y(i) = A(i)·x
A(i) refers to the n/p by n block row that processor i owns. The algorithm uses the formula

    y(i) = y(i) + A(i)·x = y(i) + Σ_j A(i,j)·x(j)
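A serial NumPy sketch of the block-row computation (my code; each simulated processor i owns one block row, and a real implementation would broadcast x, e.g. with MPI):

    import numpy as np

    def block_row_matvec(A, x, p):
        # Simulate p processors, each computing y(i) = A(i) @ x for its
        # own n/p x n block row (assumes p divides the number of rows).
        A_blocks = np.split(A, p, axis=0)          # block-striped partition
        y_blocks = [A_i @ x for A_i in A_blocks]   # local products
        return np.concatenate(y_blocks)            # gather the full result

For an 8 x 8 matrix A, block_row_matvec(A, x, 4) matches A @ x.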

Parallel Matrix Multiplication
Algorithms:
 Systolic algorithm
 Cannon's algorithm
 Fox and Otto's algorithm
 PUMMA (Parallel Universal Matrix Multiplication)
 SUMMA (Scalable Universal Matrix Multiplication)
 DIMMA (Distribution Independent Matrix Multiplication)

Systolic Communication
A communication design in which data exchange and communication occur between nearest neighbors.

Matrix Decomposition
To implement the matrix multiplication, the A and B matrices are decomposed into several sub-matrices. Four methods of matrix decomposition and distribution:
 one-dimensional decomposition
 two-dimensional square decomposition
 general decomposition
 scattered decomposition

One-Dimensional Decomposition
The matrix is decomposed horizontally. The ith processor holds the ith sub-matrices of A and B and communicates them to its two neighboring processors, i.e. the (i-1)th and (i+1)th. The first and last processors (p0 and p7 in an eight-processor layout) communicate with each other, as in a ring topology.

Two-Dimensional Decomposition
 Square decomposition: the matrix is decomposed onto a square processor template.
 General decomposition: general two-dimensional decomposition also allows configurations such as 2x3, 4x3, etc.

Two-Dimensional Decomposition (cont.)
 Scattered decomposition: the matrix is divided into several sets of blocks. Each set of blocks contains as many elements as there are processors, and every element in a set of blocks is scattered according to the two-dimensional processor template.

Applications

Computer Graphics
In computer graphics all transformations are represented as matrices, and a set of transformations as a matrix multiplication. Transformations:
 Rotation
 Translation
 Scaling
 Shearing
An example of composing transformations follows the list.
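For example, a 2D rotation followed by a translation composes into a single homogeneous matrix (a sketch with assumed values, not from the slides):

    import numpy as np

    theta = np.pi / 2                       # rotate 90 degrees
    R = np.array([[np.cos(theta), -np.sin(theta), 0],
                  [np.sin(theta),  np.cos(theta), 0],
                  [0,              0,             1]])
    T = np.array([[1, 0, 3],                # then translate by (3, 1)
                  [0, 1, 1],
                  [0, 0, 1]])
    M = T @ R                               # one matrix applies both
    p = np.array([1, 0, 1])                 # the point (1, 0), homogeneous
    print(M @ p)                            # approx [3. 2. 1.], i.e. (3, 2)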

2D/3D Mesh
A mesh can be treated as a graph G with Laplacian matrix L(G). Taking the eigenvalues and eigenvectors of L(G) reveals interesting properties, such as the vibration modes of the graph.
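A small sketch (my example, a 3-vertex path rather than a mesh): build L = D - A and inspect its eigenvalues with NumPy.

    import numpy as np

    A = np.array([[0, 1, 0],        # adjacency of the path 0 - 1 - 2
                  [1, 0, 1],
                  [0, 1, 0]])
    D = np.diag(A.sum(axis=1))      # degree matrix
    L = D - A                       # graph Laplacian
    w, v = np.linalg.eigh(L)        # L is symmetric, so use eigh
    print(w)                        # approx [0, 1, 3]; the smallest is always 0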

Computer Vision
In computer vision we can calibrate a camera, determining the relationship between what appears on the image (or retinal) plane and where it is located in the 3D world, by representing the camera parameters as matrices and vectors.

Software for Numerical Linear Algebra

LAPACK
LAPACK provides routines for solving systems of simultaneous linear equations, least-squares solutions of linear systems of equations, eigenvalue problems, and singular value problems.

LINPACK
LINPACK is a collection of Fortran subroutines that analyze and solve linear equations and linear least-squares problems. The package solves linear systems whose matrices are general, banded, symmetric indefinite, symmetric positive definite, triangular, or tridiagonal square.

ATLAS
ATLAS (Automatically Tuned Linear Algebra Software) provides portable performance and highly optimized linear algebra kernels for arbitrary cache-based architectures.

MathWorks
MATLAB provides engineers, scientists, mathematicians, and educators with an environment for technical computing applications.

NAG
The NAG Numerical Library solves complex problems in areas such as research, engineering, life and earth sciences, financial analysis, and data mining.

NMath Matrix
NMath Matrix is an advanced matrix manipulation library with full-featured structured sparse matrix classes, including triangular, symmetric, Hermitian, banded, tridiagonal, symmetric banded, and Hermitian banded.