Seismic Tomography (Part II, Solving Inverse Problems)

Problems in solving the generic AX = B

Case 1: There are errors in the data, so the data cannot be fit perfectly (analog: the simple case of fitting a line to points that do not actually fall on a line).
Case 2: There are more equations than unknowns (say, 5 equations and 3 unknowns). This is usually called an over-determined system (provided the equations are not linearly dependent).
Case 3: There are fewer equations than unknowns; this is called an under-determined system (no unique solution).

Over-determined: e.g., 1 model coefficient (unknown) but many observations; worse yet when the observations conflict with each other.
Under-determined: more unknowns than observations; the matrix equation cannot be solved exactly and the problem is ill conditioned.
Mixed-determined: works well in terms of resolving the structure, since the data coverage is good.

Example from a seismic structural study.
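
To make the over-determined case concrete, here is a minimal NumPy sketch with made-up data (5 equations, 3 unknowns): no exact solution exists in general, so we take the least-squares fit instead.

import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(5, 3))                   # 5 observations, 3 model coefficients
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true + 0.01 * rng.normal(size=5)    # data with small errors, so no perfect fit

# lstsq returns the least-squares solution of the over-determined system A X = b
x_ls, residuals, rank, sv = np.linalg.lstsq(A, b, rcond=None)
print("least-squares solution:", x_ls)
print("residual norm:", np.linalg.norm(A @ x_ls - b))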

Model size variation

The sum of the squared values of the elements of X (the norm of X) goes to 0 as we increase the damping factor: the A^T A matrix effectively becomes diagonal (with a very large number on the diagonal), so naturally X --> 0 as a_ii --> infinity.

[Figure: tradeoff curves of solution norm versus fit to the data as a function of the damping factor, from small damping to large damping, with the optimal point marked.]
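
A minimal sketch of the damped least-squares solution described here (illustrative data only): X = (A^T A + eps^2 I)^(-1) A^T b, so as the damping factor eps grows, the solution norm shrinks toward 0 while the misfit to the data grows -- the tradeoff curve above.

import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(20, 5))
b = rng.normal(size=20)

for eps in (0.0, 0.1, 1.0, 10.0, 100.0):
    # damped normal equations: (A^T A + eps^2 I) x = A^T b
    AtA = A.T @ A + eps**2 * np.eye(A.shape[1])
    x = np.linalg.solve(AtA, A.T @ b)
    print(f"eps={eps:6.1f}  |x|={np.linalg.norm(x):.4f}  misfit={np.linalg.norm(A @ x - b):.4f}")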

Liu & Gu, Tectonophys, 2012 Processes in seismic inversions in general

Simple Inverse Solver for Simple Problems

Cramer's rule: suppose we have the 3 x 3 system

  a1 x + b1 y + c1 z = d1
  a2 x + b2 y + c2 z = d2
  a3 x + b3 y + c3 z = d3

Consider the determinant

  D = | a1 b1 c1 |
      | a2 b2 c2 |
      | a3 b3 c3 |

Now multiply D by x (consider some x value): by a property of determinants, multiplication by a constant x equals multiplication of a given column by x. Property 2: adding a constant times a column to a given column does not change the determinant, so multiplying the first column by x and then adding y times the second column and z times the third column gives

  x D = | d1 b1 c1 |
        | d2 b2 c2 |
        | d3 b3 c3 |

Then, following the same procedure for y and z, if D != 0 we get x = (determinant with the first column replaced by d) / D, and similarly for the other unknowns.

Good things about Cramer's rule:
(1) It is simple to understand.
(2) It can easily extract one solution element, say x, without having to solve simultaneously for y and z.
(3) The so-called "D" matrix can really be some A matrix multiplied by its transpose, i.e., D = A^T A; in other words, this is equally applicable to least-squares problems.
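
As an illustration (not from the original slides), Cramer's rule is easy to code directly: replace column i of D by the right-hand side d and take the ratio of determinants.

import numpy as np

def cramer_solve(D, d):
    detD = np.linalg.det(D)
    if np.isclose(detD, 0.0):
        raise ValueError("Cramer's rule needs D != 0")
    x = np.empty(len(d))
    for i in range(len(d)):
        Di = D.copy()
        Di[:, i] = d              # replace column i with the right-hand side
        x[i] = np.linalg.det(Di) / detD
    return x

D = np.array([[2.0, 1.0, 1.0],
              [1.0, 3.0, 2.0],
              [1.0, 0.0, 0.0]])
d = np.array([4.0, 5.0, 6.0])
print(cramer_solve(D, d))         # same answer as np.linalg.solve(D, d)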

Other solvers: common matrix factorization methods

(1) Gaussian elimination and backsubstitution.

(2) LU decomposition: write A = L * U, where L is lower triangular and U is upper triangular (so in the 4 x 4 case, L has zeros above the diagonal and U has zeros below it). Advantage: we can use Gauss-Jordan elimination on the triangular matrices!
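
A short sketch of the LU route using SciPy (assuming SciPy is available): factor A = L U once, then reuse the factors to solve for any number of right-hand sides with cheap triangular solves.

import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[4.0, 3.0, 2.0, 1.0],
              [3.0, 4.0, 3.0, 2.0],
              [2.0, 3.0, 4.0, 3.0],
              [1.0, 2.0, 3.0, 4.0]])
lu, piv = lu_factor(A)             # Gaussian elimination with partial pivoting

b1 = np.array([1.0, 0.0, 0.0, 0.0])
b2 = np.array([0.0, 1.0, 0.0, 0.0])
x1 = lu_solve((lu, piv), b1)       # triangular solves reusing the same factors
x2 = lu_solve((lu, piv), b2)
print(x1)
print(x2)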

(3) Singular value decomposition (SVD): useful for sets of equations that are either singular or close to singular, where LU and Gaussian elimination fail to get an answer. Ideal for solving least-squares problems.

Express A in the form A = U W V^T, where W is diagonal and U and V are orthogonal matrices (meaning each row/column vector is orthogonal). If A is a square matrix, say 3 x 3, then U, V and W are all 3 x 3 matrices.

For orthogonal matrices, inverse = transpose, so U and V are no problem, and the inverse of W is just diag(1/w_j):

  A^-1 = V * diag(1/w_j) * U^T

The diagonal elements of W are the singular values; the larger a singular value, the more important it is for the large-scale properties of the matrix. So naturally, damping (smoothing) can be done by selectively throwing out the smaller singular values.
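
A minimal sketch of using SVD this way (the cutoff value is an assumption, chosen only for illustration): decompose A, invert only the singular values above a threshold, and zero out the rest instead of dividing by nearly-zero numbers.

import numpy as np

def svd_solve(A, b, rel_cutoff=1e-10):
    U, w, Vt = np.linalg.svd(A, full_matrices=False)
    w_inv = np.where(w > rel_cutoff * w.max(), 1.0 / w, 0.0)   # throw out tiny w_j
    return Vt.T @ (w_inv * (U.T @ b))                          # V diag(1/w_j) U^T b

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],     # row 2 = 2 * row 1, so A is singular
              [1.0, 0.0, 1.0]])
b = np.array([1.0, 2.0, 1.0])
print(svd_solve(A, b))             # LU / Gaussian elimination would fail here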

[Figures: model prediction A*X after removing the smallest SV versus after removing the largest SV.]

We can see that a large change (poor fit) happens if we remove the largest SV; the change is minor if we remove the smallest SV.
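
A small sketch of the same experiment with a random test matrix (stand-in data, not the slide's example): zero out one singular value, rebuild A, and compare the new prediction with the original A*X.

import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(6, 6))
x = rng.normal(size=6)
U, w, Vt = np.linalg.svd(A)

def prediction_change(k):
    w_mod = w.copy()
    w_mod[k] = 0.0                       # remove the k-th singular value
    A_mod = U @ np.diag(w_mod) @ Vt
    return np.linalg.norm(A_mod @ x - A @ x)

print("removing the largest SV :", prediction_change(0))
print("removing the smallest SV:", prediction_change(-1))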

[Figure: elements of the solution vector X; black = removing none, red = keeping the 5 largest SV, green = keeping the 4 largest SV.]

Generally, we see that the solution size decreases, sort of similar to the damping (regularization) process in our Lab 9, but the SVD approach is not as predictable as damping. It really depends on the solution vector X and the nature of A. The solutions can change pretty dramatically (even though the fit to the data vector doesn't) when singular values are removed. Imagine this removal operation as changing (or zeroing out) some equations in our equation set.

2D image compression: use a small number of SVs to recover the original image.

[Figure panels: keep all; keep 5 largest; keep 10 largest; keep 30 largest; keep 60 largest; keep 80 largest singular values.]
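
A short sketch of this compression (with a random stand-in image, since the original figure is not included): keep only the k largest singular values and rebuild a rank-k approximation.

import numpy as np

def compress(image, k):
    U, w, Vt = np.linalg.svd(image, full_matrices=False)
    return U[:, :k] @ np.diag(w[:k]) @ Vt[:k, :]      # rank-k reconstruction

image = np.random.default_rng(3).random((128, 128))   # stand-in for a real image
for k in (5, 10, 30, 60, 80):
    approx = compress(image, k)
    err = np.linalg.norm(image - approx) / np.linalg.norm(image)
    print(f"keep {k:3d} largest SV -> relative reconstruction error {err:.3f}")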

Example of 2D SVD noise reduction (courtesy of Sean Contenti): the South American subduction system, where the subducting slab is depressing the phase boundary near the base of the upper mantle.

[Figure: map showing the Nazca, South American, and Cocos plates.]

Result using all 26 eigenvalues: pretty noisy, with small-scale, high-amplitude features; both vertical and horizontal structures are visible.

Result using the 10 largest eigenvalues: some of the higher-frequency components are removed from the system, and the image appears more linear.

[Figures: results retaining the 7 largest and the 5 largest eigenvalues.]