Vector Norms CSE 541 Roger Crawfis.

Vector Norms
A norm measures the magnitude of a vector: is the error in x small or large?
General class of p-norms: ||x||_p = (|x_1|^p + |x_2|^p + ... + |x_n|^p)^(1/p)
1-norm: ||x||_1 = |x_1| + |x_2| + ... + |x_n|
2-norm: ||x||_2 = (x_1^2 + x_2^2 + ... + x_n^2)^(1/2)
∞-norm: ||x||_∞ = max_i |x_i|
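A minimal NumPy sketch (not part of the original slides) computing these norms for an example vector; the vector x and the exponent p = 3 are arbitrary illustrative choices.

```python
import numpy as np

x = np.array([3.0, -4.0, 1.0])   # example vector

one_norm = np.linalg.norm(x, 1)        # sum of absolute values
two_norm = np.linalg.norm(x, 2)        # Euclidean length
inf_norm = np.linalg.norm(x, np.inf)   # largest absolute entry

# The general p-norm written out directly, for comparison:
p = 3
p_norm = np.sum(np.abs(x) ** p) ** (1.0 / p)

print(one_norm, two_norm, inf_norm, p_norm)   # 8.0, 5.099..., 4.0, 4.514...
```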

Properties of Vector Norms
For any vector norm:
||x|| > 0 for x ≠ 0, and ||x|| = 0 only if x = 0
||αx|| = |α| ||x|| for any scalar α
||x + y|| ≤ ||x|| + ||y|| (triangle inequality)
These three properties define a vector norm.
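A quick numerical spot-check of the three defining properties, assuming the 2-norm and arbitrarily chosen example vectors; this illustrates the properties but does not prove them.

```python
import numpy as np

x = np.array([1.0, -2.0, 3.0])
y = np.array([0.5, 4.0, -1.0])
alpha = -2.5

assert np.linalg.norm(x) > 0                                                   # ||x|| > 0 for x != 0
assert np.isclose(np.linalg.norm(alpha * x), abs(alpha) * np.linalg.norm(x))   # ||a x|| = |a| ||x||
assert np.linalg.norm(x + y) <= np.linalg.norm(x) + np.linalg.norm(y)          # triangle inequality
```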

Matrix Norms
We will only use matrix norms “induced” by vector norms:
||A|| = max_{x ≠ 0} ||Ax|| / ||x||
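A sketch of the induced 2-norm as maximum stretch: NumPy's matrix 2-norm is the largest singular value, and sampling random directions gives a lower-bound estimate of max ||Ax|| / ||x||. The matrix A is an arbitrary example.

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

induced_2 = np.linalg.norm(A, 2)   # induced 2-norm = largest singular value

# Crude estimate of max ||Ax||_2 / ||x||_2 over random directions
# (approaches induced_2 from below):
xs = rng.standard_normal((1000, 2))
stretches = np.linalg.norm(xs @ A.T, axis=1) / np.linalg.norm(xs, axis=1)
print(induced_2, stretches.max())
```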

Properties of Matrix Norms
These induced matrix norms satisfy:
||A|| > 0 for A ≠ 0, and ||A|| = 0 only if A = 0
||αA|| = |α| ||A|| for any scalar α
||A + B|| ≤ ||A|| + ||B||
||Ax|| ≤ ||A|| ||x|| (consistency with the vector norm)
||AB|| ≤ ||A|| ||B|| (submultiplicativity)
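An illustrative check of the consistency and submultiplicativity properties in the induced 2-norm, with arbitrarily chosen A, B, and x.

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[0.0, -1.0], [2.0, 5.0]])
x = np.array([1.0, -1.0])

assert np.linalg.norm(A @ x) <= np.linalg.norm(A, 2) * np.linalg.norm(x)         # ||Ax|| <= ||A|| ||x||
assert np.linalg.norm(A @ B, 2) <= np.linalg.norm(A, 2) * np.linalg.norm(B, 2)   # ||AB|| <= ||A|| ||B||
```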

Condition Number
If A is square and nonsingular, then cond(A) = ||A|| ||A^-1||
If A is singular, then cond(A) = ∞
If A is nearly singular, then cond(A) is large.
The condition number measures the ratio of maximum stretch to maximum shrinkage:
cond(A) = (max_{x ≠ 0} ||Ax|| / ||x||) / (min_{x ≠ 0} ||Ax|| / ||x||)
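A small sketch computing cond(A) both from the definition ||A|| ||A^-1|| and with np.linalg.cond; the nearly singular matrix A is a hypothetical example.

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])   # nearly singular

cond_direct = np.linalg.norm(A, 2) * np.linalg.norm(np.linalg.inv(A), 2)
cond_numpy = np.linalg.cond(A, 2)
print(cond_direct, cond_numpy)   # both large, on the order of 4e4
```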

Properties of Condition Number
For any matrix A, cond(A) ≥ 1
For the identity matrix, cond(I) = 1
For any permutation matrix P, cond(P) = 1
For any nonzero scalar γ, cond(γA) = cond(A)
For any nonsingular diagonal matrix D, cond(D) = max|d_i| / min|d_i|
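Illustrative checks of these properties in the 2-norm (the default for np.linalg.cond); the scalar γ and diagonal matrix D below are arbitrary examples.

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 3.0]])
gamma = 7.5
D = np.diag([4.0, -0.5, 2.0])

assert np.isclose(np.linalg.cond(np.eye(3)), 1.0)                # cond(I) = 1
assert np.isclose(np.linalg.cond(gamma * A), np.linalg.cond(A))  # cond(gamma A) = cond(A)
assert np.isclose(np.linalg.cond(D),
                  np.abs(np.diag(D)).max() / np.abs(np.diag(D)).min())  # diagonal case
```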

Errors and Residuals
The residual for an approximate solution y to Ax = b is defined as r = b – Ay.
If A is nonsingular, then ||x – y|| = 0 if and only if ||r|| = 0.
This does not imply that a small residual ||r|| < ε guarantees a small error ||x – y||; when A is ill-conditioned the error can still be large.
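A sketch of why a small residual can hide a large error when A is ill-conditioned; the matrix, the "true" solution, and the approximate solution below are all hypothetical examples.

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])   # ill-conditioned
x = np.array([1.0, 1.0])        # "true" solution
b = A @ x

y = np.array([2.0, 0.0])        # a poor approximate solution
r = b - A @ y                   # residual

print(np.linalg.norm(r))        # tiny residual (~1e-4)
print(np.linalg.norm(x - y))    # yet the error is of order 1
```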

Estimating Accuracy
Let x be the solution to Ax = b, and let y be the solution to Ay = c.
Then a simple analysis shows that ||x – y|| / ||x|| ≤ cond(A) ||b – c|| / ||b||.
Errors in the data (b) are magnified by cond(A); likewise for errors in A.
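A sketch of the bound ||x – y|| / ||x|| ≤ cond(A) ||b – c|| / ||b||, using the same hypothetical ill-conditioned matrix and a small perturbation of the right-hand side.

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])
b = np.array([2.0, 2.0001])
c = b + np.array([0.0, 1e-4])   # perturbed data

x = np.linalg.solve(A, b)
y = np.linalg.solve(A, c)

rel_error = np.linalg.norm(x - y) / np.linalg.norm(x)
rel_data_change = np.linalg.norm(b - c) / np.linalg.norm(b)

print(rel_error, np.linalg.cond(A, 2) * rel_data_change)   # left-hand side <= right-hand side
```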