# Boot Camp in Linear Algebra

Joel Barajas, Karla L. Caballero
University of California, Silicon Valley Center
October 8th, 2008


Matrices  A matrix is a rectangular array of numbers (also called scalars), written between square brackets, as in

Vectors  A vector is defined as a matrix with only one column or row Row vector Column vector or vector

Zero and identity matrices  The zero matrix (of size m × n) is the matrix with all entries equal to zero.  An identity matrix is always square; its diagonal entries are all equal to one and all other entries are zero. Identity matrices are denoted by the letter I.

Vector Operations  The inner product (a.k.a. dot product or scalar product) of two vectors is defined by  The magnitude of a vector is

Vector Operations  The projection of vector y onto vector x is  where vector ux has unit magnitude and the same direction as x

Vector Operations  The angle between vectors x and y is  Two vectors x and y are said to be orthogonal if x T y=0 orthonormal if x T y=0 and |x|=|y|=1

Vector Operations  A set of vectors x 1, x 2, …, x n are said to be linearly dependent if there exists a set of coefficients a1, a2, …, an (at least one different than zero) such that  A set of vectors x 1, x 2, …, x n are said to be linearly independent if

Matrix Operations Matrix transpose  If A is an m × n matrix, its transpose, denoted A^T, is the n × m matrix given by (A^T)_ij = A_ji. For example, the transpose of the 3 × 2 matrix [ 0 4 ; 7 0 ; 3 1 ] is the 2 × 3 matrix [ 0 7 3 ; 4 0 1 ].

Matrix Operations Matrix addition  Two matrices of the same size can be added together to form another matrix of the same size by adding the corresponding entries: (A + B)_ij = A_ij + B_ij

Matrix Operations Scalar multiplication  The multiplication of a matrix by a scalar (i.e., a number) is done by multiplying every entry of the matrix by the scalar: (kA)_ij = k A_ij
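The transpose, addition, and scalar multiplication slides can be summarized in one short NumPy sketch (the matrices are illustrative):

```python
import numpy as np

A = np.array([[0.0, 4.0],
              [7.0, 0.0],
              [3.0, 1.0]])

# Transpose: (A^T)_ij = A_ji, so a 3 x 2 matrix becomes 2 x 3
At = A.T

# Addition: entrywise, requires matching shapes
B = np.ones((3, 2))
S = A + B

# Scalar multiplication: every entry times k
k = 2.0
kA = k * A
```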

Matrix Operations Matrix multiplication  You can multiply two matrices A and B provided their dimensions are compatible, which means the number of columns of A equals the number of rows of B. Suppose that A has size m × p and B has size p × n. The product matrix C = AB, which has size m × n, is defined by c_ij = a_i1 b_1j + a_i2 b_2j + … + a_ip b_pj
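The entrywise definition above is what the `@` operator computes; a sketch with illustrative 2 × 2 matrices:

```python
import numpy as np

A = np.array([[1.0, 2.0],    # m x p
              [3.0, 4.0]])
B = np.array([[5.0, 6.0],    # p x n
              [7.0, 8.0]])

# C_ij = sum_k A_ik * B_kj
C = A @ B   # [[1*5+2*7, 1*6+2*8], [3*5+4*7, 3*6+4*8]]
```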

Matrix Operations  The trace of a square matrix A d×d is the sum of its diagonal elements  The rank of a matrix is the number of linearly independent rows (or columns) ‏  A square matrix is said to be non-singular if and only if its rank equals the number of rows  (or columns) ‏ A non-singular matrix has a non-zero determinant

Matrix Operations  A square matrix is said to be orthonormal if AA T =A T A=I  For a square matrix A if x T Ax>0 for all x≠0, then A is said to be positive-definite (i.e., the covariance matrix) ‏ if x T Ax≥0 for all x≠0, then A is said to be positive-semidefinite

Matrix inverse  If A is square, and there is a matrix F such that FA = I, then we say that A is invertible or nonsingular.  We call F the inverse of A, and denote it A -1. We can then also define A -k = (A -1 ) k. If a matrix is not invertible, we say it is singular or noninvertible.

Matrix Operations  The pseudo-inverse matrix A† is typically used whenever A-1 does not exist (because A is not square or A is singular):

Matrix Operations  The n-dimensional space in which all the n- dimensional vectors reside is called a vector space  A set of vectors {u1, u2,... un} is said to form a basis for a vector space if any arbitrary vector x can be represented by a linear combination of the {ui}

Matrix Operations  The coefficients {a1, a2,... an} are called the components of vector x with respect to the basis {ui}  In order to form a basis, it is necessary and sufficient that the {ui} vectors are linearly independent

Matrix Operations  A basis {ui} is said to be orthogonal if  A basis {ui} is said to be orthonormal if

Linear Transformations  A linear transformation is a mapping from a vector space X N onto a vector space Y M, and is represented by a matrix Given vector x ∈ X N, the corresponding vector y on Y M is computed as A linear transformation represented by a square matrix A is said to be orthonormal when AA T =A T A=I

Eigenvectors and Eigenvalues  Let A be any square matrix. A scalar is called and eigenvalue of A if there exists a non zero vector v such that: Av=v  Any vector v satisfying this relation is called and eigenvector of A belonging to the eigenvalue of

How to compute the Eigenvalues and the Eigenvectors  1. Find the characteristic polynomial Δ(t) = det(tI − A) of A.  2. Find the roots of Δ(t) to obtain the eigenvalues of A.  3. Repeat (a) and (b) for each eigenvalue λ of A:  a. Form the matrix M = A − λI by subtracting λ from each diagonal entry of A.  b. Find a basis for the solution space of the homogeneous system MX = 0. (These basis vectors are linearly independent eigenvectors of A belonging to λ.)

Example  We have a matrix  The characteristic polynomial (t) of A is computed. We have

Example  Set (t)=(t-5)(t+2)=0. The roots 1 =5 and 2 =- 2 are the eigenvalues of A.  We find an eigenvector v 1 of A belonging to the eigenvalue 1 =5

Example  We find the eigenvector v 2 of A belonging to the eigenvalue 2 =-2  The system has only one independent solution then v 2 =(-1,3) ‏
