# CS 450: COMPUTER GRAPHICS LINEAR ALGEBRA REVIEW SPRING 2015 DR. MICHAEL J. REALE.


INTRODUCTION We're going to spend a little time on some important concepts from linear algebra. Some of it will seem a bit general and abstract, but I will endeavor to give you concrete examples. We've already covered vectors in general in a previous lecture. Here, we will discuss:
- Euclidean space
- Linear (in)dependence
- Basis vectors
- Matrices
- The matrix determinant

EUCLIDEAN SPACE

Let's say we have a vector V with n components (e.g., V has 3 components, v_x, v_y, v_z) → our vector is an n-tuple. An n-tuple is an ordered list of n real numbers. The vectors we will be working with exist in n-dimensional real Euclidean space:
- n-dimensional → how many components the vector has
- real → real numbers (not complex)
- Euclidean space → the set of all possible n-tuples (all possible points in an n-space), i.e., the set of all possible vectors with n components
- ℝⁿ → n-dimensional real Euclidean space

EXAMPLE: 3D EUCLIDEAN SPACE 3 dimensions (x, y, z) → 3D Euclidean space: all possible 3D vectors (3D points). A 3D vector in 3D Euclidean space is in ℝ³.

THINGS TO DO IN SPACE For a vector in Euclidean space (in ℝⁿ), you can do two things with it: add it to another vector, or multiply it by a scalar. In both cases, you end up with another vector in ℝⁿ.
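As a minimal sketch in plain Python (the helper names are my own, not from the slides), the two operations look like this:

```python
# Component-wise vector addition and scalar multiplication in R^n.

def vec_add(u, v):
    """Add two vectors of the same dimension component-wise."""
    return tuple(a + b for a, b in zip(u, v))

def vec_scale(s, v):
    """Multiply every component of v by the scalar s."""
    return tuple(s * c for c in v)

u = (1.0, 2.0, 3.0)
v = (4.0, -1.0, 0.5)
print(vec_add(u, v))      # (5.0, 1.0, 3.5)
print(vec_scale(2.0, u))  # (2.0, 4.0, 6.0)
```

Note that both results are again 3-tuples, i.e., vectors in ℝ³.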

RULES IN EUCLIDEAN SPACE There are several rules in Euclidean space (in fact, these actually DEFINE that we’re in a Euclidean space) Given vectors u, v, and w, and scalars a and b:

DOT PRODUCT REVISITED A general definition for the dot product (also called the inner product or scalar product): Some of the rules for the dot product:

VECTOR PROJECTION USING THE DOT PRODUCT Orthogonal = perpendicular. You can use the dot product to orthogonally project one vector onto another. The orthogonal projection (vector) w of a vector u onto a vector v is given by w = t·v, where t = (u·v)/(v·v) is a scalar value.

ORTHOGONAL PROJECTION EXAMINED Recall: Therefore:

ORTHOGONAL PROJECTION EXAMINED FURTHER Projection → gives us an orthogonal decomposition of u; i.e., we can describe u in terms of two orthogonal vectors, w and (u − w), with w ⊥ (u − w). If v is already normalized → w = (u·v)v, which means ║w║ = the absolute value of the dot product between u and v.
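A small sketch of the projection formula above (hypothetical helper names; the formula w = ((u·v)/(v·v))·v is the standard one):

```python
def dot(u, v):
    """Dot product of two same-length vectors."""
    return sum(a * b for a, b in zip(u, v))

def project(u, v):
    """Orthogonal projection w of u onto v: w = ((u . v) / (v . v)) * v."""
    t = dot(u, v) / dot(v, v)       # the scalar value t
    return tuple(t * c for c in v)

u = (3.0, 4.0, 0.0)
v = (2.0, 0.0, 0.0)
w = project(u, v)                          # (3.0, 0.0, 0.0)
rest = tuple(a - b for a, b in zip(u, w))  # u - w = (0.0, 4.0, 0.0)
print(dot(w, rest))                        # 0.0 -> w is perpendicular to (u - w)
```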

NORM (LENGTH) REVISITED The norm (or length) of the vector u is a non-negative number that can be expressed using the dot product: ║u║ = √(u·u). It too has some rules:
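The norm-via-dot-product relationship can be sketched directly (illustrative helpers, not from the slides):

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm(u):
    """Length of u expressed via the dot product: ||u|| = sqrt(u . u)."""
    return math.sqrt(dot(u, u))

print(norm((3.0, 4.0)))   # 5.0 -- the classic 3-4-5 triangle
```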

CROSS PRODUCT REVISITED The rules for the cross product may be found below:
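As a quick refresher, the 3D cross product by its standard component formula, along with one of its rules (anti-commutativity); a plain-Python sketch:

```python
def cross(u, v):
    """Cross product of two 3D vectors (standard component formula)."""
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

x = (1.0, 0.0, 0.0)
y = (0.0, 1.0, 0.0)
print(cross(x, y))   # (0.0, 0.0, 1.0) -- x cross y gives the z axis
print(cross(y, x))   # (0.0, 0.0, -1.0) -- anti-commutative: u x v = -(v x u)
```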

LINEAR (IN)DEPENDENCE

LINEAR DEPENDENCE AND INDEPENDENCE Let's say we have two vectors, u₀ and u₁. If I can just multiply u₀ by a number (scalar) to get u₁, then the vectors are linearly DEPENDENT: u₁ = a·u₀ → linearly DEPENDENT

LINEAR DEPENDENCE AND INDEPENDENCE Linearly DEPENDENT Linearly INDEPENDENT

LINEAR DEPENDENCE AND INDEPENDENCE Another way to look at this is to rearrange the equation and also multiply u₁ by its own scalar: a₀u₀ + a₁u₁ = 0. If the ONLY way for this to be true is for a₀ and a₁ to equal zero → u₀ and u₁ are linearly INDEPENDENT (they can't cancel each other out). Otherwise → u₀ and u₁ are linearly DEPENDENT. With two vectors, this only happens when the vectors are PARALLEL to each other.

LINEAR DEPENDENCE AND INDEPENDENCE: DEFINED Let's say now we have n vectors (u₀, u₁, …, u_(n−1)), each with its own scalar factor (a₀, a₁, …, a_(n−1)): a₀u₀ + a₁u₁ + … + a_(n−1)u_(n−1) = 0. If the ONLY way to make this statement true is to set a₀ = a₁ = … = a_(n−1) = 0 → the vectors (u₀, u₁, …, u_(n−1)) are linearly INDEPENDENT. Otherwise, the vectors are linearly DEPENDENT: some or all of the vectors can cancel each other out, given the proper scaling factors.
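For the two-vector 2D case described above, "dependent" means "parallel", which can be tested with a single expression (a sketch with a hypothetical helper name; the quantity u₀ₓu₁ᵧ − u₀ᵧu₁ₓ is zero exactly when the vectors are parallel):

```python
def dependent_2d(u0, u1):
    """Two 2D vectors are linearly dependent iff they are parallel,
    i.e., u0.x * u1.y - u0.y * u1.x == 0."""
    return u0[0] * u1[1] - u0[1] * u1[0] == 0

print(dependent_2d((1.0, 2.0), (2.0, 4.0)))  # True  (u1 = 2 * u0)
print(dependent_2d((1.0, 0.0), (0.0, 1.0)))  # False (independent)
```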

SIZE OF SPACE How big a space is (i.e., how many dimensions it has → n) is determined by the maximum number of linearly independent vectors you can make. Example: ℝ³ → can have at most 3 vectors in a set that are linearly independent. Example set of linearly independent vectors for ℝ³: (1,0,0), (0,1,0), (0,0,1) … and you can't come up with another one.

BASIS VECTORS

SPANNING SPACE AND BASIS VECTORS If we have a set of n vectors (u₀, u₁, …, u_(n−1)) in ℝⁿ AND: the vectors are linearly independent, and any vector V in ℝⁿ can be written as V = v₀u₀ + v₁u₁ + … + v_(n−1)u_(n−1), then the vectors (u₀, u₁, …, u_(n−1)) span the Euclidean space ℝⁿ. If only one set of vᵢ values will give you V → the u vectors are a basis of ℝⁿ.

EXAMPLE OF A 2D BASIS u₀ = (4,3), u₁ = (2,6). Spans ℝ² → linearly independent and can be used to make any vector in ℝ². Basis of ℝ² → only one combination of (v₀, v₁) will give you a given vector V. Example:
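Using the basis from this slide, the unique (v₀, v₁) for a given V can be found by solving the little 2x2 linear system v₀u₀ + v₁u₁ = V, e.g., with Cramer's rule (hypothetical helper; the example vector V = (6, 9) is my own):

```python
def coords_in_basis_2d(V, u0, u1):
    """Solve v0*u0 + v1*u1 = V for (v0, v1) by Cramer's rule.
    Requires u0 and u1 to be linearly independent (nonzero det)."""
    det = u0[0] * u1[1] - u1[0] * u0[1]
    v0 = (V[0] * u1[1] - u1[0] * V[1]) / det
    v1 = (u0[0] * V[1] - V[0] * u0[1]) / det
    return v0, v1

# With the slide's basis u0 = (4, 3), u1 = (2, 6):
print(coords_in_basis_2d((6.0, 9.0), (4.0, 3.0), (2.0, 6.0)))  # (1.0, 1.0)
```

So (6, 9) = 1·u₀ + 1·u₁, and no other combination of (v₀, v₁) produces it.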

DESCRIBING A VECTOR To completely describe a vector V, we would need to state: its components vᵢ and its basis vectors uᵢ. However, if we're using the same basis vectors for all vectors, we can just use the components to describe the vector:

ORTHONORMAL BASIS Orthonormal basis = basis where the vectors meet the following conditions: every basis vector has length equal to 1 → ║uᵢ║ = 1, and every pair of basis vectors is orthogonal → the angle between them equals 90°. If the basis vectors are orthogonal BUT do not have unit length → orthogonal basis.

STANDARD BASIS Standard basis = basis where each basis vector uᵢ has components: one for dimension i, zero elsewhere. Standard basis vectors → denoted eᵢ. Example: 3D standard basis

ORTHONORMAL BASES AND THE DOT PRODUCT Given a vector P and an orthonormal basis (u₀, …, u_(n−1)), you can get the components of P using the dot product: pᵢ = P·uᵢ. Basically you project vector P onto each basis vector uᵢ → gives you the distance along uᵢ. Example: P on the standard basis
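Here is the standard-basis example sketched in Python (the vector P is my own choice): dotting P against each standard basis vector simply recovers P's components.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

P = (2.0, -1.0, 3.0)
standard = [(1.0, 0.0, 0.0),
            (0.0, 1.0, 0.0),
            (0.0, 0.0, 1.0)]

# Project P onto each basis vector to get the distance along it.
components = tuple(dot(P, u) for u in standard)
print(components)   # (2.0, -1.0, 3.0)
```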

INTRODUCTION TO MATRICES

ENTER THE MATRIX Matrix = a (p × q) 2D array of numbers (scalars), where p = number of rows and q = number of columns. Used here to manipulate vectors and points → used to transform vectors/points. Given a matrix M, another notation for it is [m_ij]. In computer graphics, most matrices will be 2×2, 3×3, or 4×4. In the slides that follow (for the most part): capital letters → matrices, lowercase letters → scalar numbers.

IDENTITY MATRIX Identity matrix = square matrix with 1's on the diagonal and 0's everywhere else. Effectively the matrix equivalent of the number one → multiplying by the identity matrix gives you the same matrix back: M = IM = MI

MATRIX ADDITION To add two matrices, just add the corresponding components Same rules as with vectors Both matrices must have the same dimensions! Resulting matrix  same dimensions as original matrices

RULES OF MATRIX ADDITION Note: 0 = matrix filled with zeros

MULTIPLY A MATRIX BY A SCALAR To multiply a matrix M by a scalar (single number) a, just multiply a by the individual components. Again, same as with vectors. Not surprisingly, the resulting matrix is the same size as the original.

RULES OF SCALAR-MATRIX MULTIPLICATION

TRANSPOSE OF A MATRIX Transpose of matrix M = rows become columns and columns become rows. Notation: Mᵀ. If M is (p × q) → Mᵀ is (q × p).

RULES OF THE TRANSPOSE MATRIX

TRACE OF A MATRIX Trace of matrix = just the sum of the diagonal elements of a square matrix Notation: tr(M)

MATRIX-MATRIX MULTIPLICATION When multiplying two matrices M and N like this → T = MN: the size of M must be (p × q), the size of N must be (q × r), and the result T will be (p × r). ORDER MATTERS!!! Example: a (2 × 3) matrix times a (3 × 2) matrix gives a (2 × 2) result.

MATRIX-MATRIX MULTIPLICATION T = MN. For each value in T → t_ij → take the dot product of row i of M and column j of N.

MATRIX-MATRIX MULTIPLICATION

EXAMPLE: MATRIX-MATRIX MULTIPLICATION
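The t_ij = (row i of M) · (column j of N) rule can be sketched directly in plain Python (illustrative helper, matrices as lists of rows; the example values are my own):

```python
def matmul(M, N):
    """(p x q) times (q x r) -> (p x r); t_ij = row i of M . column j of N."""
    p, q, r = len(M), len(N), len(N[0])
    assert len(M[0]) == q, "inner dimensions must match"
    return [[sum(M[i][k] * N[k][j] for k in range(q)) for j in range(r)]
            for i in range(p)]

M = [[1, 2, 3],
     [4, 5, 6]]          # 2 x 3
N = [[7, 8],
     [9, 10],
     [11, 12]]           # 3 x 2
print(matmul(M, N))      # [[58, 64], [139, 154]] -- a 2 x 2 result
```

For instance, the top-left entry is (1, 2, 3) · (7, 9, 11) = 7 + 18 + 33 = 58.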

RULES OF MATRIX-MATRIX MULTIPLICATION (MN)P = M(NP) → we will use this for combining transformations. IM = MI = M → I = the identity matrix. MN ≠ NM → this is true in general, even if the dimensions are the same!

MULTIPLYING A MATRIX BY A VECTOR We will be using column vectors here. Column vector = a (q × 1) matrix. Multiplying a (q × 1) vector v by a (p × q) matrix M gives us a new vector w = Mv of size (p × 1). For our transformations later, usually p = q so that w has the same size as v. w_i = dot product of v with row i of M.
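A minimal sketch of w = Mv with a column vector (hypothetical helper; the scaling matrix S is my own example of a transform):

```python
def matvec(M, v):
    """w = M v for a (p x q) matrix and a q-component column vector;
    w_i is the dot product of row i of M with v."""
    return tuple(sum(M[i][j] * v[j] for j in range(len(v)))
                 for i in range(len(M)))

S = [[2.0, 0.0],
     [0.0, 3.0]]               # a simple 2D scaling transform (p = q = 2)
print(matvec(S, (1.0, 1.0)))   # (2.0, 3.0)
```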

MATRIX DETERMINANT

DETERMINANT INTRODUCTION The determinant of a matrix is a scalar number, only defined for a square matrix (e.g., the matrix is p × p). Denoted as |M| or det(M). We are going to concentrate here on determinants of 2×2 and 3×3 matrices. Computing determinants for larger square matrices is a recursive procedure (cofactor expansion).

DETERMINANT FOR 2X2 AND 3X3 Pattern: diagonals going upper-LEFT to lower-RIGHT → add; diagonals going upper-RIGHT to lower-LEFT → subtract.
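The add/subtract diagonal pattern, written out for both sizes (illustrative helpers; the 3x3 version spells out the Sarrus-style expansion):

```python
def det2(m):
    """2x2 determinant: main diagonal minus anti-diagonal."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def det3(m):
    """3x3 determinant by the diagonal pattern:
    left-to-right diagonals added, right-to-left diagonals subtracted."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return (a * e * i + b * f * g + c * d * h) \
         - (c * e * g + a * f * h + b * d * i)

print(det2([[1, 2], [3, 4]]))                    # -2
print(det3([[1, 0, 0], [0, 1, 0], [0, 0, 1]]))   # 1 (identity)
```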

CLOSER LOOK AT 2X2 DETERMINANT

3X3 DETERMINANT AND CROSS PRODUCT If you replace: top row → the e_x, e_y, e_z vectors; middle row → u_x, u_y, u_z; bottom row → v_x, v_y, v_z, then suddenly you have Sarrus' scheme for computing the cross product! NOTE: they are not exactly the same: the cross product → gives you a vector, while the determinant → gives you a scalar. However, the determinant and cross product are related in some interesting ways.

ALTERNATE WAY TO COMPUTE 3X3 DETERMINANT Another way to compute the determinant is to break the matrix up into its columns and use the cross product and dot product: NOTE: the m_,n notation means the nth column vector of M

RULES OF THE DETERMINANT Inverse of M  M -1 Assuming we have a matrix M of size n X n:

SCALAR MULTIPLICATION AND THE DETERMINANT If you multiply the whole matrix M by a scalar a → |aM| = aⁿ|M|. However, if you multiply a by just ONE row (or ONE column) → a|M|.
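A quick numeric check of both claims for a 2x2 matrix (so n = 2; the matrix and scalar are my own example values):

```python
def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

M = [[1, 2], [3, 4]]
a = 3

aM = [[a * x for x in row] for row in M]   # scale the WHOLE matrix
one_row = [[a * x for x in M[0]], M[1]]    # scale ONE row only

print(det2(M))        # -2
print(det2(aM))       # -18 == a**2 * det2(M), since n = 2
print(det2(one_row))  # -6  == a * det2(M)
```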

ZERO DETERMINANT If either: two rows (or two columns) are parallel (their cross product is zero, i.e., both point exactly the same way), OR any row (or any column) is entirely composed of zeros, then → |M| = 0. Note: since |M⁻¹| = 1/|M|, if |M| = 0 then |M⁻¹| = 1/0 → so a zero determinant means that M⁻¹ does not exist.

ORIENTATION OF A BASIS If each column of a matrix M is in fact a basis vector, then: If determinant |M| is POSITIVE  basis is positively oriented  right-handed system If determinant |M| is NEGATIVE  basis is negatively oriented  left-handed system Example: standard basis  right-handed system

RELATIONSHIP TO AREA AND VOLUME Two vectors u and v can define a parallelogram. Three vectors u, v, and w form a solid → a parallelepiped. You can use the scalar triple product u·(v × w) to get its volume → same as taking the determinant of the matrix with u, v, and w as columns!
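The volume claim can be sketched with the scalar triple product u·(v × w); here an axis-aligned box with side lengths 1, 2, 3 (my own example) gives volume 6:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def triple(u, v, w):
    """Scalar triple product u . (v x w) = signed volume of the
    parallelepiped spanned by u, v, and w."""
    return dot(u, cross(v, w))

print(triple((1.0, 0.0, 0.0), (0.0, 2.0, 0.0), (0.0, 0.0, 3.0)))  # 6.0
```

The sign of the result also tells you the orientation of the three vectors, matching the earlier slide on positively and negatively oriented bases.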
