
1 Linear Algebra and Matrices. Methods for Dummies, 20th October 2010. Melaine Boly, Christian Lambert

2 Overview
–Definitions: scalars, vectors and matrices
–Vector and matrix calculations
–Identity, inverse matrices & determinants
–Eigenvectors & dot products
–Relevance to SPM and terminology

3 Part I: Matrix Basics

4 Scalar A quantity (variable) described by a single real number, e.g. the intensity of each voxel in an MRI scan.

5 Vector Not a physics vector (magnitude, direction) but simply a column of numbers, e.g. x = [x1; x2; x3].

6 Matrices A rectangular array of numbers arranged in rows and columns. The columns can represent, for example, the same vector of intensities at different times, or different voxels at the same time. A vector is just an n x 1 matrix.

7 Matrices Matrix size is defined as rows x columns (R x C), and element d_ij sits in the i-th row and j-th column. Examples: square (3 x 3), rectangular (3 x 2), 3-dimensional (3 x 3 x 5).

8 Matrices in MATLAB
Define a matrix: X = [1 4 7; 2 5 8; 3 6 9] (the ; marks the end of a row)
Reference matrix values as X(row, column):
  X(2,3)        2nd element of the 3rd column = 8
  X(3,:)        the whole 3rd row (: refers to all of a row or column)
  X([2 3],2)    elements 2 & 3 of column 2
Special types of matrix:
  zeros(3,1)    all zeros, size 3x1
  ones(2,2)     all ones, size 2x2
Note: inside the parentheses, the , separates the row index from the column index.

9 Transposition Transposing a matrix turns its columns into rows and its rows into columns; the transpose of A is written A' (or A^T).
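A minimal MATLAB illustration (values are arbitrary):
  x = [1; 2; 3];   % a 3x1 column vector
  xt = x';         % its transpose: the 1x3 row vector [1 2 3]
  A = [1 2; 3 4];
  At = A';         % At(i,j) equals A(j,i), i.e. [1 3; 2 4]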

10 Matrix Calculations
Addition (element by element)
–Commutative: A+B = B+A
–Associative: (A+B)+C = A+(B+C)
Subtraction: performed by adding a negative matrix, A-B = A+(-B)

11 Scalar multiplication Multiplying a matrix by a scalar multiplies every element of the matrix by that scalar.

12 Matrix Multiplication
When A is an m x n matrix and B is a k x l matrix, the product AB is only possible if n = k; the result is an m x l matrix.
Simply put, you can ONLY perform A*B IF: the number of columns in A = the number of rows in B.
Hint: if you see this message in MATLAB:
  ??? Error using ==> mtimes
  Inner matrix dimensions must agree
then the number of columns in A is not equal to the number of rows in B.

13 Matrix multiplication
Multiplication method: sum over the products of the respective rows and columns, i.e. c_ij = a_i1 b_1j + a_i2 b_2j + ... + a_in b_nj.
MATLAB does all this for you! Simply type: C = A * B
Hint: you can work out the size of the output in advance; if you pre-allocate a matrix of this size in MATLAB (e.g. C = zeros(2,2)) then the calculation is quicker.

14 Matrix multiplication
Matrix multiplication is NOT commutative, i.e. the order matters!
–AB ≠ BA (in general)
Matrix multiplication IS associative
–A(BC) = (AB)C
Matrix multiplication IS distributive
–A(B+C) = AB+AC
–(A+B)C = AC+BC
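A quick MATLAB check of non-commutativity (arbitrary 2x2 matrices):
  A = [1 2; 3 4];
  B = [0 1; 1 0];
  A*B    % = [2 1; 4 3]
  B*A    % = [3 4; 1 2], so AB ~= BA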

15 Identity matrix
A special matrix that plays the same role as the number 1 in ordinary multiplication.
For any n x n matrix A, we have A I_n = I_n A = A.
For any n x m matrix A, we have I_n A = A and A I_m = A (so 2 possible identity matrices).
If the answer is always A, why use an identity matrix? We cannot divide matrices, so to solve many problems we have to use the inverse; the identity matrix is important in these types of calculation.

16 Identity matrix
Worked example, A I_3 = A for a 3x3 matrix:
  [1 2 3]   [1 0 0]   [1+0+0 0+2+0 0+0+3]   [1 2 3]
  [4 5 6] x [0 1 0] = [4+0+0 0+5+0 0+0+6] = [4 5 6]
  [7 8 9]   [0 0 1]   [7+0+0 0+8+0 0+0+9]   [7 8 9]
In MATLAB: eye(n) produces an n x n identity matrix (eye(r,c) gives an r x c matrix with ones on the diagonal).
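The same check in MATLAB:
  A = [1 2 3; 4 5 6; 7 8 9];
  I = eye(3);        % 3x3 identity matrix
  isequal(A*I, A)    % returns 1 (true)
  isequal(I*A, A)    % returns 1 (true)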

17 Part II: More Advanced Matrix Techniques

18 Vector components & orthonormal base
A given vector can be summarised by its components (a b), but only in a particular base (set of axes); the vector itself is independent of the choice of this particular base. For example, a and b are the components of the vector in the given base, i.e. the axes chosen for expressing its coordinates in vector space (a along the x axis, b along the y axis).
Orthonormal base: a set of vectors chosen to express the components of the others, perpendicular to each other and all with norm (length) = 1.

19 Linear combination & dimensionality
Vector space: a space defined by a set of vectors. The vector space defined by some vectors contains them and all the vectors that can be obtained by multiplying these vectors by real numbers and then adding them (linear combinations).
A matrix A (m x n) can itself be decomposed into as many vectors as its number of columns (or rows): each column of the matrix can be represented by a vector. The set of n column vectors defines a vector space proper to matrix A. Conversely, A can be viewed as a matrix representation of this set of vectors, expressing their components in a given base. A small sketch follows below.
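A small MATLAB sketch of a linear combination (illustrative values):
  x1 = [1; 0];
  x2 = [0; 1];
  v  = 3*x1 + 2*x2;   % v = [3; 2] lies in the space spanned by x1 and x2
  X  = [x1 x2];       % the matrix whose columns are x1 and x2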

20 Linear dependency and rank
If one can find a linear relationship between the rows or columns of a matrix, then the rank of the matrix (the number of dimensions of its vector space) will not be equal to its number of columns/rows: the matrix is said to be rank-deficient.
Example: when plotting the vectors, we see that x1 and x2 are superimposed. Looking closer, we see that one can be expressed as a linear combination of the other: x2 = 2 x1. The rank of the matrix is therefore 1, and the vector space it defines has only one dimension.

21 Linear dependency and rank
The rank of a matrix corresponds to the dimensionality of the vector space defined by this matrix: the number of vectors defined by the matrix that are linearly independent of each other.
Linearly independent vectors are vectors that each define one more dimension in space, compared to the space defined by the other vectors; they cannot be expressed as a linear combination of the others.
Note: linearly independent vectors are not necessarily orthogonal (perpendicular). Example: take 3 linearly independent vectors x1, x2 and x3. Vectors x1 and x2 define a plane (x,y), and vector x3 has an additional non-zero component along the z axis; but x3 is not necessarily perpendicular to x1 or x2.
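Rank in MATLAB (toy example):
  X = [1 2; 2 4];   % columns satisfy x2 = 2*x1
  rank(X)           % = 1: rank-deficient
  Y = [1 0; 0 1];
  rank(Y)           % = 2: full rank, the columns are linearly independent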

22 Eigenvalues and eigenvectors
An eigenvector u of a matrix A satisfies A u = k u: multiplying by A only rescales u, by the corresponding eigenvalue k.
One can represent the vectors of matrix X (the eigenvectors of A) as a set of orthogonal (perpendicular) vectors, representing the different dimensions of the original matrix A. The amplitude of the matrix A in these different dimensions is given by the eigenvalues corresponding to the different eigenvectors of A (the vectors composing X). Note: if a matrix is rank-deficient, at least one of its eigenvalues is zero.
In Principal Component Analysis (PCA), the matrix is decomposed into eigenvectors and eigenvalues AND rotated to a new coordinate system such that the greatest variance of any projection of the data comes to lie on the first coordinate (the first principal component), the second greatest variance on the second coordinate, and so on.
For a matrix A: u1, u2 = eigenvectors; k1, k2 = the corresponding eigenvalues.
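Eigen-decomposition in MATLAB (a simple symmetric example):
  A = [2 0; 0 3];
  [U, K] = eig(A);   % columns of U are eigenvectors, diag(K) the eigenvalues
  A * U(:,1)         % equals K(1,1) * U(:,1): A only rescales its eigenvectors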

23 Vector Products
For two vectors x and y (each n x 1):
Inner product: x^T y is a scalar, (1 x n)(n x 1) = 1 x 1.
Outer product: x y^T is a matrix, (n x 1)(1 x n) = n x n.
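In MATLAB (arbitrary values):
  x = [1; 2];
  y = [3; 4];
  x' * y   % inner product: 1*3 + 2*4 = 11, a scalar
  x * y'   % outer product: the 2x2 matrix [3 4; 6 8]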

24 Scalar product of vectors
Calculating the scalar product of two vectors is equivalent to projecting one vector onto the other. One can indeed show that x1 · x2 = ||x1|| ||x2|| cos(θ), where θ is the angle separating the two vectors when they have the same origin.
Consequently, if two vectors are orthogonal, their scalar product is zero: the projection of one onto the other is zero.
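Recovering the angle between two vectors in MATLAB (illustrative values):
  x1 = [1; 0];
  x2 = [1; 1];
  c = dot(x1, x2) / (norm(x1) * norm(x2));   % cos(theta) = 1/sqrt(2)
  theta = acos(c)                            % = pi/4, i.e. 45 degrees
  dot([1; 0], [0; 1])                        % orthogonal vectors: scalar product = 0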

25 Determinants
The determinant of a matrix is a scalar number representing certain intrinsic properties of this matrix. It is written det(A) or |A|. Its definition is an indispensable detour before tackling the operation corresponding to matrix division: the computation of the inverse.

26 Determinants
For a 1 x 1 matrix: det(A) = a_11.
For a 2 x 2 matrix: det(A) = a_11 a_22 - a_12 a_21.
For a 3 x 3 matrix:
  det(A) = a_11 a_22 a_33 + a_12 a_23 a_31 + a_13 a_21 a_32 - a_11 a_23 a_32 - a_12 a_21 a_33 - a_13 a_22 a_31
         = a_11 (a_22 a_33 - a_23 a_32) - a_12 (a_21 a_33 - a_23 a_31) + a_13 (a_21 a_32 - a_22 a_31)
In general, the determinant of a matrix can be calculated by multiplying each element of one of its rows by the determinant of the sub-matrix formed by the elements that remain when the row and column containing this element are removed, giving the obtained product the sign (-1)^(i+j).
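Checking this cofactor expansion in MATLAB (arbitrary matrix):
  A = [1 2 3; 4 5 6; 7 8 10];
  d = A(1,1)*det(A([2 3],[2 3])) ...
    - A(1,2)*det(A([2 3],[1 3])) ...
    + A(1,3)*det(A([2 3],[1 2]));
  det(A)   % = -3, the same value as d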

27 Determinants
Determinants can only be found for square matrices. For a 2x2 matrix A = [a b; c d]:
  det(A) = ad - bc
The determinant gives an idea of the 'volume' occupied by the matrix in vector space. A matrix A has an inverse matrix A^-1 if and only if det(A) ≠ 0.
In MATLAB: det(A).

28 Determinants
The determinant of a matrix is zero if and only if there exists a linear relationship between the rows or the columns of the matrix, i.e. if the matrix is rank-deficient. Equivalently, one can define the rank of a matrix A as the size of the largest square sub-matrix of A that has a non-zero determinant.
Here x1 and x2 are superimposed in space, because one can be expressed as a linear combination of the other: x2 = 2 x1. The determinant of the matrix X is therefore zero. The largest square sub-matrix with a non-zero determinant is 1x1, so the rank of the matrix is 1.
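The same example in MATLAB:
  X = [1 2; 2 4];   % second column = 2 * first column
  det(X)            % = 0: linear dependency between the columns
  rank(X)           % = 1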

29 Determinants
In a vector space of n dimensions, there can be no more than n linearly independent vectors. If 3 vectors (2 x 1) x'1, x'2, x'3 are represented by a matrix X' (2 x 3), then graphically x'3 can be expressed as a linear combination of x'1 and x'2. The largest square sub-matrix of X' with a non-zero determinant is 2x2, so the rank of the matrix is 2.

30 Determinants
The notions of determinant, rank and linear dependency are closely linked. Take a set of vectors x1, x2, ..., xn, all with the same number of elements: these vectors are linearly dependent if one can find a set of scalars c1, c2, ..., cn, not all zero, such that:
  c1 x1 + c2 x2 + ... + cn xn = 0
A set of vectors is linearly dependent if one of them can be expressed as a linear combination of the others. They then define in space a smaller number of dimensions than the total number of vectors in the set; the resulting matrix is rank-deficient and its determinant is zero.
Similarly, if all the elements of a row or column are zero, the determinant of the matrix is zero; and if a matrix has two equal rows or columns, its determinant is also zero.

31 Matrix inverse
Definition: a matrix A is called nonsingular or invertible if there exists a matrix B such that AB = BA = I.
Notation: a common notation for the inverse of a matrix A is A^-1, so A A^-1 = A^-1 A = I.
The inverse matrix is unique when it exists. If A is invertible, then A^-1 is also invertible, and (A^T)^-1 = (A^-1)^T.
Worked example: multiplying a 2x2 matrix by its inverse yields the identity matrix [1 0; 0 1].
In MATLAB: A^-1 = inv(A); matrix division A/B = A*B^-1.

32 Matrix inverse
E.g. for a 2x2 square matrix A = [a b; c d], the inverse is:
  A^-1 = 1/(ad - bc) [d -b; -c a]
For a matrix to be invertible, its determinant has to be non-zero (it has to be square and of full rank). A matrix that is not invertible is said to be singular; reciprocally, a matrix that is invertible is said to be non-singular. A quick numeric check follows below.
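Checking the 2x2 formula in MATLAB (arbitrary values):
  A = [4 7; 2 6];                 % det(A) = 4*6 - 7*2 = 10
  Ainv = (1/10) * [6 -7; -2 4];   % 1/(ad-bc) * [d -b; -c a]
  inv(A)                          % matches Ainv
  A * Ainv                        % = eye(2), up to rounding error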

33 Pseudoinverse
In SPM, design matrices are not square (more rows than columns, especially for fMRI). The system is said to be overdetermined: in general no exact solution exists. SPM uses a mathematical trick called the pseudoinverse, an approximation used in overdetermined systems where the solution is constrained to be the one that minimises the error on the β values (the least-squares solution).
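A toy illustration of the pseudoinverse as a least-squares solver (invented numbers):
  X = [1 1; 1 2; 1 3];   % 3 'scans', 2 regressors: more rows than columns
  Y = [2; 3; 5];
  beta = pinv(X) * Y;    % least-squares estimate, approx. [0.33; 1.50]
  res  = Y - X*beta;     % residual error that the solution minimises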

34 Part III: How are matrices relevant to fMRI data?

35 Overview figure: the SPM pipeline. Image time-series undergo realignment, smoothing (spatial filter) and normalisation (to an anatomical reference), then enter the General Linear Model with a design matrix to produce parameter estimates; statistical inference (RFT, p < 0.05) yields the Statistical Parametric Map.

36 Figure: voxel-wise time series analysis. The BOLD signal of a single voxel over time is submitted to model specification and parameter estimation; a hypothesis and statistic then yield the SPM.

37 How are matrices relevant to fMRI data?
The GLM equation: Y = X β + ε
where Y is the data vector (one value per scan, N scans), X is the design matrix, β the parameters, and ε the error vector.

38 How are matrices relevant to fMRI data?
Y = X . β + ε
Y, the data vector, is the response variable, e.g. the BOLD signal at a particular voxel: a single voxel sampled at successive time points (intensity over time), after preprocessing. Each voxel is considered as an independent observation.

39 How are matrices relevant to fMRI data?
Y = X . β + ε
X, the design matrix, contains the explanatory variables. These are assumed to be measured without error; they may be continuous, or dummy variables indicating the levels of an experimental factor.
Solving the equation for β tells us how much of the BOLD signal is explained by X (see the sketch below).
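A toy MATLAB sketch of GLM estimation for one voxel (simulated, illustrative data):
  n = 100;                                % number of scans
  boxcar = double(mod(0:n-1, 20) < 10)';  % on/off task regressor
  X = [boxcar ones(n,1)];                 % design matrix: task + constant
  Y = X*[2; 5] + randn(n,1);              % simulated BOLD signal with noise
  beta = pinv(X) * Y                      % estimates close to [2; 5]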

40 In Practice
–Estimate the MAGNITUDE of signal changes from the MR INTENSITY levels of each voxel at various time points
–Establish the relationship between the experiment and the voxel changes
–The calculation and notation require linear algebra and matrix manipulation

41 Summary
SPM builds up the data as a matrix; manipulation of matrices enables the unknown values to be calculated.
Y = X . β + ε
Observed = Predictors * Parameters + Error
BOLD = Design Matrix * Betas + Error

42 References
SPM course: http://www.fil.ion.ucl.ac.uk/spm/course/
Web guides:
http://mathworld.wolfram.com/LinearAlgebra.html
http://www.maths.surrey.ac.uk/explore/emmaspages/option1.html
http://www.inf.ed.ac.uk/teaching/courses/fmcs1/ (Formal Modelling in Cognitive Science course)
http://www.wikipedia.org
Previous MfD slides

43 ANY QUESTIONS?

