
1 Stats 443.3 & 851.3 Summary

2 The Woodbury Theorem: (A + UCV)^{-1} = A^{-1} - A^{-1}U(C^{-1} + VA^{-1}U)^{-1}VA^{-1}, where the inverses on the right-hand side are assumed to exist.
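A quick check of the identity (a sketch, using the form stated above): multiply the claimed inverse on the left by A + UCV and simplify.
\[
\begin{aligned}
(A+UCV)\bigl[A^{-1}-A^{-1}U(C^{-1}+VA^{-1}U)^{-1}VA^{-1}\bigr]
&= I + UCVA^{-1} - \bigl(U + UCVA^{-1}U\bigr)(C^{-1}+VA^{-1}U)^{-1}VA^{-1}\\
&= I + UCVA^{-1} - UC\bigl(C^{-1}+VA^{-1}U\bigr)(C^{-1}+VA^{-1}U)^{-1}VA^{-1}\\
&= I + UCVA^{-1} - UCVA^{-1} \;=\; I.
\end{aligned}
\]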

3 Block Matrices. Let the n × m matrix A be partitioned into sub-matrices A11, A12, A21, A22; similarly partition the m × k matrix B into sub-matrices B11, B12, B21, B22, with conformable block dimensions.

4 Product of Blocked Matrices. Then the product AB has the blocked form (AB)11 = A11B11 + A12B21, (AB)12 = A11B12 + A12B22, (AB)21 = A21B11 + A22B21, (AB)22 = A21B12 + A22B22; each block of the product is obtained by block-wise multiplication and addition.
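A minimal numpy sketch (hypothetical sizes and an arbitrary choice of partition) confirming that the blocked product equals the ordinary product:
```python
import numpy as np

# Hypothetical sizes: A is n x m = 4 x 5, B is m x k = 5 x 3
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 5))
B = rng.normal(size=(5, 3))

# Partition A into 2+2 rows and 3+2 columns; B into 3+2 rows and 2+1 columns
A11, A12, A21, A22 = A[:2, :3], A[:2, 3:], A[2:, :3], A[2:, 3:]
B11, B12, B21, B22 = B[:3, :2], B[:3, 2:], B[3:, :2], B[3:, 2:]

# Block-wise product, assembled back into a full matrix
AB_blocked = np.block([
    [A11 @ B11 + A12 @ B21, A11 @ B12 + A12 @ B22],
    [A21 @ B11 + A22 @ B21, A21 @ B12 + A22 @ B22],
])

assert np.allclose(AB_blocked, A @ B)  # agrees with the ordinary product
```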

5 The Inverse of Blocked Matrices. Let the n × n matrix A be partitioned into sub-matrices A11, A12, A21, A22; similarly partition the n × n matrix B into sub-matrices B11, B12, B21, B22. Suppose that B = A^{-1}.

6 Summarizing. Let A and B = A^{-1} be partitioned as above; then the blocks of B can be expressed in terms of the blocks of A and their Schur complements.
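One standard form of the blocked inverse (a reconstruction pivoting on A22, assuming the indicated inverses exist; the slide may have used the equivalent form pivoting on A11):
\[
\begin{aligned}
B_{11} &= (A_{11}-A_{12}A_{22}^{-1}A_{21})^{-1}, &
B_{12} &= -B_{11}A_{12}A_{22}^{-1},\\
B_{21} &= -A_{22}^{-1}A_{21}B_{11}, &
B_{22} &= A_{22}^{-1}+A_{22}^{-1}A_{21}B_{11}A_{12}A_{22}^{-1}.
\end{aligned}
\]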

7 Symmetric Matrices. An n × n matrix, A, is said to be symmetric if A' = A, i.e. aij = aji for all i and j. Note: for any matrix X, the matrices X'X and XX' are symmetric.

8 The trace and the determinant of a square matrix. Let A denote the n × n matrix (aij). Then tr(A) = a11 + a22 + ... + ann, the sum of the diagonal elements of A, and det(A) = |A| denotes the determinant of A.

9 Also |A| = ai1Ai1 + ai2Ai2 + ... + ainAin for any row i, where Aij = (-1)^{i+j}Mij is the cofactor of aij and Mij is the determinant of the matrix obtained from A by deleting row i and column j.

10 Some properties: tr(A + B) = tr(A) + tr(B), tr(cA) = c tr(A), tr(AB) = tr(BA), det(A') = det(A), det(cA) = c^n det(A), det(AB) = det(A) det(B), and det(A^{-1}) = 1/det(A) when A is non-singular.

11 Special Types of Matrices 1. Orthogonal matrices – A matrix P is orthogonal if P'P = PP' = I – In this case P^{-1} = P' – Also the rows (columns) of P have length 1 and are orthogonal to each other
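For instance (an illustrative example, not from the slides), the 2 × 2 rotation matrix is orthogonal:
\[
P=\begin{pmatrix}\cos\theta & -\sin\theta\\ \sin\theta & \cos\theta\end{pmatrix},\qquad
P'P=\begin{pmatrix}\cos^2\theta+\sin^2\theta & 0\\ 0 & \sin^2\theta+\cos^2\theta\end{pmatrix}=I .
\]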

12 Special Types of Matrices (continued) 2. Positive definite matrices – A symmetric matrix, A, is called positive definite if x'Ax > 0 for every vector x ≠ 0 – A symmetric matrix, A, is called positive semi-definite if x'Ax ≥ 0 for every vector x

13 Theorem: The symmetric matrix A is positive definite if the determinants of all of its leading principal sub-matrices (the upper-left k × k blocks, k = 1, ..., n) are positive.

14 Special Types of Matrices (continued) 3. Idempotent matrices – A symmetric matrix, E, is called idempotent if EE = E – Idempotent matrices project vectors onto a linear subspace
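A sketch of the example most relevant later in the deck (assuming an n × p matrix X of full column rank, as in the linear model below): the hat matrix
\[
P = X(X'X)^{-1}X', \qquad P' = P, \qquad P^2 = X(X'X)^{-1}X'X(X'X)^{-1}X' = P,
\]
so P is symmetric and idempotent, and Py is the orthogonal projection of y onto the column space of X.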

15 Eigenvectors, Eigenvalues of a matrix

16 Definition: Let A be an n × n matrix. If there exist a scalar λ and a non-zero vector x such that Ax = λx, then λ is called an eigenvalue of A and x is called an eigenvector of A associated with λ.

17 Note: Ax = λx with x ≠ 0 is equivalent to (A − λI)x = 0 having a non-zero solution, which happens if and only if det(A − λI) = 0.

18 det(A − λI) is a polynomial of degree n in λ. Hence there are n possible eigenvalues λ1, …, λn (the roots of the characteristic equation det(A − λI) = 0).
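A small worked example (illustrative, not from the slides): for the symmetric matrix
\[
A=\begin{pmatrix}2 & 1\\ 1 & 2\end{pmatrix},\qquad
\det(A-\lambda I)=(2-\lambda)^2-1=(\lambda-1)(\lambda-3),
\]
so the eigenvalues are λ1 = 1 and λ2 = 3, with eigenvectors proportional to (1, −1)' and (1, 1)' respectively.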

19 Theorem: If the matrix A is symmetric with distinct eigenvalues λ1, …, λn and corresponding eigenvectors x1, …, xn, assume the eigenvectors are normalized so that xi'xi = 1. Then the eigenvectors are mutually orthogonal (xi'xj = 0 for i ≠ j) and A = λ1x1x1' + ... + λnxnxn' = PDP', where P = (x1, ..., xn) is orthogonal and D = diag(λ1, ..., λn).
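An illustrative numpy check of the decomposition A = PDP' (np.linalg.eigh returns orthonormal eigenvectors of a symmetric matrix):
```python
import numpy as np

# A random 4 x 4 symmetric matrix
rng = np.random.default_rng(1)
M = rng.normal(size=(4, 4))
A = (M + M.T) / 2

# eigh: eigenvalues in ascending order, eigenvectors as orthonormal columns of P
eigenvalues, P = np.linalg.eigh(A)
D = np.diag(eigenvalues)

assert np.allclose(P.T @ P, np.eye(4))  # eigenvectors are orthonormal (P'P = I)
assert np.allclose(P @ D @ P.T, A)      # spectral decomposition A = P D P'
```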

20 The Generalized Inverse of a matrix

21 Definition: B (denoted by A⁻) is called the generalized inverse (Moore–Penrose inverse) of A if 1. ABA = A, 2. BAB = B, 3. (AB)' = AB, 4. (BA)' = BA. Note: A⁻ is unique.
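A numpy sketch checking the four conditions for a rank-deficient matrix (np.linalg.pinv computes the Moore–Penrose inverse):
```python
import numpy as np

# A 4 x 3 matrix of rank 2 (the third column is the sum of the first two)
rng = np.random.default_rng(2)
A2 = rng.normal(size=(4, 2))
A = np.column_stack([A2, A2[:, 0] + A2[:, 1]])

B = np.linalg.pinv(A)  # Moore-Penrose generalized inverse

assert np.allclose(A @ B @ A, A)      # 1. ABA = A
assert np.allclose(B @ A @ B, B)      # 2. BAB = B
assert np.allclose((A @ B).T, A @ B)  # 3. (AB)' = AB
assert np.allclose((B @ A).T, B @ A)  # 4. (BA)' = BA
```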

22 Uniqueness: suppose B1 and B2 both satisfy conditions 1–4. Then B1 = B1AB1 = B1AB2AB1 = B1(AB2)'(AB1)' = B1B2'A'B1'A' = B1B2'A' = B1AB2 = B1AB2AB2 = (B1A)(B2A)B2 = (B1A)'(B2A)'B2 = A'B1'A'B2'B2 = A'B2'B2 = (B2A)'B2 = B2AB2 = B2, so the generalized inverse is unique.
The general solution of a system of equations: if Ax = b is consistent, the general solution is x = A⁻b + (I − A⁻A)z, where z is arbitrary.

23 Let C be a p × q matrix of rank k < min(p, q); then C can be written as C = AB, where A is a p × k matrix of rank k and B is a k × q matrix of rank k.

24 The General Linear Model: y = Xβ + ε, where y is an n × 1 vector of observed responses, X is an n × p design matrix, β is a p × 1 vector of unknown parameters, and ε is an n × 1 vector of errors with ε ~ N(0, σ²I).

25 Geometrical interpretation of the General Linear Model: the mean vector E(y) = Xβ lies in the column space of X (the estimation space); fitting the model amounts to projecting y orthogonally onto this subspace, with the residual vector lying in the orthogonal complement (the error space).

26 Estimation in the General Linear Model: the least squares (equivalently, under normal errors, the maximum likelihood) estimate of β minimizes the residual sum of squares (y − Xβ)'(y − Xβ).

27 Minimizing (y − Xβ)'(y − Xβ) over β leads to the normal equations.

28 The Normal Equations: X'Xβ̂ = X'y.
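A sketch of the derivation (a standard calculation, filling in the algebra):
\[
(y-X\beta)'(y-X\beta) = y'y - 2\beta'X'y + \beta'X'X\beta,
\qquad
\frac{\partial}{\partial\beta}\bigl[\,y'y - 2\beta'X'y + \beta'X'X\beta\,\bigr] = -2X'y + 2X'X\beta,
\]
and setting the derivative equal to zero gives X'Xβ̂ = X'y.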

29 Solution to the normal equations: β̂ = (X'X)^{-1}X'y when X'X is non-singular (and β̂ = (X'X)⁻X'y in terms of a generalized inverse otherwise).
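A numpy sketch on synthetic data (hypothetical sizes and coefficients), comparing the explicit solution of the normal equations with the library least squares routine:
```python
import numpy as np

# Synthetic data: n = 100 observations, intercept plus two predictors
rng = np.random.default_rng(3)
n = 100
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
beta_true = np.array([1.0, 2.0, -0.5])
y = X @ beta_true + rng.normal(scale=0.3, size=n)

# Solve the normal equations X'X beta = X'y directly
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)

# The library least squares routine gives the same answer
beta_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)
assert np.allclose(beta_hat, beta_lstsq)

# Residual-based estimate of sigma^2: s^2 = RSS / (n - p)
residuals = y - X @ beta_hat
s2 = residuals @ residuals / (n - X.shape[1])
```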

30 Estimate of σ²: the maximum likelihood estimate is σ̂² = (y − Xβ̂)'(y − Xβ̂)/n; the usual (bias-corrected) estimator is s² = (y − Xβ̂)'(y − Xβ̂)/(n − p).

31 Properties of the Maximum Likelihood Estimates: Unbiasedness, Minimum Variance

32 Unbiasedness: s² is an unbiased estimator of σ².
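A sketch of the standard argument (using the projection matrix P = X(X'X)^{-1}X' introduced above and the expectation of a quadratic form):
\[
(n-p)s^2 = y'(I-P)y,\qquad
E\bigl[y'(I-P)y\bigr] = \sigma^2\,\mathrm{tr}(I-P) + \beta'X'(I-P)X\beta = \sigma^2(n-p),
\]
since (I − P)X = 0 and tr(P) = tr[(X'X)^{-1}X'X] = p.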

33 Distributional Properties of the Least Squares (Maximum Likelihood) Estimates: β̂ ~ N(β, σ²(X'X)^{-1}), (n − p)s²/σ² ~ χ²(n − p), and β̂ and s² are statistically independent.

34 The General Linear Model and The Estimates

35 Theorem

36 The General Linear Model with an intercept: yi = β0 + β1xi1 + β2xi2 + ... + βkxik + εi, for i = 1, ..., n.

37 The matrix formulation (intercept included). Then the model becomes y = Xβ + ε, where the first column of X is a column of 1's and β = (β0, β1, ..., βk)'. Thus to include an intercept, add an extra column of 1's to the design matrix X and include the intercept β0 in the parameter vector.

38 The Gauss-Markov Theorem: an important result in the theory of linear models; it establishes the optimality of the least squares estimates in a more general setting, where the errors need not be normally distributed.

39 The Gauss-Markov Theorem. Assume E(y) = Xβ and Var(y) = σ²I (no normality assumption). Consider the least squares estimate c'β̂ of c'β, which is an unbiased linear estimator of c'β, and let c̃ denote any other unbiased linear estimator of c'β. Then Var(c'β̂) ≤ Var(c̃); that is, c'β̂ has minimum variance among linear unbiased estimators.
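A sketch of the usual argument (filling in the key step): write the competing estimator as c̃ = b'y, where unbiasedness for all β forces b'X = c'. Then
\[
\mathrm{Var}(b'y) = \mathrm{Var}\bigl(c'\hat\beta\bigr) + \mathrm{Var}\bigl(b'y - c'\hat\beta\bigr),
\]
because Cov(c'β̂, b'y − c'β̂) = σ²[c'(X'X)^{-1}X'b − c'(X'X)^{-1}c] = 0 when b'X = c'. Hence Var(b'y) ≥ Var(c'β̂).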

40 Hypothesis testing for the GLM The General Linear Hypothesis

41 Testing the General Linear Hypothesis. The General Linear Hypothesis H0:
h11β1 + h12β2 + h13β3 + ... + h1pβp = h1
h21β1 + h22β2 + h23β3 + ... + h2pβp = h2
...
hq1β1 + hq2β2 + hq3β3 + ... + hqpβp = hq
where h11, h12, h13, ..., hqp and h1, h2, h3, ..., hq are known coefficients. In matrix notation: H0: Hβ = h, where H is the q × p matrix of coefficients (hij) and h = (h1, h2, ..., hq)'.

42 Testing: reject H0: Hβ = h if F = (Hβ̂ − h)'[H(X'X)^{-1}H']^{-1}(Hβ̂ − h)/(q s²) exceeds the critical value Fα(q, n − p).

43 An alternative form of the F statistic: F = [(RSS0 − RSS)/q] / [RSS/(n − p)], where RSS is the residual sum of squares for the full model and RSS0 is the residual sum of squares when the model is fitted subject to H0.
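A numpy sketch on synthetic data (hypothetical design, with the hypothesis H0: β1 = β2 = 0, i.e. q = 2 restrictions) checking that the two forms of the F statistic agree:
```python
import numpy as np

# Synthetic data under an intercept-only truth
rng = np.random.default_rng(4)
n, p, q = 50, 3, 2
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
y = X @ np.array([1.0, 0.0, 0.0]) + rng.normal(scale=0.5, size=n)

XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y
rss_full = np.sum((y - X @ beta_hat) ** 2)
s2 = rss_full / (n - p)

# Form 1: quadratic form in (H beta_hat - h)
H = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
h = np.zeros(q)
d = H @ beta_hat - h
F1 = d @ np.linalg.solve(H @ XtX_inv @ H.T, d) / (q * s2)

# Form 2: change in RSS when the reduced model (intercept only) is fitted
X0 = X[:, :1]
beta0_hat, *_ = np.linalg.lstsq(X0, y, rcond=None)
rss_reduced = np.sum((y - X0 @ beta0_hat) ** 2)
F2 = ((rss_reduced - rss_full) / q) / (rss_full / (n - p))

assert np.isclose(F1, F2)  # the two forms of the F statistic agree
```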

44 Confidence Intervals, Prediction Intervals, and Confidence Regions for the General Linear Model

45 One-at-a-time (1 − α)100% confidence interval for a single linear combination a'β; (1 − α)100% confidence intervals for σ² and σ.
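A reconstruction of the standard formulas (under the distributional results of slide 33):
\[
a'\hat\beta \;\pm\; t_{\alpha/2}(n-p)\, s\,\sqrt{a'(X'X)^{-1}a},
\qquad
\frac{(n-p)s^2}{\chi^2_{\alpha/2}(n-p)} \;\le\; \sigma^2 \;\le\; \frac{(n-p)s^2}{\chi^2_{1-\alpha/2}(n-p)},
\]
with the interval for σ obtained by taking square roots of the endpoints.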

46 Multiple confidence intervals associated with the test. Theorem: Let H be a q × p matrix of rank q. Then the intervals a'Hβ̂ ± sqrt(q Fα(q, n − p)) s sqrt(a'H(X'X)^{-1}H'a), taken over all vectors a, form a set of (1 − α)100% simultaneous confidence intervals for a'Hβ.

