Stats 443.3 & 851.3 Summary

The Woodbury Theorem: $(A + UCV)^{-1} = A^{-1} - A^{-1}U\left(C^{-1} + VA^{-1}U\right)^{-1}VA^{-1}$, where the indicated inverses are assumed to exist.
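
As a quick numerical check (an added illustration, not from the original slides), the following numpy sketch verifies the identity on random matrices; the names A, U, C, V simply mirror the statement above:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 5, 2
A = np.diag(rng.uniform(1, 2, n))          # easy-to-invert base matrix
U = rng.standard_normal((n, k))
C = np.eye(k)
V = rng.standard_normal((k, n))

# Left-hand side: direct inversion of the full n x n matrix
lhs = np.linalg.inv(A + U @ C @ V)

# Right-hand side: Woodbury, which only inverts a small k x k matrix
A_inv = np.linalg.inv(A)
rhs = A_inv - A_inv @ U @ np.linalg.inv(np.linalg.inv(C) + V @ A_inv @ U) @ V @ A_inv

print(np.allclose(lhs, rhs))  # True
```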

Block Matrices. Let the $n \times m$ matrix $A$ be partitioned into sub-matrices $A_{11}$, $A_{12}$, $A_{21}$, $A_{22}$,
$A = \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix}$,
and similarly partition the $m \times k$ matrix $B$,
$B = \begin{pmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \end{pmatrix}$.

Product of Blocked Matrices. Then, provided the blocks are conformable,
$AB = \begin{pmatrix} A_{11}B_{11} + A_{12}B_{21} & A_{11}B_{12} + A_{12}B_{22} \\ A_{21}B_{11} + A_{22}B_{21} & A_{21}B_{12} + A_{22}B_{22} \end{pmatrix}$.
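
A minimal numpy sketch (added here for illustration) confirming that multiplying block-by-block agrees with the ordinary product; the partition sizes are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(1)
# A is 4x5 (rows split 2+2, columns split 3+2); B is 5x3 (rows split 3+2, columns split 2+1)
A11, A12 = rng.standard_normal((2, 3)), rng.standard_normal((2, 2))
A21, A22 = rng.standard_normal((2, 3)), rng.standard_normal((2, 2))
B11, B12 = rng.standard_normal((3, 2)), rng.standard_normal((3, 1))
B21, B22 = rng.standard_normal((2, 2)), rng.standard_normal((2, 1))

A = np.block([[A11, A12], [A21, A22]])
B = np.block([[B11, B12], [B21, B22]])

AB_blocks = np.block([
    [A11 @ B11 + A12 @ B21, A11 @ B12 + A12 @ B22],
    [A21 @ B11 + A22 @ B21, A21 @ B12 + A22 @ B22],
])
print(np.allclose(A @ B, AB_blocks))  # True
```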

The Inverse of Blocked Matrices. Let the $n \times n$ matrix $A$ be partitioned into sub-matrices $A_{11}$, $A_{12}$, $A_{21}$, $A_{22}$, and similarly partition the $n \times n$ matrix $B$ into $B_{11}$, $B_{12}$, $B_{21}$, $B_{22}$. Suppose that $B = A^{-1}$, so that $AB = BA = I$.

Summarizing. Let $A = \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix}$ and suppose that $A^{-1} = B = \begin{pmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \end{pmatrix}$. Then
$B_{11} = \left(A_{11} - A_{12}A_{22}^{-1}A_{21}\right)^{-1}$, $B_{22} = \left(A_{22} - A_{21}A_{11}^{-1}A_{12}\right)^{-1}$,
$B_{12} = -B_{11}A_{12}A_{22}^{-1}$, and $B_{21} = -B_{22}A_{21}A_{11}^{-1}$.
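
A short numpy check of these four formulas, assuming a well-conditioned matrix so that $A_{11}$ and $A_{22}$ are invertible (an added sketch, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(2)
# Build a well-conditioned 5x5 matrix and partition it 3+2 by 3+2
A = rng.standard_normal((5, 5)) + 5 * np.eye(5)
A11, A12 = A[:3, :3], A[:3, 3:]
A21, A22 = A[3:, :3], A[3:, 3:]

inv = np.linalg.inv
B11 = inv(A11 - A12 @ inv(A22) @ A21)   # inverse of the Schur complement of A22
B22 = inv(A22 - A21 @ inv(A11) @ A12)   # inverse of the Schur complement of A11
B12 = -B11 @ A12 @ inv(A22)
B21 = -B22 @ A21 @ inv(A11)

B = np.block([[B11, B12], [B21, B22]])
print(np.allclose(B, inv(A)))  # True
```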

Symmetric Matrices. An $n \times n$ matrix $A$ is said to be symmetric if $A' = A$, i.e. $a_{ij} = a_{ji}$ for all $i$ and $j$.

The Trace and the Determinant of a Square Matrix. Let $A$ denote the $n \times n$ matrix $(a_{ij})$. Then the trace of $A$ is
$\operatorname{tr}(A) = \sum_{i=1}^{n} a_{ii}$,

and the determinant $|A|$ can be computed by cofactor expansion along row $i$,
$|A| = \sum_{j=1}^{n} a_{ij} C_{ij}$, where $C_{ij} = (-1)^{i+j} M_{ij}$ and $M_{ij}$ is the minor of $a_{ij}$.

Some properties: $\operatorname{tr}(A + B) = \operatorname{tr}(A) + \operatorname{tr}(B)$, $\operatorname{tr}(AB) = \operatorname{tr}(BA)$, $|AB| = |A|\,|B|$, $|A'| = |A|$, and $|A^{-1}| = 1/|A|$.
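
These properties are easy to spot-check numerically; here is a small numpy sketch added for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

print(np.isclose(np.trace(A + B), np.trace(A) + np.trace(B)))   # tr(A+B) = tr(A) + tr(B)
print(np.isclose(np.trace(A @ B), np.trace(B @ A)))             # tr(AB) = tr(BA)
print(np.isclose(np.linalg.det(A @ B),
                 np.linalg.det(A) * np.linalg.det(B)))          # |AB| = |A||B|
print(np.isclose(np.linalg.det(A.T), np.linalg.det(A)))         # |A'| = |A|
```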

Special Types of Matrices
1. Orthogonal matrices: A matrix $P$ is orthogonal if $P'P = PP' = I$. In this case $P^{-1} = P'$. Also, the rows (and columns) of $P$ have length 1 and are orthogonal to each other.
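
For illustration (not part of the slides), the Q factor of a QR factorization is orthogonal, so it can be used to check these properties in numpy:

```python
import numpy as np

rng = np.random.default_rng(4)
# QR factorization of a random square matrix gives an orthogonal Q
P, _ = np.linalg.qr(rng.standard_normal((4, 4)))

print(np.allclose(P.T @ P, np.eye(4)))             # P'P = I
print(np.allclose(np.linalg.inv(P), P.T))          # P^{-1} = P'
print(np.allclose(np.linalg.norm(P, axis=0), 1))   # columns have length 1
```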

Special Types of Matrices (continued)
2. Positive definite matrices: A symmetric matrix $A$ is called positive definite if $x'Ax > 0$ for all $x \neq 0$, and positive semi-definite if $x'Ax \geq 0$ for all $x$.

Theorem. A symmetric matrix $A$ is positive definite if and only if all of its eigenvalues are positive; equivalently, if $A = BB'$ for some nonsingular matrix $B$.
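
A small numpy illustration of the theorem (added here), using a matrix that is positive definite by construction:

```python
import numpy as np

rng = np.random.default_rng(5)
B = rng.standard_normal((4, 4))
A = B @ B.T + 0.1 * np.eye(4)   # symmetric positive definite by construction

# All eigenvalues positive <=> positive definite (for symmetric A)
print(np.all(np.linalg.eigvalsh(A) > 0))  # True

# Equivalently, the Cholesky factorization A = LL' exists only for positive definite A
L = np.linalg.cholesky(A)                 # raises LinAlgError otherwise
print(np.allclose(L @ L.T, A))            # True
```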

Special Types of Matrices (continued)
3. Idempotent matrices: A symmetric matrix $E$ is called idempotent if $E^2 = EE = E$. Idempotent matrices project vectors onto a linear subspace.
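
A standard example is the hat matrix $X(X'X)^{-1}X'$, which projects onto the column space of $X$; the numpy sketch below (added for illustration) checks symmetry and idempotency:

```python
import numpy as np

rng = np.random.default_rng(6)
X = rng.standard_normal((10, 3))
# Hat matrix: projects onto the column space of X
E = X @ np.linalg.inv(X.T @ X) @ X.T

print(np.allclose(E @ E, E))            # idempotent: E^2 = E
print(np.allclose(E, E.T))              # symmetric
y = rng.standard_normal(10)
print(np.allclose(E @ (E @ y), E @ y))  # projecting twice changes nothing
```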

Eigenvectors, Eigenvalues of a matrix

Definition. Let $A$ be an $n \times n$ matrix. If $Ax = \lambda x$ for some scalar $\lambda$ and some nonzero vector $x$, then $\lambda$ is called an eigenvalue of $A$ and $x$ is called an eigenvector of $A$ associated with $\lambda$.

Note: a nonzero solution $x$ of $(A - \lambda I)x = 0$ exists if and only if $|A - \lambda I| = 0$. Now $|A - \lambda I|$ is a polynomial of degree $n$ in $\lambda$; hence there are $n$ possible eigenvalues $\lambda_1, \ldots, \lambda_n$.

Theorem. If the matrix $A$ is symmetric with distinct eigenvalues $\lambda_1, \ldots, \lambda_n$ and corresponding eigenvectors $x_1, \ldots, x_n$, assume the eigenvectors are normalized so that $x_i'x_i = 1$. Then $x_i'x_j = 0$ for $i \neq j$, and $A = \sum_{i=1}^{n} \lambda_i x_i x_i' = P\Lambda P'$, where $P = (x_1, \ldots, x_n)$ is orthogonal and $\Lambda = \operatorname{diag}(\lambda_1, \ldots, \lambda_n)$ (the spectral decomposition of $A$).
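
A numpy illustration of the spectral decomposition (added here; numpy.linalg.eigh is the eigensolver for symmetric matrices):

```python
import numpy as np

rng = np.random.default_rng(7)
M = rng.standard_normal((4, 4))
A = (M + M.T) / 2   # symmetrize

lam, P = np.linalg.eigh(A)   # eigenvalues and orthonormal eigenvectors

print(np.allclose(P.T @ P, np.eye(4)))           # eigenvectors are orthonormal
print(np.allclose(A, P @ np.diag(lam) @ P.T))    # A = P Lambda P'
# Equivalently, A = sum_i lambda_i x_i x_i'
S = sum(lam[i] * np.outer(P[:, i], P[:, i]) for i in range(4))
print(np.allclose(A, S))                         # True
```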

The Generalized Inverse of a matrix

Definition. $B$ (denoted by $A^-$) is called the generalized inverse (Moore-Penrose inverse) of $A$ if
1. $ABA = A$
2. $BAB = B$
3. $(AB)' = AB$
4. $(BA)' = BA$
Note: $A^-$ is unique.
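
numpy's pinv computes the Moore-Penrose inverse, so the four defining conditions can be verified directly (an added sketch, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(8)
A = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 4))  # 5x4 of rank 3

B = np.linalg.pinv(A)  # Moore-Penrose inverse

print(np.allclose(A @ B @ A, A))        # 1. ABA = A
print(np.allclose(B @ A @ B, B))        # 2. BAB = B
print(np.allclose((A @ B).T, A @ B))    # 3. (AB)' = AB
print(np.allclose((B @ A).T, B @ A))    # 4. (BA)' = BA
```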

Proof of uniqueness. Suppose $B_1$ and $B_2$ both satisfy conditions 1-4. Then
$B_1 = B_1AB_1 = B_1AB_2AB_1 = B_1(AB_2)'(AB_1)' = B_1B_2'A'B_1'A' = B_1B_2'A' = B_1AB_2$
$= B_1AB_2AB_2 = (B_1A)(B_2A)B_2 = (B_1A)'(B_2A)'B_2 = A'B_1'A'B_2'B_2 = A'B_2'B_2 = (B_2A)'B_2 = B_2AB_2 = B_2.$

The General Solution of a System of Equations. The general solution of the consistent system $Ax = b$ is
$x = A^-b + (I - A^-A)z$, where $z$ is arbitrary.
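
A quick numerical illustration of the general solution (added here), using numpy.linalg.pinv as the generalized inverse and a system that is consistent by construction:

```python
import numpy as np

rng = np.random.default_rng(9)
# A rank-deficient but consistent system Ax = b
A = rng.standard_normal((4, 2)) @ rng.standard_normal((2, 5))  # 4x5 of rank 2
b = A @ rng.standard_normal(5)            # b lies in the column space of A

A_minus = np.linalg.pinv(A)
for _ in range(3):
    z = rng.standard_normal(5)            # arbitrary vector
    x = A_minus @ b + (np.eye(5) - A_minus @ A) @ z
    print(np.allclose(A @ x, b))          # True for every choice of z
```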

Rank Factorization. Let $C$ be a $p \times q$ matrix of rank $k < \min(p, q)$. Then $C = AB$, where $A$ is a $p \times k$ matrix of rank $k$ and $B$ is a $k \times q$ matrix of rank $k$.

The General Linear Model: $Y = X\beta + \varepsilon$, where $Y$ is the $n \times 1$ vector of responses, $X$ is the $n \times p$ design matrix, $\beta$ is the $p \times 1$ vector of unknown parameters, and $\varepsilon \sim N_n(0, \sigma^2 I)$.

Geometrical Interpretation of the General Linear Model: the fitted vector $X\hat{\beta}$ is the orthogonal projection of $Y$ onto the column space of $X$, and the residual $Y - X\hat{\beta}$ is perpendicular to that subspace.

Estimation in the General Linear Model. The least squares estimate of $\beta$ minimizes the residual sum of squares $(Y - X\beta)'(Y - X\beta)$; under normal errors it coincides with the maximum likelihood estimate.

The Normal Equations. Setting the derivative of $(Y - X\beta)'(Y - X\beta)$ with respect to $\beta$ equal to zero yields the normal equations
$X'X\beta = X'Y$.

Solution to the Normal Equations. If $X'X$ is nonsingular, then $\hat{\beta} = (X'X)^{-1}X'Y$.
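
A minimal numpy sketch of solving the normal equations on simulated data (the true $\beta$ and noise level are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(10)
n, p = 50, 3
X = rng.standard_normal((n, p))
beta = np.array([1.0, -2.0, 0.5])
y = X @ beta + 0.1 * rng.standard_normal(n)

# Solve the normal equations X'X beta = X'y
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)

# numpy's least-squares routine gives the same answer
beta_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.allclose(beta_hat, beta_lstsq))  # True
```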

Estimate of $\sigma^2$. The maximum likelihood estimate is $\hat{\sigma}^2 = \frac{1}{n}(Y - X\hat{\beta})'(Y - X\hat{\beta})$; the bias-corrected version is $s^2 = \frac{1}{n - p}(Y - X\hat{\beta})'(Y - X\hat{\beta})$.

Properties of the Maximum Likelihood Estimates: Unbiasedness, Minimum Variance.

Unbiasedness: $E(\hat{\beta}) = \beta$, and $s^2$ is an unbiased estimator of $\sigma^2$.
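
A small Monte Carlo sketch (added for illustration) showing that dividing the residual sum of squares by $n - p$ gives an average close to the true $\sigma^2$:

```python
import numpy as np

rng = np.random.default_rng(11)
n, p, sigma2 = 30, 3, 4.0
X = rng.standard_normal((n, p))
beta = np.ones(p)

s2_values = []
for _ in range(20000):
    y = X @ beta + np.sqrt(sigma2) * rng.standard_normal(n)
    beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
    resid = y - X @ beta_hat
    s2_values.append(resid @ resid / (n - p))   # divide by n - p, not n

print(np.mean(s2_values))  # close to sigma2 = 4.0
```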

Distributional Properties of the Least Squares (Maximum Likelihood) Estimates: $\hat{\beta} \sim N_p\left(\beta, \sigma^2 (X'X)^{-1}\right)$, $(n - p)s^2/\sigma^2 \sim \chi^2_{n-p}$, and $\hat{\beta}$ and $s^2$ are independent.

The General Linear Model and the Estimates

Theorem

The General Linear Model with an intercept

The Matrix Formulation (intercept included). Then the model becomes $Y = \beta_0\mathbf{1} + X\beta + \varepsilon = X^*\beta^* + \varepsilon$, with $X^* = (\mathbf{1}, X)$ and $\beta^* = (\beta_0, \beta')'$. Thus, to include an intercept, add an extra column of 1's to the design matrix $X$ and include the intercept in the parameter vector.
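
A numpy illustration of this (added here; the true values are arbitrary): prepend a column of 1's, and the first fitted coefficient estimates the intercept:

```python
import numpy as np

rng = np.random.default_rng(14)
n = 20
x = rng.standard_normal((n, 2))
X_star = np.column_stack([np.ones(n), x])   # prepend a column of 1's
y = 3.0 + x @ np.array([1.0, -1.0]) + 0.1 * rng.standard_normal(n)

beta_star = np.linalg.solve(X_star.T @ X_star, X_star.T @ y)
print(beta_star)  # first entry estimates the intercept (about 3.0)
```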

The Gauss-Markov Theorem. An important result in the theory of linear models: it proves optimality of the least squares estimates in a more general setting, without assuming normality of the errors.

The Gauss-Markov Theorem. Assume $E(Y) = X\beta$ and $\operatorname{Var}(Y) = \sigma^2 I$. Consider the least squares estimate $c'\hat{\beta}$ of $c'\beta$, an unbiased linear estimator of $c'\beta$, and let $a'Y$ denote any other unbiased linear estimator of $c'\beta$. Then $\operatorname{Var}(c'\hat{\beta}) \leq \operatorname{Var}(a'Y)$; that is, $c'\hat{\beta}$ is the best linear unbiased estimator (BLUE) of $c'\beta$.
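
The theorem can be illustrated numerically (an added sketch with arbitrary simulated $X$ and $c$): any unbiased linear estimator $a'Y$ must satisfy $X'a = c$, so it differs from the least squares weight vector by a component orthogonal to the columns of $X$, which only inflates the variance $\sigma^2 a'a$:

```python
import numpy as np

rng = np.random.default_rng(15)
n, p = 20, 3
X = rng.standard_normal((n, p))
c = np.array([1.0, 1.0, 0.0])

# Least squares weight vector: a0'Y = c'beta_hat, and X'a0 = c (unbiasedness)
a0 = X @ np.linalg.inv(X.T @ X) @ c

# Any other unbiased linear estimator adds a component w with X'w = 0
w = rng.standard_normal(n)
w -= X @ np.linalg.lstsq(X, w, rcond=None)[0]   # project w off the column space of X
a = a0 + w
print(np.allclose(X.T @ a, c))    # still unbiased

# Variances are sigma^2 * ||a||^2; least squares has the smaller norm
print(a0 @ a0, a @ a)             # a0'a0 <= a'a
```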

Hypothesis Testing for the GLM: The General Linear Hypothesis

Testing the General Linear Hypothesis. The general linear hypothesis is
$H_0$: $h_{11}\beta_1 + h_{12}\beta_2 + h_{13}\beta_3 + \cdots + h_{1p}\beta_p = h_1$
$h_{21}\beta_1 + h_{22}\beta_2 + h_{23}\beta_3 + \cdots + h_{2p}\beta_p = h_2$
$\vdots$
$h_{q1}\beta_1 + h_{q2}\beta_2 + h_{q3}\beta_3 + \cdots + h_{qp}\beta_p = h_q$
where $h_{11}, h_{12}, h_{13}, \ldots, h_{qp}$ and $h_1, h_2, h_3, \ldots, h_q$ are known coefficients. In matrix notation: $H_0\!: H\beta = h$, where $H$ is the $q \times p$ matrix $(h_{ij})$ and $h = (h_1, \ldots, h_q)'$.

Testing. Reject $H_0\!: H\beta = h$ at level $\alpha$ if
$F = \dfrac{(H\hat{\beta} - h)'\left[H(X'X)^{-1}H'\right]^{-1}(H\hat{\beta} - h)/q}{s^2} > F_{\alpha}(q, n - p)$.

An Alternative Form of the F Statistic:
$F = \dfrac{(\mathrm{RSS}_0 - \mathrm{RSS}_1)/q}{\mathrm{RSS}_1/(n - p)}$,
where $\mathrm{RSS}_0$ and $\mathrm{RSS}_1$ are the residual sums of squares under $H_0$ and under the full model, respectively.
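
A numpy/scipy sketch of the F test in the first form, for the hypothetical hypothesis $H_0\!: \beta_2 = \beta_3 = 0$ on simulated data (all names and values here are illustrative choices, not from the slides):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(12)
n, p = 40, 3
X = rng.standard_normal((n, p))
y = X @ np.array([1.0, 0.0, 0.0]) + rng.standard_normal(n)

# Test H0: beta_2 = beta_3 = 0, i.e. H beta = h with
H = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
h = np.zeros(2)
q = H.shape[0]

XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y
rss = np.sum((y - X @ beta_hat) ** 2)
s2 = rss / (n - p)

d = H @ beta_hat - h
F = (d @ np.linalg.inv(H @ XtX_inv @ H.T) @ d) / q / s2
p_value = stats.f.sf(F, q, n - p)
print(F, p_value)
```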

Confidence Intervals, Prediction Intervals, and Confidence Regions for the General Linear Model

One-at-a-time $(1 - \alpha)100\%$ confidence interval for $c'\beta$:
$c'\hat{\beta} \pm t_{\alpha/2}(n - p)\, s\sqrt{c'(X'X)^{-1}c}$.
A $(1 - \alpha)100\%$ confidence interval for $\sigma^2$:
$\left( \dfrac{(n - p)s^2}{\chi^2_{\alpha/2}(n - p)}, \; \dfrac{(n - p)s^2}{\chi^2_{1-\alpha/2}(n - p)} \right)$;
taking square roots gives an interval for $\sigma$.
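
A numpy/scipy sketch computing both intervals on simulated data (an added illustration; the choice of $c$ is arbitrary):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(13)
n, p, alpha = 40, 3, 0.05
X = rng.standard_normal((n, p))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.standard_normal(n)

XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y
s2 = np.sum((y - X @ beta_hat) ** 2) / (n - p)

# (1 - alpha)100% CI for c'beta, here c picks out beta_1
c = np.array([1.0, 0.0, 0.0])
t_crit = stats.t.ppf(1 - alpha / 2, n - p)
half_width = t_crit * np.sqrt(s2 * (c @ XtX_inv @ c))
print(c @ beta_hat - half_width, c @ beta_hat + half_width)

# (1 - alpha)100% CI for sigma^2
lower = (n - p) * s2 / stats.chi2.ppf(1 - alpha / 2, n - p)
upper = (n - p) * s2 / stats.chi2.ppf(alpha / 2, n - p)
print(lower, upper)
```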

Multiple Confidence Intervals Associated with the Test. Theorem: Let $H$ be a $q \times p$ matrix of rank $q$. Then the intervals
$a'H\hat{\beta} \pm \sqrt{q\,F_{\alpha}(q, n - p)}\; s\sqrt{a'H(X'X)^{-1}H'a}$, over all vectors $a$,
form a set of $(1 - \alpha)100\%$ simultaneous confidence intervals for $a'H\beta$.