METHODS OF TRANSFORMING NON-POSITIVE DEFINITE CORRELATION MATRICES
Katarzyna Wojtaszek, student number 1118676, CROSS

I will try to answer the following questions:
- How can I estimate a correlation matrix when I have data?
- What can I do if the matrices are non-PD?
  - Shrinking method
  - Eigenvalues method
  - Vines method
- How can we calculate distances between the original and transformed matrices?
- Which method is the best?
  - Comparing
  - Conclusions

How can I estimate a correlation matrix if I have data? I can estimate the correlation matrix from the data as follows: 1. I can estimate each off-diagonal element separately (as a pairwise sample correlation).

2. I can also estimate the matrix from the whole data set together, with i = 1, …, s and j = 1, …, n.
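For concreteness, here is a minimal Python/NumPy sketch of the pairwise estimate in point 1. It assumes the data are stored as an s-by-n array (s observations of n variables); the slide's own estimation formulas are not reproduced in this transcript, so the sample (Pearson) correlation used below is the standard choice, not necessarily the exact formula from the presentation.

```python
import numpy as np

def pairwise_correlation_matrix(X):
    """Estimate each off-diagonal element separately as a sample (Pearson)
    correlation between columns i and j of the data X.

    X : array of shape (s, n) -- s observations of n variables
        (assumed layout; the slide's own formula is not shown here).
    """
    s, n = X.shape
    R = np.eye(n)
    for i in range(n):
        for j in range(i + 1, n):
            xi = X[:, i] - X[:, i].mean()
            xj = X[:, j] - X[:, j].mean()
            r = (xi @ xj) / np.sqrt((xi @ xi) * (xj @ xj))
            R[i, j] = R[j, i] = r
    return R

# With complete data this coincides with np.corrcoef(X, rowvar=False).
# With missing values, estimating each pair separately (from the cases where
# both variables are observed) is exactly the situation that can produce a
# non-PD pseudo correlation matrix.
```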

What can I do when the matrices are non-PD? We can transform them into PD correlation matrices using one of the following methods:
- Shrinking method
- Eigenvalues method
- Vines method

How can we calculate distances between the original and transformed matrices? There are many methods that can be used to calculate the distance between two matrices. In my project I used the following formula:
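The distance formula itself did not survive in this transcript, so the sketch below uses an assumed stand-in: the mean squared difference over the off-diagonal elements, a common scaled Frobenius-type distance.

```python
import numpy as np

def distance(R_original, R_transformed):
    """Distance between two n-by-n pseudo correlation matrices.

    Assumed form (the slide's own formula is not reproduced): the mean
    squared difference over the off-diagonal elements.
    """
    n = R_original.shape[0]
    diff = R_original - R_transformed
    off_diag = ~np.eye(n, dtype=bool)
    return np.mean(diff[off_diag] ** 2)
```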

1. SHRINKING METHOD
- Linear shrinking
Assumptions:
- R (n x n) is a given non-PD pseudo correlation matrix
- R* is an arbitrary (PD) correlation matrix
Define, for λ ∈ [0, 1]:
R_λ = R + λ(R* - R),
which is again a pseudo correlation matrix.

Idea: find the smallest λ such that the matrix R_λ is PD. Since R is non-PD, its smallest eigenvalue λ_min(R) is negative, so we have to choose λ large enough that the smallest eigenvalue of R_λ becomes non-negative. Using the bound λ_min(R_λ) ≥ (1 - λ) λ_min(R) + λ λ_min(R*), we get λ_min(R_λ) ≥ 0 if λ ≥ -λ_min(R) / (λ_min(R*) - λ_min(R)). In this way we find, for the given non-PD matrix R, a matrix R_λ which is PD.
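A minimal NumPy sketch of this linear shrinking step, under the assumptions above: compute an admissible λ from the smallest eigenvalues of R and R*, then form R_λ = R + λ(R* - R). Taking R* to be the identity matrix is only an illustrative choice here; any PD correlation matrix can be used.

```python
import numpy as np

def linear_shrinking(R, R_star=None, margin=1e-8):
    """Linear shrinking: R_lambda = R + lambda * (R_star - R).

    Uses the sufficient bound lambda >= -lam_R / (lam_S - lam_R), where
    lam_R and lam_S are the smallest eigenvalues of R and R_star; a small
    margin pushes the smallest eigenvalue of the result strictly above 0.
    R_star defaults to the identity matrix (illustrative choice only).
    """
    n = R.shape[0]
    if R_star is None:
        R_star = np.eye(n)
    lam_R = np.linalg.eigvalsh(R).min()        # smallest eigenvalue of R (negative)
    lam_S = np.linalg.eigvalsh(R_star).min()   # smallest eigenvalue of R* (positive)
    if lam_R >= 0:
        return R.copy()                        # already PD, nothing to do
    lam = (margin - lam_R) / (lam_S - lam_R)   # smallest admissible lambda (with margin)
    return R + lam * (R_star - R)
```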

- Non-linear shrinking
Assumption: R (n x n) is a given non-PD pseudo correlation matrix.
Procedure: the off-diagonal elements of R are shrunk towards zero by means of a function f, where f is a strictly increasing odd function with f(0) = 0, and δ > 0 is the shrinking parameter.

I considered four candidate choices for the function f.
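Because neither the exact transformation formula nor the four candidate functions are reproduced in this transcript, the sketch below only illustrates the general pattern: off-diagonal elements shrunk towards zero through a strictly increasing odd function f with f(0) = 0, controlled by a parameter δ > 0 that is increased until the matrix is PD. The specific form r -> f(f^(-1)(r) / (1 + δ)) and the choice f = tanh are assumptions, not the presentation's own definitions.

```python
import numpy as np

def nonlinear_shrinking(R, f=np.tanh, f_inv=np.arctanh,
                        delta_step=0.05, max_iter=200):
    """Illustrative non-linear shrinking (assumed form, see lead-in text).

    Off-diagonal elements, assumed to lie strictly inside (-1, 1), are mapped
    as r -> f(f_inv(r) / (1 + delta)) with f a strictly increasing odd
    function, f(0) = 0.  delta is increased until the matrix becomes PD.
    """
    n = R.shape[0]
    off = ~np.eye(n, dtype=bool)
    delta = 0.0
    R_new = R.copy()
    for _ in range(max_iter):
        if np.linalg.eigvalsh(R_new).min() > 0:
            return R_new, delta                 # PD reached for this delta
        delta += delta_step
        R_new = R.copy()
        R_new[off] = f(f_inv(R[off]) / (1.0 + delta))
    raise RuntimeError("did not reach positive definiteness")
```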

[Diagram: comparison of the linear and non-linear shrinking methods, showing how each moves a matrix R in R^(n x n) into the set of PD matrices.]

2. THE EIGENVALUE METHOD
Assumptions:
- R (n x n) is a non-PD pseudo correlation matrix
- P is an orthogonal matrix such that R = P D P^T
- D is the diagonal matrix with the eigenvalues of R on its diagonal
- ε is some constant ≥ 0

Idea: replace the negative eigenvalues in the matrix D by ε. We obtain
R* = P D* P^T,
where D* is the diagonal matrix with diagonal elements equal to λ_i if λ_i ≥ 0 and to ε otherwise, for i = 1, 2, …, n. (The result can then be rescaled so that its diagonal elements are again equal to 1.)
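A minimal NumPy sketch of the eigenvalue method as described above: eigendecompose R, replace the negative eigenvalues by ε, and rebuild the matrix. The final rescaling to a unit diagonal is included as the natural way to return a proper correlation matrix; treat that step as an assumption about the implementation detail.

```python
import numpy as np

def eigenvalue_method(R, eps=0.0):
    """Eigenvalue method: replace negative eigenvalues of R by eps >= 0,
    rebuild R* = P D* P^T, and rescale back to a unit diagonal so the
    result is again a correlation matrix (eps > 0 gives a strictly PD result)."""
    eigval, P = np.linalg.eigh(R)               # R = P diag(eigval) P^T
    d_star = np.where(eigval < 0, eps, eigval)  # clip negative eigenvalues to eps
    R_tilde = P @ np.diag(d_star) @ P.T
    scale = 1.0 / np.sqrt(np.diag(R_tilde))     # rescale to unit diagonal
    return R_tilde * np.outer(scale, scale)
```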

3. VINES METHOD
Assumptions: R (n x n) is a pseudo correlation matrix.
Idea: first we have to check whether our matrix is PD, by computing the partial correlations on a vine and checking that they all lie in (-1, 1).

If some partial correlation lies outside (-1, 1), we change its value to one inside (-1, 1) and recalculate the correlations using the recursive relation between correlations and partial correlations. We obtain a new matrix, which we then have to check again.

Example: suppose we have a 4 x 4 matrix R. It is very useful to draw the corresponding graphical model (the vine); see the sketch below.
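To make the check concrete, the sketch below computes partial correlations with the standard recursion and evaluates them on a D-vine over four variables. The particular vine, the example matrix, and the edge labelling are illustrative assumptions, since the slide's own graphical model is not reproduced here; the underlying fact is that R is PD exactly when all vine partial correlations lie in (-1, 1).

```python
import math
import numpy as np

def partial_corr(R, i, j, cond):
    """Partial correlation rho_{i,j;cond} via the standard recursion:
    rho_{ij;L,k} = (rho_{ij;L} - rho_{ik;L} rho_{jk;L})
                   / sqrt((1 - rho_{ik;L}^2)(1 - rho_{jk;L}^2))."""
    if not cond:
        return R[i, j]
    k, rest = cond[0], cond[1:]
    r_ij = partial_corr(R, i, j, rest)
    r_ik = partial_corr(R, i, k, rest)
    r_jk = partial_corr(R, j, k, rest)
    return (r_ij - r_ik * r_jk) / math.sqrt((1 - r_ik**2) * (1 - r_jk**2))

def dvine_partial_correlations(R):
    """Partial correlations on a D-vine over variables 1-2-3-4 (illustrative
    choice of vine).  Values outside (-1, 1) signal that R is non-PD."""
    return {
        "r12": partial_corr(R, 0, 1, ()),
        "r23": partial_corr(R, 1, 2, ()),
        "r34": partial_corr(R, 2, 3, ()),
        "r13;2": partial_corr(R, 0, 2, (1,)),
        "r24;3": partial_corr(R, 1, 3, (2,)),
        "r14;23": partial_corr(R, 0, 3, (1, 2)),
    }

# Hypothetical 4 x 4 example: the first-tree correlations look harmless,
# but the conditional correlation r14;23 falls far outside (-1, 1).
R = np.array([[ 1.0, 0.9, 0.9, -0.9],
              [ 0.9, 1.0, 0.9,  0.9],
              [ 0.9, 0.9, 1.0,  0.9],
              [-0.9, 0.9, 0.9,  1.0]])
for name, value in dvine_partial_correlations(R).items():
    print(name, round(value, 3), "ok" if -1 < value < 1 else "outside (-1, 1)")
```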

Which method is the best?
- Comparing: using Matlab I chose 500 non-PD matrices at random, transformed them, and calculated the average distance between each non-PD matrix and its transformed PD matrix.
[Table: average distance by matrix dimension n, for linear shrinking, non-linear shrinking with each of the four functions f, the eigenvalue method, and the vines method.]
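A rough Python/NumPy analogue of this experiment (the original was done in Matlab, and the slide does not describe how the 500 non-PD matrices were generated, so the sampling scheme below is an assumption): draw symmetric unit-diagonal matrices with off-diagonal entries uniform in (-1, 1), keep the non-PD ones, repair each with the eigenvalue method, and average the off-diagonal distances.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_pseudo_correlation(n):
    """Symmetric unit-diagonal matrix with off-diagonal entries uniform in
    (-1, 1) -- an assumed generation scheme, not the one from the slides."""
    A = rng.uniform(-1, 1, size=(n, n))
    R = np.tril(A, -1)
    return R + R.T + np.eye(n)

def repair_eigenvalue_method(R, eps=1e-8):
    """Eigenvalue method (as sketched earlier): clip negative eigenvalues
    and rescale back to a unit diagonal."""
    w, P = np.linalg.eigh(R)
    R_tilde = P @ np.diag(np.maximum(w, eps)) @ P.T
    s = 1.0 / np.sqrt(np.diag(R_tilde))
    return R_tilde * np.outer(s, s)

n, trials, distances = 5, 500, []
while len(distances) < trials:
    R = random_pseudo_correlation(n)
    if np.linalg.eigvalsh(R).min() >= 0:        # keep only non-PD draws
        continue
    R_fixed = repair_eigenvalue_method(R)
    off = ~np.eye(n, dtype=bool)
    distances.append(np.mean((R - R_fixed)[off] ** 2))

print(f"average distance over {trials} non-PD matrices:", np.mean(distances))
```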

[Illustration: average distance for each transformation method.]

Conclusions:
1. The reason that linear shrinking is a very poor method is that it shrinks all elements by the same relative amount.
2. The eigenvalue method is fast and gives very good results regardless of the matrix dimension.
3. For the non-linear shrinking method, performance depends on the projection function; two of the four candidate functions gave the best results.