# Generalised Inverses: Modal Analysis and Modal Testing, S. Ziaei Rad



Matrix Properties. Matrix rank: rank[A] = the number of columns of [A] which are linearly independent. Matrix norm: ‖[A]‖ is a non-negative number.

Matrix Norms. Frobenius norm: ‖[A]‖_F = (Σ_i Σ_j |a_ij|²)^(1/2). Spectral norm: ‖[A]‖_2 = (maximum eigenvalue of [A]ᴴ[A])^(1/2). Also: ‖[A]‖_1 = max_j Σ_i |a_ij| (maximum absolute column sum, j = 1, 2, …, m) and ‖[A]‖_∞ = max_i Σ_j |a_ij| (maximum absolute row sum, i = 1, 2, …, N).
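The four norms above can be checked numerically. A minimal sketch using NumPy (a tool assumed here for illustration, not part of the original slides), with an arbitrary example matrix:

```python
import numpy as np

# Example matrix (values chosen arbitrarily for illustration)
A = np.array([[1.0, -2.0],
              [3.0,  4.0]])

# Frobenius norm: square root of the sum of squared entries
fro = np.sqrt(np.sum(np.abs(A) ** 2))

# Spectral (2-) norm: square root of the largest eigenvalue of A^H A
spec = np.sqrt(np.max(np.linalg.eigvalsh(A.conj().T @ A)))

# 1-norm: maximum absolute column sum; infinity-norm: maximum absolute row sum
one_norm = np.max(np.sum(np.abs(A), axis=0))
inf_norm = np.max(np.sum(np.abs(A), axis=1))

# Cross-check against NumPy's built-in implementations
assert np.isclose(fro, np.linalg.norm(A, 'fro'))
assert np.isclose(spec, np.linalg.norm(A, 2))
```

For this matrix the column sums are 4 and 6 and the row sums are 3 and 7, so ‖A‖₁ = 6 and ‖A‖∞ = 7.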

Generalised Inverses. The generalised inverse of [B] (N×m), where m ≤ N, is the matrix [B]⁺ (m×N) defined by: [B]⁺ = ([B]ᵀ[B])⁻¹[B]ᵀ. This is the left-inverse of [B] and exists if [B] is of full rank, m.

Note that: [B]⁺ (m×N) [B] (N×m) = [I] (m×m), although [B] (N×m) [B]⁺ (m×N) ≠ [I] (N×N). For a matrix [C] (m×N) where m ≤ N, the generalised inverse [C]⁺ (N×m) is: [C]⁺ = [C]ᵀ([C][C]ᵀ)⁻¹. This is the right-inverse of [C] and exists only if [C] is of full rank, m.
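The left- and right-inverse formulas, and the fact that only one of the two products gives an identity matrix, can be sketched in NumPy (example matrices chosen arbitrarily for illustration):

```python
import numpy as np

# Tall matrix B (N=4, m=2) of full column rank: left-inverse B+ = (B^T B)^-1 B^T
B = np.array([[1.0,  0.0],
              [0.0,  1.0],
              [1.0,  1.0],
              [2.0, -1.0]])
B_plus = np.linalg.inv(B.T @ B) @ B.T

# B+ B = I (m x m), but B B+ is only a projector, not I (N x N)
assert np.allclose(B_plus @ B, np.eye(2))
assert not np.allclose(B @ B_plus, np.eye(4))

# Wide matrix C (m=2, N=4) of full row rank: right-inverse C+ = C^T (C C^T)^-1
C = B.T
C_plus = C.T @ np.linalg.inv(C @ C.T)
assert np.allclose(C @ C_plus, np.eye(2))
```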

Square Matrix. For a square matrix [A] (N×N), the left-inverse and the right-inverse are identical, and both are given by [A]⁻¹. In this case: [A]⁻¹[A] = [A][A]⁻¹ = [I], applicable to cases where [A] is of full rank, N.

Singular Value Decomposition. For a general N×m (m < N) matrix [D] whose rank r is less than m, none of the previous expressions permit determination of an inverse. Here it is necessary to use the Singular Value Decomposition (SVD): [D] (N×m) = [U] (N×N) [Σ] (N×m) [V]ᵀ (m×m)

where [U] and [V] are orthonormal matrices for which [U]ᵀ = [U]⁻¹, etc., and [Σ] is a diagonal matrix whose r (r ≤ m) non-zero diagonal elements (σ₁, σ₂, σ₃, …, σ_r) are the singular values of [D]. Then: [D]⁺ (m×N) = [V] (m×m) [Σ]⁺ (m×N) [U]ᵀ (N×N), where [Σ]⁺ (m×N) = diag(σ₁⁻¹, σ₂⁻¹, σ₃⁻¹, …, σ_r⁻¹, 0, 0, …, 0).
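The SVD-based pseudo-inverse [D]⁺ = [V][Σ]⁺[U]ᵀ can be assembled directly from the decomposition. A sketch in NumPy, using a deliberately rank-deficient example matrix (an illustration, not data from the slides):

```python
import numpy as np

# Rank-deficient D (N=4, m=3): the third column is the sum of the first two,
# so r = 2 < m and (D^T D) is singular -- the left-inverse formula fails.
D = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 2.0],
              [2.0, 0.0, 2.0]])

U, s, Vt = np.linalg.svd(D)          # D = U @ Sigma @ V^T; s holds the singular values
r = int(np.sum(s > 1e-10))           # numerical rank

# Sigma^+ (m x N): reciprocals of the non-zero singular values, zeros elsewhere
Sigma_plus = np.zeros((3, 4))
Sigma_plus[:r, :r] = np.diag(1.0 / s[:r])

D_plus = Vt.T @ Sigma_plus @ U.T
assert np.allclose(D_plus, np.linalg.pinv(D, rcond=1e-10))  # agrees with NumPy's pinv
```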

Introduction to the Singular Value Decomposition (SVD) Technique

Main Applications
- Calculation of the rank of a matrix
- Calculation of condition numbers
- Calculation of generalised inverses

The Rank of a Matrix. An N×N matrix with all rows (or columns) linearly independent has rank = N. If only r rows or columns are linearly independent, then the rank is r. An N×m matrix, where N ≥ m, is of "full rank" if its rank equals m. The classical procedure for calculating the rank of a matrix is by Gauss elimination: an N×N matrix of rank r (< N) is left with (N - r) rows of zeros after the elimination.

If the rows are not linearly dependent but are very close to it, there will be very small values (but not zeros) after a Gauss elimination. In these cases it is difficult to establish the rank of the matrix, and even more so if the elements are complex. The SVD makes this task much easier by working in terms of individual, real quantities: the singular values of the matrix.
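This near-rank-deficiency is easy to see in the singular values. A small NumPy sketch (example values assumed), in which the second row of a matrix is almost a copy of the first:

```python
import numpy as np

# Two rows almost linearly dependent: second row = first row + tiny perturbation
eps = 1e-8
A = np.array([[1.0, 2.0],
              [1.0, 2.0 + eps]])

s = np.linalg.svd(A, compute_uv=False)

# One O(1) singular value and one tiny one: the tiny value flags
# near-rank-deficiency directly, as a single real number, without
# having to judge small pivots left over from a Gauss elimination.
assert s[1] < 1e-7 * s[0]
```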

The SVD of an N×m real matrix [A] is given by: [A] (N×m) = [U] (N×N) [Σ] (N×m) [V]ᵀ (m×m), where [U] and [V] are orthogonal matrices satisfying: [U]ᵀ[U] = [U][U]ᵀ = [I] and [V]ᵀ[V] = [V][V]ᵀ = [I], so that [U]ᵀ = [U]⁻¹ and [V]ᵀ = [V]⁻¹. Also, it can be noted that [U], [V] and [Σ] are all real.

The matrix [Σ] is the matrix of singular values of [A] (its non-zero entries are, in fact, the square roots of the eigenvalues of [A]ᵀ[A]), having the form: [Σ] = diag(σ₁, σ₂, …, σ_r, 0, …, 0). If [A] is complex, [Σ] is still real, but [U] and [V] are complex, unitary matrices.

Matrix Rank. The rank of matrix [A] is equal to the number of non-zero singular values. An N×m matrix of rank r will have r non-zero singular values, with the remaining min(N, m) - r values zero or negligible. Comparison of the singular values permits establishment of the matrix rank. Usually, this requires the specification of a threshold value below which singular values are deemed to be "zero".
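The thresholding step can be sketched as follows (NumPy, with a synthetic noisy rank-2 matrix and a threshold chosen relative to the largest singular value; both are illustrative assumptions):

```python
import numpy as np

# "Measured" matrix: an exact rank-2 matrix plus small random noise, so the
# third singular value is tiny but not exactly zero.
rng = np.random.default_rng(0)
low_rank = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 3))
noisy = low_rank + 1e-9 * rng.standard_normal((5, 3))

s = np.linalg.svd(noisy, compute_uv=False)
threshold = 1e-6 * s[0]                  # "zero" threshold, relative to sigma_max
rank = int(np.sum(s > threshold))

assert rank == 2
assert rank == np.linalg.matrix_rank(noisy, tol=threshold)
```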

In modal analysis applications, the value of matrix rank can be associated with the number of genuine modes existing in a certain frequency range.

Condition Number. The condition number of a matrix can be expressed as: σ_max / σ_min, where σ_min is the smallest non-zero singular value. The condition number can be used as an indicator of potential computational difficulties (a high condition number reflects ill-conditioning of the matrix).
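Computing σ_max/σ_min for a nearly singular example matrix (values assumed for illustration) shows how the condition number flags trouble:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])           # nearly singular -> ill-conditioned

s = np.linalg.svd(A, compute_uv=False)
cond = s[0] / s[-1]                     # sigma_max / sigma_min

# Agrees with NumPy's built-in 2-norm condition number
assert np.isclose(cond, np.linalg.cond(A, 2))
assert cond > 1e3                       # a large value warns of ill-conditioning
```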

Generalised Inverse. One of the major applications of the SVD is the calculation of the generalised or pseudo-inverse of a matrix, a frequent requirement in many aspects of structural modelling. It is often necessary to "invert" a rectangular matrix when solving an overdetermined set of equations, and the matrices involved may well be ill-conditioned, especially when they are populated with measured data containing noise or other imperfections.

To solve: [A] (N×m) {x} (m×1) = {b} (N×1), where N > m, we can write: {x} (m×1) = [A]⁺ (m×N) {b} (N×1), where: [A]⁺ = ([A]ᵀ[A])⁻¹[A]ᵀ
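For a full-rank overdetermined system this classical pseudo-inverse recovers the solution exactly. A NumPy sketch with an assumed 4-equation, 2-unknown example:

```python
import numpy as np

# Overdetermined system: N = 4 equations, m = 2 unknowns
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0],
              [1.0, 4.0]])
x_true = np.array([0.5, 2.0])
b = A @ x_true                           # consistent right-hand side

# Classical pseudo-inverse: A is of full rank m, so (A^T A) is invertible
A_plus = np.linalg.inv(A.T @ A) @ A.T
x = A_plus @ b
assert np.allclose(x, x_true)
```

With noisy {b} the same formula returns the least-squares solution rather than an exact one.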

If [A] is not of full rank, the best way to determine its inverse is via the SVD, as follows: [A]⁺ = ([V]ᵀ)⁻¹ [Σ]⁺ [U]⁻¹ = [V] [Σ]⁺ [U]ᵀ, where [Σ]⁺ is an m×N "diagonal" matrix formed from the reciprocals of the non-zero singular values of [A], with the entries corresponding to zero singular values themselves set to zero.
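In the rank-deficient case the classical formula breaks down because ([A]ᵀ[A]) is singular, but the SVD route still yields a usable pseudo-inverse. A sketch with an assumed rank-2 example matrix:

```python
import numpy as np

# A is 4 x 3 but only rank 2 (third column = first column + second column),
# so (A^T A) is singular and ([A]^T [A])^-1 does not exist.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 2.0, 3.0],
              [2.0, 1.0, 3.0]])

U, s, Vt = np.linalg.svd(A)
tol = 1e-10 * s[0]
s_inv = np.array([1.0 / si if si > tol else 0.0 for si in s])

Sigma_plus = np.zeros((3, 4))            # m x N "diagonal" matrix
np.fill_diagonal(Sigma_plus, s_inv)
A_plus = Vt.T @ Sigma_plus @ U.T

# Minimum-norm least-squares solution of A x = b
b = A @ np.array([1.0, 2.0, 0.0])        # b lies in the range of A
x = A_plus @ b
assert np.allclose(A @ x, b)             # zero residual since b is in range(A)
```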

Numerical Example. Application: to determine the harmonic force vector {f} applied to a structure, where the measured harmonic responses are {y} and the relevant FRF matrix is [H]. Given: {y} (5×1) = [H] (5×3) {f} (3×1), with the numerical values of {y} and [H] as shown on the slide, find {f}.

Using the classical pseudo-inverse calculation (numerical results shown on the slide), recalculation of {y} from this force vector leads to a new response vector,

which is clearly incorrect (since it differs from the initial set). Applying the SVD to [H] gives the singular values shown on the slide.

From these results it is clear that the rank of [H] is 2 (and not 3). Thus, setting the third singular value to zero and then recalculating [H]⁺, we obtain the result shown on the slide; when this is used to recompute the response vector {y}, the original values are found exactly.

Other Applications of SVD. Smoothing: it is possible to smooth a matrix containing measured (i.e. noisy) data by computing its SVD, zeroing the negligible singular values, and then recomputing the matrix.
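The smoothing procedure can be sketched as follows (NumPy, with a synthetic noisy rank-2 matrix as an assumed stand-in for measured data):

```python
import numpy as np

# "Measured" matrix = exact rank-2 matrix + small random noise
rng = np.random.default_rng(1)
exact = rng.standard_normal((6, 2)) @ rng.standard_normal((2, 4))
measured = exact + 1e-4 * rng.standard_normal((6, 4))

# SVD, zero the negligible singular values, recompute the matrix
U, s, Vt = np.linalg.svd(measured, full_matrices=False)
s[2:] = 0.0
smoothed = U @ np.diag(s) @ Vt

# The truncation is the best rank-2 approximation (Eckart-Young), so the
# smoothed matrix is at least as close to the measurement as the exact one is
err_smooth = np.linalg.norm(measured - smoothed)
err_noise = np.linalg.norm(measured - exact)
assert err_smooth <= err_noise + 1e-12
```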

Determinants. The determinant of a matrix can be found using the SVD, as an aid to solving for the values of z for which det[A(z)] vanishes. We can write: det[A(z)] = det[U] det[Σ] det[V]ᵀ. But since [U] and [V] are orthogonal: det[U] = det[V] = ±1, so |det[A(z)]| is simply the product of the singular values.
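The relation |det[A]| = σ₁σ₂…σ_N is easy to verify numerically (example matrix assumed):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

s = np.linalg.svd(A, compute_uv=False)

# |det A| = product of singular values, since |det U| = |det V| = 1
assert np.isclose(np.prod(s), abs(np.linalg.det(A)))
```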

By analysing the variation with z of the smallest singular value, σ_r, it is possible to identify those value(s) of z that make σ_r a minimum, and hence the roots of det[A(z)]. This procedure can be used in multi-point excitation applications to determine natural frequencies and hence modal force appropriations.
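The scan over z can be sketched with a toy two-degree-of-freedom stiffness matrix (an assumed example; real applications use measured FRF data). The smallest singular value of [A(z)] = [K] - z[M] dips to zero at the eigenvalues of [K]:

```python
import numpy as np

# A(z) = K - z * M: det A(z) vanishes at the generalised eigenvalues.
K = np.array([[ 2.0, -1.0],
              [-1.0,  2.0]])
M = np.eye(2)

# Scan z and track the smallest singular value of A(z)
zs = np.linspace(0.0, 4.0, 401)
sigma_min = [np.linalg.svd(K - z * M, compute_uv=False)[-1] for z in zs]

# The minima of sigma_min sit at the eigenvalues of K: z = 1 and z = 3
i = int(np.argmin(sigma_min))
assert np.isclose(zs[i], 1.0) or np.isclose(zs[i], 3.0)
```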