
1 Introduction to Model Order Reduction, II.2: The Projection Framework Methods. Luca Daniel, Massachusetts Institute of Technology, with contributions from Alessandra Nardi, Joel Phillips, and Jacob White

2 Projection Framework: Non-invertible Change of Coordinates. Approximate the original state x (size N) by x ~ Uq xr, where xr is the reduced state (size q) and Uq is an N x q change-of-coordinates matrix. Note: q << N.

3 Projection Framework. Original system: E dx/dt = A x + B u, y = C^T x. Substitute x ~ Uq xr: E Uq dxr/dt = A Uq xr + B u. Note: now there are few variables (q << N) in the state, but still thousands of equations (N).

4 Projection Framework (cont.). Reduction of the number of equations: test by multiplying by Vq^T: Vq^T E Uq dxr/dt = Vq^T A Uq xr + Vq^T B u, y = C^T Uq xr. If Vq and Uq are chosen biorthogonal (Vq^T Uq = Iq), the reduced system has q states and q equations.
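In matrix terms the reduction is just a pair of rectangular multiplications. Below is a minimal NumPy sketch of this projection step; the function name, the toy system, and the choice V = U are illustrative assumptions, not something prescribed by the slides.

```python
import numpy as np

def project_system(E, A, B, C, U, V):
    """Reduce E x' = A x + B u, y = C^T x by substituting x ~= U xr
    (non-invertible change of coordinates) and testing with V^T."""
    Er = V.T @ E @ U   # q x q
    Ar = V.T @ A @ U   # q x q
    Br = V.T @ B       # q x m
    Cr = U.T @ C       # q x p, reduced output y = Cr^T xr
    return Er, Ar, Br, Cr

# toy example: random system, V = U with orthonormal columns
rng = np.random.default_rng(0)
n, q = 8, 3
E = np.eye(n)
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))
B = rng.standard_normal((n, 1))
C = rng.standard_normal((n, 1))
U, _ = np.linalg.qr(rng.standard_normal((n, q)))
Er, Ar, Br, Cr = project_system(E, A, B, C, U, U)
print(Er.shape, Ar.shape)  # (3, 3) (3, 3)
```

Since E = I and U is orthonormal here, the reduced Er comes out as the q x q identity, which is a handy sanity check.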

5 Projection Framework (graphically): the N x N system matrix is compressed to q x q by multiplying on the left by the q x N matrix Vq^T and on the right by the N x q matrix Uq.

6 Projection Framework: a non-invertible change of coordinates (x ~ Uq xr) followed by equation testing (multiplication by Vq^T).

7 Approaches for picking V and U. II.2.b POD (Proper Orthogonal Decomposition) / SVD (Singular Value Decomposition) / KLD (Karhunen-Loeve Decomposition) / PCA (Principal Component Analysis) / point matching:
– Use eigenvectors of the system matrix (modal analysis)
– Use frequency-domain data: compute H(s1), ..., H(sk); use the SVD to pick q < k important vectors
– Use time-series data: compute x(t1), ..., x(tk); use the SVD to pick q < k important vectors

8 Approaches for picking V and U
– Use eigenvectors of the system matrix (POD or SVD or KLD or PCA)
– Use Krylov subspace vectors (moment matching)
– Use singular vectors of the system Gramians product (truncated balanced realizations)

9 A canonical form for model order reduction. Assuming A is non-singular, we can cast the dynamical linear system s E x = A x + B u into a canonical form for moment-matching model order reduction: s A^-1 E x = x + A^-1 B u. Note: this step is not necessary; it just keeps the notation simple for educational purposes.

10 Intuitive view of the Krylov subspace choice for the change-of-base projection matrix U. Taylor-series expansion of the transfer function: change base and use only the first few vectors of the Taylor series expansion; this is equivalent to matching the first q derivatives around the expansion point.

11 Aside on Krylov Subspaces: Definition. The order-k Krylov subspace generated from matrix A and vector b is defined as K_k(A, b) = span{b, Ab, A^2 b, ..., A^(k-1) b}.

12 Moment matching around non-zero frequencies. Instead of expanding only around s = 0, we can expand around other points s = s_j. For each expansion point the problem can then be put again in the canonical form.

13 Projection Framework: Moment-Matching Theorem (E. Grimme '97). If colsp(Uq) spans the input Krylov subspace K_q(A^-1 E, A^-1 B) and colsp(Vq) spans the output Krylov subspace K_q(A^-T E^T, A^-T C), then a total of 2q moments of the transfer function will match.

14 Combine point and moment matching: multipoint moment matching. Multiple expansion points give a larger band; moment (derivative) matching gives more accurate behavior in between expansion points.

15 Compare Pade approximations and the Krylov subspace projection framework. Pade approximations: moment matching at a single DC point, and numerically very ill-conditioned!!! Krylov subspace projection framework: multipoint moment matching AND numerically very stable!!!

16 Approaches for picking V and U
– Use eigenvectors of the system matrix (POD or SVD or KLD or PCA)
– Use Krylov subspace vectors (moment matching)
  – general Krylov subspace methods
  – case 1: Arnoldi
  – case 2: PVL
  – case 3: multipoint moment matching
  – moment matching preserving passivity: PRIMA
– Use singular vectors of the system Gramians product (truncated balanced realizations)

17 Special simple case #1: expansion at s = 0, V = U, orthonormal U^T U = I. If U and V are such that colsp(U) spans K_q(A^-1 E, A^-1 B) and V = U, then the first q moments (derivatives) of the reduced system match.

18 Algebraic proof of case #1 (expansion at s = 0, V = U, orthonormal U^T U = I): apply the lemma in the next slide k times.

19 Lemma. Note: in general U U^T is NOT the identity. BUT, substituting and using the orthonormality of U (U^T U = Iq), U U^T b = b for any b in the column span of U.

20 Need for orthonormalization of U. The vectors {b, Eb, ..., E^(k-1) b} cannot be computed directly: they quickly line up with the dominant eigenspace!
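A small numerical experiment (entirely made up for illustration) shows the problem: the raw vectors quickly align with the dominant eigenvector, so the matrix collecting them becomes numerically singular.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 50, 10
# symmetric matrix with one dominant eigenvalue (10 vs. the rest <= 1)
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
eigs = np.linspace(0.1, 1.0, n)
eigs[-1] = 10.0
E = Q @ np.diag(eigs) @ Q.T
b = rng.standard_normal(n)

K = np.empty((n, k))
v = b.copy()
for j in range(k):
    K[:, j] = v / np.linalg.norm(v)   # normalized only for comparison
    v = E @ v                         # raw power iteration: v = E^j b

# the raw Krylov vectors become numerically dependent:
print(np.linalg.cond(K))              # huge condition number
# cosine of the angle between the last vector and the dominant eigenvector:
print(abs(K[:, -1] @ Q[:, -1]))       # very close to 1
```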

21 Need for orthonormalization of U (cont.). In the "change of base" matrix U transforming to the new reduced state space, we can use ANY columns that span the reduced state space. In particular, we can ORTHONORMALIZE the Krylov subspace vectors.

22 Orthonormalization of U: The Arnoldi Algorithm (with computational complexity)
Normalize the first vector: u1 = b / ||b||  (O(n))
For i = 1 to q:
  Generate the new Krylov subspace vector: w = E u_i  (sparse: O(n); dense: O(n^2))
  Orthogonalize the new vector: for j = 1 to i, w = w - (u_j^T w) u_j  (O(q^2 n) total)
  Normalize the new vector: u_(i+1) = w / ||w||  (O(n))
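A minimal NumPy sketch of the algorithm above, using modified Gram-Schmidt; `matvec` stands for multiplication by E, and all names are mine:

```python
import numpy as np

def arnoldi(matvec, b, q):
    """Build an orthonormal basis U of the order-q Krylov subspace
    span{b, Eb, ..., E^(q-1) b}, where matvec(v) computes E @ v."""
    n = b.shape[0]
    U = np.zeros((n, q))
    U[:, 0] = b / np.linalg.norm(b)          # normalize first vector
    for i in range(1, q):
        w = matvec(U[:, i - 1])              # new Krylov direction
        for j in range(i):                   # orthogonalize (modified GS)
            w -= (U[:, j] @ w) * U[:, j]
        U[:, i] = w / np.linalg.norm(w)      # normalize
    return U

rng = np.random.default_rng(2)
n, q = 30, 5
E = rng.standard_normal((n, n))
b = rng.standard_normal(n)
U = arnoldi(lambda v: E @ v, b, q)
print(np.allclose(U.T @ U, np.eye(q)))       # True: columns are orthonormal
```

Each E^k b for k < q lies in the span of U's columns, which is exactly the property the moment-matching proofs rely on.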

23 Generating vectors for the Krylov subspace. Most of the computational cost is spent in calculating the new vector, i.e. setting up and solving a linear system, e.g. using GCR. If we have a good preconditioner and a fast matrix-vector product, each new vector is calculated in O(n); the total complexity for calculating the projection matrix Uq is then O(qn).

24 What about computing the reduced matrix? The orthonormalization coefficients produced while building the i-th column of Uq form the columns of an upper Hessenberg matrix, and that Hessenberg matrix IS the reduced matrix. So we don't need to compute the reduced matrix: we have it already, as a byproduct of the Arnoldi iteration.

25 Approaches for picking V and U
– Use eigenvectors of the system matrix (POD or SVD or KLD or PCA)
– Use Krylov subspace vectors (moment matching)
  – general Krylov subspace methods
  – case 1: Arnoldi
  – case 2: PVL
  – case 3: multipoint moment matching
  – moment matching preserving passivity: PRIMA
– Use singular vectors of the system Gramians product (truncated balanced realizations)

26 Special case #2: expansion at s = 0, biorthogonal V^T U = I. If U and V are such that colsp(U) spans K_q(A^-1 E, A^-1 B) and colsp(V) spans K_q(A^-T E^T, A^-T C), then the first 2q moments of the reduced system match.
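The 2q claim can be checked numerically with a small two-sided (Petrov-Galerkin) projection. The sketch below builds the two Krylov bases explicitly and rescales V to enforce biorthogonality; this is adequate only for tiny q, since the Lanczos recurrence (as in PVL) is the numerically sound way to construct these bases. The toy system and all names are my own assumptions.

```python
import numpy as np

def two_sided_reduction(E, A, B, C, q):
    """Case #2 sketch: U spans the input Krylov subspace
    K_q(A^-1 E, A^-1 b), V spans the output Krylov subspace
    K_q(A^-T E^T, A^-T c), rescaled so that V^T U = I_q."""
    def krylov_basis(M, v, q):
        K = np.zeros((len(v), q))
        K[:, 0] = v
        for i in range(1, q):
            K[:, i] = M @ K[:, i - 1]
        Qb, _ = np.linalg.qr(K)          # orthonormalize for stability
        return Qb
    U = krylov_basis(np.linalg.solve(A, E), np.linalg.solve(A, B[:, 0]), q)
    Vt = krylov_basis(np.linalg.solve(A.T, E.T), np.linalg.solve(A.T, C[:, 0]), q)
    V = Vt @ np.linalg.inv(Vt.T @ U).T   # enforce biorthogonality V^T U = I
    return V.T @ E @ U, V.T @ A @ U, V.T @ B, U.T @ C

def moments(Ei, Ai, Bi, Ci, kmax):
    """k-th moment (up to a common sign): Ci^T (Ai^-1 Ei)^k Ai^-1 Bi."""
    X = np.linalg.solve(Ai, Bi)
    vals = []
    for _ in range(kmax):
        vals.append((Ci.T @ X).item())
        X = np.linalg.solve(Ai, Ei @ X)
    return vals

rng = np.random.default_rng(7)
n, q = 15, 3
E = np.eye(n) + 0.1 * rng.standard_normal((n, n))
A = -np.eye(n) - 0.1 * rng.standard_normal((n, n))
B = rng.standard_normal((n, 1))
C = rng.standard_normal((n, 1))
Er, Ar, Br, Cr = two_sided_reduction(E, A, B, C, q)
print(moments(E, A, B, C, 2 * q))
print(moments(Er, Ar, Br, Cr, 2 * q))    # the 2q moments agree
```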

27 Proof of special case #2 (expansion at s = 0, biorthogonal V^T U = U^T V = Iq): apply the lemma in the next slide k times.

28 Lemma: substitute the reduced quantities and use biorthonormality (V^T U = Iq) at each step.

29 PVL: Pade Via Lanczos [P. Feldmann, R. W. Freund, TCAD '95]. PVL is an implementation of the biorthogonal case #2: use the Lanczos process to biorthonormalize the columns of U and V, which gives very good numerical stability.

30 Example: simulation of the voltage gain of a filter with PVL (Pade Via Lanczos)

31 Compare to Pade via AWE (Asymptotic Waveform Evaluation)

32 Approaches for picking V and U
– Use eigenvectors of the system matrix (POD or SVD or KLD or PCA)
– Use Krylov subspace vectors (moment matching)
  – general Krylov subspace methods
  – case 1: Arnoldi
  – case 2: PVL
  – case 3: multipoint moment matching
  – moment matching preserving passivity: PRIMA
– Use singular vectors of the system Gramians product (truncated balanced realizations)

33 Case #3: intuitive view of the subspace choice for general expansion points. Instead of expanding only around s = 0, we can expand around other points. For each expansion point the problem can then be put again in the canonical form.

34 Case #3: intuitive view of the Krylov subspace choice for general expansion points (cont.). Hence choosing the union of the Krylov subspaces generated at each expansion point (e.g. s1 = 0, s2, s3) matches the first k_j moments of the transfer function around each expansion point s_j.

35 Generating vectors for the Krylov subspace. Most of the computational cost is spent in calculating the new vector, i.e. setting up and solving a linear system, e.g. using GCR. If we have a good preconditioner and a fast matrix-vector product, each new vector is calculated in O(n); the total complexity for calculating the projection matrix Uq is then O(qn).

36 Approaches for picking V and U
– Use eigenvectors of the system matrix (POD or SVD or KLD or PCA)
– Use Krylov subspace vectors (moment matching)
  – general Krylov subspace methods
  – case 1: Arnoldi
  – case 2: PVL
  – case 3: multipoint moment matching
  – moment matching preserving passivity: PRIMA
– Use singular vectors of the system Gramians product (truncated balanced realizations)

37 Sufficient conditions for passivity: A is negative semidefinite. Note that these are NOT necessary conditions (a common misconception).

38 Example: finite-difference system from the Poisson equation (heat problem, heat in). We already know the finite-difference matrix is positive semidefinite; hence A (or E = A^-1) is negative semidefinite.

39 Sufficient conditions for passivity: E is negative semidefinite. Note that these are NOT necessary conditions (a common misconception).

40 Congruence Transformations Preserve Negative (or Positive) Semidefiniteness
– Def. congruence transformation: Ar = U^T A U (the same matrix U on both sides)
– Note: case #1 in the projection framework (V = U) produces congruence transformations
– Lemma: a congruence transformation preserves the negative (or positive) semidefiniteness of the matrix
– Proof: just rename y = Ux; then x^T (U^T A U) x = y^T A y <= 0
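The lemma is easy to confirm numerically; any tall matrix U works, orthonormal or not (a made-up example):

```python
import numpy as np

rng = np.random.default_rng(5)
n, q = 10, 4
M = rng.standard_normal((n, n))
A = -(M @ M.T)                      # negative semidefinite by construction
U = rng.standard_normal((n, q))     # any tall matrix works here
Ar = U.T @ A @ U                    # congruence transformation
# x^T Ar x = (U x)^T A (U x) <= 0 for every x, so Ar is still neg. semidef.
print(np.linalg.eigvalsh(0.5 * (Ar + Ar.T)).max() <= 1e-10)  # True
```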

41 Congruence Transformation Preserves Negative Definiteness of E (hence passivity and stability). If we use V = U (the same n x q matrix on both sides), then we lose half of the degrees of freedom, i.e. we match only q moments instead of 2q. But if the original matrix E is negative semidefinite, so is the reduced one; hence the system is passive and stable.

42 Sufficient conditions for passivity: A is negative semidefinite and E is positive semidefinite. Note that these are NOT necessary conditions (a common misconception).

43 Example: state-space model from MNA of R, L, C circuits. When using MNA for immittance systems, E is positive semidefinite and A is negative semidefinite. Lemma: A is negative semidefinite if and only if its symmetric part A + A^T is negative semidefinite.

44 PRIMA (for preserving passivity) (Odabasioglu, Celik, Pileggi, TCAD '98). A different implementation of case #1: V = U, U^T U = I, Arnoldi. Krylov projection framework applied directly to s E x = A x + B u. Use Arnoldi: numerically very stable.

45 PRIMA preserves passivity. The main difference between case #1 and PRIMA: case #1 applies the projection framework to the canonical form s A^-1 E x = x + A^-1 B u, while PRIMA applies the projection framework to s E x = A x + B u. PRIMA preserves passivity because:
– it uses Arnoldi so that U = V and the projection becomes a congruence transformation
– E and -A produced by electromagnetic analysis are typically positive semidefinite
– the input matrix must be equal to the output matrix
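A compact sketch of the PRIMA recipe for a single-input, single-output descriptor system, expanding at s0 = 0. The helper names and the toy "passive-style" matrices are my own; a production implementation would handle multiple ports, deflation, and restarting.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def prima(E, A, B, C, q):
    """PRIMA sketch (SISO, expansion at s0 = 0): Arnoldi basis U for
    K_q((-A)^-1 E, (-A)^-1 B), then congruence projection of
    s E x = A x + B u, y = C^T x."""
    lu = lu_factor(-A)
    w = lu_solve(lu, B[:, 0])
    U = np.zeros((A.shape[0], q))
    U[:, 0] = w / np.linalg.norm(w)
    for i in range(1, q):
        w = lu_solve(lu, E @ U[:, i - 1])
        for j in range(i):
            w -= (U[:, j] @ w) * U[:, j]
        U[:, i] = w / np.linalg.norm(w)
    # V = U: a congruence transformation, so the definiteness of E and A
    # (hence passivity/stability of RLC-type systems) is preserved
    return U.T @ E @ U, U.T @ A @ U, U.T @ B, U.T @ C

# toy system: E symmetric positive definite, A symmetric negative definite
rng = np.random.default_rng(6)
n, q = 20, 4
Me = rng.standard_normal((n, n)); E = Me @ Me.T + n * np.eye(n)
Ma = rng.standard_normal((n, n)); A = -(Ma @ Ma.T + n * np.eye(n))
B = rng.standard_normal((n, 1)); C = rng.standard_normal((n, 1))
Er, Ar, Br, Cr = prima(E, A, B, C, q)
print(np.linalg.eigvalsh(Er).min() > 0)   # True: reduced E still pos. def.
print(np.linalg.eigvalsh(Ar).max() < 0)   # True: reduced A still neg. def.
```

The first q moments of the reduced model match those of the original, which can be verified by expanding both transfer functions around s = 0.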

46 Algebraic proof of moment matching for PRIMA (expansion at s = 0, V = U, orthonormal U^T U = I). Used lemma: if U is orthonormal (U^T U = I) and b is a vector in the column span of U, then U U^T b = b.

47 Proof of the lemma: since b lies in the column span of U, b = U xr for some xr; then U U^T b = U U^T U xr = U xr = b.

48 Compare methods (moments matched by a model of order q; passivity preservation)
– case #1 (Arnoldi, V = U, U^T U = I on s A^-1 E x = x + B u): q moments; not passive
– PRIMA (Arnoldi, V = U, U^T U = I on s E x = A x + B u): q moments; passive (necessary when the model is used in a time-domain simulator)
– case #2 (PVL, Lanczos, V ≠ U, V^T U = I on s A^-1 E x = x + B u): 2q moments, more efficient; not passive (good only if the model is used in the frequency domain)

49 Conclusions
– Reduction via eigenmodes: expensive and inefficient
– Reduction via rational function fitting (point matching): inaccurate in between points, numerically ill-conditioned
– Reduction via quasi-convex optimization: quite efficient and accurate
– Reduction via moment matching, Pade approximations: better behavior, but covers a small frequency band and is numerically very ill-conditioned
– Reduction via moment matching, Krylov subspace projection framework: allows multipoint-expansion moment matching (wider frequency band), numerically very robust and computationally very efficient; PVL is more efficient for models used in the frequency domain; use PRIMA to preserve passivity if the model is for a time-domain simulator

50 Case study: passive reduced models from an electromagnetic field solver. [figure: a long coplanar T-line over a dielectric layer, shorted on the other side]

51 Importance of including dielectrics: a simple transmission-line example. [plot: admittance [S] (10^-4 to 10^0) vs. frequency [Hz] (0 to 6 x 10^8); solid: with dielectrics, dashed: without dielectrics]

52 Techniques for including dielectrics
– Finite element method
– Green's functions for dielectric bodies
– Surface formulations using the equivalence theorem (substitute dielectrics with equivalent surface currents and use free-space Green's functions)
– Volume formulations using polarization currents (can guarantee passivity)

53 Volume integral formulation including dielectrics: current and charge conservation, with separate equations for conductors and dielectrics.

54 Frequency-independent kernel approximation. Note: in this work we used a classical frequency-independent approximation for the integration kernel.

55 Reducing to algebraic form. Surface and volume discretization for both conductors and dielectrics, plus Galerkin testing, gives the branch equations (for conductors and for dielectrics).

56 A mesh formulation for both conductors and dielectrics. [figure: conductor over a dielectric layer]

57 Mesh analysis guarantees passivity: one can prove the required definiteness properties of the mesh-transformed matrices.

58 Mesh analysis guarantees passivity (cont.).

59 The branch matrix is block diagonal and the blocks are all positive (diagonal with positive coefficients, and positive definite when using Galerkin), hence it is positive semidefinite; so is the mesh matrix, since the congruence transformation preserves positive definiteness.


61 The matrix is block diagonal and the blocks are all positive semidefinite (diagonal with positive coefficients), hence it is also positive semidefinite; the congruence transformation preserves positive definiteness.

62 Example 1: frequency response of the coplanar transmission line. [plot: admittance [S] (10^-4 to 10^0) vs. frequency [Hz] (0 to 6 x 10^8); solid: with dielectrics, reduced model (order 16); circles: with dielectrics, full system (order 700)]

63 Example 2: frequency response of the line with opposite strips. [plot: admittance [S] vs. frequency [Hz]; solid: with dielectrics, reduced model (order 16); circles: with dielectrics, full system (order 700)]

64 Example 2: current distributions. Note: NOT TO SCALE! (filament widths reduced for visualization purposes)

65 Example 3: current distributions for two bus wires on an MCM

66 Frequency response for the reduced model of the MCM bus. [plot: admittance [S] vs. frequency [Hz]; solid: with dielectrics, reduced model (order 12); circles: with dielectrics, full system (order 600); dashed: without dielectrics]

67 Conclusions, electromagnetic example. A volume formulation with full mesh analysis (both conductors and dielectrics) produces well-conditioned and positive semidefinite matrices; hence guaranteed-passive models are generated when using congruence transformations.

68 Approaches for picking V and U
– Use eigenvectors of the system matrix (POD or SVD or KLD or PCA)
– Use Krylov subspace vectors (moment matching)
– Use singular vectors of the system Gramians product (truncated balanced realizations)

69 Observability Gramian. Energy of the output y(t) starting from state x with no input: x^T Wo x, where Wo = integral from 0 to infinity of e^(A^T t) C^T C e^(A t) dt. Note: Wo is also the solution of the Lyapunov equation A^T Wo + Wo A + C^T C = 0. Note: if x = x_i, the i-th eigenvector of Wo, the output energy is the corresponding eigenvalue. Hence eigenvectors of Wo corresponding to small eigenvalues do NOT produce much energy at the output (i.e. they are not very observable). Idea: let's get rid of them!

70 Controllability Gramian. Minimum amount of input energy required to drive the system to a specific state x: x^T Wc^-1 x, where Wc = integral from 0 to infinity of e^(A t) B B^T e^(A^T t) dt. It is also the solution of the Lyapunov equation A Wc + Wc A^T + B B^T = 0. Note: if x = x_i, the i-th eigenvector of Wc, the required energy is the inverse of the corresponding eigenvalue. Hence eigenvectors of Wc corresponding to small eigenvalues require a lot of input energy to be reached (i.e. they are not very controllable). Idea: let's get rid of them!
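For small systems each Gramian is one SciPy call. The sketch below also checks the energy interpretation of Wo by integrating the output of the unforced response directly; the toy system and all names are mine.

```python
import numpy as np
from scipy.linalg import expm, solve_continuous_lyapunov

# stable toy system  x' = A x + B u,  y = C x
rng = np.random.default_rng(3)
n = 6
A = -2.0 * np.eye(n) + 0.3 * rng.standard_normal((n, n))
B = rng.standard_normal((n, 1))
C = rng.standard_normal((1, n))

# Gramians as solutions of the Lyapunov equations
Wc = solve_continuous_lyapunov(A, -B @ B.T)    # A Wc + Wc A^T + B B^T = 0
Wo = solve_continuous_lyapunov(A.T, -C.T @ C)  # A^T Wo + Wo A + C^T C = 0

# energy check: output energy from initial state x0 (no input) = x0^T Wo x0
x0 = rng.standard_normal(n)
ts = np.linspace(0.0, 40.0, 4001)
ys = np.array([(C @ expm(A * t) @ x0).item() for t in ts])
energy = float(np.sum(0.5 * (ys[1:] ** 2 + ys[:-1] ** 2) * np.diff(ts)))
print(energy, float(x0 @ Wo @ x0))  # the two agree up to quadrature error
```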

71 Naive Controllability/Observability MOR. Suppose we could compute a basis for the strongly observable and/or strongly controllable subspaces: projection-based MOR could then give a reduced model that deletes the weakly observable and/or weakly controllable modes. Problems:
– What if the same mode is strongly controllable but weakly observable?
– Are the eigenvalues of the respective Gramians even unique?

72 Changing coordinate system. Consider an invertible change of coordinates. We know that the input/output relationship will be unchanged. But what about the Gramians and their eigenvalues? The Gramians and their eigenvalues change! Hence the relative degrees of observability and controllability are properties of the coordinate system. A bad choice of coordinates will lead to bad reduced models if we look at controllability and observability separately. What coordinate system should we use then?

73 Balancing. Fortunately the eigenvalues of the product Wc Wo (the Hankel singular values) do not change when changing coordinates; only the eigenvectors change. And since Wc and Wo are symmetric, a change-of-coordinates matrix can be found that diagonalizes both: in balanced coordinates the Gramians are equal and diagonal, with the Hankel singular values on the diagonal.
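One standard way to compute the balancing transformation is the "square-root" recipe: factor both Gramians and take one SVD of the product of the factors. A sketch on a small made-up MIMO system (the eigenvalue-based factorization is my choice; Cholesky also works when the Gramians are numerically positive definite):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, svd

rng = np.random.default_rng(4)
n = 6
A = -2.0 * np.eye(n) + 0.3 * rng.standard_normal((n, n))  # stable
B = rng.standard_normal((n, 2))
C = rng.standard_normal((2, n))

Wc = solve_continuous_lyapunov(A, -B @ B.T)
Wo = solve_continuous_lyapunov(A.T, -C.T @ C)

def sqrt_factor(W):
    # symmetric PSD factor W ~= L L^T, clipping tiny eigenvalues for safety
    d, V = np.linalg.eigh(W)
    return V * np.sqrt(np.clip(d, 1e-14 * d.max(), None))

# square-root balancing: x = T x_bal makes both Gramians equal and diagonal
Lc = sqrt_factor(Wc)
Lo = sqrt_factor(Wo)
Z, s, Yt = svd(Lo.T @ Lc)                # s: Hankel singular values
T = Lc @ Yt.T @ np.diag(s ** -0.5)
Tinv = np.diag(s ** -0.5) @ Z.T @ Lo.T

Wc_bal = Tinv @ Wc @ Tinv.T
Wo_bal = T.T @ Wo @ T
print(np.diag(Wc_bal))                   # both approximately equal to s
print(np.diag(Wo_bal))
```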

74 Selection of vectors for the columns of the reduced-order projection matrix. In balanced coordinates it is easy to select the best vectors for the reduced model: we want the subspace of vectors that are at the same time most controllable and most observable, in other words the ones corresponding to the largest eigenvalues of the controllability-observability Gramian product. Simply pick the eigenvectors corresponding to the largest entries on the diagonal (the Hankel singular values).

75 Truncated Balanced Realization: Summary
– The good news: we even have bounds for the error, and we can do a bit better still with optimal Hankel reduction.
– The bad news: it is expensive. We need to compute the Gramians (solve Lyapunov equations) and the eigenvalues of their product: O(N^3).
– The bottom line: if the size of your system allows O(N^3) computation, Truncated Balanced Realization is a much better choice than any other reduction method. But if you cannot afford O(N^3) computation (e.g. a dense matrix with N > 5000), then PRIMA, PVL, or quasi-convex optimization are better choices.

76 Approaches for picking V and U
– Use eigenvectors of the system matrix (POD or SVD or KLD or PCA)
– Use Krylov subspace vectors (moment matching)
– Use singular vectors of the system Gramians product
  – Truncated Balanced Realizations (TBR)
  – Guaranteed-passive TBR

77 TBR: Passivity Preserving? TBR does not generally preserve passivity: it is not guaranteed to be PR-preserving, nor BR-preserving. A special case: "symmetrizable" models. Suppose the system is transformable to one that is symmetric and internally PR; then TBR will generate PR models (via congruence!). This is a stronger property than for PRIMA: TBR is coordinate-invariant.

78 Positive-Real Lemma (Lur'e equations): the system is positive-real if and only if the associated matrix is positive semidefinite. A dual set of equations can also be written.

79 PR-Preserving TBR. Lur'e equations for the "Gramians": Lyapunov equations plus constraints (insight from the PR lemma). These can be used in a TBR procedure: "balance" the Lur'e equations, then truncate. By a similar partitioning argument, the truncated (reduced) system will be PR/BR (passive) if and only if the original is!

80 Physical Interpretation. Consider a Y-parameter model: inputs are voltages, outputs are currents, and the dissipated energy is well defined. Lur'e equation for the PR-"controllability" Gramian: its singular values represent gains from dissipated energy to state (minimum energy dissipation to reach a given state). Lur'e equation for the PR-"observability" Gramian: its singular values represent gains from state to output (energy dissipated, given an initial state).

81 Computational Procedure. Put the system into standard form (if the relevant matrix is singular, this requires an eigendecomposition). Solve the PR/BR Lur'e equations: solve a generalized eigenproblem of 2x the size, with special treatment for the singular case. Then balance and truncate as in standard TBR.

82 Alternate Hybrid Procedure. Perform standard TBR, then use the positive-real lemma to check the passivity of the generated models; if a model is not acceptable, proceed to PR-TBR. Why? It usually costs less, and it may give better models.

83 Example: RLC model. The TBR model is not positive-real.

84 Example: integrated spiral inductor. Order-60 PRIMA model vs. order-5 PR-TBR model.

85 Two Complementary Approaches
– Moment-matching approaches: accurate over a narrow band, matching function values and derivatives; cheap, O(qn). Use as a FIRST-STAGE reduction.
– Truncated Balanced Realization and Hankel reduction: optimal (best accuracy for a given size q) with an a priori error bound; expensive, O(n^3). Use as a SECOND-STAGE reduction.

86 Combined Krylov-TBR algorithm
Initial model: (A, B, C), size n
-> Krylov reduction (W_i, V_i): A_i = W_i^T A V_i, B_i = W_i^T B, C_i = C V_i
Intermediate model: (A_i, B_i, C_i), size n_i
-> TBR reduction (W_t, V_t): A_r = W_t^T A_i V_t, B_r = W_t^T B_i, C_r = C_i V_t
Reduced model: (A_r, B_r, C_r), size q
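A sketch of the two-stage flow for a standard state-space system (E = I, single input and output). The stage functions, sizes, and the clipped Gramian factorization are my own choices; the slides do not prescribe an implementation.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve, solve_continuous_lyapunov, svd

def krylov_stage(A, B, C, ni):
    """Stage 1, cheap: orthonormal Galerkin basis for
    span{A^-1 b, A^-2 b, ...}, i.e. moment matching at s = 0."""
    lu = lu_factor(A)
    U = np.zeros((A.shape[0], ni))
    w = lu_solve(lu, B[:, 0])
    U[:, 0] = w / np.linalg.norm(w)
    for i in range(1, ni):
        w = lu_solve(lu, U[:, i - 1])
        for j in range(i):
            w -= (U[:, j] @ w) * U[:, j]
        U[:, i] = w / np.linalg.norm(w)
    return U.T @ A @ U, U.T @ B, C @ U

def sqrt_factor(W):
    # symmetric PSD factor W ~= L L^T, clipping tiny eigenvalues for safety
    d, V = np.linalg.eigh(W)
    return V * np.sqrt(np.clip(d, 1e-14 * d.max(), None))

def tbr_stage(A, B, C, q):
    """Stage 2, near-optimal O(ni^3): square-root balanced truncation."""
    Wc = solve_continuous_lyapunov(A, -B @ B.T)
    Wo = solve_continuous_lyapunov(A.T, -C.T @ C)
    Lc, Lo = sqrt_factor(Wc), sqrt_factor(Wo)
    Z, s, Yt = svd(Lo.T @ Lc)               # Hankel singular values in s
    S = np.diag(s[:q] ** -0.5)
    V, W = Lc @ Yt[:q].T @ S, Lo @ Z[:, :q] @ S
    return W.T @ A @ V, W.T @ B, C @ V, s

rng = np.random.default_rng(8)
n, ni, q = 40, 10, 4
M = rng.standard_normal((n, n))
A = -(M @ M.T) / n - np.eye(n)              # symmetric, negative definite
B = rng.standard_normal((n, 1))
C = rng.standard_normal((1, n))

Ai, Bi, Ci = krylov_stage(A, B, C, ni)      # n = 40  -> n_i = 10
Ar, Br, Cr, s = tbr_stage(Ai, Bi, Ci, q)    # n_i = 10 -> q = 4
print(Ar.shape)                             # (4, 4)
```

The intermediate model reproduces the DC moment of the full model exactly, and the final TBR step obeys the usual error bound of twice the sum of the truncated Hankel singular values.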

87 Conclusions
– Moment-matching projection methods (e.g. PVL, PRIMA, Arnoldi) are suitable for VERY large systems, O(qn), but do not generate optimal models.
– PR/BR-TBR is independent of the system structure and guarantees passive models, but is computationally O(n^3), usable only on model sizes < 3000.
– A combination of projection methods and the new TBR technique provides near-optimal compression and guaranteed-passive models in reasonable time.
– Quasi-convex optimization reduction is also a good alternative, especially when building models from measurements.

88 Course Outline
– Numerical simulation: quick intro to PDE solvers; quick intro to ODE solvers
– Model order reduction: linear systems (common engineering practice; optimal techniques in terms of model accuracy; efficient techniques in terms of time and memory); nonlinear systems
– Parameterized model order reduction: linear systems; nonlinear systems
(schedule markers: Monday, Yesterday, Today, Tomorrow, Friday)

