Chapter 2: Interconnect Analysis
Prof. Lei He, Electrical Engineering Department, University of California, Los Angeles
URL: eda.ee.ucla.edu

Organization
- Chapter 2a: First/Second-Order Analysis
- Chapter 2b: Moment Calculation and AWE
- Chapter 2c: Projection-Based Model Order Reduction

Projection Framework: Change of Variables
Replace the original state x (dimension N) by a reduced state x̂ (dimension q) via the change of variables x ≈ U x̂, where U is an N×q matrix. Note: q << N.

Projection Framework
Original system: E dx/dt = A x + B u, y = C^T x. Substitute x ≈ U x̂:
E U dx̂/dt = A U x̂ + B u
Note: now there are few variables (q << N) in the state, but still thousands of equations (N).

Projection Framework (cont.)
Reduction of the number of equations: test the equations by left-multiplying with V_q^T:
V^T E U dx̂/dt = V^T A U x̂ + V^T B u
If V and U are biorthogonal (V^T U = I), this is a well-defined q×q system in x̂.

Projection Framework (cont.)
Dimensions: the N×N system matrices E and A are compressed to the q×q reduced matrices V^T E U and V^T A U via the q×N test matrix V^T and the N×q projection matrix U.

Projection Framework: change of variables (x ≈ U x̂), followed by equation testing (left-multiply by V^T).
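As a concrete illustration of the two steps above, here is a minimal numpy sketch. The matrices are random placeholders, not an actual interconnect model; it only shows how the change of variables and equation testing compress the system to q×q.

```python
import numpy as np

np.random.seed(0)
N, q = 100, 4          # original and reduced state dimensions, q << N

# Placeholder system matrices for E dx/dt = A x + B u, y = C^T x
E = np.eye(N)
A = -np.eye(N) + 0.01 * np.random.randn(N, N)   # roughly stable, for illustration
B = np.random.randn(N, 1)
C = np.random.randn(N, 1)

# Projection basis: random orthonormal columns, just to show the mechanics
U, _ = np.linalg.qr(np.random.randn(N, q))      # N x q
V = U                                            # simplest case: V = U

# Change of variables x ~= U xhat, then test equations by left-multiplying V^T
E_r = V.T @ E @ U    # q x q
A_r = V.T @ A @ U    # q x q
B_r = V.T @ B        # q x 1
C_r = U.T @ C        # q x 1

print(E_r.shape, A_r.shape)   # (4, 4) (4, 4)
```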

Approaches for Picking V and U
- Use eigenvectors
- Use time-series data: compute state snapshots, then use the SVD to pick q < k important vectors
- Use frequency-domain data: compute frequency samples, then use the SVD to pick q < k important vectors
- Use singular vectors of system Gramians?
- Use Krylov subspace vectors?
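One of the options above, picking the basis from time-series data via the SVD, can be sketched in numpy as follows. The snapshot matrix here is synthetic (assumed to lie mostly in a 3-dimensional subspace plus noise), standing in for states collected from a simulation.

```python
import numpy as np

np.random.seed(5)
N, k, q = 60, 20, 3

# Synthetic snapshot matrix: k state vectors that mostly live in a
# 3-dimensional subspace, plus small noise
basis = np.random.randn(N, 3)
X = basis @ np.random.randn(3, k) + 1e-3 * np.random.randn(N, k)

# The SVD picks the q < k most important directions as the projection basis U
U_full, s, _ = np.linalg.svd(X, full_matrices=False)
U = U_full[:, :q]
print(s[:4] / s[0])   # the first 3 singular values dominate the rest
```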

Intuitive View of the Krylov Subspace Choice for the Change-of-Base Projection Matrix
Expand the transfer function in a Taylor series around an expansion point. Changing base and using only the first few vectors of the Taylor series expansion is equivalent to matching the first derivatives (moments) around the expansion point.

Combine Point and Moment Matching: Multipoint Moment Matching
- Multiple expansion points give a larger frequency band
- Moment (derivative) matching gives more accurate behavior in between expansion points

Compare Padé Approximations and the Krylov Subspace Projection Framework
- Padé approximations: moment matching at a single DC point; numerically very ill-conditioned!
- Krylov subspace projection framework: multipoint moment matching; numerically very stable!

Aside on Krylov Subspaces: Definition
The order-k Krylov subspace generated from matrix A and vector b is defined as
K_k(A, b) = span{b, Ab, A^2 b, ..., A^(k-1) b}
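A direct transcription of this definition in numpy (illustrative only; the raw vectors are numerically poor, as the following slides explain):

```python
import numpy as np

def krylov_basis(A, b, k):
    """Return the raw (un-orthogonalized) Krylov vectors b, Ab, ..., A^(k-1) b
    as columns of an n x k matrix."""
    vecs = [b]
    for _ in range(k - 1):
        vecs.append(A @ vecs[-1])   # next vector: multiply previous one by A
    return np.column_stack(vecs)

np.random.seed(1)
A = np.random.randn(5, 5)
b = np.random.randn(5)
K = krylov_basis(A, b, 3)
print(K.shape)        # (5, 3): columns span span{b, Ab, A^2 b}
```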

Projection Framework: Moment Matching Theorem (E. Grimme '97)
If span{U} ⊇ K_q(A^-1 E, A^-1 B) and span{V} ⊇ K_q(A^-T E^T, A^-T C), then the first 2q moments (derivatives) of the reduced-system transfer function match those of the original around s = 0.

Special simple case #1: expansion at s = 0, V = U, orthonormal U^T U = I
If U and V are such that V = U, span{U} ⊇ K_q(A^-1 E, A^-1 B), and U^T U = I, then the first q moments (derivatives) of the reduced system match.

Need for Orthonormalization of U
The raw Krylov vectors will line up with the dominant eigenspace! Each successive multiplication amplifies the dominant eigendirections, so the vectors become nearly parallel and the basis numerically rank-deficient.

Need for Orthonormalization of U (cont.)
- In the "change of base" matrix U transforming to the new reduced state space, we can use ANY columns that span the reduced state space
- In particular, we can ORTHONORMALIZE the Krylov subspace vectors

Orthonormalization of U: The Arnoldi Algorithm
u_1 = b / ||b||
For i = 1 to k
  w = A u_i                       (new Krylov vector)
  For j = 1 to i                  (orthogonalize new vector)
    w = w - (u_j^T w) u_j
  u_{i+1} = w / ||w||             (normalize new vector)
Generates k+1 vectors!
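The algorithm above can be sketched in numpy as follows. This is a minimal sketch using modified Gram-Schmidt, with no breakdown handling (it assumes ||w|| never vanishes); like the slide, it generates k+1 vectors.

```python
import numpy as np

def arnoldi(A, b, k):
    """Arnoldi: return an n x (k+1) matrix U whose orthonormal columns span
    the order-(k+1) Krylov subspace K_{k+1}(A, b)."""
    n = b.shape[0]
    U = np.zeros((n, k + 1))
    U[:, 0] = b / np.linalg.norm(b)
    for i in range(1, k + 1):
        w = A @ U[:, i - 1]                  # new Krylov vector
        for j in range(i):                   # orthogonalize against previous
            w -= (U[:, j] @ w) * U[:, j]
        U[:, i] = w / np.linalg.norm(w)      # normalize new vector
    return U

np.random.seed(2)
A = np.random.randn(50, 50)
b = np.random.randn(50)
U = arnoldi(A, b, 5)
# Columns are orthonormal: U^T U = I
print(np.allclose(U.T @ U, np.eye(6)))       # True
```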

Special case #2: expansion at s = 0, biorthogonal V^T U = I
If U and V are such that span{U} ⊇ K_q(A^-1 E, A^-1 B), span{V} ⊇ K_q(A^-T E^T, A^-T C), and V^T U = I, then the first 2q moments of the reduced system match.

PVL: Padé Via Lanczos [P. Feldmann, R. W. Freund, TCAD '95]
PVL is an implementation of the biorthogonal case #2: use the Lanczos process to biorthonormalize the columns of U and V, which gives very good numerical stability.

Case #3: Intuitive View of Subspace Choice for General Expansion Points
- Instead of expanding around only s = 0, we can expand around other points s_j
- For each expansion point the problem can then be put again in the standard form

Case #3: Intuitive View of Krylov Subspace Choice for General Expansion Points (cont.)
Hence, choosing the Krylov subspace as the union of the subspaces generated at the expansion points s_1 = 0, s_2, s_3, ... matches the first k_j moments of the transfer function around each expansion point s_j.

Interconnected Systems
- In reality, reduced models (ROMs) are only useful when connected together with other models and circuit elements in a composite simulation
- Consider a state-space model connected to external circuitry (possibly with feedback!)
- Can we assure that the simulation of the composite system will be well-behaved? At least preclude non-physical behavior of the reduced model?

Passivity  Passive systems do not generate energy. We cannot extract out more energy than is stored. A passive system does not provide energy that is not in its storage elements.  If the reduced model is not passive it can generate energy from nothingness and the simulation will explode

Interconnecting Passive Systems
The interconnection of stable models is not necessarily stable, BUT the interconnection of passive models is a passive model.

Positive Real Functions
A positive real function is internally stable (no unstable poles), gives a real response for real s (real response), and has a non-negative real part (no negative resistors):
H(s) + H^H(s) ⪰ 0 for Re(s) > 0   (Hermitian = conjugate and transposed)
This means its real part is a positive semidefinite matrix at all frequencies.

Positive Realness & Passivity
For systems with an immittance (impedance or admittance) matrix representation, positive-realness of the transfer function is equivalent to passivity.

Necessary Conditions for Passivity (Poles/Zeros)
- The positive-real condition on the matrix rational function implies that:
  - If H(s) is positive-real, its inverse is also positive-real
  - If H(s) is positive-real, it has no poles in the RHP, and hence also no zeros there
- Occasional misconception: "if the system function has no poles and no zeros in the RHP, the system is passive"
- It is necessary that a positive-real function have no poles or zeros in the RHP, but not sufficient

Sufficient Conditions for Passivity
For sEx = Ax + Bu, y = C^T x, the following conditions are sufficient for passivity: the symmetric part of E is positive semidefinite, the symmetric part of A is negative semidefinite (A + A^T ⪯ 0), and the input matrix equals the output matrix (B = C^T). Note that these are NOT necessary conditions (common misconception).

Congruence Transformations Preserve Positive Semidefiniteness
- Def.: a congruence transformation maps a matrix A to U^T A U (the same matrix U on both sides)
- Note: case #1 in the projection framework (V = U) produces congruence transformations
- Property: a congruence transformation preserves the positive semidefiniteness of the matrix
- Proof: if x^T A x ≥ 0 for all x, then for any y, y^T (U^T A U) y = (Uy)^T A (Uy) ≥ 0; just rename x = Uy
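A quick numerical check of this property, with a random positive semidefinite matrix standing in for an extracted E or A:

```python
import numpy as np

np.random.seed(3)
N, q = 30, 3

# A symmetric positive semidefinite matrix (M M^T is PSD by construction)
M = np.random.randn(N, N)
E = M @ M.T

U = np.random.randn(N, q)            # any N x q projection matrix
E_r = U.T @ E @ U                    # congruence transformation

# Eigenvalues of the reduced matrix are still non-negative (up to roundoff)
eigs = np.linalg.eigvalsh(E_r)
print(np.all(eigs >= -1e-9))         # True
```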

PRIMA (for Preserving Passivity) (Odabasioglu, Celik, Pileggi, TCAD '98)
A different implementation of case #1: V = U, U^T U = I, within the Arnoldi Krylov projection framework. Use Arnoldi: numerically very stable.

PRIMA Preserves Passivity
- The main difference between case #1 and PRIMA:
  - case #1 applies the projection framework to sA^-1 Ex = x + A^-1 Bu
  - PRIMA applies the projection framework to sEx = Ax + Bu
- PRIMA preserves passivity because:
  - it uses Arnoldi so that U = V and the projection becomes a congruence transformation
  - E and A produced by electromagnetic analysis are typically (semi)definite with the required sign, while A^-1 E may not be
  - the input matrix must be equal to the output matrix
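A minimal PRIMA-style sketch in numpy, assuming dense matrices and hypothetical random system data with the required definiteness structure: build an orthonormal Arnoldi basis for K_q(A^-1 E, A^-1 B), then congruence-project the ORIGINAL sEx = Ax + Bu matrices. Real implementations use sparse factorizations and block/breakdown handling.

```python
import numpy as np

def prima_reduce(E, A, B, q):
    """Minimal PRIMA sketch: orthonormal U spanning K_q(A^-1 E, A^-1 B),
    then congruence projection of the original E, A, B."""
    v0 = np.linalg.solve(A, B).ravel()            # A^-1 B
    M = np.linalg.solve(A, E)                     # A^-1 E (dense; illustration only)
    # Arnoldi-style orthonormalization of the Krylov vectors
    U = np.zeros((E.shape[0], q))
    U[:, 0] = v0 / np.linalg.norm(v0)
    for i in range(1, q):
        w = M @ U[:, i - 1]
        for j in range(i):                        # orthogonalize against previous
            w -= (U[:, j] @ w) * U[:, j]
        U[:, i] = w / np.linalg.norm(w)           # normalize
    # Congruence transformation preserves the semidefiniteness of E and A
    return U.T @ E @ U, U.T @ A @ U, U.T @ B, U

np.random.seed(4)
N, q = 40, 4
M0 = np.random.randn(N, N)
E = M0 @ M0.T + np.eye(N)                         # symmetric positive definite
A = -np.diag(np.abs(np.random.randn(N)) + 1.0)    # A + A^T negative definite
B = np.random.randn(N, 1)
E_r, A_r, B_r, U = prima_reduce(E, A, B, q)
print(np.all(np.linalg.eigvalsh(E_r) > 0))        # True: E_r stays positive definite
```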

Compare Methods (number of moments matched by a model of order q; passivity preservation)
- case #1 (Arnoldi, V=U, U^T U=I on sA^-1 Ex = x + A^-1 Bu): q moments; does not preserve passivity
- PRIMA (Arnoldi, V=U, U^T U=I on sEx = Ax + Bu): q moments; preserves passivity (necessary when the model is used in a time-domain simulator)
- case #2 (PVL, Lanczos, V≠U, V^T U=I on sA^-1 Ex = x + A^-1 Bu): 2q moments (more efficient); does not preserve passivity (good only if the model is used in the frequency domain)