A more reliable reduction algorithm for behavioral model extraction Dmitry Vasilyev, Jacob White Massachusetts Institute of Technology.

Outline: Background (projection framework for model reduction; the balanced truncation algorithm and its approximations; the AISIAD algorithm); Description of the proposed algorithm (modified AISIAD and a low-rank square root algorithm; efficiency and accuracy); Conclusions.

Model reduction problem. The reduction should be automatic and must preserve input-output properties: replace a model with many (> 10^4) internal states by one with few (< 100) internal states, keeping the same inputs and outputs.

Differential equation model: E dx/dt = Ax + Bu, y = Cx + Du, where x is the state, u is the vector of inputs, and y is the vector of outputs; A is stable and n x n (large), E is SPD and n x n. The model can represent finite-difference spatial discretizations of PDEs, or circuits with linear elements.

Model reduction problem: reduce the state dimension from n (large, thousands!) to q (small, tens). The reduction needs to be automatic and to preserve input-output properties (the transfer function).

Approximation error. In wide-band applications the model should have a small worst-case error: the maximal difference between the original and reduced transfer functions over all frequencies ω.
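To make the worst-case metric concrete, here is a sketch (not from the slides; `worst_case_error` and its arguments are illustrative names) that estimates the error by sampling the error system's transfer function on a frequency grid:

```python
import numpy as np

def worst_case_error(A, B, C, Ar, Br, Cr, freqs):
    """Grid estimate of max over w of ||H(jw) - Hr(jw)||_2, where
    H(s) = C (sI - A)^-1 B is the full transfer function and Hr the
    reduced one. A sketch only: the true H-infinity norm needs a
    dedicated (e.g. bisection-based) algorithm."""
    n, q = A.shape[0], Ar.shape[0]
    worst = 0.0
    for w in freqs:
        H = C @ np.linalg.solve(1j * w * np.eye(n) - A, B)
        Hr = Cr @ np.linalg.solve(1j * w * np.eye(q) - Ar, Br)
        worst = max(worst, np.linalg.norm(H - Hr, 2))
    return worst
```

Note that a grid estimate can only lower-bound the true worst-case error, so in practice the grid must be fine enough to capture any resonance peaks.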

Projection framework for model reduction. Most reduction methods are based on projection: pick biorthogonal projection matrices W and V (n x q, with W^T V = I); the columns of V and W form the projection bases, the state is approximated as x ≈ V x_r, and the dynamics Ax are projected as W^T A V x_r.
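In code the projection step is a one-liner; a minimal sketch (the function name is illustrative):

```python
import numpy as np

def project_model(A, B, C, D, V, W):
    """Form the projected model (W^T A V, W^T B, C V, D) from
    biorthogonal n x q bases V and W satisfying W^T V = I."""
    return W.T @ A @ V, W.T @ B, C @ V, D
```

With W = V orthonormal this reduces to an orthogonal (Galerkin) projection; the whole art of the methods below is in choosing V and W.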

Projection should preserve the important modes of the LTI system (input u(t), state x(t), output y(t)). The controllability gramian P answers: which modes are easier to reach? The observability gramian Q answers: which modes produce more output? The reduced model retains the most controllable and most observable modes; a retained mode must be both very controllable and very observable.

Balanced truncation reduction (TBR). Compute the controllability and observability gramians P and Q (~n^3 cost): AP + PA^T + BB^T = 0 and A^T Q + QA + C^T C = 0. The reduced model keeps the dominant eigenspaces of PQ (~n^3 cost): PQ v_i = λ_i v_i, w_i PQ = λ_i w_i. The reduced system is (W^T AV, W^T B, CV, D). Very expensive: P and Q are dense even for sparse models.
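The two Lyapunov solves and the eigen-decomposition of PQ can be sketched in NumPy alone via the vec/Kronecker trick. This is a toy, even more expensive than the dense ~n^3 solvers the slide criticizes, and serves only to make the equations concrete:

```python
import numpy as np

def lyap(A, W):
    """Solve A X + X A^T + W = 0 by the vec/Kronecker trick:
    (I kron A + A kron I) vec(X) = -vec(W). Fine only for tiny n."""
    n = A.shape[0]
    K = np.kron(np.eye(n), A) + np.kron(A, np.eye(n))
    x = np.linalg.solve(K, -W.reshape(-1, order='F'))
    return x.reshape((n, n), order='F')

def tbr_subspaces(A, B, C, q):
    """Dominant eigenpairs of P Q, as balanced truncation requires.
    Assumes the dominant eigenvalues of P Q are real (they are real
    and nonnegative in exact arithmetic)."""
    P = lyap(A, B @ B.T)       # A P + P A^T + B B^T = 0
    Q = lyap(A.T, C.T @ C)     # A^T Q + Q A + C^T C = 0
    lam, vecs = np.linalg.eig(P @ Q)
    order = np.argsort(-np.abs(lam))[:q]
    return vecs[:, order].real, lam[order].real
```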

Most reduction algorithms effectively approximate the dominant eigenspaces of P and Q separately; however, what matters is the product PQ. Arnoldi [Grimme '97]: colsp(V) = {A^-1 B, A^-2 B, ...}, W = V, approximates P_dom only. Padé via Lanczos [Feldmann and Freund '95]: colsp(V) = {A^-1 B, A^-2 B, ...} approximates P_dom; colsp(W) = {A^-T C^T, (A^-T)^2 C^T, ...} approximates Q_dom. Frequency-domain POD [Willcox '02] and Poor Man's TBR [Phillips '04]: colsp(V) = {(jω_1 I - A)^-1 B, (jω_2 I - A)^-1 B, ...} approximates P_dom; colsp(W) = {(jω_1 I - A)^-T C^T, (jω_2 I - A)^-T C^T, ...} approximates Q_dom.
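A minimal sketch of the Krylov-subspace construction these methods share (dense solves stand in for the sparse factorizations used in practice; the function name is illustrative):

```python
import numpy as np

def krylov_basis(A, B, k):
    """Orthonormal basis for colsp{A^-1 B, A^-2 B, ..., A^-k B}
    (moment matching about s = 0). This targets only the dominant
    controllable subspace; the slide's point is that it ignores the
    product P Q."""
    blocks, X = [], B
    for _ in range(k):
        X = np.linalg.solve(A, X)   # one sparse solve per block in practice
        blocks.append(X)
    V, _ = np.linalg.qr(np.hstack(blocks))
    return V
```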

RC line (symmetric circuit): V(t) is the input, i(t) is the output. The system is symmetric, so P = Q: all controllable states are observable and vice versa.

RLC line (nonsymmetric circuit): here P and Q are no longer equal! By keeping only the most controllable and/or the most observable states, we may fail to find the dominant eigenvectors of PQ.

Lightly damped RLC circuit (R = 0.008, L = C =, N = 100): exact low-rank approximations of P and Q of order < 50 lead to PQ ≈ 0!

Lightly damped RLC circuit: the union of the eigenspaces of P and Q does not necessarily approximate the dominant eigenspace of PQ. (Figure: top 5 eigenvectors of P; top 5 eigenvectors of Q.)

AISIAD model reduction algorithm. The idea of the AISIAD approximation: approximate the eigenvectors using power iterations, X_i = (PQ)V_i, V_{i+1} = qr(X_i); iterating, V_i converges to the dominant eigenvectors of PQ. But how do we find the product (PQ)V_i?
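The power-iteration claim can be demonstrated with explicit (small, dense) gramians. AISIAD's contribution is precisely to avoid forming P and Q, so this is only an illustration of the convergence behavior:

```python
import numpy as np

def pq_power_iteration(P, Q, q, iters=50, seed=0):
    """Block power iteration V <- qr(P (Q V)): V converges to the
    dominant q-dimensional eigenspace of P Q. AISIAD itself never
    forms P or Q; it approximates the products Q V_i and P W_i."""
    rng = np.random.default_rng(seed)
    V, _ = np.linalg.qr(rng.standard_normal((P.shape[0], q)))
    for _ in range(iters):
        V, _ = np.linalg.qr(P @ (Q @ V))
    return V
```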

Approximation of the product V_{i+1} = qr(PQV_i). The AISIAD algorithm splits the step in two, W_i ≈ qr(QV_i) and V_{i+1} ≈ qr(PW_i), approximating each product using the solution of a Sylvester equation.

A more detailed view of the AISIAD approximation: right-multiply the Lyapunov equation for P by W_i. This turns the unknown product X = PW_i (n x q) into the solution of a Sylvester equation with a small coefficient matrix H = W_i^T A^T W_i (q x q) and a right-hand-side matrix M (n x q) (original AISIAD).

Modified AISIAD approximation: right-multiply by V_i, and approximate the gramian appearing in the right-hand side by a low-rank surrogate, giving M̂ (n x q). Here we can take advantage of the numerous methods that approximate P and Q!

Specialized Sylvester equation: AX + XH = -M, where A is n x n, X and M are n x q, and H is q x q. We need only the column span of X.

Solving the Sylvester equation via the Schur decomposition of H: transforming with H = U T U^H (T triangular) gives A X' + X'T = -M', where X' = XU and M' = MU, which is then solved for the columns of X' one at a time.

Solving the Sylvester equation this way is applicable to any stable A and requires solving q shifted linear systems; the solution can be accelerated via fast matrix-vector products. Another method exists, based on IRA, but it needs A > 0 [Zhou '02].
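A sketch of the column-by-column recipe using SciPy's complex Schur form (dense `np.linalg.solve` stands in for the fast shifted solves mentioned above; the function name is illustrative):

```python
import numpy as np
from scipy.linalg import schur

def sylvester_columns(A, H, M):
    """Solve A X + X H = -M (A: n x n, H: q x q, M: n x q):
    Schur-factor the small H = U T U^H, transform to
    A Xt + Xt T = -Mt with Xt = X U, Mt = M U, and back-solve one
    shifted system (A + T[k,k] I) per column: q solves in total."""
    T, U = schur(H, output='complex')   # T upper triangular
    Mt = M.astype(complex) @ U
    n, q = M.shape
    Xt = np.zeros((n, q), dtype=complex)
    for k in range(q):
        # triangular coupling to the already-computed columns j < k
        rhs = -(Mt[:, k] + Xt[:, :k] @ T[:k, k])
        Xt[:, k] = np.linalg.solve(A + T[k, k] * np.eye(n), rhs)
    return (Xt @ U.conj().T).real       # real when A, H, M are real
```

Because each column needs one solve with (A + T[k,k] I), the cost is q shifted linear solves, matching the cost claim on the slide.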

For SISO systems and P̂ = 0, this Sylvester-based step is equivalent to matching at the frequency points -Λ(W^T A W).

Modified AISIAD algorithm:
1. Obtain low-rank approximations P̂ and Q̂ of P and Q (e.g., LR-sqrt).
2. Solve AX_i + X_i H + M = 0, so that X_i ≈ PW_i, where H = W_i^T A^T W_i and M = P̂(I - W_i W_i^T)A^T W_i + BB^T W_i.
3. Perform a QR decomposition X_i = V_i R.
4. Solve A^T Y_i + Y_i F + N = 0, so that Y_i ≈ QV_i, where F = V_i^T A V_i and N = Q̂(I - V_i V_i^T)AV_i + C^T C V_i.
5. Perform a QR decomposition Y_i = W_{i+1} R to get the new iterate.
6. Go to step 2 and iterate.
7. Bi-orthogonalize W and V and construct the reduced model ( W^T AV, W^T B, CV, D ).
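One possible reading of the iteration (steps 2-6) is sketched below, with dense low-rank gramian approximations `Phat`, `Qhat` and SciPy's generic Sylvester solver standing in for the specialized Schur-based solve; all names are illustrative:

```python
import numpy as np
from scipy.linalg import solve_sylvester

def maisiad(A, B, C, Phat, Qhat, W0, iters=5):
    """Sketch of the mAISIAD loop. Phat ~ P and Qhat ~ Q are low-rank
    gramian approximations held here as dense n x n matrices; W0 is an
    initial n x q orthonormal iterate; iters >= 1."""
    I = np.eye(A.shape[0])
    W = W0
    for _ in range(iters):
        H = W.T @ A.T @ W                       # step 2: A X + X H + M = 0
        M = Phat @ (I - W @ W.T) @ A.T @ W + B @ (B.T @ W)
        X = solve_sylvester(A, H, -M)           # X ~ P W
        V, _ = np.linalg.qr(X)                  # step 3
        F = V.T @ A @ V                         # step 4: A^T Y + Y F + N = 0
        N = Qhat @ (I - V @ V.T) @ A @ V + C.T @ (C @ V)
        Y = solve_sylvester(A.T, F, -N)         # Y ~ Q V
        W, _ = np.linalg.qr(Y)                  # step 5
    return V, W
```

Step 7 (bi-orthogonalization of W and V and forming the projected model) is omitted here; it is the same projection step used by all the methods above.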

For systems in descriptor form (E ≠ I), the generalized Lyapunov equations lead to similar approximate power iterations.

mAISIAD and the low-rank square root both start from low-rank gramians (an inexpensive step); the LR-square-root reduction is then cheap to form, while the mAISIAD iteration is more expensive (its cost varies). For the majority of non-symmetric cases, mAISIAD works better than the low-rank square root.

RLC line example results: H-infinity norm of the reduction error (worst-case discrepancy over all frequencies); N = 1000, 1 input, 2 outputs.

Steel rail cooling profile benchmark. Taken from the Oberwolfach benchmark collection (N and input count missing from the transcript; 6 outputs).

mAISIAD is useless for symmetric models: for symmetric systems (A = A^T, B = C^T) we have P = Q, so mAISIAD is equivalent to LR-sqrt with P̂ and Q̂ of order q (RC line example).

Cost of the algorithm: directly proportional to the cost of solving a shifted linear system, (A - s_jj I)x = b in the non-descriptor case or (A - s_jj E)x = b in the descriptor case (where s_jj is a complex number). The cost does not depend on the number of inputs and outputs.

Conclusions. The algorithm has superior accuracy and extended applicability with respect to the original AISIAD method. It is a very promising low-cost approximation to TBR. It is applicable to any dynamical system and will work (though usually worse) even without low-rank gramians. Passivity and stability preservation are possible via post-processing. It is not beneficial if the model is symmetric.