ECE 530 – Analysis Techniques for Large-Scale Electrical Systems
Lecture 24: Power System Equivalents; Krylov Subspace Methods
Prof. Hao Zhu
Dept. of Electrical and Computer Engineering
University of Illinois at Urbana-Champaign
haozhu@illinois.edu
11/20/2014

Announcements
No class on Thursday, Dec. 4

Ward Equivalents (Kron Reduction)
The equivalent is formed by reducing the bus admittance matrix
Done via a partial factorization of the Ybus, which is computationally efficient: the Yee matrix is never explicitly inverted!
Similar to what is done when fills are added, with new equivalent lines eventually joining the boundary buses
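To make the reduction concrete, here is a minimal Python sketch (not PowerWorld's implementation) of Kron reduction as a Schur complement, Y_eq = Y_kk − Y_ke Y_ee⁻¹ Y_ek. The 4-bus Ybus values and the bus partition are made up for illustration; using np.linalg.solve factorizes Y_ee rather than inverting it explicitly, mirroring the partial-factorization idea above.

```python
import numpy as np

def kron_reduce(Y, keep, elim):
    """Reduce Ybus onto the retained buses:
    Y_eq = Y_kk - Y_ke @ inv(Y_ee) @ Y_ek.
    Y_ee is never inverted explicitly; np.linalg.solve
    factorizes it instead."""
    Y_kk = Y[np.ix_(keep, keep)]
    Y_ke = Y[np.ix_(keep, elim)]
    Y_ek = Y[np.ix_(elim, keep)]
    Y_ee = Y[np.ix_(elim, elim)]
    return Y_kk - Y_ke @ np.linalg.solve(Y_ee, Y_ek)

# Illustrative 4-bus Ybus: retain buses 0 and 1, eliminate buses 2 and 3
Y = np.array([[10-30j, -5+15j, -5+15j,  0    ],
              [-5+15j, 10-30j,  0,     -5+15j],
              [-5+15j,  0,     10-30j, -5+15j],
              [ 0,     -5+15j, -5+15j, 10-30j]])
print(kron_reduce(Y, keep=[0, 1], elim=[2, 3]))
```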

Ward Equivalents (Kron Reduction)
Prior to equivalencing, constant power injections are converted to equivalent current injections; the system is equivalenced, and then they are converted back to constant power injections
Tends to place large shunts at the boundary buses
This equivalencing process has no impact on the non-boundary study buses
Various versions of the approach are used, differing primarily in the handling of reactive power injections
The equivalent embeds information about the operating state at the time the equivalent was created

PowerWorld Example
Ward-type equivalents can be created in PowerWorld by going into the Edit Mode and selecting Tools, Equivalencing
Use Select the Buses to determine the buses in the equivalent
Use Create the Equivalent to actually create the equivalent
When making equivalents of large networks, the boundary buses tend to be joined by many high-impedance lines; these lines can be eliminated by setting the Max Per Unit for Equivalent Line field to a relatively low value (say 2.0 per unit)
Loads and generators are converted to shunts, equivalenced, then converted back

PowerWorld B7Flat_Eqv Example
In this example the B7Flat_Eqv case is reduced, eliminating buses 1, 3, and 4
The study system is then buses 2, 5, 6, and 7, with buses 2 and 5 the boundary buses
For ease of comparison the system is modeled unloaded

PowerWorld B7Flat_Eqv Example
[Figure: the original Ybus matrix]

PowerWorld B7Flat_Eqv Example
[Figure: the partitioned Ybus]
Note: Y_es = Y_se′ (the transpose) if there are no phase shifters

PowerWorld B7Flat_Eqv Example
Comparing the original and equivalent systems, the only modification is a change in the impedance between buses 2 and 5, modeled by adding an equivalent line

Contingency Analysis Application of Equivalencing
One common application of equivalencing is contingency analysis
Most contingencies have a rather limited effect
Much smaller equivalents can be created for each contingent case, giving rapid contingency screening
Contingencies that appear to have violations in screening can then be processed by more time-consuming but more accurate methods
W.F. Tinney and J.M. Bright, "Adaptive Reductions for Power System Equivalents," IEEE Trans. Power Systems, May 1987, pp. 351-359

New Applications in Equivalencing
Models in which the entire extent of the network is retained but the model size is greatly reduced; often used for economic studies
Mixed ac/dc solutions, possibly with an equivalent as well: the internal portion is modeled with a full ac power flow, more distant parts of the network are retained but modeled with a dc power flow, and the rest might be equivalenced
Attribute-preserving equivalents: retain characteristics other than just impedances, such as PTDFs; there is also new research looking at preserving line limits

Iterative Methods for Solving Ax = b
In the 1960s and 1970s, iterative methods for solving large sparse linear systems started to gain popularity
The interest arose from the development of new, efficient Krylov subspace iteration schemes that were in many cases more appropriate than general-purpose direct solution codes
Such schemes are gaining ground because they are easier to implement efficiently on high-performance computers than direct methods; GPUs can also be used for parallel computation

References
The good and still free book mentioned earlier on sparse matrices:
Y. Saad, Iterative Methods for Sparse Linear Systems, 2nd ed., 2002, available at www-users.cs.umn.edu/~saad/IterMethBook_2ndEd.pdf
Y. Saad, Numerical Methods for Large Eigenvalue Problems, 2nd ed., 2011, available for free at http://www-users.cs.umn.edu/~saad/eig_book_2ndEd.pdf
R.S. Varga, Matrix Iterative Analysis, Prentice Hall, Englewood Cliffs, NJ, 1962
D.M. Young, Iterative Solution of Large Linear Systems, Academic Press, New York, NY, 1971

Krylov Subspace Outline
Review of fields and vector spaces
Eigensystem basics
Definition of Krylov subspaces and the annihilating polynomial
Generic Krylov subspace solver
Steepest descent
Conjugate gradient
Arnoldi process

Basic Definitions: Fields
A field F is a set of elements for which the operations of addition, subtraction, multiplication, and division are defined
The following field axioms hold for any field F and arbitrary α, β, γ ∈ F:
Closure: α + β ∈ F and α · β ∈ F
Commutativity: α + β = β + α, α · β = β · α
Associativity: (α + β) + γ = α + (β + γ), (α · β) · γ = α · (β · γ)
Distributivity of multiplication: α · (β + γ) = (α · β) + (α · γ)

Basic Definitions: Fields
Existence and uniqueness of the null element 0: α + 0 = α and α · 0 = 0
Existence of the additive inverse: for every α ∈ F there exists a unique β ∈ F such that α + β = 0
Existence of the multiplicative inverse: for every α ∈ F with α ≠ 0, there exists an element γ ∈ F such that α · γ = 1

Vector Spaces
A vector space V over the field F is denoted by (V, F)
The space V is a set of vectors which satisfies the following axioms of addition and scalar multiplication:
Closure: for all x1, x2 ∈ V, x1 + x2 ∈ V
Commutativity of addition: x1 + x2 = x2 + x1
Associativity of addition: (x1 + x2) + x3 = x1 + (x2 + x3)
Identity element of addition: there exists an element 0 ∈ V such that for every x ∈ V, x + 0 = x
Inverse element of addition: for every x ∈ V there exists an element −x ∈ V such that x + (−x) = 0

Vector Spaces
Closure of scalar multiplication: for all x ∈ V and α ∈ F, α · x ∈ V
Identity element of scalar multiplication: the field identity 1 satisfies 1 · x = x
Associativity of scalar multiplication: α · (β · x) = (α · β) · x
Distributivity of scalar multiplication with respect to field addition: (α + β) · x = α · x + β · x
Distributivity of scalar multiplication with respect to vector addition: α · (x1 + x2) = α · x1 + α · x2

Linear Combination and Span
Consider the subset {xi ∈ V, i = 1, 2, …, n}, where the xi are arbitrary vectors in the vector space V
Corresponding to arbitrary scalars α1, α2, …, αn we can form the linear combination of vectors
    x = α1 x1 + α2 x2 + … + αn xn
The set of all linear combinations of x1, x2, …, xn is called the span of x1, x2, …, xn and is denoted by span{x1, x2, …, xn}

Linear Independence
A set of vectors x1, x2, …, xn in vector space V is linearly independent (l.i.) if and only if
    α1 x1 + α2 x2 + … + αn xn = 0 implies α1 = α2 = … = αn = 0
A criterion for linear independence of the set of vectors is related to the matrix
    X = [x1, x2, …, xn]
A necessary and sufficient condition for l.i. of this set is that X be of full column rank

Linear Independence
The maximum number n such that there exist n vectors in V that are l.i. is called the dimension of the vector space V
The vectors x1, x2, …, xn form a basis for the vector space V if and only if
    x1, x2, …, xn are l.i.
    x1, x2, …, xn span V (i.e., every vector in V can be expressed as a linear combination of x1, x2, …, xn)
A vector space V can have many bases; for example, for the ℝ² vector space one basis is (1,0) and (0,1), while another is (1,0) and (1,1)

Eigensystem Definitions
The scalar λ is an eigenvalue of the n by n matrix A if and only if Ax = λx for some x ≠ 0, where x is called the eigenvector corresponding to λ
The existence of the eigenvalue λ implies (A − λI)x = 0 for a nonzero x, so the matrix (A − λI) is singular

Eigensystem Definitions
The characteristic equation for determining λ is
    p(λ) = det(λI − A) = λⁿ + a_{n−1} λⁿ⁻¹ + … + a₁ λ + a₀ = 0
The function p(λ) is called the characteristic polynomial of A
Suppose that λ1, λ2, …, λn are the n distinct eigenvalues of the n by n matrix A, and let x1, x2, …, xn be the corresponding eigenvectors
The set formed by these eigenvectors is l.i.
When the eigenvalues of A are distinct, the modal matrix, defined by X = [x1, x2, …, xn], is nonsingular

Eigensystem Definitions
In this case, X satisfies the equation
    A X = X Λ
where
    Λ = diag(λ1, λ2, …, λn)

Diagonalizable Matrices
An n by n matrix A is said to be diagonalizable if there exists a nonsingular modal matrix X and a diagonal matrix Λ such that
    A = X Λ X⁻¹
It follows from the definition that
    A² = (X Λ X⁻¹)(X Λ X⁻¹) = X Λ² X⁻¹

Diagonalizable Matrices
Hence in general, for an arbitrary k,
    Aᵏ = X Λᵏ X⁻¹
The matrix X is sometimes referred to as a similarity transformation matrix
It follows that if A is diagonalizable, then any polynomial function of A is diagonalizable

Example
Given a 3 by 3 matrix A (shown on the slide), its eigenvalues are −1, 2, and 5
Its characteristic polynomial is therefore
    p(λ) = (λ + 1)(λ − 2)(λ − 5) = λ³ − 6λ² + 3λ + 10

Example
We can verify that A = X Λ X⁻¹
[Slide shows the numerical X, Λ, and X⁻¹ for this example]
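Since the slide's numerical matrices did not survive transcription, here is a hypothetical stand-in in Python: a 3 by 3 matrix constructed to have the stated eigenvalues −1, 2, and 5 (the eigenvector matrix X below is made up), used to verify A = X Λ X⁻¹ and Aᵏ = X Λᵏ X⁻¹.

```python
import numpy as np

Lam = np.diag([-1.0, 2.0, 5.0])      # eigenvalues from the slide
X = np.array([[1.0, 1.0, 0.0],       # made-up, linearly independent eigenvectors
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
A = X @ Lam @ np.linalg.inv(X)       # a matrix with those eigenvalues

print(np.sort(np.linalg.eigvals(A)))            # ~ [-1, 2, 5]
# A^3 computed directly agrees with X Lam^3 X^-1:
print(np.allclose(np.linalg.matrix_power(A, 3),
                  X @ np.linalg.matrix_power(Lam, 3) @ np.linalg.inv(X)))
```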

Cayley-Hamilton Theorem and Minimal Polynomial
The Cayley-Hamilton theorem states that every square matrix satisfies its own characteristic equation: p(A) = 0
The minimal polynomial of A is the monic polynomial q(λ) of minimum degree m such that q(A) = 0
The minimal polynomial and the characteristic polynomial are the same if A has n distinct eigenvalues

Cayley-Hamilton Theorem and Minimal Polynomial
This allows us to express A⁻¹ in terms of powers of A: since
    Aⁿ + a_{n−1} Aⁿ⁻¹ + … + a₁ A + a₀ I = 0
then, provided a₀ ≠ 0,
    A⁻¹ = −(1/a₀)(Aⁿ⁻¹ + a_{n−1} Aⁿ⁻² + … + a₁ I)
For the previous example, with p(λ) = λ³ − 6λ² + 3λ + 10, this gives A⁻¹ = −(A² − 6A + 3I)/10, which can be verified directly
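A quick numerical check of this formula, reusing the hypothetical matrix from the previous sketch (eigenvalues −1, 2, 5, so a₀ = 10):

```python
import numpy as np

# Hypothetical A from the previous sketch (eigenvalues -1, 2, 5)
Lam = np.diag([-1.0, 2.0, 5.0])
X = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
A = X @ Lam @ np.linalg.inv(X)

# p(l) = (l+1)(l-2)(l-5) = l^3 - 6 l^2 + 3 l + 10
a2, a1, a0 = -6.0, 3.0, 10.0
# Cayley-Hamilton: A^3 + a2 A^2 + a1 A + a0 I = 0
# => inv(A) = -(1/a0)(A^2 + a2 A + a1 I), valid since a0 != 0
A_inv = -(A @ A + a2 * A + a1 * np.eye(3)) / a0
print(np.allclose(A_inv, np.linalg.inv(A)))   # True
```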

Solution of Linear Equations
As covered previously, the solution of a dense (i.e., non-sparse) system of n equations is O(n³)
Even for a sparse A, the direct solution of linear equations can be computationally expensive, and the previous techniques are not easy to parallelize
We next present an alternative, iterative approach to obtain the solution using Krylov subspace based methods
Builds on the idea that we can express x = A⁻¹b

Definition of a Krylov Subspace
Given a matrix A and a vector v, the ith order Krylov subspace is defined as
    K_i(v, A) = span{v, Av, A²v, …, A^{i−1}v}
Clearly, i cannot grow arbitrarily large; in fact, for a matrix A of rank n, i ≤ n
For a specified matrix A and vector v, the largest useful value of i is given by the order of the annihilating polynomial
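A small numerical illustration of this bound: with a made-up random A and v, the rank of the Krylov matrix K_i = [v, Av, …, A^{i−1}v] stops growing once i reaches the order of the annihilating polynomial (at most n).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))   # made-up test matrix
v = rng.standard_normal(n)

K = v[:, None]                    # K_1 = [v]
for i in range(2, n + 2):         # grow to K_{n+1}
    K = np.column_stack([K, A @ K[:, -1]])
    print(f"i = {i}, rank(K_i) = {np.linalg.matrix_rank(K)}")
# the rank saturates at the order of the annihilating polynomial
```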

Generic Krylov Subspace Solver
The following is a generic Krylov subspace solver method for solving Ax = b using only matrix-vector multiplies
Step 1: Start with an initial guess x(0) and some predefined error tolerance ε > 0; compute the residual r(0) = b − A x(0); set i = 0
Step 2: While ||r(i)|| ≥ ε do
    (a) i := i + 1
    (b) get K_i(r(0), A)
    (c) find x(i) ∈ {x(0) + K_i(r(0), A)} to minimize ||r(i)||
Stop

Krylov Subspace Solver
Note that no calculations are performed in Step 2 once i becomes greater than the order of the annihilating polynomial
The Krylov subspace methods differ from each other in
    the construction scheme of the Krylov subspace in Step 2(b)
    the residual minimization criterion used in Step 2(c)
A common initial guess is x(0) = 0, giving r(0) = b − A x(0) = b
One concrete instance (conjugate gradient) is sketched below
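As a concrete instance of the generic scheme, here is a minimal conjugate gradient sketch (a standard Krylov method, shown here under the assumption that A is symmetric positive definite; the 2 by 2 test matrix is made up). Note that A enters only through matrix-vector products, and each x(i) lies in x(0) + K_i(r(0), A).

```python
import numpy as np

def conjugate_gradient(A, b, x0=None, eps=1e-10, max_iter=None):
    """Minimal CG sketch for SPD A; not production code."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float)
    r = b - A @ x                  # initial residual r(0)
    p = r.copy()                   # first search direction
    rs = r @ r
    for _ in range(max_iter or n):
        if np.sqrt(rs) < eps:      # Step 2 stopping test
            break
        Ap = A @ p                 # the only use of A: a mat-vec product
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p  # next direction, stays in the Krylov space
        rs = rs_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])   # made-up SPD test matrix
b = np.array([1.0, 2.0])
print(conjugate_gradient(A, b))           # ~ [0.0909, 0.6364]
```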

Krylov Subspace Solver
Every solver involves the A matrix only in matrix-vector products: Aⁱ r(0), i = 1, 2, …
The methods can strive to effectively exploit the spectral structure of A with the aim of making the overall procedure computationally efficient
To make this approach computationally efficient, it is carried out using the spectral information of A; for this purpose we order the eigenvalues of A according to their absolute values, with
    |λ1| ≥ |λ2| ≥ … ≥ |λn|