
1 ECE 530 – Analysis Techniques for Large-Scale Electrical Systems
Lecture 24: Power System Equivalents; Krylov Subspace Methods
Prof. Hao Zhu
Dept. of Electrical and Computer Engineering
University of Illinois at Urbana-Champaign
11/20/2014

2 Announcements
No class on Thursday Dec 4

3 Ward Equivalents (Kron Reduction)
The equivalent is performed by a reduction of the bus admittance matrix
Done by a partial factorization of the Ybus
Computationally efficient: the Yee matrix is never explicitly inverted!
Similar to what is done when fills are added, with new equivalent lines eventually joining the boundary buses (a minimal sketch follows below)
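As a concrete illustration of the reduction just described, here is a minimal dense-matrix sketch in Python; the function name, the study/external index sets, and the dense solve are illustrative assumptions (a production tool would use a sparse partial factorization and, as noted above, never form the inverse of Yee explicitly):

```python
import numpy as np

def kron_reduce(Y, study, external):
    """Eliminate the external buses from Ybus via the Schur complement:
    Y_eq = Y_ss - Y_se * inv(Y_ee) * Y_es (Ward/Kron reduction)."""
    Y_ss = Y[np.ix_(study, study)]
    Y_se = Y[np.ix_(study, external)]
    Y_es = Y[np.ix_(external, study)]
    Y_ee = Y[np.ix_(external, external)]
    # Solve Y_ee Z = Y_es instead of inverting Y_ee, mirroring the
    # partial-factorization approach used in practice
    return Y_ss - Y_se @ np.linalg.solve(Y_ee, Y_es)

# Hypothetical usage: keep buses {2, 5, 6, 7} (0-indexed here), drop the rest
# Y_eq = kron_reduce(Ybus, study=[1, 4, 5, 6], external=[0, 2, 3])
```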

4 Ward Equivalents (Kron Reduction)
Prior to equivalencing, constant power injections are converted to equivalent current injections; the system is equivalenced; then they are converted back to constant power injections (a small numerical sketch follows below)
Tends to place large shunts at the boundary buses
This equivalencing process has no impact on the non-boundary study buses
Various versions of the approach are used, primarily differing in the handling of reactive power injections
The equivalent embeds information about the operating state at the time the equivalent was created
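A small numerical sketch of the injection conversion mentioned above; the per-unit power and voltage values are made up for illustration:

```python
import numpy as np

# Hypothetical per-unit complex power injections S and solved voltages V
S = np.array([1.00 + 0.30j, -0.80 - 0.20j])
V = np.array([1.02 * np.exp(1j * 0.05), 0.99 + 0.0j])

# S = V conj(I)  =>  I = conj(S / V): fix these currents, build the
# equivalent, then convert back to constant power afterwards
I = np.conj(S / V)
S_back = V * np.conj(I)
print(np.allclose(S_back, S))   # True at the original operating point
```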

5 PowerWorld Example
Ward-type equivalents can be created in PowerWorld by going into the Edit Mode and selecting Tools, Equivalencing
Use Select the Buses to determine the buses in the equivalent
Use Create the Equivalent to actually create the equivalent
When making equivalents of large networks the boundary buses tend to be joined by many high-impedance lines; these lines can be eliminated by setting the Max Per Unit for Equivalent Line field to a relatively low value (say 2.0 per unit)
Loads and generators are converted to shunts, equivalenced, then converted back

6 PowerWorld B7Flat_Eqv Example
In this example the B7Flat_Eqv case is reduced, eliminating buses 1, 3 and 4. The study system is then buses 2, 5, 6 and 7, with buses 2 and 5 as the boundary buses
For ease of comparison the system is modeled unloaded

7 PowerWorld B7Flat_Eqv Example
Original Ybus

8 PowerWorld B7Flat_Eqv Example
Note that Yes = Yse' (the transpose) if there are no phase shifters

9 PowerWorld B7Flat_Eqv Example
Comparing the original and the equivalent, the only modification was a change in the impedance between buses 2 and 5, modeled by adding an equivalent line

10 Contingency Analysis Application of Equivalencing
One common application of equivalencing is contingency analysis
Most contingencies have a rather limited effect
Much smaller equivalents can be created for each contingent case, giving rapid contingency screening
Contingencies that appear to have violations in contingency screening can then be processed by more time-consuming but more accurate methods
W.F. Tinney, J.M. Bright, "Adaptive Reductions for Power System Equivalents," IEEE Trans. Power Systems, May 1987, pp.

11 New Applications in Equivalencing
Models in which the entire extent of the network is retained but the model size is greatly reduced; often used for economic studies
Mixed ac/dc solutions, possibly with an equivalent as well: the internal portion is modeled with a full ac power flow, more distant parts of the network are retained but modeled with a dc power flow, and the rest might be equivalenced
Attribute-preserving equivalents: retain characteristics other than just impedances, such as PTDFs; new research is also looking at preserving line limits

12 Iterative Methods for Solving Ax=b
In the 1960s and 1970s, iterative methods for solving large sparse linear systems started to gain popularity
The interest arose from the development of new, efficient Krylov subspace iteration schemes that were in many cases more appropriate than general-purpose direct solution codes
Such schemes are gaining ground because they are easier to implement efficiently on high-performance computers than direct methods
GPUs can also be used for parallel computation

13 References
The good and still free book mentioned earlier on sparse matrices is Iterative Methods for Sparse Linear Systems, by Yousef Saad, 2002, at www-users.cs.umn.edu/~saad/IterMethBook_2ndEd.pdf
Y. Saad, "Numerical Methods for Large Eigenvalue Problems," 2011, also available for free online
R.S. Varga, "Matrix Iterative Analysis," Prentice Hall, Englewood Cliffs, NJ, 1962
D.M. Young, "Iterative Solution of Large Linear Systems," Academic Press, New York, NY, 1971

14 Krylov Subspace Outline
Review of fields and vector spaces
Eigensystem basics
Definition of Krylov subspaces and the annihilating polynomial
Generic Krylov subspace solver
Steepest descent
Conjugate gradient
Arnoldi process

15 Basic Definitions: Fields
A field F is a set of elements for which the operations of addition, subtraction, multiplication, and division are defined
The following field axioms hold for any field F and arbitrary α, β, γ ∈ F
Closure: α + β ∈ F and α · β ∈ F
Commutativity: α + β = β + α, α · β = β · α
Associativity: (α + β) + γ = α + (β + γ), (α · β) · γ = α · (β · γ)
Distributivity of multiplication: α · (β + γ) = (α · β) + (α · γ)

16 Basic Definitions: Fields
Existence and uniqueness of the null element 0: α + 0 = α and α · 0 = 0
Existence of the additive inverse: for every α ∈ F there exists a unique β ∈ F such that α + β = 0
Existence of the multiplicative inverse: for all α ∈ F with α ≠ 0, there exists an element γ ∈ F such that α · γ = 1

17 Vector Spaces
A vector space V over the field F is denoted by (V, F)
The space V is a set of vectors which satisfies the following axioms of addition and scalar multiplication:
Closure: for all x1, x2 ∈ V, x1 + x2 ∈ V
Commutativity of addition: x1 + x2 = x2 + x1
Associativity of addition: (x1 + x2) + x3 = x1 + (x2 + x3)
Identity element of addition: there exists an element 0 ∈ V such that for every x ∈ V, x + 0 = x
Inverse element of addition: for every x ∈ V there exists an element −x ∈ V such that x + (−x) = 0

18 Vector Spaces
Scalar multiplication: for all x ∈ V and α ∈ F, α · x ∈ V
Identity element of scalar multiplication: there exists an element 1 ∈ F such that 1 · x = x
Associativity of scalar multiplication: α · (β · x) = (α · β) · x
Distributivity of scalar multiplication with respect to field addition: (α + β) · x = α · x + β · x
Distributivity of scalar multiplication with respect to vector addition: α · (x1 + x2) = α · x1 + α · x2

19 Linear Combination and Span
Consider the subset {xi ∈ V, i = 1, 2, …, n} with the elements xi, which are arbitrary vectors in the vector space V
Corresponding to the arbitrary scalars α1, α2, …, αn we can form the linear combination of vectors
y = α1 x1 + α2 x2 + … + αn xn
The set of all linear combinations of x1, x2, …, xn is called the span of x1, x2, …, xn and is denoted by span{x1, x2, …, xn}

20 Linear Independence
A set of vectors x1, x2, …, xn in a vector space V is linearly independent (l.i.) if and only if
α1 x1 + α2 x2 + … + αn xn = 0 implies α1 = α2 = … = αn = 0
A criterion for linear independence of the set of vectors is related to the matrix X = [x1, x2, …, xn]
A necessary and sufficient condition for l.i. of this set is that X be of full column rank (a numerical check follows below)
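A quick numerical check of the full-column-rank criterion, using a hypothetical set of vectors:

```python
import numpy as np

# x3 = x1 + x2, so the set {x1, x2, x3} is linearly dependent
x1 = np.array([1., 0., 0.])
x2 = np.array([1., 1., 0.])
x3 = np.array([2., 1., 0.])
X = np.column_stack([x1, x2, x3])

print(np.linalg.matrix_rank(X))         # 2 < 3 columns: not full column rank
print(np.linalg.matrix_rank(X[:, :2]))  # 2 = 2 columns: x1, x2 are l.i.
```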

21 Linear Independence
The maximum number n such that there exist n vectors in V that are l.i. is called the dimension of the vector space V
The vectors x1, x2, …, xn form a basis for the vector space V if and only if
x1, x2, …, xn are l.i.
x1, x2, …, xn span V (i.e., every vector in V can be expressed as a linear combination of x1, x2, …, xn)
A vector space V can have many bases
For example, for the ℝ² vector space one basis is (1,0) and (0,1), while another is (1,0) and (1,1)
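The basis claim can be checked numerically: expressing a vector of ℝ² in the second basis above amounts to solving a small linear system (the particular vector chosen here is arbitrary):

```python
import numpy as np

# Coordinates of v = (3, 2) in the basis {(1,0), (1,1)}: solve B c = v
B = np.column_stack([[1., 0.], [1., 1.]])
v = np.array([3., 2.])
c = np.linalg.solve(B, v)
print(c)   # [1. 2.], i.e. v = 1*(1,0) + 2*(1,1)
```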

22 Eigensystem Definitions
The scalar λ is an eigenvalue of the n by n matrix A if and only if Ax = λx for some x ≠ 0, where x is called the eigenvector corresponding to λ
The existence of the eigenvalue λ implies (A − λI)x = 0, so the matrix (A − λI) is singular

23 Eigensystem Definitions
The characteristic equation for determining λ is
p(λ) = det(λI − A) = λ^n + a_{n-1}λ^{n-1} + … + a_1 λ + a_0 = 0
The function p(λ) is called the characteristic polynomial of A
Suppose that λ1, λ2, …, λn are the n distinct eigenvalues of the n by n matrix A, and let x1, x2, …, xn be the corresponding eigenvectors
The set formed by these eigenvectors is l.i.
When the eigenvalues of A are distinct, the modal matrix, defined by X = [x1, x2, …, xn], is nonsingular

24 Eigensystem Definitions
In this case, X satisfies the equation A X = X Λ, where Λ = diag(λ1, λ2, …, λn)

25 Diagonalizable Matrices
An n by n matrix A is said to be diagonalizable if there exists a nonsingular modal matrix X and a diagonal matrix Λ such that X⁻¹ A X = Λ
It follows from the definition that A = X Λ X⁻¹, and hence A² = (X Λ X⁻¹)(X Λ X⁻¹) = X Λ² X⁻¹

26 Diagonalizable Matrices
Hence in general, for an arbitrary k, A^k = X Λ^k X⁻¹
The matrix X is sometimes referred to as a similarity transformation matrix
It follows that if A is diagonalizable, then any polynomial function of A is diagonalizable

27 Example
Given a 3 by 3 matrix A (the matrix and its modal matrix X were shown on the original slide), its eigenvalues are −1, 2 and 5
Its characteristic polynomial is therefore
p(λ) = (λ + 1)(λ − 2)(λ − 5) = λ³ − 6λ² + 3λ + 10

28 Example
We can verify that A = X Λ X⁻¹ (a numerical check along these lines is sketched below)
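Since the transcript does not reproduce the slide's 3 by 3 matrix, the sketch below builds an illustrative A with the same eigenvalues −1, 2, 5 (the modal matrix X chosen here is arbitrary) and verifies the decomposition and the characteristic polynomial numerically:

```python
import numpy as np

Lam = np.diag([-1.0, 2.0, 5.0])
X = np.array([[1., 1., 0.],
              [0., 1., 1.],
              [1., 0., 1.]])           # any nonsingular modal matrix works
A = X @ Lam @ np.linalg.inv(X)         # so A = X Lam X^{-1} by construction

w, _ = np.linalg.eig(A)
print(np.sort(w.real))                 # [-1.  2.  5.]
print(np.poly(A).round(6))             # [ 1. -6.  3. 10.] = l^3 - 6 l^2 + 3 l + 10

# A^k = X Lam^k X^{-1} for arbitrary k, e.g. k = 3
print(np.allclose(np.linalg.matrix_power(A, 3),
                  X @ np.linalg.matrix_power(Lam, 3) @ np.linalg.inv(X)))  # True
```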

29 Cayley-Hamilton Theorem and Minimum Polynomial
The Cayley-Hamilton theorem states that every square matrix satisfies its own characteristic equation: p(A) = A^n + a_{n-1}A^{n-1} + … + a_1 A + a_0 I = 0
The minimal polynomial is the polynomial φ(λ) of minimum degree m such that φ(A) = 0
The minimal polynomial and the characteristic polynomial are the same if A has n distinct eigenvalues

30 Cayley-Hamilton Theorem and Minimum Polynomial
This allows us to express A⁻¹ in terms of powers of A: multiplying p(A) = 0 through by A⁻¹ and rearranging gives A⁻¹ = −(1/a_0)(A^{n-1} + a_{n-1}A^{n-2} + … + a_1 I)
For the previous example, with p(λ) = λ³ − 6λ² + 3λ + 10, this gives A⁻¹ = −(1/10)(A² − 6A + 3I)
Verify (a numerical check follows below)
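A numerical check of the Cayley-Hamilton identity and the resulting inverse formula, rebuilding the illustrative matrix with eigenvalues −1, 2, 5 used above:

```python
import numpy as np

# Illustrative A with eigenvalues -1, 2, 5, so p(l) = l^3 - 6 l^2 + 3 l + 10
X = np.array([[1., 1., 0.], [0., 1., 1.], [1., 0., 1.]])
A = X @ np.diag([-1., 2., 5.]) @ np.linalg.inv(X)
I3 = np.eye(3)

# Cayley-Hamilton: p(A) = A^3 - 6 A^2 + 3 A + 10 I = 0
print(np.allclose(A @ A @ A - 6 * (A @ A) + 3 * A + 10 * I3, 0.0))  # True

# Hence A^{-1} = -(1/10)(A^2 - 6 A + 3 I)
A_inv = -(A @ A - 6 * A + 3 * I3) / 10.0
print(np.allclose(A_inv, np.linalg.inv(A)))                          # True
```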

31 Solution of Linear Equations
As covered previously, the solution of a dense (i.e., non-sparse) system of n equations is O(n³)
Even for a sparse A the direct solution of linear equations can be computationally expensive, and the previous techniques are not easy to parallelize
We next present an alternative, iterative approach that obtains the solution through Krylov subspace based methods
Builds on the idea that we can express x = A⁻¹b; by Cayley-Hamilton, A⁻¹ is a polynomial in A, so x lies in the span of b, Ab, A²b, …

32 Definition of a Krylov Subspace
Given a matrix A and a vector v, the ith order Krylov subspace is defined as
K_i(v, A) = span{v, Av, A²v, …, A^{i-1}v}
Clearly, i cannot grow arbitrarily large; in fact, for a matrix A of rank n, i ≤ n
For a specified matrix A and a vector v, the largest value of i is given by the order of the annihilating polynomial (a sketch of building such a basis follows below)
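A minimal sketch of building the Krylov matrix [v, Av, …, A^{i-1}v]; practical solvers orthogonalize as they go (e.g., the Arnoldi process listed in the outline), since the raw power basis quickly becomes numerically ill-conditioned:

```python
import numpy as np

def krylov_basis(A, v, i):
    """Columns are v, Av, A^2 v, ..., A^{i-1} v."""
    K = np.empty((v.size, i))
    K[:, 0] = v
    for j in range(1, i):
        K[:, j] = A @ K[:, j - 1]
    return K

# The subspace stops growing at the order of the annihilating polynomial:
A = np.diag([1., 2., 3.])
v = np.ones(3)
print(np.linalg.matrix_rank(krylov_basis(A, v, 5)))  # 3, even though i = 5
```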

33 Generic Krylov Subspace Solver
The following is a generic Krylov subspace solver method for solving Ax = b using only matrix-vector multiplies
Step 1: Start with an initial guess x(0) and some predefined error tolerance ε > 0; compute the residual r(0) = b − A x(0); set i = 0
Step 2: While ||r(i)|| ≥ ε Do
(a) i := i + 1
(b) get K_i(r(0), A)
(c) find x(i) ∈ {x(0) + K_i(r(0), A)} to minimize ||r(i)||
Stop
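One concrete instance of this generic scheme is the conjugate gradient method from the outline; a minimal sketch for symmetric positive definite A follows. Note that CG picks x(i) from x(0) + K_i(r(0), A) by minimizing the A-norm of the error rather than ||r(i)|| itself, a closely related optimality criterion:

```python
import numpy as np

def conjugate_gradient(A, b, x0=None, tol=1e-8, maxiter=None):
    """Krylov solver for SPD A; touches A only via matrix-vector products."""
    x = np.zeros(b.size) if x0 is None else x0.astype(float).copy()
    r = b - A @ x                  # residual r(0)
    p = r.copy()                   # initial search direction
    rs = r @ r
    for _ in range(maxiter or b.size):
        if np.sqrt(rs) <= tol:     # the Step 2 stopping test on ||r||
            break
        Ap = A @ p
        alpha = rs / (p @ Ap)      # exact line search along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p  # next direction, A-conjugate to the others
        rs = rs_new
    return x

A = np.array([[4., 1.], [1., 3.]])
b = np.array([1., 2.])
print(conjugate_gradient(A, b))    # ~[0.0909 0.6364], the exact solution A^{-1} b
```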

34 Krylov Subspace Solver
Note that no calculations are performed in Step 2 once i becomes greater than the order of the annihilating polynomial
The Krylov subspace methods differ from each other in:
the construction scheme of the Krylov subspace in Step 2(b)
the residual minimization criterion used in Step 2(c)
A common initial guess is x(0) = 0, giving r(0) = b − A x(0) = b

35 Krylov Subspace Solver
Every solver involves the A matrix only in matrix-vector products: A^i r(0), i = 1, 2, …
The methods strive to effectively exploit the spectral structure of A, with the aim of making the overall procedure computationally efficient
To do so, the spectral information of A is exploited; for this purpose we order the eigenvalues of A according to their absolute values, with |λ1| ≥ |λ2| ≥ … ≥ |λn|
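The dependence on this eigenvalue ordering can be seen in the simplest setting: the repeated products A^i v that every Krylov method computes are increasingly dominated by the eigenvalue of largest magnitude. A small power-iteration sketch (with a made-up symmetric matrix) makes this visible:

```python
import numpy as np

A = np.array([[2., 1.],
              [1., 2.]])            # eigenvalues 3 and 1
v = np.array([1., 0.])
for _ in range(50):
    v = A @ v                       # the A^i v products of the text
    v /= np.linalg.norm(v)          # normalize to avoid overflow
print(v @ (A @ v))                  # Rayleigh quotient -> 3.0 = |lambda_1|
```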

