ECE 530 – Analysis Techniques for Large-Scale Electrical Systems. Prof. Hao Zhu, Dept. of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign.

Lecture 21: Least-Squares Method; State Estimation (11/6/)

Announcements

HW 6 is due Tuesday, November 11.

The Least Squares Problem

With the previous background, we proceed to the typical schemes used to solve least squares problems, paying adequate attention to the numerical aspects of the solution approach.
If the matrix is full, then often the best approach is to use a singular value decomposition (SVD) to form a matrix known as the pseudo-inverse; we'll cover this later, after first considering the sparse problem.
We first review some fundamental building blocks, then present the key results useful for the sparse matrices common in state estimation.

Power System State Estimation

Power system state estimation (SE) is covered in ECE 573, so we'll just touch on it here; it is a key least squares application.
The overall goal is to come up with a power flow model for the present "state" of the power system, based on the actual system measurements.
SE assumes the topology and parameters of the transmission network are mostly known.
Measurements come from SCADA and, increasingly, from PMUs.

Power System State Estimation

A good introductory reference is Power Generation, Operation and Control by Wood, Wollenberg and Sheble, 3rd Edition.
The problem can be formulated in a nonlinear, weighted least squares form as

J(x) = \sum_{i=1}^{m} \left( \frac{z_i - f_i(x)}{\sigma_i} \right)^2

where J(x) is the cost function, x are the state variables (primarily bus voltage magnitudes and angles), the {z_i} are the m measurements, f(x) relates the states to the measurements, and \sigma_i is the assumed standard deviation of measurement i.
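As a small illustration of this cost function, here is a minimal sketch in Python (NumPy); the measurement values, weights, and measurement function here are purely hypothetical, chosen only to show the structure of J(x):

```python
import numpy as np

def wls_cost(x, z, f, sigma):
    """Weighted least-squares cost J(x) = sum_i ((z_i - f_i(x)) / sigma_i)^2."""
    r = (z - f(x)) / sigma
    return float(r @ r)

# Hypothetical example: two measurements of a two-state model
z = np.array([1.0, 2.0])
sigma = np.array([1.0, 2.0])
f = lambda x: np.array([x[0], x[0] + x[1]])

J_exact = wls_cost(np.array([1.0, 1.0]), z, f, sigma)  # model matches data: J = 0
J_off = wls_cost(np.array([0.0, 0.0]), z, f, sigma)    # mismatched states: J > 0
```

Note how the \sigma_i weighting means an error of 2 in the second (less trusted) measurement contributes the same to J as an error of 1 in the first.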

Measurement Example

Assume we measure the real and reactive power flowing into one end of a transmission line (series reactance x, shunt elements neglected); then the z_i - f_i(x) functions correspond to

z_1 - f_1(x) = P_{12}^{meas} - \frac{V_1 V_2}{x} \sin(\theta_1 - \theta_2)
z_2 - f_2(x) = Q_{12}^{meas} - \frac{V_1^2 - V_1 V_2 \cos(\theta_1 - \theta_2)}{x}

Two measurements for four unknowns (V_1, V_2, \theta_1, \theta_2).
Other measurements, such as the flow at the other end, power injections, and voltage magnitudes, add redundancy.
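These standard lossless-line flow equations can be sketched as a small helper (the function name `line_pq` is ours, not from the slides):

```python
import numpy as np

def line_pq(V1, V2, th1, th2, x):
    """P and Q flowing into a lossless line of series reactance x (per unit)."""
    P = V1 * V2 * np.sin(th1 - th2) / x
    Q = (V1 ** 2 - V1 * V2 * np.cos(th1 - th2)) / x
    return P, Q

# Flat conditions: no angle difference means no real or reactive flow
P0, Q0 = line_pq(1.0, 1.0, 0.0, 0.0, 0.1)

# A small angle difference drives real power across the line
P1, Q1 = line_pq(1.0, 1.0, 0.1, 0.0, 0.1)
```

For small angle differences P is roughly (\theta_1 - \theta_2)/x, which is the familiar DC power flow approximation.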

Assumed Error

Hence the goal is to decrease the error between the measurements and the assumed model states x.
The \sigma_i term weighs the various measurements, recognizing that they can have vastly different assumed errors.
Measurement error is assumed Gaussian; whether it actually is, is another question; outliers (bad measurements) are often removed.

SE Iterative Solution Algorithm

Solved by sequential linearization.
Calculate the gradient of J(x):

\nabla J(x) = -2 H(x)^T R^{-1} (z - f(x))

where H(x) = \partial f / \partial x is the measurement Jacobian and R = diag(\sigma_1^2, \ldots, \sigma_m^2).

SE Iterative Solution Algorithm

Use Newton's method to solve for the x at which the gradient is zero:

x^{k+1} = x^k + (H^T R^{-1} H)^{-1} H^T R^{-1} (z - f(x^k))

This is exactly the least-squares form developed earlier, with H^T R^{-1} H an n by n matrix. This could be solved with Gaussian elimination, but that isn't preferred because of numerical issues.
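The iteration above can be sketched as follows; this is a generic Gauss-Newton weighted least squares loop, and the linear measurement model at the bottom is a hypothetical stand-in (for a linear model the iteration converges in one step):

```python
import numpy as np

def gauss_newton_wls(f, H, z, R, x0, tol=1e-8, max_iter=20):
    """Iterate x <- x + (H^T R^-1 H)^-1 H^T R^-1 (z - f(x)) until dx is small."""
    x = x0.astype(float).copy()
    Rinv = np.linalg.inv(R)
    for _ in range(max_iter):
        Hx = H(x)
        dx = np.linalg.solve(Hx.T @ Rinv @ Hx, Hx.T @ Rinv @ (z - f(x)))
        x += dx
        if np.linalg.norm(dx) < tol:
            break
    return x

# Hypothetical linear model: 3 measurements of 2 states, z = A x + noise
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
f = lambda x: A @ x
H = lambda x: A                       # Jacobian is constant for a linear model
z = np.array([1.0, 2.0, 3.1])
R = np.diag([0.01, 0.01, 0.04])       # assumed measurement variances
x_hat = gauss_newton_wls(f, H, z, R, np.zeros(2))
```

In a real state estimator f and H would be the nonlinear power flow measurement functions and their Jacobian, recomputed at each iterate.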

Example: Two Bus Case

Assume a two bus case with a generator supplying a load through a single line with x = 0.1 pu. Assume measurements of the P/Q flow on both ends of the line (into the line positive), and the voltage magnitude at both the generator and the load end. So B_{12} = B_{21} = 1/x = 10.
We need to assume a reference angle unless directly measuring \theta.

Example: Two Bus Case

We assume an angle reference of \theta_1 = 0.

Example: Two Bus Case

With a flat start guess (all voltage magnitudes 1.0 pu, all angles 0) we get the first iterate.


QR Factorization

QR factorization is preferred over Gaussian elimination, since it better handles ill-conditioned matrices, and it can be used with sparse matrices.
We first split out the R^{-1} matrix, working with the weighted Jacobian H' = R^{-1/2} H.
QR factorization represents the m by n matrix H' as

H' = Q U

with Q an m by m orthonormal matrix and U an upper triangular matrix (most books use Q R, but we use U to avoid confusion with the previous R).

QR Factorization

We then have

H'^T H' x = H'^T z'  \Rightarrow  U^T Q^T Q U x = U^T Q^T z'

But since Q is an orthonormal matrix,

Q^T Q = I

Hence we have

U^T U x = U^T Q^T z'

And

U x = Q^T z'

which can be solved by back substitution, since U is upper triangular.
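A minimal NumPy sketch of this solve, using a random matrix as a stand-in for the weighted Jacobian H' and weighted measurements z' (both hypothetical here); `np.linalg.qr` returns the reduced factorization, which is all that is needed:

```python
import numpy as np

rng = np.random.default_rng(0)
Hp = rng.normal(size=(6, 3))   # stand-in for the 6x3 weighted Jacobian H'
zp = rng.normal(size=6)        # stand-in for the weighted measurements z'

Q, U = np.linalg.qr(Hp)        # reduced QR: Q is 6x3 with Q^T Q = I, U is 3x3
x_qr = np.linalg.solve(U, Q.T @ zp)          # solve U x = Q^T z'

# Same answer as the normal equations, but without forming H'^T H'
x_ne = np.linalg.solve(Hp.T @ Hp, Hp.T @ zp)
```

Avoiding the explicit product H'^T H' is the numerical payoff: forming it squares the condition number, while the QR route works with H' directly.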

QR Factorization

Next, we discuss the QR factorization algorithm to factor any matrix A into an m by m orthonormal matrix Q and an m by n upper triangular matrix U (usually called R).
Several methods are available, including the Householder method and the Givens method; Givens is preferred when dealing with sparse matrices.
One good reference is Gene H. Golub and Charles F. Van Loan, "Matrix Computations," second edition, Johns Hopkins University Press.

Givens Algorithm

The Givens algorithm works by pre-multiplying the desired matrix (A here) by a series of matrices and their transposes, starting with G_1 G_1^T.
G_k is the identity matrix modified to have two non-one values on the diagonal, both set to c = \cos\theta, and two non-zero off-diagonal elements, one set to -s = -\sin\theta and one to s = \sin\theta.
Each G_k is an orthonormal matrix, so pre-multiplying by G_k G_k^T = I ensures the final product is equal to A.
The G_k values are chosen to zero out particular elements in the lower triangle of A.

Givens Algorithm

The algorithm proceeds column by column, sequentially zeroing out elements in the lower triangle of A, starting at the bottom of each column.


Givens G Matrix

The orthogonal G(i, j, \theta) matrix is the identity except in rows and columns i and j:

G(i,j,\theta) =
\begin{bmatrix}
1 &        &        &        &   \\
  & c      & \cdots & s      &   \\
  & \vdots & \ddots & \vdots &   \\
  & -s     & \cdots & c      &   \\
  &        &        &        & 1
\end{bmatrix}

with c = \cos\theta in positions (i,i) and (j,j), s = \sin\theta in position (i,j), and -s in position (j,i).
Premultiplication by G(i,j,\theta)^T is a rotation by \theta radians in the (i, j) coordinate plane.
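The column-by-column elimination described above can be sketched directly; this is our own minimal implementation (dense, for clarity, whereas a production SE solver would exploit sparsity), with rotations applied to adjacent row pairs from the bottom of each column upward:

```python
import numpy as np

def givens_cs(a, b):
    """Choose c, s so the rotation [[c, s], [-s, c]] maps (a, b) to (r, 0)."""
    r = np.hypot(a, b)
    return (1.0, 0.0) if r == 0.0 else (a / r, b / r)

def givens_qr(A):
    """QR factorization A = Q U via Givens rotations on the lower triangle."""
    m, n = A.shape
    U = A.astype(float).copy()
    Q = np.eye(m)
    for j in range(n):
        for i in range(m - 1, j, -1):          # bottom of column j upward
            c, s = givens_cs(U[i - 1, j], U[i, j])
            # Rotate rows i-1 and i of U, zeroing U[i, j] ...
            U[[i - 1, i], :] = [c * U[i - 1, :] + s * U[i, :],
                                -s * U[i - 1, :] + c * U[i, :]]
            # ... and accumulate the transposed rotation into Q's columns
            Q[:, [i - 1, i]] = np.column_stack(
                [c * Q[:, i - 1] + s * Q[:, i],
                 -s * Q[:, i - 1] + c * Q[:, i]])
    return Q, U

# The first rotation of the small example below: a = 1, b = 2
c, s = givens_cs(1.0, 2.0)

A = np.array([[1.0, 2.0], [2.0, 1.0], [1.0, 1.0]])  # hypothetical 3x2 test matrix
Q, U = givens_qr(A)
```

Each rotation touches only two rows, which is what makes Givens attractive for sparse matrices: rows with no overlapping non-zeros are never disturbed.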

Small Givens Example

First we zero out A[2,1]: with a = 1 and b = 2, we get s = 0.894, c = 0.447.

Small Givens Example

Next we zero out A[3,2]: with a = 1.7889 and b = 1, we get c = 0.8729, s = 0.4879, which completes the factorization.
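The c and s values in this example can be checked directly from the definitions c = a/r, s = b/r with r = sqrt(a^2 + b^2) (a quick arithmetic check, nothing more):

```python
import numpy as np

def cs(a, b):
    r = np.hypot(a, b)
    return a / r, b / r

c1, s1 = cs(1.0, 2.0)       # first rotation: zeroing A[2,1]
c2, s2 = cs(1.7889, 1.0)    # second rotation: zeroing A[3,2]
```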

Givens Method for SE Example

Starting with the H matrix, we form the weighted matrix H'.
To zero out H'[5,1] we have b = 100, a = -1000, giving c = 0.995 and s = -0.0995.

Start of Givens for SE Example

This gives the partially reduced matrix. The next rotation would zero out element H'[4,1], continuing until all the elements in the lower triangle have been reduced to zero.

Givens Comments

For a full matrix, Givens is O(mn^2), since each of the O(nm) elements in the lower triangle needs to be zeroed, and each rotation is O(n).
Computation can be drastically reduced for a sparse matrix, since we only need to zero out the elements that are initially non-zero, plus any that become non-zero (i.e., the fills); also, for each multiply we only need to deal with the non-zeros in the impacted row.
Givens rotation is commonly used to solve the SE problem.

Singular Value Decomposition

An extremely useful matrix analysis technique is the singular value decomposition (SVD), which takes an m by n real matrix A and represents it as

A = U \Sigma V^T

where U is an m by n orthogonal matrix (such that U^T U = I), \Sigma is an n by n diagonal matrix whose elements are the non-negative singular values of A, and V is an n by n orthogonal matrix.
Note there is another formulation with U as m by m and \Sigma as m by n.
The computational order is O(n^2 m), which is fine if n is small.
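A quick NumPy sketch of this factorization, on a small hypothetical matrix; note `np.linalg.svd` returns V^T (named `Vt` here), and `full_matrices=False` selects the economy form with U sized m by n as in the text:

```python
import numpy as np

A = np.array([[3.0, 1.0], [1.0, 3.0], [0.0, 2.0]])   # hypothetical 3x2 matrix
U, sv, Vt = np.linalg.svd(A, full_matrices=False)    # economy SVD: U is 3x2

A_rebuilt = U @ np.diag(sv) @ Vt                     # reassemble U Sigma V^T
```

The singular values come back sorted in decreasing order, and their squares match the eigenvalues of A^T A, as discussed next.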

Matrix Singular Values

The singular values of a matrix A are the square roots of the eigenvalues of A^T A; they are real, non-negative numbers, usually listed in decreasing order.
Each singular value \sigma satisfies

A v = \sigma u, \qquad A^T u = \sigma v

where u (dimension m) and v (dimension n) are both unit length and are called, respectively, the left-singular and right-singular vectors for singular value \sigma.

SVD Applications

SVD applications come from the property that A can be written as

A = \sum_{i=1}^{n} \sigma_i u_i v_i^T

where each rank-one matrix \sigma_i u_i v_i^T is known as a mode.
More of the essence of the matrix is contained in the modes associated with the larger singular values.
An immediate application is data compression, in which A represents an image; often a quite good representation of the image is available from just a small percentage of the modes.
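The mode expansion can be sketched as follows, using a random matrix as hypothetical data; summing the first k rank-one modes gives the best rank-k approximation, with spectral-norm error equal to the next singular value:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(8, 6))                          # hypothetical data matrix
U, sv, Vt = np.linalg.svd(A, full_matrices=False)

def modes_sum(k):
    """Sum of the first k rank-one modes sigma_i * u_i * v_i^T."""
    return sum(sv[i] * np.outer(U[:, i], Vt[i, :]) for i in range(k))
```

Keeping all n modes reproduces A exactly; keeping only the leading modes is the compression idea in the text, since each mode costs only m + n + 1 numbers to store.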

SVD Image Compression Example

SVD Applications

Another application is removing noise. If the columns of A are signals, noise can be removed by taking the SVD and reconstructing A without the modes associated with the smaller singular values; the noise is roughly uniform across all modes, while the signal is concentrated in the dominant ones.
Another application is principal component analysis (PCA), in which the idea is to take a data set with a number of variables, reduce the data, and determine the data associations; the principal components correspond to the largest singular values when the data is appropriately normalized.

Pseudo-inverse of a Matrix

The pseudo-inverse generalizes the concept of a matrix inverse to an m by n matrix with m >= n; specifically, we are talking about the Moore-Penrose matrix inverse.
The notation for the pseudo-inverse of A is A^+; it satisfies A A^+ A = A.
If A is a square, nonsingular matrix, then A^+ = A^{-1}.
It is quite useful for solving the least squares problem, since the least squares solution of Ax = b is x = A^+ b.

Pseudo-inverse and SVD

The pseudo-inverse can be directly determined from the SVD:

A^+ = V \Sigma^+ U^T

in which \Sigma^+ is formed by replacing each non-zero diagonal element of \Sigma by its reciprocal and leaving the zeros in place; numerically small values in \Sigma are treated as zero.
V is n by n, \Sigma^+ is n by n, U^T is n by m, so A^+ is n by m.
Computationally, performing the SVD dominates.
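This construction can be sketched directly and checked against NumPy's built-in `pinv`; the matrix and the tolerance for "numerically small" are illustrative choices:

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])   # m = 3, n = 2
U, sv, Vt = np.linalg.svd(A, full_matrices=False)

# Invert the non-negligible singular values, leave (near-)zeros at zero
tol = 1e-12 * sv[0]
sv_plus = np.array([1.0 / s if s > tol else 0.0 for s in sv])

A_plus = Vt.T @ np.diag(sv_plus) @ U.T               # n by m pseudo-inverse
```

The tolerance step is what makes the pseudo-inverse well behaved for rank-deficient or ill-conditioned A, where a naive inverse of tiny singular values would blow up.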

Simple Least Squares Example

Assume we wish to fit a line (mx + b = y) to three data points: (1,1), (2,4), (6,4).
Two unknowns, m and b; hence x = [m b]^T.
Set up in the form Ax = b:

A = \begin{bmatrix} 1 & 1 \\ 2 & 1 \\ 6 & 1 \end{bmatrix}, \qquad b = \begin{bmatrix} 1 \\ 4 \\ 4 \end{bmatrix}

Simple Least Squares Example

We first compute the SVD of A, then form the pseudo-inverse A^+ = V \Sigma^+ U^T.

Simple Least Squares Example

Computing x = [m b]^T gives m = 3/7 ≈ 0.429 and b = 12/7 ≈ 1.714.
With the pseudo-inverse approach we immediately see the sensitivity of the elements of x to the elements of b.
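The arithmetic of this example can be checked in a few lines; here the right-hand side is renamed y to avoid clashing with the intercept b:

```python
import numpy as np

# Fit m*x + b = y to (1,1), (2,4), (6,4): solve A [m, b]^T ≈ y in the LS sense
A = np.array([[1.0, 1.0], [2.0, 1.0], [6.0, 1.0]])
y = np.array([1.0, 4.0, 4.0])

m, b = np.linalg.pinv(A) @ y   # pseudo-inverse least-squares solution
```

The rows of pinv(A) show exactly how each data point's y value influences m and b, which is the sensitivity observation made above.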