ECE 530 – Analysis Techniques for Large-Scale Electrical Systems


1 ECE 530 – Analysis Techniques for Large-Scale Electrical Systems
Lecture 20: Least-Squares Method
Prof. Hao Zhu
Dept. of Electrical and Computer Engineering
University of Illinois at Urbana-Champaign
11/4/2014

2 Revisit LCDFs
[figure: closed-line base case vs. line k addition case]

3 LCDF: Evaluation
We can evaluate the LCDF by reversing the line outage. Recall how we define the LODF.
[figure: outaged base case vs. outage case]

4 LCDF: Evaluation
So the post-outage line flow f_k in the LCDF is exactly the pre-outage line k flow in the LODF. But the change in line l flow in the LCDF is the opposite of that in the LODF, so for the LCDF calculation
  Δf_l = −d_lk · f_k
Hence, we have
  LCDF_lk = Δf_l / f_k = −d_lk
We can verify this from the earlier example on LODF_3,4 for the 5-bus case.

5 Least Squares
So far we have considered the solution of Ax = b in which A is a square matrix; as long as A is nonsingular there is a unique solution. That is, we have the same number of equations (m) as unknowns (n). Many problems are overdetermined, in which there are more equations than unknowns (m > n). Overdetermined systems are usually inconsistent: no value of x exactly satisfies all the equations. Underdetermined systems have more unknowns than equations (m < n); they never have a unique solution but are usually consistent.
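A quick numerical sketch of the square vs. overdetermined cases; the matrices below are made up for illustration, not from the lecture:

```python
import numpy as np

# Square, nonsingular system: a unique exact solution exists
A_sq = np.array([[2.0, 1.0], [1.0, 3.0]])
b_sq = np.array([5.0, 10.0])
x_sq = np.linalg.solve(A_sq, b_sq)

# Overdetermined (m = 3 > n = 2) and inconsistent: no x satisfies all rows,
# so lstsq returns the x minimizing ||Ax - b||_2 instead
A_od = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b_od = np.array([1.0, 1.0, 3.0])   # third equation conflicts with the first two
x_od, res, rank, sv = np.linalg.lstsq(A_od, b_od, rcond=None)

resid_sq = np.linalg.norm(A_sq @ x_sq - b_sq)   # essentially zero
resid_od = np.linalg.norm(A_od @ x_od - b_od)   # strictly positive
```

The nonzero residual in the second case is exactly the inconsistency that least squares, introduced next, minimizes.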

6 Method of Least Squares
The least squares method is a solution approach for determining an approximate solution for an overdetermined system. If the system is inconsistent, then not all of the equations can be exactly satisfied. For each equation, the difference between its right-hand side and the value predicted by the estimated solution is known as the error. Least squares seeks to minimize the sum of the squares of the errors. Weighted least squares allows different weights for the equations.

7 Least Squares Problem
Consider Ax = b, where A ∈ R^{m×n}, x ∈ R^n, b ∈ R^m, or, written out row by row,
  Σ_{j=1}^{n} A_ij x_j = b_i,  i = 1, …, m

8 Least Squares Solution
We write (a_i)^T for row i of A, where a_i is a column vector. Here m ≥ n, and the solution we seek is the one that minimizes ‖Ax − b‖_p, where p denotes some specific vector norm. Since an overdetermined system usually has no exact solution, the best we can do is determine an x that minimizes the desired norm.

9 Example 1: Choice of p
We discuss the choice of p in terms of a specific example. Consider the equation Ax = b with [A and b given in the slide figure]: three equations and one unknown. We will consider three possible choices for p.

10 Example 1: Choice of p
(i) p = 1; (ii) p = 2; (iii) p = ∞
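The three norm choices behave differently even in the single-unknown case. Since the slide's actual A and b are not reproduced here, the sketch below uses hypothetical data: three equations x = 1, x = 2, x = 4 (i.e., A is a column of ones), for which each minimizer has a closed form:

```python
import numpy as np

# Hypothetical example: minimize ||Ax - b||_p with A = [1, 1, 1]^T,
# i.e., pick a single x closest to the three targets below
b = np.array([1.0, 2.0, 4.0])

x_p1 = np.median(b)                 # p = 1: minimizes sum |x - b_i|   -> median
x_p2 = np.mean(b)                   # p = 2: minimizes sum (x - b_i)^2 -> mean
x_pinf = (b.min() + b.max()) / 2    # p = inf: minimizes max |x - b_i| -> midrange
```

The three answers differ (2, 7/3, and 2.5 here), which is why the choice of p matters.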

11 The Least Squares Problem
In general, ‖Ax − b‖_p is non-differentiable for p = 1 or p = ∞. The choice p = 2 has become well established and gives the least-squares fit interpretation. We next motivate the choice of p = 2 by first considering the least-squares problem.

12 The Least Squares Problem
The least-squares problem is
  min_{x ∈ R^n} ‖Ax − b‖_2
The problem is tractable for 2 major reasons: (i) the function is differentiable in x; and

13 The Least Squares Problem
(ii) the 2-norm is preserved under orthogonal transformations:
  ‖Q(Ax − b)‖_2 = ‖Ax − b‖_2
with Q ∈ R^{m×m} an arbitrary orthogonal matrix; that is, Q satisfies Q Q^T = Q^T Q = I

14 The Least Squares Problem
We next introduce the basic underlying assumption: A is of full (column) rank, i.e., the columns of A constitute a set of linearly independent vectors. This assumption implies that the rank of A is n, because n ≤ m since we are dealing with an overdetermined system. Fact: the least squares problem solution x* satisfies the normal equations
  A^T A x* = A^T b

15 Proof of Fact
Since by definition the least squares solution x* minimizes ‖Ax − b‖_2^2 = (Ax − b)^T (Ax − b), at the optimum the derivative of this function vanishes:
  ∇_x ‖Ax − b‖_2^2 |_{x = x*} = 2 A^T (A x* − b) = 0
which yields A^T A x* = A^T b
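The vanishing-gradient condition is easy to check numerically: for the least-squares solution of a random overdetermined system (made-up data, for illustration only), A^T(Ax* − b) should be numerically zero:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 3))    # 8 equations, 3 unknowns; full column rank
b = rng.standard_normal(8)

# Least-squares solution x*
x_star, *_ = np.linalg.lstsq(A, b, rcond=None)

# Gradient of ||Ax - b||_2^2 at x*, up to the factor of 2: should vanish
grad = A.T @ (A @ x_star - b)
```

Equivalently, the residual Ax* − b is orthogonal to the column space of A.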

16 Implications
The underlying assumption is that A is of full column rank. Therefore, the fact that A^T A is positive definite (p.d.) follows from considering any x ≠ 0 and evaluating
  x^T (A^T A) x = ‖Ax‖_2^2 > 0
which is the definition of a p.d. matrix (Ax ≠ 0 because the columns of A are linearly independent). We use the shorthand A^T A ≻ 0 for A^T A being a symmetric, positive definite matrix.

17 Implications
The underlying assumption that A is full rank, and therefore A^T A is p.d., implies that there exists a unique least squares solution
  x* = (A^T A)^{-1} A^T b
Note: we use the inverse in a conceptual, rather than a computational, sense. The formulation below is known as the normal equations, with the conceptual solution above as its unique solution:
  A^T A x = A^T b

18 Implications
An important implication of positive definiteness is that we can factor A^T A: since A^T A ≻ 0, there exists an upper triangular matrix G such that
  A^T A = G^T G
The expression A^T A = G^T G is called the Cholesky factorization of the symmetric p.d. matrix A^T A.
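A small check that the factorization exists whenever A has full column rank (random illustrative data; note NumPy's `cholesky` returns a lower-triangular L with M = L L^T, so the slides' G corresponds to L^T):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 3))    # full column rank with probability 1
M = A.T @ A                        # symmetric positive definite

# NumPy convention: M = L @ L.T with L lower triangular;
# the slides' upper-triangular G is then G = L.T, giving M = G^T G
L = np.linalg.cholesky(M)
G = L.T
```

If A were rank deficient, `np.linalg.cholesky` would raise `LinAlgError`, since M would only be positive semidefinite.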

19 Least Squares Solution Algorithm
Step 1: Compute the lower triangular part of A^T A (it is symmetric, so this determines the whole matrix)
Step 2: Obtain the Cholesky factorization A^T A = G^T G
Step 3: Compute A^T b
Step 4: Solve for y using forward substitution in G^T y = A^T b, and for x using backward substitution in G x = y

20 Practical Considerations
The two key problems that arise in practice with the triangularization procedure are:
While A may be sparse, A^T A is much less sparse and consequently requires more computing and storage resources for the solution
A^T A may be numerically less well-conditioned than A

21 Example 2: Loss of Sparsity
Assume the B matrix for a network is [matrix shown in figure]. Then B^T B is [matrix shown in figure]. Second neighbors are now connected! But large networks are still sparse, just not as sparse.

22 Numerical Conditioning
To understand the numerical ill-conditioning issue, we need to introduce some terminology. We define the norm of a matrix B ∈ R^{m×n} to be
  ‖B‖_2 = max_{x ≠ 0} ‖Bx‖_2 / ‖x‖_2

23 Numerical Conditioning
  ‖B‖_2 = sqrt(max_i λ_i(B^T B))
i.e., λ_i is a root of the characteristic polynomial det(B^T B − λI) = 0. In words, the norm of matrix B is the square root of the largest eigenvalue of B^T B.

24 Numerical Conditioning
The condition number of a matrix B is defined as
  κ(B) = ‖B‖ · ‖B^{-1}‖
A well-conditioned matrix has a small value of κ(B), close to 1. The larger the value of κ(B), the more pronounced the ill-conditioning. The ill-conditioned nature of A^T A may severely impact the accuracy of the computed solution.

25 Example 3: Ill-Conditioned A^T A
In this example we illustrate the fact that an ill-conditioned matrix A^T A results in highly sensitive solutions of least-squares problems, with A as [matrix shown in figure]; then A^T A is [matrix shown in figure].

26 Example 3: Ill-Conditioned A^T A
We introduce "noise" in A in the form of the perturbation matrix δA [given in the figure].

27 Example 3: Ill-Conditioned A^T A
This "noise" leads to the error E in the computation of A^T A, with
  (A + δA)^T (A + δA) = A^T A + E
Let b be given, and assume that there is no "noise" in b, i.e., δb = 0.

28 Example 3: Ill-Conditioned A^T A
The resulting error in solving the normal equations is independent of any noise in b, since it is caused purely by δA (entering through E and δA^T b). Let x be the true solution of the normal equations ⇒ x = [1 0]^T

29 Example 3: Ill-Conditioned A^T A
Let x̂ be the solution of the system with the error arising due to E, i.e., the solution of
  (A^T A + E) x̂ = (A + δA)^T b
Therefore, x̂ is obtained by solving this perturbed system [result given in the figure].

30 Example 3: Ill-Conditioned A^T A
Therefore, the relative error is ‖x̂ − x‖ / ‖x‖. Now, the condition number is κ(A^T A) and the relative perturbation is ‖E‖ / ‖A^T A‖. So the product κ(A^T A) · ‖E‖ / ‖A^T A‖ approximates the relative error.

31 Example 3: Ill-Conditioned A^T A
Thus the condition number is a major contributor to the error in the least-squares solution. In other words, the sensitivity of the solution to any system error, be it of a data-entry or of a numerical nature, is very dependent on κ(A^T A).

32 Solving the Least-Squares Problem
With the previous background we proceed to the typical schemes in use for solving least squares problems, all along paying adequate attention to the numerical aspects of the solution approach. If the matrix is full (dense), then often the best solution approach is to use a singular value decomposition (SVD) to form a matrix known as the pseudo-inverse. We'll cover this later, after first considering the sparse problem. We first review some fundamental building blocks and then present the key results useful for the sparse matrices common in state estimation.

33 Power System State Estimation
Power system state estimation (SE) is covered in ECE 573, so we'll just touch on it here; it is a key least squares application Overall goal is to come up with a power flow model for the present "state" of the power system based on the actual system measurements SE assumes the topology and parameters of the transmission network are mostly known Measurements come from SCADA, and increasingly, from PMUs

34 Power System State Estimation
A good introductory reference is Power Generation, Operation and Control by Wood, Wollenberg and Sheble, 3rd Edition. The problem can be formulated in a nonlinear, weighted least-squares form as
  min_x J(x) = Σ_{i=1}^{m} (z_i − f_i(x))^2 / σ_i^2
where J(x) is the cost function, x are the state variables (primarily bus voltage magnitudes and angles), {z_i} are the m measurements, f(x) relates the states to the measurements, and σ_i is the assumed standard deviation of measurement i.

35 Measurement Example
Assume we measure the real and reactive power flowing into one end of a transmission line; then the z_i − f_i(x) functions correspond to [power flow equations shown in figure]. Two measurements for four unknowns. Other measurements, such as the flow at the other end, power injections, and voltage magnitudes, add redundancy.

36 Assumed Error
Hence the goal is to decrease the error between the measurements and the assumed model states x. The σ_i term weights the various measurements, recognizing that they can have vastly different assumed errors. Measurement error is assumed to be Gaussian; whether it actually is or not is another question. Outliers (bad measurements) are often removed.
