Pivoting, Perturbation Analysis, Scaling and Equilibration

Perturbation Analysis
Consider the system of equations Ax = b.
Question: if a small perturbation δA is introduced in the matrix A and/or δb in the vector b, what is the effect δx on the solution vector x?
Alternatively: how sensitive is the solution x to small perturbations δA in the coefficient matrix and δb in the forcing vector?

Perturbation in the forcing vector b
Perturbed system: A(x + δx) = b + δb
Since Ax = b: A δx = δb, hence δx = A⁻¹ δb
Taking norms of the vectors and matrices:
$$\|\delta x\| = \|A^{-1}\,\delta b\| \le \|A^{-1}\|\,\|\delta b\| = \|A^{-1}\|\,\|b\|\,\frac{\|\delta b\|}{\|b\|} = \|A^{-1}\|\,\|Ax\|\,\frac{\|\delta b\|}{\|b\|} \le \|A^{-1}\|\,\|A\|\,\|x\|\,\frac{\|\delta b\|}{\|b\|}$$
$$\frac{\|\delta x\|}{\|x\|} \le \|A\|\,\|A^{-1}\|\,\frac{\|\delta b\|}{\|b\|}$$

Perturbation in the matrix A
Perturbed system: (A + δA)(x + δx) = b
Since Ax = b: A δx + δA(x + δx) = 0, hence δx = −A⁻¹ δA (x + δx)
Taking norms of the vectors and matrices:
$$\|\delta x\| = \|A^{-1}\,\delta A\,(x+\delta x)\| \le \|A^{-1}\|\,\|\delta A\|\,\|x+\delta x\| \le \|A^{-1}\|\,\|\delta A\|\,\|x\| + \|A^{-1}\|\,\|\delta A\|\,\|\delta x\|$$
Neglecting the last term, which is a product of perturbation quantities:
$$\frac{\|\delta x\|}{\|x\|} \le \|A\|\,\|A^{-1}\|\,\frac{\|\delta A\|}{\|A\|}$$
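The final inequality can be checked numerically. A minimal sketch with NumPy, where the matrix A, vector b, and perturbation δb are illustrative choices, not from the slides:

```python
# Numerical check of the bound ||dx||/||x|| <= cond(A) * ||db||/||b||.
# The matrix and perturbation below are illustrative choices.
import numpy as np

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = np.linalg.solve(A, b)

db = np.array([1e-6, -1e-6])          # small perturbation in b
x_pert = np.linalg.solve(A, b + db)
dx = x_pert - x

lhs = np.linalg.norm(dx) / np.linalg.norm(x)
rhs = np.linalg.cond(A) * np.linalg.norm(db) / np.linalg.norm(b)
assert lhs <= rhs                     # the perturbation bound holds
```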

Condition Number
The condition number of a matrix A is defined as: 𝒞(A) = ‖A⁻¹‖ ‖A‖.
𝒞(A) is the proportionality constant relating the relative error or perturbation in A and b to the relative error or perturbation in x.
The value of 𝒞(A) depends on the norm used for the calculation; use the same norm for both A and A⁻¹.
If 𝒞(A) is of order 1 (for any induced norm, 𝒞(A) ≥ 1), the matrix is well-conditioned.
If 𝒞(A) ≫ 1, the matrix is ill-conditioned.
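The definition can be evaluated directly. A minimal sketch using the infinity norm; the nearly singular matrix below is an illustrative choice, not from the slides:

```python
# Condition number computed from its definition C(A) = ||A^-1|| * ||A||,
# here with the infinity norm; the matrix is an illustrative choice.
import numpy as np

A = np.array([[1.0, 2.0], [2.0, 3.999]])   # nearly singular: det = -0.001
C = np.linalg.norm(np.linalg.inv(A), np.inf) * np.linalg.norm(A, np.inf)

assert np.isclose(C, np.linalg.cond(A, np.inf))  # matches NumPy's built-in
assert C > 1e3                                   # C(A) >> 1: ill-conditioned
```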

(Worked example on the slide: for the matrix considered there, 𝒞(A) ≫ 1, so the matrix is ill-conditioned.)

Is determinant a good measure of matrix conditioning?
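A short numerical experiment suggests the answer is no: multiplying A by a scalar changes the determinant drastically while leaving the condition number, and hence the sensitivity of the solution, unchanged. A sketch with an illustrative matrix:

```python
# Why the determinant is a poor measure of conditioning: scaling a matrix
# changes its determinant drastically but leaves cond(A) unchanged.
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 2.0]])      # well-conditioned, det = 3
B = 1e-6 * A                                # det = 3e-12: "almost singular"?

assert np.isclose(np.linalg.det(B), 3e-12)
assert np.isclose(np.linalg.cond(A), np.linalg.cond(B))  # conditioning identical
```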

Scaling and Equilibration
These operations help to reduce round-off errors during computation and yield a more accurate solution for a moderately ill-conditioned matrix.
Example: consider the set of equations shown on the slide. Scale the variable x₁ = 10³ × x₁′ and multiply the second equation by 100. The resulting system (shown on the slide) has coefficients of comparable magnitude.

Scaling
The vector x is replaced by x′ such that x = Sx′, where S is a diagonal matrix containing the scale factors.
For the example problem, Ax = b becomes: ASx′ = A′x′ = b, where A′ = AS.
The scaling operation is therefore equivalent to post-multiplication of the matrix A by a diagonal matrix S containing the scale factors on the diagonal.

Equilibration
Equilibration is the multiplication of one equation by a constant such that its coefficients become of the same order of magnitude as the coefficients of the other equations. The operation is equivalent to pre-multiplication of both sides of the equation by a diagonal matrix E: Ax = b becomes EAx = Eb.
For the example problem, with E = diag(1, 10², 1):
$$\begin{bmatrix}1&0&0\\0&10^{2}&0\\0&0&1\end{bmatrix}\begin{bmatrix}0.003&1.45&0.3\\0.00002&0.0096&0.0021\\0.0015&0.966&0.201\end{bmatrix}\begin{bmatrix}x_1\\x_2\\x_3\end{bmatrix}=\begin{bmatrix}1&0&0\\0&10^{2}&0\\0&0&1\end{bmatrix}\begin{bmatrix}11\\0.12\\19\end{bmatrix}\;\Rightarrow\;\begin{bmatrix}0.003&1.45&0.3\\0.002&0.96&0.21\\0.0015&0.966&0.201\end{bmatrix}\begin{bmatrix}x_1\\x_2\\x_3\end{bmatrix}=\begin{bmatrix}11\\12\\19\end{bmatrix}$$
The equilibration operation is thus equivalent to pre-multiplication of the matrix A and the vector b by a diagonal matrix E containing the equilibration factors on the diagonal.
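The equilibration step can be verified numerically; the sketch below uses the coefficients from the example and E = diag(1, 10², 1), and checks that the solution is unchanged:

```python
# Equilibration of the example system: pre-multiply by E = diag(1, 100, 1)
# so the coefficients of equation 2 match the magnitude of the others.
import numpy as np

A = np.array([[0.003,   1.45,   0.3   ],
              [0.00002, 0.0096, 0.0021],
              [0.0015,  0.966,  0.201 ]])
b = np.array([11.0, 0.12, 19.0])

E = np.diag([1.0, 100.0, 1.0])
EA, Eb = E @ A, E @ b                 # equilibrated system EAx = Eb

x  = np.linalg.solve(A, b)
xe = np.linalg.solve(EA, Eb)
assert np.allclose(x, xe)             # same solution as the original system
assert np.allclose(EA[1], [0.002, 0.96, 0.21])  # row 2 rescaled as expected
```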

Example Problem
Does the solution exist? Consider the system
$$\begin{bmatrix}10^{-5}&10^{-5}&1\\10^{-5}&-10^{-5}&1\\1&1&2\end{bmatrix}\begin{bmatrix}x_1\\x_2\\x_3\end{bmatrix}=\begin{bmatrix}2\times10^{-5}\\-2\times10^{-5}\\1\end{bmatrix}$$
a) Perform complete pivoting and carry out the Gaussian elimination steps using 3-digit floating-point arithmetic with round-off. Explain the results.
b) Rewrite the set of equations after scaling according to x₃ = 10⁻⁵ × x₃′ and equilibration of the resulting equations 1 and 2. Solve the system with the same precision for the floating-point operations.

Pivoting, Scaling and Equilibration (Recap)
Before starting the solution algorithm, examine the entries of A and decide on the scaling and equilibration factors; construct the matrices E and S.
Transform the set of equations Ax = b into EASx′ = Eb.
Solve the system A′x′ = b′ for x′, where A′ = EAS and b′ = Eb.
Compute: x = Sx′.
Gauss elimination: perform partial pivoting at each step k.
For all other methods: perform full pivoting before the start of the algorithm to make the matrix diagonally dominant, as far as practicable.
These steps give the best attainable solution for well-conditioned and mildly ill-conditioned matrices. However, none of these steps can transform an ill-conditioned matrix into a well-conditioned one.
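The recap steps can be sketched as follows; the matrices A, E, and S below are illustrative choices, not from the slides:

```python
# Recap pipeline: solve Ax = b via the transformed system (EAS)x' = Eb,
# then recover x = Sx'. A, b, E, S are illustrative choices.
import numpy as np

A = np.array([[1e3, 2.0], [3.0, 4e-3]])
b = np.array([5.0, 6.0])

S = np.diag([1e-3, 1.0])              # scale factors for the unknowns
E = np.diag([1.0, 1e2])               # equilibration factors for the equations

Ap, bp = E @ A @ S, E @ b             # A' = EAS, b' = Eb
x_prime = np.linalg.solve(Ap, bp)     # solve the transformed system
x = S @ x_prime                       # undo the scaling of the unknowns

assert np.allclose(A @ x, b)          # x solves the original system
```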

Iterative Improvement by Direct Methods
For moderately ill-conditioned matrices, an approximate solution x̃ of the set of equations Ax = b can be improved through iterations using direct methods.
Compute the residual: r = b − Ax̃
Recognize: r = b − Ax̃ + Ax − b = A(x − x̃)
Therefore: AΔx = r, where Δx = x − x̃
Update: x = x̃ + Δx
The iteration sequence can be repeated until ‖Δx‖ ≤ ε.
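The iterative-improvement loop can be sketched as follows; the matrix and the deliberately rough starting approximation are illustrative choices:

```python
# Iterative improvement: r = b - A x~, solve A dx = r, update x = x~ + dx,
# repeated until ||dx|| <= eps. A, b, and x~ are illustrative choices.
import numpy as np

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x_approx = np.array([0.09, 0.6])      # rough approximation to the solution

for _ in range(5):
    r = b - A @ x_approx              # residual of the current approximation
    dx = np.linalg.solve(A, r)        # correction from A dx = r
    x_approx = x_approx + dx
    if np.linalg.norm(dx) <= 1e-12:   # stop when the correction is negligible
        break

assert np.allclose(A @ x_approx, b)
```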

Solution of System of Nonlinear Equations

System of Nonlinear Equations
f(x) = 0, where f is now a vector of functions, f = {f₁, f₂, …, fₙ}ᵀ, and x is a vector of independent variables, x = {x₁, x₂, …, xₙ}ᵀ.
Open methods: fixed point, Newton-Raphson, secant.

Open Methods: Fixed Point
Rewrite the system: f(x) = 0 becomes x = Φ(x).
Initialize: assume x⁽⁰⁾.
Iteration step k: x⁽ᵏ⁺¹⁾ = Φ(x⁽ᵏ⁾).
Stopping criterion: ‖x⁽ᵏ⁺¹⁾ − x⁽ᵏ⁾‖ / ‖x⁽ᵏ⁺¹⁾‖ ≤ ε
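The fixed-point iteration can be sketched for two variables; the particular Φ below is an illustrative contraction, not the tutorial problem:

```python
# Fixed-point iteration x^(k+1) = Phi(x^(k)) for a 2-variable system.
# This Phi is an illustrative contraction (||J|| < 1 near the fixed point).
import numpy as np

def phi(v):
    x, y = v
    return np.array([0.5 * np.cos(y), 0.5 * np.sin(x)])

v = np.array([0.0, 0.0])              # initial guess x^(0)
for k in range(100):
    v_new = phi(v)
    # stopping criterion ||x^(k+1) - x^(k)|| / ||x^(k+1)|| <= eps
    if np.linalg.norm(v_new - v) / np.linalg.norm(v_new) <= 1e-12:
        v = v_new
        break
    v = v_new

assert np.allclose(v, phi(v))         # v is (numerically) a fixed point
```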

Open Methods: Fixed Point
Condition for convergence:
For a single variable: │Φ′(ξ)│ < 1.
For multiple variables, the derivative becomes the Jacobian matrix 𝕁 whose elements are Jᵢⱼ = ∂φᵢ/∂xⱼ. Example for 2 variables:
$$\mathbb{J}=\begin{bmatrix}\dfrac{\partial\phi_1}{\partial x_1}&\dfrac{\partial\phi_1}{\partial x_2}\\[6pt]\dfrac{\partial\phi_2}{\partial x_1}&\dfrac{\partial\phi_2}{\partial x_2}\end{bmatrix}$$
Sufficient condition: ‖𝕁‖ < 1. Necessary condition: spectral radius ρ(𝕁) < 1.

Open Methods: Newton-Raphson
Example for 2 variables: f₁(x, y) = 0 and f₂(x, y) = 0. Two-dimensional Taylor series:
$$0=f_1\!\left(x^{k+1},y^{k+1}\right)=f_1\!\left(x^{k},y^{k}\right)+\left(x^{k+1}-x^{k}\right)\left.\frac{\partial f_1}{\partial x}\right|_{(x^{k},\,y^{k})}+\left(y^{k+1}-y^{k}\right)\left.\frac{\partial f_1}{\partial y}\right|_{(x^{k},\,y^{k})}+HOT$$
$$0=f_2\!\left(x^{k+1},y^{k+1}\right)=f_2\!\left(x^{k},y^{k}\right)+\left(x^{k+1}-x^{k}\right)\left.\frac{\partial f_2}{\partial x}\right|_{(x^{k},\,y^{k})}+\left(y^{k+1}-y^{k}\right)\left.\frac{\partial f_2}{\partial y}\right|_{(x^{k},\,y^{k})}+HOT$$
Dropping the higher-order terms and writing in matrix form:
$$\left.\begin{bmatrix}\dfrac{\partial f_1}{\partial x}&\dfrac{\partial f_1}{\partial y}\\[6pt]\dfrac{\partial f_2}{\partial x}&\dfrac{\partial f_2}{\partial y}\end{bmatrix}\right|_{(x^{k},\,y^{k})}\begin{bmatrix}x^{k+1}-x^{k}\\[2pt]y^{k+1}-y^{k}\end{bmatrix}=\begin{bmatrix}-f_1\!\left(x^{k},y^{k}\right)\\[2pt]-f_2\!\left(x^{k},y^{k}\right)\end{bmatrix}$$

Open Methods: Newton-Raphson
Initialize: assume x⁽⁰⁾.
Recall the single-variable case: 0 = f(x^{k+1}) = f(x^k) + (x^{k+1} − x^k) f′(x^k) + HOT.
Multiple variables:
$$\mathbf{0}=\boldsymbol{f}\!\left(\boldsymbol{x}^{(k+1)}\right)=\boldsymbol{f}\!\left(\boldsymbol{x}^{(k)}\right)+\mathbb{J}\!\left(\boldsymbol{x}^{(k)}\right)\left(\boldsymbol{x}^{(k+1)}-\boldsymbol{x}^{(k)}\right)+HOT$$
Iteration step k: solve 𝕁(x⁽ᵏ⁾) Δx = −f(x⁽ᵏ⁾); update x⁽ᵏ⁺¹⁾ = x⁽ᵏ⁾ + Δx.
Stopping criterion: ‖x⁽ᵏ⁺¹⁾ − x⁽ᵏ⁾‖ / ‖x⁽ᵏ⁺¹⁾‖ ≤ ε
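The iteration step can be sketched for two variables; the test functions f₁ = x² + y² − 4 and f₂ = x − y and their analytic Jacobian are illustrative choices, not from the slides:

```python
# Newton-Raphson for a 2-variable system: solve J(x^k) dx = -f(x^k),
# update x^(k+1) = x^k + dx. Test functions are illustrative choices.
import numpy as np

def f(v):
    x, y = v
    return np.array([x**2 + y**2 - 4.0, x - y])

def jac(v):                           # analytic Jacobian J_ij = df_i/dx_j
    x, y = v
    return np.array([[2.0 * x, 2.0 * y],
                     [1.0,     -1.0   ]])

v = np.array([1.0, 2.0])              # initial guess x^(0)
for k in range(50):
    dx = np.linalg.solve(jac(v), -f(v))   # J dx = -f
    v = v + dx
    if np.linalg.norm(dx) / np.linalg.norm(v) <= 1e-12:
        break

# the root of this system is x = y = sqrt(2)
assert np.allclose(v, [np.sqrt(2.0), np.sqrt(2.0)])
```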

Open Methods: Newton-Raphson
Example for 2 variables:
$$\left.\begin{bmatrix}\dfrac{\partial f_1}{\partial x_1}&\dfrac{\partial f_1}{\partial x_2}\\[6pt]\dfrac{\partial f_2}{\partial x_1}&\dfrac{\partial f_2}{\partial x_2}\end{bmatrix}\right|_{(x_1^{k},\,x_2^{k})}\begin{bmatrix}\Delta x_1\\\Delta x_2\end{bmatrix}=\begin{bmatrix}-f_1\!\left(x_1^{k},x_2^{k}\right)\\-f_2\!\left(x_1^{k},x_2^{k}\right)\end{bmatrix}$$
$$\begin{bmatrix}\Delta x_1\\\Delta x_2\end{bmatrix}=\begin{bmatrix}x_1^{k+1}\\x_2^{k+1}\end{bmatrix}-\begin{bmatrix}x_1^{k}\\x_2^{k}\end{bmatrix}=\begin{bmatrix}x_1^{k+1}-x_1^{k}\\x_2^{k+1}-x_2^{k}\end{bmatrix},\qquad\begin{bmatrix}x_1^{k+1}\\x_2^{k+1}\end{bmatrix}=\begin{bmatrix}x_1^{k}\\x_2^{k}\end{bmatrix}+\begin{bmatrix}\Delta x_1\\\Delta x_2\end{bmatrix}$$

Example Problem: Tutorial 3 Q2
Solve the system of equations f(x) = 0 given on the slide using:
a) fixed-point iteration;
b) the Newton-Raphson method,
starting with an initial guess of x = 1.2 and y = 1.2.
Solution — iteration step k: solve 𝕁(x⁽ᵏ⁾) Δx = −f(x⁽ᵏ⁾); update x⁽ᵏ⁺¹⁾ = x⁽ᵏ⁾ + Δx. Stopping criterion: ‖x⁽ᵏ⁺¹⁾ − x⁽ᵏ⁾‖ / ‖x⁽ᵏ⁺¹⁾‖ ≤ ε.

Open Methods: Newton-Raphson
Example for 2 variables, with the functions denoted u and v:
$$\left.\begin{bmatrix}\dfrac{\partial u}{\partial x_1}&\dfrac{\partial u}{\partial x_2}\\[6pt]\dfrac{\partial v}{\partial x_1}&\dfrac{\partial v}{\partial x_2}\end{bmatrix}\right|_{(x_1^{k},\,x_2^{k})}\begin{bmatrix}\Delta x_1\\\Delta x_2\end{bmatrix}=\begin{bmatrix}-u\!\left(x_1^{k},x_2^{k}\right)\\-v\!\left(x_1^{k},x_2^{k}\right)\end{bmatrix}$$
$$\begin{bmatrix}\Delta x_1\\\Delta x_2\end{bmatrix}=\begin{bmatrix}x_1^{k+1}-x_1^{k}\\x_2^{k+1}-x_2^{k}\end{bmatrix},\qquad\begin{bmatrix}x_1^{k+1}\\x_2^{k+1}\end{bmatrix}=\begin{bmatrix}x_1^{k}\\x_2^{k}\end{bmatrix}+\begin{bmatrix}\Delta x_1\\\Delta x_2\end{bmatrix}$$

Open Methods: Secant
The Jacobian of the Newton-Raphson method is evaluated numerically using difference approximations. Numerical methods for estimating the derivative of a function will be covered in detail later; the rest of the method is the same as Newton-Raphson.
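A sketch of this idea: the Newton-Raphson loop with the Jacobian columns approximated by forward differences. The test functions f₁ = x² + y² − 4 and f₂ = x − y and the step size h are illustrative choices, not from the slides:

```python
# Secant-type variant of Newton-Raphson: the Jacobian is approximated
# column by column with forward differences instead of analytic derivatives.
import numpy as np

def f(v):
    x, y = v
    return np.array([x**2 + y**2 - 4.0, x - y])

def jac_fd(v, h=1e-7):
    n = v.size
    J = np.empty((n, n))
    fv = f(v)
    for j in range(n):                # perturb one variable at a time
        vp = v.copy()
        vp[j] += h
        J[:, j] = (f(vp) - fv) / h    # forward-difference column df/dx_j
    return J

v = np.array([1.0, 2.0])              # initial guess x^(0)
for k in range(50):
    dx = np.linalg.solve(jac_fd(v), -f(v))
    v = v + dx
    if np.linalg.norm(dx) / np.linalg.norm(v) <= 1e-10:
        break

# converges to the same root as with the analytic Jacobian
assert np.allclose(v, [np.sqrt(2.0), np.sqrt(2.0)])
```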