Chapter 10. Numerical Solutions of Nonlinear Systems of Equations
Jihoon Myung, Computer Networks Research Lab., Dept. of Computer Science and Engineering, Korea University (jmyung@korea.ac.kr)
Contents: Fixed Points for Functions of Several Variables; Newton’s Method; Quasi-Newton Methods; Steepest Descent Techniques; Homotopy and Continuation Methods
Fixed Points for Functions of Several Variables A system of n nonlinear equations in n unknowns can be written F(x) = 0, where the functions f1, f2, …, fn are the coordinate functions of F
Fixed Points for Functions of Several Variables Example 1. The 3×3 nonlinear system
Fixed Points for Functions of Several Variables Function G(x)
Fixed Points for Functions of Several Variables Main function
Fixed Points for Functions of Several Variables Result with x(0) = (0.1, 0.1, -0.1)^t and tolerance 0.00001
Fixed Points for Functions of Several Variables One way to accelerate convergence of the fixed-point iteration is to use the latest computed estimates x1(k), …, xi-1(k) instead of x1(k-1), …, xi-1(k-1) when computing xi(k), as in the Gauss-Seidel method for linear systems; this does not always accelerate the convergence
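The code from the slides is not reproduced here. As a sketch of the two iteration schemes just described, the following Python implements plain fixed-point iteration and the Gauss-Seidel-style variant that uses the latest estimates. The 2x2 contractive system with fixed point (1, 1) is a hypothetical stand-in, since the original 3x3 example is not shown:

```python
def fixed_point(G, x0, tol=1e-5, max_iter=100, use_latest=False):
    """Functional iteration x(k) = G(x(k-1)) for a system of n equations.

    With use_latest=True, each component g_i is evaluated with the
    components already updated in the current sweep (the Gauss-Seidel
    style acceleration described above)."""
    x = list(x0)
    for k in range(1, max_iter + 1):
        if use_latest:
            x_new = list(x)
            for i, g_i in enumerate(G):
                x_new[i] = g_i(x_new)          # use freshest values
        else:
            x_new = [g_i(x) for g_i in G]      # use only x(k-1)
        err = max(abs(a - b) for a, b in zip(x_new, x))
        x = x_new
        if err < tol:
            return x, k
    raise RuntimeError("fixed-point iteration did not converge")

# Hypothetical contractive system with fixed point (1, 1):
#   x1 = (x1^2 + x2^2 + 8) / 10,   x2 = (x1*x2^2 + x1 + 8) / 10
G = [lambda v: (v[0]**2 + v[1]**2 + 8) / 10,
     lambda v: (v[0] * v[1]**2 + v[0] + 8) / 10]

x_jac, k_jac = fixed_point(G, [0.0, 0.0])
x_gs, k_gs = fixed_point(G, [0.0, 0.0], use_latest=True)
```

As the slide notes, the latest-estimate variant often, but not always, takes fewer iterations than the plain iteration.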
Fixed Points for Functions of Several Variables Main function for using the latest estimates
Fixed Points for Functions of Several Variables Result with x(0) = (0.1, 0.1, -0.1)^t and tolerance 0.00001
Newton’s Method Newton’s method for nonlinear systems uses an approach similar to the one-dimensional case: x(k) = x(k-1) - J(x(k-1))^-1 F(x(k-1)), where the Jacobian matrix J replaces the derivative f′
Newton’s Method The matrix J(x) of first partial derivatives, with (i, j) entry ∂fi(x)/∂xj, is called the Jacobian matrix
Newton’s Method In practice, explicit computation of J(x)^-1 is avoided: a vector y is found that satisfies J(x(k-1))y = -F(x(k-1)), and the new approximation x(k) is obtained by adding y to x(k-1). Newton’s method can converge very rapidly once an approximation near the true solution is obtained, but it is not always easy to determine starting values that will lead to a solution, and the method is comparatively expensive to employ. Good starting values can usually be found by the Steepest Descent method
Newton’s Method For solving J(x(k-1))y=-F(x(k-1)), use Gaussian elimination
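The Newton step just described, namely solving J(x(k-1))y = -F(x(k-1)) by Gaussian elimination with partial pivoting and setting x(k) = x(k-1) + y, can be sketched as below. The 2x2 test system is a hypothetical stand-in for the slide's example:

```python
def gauss_solve(A, b):
    """Solve A y = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]   # augmented matrix
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))  # pivot row
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            m = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= m * M[i][c]
    y = [0.0] * n
    for i in range(n - 1, -1, -1):                 # back substitution
        s = sum(M[i][c] * y[c] for c in range(i + 1, n))
        y[i] = (M[i][n] - s) / M[i][i]
    return y

def newton_system(F, J, x0, tol=1e-5, max_iter=50):
    """Newton's method: solve J(x) y = -F(x), then set x <- x + y."""
    x = list(x0)
    for k in range(1, max_iter + 1):
        y = gauss_solve(J(x), [-fi for fi in F(x)])
        x = [xi + yi for xi, yi in zip(x, y)]
        if max(abs(yi) for yi in y) < tol:
            return x, k
    raise RuntimeError("Newton's method did not converge")

# Hypothetical 2x2 system: x^2 + y^2 = 4, xy = 1 (root near (1.932, 0.518))
F = lambda v: [v[0]**2 + v[1]**2 - 4, v[0] * v[1] - 1]
J = lambda v: [[2 * v[0], 2 * v[1]], [v[1], v[0]]]
root, iters = newton_system(F, J, [2.0, 1.0])
```

Each iteration costs one Jacobian evaluation plus an O(n^3) linear solve, which is the expense the quasi-Newton methods below aim to avoid.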
Newton’s Method Gaussian elimination with partial pivoting
Newton’s Method Function F(x)
Newton’s Method Jacobian matrix
Newton’s Method Main function
Newton’s Method Result with x(0) = (0.1, 0.1, -0.1)^t and tolerance 0.00001
Quasi-Newton Methods Broyden’s method is a generalization of the Secant method to systems of nonlinear equations. It belongs to a class of methods known as least-change secant updates that produce algorithms called quasi-Newton. The Jacobian matrix in Newton’s method is replaced by an approximation matrix that is updated at each iteration, giving superlinear (rather than quadratic) convergence
Quasi-Newton Methods An initial approximation x(0) is given, and the next approximation x(1) is calculated in the same manner as Newton’s method; if it is inconvenient to determine J(x(0)) exactly, difference quotients can be used to approximate the partial derivatives
Quasi-Newton Methods To see how x(2), x(3), … are computed, examine the Secant method for a single nonlinear equation, which replaces the derivative with a difference quotient built from the two most recent approximations
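The matrix-inversion slides that follow presumably maintained the inverse of the approximate Jacobian directly; the standard way to do so is the Sherman-Morrison formula, so that after the first step no linear system is solved and each iteration costs only O(n^2) work plus one F-evaluation. A sketch under that assumption (the test system, starting point, and exact 2x2 initial inverse are hypothetical stand-ins):

```python
def matvec(A, v):
    return [sum(a * b for a, b in zip(row, v)) for row in A]

def broyden(F, A0_inv, x0, tol=1e-5, max_iter=100):
    """Broyden's method, updating the inverse approximation Ainv with
    Sherman-Morrison:
    Ainv <- Ainv + (s - Ainv y) (s^T Ainv) / (s^T Ainv y),
    where s = x_k - x_{k-1} and y = F(x_k) - F(x_{k-1})."""
    n = len(x0)
    Ainv = [row[:] for row in A0_inv]
    x = list(x0)
    Fx = F(x)
    for k in range(1, max_iter + 1):
        s = [-si for si in matvec(Ainv, Fx)]        # s = -Ainv F(x)
        x = [xi + si for xi, si in zip(x, s)]
        Fx_new = F(x)
        if max(abs(si) for si in s) < tol:
            return x, k
        dy = [a - b for a, b in zip(Fx_new, Fx)]    # y = F(x_k) - F(x_{k-1})
        Ainv_dy = matvec(Ainv, dy)
        denom = sum(si * ai for si, ai in zip(s, Ainv_dy))   # s^T Ainv y
        u = [si - ai for si, ai in zip(s, Ainv_dy)]          # s - Ainv y
        vT = matvec([list(col) for col in zip(*Ainv)], s)    # s^T Ainv
        for i in range(n):
            for j in range(n):
                Ainv[i][j] += u[i] * vT[j] / denom
        Fx = Fx_new
    raise RuntimeError("Broyden's method did not converge")

# Hypothetical 2x2 system x^2 + y^2 = 4, xy = 1, starting at (2, 1);
# for n = 2 the exact initial inverse Jacobian is easy to write down.
F = lambda v: [v[0]**2 + v[1]**2 - 4, v[0] * v[1] - 1]
x0 = [2.0, 1.0]
a, b, c, d = 2 * x0[0], 2 * x0[1], x0[1], x0[0]   # J(x0) entries
det = a * d - b * c
A0_inv = [[d / det, -b / det], [-c / det, a / det]]
root, iters = broyden(F, A0_inv, x0)
```

A production implementation would also guard against the denominator s^T Ainv y becoming (near) zero, which is omitted here for brevity.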
Quasi-Newton Methods Matrix inversion
Quasi-Newton Methods Main function
Quasi-Newton Methods Result with x(0) = (0.1, 0.1, -0.1)^t and tolerance 0.00001 (error measured in the Euclidean norm)
Steepest Descent Techniques The Steepest Descent method determines a local minimum for a multivariable function of the form g(x) = Σ fi(x)^2, whose minimum value 0 is attained exactly at a solution of the system. It converges only linearly to the solution, but it converges even for poor initial approximations, so it is used to find sufficiently accurate starting approximations for the Newton-based techniques
Steepest Descent Techniques The direction of greatest decrease in the value of g at x is the direction given by -∇g(x)
Steepest Descent Techniques Choose three numbers α1 < α2 < α3 that, we hope, are close to where the minimum value of h(α) = g(x - αz) occurs, where z = ∇g(x)/‖∇g(x)‖. Construct the quadratic polynomial P(α) that interpolates h at α1, α2, and α3. Define ᾱ in [α1, α3] so that P(ᾱ) is a minimum, and use P(ᾱ) to approximate the minimal value of h(α). In practice, choose α1 = 0, find a number α3 with h(α3) < h(α1), and take α2 = α3/2. The minimum value of P occurs either at the only critical point of P or at the right endpoint α3
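The line-search procedure above can be sketched as follows, using the identity ∇g(x) = 2 J(x)^T F(x) for g = Σ fi^2. The separable test system (f1 = x1^2 - 1, f2 = x2^2 - 1, minimizer (1, 1)) is a hypothetical stand-in for the slide's example:

```python
import math

def steepest_descent(F, J, x0, tol=1e-5, max_iter=200):
    """Minimize g(x) = sum_i f_i(x)^2 by moving along -grad g(x),
    choosing the step with the three-point quadratic interpolation
    line search described above (alpha1 = 0, alpha2 = alpha3 / 2)."""
    def g(xv):
        return sum(fi * fi for fi in F(xv))
    x = list(x0)
    for k in range(1, max_iter + 1):
        Fx, Jx = F(x), J(x)
        # grad g(x) = 2 J(x)^T F(x)
        z = [2 * sum(Jx[r][i] * Fx[r] for r in range(len(Fx)))
             for i in range(len(x))]
        z0 = math.sqrt(sum(zi * zi for zi in z))
        if z0 < 1e-14:
            return x, k                  # stationary point of g
        z = [zi / z0 for zi in z]        # unit direction; descent is -z
        def h(a):
            return g([xi - a * zi for xi, zi in zip(x, z)])
        h1 = g(x)
        a3 = 1.0                         # shrink until h(a3) < h(0)
        while h(a3) >= h1 and a3 > 1e-10:
            a3 /= 2
        a2 = a3 / 2
        h2, h3 = h(a2), h(a3)
        # Newton divided differences for the quadratic P through 3 points
        b1 = (h2 - h1) / a2
        b2 = ((h3 - h2) / (a3 - a2) - b1) / a3
        a_crit = 0.5 * (a2 - b1 / b2) if b2 > 0 else a3
        a = a_crit if h(a_crit) < h3 else a3   # better of vertex, endpoint
        x = [xi - a * zi for xi, zi in zip(x, z)]
        if abs(g(x) - h1) < tol:
            return x, k
    return x, max_iter

# Hypothetical system f1 = x1^2 - 1, f2 = x2^2 - 1 (g minimized at (1, 1))
F = lambda v: [v[0]**2 - 1, v[1]**2 - 1]
J = lambda v: [[2 * v[0], 0.0], [0.0, 2 * v[1]]]
x, k = steepest_descent(F, J, [1.5, 0.5])
```

Since the method is only meant to supply a starting point for Newton's method, the loose stopping test on successive g-values is adequate.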
Steepest Descent Techniques Function f1, f2,…,fn and function g
Steepest Descent Techniques The gradient of g
Steepest Descent Techniques Main function
Steepest Descent Techniques Main function (con’t)
Steepest Descent Techniques Main function (con’t)
Steepest Descent Techniques Result with x(0) = (0.1, 0.1, -0.1)^t and tolerance 0.00001
Steepest Descent Techniques Result with x(0) = (0, 0, 0)^t and tolerance 0.00001
Homotopy and Continuation Methods Homotopy, or continuation, methods for nonlinear systems embed the problem to be solved within a collection of problems H(x, λ) = 0 that deform continuously from a problem with a known solution (λ = 0) to the original problem F(x) = 0 (λ = 1)
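A common choice, sketched below, is the homotopy H(x, λ) = F(x) - (1 - λ)F(x(0)), which is solved by x(0) at λ = 0. Differentiating H(x(λ), λ) = 0 gives the ODE J(x) dx/dλ = -F(x(0)), which can be integrated from λ = 0 to λ = 1 with classical Runge-Kutta (RK4) steps. The 2x2 test system is a hypothetical stand-in for the slide's example:

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting (small dense systems)."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            m = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= m * M[i][c]
    y = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(M[i][c] * y[c] for c in range(i + 1, n))
        y[i] = (M[i][n] - s) / M[i][i]
    return y

def continuation(F, J, x0, N=20):
    """Trace H(x, lam) = F(x) - (1 - lam) F(x0) = 0 from lam = 0
    (where x = x0 solves H = 0) to lam = 1 (the original problem) by
    integrating  dx/dlam = -J(x)^{-1} F(x0)  with N RK4 steps."""
    x = list(x0)
    Fx0 = F(x0)          # constant right-hand side
    h = 1.0 / N
    def rhs(xv):
        return solve(J(xv), [-fi for fi in Fx0])
    for _ in range(N):
        k1 = rhs(x)
        k2 = rhs([xi + 0.5 * h * ki for xi, ki in zip(x, k1)])
        k3 = rhs([xi + 0.5 * h * ki for xi, ki in zip(x, k2)])
        k4 = rhs([xi + h * ki for xi, ki in zip(x, k3)])
        x = [xi + (h / 6.0) * (a + 2 * b + 2 * c + d)
             for xi, a, b, c, d in zip(x, k1, k2, k3, k4)]
    return x

# Hypothetical 2x2 system x^2 + y^2 = 4, xy = 1, starting at (2, 1)
F = lambda v: [v[0]**2 + v[1]**2 - 4, v[0] * v[1] - 1]
J = lambda v: [[2 * v[0], 2 * v[1]], [v[1], v[0]]]
x = continuation(F, J, [2.0, 1.0], N=20)
```

The result typically lands close enough to a root that a few Newton iterations finish the job; the method assumes J(x) stays nonsingular along the path.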
Homotopy and Continuation Methods Main function
Homotopy and Continuation Methods Main function (con’t)
Homotopy and Continuation Methods Result with x(0) = (0.1, 0.1, -0.1)^t and tolerance 0.00001
Homotopy and Continuation Methods Result with x(0) = (0, 0, 0)^t and tolerance 0.00001