Chapter 10. Numerical Solutions of Nonlinear Systems of Equations


Chapter 10. Numerical Solutions of Nonlinear Systems of Equations Jihoon Myung Computer Networks Research Lab. Dept. of Computer Science and Engineering Korea University jmyung@korea.ac.kr

Contents
Fixed Points for Functions of Several Variables
Newton’s Method
Quasi-Newton Methods
Steepest Descent Techniques
Homotopy and Continuation Methods

Fixed Points for Functions of Several Variables A system of n nonlinear equations in n unknowns can be written as F(x) = 0, where F(x) = (f1(x), f2(x), …, fn(x))^t; the functions f1, f2, …, fn are the coordinate functions of F

Fixed Points for Functions of Several Variables Example 1. The 3×3 nonlinear system
3x1 - cos(x2 x3) - 1/2 = 0
x1^2 - 81(x2 + 0.1)^2 + sin x3 + 1.06 = 0
e^(-x1 x2) + 20x3 + (10π - 3)/3 = 0,
which has the exact solution (0.5, 0, -π/6)^t

Fixed Points for Functions of Several Variables Function G(x): the system F(x) = 0 is converted to the fixed-point form x = G(x) by solving the ith equation for xi

Fixed Points for Functions of Several Variables Main function

Fixed Points for Functions of Several Variables Result x(0) = (0.1, 0.1, -0.1)^t, Tolerance = 0.00001
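The fixed-point iteration above can be sketched in Python. This is a reconstruction, not the slide's original code (which was an image): it assumes the example is the standard 3×3 system with each equation solved for one unknown, consistent with the starting point x(0) = (0.1, 0.1, -0.1)^t and the tolerance shown in the result.

```python
import math

def G(x):
    """Fixed-point form x = G(x) of the 3x3 example system
    (assumed reconstruction; each equation solved for one unknown)."""
    x1, x2, x3 = x
    return [
        math.cos(x2 * x3) / 3.0 + 1.0 / 6.0,
        math.sqrt(x1 ** 2 + math.sin(x3) + 1.06) / 9.0 - 0.1,
        -math.exp(-x1 * x2) / 20.0 - (10.0 * math.pi - 3.0) / 60.0,
    ]

def fixed_point(x0, tol=1e-5, max_iter=100):
    """Iterate x(k) = G(x(k-1)) until the l-infinity change drops below tol."""
    x = list(x0)
    for k in range(1, max_iter + 1):
        x_new = G(x)
        if max(abs(a - b) for a, b in zip(x_new, x)) < tol:
            return x_new, k
        x = x_new
    raise RuntimeError("fixed-point iteration did not converge")

x, k = fixed_point([0.1, 0.1, -0.1])
print(k, x)  # approaches (0.5, 0, -pi/6)
```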

Fixed Points for Functions of Several Variables One way to accelerate convergence of the fixed-point iteration is to use the latest estimates x1(k), …, xi-1(k) in place of x1(k-1), …, xi-1(k-1) when computing xi(k), as in the Gauss-Seidel method for linear systems. This modification does not always accelerate the convergence

Fixed Points for Functions of Several Variables Main function for using the latest estimates

Fixed Points for Functions of Several Variables Result x(0) = (0.1, 0.1, -0.1)^t, Tolerance = 0.00001
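The accelerated variant can be sketched as below, again assuming the reconstructed fixed-point form of the example system; each component is computed from the most recently available values, in Gauss-Seidel fashion.

```python
import math

def seidel_step(x):
    """One sweep in which each component uses the latest available estimates."""
    x1, x2, x3 = x
    x1 = math.cos(x2 * x3) / 3.0 + 1.0 / 6.0                          # old x2, x3
    x2 = math.sqrt(x1 ** 2 + math.sin(x3) + 1.06) / 9.0 - 0.1         # new x1
    x3 = -math.exp(-x1 * x2) / 20.0 - (10.0 * math.pi - 3.0) / 60.0   # new x1, x2
    return [x1, x2, x3]

def seidel_fixed_point(x0, tol=1e-5, max_iter=100):
    """Gauss-Seidel-style fixed-point iteration for the example system."""
    x = list(x0)
    for k in range(1, max_iter + 1):
        x_new = seidel_step(x)
        if max(abs(a - b) for a, b in zip(x_new, x)) < tol:
            return x_new, k
        x = x_new
    raise RuntimeError("iteration did not converge")
```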

Newton’s Method Newton’s method for nonlinear systems uses an approach similar to the one-dimensional case: the one-variable iteration x_k = x_(k-1) - f(x_(k-1))/f'(x_(k-1)) generalizes to x(k) = x(k-1) - J(x(k-1))^-1 F(x(k-1)) in n dimensions, with the Jacobian matrix J taking the place of the derivative

Newton’s Method The Jacobian matrix J(x) is the n×n matrix whose (i, j) entry is ∂fi(x)/∂xj

Newton’s Method In practice, explicit computation of J(x)^-1 is avoided: a vector y is found that satisfies J(x(k-1))y = -F(x(k-1)), and the new approximation x(k) is obtained by adding y to x(k-1). Newton’s method can converge very rapidly once an approximation is obtained that is near the true solution, but it is not always easy to determine starting values that will lead to a solution, and the method is comparatively expensive to employ, since each step requires the Jacobian and the solution of a linear system. Good starting values can usually be found by the Steepest Descent method

Newton’s Method For solving J(x(k-1))y=-F(x(k-1)), use Gaussian elimination

Newton’s Method Gaussian elimination with partial pivoting

Newton’s Method Function F(x)

Newton’s Method Jacobian matrix

Newton’s Method Main function

Newton’s Method Result x(0) = (0.1, 0.1, -0.1)^t, Tolerance = 0.00001
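Newton's method for the (assumed) example system can be sketched as follows; numpy.linalg.solve stands in for the Gaussian elimination with partial pivoting used on the slides to solve J(x(k-1))y = -F(x(k-1)). The formulas for F and its Jacobian are reconstructions of the example, not the slide's original code.

```python
import numpy as np

def F(x):
    """Assumed 3x3 example system, written as F(x) = 0."""
    x1, x2, x3 = x
    return np.array([
        3.0 * x1 - np.cos(x2 * x3) - 0.5,
        x1 ** 2 - 81.0 * (x2 + 0.1) ** 2 + np.sin(x3) + 1.06,
        np.exp(-x1 * x2) + 20.0 * x3 + (10.0 * np.pi - 3.0) / 3.0,
    ])

def J(x):
    """Analytic Jacobian: entry (i, j) is the partial of f_i with respect to x_j."""
    x1, x2, x3 = x
    return np.array([
        [3.0, x3 * np.sin(x2 * x3), x2 * np.sin(x2 * x3)],
        [2.0 * x1, -162.0 * (x2 + 0.1), np.cos(x3)],
        [-x2 * np.exp(-x1 * x2), -x1 * np.exp(-x1 * x2), 20.0],
    ])

def newton(x0, tol=1e-5, max_iter=50):
    """Newton iteration: solve J(x)y = -F(x), then set x <- x + y."""
    x = np.asarray(x0, dtype=float)
    for k in range(1, max_iter + 1):
        y = np.linalg.solve(J(x), -F(x))  # never form J(x)^-1 explicitly
        x = x + y
        if np.linalg.norm(y, np.inf) < tol:
            return x, k
    raise RuntimeError("Newton's method did not converge")
```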

Quasi-Newton Methods Broyden’s method is a generalization of the Secant method to systems of nonlinear equations. It belongs to a class of methods known as least-change secant updates, which produce algorithms called quasi-Newton methods. The Jacobian matrix in Newton’s method is replaced with an approximation matrix that is updated at each iteration, and the convergence is superlinear

Quasi-Newton Methods An initial approximation x(0) is given, and the next approximation x(1) is calculated in the same manner as Newton’s method. If it is inconvenient to determine J(x(0)) exactly, difference quotients can be used to approximate the partial derivatives

Quasi-Newton Methods To compute x(2) and the later approximations, examine the Secant method for a single nonlinear equation, which replaces the derivative f'(x1) in Newton’s method with the difference quotient (f(x1) - f(x0))/(x1 - x0)

Quasi-Newton Methods Matrix inversion: the Sherman-Morrison formula allows the inverse of the updated matrix A_i to be computed directly from the inverse of A_(i-1), so no matrix needs to be inverted after the first iteration

Quasi-Newton Methods Matrix inversion (cont’d)

Quasi-Newton Methods Matrix inversion (cont’d)

Quasi-Newton Methods Matrix inversion (cont’d)

Quasi-Newton Methods Main function

Quasi-Newton Methods Main function (cont’d)

Quasi-Newton Methods Result x(0) = (0.1, 0.1, -0.1)^t, Tolerance = 0.00001

Quasi-Newton Methods Result with the Euclidean (l2) norm: x(0) = (0.1, 0.1, -0.1)^t, Tolerance = 0.00001
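Broyden's method with the Sherman-Morrison inverse update can be sketched like this. F is the same reconstructed example system; the initial Jacobian is approximated by difference quotients, as the slides suggest when J(x(0)) is inconvenient to determine exactly, and only that one matrix is ever inverted.

```python
import numpy as np

def F(x):
    """Assumed 3x3 example system, written as F(x) = 0."""
    x1, x2, x3 = x
    return np.array([
        3.0 * x1 - np.cos(x2 * x3) - 0.5,
        x1 ** 2 - 81.0 * (x2 + 0.1) ** 2 + np.sin(x3) + 1.06,
        np.exp(-x1 * x2) + 20.0 * x3 + (10.0 * np.pi - 3.0) / 3.0,
    ])

def jacobian_fd(F, x, h=1e-7):
    """Approximate the Jacobian columnwise with forward difference quotients."""
    x = np.asarray(x, dtype=float)
    f0 = F(x)
    Jm = np.empty((len(f0), len(x)))
    for j in range(len(x)):
        xp = x.copy()
        xp[j] += h
        Jm[:, j] = (F(xp) - f0) / h
    return Jm

def broyden(F, x0, tol=1e-5, max_iter=100):
    """Broyden's method: rank-one Sherman-Morrison updates of the inverse."""
    x = np.asarray(x0, dtype=float)
    A_inv = np.linalg.inv(jacobian_fd(F, x))  # the only inversion performed
    v = F(x)
    s = -A_inv @ v
    x = x + s
    for k in range(2, max_iter + 1):
        v_new = F(x)
        y = v_new - v
        v = v_new
        z = A_inv @ y
        # Sherman-Morrison: A_inv <- A_inv + (s - z) s^t A_inv / (s^t z)
        A_inv = A_inv + np.outer(s - z, s @ A_inv) / (s @ z)
        s = -A_inv @ v
        x = x + s
        if np.linalg.norm(s, np.inf) < tol:
            return x, k
    raise RuntimeError("Broyden's method did not converge")
```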

Steepest Descent Techniques The Steepest Descent method determines a local minimum for a multivariable function of the form g(x1, x2, …, xn); for a nonlinear system, one minimizes g(x) = f1(x)^2 + … + fn(x)^2, whose minimum value 0 is attained exactly at a solution of the system. The method converges only linearly to the solution, but it converges even for poor initial approximations, so it is used to find sufficiently accurate starting approximations for the Newton-based techniques

Steepest Descent Techniques The direction of greatest decrease in the value of g at x is the direction given by -∇g(x), the negative of the gradient

Steepest Descent Techniques Choose three numbers α1 < α2 < α3 that, we hope, are close to where the minimum value of h(α) = g(x - αz) occurs, where z is the steepest-descent direction. Construct the quadratic polynomial P that interpolates h at α1, α2, and α3. Define ᾱ in [α1, α3] so that P(ᾱ) is a minimum, and use P(ᾱ) to approximate the minimal value of h(α). In practice, choose α1 = 0; a number α3 is found with h(α3) < h(α1), and α2 is chosen to be α3/2. The minimum value of P occurs either at the only critical point of P or at the right endpoint α3

Steepest Descent Techniques Functions f1, f2, …, fn and the function g

Steepest Descent Techniques The gradient of g

Steepest Descent Techniques Main function

Steepest Descent Techniques Main function (con’t)

Steepest Descent Techniques Main function (con’t)

Steepest Descent Techniques Result x(0) = (0.1, 0.1, -0.1)^t, Tolerance = 0.00001

Steepest Descent Techniques Result x(0) = (0, 0, 0)^t, Tolerance = 0.00001
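The steepest descent procedure can be sketched as below, under the same assumption about the example system: g is the sum of squares of the f_i, the gradient is approximated by difference quotients, and the step length comes from the quadratic interpolation of h(α) at α1 = 0, α2 = α3/2, α3 described on the earlier slide. The loose stopping value of g reflects its role as a starting-point generator for Newton's method, not a high-accuracy solver.

```python
import numpy as np

def F(x):
    """Assumed 3x3 example system, written as F(x) = 0."""
    x1, x2, x3 = x
    return np.array([
        3.0 * x1 - np.cos(x2 * x3) - 0.5,
        x1 ** 2 - 81.0 * (x2 + 0.1) ** 2 + np.sin(x3) + 1.06,
        np.exp(-x1 * x2) + 20.0 * x3 + (10.0 * np.pi - 3.0) / 3.0,
    ])

def g(x):
    """Objective g(x) = f1(x)^2 + f2(x)^2 + f3(x)^2, zero exactly at a root."""
    return float(np.sum(F(x) ** 2))

def grad_g(x, h=1e-6):
    """Forward difference quotient approximation to the gradient of g."""
    x = np.asarray(x, dtype=float)
    g0 = g(x)
    out = np.empty_like(x)
    for i in range(len(x)):
        xp = x.copy()
        xp[i] += h
        out[i] = (g(xp) - g0) / h
    return out

def steepest_descent(x0, tol=0.05, max_iter=100):
    x = np.asarray(x0, dtype=float)
    for k in range(1, max_iter + 1):
        z = grad_g(x)
        z_norm = np.linalg.norm(z)
        if z_norm == 0.0:
            return x, k              # at a critical point of g
        z = z / z_norm               # unit steepest-descent direction
        phi = lambda a: g(x - a * z)
        a3 = 1.0                     # halve until h(a3) < h(0)
        while phi(a3) >= phi(0.0) and a3 > 1e-10:
            a3 /= 2.0
        a2 = a3 / 2.0
        # quadratic interpolating phi at 0, a2, a3 (Newton divided differences)
        d1 = (phi(a2) - phi(0.0)) / a2
        d2 = (phi(a3) - phi(a2)) / (a3 - a2)
        d3 = (d2 - d1) / a3
        a_crit = 0.5 * (a2 - d1 / d3) if d3 > 0 else a3
        a_best = a_crit if phi(a_crit) < phi(a3) else a3
        x = x - a_best * z
        if g(x) < tol:
            return x, k
    return x, max_iter
```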

Homotopy and Continuation Methods Homotopy, or continuation, methods for nonlinear systems embed the problem to be solved within a collection of problems: a family H(λ, x) is chosen so that λ = 0 gives a problem whose solution x(0) is known and λ = 1 gives H(1, x) = F(x), and the solution is followed as λ moves from 0 to 1. A common choice is H(λ, x) = F(x) - (1 - λ)F(x(0)); differentiating H(λ, x(λ)) = 0 with respect to λ gives the initial-value problem x'(λ) = -J(x(λ))^-1 F(x(0)), with x(λ=0) = x(0), which can be integrated from λ = 0 to λ = 1 by a method such as Runge-Kutta

Homotopy and Continuation Methods Main function

Homotopy and Continuation Methods Main function (con’t)

Homotopy and Continuation Methods Result x(0) = (0.1, 0.1, -0.1)^t, Tolerance = 0.00001

Homotopy and Continuation Methods Result x(0) = (0, 0, 0)^t, Tolerance = 0.00001
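The continuation approach can be sketched as follows, again assuming the reconstructed F and Jacobian of the example system: with H(λ, x) = F(x) - (1 - λ)F(x(0)), the path satisfies x'(λ) = -J(x)^-1 F(x(0)), which is integrated from λ = 0 to λ = 1 with N Runge-Kutta (RK4) steps.

```python
import numpy as np

def F(x):
    """Assumed 3x3 example system, written as F(x) = 0."""
    x1, x2, x3 = x
    return np.array([
        3.0 * x1 - np.cos(x2 * x3) - 0.5,
        x1 ** 2 - 81.0 * (x2 + 0.1) ** 2 + np.sin(x3) + 1.06,
        np.exp(-x1 * x2) + 20.0 * x3 + (10.0 * np.pi - 3.0) / 3.0,
    ])

def J(x):
    """Analytic Jacobian of the example system."""
    x1, x2, x3 = x
    return np.array([
        [3.0, x3 * np.sin(x2 * x3), x2 * np.sin(x2 * x3)],
        [2.0 * x1, -162.0 * (x2 + 0.1), np.cos(x3)],
        [-x2 * np.exp(-x1 * x2), -x1 * np.exp(-x1 * x2), 20.0],
    ])

def continuation(x0, N=4):
    """Integrate x'(lam) = -J(x)^-1 F(x0) from lam = 0 to 1 with N RK4 steps."""
    x = np.asarray(x0, dtype=float)
    b = -F(x)                        # right-hand side -F(x(0)) stays fixed
    h = 1.0 / N
    for _ in range(N):
        k1 = h * np.linalg.solve(J(x), b)
        k2 = h * np.linalg.solve(J(x + 0.5 * k1), b)
        k3 = h * np.linalg.solve(J(x + 0.5 * k2), b)
        k4 = h * np.linalg.solve(J(x + k3), b)
        x = x + (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
    return x
```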