1 A Matrix Free Newton/Krylov Method For Coupling Complex Multi-Physics Subsystems
Yunlin Xu, School of Nuclear Engineering, Purdue University
October 23, 2006

2 Content
- Introduction
- MFNK and Optimal Perturbation Size
  - Fixed Point Iteration (FPI) for coupling subsystems
  - A Matrix Free Newton/Krylov method based on FPI
  - Local convergence analysis of MFNK
  - Truncation and round-off error
  - Estimation of the optimal finite difference perturbation
- Global Convergence Strategies
  - Line search
  - Model trust region
- Numerical Examples
- Summary

3 Features of Multi-Physics Subsystems
- Multiple nonlinear subsystems are coupled together: the solution of each subsystem depends on external variables that come from the other subsystem, in addition to its own internal variables
- Each subsystem can be solved with reliable methods as long as the subsystems remain decoupled

4 Two General Approaches for Coupling Subsystems
- Analytic approach: reformulate the coupled system into one larger system of equations
  - Standard Newton-type methods can be applied
- Synthetic approach: combine the subsystem solvers for the coupled system
  - Utilizes the well-tested and reliable solution methods of each of the subsystems, because:
    - It may be too expensive to reformulate the coupled system and forego the significant investment in developing reliable solvers for each of the subsystems
    - One of the subsystems may be solved using commercial software that prevents access to the source code, which makes it impossible to reformulate the coupled equations for the analytic approach

5 Coupled Subsystem Example: Nuclear Reactor Simulation

6 Time Advancement: Marching vs. Nested Scheme
- Marching: TRAC-E and PARCS exchange data once per time step (step n to step n+1)
  - The time steps must be kept small for accuracy and stability reasons
- Nested: TRAC-E and PARCS iterate within each time step until converged
  - Computational cost for each time step is increased
  - Numerical stability and accuracy can be improved
  - The time step size may be extended

7 Ringhals BWR Stability (128 channels)
- 48 hours on a 2 GHz machine for initialization!

8 Synthetic Approaches
- Nested iteration
  - Subsystems are chained in a block Gauss-Seidel or block Jacobi iteration
  - Convergence is not guaranteed
- Matrix Free Newton/Krylov (MFNK) method
  - Approximate the Jacobian-vector product by a finite-difference quotient (see below)
  - The system Jacobian is not constructed
  - Local convergence is guaranteed
- Problems with direct application of MFNK for coupling of subsystems
  - Solvers for the subsystems are not fully utilized
  - It is difficult to find a good preconditioner for MFNK
  - In some cases it is not possible to obtain residuals for a subsystem, e.g., if the subsystem solver is commercial software that can be used only as a "black box"
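
The quotient referred to above is the standard Jacobian-free approximation of the action of the Jacobian on a Krylov vector v, with ε the finite-difference perturbation size discussed later in the talk:

```latex
J(x)\,v \;\approx\; \frac{F(x + \varepsilon v) - F(x)}{\varepsilon}
```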

9 Objectives of Research
- Propose a general approach to implement efficient matrix free Newton/Krylov methods for coupling complex subsystems with their respective solvers
- Identify and address specific issues that arise in implementing MFNK for practical applications
  - Local convergence analysis of the matrix free Newton/Krylov method
  - Optimal perturbation size for the finite difference approximation in MFNK
  - Globally convergent strategies

10 Fixed Point Iteration for Coupling Subsystems
- Block iteration: the subsystem solvers are applied in turn, each using the latest output of the other
- Block iteration for coupling subsystems is a fixed point iteration (FPI), x_{k+1} = Φ(x_k)
- A sufficient condition for local convergence of the FPI is ||Φ'(x*)|| < 1
- If ||Φ'(x*)|| > 1, the FPI may diverge
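
A minimal sketch of such a block Gauss-Seidel fixed point iteration; solve_A and solve_B are toy stand-ins for the subsystem solvers (they, their coefficients, and the tolerances are illustrative assumptions, not part of the original codes):

```python
import numpy as np

def solve_A(v):
    """Hypothetical subsystem A: returns its internal variable u given the
    external variable v coming from subsystem B."""
    return 1.0 + 0.3 * np.cos(v)

def solve_B(u):
    """Hypothetical subsystem B: returns its internal variable v given u."""
    return 0.5 * np.sin(u)

def fixed_point_iteration(u, v, tol=1e-10, max_sweeps=100):
    """Block Gauss-Seidel sweeps: each solver uses the latest value of the
    other subsystem's variable; stop when the update falls below tol."""
    for k in range(max_sweeps):
        u_new = solve_A(v)
        v_new = solve_B(u_new)
        if max(abs(u_new - u), abs(v_new - v)) < tol:
            return u_new, v_new, k + 1
        u, v = u_new, v_new
    return u, v, max_sweeps

print(fixed_point_iteration(0.0, 0.0))
```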

11 Matrix Free Newton Krylov Method Based on a Fixed Point Iteration
- Define a nonlinear system: F(x) = Φ(x) - x = 0
- The solution of this system is the fixed point of the function Φ(x), which is also the solution of the original coupled nonlinear system
- MFNK algorithm: apply an inexact Newton iteration to F(x) = 0, with each linear Newton system solved by a Krylov method whose Jacobian-vector products are approximated by the finite-difference quotient (sketch below)
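
A minimal, self-contained sketch of this construction using SciPy's Newton-Krylov solver on the same toy coupled system as above; phi and its coefficients are illustrative assumptions, and the rdiff argument plays the role of the finite-difference perturbation ε:

```python
import numpy as np
from scipy.optimize import newton_krylov

def phi(x):
    """One block Gauss-Seidel sweep over the toy subsystems, written as a
    fixed-point map on the stacked vector x = [u, v]."""
    u, v = x
    u_new = 1.0 + 0.3 * np.cos(v)   # hypothetical subsystem A solve
    v_new = 0.5 * np.sin(u_new)     # hypothetical subsystem B solve
    return np.array([u_new, v_new])

def residual(x):
    """MFNK nonlinear residual: the fixed point of phi is a root of F."""
    return phi(x) - x

# newton_krylov never forms the Jacobian of the coupled system; it
# approximates Jacobian-vector products by finite differences of F.
x_star = newton_krylov(residual, np.zeros(2), rdiff=1e-8, f_tol=1e-10)
print(x_star)
```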

12 Local Convergence of INM
- Inexact Newton Method (INM): at each outer iteration the Newton system is solved only approximately, leaving an inner residual r_k, i.e., F'(x_k) s_k + F(x_k) = r_k
- Local convergence of INM depends on the inner residual; assume ||r_k|| ≤ C ||F(x_k)||^p:
  - If p ≥ 2, the INM has local q-quadratic convergence
  - If 1 < p < 2, the INM converges with q-order at least p
  - If p = 1 and the constant C is small enough (C < 1), the INM has local q-linear convergence

13 Local Convergence of MFNK
- MFNK is an INM; its inner residual consists of two parts:
  - the iterative (Krylov) residual
  - the finite difference residual
- There are two conflicting sources of error in the finite difference:
  - truncation error
  - round-off error
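
For orientation, the usual first-order estimates of these two error contributions in the quotient (constants omitted; ε_mach denotes machine epsilon) are, under standard smoothness assumptions:

```latex
\left\| \frac{F(x+\varepsilon v) - F(x)}{\varepsilon} - J(x)\,v \right\|
\;\lesssim\;
\underbrace{\frac{\varepsilon}{2}\,\|F''\|\,\|v\|^{2}}_{\text{truncation}}
\;+\;
\underbrace{\frac{\varepsilon_{\mathrm{mach}}\,\|F(x)\|}{\varepsilon}}_{\text{round-off}}
```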

14 Local Convergence of MFNK (cont.)
- In theory, MFNK has local q-linear convergence when the perturbation size ε satisfies a suitable bound
- In practice, MFNK can achieve q-quadratic convergence for a suitably chosen ε
- The optimal ε should balance the round-off error and the truncation error
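
Balancing the two error terms from the previous slide (again up to problem-dependent constants; the exact estimate used in the talk is not reproduced here) gives the familiar square-root scaling of the optimal perturbation:

```latex
\frac{\varepsilon}{2}\,\|F''\|\,\|v\|^{2} \;\approx\; \frac{\varepsilon_{\mathrm{mach}}\,\|F(x)\|}{\varepsilon}
\quad\Longrightarrow\quad
\varepsilon_{\mathrm{opt}} \;\approx\; \sqrt{\frac{2\,\varepsilon_{\mathrm{mach}}\,\|F(x)\|}{\|F''\|\,\|v\|^{2}}}
\;=\; \mathcal{O}\!\left(\sqrt{\varepsilon_{\mathrm{mach}}}\right)
```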

15 Optimal vs Empirical Perturbation Size
- The norm of the Jacobian and the other quantities in the optimal-ε estimate can be estimated with information already provided by the MFNK algorithm
- An empirical prescription was proposed in an attempt to balance the truncation and round-off errors (Dennis)
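
The empirical formula itself did not survive in the transcript; one commonly quoted choice of this type in the Jacobian-free Newton/Krylov literature, given here only for orientation and not necessarily the prescription used in the talk, is (with x the current iterate and v the Krylov vector):

```latex
\varepsilon \;=\; \frac{\sqrt{(1 + \|x\|)\,\varepsilon_{\mathrm{mach}}}}{\|v\|}
```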

16 Global Convergence Strategies
- A solution x* of the system of nonlinear equations F(x) = 0 is also a global minimizer of the optimization problem min_x f(x) = (1/2)||F(x)||^2
- The Newton step s_N is the step from the current iterate x_c to the minimizer of a local model problem m(x_c + s)
- f(x_c + s_N) may be larger than f(x_c) when the step s_N is so large that m(x_c + s_N) is no longer a good approximation of f(x_c + s_N); in this case a globally convergent strategy is needed to force f(x_c + s_N) < f(x_c)
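
Written out, the merit function and the local model implied above are the standard choices for nonlinear equations; the unconstrained minimizer of m is exactly the Newton step s_N = -J(x_c)^{-1} F(x_c):

```latex
f(x) \;=\; \tfrac{1}{2}\,\|F(x)\|_{2}^{2},
\qquad
m(x_c + s) \;=\; \tfrac{1}{2}\,\|F(x_c) + J(x_c)\,s\|_{2}^{2},
\qquad
\nabla f(x) \;=\; J(x)^{T} F(x)
```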

17 Descent Direction
- The Newton step is a descent direction for both the objective function and its model
- (1) For any descent direction p_k, there exists a step length λ satisfying both the α-condition (sufficient decrease) and the β-condition (curvature); see the statement below
- (2) A sequence {x_k} generated by x_{k+1} = x_k + λ_k p_k and satisfying the previous conditions converges to a minimizer of f(x)
- Proofs of (1) and (2) can be found in Dennis & Schnabel's book
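
The α- and β-conditions referenced above, in the standard Dennis & Schnabel form with constants 0 < α < β < 1:

```latex
\alpha\text{-condition:}\quad f(x_k + \lambda p_k) \;\le\; f(x_k) + \alpha\,\lambda\,\nabla f(x_k)^{T} p_k
\qquad
\beta\text{-condition:}\quad \nabla f(x_k + \lambda p_k)^{T} p_k \;\ge\; \beta\,\nabla f(x_k)^{T} p_k
```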

18 Line Search
- Take the matrix-free Newton step as the descent direction, and select λ to minimize a model of g(λ) = f(x_c + λ s_N)
- A quadratic model of g(λ) is built from function values and one directional derivative
- λ is predicted from the minimizer of the quadratic model (sketch below)
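
A minimal sketch of a backtracking line search driven by such a quadratic model; the toy objective, the constant alpha = 1e-4, and the safeguard bounds are common textbook choices rather than values taken from the talk:

```python
import numpy as np

def backtracking_line_search(f, x, p, g_dot_p, alpha=1e-4, lam=1.0,
                             lam_min=0.1, lam_max=0.5, max_backtracks=20):
    """Backtrack along the descent direction p until the alpha-condition
    f(x + lam*p) <= f(x) + alpha*lam*(grad f . p) holds.  Each new lam is
    the minimizer of the quadratic model through g(0), g'(0), and g(lam),
    safeguarded to stay inside [lam_min*lam, lam_max*lam]."""
    f0 = f(x)
    for _ in range(max_backtracks):
        f_trial = f(x + lam * p)
        if f_trial <= f0 + alpha * lam * g_dot_p:
            return lam, f_trial
        # quadratic model g(t) ~ f0 + g_dot_p*t + c*t^2 fitted at t = lam
        c = (f_trial - f0 - g_dot_p * lam) / lam**2
        lam_new = -g_dot_p / (2.0 * c)
        lam = min(max(lam_new, lam_min * lam), lam_max * lam)
    return lam, f(x + lam * p)

# Toy usage: f(x) = 0.5*||F(x)||^2 for a scalar F, with the full Newton step as p
F = lambda x: np.array([np.exp(x[0]) - 2.0])
f = lambda x: 0.5 * float(F(x) @ F(x))
x = np.array([3.0])
J = np.array([[np.exp(x[0])]])        # Jacobian at x (analytic here; approximated in MFNK)
p = np.linalg.solve(J, -F(x))         # Newton step
g_dot_p = float((J.T @ F(x)) @ p)     # grad f . p = (J^T F) . p
print(backtracking_line_search(f, x, p, g_dot_p))
```

If the full step already satisfies the α-condition, no backtracking occurs and λ = 1 is kept, which preserves the fast local convergence of the Newton iteration.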

19 Information Required in the Quadratic Model
- Two function values: f at the current iterate and f at the trial point
- One gradient: the directional derivative of f along the step
- In MFNK the gradient term is approximated rather than computed exactly (one option is shown below)
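
One natural matrix-free approximation of the directional derivative needed here, reusing the same finite-difference quotient (the slide listed two alternatives that are not recovered in the transcript; this is offered as an illustration):

```latex
\nabla f(x_c)^{T} s \;=\; F(x_c)^{T} J(x_c)\, s
\;\approx\; F(x_c)^{T}\,\frac{F(x_c + \varepsilon s) - F(x_c)}{\varepsilon}
```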

20 Model Trust Region
- Minimize the model function in a neighborhood of the current iterate, the trust region: minimize m(x_c + s) subject to ||s|| ≤ δ_c
- [Figure: trust region of radius δ_c around x_c, the Newton step s_N to x_N, and the constrained step x_c + s(δ_c)]

21 Double Dogleg Curve
- Approximate the optimal curved path with a double dogleg curve
- [Figure: double dogleg path from x_c through the Cauchy point (C.P.) and the bent Newton point toward x_N, intersecting the trust region of radius δ_c]
- The step is taken along the double dogleg curve up to the trust region boundary
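
A compact sketch of the double dogleg step for a small dense model, following the standard Dennis & Schnabel construction; the bend factor 0.2 + 0.8*gamma and the SPD assumption on H are textbook choices, and in the talk the same idea is applied with the Newton step and Cauchy point projected onto the Krylov subspace:

```python
import numpy as np

def double_dogleg_step(g, H, delta):
    """Double dogleg approximation of the trust-region step for the model
    m(s) = g@s + 0.5*s@H@s subject to ||s|| <= delta, with H SPD."""
    s_n = np.linalg.solve(H, -g)                 # full Newton step
    if np.linalg.norm(s_n) <= delta:
        return s_n                               # Newton point already inside the region
    gg, gHg = g @ g, g @ H @ g
    s_cp = -(gg / gHg) * g                       # Cauchy point: minimizer along -g
    if np.linalg.norm(s_cp) >= delta:
        return -(delta / np.linalg.norm(g)) * g  # boundary hit on the steepest-descent leg
    gamma = gg**2 / (gHg * (g @ np.linalg.solve(H, g)))
    s_hat = (0.2 + 0.8 * gamma) * s_n            # bent Newton point (gamma <= 1)
    if np.linalg.norm(s_hat) <= delta:
        # last leg is collinear with s_n, so cut the Newton direction at the boundary
        return (delta / np.linalg.norm(s_n)) * s_n
    # otherwise intersect the segment Cauchy point -> bent Newton point with ||s|| = delta
    d = s_hat - s_cp
    a, b, c = d @ d, 2.0 * (s_cp @ d), s_cp @ s_cp - delta**2
    t = (-b + np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
    return s_cp + t * d

# Toy usage on a 2x2 quadratic model
H = np.array([[4.0, 1.0], [1.0, 3.0]])
g = np.array([1.0, 2.0])
print(double_dogleg_step(g, H, delta=0.3))
```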

22 Cauchy Point
- The Cauchy point is the minimizer of the model along the steepest descent direction: s_CP = -(||g||^2 / (g^T H g)) g
- The step to the Cauchy point is projected onto the Krylov subspace (Brown & Saad)

23 Example Problem I: Polynomials
- Two-dimensional second-order polynomials
- Solution
- Jacobian
- Nonlinear level

24 PLY 1 Truncation Error Dominated Case

25 PLY 1 Errors

26 PLY 1 Step Sizes (panels: Optimal, Empirical)

27 PLY 1 Step Size Parametric

28 PLY 2 Round-off Error Dominated Case

29 PLY 2 Errors

30 PLY 2 Step Sizes (panels: Optimal, Empirical)

31 PLY 2 Step Size Parametric

32 Numerical Examples: Navier-Stokes-Like Problem (Goyon)
- PDE: diffusion + convection + non-physical term + force function
- Boundary condition
- Force function
- Reference: Goyon, Preconditioned Newton Methods Using Incremental Unknowns Methods for the Resolution of a Steady-State Navier-Stokes-Like Problem, Applied Mathematics and Computation, 87 (1997), pp.

33 The Finite Difference Equations

34 Structure of the Matrices (panels: Jacobian, diagonal block)

35 NSL1: 50×50 mesh; solving (u, v) as one nonlinear system, w_0 = (1-8 ) w*

36 NSL1: Newton Iteration Error and Residual (panels: Error, Residual)

37 NSL1: Coupling Subsystems
- Solving u and v as two subsystems, coupled by FPI or MFNK

38 NSL1: Global Convergence, w_0 = (1-8 ) w*

39 NSL1 Residual

40 NSL1: First Backtracking
- The first backtracking occurred after the first Newton iteration
- [Table: L2 norm of the residual and lambda before backtracking, for No GS, LS (line search), and MTR (model trust region)]

41 NSL2: 1000×1000 mesh

42 NSL2: Residuals for FPI and MFNK (panels: FPI, MFNK)

43 NSL2: Optimal vs Empirical Finite Difference (panels: Optimal, Empirical)

44 NSL2 Step Sizes

45 NSL2 Step Size Parametric

46 Summary
- A general approach, MFNK, was presented here for coupling subsystems with their respective solvers
- Based on any FPI, a corresponding MFNK method can be constructed
- MFNK provides a more efficient method than FPI for coupling subsystems
- MFNK can converge in several cases in which the corresponding FPI diverges
- Locally, MFNK converges at least q-linearly, and in many cases q-quadratically
- A more sophisticated FPI scheme provides a more efficient nonlinear system for the corresponding MFNK

47 Summary (cont.)
- A method was proposed to estimate the optimal perturbation size for matrix free Newton/Krylov methods
- The method is based on an analysis of the truncation error and the round-off error introduced by the finite difference approximation
- The optimal perturbation size can be accurately estimated within the MFNK algorithm at almost no additional computational cost
- Numerical examples show that the optimal perturbation size leads to improved convergence of the MFNK method compared to the perturbation determined by empirical formulas

48 Summary (cont.)
- Two global convergence strategies, line search and model trust region, were implemented within the framework of MFNK
  - For the line search method, a quadratic or higher-order model was used with an approximation for the gradient
  - For the model trust region strategy, a double dogleg approach was implemented using the projection of the Newton step and the Cauchy point within the Krylov subspace
  - The model trust region strategy showed better local performance than the line search strategy
- A peer-to-peer parallel MFNK algorithm was implemented