Johann Radon Institute for Computational and Applied Mathematics, Linz, Austria, 15 March 2004.


Richard Baltensperger, Université Fribourg
Manfred Trummer, Pacific Institute for the Mathematical Sciences & Simon Fraser University
Johann Radon Institute for Computational and Applied Mathematics, Linz, Austria, 15 March 2004

Pacific Institute for the Mathematical Sciences

Simon Fraser University

Banff International Research Station
PIMS – MSRI collaboration (with MITACS)
- 5-day workshops
- 2-day workshops
- Focused Research
- Research In Teams

Round-off Errors
1991 – a Patriot missile battery in Dhahran, Saudi Arabia, fails to intercept an Iraqi missile, resulting in 28 fatalities. The problem was traced back to the accumulation of round-off error in the computer arithmetic.

Introduction
Spectral differentiation:
- Approximate the function f(x) by a global interpolant
- Differentiate the global interpolant exactly

Features of Spectral Methods
+ Exponential (spectral) accuracy: converges faster than any power of 1/N
+ Fairly coarse discretizations give good results
- Full instead of sparse matrices
- Tighter stability restrictions
- Less robust, difficulties with irregular domains

Types of Spectral Methods
Spectral-Galerkin: work in transformed space, that is, with the coefficients of the expansion.
Example: u_t = u_xx

Types of Spectral Methods
Spectral collocation (pseudospectral): work in physical space. Choose collocation points x_k, k = 0, …, N, and approximate the function of interest by its values at those collocation points. Computations rely on interpolation.
Issues: aliasing, ease of computation, nonlinearities

Interpolation
- Periodic function, equidistant points: Fourier
- Polynomial interpolation: Chebyshev points, Legendre points, Hermite, Laguerre
- Interpolation by rational functions

Differentiation Matrix D
Discrete data set f_k, k = 0, …, N
- Interpolate between the collocation points x_k: p(x_k) = f_k
- Differentiate p(x)
- Evaluate g_k = p'(x_k)
All operations are linear: g = Df

Software
- Funaro: FORTRAN code, various polynomial spectral methods
- Don-Solomonoff, Don-Costa: PSEUDOPAK, FORTRAN code, more engineering oriented, includes filters, etc.
- Weideman-Reddy: based on differentiation matrices, written in MATLAB (fast MATLAB programming)

Polynomial Interpolation
Lagrange form:
p(x) = Σ_{k=0..N} f_k ℓ_k(x),  ℓ_k(x) = Π_{j≠k} (x − x_j)/(x_k − x_j)
“Although admired for its mathematical beauty and elegance, it is not useful in practice”
“Expensive” – “Difficult to update”

Barycentric formula, version 1
p(x) = ℓ(x) Σ_{k=0..N} [w_k/(x − x_k)] f_k,  where ℓ(x) = Π_{j=0..N} (x − x_j) and w_k = 1/Π_{j≠k} (x_k − x_j)

Barycentric formula, version 2
p(x) = [Σ_{k=0..N} w_k f_k/(x − x_k)] / [Σ_{k=0..N} w_k/(x − x_k)]
(obtained by dividing version 1 by the interpolant of the constant function 1)
- Set-up: O(N²)
- Evaluation: O(N)
- Update (add a point): O(N)
- New f_k values: no extra work!
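A minimal sketch of evaluating the second barycentric formula p(x) = [Σ w_k f_k/(x − x_k)] / [Σ w_k/(x − x_k)] in pure Python; the function name `bary_eval` and the test function exp are illustrative choices, and the nodes/weights used are the standard second-kind Chebyshev ones:

```python
import math

def bary_eval(x, nodes, w, f):
    # Second (true) barycentric formula:
    # p(x) = sum(w_k * f_k / (x - x_k)) / sum(w_k / (x - x_k))
    num = den = 0.0
    for xk, wk, fk in zip(nodes, w, f):
        if x == xk:          # exactly at a node: return the data value
            return fk
        t = wk / (x - xk)
        num += t * fk
        den += t
    return num / den

# Chebyshev points of the 2nd kind with the standard simplified weights
N = 8
nodes = [math.cos(k * math.pi / N) for k in range(N + 1)]
w = [((-1) ** k) * (0.5 if k in (0, N) else 1.0) for k in range(N + 1)]
f = [math.exp(xk) for xk in nodes]    # sample data f(x) = exp(x)
print(bary_eval(0.3, nodes, w, f))    # close to exp(0.3)
```

Note the O(N) cost per evaluation: one pass over the nodes, regardless of how the weights were obtained.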

Barycentric formula: weights w_k

Barycentric formula: weights w_k
- Equidistant points: w_k = (−1)^k (N choose k)
- Chebyshev points (1st kind): w_k = (−1)^k sin((2k+1)π/(2N+2))
- Chebyshev points (2nd kind): w_k = (−1)^k δ_k, with δ_0 = δ_N = 1/2 and δ_k = 1 otherwise
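A short check, in pure Python, that the simplified second-kind Chebyshev weights (−1)^k δ_k agree with the weights computed from the definition w_k = 1/Π_{j≠k}(x_k − x_j) up to one common scaling constant (which is all the barycentric formula requires):

```python
import math

N = 6
nodes = [math.cos(k * math.pi / N) for k in range(N + 1)]

# Weights from the definition: w_k = 1 / prod_{j != k} (x_k - x_j)
true_w = []
for k, xk in enumerate(nodes):
    p = 1.0
    for j, xj in enumerate(nodes):
        if j != k:
            p *= xk - xj
    true_w.append(1.0 / p)

# Simplified 2nd-kind weights: (-1)^k, halved at the two endpoints
simple_w = [((-1) ** k) * (0.5 if k in (0, N) else 1.0) for k in range(N + 1)]

# The two sets agree up to one common scaling factor
scale = true_w[0] / simple_w[0]
dev = max(abs(tw - scale * sw) / abs(tw) for tw, sw in zip(true_w, simple_w))
print(dev)  # round-off level
```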

Barycentric formula: weights w_k
- The weights can all be multiplied by the same constant
- The formula interpolates for any choice of weights: rational interpolation!
- The relative size of the weights indicates ill-conditioning

Computation of the Differentiation Matrix Entirely based upon interpolation.

Barycentric Formula
Barycentric differentiation (Schneider/Werner):
D_kj = (w_j/w_k) / (x_k − x_j) for j ≠ k,  D_kk = −Σ_{j≠k} D_kj
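A sketch of building the differentiation matrix from barycentric weights along the lines of the Schneider/Werner entries, with the diagonal set to the negative row sum; pure Python, with the function name `diffmat` as an illustrative choice. Since spectral differentiation is exact for polynomials of degree ≤ N, applying D to samples of x³ should reproduce 3x² to round-off:

```python
import math

def diffmat(nodes, w):
    # D[k][j] = (w_j / w_k) / (x_k - x_j) for j != k,
    # D[k][k] = minus the sum of the off-diagonal row entries
    n = len(nodes)
    D = [[0.0] * n for _ in range(n)]
    for k in range(n):
        s = 0.0
        for j in range(n):
            if j != k:
                D[k][j] = (w[j] / w[k]) / (nodes[k] - nodes[j])
                s += D[k][j]
        D[k][k] = -s
    return D

N = 10
nodes = [math.cos(k * math.pi / N) for k in range(N + 1)]
w = [((-1) ** k) * (0.5 if k in (0, N) else 1.0) for k in range(N + 1)]
D = diffmat(nodes, w)

f = [x ** 3 for x in nodes]                    # f(x) = x^3
df = [sum(Dk[j] * f[j] for j in range(N + 1)) for Dk in D]
err = max(abs(d - 3 * x ** 2) for d, x in zip(df, nodes))
print(err)  # round-off level: exact for polynomials of degree <= N
```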

Chebyshev Differentiation
Differentiation matrix for the Chebyshev points x_k = cos(kπ/N)

The Chebyshev Matrix has Behavioural Problems
- Trefethen-Trummer, 1987
- Rothman, 1991
- Breuer-Everson, 1992
- Don-Solomonoff, 1995/1997
- Bayliss-Class-Matkowsky, 1995
- Tang-Trummer, 1996
- Baltensperger-Berrut, 1996

Chebyshev Matrix and Errors
[Figure: the Chebyshev differentiation matrix, its absolute errors, and its relative errors]

Round-off Error Analysis
Each computed quantity carries a small relative error, and so, therefore, do the quantities derived from it that enter the entries of D.

Roundoff Error Analysis
- With “good” computation we expect an error in D_01 of O(N²ε)
- Most elements of D are O(1)
- Some are of O(N), and a few are O(N²)
- We must be careful to see whether absolute or relative errors enter the computation

Remedies
- Preconditioning: add ax + b to f to create a function which is zero at the boundary
- Compute D in higher precision
- Use trigonometric identities
- Use symmetry: flipping trick
- NST: negative sum trick

More Ways to Compute Df
- FFT-based approach
- Schneider-Werner formula
If we only want Df but not the matrix D (e.g., time stepping, iterative methods), we can compute Df for any f directly.
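A matrix-free sketch along Schneider-Werner lines: using the negative-sum diagonal, (Df)_k = Σ_{j≠k} (w_j/w_k)(f_j − f_k)/(x_k − x_j), so Df can be formed without ever storing D. Pure Python; the function name `df_schneider_werner` and the test function sin are illustrative choices:

```python
import math

def df_schneider_werner(nodes, w, f):
    # (Df)_k = sum_{j != k} (w_j / w_k) * (f_j - f_k) / (x_k - x_j)
    # This is D @ f with the diagonal folded in via the row-sum-zero property.
    n = len(nodes)
    out = []
    for k in range(n):
        s = 0.0
        for j in range(n):
            if j != k:
                s += (w[j] / w[k]) * (f[j] - f[k]) / (nodes[k] - nodes[j])
        out.append(s)
    return out

N = 12
nodes = [math.cos(k * math.pi / N) for k in range(N + 1)]
w = [((-1) ** k) * (0.5 if k in (0, N) else 1.0) for k in range(N + 1)]
f = [math.sin(x) for x in nodes]
df = df_schneider_werner(nodes, w, f)
err = max(abs(d - math.cos(x)) for d, x in zip(df, nodes))
print(err)  # spectrally small
```

Note that the differences f_j − f_k appear explicitly here, which is the point of the later slides on finite difference quotients.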

Chebyshev Differentiation Matrix
“Original formulas” for the Chebyshev points x_k = cos(kπ/N):
D_jk = (c_j/c_k) (−1)^{j+k} / (x_j − x_k) for j ≠ k, with c_0 = c_N = 2 and c_k = 1 otherwise
D_kk = −x_k / (2(1 − x_k²)) for 0 < k < N,  D_00 = −D_NN = (2N² + 1)/6
Cancellation in x_j − x_k!

Trigonometric Identities
x_j − x_k = cos(jπ/N) − cos(kπ/N) = 2 sin((j+k)π/(2N)) sin((k−j)π/(2N))
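A quick numerical check of the identity: the product-of-sines form computes the same node difference without subtracting two nearly equal cosines (pure Python):

```python
import math

N = 16
worst = 0.0
for j in range(N + 1):
    for k in range(N + 1):
        if j == k:
            continue
        # Direct difference of Chebyshev nodes (cancellation-prone for j near k)
        direct = math.cos(j * math.pi / N) - math.cos(k * math.pi / N)
        # Equivalent product-of-sines form from the identity above
        trig = 2 * math.sin((j + k) * math.pi / (2 * N)) \
                 * math.sin((k - j) * math.pi / (2 * N))
        worst = max(worst, abs(direct - trig))
print(worst)  # the two expressions agree to round-off
```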

Flipping Trick
Use the symmetry x_{N−k} = −x_k and “flip” the upper half of D into the lower half, or the upper left triangle into the lower right triangle.
Reason: sin(π − x) is not as accurate as sin(x).

NST: Negative Sum Trick
Spectral differentiation is exact for constant functions, so each row of D must sum to zero: set D_kk = −Σ_{j≠k} D_kj.
Arrange the order of summation to sum the smaller elements first – requires sorting.
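A sketch combining the classical “original” Chebyshev formulas with the negative sum trick, in pure Python (without the refinement of sorting the summands by size). Overwriting the diagonal with the negative off-diagonal row sum forces rows to sum to (nearly) zero, and differentiating x² recovers 2x to round-off:

```python
import math

def cheb_D(N):
    # Classical explicit formulas for the Chebyshev differentiation matrix
    x = [math.cos(k * math.pi / N) for k in range(N + 1)]
    c = [2.0 if k in (0, N) else 1.0 for k in range(N + 1)]
    D = [[0.0] * (N + 1) for _ in range(N + 1)]
    for j in range(N + 1):
        for k in range(N + 1):
            if j != k:
                D[j][k] = (c[j] / c[k]) * ((-1) ** (j + k)) / (x[j] - x[k])
    for k in range(1, N):
        D[k][k] = -x[k] / (2.0 * (1.0 - x[k] ** 2))
    D[0][0] = (2.0 * N ** 2 + 1.0) / 6.0
    D[N][N] = -(2.0 * N ** 2 + 1.0) / 6.0
    return x, D

N = 32
x, D = cheb_D(N)
# Negative sum trick: replace each diagonal entry by minus the sum of the
# off-diagonal entries in its row, so constants differentiate to ~0
for k in range(N + 1):
    D[k][k] = -sum(D[k][j] for j in range(N + 1) if j != k)

f = [xk ** 2 for xk in x]
df = [sum(Dk[j] * f[j] for j in range(N + 1)) for Dk in D]
err = max(abs(d - 2 * xk) for d, xk in zip(df, x))
print(err)  # small: exact for x^2 up to round-off
```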

Numerical Example

Observations
- The original formula for D is very inaccurate
- Trig/flip + NST (Weideman-Reddy) provides a good improvement
- FFT not as good as “expected”, in particular when N is not a power of 2
- NST applied to the original D gives the best matrix
- There are even more accurate ways to compute Df

Machine Dependency
Results can vary substantially from machine to machine, and may depend on software.
- Intel/PC: FFT performs better
- SGI
- SUN
- DEC Alpha

Understanding the Negative sum trick

[Figure: error in D and error in f]

Understanding NST 2

Understanding NST 3
The inaccurate matrix elements are multiplied by very small numbers, leading to O(N²ε) errors – optimal accuracy.

Understanding the Negative Sum Trick
The NST is an (inaccurate) implementation of the Schneider-Werner formula:
(Df)_k = Σ_{j≠k} (w_j/w_k) (f_j − f_k)/(x_k − x_j)

Understanding the Negative Sum Trick
Why do we obtain superior results when applying the NST to the original (inaccurate) formula?
Accuracy of finite difference quotients:

Finite Difference Quotients
For monomials a cancellation of the cancellation errors takes place, e.g.:
Typically f_j − f_k is less accurate than x_j − x_k, so computing x_j − x_k more accurately does not help!

Finite Difference Quotients

Fast Schneider-Werner
- The cost of Df is 2N²; SW costs 3N²
- Can implement Df with a “fast SW” method
- Size of each corner block is N^{1/2}
- Cost: 2N² + O(N)

Polynomial Differentiation
- For example, Legendre, Hermite, Laguerre
- Fewer tricks available, but the negative sum trick still provides improvements
- Ordering the summation may become even more important

Higher Order Derivatives
- Best not to compute D^(2) = D², etc.
- Formulas by Welfert (implemented in Weideman-Reddy)
- The negative sum trick again shows improvements
- Higher order differentiation matrices are badly conditioned, so gaining a little more accuracy is more important than for first order

Using D to Solve Problems
- In many applications the first and last row/column of D is removed because of boundary conditions
- f → Df appears to be most sensitive to how D is computed (forward problem = differentiation)
- We have observed improvements in solving BVPs

Solving a Singular BVP

Results

Close
- Demystified some of the less intuitive behaviour of differentiation matrices
- Get more accuracy for the same cost
- Study the effects of using the various differentiation matrices in applications
- The forward problem is more sensitive than the inverse problem
- Df: time-stepping, iterative methods

To Think About
- Is double precision enough as we become able to solve “bigger” problems?
- Irony of spectral methods: exponential convergence, yet round-off error is the limiting factor
- Accuracy requirements limit us to N of moderate size – so the FFT is not so much faster than the matrix-based approach

And now for the twist….