1 Computational Methods in Physics PHYS 3437 Dr Rob Thacker Dept of Astronomy & Physics (MM-301C) thacker@ap.smu.ca

2 Today's Lecture
- Notes on projects & talks
- Issues with adaptive step size selection
- Second order ODEs
- nth order ODEs
- Algorithm outline for handling nth order ODEs
- Advance warning: no class on March 31st

3 Project time-line
- By now you should have approximately ½ of the project coding & development done
- There are only 4 weeks until the first talk
- I would advise you to have at least some outline of how your write-up will look already
- Project reports are due on the last day of term

4 Talk Format
- Allowing for changing over presenters, connecting/disconnecting etc., we can allow up to 60 minutes per lecture period
- 3 people per lecture means 20 minutes per presenter:
  - 15 minute presentation
  - 5 minutes for questions
- Remember your presentation will be marked by your peers

5 Issues with the adaptive step size algorithm
- Step (8):
  - If err < 0.1×errmax → h is too small. Increase by 1.5 for the next step
  - If err > errmax → h is too big. Halve h and repeat the iteration
  - If 0.1×errmax ≤ err ≤ errmax → h is OK
- Clearly, if there are problems with convergence this step will keep dividing h forever. You need to set a limit here, either by not allowing h to get smaller than some preset minimum or by counting the number of times you halve h
- Because of rounding error as you increment x, it is very unlikely that you will precisely hit xmax with the final step
- Therefore you should choose h = min(h, xmax - x), where x is the current position
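The guards above can be sketched in a few lines. This is a minimal illustration, not the lecture code: the 1.5 growth factor, the halving rule, and the min() clipping follow the slide, while the function names and the (step, accepted) return convention are my own.

```python
def adjust_step(err, errmax, h, hmin):
    """Apply the step (8) rules; return (new_h, step_accepted)."""
    if err > errmax:
        # h too big: halve it, but never below the preset floor hmin,
        # so a non-converging interval cannot divide h forever
        return max(h / 2.0, hmin), False
    if err < 0.1 * errmax:
        # h unnecessarily small: grow it by 1.5 for the next step
        return 1.5 * h, True
    return h, True          # 0.1*errmax <= err <= errmax: h is OK

def clip_to_end(h, x, xmax):
    """Avoid overshooting xmax due to rounding as x is incremented."""
    return min(h, xmax - x)
```

The `max(h/2, hmin)` line is the "preset limit" option from the slide; counting the halvings instead would work equally well.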

6 Issues with the adaptive step size algorithm: more "gotchas"
- Step (7): Estimate error
- Add a very small ("tiny") number to the absolute value of the denominator to avoid a divide by zero (when the two estimates are very close to zero you might, in incredibly rare situations, get a denominator of zero)
- Since you store values of y and x in arrays, it is possible that you may run out of storage (because you need too many points). You should check the number of positions stored against the maximum allowed in your declaration of the x & y arrays. This avoids the possibility of a segmentation fault
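Both gotchas can be sketched as follows. The names `TINY`, `IMAX`, `rel_err`, and `store` are illustrative, and the error formula assumes the relative-difference estimate (difference of the two step estimates over the magnitude of one of them) used earlier in the course.

```python
TINY = 1e-30    # added to the denominator to avoid divide by zero
IMAX = 1000     # declared storage size for the x and y arrays

def rel_err(ystar, ytilde):
    # relative difference of the two-half-step and one-full-step
    # estimates; TINY guards the case where both are ~0
    return abs(ystar - ytilde) / (abs(ystar) + TINY)

def store(xs, x):
    # bounds check before storing a new point: in the Fortran/C
    # original this prevents a segmentation fault
    if len(xs) >= IMAX:
        raise RuntimeError("out of storage: increase imax")
    xs.append(x)
```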

7 Second order ODEs
- Consider the general second order ODE y'' = g(x, y, y')
- We now require two initial values be provided, namely the value y_0 and the derivative y'(x_0)
- These are called the Cauchy conditions
- If we let z = y', then z' = y'' and we have the first order system y' = z, z' = g(x, y, z)   (1)
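The reduction in (1), written as a sketch: the state is the vector (y, z) with derivatives (z, g). The g used here is an assumed example (the harmonic oscillator, y'' = -y), purely for illustration.

```python
def g(x, y, z):
    # example right-hand side: y'' = -y (an assumption, not from the slide)
    return -y

def deriv(x, state):
    # given the state vector (y, z) with z = y', return (y', z') = (z, g)
    y, z = state
    return [z, g(x, y, z)]
```

Feeding `deriv` to any single-function solver that accepts vectors is all the reduction requires.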

8 Second order ODEs cont
- We have thus turned a second order ODE into a first order ODE for a vector
- We can apply the R-K solver to the system, but you now have two components to integrate
- At each step you must update all x, y and z values

9 Diagrammatically
[Figure: two panels showing y(x) and z(x) over the interval x_0 to x_1, starting from y_0 and z_0. Since y' = z and z' = g(x, y, y'), the initial slopes are y'_0 = z_0 and z'_0 = g_0.]

10 nth order ODEs
- Systems with higher than second order derivatives are actually quite rare in physics
- Nonetheless we can adapt the idea used for 2nd order systems to nth order
- Suppose we have a system specified by y^(n) = g(x, y, y', …, y^(n-1))
- Such an equation requires n initial values for the derivatives; suppose again we have the Cauchy conditions y(x_0), y'(x_0), …, y^(n-1)(x_0)

11 Definitions for the nth order system

12 So another system of coupled first order equations that can be solved via R-K.

13 Algorithm for solving such a system
- Useful parameters to set:
  - imax = number of points at which y(x) is to be evaluated (1000 is reasonable)
  - nmax = highest order of ODE to be solved (say 9)
  - errmax = highest tolerated error (say 1.0×10⁻⁹)
- Declare x(imax), y(imax,nmax), y0(nmax), yl(nmax), ym(nmax), yr(nmax), ytilde(nmax), ystar(nmax)
- Note that the definitions used here are not quite consistent with the variable definitions used in the discussion of the single function case

14 User inputs
- Need to get from the user (not necessarily at run time):
  - The g(x, y, y', …) function
  - The domain of x, i.e. what are xmin and xmax
  - The order of the ODE to be solved, stored in variable nord
  - The initial values for y and the derivatives (the Cauchy conditions), stored in y0(nmax)

15 Correspondence of arrays to variables
- The arrays y(imax,nmax) and y0(nmax) correspond to the y derivatives as follows:
  - y(1:imax,1) ≡ y values, with y(1,1) = y_0 = y0(1)
  - y(1:imax,2) ≡ y' values, with y(1,2) = y'_0 = y0(2)
  - y(1:imax,3) ≡ y'' values, with y(1,3) = y''_0 = y0(3)
  - …
  - y(1:imax,nord) ≡ y^(nord-1) values, with y(1,nord) = y^(nord-1)_0 = y0(nord)

16 Choose initial step size & initialize yl values
- Apply the same criterion as standard Runge-Kutta:
  - dx = 0.1×errmax×(xmax-xmin)
  - dxmin = 10⁻³×dx
  - We can use this value to ensure that the adaptive step size is never less than 1/1000th of the initial guess
- x(1) = xl = xmin
  - Set initial position for solver
- Initialize yl (left y-values for the first interval):
  - do n=1,nord: yl(n) = y0(n) = y(1,n)

17 Start adaptive loop
- Set i=1; this will count the number of x positions evaluated
- dxh = dx/2 --- half width of zone
- xr = xl + dx --- right hand boundary
- xm = xl + dxh --- mid point of zone
- Perform the R-K calculations on this zone for all y_n:
  - Calculate all y values on the right boundary using a single R-K step (stored in ytilde(nmax))
  - Calculate all y values on the right boundary using two half R-K steps (stored in ystar(nmax)), for example:
      call rk(xl,yl,ytilde,nord,dx)
      call rk(xl,yl,ym,nord,dxh)
      call rk(xm,ym,ystar,nord,dxh)
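The three rk calls above amount to one full step and two half steps over the same zone. A sketch, with `rk` taken as any single-step integrator with the (hypothetical) signature `rk(x, y, h) -> y_new`:

```python
def advance_zone(rk, xl, yl, dx):
    """One zone of the adaptive loop: return (ytilde, ystar)."""
    dxh = dx / 2.0
    xm = xl + dxh                 # mid point of zone
    ytilde = rk(xl, yl, dx)       # one full step to the right boundary
    ym = rk(xl, yl, dxh)          # first half step to the mid point...
    ystar = rk(xm, ym, dxh)       # ...then second half step to the boundary
    return ytilde, ystar
```

Comparing `ytilde` against `ystar` is what drives the error estimate in the next step of the algorithm.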

18 Now evaluate the Richardson-extrapolation-esque value yr(n)
- err = 0.
- do n=1,nord: form the extrapolated value yr(n) from ystar(n) and ytilde(n), and accumulate the largest relative error into err
- If err < 0.1×errmax then increase dx: dx = dx×1.5
- If err > errmax → dx is too big: dx = max(dxh,dxmin) and repeat the evaluation of this zone
- If 0.1×errmax ≤ err ≤ errmax → dx is OK & we need to store all results
- The error is now set over all the functions being integrated
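The error accumulation in the loop above can be sketched as taking the worst relative error over all components (the `tiny` guard is the same one discussed earlier; the exact extrapolation formula on the slide was lost, so only the error part is shown):

```python
def max_rel_error(ystar, ytilde, tiny=1e-30):
    """Worst relative error over all nord components of the system."""
    err = 0.0
    for ys, yt in zip(ystar, ytilde):
        err = max(err, abs(ys - yt) / (abs(ys) + tiny))
    return err
```

Using the maximum means the step size is controlled by whichever component of the system is hardest to integrate.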

19 Update values and prepare for next step
- Increment i: i = i+1 (check that it doesn't exceed imax)
- x(i) = xr
- xl = xr (note: should check xl hasn't gone past xmax & that the h value is chosen appropriately)
- do n=1,nord
  - y(i,n) = yr(n)
  - yl(n) = yr(n)
- Then return to the top of the loop and do the next step

20 Runge-Kutta routine
- Subroutine rk(xl,yl,yr,nord,dx)
- yl and yr will be arrays of size nord
- Set useful variables:
  - dxh = dx/2.
  - xr = xl + dx
  - xm = xl + dxh
- Recall, given y' = f(x,y), the point (xl,yl), and step dx, the fourth order R-K estimate is yr = yl + (dx/6)(f0 + 2 f1 + 2 f2 + f3)

21 Steps in algorithm
- Complications: we have a vector of functions and a vector of yl values
- call derivs(xl,yl,f0,nord) (sets all function f0 values)
- do n=1,nord: ytmp(n) = yl(n) + dxh×f0(n)
- call derivs(xm,ytmp,f1,nord) (sets f1 values)
- do n=1,nord: ytmp(n) = yl(n) + dxh×f1(n)
- call derivs(xm,ytmp,f2,nord) (sets f2 values)
- do n=1,nord: ytmp(n) = yl(n) + dx×f2(n)
- call derivs(xr,ytmp,f3,nord) (sets f3 values)

22 Calculate R-K formula & derivs subroutine
- do i=1,nord: yr(i) = yl(i) + (dx/6)×(f0(i) + 2×f1(i) + 2×f2(i) + f3(i))
- That's the end of the R-K routine
- subroutine derivs(x,y,yp,nord)
  - do n=1,nord-1: yp(n) = y(n+1)
  - Lastly set yp(nord) = g(x,y,y',…,y^(nord-1))

23 Summary
- There are a few issues with the adaptive step size algorithm you need to be concerned about (avoiding divide by zero etc.)
- Second order systems can be turned into coupled first order systems
- nth order systems can be turned into n coupled first order systems
- The adaptive R-K algorithm for vectors is in principle similar to that for a single function
- However, you must loop over the vectors

24 Implicit Methods: Backward Euler method
NON-EXAMINABLE BUT USEFUL TO KNOW
- Recall that the Forward Euler method predicts forward using the derivative f(t_n, y_n) at the start of the step
- Rather than predicting forward, we can use the derivative at the end of the step: y_{n+1} = y_n + h f(t_{n+1}, y_{n+1})
- Rewriting in terms of n+1 and n, the unknown y_{n+1} now appears on both sides
- Replacing y_{n+1} = r, we need to use a root finding method to solve r - y_n - h f(t_{n+1}, r) = 0
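A sketch of one Backward Euler step, using Newton's method with a centred numeric derivative as the root finder (the slide leaves the choice of root finding method open, so this is just one option):

```python
def backward_euler_step(f, t, y, h, newton_iters=20):
    """Solve F(r) = r - y - h*f(t+h, r) = 0 for r = y_{n+1}."""
    r = y                                  # initial guess: previous value
    eps = 1e-7                             # finite-difference half-width
    for _ in range(newton_iters):
        F = r - y - h * f(t + h, r)
        # dF/dr = 1 - h * df/dy, estimated by a centred difference
        dF = 1.0 - h * (f(t + h, r + eps) - f(t + h, r - eps)) / (2.0 * eps)
        r -= F / dF                        # Newton update
    return r
```

For the linear test problem f = -15y this converges to the exact implicit update y_{n+1} = y_n/(1 + 15h); for nonlinear f the same loop applies unchanged, which is where the extra expense of implicit methods comes from.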

25 Notes on implicit methods
- Implicit methods tend to be more stable and, for the same step size, more accurate
- The trade-off is the increased expense of using an iterative procedure to find the roots
  - This can become very expensive in coupled systems
- Richardson Extrapolation can also be applied to implicit ODE solvers, resulting in higher order schemes with good convergence properties
  - Use exactly the same procedure as outlined in the previous lecture: compare expansions at h and h/2
- Crank-Nicolson is another popular implicit method
  - It relies upon the derivative at both the start and end points of the interval
  - Second order accurate solution

26 Multistep methods
- Thus far we've considered self starting methods that use only values from x_n and x_{n+1}
- Alternatively, accuracy can be improved by using a linear combination of additional points
- Utilize y_{n-s+1}, y_{n-s+2}, …, y_n to construct approximations to derivatives of order up to s at t_n
- Example for s=2

27 Comparison of single and multistep methods

28 Second order method
- We can utilize this relationship to describe a multistep second order method
- Generalized to higher orders, these methods are known as Adams-Bashforth (predictor) methods
  - s=1 recovers the Euler method
- Implicit methodologies are possible as well
  - Adams-Moulton (predictor-corrector) methods
- Since these methods rely on multiple previous steps, the first few values of y must be calculated by another method, e.g. R-K
  - The starting method needs to be as accurate as the multistep method

29 "Stiff" problems
- Definition of stiffness:
  - "Loosely speaking, the initial value problem is referred to as being stiff if the absolute stability requirement dictates a much smaller time step than is needed to satisfy approximation requirements alone."
  - Formally: an IVP is stiff in some interval [0,b] if the step size needed to maintain stability of the forward Euler method is much smaller than the step size required to represent the solution accurately.
- Stability requirements are overriding accuracy requirements
- Why does this happen?
  - Trying to integrate smooth solutions that are surrounded by strongly divergent or oscillatory solutions
  - Small deviations away from the true solution lead to forward terms being very inaccurate

30 Example
- Consider: y'(t) = -15y(t), t ≥ 0, y(0) = 1
- Exact solution: y(t) = e^(-15t), so y(t) → 0 as t → ∞
- If we examine the forward Euler method, strong oscillatory behaviour forces us to take very small steps even though the function looks quite smooth
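The oscillation is easy to reproduce. With h = 0.125 the forward Euler update factor is 1 - 15h = -0.875, so each step flips the sign of the solution even though the exact solution decays monotonically:

```python
def euler(f, y0, h, nsteps):
    """Forward Euler on y' = f(t, y); returns the list of y values."""
    t, y, ys = 0.0, y0, [y0]
    for _ in range(nsteps):
        y = y + h * f(t, y)    # y_{n+1} = y_n + h*f(t_n, y_n)
        t += h
        ys.append(y)
    return ys
```

For example, `euler(lambda t, y: -15.0 * y, 1.0, 0.125, 4)` gives 1, -0.875, 0.765625, ... with alternating signs: bounded here, but clearly not tracking e^(-15t).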

31 Implicit methods in stiff problems
- Because implicit methods can use longer timesteps, they are strongly favoured in integrations of stiff systems
- Consider a two-stage Adams-Moulton integrator: y_{n+1} = y_n + (h/2)(f(t_n, y_n) + f(t_{n+1}, y_{n+1}))
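For the linear test problem y' = λy the two-stage Adams-Moulton (trapezoidal) update can be solved for y_{n+1} in closed form, so no root finding is needed; this sketch uses that shortcut to show the stable behaviour at h = 0.125:

```python
def am2_linear(lam, y0, h, nsteps):
    """Two-stage Adams-Moulton on y' = lam*y, solved in closed form."""
    # y_{n+1} = y_n + h/2*(lam*y_n + lam*y_{n+1})
    # => y_{n+1} = y_n * (1 + lam*h/2) / (1 - lam*h/2)
    growth = (1.0 + 0.5 * lam * h) / (1.0 - 0.5 * lam * h)
    ys = [y0]
    for _ in range(nsteps):
        ys.append(ys[-1] * growth)
    return ys
```

With λ = -15 and h = 0.125 the growth factor is about 0.032: the solution decays monotonically, with none of the sign flipping that forward Euler shows at the same step size.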

32 Adams-Moulton solution for h=0.125
- Much better behaviour and convergence

33 Choosing stiff solvers
- This isn't as easy as you might think
- Performance of different algorithms can be quite dependent upon the specific problem
- Researchers often write papers comparing the performance of different solvers on a given problem and then advise on which one to use
  - This is a sensible way to do things
  - I recommend you do the same if you have a stiff problem
- Try solvers from library packages like ODEPACK or the Numerical Recipes routines

34 Stability of the Forward Euler method
- Stability is more important than the truncation error
- Consider y' = λy for some complex λ
  - Provided Re λ < 0 the solution is bounded
- Substitute into the Euler method: y_{n+1} = (1 + λΔt) y_n
- For y_n to remain bounded we must have |1 + λΔt| ≤ 1
- Thus a poorly chosen Δt that breaks the above inequality will lead to y_n increasing without limit
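The bound |1 + λΔt| ≤ 1 can be checked numerically by iterating the amplification factor. For λ = -15 (the stiff example above), Δt = 0.1 gives a factor of -0.5 (stable) while Δt = 0.2 gives -2 (unstable):

```python
def euler_amplification(lam, dt, nsteps, y0=1.0):
    """Iterate y_{n+1} = (1 + lam*dt)*y_n and return |y_nsteps|."""
    y = y0
    for _ in range(nsteps):
        y *= (1.0 + lam * dt)
    return abs(y)
```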

35 Behaviour of small errors
- We considered the previous y' = λy equation because it describes the behaviour of small changes
- Suppose we have a solution y_s = y + δ where δ is a small error
- Substitute into y' = f(t,y) and use a Taylor expansion
- To leading order in δ, the error obeys δ' = (∂f/∂y) δ
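The Taylor expansion referred to above can be written out as follows (the slide's own equation images were lost, so this reconstructs the standard steps):

```latex
% substitute y_s = y + \delta into y' = f(t, y):
y' + \delta' = f(t, y + \delta)
             = f(t, y) + \frac{\partial f}{\partial y}\,\delta + O(\delta^2)
% since y' = f(t, y), to leading order in \delta:
\delta' = \frac{\partial f}{\partial y}\,\delta
```

So small errors obey the linear test equation y' = λy with λ = ∂f/∂y, which is why the forward Euler stability bound on the previous slide governs the growth of errors in a general problem.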

36 Next lecture
- Monte Carlo methods

