1 Computational Methods in Physics PHYS 3437 Dr Rob Thacker Dept of Astronomy & Physics (MM-301C)

2 Today's Lecture
Functions and roots: Part II
Global bracket finding techniques
Müller-Brent root finder
Example

3 Global Bracket Finding
Dr Clarke's Global Bracket Finding Algorithm!
The explanation is somewhat lengthy – don't be too concerned if you don't "get it all" first time out – however the fundamental concepts are simple.
Broad-brush:
Calculate an array x(n): the values of the independent variable x at which f(x) will be evaluated. These values won't necessarily be equally spaced; in fact we might expect them not to be.
Store all the values f(x(n)), evaluating for all n, although n may need to be a large number.
Apply a root finder to each pair of values: if f(x(i))*f(x(i+1)) < 0 there is a root between this pair; if there is no sign change, skip to the next pair.
Report all roots found.

4 Issues to address
How do we know how large to make the domain of x we consider? We need limits x_min and x_max.
Some problems have natural limits (see the example later).
The roots may all be positive, in which case we only need an upper bound.
For polynomials (∑ a_n x^n) the highest-order terms dominate at large x, so a_(n-1) + a_n x ≈ 0 near the largest root, giving x_max ≥ −a_(n-1)/a_n.
For the smallest root, a_0 + a_1 x ≈ 0, so x_min ≤ −a_0/a_1.
However, in many cases we may just have to guess!
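As a quick check of these polynomial bounds (a made-up example, not one from the lecture): for p(x) = x³ − 6x² + 11x − 6, whose roots are 1, 2 and 3, the estimates give x_max ≥ −a_2/a_3 = 6 and x_min ≤ −a_0/a_1 = 6/11 ≈ 0.55. All three roots do lie inside [0.55, 6], so a scan of that interval would bracket every root.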

5 How many points do we need?
This is equivalent to asking "how big does the spacing between points need to be?"
(Figure: the function sampled starting from x(1) = x_min at point A, with candidate next points B, C and D at increasing separations Δx_B, Δx_C and Δx_D.)

6 Selecting the step size Δx
To be conservative we want to pick our first step, x(2) = x(1) + dx(1), such that there are no roots between x(1) and x(2).
Δx_B achieves this if A corresponds to the first point.
At each stage we should adapt dx(i): if x(i+1) = x(i) + dx(i), we want to be reasonably sure there isn't more than one root between the two points.
Δx_C achieves this, but looks slightly dangerous as a general choice – it could cover two roots on the right of the diagram.
Δx_D is just too big.

7 Evaluating an optimal Δx
The step needs to be sensitive to local changes in the function: Δx should contain at most one root.
Finding an optimal Δx is a bit like "groping around in the dark":
Start with small steps.
If the function feels "straight", take bigger steps.
If the function has rapid changes in direction, take smaller steps.

8 Follow the function "length" to find Δx
(Figure: a sampled interval showing Δx, Δy and the path length Δl, together with the half-step quantities Δx_1/2 and Δl_1/2.)
In the flatter part of the function, halving Δx doesn't have too big an impact on Δl/Δx – the step size is OK.
Where the function changes rapidly, halving Δx produces a significant change in Δl/Δx.

9 Quick summary of conditions
If Δl/Δx ~ Δl_1/2/Δx_1/2 then Δx is OK.
If Δl/Δx and Δl_1/2/Δx_1/2 are very close then we may actually be able to increase the size of Δx.
If Δl/Δx and Δl_1/2/Δx_1/2 are "very different" then Δx needs to be decreased.
We try using small Δx until we find a separation that produces the first condition.
Remember: we are using Δl/Δx to tell us what is happening to the value of the function locally.
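A minimal sketch of this heuristic in Fortran is given below. The tolerance rtol, the grow/shrink factors, the starting step and the test function are illustrative assumptions, not the values used in Dr Clarke's actual routine:

      program adaptive_step_demo
        implicit none
        real(8) :: x, dx, sfull, shalf
        real(8), parameter :: rtol = 0.1d0, grow = 1.5d0, shrink = 0.5d0

        x  = 0.0d0          ! current sampling point
        dx = 0.1d0          ! trial step
        do
           sfull = slope(x, dx)           ! Δl/Δx over the full step
           shalf = slope(x, 0.5d0*dx)     ! Δl_1/2/Δx_1/2 over the half step
           if (abs(sfull - shalf) <= rtol*abs(shalf)) exit   ! slopes agree: dx is acceptable
           dx = shrink*dx                 ! slopes disagree: take a smaller step and retry
        end do
        if (abs(sfull - shalf) <= 0.1d0*rtol*abs(shalf)) dx = grow*dx  ! nearly identical: allow a bigger next step
        print *, 'accepted step size dx =', dx

      contains

        function slope(x0, h) result(s)
          ! Path-length slope Δl/Δx ≈ sqrt(1 + (Δf/Δx)**2) across [x0, x0+h]
          real(8), intent(in) :: x0, h
          real(8) :: s
          s = sqrt(1.0d0 + ((f(x0 + h) - f(x0))/h)**2)
        end function slope

        function f(x) result(y)
          ! Placeholder test function; the real routine evaluates the user's f(x)
          real(8), intent(in) :: x
          real(8) :: y
          y = sin(3.0d0*x) - 0.5d0*x
        end function f

      end program adaptive_step_demo

In the full bracket finder this accept/shrink/grow logic would be applied at every sample point x(i) as the scan proceeds.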

10 Slight modification – not intuitive
We really want to consider the change in path length as a function of the relative change in Δx and x.
So rather than using f'(x) = Δf/Δx directly, we use a derivative scaled by the local values of x and f(x) – this expresses the change in relative terms.
Although, for numerical stability reasons, we don't quite use x and f(x) themselves (see the next slide).
The numerical value of the derivative is then given by the corresponding finite-difference expression.

11 Efficiency & stability concerns Using Using Reduces the number of function calls significantly near x=0 Reduces the number of function calls significantly near x=0 This was found by trial and error This was found by trial and error Using Using Doesn’t have a huge advantage except making divide by zero less likely Doesn’t have a huge advantage except making divide by zero less likely

12 Final part of algorithm
We now have the mechanisms to find the x(n) and the function evaluations f(n).
Apply root finding to each pair using the following loop:

      do i = 1, n-1
         if (f(i)*f(i+1) .lt. 0.0) then
            ! sign change: find the root in [x(i), x(i+1)] with your preferred method and report it
         end if
      end do

I think I will set a homework question with a detailed explanation of the algorithm.
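To make the whole scan concrete, here is one way the pieces might be put together, with equally spaced sample points and bisection as the "preferred method". The interval, point count, tolerance and test polynomial are assumptions for this sketch rather than the lecture's code:

      program bracket_scan_demo
        implicit none
        integer, parameter :: n = 200
        real(8) :: x(n), fv(n), root
        integer :: i

        ! For the sketch, use equally spaced points on an assumed interval;
        ! in the real algorithm x(i) comes from the adaptive step-size rule.
        do i = 1, n
           x(i)  = -5.0d0 + 10.0d0*real(i-1, 8)/real(n-1, 8)
           fv(i) = f(x(i))
        end do

        ! Scan each adjacent pair for a sign change and polish with bisection.
        do i = 1, n-1
           if (fv(i)*fv(i+1) < 0.0d0) then
              root = bisect(x(i), x(i+1))
              print *, 'root found near x =', root
           end if
        end do

      contains

        function f(x) result(y)
          real(8), intent(in) :: x
          real(8) :: y
          y = x**3 - 6.0d0*x**2 + 11.0d0*x - 6.0d0   ! placeholder test polynomial
        end function f

        function bisect(a, b) result(c)
          ! Simple bisection; assumes f(a)*f(b) < 0
          real(8), intent(in) :: a, b
          real(8) :: c, lo, hi
          lo = a;  hi = b
          do while (hi - lo > 1.0d-12)
             c = 0.5d0*(lo + hi)
             if (f(lo)*f(c) <= 0.0d0) then
                hi = c
             else
                lo = c
             end if
          end do
          c = 0.5d0*(lo + hi)
        end function bisect

      end program bracket_scan_demo

Against the placeholder cubic this should report roots near x = 1, 2 and 3.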

13 Müller-Brent method
The most expensive part of a root finding algorithm is frequently the function evaluation.
In the secant method the n+1 guess is based upon the function evaluations from steps n and n-1; we throw away the n-2 step.
With only two function evaluations the fitting method must be linear.
We could instead keep the n-2 step, giving three points that we can use for quadratic interpolation.

14 Secant method compared to Müller-Brent
The secant method fits a straight line between (x(n), f(x(n))) and (x(n-1), f(x(n-1))); this corresponds to the dashed line in the figure.
(Figure: f(x) sampled at x(n), x(n-1) and x(n-2), with the secant line shown dashed and the quadratic fit dot-dashed.)
For Müller-Brent we rely on three function evaluations to fit a quadratic, as given by the dot-dashed line.

15 Fitting a quadratic
Three points are enough data to fit a quadratic q(x).
Since we know q(x) passes through all three points, we obtain two equations for the two unknowns a and b (one standard way of writing this fit is sketched below).
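The fitted form and conditions did not survive the transcript; the standard Müller construction (consistent with the a, b, c referred to on the next slide) takes

      q(x) = a (x − x(n))² + b (x − x(n)) + c,   with c = f(x(n)),

and demands q(x(n-1)) = f(x(n-1)) and q(x(n-2)) = f(x(n-2)). Writing d1 = x(n-1) − x(n) and d2 = x(n-2) − x(n), these conditions read

      a d1² + b d1 = f(x(n-1)) − f(x(n))
      a d2² + b d2 = f(x(n-2)) − f(x(n))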

16 Solving for a & b Given the two equations we can eliminate a or b to get equations for the two constants Given the two equations we can eliminate a or b to get equations for the two constants We now have a,b,c so we can use the general formula to calculate the zeros for x (n+1) -x (n) We now have a,b,c so we can use the general formula to calculate the zeros for x (n+1) -x (n)

17 General quadratic solution issues
Recall we talked about the problems of differencing very similar numbers: precision is lost due to the fixed number of digits in the mantissa.
We use the same method discussed in the previous lectures for improving stability (a short sketch follows): if b ≥ 0, take the + root.
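A small Fortran sketch of the standard stable evaluation (variable names are mine, not the lecture's):

      subroutine stable_quadratic_roots(a, b, c, x1, x2)
        ! Roots of a*x**2 + b*x + c = 0 without subtracting nearly equal numbers:
        ! q = -(b + sign(sqrt(b**2 - 4ac), b))/2, then x1 = q/a and x2 = c/q.
        implicit none
        real(8), intent(in)  :: a, b, c
        real(8), intent(out) :: x1, x2
        real(8) :: disc, q
        disc = sqrt(b*b - 4.0d0*a*c)      ! assumes real roots (b**2 >= 4ac)
        q  = -0.5d0*(b + sign(disc, b))   ! add terms of like sign: no cancellation
        x1 = q/a
        x2 = c/q
      end subroutine stable_quadratic_roots

With b ≥ 0 the sign function adds the two positive terms, which is the "+ root" choice on the slide; the second root then follows from x1*x2 = c/a without any subtraction of nearly equal numbers.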

18 Full Müller-Brent
The previous two equations for x(n+1) give the guesses for the root.
The method can suffer from problems similar to NR if the iterate lands too close to an extremum, though. (This part of the algorithm is due to Müller.)
Brent added the option of a bisection step to improve the behaviour – the same idea as the hybrid NR-bisection algorithm we discussed. A sketch of the hybrid step logic follows.
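A rough sketch of how the safeguarding might be arranged is shown below. This is an illustrative combination of a Müller step with a bisection fallback in the spirit of the hybrid scheme, not Brent's actual algorithm (which adds further refinements):

      function safeguarded_step(xn, xn1, xn2, fn, fn1, fn2, lo, hi) result(xnew)
        ! One safeguarded step: try the Müller update built from the three most
        ! recent iterates; fall back to bisecting the bracket [lo, hi] if the
        ! update is unusable or falls outside the bracket.
        ! (A production version would also guard against d1, d2 or d1-d2 = 0.)
        implicit none
        real(8), intent(in) :: xn, xn1, xn2      ! iterates x(n), x(n-1), x(n-2)
        real(8), intent(in) :: fn, fn1, fn2      ! f at those iterates
        real(8), intent(in) :: lo, hi            ! current bracket containing the root
        real(8) :: xnew
        real(8) :: d1, d2, a, b, c, disc, denom

        d1 = xn1 - xn
        d2 = xn2 - xn
        c  = fn
        a  = ((fn1 - fn)*d2    - (fn2 - fn)*d1   ) / (d1*d2*(d1 - d2))
        b  = ((fn2 - fn)*d1**2 - (fn1 - fn)*d2**2) / (d1*d2*(d1 - d2))
        disc = b*b - 4.0d0*a*c

        if (disc >= 0.0d0) then
           denom = b + sign(sqrt(disc), b)       ! stable choice: largest-magnitude denominator
           if (denom /= 0.0d0) then
              xnew = xn - 2.0d0*c/denom          ! Müller update for x(n+1)
              if (xnew > lo .and. xnew < hi) return   ! accept if it stays inside the bracket
           end if
        end if
        xnew = 0.5d0*(lo + hi)                   ! otherwise take a bisection step
      end function safeguarded_step

After each step the caller evaluates f(xnew) and shrinks [lo, hi] so that the sign change is retained, exactly as in the hybrid NR-bisection scheme.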

19 Higher order fitting methods
In practice, quadratic is as high as we need to go.
Firstly, if the quadratic fit guesses a root outside the bracket, a cubic fit will not do any better.
Secondly, when it converges, the quadratic fit usually takes about 3 steps, whereas a cubic fit would require 4 steps just to generate the initial points needed for a guess (x(n-3), x(n-2), x(n-1), x(n)).
You can think of it this way: by the time we can fit a cubic, a quadratic method will likely have found the root anyway…

20 Example of root finding from QM
Consider the problem of a square well potential.
(Figure: a well of depth V0 extending from x = −a to x = +a.)
The time independent Schrödinger equation is the starting point (a sketch of the standard form follows).
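For reference – the equation itself did not survive the transcript, so this is the standard textbook form, with the well taken as V(x) = −V0 for |x| < a and V(x) = 0 outside, so that bound states have −V0 < E < 0:

      −(ħ²/2m) d²ψ/dx² + V(x) ψ = E ψ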

21 Substitute potential
Solving the pair of equations (1) involves fitting a sinusoidal solution inside the square well and a decaying exponential solution outside it. Both ψ and its derivative ψ' must be continuous at x = ±a.
We won't go through the details of the solution process, but two types of solution can be found, the parameters of which are given by transcendental equations.
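The pair of equations (1) is presumably the Schrödinger equation written separately inside and outside the well; in the standard notation,

      inside  (|x| < a):  d²ψ/dx² + k²ψ = 0,   with k² = 2m(E + V0)/ħ²
      outside (|x| > a):  d²ψ/dx² − κ²ψ = 0,   with κ² = −2mE/ħ²

so ψ is sinusoidal inside the well and decays like e^(−κ|x|) outside it for a bound state.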

22 The solution classes
Class I gives the even solutions for ψ; Class II gives the odd solutions. (The defining transcendental equations, and the constraint noted on the slide, are sketched below.)
We solve these equations to find the allowed values of the well parameter – these give the energy levels E of the solutions. However, we do not know a priori where the roots will be, so we need to use a bracket finder.
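In the usual parameterisation (an assumed reconstruction – the slide's own symbols did not survive), write ξ = ka and ρ² = 2mV0a²/ħ², so that κa = √(ρ² − ξ²). Matching ψ and ψ' at x = ±a then gives

      Class I  (even ψ):  ξ tan ξ = √(ρ² − ξ²)
      Class II (odd ψ):   ξ cot ξ = −√(ρ² − ξ²)

and the "Note" on the slide is presumably the constraint that ξ and κa are not independent: ξ² + (κa)² = ρ².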

23 Root finding
We recast the two solution classes as the equations
      f(ξ) = ξ tan ξ − √(ρ² − ξ²)   and   g(ξ) = ξ cot ξ + √(ρ² − ξ²)
What problems might we expect? Both tan and cot have odd-ordered poles at which they change sign. As we discussed in the last lecture, this can lead to singularities being interpreted as roots – we need to look out for this.
(Figure: a plot of tan ξ showing its poles.)
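For the bracket finder, the two functions might be coded as below (a sketch using the reconstructed forms above; rho is the well-strength parameter, set to an arbitrary value here, and the natural search interval – the kind of natural limit mentioned on slide 4 – is 0 < ξ < ρ):

      module well_functions
        implicit none
        real(8), parameter :: rho = 4.0d0    ! assumed well strength: rho**2 = 2*m*V0*a**2/hbar**2
      contains
        function f_even(xi) result(y)
          ! Class I (even) condition, f(ξ) = ξ tan ξ − √(ρ² − ξ²)
          real(8), intent(in) :: xi
          real(8) :: y
          y = xi*tan(xi) - sqrt(rho*rho - xi*xi)
        end function f_even

        function g_odd(xi) result(y)
          ! Class II (odd) condition, g(ξ) = ξ cot ξ + √(ρ² − ξ²)  (not valid at ξ = 0)
          real(8), intent(in) :: xi
          real(8) :: y
          y = xi*cos(xi)/sin(xi) + sqrt(rho*rho - xi*xi)
        end function g_odd
      end module well_functions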

24 Strategies for dealing with poles
(Figure: sample points f(1)-f(4) on a tan-like curve, with f(1) taken near a root and f(3) near an odd-order pole.)
For the root: f(1)f'(1) > 0.
For the odd pole: f(3)f'(3) < 0.
So the sign of the product f·f' can be used to distinguish between a root and an odd-order pole.
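One way the slide's test might be wired into the scan loop, following its sign convention (the product f·f' is taken at the sample point just past the feature; the derivative step h and the routine names are assumptions of this sketch):

      logical function looks_like_root(xr, fr)
        ! .true. if the sign change just before xr behaves like a root,
        ! .false. if it behaves like an odd-order pole.
        ! For a tan-like curve, f*f' > 0 just past a root and f*f' < 0 just past
        ! an odd pole, matching the f(1)f'(1) > 0 / f(3)f'(3) < 0 statements above.
        implicit none
        real(8), intent(in) :: xr, fr          ! right-hand point of the pair and f there
        real(8), external   :: func            ! the function being scanned (supplied elsewhere)
        real(8), parameter  :: h = 1.0d-6      ! assumed step for the one-sided derivative
        real(8) :: fprime
        fprime = (func(xr + h) - fr)/h         ! crude local estimate of f'(xr)
        looks_like_root = (fr*fprime > 0.0d0)
      end function looks_like_root

In the scan loop this would be called only when f(i)*f(i+1) < 0, and intervals flagged as poles would simply be skipped.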

25 Strategy 2: multiply by a new function
We can multiply f by cos ξ, and g by sin ξ, to remove the poles (the resulting functions are sketched below).
These functions share the same roots as f and g.
WARNING: watch out for any spurious roots that are introduced by multiplying by the new function! Make sure you check the roots explicitly.
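With the reconstructed forms of f and g used above, the pole-free functions would be

      F(ξ) = cos ξ · f(ξ) = ξ sin ξ − cos ξ √(ρ² − ξ²)
      G(ξ) = sin ξ · g(ξ) = ξ cos ξ + sin ξ √(ρ² − ξ²)

Both are finite everywhere on 0 ≤ ξ ≤ ρ, and away from the zeros of cos ξ and sin ξ they vanish exactly where f and g do.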

26 Consider the multiplied function G(ξ)
By inspection we can see that ξ = 0 is a root of this equation.
However, if we examine the original g(ξ) as ξ → 0, it does not vanish there (see below).
Always be aware that multiplying by a new function may induce spurious roots.
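Using the reconstructed forms: as ξ → 0, ξ cot ξ → 1 and √(ρ² − ξ²) → ρ, so g(ξ) → 1 + ρ ≠ 0. The zero of G(ξ) = sin ξ · g(ξ) at ξ = 0 therefore comes entirely from the factor sin ξ that we multiplied in – it is a spurious root, not an energy level.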

27 Summary
Global bracket finding is difficult. The bracket finder given here is a sound first approach, but it isn't perfect.
The key step is the locally adaptive evaluation of the spacing between evaluation points.
The Müller-Brent method improves upon Newton-Raphson by using a quadratic fit.
Distinguishing between poles and roots is possible, provided one is prepared to look at derivatives.
Be careful when multiplying by additional functions to remove poles – this can induce spurious roots.

28 Next Lecture
Interpolation

