
1 Numerical Methods Marisa Villano, Tom Fagan, Dave Fairburn, Chris Savino, David Goldberg, Daniel Rave

2 An Overview The Method of Finite Differences Error Approximations and Dangers Approximations of Diffusions Crank-Nicolson Scheme Stability Criterion

3 Finite Differences Best known numerical method of approximation Marisa Villano

4 Finite Differences Approximating the derivative with a difference quotient from the Taylor series. Function of one variable: choose mesh size Δx; then u_j ≈ u(jΔx)

5 First Derivative Approximations Backward difference: (u_j − u_{j−1}) / Δx Forward difference: (u_{j+1} − u_j) / Δx Centered difference: (u_{j+1} − u_{j−1}) / (2Δx)

6 Taylor Expansion u(x + Δx) = u(x) + u′(x)Δx + 1/2 u″(x)(Δx)^2 + 1/6 u‴(x)(Δx)^3 + O((Δx)^4) u(x − Δx) = u(x) − u′(x)Δx + 1/2 u″(x)(Δx)^2 − 1/6 u‴(x)(Δx)^3 + O((Δx)^4)

7 Taylor Expansion u′(x) = [u(x) − u(x − Δx)] / Δx + O(Δx) u′(x) = [u(x + Δx) − u(x)] / Δx + O(Δx) u′(x) = [u(x + Δx) − u(x − Δx)] / (2Δx) + O((Δx)^2)

8 Second Derivative Approximation Centered difference: (u_{j+1} − 2u_j + u_{j−1}) / (Δx)^2 Taylor expansion: u″(x) = [u(x + Δx) − 2u(x) + u(x − Δx)] / (Δx)^2 + O((Δx)^2)
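The error orders above are easy to check numerically. Below is a minimal Python sketch (not from the slides) that evaluates the four difference quotients for the hypothetical test function u(x) = sin x; halving Δx should roughly halve the one-sided errors and quarter the two centered ones.

```python
# Minimal sketch: check the error orders of the difference quotients,
# using u(x) = sin(x) (a hypothetical test function, not from the slides).
import numpy as np

def difference_errors(dx, x=1.0):
    u, du, d2u = np.sin, np.cos, lambda t: -np.sin(t)       # exact u, u', u''
    backward = (u(x) - u(x - dx)) / dx                       # O(dx)
    forward  = (u(x + dx) - u(x)) / dx                       # O(dx)
    centered = (u(x + dx) - u(x - dx)) / (2 * dx)            # O(dx^2)
    second   = (u(x + dx) - 2 * u(x) + u(x - dx)) / dx**2    # O(dx^2), approximates u''
    return (abs(backward - du(x)), abs(forward - du(x)),
            abs(centered - du(x)), abs(second - d2u(x)))

for dx in (0.1, 0.05, 0.025):
    print(dx, difference_errors(dx))
```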

9 Function of Two Variables u_j^n ≈ u(jΔx, nΔt) Backward differences for t and x: ∂u/∂t (jΔx, nΔt) ≈ (u_j^n − u_j^{n−1}) / Δt ∂u/∂x (jΔx, nΔt) ≈ (u_j^n − u_{j−1}^n) / Δx

10 Function of Two Variables Forward differences for t and x: ∂u/∂t (jΔx, nΔt) ≈ (u_j^{n+1} − u_j^n) / Δt ∂u/∂x (jΔx, nΔt) ≈ (u_{j+1}^n − u_j^n) / Δx

11 Function of Two Variables Centered differences for t and x: ∂u/∂t (jΔx, nΔt) ≈ (u_j^{n+1} − u_j^{n−1}) / (2Δt) ∂u/∂x (jΔx, nΔt) ≈ (u_{j+1}^n − u_{j−1}^n) / (2Δx)

12 Error Truncation Error: introduced in the solution by the approximation of the derivative Local Error: from each term of the equation Global Error: from the accumulation of local error Roundoff Error: introduced in the computation by the finite number of digits used by the computer

13 The Dangers of the Finite Difference Method Evidence from an example in 8.1 Dave Fairburn

14 Example from 8.1 Consider u_t = u_xx with u(x,0) = h(x). We will use the finite difference method to approximate the solution: a forward difference for u_t, a centered difference for u_xx, then re-write the equation in terms of the finite difference approximations.

15 Finite Difference Eqn. (u_j^{n+1} − u_j^n) / Δt = (u_{j+1}^n − 2u_j^n + u_{j−1}^n) / (Δx)^2 Error: the local truncation error is O(Δt) from the left-hand side and O((Δx)^2) from the right-hand side.

16 Assumptions Assume that we choose a small Δx and take Δt = (Δx)^2, so that the denominators on both sides of the equation are equal. We are then left with the scheme: u_j^{n+1} = u_{j+1}^n − u_j^n + u_{j−1}^n Solving for u with this scheme is easy once we have the initial data.

17 Initial Data Let u(x,0) = h(x) be a step function with the following property: h_j = 0 for all j except j = 5, where h_5 = 1, so the values h_j read 0 0 0 0 1 0 0 0 0 0 …. Initially only the point at j = 5 carries the value 1; "j" serves as the counter for the x values.

18 How to solve? We know u_j^0 = 1 at j = 5 and 0 at all other j initially (the superscript 0 denotes the time level n = 0). We can plug into our scheme to solve for u_j^1 at all j: u_j^1 = u_{j−1}^0 − u_j^0 + u_{j+1}^0, giving u_5^1 = −1, u_4^1 = 1, u_6^1 = 1. Now we can continue to increase the number of iterations, n, and create a table…

19 Solution for 4 iterations (rows are time levels n = 0, …, 4; columns are j = 1, …, 10)
n = 0:  0   0   0   0   1   0   0   0   0   0
n = 1:  0   0   0   1  -1   1   0   0   0   0
n = 2:  0   0   1  -2   3  -2   1   0   0   0
n = 3:  0   1  -3   6  -7   6  -3   1   0   0
n = 4:  1  -4  10 -16  19 -16  10  -4   1   0
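The table can be reproduced with a few lines of Python. This is a sketch of my own, assuming 10 grid points, zero values outside the grid, and the delta initial data at j = 5; it applies the simplified scheme u_j^{n+1} = u_{j+1}^n − u_j^n + u_{j−1}^n four times.

```python
# Sketch: reproduce the table above with the simplified scheme,
# treating values outside the 10-point grid as zero.
import numpy as np

u = np.zeros(10)          # j = 1, ..., 10 stored at indices 0..9
u[4] = 1.0                # initial data: h_5 = 1, all other h_j = 0
print(0, u.astype(int))
for n in range(1, 5):
    padded = np.concatenate(([0.0], u, [0.0]))       # zeros outside the grid
    u = padded[2:] - padded[1:-1] + padded[:-2]      # u_{j+1} - u_j + u_{j-1}
    print(n, u.astype(int))
# The n = 4 row prints 1 -4 10 -16 19 -16 10 -4 1 0, matching the table.
```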

20 Analysis of Solution Is this solution viable? The maximum principle states that the solution must stay between 0 and 1 given our initial data, yet at n = 4 the approximation has already ballooned to u = 19! Clearly, there are cases where the finite difference method can pose serious problems.

21 Charting the Error Assume the solution is constant and equal to 0.5 (halfway between the possible 0 and 1)

22 Lessons Learned While the finite difference method is easy and convenient to use in many cases, there are some dangers associated with it. We will investigate why the assumption that allowed us to simplify the scheme (taking Δt = (Δx)^2) could have been a major contributor to the large error.

23 Approximations of Diffusions Neumann Boundary Conditions and the Crank-Nicolson Scheme Chris Savino

24 Approximations of Diffusions Errors have accumulated from the approximations of the derivatives using the previous scheme. The problem is the choice of the mesh Δt relative to the mesh Δx. Let s = Δt/(Δx)^2; the scheme can then be written as u_j^{n+1} = s(u_{j+1}^n + u_{j−1}^n) + (1 − 2s)u_j^n and solved step by step.
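As an illustration (my own sketch, not from the slides, with an arbitrary mesh and a hypothetical spike of initial data), the scheme written with s behaves very differently on either side of s = 1/2:

```python
# Sketch: explicit scheme u_j^{n+1} = s*(u_{j+1}^n + u_{j-1}^n) + (1 - 2s)*u_j^n
# with u = 0 held at both ends (Dirichlet), run for two values of s.
import numpy as np

def explicit_step(u, s):
    new = u.copy()                                   # keep the boundary values
    new[1:-1] = s * (u[2:] + u[:-2]) + (1 - 2 * s) * u[1:-1]
    return new

x = np.linspace(0.0, 1.0, 21)
u0 = np.where(np.abs(x - 0.5) < 0.01, 1.0, 0.0)      # hypothetical spike at x = 0.5
for s in (0.4, 0.6):                                 # s <= 1/2 stable, s > 1/2 not
    u = u0.copy()
    for _ in range(50):
        u = explicit_step(u, s)
    print(f"s = {s}: max |u| after 50 steps = {np.abs(u).max():.3g}")
```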

25 Neumann Boundary Conditions On the interval 0 < x < l the Neumann conditions prescribe the derivative ∂u/∂x at the two ends x = 0 and x = l (say ∂u/∂x = 0 at both). The simplest approximations are the one-sided differences (u_1^n − u_0^n) / Δx = 0 and (u_J^n − u_{J−1}^n) / Δx = 0, where J is the last grid index.

26 To get the smallest error, we use centered differences for the derivatives on the boundary. Introduce ghost points u_{−1}^n and u_{J+1}^n just outside the interval; the boundary conditions become (u_1^n − u_{−1}^n) / (2Δx) = 0 and (u_{J+1}^n − u_{J−1}^n) / (2Δx) = 0, i.e. u_{−1}^n = u_1^n and u_{J+1}^n = u_{J−1}^n.
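A small sketch (my own, using the explicit scheme and an arbitrary initial profile) shows how the ghost-point relations are used in practice: reflecting the first interior value across each end lets the boundary points be updated with the same interior formula.

```python
# Sketch: explicit scheme with Neumann (zero-flux) ends via ghost points,
# using the reflections u_{-1} = u_1 and u_{J+1} = u_{J-1}.
import numpy as np

def neumann_step(u, s):
    padded = np.concatenate(([u[1]], u, [u[-2]]))    # ghost values by reflection
    return s * (padded[2:] + padded[:-2]) + (1 - 2 * s) * padded[1:-1]

u = np.linspace(0.0, 1.0, 11)                        # arbitrary initial data
s = 0.4
for _ in range(200):
    u = neumann_step(u, s)
print(u)   # the profile flattens toward a constant, as expected with insulated ends
```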

27 Crank-Nicolson Scheme Can avoid any restriction on s for stability: the scheme is unconditionally stable no matter what the value of s is.

28 Centered Second Difference: pick a number θ between 0 and 1. The theta scheme is (u_j^{n+1} − u_j^n) / Δt = [θ(u_{j+1}^{n+1} − 2u_j^{n+1} + u_{j−1}^{n+1}) + (1 − θ)(u_{j+1}^n − 2u_j^n + u_{j−1}^n)] / (Δx)^2
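A minimal sketch of the theta scheme (my own, written with dense matrices for clarity rather than a tridiagonal solver): with D the centered second-difference matrix on the interior points, each step solves (I − sθD)u^{n+1} = (I + s(1 − θ)D)u^n, and θ = 1/2 gives Crank-Nicolson.

```python
# Sketch: one step of the theta scheme for the diffusion equation with
# u = 0 at both ends, using dense linear algebra for clarity.
import numpy as np

def theta_step(u, s, theta):
    J = len(u)
    D = -2 * np.eye(J) + np.eye(J, k=1) + np.eye(J, k=-1)   # centered 2nd difference
    A = np.eye(J) - s * theta * D
    B = np.eye(J) + s * (1 - theta) * D
    return np.linalg.solve(A, B @ u)

u = np.sin(np.linspace(0.0, np.pi, 22)[1:-1])   # hypothetical interior initial data
for _ in range(10):
    u = theta_step(u, s=2.0, theta=0.5)          # s far above 1/2, still stable
print(np.abs(u).max())                           # stays bounded (in fact decays)
```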

29 We analyze the scheme by plugging in a separated solution u_j^n = ξ^n e^{ikjΔx}. Therefore the amplification factor is ξ(k) = [1 − 2s(1 − θ)(1 − cos kΔx)] / [1 + 2sθ(1 − cos kΔx)].

30 We must check the stability condition |ξ(k)| ≤ 1, which reduces to s(1 − 2θ)(1 − cos kΔx) ≤ 1. If θ ≥ 1/2, then the left-hand side is never positive, therefore the condition is always true.

31 If θ ≥ 1/2, then there is no restriction on the size of s for stability to hold: the scheme is unconditionally stable. When θ = 1/2 it is called the Crank-Nicolson scheme. If θ < 1/2, then the scheme is stable only if s(1 − 2θ) ≤ 1/2.
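The amplification factor derived above can be checked numerically. This is my own quick sketch (the mesh Δx = π/20 is borrowed from the later example): with θ = 1/2 the factor stays within [−1, 1] even for large s, while θ = 0 (the fully explicit scheme) does not.

```python
# Sketch: maximum |xi(k)| for the theta scheme,
# xi = (1 - 2*s*(1-theta)*c) / (1 + 2*s*theta*c) with c = 1 - cos(k*dx).
import numpy as np

def max_amplification(s, theta, dx=np.pi / 20):
    c = 1.0 - np.cos(np.arange(1, 200) * dx)     # sample many modes k
    xi = (1 - 2 * s * (1 - theta) * c) / (1 + 2 * s * theta * c)
    return np.abs(xi).max()

print(max_amplification(s=5.0, theta=0.5))   # Crank-Nicolson: <= 1 even for large s
print(max_amplification(s=5.0, theta=0.0))   # explicit scheme: far above 1
```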

32 Stability Criterion Approximations of the diffusion equation, u_t = u_xx David Goldberg

33 Stability Criterion The method of finite differences gives an answer, but it does not guarantee that this answer is meaningful. Values must be chosen appropriately to ensure that the results make sense and are applicable to real-world scenarios. The condition that these values must satisfy for the results to be meaningful is called the "stability criterion."

34 Example As per the book, take, for instance, the diffusion problem u_t = u_xx on the interval (0, π), with initial data u(x, 0) = φ(x).

35 Example, continued As can be easily shown, the graph of φ(x) looks like this.

36 Example, continued In attempting to use the method of finite differences, we use a forward difference for u_t and a centered difference for u_xx. This means that (u_j^{n+1} − u_j^n) / Δt = (u_{j+1}^n − 2u_j^n + u_{j−1}^n) / (Δx)^2. It is important to note here that the superscript n denotes a counter on the t variable, and the subscript j denotes a counter on the x variable.

37 Example, continued In order to make the calculations a bit cleaner, we introduce a variable s defined by s = Δt/(Δx)^2. Rearranging, we have u_j^{n+1} = s(u_{j+1}^n + u_{j−1}^n) + (1 − 2s)u_j^n. It would be nice if we could just plug in values and get a valid result…

38 Example, continued However, putting in different values can lead to the results being close to, or far from, the actual answer. For instance, letting Δx = π/20 and letting s = 5/11, we get a relatively nice result. Letting s = 5/9 does not give such a nice result. So what, of significance, changes?
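The experiment is easy to repeat. In the sketch below (my own; the book's φ(x) is not reproduced in the transcript, so a placeholder step-function profile and an arbitrary 60 time steps are used) the run with s = 5/11 stays bounded while the run with s = 5/9 grows rapidly.

```python
# Sketch: explicit scheme on (0, pi) with dx = pi/20, u = 0 at the ends,
# comparing s = 5/11 and s = 5/9 for a placeholder initial profile.
import numpy as np

dx = np.pi / 20
x = np.arange(0.0, np.pi + dx / 2, dx)               # 21 grid points
phi = np.where((x > 1.0) & (x < 2.0), 1.0, 0.0)      # placeholder phi(x)

for s in (5 / 11, 5 / 9):
    u = phi.copy()
    for _ in range(60):
        u[1:-1] = s * (u[2:] + u[:-2]) + (1 - 2 * s) * u[1:-1]
    print(f"s = {s:.3f}: max |u| after 60 steps = {np.abs(u).max():.3g}")
```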

39 Example, Continued As it turns out, changing the value of s can significantly change the validity of the solution. To see why, we return to our equation.

40 Example, continued We look for a separated solution u_j^n = X_j T_n. Substituting into the scheme and dividing by X_j T_n gives T_{n+1}/T_n = 1 − 2s + s(X_{j+1} + X_{j−1})/X_j. Since the left-hand side involves only the T's and the right-hand side only the X's, both must equal a constant, say ξ.

41 Example, continued This is a discrete version of an ODE, which when solved gives T_n = ξ^n T_0. Taking X_j = sin(kjΔx) gives (X_{j+1} + X_{j−1})/X_j = 2 cos(kΔx), so the constant is ξ(k) = 1 − 2s(1 − cos kΔx).

42 Example, finished Thus, to achieve stability, we need |ξ(k)| ≤ 1 for every mode, i.e. s(1 − cos kΔx) ≤ 1 for all k. This is why setting s = 5/9 didn't give a valid result. It is to be noted that the criterion is usually quoted simply as s ≤ 1/2 (from the worst case 1 − cos kΔx = 2), though on this particular grid the exact bound is slightly larger. So the stability criterion must be worked out before one can effectively use the method of finite differences.
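A quick check of this criterion for the example above (my own sketch, using the modes k = 1, …, 19 that the grid Δx = π/20 can resolve) confirms the two outcomes:

```python
# Sketch: evaluate xi(k) = 1 - 2*s*(1 - cos(k*dx)) for the two choices of s.
import numpy as np

dx = np.pi / 20
k = np.arange(1, 20)                     # modes resolvable on this grid
for s in (5 / 11, 5 / 9):
    xi = 1 - 2 * s * (1 - np.cos(k * dx))
    print(f"s = {s:.3f}: max |xi| = {np.abs(xi).max():.3f}")
# s = 5/11 keeps |xi| <= 1 (stable); s = 5/9 gives |xi| > 1 (unstable).
```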

43 Approximations of Diffusions Example from 8.2 Daniel Rave

44 Summary Brief Review of Methods Wide Applicability Importance of Stability

