1 Chapter 1

2 Mathematical Modeling and Engineering Problem Solving Chapter 1
Requires understanding of engineering systems, gained by observation and experiment and by theoretical analysis and generalization. Computers are great tools; however, without a fundamental understanding of engineering problems, they will be useless.

3 Fig. 1.1

4 A mathematical model is represented as a functional relationship of the form
Dependent variable = f(independent variables, parameters, forcing functions)
Dependent variable: a characteristic that usually reflects the state of the system. Independent variables: dimensions, such as time and space, along which the system's behavior is being determined. Parameters: reflect the system's properties or composition. Forcing functions: external influences acting upon the system.

5 Newton's 2nd Law of Motion
States that "the time rate of change of momentum of a body is equal to the resultant force acting on it." The model is formulated as F = ma (1.2), where F = net force acting on the body (N), m = mass of the object (kg), and a = its acceleration (m/s^2).

6 The formulation of Newton's 2nd law has several characteristics that are typical of mathematical models of the physical world: It describes a natural process or system in mathematical terms. It represents an idealization and simplification of reality. Finally, it yields reproducible results and consequently can be used for predictive purposes.

7 Some mathematical models of physical phenomena may be much more complex.
Complex models may not be solvable exactly, or they may require more sophisticated mathematical techniques than simple algebra for their solution. Example: modeling of a falling parachutist.

8 For the falling parachutist, applying Newton's second law with gravity and linear air resistance as the forces gives
dv/dt = g − (c/m)v
where v = velocity (m/s), g = the gravitational constant (m/s^2), c = a drag coefficient (kg/s), and m = the parachutist's mass (kg).

9 This is a differential equation, written in terms of the differential rate of change dv/dt of the variable that we are interested in predicting. If the parachutist is initially at rest (v = 0 at t = 0), solving by calculus gives
v(t) = (gm/c)(1 − e^(−(c/m)t))
Here t is the independent variable, v the dependent variable, c and m parameters, and g the forcing function.
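A minimal Python sketch of this model, comparing Euler's method against the closed-form solution above (the parameter values g = 9.8 m/s^2, m = 68.1 kg, c = 12.5 kg/s and the step size are illustrative choices, not prescribed by the slides):

import math

g, m, c = 9.8, 68.1, 12.5   # gravity (m/s^2), mass (kg), drag coefficient (kg/s)
h, t_end = 2.0, 12.0        # Euler step size and final time (s)

v, t = 0.0, 0.0             # parachutist initially at rest: v = 0 at t = 0
while t < t_end - 1e-12:
    v += (g - (c / m) * v) * h   # Euler step: new v = old v + (dv/dt)*h
    t += h
    v_exact = (g * m / c) * (1.0 - math.exp(-(c / m) * t))
    print(f"t = {t:4.1f} s   Euler v = {v:7.4f}   exact v = {v_exact:7.4f}")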

10 Conservation Laws and Engineering
Conservation laws are the most important and fundamental laws used in engineering:
Change = increases − decreases (1.13)
Change implies changes with time (transient). If change is nonexistent (steady-state), Eq. (1.13) becomes
Increases = Decreases

11 For steady-state incompressible fluid flow in pipes (Fig. 1.6):
Flow in = Flow out
Applying this balance to the junction in the figure determines the unknown flow, Flow4 = 60.

12 Refer to Table 1.1

13 Chapter 2

14 Programming and Software Chapter 2
The objective is to learn how to use the computer as a tool to obtain numerical solutions to a given engineering model. There are two ways of using computers: use available software, or write computer programs to extend the capabilities of available software such as Excel and MATLAB. Engineers should not be tool-limited; it is important that they be able to do both!

15 Computer programs are sets of instructions that direct the computer to perform a certain task.
To be able to perform engineering-oriented numerical calculations, you should be familiar with the following programming topics: Simple information representation (constants, variables, and type declarations). Advanced information representation (data structures, arrays, and records). Mathematical formulas (assignment, priority rules, and intrinsic functions). Input/output. Logical representation (sequence, selection, and repetition). Modular programming (functions and subroutines). We will focus on the last two topics, assuming that you have some prior exposure to programming.

16 Structured Programming
Structured programming is a set of rules that prescribe good style habits for the programmer. An organized, well-structured code is easily shareable, easy to debug and test, and requires less time to develop, test, and update. The key idea is that any numerical algorithm can be composed from three fundamental control structures: sequence, selection, and repetition.

17 Fig. 2.1

18 Fig. 2.2 Sequence. Computer code must be implemented one instruction at a time, unless you instruct it otherwise. The structure can be expressed as a flowchart or pseudocode.

19 Fig. 2.3 Selection. Splits the program’s flow into branches based on outcome of a logical condition.

20 Repetition. A means to implement instructions repeatedly.
Fig. 2.4

21 Fig. 2.5

22 Figure 2.6

23 Modular Programming Computer programs can be divided into subprograms, or modules, that can be developed and tested separately. Modules should be as independent and self-contained as possible. The advantages of modular design are: It is easier to understand the underlying logic of smaller modules. They are easier to debug and test. They facilitate program maintenance and modification. They allow you to maintain your own library of modules for later use.

24 Fig. 2.7

25 EXCEL Is a spreadsheet that allows the user to enter and perform calculations on rows and columns of data. When any value on the sheet is changed, the entire calculation is updated; therefore, spreadsheets are ideal for "what if?" sorts of analysis. Excel has some built-in numerical capabilities, including equation solving, curve fitting, and optimization. It also includes VBA as a macro language that can be used to implement numerical calculations. It has several visualization tools, such as graphs and three-dimensional plots.

26 Fig. 2.8

27 MATLAB Is a flagship software product that was originally developed as a matrix laboratory. A variety of numerical functions, symbolic computations, and visualization tools have since been added to the matrix manipulations. Working in MATLAB is closely related to programming.

28 Fig. 2.9

29 Other Languages and Libraries
Fortran 90 (IMSL) C++

30 Chapter 3

31 Approximations and Round-Off Errors Chapter 3
For many engineering problems, we cannot obtain analytical solutions. Numerical methods yield approximate results, results that are close to the exact analytical solution. We cannot exactly compute the errors associated with numerical methods. Given data are only rarely exact, since they originate from measurements, so there is probably error in the input information. The algorithm itself usually introduces errors as well, e.g., unavoidable round-offs. The output information will then contain error from both of these sources. How confident are we in our approximate result? The question is: how much error is present in our calculation, and is it tolerable?

32 Accuracy. How close is a computed or measured value to the true value?
Precision (or reproducibility). How close is a computed or measured value to previously computed or measured values? Inaccuracy (or bias). A systematic deviation from the actual value. Imprecision (or uncertainty). The magnitude of scatter.

33 Fig. 3.2

34 Significant Figures The number of significant figures indicates precision. Significant digits of a number are those that can be used with confidence, e.g., the number of certain digits plus one estimated digit. How many significant figures does 53,800 have? Written as 5.38 x 10^4 it has three; as 5.380 x 10^4, four; as 5.3800 x 10^4, five. Zeros are sometimes used only to locate the decimal point and are not significant figures.

35 Error Definitions True Value = Approximation + Error
Et = True value − Approximation, where Et is the exact value of the error (the subscript t stands for "true"). The error may be positive or negative; the true percent relative error normalizes it by the true value: εt = (true error/true value) x 100%.

36 For numerical methods, the true value will be known only when we deal with functions that can be solved analytically (simple systems). In real-world applications, we usually do not know the answer a priori. The error is then estimated with an iterative approach (for example, Newton's method), using the approximate percent relative error
εa = ((current approximation − previous approximation)/current approximation) x 100%
which, like εt, may be positive or negative.

37 Computations are repeated until the stopping criterion |εa| < εs is satisfied (use the absolute value), where εs is a pre-specified percent tolerance based on your knowledge of the solution. If the criterion is met with εs = (0.5 x 10^(2−n))%, you can be sure that the result is correct to at least n significant figures.

38 Round-off Errors Numbers such as π and e cannot be expressed by a fixed number of significant figures. Moreover, because computers use a base-2 representation, they cannot precisely represent certain exact base-10 numbers. Fractional quantities are typically represented in computers using "floating point" form, m x b^e, where m is the mantissa, b is the base of the number system used, and e is the exponent (an integer).

39 Figure 3.5

40 Figure 3.6

41 Figure 3.7

42 156.78 is stored as 0.15678 x 10^3 in a floating point base-10 system.
Suppose only four decimal places of mantissa can be stored. A quantity like 1/34 = 0.029411765… would then be stored as 0.0294 x 10^0. Normalizing to remove the leading zero (multiply the mantissa by 10 and lower the exponent by 1) gives 0.2941 x 10^-1, so an additional significant figure is retained.

43 Therefore the mantissa is normalized: for a base-10 system 0.1 ≤ m < 1, and for a base-2 system 0.5 ≤ m < 1. Floating point representation allows both fractions and very large numbers to be expressed on the computer. However, floating point numbers take up more room and take longer to process than integer numbers, and round-off errors are introduced because the mantissa holds only a finite number of significant figures.

44 Chopping Example: π = 3.141592653589793… is to be stored on a base-10 system carrying 7 significant digits. Chopping gives π = 3.141592, with error εt = 0.00000065…; if rounded instead, π = 3.141593 and εt = 0.00000035…. Some machines use chopping because rounding adds to the computational overhead; since the number of significant figures carried is large enough, the resulting chopping error is negligible.

45 Chapter 4

46 Truncation Errors and the Taylor Series Chapter 4
Non-elementary functions such as trigonometric, exponential, and others are expressed in an approximate fashion using Taylor series when their values, derivatives, and integrals are computed. Any smooth function can be approximated as a polynomial. Taylor series provides a means to predict the value of a function at one point in terms of the function value and its derivatives at another point.

47 Figure 4.1

48 Example: To get cos(x) for small x, use the Maclaurin series cos x = 1 − x^2/2! + x^4/4! − x^6/6! + …. If x = 0.5: cos(0.5) ≈ 1 − 0.125 + 0.0026042 = 0.8776042, versus the true value 0.8775826…. From the supporting theory, for this alternating series the error is no greater than the first omitted term, here 0.5^6/6! ≈ 2.2 x 10^-5.

49 Any smooth function can be approximated as a polynomial.
f(xi+1) ≈ f(xi) is the zero-order approximation, true only if xi+1 and xi are very close to each other. f(xi+1) ≈ f(xi) + f′(xi)(xi+1 − xi) is the first-order approximation, in the form of a straight line.

50 Adding terms gives the nth-order approximation, with the step size h = (xi+1 − xi) defined first:
f(xi+1) ≈ f(xi) + f′(xi)h + (f″(xi)/2!)h^2 + … + (f^(n)(xi)/n!)h^n + Rn
The remainder term, Rn, accounts for all terms from (n+1) to infinity.

51 Rn = (f^(n+1)(ξ)/(n+1)!)h^(n+1), where ξ is not known exactly but lies somewhere in the interval xi < ξ < xi+1. Determining f^(n+1)(ξ) requires knowing f(x), and if we knew f(x) there would be no need to perform the Taylor series expansion. However, R = O(h^(n+1)): the truncation error is of order h^(n+1). With O(h), halving the step size will halve the error; with O(h^2), halving the step size will quarter the error.

52 Truncation error is decreased by the addition of terms to the Taylor series.
If h is sufficiently small, only a few terms may be required to obtain an approximation close enough to the actual value for practical purposes. Example: sum a series term by term until the result is correct to 3 significant digits, as in the sketch below.
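A minimal Python sketch of this idea, summing the Maclaurin series of e^x term by term and stopping with the criterion εs = (0.5 x 10^(2−n))% from the earlier slide (the choices x = 0.5 and n = 3 are illustrative):

import math

x, n = 0.5, 3
eps_s = 0.5 * 10.0 ** (2 - n)           # pre-specified tolerance, in percent
term, total, k = 1.0, 1.0, 0            # k = 0 term of the series is 1
while True:
    k += 1
    term *= x / k                       # next Taylor term, x^k / k!
    previous, total = total, total + term
    eps_a = abs((total - previous) / total) * 100.0
    if eps_a < eps_s:                   # stopping criterion satisfied
        break
print(f"{k + 1} terms: approx = {total:.7f}, exact = {math.exp(x):.7f}")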

53 Error Propagation fl(x) refers to the floating point (or computer) representation of the real number x. Because a computer can hold only a finite number of significant figures for a given number, there may be an error (round-off error) associated with the floating point representation. The error is determined by the precision of the computer (ε).

54 Suppose that we have a function f(x) that is dependent on a single independent variable x. fl(x) is an approximation of x and we would like to estimate the effect of discrepancy between x and fl(x) on the value of the function:

55 Figure 4.7

56 Also, let εt, the fractional relative error, be the error associated with fl(x). Then fl(x) = x(1 + εt). Rearranging gives εt = (fl(x) − x)/x, with |εt| ≤ ε, where the machine epsilon ε serves as an upper bound on the relative error.

57 Case 1: Addition of x1 and x2 with associated errors εt1 and εt2 yields the following result:
fl(x1) = x1(1 + εt1), fl(x2) = x2(1 + εt2)
fl(x1) + fl(x2) = εt1x1 + εt2x2 + x1 + x2
A large relative error can result from addition when x1 and x2 are of almost equal magnitude but opposite sign; therefore one should avoid subtracting nearly equal numbers.

58 Generalization: Suppose the numbers fl(x1), fl(x2), fl(x3), …, fl(xn) are approximations to x1, x2, x3, …, xn and that in each case the maximum possible error is E, i.e., fl(xi) − E ≤ xi ≤ fl(xi) + E (so every |Eti| ≤ E). It follows by addition that
Σfl(xi) − nE ≤ Σxi ≤ Σfl(xi) + nE
Therefore, the maximum possible error in the sum of the fl(xi) is nE.

59 Case 2: Multiplication of x1 and x2 with associated errors εt1 and εt2 results in:
fl(x1) x fl(x2) = x1x2(1 + εt1)(1 + εt2) = x1x2(1 + εt1 + εt2 + εt1εt2)

60 Since εt1 and εt2 are both small, the product εt1εt2 should be small relative to εt1 + εt2. Thus the magnitude of the error associated with one multiplication or division step should be about εt1 + εt2, with each |εti| ≤ ε (the upper bound). Although the error of one calculation may not be significant, if 100 calculations were done, the error would be approximately 100ε. The magnitude of error associated with a calculation is directly proportional to the number of multiplication steps. Refer to Table 4.3.

61 Overflow: Any number larger than the largest number that can be expressed on a computer will result in an overflow. Underflow (hole): Any positive number smaller than the smallest number that can be represented on a computer will result in an underflow. Stable algorithm: In extended calculations, it is likely that many round-offs will be made. Each of these plays the role of an input error for the remainder of the computation, impacting the eventual output. Algorithms for which the cumulative effect of all such errors is limited, so that a useful result is generated, are called "stable" algorithms. When accumulation is devastating and the solution is overwhelmed by the error, such algorithms are called unstable.

62 Figure 4.8

63 Part 2 / Chapter 5

64 Part 2 Roots of Equations
Why locate roots numerically? Simple cases such as the quadratic ax^2 + bx + c = 0 have closed-form solutions, but the roots of most functions encountered in engineering practice cannot be obtained analytically.

65 All of the root-location methods covered in this part are iterative.

66 Figure PT2.1

67 Bracketing Methods (or, two-point methods for finding roots) Chapter 5
Two initial guesses for the root are required. These guesses must "bracket," or be on either side of, the root (see Fig. 5.1). If one root of a real and continuous function f(x) = 0 is bounded by the values x = xl and x = xu, then f(xl) · f(xu) < 0. (The function changes sign on opposite sides of the root.)

68 Figure 5.2

69 Figure 5.3

70 Figure 5.4b Figure 5.4a Figure 5.4c

71 The Bisection Method For the arbitrary equation of one variable, f(x) = 0: 1. Pick xl and xu such that they bound the root of interest, checking that f(xl) · f(xu) < 0. 2. Estimate the root as the midpoint, xr = (xl + xu)/2, and evaluate f(xr). 3. If f(xl) · f(xr) < 0, the root lies in the lower subinterval; set xu = xr and go to step 2.

72 If f(xl) · f(xr) > 0, the root lies in the upper subinterval; set xl = xr and go to step 2.
If f(xl) · f(xr) = 0, the root is xr = (xl + xu)/2; terminate. 4. Compare εa with εs: if εa < εs, stop; otherwise repeat the process. A sketch of these steps follows.
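A minimal Python sketch of the steps above (the test function, bracket, and tolerance are illustrative choices):

def bisect(f, xl, xu, eps_s=0.01, max_it=100):
    if f(xl) * f(xu) > 0:
        raise ValueError("root is not bracketed")
    xr = xl
    for _ in range(max_it):
        xr_old, xr = xr, (xl + xu) / 2.0        # midpoint estimate of the root
        if xr != 0 and abs((xr - xr_old) / xr) * 100 < eps_s:
            break                                # eps_a < eps_s: stop
        if f(xl) * f(xr) < 0:                    # root in lower subinterval
            xu = xr
        else:                                    # root in upper subinterval
            xl = xr
    return xr

print(bisect(lambda x: x**3 - x - 2, 1.0, 2.0))  # root near 1.5214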

73 Figure 5.6

74 Evaluation of Method Pros: Easy. Always finds the root. The number of iterations required to attain a given absolute error can be computed a priori. Cons: Slow. Requires knowing a and b that bound the root. Handles multiple roots poorly. No account is taken of the magnitudes of f(xl) and f(xu); if f(xl) is closer to zero, the root is likely closer to xl.

75 How Many Iterations Will It Take?
Length of the first interval: Lo = b − a. After 1 iteration: L1 = Lo/2. After 2 iterations: L2 = Lo/4. After k iterations: Lk = Lo/2^k.

76 If the absolute magnitude of the error must not exceed a given tolerance E, and Lo = 2, how many iterations will you have to do to get the required accuracy in the solution? Solve Lo/2^k ≤ E for the iteration count k.

77 The False-Position Method (Regula Falsi)
If a real root of f(x) = 0 is bounded by xl and xu, then we can approximate the solution by doing a linear interpolation between the points [xl, f(xl)] and [xu, f(xu)] to find the xr value such that l(xr) = 0, where l(x) is the linear approximation of f(x) (see Fig. 5.12).

78 Procedure: 1. Find a pair of values of x, xl and xu, such that fl = f(xl) < 0 and fu = f(xu) > 0. 2. Estimate the value of the root from the formula (refer to Box 5.1)
xr = xu − fu(xl − xu)/(fl − fu)
and evaluate f(xr).
79 Use the new point to replace one of the original points, keeping the two points on opposite sides of the x axis. If f(xr)<0 then xl=xr == > fl=f(xr) If f(xr)>0 then xu=xr == > fu=f(xr) If f(xr)=0 then you have found the root and need go no further!

80 See if the new xl and xu are close enough for convergence to be declared; if they are not, go back to step 2. Why this method? It is faster than bisection, and it always converges for a single root. See Sec. 5.3.1, Pitfalls of the False-Position Method. Note: always check by substituting the estimated root into the original equation to determine whether f(xr) ≈ 0. A sketch follows.
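A minimal Python sketch of the false-position procedure (the test function and tolerance are illustrative):

def false_position(f, xl, xu, eps_s=0.01, max_it=100):
    fl, fu = f(xl), f(xu)
    xr = xl
    for _ in range(max_it):
        xr_old = xr
        xr = xu - fu * (xl - xu) / (fl - fu)     # linear-interpolation estimate
        fr = f(xr)
        if xr != 0 and abs((xr - xr_old) / xr) * 100 < eps_s:
            break
        if fl * fr < 0:                          # root lies in [xl, xr]
            xu, fu = xr, fr
        else:                                    # root lies in [xr, xu]
            xl, fl = xr, fr
    return xr

print(false_position(lambda x: x**3 - x - 2, 1.0, 2.0))  # root near 1.5214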

81 Chapter 6

82 Open Methods Chapter 6 Figure 6.1 Open methods are based on formulas that require only a single starting value of x, or two starting values that do not necessarily bracket the root.

83 Simple Fixed-Point Iteration
Rearrange the function f(x) = 0 so that x is on the left side of the equation, x = g(x). Bracketing methods are "convergent"; fixed-point methods may sometimes "diverge," depending on the starting point (initial guess) and how the function behaves.

84 Example (a sketch of a standard case follows):
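A minimal Python sketch of fixed-point iteration on an illustrative problem, f(x) = e^(-x) − x = 0 rearranged as x = g(x) = e^(-x) (this choice is a stand-in, not necessarily the slide's original example):

import math

x = 0.0                        # starting point (initial guess)
for i in range(10):
    x = math.exp(-x)           # fixed-point update: x_{i+1} = g(x_i)
    print(f"iteration {i + 1}: x = {x:.6f}")
# converges toward the root x = 0.56714...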

85 Convergence Figure 6.2 x = g(x) can be expressed as a pair of equations: y1 = x and y2 = g(x) (the component equations). Plot them separately; the intersection of the two curves locates the fixed point.

86 Conclusion Fixed-point iteration converges if |g′(x)| < 1 near the root. When the method converges, the error is roughly proportional to or less than the error of the previous step; therefore it is called "linearly convergent."

87 Newton-Raphson Method
The most widely used method. It is based on a first-order Taylor series expansion, f(xi+1) ≈ f(xi) + f′(xi)(xi+1 − xi); setting f(xi+1) = 0 and solving for xi+1 gives the Newton-Raphson formula
xi+1 = xi − f(xi)/f′(xi)
A sketch follows.
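A minimal Python sketch of the formula (the function, derivative, and starting guess are illustrative):

import math

def newton_raphson(f, dfdx, x, eps_s=1e-6, max_it=50):
    for _ in range(max_it):
        x_new = x - f(x) / dfdx(x)          # Newton-Raphson step
        if x_new != 0 and abs((x_new - x) / x_new) < eps_s:
            return x_new
        x = x_new
    return x

# root of f(x) = e^(-x) - x, whose derivative is f'(x) = -e^(-x) - 1
print(newton_raphson(lambda x: math.exp(-x) - x,
                     lambda x: -math.exp(-x) - 1.0, 0.0))  # ~0.56714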

88 Fig. 6.5 A convenient method for functions whose derivatives can be evaluated analytically; it may not be convenient for functions whose derivatives cannot be evaluated analytically.

89 Fig. 6.6

90 The Secant Method A slight variation of Newton's method for functions whose derivatives are difficult to evaluate. For these cases the derivative can be approximated by a backward finite divided difference:
f′(xi) ≈ (f(xi−1) − f(xi))/(xi−1 − xi)

91 Fig. 6.7 Requires two initial estimates of x, e.g., x0 and x1. However, because f(x) is not required to change sign between the estimates, it is not classified as a "bracketing" method. The secant method has the same convergence properties as Newton's method, and convergence is not guaranteed for all x0 and f(x). A sketch follows.
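A minimal Python sketch (the initial estimates and test function are illustrative):

import math

def secant(f, x0, x1, eps_s=1e-6, max_it=50):
    for _ in range(max_it):
        # Newton's formula with f'(x1) replaced by a finite divided difference
        x2 = x1 - f(x1) * (x0 - x1) / (f(x0) - f(x1))
        if x2 != 0 and abs((x2 - x1) / x2) < eps_s:
            return x2
        x0, x1 = x1, x2
    return x1

print(secant(lambda x: math.exp(-x) - x, 0.0, 1.0))  # ~0.56714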

92 Fig. 6.8

93 Multiple Roots None of the methods deals with multiple roots efficiently; however, one way to deal with the problem is to define the new function u(x) = f(x)/f′(x). This function has roots at all the same locations as the original function.

94 Fig. 6.13

95 A "multiple root" corresponds to a point where a function is tangent to the x axis.
Difficulties: The function does not change sign at the multiple root, so bracketing methods cannot be used. Both f(x) = 0 and f′(x) = 0 there, causing division by zero in Newton's and the secant methods.

96 Systems of Nonlinear Equations

97 Taylor series expansion of a function of more than one variable:
ui+1 ≈ ui + (∂ui/∂x)(xi+1 − xi) + (∂ui/∂y)(yi+1 − yi)
The root of the equations occurs at the values of x and y where ui+1 and vi+1 equal zero.

98 Setting ui+1 = vi+1 = 0 gives a set of two linear equations with the two unknowns xi+1 and yi+1 that can be solved for.

99 Determinant of the Jacobian of the system.

100 Chapter 7

101 Roots of Polynomials Chapter 7
The roots of polynomials fn(x) = a0 + a1x + a2x^2 + … + anx^n follow these rules: For an nth-order equation, there are n real or complex roots. If n is odd, there is at least one real root. If complex roots exist, they occur in conjugate pairs (that is, λ + μi and λ − μi), where i = sqrt(−1).

102 Conventional Methods The efficacy of bracketing and open methods depends on whether the problem being solved involves complex roots. If only real roots exist, these methods could be used. However, finding good initial guesses complicates both the open and bracketing methods, and the open methods could be susceptible to divergence. Special methods have been developed to find the real and complex roots of polynomials: the Müller and Bairstow methods.

103 Müller Method Müller’s method obtains a root estimate by projecting a parabola to the x axis through three function values. Figure 7.3

104 Müller Method The method consists of deriving the coefficients of a parabola that goes through the three points. 1. Write the equation in a convenient form:
f2(x) = a(x − x2)^2 + b(x − x2) + c

105 The parabola should intersect the three points [x0, f(x0)], [x1, f(x1)], [x2, f(x2)]. The coefficients of the polynomial can be estimated by substituting the three points, giving three equations that can be solved for the three unknowns a, b, and c. Since two of the terms in the third equation are zero, it can be immediately solved for c = f(x2).

106 Solved for a and b

107 Roots can be found by applying an alternative form of the quadratic formula:
x3 = x2 + (−2c)/(b ± sqrt(b^2 − 4ac))
The error can be estimated as εa = |(x3 − x2)/x3| x 100%. The ± term yields two roots; the sign is chosen to agree with b. This results in the largest denominator and gives the root estimate that is closest to x2.

108 Once x3 is determined, the process is repeated using the following guidelines: If only real roots are being located, choose the two original points that are nearest the new root estimate, x3. If both real and complex roots are being estimated, employ a sequential approach just as in the secant method: x1, x2, and x3 replace x0, x1, and x2.

109 Bairstow's Method Bairstow's method is an iterative approach loosely related to both the Müller and Newton-Raphson methods. It is based on dividing a polynomial by a factor x − t:

110 To permit the evaluation of complex roots, Bairstow’s method divides the polynomial by a quadratic factor x2-rx-s:

111 For the remainder to be zero, b0 and b1 must be zero. However, it is unlikely that our initial guesses for the values of r and s will lead to this result, so a systematic approach is needed to modify the guesses such that b0 and b1 approach zero. Using an approach similar to the Newton-Raphson method, both b0 and b1 can be expanded as functions of r and s in a Taylor series.

112

113 If the partial derivatives of the b's can be determined, then the two equations can be solved simultaneously for the two unknowns Δr and Δs. The partial derivatives can be obtained by a synthetic division of the b's, in a fashion similar to the way the b's themselves were derived:
114 At each step the errors can be estimated as εa,r = |Δr/r| x 100% and εa,s = |Δs/s| x 100%. The computed Δr and Δs are, in turn, employed to improve the initial guesses.

115 The values of the roots are then determined from the quadratic factor by
x = (r ± sqrt(r^2 + 4s))/2
At this point three possibilities exist: The quotient is a third-order polynomial or greater; the previous values of r and s serve as initial guesses, and Bairstow's method is applied to the quotient to evaluate new r and s values. The quotient is quadratic; the remaining two roots are evaluated directly, using the above equation. The quotient is a first-order polynomial; the remaining single root can be evaluated simply as x = −s/r.

116 Refer to Tables pt2.3 and pt2.4

117 Part 3 - Chapter 9

118 Part 3 Linear Algebraic Equations
An equation of the form ax + by + c = 0, or equivalently ax + by = −c, is called a linear equation in the variables x and y. ax + by + cz = d is a linear equation in three variables x, y, and z. Thus, a linear equation in n variables is a1x1 + a2x2 + … + anxn = b. A solution of such an equation consists of real numbers c1, c2, c3, …, cn that satisfy it. If you need to work with more than one linear equation, a system of linear equations must be solved simultaneously.

119 Noncomputer Methods for Solving Systems of Equations
For small numbers of equations (n ≤ 3), linear equations can be solved readily by simple techniques such as the "method of elimination." Linear algebra provides the tools to solve such systems of linear equations. Nowadays, easy access to computers makes the solution of large sets of linear algebraic equations possible and practical.

120 Fig. pt3.5

121 Gauss Elimination Chapter 9
Solving Small Numbers of Equations There are many ways to solve a system of linear equations: the graphical method, Cramer's rule, the method of elimination, and computer methods. The first three are practical only for small numbers of equations (n ≤ 3).

122 Graphical Method For two equations: Solve both equations for x2:

123 Plot x2 vs. x1 on rectilinear paper; the intersection of the lines represents the solution.
Fig. 9.1

124 Figure 9.2

125 Determinants and Cramer's Rule
The determinant can be illustrated for a set of three equations, [A]{x} = {B}, where [A] is the coefficient matrix:

126 Assuming all matrices are square, there is a number associated with each square matrix [A] called the determinant, D, of [A]. If [A] is of order 1, then [A] has one element: [A] = [a11] and D = a11. For a square matrix of order 3, the minor of an element aij is the determinant of the matrix of order 2 obtained by deleting row i and column j of [A].

127

128 Cramer's rule expresses the solution of a system of linear equations in terms of ratios of determinants of the array of coefficients of the equations. For example, x1 would be computed as x1 = D1/D, where D1 is the determinant obtained by replacing the first column of [A] with {B}.

129 Method of Elimination The basic strategy is to successively solve one of the equations of the set for one of the unknowns and to eliminate that variable from the remaining equations by substitution. The elimination of unknowns can be extended to systems with more than two or three equations; however, the method becomes extremely tedious to solve by hand.

130 Naive Gauss Elimination
An extension of the method of elimination to large sets of equations, using a systematic scheme or algorithm to eliminate unknowns and to back-substitute. As in the case of the solution of two equations, the technique for n equations consists of two phases: forward elimination of unknowns, then back substitution. A sketch follows.
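A minimal Python sketch of both phases, with no pivoting (the 3x3 system is an illustrative textbook-style example):

import numpy as np

def naive_gauss(A, b):
    A, b = A.astype(float), b.astype(float)
    n = len(b)
    for k in range(n - 1):                  # forward elimination
        for i in range(k + 1, n):
            factor = A[i, k] / A[k, k]      # fails if the pivot A[k, k] == 0
            A[i, k:] -= factor * A[k, k:]
            b[i] -= factor * b[k]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):          # back substitution
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[3.0, -0.1, -0.2], [0.1, 7.0, -0.3], [0.3, -0.2, 10.0]])
b = np.array([7.85, -19.3, 71.4])
print(naive_gauss(A, b))                    # approximately [3, -2.5, 7]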

131 Fig. 9.3

132 Pitfalls of Elimination Methods
Division by zero. It is possible that during both the elimination and back-substitution phases a division by zero can occur. Round-off errors. Ill-conditioned systems: systems where small changes in the coefficients result in large changes in the solution. Alternatively, this happens when two or more equations are nearly identical, so that a wide range of answers approximately satisfies the equations. Since round-off errors can induce small changes in the coefficients, these changes can lead to large solution errors.

133 Singular systems. When two equations are identical, we would lose one degree of freedom and be dealing with the impossible case of n − 1 equations for n unknowns. For large sets of equations this may not be obvious, however. The fact that the determinant of a singular system is zero can be used and tested by a computer algorithm after the elimination stage: if a zero diagonal element is created, the calculation is terminated.

134 Techniques for Improving Solutions
Use of more significant figures. Pivoting: if a pivot element is zero, the normalization step leads to division by zero; the same problem may arise when the pivot element is close to zero. The problem can be avoided by: Partial pivoting, switching the rows so that the largest element is the pivot element. Complete pivoting, searching for the largest element in all rows and columns and then switching.

135 Gauss-Jordan A variation of Gauss elimination. The major differences are: When an unknown is eliminated, it is eliminated from all other equations rather than just the subsequent ones. All rows are normalized by dividing them by their pivot elements. The elimination step results in an identity matrix. Consequently, it is not necessary to employ back substitution to obtain the solution.

136 Chapter 10

137 LU Decomposition and Matrix Inversion Chapter 10
Provides an efficient way to compute the matrix inverse by separating the time-consuming elimination of the matrix [A] from the manipulations of the right-hand side {B}. Gauss elimination, in which the forward elimination comprises the bulk of the computational effort, can be implemented as an LU decomposition.

138 [A]{X} = {B} can be decomposed into a lower triangular matrix [L] and an upper triangular matrix [U] such that [L][U] = [A], so [L][U]{X} = {B}. Similar to the first phase of Gauss elimination, consider [U]{X} = {D} and [L]{D} = {B}: [L]{D} = {B} is used to generate an intermediate vector {D} by forward substitution, and [U]{X} = {D} is then used to get {X} by back substitution. A sketch follows.
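A minimal sketch using SciPy's LU routines to factor [A] once and then reuse the factors for several right-hand sides (the matrix and vectors are illustrative):

import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[3.0, -0.1, -0.2], [0.1, 7.0, -0.3], [0.3, -0.2, 10.0]])
lu, piv = lu_factor(A)             # one-time decomposition [L][U] = [A]

for b in (np.array([7.85, -19.3, 71.4]), np.array([1.0, 0.0, 0.0])):
    x = lu_solve((lu, piv), b)     # forward then back substitution per {B}
    print(x)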

139 Fig.10.1

140 LU decomposition requires the same total FLOPs as Gauss elimination, but it saves computing time by separating the time-consuming elimination step from the manipulations of the right-hand side. It also provides an efficient means to compute the matrix inverse.

141 Error Analysis and System Condition
The inverse of a matrix provides a means to test whether systems are ill-conditioned. Vector and Matrix Norms: a norm is a real-valued function that provides a measure of the size or "length" of vectors and matrices. Norms are useful in studying the error behavior of algorithms.

142

143 Figure 10.6

144 The length of a three-component vector [F] = [a b c] can simply be computed as its length, or Euclidean norm, ||F|| = sqrt(a^2 + b^2 + c^2). For an n-dimensional vector this generalizes to ||X|| = sqrt(Σxi^2), and the analogous measure for a matrix [A] is the Frobenius norm, ||A||f = sqrt(ΣΣ aij^2).

145 Matrix Condition Number
The Frobenius norm provides a single value to quantify the "size" of [A]. The matrix condition number is defined as Cond[A] = ||A|| · ||A^-1||. For any matrix [A], this number is greater than or equal to 1.

146 It follows that the relative error of the norm of the computed solution can be as large as the relative error of the norm of the coefficients of [A] multiplied by the condition number. For example, if the coefficients of [A] are known to t-digit precision (rounding errors ~10^-t) and Cond[A] = 10^c, the solution {X} may be valid to only t − c digits (rounding errors ~10^(c−t)). A sketch follows.
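A minimal sketch of this rule of thumb with NumPy (the nearly singular matrix is an illustrative choice):

import numpy as np

A = np.array([[1.0, 2.0], [1.0, 2.0001]])    # nearly singular, ill-conditioned
cond = np.linalg.cond(A)                     # condition number (2-norm default)
print(f"Cond[A] = {cond:.3e}; roughly {np.log10(cond):.1f} digits may be lost")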

147 Chapter 11

148 Special Matrices and Gauss-Seidel Chapter 11
Certain matrices have particular structures that can be exploited to develop efficient solution schemes. A banded matrix is a square matrix that has all elements equal to zero, with the exception of a band centered on the main diagonal. These matrices typically occur in the solution of differential equations. The dimensions of a banded system can be quantified by two parameters, the bandwidth BW and the half-bandwidth HBW, related by BW = 2HBW + 1. Gauss elimination and conventional LU decomposition are inefficient for banded equations: pivoting is unnecessary, and effort is wasted on the zero elements outside the band.

149 Figure 11.1

150 Tridiagonal Systems A tridiagonal system has a bandwidth of 3.
An efficient LU decomposition method, called the Thomas algorithm, can be used to solve such systems. The algorithm consists of three steps (decomposition, forward substitution, and back substitution) and has all the advantages of LU decomposition. A sketch follows.
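A minimal Python sketch of the Thomas algorithm; the array names e, f, g, r (sub-diagonal, diagonal, super-diagonal, right-hand side) follow common textbook notation, and the numerical system is an illustrative choice:

import numpy as np

def thomas(e, f, g, r):
    n = len(f)
    f, r = f.astype(float).copy(), r.astype(float).copy()
    for k in range(1, n):                 # decomposition + forward substitution
        m = e[k] / f[k - 1]
        f[k] -= m * g[k - 1]
        r[k] -= m * r[k - 1]
    x = np.zeros(n)
    x[-1] = r[-1] / f[-1]
    for k in range(n - 2, -1, -1):        # back substitution
        x[k] = (r[k] - g[k] * x[k + 1]) / f[k]
    return x

e = np.array([0.0, -1.0, -1.0, -1.0])     # e[0] is unused
f = np.array([2.04, 2.04, 2.04, 2.04])
g = np.array([-1.0, -1.0, -1.0, 0.0])     # g[-1] is unused
r = np.array([40.8, 0.8, 0.8, 200.8])
print(thomas(e, f, g, r))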

151 Gauss-Seidel Iterative or approximate methods provide an alternative to the elimination methods. The Gauss-Seidel method is the most commonly used iterative method. The system [A]{X} = {B} is reshaped by solving the first equation for x1, the second equation for x2, the third for x3, …, and the nth equation for xn. For conciseness, we will limit ourselves to a 3x3 set of equations.

152 Now we can start the solution process by choosing guesses for the x's. A simple way to obtain initial guesses is to assume that they are all zero. These zeros can be substituted into the x1 equation to calculate a new x1 = b1/a11.

153 The new x1 is substituted into the remaining equations to calculate x2 and x3. The procedure is repeated until the convergence criterion |(xi^j − xi^(j−1))/xi^j| x 100% < εs is satisfied for all i, where j and j − 1 denote the present and previous iterations. A sketch follows.
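A minimal Python sketch of the iteration for a 3x3 diagonally dominant system (the system and tolerance are illustrative):

import numpy as np

A = np.array([[3.0, -0.1, -0.2], [0.1, 7.0, -0.3], [0.3, -0.2, 10.0]])
b = np.array([7.85, -19.3, 71.4])

x = np.zeros(3)                              # initial guesses of zero
for it in range(25):
    x_old = x.copy()
    for i in range(3):
        s = A[i] @ x - A[i, i] * x[i]        # sum over j != i, newest values
        x[i] = (b[i] - s) / A[i, i]
    if np.max(np.abs((x - x_old) / x)) * 100 < 1e-5:   # eps_a < eps_s
        break
print(it + 1, x)                             # iterations used and the solution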

154 Fig. 11.4

155 Convergence Criterion for Gauss-Seidel Method
The Gauss-Seidel method has the two fundamental problems of any iterative method: it is sometimes nonconvergent, and when it does converge, it can converge very slowly. Recall that the sufficient conditions for convergence of two equations, u(x, y) and v(x, y), are that the magnitudes of the partial derivatives satisfy |∂u/∂x| + |∂u/∂y| < 1 and |∂v/∂x| + |∂v/∂y| < 1.

156 Similarly, in the case of two simultaneous equations, the Gauss-Seidel algorithm can be expressed as x1 = (b1 − a12x2)/a11 and x2 = (b2 − a21x1)/a22.

157 Substitution into the convergence criterion for two linear equations yields |a12/a11| < 1 and |a21/a22| < 1. In other words, the absolute values of the slopes must be less than unity for convergence; this is the condition of diagonal dominance, in which each diagonal element is larger in magnitude than the off-diagonal elements in its row.

158 Figure 11.5

159 Part 4 - Chapter 13

160 Optimization Part 4 Root finding and optimization are related: both involve guessing and searching for a point on a function. The fundamental difference is that root finding searches for zeros of a function or functions, whereas optimization finds the minimum or the maximum of a function of several variables.
161 figure PT4.1

162 Mathematical Background
An optimization or mathematical programming problem can generally be stated as: Find x, which minimizes or maximizes f(x), subject to
di(x) ≤ ai, i = 1, 2, …, m
ei(x) = bi, i = 1, 2, …, p (*)
where x is an n-dimensional design vector, f(x) is the objective function, the di(x) are inequality constraints, the ei(x) are equality constraints, and the ai and bi are constants.

163 Optimization problems can be classified on the basis of the form of f(x):
If f(x) and the constraints are linear, we have linear programming. If f(x) is quadratic and the constraints are linear, we have quadratic programming. If f(x) is not linear or quadratic and/or the constraints are nonlinear, we have nonlinear programming. When the constraint equations (*) are included, we have a constrained optimization problem; otherwise, it is an unconstrained optimization problem.

164 Figure PT4.5

165 One-Dimensional Unconstrained Optimization Chapter 13
In multimodal functions, both local and global optima can occur. In almost all cases, we are interested in finding the absolute highest or lowest value of a function. Figure 13.1

166 How do we distinguish a global optimum from a local one?
By graphing, to gain insight into the behavior of the function. By using randomly generated starting guesses and picking the largest of the located optima as global. By perturbing the starting point to see if the routine returns a better point or the same local optimum.

167 Golden-Section Search
A unimodal function has a single maximum or minimum in a given interval. For a unimodal function: First pick two points [xl, xu] that bracket the extremum. Pick an additional, third point within this interval to determine whether a maximum has occurred; then pick a fourth point to determine whether the maximum has occurred within the first three or the last three points. The key to making this approach efficient is choosing the intermediate points wisely, minimizing function evaluations by reusing old values in place of new ones.

168 Figure 13.2

169 The first condition specifies that the sum of the two sub-lengths l1 and l2 must equal the original interval length, l0 = l1 + l2. The second says that the ratios of the lengths must be equal, l1/l0 = l2/l1. Together these give the golden ratio, R = (sqrt(5) − 1)/2 ≈ 0.61803.

170 Figure 13.4

171 The method starts with two initial guesses, xl and xu, that bracket one local extremum of f(x).
Next, two interior points x1 and x2 are chosen according to the golden ratio: d = R(xu − xl), x1 = xl + d, x2 = xu − d. The function is evaluated at these two interior points.

172 Two results can occur: If f(x1) > f(x2), then the domain of x to the left of x2, from xl to x2, can be eliminated because it does not contain the maximum; x2 becomes the new xl for the next round. If f(x2) > f(x1), then the domain of x to the right of x1, from x1 to xu, is eliminated; x1 becomes the new xu for the next round. The new interior point is determined as before.

173 The real benefit of using the golden ratio is that, because the original x1 and x2 were chosen with it, one of the old interior points can be reused at each iteration; we do not need to recalculate all the function values for the next iteration. A sketch follows.
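A minimal Python sketch of golden-section search for a maximum, reusing one interior point per iteration as described above (the test function and interval are illustrative):

import math

def golden_max(f, xl, xu, max_it=50):
    R = (math.sqrt(5.0) - 1.0) / 2.0          # golden ratio, ~0.61803
    d = R * (xu - xl)
    x1, x2 = xl + d, xu - d
    f1, f2 = f(x1), f(x2)
    for _ in range(max_it):
        if f1 > f2:                  # maximum in [x2, xu]: discard [xl, x2]
            xl, x2, f2 = x2, x1, f1  # old x1 is reused as the new x2
            x1 = xl + R * (xu - xl)
            f1 = f(x1)               # only one new function evaluation
        else:                        # maximum in [xl, x1]: discard [x1, xu]
            xu, x1, f1 = x1, x2, f2
            x2 = xu - R * (xu - xl)
            f2 = f(x2)
    return (xl + xu) / 2.0

# maximum of f(x) = 2 sin(x) - x^2/10, located near x = 1.4276
print(golden_max(lambda x: 2 * math.sin(x) - x**2 / 10, 0.0, 4.0))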

174 Newton's Method An approach similar to the Newton-Raphson method can be used to find an optimum of f(x) by defining a new function g(x) = f′(x). Because the same optimal value x* satisfies f′(x*) = g(x*) = 0, the iteration
xi+1 = xi − f′(xi)/f″(xi)
can be used as a technique to find the extremum of f(x).

175 Chapter 14

176 Multidimensional Unconstrained Optimization Chapter 14
Techniques to find the minimum and maximum of a function of several variables are described. These techniques are classified as: those that require derivative evaluation (gradient, or descent/ascent, methods), and those that do not require derivative evaluation (non-gradient, or direct, methods).

177 Figure 14.1

178 DIRECT METHODS Random Search
Based on evaluating the function at randomly selected values of the independent variables. If a sufficient number of samples are taken, the optimum will eventually be located. Example: the maximum of the function f(x, y) = y − x − 2x^2 − 2xy − y^2 can be found using a random number generator, as in the sketch below.
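A minimal Python sketch of such a random search over this function (the bounds and sample count are illustrative):

import random

def f(x, y):
    return y - x - 2 * x**2 - 2 * x * y - y**2

best = (-float("inf"), None, None)
random.seed(1)                             # fixed seed for reproducibility
for _ in range(100_000):
    x = random.uniform(-2.0, 2.0)          # sample the design space uniformly
    y = random.uniform(-2.0, 2.0)
    if f(x, y) > best[0]:
        best = (f(x, y), x, y)
print(best)   # the analytic maximum is f(-1, 1.5) = 1.25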

179 Figure 14.2

180 Advantages/ Works even for discontinuous and nondifferentiable functions. Always finds the global optimum rather than a local optimum. Disadvantages/ As the number of independent variables grows, the task can become onerous. Not efficient: it does not account for the behavior of the underlying function.

181 Univariate and Pattern Searches
More efficient than random search, and it still does not require derivative evaluation. The basic strategy is to change one variable at a time while the other variables are held constant. The problem is thus reduced to a sequence of one-dimensional searches that can be solved by a variety of methods. The search becomes less efficient, however, as you approach the maximum.

182 Figure 14.3

183 Pattern directions can be used to shoot directly along the ridge toward the maximum.
Figure 14.4

184 Figure 14.5 The best-known algorithm, Powell's method, is based on the observation that if points 1 and 2 are obtained by one-dimensional searches in the same direction but from different starting points, then the line formed by points 1 and 2 will be directed toward the maximum. Such lines are called conjugate directions.

185 GRADIENT METHODS Gradients and Hessians
The Gradient/ If f(x, y) is a two-dimensional function, the gradient vector tells us what direction is the steepest ascent and how much we will gain by taking that step. The directional derivative of f(x, y) at the point x = a and y = b is built from the partial derivatives ∂f/∂x and ∂f/∂y evaluated there.

186 Figure 14.6 For n dimensions

187 The Hessian/ For one-dimensional functions, both the first and second derivatives provide valuable information for searching out optima. The first derivative provides (a) the steepest trajectory of the function and (b) an indication of when we have reached an optimum. The second derivative tells us whether we are at a maximum or a minimum. For two-dimensional functions, determining whether a maximum or a minimum occurs involves not only the partial derivatives with respect to x and y but also the second partials with respect to x and y.

188 Figure 14.7 Figure 14.8

189 Assuming that the partial derivatives are continuous at and near the point being evaluated, the discriminating quantity |H| is equal to the determinant of the Hessian, the matrix made up of the second derivatives:
|H| = (∂^2f/∂x^2)(∂^2f/∂y^2) − (∂^2f/∂x∂y)^2

190 The Steepest Ascent Method
Figure 14.9 Start at an initial point (x0, y0) and determine the direction of steepest ascent, that is, the gradient. Then search along the direction of the gradient, h0, until a maximum is found. The process is then repeated.

191 The problem thus has two parts: determining the "best direction," and determining the "best value" along that search direction. The steepest ascent method uses the gradient as its choice for the "best" direction. To transform a function of x and y into a function of the distance h along the gradient direction, substitute x = x0 + (∂f/∂x)h and y = y0 + (∂f/∂y)h. A sketch follows.
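A minimal Python sketch of steepest ascent on the earlier example function f(x, y) = y − x − 2x^2 − 2xy − y^2, using a crude scan over h in place of a formal one-dimensional search (the starting point, scan range, and iteration count are illustrative):

import numpy as np

def f(p):
    x, y = p
    return y - x - 2 * x**2 - 2 * x * y - y**2

def grad(p):
    x, y = p
    return np.array([-1 - 4 * x - 2 * y, 1 - 2 * x - 2 * y])

p = np.array([-1.0, 1.0])                     # starting point (x0, y0)
for _ in range(20):
    g = grad(p)                               # best direction: the gradient
    hs = np.linspace(0.0, 1.0, 201)           # scan the distance h along it
    h_best = max(hs, key=lambda h: f(p + h * g))
    p = p + h_best * g                        # step to the 1-D maximum
print(p, f(p))                                # approaches (-1, 1.5), f = 1.25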

192 If xo=1 and yo=2 Figure 14.10

193 Chapter 15

194 Constrained Optimization Chapter 15
LINEAR PROGRAMMING An optimization approach that deals with meeting a desired objective, such as maximizing profit or minimizing cost, in the presence of constraints such as limited resources. The mathematical functions representing both the objective and the constraints are linear.

195 Standard Form/ The basic linear programming problem consists of two major parts: the objective function and a set of constraints. For a maximization problem, the objective function is generally expressed as
Maximize Z = c1x1 + c2x2 + … + cnxn
where cj = payoff of each unit of the jth activity that is undertaken, xj = magnitude of the jth activity, and Z = total payoff due to the total number of activities.

196 The constraints can be represented generally as
ai1x1 + ai2x2 + … + ainxn ≤ bi
where aij = amount of the ith resource that is consumed for each unit of the jth activity and bi = amount of the ith resource that is available. The second general type of constraint specifies that all activities must have a nonnegative value, xi ≥ 0. Together, the objective function and the constraints specify the linear programming problem.

197 Figure 15.1

198 Possible outcomes that can generally be obtained in a linear programming problem/
Unique solution: the maximum objective function line intersects a single point. Alternate solutions: the problem has an infinite number of optima corresponding to a line segment. No feasible solution. Unbounded problems: the problem is under-constrained and therefore open-ended.

199 Figure 15.2

200 The Simplex Method/ Assumes that the optimal solution will be an extreme point. The approach must discern whether during problem solution an extreme point occurs. To do this, the constraint equations are reformulated as equalities by introducing slack variables.

201 A slack variable measures how much of a constrained resource is left unused, e.g., for
7x1 + 11x2 ≤ 77
if we define a slack variable S1 as the amount of raw gas that is not used for a particular production level (x1, x2) and add it to the left side of the constraint, it makes the relationship exact:
7x1 + 11x2 + S1 = 77
If the slack variable is positive, we have some slack, that is, some surplus that is not being used. If it is negative, it tells us that we have exceeded the constraint. If it is zero, we have exactly met the constraint and used up all the allowable resource. A numerical sketch follows.
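A hedged sketch of solving such a problem numerically with scipy.optimize.linprog. Only the constraint 7x1 + 11x2 ≤ 77 comes from the slide; the objective function and the second constraint are made-up illustrations (linprog minimizes, so the payoff coefficients are negated to maximize):

import numpy as np
from scipy.optimize import linprog

c = [-150.0, -175.0]                 # maximize 150*x1 + 175*x2 (hypothetical)
A_ub = [[7.0, 11.0],                 # 7*x1 + 11*x2 <= 77 (from the slide)
        [10.0, 8.0]]                 # 10*x1 + 8*x2 <= 80 (hypothetical)
b_ub = [77.0, 80.0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)               # optimal activities and total payoff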

202 Maximize

203 We now have a system of linear algebraic equations.
For even moderately sized problems, the approach can involve solving a great number of equations: with m equations and n unknowns, the number of candidate combinations of simultaneous equations to be solved grows combinatorially.

204 Figure 15.3

205 Part 5 - Chapter 17

206 Part 5 CURVE FITTING Describes techniques to fit curves (curve fitting) to discrete data to obtain intermediate estimates. There are two general approaches to curve fitting: where the data exhibit a significant degree of scatter, the strategy is to derive a single curve that represents the general trend of the data; where the data are very precise, the strategy is to pass a curve or a series of curves through each of the points. In engineering, two types of applications are encountered: Trend analysis, predicting values of the dependent variable, which may include extrapolation beyond the data points or interpolation between them. Hypothesis testing, comparing an existing mathematical model with measured data.

207 Figure PT5.1

208 Mathematical Background
Simple Statistics/ In the course of an engineering study, if several measurements are made of a particular quantity, additional insight can be gained by summarizing the data in one or more well-chosen statistics that convey as much information as possible about specific characteristics of the data set. These descriptive statistics are most often selected to represent the location of the center of the distribution of the data and the degree of spread of the data.

209 Arithmetic mean. The sum of the individual data points (yi) divided by the number of points (n): ȳ = Σyi/n.
Standard deviation. The most common measure of spread for a sample: sy = sqrt(St/(n − 1)), where St = Σ(yi − ȳ)^2, or, in the computationally convenient form, sy^2 = (Σyi^2 − (Σyi)^2/n)/(n − 1).

210 Variance. A representation of spread by the square of the standard deviation: sy^2 = St/(n − 1), where n − 1 is the number of degrees of freedom.
Coefficient of variation. Quantifies the spread of the data relative to its center: c.v. = (sy/ȳ) x 100%.

211 Figure PT5.2

212 Figure PT5.3

213 Figure PT5.6

214 Least Squares Regression Chapter 17
Linear Regression Fitting a straight line to a set of paired observations (x1, y1), (x2, y2), …, (xn, yn): y = a0 + a1x + e, where a1 is the slope, a0 the intercept, and e the error, or residual, between the model and the observations.

215 Criteria for a “Best” Fit/
Minimize the sum of the residual errors for all available data, Σei = Σ(yi − a0 − a1xi), where n is the total number of points. However, this is an inadequate criterion, and so is the sum of the absolute values.

216 Figure 17.2

217 The best strategy is to minimize the sum of the squares of the residuals between the measured y and the y calculated with the linear model, Sr = Σ(yi − a0 − a1xi)^2. This criterion yields a unique line for a given set of data.

218 Least-Squares Fit of a Straight Line/
Setting the partial derivatives of Sr with respect to a0 and a1 to zero gives the normal equations, which can be solved simultaneously for
a1 = (nΣxiyi − ΣxiΣyi)/(nΣxi^2 − (Σxi)^2) and a0 = ȳ − a1x̄
where x̄ and ȳ are the mean values. A sketch follows.
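A minimal Python sketch of these normal equations (the data points are illustrative):

import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0])
y = np.array([0.5, 2.5, 2.0, 4.0, 3.5, 6.0, 5.5])

n = len(x)
a1 = (n * np.sum(x * y) - np.sum(x) * np.sum(y)) / \
     (n * np.sum(x**2) - np.sum(x) ** 2)       # slope from the normal equations
a0 = np.mean(y) - a1 * np.mean(x)              # intercept via the mean values
print(f"y = {a0:.4f} + {a1:.4f} x")            # approx y = 0.0714 + 0.8393 x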

219 Figure 17.3

220 Figure 17.4

221 Figure 17.5

222 "Goodness" of our fit/ If the total sum of the squares around the mean for the dependent variable y is St, and the sum of the squares of the residuals around the regression line is Sr, then St − Sr quantifies the improvement, or error reduction, due to describing the data in terms of a straight line rather than as an average value. The coefficient of determination is r^2 = (St − Sr)/St, and sqrt(r^2) = r is the correlation coefficient.

223 For a perfect fit, Sr = 0 and r = r^2 = 1, signifying that the line explains 100 percent of the variability of the data. For r = r^2 = 0, Sr = St and the fit represents no improvement.

224 Polynomial Regression
Some engineering data is poorly represented by a straight line. For these cases a curve is better suited to fit the data. The least squares method can readily be extended to fit the data to higher order polynomials (Sec. 17.2).

225 General Linear Least Squares
Minimized by taking its partial derivative w.r.t. each of the coefficients and setting the resulting equation equal to zero

226 Chapter 18

227 Interpolation Chapter 18
Estimation of intermediate values between precise data points. The most common method is polynomial interpolation. Although there is one and only one nth-order polynomial that fits n + 1 points, there are a variety of mathematical formats in which this polynomial can be expressed: the Newton polynomial and the Lagrange polynomial.

228 Figure 18.1

229 Newton’s Divided-Difference Interpolating Polynomials
Linear Interpolation/ The simplest form of interpolation, connecting two data points with a straight line:
f1(x) = f(x0) + ((f(x1) − f(x0))/(x1 − x0))(x − x0)
f1(x) designates a first-order interpolating polynomial. The slope (f(x1) − f(x0))/(x1 − x0) is a finite divided-difference approximation to the first derivative, and the whole expression is the linear-interpolation formula.

230 Figure 18.2

231 Quadratic Interpolation/
If three data points are available, the estimate is improved by introducing some curvature into the line connecting the points: f2(x) = b0 + b1(x − x0) + b2(x − x0)(x − x1). A simple procedure can be used to determine the values of the coefficients.

232 General Form of Newton’s Interpolating Polynomials/
fn(x) = f(x0) + (x − x0)f[x1, x0] + (x − x0)(x − x1)f[x2, x1, x0] + … + (x − x0)…(x − xn−1)f[xn, …, x0], where the bracketed function evaluations are finite divided differences. A sketch follows.
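A minimal Python sketch of building the divided-difference table and evaluating the polynomial in nested form (the data, ln x sampled at x = 1, 4, 6 and then estimating ln 2, are an illustrative choice):

import math

def divided_diff(xs, ys):
    c = list(ys)
    n = len(xs)
    for j in range(1, n):                # column j of the difference table
        for i in range(n - 1, j - 1, -1):
            c[i] = (c[i] - c[i - 1]) / (xs[i] - xs[i - j])
    return c                             # c[k] = f[x0, ..., xk]

def newton_eval(xs, c, x):
    p = c[-1]                            # nested (Horner-like) evaluation
    for k in range(len(c) - 2, -1, -1):
        p = p * (x - xs[k]) + c[k]
    return p

xs = [1.0, 4.0, 6.0]
c = divided_diff(xs, [math.log(v) for v in xs])
print(newton_eval(xs, c, 2.0), math.log(2.0))   # ~0.56584 vs 0.69315 (true)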

233 Errors of Newton’s Interpolating Polynomials/
The structure of interpolating polynomials is similar to the Taylor series expansion in the sense that finite divided differences are added sequentially to capture the higher-order derivatives. For an nth-order interpolating polynomial, an analogous relationship holds for the error, involving the (n+1)th derivative at a point ξ that lies somewhere in the interval containing the unknown and the data. For non-differentiable (e.g., tabulated) functions, if an additional point f(xn+1) is available, an alternative formula can be used that does not require prior knowledge of the function.

234 Lagrange Interpolating Polynomials
The Lagrange interpolating polynomial is simply a reformulation of Newton's polynomial that avoids the computation of divided differences:

235 As with Newton’s method, the Lagrange version has an estimated error of:

236 Figure 18.10

237 Coefficients of an Interpolating Polynomial
Although both the Newton and Lagrange polynomials are well suited for determining intermediate values between points, they do not provide a polynomial in conventional form: Since n+1 data points are required to determine n+1 coefficients, simultaneous linear systems of equations can be used to calculate “a”s.

238 Where “x”s are the knowns and “a”s are the unknowns.

239 Figure 18.13

240 Spline Interpolation There are cases where polynomials can lead to erroneous results because of round-off error and overshoot. An alternative approach is to apply lower-order polynomials to subsets of data points. Such connecting polynomials are called spline functions.

241 Figure 18.14

242 Figure 18.15

243 Figure 18.16

244 Figure 18.17

245 Chapter 19

246 Fourier Approximation Chapter 19
Engineers often deal with systems that oscillate or vibrate. Therefore trigonometric functions play a fundamental role in modeling such problems. Fourier approximation represents a systematic framework for using trigonometric series for this purpose.

247 Figure 19.2

248 Curve Fitting with Sinusoidal Functions
A periodic function f(t) is one for which f(t) = f(t + T), where T is a constant called the period, the smallest value of time for which this equation holds. Any waveform that can be described as a sine or cosine is called a sinusoid: f(t) = A0 + C1 cos(ω0t + θ). Four parameters serve to characterize the sinusoid: the mean value A0 sets the average height above the abscissa; the amplitude C1 specifies the height of the oscillation; the angular frequency ω0 characterizes how often the cycles occur; and the phase angle, or phase shift, θ parameterizes the extent to which the sinusoid is shifted horizontally.

249 Figure 19.3

250 Figure 19.4

251 An alternative model that still requires four parameters, but is cast in the format of a general linear model, can be obtained by invoking the trigonometric identity C1 cos(ω0t + θ) = A1 cos(ω0t) + B1 sin(ω0t), where A1 = C1 cos θ and B1 = −C1 sin θ.

252 Least-squares Fit of a Sinusoid/
The sinusoid equation can be thought of as a linear least-squares model, y = A0 + A1 cos(ω0t) + B1 sin(ω0t) + e. Thus our goal is to determine coefficient values that minimize Sr = Σ(yi − [A0 + A1 cos(ω0ti) + B1 sin(ω0ti)])^2.

253 Continuous Fourier Series
In the course of studying heat-flow problems, Fourier showed that an arbitrary periodic function can be represented by an infinite series of sinusoids of harmonically related frequencies. For a function with period T, a continuous Fourier series can be written
f(t) = a0 + Σ [ak cos(kω0t) + bk sin(kω0t)], summed over k = 1, 2, …

254 Here ω0 = 2π/T is called the fundamental frequency, and its constant multiples 2ω0, 3ω0, etc., are called harmonics. The coefficients of the equation can be calculated as
ak = (2/T) ∫ f(t) cos(kω0t) dt, bk = (2/T) ∫ f(t) sin(kω0t) dt, a0 = (1/T) ∫ f(t) dt
with each integral taken over one period.

255 Frequency and Time Domains
Although it is less familiar, the frequency domain provides an alternative perspective for characterizing the behavior of oscillating functions. Just as amplitude can be plotted versus time, it can also be plotted against frequency. In such a plot, the magnitude or amplitude of the curve f(t) is the dependent variable, and time t and frequency f = ω0/2π are the independent variables. The amplitude and time axes form a time plane, and the amplitude and frequency axes form a frequency plane.

256 Figure 19.7

257 Fourier Integral and Transform
Although the Fourier series is a useful tool for investigating the spectrum of a periodic function, there are many waveforms that do not repeat themselves regularly, such as a signal produced by a lightning bolt. Such a nonrecurring signal exhibits a continuous frequency spectrum, and the Fourier integral is the primary tool available for analyzing it.

258 The transition from a periodic to a nonperiodic function can be effected by allowing the period to approach infinity. Then the Fourier series reduces to the Fourier transform pair:
F(iω) = ∫ f(t) e^(−iωt) dt (the Fourier integral, or Fourier transform, of f(t))
f(t) = (1/2π) ∫ F(iω) e^(iωt) dω (the inverse Fourier transform)

259 Discrete Fourier Transform (DFT)
In engineering, functions are often represented by finite sets of discrete values, and data are often collected in or converted to such a discrete format. An interval from 0 to T can be divided into N equispaced subintervals with widths of Δt = T/N.

260 Figure 19.11

261 Fast Fourier Transform (FFT)
The FFT is an algorithm that computes the DFT in an extremely economical (fast) fashion by using the results of previous computations to reduce the number of operations. It exploits the periodicity and symmetry of trigonometric functions to compute the transform with approximately N log2 N operations. Thus for N = 50 samples, the FFT is about 10 times faster than the standard DFT; for N = 1000, about 100 times faster. Two common variants are the Sande-Tukey and Cooley-Tukey algorithms. A sketch follows.
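A minimal sketch using NumPy's FFT (a Cooley-Tukey-style implementation) to recover the mean value and a harmonic amplitude from a sampled sinusoid (the signal parameters are illustrative):

import numpy as np

N, T = 64, 1.0                        # N samples over one period T
t = np.arange(N) * (T / N)            # equispaced times, dt = T/N
f = 1.7 + 1.0 * np.cos(2 * np.pi * 4 * t / T)   # mean 1.7 plus a 4th harmonic

F = np.fft.fft(f)                     # discrete Fourier transform, O(N log N)
print(abs(F[0]) / N)                  # ~1.7, the mean value A0
print(2 * abs(F[4]) / N)              # ~1.0, amplitude of the 4th harmonic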

262 Fig.19.14

263 Part 6 - Chapter 21

264 Part 6 Numerical Differentiation and Integration
Calculus is the mathematics of change. Because engineers must continuously deal with systems and processes that change, calculus is an essential tool of engineering. At the heart of calculus stand the mathematical concepts of differentiation and integration:

265 Figure PT6.1

266 Figure PT6.2

267 Noncomputer Methods for Differentiation and Integration
The function to be differentiated or integrated will typically be in one of the following three forms: A simple continuous function such as polynomial, an exponential, or a trigonometric function. A complicated continuous function that is difficult or impossible to differentiate or integrate directly. A tabulated function where values of x and f(x) are given at a number of discrete points, as is often the case with experimental or field data.

268 Figure PT6.4

269 Figure PT6.7

270 Figure PT6.10

271 Newton-Cotes Integration Formulas Chapter 21
The Newton-Cotes formulas are the most common numerical integration schemes. They are based on the strategy of replacing a complicated function or tabulated data with an approximating polynomial that is easy to integrate:
I = ∫[a,b] f(x) dx ≈ ∫[a,b] fn(x) dx
where fn(x) is an nth-order interpolating polynomial.

272 Figure 21.1

273 Figure 21.2

274 The Trapezoidal Rule The trapezoidal rule is the first of the Newton-Cotes closed integration formulas, corresponding to the case where the polynomial is first order. The area under this first-order polynomial is an estimate of the integral of f(x) between the limits a and b:
I = (b − a)(f(a) + f(b))/2 (the trapezoidal rule)

275 Figure 21.4

276 Error of the Trapezoidal Rule/
When we employ the integral under a straight-line segment to approximate the integral under a curve, the error may be substantial:
Et = −(1/12) f″(ξ)(b − a)^3
where ξ lies somewhere in the interval from a to b.

277 Figure 21.6

278 The Multiple Application Trapezoidal Rule/
One way to improve the accuracy of the trapezoidal rule is to divide the integration interval from a to b into a number of segments and apply the method to each segment. The areas of the individual segments can then be added to yield the integral for the entire interval. Substituting the trapezoidal rule for each integral yields, for n equal segments,
I = (b − a)(f(x0) + 2Σf(xi) + f(xn))/(2n)
with the sum taken over the interior points. A sketch follows.
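A minimal Python sketch of the multiple-application rule (the integrand and segment counts are illustrative):

import math

def trap(f, a, b, n):
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):            # interior points are weighted twice
        s += 2.0 * f(a + i * h)
    return (b - a) * s / (2.0 * n)

for n in (2, 4, 8, 16):
    print(n, trap(math.sin, 0.0, math.pi, n))   # the exact integral is 2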

279 Figure 21.8

280 An error estimate for the multiple-application trapezoidal rule can be obtained by summing the individual errors for each segment:
Ea = −((b − a)^3/(12n^2)) f″avg
where f″avg is the average second derivative over the interval. Thus, if the number of segments is doubled, the truncation error will be quartered.

281 Simpson's Rules A more accurate estimate of an integral is obtained if a higher-order polynomial is used to connect the points. The formulas that result from taking the integrals under such polynomials are called Simpson's rules. Simpson's 1/3 Rule/ Results when a second-order interpolating polynomial is used:
I = (b − a)(f(x0) + 4f(x1) + f(x2))/6

282 Figure 21.10

283 A single-segment application of Simpson's 1/3 rule has a truncation error of
Et = −((b − a)^5/2880) f^(4)(ξ)
so Simpson's 1/3 rule is more accurate than the trapezoidal rule.

284 The Multiple-Application Simpson’s 1/3 Rule/
Just as with the trapezoidal rule, Simpson's rule can be improved by dividing the integration interval into a number of segments of equal width. It yields accurate results and is considered superior to the trapezoidal rule for most applications. However, it is limited to cases where the values are equispaced, and to situations where there are an even number of segments and an odd number of points.

285 Figure 21.11

286 Simpson's 3/8 Rule/ An odd-segment, even-point formula used in conjunction with the 1/3 rule to permit evaluation of both even and odd numbers of segments. It is somewhat more accurate for a single application.

287 Figure 21.12

288 Chapter 22

289 Integration of Equations Chapter 22
Functions to be integrated numerically come in two forms: A table of values, where we are limited to the number of points that are given. A function, where we can generate as many values of f(x) as needed to attain acceptable accuracy. This chapter focuses on two techniques designed to analyze functions: Romberg integration and Gauss quadrature.

290 Romberg Integration Is based on successive application of the trapezoidal rule to attain efficient numerical integrals of functions. Richardson’s Extrapolation/ Uses two estimates of an integral to compute a third and more accurate approximation.

291 The estimate and error associated with a multiple-application trapezoidal rule can be represented generally as I = I(h) + E(h), where I = the exact value of the integral, I(h) = the approximation from an n-segment application of the trapezoidal rule with step size h, and E(h) = the truncation error (assumed constant regardless of step size).

292 For two trapezoidal estimates with step sizes h1 and h2 = h1/2, combining them as I ≈ (4/3)I(h2) − (1/3)I(h1) gives an improved estimate of the integral, as in the sketch below.
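A minimal Python sketch of this combination, which cancels the leading O(h^2) error term (the integrand is illustrative; trap is a compact multiple-application trapezoidal rule):

import math

def trap(f, a, b, n):
    h = (b - a) / n
    return h * (f(a) + f(b) + 2.0 * sum(f(a + i * h) for i in range(1, n))) / 2.0

I_h  = trap(math.sin, 0.0, math.pi, 4)       # step size h (4 segments)
I_h2 = trap(math.sin, 0.0, math.pi, 8)       # step size h/2 (8 segments)
print((4.0 * I_h2 - I_h) / 3.0)              # improved, O(h^4) estimate of 2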

293 Figure 22.3

294 Gauss Quadrature Gauss quadrature implements a strategy of positioning any two points on a curve to define a straight line that would balance the positive and negative errors. Hence the area evaluated under this straight line provides an improved estimate of the integral.

295 Figure 22.6

296 Method of Undetermined Coefficients/
The trapezoidal rule yields exact results when the function being integrated is a constant or a straight line, such as y = 1 and y = x. Imposing these two conditions and solving simultaneously for the two unknown coefficients recovers the trapezoidal rule.

297 Figure 22.7

298 Derivation of the Two-Point Gauss-Legendre Formula/
The object of Gauss quadrature is to determine equations of the form I ≈ c0 f(x0) + c1 f(x1). However, in contrast to the trapezoidal rule, which uses fixed end points a and b, the function arguments x0 and x1 are not fixed end points but unknowns. Thus, the four unknowns to be evaluated require four conditions. The first two conditions are obtained by assuming that the above equation for I fits the integral of a constant and of a linear function exactly. The other two conditions are obtained by extending this reasoning to parabolic and cubic functions.

299 Solving the four conditions simultaneously yields c0 = c1 = 1, x0 = −1/√3, and x1 = 1/√3, so that
I ≅ f(−1/√3) + f(1/√3)
This yields an integral estimate that is third-order accurate, i.e., exact for cubics.

300 Figure 22.8

301 Notice that the integration limits are from −1 to 1. This was done to simplify the mathematics and to make the formulation as general as possible. A simple change of variable, x = (b + a)/2 + (b − a) xd/2, is used to translate other limits of integration into this form. Provided that the higher-order derivatives do not increase substantially with an increasing number of points (n), Gauss quadrature is superior to the Newton-Cotes formulas; the error for the Gauss-Legendre formulas depends on a correspondingly high-order derivative of the function. A sketch with the change of variable built in follows.
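A Python sketch of the two-point Gauss-Legendre formula with the change of variable built in (illustrative code):

```python
import math

def gauss2(f, a, b):
    """Two-point Gauss-Legendre after mapping [a, b] onto [-1, 1]."""
    c = 0.5 * (a + b)            # midpoint of the interval
    m = 0.5 * (b - a)            # half-width; also the Jacobian dx/dt
    t = 1.0 / math.sqrt(3.0)     # Gauss points at +/- 1/sqrt(3)
    return m * (f(c - m * t) + f(c + m * t))

print(gauss2(lambda x: x**3, 0.0, 2.0))  # prints 4.0: exact for cubics
```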

302 Improper Integrals Improper integrals can be evaluated by making a change of variable that transforms the infinite range into one that is finite. For example, with x = 1/t,
∫ from −∞ to −A of f(x) dx = ∫ from −1/A to 0 of (1/t^2) f(1/t) dt
where −A is chosen as a sufficiently large negative value so that the function has begun to approach zero asymptotically at least as fast as 1/x^2.

303 Chapter 23

304 Numerical Differentiation Chapter 23
The notion of numerical differentiation was introduced in Chapter 4. In this chapter, more accurate formulas that retain more terms of the Taylor series expansion will be developed.

305 High Accuracy Differentiation Formulas
High-accuracy divided-difference formulas can be generated by including additional terms from the Taylor series expansion.

306 Inclusion of the 2nd derivative term has improved the accuracy to O(h2).
Similar improved versions can be developed for the backward and centered formulas, as well as for the approximations of the higher derivatives. A sketch comparing the simple and high-accuracy forward formulas follows.
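A Python sketch contrasting the simple O(h) forward difference with the high-accuracy O(h^2) version that retains the 2nd-derivative Taylor term (illustrative code; cos(x) is an assumed test function):

```python
import math

def fwd_simple(f, x, h):
    return (f(x + h) - f(x)) / h                         # O(h)

def fwd_high(f, x, h):
    return (-f(x + 2*h) + 4*f(x + h) - 3*f(x)) / (2*h)   # O(h^2)

x, h = 0.5, 0.25
true = -math.sin(x)  # derivative of cos(x)
print(abs(true - fwd_simple(math.cos, x, h)),
      abs(true - fwd_high(math.cos, x, h)))  # second error is much smaller
```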

307 Richardson Extrapolation
There are two ways to improve derivative estimates when employing finite divided differences: decrease the step size, or use a higher-order formula that employs more points. A third approach, based on Richardson extrapolation, uses two derivative estimates to compute a third, more accurate approximation.

308 For centered difference approximations with O(h^2) error, applying the formula
D ≅ (4/3) D(h2) − (1/3) D(h1), with h2 = h1/2
yields a new derivative estimate of O(h^4). A sketch follows.
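A Python sketch of Richardson extrapolation applied to centered differences (illustrative code):

```python
def centered(f, x, h):
    """Centered difference, O(h^2)."""
    return (f(x + h) - f(x - h)) / (2 * h)

def richardson_deriv(f, x, h):
    """D ~ (4/3)*D(h/2) - (1/3)*D(h): two O(h^2) estimates give an O(h^4) one."""
    return (4.0 * centered(f, x, h / 2) - centered(f, x, h)) / 3.0
```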

309 Derivatives of Unequally Spaced Data
Data from experiments or field studies are often collected at unequal intervals. One way to handle such data is to fit a second-order Lagrange interpolating polynomial to three adjacent points and differentiate it analytically; x is the value at which you want to estimate the derivative. A sketch follows.
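A Python sketch of the resulting formula: differentiate the second-order Lagrange polynomial through three (possibly unequally spaced) points and evaluate at x (illustrative code):

```python
def lagrange_deriv(x, xs, ys):
    """Derivative at x of the 2nd-order Lagrange polynomial through
    the three points (xs[0], ys[0]), (xs[1], ys[1]), (xs[2], ys[2])."""
    x0, x1, x2 = xs
    y0, y1, y2 = ys
    return (y0 * (2*x - x1 - x2) / ((x0 - x1) * (x0 - x2))
            + y1 * (2*x - x0 - x2) / ((x1 - x0) * (x1 - x2))
            + y2 * (2*x - x0 - x1) / ((x2 - x0) * (x2 - x1)))
```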

310 Derivatives and Integrals for Data with Errors
Figure 23.5

311 Part 7 - Chapter 25

312 Part 7 Ordinary Differential Equations
Equations that are composed of an unknown function and its derivatives are called differential equations. Differential equations play a fundamental role in engineering because many physical phenomena are best formulated mathematically in terms of their rate of change. In the falling parachutist model dv/dt = g − (c/m) v, for example, v is the dependent variable and t is the independent variable.

313 When the unknown function depends on a single independent variable, the equation is called an ordinary differential equation (or ODE). A partial differential equation (or PDE) involves two or more independent variables. Differential equations are also classified as to their order: a first-order equation includes a first derivative as its highest derivative, and a second-order equation includes a second derivative. Higher-order equations can be reduced to a system of first-order equations by redefining variables.

314 ODEs and Engineering Practice
Figure PT7.1

315 Figure PT7.2

316 Figure PT7.5

317 Runge-Kutta Methods Chapter 25
This chapter is devoted to solving ordinary differential equations of the form
dy/dx = f(x, y)
Euler’s Method

318 Figure 25.2

319 The first derivative provides a direct estimate of the slope at xi: φ = f(xi, yi), where f(xi, yi) is the differential equation evaluated at xi and yi. This estimate can be substituted into the equation
yi+1 = yi + f(xi, yi) h
so that a new value of y is predicted using the slope to extrapolate linearly over the step size h. A sketch follows.
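A minimal Python sketch of Euler’s method (illustrative code; f is any right-hand side of dy/dx = f(x, y)):

```python
def euler(f, x0, y0, h, n):
    """Integrate dy/dx = f(x, y) over n steps of size h from (x0, y0)."""
    x, y = x0, y0
    history = [(x, y)]
    for _ in range(n):
        y += f(x, y) * h   # extrapolate linearly along the slope at (x, y)
        x += h
        history.append((x, y))
    return history
```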

320 Figure 25.3

321 Error Analysis for Euler’s Method/
Numerical solutions of ODEs involve two types of error: truncation error, which comprises a local truncation error and a propagated truncation error (their sum is the total, or global, truncation error), and round-off errors.

322 The Taylor series provides a means of quantifying the error in Euler’s method. However, it provides only an estimate of the local truncation error, that is, the error created during a single step of the method. In actual problems the functions are more complicated than simple polynomials, so the derivatives needed to evaluate the Taylor series expansion are not always easy to obtain. In conclusion, the error can be reduced by reducing the step size, and if the solution to the differential equation is linear the method will provide error-free predictions, since for a straight line the 2nd derivative is zero.

323 Figure 25.4

324 Improvements of Euler’s method
A fundamental source of error in Euler’s method is that the derivative at the beginning of the interval is assumed to apply across the entire interval. Two simple modifications are available to circumvent this shortcoming: Heun’s method and the midpoint (or improved polygon) method.

325 Heun’s Method/ One method to improve the estimate of the slope involves the determination of two derivatives for the interval: one at the initial point and one at the end point. The two derivatives are then averaged to obtain an improved estimate of the slope for the entire interval, as in the sketch below.
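A Python sketch of Heun’s method as just described (illustrative code):

```python
def heun(f, x0, y0, h, n):
    """Predictor-corrector: average the slopes at the start and predicted end."""
    x, y = x0, y0
    for _ in range(n):
        k1 = f(x, y)               # slope at the initial point
        y_pred = y + k1 * h        # Euler predictor for the end point
        k2 = f(x + h, y_pred)      # slope at the (predicted) end point
        y += 0.5 * (k1 + k2) * h   # corrector: averaged slope over the interval
        x += h
    return x, y
```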

326 Figure 25.9

327 Figure 25.10

328 The Midpoint (or Improved Polygon) Method/
Uses Euler’s method to predict a value of y at the midpoint of the interval; the slope evaluated at this midpoint is then applied to extrapolate linearly over the entire step.

329 Figure 25.12

330 Runge-Kutta Methods (RK)
Runge-Kutta methods achieve the accuracy of a Taylor series approach without requiring the calculation of higher derivatives. The general form is
yi+1 = yi + φ(xi, yi, h) h
where the increment function φ = a1k1 + a2k2 + … + ankn, the a’s, p’s, and q’s are constants, and
k1 = f(xi, yi)
k2 = f(xi + p1h, yi + q11k1h)
k3 = f(xi + p2h, yi + q21k1h + q22k2h), and so on.

331 The k’s are recurrence relationships: k1 appears in the equation for k2, which appears in the equation for k3, and so forth. Because each k is a functional evaluation, this recurrence makes RK methods efficient for computer calculations. Various types of RK methods can be devised by employing different numbers of terms in the increment function, as specified by n. The first-order RK method with n = 1 is in fact Euler’s method. Once n is chosen, values of the a’s, p’s, and q’s are evaluated by setting the general equation equal to terms in a Taylor series expansion.

332 Values of a1, a2, p1, and q11 are evaluated by setting the second-order equation equal to a Taylor series expansion through the second-order term. This yields three equations for the four unknown constants, so a value must be assumed for one of the unknowns in order to solve for the other three.

333 Because we can choose an infinite number of values for a2, there are an infinite number of second-order RK methods. Every version would yield exactly the same results if the solution to the ODE were quadratic, linear, or constant. However, they yield different results when the solution is more complicated, which is typically the case. Three of the most commonly used methods are: Heun’s method with a single corrector (a2 = 1/2), the midpoint method (a2 = 1), and Ralston’s method (a2 = 2/3).

334 Figure 25.14

335 Systems of Equations Many practical problems in engineering and science require the solution of a system of simultaneous ordinary differential equations rather than a single equation. The solution requires that n initial conditions be known at the starting value of x. A sketch using the classical fourth-order RK method follows.
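A Python sketch of the classical 4th-order RK method applied to a system (illustrative code; f returns the vector of right-hand sides):

```python
import numpy as np

def rk4_system(f, x0, y0, h, n):
    """Classical RK4 for dy/dx = f(x, y) with vector y."""
    x, y = x0, np.asarray(y0, dtype=float)
    for _ in range(n):
        k1 = np.asarray(f(x, y))
        k2 = np.asarray(f(x + h/2, y + h/2 * k1))
        k3 = np.asarray(f(x + h/2, y + h/2 * k2))
        k4 = np.asarray(f(x + h, y + h * k3))
        y = y + (h / 6.0) * (k1 + 2*k2 + 2*k3 + k4)
        x += h
    return x, y

# Hypothetical usage: the oscillator y1' = y2, y2' = -y1 with y1(0)=1, y2(0)=0
print(rk4_system(lambda x, y: [y[1], -y[0]], 0.0, [1.0, 0.0], 0.1, 10))
```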

336 Adaptive Runge-Kutta Methods
Figure 25.20 For an ODE with an abruptly changing solution, a constant step size can represent a serious limitation.

337 Step-Size Control/ The strategy is to increase the step size if the error is too small and decrease it if the error is too large. Press et al. (1992) have suggested the following criterion to accomplish this:
hnew = hpresent |Δnew / Δpresent|^α
where Δpresent is the computed present accuracy, Δnew is the desired accuracy, and α is a constant power that is equal to 0.2 when the step size is increased and 0.25 when the step size is decreased.

338 Implementation of adaptive methods requires an estimate of the local truncation error at each step.
The error estimate can then serve as a basis for either increasing or decreasing the step size.

339 Chapter 26

340 Stiffness and Multistep Methods Chapter 26
Two areas are covered: stiff ODEs, which have both fast and slow components to their solution, and implicit solution techniques together with multistep methods.

341 Stiffness A stiff system is one involving rapidly changing components together with slowly changing ones. Both individual ODEs and systems of ODEs can be stiff. A standard example is dy/dt = −1000y + 3000 − 2000e^(−t); if y(0) = 0, the analytical solution is y = 3 − 0.998e^(−1000t) − 2.002e^(−t), which contains both a fast and a slow exponential.

342 Figure 26.1

343 Insight into the step size required for stability of such a solution can be gained by examining the homogeneous part of the ODE, dy/dt = −ay. Its solution, y = y0 e^(−at), starts at y(0) = y0 and asymptotically approaches zero.

344 If Euler’s method is used to solve the problem numerically, yi+1 = yi + (dyi/dt) h = yi (1 − ah). The stability of this formula depends on the step size h: it requires |1 − ah| ≤ 1, that is, h < 2/a.

345 Thus, for the transient part of the equation, the step size must be less than 2/1000 = to maintain stability. While this criterion maintains stability, an even smaller step size would be required to obtain an accurate solution. Rather than using explicit approaches, implicit methods offer an alternative remedy. An implicit form of Euler’s method can be developed by evaluating the derivative at a future time.

346 The backward, or implicit, Euler’s method evaluates the derivative at the future time: yi+1 = yi + f(xi+1, yi+1) h. For the homogeneous case this gives yi+1 = yi / (1 + ah), which approaches zero as i increases regardless of the step size; the approach is therefore called unconditionally stable. A sketch follows.
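A Python sketch for the linear test problem dy/dt = −a·y, showing why the implicit form is unconditionally stable (illustrative code; a = 1000 matches the stiff example above):

```python
def backward_euler_linear(a, y0, h, n):
    """Backward Euler for dy/dt = -a*y gives y_{i+1} = y_i / (1 + a*h),
    which decays for any h > 0 (unconditionally stable)."""
    y = y0
    for _ in range(n):
        y /= 1.0 + a * h
    return y

# Explicit Euler would need h < 2/1000 = 0.002 here; backward Euler does not:
print(backward_euler_linear(1000.0, 1.0, 0.05, 20))  # decays without oscillating
```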

347 Figure 26.2

348 Figure 26.3

349 Multistep Methods The Non-Self-Starting Heun Method/
Heun’s method uses Euler’s method as a predictor and the trapezoidal rule as a corrector. The predictor is the weak link in the method because it has the greatest error, O(h^2). One way to improve Heun’s method is to develop a predictor that has a local error of O(h^3); this is accomplished by using the midpoint approach over 2h, which requires a previously computed point and makes the method non-self-starting.

350 Figure 26.4

351 Step-Size Control/ Constant Step Size. Variable Step Size.
Constant Step Size. A value for h must be chosen prior to computation. It must be small enough to yield a sufficiently small truncation error, yet it should also be as large as possible to minimize run time and round-off error. Variable Step Size. If the corrector error is greater than some specified error, the step size is decreased. A step size is chosen so that the convergence criterion of the corrector is satisfied in two iterations. A more efficient strategy is to increase and decrease by doubling and halving the step size.

352 Figure 26.6

353 Integration Formulas/ Newton-Cotes Formulas. Open Formulas.
Closed Formulas. In both the open and closed forms, fn(x) is an nth-order interpolating polynomial that is integrated to produce the formula.

354 Adams Formulas (Adams-Bashforth). Open Formulas.
The Adams formulas can be derived in a variety of ways. One way is to write a forward Taylor series expansion around xi. A second-order open Adams formula is
yi+1 = yi + h [(3/2) fi − (1/2) fi−1]
Closed Formulas. A backward Taylor series around xi+1 can be written; the resulting Adams-Moulton formulas are listed in Table 26.2.

355 Figures 26.7

356 Figure 26.8

357 Figure 26.9

358 Higher-Order Multistep Methods/ Milne’s Method.
Uses the three-point Newton-Cotes open formula as a predictor and the three-point Newton-Cotes closed formula (Simpson’s 1/3 rule) as a corrector. Fourth-Order Adams Method. Based on the Adams integration formulas: it uses the fourth-order Adams-Bashforth formula as the predictor and the fourth-order Adams-Moulton formula as the corrector.

359 Chapter 27

360 Boundary-Value and Eigenvalue Problems Chapter 27
An ODE is accompanied by auxiliary conditions, which are used to evaluate the constants of integration that result during the solution of the equation. An nth-order equation requires n conditions. If all conditions are specified at the same value of the independent variable, we have an initial-value problem. If the conditions are specified at different values of the independent variable, usually at the extreme points or boundaries of a system, we have a boundary-value problem.

361 Figure 27.1

362 General Methods for Boundary-value Problems
Figure 27.2

363 For the heated rod, the governing equation is d^2T/dx^2 + h′(Ta − T) = 0, where h′ is the heat transfer coefficient. Boundary conditions: T(0) = 40 and T(10) = 200. Because the equation is linear, an analytical solution is available for comparison.

364 The Shooting Method/ Converts the boundary-value problem into an initial-value problem. A trial-and-error approach is then implemented to solve the initial-value problem. For example, the 2nd-order equation can be expressed as two first-order ODEs: dT/dx = z and dz/dx = h′(T − Ta). An initial value is guessed, say z(0) = 10. The solution is then obtained by integrating the two 1st-order ODEs simultaneously.

365 Using a 4th-order RK method with a step size of 2 yields a value of T(10) that differs from the target boundary value T(10) = 200. Therefore a new guess, z(0) = 20, is made and the computation is performed again. Since the two sets of points, (z, T)1 and (z, T)2, are linearly related for this linear ODE, a linear interpolation formula can be used to compute the value of z(0) that yields the correct solution:
z(0) = z1 + (z2 − z1)(200 − T1)/(T2 − T1)
A sketch of the whole procedure follows.
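A Python sketch of the shooting procedure (illustrative code; h′ = 0.01 and Ta = 20 are assumed parameter values for the heated-rod example, and the RK4 integrator is written inline for self-containment):

```python
def integrate(z0, n=100):
    """Integrate dT/dx = z, dz/dx = h'(T - Ta) from x = 0 to 10 with RK4,
    starting from T(0) = 40 and the guessed slope z(0) = z0."""
    hp, Ta = 0.01, 20.0                 # assumed heat transfer coeff., ambient T
    def f(T, z):
        return z, hp * (T - Ta)
    h = 10.0 / n
    T, z = 40.0, z0
    for _ in range(n):
        k1 = f(T, z)
        k2 = f(T + h/2 * k1[0], z + h/2 * k1[1])
        k3 = f(T + h/2 * k2[0], z + h/2 * k2[1])
        k4 = f(T + h * k3[0], z + h * k3[1])
        T += h/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        z += h/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
    return T

z1, z2 = 10.0, 20.0
T1, T2 = integrate(z1), integrate(z2)
z0 = z1 + (z2 - z1) * (200.0 - T1) / (T2 - T1)   # linear interpolation on z(0)
print(z0, integrate(z0))  # integrate(z0) lands on ~200 since the ODE is linear
```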

366 Figure 27.3

367 Nonlinear Two-Point Problems.
For a nonlinear problem, linear interpolation between two solutions will not necessarily land on the correct boundary value, so a better approach involves recasting the problem as a roots problem: define g(z0) as the difference between the computed and specified boundary values, and drive this new function g(z0) to zero to obtain the solution.

368 Figure 27.4

369 Finite-Difference Methods.
The most common alternative to the shooting method. Finite divided differences are substituted for the derivatives in the original equation, transforming a linear differential equation into a set of simultaneous algebraic equations. The finite-difference equation applies to each of the interior nodes; the equations for the first and last interior nodes include the boundary values Ti−1 and Ti+1, which are specified by the boundary conditions. The resulting system can be solved efficiently, as in the sketch below.
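A Python sketch of the finite-difference solution of the same heated-rod problem (illustrative code; the parameter values are the assumed ones used in the shooting sketch above):

```python
import numpy as np

def bvp_rod(n=4, L=10.0, hp=0.01, Ta=20.0, T_left=40.0, T_right=200.0):
    """Solve T'' + h'(Ta - T) = 0 with central differences on n interior nodes."""
    dx = L / (n + 1)
    diag = 2.0 + hp * dx**2
    A = np.zeros((n, n))
    b = np.full(n, hp * dx**2 * Ta)
    for i in range(n):
        A[i, i] = diag
        if i > 0:
            A[i, i - 1] = -1.0
        if i < n - 1:
            A[i, i + 1] = -1.0
    b[0] += T_left        # boundary temperatures enter the first and
    b[-1] += T_right      # last interior-node equations
    return np.linalg.solve(A, b)   # tridiagonal; a banded solver would also do

print(bvp_rod())
```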

370 Eigenvalue Problems A special class of boundary-value problems that are common in engineering involves vibrations, elasticity, and other oscillating systems. Eigenvalue problems are of the general form
[[A] − λ[I]]{X} = 0

371 λ is the unknown parameter called the eigenvalue, or characteristic value. A solution {X} for such a system is referred to as an eigenvector. The determinant of the matrix [[A] − λ[I]] must equal zero for nontrivial solutions to be possible. Expanding the determinant yields a polynomial in λ, and the roots of this polynomial are the eigenvalues.

372 The Polynomial Method/
When dealing with complicated systems or systems with heterogeneous properties, analytical solutions are often difficult or impossible to obtain, and numerical solutions may be the only practical alternative. Such equations can be solved by substituting a central finite-divided-difference approximation for the derivatives. Writing this equation for a series of nodes yields a homogeneous system of equations; expansion of the determinant of the system yields a polynomial, the roots of which are the eigenvalues.

373 The Power Method/ An iterative approach that can be employed to determine the largest eigenvalue. To determine the largest eigenvalue, the system must be expressed in the form [A]{X} = λ{X}, as in the sketch below.
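A Python sketch of the power method (illustrative code):

```python
import numpy as np

def power_method(A, iters=100):
    """Iterate x <- A x, normalizing by the dominant component; that component
    converges to the largest-magnitude eigenvalue, x to its eigenvector."""
    x = np.ones(A.shape[0])
    lam = 0.0
    for _ in range(iters):
        x = A @ x
        lam = x[np.argmax(np.abs(x))]   # current eigenvalue estimate
        x = x / lam                     # normalize so the dominant entry is 1
    return lam, x

# Hypothetical usage on a small symmetric matrix:
print(power_method(np.array([[2.0, 1.0], [1.0, 3.0]])))
```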

374 Part 8 - Chapter 29

375 Part 8 Partial Differential Equations
Table PT8.1

376 Figure PT8.4

377 Finite Difference: Elliptic Equations Chapter 29
Solution Technique Elliptic equations in engineering are typically used to characterize steady-state, boundary-value problems. For a numerical solution of an elliptic PDE, the PDE is transformed into an algebraic difference equation. Because of its simplicity and general relevance to most areas of engineering, we will use a heated plate as an example for solving elliptic PDEs.

378 Figure 29.1

379 Figure 29.3

380 The Laplacian Difference Equation/
Discretizing the Laplace equation, ∂^2T/∂x^2 + ∂^2T/∂y^2 = 0, with central differences of O[(Δx)^2] and O[(Δy)^2] and taking Δx = Δy gives the Laplacian difference equation
Ti+1,j + Ti−1,j + Ti,j+1 + Ti,j−1 − 4Ti,j = 0
which holds for all interior points.

381 Figure 29.4

382 In addition, boundary conditions along the edges must be specified to obtain a unique solution.
The simplest case is where the temperature at the boundary is set at a fixed value; this is the Dirichlet boundary condition. A balance for node (1,1) is T21 + T01 + T12 + T10 − 4T11 = 0, where T01 and T10 are known boundary temperatures. Similar equations can be developed for the other interior points to yield a set of simultaneous equations.

383 The result is a set of nine simultaneous equations with nine unknowns (one per interior node of a 3 × 3 grid).

384 The Liebmann Method/ Most numerical solutions of the Laplace equation involve systems that are very large. For larger-size grids, a significant number of the coefficients will be zero. For such sparse systems, the most commonly employed approach is Gauss-Seidel, which, when applied to PDEs, is also referred to as Liebmann’s method. A sketch follows.
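A Python sketch of Liebmann’s method with overrelaxation for the heated plate (illustrative code; the boundary temperatures and λ = 1.5 are assumed example values):

```python
import numpy as np

def liebmann(nx=3, ny=3, top=100.0, left=75.0, right=50.0, bottom=0.0,
             lam=1.5, tol=1e-4, max_iter=1000):
    """Gauss-Seidel (Liebmann) iteration on the Laplacian difference equation
    over an nx-by-ny grid of interior nodes with fixed boundary temperatures."""
    T = np.zeros((ny + 2, nx + 2))
    T[0, :], T[-1, :] = bottom, top      # Dirichlet boundary conditions
    T[:, 0], T[:, -1] = left, right
    for _ in range(max_iter):
        biggest = 0.0
        for j in range(1, ny + 1):
            for i in range(1, nx + 1):
                new = 0.25 * (T[j, i-1] + T[j, i+1] + T[j-1, i] + T[j+1, i])
                new = lam * new + (1.0 - lam) * T[j, i]   # overrelaxation
                biggest = max(biggest, abs(new - T[j, i]))
                T[j, i] = new
        if biggest < tol:
            break
    return T[1:-1, 1:-1]   # interior temperatures

print(liebmann())
```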

385 Boundary Conditions We will address problems that involve boundaries at which the derivative is specified and boundaries that are irregularly shaped. Derivative Boundary Conditions/ Known as a Neumann boundary condition. For the heated plate problem, this means the heat flux, rather than the temperature, is specified at the boundary. If the edge is insulated, this derivative is zero.

386 Figure 29.7

387 Thus, the derivative has been incorporated into the balance.
Similar relationships can be developed for derivative boundary conditions at the other edges.

388 Irregular Boundaries Figure 29.9 Many engineering problems exhibit irregular boundaries.

389 First derivatives in the x direction can be approximated with differences whose leg lengths are shortened to account for the distance between the interior node and the irregular boundary.

390 A similar equation can be developed in the y direction. Control-Volume Approach/ Figure 29.12

391 Figure 29.13

392 The control-volume approach resembles the point-wise approach in that points are determined across the domain. In this case, rather than approximating the PDE at a point, the approximation is applied to a volume surrounding the point.

393 Chapter 30

394 Finite Difference: Parabolic Equations Chapter 30
Parabolic equations are employed to characterize time-variable (unsteady-state) problems. Conservation of energy can be used to develop an unsteady-state energy balance for a differential element in a long, thin, insulated rod.

395 Figure 30.1

396 The energy balance together with Fourier’s law of heat conduction yields the heat-conduction equation
k ∂^2T/∂x^2 = ∂T/∂t
Just as with elliptic PDEs, parabolic equations can be solved by substituting finite divided differences for the partial derivatives. In contrast to elliptic PDEs, however, we must now consider changes in time as well as in space; parabolic PDEs are temporally open-ended and involve new issues such as stability.

397 Figure 30.2

398 Explicit Methods The heat-conduction equation requires approximations for the second derivative in space and the first derivative in time, giving
T(i, l+1) = T(i, l) + λ [T(i+1, l) − 2T(i, l) + T(i−1, l)]
where λ = k Δt/(Δx)^2 and l denotes the time level.

399 This equation can be written for all interior nodes on the rod.
It provides an explicit means to compute values at each node for a future time based on the present values at the node and its neighbors.

400 Figure 30.3

401 Convergence and Stability/
Convergence means that as Δx and Δt approach zero, the results of the finite-difference method approach the true solution. Stability means that errors at any stage of the computation are not amplified but are attenuated as the computation progresses. The explicit method is both convergent and stable if λ ≤ 1/2, i.e., Δt ≤ (1/2)(Δx)^2/k. A sketch follows.
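A Python sketch of the explicit (FTCS) scheme with the stability check built in (illustrative code; the thermal diffusivity and the initial and boundary temperatures are assumed example values):

```python
import numpy as np

def ftcs(T0, k, dx, dt, steps):
    """March the heat-conduction equation forward in time explicitly.
    Stable only if lam = k*dt/dx**2 <= 1/2."""
    lam = k * dt / dx**2
    if lam > 0.5:
        raise ValueError("explicit scheme unstable: reduce dt, or enlarge dx")
    T = np.array(T0, dtype=float)       # copy; end values stay fixed (Dirichlet)
    for _ in range(steps):
        T[1:-1] = T[1:-1] + lam * (T[2:] - 2*T[1:-1] + T[:-2])
    return T

# Hypothetical rod: ends held at 100 and 50, interior initially 0
print(ftcs([100, 0, 0, 0, 50], k=0.835, dx=2.0, dt=0.1, steps=100))
```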

402 Figure 30.5

403 Derivative Boundary Conditions/
As was the case for elliptic PDEs, derivative boundary conditions can be readily incorporated into parabolic equations: an imaginary point is introduced at i = −1, providing a vehicle for incorporating the derivative boundary condition into the analysis.

404 A Simple Implicit Method
Implicit methods overcome difficulties associated with explicit methods at the expense of somewhat more complicated algorithms. In implicit methods, the spatial derivative is approximated at an advanced time level l + 1, using a central difference in space that is second-order accurate.

405 This equation applies to all but the first and the last interior nodes, which must be modified to reflect the boundary conditions. The result is m unknowns and m linear algebraic equations, and the system is tridiagonal.

406 Figures 30.6

407 Figure 30.7

408 The Crank-Nicolson Method
Provides an alternative implicit scheme that is second-order accurate in both space and time. To provide this accuracy, difference approximations are developed at the midpoint of the time increment.

409 Parabolic Equations in Two Spatial Dimensions
For two dimensions, the heat-conduction equation applies in more than one spatial dimension: ∂T/∂t = k (∂^2T/∂x^2 + ∂^2T/∂y^2). Standard Explicit and Implicit Schemes/ An explicit solution can be obtained by substituting finite-difference approximations for the partial derivatives. However, this approach is limited by a stringent stability criterion, which increases the required computational effort.

410 The direct application of implicit methods leads to solution of m x n simultaneous equations.
When written for two or three spatial dimensions, these equations lose the valuable property of being tridiagonal, and they require very large matrix storage and computation time.

411 The ADI Scheme/ The alternating-direction implicit, or ADI, scheme provides a means for solving parabolic equations in two spatial dimensions using tridiagonal matrices. Each time increment is executed in two steps. For the first step, the heat-conduction equation is approximated implicitly in one direction while the derivatives in the other direction are written explicitly, at the base point tl where temperatures are known. Consequently, only the three temperature terms in the implicit direction are unknown in each approximation.

412 Figure 30.10

413 Chapter 31

414 Finite-Element Method Chapter 31
The finite-element method provides an alternative to finite-difference methods, especially for systems with irregular geometry, unusual boundary conditions, or heterogeneous composition. This method divides the solution domain into simply shaped regions, or elements. An approximate solution for the PDE can be developed for each element. The total solution is then generated by linking together, or “assembling,” the individual solutions, taking care to ensure continuity at the interelement boundaries.

415 Figure 31.1

416 The General Approach Discretization/
Figure 31.2 The first step is dividing the solution domain into finite elements.

417 Element Equations/ The next step is to develop equations to approximate the solution for each element. This involves choosing an appropriate function with unknown coefficients that will be used to approximate the solution, and then evaluating the coefficients so that the function approximates the solution in an optimal fashion. Choice of Approximation Functions: for the one-dimensional case, the simplest choice is a first-order polynomial, u(x) = a0 + a1x.

418 The approximation, or shape, function can be expressed in terms of the nodal values as u = N1u1 + N2u2, where the interpolation functions are N1 = (x2 − x)/(x2 − x1) and N2 = (x − x1)/(x2 − x1).

419 Figure 31.3

The fact that we are dealing with linear equations facilitates operations such as differentiation and integration; for example, du/dx = (u2 − u1)/(x2 − x1). Obtaining an Optimal Fit of the Function to the Solution: the most common approaches are the direct approach, the method of weighted residuals, and the variational approach.

Mathematically, the resulting element equations will often consist of a set of linear algebraic equations that can be expressed in matrix form, [k]{u} = {F}, where [k] is an element property, or stiffness, matrix; {u} is a column vector of unknowns at the nodes; and {F} is a column vector reflecting the effect of any external influences applied at the nodes.

Assembly/ The assembly process is governed by the concept of continuity: the solutions for contiguous elements are matched so that the unknown values (and sometimes the derivatives) at their common nodes are equivalent. When all the individual versions of the matrix equation are finally assembled, the result is [K]{u′} = {F′}, where [K] is the assemblage property matrix and {u′} and {F′} are the assemblages of the vectors {u} and {F}.

Boundary Conditions/ The matrix equation is modified to account for the system’s boundary conditions. Solution/ In many cases the elements can be configured so that the resulting equations are banded; highly efficient solution schemes are available for such systems (Part Three).

Postprocessing/ Upon obtaining the solution, it can be displayed in tabular form or graphically.

425 Two-dimensional Problems
Although the mathematical “bookkeeping” increases significantly, the extension of the finite-element approach to two dimensions is conceptually similar to one-dimensional applications; it follows the same steps.

426 Figure 31.9

427 Figure 31.10

