
1 Round-off Errors

2 Key Concepts
Round-off / chopping errors
Recognize how floating-point arithmetic operations can introduce and amplify round-off errors
What can be done to reduce the effect of round-off errors

3 There are only discrete points on the number line that can be represented by our computer.
What about the space between them?

4 Implications of FP representation
Only a limited range of quantities may be represented (overflow and underflow).
Only a finite number of quantities within that range may be represented (round-off or chopping errors).

5 Round-off / Chopping Errors (Error Bounds Analysis)
Let z be a real number we want to represent in a computer, and fl(z) be the representation of z in the computer. What is the largest possible value of |z - fl(z)| / |z|? That is, in the worst case, how much accuracy are we losing to round-off or chopping errors?

6 Chopping Errors (Error Bounds Analysis)
Suppose the mantissa can only support n digits, i.e., z = ±(0.a1 a2 … an an+1 …)β × β^e is stored as fl(z) = ±(0.a1 a2 … an)β × β^e. The absolute chopping error is |z - fl(z)| = (0.an+1 an+2 …)β × β^(e-n), and the relative chopping error is this divided by |z|. Suppose β = 10 (base 10): what are the values of the ai such that these errors are the largest?

7 Chopping Errors (Error Bounds Analysis)
The errors are largest when every discarded digit takes the largest possible value and the stored mantissa is as small as possible, which gives the bounds |z - fl(z)| ≤ β^(e-n) and |z - fl(z)| / |z| ≤ β^(1-n).

8 Round-off Errors (Error Bounds Analysis)
fl(z) is the rounded value of z: round down (chop) when the first discarded digit an+1 < β/2, and round up (add one unit to an) when an+1 ≥ β/2.

9 Round-off Errors (Error Bounds Analysis) Absolute error of fl(z)
When rounding down, |z - fl(z)| ≤ (1/2) β^(e-n); similarly, when rounding up, |z - fl(z)| ≤ (1/2) β^(e-n). In either case the absolute round-off error is at most half of the chopping bound.

10 Round-off Errors (Error Bounds Analysis) Relative error of fl(z)
Dividing by |z| ≥ β^(e-1) gives the relative round-off error bound |z - fl(z)| / |z| ≤ (1/2) β^(1-n).

11 Summary of Error Bounds Analysis
                  Chopping errors      Round-off errors
Absolute error    ≤ β^(e-n)            ≤ (1/2) β^(e-n)
Relative error    ≤ β^(1-n)            ≤ (1/2) β^(1-n)
β: base; n: number of significant digits (digits in the mantissa); e: exponent of z.
Regardless of whether chopping or rounding is used, the absolute errors may increase as the numbers grow in magnitude, but the relative errors are bounded by the same magnitude.
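These bounds can be checked numerically. The sketch below is not from the slides: the chop_digits and round_digits helpers are my own stand-ins for a base-10, n-digit machine, used to compare the observed relative errors against β^(1-n) and (1/2) β^(1-n).

#include <stdio.h>
#include <math.h>

/* Illustrative helpers: keep only n significant decimal digits of v,
   by chopping or by rounding. */
static double chop_digits(double v, int n) {
    if (v == 0.0) return 0.0;
    double scale = pow(10.0, n - 1 - floor(log10(fabs(v))));
    return trunc(v * scale) / scale;
}
static double round_digits(double v, int n) {
    if (v == 0.0) return 0.0;
    double scale = pow(10.0, n - 1 - floor(log10(fabs(v))));
    return round(v * scale) / scale;
}

int main(void) {
    double z = 3.14159265358979;   /* arbitrary test value */
    int n = 4;                     /* digits in the mantissa */
    printf("chopping: rel err %.2e  bound %.2e\n",
           fabs(z - chop_digits(z, n)) / fabs(z), pow(10.0, 1 - n));
    printf("rounding: rel err %.2e  bound %.2e\n",
           fabs(z - round_digits(z, n)) / fabs(z), 0.5 * pow(10.0, 1 - n));
    return 0;
}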

12 Machine Epsilon
The relative error bounds above define the machine epsilon: eps = β^(1-n) for a chopping machine and eps = (1/2) β^(1-n) for a rounding machine.
eps is known as the machine epsilon, the smallest number such that 1 + eps > 1.
Algorithm to compute the machine epsilon:
epsilon = 1;
while (1 + epsilon > 1)
    epsilon = epsilon / 2;
epsilon = epsilon * 2;
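A complete, runnable version of the loop above (a sketch; the float/double split and variable names are my own):

#include <stdio.h>

int main(void) {
    float feps = 1.0f;
    while ((float)(1.0f + feps) > 1.0f)   /* halve until 1 + eps rounds to 1 */
        feps = feps / 2.0f;
    feps = feps * 2.0f;                   /* undo the last halving */

    double deps = 1.0;
    while (1.0 + deps > 1.0)
        deps = deps / 2.0;
    deps = deps * 2.0;

    printf("float  epsilon ~ %g\n", feps);   /* about 1.19e-07 */
    printf("double epsilon ~ %g\n", deps);   /* about 2.22e-16 */
    return 0;
}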

13 Propagation of Errors
Each number or variable value is represented with some error: xA = xT - Ex and yA = yT - Ey, where xT, yT are the true values and xA, yA the stored values. These errors (Ex and Ey) are carried over to the result of every arithmetic operation (+, -, ×, ÷). How much error is propagated to the result of each arithmetic operation?

14 Example #1
Assume a 4-digit decimal mantissa with rounding is used. (Final value after round-off.) How many types of error, and how much error, are introduced into the final value?

15 Example #1
Propagated error: (xT + yT) - (xA + yA) = Ex + Ey
Propagated error = … × 10^-3

16 Example #1 Rounding Error:

17 Example #1
Finally, the total error is the sum of the propagated error and the rounding error.
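A small worked instance of this decomposition (the numbers are mine, not the slide's), with a 4-digit decimal mantissa and rounding:
  xT = 0.123456,  xA = fl(xT) = 0.1235,   Ex = xT - xA = -0.000044
  yT = 0.045678,  yA = fl(yT) = 0.04568,  Ey = yT - yA = -0.000002
  Propagated error:  (xT + yT) - (xA + yA) = Ex + Ey = -0.000046
  Rounding error:    (xA + yA) - fl(xA + yA) = 0.16918 - 0.1692 = -0.00002
  Total error:       (xT + yT) - fl(xA + yA) = 0.169134 - 0.1692 = -0.000066
                     = propagated error + rounding error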

18 Propagation of Errors (In General)
Let ∘ be the exact operation between xT and yT, where ∘ can be any of +, -, ×, ÷, and let * be the corresponding operation carried out by the computer. Note: xA ∘ yA ≠ xA * yA.

19 Propagation of Errors (In General)
The error between the true result and the computed result is
(xT ∘ yT) - (xA * yA) = (xT ∘ yT - xA ∘ yA) + (xA ∘ yA - xA * yA)
The first term is the error in x and y propagated by the operation; the second is the rounding error of the result:
|xA ∘ yA - xA * yA| = |xA ∘ yA - fl(xA ∘ yA)| ≤ |xA ∘ yA| × eps

20 Analysis of Propagated Errors: Addition and Subtraction
(xT ± yT) - (xA ± yA) = Ex ± Ey, so the propagated error of a sum or difference is at most |Ex| + |Ey|.

21 Propagated Errors – Multiplication
With relative errors εx and εy, the relative error of the product is ε(xy) = εx + εy + εx·εy ≈ εx + εy; the εx·εy term is very small and can be neglected.

22 Propagated Errors – Division
Similarly, the relative error of the quotient is ε(x/y) ≈ εx - εy, if εy is small and negligible (so that 1/(1 + εy) ≈ 1 - εy).

23 Example #2: Effects of rounding errors in arithmetic manipulations
Assuming a 4-digit decimal mantissa: round-off in simple multiplication or division. Results computed by the machine:

24 Danger of adding/subtracting a small number to/from a large number
Possible workarounds (a C sketch of the first one follows):
1) Sort the numbers by magnitude (if they have the same sign) and add them in increasing order
2) Reformulate the formula algebraically
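A C sketch of workaround 1 (my own illustration, not from the slides): in single precision, adding many small terms to a large number one by one loses them entirely, while adding the small terms first preserves their contribution.

#include <stdio.h>

int main(void) {
    const float big  = 1000000.0f;
    const float tiny = 0.01f;
    const int   n    = 100000;     /* true total: 1000000 + 1000 = 1001000 */
    int i;

    float sum1 = big;              /* large number first */
    for (i = 0; i < n; i++)
        sum1 = sum1 + tiny;        /* 0.01 is below half an ulp of 10^6: it is rounded away */

    float sum2 = 0.0f;             /* small numbers first, large number last */
    for (i = 0; i < n; i++)
        sum2 = sum2 + tiny;
    sum2 = sum2 + big;

    printf("large first: %f\n", sum1);   /* stays at 1000000 */
    printf("small first: %f\n", sum2);   /* close to 1001000 */
    return 0;
}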

25 Associativity does not necessarily hold for floating-point addition (or multiplication)
The two answers are NOT the same! Note: in this example, if we simply sort the numbers by magnitude and add them in increasing order, we actually get a worse answer! A better approach is to analyze the problem algebraically.
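A minimal check of this in single precision (the values are chosen by me for illustration):

#include <stdio.h>

int main(void) {
    float a = 1.0e8f, b = -1.0e8f, c = 1.0f;

    float left  = (a + b) + c;   /* exact cancellation first, then + 1 gives 1 */
    float right = a + (b + c);   /* -1e8 + 1 rounds back to -1e8, so the result is 0 */

    printf("(a + b) + c = %f\n", left);
    printf("a + (b + c) = %f\n", right);
    return 0;
}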

26 Subtraction of two close numbers
The result will be normalized into … × 10^1. However, note that the zeros appended to the end of the mantissa are not significant. Note: … × 10^1 implies the error is about ± … × 10^1, but the actual error could be as big as ± … × 10^2.

27 Subtractive Cancellation – Subtraction of two very close numbers
The error bound is just as large as the estimate of the result itself! Subtraction of nearly equal numbers is a major cause of errors! Avoid subtractive cancellation whenever possible.

28 Avoiding Subtractive Cancellation
Example 1: When x is large, compute the given expression (a difference of two nearly equal quantities). Is there a way to reduce the errors, assuming that we are using the same number of bits to represent the numbers? Answer: one possible solution is via rationalization.
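The expression on the original slide is not visible in this transcript, so the sketch below assumes the classic instance sqrt(x+1) - sqrt(x) purely for illustration; the rationalized form 1 / (sqrt(x+1) + sqrt(x)) avoids the cancellation.

#include <stdio.h>
#include <math.h>

int main(void) {
    float x = 1.0e6f;   /* a large x */

    float direct       = sqrtf(x + 1.0f) - sqrtf(x);            /* two nearly equal values subtracted */
    float rationalized = 1.0f / (sqrtf(x + 1.0f) + sqrtf(x));   /* algebraically identical, no cancellation */
    double reference   = 1.0 / (sqrt(x + 1.0) + sqrt((double)x));

    printf("direct       : %.10f\n", direct);
    printf("rationalized : %.10f\n", rationalized);
    printf("reference    : %.10f\n", reference);
    return 0;
}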

29 Subtraction of nearly equal numbers
Example 2: Compute the roots of ax^2 + bx + c = 0 using the quadratic formula x = (-b ± sqrt(b^2 - 4ac)) / (2a). Solve x^2 - 26x + 1 = 0.

30 Example 2 (continued)
Assume a 5-digit decimal mantissa. Carrying out the formula shows that one solution is more accurate than the other.

31 Example 2 (continued)
Alternatively, a better solution uses the relation x1 · x2 = c / a; i.e., instead of computing the root that subtracts two nearly equal numbers directly from the quadratic formula, we use x2 = c / (a · x1), where x1 is the root computed without cancellation, as the solution for the second root.
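A sketch of the two approaches in C (single precision stands in for the slide's 5-digit mantissa; the variable names are mine):

#include <stdio.h>
#include <math.h>

int main(void) {
    float a = 1.0f, b = -26.0f, c = 1.0f;
    float d = sqrtf(b * b - 4.0f * a * c);     /* sqrt(672) */

    float x1        = (-b + d) / (2.0f * a);   /* large root: no cancellation */
    float x2_naive  = (-b - d) / (2.0f * a);   /* small root: 26 - 25.92..., cancellation */
    float x2_stable = c / (a * x1);            /* small root from x1 * x2 = c / a */

    printf("x1                = %.7f\n", x1);
    printf("x2 (naive)        = %.7f\n", x2_naive);
    printf("x2 (via c/(a*x1)) = %.7f\n", x2_stable);
    printf("x2 (double ref)   = %.7f\n", (26.0 - sqrt(672.0)) / 2.0);
    return 0;
}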

32 Note: this formula does NOT give a more accurate result in ALL cases.
We have to be careful when writing numerical programs. A prior estimate of the answer, and of the corresponding error, is needed first. If the error is large, we have to use alternative methods to compute the solution.

33 Assignment 1 (Problem 1)
Assume a 3-digit decimal mantissa with rounding.
(a) Evaluate f(1000) directly.
(b) Evaluate f(1000) as accurately as possible using an alternative approach.
(c) Find the relative error of f(1000) in parts (a) and (b).

34 Propagation of Errors in a Series
Let the series be S = x1 + x2 + … + xm. Is there any difference between adding
(((x1 + x2) + x3) + x4) + … + xm   and   (((xm + xm-1) + xm-2) + xm-3) + … + x1 ?

35 Example:

#include <stdio.h>
int main() {
    float  sumx, x;
    float  sumy, y;
    double sumz, z;
    int i;
    sumx = 0.0; sumy = 0.0; sumz = 0.0;
    x = 1.0;
    y = ;   /* value elided */
    z = ;   /* value elided */
    for (i = 0; i < 100000; i++) {
        sumx = sumx + x;
        sumy = sumy + y;
        sumz = sumz + z;
    }
    printf("sumx = %f\n", sumx);
    printf("sumy = %f\n", sumy);
    printf("sumz = %f\n", sumz);
    return 0;
}

Output: sumx =   sumy =   sumz =
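The values of y and z are missing from the transcript above, so the self-contained version below assumes y = z = 0.0001 just to make the point: the float accumulator drifts, while the double one does not to the digits printed.

#include <stdio.h>

int main(void) {
    float  sumx = 0.0f, x = 1.0f;
    float  sumy = 0.0f, y = 0.0001f;   /* assumed value, not from the slide */
    double sumz = 0.0,  z = 0.0001;    /* same assumed value in double */
    int i;

    for (i = 0; i < 100000; i++) {
        sumx = sumx + x;   /* integers up to 2^24 are exact in float */
        sumy = sumy + y;   /* rounding error accumulates in float */
        sumz = sumz + z;   /* error is far smaller in double */
    }

    printf("sumx = %f\n", sumx);   /* 100000.000000 */
    printf("sumy = %f\n", sumy);   /* typically not exactly 10.000000 */
    printf("sumz = %f\n", sumz);   /* 10.000000 to the printed digits */
    return 0;
}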

36 Exercise: Discuss to what extent (a + b)c = ac + bc is violated in machine arithmetic. (A starting-point sketch follows.)
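One way to start the exercise (the numbers are my own): compare the two sides directly in single precision. Here a + b is exact, 0.3 is not exactly representable, and the two sides pick up different rounding errors.

#include <stdio.h>

int main(void) {
    float a = 10000000.0f;   /* 1e7, exactly representable */
    float b = 1.0f;
    float c = 0.3f;          /* not exactly representable in binary */

    float lhs = (a + b) * c;     /* one rounded multiply after an exact add */
    float rhs = a * c + b * c;   /* two rounded multiplies, then a rounded add */

    printf("(a + b)*c  = %f\n", lhs);
    printf("a*c + b*c  = %f\n", rhs);
    printf("difference = %f\n", lhs - rhs);   /* nonzero in float */
    return 0;
}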

37 Example: Evaluate e^x as the series 1 + x + x^2/2! + x^3/3! + …

#include <stdio.h>
#include <math.h>
int main() {
    float x = 10, sum = 1, term = 1, temp = 0;
    int i = 0;
    while (temp != sum) {
        i++;
        term = term * x / i;
        temp = sum;
        sum = sum + term;
        printf("%2d %-12f %-14f\n", i, term, sum);
    }
    printf("exact value = %f\n", exp((double)x));
    return 0;
}

38 Output (when x = 10)
 i   term          sum
 …
17   281.145752    21711.982422
 …
exact value = 22026.465795

39 Example: Evaluate e^x as the same series

#include <stdio.h>
#include <math.h>
int main() {
    float x = 10, sum = 1, term = 1, temp = 0;
    int i = 0;
    while (temp != sum) {
        i++;
        term = term * x / i;
        temp = sum;
        sum = sum + term;
        printf("%2d %-12f %-14f\n", i, term, sum);
    }
    printf("exact value = %f\n", exp((double)x));
    return 0;
}

The arithmetic operations inside the loop (term = term * x / i and sum = sum + term) are the ones that introduce errors.

40 Output (when x = -10)
 i   term   sum
 …
exact value = 0.000045
Not just an incorrect answer: the computed sum is negative!
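The slides stop at the diagnosis. One standard remedy (not shown on the slides) is to sum the series for e^|x| instead, where every term is positive, and take the reciprocal when x is negative:

#include <stdio.h>
#include <math.h>

int main(void) {
    float x = -10.0f;
    float ax = fabsf(x);
    float sum = 1.0f, term = 1.0f, prev = 0.0f;
    int i = 0;

    while (prev != sum) {          /* stop when adding the term no longer changes the sum */
        i++;
        term = term * ax / i;      /* all terms positive: no subtractive cancellation */
        prev = sum;
        sum = sum + term;
    }
    if (x < 0.0f)
        sum = 1.0f / sum;          /* e^x = 1 / e^|x| */

    printf("series = %g\n", sum);
    printf("exact  = %g\n", exp((double)x));
    return 0;
}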

41 Errors vs. Number of Arithmetic Operations
Assume a 3-digit mantissa with rounding.
(a) Evaluate y = x^3 - 3x^2 + 4x for x = 2.73
(b) Evaluate y = [(x - 3)x + 4]x for x = 2.73
Compare and discuss the errors obtained in parts (a) and (b); a small simulation follows below.
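A small simulation (the r3 helper that rounds to 3 significant decimal digits is my own device, not part of the slide) comparing the two forms at x = 2.73:

#include <stdio.h>
#include <math.h>

/* Round v to 3 significant decimal digits, to mimic a 3-digit mantissa. */
static double r3(double v) {
    if (v == 0.0) return 0.0;
    double scale = pow(10.0, 2.0 - floor(log10(fabs(v))));
    return round(v * scale) / scale;
}

int main(void) {
    double x = 2.73;

    /* (a) y = x^3 - 3x^2 + 4x, rounding after every operation */
    double x2 = r3(x * x);
    double x3 = r3(x2 * x);
    double ya = r3(r3(x3 - r3(3.0 * x2)) + r3(4.0 * x));

    /* (b) y = ((x - 3)x + 4)x, fewer operations, rounded the same way */
    double yb = r3(r3(r3(r3(x - 3.0) * x) + 4.0) * x);

    double exact = ((x - 3.0) * x + 4.0) * x;   /* full double precision */
    printf("(a) direct : %g\n", ya);
    printf("(b) nested : %g\n", yb);
    printf("exact      : %g\n", exact);
    return 0;
}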

42 Summary
Round-off/chopping errors: analysis
Propagation of errors in arithmetic operations: analysis and calculation
How to minimize propagation of errors:
  Avoid adding a huge number to a small number
  Avoid subtracting numbers that are close to each other
  Minimize the number of arithmetic operations involved

