# Can You Trust Your Computer? CS365 – Mathematics of Computer Science Spring 2007 David Uhrig Tiffany Sharrard.


Introduction

- User's Perception of Computers
- Real vs. Floating Point Numbers
- Rounding and Chopping
- Overflow and Underflow
- Machine Constants
- Error Propagation and Analysis
- New Ideas and Solutions
- Conclusion
- Questions

User's Perception of Computers

How are computers perceived by users? A computer is seen as a tool that will give you an exact answer. What a computer may actually do: produce nothing but garbage, because of how the computer handles real numbers.

Example

10^48 + 914 − 10^48 + 10^32 + 615 − 10^32

The answer is 1529 (914 + 615), but most digital computers would return zero. Why?

Real vs. Floating Point

Real Number System:

- Can be written in decimal notation
- Can be infinite
- Includes all positive and negative integers, fractions, and irrational numbers

Real vs. Floating Point

Floating Point Number System: a t-digit, base-b floating-point number has the form

± 0.d1 d2 d3 … dt × b^e

where d1 d2 d3 … dt is the mantissa, b is the base of the number system, and e is the exponent.

Real vs. Floating Point

Floating Point Number System (cont'd): the exponent e is an integer between two fixed integer bounds e1 and e2, where e1 ≤ 0 ≤ e2.

Real vs. Floating Point

Floating Point Number System (cont'd): a normalized floating-point system is characterized by:

- The base b
- The length of the mantissa, t
- The bounds for the exponent, e1 and e2

Rounding vs. Chopping

Chop: a number is chopped to t digits when all digits past the t-th are discarded.

Example: t = 5, x = 2.5873892874, result = 2.5873

Rounding vs. Chopping

Round: a number x is rounded to t digits when x is replaced by a t-digit number that approximates x with minimum error.

Example: t = 5, x = 2.5873892874, result = 2.5874

Overflow vs. Underflow

Overflow: occurs when the result of a floating-point operation is larger than the largest floating-point number in the given floating-point number system. When this occurs, almost all computers will signal an error.

Overflow vs. Underflow

Underflow: occurs when the result of a computation is smaller than the smallest quantity the computer can store. Some computers don't report this error because the machine silently sets the number to zero.

Machine Constants

The amount of round-off depends on the floating-point format your computer uses. Before the error can be corrected, the machine constants need to be identified, and these constants vary greatly by hardware. IEEE 754 is the standard for binary floating-point arithmetic.

Machine Constants: the IEEE 754 Standard

Machine Epsilon

To quantify the amount of round-off error, a round-off unit is specified: ε, the machine epsilon or machine precision. This is the fractional accuracy of a floating-point number: ε is the smallest floating-point number such that

ƒl(1 + ε) > 1

Computing ε

Program output:

```
david@david-laptop:~$ ./findepsilon
current Epsilon, 1 + current Epsilon
1            2.00000000000000000000
0.5          1.50000000000000000000
0.25         1.25000000000000000000
0.125        1.12500000000000000000
0.0625       1.06250000000000000000
0.03125      1.03125000000000000000
0.015625     1.01562500000000000000
0.0078125    1.00781250000000000000
0.00390625   1.00390625000000000000
0.00195312   1.00195312500000000000
0.000976562  1.00097656250000000000
0.000488281  1.00048828125000000000
0.000244141  1.00024414062500000000
0.00012207   1.00012207031250000000
6.10352E-05  1.00006103515625000000
3.05176E-05  1.00003051757812500000
1.52588E-05  1.00001525878906250000
7.62939E-06  1.00000762939453125000
3.8147E-06   1.00000381469726562500
1.90735E-06  1.00000190734863281250
9.53674E-07  1.00000095367431640625
4.76837E-07  1.00000047683715820312
2.38419E-07  1.00000023841857910156

Calculated Machine epsilon: 1.19209E-07
david@david-laptop:~$
```

C code*:

```c
#include <stdio.h>

int main(int argc, char **argv)
{
    float machEps = 1.0f;
    printf("current Epsilon, 1 + current Epsilon\n");
    while (1)
    {
        printf("%G\t%.20f\n", machEps, (1.0f + machEps));
        machEps /= 2.0f;
        /* If the next epsilon yields 1, then break, because
           the current epsilon is the machine epsilon. */
        if ((float)(1.0 + (machEps / 2.0)) == 1.0)
            break;
    }
    printf("\nCalculated Machine epsilon: %G\n", machEps);
    return 0;
}
```

\* Code borrowed from the Wikipedia entry on machine epsilon: http://en.wikipedia.org/wiki/Machine_epsilon

Error Propagation

An optimistic estimate for the round-off accumulated in performing N arithmetic operations is roughly √N·ε. It could be N·ε or even larger!

Example: subtractive cancellation in 4-digit base-10 arithmetic:

ƒl[(10000 + 1) − 10000] = 0
(10000 + 1) − 10000 = 1

Error Analysis

Two primary techniques of error analysis:

- Forward error analysis: the floating-point representation of the error is subjected to the same mathematical operations as the data itself, yielding an equation for the error itself.
- Backward error analysis: attempts to regenerate the original mathematical problem from previously computed solutions; minimizes error generation and propagation.

Testing for Error Propagation

- Use the computed solution in the original problem
- Use double or extended precision rather than single precision
- Rerun the problem with slightly modified (perturbed) data and look at the results

New Ideas

Increased RAM and processor speeds allow for more intricate solutions and alternatives to floating-point errors: non-floating-point arithmetic implementations, including

- Rational arithmetic
- Multiple or full precision arithmetic
- Exact scalar and dot products of vectors

Conclusion

- User's Perception of Computers
- Real vs. Floating Point Numbers
- Rounding and Chopping
- Overflow and Underflow
- Machine Constants
- Error Propagation and Analysis
- New Ideas and Solutions

...Questions?
