# Lecture 2 Addendum: Rounding Techniques (ECE 645 – Computer Arithmetic)


## 2. Required Reading

- Parhami, Chapter 17.5, Rounding Schemes
- Rounding Algorithms 101: http://www.diycalculator.com/popup-m-round.shtml

## 3. Rounding

Rounding occurs when we want to approximate a more precise number (i.e., one with more fractional bits, L) by a less precise number (i.e., fewer fractional bits, L').

- Example 1: old: 000110.11010001 (K=6, L=8); new: 000110.11 (K'=6, L'=2)
- Example 2: old: 000110.11010001 (K=6, L=8); new: 000111. (K'=6, L'=0)

The following pages show rounding from L > 0 fractional bits to L' = 0 bits, but the mathematics holds for any L' < L. Usually the number of integral bits is kept the same: K' = K.
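To make the two examples concrete, here is a minimal Python sketch (the helper `to_fixed` is hypothetical, introduced only for illustration) that interprets a bit string with L fractional bits and shows the values before and after dropping fractional bits:

```python
def to_fixed(bits, L):
    """Interpret a binary digit string (binary point removed) as a value with L fractional bits."""
    return int(bits, 2) / 2 ** L

# Example 1 from the slide: 000110.11010001 (K=6, L=8)
old = to_fixed("00011011010001", 8)   # 6.81640625
# Keeping only L'=2 fractional bits by truncation gives 000110.11
new = to_fixed("00011011", 2)         # 6.75
print(old, new)
```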

## 4. Rounding Equation

y = round(x)

Round the fixed-point number x = x_{k-1} x_{k-2} ... x_1 x_0 . x_{-1} x_{-2} ... x_{-l} (whole part followed by fractional part) to the integer y = y_{k-1} y_{k-2} ... y_1 y_0.

## 5. Rounding Techniques

There are different rounding techniques:

1) Truncation
   - results in round toward zero in signed magnitude
   - results in round toward -∞ in two's complement
2) Round to nearest number
3) Round to nearest even number (or odd number)
4) Round toward +∞

Other rounding techniques:

5) Jamming (von Neumann) rounding
6) ROM rounding

Each of these techniques differs in its error depending on the representation of numbers (signed magnitude versus two's complement), where Error = round(x) − x.

## 6. 1) Truncation

The simplest possible rounding scheme is chopping or truncation: drop the fractional digits of x_{k-1} x_{k-2} ... x_1 x_0 . x_{-1} x_{-2} ... x_{-l}, keeping x_{k-1} x_{k-2} ... x_1 x_0 (the dropped weight is below one ulp).

Truncation in signed magnitude results in a number chop(x) that is always of smaller magnitude than x. This is called round toward zero or inward rounding.

- 011.10 (3.5)₁₀ → 011 (3)₁₀, Error = -0.5
- 111.10 (-3.5)₁₀ → 111 (-3)₁₀, Error = +0.5

Truncation in two's complement results in a number chop(x) that is always smaller than x. This is called round toward -∞ or downward-directed rounding.

- 011.10 (3.5)₁₀ → 011 (3)₁₀, Error = -0.5
- 100.10 (-3.5)₁₀ → 100 (-4)₁₀, Error = -0.5
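The two behaviors can be sketched in Python (an illustrative model, not hardware): truncating the two's-complement bit pattern is a floor toward -∞, while truncating a signed-magnitude value chops the magnitude, i.e., rounds toward zero:

```python
import math

def chop_twos_complement(x, L_new=0):
    # Dropping low fractional bits of a 2's-complement pattern == floor toward -inf
    return math.floor(x * 2 ** L_new) / 2 ** L_new

def chop_signed_magnitude(x, L_new=0):
    # Truncating the magnitude == round toward zero
    return math.trunc(x * 2 ** L_new) / 2 ** L_new

print(chop_twos_complement(3.5), chop_twos_complement(-3.5))    # 3.0 -4.0
print(chop_signed_magnitude(3.5), chop_signed_magnitude(-3.5))  # 3.0 -3.0
```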

## 7. Truncation Function Graph: chop(x)

[Figure: staircase plots of chop(x) versus x over the range -4..4]

Fig. 17.5: Truncation or chopping of a signed-magnitude number (same as round toward 0).
Fig. 17.6: Truncation or chopping of a 2's-complement number (same as round toward -∞).

## 8. Bias in Two's Complement Truncation

| X (binary) | X (decimal) | chop(x) (binary) | chop(x) (decimal) | Error (decimal) |
|---|---|---|---|---|
| 011.00 | 3 | 011 | 3 | 0 |
| 011.01 | 3.25 | 011 | 3 | -0.25 |
| 011.10 | 3.5 | 011 | 3 | -0.5 |
| 011.11 | 3.75 | 011 | 3 | -0.75 |
| 100.01 | -3.75 | 100 | -4 | -0.25 |
| 100.10 | -3.5 | 100 | -4 | -0.5 |
| 100.11 | -3.25 | 100 | -4 | -0.75 |
| 101.00 | -3 | 101 | -3 | 0 |

Assuming all combinations of positive and negative values of x are equally likely, the average error is -0.375. In general, the average error is -(2^(-L') - 2^(-L))/2, where L' is the new number of fractional bits.
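A quick check of the table's average error (a sketch; the eight values are the ones listed above):

```python
import math

xs = [3.0, 3.25, 3.5, 3.75, -3.75, -3.5, -3.25, -3.0]
errors = [math.floor(x) - x for x in xs]   # chop(x) - x in 2's complement
avg = sum(errors) / len(errors)
print(avg)                                 # -0.375
# matches the general formula with L=2, L'=0: -(2**-0 - 2**-2) / 2 == -0.375
```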

## 9. Implementing Truncation in Hardware

Easy: just ignore (i.e., truncate) the fractional digits at positions L'+1 through L:

x_{k-1} x_{k-2} ... x_1 x_0 . x_{-1} x_{-2} ... x_{-L} → y_{k-1} y_{k-2} ... y_1 y_0 (ignore, i.e. truncate, the rest)

## 10. 2) Round to Nearest Number

Rounding to the nearest number is what we normally think of when we say "round":

- 010.01 (2.25)₁₀ → 010 (2)₁₀, Error = -0.25
- 010.11 (2.75)₁₀ → 011 (3)₁₀, Error = +0.25
- 010.00 (2.00)₁₀ → 010 (2)₁₀, Error = 0
- 010.10 (2.5)₁₀ → 011 (3)₁₀, Error = +0.5 [round-half-up (arithmetic rounding)]
- 010.10 (2.5)₁₀ → 010 (2)₁₀, Error = -0.5 [round-half-down]

## 11. Round-Half-Up: Dealing with Negative Numbers

- 101.11 (-2.25)₁₀ → 110 (-2)₁₀, Error = +0.25
- 101.01 (-2.75)₁₀ → 101 (-3)₁₀, Error = -0.25
- 110.00 (-2.00)₁₀ → 110 (-2)₁₀, Error = 0
- 101.10 (-2.5)₁₀ → 110 (-2)₁₀, Error = +0.5 [asymmetric implementation]
- 101.10 (-2.5)₁₀ → 101 (-3)₁₀, Error = -0.5 [symmetric implementation]
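The two half-way conventions can be sketched in Python (illustrative helpers, not part of any library):

```python
import math

def rtn_asymmetric(x):
    # Add 1/2, then truncate toward -inf: halves always round up (-2.5 -> -2)
    return math.floor(x + 0.5)

def rtn_symmetric(x):
    # Round halves away from zero: -2.5 -> -3
    return int(math.copysign(math.floor(abs(x) + 0.5), x))

print(rtn_asymmetric(-2.5), rtn_symmetric(-2.5))  # -2 -3
print(rtn_asymmetric(2.5), rtn_symmetric(2.5))    # 3 3
```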

## 12. Round to Nearest Function Graph: rtn(x)

[Figure: round-half-up version, asymmetric and symmetric implementations]

## 13. Bias in Two's Complement Round to Nearest

Round-half-up, asymmetric implementation:

| X (binary) | X (decimal) | rtn(x) (binary) | rtn(x) (decimal) | Error (decimal) |
|---|---|---|---|---|
| 010.00 | 2 | 010 | 2 | 0 |
| 010.01 | 2.25 | 010 | 2 | -0.25 |
| 010.10 | 2.5 | 011 | 3 | +0.5 |
| 010.11 | 2.75 | 011 | 3 | +0.25 |
| 101.01 | -2.75 | 101 | -3 | -0.25 |
| 101.10 | -2.5 | 110 | -2 | +0.5 |
| 101.11 | -2.25 | 110 | -2 | +0.25 |
| 110.00 | -2 | 110 | -2 | 0 |

Assuming all combinations of positive and negative values of x are equally likely, the average error is +0.125: smaller than truncation's, but still not symmetric. The midway value (exactly 2.5 or -2.5) always contributes a positive error bias. There is also the problem that overflow can occur if only K' = K integral bits are allocated. Example: rtn(011.10) → overflow. This overflow occurs only on positive numbers near the maximum positive value, not on negative numbers.

## 14. Implementing Round to Nearest (rtn) in Hardware

Round-half-up, asymmetric implementation. There are two methods.

Method 1: add a 1 in the position one digit to the right of the new LSB (i.e., digit L'+1) and keep only L' fractional bits:

x_{k-1} x_{k-2} ... x_1 x_0 . x_{-1} x_{-2} ... x_{-L}
+ 1 (at position L'+1)
= y_{k-1} y_{k-2} ... y_1 y_0 (ignore, i.e. truncate, the rest)

Method 2: add the value of the digit one position to the right of the new LSB (digit L'+1) into the new LSB digit (digit L') and keep only L' fractional bits:

x_{k-1} x_{k-2} ... x_1 x_0 . x_{-1} x_{-2} ... x_{-L}
+ x_{-1} (into the new LSB)
= y_{k-1} y_{k-2} ... y_1 y_0 (ignore, i.e. truncate, the rest)
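Method 1 can be sketched at the bit level in Python (a model of the hardware, not an implementation), treating the operand as an integer n whose value is n·2⁻ᴸ; Python's `>>` on integers floors, which matches the truncation step:

```python
def rtn_bits(n, L, L_new=0):
    """Round-half-up (Method 1): add a 1 at the position one digit right of the
    new LSB, then drop the low L - L_new bits.  n holds the 2's-complement
    pattern; its value is n * 2**-L."""
    drop = L - L_new
    return (n + (1 << (drop - 1))) >> drop

# 010.10 = 2.5 with L=2 fractional bits -> n = 0b01010 = 10
print(rtn_bits(10, 2))    # 3
# 101.10 = -2.5 -> n = -10 (Python ints behave like 2's complement under >>)
print(rtn_bits(-10, 2))   # -2  (asymmetric: the half rounds up)
```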

## 15. Round to Nearest Even Function Graph: rtne(x)

To solve the problem with the midway value, we round to the nearest even number (or, alternatively, to the nearest odd number).

Fig. 17.8: Rounding to the nearest even number.
Fig. 17.9: R* rounding, or rounding to the nearest odd number.

## 16. Bias in Two's Complement Round to Nearest Even (rtne)

The average error is now 0 (ignoring the overflow case); the cost is more hardware.

| X (binary) | X (decimal) | rtne(x) (binary) | rtne(x) (decimal) | Error (decimal) |
|---|---|---|---|---|
| 000.10 | 0.5 | 000 | 0 | -0.5 |
| 001.10 | 1.5 | 010 | 2 | +0.5 |
| 010.10 | 2.5 | 010 | 2 | -0.5 |
| 011.10 | 3.5 | 100 (overflow) | 4 | +0.5 |
| 100.10 | -3.5 | 100 | -4 | -0.5 |
| 101.10 | -2.5 | 110 | -2 | +0.5 |
| 110.10 | -1.5 | 110 | -2 | -0.5 |
| 111.10 | -0.5 | 000 | 0 | +0.5 |
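Python's built-in `round` already implements round-to-nearest-even ("banker's rounding"), so the table's midway cases can be checked directly:

```python
# round() in Python 3 sends halves to the nearest even integer
for x in (0.5, 1.5, 2.5, 3.5, -3.5, -2.5, -1.5, -0.5):
    print(x, round(x))
# e.g. 2.5 -> 2, 3.5 -> 4, -2.5 -> -2, -3.5 -> -4
```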

## 17. 4) Rounding Toward Infinity

We may need computation errors to be in a known direction. Example: in computing upper bounds, larger results are acceptable, but results smaller than the correct values could invalidate the upper bound. In that case use upward-directed rounding (round toward +∞): up(x) is always larger than or equal to x. Similarly, for lower bounds use downward-directed rounding (round toward -∞): down(x) is always smaller than or equal to x. We have already seen that round toward -∞ in two's complement can be implemented by truncation.
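Directed rounding to L' fractional bits is just a ceiling or floor on the scaled value; a minimal sketch:

```python
import math

def up(x, L_new=0):
    # Round toward +inf: never below the true value
    return math.ceil(x * 2 ** L_new) / 2 ** L_new

def down(x, L_new=0):
    # Round toward -inf: never above the true value (== chop in 2's complement)
    return math.floor(x * 2 ** L_new) / 2 ** L_new

print(up(2.25), down(2.25))    # 3.0 2.0
print(up(-2.25), down(-2.25))  # -2.0 -3.0
```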

## 18. Rounding Toward Infinity Function Graph: up(x) and down(x)

[Figure: staircase plots of up(x) and down(x)]

down(x) can be implemented by chop(x) in two's complement.

## 19. Two's Complement Round to Zero

Two's complement round to zero (inward rounding), inward(x), also exists.

[Figure: staircase plot of inward(x) versus x over the range -4..4]

## 20. Other Methods

Note that in two's complement, round to nearest (rtn) involves an addition, which may have a carry propagation from the LSB to the MSB, so rounding may take as long as an adder. The adder's carry chain can be avoided using the following two techniques:

- Jamming (von Neumann) rounding
- ROM-based rounding

## 21. 5) Jamming or von Neumann Rounding

Chop and force the LSB of the result to 1. This has the simplicity of chopping with the near symmetry of ordinary rounding. The maximum error is comparable to chopping (double that of rounding).
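A bit-level sketch of jamming, following the slide's description (chop, then force the LSB to 1); n models the 2's-complement pattern with value n·2⁻ᴸ:

```python
def jam(n, L, L_new=0):
    # Drop the low bits and OR a 1 into the new LSB: no carry chain is needed,
    # which is the whole point of the scheme
    return (n >> (L - L_new)) | 1

# 010.10 (2.5, L=2) -> 011 (3);  010.01 (2.25, L=2) -> 011 (3)
print(jam(0b01010, 2), jam(0b01001, 2))  # 3 3
```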

## 22. 6) ROM Rounding

Fig. 17.11: ROM rounding with an 8 × 2 table.

Example: rounding with a 32 × 4 table. The ROM address is x₃ x₂ x₁ x₀ x₋₁ and the ROM data is y₃ y₂ y₁ y₀; the upper bits x_{k-1} ... x₄ pass through unchanged:

x_{k-1} ... x₄ x₃ x₂ x₁ x₀ . x₋₁ x₋₂ ... x_{-l} → x_{k-1} ... x₄ y₃ y₂ y₁ y₀

The rounding result is the same as that of the round-to-nearest scheme in 31 of the 32 possible cases, but a larger error is introduced when x₃ = x₂ = x₁ = x₀ = x₋₁ = 1.
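A sketch of building the 32 × 4 table in Python: each entry is the round-half-up result for its address, except the all-ones address, which would carry out of 4 bits and so truncates instead (the construction details beyond the slide's description are assumptions):

```python
# Address = x3 x2 x1 x0 x-1 (5 bits, value a/2); data = y3 y2 y1 y0 (4 bits).
# (a + 1) >> 1 is round-half-up of a/2; the all-ones address truncates so that
# no carry ever propagates into the untouched high bits.
ROM = [(a >> 1) if a == 0b11111 else (a + 1) >> 1 for a in range(32)]

print(ROM[0b00101])  # 3  : 0010.1 (2.5) rounds to 3
print(ROM[0b11111])  # 15 : 1111.1 (15.5) truncates to 15, the one exceptional case
```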

## 23. ANSI/IEEE Floating-Point Standard

The ANSI/IEEE standard includes four rounding modes:

- Round to nearest even [default rounding mode]
- Round toward zero (inward)
- Round toward +∞ (upward)
- Round toward -∞ (downward)
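Python's `decimal` module exposes these same four modes, which gives a handy way to compare them on a midway value (a usage sketch, not part of the standard itself):

```python
from decimal import Decimal, ROUND_HALF_EVEN, ROUND_DOWN, ROUND_CEILING, ROUND_FLOOR

x = Decimal("-2.5")
for mode in (ROUND_HALF_EVEN, ROUND_DOWN, ROUND_CEILING, ROUND_FLOOR):
    # quantize to 0 fractional digits under each rounding mode
    print(mode, x.quantize(Decimal("1"), rounding=mode))
# nearest even -> -2, toward zero -> -2, toward +inf -> -2, toward -inf -> -3
```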