CSCI-365 Computer Organization Lecture Note: Some slides and/or pictures in the following are adapted from: Computer Organization and Design, Patterson.


1 CSCI-365 Computer Organization Lecture Note: Some slides and/or pictures in the following are adapted from: Computer Organization and Design, Patterson & Hennessy, ©2005, and from slides ©2008 UCB

2 Number Representations
32-bit signed numbers (2's complement):
0000 0000 0000 0000 0000 0000 0000 0000 two = 0 ten
0000 0000 0000 0000 0000 0000 0000 0001 two = +1 ten
...
0111 1111 1111 1111 1111 1111 1111 1110 two = +2,147,483,646 ten
0111 1111 1111 1111 1111 1111 1111 1111 two = +2,147,483,647 ten  (maxint)
1000 0000 0000 0000 0000 0000 0000 0000 two = -2,147,483,648 ten  (minint)
1000 0000 0000 0000 0000 0000 0000 0001 two = -2,147,483,647 ten
...
1111 1111 1111 1111 1111 1111 1111 1110 two = -2 ten
1111 1111 1111 1111 1111 1111 1111 1111 two = -1 ten
Converting <32-bit values into 32-bit values: copy the most significant bit (the sign bit) into the "empty" bits; sign extend versus zero extend (lb vs. lbu)
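The sign-extend vs. zero-extend distinction (lb vs. lbu) can be sketched in Python — a toy model with made-up function names, not MIPS semantics code:

```python
def sign_extend_8_to_32(b):
    """Copy bit 7 (the sign bit) into bits 8..31, like MIPS lb."""
    b &= 0xFF
    return (b | 0xFFFFFF00) if b & 0x80 else b

def zero_extend_8_to_32(b):
    """Fill bits 8..31 with zeros, like MIPS lbu."""
    return b & 0xFF

# 0xF0 is -16 as a signed byte
print(hex(sign_extend_8_to_32(0xF0)))  # 0xfffffff0
print(hex(zero_extend_8_to_32(0xF0)))  # 0xf0
```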

3 MIPS Arithmetic Logic Unit (ALU)
Must support the Arithmetic/Logic operations of the ISA: add, addi, addiu, addu; sub, subu; mult, multu, div, divu; sqrt; and, andi, nor, or, ori, xor, xori; beq, bne, slt, slti, sltiu, sltu
[Figure: ALU with 32-bit inputs A and B, 4-bit operation select m, 32-bit result, and 1-bit zero and ovf outputs]
With special handling for:
- sign extend – addi, addiu, slti, sltiu
- zero extend – andi, ori, xori
- overflow detection – add, addi, sub

4 Dealing with Overflow
Overflow occurs when the result of an operation cannot be represented in 32 bits, i.e., when the sign bit holds a value bit of the result rather than the proper sign:

Operation | Operand A | Operand B | Result indicating overflow
A + B     | >= 0      | >= 0      | < 0
A + B     | < 0       | < 0       | >= 0
A - B     | >= 0      | < 0       | < 0
A - B     | < 0       | >= 0      | >= 0

When adding operands with different signs, or when subtracting operands with the same sign, overflow can never occur.
MIPS signals overflow with an exception (aka interrupt) – an unscheduled procedure call where the EPC contains the address of the instruction that caused the exception
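The sign-based overflow rule in the table can be checked with a small Python sketch (function names are illustrative):

```python
MASK = 0xFFFFFFFF

def to_signed(x):
    """Interpret a 32-bit pattern as a two's complement integer."""
    return x - (1 << 32) if x & 0x80000000 else x

def add_with_overflow(a, b):
    """Add two 32-bit patterns; overflow iff both operands have the
    same sign and the result's sign differs (the table's rule)."""
    s = (a + b) & MASK
    sa, sb, ss = a >> 31, b >> 31, s >> 31
    overflow = (sa == sb) and (ss != sa)
    return s, overflow

# 2,147,483,647 + 1 wraps to the most negative number
s, ovf = add_with_overflow(0x7FFFFFFF, 1)
print(to_signed(s), ovf)  # -2147483648 True
```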

5 Two's Complement Arithmetic
Addition is accomplished by adding the codes, ignoring any final carry
Subtraction: change the sign and add
16 + (-23) = ?
16 - (-23) = ?
(-16) = ?
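The first two exercises can be worked mechanically with 8-bit codes — a quick Python check of "add the codes, ignore the final carry" (helper names are my own):

```python
def tc8(x):
    """8-bit two's complement code of x."""
    return x & 0xFF

def from_tc8(c):
    """Signed value of an 8-bit code."""
    return c - 256 if c & 0x80 else c

# 16 + (-23): add the codes, ignore any final carry (mask to 8 bits)
print(from_tc8((tc8(16) + tc8(-23)) & 0xFF))   # -7

# 16 - (-23): change the sign of the subtrahend and add
print(from_tc8((tc8(16) + tc8(23)) & 0xFF))    # 39
```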

6

7

8 Hardware for Addition and Subtraction

9 Multiply
Binary multiplication is just a bunch of right shifts and adds
[Figure: n-bit multiplicand and n-bit multiplier feed a partial product array, producing a double precision 2n-bit product]
The partial products can be formed in parallel and added in parallel for faster multiplication

10 Multiplication Example

      1011   Multiplicand (11 dec)
    x 1101   Multiplier (13 dec)
      1011   Partial products:
     0000    if multiplier bit is 1, copy the
    1011     multiplicand (at its place value),
   1011      otherwise zero
  10001111   Product (143 dec)

Note: need double-length result
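The partial-product scheme above is easy to mimic in Python (a behavioral sketch, not the add-and-shift datapath):

```python
def mul_shift_add(multiplicand, multiplier, n=4):
    """Unsigned multiply as n conditional add/shift steps;
    the product is double length (2n bits)."""
    product = 0
    for i in range(n):
        if (multiplier >> i) & 1:          # multiplier bit is 1:
            product += multiplicand << i   # add shifted multiplicand
    return product

p = mul_shift_add(0b1011, 0b1101)  # 11 x 13
print(p, bin(p))  # 143 0b10001111
```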

11 Add and Right Shift Multiplier Hardware
[Figure: multiplicand register feeds a 32-bit ALU; control examines the low-order multiplier bit, conditionally adds into the product register, then shifts the product/multiplier right]

12 Add and Right Shift Multiplier Hardware
[Same datapath as the previous slide, stepped through an example]
0110 (= 6) x 0101 (= 5): on each step, add the multiplicand if the low multiplier bit is 1, then shift right; two add steps give the product 011110 (= 30)

13 Unsigned Binary Multiplication

14 Execution of Example

15 Multiplying Negative Numbers
The unsigned shift-and-add scheme does not work directly!
Solution 1:
- Convert to positive if required
- Multiply as above
- If signs were different, negate answer
Solution 2: Booth's algorithm

16 Booth's Algorithm

17 Example of Booth's Algorithm
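Booth's algorithm can be sketched behaviorally in Python — this recodes the multiplier into +1/-1 digits rather than modeling the A/Q/Q-1 register hardware (a toy model; names are my own):

```python
def booth_multiply(a, b, n=8):
    """Booth's algorithm: scan multiplier bit pairs (b_i, b_{i-1});
    pair 10 -> subtract the shifted multiplicand (start of a run of 1s),
    pair 01 -> add it (end of a run), 00/11 -> do nothing."""
    product = 0
    prev = 0                      # implicit bit b_{-1} = 0
    bu = b & ((1 << n) - 1)       # bit pattern of the multiplier
    for i in range(n):
        bit = (bu >> i) & 1
        if bit == 1 and prev == 0:     # pair 10
            product -= a << i
        elif bit == 0 and prev == 1:   # pair 01
            product += a << i
        prev = bit
    return product

print(booth_multiply(-7, 3))   # -21
print(booth_multiply(-4, -5))  # 20
```

Handling of a negative multiplier falls out naturally: its leading run of 1s never "ends", so no final add is performed and the recoded digits sum to the signed value of b.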

18 MIPS Multiply Instruction
Multiply (mult and multu, funct 0x18) produces a double precision product:
mult $s0, $s1   # hi||lo = $s0 * $s1
The low-order word of the product is left in processor register lo and the high-order word is left in register hi
Instructions mfhi rd and mflo rd are provided to move the product to (user accessible) registers in the register file
Multiplies are usually done by fast, dedicated hardware and are much more complex (and slower) than adders

19 MIPS Multiplication
Two 32-bit registers for product:
- HI: most-significant 32 bits
- LO: least-significant 32 bits
Instructions:
- mult rs, rt / multu rs, rt – 64-bit product in HI/LO
- mfhi rd / mflo rd – move from HI/LO to rd; can test HI value to see if product overflows 32 bits
- mul rd, rs, rt – least-significant 32 bits of product -> rd

20 Division
Division is just a bunch of quotient digit guesses and left shifts and subtracts:
dividend = quotient x divisor + remainder
[Figure: n-bit dividend and n-bit divisor feed a partial remainder array, producing the n-bit quotient and n-bit remainder]

21 Division of Unsigned Binary Integers
[Figure: long-division example showing divisor, dividend, partial remainders, quotient, and remainder]

22 Left Shift and Subtract Division Hardware
[Figure: divisor register feeds a 32-bit ALU; control subtracts the divisor from the left-shifted dividend/remainder register and shifts quotient bits in]

23 Left Shift and Subtract Division Hardware
[Same datapath as the previous slide, stepped through an example]
- sub – remainder negative, so quotient bit = 0; restore remainder
- sub – remainder negative, so quotient bit = 0; restore remainder
- sub – remainder positive, so quotient bit = 1
- sub – remainder positive, so quotient bit = 1
Quotient = 0011 = 3 with 0 remainder
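The restore-on-negative behavior traced above can be sketched as restoring division in Python (a behavioral model, not the shift-register datapath; names are my own):

```python
def restoring_divide(dividend, divisor, n=4):
    """Restoring division on n-bit unsigned operands: bring down the
    next dividend bit, subtract the divisor; if the result is negative
    the quotient bit is 0 and the remainder is restored."""
    remainder, quotient = 0, 0
    for i in range(n - 1, -1, -1):
        remainder = (remainder << 1) | ((dividend >> i) & 1)
        remainder -= divisor
        if remainder < 0:
            remainder += divisor             # restore
            quotient = quotient << 1         # quotient bit = 0
        else:
            quotient = (quotient << 1) | 1   # quotient bit = 1
    return quotient, remainder

print(restoring_divide(0b0110, 0b0010))  # (3, 0)
```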

24 Division of Signed Binary Integers

25

26

27 MIPS Divide Instruction
Divide (div and divu, funct 0x1A) generates the remainder in hi and the quotient in lo:
div $s0, $s1   # lo = $s0 / $s1
               # hi = $s0 mod $s1
Instructions mfhi rd and mflo rd are provided to move the quotient and remainder to (user accessible) registers in the register file
As with multiply, divide ignores overflow, so software must determine if the quotient is too large. Software must also check the divisor to avoid division by 0.

28 MIPS Division
Use HI/LO registers for result:
- HI: 32-bit remainder
- LO: 32-bit quotient
Instructions:
- div rs, rt / divu rs, rt
- No overflow or divide-by-0 checking – software must perform checks if required
- Use mfhi, mflo to access result

29 Representation of Fractions
"Binary Point" like decimal point signifies boundary between integer and fractional parts: xx.yyyy
Example 6-bit representation: 10.1010 two = 1x2^1 + 1x2^-1 + 1x2^-3 = 2.625 ten
If we assume a "fixed binary point", the range of 6-bit representations with this format is 0 to 3.9375 (almost 4)
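The xx.yyyy evaluation can be checked with a quick Python sketch (the function name is illustrative):

```python
def fixed_point_value(bits, frac_bits):
    """Value of an unsigned fixed-point bit string with frac_bits bits
    after the binary point, e.g. '101010' with 4 -> 10.1010."""
    return int(bits, 2) / (1 << frac_bits)

print(fixed_point_value('101010', 4))  # 2.625
# Largest 6-bit xx.yyyy value is 11.1111:
print(fixed_point_value('111111', 4))  # 3.9375
```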

30 Fractional Powers of 2
i   2^-i
1   0.5      = 1/2
2   0.25     = 1/4
3   0.125    = 1/8
4   0.0625   = 1/16
5   0.03125  = 1/32

31 Example: (done in class)

32 Representation of Fractions
So far, in our examples we used a "fixed" binary point. What we really want is to "float" the binary point. Why? Floating the binary point makes the most effective use of our limited bits (and thus gives more accuracy in our number representation); any other solution would lose accuracy.
Example: put a value into binary and represent it in 5 bits, choosing where to put the binary point. Store these bits and keep track of the binary point, e.g., 2 places to the left of the MSB.
With floating point representation, each numeral carries an exponent field recording the whereabouts of its binary point. The binary point can be outside the stored bits, so very large and small numbers can be represented.

33 Scientific Notation (in Decimal)
significand x 10^exponent – radix (base) 10, with the decimal point after the leading digit
Normalized form: no leading 0s (exactly one digit to left of decimal point)
Alternatives to representing 1/1,000,000,000:
- Normalized: 1.0 x 10^-9
- Not normalized: 0.1 x 10^-8, 10.0 x 10^-10

34 Scientific Notation (in Binary)
1.0 two x 2^-1 – significand 1.0, radix (base) 2, "binary point", exponent -1
Computer arithmetic that supports it is called floating point, because it represents numbers where the binary point is not fixed, as it is for integers
Declare such a variable in C as float

35 Floating Point Representation
Normal format: +-1.xxxxxxxxxx two x 2^yyyy two
32-bit version (C "float"), bit 31 down to bit 0:
S (1 bit) | Exponent (8 bits) | Significand (23 bits)
S represents the sign, Exponent represents the y's, Significand represents the x's

36 Floating Point Representation
What if result too large? Overflow! – exponent larger than can be represented in the 8-bit Exponent field
What if result too small? Underflow! – negative exponent larger (in magnitude) than can be represented in the 8-bit Exponent field
[Figure: number line showing underflow regions near 0 and overflow regions beyond the representable range]
What would help reduce chances of overflow and/or underflow?

37 Double Precision Fl. Pt. Representation
64-bit version (C "double"), as two 32-bit words:
word 1: S (1 bit) | Exponent (11 bits) | Significand (20 bits); word 2: Significand cont'd (32 bits)
Double Precision (vs. Single Precision): C variable declared as double; the primary advantage is greater accuracy due to the larger significand

38 QUAD Precision Fl. Pt. Representation
Next multiple of word size (128 bits):
- Unbelievable range of numbers
- Unbelievable precision (accuracy)
- Currently being worked on (IEEE 754r); current version has 15 exponent bits and 112 significand bits (113 precision bits)
Oct-precision? Some have tried, no real traction so far
Half-precision? Yep, that's for a short (16 bit)
en.wikipedia.org/wiki/Quad_precision, en.wikipedia.org/wiki/Half_precision

39 IEEE 754 Floating Point Standard
Single Precision (DP similar): S (1 bit) | Exponent (8 bits) | Significand (23 bits)
Sign bit: 1 means negative, 0 means positive
Significand:
- To pack more bits, the leading 1 is implicit for normalized numbers
- 23 + 1 = 24 bits single, 52 + 1 = 53 bits double
- always true: 0 < Significand < 1 (for normalized numbers)
Note: 0 has no leading 1, so reserve exponent value 0 just for the number 0

40 IEEE 754 Floating Point Standard
IEEE 754 uses "biased exponent" representation:
- Designers wanted FP numbers to be usable even if there is no FP hardware; e.g., sort records with FP numbers using integer compares
- Wanted a bigger (integer) exponent field to represent bigger numbers
- 2's complement poses a problem (because negative numbers look bigger)
Example: 1.0 x 2^-1 and 1.0 x 2^1 (done in class)
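The first motivation — sorting FP records with integer compares — can be demonstrated directly in Python (illustration only; bits_of is a made-up helper):

```python
import struct

def bits_of(x):
    """Raw 32-bit pattern of a single-precision float."""
    return struct.unpack('>I', struct.pack('>f', x))[0]

# For positive normalized floats, the biased-exponent layout means the
# raw bit patterns sort in the same order as the values themselves
values = [0.5, 1.0, 1.5, 2.0, 3.0e10]
patterns = [bits_of(v) for v in values]
print(patterns == sorted(patterns))  # True
```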

41 IEEE 754 Floating Point Standard
Called Biased Notation, where the bias is the number subtracted to get the real value:
- IEEE 754 uses a bias of 127 for single precision – subtract 127 from the Exponent field to get the actual exponent
- 1023 is the bias for double precision
Summary (single precision): S (1 bit) | Exponent (8 bits) | Significand (23 bits)
(-1)^S x (1 + Significand) x 2^(Exponent - 127)
Double precision identical, except with exponent bias of 1023 (half, quad similar)
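The summary formula can be applied directly in Python and cross-checked against the machine's own IEEE 754 interpretation (decode_single is an illustrative name; normalized numbers only — no 0, denorms, infinities, or NaNs):

```python
import struct

def decode_single(bits):
    """Evaluate (-1)^S x (1 + Significand) x 2^(Exponent - 127)
    on a 32-bit pattern (normalized numbers only)."""
    s = bits >> 31
    exponent = (bits >> 23) & 0xFF
    fraction = bits & 0x7FFFFF
    return (-1.0) ** s * (1 + fraction / (1 << 23)) * 2.0 ** (exponent - 127)

print(decode_single(0x3F800000))  # 1.0  (pattern 0 01111111 000...0)
# cross-check against the machine's IEEE 754 interpretation:
print(struct.unpack('>f', struct.pack('>I', 0x3F800000))[0])  # 1.0
```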

42 Single-Precision Range
Exponents 00000000 and 11111111 reserved
Smallest value:
- Exponent: 00000001 -> actual exponent = 1 - 127 = -126
- Fraction: 000...00 -> significand = 1.0
- +-1.0 x 2^-126 ~ +-1.2 x 10^-38
Largest value:
- Exponent: 11111110 -> actual exponent = 254 - 127 = +127
- Fraction: 111...11 -> significand ~ 2.0
- +-2.0 x 2^+127 ~ +-3.4 x 10^+38

43 Double-Precision Range
Exponents 0000...00 and 1111...11 reserved
Smallest value:
- Exponent: 00000000001 -> actual exponent = 1 - 1023 = -1022
- Fraction: 000...00 -> significand = 1.0
- +-1.0 x 2^-1022 ~ +-2.2 x 10^-308
Largest value:
- Exponent: 11111111110 -> actual exponent = 2046 - 1023 = +1023
- Fraction: 111...11 -> significand ~ 2.0
- +-2.0 x 2^+1023 ~ +-1.8 x 10^+308

44 Floating-Point Precision
Relative precision (all fraction bits are significant):
- Single: approx 2^-23, equivalent to 23 x log10(2) ~ 23 x 0.3 ~ 6 decimal digits of precision
- Double: approx 2^-52, equivalent to 52 x log10(2) ~ 52 x 0.3 ~ 16 decimal digits of precision

45 Floating-Point Example
Represent -0.75:
- -0.75 = (-1)^1 x 1.1 two x 2^-1
- S = 1
- Fraction = 1000...00 two
- Exponent = -1 + Bias
  - Single: -1 + 127 = 126 = 01111110 two
  - Double: -1 + 1023 = 1022 = 01111111110 two
Single: 1 01111110 1000...00
Double: 1 01111111110 1000...00
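The -0.75 encoding can be verified by packing the value and pulling the fields back out (a Python sketch; they should match S = 1, Exponent = 126, Fraction = 1000...00):

```python
import struct

# Raw 32-bit pattern of single-precision -0.75
bits = struct.unpack('>I', struct.pack('>f', -0.75))[0]
print(f'{bits:032b}')              # 10111111010000000000000000000000
print((bits >> 31) & 1)            # 1   (sign)
print((bits >> 23) & 0xFF)         # 126 (biased exponent = -1 + 127)
print(f'{bits & 0x7FFFFF:023b}')   # 10000000000000000000000 (fraction)
```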

46 Floating-Point Example
What number is represented by the single-precision float 1 10000001 01000...00?
- S = 1
- Fraction = 01000...00 two
- Exponent = 10000001 two = 129
x = (-1)^1 x (1 + 0.01 two) x 2^(129 - 127) = (-1) x 1.25 x 2^2 = -5.0

47 Example: Converting Binary FP to Decimal (done in class)

48 Example: Converting Decimal to FP – 63.25 (done in class)

49 Representation for 0
Represent 0?
- exponent all zeroes
- significand all zeroes
- What about sign? Both cases valid:
+0: 0 00000000 000...00
-0: 1 00000000 000...00

50 Special Numbers
What have we defined so far? (Single Precision)

Exponent | Significand | Object
0        | 0           | 0
0        | nonzero     | ???
1-254    | anything    | +/- fl. pt. #
255      | 0           | +/- ∞
255      | nonzero     | ???

51 Representation for Not a Number
What do I get if I calculate sqrt(-4.0) or 0/0?
- If ∞ is not an error, these shouldn't be either
- Called Not a Number (NaN)
- Exponent = 255, Significand nonzero
Why is this useful?
- Hope NaNs help with debugging?
- They contaminate: op(NaN, X) = NaN

52 Infinities and NaNs
Exponent = 111...1, Fraction = 000...0:
- ±Infinity
- Can be used in subsequent calculations, avoiding need for overflow check
Exponent = 111...1, Fraction ≠ 000...0:
- Not-a-Number (NaN)
- Indicates illegal or undefined result, e.g., 0.0 / 0.0
- Can be used in subsequent calculations

53 Representation for Denorms
Problem: there's a gap among representable FP numbers around 0 – normalization and the implicit 1 are to blame! (done in class)
[Figure: number line showing gaps between 0 and the smallest normalized numbers a and b]

54 Representation for Denorms
Solution:
- We still haven't used Exponent = 0, Significand nonzero
- Denormalized number: no (implied) leading 1, implicit exponent = -126
- Smallest representable positive number: a = 2^-149
- Second smallest representable positive number: b = 2^-148

55 Special Numbers
What have we defined so far? (Single Precision)

Exponent | Significand | Object
0        | 0           | 0
0        | nonzero     | Denorm
1-254    | anything    | +/- fl. pt. #
255      | 0           | +/- ∞
255      | nonzero     | NaN

56 Floating-Point Addition
Consider a 4-digit decimal example: 9.999 x 10^1 + 1.610 x 10^-1
1. Align decimal points (shift number with smaller exponent): 9.999 x 10^1 + 0.016 x 10^1
2. Add significands: 9.999 x 10^1 + 0.016 x 10^1 = 10.015 x 10^1
3. Normalize result & check for over/underflow: 1.0015 x 10^2
4. Round and renormalize if necessary: 1.002 x 10^2

57 Floating Point Addition
Addition (and subtraction): (±F1 x 2^E1) + (±F2 x 2^E2) = ±F3 x 2^E3
- Step 0: Restore the hidden bit in F1 and in F2
- Step 1: Align fractions by right shifting F2 by E1 - E2 positions (assuming E1 ≥ E2), keeping track of (three of) the bits shifted out in G (guard), R (round), and S (sticky)
- Step 2: Add the resulting F2 to F1 to form F3
- Step 3: Normalize F3 (so it is in the form 1.XXXXX...)
  - If F1 and F2 have the same sign, F3 is in [1,4), so at most a 1-bit right shift of F3 and increment of E3 is needed (check for overflow)
  - If F1 and F2 have different signs, F3 may require many left shifts, each time decrementing E3 (check for underflow)
- Step 4: Round F3 and possibly normalize F3 again
- Step 5: Rehide the most significant bit of F3 before storing the result
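Steps 1–3 can be sketched on toy fixed-point significands in Python (a behavioral model; rounding and the guard/round/sticky bits are deliberately omitted, and the names are my own):

```python
def fp_add(s1, e1, s2, e2, frac_bits=4):
    """Toy floating-point add: significands are signed fixed-point
    integers with frac_bits fraction bits (so 1.0000 is 1 << 4 = 16)."""
    if e1 < e2:                              # make e1 the larger exponent
        s1, e1, s2, e2 = s2, e2, s1, e1
    s2 >>= e1 - e2                           # step 1: align fractions
    s3, e3 = s1 + s2, e1                     # step 2: add significands
    if s3 == 0:
        return 0, 0
    while abs(s3) >= (2 << frac_bits):       # step 3: normalize (right)
        s3 >>= 1
        e3 += 1
    while abs(s3) < (1 << frac_bits):        # step 3: normalize (left)
        s3 <<= 1
        e3 -= 1
    return s3, e3

# 0.5 + (-0.4375): 1.0000 x 2^-1 + (-1.1100 x 2^-2)
print(fp_add(0b10000, -1, -0b11100, -2))  # (16, -4) = 1.0000 x 2^-4 = 0.0625
```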

58 Floating Point Addition Example
Add (0.5 = 1.0000 x 2^-1) + (-0.4375 = -1.1100 x 2^-2)
- Step 0:
- Step 1:
- Step 2:
- Step 3:
- Step 4:
- Step 5:

59 Floating Point Addition Example
Add (0.5 = 1.0000 x 2^-1) + (-0.4375 = -1.1100 x 2^-2)
- Step 0: Hidden bits restored in the representation above: 1.0000 x 2^-1 and -1.1100 x 2^-2
- Step 1: Shift the significand with the smaller exponent (1.1100) right until its exponent matches the larger exponent (so once): -1.1100 x 2^-2 = -0.1110 x 2^-1
- Step 2: Add significands: 1.0000 + (-0.1110) = 0.0010
- Step 3: Normalize the sum, checking for exponent over/underflow: 0.0010 x 2^-1 = 0.0100 x 2^-2 = ... = 1.0000 x 2^-4
- Step 4: The sum is already rounded, so we're done
- Step 5: Rehide the hidden bit before storing

60 Hardware for Floating-Point Addition Figure 12.5 Simplified schematic of a floating-point adder.

61 Floating Point Multiplication
Multiplication: (±F1 x 2^E1) x (±F2 x 2^E2) = ±F3 x 2^E3
- Step 0: Restore the hidden bit in F1 and in F2
- Step 1: Add the two (biased) exponents and subtract the bias from the sum, so E1 + E2 - 127 = E3; also determine the sign of the product (which depends on the signs of the operands (most significant bits))
- Step 2: Multiply F1 by F2 to form a double precision F3
- Step 3: Normalize F3 (so it is in the form 1.XXXXX...) – since F1 and F2 come in normalized, F3 is in [1,4), so at most a 1-bit right shift of F3 and increment of E3 is needed; check for overflow/underflow
- Step 4: Round F3 and possibly normalize F3 again
- Step 5: Rehide the most significant bit of F3 before storing the result
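The biased-exponent bookkeeping of Step 1 and the significand product of Step 2 can be sketched the same way (a toy model with rounding omitted; names are my own):

```python
BIAS = 127

def fp_mul(s1, be1, s2, be2, frac_bits=4):
    """Toy floating-point multiply: significands are signed fixed-point
    integers with frac_bits fraction bits; exponents are biased by 127."""
    be3 = be1 + be2 - BIAS             # step 1: add biased exps, subtract bias
    s3 = (s1 * s2) >> frac_bits        # step 2: double-length product, rescaled
    while abs(s3) >= (2 << frac_bits): # step 3: at most a 1-bit normalization
        s3 >>= 1
        be3 += 1
    return s3, be3

# 0.5 x (-0.4375) = (1.0000 x 2^-1) x (-1.1100 x 2^-2)
s, be = fp_mul(0b10000, -1 + BIAS, -0b11100, -2 + BIAS)
print(s, be - BIAS)  # -28 -3, i.e. -1.1100 x 2^-3 = -0.21875
```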

62 Floating Point Multiplication Example
Multiply (0.5 = 1.0000 x 2^-1) x (-0.4375 = -1.1100 x 2^-2)
- Step 0:
- Step 1:
- Step 2:
- Step 3:
- Step 4:
- Step 5:

63 Floating Point Multiplication Example
Multiply (0.5 = 1.0000 x 2^-1) x (-0.4375 = -1.1100 x 2^-2)
- Step 0: Hidden bits restored in the representation above: 1.0000 x 2^-1 and -1.1100 x 2^-2
- Step 1: Add the exponents (not in bias: -1 + (-2) = -3; in bias: (-1+127) + (-2+127) - 127 = (-1-2) + 127 = 124); the signs differ, so the product is negative
- Step 2: Multiply the significands: 1.0000 x 1.1100 = 1.1100
- Step 3: Normalize the product, checking for exponent over/underflow: 1.1100 x 2^-3 is already normalized
- Step 4: The product is already rounded, so we're done
- Step 5: Rehide the hidden bit before storing

64 Hardware for Floating-Point Multiplication and Division Figure 12.6 Simplified schematic of a floating- point multiply/divide unit.

65 Floating Point Examples  Add (0.75) + (-0.375)  Multiplication (0.75) * (-0.375)

66 MIPS Floating Point Instructions
MIPS has a separate Floating Point Register File ($f0, $f1, ..., $f31) (whose registers are used in pairs for double precision values) with special instructions to load to and store from them:
lwc1 $f1, 54($s2)   # $f1 = Memory[$s2+54]
swc1 $f1, 58($s4)   # Memory[$s4+58] = $f1
And supports IEEE 754 single precision operations:
add.s $f2, $f4, $f6   # $f2 = $f4 + $f6
and double precision operations:
add.d $f2, $f4, $f6   # $f2||$f3 = $f4||$f5 + $f6||$f7
similarly for sub.s, sub.d, mul.s, mul.d, div.s, div.d

67 MIPS Floating Point Instructions, Con't
And floating point single precision comparison operations:
c.x.s $f2, $f4   # e.g. c.lt.s: if($f2 < $f4) cond=1; else cond=0
where x may be eq, neq, lt, le, gt, ge
and double precision comparison operations:
c.x.d $f2, $f4   # e.g. c.lt.d: if($f2||$f3 < $f4||$f5) cond=1; else cond=0
And floating point branch operations:
bc1t 25   # if(cond==1) go to PC+4+(25*4)
bc1f 25   # if(cond==0) go to PC+4+(25*4)

68 FP Example: °F to °C
C code:
float f2c (float fahr) {
  return ((5.0/9.0) * (fahr - 32.0));
}
fahr in $f12, result in $f0, literals in global memory space
Compiled MIPS code:
f2c: lwc1  $f16, const5($gp)
     lwc1  $f18, const9($gp)
     div.s $f16, $f16, $f18
     lwc1  $f18, const32($gp)
     sub.s $f18, $f12, $f18
     mul.s $f0, $f16, $f18
     jr    $ra

69 FP Example: Array Multiplication
X = X + Y x Z; all 32 x 32 matrices, 64-bit double-precision elements
C code:
void mm (double x[][32], double y[][32], double z[][32]) {
  int i, j, k;
  for (i = 0; i != 32; i = i + 1)
    for (j = 0; j != 32; j = j + 1)
      for (k = 0; k != 32; k = k + 1)
        x[i][j] = x[i][j] + y[i][k] * z[k][j];
}
Addresses of x, y, z in $a0, $a1, $a2, and i, j, k in $s0, $s1, $s2

70 FP Example: Array Multiplication MIPS code: li $t1, 32 # $t1 = 32 (row size/loop end) li $s0, 0 # i = 0; initialize 1st for loop L1: li $s1, 0 # j = 0; restart 2nd for loop L2: li $s2, 0 # k = 0; restart 3rd for loop sll $t2, $s0, 5 # $t2 = i * 32 (size of row of x) addu $t2, $t2, $s1 # $t2 = i * size(row) + j sll $t2, $t2, 3 # $t2 = byte offset of [i][j] addu $t2, $a0, $t2 # $t2 = byte address of x[i][j] l.d $f4, 0($t2) # $f4 = 8 bytes of x[i][j] L3: sll $t0, $s2, 5 # $t0 = k * 32 (size of row of z) addu $t0, $t0, $s1 # $t0 = k * size(row) + j sll $t0, $t0, 3 # $t0 = byte offset of [k][j] addu $t0, $a2, $t0 # $t0 = byte address of z[k][j] l.d $f16, 0($t0) # $f16 = 8 bytes of z[k][j] …

71 FP Example: Array Multiplication … sll $t0, $s0, 5 # $t0 = i*32 (size of row of y) addu $t0, $t0, $s2 # $t0 = i*size(row) + k sll $t0, $t0, 3 # $t0 = byte offset of [i][k] addu $t0, $a1, $t0 # $t0 = byte address of y[i][k] l.d $f18, 0($t0) # $f18 = 8 bytes of y[i][k] mul.d $f16, $f18, $f16 # $f16 = y[i][k] * z[k][j] add.d $f4, $f4, $f16 # $f4 = x[i][j] + y[i][k]*z[k][j] addiu $s2, $s2, 1 # k = k + 1 bne $s2, $t1, L3 # if (k != 32) go to L3 s.d $f4, 0($t2) # x[i][j] = $f4 addiu $s1, $s1, 1 # j = j + 1 bne $s1, $t1, L2 # if (j != 32) go to L2 addiu $s0, $s0, 1 # i = i + 1 bne $s0, $t1, L1 # if (i != 32) go to L1

72 Accurate Arithmetic  IEEE Std 754 specifies additional rounding control l Extra bits of precision (guard, round, sticky) l Choice of rounding modes l Allows programmer to fine-tune numerical behavior of a computation  Not all FP units implement all options l Most programming languages and FP libraries just use defaults  Trade-off between hardware complexity, performance, and market requirements

73 Problem
Calculate the sum of A and B assuming the 16-bit NVIDIA format (1 sign bit, 5 exponent bits, and 10 significand bits), as well as 1 guard bit, 1 round bit, and 1 sticky bit.
A = ? x 10^3
B = ? x 10^-1

74 Problem
Calculate the product of A and B assuming the 16-bit NVIDIA format (1 sign bit, 5 exponent bits, and 10 significand bits), as well as 1 guard bit, 1 round bit, and 1 sticky bit.
A = ? x 10^0
B = ? x 10^0

75 Associativity
Parallel programs may interleave operations in unexpected orders; assumptions of associativity may fail
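A concrete failure of associativity, runnable in any IEEE 754 environment:

```python
# Floating-point addition is not associative: rounding after each
# operation can make the grouping matter
a, b, c = 1.0e20, -1.0e20, 1.0
print((a + b) + c)  # 1.0  (a + b cancels exactly, then + 1.0)
print(a + (b + c))  # 0.0  (1.0 is absorbed into -1.0e20 by rounding)
```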

76 Problem
Calculate (A+B)+C and A+(B+C) assuming the 16-bit NVIDIA format (1 sign bit, 5 exponent bits, and 10 significand bits), as well as 1 guard bit, 1 round bit, and 1 sticky bit.
A = ? x 10^1
B = ?
C = ? x 10^1

