Arithmetic (© 1998 Morgan Kaufmann Publishers)


1 1  1998 Morgan Kaufmann Publishers Arithmetic Where we've been: –Performance (seconds, cycles, instructions) –Abstractions: Instruction Set Architecture Assembly Language and Machine Language What's up ahead: –Implementing the Architecture

2 2  1998 Morgan Kaufmann Publishers Arithmetic We start with the Arithmetic and Logic Unit 32 operation result a b ALU

3 3  1998 Morgan Kaufmann Publishers Bits are just bits (no inherent meaning) — conventions define relationship between bits and numbers Binary numbers (base 2) 0000 0001 0010 0011 0100 0101 0110 0111 1000 1001... decimal: 0...2 n -1 Of course it gets more complicated: numbers are finite (overflow) fractions and real numbers negative numbers How do we represent negative numbers? i.e., which bit patterns will represent which numbers? Octal and hexadecimal numbers Floating-point numbers Numbers

4 4  1998 Morgan Kaufmann Publishers Sign Magnitude: One's Complement Two's Complement 000 = +0000 = +0000 = +0 001 = +1001 = +1001 = +1 010 = +2010 = +2010 = +2 011 = +3011 = +3011 = +3 100 = -0100 = -3100 = -4 101 = -1101 = -2101 = -3 110 = -2110 = -1110 = -2 111 = -3111 = -0111 = -1 Issues: balance, number of zeros, ease of operations. Two’s complement is best. Possible Representations of Signed Numbers

5 5  1998 Morgan Kaufmann Publishers 32 bit signed numbers: 0000 0000 0000 0000 0000 0000 0000 0000 2 = 0 10 0000 0000 0000 0000 0000 0000 0000 0001 2 = +1 10 0000 0000 0000 0000 0000 0000 0000 0010 2 = +2 10... 0111 1111 1111 1111 1111 1111 1111 1110 2 = +2,147,483,646 10 0111 1111 1111 1111 1111 1111 1111 1111 2 = +2,147,483,647 10 1000 0000 0000 0000 0000 0000 0000 0000 2 = –2,147,483,648 10 1000 0000 0000 0000 0000 0000 0000 0001 2 = –2,147,483,647 10 1000 0000 0000 0000 0000 0000 0000 0010 2 = –2,147,483,646 10... 1111 1111 1111 1111 1111 1111 1111 1101 2 = –3 10 1111 1111 1111 1111 1111 1111 1111 1110 2 = –2 10 1111 1111 1111 1111 1111 1111 1111 1111 2 = –1 10 maxint minint MIPS

6 6  1998 Morgan Kaufmann Publishers Negating a two's complement number: invert all bits and add 1 –Remember: “Negate” and “invert” are different operations. You negate a number but invert a bit. Converting n bit numbers into numbers with more than n bits: –MIPS 16 bit immediate gets converted to 32 bits for arithmetic –copy the most significant bit (the sign bit) into the other bits 0010 -> 0000 0010 1010 -> 1111 1010 "sign extension" –MIPS load byte instructions lbu: no sign extension lb: sign extension Two's Complement Operations

7 7  1998 Morgan Kaufmann Publishers Unsigned numbers just like in grade school (carry/borrow 1s) 0111 0111 0110 + 0110- 0110- 0101 1101 0001 0001 Two's complement operations easy –subtraction using addition of opposite number 0111 + 1010 10001 (result is 0001, carry bit is set) Two's complement overflow (result not within number range) –e.g., adding two 4-bit numbers does not yield an 4-bit number 0111 + 0011 1010 (result is - 6, overflow bit is set) Addition & Subtraction

8 8  1998 Morgan Kaufmann Publishers No overflow when adding a positive and a negative number No overflow when signs are the same for subtraction Overflow occurs when the value affects the sign: –overflow when adding two positives yields a negative –or, adding two negatives gives a positive –or, subtract a negative from a positive and get a negative –or, subtract a positive from a negative and get a positive Consider the operations A + B, and A – B –Can overflow occur if B is 0 ? No. –Can overflow occur if A is 0 ? Yes. Detecting Overflow

9 9  1998 Morgan Kaufmann Publishers An exception (interrupt) occurs –Control jumps to predefined address for exception –Interrupted address is saved for possible resumption Details based on software system / language –example: flight control vs. homework assignment Don't always want to detect overflow — new MIPS instructions: addu, addiu, subu note: addiu still sign-extends! note: sltu, sltiu for unsigned comparisons Effects of Overflow

10 10  1998 Morgan Kaufmann Publishers and, andi:bit-by-bit AND or, ori:bit-by-bit OR sll:shift left logical slr:shift right logical 0101 1010 shifting left two steps gives 0110 1000 0110 1010 shifting right three bits gives 0000 1011 Logical Operations

11 11  1998 Morgan Kaufmann Publishers Let's build a logical unit to support the and and or instructions –we'll just build a 1 bit unit, and use 32 of them –op=0: and; op=1: or Possible Implementation (sum-of-products): result = a b + a op + b op b a operation result Logical unit opabres 0 0 0 0 1 0 0 1 0 0 0 1 1 1 1 0 0 0 1 0 1 1 1 1 0 1 1 1

12 12  1998 Morgan Kaufmann Publishers Selects one of the inputs to be the output, based on a control input IEC symbol of a 4-input MUX: Lets build our logical unit using a MUX: S C A B 0 1 Review: The Multiplexor 0101 b a Operation Result MUX 0 1 0 1 2 3 G 0 3 _ EN

13 13  1998 Morgan Kaufmann Publishers Not easy to decide the “best” way to build something –Don't want too many inputs to a single gate –Don’t want to have to go through too many gates –For our purposes, ease of comprehension is important –We use multiplexors Let's look at a 1-bit ALU for addition: How could we build a 32-bit ALU for AND, OR and ADD? Different Implementations c out = a b + a c in + b c in sum = a  b  c in Sum CarryIn CarryOut a b a b c in c out sum 0 0 0 0 0 0 0 1 0 1 0 1 0 0 1 0 1 1 1 0 1 0 0 0 1 1 0 1 1 0 1 1 0 1 0 1 1 1 1 1

14 14  1998 Morgan Kaufmann Publishers Building a 32 bit ALU for AND, OR and ADD We need a 4-input MUX. 1-bit ALU:

15 15  1998 Morgan Kaufmann Publishers Two's complement approach: just negate b and add. A clever solution: In a multiple bit ALU the least significant CarryIn has to be equal to 1 for subtraction. What about subtraction (a – b) ?

16 16  1998 Morgan Kaufmann Publishers Need to support the set-on-less-than instruction (slt) –remember: slt is an arithmetic instruction –produces a 1 if rs < rt and 0 otherwise –use subtraction: (a-b) < 0 implies a < b Need to support test for equality (beq $t5, $t6, $t7) –use subtraction: (a-b) = 0 implies a = b Tailoring the ALU to the MIPS

17 Supporting slt
[Figure: the other 1-bit ALUs, and the most significant 1-bit ALU]

18 18  1998 Morgan Kaufmann Publishers 32 bit ALU supporting slt a<b  a-b<0, thus Set is the sign bit of the result.

19 19  1998 Morgan Kaufmann Publishers Final ALU including test for equality Notice control lines: 000 = and 001 = or 010 = add 110 = subtract 111 = slt Note: Zero is a 1 when the result is zero!

20 20  1998 Morgan Kaufmann Publishers Conclusion We can build an ALU to support the MIPS instruction set –key idea: use a multiplexor to select the output we want –we can efficiently perform subtraction using two’s complement –we can replicate a 1-bit ALU to produce a 32-bit ALU Important points about hardware –all of the gates are always working –the speed of a gate is affected by the number of inputs to the gate –the speed of a circuit is affected by the number of gates in series (on the “critical path” or the “deepest level of logic”)

21 21  1998 Morgan Kaufmann Publishers Conclusion Our primary focus: comprehension, however, –clever changes to organization can improve performance (similar to using better algorithms in software) –we’ll look at examples for addition, multiplication and division

22 22  1998 Morgan Kaufmann Publishers A 32-bit ALU is much slower than a 1-bit ALU. There are more than one way to do addition. –the two extremes: ripple carry and sum-of-products Can you see the ripple? How could you get rid of it? c 1 = b 0 c 0 + a 0 c 0 + a 0 b 0 c 2 = b 1 c 1 + a 1 c 1 + a 1 b 1 c 2 = c 2 (a 0,b 0,c 0,a 1,b 1 ) c 3 = b 2 c 2 + a 2 c 2 + a 2 b 2 c 3 = c 3 (a 0,b 0,c 0,a 1,b 1,a 2,b 2 ) c 4 = b 3 c 3 + a 3 c 3 + a 3 b 3 c 4 = c 4 (a 0,b 0,c 0,a 1,b 1,a 2,b 2,a 3,b 3 ) Not feasible! Too many inputs to the gates. Problem: ripple carry adder is slow

23 23  1998 Morgan Kaufmann Publishers An approach in-between the two extremes Motivation: –If we didn't know the value of carry-in, what could we do? –When would we always generate a carry? g i = a i b i –When would we propagate the carry? p i = a i + b i –Look at the truth table! Did we get rid of the ripple? c 1 = g 0 + p 0 c 0 c 2 = g 1 + p 1 c 1 c 2 = g 1 +p 1 g 0 +p 1 p 0 c 0 c 3 = g 2 + p 2 c 2 c 3 = g 2 +p 2 g 1 +p 2 p 1 g 0 +p 2 p 1 p 0 c 0 c 4 = g 3 + p 3 c 3 c 4 =... Feasible! A smaller number of inputs to the gates. Carry-lookahead adder

24 24  1998 Morgan Kaufmann Publishers Can’t build a 16 bit CLA adder (too big) Could use ripple carry of 4-bit CLA adders Better: use the CLA principle again! Principle shown in the figure. See textbook for details. Use principle to build bigger adders

25 25  1998 Morgan Kaufmann Publishers More complicated than addition –can be accomplished via shifting and addition More time and more area Let's look at 2 versions based on grammar school algorithm 0010 (multiplicand) x_1011 (multiplier) 0010 0010 0000 0010___ 0010110 Negative numbers: –easy way: convert to positive and multiply –there are better techniques Multiplication

26 26  1998 Morgan Kaufmann Publishers Multiplication, First Version

27 27  1998 Morgan Kaufmann Publishers Multiplication, Final Version

28 28  1998 Morgan Kaufmann Publishers Booth’s Algorithm The grammar school method was implemented using addition and shifting Booth’s algorithm also uses subtraction Based on two bits of the multiplier either add, subtract or do nothing; always shift Handles two’s complement numbers

29 29  1998 Morgan Kaufmann Publishers Fast multipliers Combinational implementations –Conventional multiplier algorithm partial products with AND gates adders –Lots of modifications Sequential implementations –Pipelined multiplier registers between levels of logic result delayed effective speed of multiple multiplications increased

30 30  1998 Morgan Kaufmann Publishers Four-Bit Binary Multiplication Multiplicand B3 B2 B1 B0 Multiplier  A3 A2 A1 A0 1st partial product A0B3 A0B2 A0B1 A0B0 2nd partial product A1B3 A1B2 A1B1 A1B0 3rd partial product A2B3 A2B2 A2B1 A2B0 4th partial product + A3B3 A3B2 A3B1 A3B0 Final product P7 P6 P5 P4 P3 P2 P1 P0

31 31  1998 Morgan Kaufmann Publishers Classical Implementation P  7:0  PP1 PP2 PP3 PP4 & & & & A0 B0 A0 B1 A0 B2 A0 B3 4 / 4 / 4 / 4 / 6 / 6 /

32 32  1998 Morgan Kaufmann Publishers Pipelined Multiplier Clk // / / / / / / / /

33 33  1998 Morgan Kaufmann Publishers Division Simple method: –Initialise the remainder with the dividend –Start from most significant end –Subtract divisor from the remainder if possible (quotient bit 1) –Shift divisor to the right and repeat

34 34  1998 Morgan Kaufmann Publishers Division, First Version

35 35  1998 Morgan Kaufmann Publishers Division, Final Version Same hardware for multiply and divide.

36 36  1998 Morgan Kaufmann Publishers Floating Point (a brief look) We need a way to represent –numbers with fractions, e.g., 3.1416 –very small numbers, e.g.,.000000001 –very large numbers, e.g., 3.15576  10 9 Representation: –sign, exponent, significand: (–1) sign  significand  2 exponent –more bits for significand gives more accuracy –more bits for exponent increases range IEEE 754 floating point standard: –single precision: 8 bit exponent, 23 bit significand –double precision: 11 bit exponent, 52 bit significand

37 37  1998 Morgan Kaufmann Publishers IEEE 754 floating-point standard Leading “1” bit of significand is implicit Exponent is “biased” to make sorting easier –all 0s is smallest exponent all 1s is largest –bias of 127 for single precision and 1023 for double precision –summary: (–1) sign  significand)  2 exponent – bias Example: –decimal: -.75 = -3/4 = -3/2 2 –binary: -.11 = -1.1 x 2 -1 –floating point: exponent = 126 = 01111110 –IEEE single precision: 10111111010000000000000000000000

38 38  1998 Morgan Kaufmann Publishers Floating-point addition 1.Shift the significand of the number with the lesser exponent right until the exponents match 2.Add the significands 3.Normalise the sum, checking for overflow or underflow 4.Round the sum

39 39  1998 Morgan Kaufmann Publishers Floating-point addition

40 40  1998 Morgan Kaufmann Publishers Floating-point multiplication 1.Add the exponents 2.Multiply the significands 3.Normalise the product, checking for overflow or underflow 4.Round the product 5.Find out the sign of the product

41 41  1998 Morgan Kaufmann Publishers Floating Point Complexities Operations are somewhat more complicated (see text) In addition to overflow we can have “underflow” Accuracy can be a big problem –IEEE 754 keeps two extra bits during intermediate calculations, guard and round –four rounding modes –positive divided by zero yields “infinity” –zero divide by zero yields “not a number” –other complexities Implementing the standard can be tricky

42 42  1998 Morgan Kaufmann Publishers Chapter Four Summary Computer arithmetic is constrained by limited precision Bit patterns have no inherent meaning but standards do exist –two’s complement –IEEE 754 floating point Computer instructions determine “meaning” of the bit patterns Performance and accuracy are important so there are many complexities in real machines (i.e., algorithms and implementation). We are ready to move on (and implement the processor)
