1 Computer Architecture Lecture Notes Spring 2005 Dr. Michael P. Frank Competency Area 4: Computer Arithmetic

2 In previous chapters we've discussed:
—Performance (execution time, clock cycles, instructions, MIPS, etc.)
—Abstractions: Instruction Set Architecture; Assembly Language and Machine Language
In this chapter:
—Implementing the architecture:
–How does the hardware really add, subtract, multiply, and divide?
–Signed and unsigned representations
–Constructing an ALU (Arithmetic Logic Unit)
Introduction

3 Humans naturally represent numbers in base 10; computers, however, work in base 2.
Example: (1111 1111) 2 = 255 as an unsigned number, but −1 as a signed (two's complement) number.
Note: signed representations include sign-magnitude, one's complement, and two's complement.
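To make the distinction concrete, here is a minimal Python sketch (not from the slides; function names are illustrative) interpreting the same 8-bit pattern both ways:

```python
def as_unsigned(bits: str) -> int:
    """Interpret a bit string as an unsigned integer."""
    return int(bits, 2)

def as_twos_complement(bits: str) -> int:
    """Interpret a bit string as a signed two's-complement integer."""
    value = int(bits, 2)
    if bits[0] == '1':             # sign bit set -> negative number
        value -= 1 << len(bits)    # subtract 2^n to recover the signed value
    return value

print(as_unsigned("11111111"))         # 255
print(as_twos_complement("11111111"))  # -1
```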

4 Possible Representations

Bits   Sign Magnitude   One's Complement   Two's Complement
000    +0               +0                 +0
001    +1               +1                 +1
010    +2               +2                 +2
011    +3               +3                 +3
100    -0               -3                 -4
101    -1               -2                 -3
110    -2               -1                 -2
111    -3               -0                 -1

Sign magnitude: the first bit is the sign bit, the others the magnitude.
One's complement: the first bit is the sign bit; invert the other bits for the magnitude.
Two's complement: negation is "invert all bits and add 1."
NOTE: Computers today use two's complement binary representations for signed numbers.
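The three columns of the table can be reproduced with a small sketch (function names are mine, not from the slides):

```python
def sign_magnitude(bits: str) -> int:
    """First bit is the sign; the rest is the magnitude."""
    magnitude = int(bits[1:], 2)
    return -magnitude if bits[0] == '1' else magnitude

def ones_complement(bits: str) -> int:
    """Negative values are the bitwise inverse of the magnitude."""
    value = int(bits, 2)
    n = len(bits)
    return value - ((1 << n) - 1) if bits[0] == '1' else value

def twos_complement(bits: str) -> int:
    """Negative values are offset by 2^n."""
    value = int(bits, 2)
    n = len(bits)
    return value - (1 << n) if bits[0] == '1' else value

for b in ["011", "100", "111"]:
    print(b, sign_magnitude(b), ones_complement(b), twos_complement(b))
```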

5 Two's Complement Representations
32-bit signed numbers (MIPS):
0000 0000 0000 0000 0000 0000 0000 0000 two = 0 ten
0000 0000 0000 0000 0000 0000 0000 0001 two = +1 ten
0000 0000 0000 0000 0000 0000 0000 0010 two = +2 ten
...
0111 1111 1111 1111 1111 1111 1111 1110 two = +2,147,483,646 ten
0111 1111 1111 1111 1111 1111 1111 1111 two = +2,147,483,647 ten (maxint)
1000 0000 0000 0000 0000 0000 0000 0000 two = –2,147,483,648 ten (minint)
1000 0000 0000 0000 0000 0000 0000 0001 two = –2,147,483,647 ten
1000 0000 0000 0000 0000 0000 0000 0010 two = –2,147,483,646 ten
...
1111 1111 1111 1111 1111 1111 1111 1101 two = –3 ten
1111 1111 1111 1111 1111 1111 1111 1110 two = –2 ten
1111 1111 1111 1111 1111 1111 1111 1111 two = –1 ten
The hardware need only test the first bit to determine the sign.

6 Two's Complement Operations
Negating a two's complement number:
–invert all bits and add 1
–or: preserve the rightmost 1 and the 0s to its right, and flip all bits to the left of that rightmost 1
Converting n-bit numbers into m-bit numbers with m > n uses "sign extension": the most significant (sign) bit is copied into the left portion of the new word.
Example: convert a 4-bit signed number into an 8-bit number.
0010  0000 0010 (+2 ten)
1010  1111 1010 (–6 ten)
For unsigned numbers, the leftmost bits are simply filled with 0s.
Example instructions: lbu/lb, slt/sltu, etc.
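Sign extension can be sketched directly on bit strings (a minimal illustration; function names are assumptions, not MIPS operations):

```python
def sign_extend(bits: str, width: int) -> str:
    """Sign-extend a two's-complement bit string to `width` bits by
    copying the sign bit into the new left portion of the word."""
    return bits[0] * (width - len(bits)) + bits

def zero_extend(bits: str, width: int) -> str:
    """For unsigned numbers, the leftmost bits are filled with 0s."""
    return "0" * (width - len(bits)) + bits

print(sign_extend("0010", 8))  # 00000010  (+2 stays +2)
print(sign_extend("1010", 8))  # 11111010  (-6 stays -6)
print(zero_extend("1010", 8))  # 00001010  (unsigned 10 stays 10)
```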

7 Addition and Subtraction
Just like in grade school (carry/borrow 1s):

    0111     0111     0110
  + 0110   - 0110   - 0101

Two's complement makes these operations easy — subtraction becomes addition of the negative number:

    0111
  + 1010   (the two's complement of 0110)

Overflow (result too large for the finite computer word):
e.g., adding two n-bit numbers does not always yield an n-bit number:

    0111
  + 0001
    1000   (note that the term "overflow" is somewhat misleading;
            it does not mean a carry "overflowed")
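The overflow condition (two operands of the same sign producing a result of the opposite sign) can be sketched as follows; this is an illustration, not the hardware's actual gate logic:

```python
def add_nbit(a: int, b: int, n: int = 4):
    """Add two n-bit two's-complement numbers, reporting overflow.
    Overflow occurs when two operands of the same sign produce a
    result of the opposite sign."""
    mask = (1 << n) - 1
    raw = (a + b) & mask                              # keep only n bits
    result = raw - (1 << n) if raw >> (n - 1) else raw  # reinterpret as signed
    overflow = (a >= 0) == (b >= 0) and (result >= 0) != (a >= 0)
    return result, overflow

print(add_nbit(7, 1))   # (-8, True):  0111 + 0001 = 1000, overflow
print(add_nbit(7, -6))  # (1, False):  0111 + 1010 = 0001
```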

8 32-bit ALU with Zero Detect
Recall that given the following control lines, we get these functions:
000 = and
001 = or
010 = add
110 = subtract
111 = slt
We've learned how to build each of these functions in hardware.
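Behaviorally, the control lines act as a function-select multiplexor. Here is a sketch of that behavior (a software model, not a gate-level design; the function name `alu` is mine):

```python
def to_signed(x: int, bits: int) -> int:
    """Reinterpret an unsigned bit pattern as two's complement."""
    return x - (1 << bits) if x >> (bits - 1) else x

def alu(control: str, a: int, b: int, bits: int = 32) -> int:
    """Select the ALU function with the 3-bit control encoding."""
    mask = (1 << bits) - 1
    if control == "000":
        return a & b                     # and
    if control == "001":
        return a | b                     # or
    if control == "010":
        return (a + b) & mask            # add
    if control == "110":
        return (a - b) & mask            # subtract (via two's complement)
    if control == "111":                 # slt: set on less than (signed)
        return 1 if to_signed(a, bits) < to_signed(b, bits) else 0
    raise ValueError("unknown control lines")

print(alu("010", 7, 6))  # 13
print(alu("110", 7, 6))  # 1
print(alu("111", 7, 6))  # 0
```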

9 So far…
We've studied how to implement a 1-bit ALU in hardware that supports the MIPS instruction set:
—key idea: use a multiplexor to select the desired output function
—we can efficiently perform subtraction using two's complement
—we can replicate a 1-bit ALU to produce a 32-bit ALU
Important issues about hardware:
—all of the gates are always working
—the speed of a gate is affected by the number of inputs to the gate
—the speed of a circuit is affected by the number of gates in series (on the "critical path," or the "deepest level of logic")
Changes in hardware organization can improve performance:
—we'll look at examples for addition (the carry-lookahead adder), multiplication, and division

10 Better adder design
For adder design:
—Problem: the ripple-carry adder is slow due to sequential evaluation of the carry-in/carry-out bits.
Consider the carry-in inputs:
cin_1 = a_0 b_0 + a_0 cin_0 + b_0 cin_0
cin_2 = a_1 b_1 + a_1 cin_1 + b_1 cin_1
cin_3 = a_2 b_2 + a_2 cin_2 + b_2 cin_2
Using substitution, we can see the "ripple" effect: expanding cin_2 in terms of cin_1 (and cin_1 in terms of cin_0) shows that each carry depends on every lower-order carry.

11 Carry-Lookahead Adder
Faster carry schemes exist that improve the speed of adders in hardware and reduce the complexity of the carry equations, namely the carry-lookahead adder. Let cin_i represent the i-th carry-in bit. Then:
cin_{i+1} = a_i b_i + a_i cin_i + b_i cin_i
We can now define the terms generate and propagate:
g_i = a_i b_i        (generate)
p_i = a_i + b_i      (propagate)
Then:
cin_{i+1} = g_i + p_i cin_i

12 Carry-Lookahead Adder
Suppose g_i is 1. The adder generates a carry-out independent of the value of the carry-in:
cout_i = g_i + p_i cin_i = 1 + p_i cin_i = 1
Now suppose g_i is 0 and p_i is 1:
cout_i = 0 + 1 · cin_i = cin_i
so the adder propagates a carry-in to a carry-out. In summary, cout_i is 1 if either g_i is 1, or both p_i and cin_i are 1. This new approach creates the first level of abstraction.
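The first level of abstraction can be sketched in a few lines; this model computes each carry from g_i, p_i, and the previous carry, and checks against ordinary addition (the function name `cla_add` is mine):

```python
def cla_add(a: int, b: int, n: int = 4):
    """n-bit add using generate/propagate carries:
       g_i = a_i b_i,  p_i = a_i + b_i,  c_{i+1} = g_i + p_i c_i."""
    c = 0
    result = 0
    for i in range(n):
        ai, bi = (a >> i) & 1, (b >> i) & 1
        result |= (ai ^ bi ^ c) << i    # sum bit s_i = a_i XOR b_i XOR c_i
        g, p = ai & bi, ai | bi         # generate, propagate
        c = g | (p & c)                 # c_{i+1} = g_i + p_i c_i
    return result, c                    # (n-bit sum, carry out)

print(cla_add(0b0110, 0b0101))  # (11, 0): 0110 + 0101 = 1011
```

(A real lookahead adder evaluates all the carries in parallel from the g's and p's; this loop only verifies that the recurrence produces the correct carries.)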

13 Carry-Lookahead Adder
Sometimes the first level of abstraction produces large equations. It is beneficial then to look at a second level of abstraction, produced by treating a 16-bit adder as four 4-bit adders, each of which propagates and generates signals at a higher level. Each group has a "super" propagate signal:
P_0 = p_3 p_2 p_1 p_0
P_1 = p_7 p_6 p_5 p_4
P_2 = p_11 p_10 p_9 p_8
P_3 = p_15 p_14 p_13 p_12
So P_i is true only if each of the bits in the group propagates a carry.

14 Carry-Lookahead Adder
For the "super" generate signals, all that matters is whether there is a carry out of the most significant bit of the group:
G_0 = g_3 + p_3 g_2 + p_3 p_2 g_1 + p_3 p_2 p_1 g_0
(and similarly for G_1, G_2, G_3 from the next groups of four bits)
Now we can represent the carry-out signals for the 16-bit adder with two levels of abstraction as:
C_1 = G_0 + P_0 c_0
C_2 = G_1 + P_1 G_0 + P_1 P_0 c_0
C_3 = G_2 + P_2 G_1 + P_2 P_1 G_0 + P_2 P_1 P_0 c_0
C_4 = G_3 + P_3 G_2 + P_3 P_2 G_1 + P_3 P_2 P_1 G_0 + P_3 P_2 P_1 P_0 c_0

15 Carry-Lookahead Adder Design: 2nd Level of Abstraction (diagram)

16 O(log n)-time carry-skip adder (8-bit segment shown)
With this structure, we can do a 2^n-bit add in 2(n+1) logic stages.
Hardware overhead is less than 2× that of a regular ripple-carry adder.
(Figure: carry signals arrive on successive "ticks," 1st through 4th.)

17 Multiplication Algorithms
Recall that multiplication is accomplished via shifting and addition. Example:

     0010    (multiplicand)
   x 0110    (multiplier)
   ------
     0000    (multiplicand x bit 0 of multiplier)
  +  0010    (shift multiplicand left 1 bit; bit 1 of multiplier is 1)
    00100    (intermediate product)
  + 0010     (shift left again; bit 2 of multiplier is 1)
  0001100    (product)
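The shift-and-add idea, sketched directly (an illustration of the principle, not of any particular hardware algorithm below):

```python
def shift_add_multiply(multiplicand: int, multiplier: int, n: int = 4) -> int:
    """Multiply by shifting and adding: for each 1 bit of the multiplier,
    add the correspondingly shifted multiplicand to the product."""
    product = 0
    for i in range(n):
        if (multiplier >> i) & 1:           # examine bit i of the multiplier
            product += multiplicand << i    # add multiplicand shifted left i bits
    return product

print(shift_add_multiply(0b0010, 0b0110))  # 12, i.e. 0001100
```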

18 Multiplication Algorithm 1
Hardware implementation of Algorithm 1: (diagram)

19 Multiplication Algorithm 1
For each bit of the multiplier:
1a. If the LSB of the multiplier is 1, add the multiplicand to the product.
2. Shift the multiplicand register left 1 bit.
3. Shift the multiplier register right 1 bit.
Repeat for all n bits.

20 Multiplication Algorithm 1
Example (4-bit): multiplicand 0010, multiplier 0011

Iteration  Step                        Multiplier  Multiplicand  Product
0          initial values              0011        0000 0010     0000 0000
1          1a: LSB = 1, add Mcand      0011        0000 0010     0000 0010
           2: shift Mcand left         0011        0000 0100     0000 0010
           3: shift Multiplier right   0001        0000 0100     0000 0010
2          1a: LSB = 1, add Mcand      0001        0000 0100     0000 0110
           2: shift Mcand left         0001        0000 1000     0000 0110
           3: shift Multiplier right   0000        0000 1000     0000 0110
3          1: LSB = 0, no add          0000        0000 1000     0000 0110
           2: shift Mcand left         0000        0001 0000     0000 0110
           3: shift Multiplier right   0000        0001 0000     0000 0110
4          1: LSB = 0, no add          0000        0001 0000     0000 0110
           2: shift Mcand left         0000        0010 0000     0000 0110
           3: shift Multiplier right   0000        0010 0000     0000 0110

21 Multiplication Algorithms
For Algorithm 1, we initialize the left half of the multiplicand register to 0 to accommodate its left shifts. All adds are 64 bits wide. This is wasteful and slow.
Algorithm 2: instead of shifting the multiplicand left, shift the product register to the right => halves the widths of the ALU and the multiplicand register.

22 Multiplication Algorithm 2
For each bit of the multiplier:
1a. If the LSB of the multiplier is 1, add the multiplicand to the left half of the product register.
2. Shift the product register right 1 bit.
3. Shift the multiplier register right 1 bit.

23 Multiplication Algorithm 2
Example (4-bit): multiplicand 0010, multiplier 0011

Iteration  Step                             Multiplier  Multiplicand  Product
0          initial values                   0011        0010          0000 0000
1          1a: LSB = 1, add Mcand to left   0011        0010          0010 0000
           2: shift Product right           0011        0010          0001 0000
           3: shift Multiplier right        0001        0010          0001 0000
2          1a: LSB = 1, add Mcand to left   0001        0010          0011 0000
           2: shift Product right           0001        0010          0001 1000
           3: shift Multiplier right        0000        0010          0001 1000
3          1: LSB = 0, no add               0000        0010          0001 1000
           2: shift Product right           0000        0010          0000 1100
           3: shift Multiplier right        0000        0010          0000 1100
4          1: LSB = 0, no add               0000        0010          0000 1100
           2: shift Product right           0000        0010          0000 0110
           3: shift Multiplier right        0000        0010          0000 0110

24 Multiplication Algorithm 3
The third multiplication algorithm combines the right half of the product register with the multiplier. This reduces the number of steps needed to implement the multiply, and it also saves space.
Hardware implementation of Algorithm 3: (diagram)

25 Multiplication Algorithm 3
For each bit:
1a. If the LSB of the product register is 1, add the multiplicand to the left half of the product register.
2. Shift the product register right 1 bit.

26 Multiplication Algorithm 3
Example (4-bit): multiplicand 0010, multiplier 0011 (loaded into the right half of the product register)

Iteration  Step                             Multiplicand  Product
0          initial values                   0010          0000 0011
1          1a: LSB = 1, add Mcand to left   0010          0010 0011
           2: shift Product right           0010          0001 0001
2          1a: LSB = 1, add Mcand to left   0010          0011 0001
           2: shift Product right           0010          0001 1000
3          1: LSB = 0, no add               0010          0001 1000
           2: shift Product right           0010          0000 1100
4          1: LSB = 0, no add               0010          0000 1100
           2: shift Product right           0010          0000 0110
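The trace above can be reproduced with a short sketch of Algorithm 3 (the function name is mine; a Python int stands in for the product register, so the adder's carry-out bit, which real hardware keeps and shifts in, is handled implicitly):

```python
def multiply_alg3(multiplicand: int, multiplier: int, n: int = 4) -> int:
    """Algorithm 3: the multiplier starts in the right half of the product
    register; each step optionally adds the multiplicand to the left half,
    then shifts the whole register right one bit."""
    product = multiplier                    # right half holds the multiplier
    for _ in range(n):
        if product & 1:                     # 1a. LSB of product register = 1?
            product += multiplicand << n    #     add multiplicand to left half
        product >>= 1                       # 2. shift product register right
    return product

print(multiply_alg3(0b0010, 0b0011))  # 6, i.e. 0000 0110
```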

27 Division Algorithms
Example: (long-division figure: the quotient is produced by repeatedly subtracting the divisor from the dividend, leaving a remainder)
Hardware implementations are similar to the multiplication algorithms:
+Algorithm 1: implements the conventional division method
+Algorithm 2: reduces the divisor register and ALU by half
+Algorithm 3: eliminates the quotient register completely
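The conventional shift-and-subtract (restoring) method behind these algorithms can be sketched as follows (an illustration for unsigned operands; the function name is mine):

```python
def restoring_divide(dividend: int, divisor: int, n: int = 4):
    """Shift-and-subtract division for n-bit unsigned operands.
    Returns (quotient, remainder)."""
    remainder = 0
    quotient = 0
    for i in reversed(range(n)):            # from MSB down to LSB
        # bring down the next dividend bit into the remainder
        remainder = (remainder << 1) | ((dividend >> i) & 1)
        if remainder >= divisor:            # divisor "goes into" the remainder
            remainder -= divisor
            quotient |= 1 << i              # set this quotient bit
    return quotient, remainder

print(restoring_divide(7, 2))  # (3, 1)
```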

28 Floating Point Numbers
We need a way to represent:
—numbers with fractions, e.g., 3.1416
—very small numbers, e.g., 0.000000001
—very large numbers, e.g., 3.15576 × 10^9
Representation:
—sign, exponent, significand: (−1)^sign × significand × 2^exponent
—more bits for the significand gives more accuracy
—more bits for the exponent increases dynamic range
IEEE 754 floating point standard:
—Single precision: 1 sign bit, 8-bit exponent, 23-bit significand
—Double precision: 1 sign bit, 11-bit exponent, 52-bit significand
| SIGN | EXPONENT | SIGNIFICAND |

29 IEEE 754 floating-point standard
The leading "1" bit of the significand is implicit.
The exponent is "biased" to make sorting easier:
—all 0s is the smallest exponent, all 1s is the largest
—bias of 127 for single precision, 1023 for double precision
Summary: value = (−1)^sign × (1 + fraction) × 2^(exponent − bias)
Example: −0.75 ten = −1.1 two × 2^−1
—Single precision: (−1)^1 × (1 + .1000…) × 2^(126−127)
1|01111110|10000000000000000000000
—Double precision: (−1)^1 × (1 + .1000…) × 2^(1022−1023)
1|01111111110|10000000000000000000000… (29 more 0s)
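The single-precision encoding of −0.75 can be checked with Python's standard `struct` module (the helper name `float_fields` is mine):

```python
import struct

def float_fields(x: float):
    """Unpack a single-precision float into (sign, exponent, fraction)."""
    (bits,) = struct.unpack(">I", struct.pack(">f", x))
    return bits >> 31, (bits >> 23) & 0xFF, bits & 0x7FFFFF

sign, exponent, fraction = float_fields(-0.75)
print(sign, bin(exponent), bin(fraction))
# sign = 1, exponent = 0b1111110 (126 = -1 + bias 127),
# fraction = 0b1000...0 (the .1000... after the implicit leading 1)
```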

30 FP Addition Algorithm
The number with the smaller exponent must be shifted right before adding, so the "binary points" align.
After adding, the sum must be normalized:
—then it is rounded,
–and possibly re-normalized.
Possible errors include:
—overflow (exponent too big)
—underflow (exponent too small)
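A toy sketch of the align-add-normalize steps, using integer (exponent, significand) pairs with an explicit leading 1; signs and rounding are deliberately omitted, and the function name is mine:

```python
def fp_add(a, b, frac_bits=23):
    """Align, add, and normalize two positive FP operands.
    Each operand is (exponent, significand) with an explicit leading 1,
    i.e. its value is significand * 2**(exponent - frac_bits)."""
    (ea, sa), (eb, sb) = a, b
    if ea < eb:                         # make a the operand with the larger exponent
        (ea, sa), (eb, sb) = (eb, sb), (ea, sa)
    sb >>= (ea - eb)                    # shift the smaller number right to align
    e, s = ea, sa + sb                  # add the aligned significands
    while s >= 1 << (frac_bits + 1):    # normalize: shift right, bump exponent
        s >>= 1
        e += 1
    return e, s

# 1.5 + 0.75 with 3 fraction bits: 1.100 * 2^0 + 1.100 * 2^-1
print(fp_add((0, 0b1100), (-1, 0b1100), frac_bits=3))
# (1, 0b1001) = 1.001 * 2^1 = 2.25
```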

31 Floating-Point Addition Hardware
Implements the algorithm from the previous slide. Note the high complexity compared with integer-addition hardware.

32 FP Multiplication Algorithm
Add the exponents, adjusting for the bias.
Multiply the significands.
Normalize, check for overflow/underflow, then round (re-normalizing if necessary).
Compute the sign.

33 Ethics Addendum: Intel Pentium FP bug
In July 1994, Intel discovered there was a bug in the Pentium's FP division hardware…
—but decided not to announce the bug, and shipped the flawed chips anyway, to save time and money
–based on their analysis, they thought errors would arise only rarely
Even after the bug was discovered by users, Intel initially refused to replace the bad chips on request!
—They got a lot of bad PR from this…
Lesson: good, ethical engineers fix problems when they first find them, and don't cover them up!

34 Summary
Computer arithmetic is constrained by limited precision. Bit patterns have no inherent meaning, but standards do exist:
–two's complement
–IEEE 754 floating point
Computer instructions determine the "meaning" of the bit patterns. Performance and accuracy are important, so there are many complexities in real machines (i.e., in the algorithms and their implementation).
* Please read the remainder of the chapter on your own. However, for exams you will only be responsible for the material covered in the lectures.

