Module-4 Arithmetic.


Text Book: Carl Hamacher, Zvonko Vranesic, Safwat Zaky: Computer Organization, 5th Edition, Tata McGraw Hill, 2002. Reference Books: Carl Hamacher, Zvonko Vranesic, Safwat Zaky, Naraig Manjikian: Computer Organization and Embedded Systems, 6th Edition, Tata McGraw Hill, 2012. William Stallings: Computer Organization & Architecture, 9th Edition, Pearson, 2015.

Objective
- Number and character representations
- Addition and subtraction of binary numbers
- Adder and subtractor circuits
- High-speed adders based on carry-lookahead logic circuits
- The Booth algorithm for multiplication of signed numbers
- High-speed multipliers based on carry-save addition
- Logic circuits for division
- Arithmetic operations on floating-point numbers conforming to the IEEE standard

Number, Arithmetic Operations, and Characters

Signed Integer
Three major representations: sign and magnitude, 1's complement, and 2's complement.
Assumptions: a 4-bit machine word, so 16 different values can be represented; roughly half are positive, half are negative.

Sign and Magnitude Representation
The high-order bit is the sign: 0 = positive (or zero), 1 = negative.
The three low-order bits are the magnitude: 0 (000) through 7 (111).
Number range for n bits = ±(2^(n-1) - 1).
Two representations for 0.

One's Complement Representation
Subtraction is implemented by addition of the 1's complement.
Still two representations of 0! This causes some problems.
Some complexities in addition.

Two's Complement Representation
Like 1's complement, except shifted one position clockwise on the number circle.
Only one representation for 0.
One more negative number than positive numbers.

Binary, Signed-Integer Representations

b3 b2 b1 b0   Sign and magnitude   1's complement   2's complement
0  1  1  1         +7                  +7               +7
0  1  1  0         +6                  +6               +6
0  1  0  1         +5                  +5               +5
0  1  0  0         +4                  +4               +4
0  0  1  1         +3                  +3               +3
0  0  1  0         +2                  +2               +2
0  0  0  1         +1                  +1               +1
0  0  0  0         +0                  +0               +0
1  0  0  0         -0                  -7               -8
1  0  0  1         -1                  -6               -7
1  0  1  0         -2                  -5               -6
1  0  1  1         -3                  -4               -5
1  1  0  0         -4                  -3               -4
1  1  0  1         -5                  -2               -3
1  1  1  0         -6                  -1               -2
1  1  1  1         -7                  -0               -1

Figure 2.1. Binary, signed-integer representations.
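The three columns of the table above can be generated with a short Python sketch (the function name `encodings` and the 4-bit default are illustrative, not from the text):

```python
def encodings(v, n=4):
    """Return (sign-magnitude, 1's-complement, 2's-complement) bit
    strings for integer v on an n-bit word."""
    if v >= 0:
        sm = format(v, f'0{n}b')
        oc = tc = sm                             # identical for non-negatives
    else:
        sm = '1' + format(-v, f'0{n-1}b')        # sign bit + magnitude
        oc = format((1 << n) - 1 + v, f'0{n}b')  # (2^n - 1) + v
        tc = format((1 << n) + v, f'0{n}b')      # 2^n + v
    return sm, oc, tc

print(encodings(+5))   # ('0101', '0101', '0101')
print(encodings(-5))   # ('1101', '1010', '1011')
```

Note how the -5 row reproduces the table: sign-magnitude 1101, 1's complement 1010, 2's complement 1011.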

Addition and Subtraction – 2's Complement

  4     0100          -4      1100
+ 3     0011        + (-3)    1101
  7     0111          -7     11001  (carry out of the MSB is ignored)

If the carry-in to the high-order bit equals the carry-out, ignore the carry; if the carry-in differs from the carry-out, overflow has occurred.

  4     0100          -4      1100
- 3     1101        +  3      0011
  1    10001 (carry ignored)  -1      1111

This simple addition scheme makes two's complement the most common choice for integer number systems within digital systems.
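The carry-in/carry-out overflow rule above can be checked with a small Python sketch (the function name `add4` and the 4-bit width are illustrative assumptions):

```python
def add4(x, y):
    """Add two 4-bit 2's-complement values (ints in -8..7).
    Overflow is flagged when the carry into the sign bit differs
    from the carry out of it, as stated on the slide."""
    xb, yb = x & 0xF, y & 0xF
    c_out = (xb + yb) >> 4                    # carry out of bit 3
    c_in = ((xb & 0x7) + (yb & 0x7)) >> 3     # carry into bit 3
    s = (xb + yb) & 0xF
    signed = s - 16 if s & 0x8 else s         # reinterpret as signed
    return signed, c_in != c_out

print(add4(4, 3))    # (7, False)
print(add4(5, 3))    # (-8, True)  overflow
print(add4(-7, -2))  # (7, True)   overflow
```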

2's-Complement Add and Subtract Operations
Figure 2.4. 2's-complement Add and Subtract operations (bit-level worked examples (a) through (j), e.g. sums such as 4 + (-2) and differences such as 5 - 2).

Overflow – Adding two positive numbers can yield a negative result, and adding two negative numbers can yield a positive result.
On the 4-bit 2's-complement number circle (0000 through 1111, i.e. +0 through +7 and -8 through -1):
5 + 3 = -8
-7 - 2 = +7

Overflow Conditions

carries: 0 1 1 1        carries: 1 0 0 0
          0 1 0 1  (+5)           1 0 0 1  (-7)
        + 0 0 1 1  (+3)         + 1 1 1 0  (-2)
          1 0 0 0  (-8)           0 1 1 1  (+7)
        Overflow                Overflow

carries: 0 0 0 0        carries: 1 1 1 1
          0 1 0 1  (+5)           1 1 0 1  (-3)
        + 0 0 1 0  (+2)         + 1 0 1 1  (-5)
          0 1 1 1  (+7)           1 0 0 0  (-8)
        No overflow             No overflow

Overflows can occur only when the signs of the two operands are the same. Overflow occurs if the sign of the result is different from the sign of the operands. Overflow occurs when the carry-in to the high-order bit does not equal the carry-out.

Sign Extension
Task: given a w-bit signed integer X, convert it to a (w+k)-bit integer X' with the same value.
Rule: make k copies of the sign bit (MSB):
X' = x(w-1), …, x(w-1), x(w-1), x(w-2), …, x0
where the leading k bits are copies of the MSB x(w-1).

Sign Extension Example
short int x = 15213;
int ix = (int) x;
short int y = -15213;
int iy = (int) y;

      Decimal   Hex           Binary
x      15213    3B 6D         00111011 01101101
ix     15213    00 00 3B 6D   00000000 00000000 00111011 01101101
y     -15213    C4 93         11000100 10010011
iy    -15213    FF FF C4 93   11111111 11111111 11000100 10010011
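The 16-to-32-bit example above can be reproduced directly on the bit patterns (the helper name `sign_extend` is an illustrative choice):

```python
def sign_extend(x, w, wk):
    """Extend a w-bit 2's-complement pattern x to wk bits by
    replicating the sign bit, per the rule on the previous slide."""
    sign = (x >> (w - 1)) & 1
    if sign:
        # prepend (wk - w) copies of the 1 sign bit
        x |= ((1 << (wk - w)) - 1) << w
    return x

x = 0x3B6D   # 15213 as a 16-bit pattern
y = 0xC493   # -15213 as a 16-bit pattern
print(hex(sign_extend(x, 16, 32)))   # 0x3b6d
print(hex(sign_extend(y, 16, 32)))   # 0xffffc493
```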

Addition and Subtraction of Signed Numbers

Addition/subtraction of signed Numbers
At the ith stage:
Inputs: operand bits xi and yi, and the carry-in ci.
Outputs: the sum bit si, and the carry-out ci+1 to the (i+1)st stage.
Example: X + Y = Z, e.g. 7 + 6 = 13, with the legend for stage i showing xi, yi, si, carry-in ci and carry-out ci+1.

Addition logic for a single stage
si = xi XOR yi XOR ci
ci+1 = xi yi + xi ci + yi ci
Full Adder (FA): symbol for the complete circuit for a single stage of addition, with inputs xi, yi, ci and outputs si, ci+1.

n-bit adder
Cascade n full adder (FA) blocks to form an n-bit adder. Carries propagate, or ripple, through this cascade: an n-bit ripple-carry adder. The FA at the least significant bit (LSB) position receives carry-in c0; the FA at the most significant bit (MSB) position produces the final carry-out cn. The carry-in c0 into the LSB position provides a convenient way to perform subtraction.
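The ripple-carry cascade can be sketched in a few lines of Python, chaining the single-stage full-adder equations (function names are illustrative; bit lists are LSB-first):

```python
def full_adder(x, y, c):
    """One stage: si = xi^yi^ci, ci+1 = majority(xi, yi, ci)."""
    s = x ^ y ^ c
    c_out = (x & y) | (x & c) | (y & c)
    return s, c_out

def ripple_add(xbits, ybits, c0=0):
    """n-bit ripple-carry adder; returns (sum bits LSB-first, carry-out).
    Each stage's carry-out feeds the next stage's carry-in."""
    c, sbits = c0, []
    for x, y in zip(xbits, ybits):
        s, c = full_adder(x, y, c)
        sbits.append(s)
    return sbits, c

# 0110 (6) + 0011 (3) = 1001 (9), bits listed LSB first
print(ripple_add([0, 1, 1, 0], [1, 1, 0, 0]))   # ([1, 0, 0, 1], 0)
```

Setting c0 = 1 (together with complemented y bits) is how the same cascade performs subtraction, as the slide notes.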

Kn-bit adder
Kn-bit numbers can be added by cascading k n-bit adders. Each n-bit adder forms a block, so this is a cascading of blocks. Carries ripple, or propagate, through the blocks: a Blocked Ripple-Carry Adder.

n-bit subtractor
Recall that X - Y is equivalent to adding the 2's complement of Y to X, and the 2's complement is the 1's complement plus 1:
X - Y = X + Y' + 1   (Y' is the bit-wise complement of Y)
The 2's complement of positive and negative numbers is computed in the same way.

n-bit adder/subtractor (contd..)
Each yi enters the n-bit adder through an XOR gate controlled by the Add/Sub signal, which also drives the carry-in c0.
Add/Sub control = 0: addition.
Add/Sub control = 1: subtraction (each yi is complemented and c0 = 1).

Detecting overflows
Overflows can only occur when the signs of the two operands are the same. Overflow occurs if the sign of the result is different from the sign of the operands. Recall that the MSB represents the sign: xn-1, yn-1 and sn-1 represent the signs of operand X, operand Y and the result S respectively. A circuit to detect overflow can be implemented by the logic expression:
Overflow = xn-1 yn-1 sn-1' + xn-1' yn-1' sn-1   (primes denote complements)

Computing the add time
Consider the 0th stage, a full adder with inputs x0, y0 and c0:
s0 is available after 1 gate delay (the sum network),
c1 is available after 2 gate delays (the AND-OR carry network).

Computing the add time (contd..)
Cascade of 4 full adders, i.e. a 4-bit adder:
s0 is available after 1 gate delay, c1 after 2 gate delays.
s1 is available after 3 gate delays, c2 after 4 gate delays.
s2 is available after 5 gate delays, c3 after 6 gate delays.
s3 is available after 7 gate delays, c4 after 8 gate delays.
For an n-bit adder, sn-1 is available after 2n-1 gate delays and cn after 2n gate delays.

Fast addition
Recall the carry equation: ci+1 = xi yi + xi ci + yi ci.
It can be rewritten as: ci+1 = Gi + Pi ci, where
Gi = xi yi is called the generate function and Pi = xi XOR yi is called the propagate function.
Gi and Pi are computed only from xi and yi and not from ci; thus they can be computed in one gate delay after X and Y are applied to the inputs of an n-bit adder. Since ci+1 = 1 only when Generate = 1, or when Propagate = 1 and ci = 1, we can also modify Pi to xi + yi. Each stage is drawn as a B cell with inputs xi, yi, ci and outputs si, Pi, Gi.

Carry lookahead
All carries can be obtained 3 gate delays after X, Y and c0 are applied:
- one gate delay for all Pi and Gi
- two gate delays in the AND-OR circuit for each ci+1
All sums can be obtained 1 gate delay after the carries are computed. Independent of n, n-bit addition therefore requires only 4 gate delays. This is called a Carry-Lookahead adder.
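The lookahead idea above — every carry expressed in terms of the Gi, Pi signals and c0 only, with no rippling — can be sketched in Python (names and the LSB-first bit-list convention are illustrative):

```python
def lookahead_carries(xbits, ybits, c0=0):
    """Carry-lookahead: Gi = xi&yi, Pi = xi^yi, and each carry
    ci+1 = Gi + Pi*Gi-1 + ... + Pi*...*P0*c0 is evaluated as a flat
    sum-of-products over G, P and c0 (no carry chain)."""
    n = len(xbits)
    G = [x & y for x, y in zip(xbits, ybits)]
    P = [x ^ y for x, y in zip(xbits, ybits)]
    c = [c0]
    for i in range(n):
        carry = G[i]
        term = P[i]
        for j in range(i - 1, -1, -1):     # expand Pi*Gj products
            carry |= term & G[j]
            term &= P[j]
        carry |= term & c0
        c.append(carry)
    sbits = [P[i] ^ c[i] for i in range(n)]
    return sbits, c

# 0111 (7) + 0001 (1) = 1000 (8), bits LSB first
print(lookahead_carries([1, 1, 1, 0], [1, 0, 0, 0]))
# ([0, 0, 0, 1], [0, 1, 1, 1, 0])
```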

Carry-lookahead adder
A 4-bit carry-lookahead adder: four B cells (each producing si, Pi and Gi from xi, yi and ci) feed a common carry-lookahead logic block, which produces c1 through c4 from the Pi, Gi signals and c0.

Carry lookahead adder (contd..)
Performing n-bit addition in 4 gate delays independent of n is good only theoretically, because of fan-in constraints. Expanding the carry recurrence gives
ci+1 = Gi + Pi Gi-1 + Pi Pi-1 Gi-2 + … + Pi Pi-1 … P1 G0 + Pi Pi-1 … P0 c0
The last AND gate and the OR gate require a fan-in of (n+1) for an n-bit adder. For a 4-bit adder (n = 4) a fan-in of 5 is required, which is the practical limit for most gates. (Fan-in is the number of inputs a gate can handle.) In order to add operands longer than 4 bits, we can cascade 4-bit carry-lookahead adders. A cascade of carry-lookahead adders is called a Blocked Carry-Lookahead adder.

4-bit carry-lookahead Adder

Blocked Carry-Lookahead adder
The carry-out from a 4-bit block can be written as:
c4 = G3 + P3 G2 + P3 P2 G1 + P3 P2 P1 G0 + P3 P2 P1 P0 c0
Rewrite this as c4 = G0^I + P0^I c0, where
G0^I = G3 + P3 G2 + P3 P2 G1 + P3 P2 P1 G0 and P0^I = P3 P2 P1 P0.
The superscript I denotes the blocked carry lookahead, and the subscript identifies the block. Cascading four 4-bit adders, c16 can be expressed as:
c16 = G3^I + P3^I G2^I + P3^I P2^I G1^I + P3^I P2^I P1^I G0^I + P3^I P2^I P1^I P0^I c0

Blocked Carry-Lookahead adder
A 16-bit blocked carry-lookahead adder: four 4-bit adders handle bit groups 3-0, 7-4, 11-8 and 15-12, and a second-level carry-lookahead logic block produces c4, c8, c12 and c16 from each block's PI, GI signals and c0.
After xi, yi and c0 are applied as inputs:
- Gi and Pi for each stage are available after 1 gate delay.
- PI is available after 2 and GI after 3 gate delays.
- All carries are available after 5 gate delays.
- c16 is available after 5 gate delays.
- s15, which depends on c12, is available after 8 (5+3) gate delays. (Recall that for a 4-bit carry-lookahead adder, the last sum bit is available 3 gate delays after all inputs are available.)

Multiplication

Multiplication of unsigned numbers Product of 2 n-bit numbers is at most a 2n-bit number. Unsigned multiplication can be viewed as addition of shifted versions of the multiplicand.

Multiplication of unsigned numbers (contd..)
Above, we added the partial products at the end. An alternative is to add the partial products at each stage. The rules to implement multiplication are:
- If the ith bit of the multiplier is 1, shift the multiplicand and add the shifted multiplicand to the current value of the partial product.
- Hand the partial product over to the next stage.
- The value of the partial product at the start stage is 0.
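The shift-and-add rules above translate directly into a short Python sketch (function name and the 4-bit default width are illustrative):

```python
def multiply_unsigned(m, q, n=4):
    """Shift-and-add multiplication of n-bit unsigned numbers:
    for each 1 bit of the multiplier q, add the correspondingly
    shifted multiplicand m into the partial product."""
    pp = 0                        # partial product starts at 0
    for i in range(n):
        if (q >> i) & 1:          # ith multiplier bit
            pp += m << i          # add shifted multiplicand
    return pp                     # product is at most 2n bits

print(multiply_unsigned(13, 11))  # 143
```

Note the result 143 needs 8 bits (10001111), illustrating that the product of two n-bit numbers is at most a 2n-bit number.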

Multiplication of unsigned numbers
Typical multiplication cell: a full adder (FA) whose inputs are the carry-in, the bit of the incoming partial product PPi, and the AND of the jth multiplicand bit with the ith multiplier bit; its outputs are the carry-out and the bit of the outgoing partial product PP(i+1).

Typical multiplication cell The main component in each cell is a full adder, FA. The AND gate in each cell determines whether a multiplicand bit, mj , is added to the incoming partial-product bit, based on the value of the multiplier bit, qi. Each row i, where 0 ≤ i ≤ 3, adds the multiplicand (appropriately shifted) to the incoming partial product, PPi, to generate the outgoing partial product, PP(i + 1), if qi = 1. If qi = 0, PPi is passed vertically downward unchanged. PP0 is all 0s, and PP4 is the desired product. The multiplicand is shifted left one position per row by the diagonal signal path.

Combinatorial array multiplier
A 4x4 array of multiplication cells: rows transform (PP0) into PP1, PP2 and PP3; the multiplier bits q select each row, and the multiplicand m3 m2 m1 m0 is shifted by displacing it through the array of adders along diagonal paths. The product is p7, p6, …, p0.

Combinatorial array multiplier (contd..)
Combinatorial array multipliers are:
- extremely inefficient;
- high in gate count for multiplying numbers of practical size, such as 32-bit or 64-bit numbers;
- able to perform only one function, namely unsigned integer multiplication.
Gate efficiency is improved by using a mixture of combinatorial array techniques and sequential techniques requiring less combinational logic.

Sequential multiplication Recall the rule for generating partial products: If the ith bit of the multiplier is 1, add the appropriately shifted multiplicand to the current partial product. Multiplicand has been shifted left when added to the partial product. However, adding a left-shifted multiplicand to an unshifted partial product is equivalent to adding an unshifted multiplicand to a right-shifted partial product.

Sequential Circuit Multiplier
Datapath: an n-bit adder adds register A (initially 0) and the multiplicand M, selected through a MUX by the Add/Noadd control signal; flip-flop C holds the adder's carry-out, and registers C, A and Q shift right under a control sequencer, with multiplier bit q0 at the LSB of Q.

Sequential Circuit Multiplier
Registers A and Q are shift registers, concatenated as shown. Together, they hold partial product PPi while multiplier bit qi generates the signal Add/No add. This signal causes the multiplexer MUX to select 0 when qi = 0, or to select the multiplicand M when qi = 1, to be added to PPi to generate PP(i + 1).

The product is computed in n cycles. The partial product grows in length by one bit per cycle from the initial vector, PP0, of n 0s in register A. The carry-out from the adder is stored in flip-flop C, shown at the left end of register A. At the start, the multiplier is loaded into register Q, the multiplicand into register M, and C and A are cleared to 0.

At the end of each cycle, C, A, and Q are shifted right one bit position to allow for growth of the partial product as the multiplier is shifted out of register Q. Because of this shifting, multiplier bit qi appears at the LSB position of Q to generate the Add/No add signal at the correct time, starting with q0 during the first cycle, q1 during the second cycle, and so on. After they are used, the multiplier bits are discarded by the right-shift operation. Note that the carry-out from the adder is the leftmost bit of PP(i + 1), and it must be held in the C flip-flop to be shifted right with the contents of A and Q. After n cycles, the high-order half of the product is held in register A and the low-order half is in register Q.

Sequential multiplication (contd..)
Worked example with M = 1101 (13) and Q = 1011 (11): starting from the initial configuration C = 0, A = 0000, Q = 1011, the four cycles perform Add, Add, No add, Add, each followed by a right shift of C, A and Q, leaving the product 10001111 (143) in A and Q.
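The C/A/Q register scheme can be simulated in Python to confirm the example; the function name and register variables mirror the slide's notation but the code itself is an illustrative sketch:

```python
def sequential_multiply(m, q, n=4):
    """Simulate n cycles of the C,A,Q multiplier:
    add M into A if q0 = 1, then shift C,A,Q right one position."""
    a, c = 0, 0
    for _ in range(n):
        if q & 1:                            # Add/No-add from LSB of Q
            total = a + m
            c, a = total >> n, total & ((1 << n) - 1)
        # shift C,A,Q right: A's LSB enters Q's MSB, C enters A's MSB
        q = (q >> 1) | ((a & 1) << (n - 1))
        a = (a >> 1) | (c << (n - 1))
        c = 0
    return (a << n) | q                      # high half in A, low half in Q

print(sequential_multiply(13, 11))           # 143
```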

Signed Multiplication

Signed Multiplication
Considering 2's-complement signed operands, what happens to (-13) x (+11) if we follow the same method as for unsigned multiplication? Each shifted version of the negative multiplicand 10011 (-13) must be sign-extended to the full product length; summing the sign-extended partial products for multiplier 01011 (+11) gives 1101110001 (-143). Sign extension of the negative multiplicand is essential.

Signed Multiplication For a negative multiplier, a straightforward solution is to form the 2’s-complement of both the multiplier and the multiplicand and proceed as in the case of a positive multiplier. This is possible because complementation of both operands does not change the value or the sign of the product. A technique that works equally well for both negative and positive multipliers – Booth algorithm.

Booth Algorithm
The Booth scheme treats both positive and negative 2's-complement operands uniformly. Scanning the multiplier from right to left (with an implied 0 to the right of the LSB):
- on a 0-to-1 transition, select -1 times the shifted multiplicand;
- on a 1-to-0 transition, select +1 times the shifted multiplicand;
- on 0-to-0 or 1-to-1, select 0 (add nothing).
For example, the multiplier 0101 recodes to +1 -1 +1 -1.

Booth Algorithm
Example: 1100 (-4) x 0101 (+5). The multiplier 0101 recodes to +1 -1 +1 -1, so the summands are -M, +2M, -4M and +8M:
position 0: -M  = 0 0 0 0 0 1 0 0   (+4)
position 1: +2M = 1 1 1 1 1 0 0 0   (-8)
position 2: -4M = 0 0 0 1 0 0 0 0   (+16)
position 3: +8M = 1 1 1 0 0 0 0 0   (-32)
sum:              1 1 1 0 1 1 0 0   = -20
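The transition rules of the Booth scheme can be captured in a short Python sketch and checked against the example above (the function name is illustrative; negative operands rely on Python's arbitrary-precision 2's-complement shifts):

```python
def booth_multiply(m, q, n=4):
    """Booth's algorithm on n-bit 2's-complement operands.
    Scan the multiplier right to left with an implied 0 to the right
    of the LSB: 0->1 selects -M shifted, 1->0 selects +M shifted."""
    product, prev = 0, 0
    for i in range(n):
        qi = (q >> i) & 1
        if (qi, prev) == (1, 0):        # 0 -> 1 transition: subtract
            product -= m << i
        elif (qi, prev) == (0, 1):      # 1 -> 0 transition: add
            product += m << i
        prev = qi
    return product

print(booth_multiply(-4, 5))   # -20
print(booth_multiply(5, -4))   # -20
print(booth_multiply(13, -6))  # -78
```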

Booth Algorithm
Example: 0101 (+5) x 1100 (-4). The multiplier 1100 recodes to 0 -1 0 0, so the only nonzero summand is -4M at position 2:
0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0
1 1 1 0 1 1 0 0   (-4M = -20)
0 0 0 0 0 0 0 0
sum: 1 1 1 0 1 1 0 0 = -20

Booth Algorithm
Consider a multiplication in which the multiplier is the positive number 0011110. How many appropriately shifted versions of the multiplicand are added in the standard procedure? Four - one for each of the four 1 bits of the multiplier.

Booth Algorithm
Since 0011110 = 0100000 - 0000010, if we use the expression on the right only two summands are needed: +1 times the multiplicand shifted left five positions, and -1 times the multiplicand (i.e. its 2's complement) shifted left one position. The Booth recoding of 0011110 is 0 +1 0 0 0 -1 0.

Booth Algorithm
Booth multiplication with a negative multiplier: 01101 (+13) x 11010 (-6). The multiplier recodes to 0 -1 +1 -1 0, and the sign-extended summands sum to 1110110010 (-78).

Booth Algorithm
Booth multiplier recoding table:

Multiplier bit i   Bit i-1   Version of multiplicand selected by bit i
0                  0          0 x M
0                  1         +1 x M
1                  0         -1 x M
1                  1          0 x M

Booth Algorithm
Best case: a long string of 1's (the recoding skips over the run with a single -1/+1 pair of summands).
Worst case: alternating 0's and 1's, which produce a nonzero recoded digit (+1 or -1) in every position.
A multiplier with long runs of 0's and 1's is a good multiplier: it recodes to only a few nonzero digits.

Fast Multiplication

Bit-Pair Recoding of Multipliers
Bit-pair recoding halves the maximum number of summands (versions of the multiplicand). The sign-extended multiplier is scanned in pairs of Booth-recoded digits, with an implied 0 to the right of the LSB, and each pair is combined into a single digit from {0, ±1, ±2}.
(a) Example of bit-pair recoding derived from Booth recoding: 11010 (-6) Booth-recodes to 0 -1 +1 -1 0; combining the digits in pairs gives 0 -1 -2 (i.e. -1 x 4 - 2 x 1 = -6).

Bit-Pair Recoding of Multipliers
Example: 1100 (-4) x 0101 (+5). The Booth recoding +1 -1 +1 -1 combines in pairs to +1 +1, so the summands are +M and +4M:
+M:  1 1 1 1 1 1 0 0   (-4)
+4M: 1 1 1 1 0 0 0 0   (-16)
sum: 1 1 1 0 1 1 0 0   = -20

Bit-Pair Recoding of Multipliers
(b) Table of multiplicand selection decisions:

Multiplier bit-pair   Multiplier bit on the right   Multiplicand selected
(i+1, i)              (i-1)                         at position i
0 0                   0                              0 x M
0 0                   1                             +1 x M
0 1                   0                             +1 x M
0 1                   1                             +2 x M
1 0                   0                             -2 x M
1 0                   1                             -1 x M
1 1                   0                             -1 x M
1 1                   1                              0 x M
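The selection table above can be coded as a lookup keyed on the bit triple (i+1, i, i-1) and checked against the worked examples (the dictionary name and function name are illustrative):

```python
# Multiplicand multiple selected for each (bit i+1, bit i, bit i-1)
SELECT = {(0, 0, 0): 0, (0, 0, 1): +1, (0, 1, 0): +1, (0, 1, 1): +2,
          (1, 0, 0): -2, (1, 0, 1): -1, (1, 1, 0): -1, (1, 1, 1): 0}

def bitpair_multiply(m, q, n=4):
    """Multiply using the bit-pair recoded multiplier: one summand
    per pair of bits, hence at most n/2 summands."""
    product = 0
    for i in range(0, n, 2):
        triple = ((q >> (i + 1)) & 1, (q >> i) & 1,
                  (q >> (i - 1)) & 1 if i > 0 else 0)   # implied 0 at start
        product += SELECT[triple] * (m << i)
    return product

print(bitpair_multiply(-4, 5))    # -20
print(bitpair_multiply(13, -6))   # -78
```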

Bit-Pair Recoding of Multipliers
Figure 6.15. Multiplication requiring only n/2 summands: 01101 (+13) x 11010 (-6) computed once with the Booth digits 0 -1 +1 -1 0 and once with the bit-pair digits 0 -1 -2, both giving 1110110010 (-78).

Carry-Save Addition of Summands
CSA speeds up the addition process. (Array of adders producing product bits P7 P6 P5 P4 P3 P2 P1 P0, with ripple-carry rows.)

Carry-Save Addition of Summands (Cont.,)
(The same array redrawn with carry-save adders: each row passes its carries diagonally to the next row instead of rippling, and the product bits P7 … P0 emerge from a final conventional adder.)

Carry-Save Addition of Summands (Cont.,)
When adding many summands, we can:
- Group the summands in threes and perform carry-save addition on each of these groups in parallel, generating a set of S and C vectors in one full-adder delay.
- Group all of the S and C vectors into threes, and perform carry-save addition on them, generating a further set of S and C vectors in one more full-adder delay.
- Continue with this process until there are only two vectors remaining.
- Add the final two vectors in a RCA or CLA to produce the desired product.
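The reduction steps above can be sketched in Python on whole bit-vectors (integers); the function names are illustrative, and the final `sum` stands in for the concluding RCA/CLA addition:

```python
def carry_save(a, b, c):
    """One CSA level: reduce three summands to a sum vector S and a
    carry vector C in a single (bitwise) full-adder delay."""
    s = a ^ b ^ c                              # bitwise sum
    carry = (a & b) | (a & c) | (b & c)        # bitwise majority
    return s, carry << 1                       # carries weigh one bit more

def csa_reduce(summands):
    """Group summands in threes and carry-save until two vectors
    remain, then add them with an ordinary adder."""
    while len(summands) > 2:
        nxt = []
        for i in range(0, len(summands) - 2, 3):
            s, c = carry_save(*summands[i:i + 3])
            nxt += [s, c]
        nxt += summands[3 * (len(summands) // 3):]   # leftovers pass through
        summands = nxt
    return sum(summands)

# the six summands of 45 x 63 (all multiplier bits are 1)
print(csa_reduce([45 << i for i in range(6)]))   # 2835
```

The invariant is that S + C equals the sum of the three inputs at every level, so only the depth of the tree, not n, grows with the number of summands.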

Carry-Save Addition of Summands
Figure 6.17. A multiplication example used to illustrate carry-save addition (as shown in Figure 6.18): 101101 (45) x 111111 (63). All six multiplier bits are 1, so the six summands A-F are each the multiplicand 101101, appropriately shifted; the product is 101100010011 (2,835).

Figure 6.18. The multiplication example from Figure 6.17 performed using carry-save addition: summands A, B, C reduce to S1, C1 and D, E, F reduce to S2, C2 in one CSA level; S1, C1, S2 reduce to S3, C3; then S3, C3, C2 reduce to S4, C4; the final addition of S4 and C4 produces the product 101100010011 (2,835).

Integer Division

Manual Division
Longhand division examples: 274 / 13 = 21 remainder 1; in binary, 100010010 / 1101 = 10101 remainder 1. At each step the divisor is subtracted where it fits (26, then 13, in the decimal case; 1101 from 10001, from 10000, and from 1110 in the binary case), leaving the remainder 1.

Longhand Division Steps
Position the divisor appropriately with respect to the dividend and perform a subtraction. If the remainder is zero or positive, a quotient bit of 1 is determined, the remainder is extended by another bit of the dividend, the divisor is repositioned, and another subtraction is performed. If the remainder is negative, a quotient bit of 0 is determined, the dividend is restored by adding back the divisor, and the divisor is repositioned for another subtraction.

Circuit Arrangement
Figure 6.21. Circuit arrangement for binary division: an (n+1)-bit adder/subtractor connects register A (an, …, a0) with the divisor register M (mn-1, …, m0); the dividend is loaded into shift register Q (qn-1, …, q0), A and Q shift left under a control sequencer, and each cycle's quotient bit sets q0.

Restoring Division
Initially, register A holds 0 and register Q holds the dividend. Repeat the following steps n times:
- Shift A and Q left one binary position.
- Subtract M from A, and place the answer back in A.
- If the sign of A is 1, set q0 to 0 and add M back to A (restore A); otherwise, set q0 to 1.
At the end, Q holds the quotient and A holds the remainder.
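The restoring steps above can be sketched in Python (function name and the 4-bit default are illustrative; register A is modeled as a signed integer so its "sign" is just `a < 0`):

```python
def restoring_divide(dividend, divisor, n=4):
    """Restoring division on n-bit unsigned operands using the
    A/Q register pair: shift left, trial subtract, restore on negative."""
    a, q, m = 0, dividend, divisor
    for _ in range(n):
        # shift A,Q left one position; Q's MSB enters A's LSB
        a = (a << 1) | ((q >> (n - 1)) & 1)
        q = (q << 1) & ((1 << n) - 1)
        a -= m                      # trial subtraction
        if a < 0:
            a += m                  # restore A; quotient bit is 0
        else:
            q |= 1                  # quotient bit is 1
    return q, a                     # (quotient, remainder)

print(restoring_divide(8, 3))       # (2, 2)
```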

Examples
Figure 6.22. A restoring-division example: 1000 (8) divided by 11 (3). Each of the four cycles shifts A and Q left, subtracts the divisor, and either sets q0 to 1 or restores A; the final registers hold the quotient 10 (2) in Q and the remainder 10 (2) in A.

Nonrestoring Division
Avoid the need for restoring A after an unsuccessful subtraction. How?
Step 1 (repeat n times): If the sign of A is 0, shift A and Q left one bit position and subtract M from A; otherwise, shift A and Q left and add M to A. Now, if the sign of A is 0, set q0 to 1; otherwise, set q0 to 0.
Step 2: If the sign of A is 1, add M to A (a single corrective addition at the end).
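Step 1 and Step 2 above translate into the following Python sketch (names and the 4-bit default are illustrative; as before, the sign of A is modeled as `a < 0`):

```python
def nonrestoring_divide(dividend, divisor, n=4):
    """Nonrestoring division: add or subtract M depending on the sign
    of A each cycle; one corrective add of M at the very end."""
    a, q, m = 0, dividend, divisor
    for _ in range(n):
        msb = (q >> (n - 1)) & 1
        q = (q << 1) & ((1 << n) - 1)
        if a >= 0:                  # sign of A is 0: shift and subtract
            a = ((a << 1) | msb) - m
        else:                       # sign of A is 1: shift and add
            a = ((a << 1) | msb) + m
        if a >= 0:
            q |= 1                  # set q0 to 1
    if a < 0:                       # Step 2: final corrective addition
        a += m
    return q, a                     # (quotient, remainder)

print(nonrestoring_divide(8, 3))    # (2, 2)
```

Compared with restoring division, the restore inside the loop is gone; the occasional extra add is deferred to a single Step 2 at the end.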

Examples
A nonrestoring-division example: 1000 (8) divided by 11 (3). The four cycles shift A and Q left and then subtract or add M according to the sign of A; the final corrective addition of M yields the remainder 10 (2), with the quotient 10 (2) in Q.

Floating-Point Numbers and Operations

Fractions
If b is a binary vector, we have seen that it can be interpreted as an unsigned integer by:
V(b) = b31·2^31 + b30·2^30 + b29·2^29 + … + b1·2^1 + b0·2^0
This interpretation has an implicit binary point to the immediate right of the vector:
b31 b30 b29 … b1 b0 .   (implicit binary point)
Suppose the vector is instead interpreted with the implicit binary point just to the left of the most significant bit:
. b31 b30 b29 … b1 b0   (implicit binary point)
The value of b is then given by:
V(b) = b31·2^-1 + b30·2^-2 + b29·2^-3 + … + b1·2^-31 + b0·2^-32

Range of fractions
The value of the unsigned binary fraction is:
V(b) = b31·2^-1 + b30·2^-2 + b29·2^-3 + … + b1·2^-31 + b0·2^-32
The range of the numbers represented in this format is:
0 <= V(b) <= 1 - 2^-32
In general, for an n-bit binary fraction (a number with an assumed binary point at the immediate left of the vector), the range of values is:
0 <= V(b) <= 1 - 2^-n

Scientific notation
The previous representations have a fixed point: either the point is at the immediate right or at the immediate left of the vector. This is called fixed-point representation. Fixed-point representation suffers from the drawback that it can represent only a finite (and quite small) range of numbers. A more convenient representation is scientific notation, where numbers are represented in the form:
x = ±m x b^e
The components of these numbers are the mantissa (m), the implied base (b), and the exponent (e).

Significant digits
Fractions in the range 0.0 to 0.9999999 (7 significant decimal digits) need about 24 bits of precision in binary. For example, the binary fraction with 24 1's:
.111111111111111111111111 = 0.9999999404
Not every real number between 0 and 0.9999999404 can be represented by a 24-bit fractional number. The smallest non-zero number that can be represented is:
.000000000000000000000001 = 2^-24 = 5.96046 x 10^-8
Every other non-zero number is constructed in increments of this value.

Sign and exponent digits
In a 32-bit number, suppose we allocate 24 bits to represent a fractional mantissa. Assume the mantissa is represented in sign-and-magnitude format, with one bit allocated to the sign. We allocate 7 bits to the exponent, represented as a 2's-complement integer. No bits are allocated to the base; we assume the base is implied, namely base 2. Since a 7-bit 2's-complement number can represent values in the range -64 to 63, the range of numbers that can be represented is:
0.0000001 x 2^-64 <= |x| <= 0.9999999 x 2^63
In decimal representation this range is:
0.5421 x 10^-20 <= |x| <= 9.2237 x 10^18

A sample representation
Sign (1 bit) | Exponent (7 bits) | Fractional mantissa (24 bits)
24-bit mantissa with an implied binary point to the immediate left; 7-bit exponent in 2's-complement form; implied base 2.

Normalization
Consider the number x = 0.0004056781 x 10^12. If the number is to be represented using only 7 significant mantissa digits, the representation ignoring rounding is x = 0.0004056 x 10^12. If the number is instead shifted so that as many significant digits as possible are brought into the 7 available slots:
x = 0.4056781 x 10^9 = 0.0004056781 x 10^12
The exponent of x was decreased by 1 for every left shift of x. A number brought into a form in which all of the available mantissa digits are used optimally (which is different from all occupied, which may not hold) is called a normalized number. The same methodology holds for binary mantissas:
0.0001101000(10110) x 2^8 = 0.1101000101(10) x 2^5

Normalization (contd..)
A floating-point number is in normalized form if the most significant 1 of the mantissa is in the most significant bit position of the mantissa. All normalized floating-point numbers in this system have the form:
0.1xxxxx.......xx
The range of numbers representable in this system, if every number must be normalized, is:
0.5 x 2^-64 <= |x| < 1 x 2^63

Normalization, overflow and underflow
The procedure for normalizing a floating-point number is:
Do (until MSB of mantissa == 1)
  Shift the mantissa left (or right)
  Decrement (increment) the exponent by 1
end do
Applying the normalization procedure to .000111001110....0010 x 2^-62 gives .111001110........ x 2^-65. But we cannot represent an exponent of -65: in trying to normalize the number we have underflowed our representation. Applying the normalization procedure to 1.00111000............ x 2^63 gives 0.100111.............. x 2^64. This overflows the representation.

Changing the implied base
So far we have assumed an implied base of 2; that is, our floating-point numbers are of the form x = m·2^e. If we choose an implied base of 16, then x = m·16^e, and:
y = (m·16)·16^(e-1) = (m·2^4)·16^(e-1) = m·16^e = x
Thus, every four left shifts of a binary mantissa results in a decrease of 1 in a base-16 exponent. Normalization in this case means shifting the mantissa until there is a 1 somewhere in the first four bits of the mantissa.

Excess notation
Rather than representing an exponent in 2's-complement form, it turns out to be more beneficial to represent it in excess notation. If 7 bits are allocated to the exponent, exponents can be represented in the range -64 to +63, that is: -64 <= e <= 63. The exponent can instead be represented using the coding called excess-64:
E' = Etrue + 64
In general, excess-p coding is: E' = Etrue + p.
A true exponent of -64 is represented as 0; 0 is represented as 64; 63 is represented as 127. This enables efficient comparison of the relative sizes of two floating-point numbers.

IEEE notation
IEEE floating-point notation is the standard representation in use. There are two representations:
- Single precision: 32 bits (23-bit mantissa, 8-bit exponent in excess-127 representation)
- Double precision: 64 bits (52-bit mantissa, 11-bit exponent in excess-1023 representation)
Both have an implied base of 2 and a fractional mantissa with an implied binary point at the immediate left.
Layout: Sign (1 bit) | Exponent (8 or 11 bits) | Mantissa (23 or 52 bits)

IEEE notation
Represent 1259.125 (decimal) in single precision.
Step 1: Convert the decimal number to binary format.
1259 = 10011101011 (binary); fractional part 0.125 = 0.001 (binary).
Binary number = 10011101011 + 0.001 = 10011101011.001
Step 2: Normalize the number.
10011101011.001 = 1.0011101011001 x 2^10
Step 3: Fill in the single-precision fields.
For the given number, S = 0, E = 10 and M = 0011101011001 (padded with 0s to 23 bits).
The bias for the single-precision format is 127, so E' = E + 127 = 10 + 127 = 137 = 10001001 (binary).
Number in single-precision format: 0 10001001 0011101011001….0
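The worked example can be verified with Python's standard `struct` module, which exposes the actual IEEE single-precision bit pattern (the helper name `float_bits` is an illustrative choice):

```python
import struct

def float_bits(x):
    """Return the IEEE single-precision fields of x: sign bit,
    excess-127 exponent field E', and the 23 stored mantissa bits
    (the hidden leading 1 is not stored)."""
    (w,) = struct.unpack('>I', struct.pack('>f', x))  # 32-bit pattern
    sign = w >> 31
    exp = (w >> 23) & 0xFF          # E' = E + 127
    mant = w & 0x7FFFFF
    return sign, exp, format(mant, '023b')

print(float_bits(1259.125))
# (0, 137, '00111010110010000000000')
```

The output matches Step 3: sign 0, exponent field 137 (10001001), and the mantissa 0011101011001 padded to 23 bits.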

Peculiarities of IEEE notation
Floating-point numbers have to be represented in normalized form to maximize the use of the available mantissa digits. In a base-2 representation, this implies that the MSB of the mantissa is always 1. If every number is normalized, we can do away with storing this MSB. IEEE notation assumes that all numbers are normalized, so the leading 1 of the mantissa is implied (hidden) and not stored; the bit actually stored at the most significant position of the mantissa field is the bit that follows the hidden 1, and may be 0 or 1. The values of the numbers represented in the IEEE single-precision notation are of the form:
(+,-) 1.M x 2^(E - 127)
The hidden 1 forms the integer part of the mantissa. Note that excess-127 and excess-1023 (not excess-128 or excess-1024) are used to represent the exponent.

Exponent field
In the IEEE representation, the exponent is in excess-127 (excess-1023) notation. The actual exponents represented are:
-126 <= E <= 127  and  -1022 <= E <= 1023
not
-127 <= E <= 128  and  -1023 <= E <= 1024
This is because IEEE reserves the exponents -127 and 128 (and -1023 and 1024), that is, the stored field values 0 and 255, to represent special conditions:
- Exact zero
- Infinity

Floating point arithmetic
Addition: 3.1415 x 10^8 + 1.19 x 10^6 = 3.1415 x 10^8 + 0.0119 x 10^8 = 3.1534 x 10^8
Multiplication: 3.1415 x 10^8 x 1.19 x 10^6 = (3.1415 x 1.19) x 10^(8+6)
Division: 3.1415 x 10^8 / 1.19 x 10^6 = (3.1415 / 1.19) x 10^(8-6)
Biased exponent problem: if a true exponent e is represented in excess-p notation, that is, as e+p, consider what happens under multiplication:
a·10^(x+p) * b·10^(y+p) = (a·b)·10^((x+p)+(y+p)) = (a·b)·10^(x+y+2p)
Representing the result in excess-p notation implies that the exponent field should be x+y+p; instead it is x+y+2p. The bias must therefore be handled (subtracted) in floating-point arithmetic.

Floating point arithmetic: ADD/SUB rule
1. Choose the number with the smaller exponent and shift its mantissa right until the exponents of both numbers are equal.
2. Add or subtract the mantissas.
3. Determine the sign of the result.
4. Normalize the result if necessary and truncate/round to the number of mantissa bits.
Note: This does not consider the possibility of overflow/underflow.
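The rule above can be sketched on (mantissa, exponent) pairs; this toy base-10 model mirrors the decimal examples from the previous slide and, as the note says, ignores overflow/underflow (normalization and rounding are also omitted):

```python
def fp_add(a, b):
    """Add two (mantissa, exponent) pairs following the ADD rule (base-10 toy model)."""
    (ma, ea), (mb, eb) = a, b
    if ea < eb:                        # make a the number with the larger exponent
        (ma, ea), (mb, eb) = (mb, eb), (ma, ea)
    mb /= 10 ** (ea - eb)              # shift the smaller number's mantissa right
    return ma + mb, ea                 # normalization/rounding omitted in this sketch

m, e = fp_add((3.1415, 8), (1.19, 6))
print(m, e)  # approximately 3.1534, 8
```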

Floating point arithmetic: MUL rule
1. Add the exponents and subtract the bias.
2. Multiply the mantissas and determine the sign of the result.
3. Normalize the result if necessary.
4. Truncate/round the mantissa of the result.

Floating point arithmetic: DIV rule
1. Subtract the exponents and add the bias.
2. Divide the mantissas and determine the sign of the result.
3. Normalize the result if necessary.
4. Truncate/round the mantissa of the result.
Note: Multiplication and division do not require alignment of the mantissas the way addition and subtraction do.
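The exponent handling of the MUL and DIV rules can be sketched together on (mantissa, biased exponent) pairs (a toy model; normalization and rounding are omitted, and the names are ours):

```python
BIAS = 127

def fp_mul(a, b):
    """MUL rule: add the biased exponents, subtract the bias, multiply the mantissas."""
    (ma, ea), (mb, eb) = a, b
    return ma * mb, ea + eb - BIAS

def fp_div(a, b):
    """DIV rule: subtract the biased exponents, add the bias, divide the mantissas."""
    (ma, ea), (mb, eb) = a, b
    return ma / mb, ea - eb + BIAS

# 12.0 = 1.5 x 2^3 (biased exponent 130), 2.5 = 1.25 x 2^1 (biased exponent 128)
print(fp_mul((1.5, 130), (1.25, 128)))  # (1.875, 131): 1.875 x 2^4 = 30.0
print(fp_div((1.5, 130), (1.25, 128)))  # about (1.2, 129): 1.2 x 2^2 = 4.8
```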

Guard bits
While adding two floating point numbers with 24-bit mantissas, we shift the mantissa of the number with the smaller exponent to the right until the two exponents are equalized. Bits of precision may thus be shifted out of the mantissa and lost. To prevent this, floating point operations are implemented keeping guard bits, that is, extra bits of precision at the least significant end of the mantissa. The arithmetic on the mantissas is performed with these extra bits of precision. After an arithmetic operation, the guarded mantissas are:
- Normalized (if necessary)
- Converted back, by a process called truncation/rounding, to a 24-bit mantissa.

Truncation/rounding
Straight chopping: The guard bits (excess bits of precision) are dropped.
Von Neumann rounding: If the guard bits are all 0, they are dropped. However, if any guard bit is 1, the LSB of the retained bits is set to 1.
Rounding: If the MSB of the guard bits is 1, then 1 is added to the LSB of the retained bits.
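The three methods can be sketched on integer bit patterns, with the retained bits and the guard bits kept separately (the function names are ours):

```python
def chop(retained: int, guard: int, n_guard: int) -> int:
    """Straight chopping: simply drop the guard bits."""
    return retained

def von_neumann(retained: int, guard: int, n_guard: int) -> int:
    """If any guard bit is 1, force the LSB of the retained bits to 1."""
    return retained | 1 if guard != 0 else retained

def round_nearest(retained: int, guard: int, n_guard: int) -> int:
    """If the MSB of the guard bits is 1, add 1 to the LSB of the retained bits."""
    return retained + ((guard >> (n_guard - 1)) & 1)

# retained bits 0110 with guard bits 101:
print(bin(chop(0b0110, 0b101, 3)))           # 0b110  (unchanged)
print(bin(von_neumann(0b0110, 0b101, 3)))    # 0b111  (LSB forced to 1)
print(bin(round_nearest(0b0110, 0b101, 3)))  # 0b111  (guard MSB is 1, so add 1)
```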

Rounding
Rounding is evidently the most accurate truncation method. However, rounding requires an addition operation, and it may require renormalization if the addition de-normalizes the truncated number. IEEE uses the rounding method.
Example: 0.111111 100000 rounds to 0.111111 + 0.000001 = 1.000000, which must be renormalized to 0.100000 (with the exponent incremented by 1).

Implementing Floating-Point Operations-1
Floating-point operations can be implemented in hardware, using logic circuitry, or by software routines. In either case, the computer must be able to convert input and output from and to the user's decimal representation of numbers. In many general-purpose processors, floating-point operations are available at the machine-instruction level, implemented in hardware.

Implementing Floating-Point Operations-2
Step 1: The exponents are compared using an 8-bit subtractor. The sign of the difference is sent to the SWAP unit, which decides which number's mantissa is sent to the SHIFTER unit.
Step 2: The exponent of the result is selected by a two-way multiplexer, depending on the sign bit from Step 1 (the larger exponent is chosen).
Step 3: Control logic determines whether the mantissas are to be added or subtracted, depending on the signs of the operands. Many combinations are possible here, depending on the sign bits and exponent values of the operands.
Step 4: The result is normalized: leading zeros are removed by shifting left and decrementing the exponent. In the special case where two 1.xxxx operands produce a result of the form 1x.xxx, the mantissa is instead shifted right one position (a shift of X = -1) and the exponent is incremented.

Implementing Floating-Point Operations-3
Example: Add the single precision floating point numbers A and B, where A = 44900000H and B = 42A00000H.
Solution
Step 1: Represent the numbers in single precision format.
A = 0 1000 1001 0010000...0
B = 0 1000 0101 0100000...0
Exponent of A = 1000 1001 = 137; actual exponent = 137 - 127 (bias) = 10
Exponent of B = 1000 0101 = 133; actual exponent = 133 - 127 (bias) = 6
The difference is 4, so the mantissa of B, with its hidden 1 restored, is shifted right by 4 bits.
Step 2: Shift the mantissa.
Mantissa of B with hidden bit = 1.0100000...0; shifted right by 4 bits = 0.0001010...0
Step 3: Add the mantissas.
Mantissa of A (with hidden bit) = 1.0010000...0
Shifted mantissa of B = 0.0001010...0
Mantissa of result = 1.0011010...0
The result is already normalized, so its stored fraction is 0011010...0. As both numbers are positive, the sign of the result is positive.
Result = 0 1000 1001 0011010 0...0 = 449A0000H (i.e., 1152 + 80 = 1232)
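The addition of A and B can be checked mechanically with Python's struct module (the helper names are ours):

```python
import struct

def to_float(word: int) -> float:
    """Interpret a 32-bit word as an IEEE-754 single."""
    return struct.unpack(">f", struct.pack(">I", word))[0]

def to_word(x: float) -> int:
    """Return the 32-bit single-precision encoding of x."""
    return struct.unpack(">I", struct.pack(">f", x))[0]

a = to_float(0x44900000)  # 1152.0 = 1.001 x 2^10
b = to_float(0x42A00000)  # 80.0   = 1.01  x 2^6
print(f"{to_word(a + b):08X}")  # 449A0000, i.e. 1232.0
```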