
Computer Organization and Design
Information Encoding II
Montek Singh
Wed, Aug 29, 2012
Lecture 3

Today’s topics
- Positive and negative numbers
- Non-integers

Review: Sign-Magnitude Representation
What about signed numbers?
- One obvious idea: use an extra bit to encode the sign.
- Convention: the most significant (leftmost) bit is used for the sign.
- This is called the SIGN-MAGNITUDE representation.

Review: Sign-Magnitude Representation
- The good: easy to negate and to find the absolute value.
- The bad:
  - add/subtract is complicated: it depends on the signs, giving 4 different cases!
  - there are two different ways of representing a 0 (+0 and -0)
  - it is not used that frequently in practice, except in floating-point numbers

Alternative: 2’s Complement Representation
- The 2’s complement representation for signed integers is the most commonly used signed-integer representation. It is a simple modification of unsigned integers: the most significant bit (the "sign bit") carries a negative weight of -2^(N-1), while the remaining bits keep their usual positive weights 2^(N-2), ..., 2^1, 2^0 down to the "binary" point.
- Range with N bits: -2^(N-1) to 2^(N-1) - 1.
- 8-bit 2’s complement example: 11010110 = -2^7 + 2^6 + 2^4 + 2^2 + 2^1 = -128 + 64 + 16 + 4 + 2 = -42

Why 2’s Complement?
- Benefit: the same binary addition (mod 2^N) procedure works for adding positive and negative numbers.
  - We don't need separate subtraction rules!
  - The same procedure also handles unsigned numbers! (A worked example is sketched in the code below.)
- When using a sign-magnitude representation, adding a negative value really means subtracting a positive value. In 2’s complement, however, adding is adding regardless of sign. In fact, you NEVER need to subtract when you use a 2’s complement representation.
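As a concrete illustration, here is a minimal C sketch (my own, not code from the slides; the constants 0x37 and 0xF6 are arbitrary) showing that a single 8-bit addition mod 2^8 gives the right answer under both the unsigned and the 2’s complement reading:

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* One 8-bit addition, truncated mod 2^8, serves both interpretations:
       only the way we *read* the operands and the result differs. */
    uint8_t a = 0x37;                 /* 55 unsigned, or +55 in 2's complement */
    uint8_t b = 0xF6;                 /* 246 unsigned, or -10 in 2's complement */
    uint8_t sum = (uint8_t)(a + b);   /* 0x2D */

    printf("unsigned: %u + %u = %u\n", (unsigned)a, (unsigned)b, (unsigned)sum);
    /* prints: 55 + 246 = 45   (301 mod 256) */
    printf("signed:   %d + %d = %d\n", (int8_t)a, (int8_t)b, (int8_t)sum);
    /* prints: 55 + -10 = 45 */
    return 0;
}
```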

2’s Complement
How to negate a number?
- First complement every bit (i.e., 1 -> 0, 0 -> 1), then add 1.
- Example: 20 = 00010100; complementing gives 11101011, and adding 1 gives 11101100 = -20.
- Example: -20 = 11101100; complementing gives 00010011, and adding 1 gives 00010100 = 20.
- Why does this work? You figure it out! Hint: 1111...1111 (all ones).
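A small C sketch of "complement every bit, then add 1" (my own illustration, not code from the slides; it assumes 8-bit int8_t arithmetic):

```c
#include <stdio.h>
#include <stdint.h>

/* Negate by "complement every bit, then add 1".
   It works because x + ~x = 1111...1 = -1, so ~x + 1 = -x.
   (The one exception is the most negative value, which has no positive twin.) */
static int8_t negate(int8_t x) {
    return (int8_t)(~x + 1);
}

int main(void) {
    printf("%d -> %d\n", 20, negate(20));     /*  20 -> -20 (00010100 -> 11101100) */
    printf("%d -> %d\n", -20, negate(-20));   /* -20 ->  20 (11101100 -> 00010100) */
    return 0;
}
```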

2’s Complement: Sign-Extension
- Suppose you have an 8-bit number that needs to be "extended" to 16 bits.
  - Why? Maybe because we are adding it to a 16-bit number...
- The rule: replicate the sign bit into the new upper bits.
- Examples:
  - 42 = 0010 1010 in 8 bits; the 16-bit version of 42 = 0000 0000 0010 1010
  - -2 = 1111 1110 in 8 bits; the 16-bit version of -2 = 1111 1111 1111 1110
- Why does this work? Same hint: 1111...1111
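A C sketch of sign extension from 8 to 16 bits by replicating the sign bit (my own illustration, not from the slides; the helper name sign_extend8 is made up):

```c
#include <stdio.h>
#include <stdint.h>

/* Sign-extend an 8-bit value to 16 bits by replicating its sign bit. */
static int16_t sign_extend8(uint8_t x) {
    if (x & 0x80)
        return (int16_t)(0xFF00u | x);   /* sign bit set: fill the upper byte with 1s */
    return (int16_t)x;                   /* sign bit clear: fill the upper byte with 0s */
}

int main(void) {
    printf("42: 0x%02X -> 0x%04X\n", 0x2Au, (unsigned)(uint16_t)sign_extend8(0x2A));
    /* 42: 0x2A -> 0x002A */
    printf("-2: 0x%02X -> 0x%04X\n", 0xFEu, (unsigned)(uint16_t)sign_extend8(0xFE));
    /* -2: 0xFE -> 0xFFFE */
    return 0;
}
```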

CLASS EXERCISE: 10’s-Complement Arithmetic (You'll never need to borrow again)
- Step 1) Write down two 3-digit numbers that you want to subtract.
- Step 2) Form the 9’s complement of each digit in the second number (the subtrahend).
  Helpful table of the 9’s complement for each digit: 0->9, 1->8, 2->7, 3->6, 4->5, 5->4, 6->3, 7->2, 8->1, 9->0
- Step 3) Add 1 to it (the subtrahend).
- Step 4) Add this number to the first. What did you get? Why weren't you taught to subtract this way?
- Step 5) If your result was less than 1000, form the 9’s complement again, add 1, and remember your result is negative; else subtract 1000.
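For readers who prefer code to pencil and paper, here is a small C sketch of the same recipe (my own illustration, not from the slides; the 3-digit inputs 523 and 187 are arbitrary):

```c
#include <stdio.h>

/* 3-digit decimal subtraction done the "10's complement" way:
   a - b  ==  a + (999 - b) + 1, adjusting for the extra 1000. */
static int subtract_10s_complement(int a, int b) {
    int nines = 999 - b;          /* 9's complement of each digit of b */
    int sum   = a + nines + 1;    /* add the 10's complement of b */
    if (sum >= 1000)
        return sum - 1000;        /* a >= b: drop the carry out of the top digit */
    return -((999 - sum) + 1);    /* a < b: re-complement; the result is negative */
}

int main(void) {
    printf("%d\n", subtract_10s_complement(523, 187));  /*  336 */
    printf("%d\n", subtract_10s_complement(187, 523));  /* -336 */
    return 0;
}
```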

Tutorial on Base Conversion (positive integers)
- Binary to decimal: multiply each bit by its positional power of 2, then add the terms together: v = b_0*2^0 + b_1*2^1 + b_2*2^2 + ...
- Decimal to binary: given v, find the bits b_i (the inverse problem).
  - Hint: expand the series v = b_0 + 2*b_1 + 4*b_2 + 8*b_3 + ...
  - Observe: every term is even except the first; this determines b_0. Then divide both sides by 2 and repeat.

Tutorial on Base Conversion (positive integers)
- Decimal to binary: given v, find the bits b_i (the inverse problem).
- Algorithm:
  - Repeat:
    - divide v by 2
    - the remainder becomes the next bit, b_i
    - the quotient becomes the next v
  - Until v equals 0
- Note: the same algorithm applies to other number bases; just replace divide-by-2 with divide-by-n for base n.
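A minimal C sketch of the repeated-division algorithm (my own, not from the slides); replacing the 2s with n would give base-n conversion, as the note says:

```c
#include <stdio.h>

/* Decimal to binary for a non-negative integer, by repeated division by 2:
   each remainder is the next bit, least significant first. */
static void print_binary(unsigned v) {
    char bits[32];
    int n = 0;
    do {
        bits[n++] = (char)('0' + v % 2);  /* remainder becomes the next bit */
        v /= 2;                           /* quotient becomes the next v */
    } while (v != 0);
    while (n > 0)
        putchar(bits[--n]);               /* print most significant bit first */
    putchar('\n');
}

int main(void) {
    print_binary(42);   /* 101010 */
    print_binary(214);  /* 11010110 */
    return 0;
}
```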

Non-Integral Numbers
How about non-integers? Two approaches:
- fixed-point representation
- floating-point representation

Fixed-Point Representation
- Set a definite position for the "binary" point:
  - everything to its left is the integral part of the number
  - everything to its right is the fractional part of the number
- Example: 1101.0110 = 2^3 + 2^2 + 2^0 + 2^-2 + 2^-3 = 8 + 4 + 1 + 0.25 + 0.125 = 13.375
- Or, equivalently, ignore the point and scale: 1101.0110 = 11010110 * 2^-4 = 214 * 2^-4 = 214/16 = 13.375
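A quick C illustration of the "ignore the point and scale by 2^-4" view, using the same 1101.0110 pattern (my own sketch, not code from the slides):

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* Fixed point with 4 fractional bits: the stored integer is the real
       value scaled by 2^4 = 16. 1101.0110 is stored as 11010110 = 214. */
    uint8_t raw = 0xD6;                        /* 1101.0110 */
    double value = raw / 16.0;                 /* 214 / 16 = 13.375 */
    printf("%u / 16 = %g\n", (unsigned)raw, value);
    return 0;
}
```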

Fixed-Point Base Conversion
- Binary to decimal: multiply each bit by its positional power of 2, just that the powers of 2 are now negative. With m fractional bits, the fractional value is b_-1*2^-1 + b_-2*2^-2 + ... + b_-m*2^-m.
- Examples:
  - 0.1 = 1/2 = 0.5
  - 0.0011 = 1/8 + 1/16 = 0.1875
  - 0.001100110011 = 1/8 + 1/16 + 1/128 + 1/256 + 1/2048 + 1/4096 = 0.19995... (getting close to 0.2)
  - 0.00110011... (repeats) = 0.2

Fixed-Point Base Conversion
- Decimal to binary: given a fractional value v, find the bits b_i (the inverse problem).
  - Hint: this time, try multiplying by 2: the whole-number part of the result is b_-1, and the remaining fractional part is the rest.
- Algorithm:
  - Repeat:
    - multiply v by 2
    - the whole part becomes the next bit, b_i
    - the remaining fractional part becomes the next v
  - Until (v equals 0) or (the desired accuracy is achieved)
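A small C sketch of the multiply-by-2 algorithm (my own illustration, not from the slides; the helper name print_fraction_binary and the 12-bit limit are arbitrary choices):

```c
#include <stdio.h>

/* Convert the fractional part of a decimal number to binary by repeated
   doubling: each whole-number part (0 or 1) is the next bit after the point. */
static void print_fraction_binary(double frac, int max_bits) {
    printf("0.");
    for (int i = 0; i < max_bits && frac != 0.0; i++) {
        frac *= 2.0;                 /* multiply v by 2 */
        int bit = (int)frac;         /* whole part becomes the next bit */
        putchar('0' + bit);
        frac -= bit;                 /* remaining fraction becomes the next v */
    }
    putchar('\n');
}

int main(void) {
    print_fraction_binary(0.5, 12);     /* 0.1 */
    print_fraction_binary(0.1875, 12);  /* 0.0011 */
    print_fraction_binary(0.2, 12);     /* 0.001100110011 (start of a repeating pattern) */
    return 0;
}
```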

Repeated Binary Fractions
- Not all fractions have a finite representation, e.g., in decimal, 1/3 = 0.3333... (unending).
- In binary, many of the fractions you are used to have an infinite representation! Examples:
  - 1/10 = 0.1 (decimal) = 0.000110011... (binary) = 0.1999... (hex)
  - 1/5 = 0.2 (decimal) = 0.00110011... (binary) = 0.333... (hex)
- Question:
  - In decimal, when do fractions repeat? When the denominator has a prime factor other than 2 or 5.
  - In binary, when do fractions repeat? When the denominator has a prime factor other than 2, i.e., when the denominator is anything other than a power of 2.
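Because 1/10 has no finite binary expansion, even floating-point hardware stores only an approximation of 0.1. A short C illustration (my own, not from the slides; the printed value is what a typical IEEE 754 double-precision implementation produces):

```c
#include <stdio.h>

int main(void) {
    /* The double closest to 0.1 is only an approximation, so adding it
       ten times does not give exactly 1. */
    double sum = 0.0;
    for (int i = 0; i < 10; i++)
        sum += 0.1;
    printf("%.17g\n", sum);      /* typically prints 0.99999999999999989 */
    printf("%d\n", sum == 1.0);  /* 0 (false) */
    return 0;
}
```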

Signed Fixed-Point Numbers
How do you incorporate a sign?
- Use a sign-magnitude representation: an extra bit (leftmost) stores the sign, just as in negative integers.
- Or use 2’s complement: the leftmost bit has a negative coefficient.
  - Example: 1101.0110 = -2^3 + 2^2 + 2^0 + 2^-2 + 2^-3 = -8 + 4 + 1 + 0.25 + 0.125 = -2.625
  - OR: first ignore the binary point, use 2’s complement, then put the point back: 1101.0110 = (-42) * 2^-4 = -42/16 = -2.625

Bias Notation
- Idea: add a large number to everything, to make everything look positive! You must subtract this "bias" from every representation to recover the value. This representation is called "Bias Notation".
- Example (bias = 127): 1101 0110 read as an unsigned number is 13*16 + 6*1 = 214; subtracting the bias gives 214 - 127 = 87.
- Why? Monotonicity: the encoded bit patterns, read as unsigned integers, are ordered the same way as the values they represent, so simple unsigned comparisons work.
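A tiny C sketch of bias-127 encoding and decoding (my own illustration, not from the slides):

```c
#include <stdio.h>

/* Bias-127 notation (as used for IEEE 754 single-precision exponents):
   store (value + 127) as an unsigned code, subtract 127 to read it back. */
static unsigned encode_bias127(int value)   { return (unsigned)(value + 127); }
static int      decode_bias127(unsigned c)  { return (int)c - 127; }

int main(void) {
    /* Codes are ordered the same way as the values they represent
       (monotonicity), so hardware can compare them as plain unsigned ints. */
    printf("%u %u %u\n", encode_bias127(-1), encode_bias127(0), encode_bias127(6));
    /* 126 127 133 */
    printf("%d\n", decode_bias127(133));  /* 6 */
    return 0;
}
```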

Floating-Point Representation
- Another way to represent numbers is to use a notation similar to scientific notation. This format can represent numbers with fractions, very small numbers (e.g., 1.60 x 10^-19), and very large numbers (e.g., 6.02 x 10^23).
- This notation uses two fields to represent each number. The first part represents a normalized fraction (called the significand), and the second part represents the exponent (i.e., the position of the "floating" binary point).
- The exponent determines the "dynamic range"; the normalized fraction determines the "bits of accuracy".

IEEE 754 Floating-Point Formats
- Single-precision format: 1 sign bit, 8 exponent bits, 23 significand bits.
- Double-precision format: 1 sign bit, 11 exponent bits, 52 significand bits.
- Significand: this is effectively a sign-magnitude fixed-point number with a "hidden" 1. The 1 is hidden because it provides no information after the number is "normalized".
- Exponent: the exponent is represented in bias-127 notation (bias 1023 for double precision). Why? Think back to the monotonicity argument for bias notation.
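To see the three fields concretely, here is a short C sketch (my own, not from the slides) that decomposes the single-precision bit pattern of -2.625, the signed fixed-point example from earlier:

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void) {
    /* Pull apart a single-precision float: 1 sign bit, 8 exponent bits
       (bias 127), 23 significand bits with a hidden leading 1. */
    float f = -2.625f;                 /* -10.101 binary = -1.0101 * 2^1 */
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);    /* reinterpret the bit pattern */

    uint32_t sign     = bits >> 31;
    uint32_t exponent = (bits >> 23) & 0xFF;
    uint32_t fraction = bits & 0x7FFFFF;

    printf("sign=%u exponent=%u (unbiased %d) fraction=0x%06X\n",
           (unsigned)sign, (unsigned)exponent,
           (int)exponent - 127, (unsigned)fraction);
    /* prints: sign=1 exponent=128 (unbiased 1) fraction=0x280000 */
    return 0;
}
```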

Summary
- Selecting the encoding of information has important implications for how this information can be processed and how much space it requires.
- Computer arithmetic is constrained by finite representations; this has advantages (it allows for complement arithmetic) and disadvantages (it allows for overflow: numbers too big or too small to be represented).
- Bit patterns can be interpreted in an endless number of ways; however, important standards do exist:
  - two’s complement
  - IEEE 754 floating point