Data Representation
15-213: Introduction to Computer Systems
Recitation 2: Monday, Jan. 29th, 2015
Dhruven Shah

Welcome to Recitation

Recitation is a place for interaction. If you have questions, please ask. If you want to go over an example not planned for recitation, let me know.

We'll cover:
- A quick recap of topics from class, especially ones we have found students struggled with in the past
- Example problems to reinforce those topics and prepare for exams
- Tips and questions for labs

News

- Access to Autolab and Blackboard
- C boot camp slides available on the website
- Office hours in GHC 5208, every day 5:30-8:00pm
- Data lab due June 1, 11:59pm

Agenda

- How do I Data Lab?
- Integers
  - Biasing division
  - Endianness
- Floating point
  - Binary fractions
  - IEEE standard
  - Example problem

How do I Data Lab?

Step 1: Download lab files
- All lab files are on Autolab
- Remember to also read the lab handout ("view writeup" link)

Step 2: Work on the right machines
- Remember to do all your lab work on Andrew or Shark machines
- Some later labs will restrict you to just the Shark machines (bomb lab, for example)
- This includes untarring the handout; otherwise, you may lose some permission bits
- If you get a permission denied error, try "chmod +x filename"

How do I Data Lab?

Step 3: Edit and test
- bits.c is the file you're looking for
- Remember you have 3 ways to test your solutions; see the writeup for details
- driver.pl runs the same tests as Autolab

Step 4: Submit
- Unlimited submissions, but please don't use Autolab in place of driver.pl
- Must submit via web form
- To package/download files to your computer, use "tar -cvzf out.tar.gz in1 in2 …" and your favorite file transfer protocol

How do I Data Lab? Tips

- Write C like it's 1989
  - Declare variables at the top of the function
  - Make sure the closing brace ("}") is in the 1st column
  - We won't be using the dlc compiler for later labs
- Be careful of operator precedence
  - Do you know what order ~a+1+b*c<<3*2 will execute in? Neither do I. Use parentheses: (~a)+1+(b*(c<<3)*2)
- Take advantage of special operators and values like !, 0, and Tmin
  - Reducing ops once you're under the threshold won't get you extra points
- Watch out for undefined behavior
  - Like shifting a 32-bit int by 32 or more. See Anita's rant.

Anita's Rant

From the Intel x86 Reference: "These instructions shift the bits in the first operand (destination operand) to the left or right by the number of bits specified in the second operand (count operand). Bits shifted beyond the destination operand boundary are first shifted into the CF flag, then discarded. At the end of the shift operation, the CF flag contains the last bit shifted out of the destination operand. The destination operand can be a register or a memory location. The count operand can be an immediate value or register CL. The count is masked to five bits, which limits the count range to 0 to 31. A special opcode encoding is provided for a count of 1."

Integers - Biasing

- Can multiply/divide by powers of 2 with shifts
  - Multiply: left shift by k to multiply by 2^k
  - Divide: right shift by k to divide by 2^k ... for positive numbers
- Shifting rounds towards -inf, but C division rounds towards 0
- Solution: add a bias before shifting when the value is negative (see the sketch below)
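Here is the biasing trick written out as a small C sketch (the function name and test values are mine, not from the lab):

    #include <stdio.h>

    /* Divide x by 2^k, rounding toward zero, using only shifts/masks/adds.
     * Assumes 32-bit int and arithmetic right shift (true on the Shark machines).
     * For negative x, add the bias (2^k - 1) before shifting so the right shift,
     * which rounds toward -infinity, ends up rounding toward 0 instead. */
    int div_pow2(int x, int k) {
        int bias = (x >> 31) & ((1 << k) - 1);  /* (2^k - 1) if x < 0, else 0 */
        return (x + bias) >> k;
    }

    int main(void) {
        printf("%d\n", div_pow2(-33, 4));  /* -2, same as -33 / 16 in C */
        printf("%d\n", div_pow2( 33, 4));  /*  2, same as  33 / 16 in C */
        return 0;
    }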

Integers – Endianness

- Endianness describes the order in which the bytes of a multi-byte value are stored in memory
- You won't need to work with this until bomb lab
- Big endian: the first byte (lowest address) is the most significant
  - This is how we typically write binary numbers
- Little endian: the first byte (lowest address) is the least significant
  - Intel x86 (shark/andrew linux machines) implements this
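A quick way to check this for yourself on a Shark/Andrew machine; this snippet is just an illustration, not part of any lab:

    #include <stdio.h>

    int main(void) {
        unsigned int x = 0x01234567;
        unsigned char *p = (unsigned char *)&x;   /* view the bytes in memory order */
        for (unsigned i = 0; i < sizeof(x); i++)
            printf("byte %u: %02x\n", i, p[i]);
        /* Little endian (x86): 67 45 23 01 -- least significant byte first.
         * Big endian:          01 23 45 67 -- most significant byte first. */
        return 0;
    }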

Floating Point – Fractions in Binary

Representation: a bit pattern b_i b_{i-1} ... b_1 b_0 . b_{-1} b_{-2} ... b_{-j} has place weights ..., 4, 2, 1 to the left of the "binary point" and 1/2, 1/4, 1/8, ..., 2^{-j} to the right. Bits to the right of the binary point represent fractional powers of 2, so the pattern represents the rational number

    \sum_{k=-j}^{i} b_k \times 2^k
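For example, reading off the weights: 101.11_2 = 1*2^2 + 0*2^1 + 1*2^0 + 1*2^-1 + 1*2^-2 = 4 + 1 + 0.5 + 0.25 = 5.75.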

Floating Point – IEEE Standard

Each format splits the bits into a sign field s, an exponent field exp, and a fraction field frac:
- Single precision: 32 bits = 1 sign bit + 8 exponent bits + 23 fraction bits
- Double precision: 64 bits = 1 sign bit + 11 exponent bits + 52 fraction bits
- Extended precision (Intel only): 80 bits = 1 sign bit + 15 exponent bits + 63 or 64 fraction bits
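As a rough sketch of what those fields look like in practice, here is one way to pull s/exp/frac out of a 32-bit float in C (the variable names are mine):

    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>

    int main(void) {
        float f = -6.25f;                       /* -1.1001_2 x 2^2 */
        uint32_t bits;
        memcpy(&bits, &f, sizeof bits);         /* reinterpret the 32 bits */

        unsigned s    = bits >> 31;             /* 1 sign bit       */
        unsigned exp  = (bits >> 23) & 0xFF;    /* 8 exponent bits  */
        unsigned frac = bits & 0x7FFFFF;        /* 23 fraction bits */

        /* prints: s=1 exp=129 (E=2) frac=0x480000 */
        printf("s=%u exp=%u (E=%d) frac=0x%06x\n", s, exp, (int)exp - 127, frac);
        return 0;
    }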

Floating Point – IEEE Standard

What does this mean?
- We can think of floating point as binary scientific notation
- IEEE format includes a few optimizations to increase range for our given number of bits
- The number represented is essentially sign * frac * 2^E (there are a few steps left out there)

Example: Assume our floating point format has no sign bit, k = 3 exponent bits, and n = 2 fraction bits. What does 10010 represent?

Floating Point – IEEE Standard

What does this mean?
- We can think of floating point as binary scientific notation
- IEEE format includes a few optimizations to increase range for our given number of bits
- The number represented is essentially sign * frac * 2^E (there are a few steps left out there)

Example: Assume our floating point format has no sign bit, k = 3 exponent bits, and n = 2 fraction bits. What does 10010 represent? Answer: 3.
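Working it out: split 10010 into exp = 100_2 = 4 and frac = 10_2. Bias = 2^(3-1) - 1 = 3, so E = exp - Bias = 1. With the implied leading 1, M = 1.10_2 = 1.5, and the value is 1.5 * 2^1 = 3.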

Floating Point – IEEE Standard

Bias
- exp is unsigned; a bias is needed to represent negative exponents
- Bias = 2^(k-1) - 1, where k is the number of exponent bits
- Can also be thought of as the bit pattern 0b011…111
- When converting a fraction/int to float, assume normalized until proven otherwise

Normalized:    0 < exp < 2^k - 1; implied leading 1; E = exp - Bias; denser near the origin; represents large numbers
Denormalized:  exp = 0; leading 0; E = 1 - Bias (why?); evenly spaced; represents small numbers
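One way to see why E = 1 - Bias for denormalized values: with k = 3 exponent bits and n = 2 fraction bits (Bias = 3), the largest denormalized value is 0.11_2 * 2^(1-3) = 3/16 and the smallest normalized value is 1.00_2 * 2^(1-3) = 4/16, so the two ranges meet with no gap. Defining denormalized E as -Bias instead would leave a jump between them.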

Floating Point – IEEE Standard

Special cases (exp = all ones, i.e. exp = 2^k - 1)
- Infinity
  - Result of an overflow during a calculation, or division by 0
  - exp = 2^k - 1, frac = 0
- Not a Number (NaN)
  - Result of an illegal operation (sqrt(-1), inf - inf, inf * 0)
  - exp = 2^k - 1, frac != 0
- Keep in mind these special cases are not the same
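A small demo of how these special values show up in C (just an illustration; isinf/isnan come from <math.h>):

    #include <stdio.h>
    #include <math.h>

    int main(void) {
        double pos_inf = 1.0 / 0.0;     /* overflow / division by zero -> +inf */
        double not_num = 0.0 / 0.0;     /* illegal operation           -> NaN  */

        printf("%f isinf=%d\n", pos_inf, isinf(pos_inf));   /* inf, 1 */
        printf("%f isnan=%d\n", not_num, isnan(not_num));   /* nan, 1 */
        printf("inf - inf = %f\n", pos_inf - pos_inf);      /* also NaN */
        return 0;
    }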

Floating Point – IEEE Standard

Round to even
- Why? Avoid the statistical bias of always rounding up or down on a half.
- How? Like this (rounding to 2 fraction bits):

  1.0100_2 -> 1.01_2   (exactly representable; truncate)
  1.0101_2 -> 1.01_2   (below half; round down)
  1.0110_2 -> 1.10_2   (interesting case: halfway, round to even)
  1.0111_2 -> 1.10_2   (above half; round up)
  1.1000_2 -> 1.10_2   (exactly representable; truncate)
  1.1001_2 -> 1.10_2   (below half; round down)
  1.1010_2 -> 1.10_2   (interesting case: halfway, round to even)
  1.1011_2 -> 1.11_2   (above half; round up)
  1.1100_2 -> 1.11_2   (exactly representable; truncate)

Rounding

1.BBGRXXX
- Guard bit (G): LSB of the result
- Round bit (R): 1st bit removed
- Sticky bit (S): OR of the remaining bits

Round-up conditions:
- Round = 1, Sticky = 1  ->  more than 0.5 above, round up
- Guard = 1, Round = 1, Sticky = 0  ->  exactly halfway, round to even

Value   Fraction    GRS   Incr?   Rounded
128     1.0000000   000   N       1.000
15      1.1010000   100   N       1.101
17      1.0001000   010   N       1.000
19      1.0011000   110   Y       1.010
138     1.0001010   011   Y       1.001
63      1.1111100   111   Y       10.000

This slide is stolen from the end of the floating point notes. It may be helpful if you just want case-by-case rules (a small C sketch follows).
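The guard/round/sticky rules can be written out directly; here is a minimal sketch in C (names are mine), checked against the 19 and 138 rows of the table:

    #include <stdio.h>

    /* Round an unsigned fixed-point value x to nearest, ties to even,
     * discarding the low `drop` bits (drop >= 1). */
    unsigned round_even(unsigned x, int drop) {
        unsigned result = x >> drop;
        unsigned guard  = result & 1;                           /* LSB of the kept bits */
        unsigned round  = (x >> (drop - 1)) & 1;                /* first bit dropped    */
        unsigned sticky = (x & ((1u << (drop - 1)) - 1)) != 0;  /* OR of the rest       */

        if ((round && sticky) || (guard && round && !sticky))   /* > half, or tie-to-even */
            result += 1;
        return result;
    }

    int main(void) {
        /* 1.0011000 (pattern 10011000 = 0x98), keep 3 fraction bits -> 1.010 */
        printf("%u\n", round_even(0x98, 4));   /* prints 10 = 1010 in binary */
        /* 1.0001010 (pattern 10001010 = 0x8A) -> 1.001 (sticky forces round up) */
        printf("%u\n", round_even(0x8A, 4));   /* prints 9 = 1001 in binary */
        return 0;
    }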

Floating Point – Example

Consider the following 5-bit floating point representation based on the IEEE floating point format. This format does not have a sign bit; it can only represent nonnegative numbers. There are k = 3 exponent bits (bits 4-2) and n = 2 fraction bits (bits 1-0).

What is the…
- Bias?
- Largest denormalized number?
- Smallest normalized number?
- Largest finite number it can represent?
- Smallest non-zero value it can represent?

Floating Point – Example

Consider the following 5-bit floating point representation based on the IEEE floating point format. This format does not have a sign bit; it can only represent nonnegative numbers. There are k = 3 exponent bits (bits 4-2) and n = 2 fraction bits (bits 1-0).

What is the…
- Bias? 011_2 = 3
- Largest denormalized number? 000 11_2 = 0.0011_2 = 3/16
- Smallest normalized number? 001 00_2 = 0.0100_2 = 1/4
- Largest finite number it can represent? 110 11_2 = 1110.0_2 = 14
- Smallest non-zero value it can represent? 000 01_2 = 0.0001_2 = 1/16
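As a check on the largest finite value: bits 110 11 give exp = 6, so E = 6 - 3 = 3 and M = 1.11_2 = 1.75, hence 1.75 * 2^3 = 14. (exp = 111 is excluded because it is reserved for infinity and NaN.)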

Floating Point – Example

For the same problem, complete the following table:

Value   Floating Point Bits   Rounded Value
9/32
8
9
        000 10
19

Floating Point – Example

For the same problem, complete the following table:

Value   Floating Point Bits   Rounded Value
9/32    001 00                1/4
8       110 00                8
9       110 00                8
1/8     000 10                1/8
19      111 00                inf
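Two of the trickier rows worked out: 9/32 = 0.01001_2 = 1.001_2 * 2^-2, so exp = -2 + 3 = 1 = 001; the fraction 1.001 is exactly halfway between 1.00 and 1.01, so round-to-even gives 1.00 and the rounded value is 1.00_2 * 2^-2 = 1/4. For 19 = 1.0011_2 * 2^4, the exponent field would have to be 4 + 3 = 7 = 111, which is reserved for infinity/NaN, so 19 rounds to +inf.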

Floating Point Recap

- Floating point value = (-1)^s * M * 2^E
- MSB is the sign bit s
- Bias = 2^(k-1) - 1 (k is the number of exponent bits)
- Normalized (larger numbers, denser towards 0)
  - exp ≠ 000…0 and exp ≠ 111…1
  - M = 1.frac
  - E = exp - Bias
- Denormalized (smaller numbers, evenly spread)
  - exp = 000…0
  - M = 0.frac
  - E = 1 - Bias
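The recap boils down to a few lines of code; here is a sketch of a decoder for the small no-sign-bit format used in the examples (k exponent bits, n fraction bits; the function name is mine):

    #include <stdio.h>
    #include <math.h>

    /* Decode a tiny unsigned float: k exponent bits, n fraction bits, no sign bit. */
    double tiny_float(unsigned bits, int k, int n) {
        unsigned exp  = (bits >> n) & ((1u << k) - 1);
        unsigned frac = bits & ((1u << n) - 1);
        int bias = (1 << (k - 1)) - 1;

        if (exp == (1u << k) - 1)                  /* special case: inf or NaN */
            return frac ? NAN : INFINITY;
        if (exp == 0)                              /* denormalized: M = 0.frac, E = 1 - bias */
            return ldexp((double)frac / (1 << n), 1 - bias);
        return ldexp(1.0 + (double)frac / (1 << n), (int)exp - bias);  /* normalized */
    }

    int main(void) {
        printf("%g\n", tiny_float(0x12, 3, 2));  /* 100 10 -> 3      */
        printf("%g\n", tiny_float(0x1B, 3, 2));  /* 110 11 -> 14     */
        printf("%g\n", tiny_float(0x01, 3, 2));  /* 000 01 -> 0.0625 */
        return 0;
    }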

Floating Point Recap

Special cases:
- +/- Infinity: exp = 111…1 and frac = 000…0
- +/- NaN: exp = 111…1 and frac ≠ 000…0
- +0: s = 0, exp = 000…0, frac = 000…0
- -0: s = 1, exp = 000…0, frac = 000…0
- Round towards even when exactly halfway (choose the result whose frac LSB is 0)

Questions?