Speeding Up Floating-Point Division With In-lined Iterative Algorithms
Robert Enenkel, Allan Martin
IBM® Toronto Lab
© Copyright IBM Corp. 2005

Outline
- Hardware floating-point division
- The case for software division
- Software division algorithms
- Special cases/tradeoffs
- Performance results
- Automatic generation
Hardware Division
PPC fdiv, fdivs
Advantages
- accurate (correctly rounded)
- handles exceptional cases (Inf, NaN)
- lower latency than SW
Disadvantages
- occupies the FPU completely
- inhibits parallelism
Alternatives to HW Division
Vector libraries
- MASS
- higher overhead, greater speedup
In-lined software division
- low overhead, medium speedup
Rationale for Software Division
Write a SW division algorithm in terms of HW arithmetic instructions
- Newton's method or Taylor series
Latency will be higher than HW division
But... SW instructions can be interleaved, so throughput may be better
Requires enough independent instructions to interleave
- a loop of divisions
- other work
Newton's Method
To find x such that f(x) = 0:
- initial guess x0
- x_{n+1} = x_n - f(x_n)/f'(x_n), n = 0, 1, 2, ...
Provided x0 is close enough:
- x_n converges to x
- convergence is quadratic: |x_{n+1} - x| < c|x_n - x|^2
- the number of bits of accuracy doubles with each iteration
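The update rule above can be sketched in a few lines. The following toy example (illustrative Python, not from the slides; the `newton` helper is hypothetical) applies Newton's method to f(x) = x^2 - 2 to compute sqrt(2):

```python
# Illustrative sketch of Newton's method: solve f(x) = x^2 - 2 = 0,
# i.e. compute sqrt(2). Quadratic convergence means the number of
# correct bits roughly doubles with each iteration.
def newton(f, fprime, x0, iterations):
    x = x0
    for _ in range(iterations):
        x = x - f(x) / fprime(x)  # x_{n+1} = x_n - f(x_n)/f'(x_n)
    return x

root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=1.5, iterations=5)
print(abs(root - 2.0 ** 0.5))  # error near machine precision after 5 steps
```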
Newton's Method (figure)
Newton Iteration for Division
For 1/b, let f(x) = 1/x - b
For a/b, use a*(1/b), or f(x) = a/x - b
Algorithm for 1/b:
- x0 ~ 1/b (initial guess)
- e0 = 1 - b*x0
- x1 = x0 + e0*x0
- e1 = e0*e0
- x2 = x1 + e1*x1
- etc.
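The iteration above can be sketched directly in Python (illustrative only, not the compiler-generated code; the `newton_recip` name is hypothetical, and the 2^-8-accurate seed stands in for the hardware FRE estimate):

```python
# Sketch of the slide's Newton iteration for 1/b. Note that
# e_{n+1} = e_n^2, so the relative error squares each iteration.
def newton_recip(b, iterations=3):
    x = (1.0 / b) * (1.0 + 2.0 ** -8)  # stand-in for the FRE HW estimate
    e = 1.0 - b * x                     # e0 = 1 - b*x0
    for _ in range(iterations):
        x = x + e * x                   # x_{n+1} = x_n + e_n*x_n
        e = e * e                       # e_{n+1} = e_n * e_n
    return x

print(abs(newton_recip(3.0) * 3.0 - 1.0))  # residual near machine epsilon
```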
How Many Iterations Are Needed?
Power5 reciprocal estimate instructions
- FRES (single precision), FRE (double precision)
- |relative error| <= 2^(-8)
Floating-point precision
- single: 24 bits
- double: 53 bits
Newton iterations
- error: 2^(-16), 2^(-32), 2^(-64), 2^(-128)
- single: 2 iterations for 1 ulp
- double: 3 iterations for 1 ulp
- +1 iteration for correct rounding (0.5 ulp)
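The doubling of accuracy can be observed numerically. This sketch (illustrative Python; the `accuracy_per_iteration` helper and the synthetic 2^-8-accurate seed are not from the slides) reports the approximate number of correct bits after each iteration:

```python
import math

# Sketch: starting from a 2^-8-accurate estimate of 1/b, the number of
# correct bits roughly doubles per Newton iteration (8 -> 16 -> 32 -> ...),
# until it saturates near the 53-bit double-precision significand.
def accuracy_per_iteration(b, iterations=3):
    x = (1.0 / b) * (1.0 - 2.0 ** -8)  # stand-in for the FRE estimate
    e = 1.0 - b * x
    bits = []
    for _ in range(iterations + 1):
        err = max(abs(x * b - 1.0), 2.0 ** -60)  # clamp to avoid log2(0)
        bits.append(-math.log2(err))
        x = x + e * x
        e = e * e
    return bits

print([round(v) for v in accuracy_per_iteration(7.0)])
```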
Taylor Series for Reciprocal
x0 ~ 1/b (initial guess)
e = 1 - b*x0
1/b = x0/(b*x0) = x0*(1/(1-e)) = x0*(1 + e + e^2 + e^3 + e^4 + ...)
Algorithm (6 terms):
- e = 1 - b*x0
- t1 = 0.5 + e*e
- q1 = x0 + x0*e
- t2 = 0.75 + t1*t1
- t3 = q1*e
- q2 = x0 + t2*t3
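A sketch of this scheme in Python (illustrative only; the `taylor_recip` name and the 2^-8-accurate seed are assumptions). The constants are not typos: t1*t1 = 0.25 + e^2 + e^4, so t2 = 0.75 + t1*t1 = 1 + e^2 + e^4, and the final step folds the whole truncated series into one multiply-add:

```python
# Sketch of the slide's FMA-friendly Taylor-series reciprocal.
# q2 = x0 + (1 + e^2 + e^4) * (x0*(1+e)) * e
#    = x0 * (1 + e + e^2 + e^3 + e^4 + e^5 + e^6),
# a truncation of x0/(1-e) = 1/b.
def taylor_recip(b):
    x0 = (1.0 / b) * (1.0 - 2.0 ** -8)  # stand-in for the FRE estimate
    e = 1.0 - b * x0
    t1 = 0.5 + e * e
    q1 = x0 + x0 * e
    t2 = 0.75 + t1 * t1   # = 1 + e^2 + e^4
    t3 = q1 * e           # = x0*(1+e)*e
    return x0 + t2 * t3

print(abs(taylor_recip(3.0) * 3.0 - 1.0))  # truncation error ~ e^7 = 2^-56
```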
Speed/Accuracy Tradeoff
IBM compilers have -qstrict/-qnostrict
- -qstrict: SW result should match HW division exactly
- -qnostrict: SW result may be slightly less accurate, in exchange for speed
Exceptions
Even when a/b is representable...
1/b may underflow
- a ~ b ~ huge, a/b ~ 1, 1/b denormalized
- causes loss of accuracy
1/b may overflow
- a, b denormalized, a/b ~ 1, 1/b = Inf
- causes the SW algorithm to produce NaN
Handle with tests in the algorithm
- use HW divide for exceptional cases
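The checking idea might be sketched like this (illustrative Python; the `checked_div` name and the exponent thresholds are hypothetical, not IBM's actual cutoffs, and the plain `1.0 / b` reciprocal stands in for the iterative SW algorithm):

```python
import math

# Sketch of the "checking" strategy: when b's exponent is extreme enough
# that 1/b would underflow or overflow, fall back to the HW divide;
# otherwise take the fast reciprocal-based path. E.g. for denormal b,
# 1/b is Inf, so a reciprocal-based divide would return Inf or NaN.
def checked_div(a, b):
    exp = math.frexp(b)[1]           # binary exponent of b
    if exp < -1000 or exp > 1000:    # 1/b near under/overflow (illustrative bounds)
        return a / b                 # exceptional case: HW divide
    return a * (1.0 / b)             # fast path: stand-in for the SW algorithm
```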
Algorithm Variations
User-callable built-in functions
- swdiv(a,b): double precision, checking
- swdivs(a,b): single precision, checking
- swdiv_nochk(a,b): double precision, non-checking
- swdivs_nochk(a,b): single precision, non-checking
Accuracy of swdiv, swdiv_nochk depends on -qstrict/-qnostrict
The _nochk versions are faster but have argument restrictions
Accuracy and Performance

                          Power5 speedup  Power4 speedup  Max error (ulps)
swdivs                    1.07            1.05            0.5
swdivs_nochk              1.46            1.28            0.5
swdiv (-qstrict)          1.05                            0.5
swdiv (-qnostrict)        1.50                            1.5
swdiv_nochk (-qstrict)    1.51                            0.5
swdiv_nochk (-qnostrict)  1.77                            1.5
Automatic Generation of Software Division
The swdivs and swdiv algorithms can also be generated automatically by the compiler
The compiler can detect situations where throughput is more important than latency
Automatic Generation of Software Division
In straight-line code, we use a heuristic that estimates how much floating-point work can be executed in parallel
- independent instructions are good, especially other divides
- dependent instructions are bad (they increase latency)
Automatic Generation of Software Division
In modulo-scheduled loops, software-divide code can be pipelined, interleaving multiple iterations
Divides are expanded if the divide does not appear in a recurrence (cyclic data dependence)
Summary
Software divide algorithms
- user callable
- compiler generated
Loops of divides
- up to 1.77x speedup
UMT2K benchmark
- 1.19x speedup