
1
Binary-Level Tools for Floating-Point Correctness Analysis
Michael Lam, LLNL Summer Intern 2011
Bronis de Supinski, Mentor

2
Background
Floating-point represents real numbers as ± significand × 2^exponent:
o Sign bit
o Exponent
o Significand (mantissa or fraction)
Floating-point numbers have finite precision:
o Single-precision: 24 significand bits (~7 decimal digits)
o Double-precision: 53 significand bits (~16 decimal digits)
[Bitfield diagrams: IEEE single = 1 sign + 8 exponent + 23 significand bits; IEEE double = 1 sign + 11 exponent + 52 significand bits]

3
Example
π ≈ 3.141592…
[Single- and double-precision bit patterns; images courtesy of BinaryConvert.com]

4
Example
1/10 = 0.1
[Single- and double-precision bit patterns; images courtesy of BinaryConvert.com]

5
Motivation
Finite precision causes round-off error:
o Compromises ill-conditioned calculations
o Hard to detect and diagnose
Increasingly important as HPC scales:
o Need to balance speed (singles) and accuracy (doubles)
o Double-precision may still fail on long-running computations

6
Previous Solutions
Analytical (Wilkinson et al.):
o Requires numerical analysis expertise
o Conservative static error bounds are largely unhelpful
Ad hoc:
o Run experiments at different precisions
o Increase precision where necessary
o Tedious and time-consuming

7
Our Approach
Run a Dyninst-based mutator:
o Find floating-point instructions
o Insert new code or a call to a shared library
Run the instrumented program:
o Analysis augments/replaces the original program
o Results are stored in a log file
View the output with a GUI

8
Advantages
Automated (vs. manual):
o Minimizes developer effort
o Ensures consistency and correctness
Binary-level (vs. source-level):
o Includes shared libraries without source code
o Includes compiler optimizations
Runtime (vs. compile time):
o Captures dataset and communication sensitivity

9
Previous Work
Cancellation detection:
o Logs numerical cancellation of binary digits
Alternate-precision analysis:
o Simulates re-compiling with a different precision

10
Summer Contributions
Cancellation detection:
o Improved support for multi-core analysis
Overflow detection:
o New tool for logging integer overflow
o Possibilities for extension and incorporation into floating-point analysis
Alternate-precision analysis:
o New in-place analysis
o Much-improved performance and robustness

11
Cancellation
Loss of significant digits during subtraction operations. Cancellation is a symptom, not the root problem: it indicates that a loss of information has occurred that may cause problems later.

  1.613647 (7 digits)     1.613647 (7 digits)       1.6136473
- 1.613635 (7 digits)   - 1.613647 (7 digits)     - 1.6136467
  0.000012 (2 digits)     0.000000 (0 digits)       0.0000006
(5 digits cancelled)    (all digits cancelled)

12
Cancellation Detector
Instrument every addition and subtraction:
o Simple exponent-based test for cancellation
o Log the results to an output file

13
Contributions
Better support for multi-core:
o Log to multiple files
o Future work: exploring GUI aggregation schemes
Ran experiments on AMG2006

14
Contributions
New proof-of-concept tool:
o Instruments all instructions that set OF (the overflow flag)
o Logs the instruction pointer to output
o Works on integer instructions
o Introduces ~10x overhead
Future work:
o Pruning false positives
o Overflow/underflow detection on floating-point instructions
o NaN/Inf detection on floating-point instructions

15
Alternate-precision Analysis
Previous approach:
o Replace floating-point values with a pointer
o Shadow values allocated on the heap
Disadvantages:
o Major change in program semantics (copying vs. aliasing)
o Lots of pointer-related bugs
o Required function calls and use of a garbage collector
o Large performance impact (>200-300x)
o Increased memory usage (>1.5x)

16
Contributions
New shadow-value analysis scheme:
o Narrowed focus: doubles → singles
o In-place downcast conversion (no heap allocations)
o Flag in the high bits to indicate replacement
[Bitfield diagram: a double; a replaced double carrying 0x7FF4DEAD (a non-signalling NaN pattern) in its high 32 bits with the single stored in the low 32 bits; a single]

17
Contributions
Simpler analysis:
o Instrument instructions with double-precision operands
o Check and replace operands
o Replace double-precision opcodes
o Fix up flags if necessary
Streamlined instrumentation:
o Insert binary blobs of optimized machine code
o Pre-generated by a mini-assembler inside the mutator
o Avoids the overhead of added function calls
o No memory overhead

18
Example
gvec[i,j] = gvec[i,j] * lvec[3] + gvar

1. movsd 0x601e38(%rax,%rbx,8), %xmm0
2. mulsd -0x78(%rsp), %xmm0
3. addsd -0x4f02(%rip), %xmm0
4. movsd %xmm0, 0x601e38(%rax,%rbx,8)

19
Example
gvec[i,j] = gvec[i,j] * lvec[3] + gvar

1. movsd 0x601e38(%rax,%rbx,8), %xmm0
   check/replace -0x78(%rsp) and %xmm0
2. mulss -0x78(%rsp), %xmm0
   check/replace -0x4f02(%rip) and %xmm0
3. addss -0x20dd43(%rip), %xmm0
4. movsd %xmm0, 0x601e38(%rax,%rbx,8)

20
Challenges
Currently handled:
o %rip- and %rsp-relative addressing
o %rflags preservation
o Math functions from libm
o Bitwise operations (AND/OR/XOR/BTC)
o Size and type conversions
o Compiler optimization levels
o Packed instructions
[Diagram: downcast conversion of both packed IEEE doubles in a 128-bit XMM register into flagged (0x7FF4DEAD) singles]

21
Challenges
Future work:
o 80-bit long double precision
o 16-bit IEEE half-precision
o 128-bit IEEE quad-precision
o Width-dependent random number generation
o Non-gcc compilers
o Arcane floating-point hacks, e.g.:
  Sqrt: (i >> 1) + (1 << 29) - (1 << 22)
  Fast InvSqrt: 0x5f3759df - (val >> 1)

22
Results
Runs correctly on Sequoia kernels and other examples, with manageable overhead on real code:
AMGmk: 4x
CrystalMk: 4x
IRSmk: 7x
UMTmk: 3x
LULESH: 4x
Future work: more optimization; run on full benchmarks

23
Conclusion
Cancellation detection:
o Improved support for multi-core analysis
Overflow detection:
o New tool for logging integer overflow
o Possibilities for extension and incorporation into floating-point analysis
Alternate-precision analysis:
o New in-place analysis
o Much-improved performance and robustness

24
Future Goals
Selective analysis:
o Data-centric (variables or matrices)
o Control-centric (basic blocks or functions)
Analysis search space:
o Minimize precision
o Maximize accuracy
Goal: a tool for automated floating-point precision analysis and recommendation

25
Acknowledgements
Jeff Hollingsworth, University of Maryland (Advisor)
Bronis de Supinski, LLNL (Mentor)
Tony Baylis, LLNL (Supervisor)
Barry Rountree, LLNL
Matt Legendre, LLNL
Greg Lee, LLNL
Dong Ahn, LLNL
Thank you!

26
Bitfield Templates
[Bitfield diagrams: IEEE double and single layouts, and the XMM-register downcast conversion with the 0x7FF4DEAD flag]
