Peephole Optimization & Other Post-Compilation Techniques. COMP 512, Rice University, Houston, Texas, Fall 2003. Copyright 2003, Keith D. Cooper & Linda Torczon.



Presentation transcript:

1 Peephole Optimization & Other Post-Compilation Techniques COMP 512 Rice University Houston, Texas Fall 2003 Copyright 2003, Keith D. Cooper & Linda Torczon, all rights reserved. Students enrolled in Comp 512 at Rice University have explicit permission to make copies of these materials for their personal use.

2 The Problem
After compilation, the code still has some flaws:
Scheduling & allocation really are NP-complete
The optimizer may not implement every needed transformation
Curing the problem:
More work on scheduling and allocation
Implement more optimizations
— or —
Optimize after compilation:
Peephole optimization
Link-time optimization
COMP 512, Fall 2003

3 Peephole Optimization
The Basic Idea
Discover local improvements by looking at a window on the code
A tiny window is good enough — a peephole
Slide the peephole over the code and examine the contents
Pattern match with a limited set of patterns
Examples:
storeAI r1 ⇒ r0,8 followed by loadAI r0,8 ⇒ r15 rewrites to i2i r1 ⇒ r15
addI r2,0 ⇒ r7 followed by mult r4,r7 ⇒ r10 rewrites to mult r4,r2 ⇒ r10
jumpI → l10 with l10: jumpI → l11 rewrites to jumpI → l11 (less likely as a local sequence, but might happen)
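The rewrites above can be modeled as a tiny pattern matcher over instruction tuples. This is an illustrative Python sketch, not the course's implementation; the tuple encoding and the two patterns shown are my assumptions. Note the sketch keeps the store in the store/load pair, since the stored value may still be needed in memory.

```python
def peephole(code):
    """Slide a two-op window over `code`, rewriting known patterns.

    Illustrative encoding: ("storeAI", src, base, offset),
    ("loadAI", base, offset, dst), ("addI", src, const, dst),
    ("mult", a, b, dst), ("i2i", src, dst).
    """
    out, i = [], 0
    while i < len(code):
        a = code[i]
        b = code[i + 1] if i + 1 < len(code) else None
        # store followed by a load of the same address: the load becomes a copy
        if (b and a[0] == "storeAI" and b[0] == "loadAI"
                and a[2:] == b[1:3]):
            out.append(a)                        # keep the store
            out.append(("i2i", a[1], b[3]))      # replace the load with a copy
            i += 2
        # addI rx,0 => ry feeding a mult: use rx directly
        elif (b and a[0] == "addI" and a[2] == 0 and b[0] == "mult"
                and a[3] in b[1:3]):
            lhs, rhs = (a[1] if r == a[3] else r for r in b[1:3])
            out.append(("mult", lhs, rhs, b[3]))
            i += 2
        else:
            out.append(a)
            i += 1
    return out
```

The jump-to-jump rewrite is analogous but needs label information in the window.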

4 Peephole Optimization
Early Peephole Optimizers (McKeeman)
Used a limited set of hand-coded patterns
Matched with exhaustive search
Small window, small pattern set ⇒ quick execution (window of 2 to 5 ops)
They proved effective at cleaning up the rough edges
Code generation is inherently local
Boundaries between local regions are trouble spots
Improvements in code generation, optimization, & architecture should have let these fade into obscurity
Much better allocation & scheduling today than in 1965
But we have much more complex architectures

5 Peephole Optimization
Modern Peephole Optimizers (Davidson, Fraser)
Larger, more complex ISAs ⇒ larger pattern sets
This has produced a more systematic approach:
Expander (ASM→LLIR), then Simplifier (LLIR→LLIR), then Matcher (LLIR→ASM)
Expander
Operation-by-operation expansion into LLIR
Needs no context
Captures the full effect of an operation
Example: ASM add r1,r2 ⇒ r4 expands to LLIR
r4 ← r1 + r2
cc ← f(r1 + r2)
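The expander's context-free, op-at-a-time behavior can be sketched as a table-driven function; the effect encoding and the two-entry table are illustrative assumptions, not Davidson and Fraser's code.

```python
def expand(op):
    """Expand one ASM operation into the LLIR effects it causes.

    No context is needed: each op is handled in isolation, and every
    effect, including the condition code, is made explicit."""
    kind, a, b, dst = op
    if kind == "add":
        return [(dst, "+", a, b),        # r4 <- r1 + r2
                ("cc", "f+", a, b)]      # cc <- f(r1 + r2), an implicit effect
    if kind == "mult":
        return [(dst, "*", a, b),
                ("cc", "f*", a, b)]
    raise ValueError(f"no expansion for {kind!r}")

def expand_block(asm):
    """Because expansion is context-free, a block is just the concatenation
    of the per-op expansions."""
    return [effect for op in asm for effect in expand(op)]
```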

6 Peephole Optimization
Modern Peephole Optimizers (Davidson, Fraser)
Simplifier
Single pass over the LLIR, moving the peephole
Forward substitution, algebraic simplification, constant folding, & eliminating useless effects (must know what is dead)
Eliminate as many LLIR operations as possible

7 Peephole Optimization
Modern Peephole Optimizers (Davidson, Fraser)
Matcher
Starts with the reduced LLIR program
Compares LLIR from the peephole against the pattern library
Selects one or more ASM patterns that “cover” the LLIR
Must capture all of its effects
May create new useless effects (e.g., setting the cc)

8 Finding Dead Effects
The simplifier must know what is useless (i.e., dead)
Expander works in a context-independent fashion
It can process the operations in any order
Use a backward walk and compute local LIVE information
Tag each operation with a list of useless values
What about non-local effects?
Most useless effects are local — DEF & USE in same block
It can be conservative & assume LIVE until proven dead
Example:
ASM
mult r5,r9 ⇒ r12
add r12,r17 ⇒ r13
expands to LLIR
r12 ← r5 × r9
cc ← f(r5 × r9)
r13 ← r12 + r17
cc ← f(r12 + r17)
which simplifies and matches to
madd r5,r9,r17 ⇒ r13
The cc effects, if not known to be dead, would prevent the multiply-add from matching
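The backward LIVE walk can be sketched as below. The effect encoding and the conservative live-out set are my assumptions; on the multiply-add example, both cc effects come out dead, which is exactly what lets madd match.

```python
def tag_dead_effects(block, live_out):
    """Backward walk over LLIR effects (dst, op, *srcs).

    An effect whose destination is not live below it is useless (dead).
    Conservative: everything in live_out is assumed live on block exit."""
    live = set(live_out)
    dead = [False] * len(block)
    for i in range(len(block) - 1, -1, -1):
        dst, _op, *srcs = block[i]
        if dst in live:
            live.discard(dst)        # this def satisfies the later uses
        else:
            dead[i] = True           # result never used below: dead effect
        live.update(srcs)            # operands are live above this point
    return dead
```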

9 Peephole Optimization
Can use it to perform instruction selection
The key issue in selection is effective pattern matching
Using a peephole system for instruction selection:
Have the front end generate LLIR directly
Eliminates the need for the Expander
Keep the Simplifier and Matcher
Add a simple register assigner, follow with real allocation
This basic scheme is used in GCC

10 Peephole-Based Selection
Basic structure of compilers like GCC:
Front End (Source→LLIR), Optimizer (LLIR→LLIR), Simplifier (LLIR→LLIR), Matcher (LLIR→ASM), Allocator (ASM→ASM)
Uses RTL as its IR (very low level, a register-transfer language)
Numerous optimization passes
Quick translation into RTL limits what the optimizer can do ...
Matcher generated from a spec (hard-coded tree-pattern matcher)

11 An Example
Original code: w ← x - 2 × y
Translated into the compiler’s IR and run through the Expander, it becomes the LLIR:
r10 ← 2
r11 ← @y
r12 ← r0 + r11
r13 ← M(r12)
r14 ← r10 × r13
r15 ← @x
r16 ← r0 + r15
r17 ← M(r16)
r18 ← r17 - r14
r19 ← @w
r20 ← r0 + r19
M(r20) ← r18

12 Simplification with a three-operation window
Original code:
r10 ← 2
r11 ← @y
r12 ← r0 + r11
r13 ← M(r12)
r14 ← r10 × r13
r15 ← @x
r16 ← r0 + r15
r17 ← M(r16)
r18 ← r17 - r14
r19 ← @w
r20 ← r0 + r19
M(r20) ← r18
Successive window contents as the peephole slides:
1. r10 ← 2;  r11 ← @y;  r12 ← r0 + r11
2. r10 ← 2;  r12 ← r0 + @y;  r13 ← M(r12)
3. r10 ← 2;  r13 ← M(r0 + @y);  r14 ← r10 × r13
4. r13 ← M(r0 + @y);  r14 ← 2 × r13;  r15 ← @x
5. r14 ← 2 × r13;  r16 ← r0 + @x;  r17 ← M(r16)
6. r14 ← 2 × r13;  r17 ← M(r0 + @x);  r18 ← r17 - r14
7. r17 ← M(r0 + @x);  r18 ← r17 - r14;  r19 ← @w
8. r18 ← r17 - r14;  r20 ← r0 + @w;  M(r20) ← r18
9. r18 ← r17 - r14;  M(r0 + @w) ← r18
No further improvement is found

13 Example, Continued
Simplification shrinks the code significantly.
Original (12 operations):
r10 ← @y
r11 ← r0 + r10
r12 ← M(r11)
r13 ← 2
r14 ← r12 × r13
r15 ← @x
r16 ← r0 + r15
r17 ← M(r16)
r18 ← r17 - r14
r19 ← @w
r20 ← r0 + r19
M(r20) ← r18
After Simplify (5 operations):
r13 ← M(r0 + @y)
r14 ← 2 × r13
r17 ← M(r0 + @x)
r18 ← r17 - r14
M(r0 + @w) ← r18
Takes 5 operations instead of 12
Uses 4 registers instead of 11
After Match:
loadAI r0, @y ⇒ r13
multI r13, 2 ⇒ r14
loadAI r0, @x ⇒ r17
sub r17, r14 ⇒ r18
storeAI r18 ⇒ r0, @w
and we’re done ...

14 Other Considerations
Control-flow operations
Can clear the simplifier’s window at a branch or label
More aggressive approach: combine across branches
Must account for effects on all paths
Not clear that this pays off ...
The same considerations arise with predication
Physical versus logical windows
Can run the optimizer over a logical window: k operations connected by DEF-USE chains
The expander can link DEFs & USEs
Logical windows (within a block) improve effectiveness
Davidson & Fraser report 30% faster & 20% fewer ops with a local logical window.
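Building the def-use links that a logical window follows can be sketched as below (the tuple encoding is an assumption). In the test, the constant's definition and its use land in the same logical window even though an unrelated load sits between them physically.

```python
def def_use_pairs(block):
    """Link each operand use to the op that defined it inside the block.

    A logical window then walks these (def, use) pairs instead of
    physically adjacent slots."""
    last_def = {}            # name -> index of its most recent definition
    pairs = []
    for i, (dst, _op, *srcs) in enumerate(block):
        for s in srcs:
            if s in last_def:
                pairs.append((last_def[s], i))   # (defining op, using op)
        last_def[dst] = i
    return pairs
```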

15 Peephole Optimization
So, ... peephole optimization remains viable
Post-allocation improvements
Cleans up rough edges
Peephole technology works for selection
Description-driven matchers (hard-coded or LR(1))
Used in several important systems
Simplification pays off late in the process
Low-level substitution, identities, folding, & dead effects
All of this will work equally well in binary-to-binary translation

16 Other Post-Compilation Techniques
What else makes sense to do after compilation?
Profile-guided code positioning: allocation intact, schedule intact
Cross-jumping: allocation intact, schedule changed
Hoisting (harder): changes allocation & schedule, needs data-flow analysis
Procedure abstraction: changes allocation & schedule, really needs an allocator
Register scavenging: changes allocation & schedule, purely local transformation
Bit-transition reduction: schedule & allocation intact, assignment changed
Harder problems:
1. Rescheduling
2. Operation placement
3. Vectorization

17 Register Scavenging
Simple idea:
Global allocation does a good job on the big-picture items
Leaves behind blocks where some registers are unused
Let’s scavenge those unused registers
Compute LIVE information
Walk each block to find underallocated regions
Find spilled local subranges
Opportunistically promote them to registers
A note of realism: opportunities exist, but this is a 1% to 2% improvement
T.J. Harvey, Reducing the Impact of Spill Code, MS Thesis, Rice University, May 1998
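The LIVE-driven search for an underallocated region can be sketched as below; the op encoding and register count are illustrative, not Harvey's implementation. A spilled subrange that fits entirely inside the reported region can be promoted into the idle register.

```python
def underallocated_points(block, live_out, num_regs):
    """Backward liveness walk over (dst, opcode, *srcs) ops.

    Returns the indices where fewer than num_regs values are live,
    i.e. places where a spilled value could be scavenged into an
    otherwise-idle register."""
    live = set(live_out)
    free_at = []
    for i in range(len(block) - 1, -1, -1):
        dst, _op, *srcs = block[i]
        live.discard(dst)                 # killed here, free above this op
        live.update(srcs)                 # operands live above this point
        if len(live) < num_regs:
            free_at.append(i)
    return free_at[::-1]
```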

18 Bit-transition Reduction
Inter-operation bit transitions relate to power consumption
A large fraction of CMOS power is spent switching states
The same op on the same functional unit costs less power, all other things being equal
Simple idea:
Reassign registers to minimize inter-operation bit transitions
Build some sort of weighted graph
Use a greedy algorithm to pick names by distance
Should reduce power consumption in the fetch & decode hardware
Waterman’s MS thesis (in preparation)
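One way to realize "pick names by distance" is a greedy pass like the sketch below. It is my illustration under simple assumptions (fetch order as the cost model, Hamming distance on register numbers), not Waterman's algorithm.

```python
def transitions(a, b):
    """Bit transitions between two register numbers' binary encodings."""
    return bin(a ^ b).count("1")

def greedy_assign(uses, num_regs):
    """Map each virtual name, in fetch order, to the free physical
    register number closest in Hamming distance to the previously
    fetched operand's number."""
    mapping, free, prev = {}, list(range(num_regs)), 0
    for v in uses:
        if v not in mapping:
            best = min(free, key=lambda p: transitions(prev, p))
            mapping[v] = best
            free.remove(best)
        prev = mapping[v]
    return mapping
```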

19 Bit-transition Reduction
Other transformations:
Swap operands on commutative operators
More complex than it sounds
Shoot for zero-transition pairs
Swap operations within “fetch packets”
Works for superscalar, not VLIW
Consider bit transitions in scheduling
Same ops to the same functional unit
Nearby (Hamming-distance) ops next, and so on ...
Factor bit transitions into instruction selection
Maybe use a BURS model with dynamic costs
Again, most of this fits into a post-compilation framework ...
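The operand-swap transformation can be sketched as choosing, for a commutative op, the operand order whose fields are closer in bit distance to the previous instruction's operand fields; the 4-tuple encoding with integer register numbers is an assumption for illustration.

```python
def best_operand_order(prev, op):
    """For a commutative op (opc, a, b, dst), pick the operand order whose
    fields differ from the previous op's fields by fewer bit transitions."""
    _, pa, pb, _ = prev
    opc, a, b, dst = op
    def cost(x, y):
        return bin(pa ^ x).count("1") + bin(pb ^ y).count("1")
    return op if cost(a, b) <= cost(b, a) else (opc, b, a, dst)
```

In the test below, swapping the second add's operands turns the pair into a zero-transition pair, the best case the slide aims for.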

