
1 Compiler Research in HPC Lab
R. Govindarajan
High Performance Computing Lab
govind@serc.iisc.ernet.in

2 Organization
- HPC Lab Research Overview
- Compiler Analysis & Optimizations
  - Precise Dataflow Analysis
  - Energy Reduction for Embedded Systems
    - Array Allocation for Partitioned Memory Architectures
    - Dynamic Voltage Scaling
  - Integrated Spill Code Generation & Scheduling
- Conclusions

3 HPC Team (or HPC-XI): Mruggesh Gajjar, B.C. Girish, R. Karthikeyan, R. Manikantan, Santosh Nagarakatte, Rupesh Nasre, Sreepathi Pai, Kaushik Rajan, T.S. Rajesh Kumar, V. Santhosh Kumar, Aditya Thakur. Coach: R. Govindarajan.

4 HPC Lab Research Overview
- Compiler Optimizations: traditional analysis & optimizations, power-aware compiling techniques, compilation techniques for embedded systems
- Computer Architecture: superscalar architecture, architecture-compiler interaction, application-specific processors, embedded systems
- High Performance Computing: cluster computing, HPC applications

5 Compiler Research in HPC Lab
- ILP Compilation Techniques
- Compiling Techniques for Embedded Systems
- Compiling Techniques for Application-Specific Systems
- Dataflow Analysis

6 ILP Compilation Techniques
- Instruction scheduling
- Software pipelining
- Register allocation
- Power/energy-aware compilation techniques
- Compiling techniques for embedded systems / application-specific processors (DSP, Network Processors, …)

7 Compiling Techniques for Embedded Systems
- Power-aware software pipelining (using an integer linear programming formulation)
- Simple Offset Assignment for code-size reduction
- Loop transformation and memory bank assignment for power reduction
- Compiler-assisted Dynamic Voltage Scaling
- Memory layout problem for embedded systems
- MMX code generation using vectorization

8 Compiling Techniques for Application-Specific Systems
- Framework for exploring the application design space for network applications
- Compiling techniques for streaming applications and programming models
  - Buffer-aware, schedule-size-aware, throughput-optimal schedules

9 Compiler Analysis
- Precise Dataflow Analysis
- Pointer Analysis

10 So, What is the Connection?
Compiler problems are:
- Optimization problems – solved by formulating the problem as an Integer Linear Programming (ILP) problem. This involves non-trivial effort, and an efficient formulation is needed to keep solution time down! Other evolutionary approaches can also be used.
- Graph-theoretic problems – leverage existing, well-known approaches.
- Problems modelled using automata – an elegant formulation that helps ensure correctness.

11 Precise Dataflow Analysis
The Problem: improve the precision of the data-flow analysis used in compiler optimization.

12 Constant Propagation
[Example CFG: x is assigned 1 on one path and 2 on the other; … marks statements unrelated to x or y; { } shows the data-flow information, with nc meaning "not constant". After the paths merge, the fact reaching the use of x at G is {x = nc}, so the use of x at G cannot be replaced with a constant.]
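A minimal sketch of the lattice operation behind this example, assuming the usual flat constant-propagation lattice; the names and structure are illustrative, not the implementation from the work:

  #include <stdio.h>

  /* Flat constant-propagation lattice: CP_UNDEF (no information yet),
     CP_CONST (a known constant), CP_NC (not constant). */
  typedef enum { CP_UNDEF, CP_CONST, CP_NC } CPKind;
  typedef struct { CPKind kind; int value; } CPValue;

  /* Meet of two incoming facts at a control-flow merge. */
  static CPValue meet(CPValue a, CPValue b) {
      CPValue nc = { CP_NC, 0 };
      if (a.kind == CP_UNDEF) return b;
      if (b.kind == CP_UNDEF) return a;
      if (a.kind == CP_CONST && b.kind == CP_CONST && a.value == b.value) return a;
      return nc;   /* differing constants, or anything involving NC, merge to NC */
  }

  int main(void) {
      CPValue x1 = { CP_CONST, 1 };   /* {x = 1} on one incoming path */
      CPValue x2 = { CP_CONST, 2 };   /* {x = 2} on the other path    */
      CPValue merged = meet(x1, x2);
      printf("x at the merge: %s\n", merged.kind == CP_NC ? "nc" : "constant");
      return 0;
  }

The restructuring described on the following slides avoids this very meet: by duplicating the region below the merge, the facts {x = 1} and {x = 2} are kept on separate paths, so the downstream uses of x stay constant.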

13 Overview of Our Solution
[Figure: restructured CFG. The uses of x at G1 and G2 can now be replaced with constants.]

14 Challenges
The Problem: improve the precision of data-flow analysis.
Approach: restructure the control flow of the program.
Challenges:
- Developing a generic framework
- Guaranteeing optimization opportunities
- Handling the precision vs. code-size trade-off
- Keeping the approach simple and clean

15 A brief look at our example: at the control-flow merge D, we lose precision.
[Figure: the facts {x = 1} and {x = 2} on the incoming paths merge to {x = nc}; … marks statements unrelated to x or y; nc: not constant; { }: data-flow information.]

16 [Figure highlighting part of the CFG] We need to duplicate this part in order to optimize node G…

17 …such that paths with differing data-flow information do not intersect.

18 [Figure highlighting the remaining part of the CFG] There is no need to duplicate this part.

19 Control-flow Graph = Automaton
View a control-flow graph G as a finite automaton with:
- states: the nodes
- start state: the entry node
- accepting state: the exit node
- transitions: the edges
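A minimal sketch of this view, with illustrative names and a hypothetical diamond-shaped CFG (not the paper's code): a CFG stored as an edge list doubles as an automaton whose accepted strings are exactly the paths from entry to exit.

  #include <stdio.h>

  /* A CFG viewed as an automaton: nodes are states, edges are transitions,
     the entry node is the start state and the exit node is the accepting state. */
  typedef struct { int src, dst; } Edge;

  typedef struct {
      int num_nodes;
      int entry, exit;
      const Edge *edges;
      int num_edges;
  } CFG;

  /* A node sequence is "accepted" iff it is a path from entry to exit. */
  static int accepts(const CFG *g, const int *path, int len) {
      if (len == 0 || path[0] != g->entry) return 0;
      for (int i = 0; i + 1 < len; i++) {
          int ok = 0;
          for (int e = 0; e < g->num_edges; e++)
              if (g->edges[e].src == path[i] && g->edges[e].dst == path[i + 1]) ok = 1;
          if (!ok) return 0;
      }
      return path[len - 1] == g->exit;
  }

  int main(void) {
      /* Hypothetical diamond CFG: 0 -> 1, 0 -> 2, 1 -> 3, 2 -> 3. */
      Edge edges[] = { {0,1}, {0,2}, {1,3}, {2,3} };
      CFG g = { 4, 0, 3, edges, 4 };
      int path[] = { 0, 1, 3 };
      printf("accepted: %d\n", accepts(&g, path, 3));
      return 0;
  }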

20 The Automaton
[Figure: split automaton for D, with states 0, 1, and 2 and transitions labelled by the CFG edges B-D, C-D, G-H, and G-I.]


22 CFG x Automaton = Split Graph
[Figure: the split graph is the product of the CFG with the split automaton for D (states 0, 1, 2; transitions B-D, C-D, G-H, G-I).]
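A rough, self-contained sketch of the product construction suggested by this title (the data structures, node numbering, and "remember which branch reached the merge" automaton are all illustrative stand-ins, not the actual splitter): each split-graph node is a pair (CFG node, automaton state), and an edge follows the CFG edge while the automaton moves on that edge's label. Nodes reached in more than one automaton state come out duplicated, which is exactly the splitting seen in the earlier figures.

  #include <stdio.h>
  #include <string.h>

  #define MAX_NODES 16
  #define MAX_STATES 4

  typedef struct { int src, dst; } Edge;

  /* Automaton transition on a CFG edge (illustrative convention:
     states that do not care about an edge simply stay put). */
  typedef int (*Delta)(int state, Edge e);

  /* Build the reachable part of the product CFG x Automaton.
     Each product node is the pair (cfg node, automaton state). */
  static int build_split_graph(const Edge *edges, int num_edges,
                               int entry, int start_state, Delta delta,
                               int pairs[][2], int max_pairs) {
      int count = 0, head = 0;
      int seen[MAX_NODES][MAX_STATES];
      memset(seen, 0, sizeof seen);

      pairs[count][0] = entry; pairs[count][1] = start_state;
      seen[entry][start_state] = 1; count++;

      while (head < count) {                      /* BFS over (node, state) pairs */
          int n = pairs[head][0], s = pairs[head][1]; head++;
          for (int e = 0; e < num_edges; e++) {
              if (edges[e].src != n) continue;
              int n2 = edges[e].dst, s2 = delta(s, edges[e]);
              if (!seen[n2][s2] && count < max_pairs) {
                  seen[n2][s2] = 1;
                  pairs[count][0] = n2; pairs[count][1] = s2; count++;
              }
          }
      }
      return count;   /* a CFG node appears once per automaton state it is reached in */
  }

  /* Hypothetical split automaton: remember which branch (node 1 or 2) reached the merge. */
  static int remember_branch(int state, Edge e) {
      if (e.dst == 3) return e.src;   /* edge into the merge node 3: record 1 or 2 */
      return state;
  }

  int main(void) {
      /* Hypothetical CFG: diamond 0->1, 0->2, 1->3, 2->3, then 3->4. */
      Edge edges[] = { {0,1}, {0,2}, {1,3}, {2,3}, {3,4} };
      int pairs[MAX_NODES * MAX_STATES][2];
      int n = build_split_graph(edges, 5, 0, 0, remember_branch, pairs,
                                MAX_NODES * MAX_STATES);
      for (int i = 0; i < n; i++)
          printf("split node: (cfg %d, state %d)\n", pairs[i][0], pairs[i][1]);
      return 0;
  }

Running this prints the merge node 3 and its successor 4 twice (once per automaton state), i.e. the duplication that lets each copy keep its own, more precise data-flow fact.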

23 Energy Reduction: Array Allocation for Partitioned Memory Architectures
Dynamic energy reduction in the memory subsystem:
- The memory subsystem consumes significant energy
- Many embedded applications are array intensive
- The memory architecture has multiple banks
Exploiting the various low-power modes of partitioned memory architectures:
- Put idle memory banks in low-power mode
- Allocate arrays to memory banks such that more banks can be in low-power mode for longer durations

24 Partitioned Memory Architectures
Memory banks with low-power modes: Active, Stand-by, Napping, Power-down, Disabled.
Resynchronization time – the time to move from a low-power mode back to Active mode.

  Mode        Resynch. Time (cycles)   Energy Consumed (nJ)
  Active      0                        0.718
  Standby     2                        0.468
  Napping     30                       0.0206
  Power Down  9000                     0.00875
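The table can be read as a simple trade-off: a deeper mode costs less energy per cycle but pays a resynchronization delay (charged at Active energy here) on the next access. A small illustrative calculation, assuming the nJ figures are per-cycle costs, which the slide does not state explicitly:

  #include <stdio.h>

  /* Low-power modes of a memory bank (values from the table above;
     treating the energy figure as a per-cycle cost is an assumption). */
  typedef struct {
      const char *name;
      int resync_cycles;    /* cycles to return to Active                 */
      double energy_nj;     /* energy per cycle in this mode (assumed)    */
  } BankMode;

  static const BankMode modes[] = {
      { "Active",     0,    0.718   },
      { "Standby",    2,    0.468   },
      { "Napping",    30,   0.0206  },
      { "Power Down", 9000, 0.00875 },
  };

  /* Energy of keeping a bank in mode m for `idle` cycles, then waking it. */
  static double idle_energy(const BankMode *m, long idle) {
      return idle * m->energy_nj + m->resync_cycles * modes[0].energy_nj;
  }

  int main(void) {
      long idle = 100000;   /* hypothetical idle window, in cycles */
      for (int i = 0; i < 4; i++)
          printf("%-10s : %.1f nJ for a %ld-cycle idle window\n",
                 modes[i].name, idle_energy(&modes[i], idle), idle);
      return 0;
  }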

25 Motivating Example

Example:
  float a[N], d[N];
  double b[N], c[N];
  L1: for (ia = 0; ia < N; ia++) d[ia] = a[ia] + k;
  L2: for (ia = 0; ia < N; ia++) a[ia] = b[ia] * k;
  L3: for (ia = 0; ia < N; ia++) c[ia] = d[ia] / k;
  L4: for (ia = 0; ia < N; ia++) b[ia] = c[ia] - k;
  L5: for (ia = 0; ia < N; ia++) b[ia] = d[ia] + k;

Arrays a, d ~ 1 MB each; arrays b, c ~ 2 MB each; memory bank size = 4 MB.
[Figure: Array Relation Graph over the arrays a, b, c, d; edge weights shown: N, 2N, 4N, 8N, N.]
Memory banks are active for a total of 32N cycles!

26 Motivating Example – Our Approach
Array allocation requires partitioning the ARG!
- Partition the graph such that each subgraph can be accommodated in a memory bank.
- The weight of the edges crossing between subgraphs is the cost of keeping multiple banks active together – minimize it!
- Here: arrays b and c go in one subgraph, a and d in the other.
[Figure: the Array Relation Graph above, split into the subgraphs {b, c} and {a, d}.]
Memory banks are active for a total of 23N cycles!
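A small sketch of the cost computation described here. The edge weights below are hypothetical stand-ins (the slide's exact edge labels are not recoverable from the transcript), and the capacity check is omitted; the point is only that, given an assignment of arrays to banks, the cost is the total weight of ARG edges whose endpoints land in different banks.

  #include <stdio.h>

  /* One ARG edge: two arrays accessed together, weighted by how often
     (e.g., a multiple of the trip count N). */
  typedef struct { int a, b; long weight; } ArgEdge;

  /* Cost of a partition = total weight of edges whose endpoints are
     assigned to different memory banks (both banks must then be active). */
  static long cut_cost(const ArgEdge *edges, int num_edges, const int *bank) {
      long cost = 0;
      for (int i = 0; i < num_edges; i++)
          if (bank[edges[i].a] != bank[edges[i].b])
              cost += edges[i].weight;
      return cost;
  }

  int main(void) {
      enum { A, B, C, D };            /* the four arrays of the example */
      long N = 1000;                  /* hypothetical trip count        */
      /* Hypothetical ARG: one edge per pair of arrays used in the same loop(s). */
      ArgEdge edges[] = {
          { A, D, 2 * N }, { A, B, N }, { C, D, N }, { B, C, 4 * N }, { B, D, 8 * N },
      };
      int alloc1[4] = { 0, 0, 1, 1 }; /* a,b in bank 0; c,d in bank 1   */
      int alloc2[4] = { 0, 1, 1, 0 }; /* a,d in bank 0; b,c in bank 1   */
      printf("cut cost, {a,b} vs {c,d}: %ld\n", cut_cost(edges, 5, alloc1));
      printf("cut cost, {a,d} vs {b,c}: %ld\n", cut_cost(edges, 5, alloc2));
      return 0;
  }

With these (invented) weights the {a,d} / {b,c} split has the smaller cut, matching the allocation chosen on the slide.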

27 Dynamic Voltage Scaling
- Dynamically vary the CPU frequency and supply voltage.
- Dynamic power is proportional to C * V^2 * f, where
  - C is the capacitance
  - V is the supply voltage
  - f is the operating frequency
- Processors support different voltage (and frequency) modes and can switch between them.
- AMD, Transmeta, and XScale processors provide support for DVS and have multiple operating frequencies.
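A back-of-the-envelope sketch of why lowering V and f together pays off (all numbers are illustrative, not from the slide): power scales as C * V^2 * f, but a fixed amount of work takes proportionally longer at a lower f, so the energy for a fixed number of cycles scales roughly as C * V^2 per cycle.

  #include <stdio.h>

  /* Dynamic power ~ C * V^2 * f; energy for `cycles` cycles of work is
     power * (cycles / f), i.e. roughly C * V^2 * cycles. */
  static double dynamic_power(double c, double v, double f) { return c * v * v * f; }

  static double energy_for_cycles(double c, double v, double f, double cycles) {
      return dynamic_power(c, v, f) * (cycles / f);
  }

  int main(void) {
      double c = 1e-9;        /* hypothetical switched capacitance (F) */
      double cycles = 1e9;    /* fixed amount of work, in cycles       */
      /* Two hypothetical operating points: (1.3 V, 400 MHz) vs (1.0 V, 200 MHz). */
      double e_fast = energy_for_cycles(c, 1.3, 400e6, cycles);
      double e_slow = energy_for_cycles(c, 1.0, 200e6, cycles);
      printf("energy at 400 MHz / 1.3 V: %.3f J\n", e_fast);
      printf("energy at 200 MHz / 1.0 V: %.3f J\n", e_slow);
      printf("time at 400 MHz: %.2f s, at 200 MHz: %.2f s\n",
             cycles / 400e6, cycles / 200e6);
      return 0;
  }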

28 Compiler-Assisted DVS
- Identify program regions where DVS can be performed.
- For each program region, identify the voltage (frequency) mode to operate in, such that energy is minimized.
- Ensure that performance is not degraded.

29 Motivating Example (P1–P5 are program regions)

  Freq.              P1    P2    P3    P4    P5    Total
  200   Exec. Time   151   6827  335   6827  335   14475
        Energy       82    125   39    125   39    410
  300   Exec. Time   100   4552  223   4552  223   9650
        Energy       149   163   72    163   72    619
  400   Exec. Time   76    3414  168   3414  168   7240
        Energy       198   274   176   274   176   1098
  DVS   Freq.        200   400   300   400   300   --
        Exec. Time   151   3414  223   3414  223   7425  (2% increase)
        Energy       82    274   72    274   72    778   (30% decrease)

30 DVS Problem Formulation
- The program is divided into a number of regions.
- Assign an operating frequency to each program region.
  - Constraint: only a marginal increase in the execution time of the program.
  - Objective: minimize the program's energy consumption.
- This is a Multiple-Choice Knapsack Problem (a small sketch follows).
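A minimal dynamic-programming sketch of the multiple-choice knapsack view, using the numbers from the motivating example above: each region must pick exactly one frequency, and total energy is minimized subject to a total-time budget. This is a generic textbook MCKP solver, not the formulation used in the actual work; with a budget of 7425 time units it recovers the slide's selection (200/400/300/400/300) and reports energy 774, the slide's 778 presumably including mode-switch overhead that is not modeled here.

  #include <stdio.h>

  #define NREG 5
  #define NOPT 3
  #define BUDGET 7425        /* allowed total execution time (the slide's DVS total) */
  #define INF (1 << 29)

  /* (exec time, energy) per region and frequency, from the motivating example. */
  static const int freq_mhz[NOPT] = { 200, 300, 400 };
  static const int etime[NREG][NOPT] = {
      { 151, 100, 76 }, { 6827, 4552, 3414 }, { 335, 223, 168 },
      { 6827, 4552, 3414 }, { 335, 223, 168 },
  };
  static const int energy[NREG][NOPT] = {
      { 82, 149, 198 }, { 125, 163, 274 }, { 39, 72, 176 },
      { 125, 163, 274 }, { 39, 72, 176 },
  };

  static int dp[NREG + 1][BUDGET + 1];      /* min energy for first i regions in time <= t */
  static int choice[NREG + 1][BUDGET + 1];  /* which frequency option achieved dp[i][t]    */

  int main(void) {
      for (int t = 0; t <= BUDGET; t++) dp[0][t] = 0;
      for (int i = 1; i <= NREG; i++)
          for (int t = 0; t <= BUDGET; t++) {
              dp[i][t] = INF;
              for (int o = 0; o < NOPT; o++) {
                  int need = etime[i - 1][o];
                  if (need <= t && dp[i - 1][t - need] + energy[i - 1][o] < dp[i][t]) {
                      dp[i][t] = dp[i - 1][t - need] + energy[i - 1][o];
                      choice[i][t] = o;
                  }
              }
          }

      if (dp[NREG][BUDGET] >= INF) { printf("infeasible budget\n"); return 1; }
      printf("min energy: %d (time budget %d)\n", dp[NREG][BUDGET], BUDGET);

      /* Walk the choices back to report the selected frequency per region. */
      int t = BUDGET, picked[NREG];
      for (int i = NREG; i >= 1; i--) {
          picked[i - 1] = choice[i][t];
          t -= etime[i - 1][picked[i - 1]];
      }
      for (int i = 0; i < NREG; i++)
          printf("P%d -> %d MHz\n", i + 1, freq_mhz[picked[i]]);
      return 0;
  }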

31 Compiler Problem as Optimization Problem
Integrated register allocation, spill-code generation, and scheduling in software-pipelined loops.
Problem: Given a machine M, a loop L, and a software-pipelined schedule S with initiation interval II, perform register allocation and generate spill code, if necessary, and schedule it such that the register requirement of the schedule ≤ the number of registers, and the resource constraints are met!

32 Modeling Live Ranges
[Figure: live-range representation. A live range A is shown on a grid of registers R0 … Rn (rows) against time steps 0–7 (columns), spanning from its def to its use.]

33 Modeling Spill Stores
[Figure: the same grid, now annotated with spill-store decision variables for live range A. Latencies: load = 1, store = 1, instruction = 1.]

34 Modeling Spill Loads
[Figure: the same grid, now annotated with spill-load decision variables for live range A. Latencies: load = 1, store = 1, instruction = 1.]

35 Constraints – Overview
- Every live range must be in a register at its definition time and its use time.
- A spill load can take place only if the spill store has already taken place.
- After a spill store, a live range can continue or cease to exist.
- The spill loads and stores must not saturate the memory units.
- Minimize the number of spill loads and stores.
(A small constraint-generation sketch follows.)
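To make the flavour of the formulation concrete, here is a hedged sketch that emits only the first family of constraints ("in a register at def and use") as LP-style text. The variable naming x[i][r][t] and the tiny machine are invented for illustration; the actual 0-1 formulation also covers the spill-store/load variables (STN, LTN on the next slide) and the resource constraints.

  #include <stdio.h>

  #define NREGS 2   /* hypothetical machine with 2 registers */

  /* One live range: defined at def_t, used at use_t. */
  typedef struct { const char *name; int def_t, use_t; } LiveRange;

  /* Emit, as LP-like text, the constraint that live range i must sit in
     some register at its def time and its use time:
         sum over r of x[i][r][t] >= 1   for t in {def_t, use_t}. */
  static void emit_in_register_constraints(const LiveRange *lr, int i) {
      int times[2] = { lr->def_t, lr->use_t };
      for (int k = 0; k < 2; k++) {
          for (int r = 0; r < NREGS; r++)
              printf("%sx_%d_%d_%d", r ? " + " : "", i, r, times[k]);
          printf(" >= 1    /* %s in a register at t=%d */\n", lr->name, times[k]);
      }
  }

  int main(void) {
      LiveRange a = { "A", 0, 7 };   /* the live range A from the figures */
      emit_in_register_constraints(&a, 0);
      return 0;
  }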

36 Objective
- No objective function – it can be treated purely as a constraint-solving problem!
- Or: minimize the number of spill loads and stores, i.e. minimize Σ over i, r, t of (STN_{i,r,t} + LTN_{i,r,t}).

37 Conclusions
- Compiler research is fun! It is cool to do compiler research!
- But remember Proebsting's Law: Compiler Technology Doubles CPU Power Every 18 YEARS!!
- Plenty of opportunities in compiler research!
- However, NO VACANCY in HPC lab this year!

