Optimizing single thread performance


Optimizing single thread performance
Locality and array allocation
Dependence
Loop transformations

Computer memory hierarchy
Implication: exploit data locality to achieve high performance.
Make sure that a program has a small footprint (so it fits in the upper levels of the hierarchy: registers and caches).
Cache line (64 bytes on x86): on a cache miss, a whole cache line is brought in, which is good for sequential access.

Optimizing single thread performance
Exploit data locality. To do that, we need to understand how arrays are handled at the low level.
How is array element a[i][j] accessed? Compute the offset of the element, then add it to the starting address of the array to get the location (see the sketch below).
How is a multi-dimensional array laid out in memory? Row major or column major.
Row major: a[0][0], a[0][1], ..., a[0][100], a[1][0], ..., a[1][100], ..., a[100][0], ..., a[100][100]
Column major: a[0][0], a[1][0], ..., a[100][0], a[0][1], ..., a[100][1], ..., a[0][100], ..., a[100][100]
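The following minimal C sketch (array name, sizes, and indices are made up for illustration) shows the row-major offset computation the compiler performs for a[i][j]:

  #include <stdio.h>

  #define ROWS 101
  #define COLS 101

  int main(void) {
      static double a[ROWS][COLS];             /* row-major: a[i][j] lives at offset i*COLS + j */
      int i = 3, j = 7;

      double *base = &a[0][0];                 /* starting address of the array */
      double *manual = base + (i * COLS + j);  /* offset computed by hand       */

      printf("&a[i][j] = %p\n", (void *)&a[i][j]);
      printf("base+off = %p\n", (void *)manual);   /* prints the same address   */
      return 0;
  }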

Optimizing single thread performance
Assuming that all instructions are doing useful work, how can you make the code run faster? Some sequences of code run faster than others:
Optimize for the memory hierarchy.
Optimize for specific architecture features such as pipelining.
Both optimizations require changing the execution order of the instructions.
Example: two orders for initializing a 2-D array A.
  A[0][0] = 0.0; A[1][0] = 0.0; ...; A[1000][1000] = 0.0;
  A[0][0] = 0.0; A[0][1] = 0.0; ...; A[1000][1000] = 0.0;
Both code sequences initialize A; is one better than the other? (See the sketch below.)
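Written out as C functions, the two orders look like the sketch below (N is an assumed constant and the timing harness is omitted). The first sequence on the slide corresponds to init_column_order and the second to init_row_order; because C stores arrays in row-major order, the second walks memory with stride 1 and is the cache-friendly one:

  #define N 1001
  static double A[N][N];

  /* Column by column: consecutive writes are N doubles apart, so almost
     every write touches a different cache line. */
  void init_column_order(void) {
      for (int j = 0; j < N; j++)
          for (int i = 0; i < N; i++)
              A[i][j] = 0.0;
  }

  /* Row by row: consecutive writes hit consecutive addresses (stride 1),
     so each 64-byte cache line is fully used before moving on. */
  void init_row_order(void) {
      for (int i = 0; i < N; i++)
          for (int j = 0; j < N; j++)
              A[i][j] = 0.0;
  }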

Changing the order of instructions without changing the semantics of the program
The semantics of a program is defined by its sequential execution. Optimization should not change what the program does. Parallel execution also changes the order of instructions. When is it safe to change the execution order (e.g., to run instructions in parallel)?
Example 1 (dependent chain):
  Sequential:  A=1  B=A+1  C=B+1  D=C+1
  Pairs executed in parallel:  A=1; B=A+1   then   C=B+1; D=C+1
  Result: A=1, B=?, C=?, D=?  (the reordered reads may not see the values just written)
Example 2 (independent statements):
  Sequential:  A=1  B=2  C=3  D=4
  Pairs executed in parallel:  A=1; B=2   then   C=3; D=4
  Result: A=1, B=2, C=3, D=4  (same as sequential execution)

When is it safe to change order?
When can you change the order of two instructions without changing the semantics?
They do not operate (read or write) on the same variables, or
they only read the same variables.
One read and one write is bad (the read may not get the right value).
Two writes are also bad (the end result is different).
This is formally captured in the concept of data dependence:
True (flow) dependence: write X - read X (RAW)
Output dependence: write X - write X (WAW)
Anti dependence: read X - write X (WAR)
What about RAR? Two reads do not conflict, so read-after-read is not a dependence.
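A minimal sketch labeling the three dependence types on one variable X (the variable names are invented for this example):

  double X, Y, Z;

  void dependence_kinds(void) {
      X = Y + 1.0;   /* S1 writes X                                     */
      Z = X * 2.0;   /* S2 reads X: true dependence S1 -> S2 (RAW)      */
      X = 0.0;       /* S3 writes X: anti dependence S2 -> S3 (WAR) and */
                     /*              output dependence S1 -> S3 (WAW)   */
  }

Swapping any of these pairs changes the value some statement sees or the final value of X, so none of them may be reordered; statements that only read the same variable (RAR) can be reordered freely.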

Data dependence examples
Example 1: A=1; B=A+1; C=B+1; D=C+1 - each statement truly depends (RAW) on the one before it, so the order cannot be changed.
Example 2: A=1; B=2; C=3; D=4 - the statements touch different variables, so there are no dependences.
When two instructions have no dependence, their execution order can be changed, or the two instructions can be executed in parallel.

Data dependence in loops
  for (i=1; i<500; i++) a(i) = 0;              /* no dependence between iterations */
  for (i=1; i<500; i++) a(i) = a(i-1) + 1;     /* loop-carried dependence          */
When there is no loop-carried dependence, the order of executing the loop iterations does not matter: the loop can be parallelized (iterations executed in parallel).

Loop-carried dependence
A loop-carried dependence is a dependence between statements in different iterations of a loop; a dependence within a single iteration is called a loop-independent dependence.
Loop-carried dependences are what prevent loops from being parallelized. This matters because loops contain most of the parallelism in a program.
A loop-carried dependence can often be represented by a dependence distance (or direction) vector that tells which iteration depends on which iteration.
When one tries to change the loop execution order, the loop-carried dependences must be honored. (See the sketch below.)
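A short C sketch of the distinction (array names and sizes are assumptions for the example):

  #define N 500
  double a[N], b[N];

  /* Loop-carried dependence: iteration i reads a[i-2], written two
     iterations earlier.  The dependence distance is 2, so the iterations
     cannot be reordered or run fully in parallel. */
  void carried(void) {
      for (int i = 2; i < N; i++)
          a[i] = a[i - 2] + 1.0;
  }

  /* Loop-independent dependence only: the write to a[i] and the read of
     a[i] are in the same iteration, so the iterations are independent
     and can run in parallel or in any order. */
  void independent(void) {
      for (int i = 0; i < N; i++) {
          a[i] = 1.0;
          b[i] = a[i] * 2.0;
      }
  }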

Dependence and parallelization
For a set of instructions with no dependences: execution in any order produces the same results, and the instructions can be executed in parallel.
For two instructions with a dependence: they must be executed in the original sequence and cannot be executed in parallel.
Loops with no loop-carried dependence can be parallelized (iterations executed in parallel).
Loops with loop-carried dependences cannot be parallelized (iterations must be executed in the original order).

Optimizing single thread performance through loop transformations
Roughly 90% of execution time is spent in 10% of the code, mostly in loops, and loops are relatively easy to analyze.
Loop optimizations: different ways to transform a loop while preserving its semantics.
Objective?
Single-thread systems: mostly optimizing for the memory hierarchy.
Multi-thread systems: loop parallelization. A parallelizing compiler automatically finds the loops that can be executed in parallel.

Loop optimization: scalar replacement of array elements
Before:
  for (i=0; i<N; i++)
    for (j=0; j<N; j++)
      for (k=0; k<N; k++)
        c(i, j) = c(i, j) + a(i, k) * b(k, j);
Registers are almost never allocated to array elements. Why? In general the compiler cannot prove that different array references do not overlap.
Scalar replacement allows a register to be allocated to the scalar, which reduces memory references. Also known as register pipelining.
After:
  for (i=0; i<N; i++)
    for (j=0; j<N; j++) {
      ct = c(i, j);
      for (k=0; k<N; k++)
        ct = ct + a(i, k) * b(k, j);
      c(i, j) = ct;
    }

Loop normalization
Before:
  for (i=a; i<=b; i+=c) { ...... }
After (assuming a positive step c):
  for (ii=1; ii <= (b-a)/c + 1; ii++) { i = a + (ii-1)*c; ...... }
Loop normalization does not do much by itself, but it makes the iteration space much easier to manipulate, which enables other optimizations. (A worked sketch follows.)
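A worked sketch of the normalization in C, assuming a positive step c and a <= b (body is an assumed placeholder for the original loop body):

  void body(int i);   /* placeholder for the original loop body */

  /* Original:    for (i = a; i <= b; i += c) body(i);        */
  /* Normalized:  ii runs 1, 2, ..., trip_count with step 1.  */
  void normalized(int a, int b, int c) {
      int trip_count = (b - a) / c + 1;        /* number of iterations   */
      for (int ii = 1; ii <= trip_count; ii++) {
          int i = a + (ii - 1) * c;            /* recover original index */
          body(i);
      }
  }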

Loop transformations
Change the shape of the loop iterations to change the access pattern, increase data reuse (locality), and reduce overheads.
Valid transformations must preserve the dependences: if iteration (i1, i2, ..., in) depends on iteration (j1, j2, ..., jn), then the transformed iteration (j1', j2', ..., jn') must still execute before (i1', i2', ..., in').

Loop transformations
Unimodular transformations: loop interchange, loop permutation, loop reversal, loop skewing, and many others.
Loop fusion and distribution
Loop tiling
Loop unrolling

Unimodular transformations
A unimodular matrix is a square matrix with all integral components and a determinant of +1 or -1. Let the unimodular matrix be U; it transforms iteration I = (i1, i2, ..., in) to iteration UI.
Applicability (proven by Michael Wolf): a unimodular transformation represented by matrix U is legal when applied to a loop nest with a set of distance vectors D if and only if, for each d in D, Ud >= 0 (the transformed dependence must not point backwards in the new iteration order).
The distance vectors describe the dependences in the loop nest.
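A small worked example, using the interchange nest on the next slide: interchange of a 2-deep nest is the unimodular matrix U = [0 1; 1 0] (determinant -1). The dependence of a(i,j) on a(i-1,j) has distance vector d = (1, 0), and Ud = (0, 1) >= 0, so the interchange is legal: every dependence source still executes before its sink in the new loop order.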

Unimodular transformations example: loop interchange
Before:
  for (i=0; i<n; i++)
    for (j=0; j<n; j++)
      a(i,j) = a(i-1, j) + 1;
After:
  for (j=0; j<n; j++)
    for (i=0; i<n; i++)
      a(i,j) = a(i-1, j) + 1;
Why is this transformation valid? The calculation of a(i-1, j) still happens before the calculation of a(i, j): the dependence is carried by the i loop, and the interchange preserves the order of the i iterations for every j.

Unimodular transformations example: loop permutation
  for (i=0; i<n; i++)
    for (j=0; j<n; j++)
      for (k=0; k<n; k++)
        for (l=0; l<n; l++)
          ......
Loop permutation reorders the loops of a nest deeper than two; interchange is the two-loop special case.

Unimodular transformations example: loop reversal
Before:
  for (i=0; i<n; i++)
    for (j=0; j<n; j++)
      a(i,j) = a(i-1, j) + 1.0;
After (inner loop reversed):
  for (i=0; i<n; i++)
    for (j=n-1; j>=0; j--)
      a(i,j) = a(i-1, j) + 1.0;
The reversal is legal here because the dependence is carried by the i loop; the j iterations are independent of each other.

Unimodular transformations example: loop skewing
Before:
  for (i=0; i<n; i++)
    for (j=0; j<n; j++)
      a(i) = a(i+j) + 1.0;
After (inner index skewed by the outer index, j' = i + j):
  for (i=0; i<n; i++)
    for (j=i; j<i+n; j++)
      a(i) = a(j) + 1.0;

Loop fusion
Takes two adjacent loops that have the same iteration space and combines their bodies. Legal when fusion does not violate any flow, anti-, or output dependence between the two loops.
Why?
Increases the loop body size and reduces loop overheads.
Increases the opportunities for instruction scheduling.
May improve locality.
Before:
  for (i=0; i<n; i++) a(i) = 1.0;
  for (j=0; j<n; j++) b(j) = 1.0;
After:
  for (i=0; i<n; i++) { a(i) = 1.0; b(i) = 1.0; }

Loop distribution
Takes one loop and partitions it into two loops. Legal when no dependence is violated by the split.
Why?
Reduces the memory trace and improves locality.
Increases the opportunities for instruction scheduling.
Before:
  for (i=0; i<n; i++) { a(i) = 1.0; b(i) = a(i); }
After:
  for (i=0; i<n; i++) a(i) = 1.0;
  for (j=0; j<n; j++) b(j) = a(j);

Loop tiling
Tiling replaces a single loop with two loops, so an n-deep nest can be changed into an (n+1)-deep to 2n-deep nest.
  for (i=0; i<n; i++) ...
becomes
  for (i=0; i<n; i+=t)
    for (ii=i; ii<min(i+t, n); ii++) ...
where t is called the tile size (a compilable sketch of this single-loop case follows below).
Tiling a 3-deep nest:
  for (i=0; i<n; i++)
    for (j=0; j<n; j++)
      for (k=0; k<n; k++) ...
becomes
  for (i=0; i<n; i+=t)
    for (ii=i; ii<min(i+t, n); ii++)
      for (j=0; j<n; j+=t)
        for (jj=j; jj<min(j+t, n); jj++)
          for (k=0; k<n; k+=t)
            for (kk=k; kk<min(k+t, n); kk++) ...
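A compilable sketch of tiling a single loop (the array, its size, and the tile size T are assumptions for the example):

  #define N 1000
  #define T 64                                    /* tile size */
  double a[N];

  void untiled(void) {
      for (int i = 0; i < N; i++)
          a[i] = 1.0;
  }

  void tiled(void) {
      for (int i = 0; i < N; i += T) {            /* loop over tiles      */
          int upper = (i + T < N) ? i + T : N;    /* min(i+T, N)          */
          for (int ii = i; ii < upper; ii++)      /* loop within one tile */
              a[ii] = 1.0;
      }
  }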

Loop tiling
When used together with loop interchange, loop tiling creates inner loops with a smaller memory trace, which is great for locality. Loop tiling is one of the most important techniques for optimizing locality: it reduces the size of the working set and changes the memory reference pattern.
Tiled only:
  for (i=0; i<n; i+=t)
    for (ii=i; ii<min(i+t, n); ii++)
      for (j=0; j<n; j+=t)
        for (jj=j; jj<min(j+t, n); jj++)
          for (k=0; k<n; k+=t)
            for (kk=k; kk<min(k+t, n); kk++) ...
Tiled, with the tile loops interchanged to the outside:
  for (i=0; i<n; i+=t)
    for (j=0; j<n; j+=t)
      for (k=0; k<n; k+=t)
        for (ii=i; ii<min(i+t, n); ii++)
          for (jj=j; jj<min(j+t, n); jj++)
            for (kk=k; kk<min(k+t, n); kk++) ...
The three inner loops now work on a t x t x t block: a much smaller memory footprint.

Loop unrolling
  for (i=0; i<100; i++) a(i) = 1.0;
Unrolling replicates the loop body several times per iteration (see the sketch below). It reduces control overhead and increases the opportunities for instruction scheduling, but a larger body may require more resources (registers). This can be very effective!
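A sketch of the loop above unrolled by a factor of 4 (here the trip count, 100, happens to be a multiple of the unroll factor; otherwise a remainder loop handles the leftover iterations):

  double a[100];

  void unrolled(void) {
      for (int i = 0; i < 100; i += 4) {   /* one test/branch per 4 assignments */
          a[i]     = 1.0;
          a[i + 1] = 1.0;
          a[i + 2] = 1.0;
          a[i + 3] = 1.0;
      }
  }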

Loop optimization in action
Optimizing matrix multiply:
  for (i=1; i<=N; i++)
    for (j=1; j<=N; j++)
      for (k=1; k<=N; k++)
        c(i, j) = c(i, j) + A(i, k)*B(k, j);
Where should we focus the optimization? On the innermost loop.
Memory references in the innermost loop: c(i, j), A(i, 1..N), B(1..N, j).
Spatial locality: a memory reference stride of 1 is best.
Temporal locality: hard to reuse cached data since the memory trace is too large.
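The starting point written out as plain, 0-based C (a sketch; N and the arrays are assumptions). Note that the slides assume Fortran-style column-major arrays, where A(i, 1..N) is the strided reference; with row-major C arrays, as below, it is B[k][j] that is accessed with stride N while A[i][k] has stride 1:

  #define N 512
  static double A[N][N], B[N][N], C[N][N];

  void matmul_baseline(void) {
      for (int i = 0; i < N; i++)
          for (int j = 0; j < N; j++)
              for (int k = 0; k < N; k++)      /* innermost loop: where the time goes */
                  C[i][j] = C[i][j] + A[i][k] * B[k][j];
  }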

Loop optimization in action
Initial improvement: increase spatial locality in the inner loop so that the references to both A and B have stride 1. Transpose A before doing this operation (assuming column-major storage). Demonstrated in my_mm.c, method 1.
  Transpose A   /* for all i, j: A'(i, j) = A(j, i) */
  for (i=1; i<=N; i++)
    for (j=1; j<=N; j++)
      for (k=1; k<=N; k++)
        c(i, j) = c(i, j) + A'(k, i)*B(k, j);

Loop optimization in action
c(i, j) is repeatedly referenced in the inner loop: apply scalar replacement (method 2).
Before:
  Transpose A
  for (i=1; i<=N; i++)
    for (j=1; j<=N; j++)
      for (k=1; k<=N; k++)
        c(i, j) = c(i, j) + A(k, i)*B(k, j);
After:
  Transpose A
  for (i=1; i<=N; i++)
    for (j=1; j<=N; j++) {
      ct = c(i, j);
      for (k=1; k<=N; k++)
        ct = ct + A(k, i)*B(k, j);
      c(i, j) = ct;
    }

Loop optimization in action
The inner loop's memory footprint is too large: A(1..N, i), B(1..N, j).
Apply loop tiling + loop interchange; the memory footprint in the inner loop becomes A(1..t, i), B(1..t, j).
Using blocking, one can tune the performance for the memory hierarchy: the innermost loop's data fit in registers, the next level's data fit in the L2 cache, and so on.
Method 4:
  for (j=1; j<=N; j+=t)
    for (k=1; k<=N; k+=t)
      for (i=1; i<=N; i+=t)
        for (ii=i; ii<=min(i+t-1, N); ii++)
          for (jj=j; jj<=min(j+t-1, N); jj++) {
            ct = c(ii, jj);
            for (kk=k; kk<=min(k+t-1, N); kk++)
              ct = ct + A(kk, ii)*B(kk, jj);
            c(ii, jj) = ct;
          }

Loop optimization in action
Loop unrolling of the kk loop (method 5):
  for (j=1; j<=N; j+=t)
    for (k=1; k<=N; k+=t)
      for (i=1; i<=N; i+=t)
        for (ii=i; ii<=min(i+t-1, N); ii++)
          for (jj=j; jj<=min(j+t-1, N); jj++) {
            ct = c(ii, jj);
            for (kk=k; kk<=min(k+t-1, N); kk+=16) {
              ct = ct + A(kk, ii)    * B(kk, jj);
              ct = ct + A(kk+1, ii)  * B(kk+1, jj);
              ......
              ct = ct + A(kk+15, ii) * B(kk+15, jj);
            }
            c(ii, jj) = ct;
          }
This assumes the loop can be unrolled evenly; otherwise you need to take care of the boundary (remainder) iterations.

Loop optimization in action
Instruction scheduling (method 6): '+' would have to wait for the result of '*' on a typical processor, and '*' is often deeply pipelined, so feed the pipeline with many independent '*' operations.
  for (j=1; j<=N; j+=t)
    for (k=1; k<=N; k+=t)
      for (i=1; i<=N; i+=t)
        for (ii=i; ii<=min(i+t-1, N); ii++)
          for (jj=j; jj<=min(j+t-1, N); jj++)
            for (kk=k; kk<=min(k+t-1, N); kk+=16) {
              t0 = A(kk, ii)     * B(kk, jj);
              t1 = A(kk+1, ii)   * B(kk+1, jj);
              ......
              t15 = A(kk+15, ii) * B(kk+15, jj);
              c(ii, jj) = c(ii, jj) + t0 + t1 + ... + t15;
            }

Loop optimization in action
Further locality improvement: store A, B, and C in block (tile) order (method 7). The loop structure stays the same as in method 6; only the memory layout of the arrays changes, so that each tile is contiguous in memory. (A sketch of such a layout follows.)
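A hedged sketch of one possible block-order layout (the exact packing scheme is not specified on the slide; the buffer name, the tile size, and the assumption that N is a multiple of T are illustrative):

  #define N 512
  #define T 64                          /* tile size; assume N % T == 0 */

  /* Copy row-major A[N][N] into Ablk so that each TxT tile is stored
     contiguously; the tiled loops of methods 4-6 then walk each tile
     with stride 1. */
  void pack_blocked(const double A[N][N], double Ablk[N * N]) {
      int idx = 0;
      for (int ib = 0; ib < N; ib += T)
          for (int jb = 0; jb < N; jb += T)
              for (int i = ib; i < ib + T; i++)
                  for (int j = jb; j < jb + T; j++)
                      Ablk[idx++] = A[i][j];
  }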

Loop optimization in action
See the ATLAS paper for the complete story: R. C. Whaley et al., "Automated Empirical Optimization of Software and the ATLAS Project," Parallel Computing, 27(1-2):3-35, 2001.

Summary
Dependence and parallelization: when can a loop be parallelized?
Loop transformations: what do they do? When is a loop transformation valid? Examples of loop transformations.