
ECE408/CS483 Applied Parallel Programming Lectures 5 and 6: Memory Model and Locality © David Kirk/NVIDIA and Wen-mei W. Hwu, ECE408/CS483/ECE498al, 2007-2012

Objective To learn to efficiently use the important levels of the CUDA memory hierarchy: registers, shared memory, and global memory; tiled algorithms and barrier synchronization. © David Kirk/NVIDIA and Wen-mei W. Hwu, ECE408/CS483/ECE498al, 2007-2012

The von Neumann Model [Schematic: Memory connected to a Processing Unit (Reg File, ALU), a Control Unit (PC, IR), and I/O.] © David Kirk/NVIDIA and Wen-mei W. Hwu ECE408/CS483/ECE498al University of Illinois, 2007-2011

Going back to the program Every instruction needs to be fetched from memory, decoded, then executed. The decode stage typically accesses the register file. Instructions come in three flavors: operate, data transfer, and program control flow. An example instruction cycle is the following: Fetch | Decode | Execute | Memory. © David Kirk/NVIDIA and Wen-mei W. Hwu ECE408/CS483/ECE498al University of Illinois, 2007-2011

Operate Instructions Example of an operate instruction: ADD R1, R2, R3. Instruction cycle for an operate instruction: Fetch | Decode | Execute | Memory. © David Kirk/NVIDIA and Wen-mei W. Hwu ECE408/CS483/ECE498al University of Illinois, 2007-2011

Memory Access Instructions Examples of memory access instructions: LDR R1, R2, #2 and STR R1, R2, #2. Instruction cycle for a memory access instruction: Fetch | Decode | Execute | Memory. © David Kirk/NVIDIA and Wen-mei W. Hwu ECE408/CS483/ECE498al University of Illinois, 2007-2011

Registers vs Memory Registers are "free": no additional memory access instruction is needed, and they are very fast to use; however, there are very few of them. Memory is expensive (slow), but very large. Desk analogy: books on your desk are registers; the library is memory. © David Kirk/NVIDIA and Wen-mei W. Hwu ECE408/CS483/ECE498al University of Illinois, 2007-2011

Programmer View of CUDA Memories Each thread can: read/write per-thread registers (~1 cycle); read/write per-block shared memory (~5 cycles); read/write per-grid global memory (~500 cycles); read-only per-grid constant memory (~5 cycles with caching). [Figure: the grid contains blocks, each with its own shared memory and per-thread registers; the host accesses global and constant memory.] Global, constant, and texture memory spaces are persistent across kernels called by the same application. © David Kirk/NVIDIA and Wen-mei W. Hwu, ECE408/CS483/ECE498al, 2007-2012

Shared Memory in CUDA A special type of memory whose contents are explicitly declared and used in the source code. Located on-chip, in the processor. Accessed at much higher speed (in both latency and throughput) than global memory. Still accessed by memory access instructions. Commonly referred to as scratchpad memory in computer architecture. © David Kirk/NVIDIA and Wen-mei W. Hwu, ECE408/CS483/ECE498al, 2007-2012

CUDA Variable Type Qualifiers

Variable declaration                     | Memory   | Scope  | Lifetime
int LocalVar;                            | register | thread | thread
__device__ __shared__ int SharedVar;     | shared   | block  | block
__device__ int GlobalVar;                | global   | grid   | application
__device__ __constant__ int ConstantVar; | constant | grid   | application

__device__ is optional when used with __shared__ or __constant__. Automatic variables without any qualifier reside in a register, except per-thread arrays, which reside in global memory. © David Kirk/NVIDIA and Wen-mei W. Hwu, ECE408/CS483/ECE498al, 2007-2012
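
As a quick illustration of these qualifiers, here is a minimal sketch (not from the slides; the names, sizes, and computation are made up for the example, and the kernel assumes 64 threads per block):

__device__ __constant__ float coeff[16];   // constant memory: grid scope, application lifetime
__device__ float g_bias;                   // global memory: grid scope, application lifetime

__global__ void qualifierDemo(float* d_out, int n) {
    __shared__ float tile[64];             // shared memory: one copy per block, block lifetime
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // automatic variables: live in registers
    tile[threadIdx.x] = coeff[threadIdx.x % 16] + g_bias;
    __syncthreads();                       // make the shared-memory writes visible block-wide
    if (i < n) d_out[i] = tile[threadIdx.x];
}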

Where to Declare Variables? Can the host access it? Yes (global, constant): declare outside of any function. No (register/automatic, shared, local): declare in the kernel. © David Kirk/NVIDIA and Wen-mei W. Hwu, ECE408/CS483/ECE498al, 2007-2012

__global__ void MatrixMulKernel(float* d_M, float* d_N, float* d_P, int Width) {
  __shared__ float ds_M[TILE_WIDTH][TILE_WIDTH];
  __shared__ float ds_N[TILE_WIDTH][TILE_WIDTH];
  ...
© David Kirk/NVIDIA and Wen-mei W. Hwu ECE408/CS483/ECE498al University of Illinois, 2007-2011

A Common Programming Strategy Global memory resides in device memory (DRAM), which is slow to access. So a profitable way of performing computation on the device is to tile input data to take advantage of fast shared memory: Partition data into subsets that fit into shared memory. Handle each data subset with one thread block by: loading the subset from global memory to shared memory, using multiple threads to exploit memory-level parallelism; performing the computation on the subset from shared memory, where each thread can efficiently make multiple passes over any data element; copying results from shared memory to global memory. A sketch of this pattern follows below. © David Kirk/NVIDIA and Wen-mei W. Hwu, ECE408/CS483/ECE498al, 2007-2012
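
A minimal sketch of this three-step pattern (assumptions not in the slides: a 1D grid, a made-up neighbor-averaging computation, and an n that is a multiple of BLOCK_SIZE):

#define BLOCK_SIZE 256

__global__ void tiledProcess(const float* d_in, float* d_out, int n) {
    __shared__ float tile[BLOCK_SIZE];
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    // 1. Load this block's subset from global memory into shared memory
    tile[threadIdx.x] = d_in[i];
    __syncthreads();                      // wait until the whole tile is loaded
    // 2. Compute from shared memory; each thread may read several tile elements
    float left = (threadIdx.x > 0) ? tile[threadIdx.x - 1] : tile[threadIdx.x];
    float result = 0.5f * (left + tile[threadIdx.x]);
    // 3. Copy results back to global memory
    d_out[i] = result;
}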

Matrix-Matrix Multiplication using Shared Memory © David Kirk/NVIDIA and Wen-mei W. Hwu, ECE408/CS483/ECE498al, 2007-2012

Base Matrix Multiplication Kernel
__global__ void MatrixMulKernel(float* d_M, float* d_N, float* d_P, int Width) {
  // Calculate the row index of the Pd element and M
  int Row = blockIdx.y*TILE_WIDTH + threadIdx.y;
  // Calculate the column index of Pd and N
  int Col = blockIdx.x*TILE_WIDTH + threadIdx.x;
  float Pvalue = 0;
  // Each thread computes one element of the block sub-matrix
  for (int k = 0; k < Width; ++k)
    Pvalue += d_M[Row*Width+k] * d_N[k*Width+Col];
  d_P[Row*Width+Col] = Pvalue;
}
© David Kirk/NVIDIA and Wen-mei W. Hwu, ECE408/CS483/ECE498al, 2007-2012
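
The slides do not show the launch; a hedged sketch of an execution configuration for this kernel (assuming square matrices with Width a multiple of TILE_WIDTH):

#define TILE_WIDTH 16
// ... after allocating and copying d_M, d_N, d_P ...
dim3 dimBlock(TILE_WIDTH, TILE_WIDTH);              // one thread per d_P element
dim3 dimGrid(Width/TILE_WIDTH, Width/TILE_WIDTH);   // assumes Width % TILE_WIDTH == 0
MatrixMulKernel<<<dimGrid, dimBlock>>>(d_M, d_N, d_P, Width);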

How about performance on Fermi? All threads access global memory for their input matrix elements. Two memory accesses (8 bytes) per floating-point multiply-add (2 FLOPs), i.e. 4 bytes of memory traffic per FLOP. So 4 * 1,000 = 4,000 GB/s of bandwidth would be required to achieve the peak FLOP rating; the actual 150 GB/s limits the code to 37.5 GFLOPS. The code as written runs at about 25 GFLOPS. We need to drastically cut down memory accesses to get closer to the peak 1,000 GFLOPS. [Figure: the CUDA memory model, with a grid of blocks, each holding shared memory and per-thread registers, plus host, global, and constant memory.] © David Kirk/NVIDIA and Wen-mei W. Hwu ECE408/CS483/ECE498al University of Illinois, 2007-2011

Shared Memory Blocking: Basic Idea [Figure: without blocking, Thread 1 and Thread 2 each fetch their data directly from global memory; with blocking, the data is first staged in on-chip memory and both threads read it from there.] © David Kirk/NVIDIA and Wen-mei W. Hwu, ECE408/CS483/ECE498al, 2007-2012

Basic Concept of Blocking/Tiling In a congested traffic system, significantly reducing the number of vehicles can greatly improve the delay seen by all vehicles. Carpooling for commuters is blocking/tiling for global memory accesses: drivers = threads, cars = data. © David Kirk/NVIDIA and Wen-mei W. Hwu, ECE408/CS483/ECE498al, 2007-2012

Some computations are more challenging to block/tile than others. Some carpools may be easier than others: carpooling is more efficient if neighbors are also classmates or co-workers, and some vehicles may be more suitable for carpooling. Similar variations exist in blocking/tiling. © David Kirk/NVIDIA and Wen-mei W. Hwu, ECE408/CS483/ECE498al, 2007-2012

Carpools need synchronization. Good – when people have similar schedules (Worker A and Worker B both: sleep, work, dinner). Bad – when people have very different schedules (Worker A: party, sleep, work; Worker B: sleep, work, dinner). © David Kirk/NVIDIA and Wen-mei W. Hwu, ECE408/CS483/ECE498al, 2007-2012

Same with Blocking/Tiling Good – when threads have similar access timing. Bad – when threads have very different timing. [Timing diagram: Thread 1 and Thread 2 with aligned vs. misaligned access windows.] © David Kirk/NVIDIA and Wen-mei W. Hwu, ECE408/CS483/ECE498al, 2007-2012

Outline of Technique Identify a block/tile of global memory content that is accessed by multiple threads. Load the block/tile from global memory into on-chip memory. Have the multiple threads access their data from the on-chip memory. Move on to the next block/tile. © David Kirk/NVIDIA and Wen-mei W. Hwu, ECE408/CS483/ECE498al, 2007-2012

Idea: Use Shared Memory to Reuse Global Memory Data Each input element is read by WIDTH threads. Load each element into shared memory and have several threads use the local version to reduce the memory bandwidth: tiled algorithms. [Figure: WIDTH x WIDTH matrices M, N, and P; the thread at (tx, ty) reads a row of M and a column of N to compute one P element.] © David Kirk/NVIDIA and Wen-mei W. Hwu, ECE408/CS483/ECE498al, 2007-2012

Work for Block (0,0) in a TILE_WIDTH = 2 Configuration Row = blockIdx.y * blockDim.y + threadIdx.y = 0 * 2 + threadIdx.y; Col = blockIdx.x * blockDim.x + threadIdx.x = 0 * 2 + threadIdx.x. [Figure: 4x4 matrices M, N, and P; block (0,0) covers elements P0,0, P0,1, P1,0, and P1,1.]

Tiled Multiply Break up the execution of the kernel into phases so that the data accesses in each phase are focused on one subset (tile) of Md and Nd. [Figure: Md, Nd, and Pd; block (bx, by) and thread (tx, ty) compute the TILE_WIDTH x TILE_WIDTH sub-matrix Pdsub of the WIDTH x WIDTH result.] © David Kirk/NVIDIA and Wen-mei W. Hwu, ECE408/CS483/ECE498al, 2007-2012

Loading a Tile All threads in a block participate: each thread loads one Md element and one Nd element in the basic tiled code. Assign the loaded element to each thread such that the accesses within each warp are coalesced (more on this later). © David Kirk/NVIDIA and Wen-mei W. Hwu, ECE408/CS483/ECE498al, 2007-2012

Work for Block (0,0) [Figure sequence: phase-by-phase execution for block (0,0) with TILE_WIDTH = 2 on 4x4 matrices. In phase 1, the threads copy tile 0 of N (N0,0, N0,1, N1,0, N1,1) and tile 0 of M (M0,0, M0,1, M1,0, M1,1) into shared memory (SM) and accumulate partial products for P0,0, P0,1, P1,0, and P1,1; in phase 2, they copy tile 1 of each matrix and complete the dot products.] © David Kirk/NVIDIA and Wen-mei W. Hwu, ECE408/CS483/ECE498al, 2007-2012

Barrier Synchronization An API function call in CUDA: __syncthreads(). All threads in the same block must reach the __syncthreads() before any can move on. Best used to coordinate tiled algorithms: to ensure that all elements of a tile are loaded, and to ensure that all elements of a tile are consumed. © David Kirk/NVIDIA and Wen-mei W. Hwu, ECE408/CS483/ECE498al, 2007-2012
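
One caution implied by this rule, as an illustrative sketch (not from the slides; buffer size and block size are assumptions): because every thread in the block must reach the barrier, a __syncthreads() inside a divergent branch can hang the kernel.

__global__ void barrierPlacement(float* d_data) {
    __shared__ float buf[128];
    int t = threadIdx.x;               // assume 256 threads per block
    // WRONG: if only some threads enter the branch, those inside wait
    // at a barrier the others never reach:
    //   if (t < 128) { buf[t] = d_data[t]; __syncthreads(); }
    // RIGHT: keep the barrier on a path that every thread executes.
    if (t < 128) buf[t] = d_data[t];
    __syncthreads();
    if (t >= 128) d_data[t] = buf[t - 128];
}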

[Figure 4.11: An example execution timing of barrier synchronization. Threads 0 through N-1 reach the barrier at different times; each waits until the last thread arrives before any proceeds.]

Loading an Input Tile Accessing tile 0 in 2D indexing: M[Row][tx] and N[ty][Col], with Row = by * TILE_WIDTH + ty and Col = bx * TILE_WIDTH + tx. [Figure: Md, Nd, and Pd with block (bx, by), thread (tx, ty), and tile index m.] ©Wen-mei W. Hwu and David Kirk/NVIDIA, Urbana, August 13-17, 2012

Loading an Input Tile Accessing tile 1 in 2D indexing: M[Row][1*TILE_WIDTH+tx] and N[1*TILE_WIDTH+ty][Col], with Row = by * TILE_WIDTH + ty. [Figure: same layout, with the tiles offset by TILE_WIDTH.] ©Wen-mei W. Hwu and David Kirk/NVIDIA, Urbana, August 13-17, 2012

Loading an Input Tile However, M and N are dynamically allocated and can only use 1D indexing: M[Row][m*TILE_WIDTH+tx] becomes M[Row*Width + m*TILE_WIDTH + tx], and N[m*TILE_WIDTH+ty][Col] becomes N[(m*TILE_WIDTH+ty) * Width + Col]. [Figure: d_M, d_N, and d_P, with the m-th tile at offset m*TILE_WIDTH along Row and Col.]

Tiled Matrix Multiplication Kernel
__global__ void MatrixMulKernel(float* d_M, float* d_N, float* d_P, int Width) {
  __shared__ float ds_M[TILE_WIDTH][TILE_WIDTH];
  __shared__ float ds_N[TILE_WIDTH][TILE_WIDTH];
  int bx = blockIdx.x;  int by = blockIdx.y;
  int tx = threadIdx.x; int ty = threadIdx.y;
  // Identify the row and column of the Pd element to work on
  int Row = by * TILE_WIDTH + ty;
  int Col = bx * TILE_WIDTH + tx;
  float Pvalue = 0;
  // Loop over the Md and Nd tiles required to compute the Pd element
  for (int m = 0; m < Width/TILE_WIDTH; ++m) {
    // Collaborative loading of Md and Nd tiles into shared memory
    ds_M[ty][tx] = d_M[Row*Width + m*TILE_WIDTH + tx];
    ds_N[ty][tx] = d_N[Col + (m*TILE_WIDTH + ty)*Width];
    __syncthreads();
    for (int k = 0; k < TILE_WIDTH; ++k)
      Pvalue += ds_M[ty][k] * ds_N[k][tx];
    __syncthreads();
  }
  d_P[Row*Width+Col] = Pvalue;
}
© David Kirk/NVIDIA and Wen-mei W. Hwu ECE408/CS483/ECE498al University of Illinois, 2007-2011

Compare with the Base Kernel
__global__ void MatrixMulKernel(float* d_M, float* d_N, float* d_P, int Width) {
  // Calculate the row index of the Pd element and M
  int Row = blockIdx.y*TILE_WIDTH + threadIdx.y;
  // Calculate the column index of Pd and N
  int Col = blockIdx.x*TILE_WIDTH + threadIdx.x;
  float Pvalue = 0;
  // Each thread computes one element of the block sub-matrix
  for (int k = 0; k < Width; ++k)
    Pvalue += d_M[Row*Width+k] * d_N[k*Width+Col];
  d_P[Row*Width+Col] = Pvalue;
}
© David Kirk/NVIDIA and Wen-mei W. Hwu, ECE408/CS483/ECE498al, 2007-2012

First-order Size Considerations Each thread block should have many threads: a TILE_WIDTH of 16 gives 16*16 = 256 threads; a TILE_WIDTH of 32 gives 32*32 = 1024 threads. For 16, each block performs 2*256 = 512 float loads from global memory for 256 * (2*16) = 8,192 mul/add operations, a ratio of 16 operations per load. For 32, each block performs 2*1024 = 2,048 float loads from global memory for 1024 * (2*32) = 65,536 mul/add operations, a ratio of 32 operations per load. © David Kirk/NVIDIA and Wen-mei W. Hwu, ECE408/CS483/ECE498al, 2007-2012

Shared Memory and Threading Each SM in Fermi has 16KB or 48KB of shared memory* (the size is implementation dependent!). For TILE_WIDTH = 16, each thread block uses 2*256*4B = 2KB of shared memory, so up to 8 thread blocks can potentially be actively executing. This allows up to 8*512 = 4,096 pending loads (2 per thread, 256 threads per block). The next TILE_WIDTH, 32, would lead to 2*32*32*4B = 8KB of shared memory usage per thread block, allowing only 2 (16KB config) or 6 (48KB config) thread blocks active at the same time. Using 16x16 tiling, we reduce the accesses to global memory by a factor of 16: the 86.4 GB/s bandwidth can now support (86.4/4)*16 = 345.6 GFLOPS! *Configurable vs L1, total 64KB © David Kirk/NVIDIA and Wen-mei W. Hwu, ECE408/CS483/ECE498al, 2007-2012

Device Query Number of devices in the system:
int dev_count;
cudaGetDeviceCount(&dev_count);
Capability of devices:
cudaDeviceProp dev_prop;
for (i = 0; i < dev_count; i++) {
  cudaGetDeviceProperties(&dev_prop, i);
  // decide if device has sufficient resources and capabilities
}
cudaDeviceProp is a built-in C structure type: dev_prop.maxThreadsPerBlock, dev_prop.sharedMemPerBlock, … © David Kirk/NVIDIA and Wen-mei W. Hwu, ECE408/CS483/ECE498al, 2007-2012
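
A self-contained, runnable version of this query loop, as a sketch (the printed fields are standard cudaDeviceProp members; which ones to check is up to the application):

#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int dev_count = 0;
    cudaGetDeviceCount(&dev_count);
    for (int i = 0; i < dev_count; i++) {
        cudaDeviceProp dev_prop;
        cudaGetDeviceProperties(&dev_prop, i);
        printf("Device %d: %s (compute %d.%d)\n",
               i, dev_prop.name, dev_prop.major, dev_prop.minor);
        printf("  maxThreadsPerBlock:  %d\n", dev_prop.maxThreadsPerBlock);
        printf("  sharedMemPerBlock:   %zu bytes\n", dev_prop.sharedMemPerBlock);
        printf("  multiProcessorCount: %d\n", dev_prop.multiProcessorCount);
    }
    return 0;
}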

Summary – Typical Structure of a CUDA Program
Global variables declaration: __host__, __device__, __global__, __constant__, __texture__
Function prototypes: __global__ void kernelOne(…); float handyFunction(…)
main(): allocate memory space on the device – cudaMalloc(&d_GlblVarPtr, bytes); transfer data from host to device – cudaMemcpy(d_GlblVarPtr, h_Gl…); execution configuration setup; kernel call – kernelOne<<<execution configuration>>>(args…); transfer results from device to host – cudaMemcpy(h_GlblVarPtr, …); optional: compare against golden (host-computed) solution
Kernel – void kernelOne(type args, …): variables declaration – automatic, __shared__; automatic variables transparently assigned to registers; __syncthreads()…
Other functions: float handyFunction(int inVar, …);
Repeat as needed.
© David Kirk/NVIDIA and Wen-mei W. Hwu, ECE408/CS483/ECE498al, 2007-2012
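
Putting that structure together, a minimal end-to-end sketch (kernelOne here is a made-up doubling kernel, not from the slides, and error checking is omitted for brevity):

#include <cuda_runtime.h>

__global__ void kernelOne(float* d_data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) d_data[i] *= 2.0f;                 // illustrative work
}

int main() {
    const int n = 1024;
    const size_t bytes = n * sizeof(float);
    float h_data[n];
    for (int i = 0; i < n; i++) h_data[i] = (float)i;

    float* d_data;
    cudaMalloc(&d_data, bytes);                                // allocate device memory
    cudaMemcpy(d_data, h_data, bytes, cudaMemcpyHostToDevice); // host -> device
    kernelOne<<<(n + 255) / 256, 256>>>(d_data, n);            // execution configuration + launch
    cudaMemcpy(h_data, d_data, bytes, cudaMemcpyDeviceToHost); // device -> host
    cudaFree(d_data);
    return 0;
}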

Any more questions? Read Chapter 5! © David Kirk/NVIDIA and Wen-mei W. Hwu, ECE408/CS483/ECE498al, 2007-2012