1 Programming Massively Parallel Processors Lecture Slides for Chapter 4: CUDA Threads
© David Kirk/NVIDIA and Wen-mei W. Hwu, ECE498AL, University of Illinois, Urbana-Champaign

2 Block IDs and Thread IDs
Each thread uses IDs to decide what data to work on:
- Block ID: 1D or 2D
- Thread ID: 1D, 2D, or 3D
This simplifies memory addressing when processing multidimensional data, e.g. image processing or solving PDEs on volumes.
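A minimal sketch of the idea (a hypothetical kernel, not from the slides): each thread combines its block and thread IDs into a unique global index before touching memory.

    __global__ void scale(float* data, float alpha, int n)
    {
        // Global 1D index: block offset plus thread offset within the block
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)  // guard against a partially filled last block
            data[i] = alpha * data[i];
    }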

3 Matrix Multiplication Using Multiple Blocks
[Figure: the WIDTH x WIDTH matrices Md, Nd, and Pd; block indices (bx, by) and thread indices (tx, ty) in 0 … TILE_WIDTH-1 select one TILE_WIDTH x TILE_WIDTH tile Pdsub of Pd.]
- Break up Pd into tiles
- Each block calculates one tile
- Each thread calculates one element
- Block size equals tile size

4 A Small Example
TILE_WIDTH = 2: the 4x4 Pd is covered by four 2x2 blocks.

    Block(0,0)    Block(1,0)
    P0,0 P1,0     P2,0 P3,0
    P0,1 P1,1     P2,1 P3,1

    Block(0,1)    Block(1,1)
    P0,2 P1,2     P2,2 P3,2
    P0,3 P1,3     P2,3 P3,3

5 A Small Example: Multiplication
[Figure: the Md rows and Nd columns used by Block(0,0), and the full 4x4 Pd; Ndx,y denotes column x, row y.]

                               Nd0,0 Nd1,0
                               Nd0,1 Nd1,1
                               Nd0,2 Nd1,2
                               Nd0,3 Nd1,3

    Md0,0 Md1,0 Md2,0 Md3,0    Pd0,0 Pd1,0 Pd2,0 Pd3,0
    Md0,1 Md1,1 Md2,1 Md3,1    Pd0,1 Pd1,1 Pd2,1 Pd3,1
                               Pd0,2 Pd1,2 Pd2,2 Pd3,2
                               Pd0,3 Pd1,3 Pd2,3 Pd3,3
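Following the kernel on the next slide (Row = 0, Col = 1, Width = 4), thread (1,0) of Block(0,0) computes one dot product of a row of Md with a column of Nd:

    Pd1,0 = Md0,0 * Nd1,0 + Md1,0 * Nd1,1 + Md2,0 * Nd1,2 + Md3,0 * Nd1,3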

6 Revised Matrix Multiplication Kernel using Multiple Blocks
__global__ void MatrixMulKernel(float* Md, float* Nd, float* Pd, int Width)
{
    // Calculate the row index of the Pd element and M
    int Row = blockIdx.y * TILE_WIDTH + threadIdx.y;
    // Calculate the column index of Pd and N
    int Col = blockIdx.x * TILE_WIDTH + threadIdx.x;

    float Pvalue = 0;
    // Each thread computes one element of the block sub-matrix
    for (int k = 0; k < Width; ++k)
        Pvalue += Md[Row * Width + k] * Nd[k * Width + Col];
    Pd[Row * Width + Col] = Pvalue;
}

7 Revised Step 5: Kernel Invocation (Host-side Code)
// Set up the execution configuration
dim3 dimGrid(Width / TILE_WIDTH, Width / TILE_WIDTH);
dim3 dimBlock(TILE_WIDTH, TILE_WIDTH);

// Launch the device computation threads!
MatrixMulKernel<<<dimGrid, dimBlock>>>(Md, Nd, Pd, Width);
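This configuration assumes Width is a multiple of TILE_WIDTH. A hedged sketch of one common way to handle arbitrary sizes (the rounding and the in-kernel bounds check are additions, not part of the slides):

    // Round the grid size up so a partial tile at the edge still gets a block
    dim3 dimGrid((Width + TILE_WIDTH - 1) / TILE_WIDTH,
                 (Width + TILE_WIDTH - 1) / TILE_WIDTH);
    dim3 dimBlock(TILE_WIDTH, TILE_WIDTH);
    MatrixMulKernel<<<dimGrid, dimBlock>>>(Md, Nd, Pd, Width);

    // Inside the kernel, out-of-range threads must then do nothing:
    //   if (Row < Width && Col < Width) { ... compute Pd[Row*Width+Col] ... }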

8 CUDA Thread Block
All threads in a block execute the same kernel program (SPMD). The programmer declares the block:
- Block size: 1 to 512 concurrent threads
- Block shape: 1D, 2D, or 3D
- Block dimensions in threads
Threads have thread ID numbers within the block; the thread program uses the thread ID to select work and address shared data. Threads in the same block share data and synchronize while doing their share of the work. Threads in different blocks cannot cooperate. Each block can execute in any order relative to other blocks!
[Figure: a thread block with thread IDs 0 … m running a thread program. Courtesy: John Nickolls, NVIDIA]

9 Transparent Scalability
Hardware is free to assign blocks to any processor at any time, so a kernel scales across any number of parallel processors.
[Figure: the same 8-block kernel grid executed on a 2-SM device as four batches of two blocks, and on a 4-SM device as two batches of four blocks; time runs downward.]
Each block can execute in any order relative to other blocks.

10 G80 CUDA mode – A Review
- Processors execute computing threads
- New operating mode/HW interface for computing
[Figure: G80 block diagram with the Host feeding an Input Assembler and Thread Execution Manager, arrays of processors with Parallel Data Caches and Texture units, and load/store paths to Global Memory.]

11 G80 Example: Executing Thread Blocks
Threads are assigned to Streaming Multiprocessors (SMs) at block granularity:
- Up to 8 blocks per SM, as resources allow
- An SM in G80 can take up to 768 threads: 256 threads/block * 3 blocks, or 128 threads/block * 6 blocks, etc.
Threads run concurrently:
- The SM maintains thread/block ID numbers
- The SM manages/schedules thread execution
[Figure: blocks of threads t0 t1 t2 … tm assigned to SM 0 and SM 1, each with an MT instruction unit (MT IU), SPs, and Shared Memory.]

12 G80 Example: Thread Scheduling
Each block is executed as 32-thread warps. This is an implementation decision, not part of the CUDA programming model. Warps are the scheduling units in an SM.
If 3 blocks are assigned to an SM and each block has 256 threads, how many warps are there in the SM?
- Each block is divided into 256 / 32 = 8 warps
- There are 8 * 3 = 24 warps
[Figure: warps t0 t1 t2 … t31 from Block 1 and Block 2 feeding a Streaming Multiprocessor with an Instruction L1 cache, instruction fetch/dispatch, Shared Memory, 8 SPs, and 2 SFUs.]
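A hypothetical device-side sketch (not from the slides) of how a thread can derive its own warp number, assuming a 1D block and 32 threads per warp:

    __device__ int warpWithinBlock(void)
    {
        // Warp 0 holds threads 0-31, warp 1 holds threads 32-63, and so on
        return threadIdx.x / 32;
    }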

13 G80 Example: Thread Scheduling (Cont.)
The SM implements zero-overhead warp scheduling:
- Warps whose next instruction has its operands ready for consumption are eligible for execution
- Eligible warps are selected for execution based on a prioritized scheduling policy
- All threads in a warp execute the same instruction when selected

14 G80 Block Granularity Considerations
For matrix multiplication using multiple blocks, should I use 8x8, 16x16, or 32x32 blocks? (The sketch after this list reproduces the arithmetic.)
- For 8x8, we have 64 threads per block. Since each SM can take up to 768 threads, that would allow 12 blocks. However, each SM can take only up to 8 blocks, so only 512 threads will go into each SM!
- For 16x16, we have 256 threads per block. Since each SM can take up to 768 threads, it can take up to 3 blocks and achieve full capacity, unless other resource considerations overrule.
- For 32x32, we have 1024 threads per block. Not even one block can fit into an SM!
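A small host-side C sketch (not from the slides) reproducing the reasoning above for the G80-era limits:

    #include <stdio.h>

    int main(void)
    {
        const int maxThreadsPerSM = 768;  // G80 limit from the slides
        const int maxBlocksPerSM = 8;     // G80 limit from the slides
        const int widths[] = {8, 16, 32};

        for (int i = 0; i < 3; ++i) {
            int threadsPerBlock = widths[i] * widths[i];
            // Cap by the thread budget first, then by the block budget
            int blocks = maxThreadsPerSM / threadsPerBlock;
            if (blocks > maxBlocksPerSM)
                blocks = maxBlocksPerSM;
            printf("%2dx%-2d: %4d threads/block -> %d block(s), %3d threads per SM\n",
                   widths[i], widths[i], threadsPerBlock,
                   blocks, blocks * threadsPerBlock);
        }
        return 0;
    }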

15 More Details of API Features

16 Application Programming Interface
The API is an extension to the C programming language. It consists of:
- Language extensions, to target portions of the code for execution on the device
- A runtime library, split into:
  - A common component providing built-in vector types and a subset of the C runtime library in both host and device code
  - A host component to control and access one or more devices from the host
  - A device component providing device-specific functions

17 Language Extensions: Built-in Variables
dim3 gridDim;   Dimensions of the grid in blocks (gridDim.z unused)
dim3 blockDim;  Dimensions of the block in threads
dim3 blockIdx;  Block index within the grid
dim3 threadIdx; Thread index within the block
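A hypothetical kernel (not from the slides) showing how these built-ins combine into 2D coordinates:

    __global__ void fill2D(float* out, int width, int height, float value)
    {
        // Column and row of this thread within the whole grid
        int col = blockIdx.x * blockDim.x + threadIdx.x;
        int row = blockIdx.y * blockDim.y + threadIdx.y;
        if (col < width && row < height)
            out[row * width + col] = value;
    }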

18 Common Runtime Component: Mathematical Functions
pow, sqrt, cbrt, hypot
exp, exp2, expm1
log, log2, log10, log1p
sin, cos, tan, asin, acos, atan, atan2
sinh, cosh, tanh, asinh, acosh, atanh
ceil, floor, trunc, round
etc.
- When executed on the host, a given function uses the C runtime implementation if available
- These functions are only supported for scalar types, not vector types
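One way to see the common component in action (a hypothetical helper, not from the slides): a function qualified for both host and device compiles against the C runtime on the host and the device math library in kernels.

    #include <math.h>

    __host__ __device__ float norm2(float x, float y)
    {
        // sqrtf is part of the common math support on both sides
        return sqrtf(x * x + y * y);
    }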

19 Device Runtime Component: Mathematical Functions
Some mathematical functions (e.g. sinf(x)) have a less accurate but faster device-only version (e.g. __sinf(x)):
__powf
__logf, __log2f, __log10f
__expf
__sinf, __cosf, __tanf
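A minimal device-side contrast (hypothetical kernel, not from the slides) between the accurate math library call and the fast intrinsic:

    __global__ void fastVsAccurate(const float* in, float* accurate, float* fast, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) {
            accurate[i] = sinf(in[i]);   // device math library, full accuracy
            fast[i] = __sinf(in[i]);     // hardware intrinsic, faster but less accurate
        }
    }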

20 Host Runtime Component
Provides functions to deal with:
- Device management (including multi-device systems)
- Memory management
- Error handling
The runtime initializes the first time a runtime function is called. A host thread can invoke device code on only one device; multiple host threads are required to run on multiple devices.
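A hedged host-side sketch (not from the slides) touching all three areas; cudaGetDeviceCount, cudaSetDevice, cudaMalloc, cudaFree, and cudaGetErrorString are standard runtime calls:

    #include <stdio.h>
    #include <cuda_runtime.h>

    int main(void)
    {
        int count = 0;
        cudaGetDeviceCount(&count);       // device management
        printf("%d CUDA device(s) found\n", count);
        cudaSetDevice(0);                 // this host thread will use device 0

        float* d_buf = NULL;              // memory management
        cudaError_t err = cudaMalloc((void**)&d_buf, 1024 * sizeof(float));
        if (err != cudaSuccess) {         // error handling
            printf("cudaMalloc failed: %s\n", cudaGetErrorString(err));
            return 1;
        }
        cudaFree(d_buf);
        return 0;
    }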

21 Device Runtime Component: Synchronization Function
void __syncthreads();
- Synchronizes all threads in a block
- Once all threads have reached this point, execution resumes normally
- Used to avoid RAW / WAR / WAW hazards when accessing shared or global memory
- Allowed in conditional constructs only if the conditional is uniform across the entire thread block
Atomic operations?
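A hypothetical sketch (not from the slides) of the classic pattern __syncthreads() enables: every thread writes shared memory, the block synchronizes, and only then does any thread read a neighbor's value.

    __global__ void reverseBlock(float* data)
    {
        __shared__ float tile[256];      // assumes blockDim.x == 256
        int i = blockIdx.x * blockDim.x + threadIdx.x;

        tile[threadIdx.x] = data[i];     // write phase
        __syncthreads();                 // barrier: all writes land before any read
        data[i] = tile[blockDim.x - 1 - threadIdx.x];  // read phase, RAW hazard avoided
    }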
