
1 CUDA Programming continued ITCS 4145/5145 Nov 24, 2010 © Barry Wilkinson Revised

2 Timing GPU Execution

Can use CUDA "events" – create two events and compute the time between them:

    cudaEvent_t start, stop;
    float elapsedTime;
    cudaEventCreate(&start);                  // create event objects
    cudaEventCreate(&stop);
    cudaEventRecord(start, 0);                // record start event
    // ... work to be timed, e.g., a kernel launch ...
    cudaEventRecord(stop, 0);                 // record stop event
    cudaEventSynchronize(stop);               // wait for preceding work to complete
    cudaEventElapsedTime(&elapsedTime, start, stop);  // elapsed time (ms) between events
    cudaEventDestroy(start);                  // destroy start event
    cudaEventDestroy(stop);                   // destroy stop event


8 Host Synchronization

Kernels:
– Control returns to the CPU immediately (asynchronous, non-blocking).
– The kernel starts after all previous CUDA calls have completed.

cudaMemcpy:
– Returns after the copy is complete (synchronous).
– The copy starts after all previous CUDA calls have completed.
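
A minimal host-side sketch of this behavior (the kernel and buffer names myKernel, d_out, h_out, and n are assumptions for illustration):

    myKernel<<<grid, block>>>(d_out);        // asynchronous: control returns to CPU at once
    // ... CPU can do independent work here while the GPU computes ...
    cudaMemcpy(h_out, d_out, n * sizeof(float),
               cudaMemcpyDeviceToHost);      // synchronous: starts after the kernel
                                             // completes, returns when the copy is done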

9 CUDA Synchronization Routines

Host:
cudaThreadSynchronize() – blocks until all previous CUDA calls complete.

GPU:
void __syncthreads() – synchronizes all threads in a block. It is a barrier: no thread can pass until all threads in the block reach it, so every thread in the block must reach the __syncthreads() call.
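
As one common illustration (a sketch, not from the slides), __syncthreads() separates a shared-memory write phase from a read phase; without the barrier, some threads could read tile entries before other threads have written them:

    // Reverse each 256-element tile of d_data in place via shared memory.
    __global__ void reverseTile(int *d_data) {
        __shared__ int tile[256];                      // assumes blockDim.x == 256
        int t = threadIdx.x;
        int base = blockIdx.x * blockDim.x;
        tile[t] = d_data[base + t];                    // write phase
        __syncthreads();                               // barrier: all writes now visible
        d_data[base + t] = tile[blockDim.x - 1 - t];   // read phase
    }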

10 GPU Atomic Operations

Performs a read-modify-write atomic operation on one word residing in global or shared memory.
Associative operations on signed/unsigned integers: add, sub, min, max, and, or, xor, increment, decrement, exchange, compare-and-swap.
Requires a GPU with compute capability 1.1+ (shared-memory operations and 64-bit words require higher capability).
coit-grid06's Tesla C2050 has compute capability 2.0.
See http://www.nvidia.com/object/cuda_gpus.html for GPU compute capabilities.
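
Compute capability can be checked at run time with cudaGetDeviceProperties(); a minimal sketch:

    #include <stdio.h>

    int main(void) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, 0);   // query device 0
        printf("Compute capability: %d.%d\n", prop.major, prop.minor);
        return 0;
    }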

11 Atomic Operation Example

int atomicAdd(int* address, int val);

Reads old located at address address in global or shared memory, computes (old + val), and stores the result back to memory at the same address. These three operations (read, compute, and write) are performed in one atomic transaction.* The function returns old.

* Once started, the operation continues to completion without being interrupted by other processors. Other processors cannot read or write the memory location once the atomic operation starts. The mechanism is implemented in hardware.
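
A typical use (an illustrative sketch, not from the slides) is building a histogram, where many threads may increment the same bin concurrently:

    // 256-bin histogram of byte values; atomicAdd makes the concurrent
    // read-modify-write updates to shared bins safe.
    __global__ void histogram(int *d_bins, const unsigned char *d_in, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            atomicAdd(&d_bins[d_in[i]], 1);
    }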

12 Other operations

int atomicSub(int* address, int val);
int atomicExch(int* address, int val);
int atomicMin(int* address, int val);
int atomicMax(int* address, int val);
unsigned int atomicInc(unsigned int* address, unsigned int val);
unsigned int atomicDec(unsigned int* address, unsigned int val);
int atomicCAS(int* address, int compare, int val);  // compare and swap
int atomicAnd(int* address, int val);
int atomicOr(int* address, int val);
int atomicXor(int* address, int val);

Source: NVIDIA CUDA C Programming Guide, version 3.2, 11/9/2010

13 Compare and Swap (also called compare and exchange)

int atomicCAS(int* address, int compare, int val);

Reads the word old located at address address in global or shared memory and compares old with compare. If they are the same, it sets old to val (stores val at address address), i.e.:

    if (old == compare) old = val;  // else old is unchanged

The three operations (read, compare, and write) are performed in one atomic transaction. The function returns the original value of old. Unsigned and unsigned long long int versions also exist.
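
atomicCAS can serve as a building block for atomics the hardware lacks. As a sketch in the style of the programming guide's double-precision example (newer GPUs provide float atomicAdd natively), a float add can be built from a CAS retry loop:

    // Atomic float add built from atomicCAS: reinterpret the word as int,
    // attempt the swap, and retry if another thread changed the word first.
    __device__ float atomicAddFloat(float *address, float val) {
        int *addr_as_int = (int *)address;
        int old = *addr_as_int, assumed;
        do {
            assumed = old;
            old = atomicCAS(addr_as_int, assumed,
                            __float_as_int(val + __int_as_float(assumed)));
        } while (assumed != old);            // lost the race: retry with the new value
        return __int_as_float(old);
    }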

14 Coding Critical Sections with Locks

    __device__ int lock = 0;                    // unlocked

    __global__ void kernel(...) {
        ...
        do {} while (atomicCAS(&lock, 0, 1));   // if lock == 0, set it to 1 and continue
        ...                                     // critical section
        lock = 0;                               // free lock
    }
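
One caveat (not on the slide): if every thread of a warp spins on the same lock, SIMT execution can deadlock on pre-Volta hardware, because the warp may keep executing the spinning threads while the lock holder never reaches its release. A common workaround is to let one thread per block contend; a sketch, where the counter d_count is an assumed example:

    __device__ int lock = 0;

    __global__ void kernel(int *d_count) {
        if (threadIdx.x == 0) {                    // one thread per block takes the lock
            while (atomicCAS(&lock, 0, 1) != 0) ;  // spin until acquired
            *d_count += 1;                         // critical section
            __threadfence();                       // flush the write (see slides 15-16)
            atomicExch(&lock, 0);                  // release
        }
    }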

15 Memory Fences

Threads may see the effects of a series of writes to memory executed by another thread in different orders. To enforce ordering:

void __threadfence_block();

waits until all global and shared memory accesses made by the calling thread prior to __threadfence_block() are visible to all threads in the thread block.

Other routines:
void __threadfence();
void __threadfence_system();

16 Critical sections with memory operations

Writes to device memory are not guaranteed to complete in any particular order, so global writes may not have completed by the time the lock is unlocked:

    __global__ void kernel(...) {
        ...
        do {} while (atomicCAS(&lock, 0, 1));
        ...                                     // critical section
        __threadfence();                        // wait for writes to finish
        lock = 0;                               // free lock
    }

17 Error reporting

All CUDA calls (except kernel launches) return an error code of type cudaError_t.

cudaError_t cudaGetLastError(void)
Returns the code for the last error. Can be used to get errors from kernel execution.

char* cudaGetErrorString(cudaError_t code)
Returns a null-terminated character string describing the error.

Example:

    printf("%s\n", cudaGetErrorString(cudaGetLastError()));
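
A common wrapper pattern (a sketch, not from the slides) checks every call site and reports where a call failed:

    #include <stdio.h>
    #include <stdlib.h>

    // Abort with file/line context if a CUDA call returns an error.
    #define CUDA_CHECK(call)                                            \
        do {                                                            \
            cudaError_t err = (call);                                   \
            if (err != cudaSuccess) {                                   \
                fprintf(stderr, "CUDA error \"%s\" at %s:%d\n",         \
                        cudaGetErrorString(err), __FILE__, __LINE__);   \
                exit(EXIT_FAILURE);                                     \
            }                                                           \
        } while (0)

    // Usage: CUDA_CHECK(cudaMemcpy(h_out, d_out, bytes, cudaMemcpyDeviceToHost));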

18 Questions

