1 ECE408 Fall 2015 Applied Parallel Programming
Lecture 19: GPU System Architecture
© David Kirk/NVIDIA and Wen-mei W. Hwu, 2007-2012, ECE408/CS483, ECE 498AL, University of Illinois, Urbana-Champaign

2 Imaging for IoT
–Principles of a camera
–Design of a CMOS imaging system
–Mobile Imaging in Android / iOS
–Image processing basics
–Image / video coding
–Computer vision basics
–3D imaging
–Applications: robots, cars, drones, VR/AR, mobile devices, things
–Project: build something cool
Looking for ~4 students to help me create this course (juniors/seniors/MS students – CompE+DSP background)

3 Project Timeline
–Today: project proposal (5 slides max / PPT)
–Week of Nov 1st: project review #1 with staff
–Week of Nov 16th: project review #2 with staff
–Week of Dec 7th: demo to staff
–Week of Dec 7th: poster session

4 PCIe PC Architecture

5 Data Transfer Overheads
The overheads between key components ultimately dictate system performance.
–Especially true for massively parallel systems processing massive amounts of data
–Tricks like buffering, reordering, and caching can temporarily defy the rules in some cases
–Ultimately, performance falls back to what the data transfer architecture dictates

6 GeForce 7800 GTX Board Details
–256 MB / 256-bit DDR3, 600 MHz (8 pieces of 8Mx32)
–16x PCI-Express
–SLI connector
–DVI x 2
–sVideo TV out
–Single-slot cooling

7 PCIe Data Transfer Using DMA
DMA (Direct Memory Access) is used to fully utilize the bandwidth of an I/O bus.
–DMA uses physical addresses for source and destination
–Transfers the number of bytes requested by the OS
–But what about paging?
[Diagram: CPU and main memory (DRAM) on one side of PCIe; the GPU card (or other I/O card) with its DMA engine and global memory on the other]

8 Pinned Memory
–DMA uses physical addresses
–The OS could accidentally page out the data being read or written by a DMA and page another virtual page into the same location
–Pinned memory cannot be paged out
–If a source or destination of a cudaMemcpy() in host memory is not pinned, it must first be copied to a pinned buffer – extra overhead
–cudaMemcpy() is much faster when the host source or destination is pinned

9 Allocate/Free Pinned Memory (a.k.a. Page-Locked Memory)
cudaHostAlloc() – three parameters:
–Address of pointer to the allocated memory
–Size of the allocated memory in bytes
–Option – use cudaHostAllocDefault for now
cudaFreeHost() – one parameter:
–Pointer to the memory to be freed
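A minimal sketch of allocating and freeing pinned host memory with cudaHostAlloc() and cudaFreeHost(); the buffer size N and the error check are illustrative assumptions, not part of the slide.

#include <cuda_runtime.h>
#include <stdio.h>

int main() {
    const int N = 1 << 20;          /* example element count (assumed) */
    float *h_A = NULL;

    /* Allocate page-locked (pinned) host memory */
    cudaError_t err = cudaHostAlloc((void **)&h_A, N * sizeof(float), cudaHostAllocDefault);
    if (err != cudaSuccess) {
        printf("cudaHostAlloc failed: %s\n", cudaGetErrorString(err));
        return 1;
    }

    /* ... use h_A as a cudaMemcpy source/destination ... */

    cudaFreeHost(h_A);              /* free the pinned allocation */
    return 0;
}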

10 Using Pinned Memory
–Use the allocated memory and its pointer the same way as those returned by malloc()
–The only difference is that the allocated memory cannot be paged out by the OS
–cudaMemcpy() should be about 2X faster with pinned memory
–Pinned memory is a limited resource; over-subscribing it can have serious consequences
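A sketch (not from the slides) of how the pageable-vs-pinned difference can be measured with CUDA events; the 64 MB buffer size is an arbitrary assumption.

#include <cuda_runtime.h>
#include <stdio.h>
#include <stdlib.h>

/* Time one host-to-device copy of 'bytes' bytes and return milliseconds */
static float timeCopy(void *dst, const void *src, size_t bytes) {
    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);
    cudaEventRecord(start, 0);
    cudaMemcpy(dst, src, bytes, cudaMemcpyHostToDevice);
    cudaEventRecord(stop, 0);
    cudaEventSynchronize(stop);
    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    return ms;
}

int main() {
    size_t bytes = 64UL * 1024 * 1024;                 /* 64 MB, assumed */
    float *d_buf, *h_pageable, *h_pinned;
    cudaMalloc((void **)&d_buf, bytes);
    h_pageable = (float *)malloc(bytes);
    cudaHostAlloc((void **)&h_pinned, bytes, cudaHostAllocDefault);

    printf("pageable: %6.2f ms\n", timeCopy(d_buf, h_pageable, bytes));
    printf("pinned:   %6.2f ms\n", timeCopy(d_buf, h_pinned, bytes));

    cudaFree(d_buf);
    free(h_pageable);
    cudaFreeHost(h_pinned);
    return 0;
}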

11 Serialized Data Transfer and GPU Computation
So far, the way we use cudaMemcpy serializes data transfer and GPU computation:
[Timeline: Transfer A → Transfer B → Vector Add → Transfer C. Each transfer uses only one PCIe direction while the GPU sits idle, and PCIe sits idle during the computation.]

12 Overlapped (Pipelined) Timing
–Divide large vectors into segments
–Overlap transfer and compute of adjacent segments
[Timeline: Trans A.1, B.1 | Comp C.1 = A.1 + B.1 overlaps Trans A.2, B.2 | Trans C.1 overlaps Comp C.2 = A.2 + B.2 and Trans A.3, B.3 | Trans C.2 overlaps Comp C.3 = A.3 + B.3 and Trans A.4, B.4 | ...]

13 Using CUDA Streams and Asynchronous MemCpy
–CUDA supports parallel execution of kernels and cudaMemcpy with "streams"
–Each stream is a queue of operations (kernel launches and cudaMemcpy's)
–Operations (tasks) in different streams can go in parallel – "task parallelism"
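As a minimal sketch (names like myKernel, d_in, h_in, and bytes are placeholders, not from the slides), creating a stream and queuing an asynchronous copy, a kernel launch, and a copy back into it looks like this:

cudaStream_t stream;
cudaStreamCreate(&stream);

/* host buffers should be pinned for cudaMemcpyAsync to actually overlap */
cudaMemcpyAsync(d_in, h_in, bytes, cudaMemcpyHostToDevice, stream);
myKernel<<<blocks, threads, 0, stream>>>(d_in, d_out, n);   /* hypothetical kernel */
cudaMemcpyAsync(h_out, d_out, bytes, cudaMemcpyDeviceToHost, stream);

cudaStreamSynchronize(stream);   /* wait for everything queued in this stream */
cudaStreamDestroy(stream);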

14 Streams
Device requests made from the host code are put into a queue
–The queue is read and processed asynchronously by the driver and device
–The driver ensures that commands in the queue are processed in sequence: memory copies end before the kernel launch, etc.
[Diagram: the host thread pushes cudaMemcpy, kernel launch, and sync commands into a FIFO consumed by the device driver]

15 Streams (cont.)
To allow concurrent copying and kernel execution, you need to use multiple queues, called "streams"
–CUDA "events" allow the host thread to query and synchronize with the individual queues
[Diagram: the host thread feeds two driver queues, Stream 1 and Stream 2, with an event recorded in one of them]
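A hedged sketch of using a CUDA event to make one stream wait on a copy issued in another; the names (copied, stream0, stream1, consumerKernel) are illustrative, not from the slides:

cudaEvent_t copied;
cudaEventCreateWithFlags(&copied, cudaEventDisableTiming);

/* stage the input in stream0, then mark the point where the copy completes */
cudaMemcpyAsync(d_A, h_A, bytes, cudaMemcpyHostToDevice, stream0);
cudaEventRecord(copied, stream0);

/* stream1 will not start its kernel until the recorded event has completed */
cudaStreamWaitEvent(stream1, copied, 0);
consumerKernel<<<blocks, threads, 0, stream1>>>(d_A, d_out);   /* hypothetical kernel */

/* the host can also poll: cudaEventQuery(copied) returns cudaSuccess once done */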

16 Conceptual View of Streams
[Diagram: Stream 0 (MemCpy A.1, MemCpy B.1, Kernel 1, MemCpy C.1) and Stream 1 (MemCpy A.2, MemCpy B.2, Kernel 2, MemCpy C.2), each an ordered list of operations (kernels, memcpys), feed a Copy Engine (PCIe up/down) and a Kernel Engine]

17 A Simple Multi-Stream Host Code

cudaStream_t stream0, stream1;
cudaStreamCreate(&stream0);
cudaStreamCreate(&stream1);

float *d_A0, *d_B0, *d_C0;  // device memory for stream 0
float *d_A1, *d_B1, *d_C1;  // device memory for stream 1

// cudaMalloc calls for d_A0, d_B0, d_C0, d_A1, d_B1, d_C1 go here

for (int i=0; i<n; i+=SegSize*2) {
  cudaMemcpyAsync(d_A0, h_A+i, SegSize*sizeof(float), .., stream0);
  cudaMemcpyAsync(d_B0, h_B+i, SegSize*sizeof(float), .., stream0);
  vecAdd<<<SegSize/256, 256, 0, stream0>>>(d_A0, d_B0, …);
  cudaMemcpyAsync(h_C+i, d_C0, SegSize*sizeof(float), .., stream0);

18 A Simple Multi-Stream Host Code (cont.)

for (int i=0; i<n; i+=SegSize*2) {
  cudaMemcpyAsync(d_A0, h_A+i, SegSize*sizeof(float), .., stream0);
  cudaMemcpyAsync(d_B0, h_B+i, SegSize*sizeof(float), .., stream0);
  vecAdd<<<SegSize/256, 256, 0, stream0>>>(d_A0, d_B0, …);
  cudaMemcpyAsync(h_C+i, d_C0, SegSize*sizeof(float), .., stream0);

  cudaMemcpyAsync(d_A1, h_A+i+SegSize, SegSize*sizeof(float), .., stream1);
  cudaMemcpyAsync(d_B1, h_B+i+SegSize, SegSize*sizeof(float), .., stream1);
  vecAdd<<<SegSize/256, 256, 0, stream1>>>(d_A1, d_B1, …);
  cudaMemcpyAsync(h_C+i+SegSize, d_C1, SegSize*sizeof(float), .., stream1);
}

19 A View Closer to Reality
[Diagram: the hardware has one Copy Engine queue (MemCpy A.1, B.1, C.1, A.2, B.2, C.2 over PCIe up/down) and one Kernel Engine queue (Kernel 1, Kernel 2); the operations from Stream 0 and Stream 1 are multiplexed into these two engine queues]

20 Not Quite the Overlap We Want
C.1 blocks A.2 and B.2 in the copy engine queue.
[Timeline: Trans A.1, Trans B.1, then Comp C.1 = A.1 + B.1; Trans C.1 must finish before Trans A.2 and Trans B.2 can start, delaying Comp C.2 = A.2 + B.2]

21 A Better Multi-Stream Host Code

for (int i=0; i<n; i+=SegSize*2) {
  cudaMemcpyAsync(d_A0, h_A+i, SegSize*sizeof(float), .., stream0);
  cudaMemcpyAsync(d_B0, h_B+i, SegSize*sizeof(float), .., stream0);
  cudaMemcpyAsync(d_A1, h_A+i+SegSize, SegSize*sizeof(float), .., stream1);
  cudaMemcpyAsync(d_B1, h_B+i+SegSize, SegSize*sizeof(float), .., stream1);
  vecAdd<<<SegSize/256, 256, 0, stream0>>>(d_A0, d_B0, …);
  vecAdd<<<SegSize/256, 256, 0, stream1>>>(d_A1, d_B1, …);
  cudaMemcpyAsync(h_C+i, d_C0, SegSize*sizeof(float), .., stream0);
  cudaMemcpyAsync(h_C+i+SegSize, d_C1, SegSize*sizeof(float), .., stream1);
}
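After the loop, the host still has to wait for both streams to drain before touching h_C; a minimal way to do that (not shown on the slide) is:

cudaStreamSynchronize(stream0);   /* block until all work queued in stream0 has finished */
cudaStreamSynchronize(stream1);
/* h_C now holds all results; release the streams when done */
cudaStreamDestroy(stream0);
cudaStreamDestroy(stream1);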

22 A View Closer to Reality
[Diagram: with the reordered code, the Copy Engine queue becomes MemCpy A.1, B.1, A.2, B.2, then C.1, C.2, so the Kernel Engine can run Kernel 1 and Kernel 2 while the remaining copies proceed]

23 Overlapped (Pipelined) Timing
[Timeline, as on slide 12: the transfers of segment k+1 and the copy-back of segment k-1 overlap the computation Comp C.k = A.k + B.k, shown for segments 1 through 4]

24 Hyper Queue
–Provides multiple real hardware queues for each engine
–Allows much more concurrency by letting some streams make progress on an engine while others are blocked
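A small sketch (assumed, not from the slides) of checking at runtime how much copy/kernel concurrency a device exposes, using cudaGetDeviceProperties; assumes cuda_runtime.h and stdio.h are included:

cudaDeviceProp prop;
cudaGetDeviceProperties(&prop, 0);   /* query device 0 (assumed) */
printf("concurrent kernels supported: %d\n", prop.concurrentKernels);
printf("async copy engines:           %d\n", prop.asyncEngineCount);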

25 Fermi (and Older) Concurrency
–Fermi allows 16-way concurrency: up to 16 grids can run at once
–But CUDA streams multiplex into a single hardware work queue
–Overlap only at stream edges
[Diagram: Stream 1 (A--B--C), Stream 2 (P--Q--R), and Stream 3 (X--Y--Z) are serialized into one hardware work queue: A--B--C, P--Q--R, X--Y--Z]

26 Kepler Improved Concurrency
–Kepler allows 32-way concurrency
–One hardware work queue per stream
–Concurrency at full-stream level
–No inter-stream dependencies
[Diagram: Stream 1 (A--B--C), Stream 2 (P--Q--R), and Stream 3 (X--Y--Z) each feed their own hardware work queue]

27 GPU/CPU Integration
–AMD Trinity (Oct 2012)
–Qualcomm Snapdragon

