COMPUTER GRAPHICS CHAPTER 38 CS 482 – Fall 2017 GRAPHICS HARDWARE


1 COMPUTER GRAPHICS CHAPTER 38 CS 482 – Fall 2017 GRAPHICS HARDWARE
GRAPHICS PROCESSING UNITS PARALLELISM

2 GRAPHICS PROCESSING UNITS
HISTORICAL CONTEXT

Early 1990s: VGA controllers. Also known as "graphics accelerators," a Video Graphics Array (VGA) controller combined a memory controller and a display generator with attached DRAM. These controllers contained fixed-function capabilities for triangulation, rasterization, and texture mapping.

Early 2000s: First GPUs. With increased processing power being demanded, especially by the game industry, chip developers began adding enough processing power to GPUs to rival that of CPUs. The fixed-function capabilities were extended to include transformations, lighting, and shading.

Early 2010s: Modern GPUs. Fixed-function dedicated logic on these chips was replaced by programmable processors, and integer arithmetic was replaced with floating-point arithmetic. Parallelism was vastly increased on the chips, and instructions and memory began to be added to allow GPUs to be used for general-purpose programming, not just graphics.

3 GRAPHICS PROCESSING UNITS
DISTINGUISHING FEATURES

As instruction sets and memory expand on GPUs, they become increasingly capable of general-purpose processing, but there are still important differences between GPUs and CPUs.

GPU instruction sets are still rather narrowly defined, focusing on graphics acceleration.

GPU programming interfaces are high-level APIs like OpenGL and DirectX, together with high-level shading languages like Cg (C for Graphics) and HLSL (High Level Shader Language). These are supported by compilers that generate intermediate languages, which are then optimized by the GPU-specific driver software that generates the GPU's machine instructions.

Graphics processing involves many stages of operations, such as vertex shading, geometry shading, rasterization, and fragment shading, which are performed on a massively parallel scale in a pipelined fashion. Vertices can be drawn independently and fragments can be rendered independently, allowing computation to take place along many parallel threads of control, relying on those threads to hide latency rather than relying on caches to avoid latency.
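To make the last point concrete, here is a minimal CUDA sketch (an illustration, not from the chapter) of per-vertex parallelism: one thread transforms one vertex, and no thread depends on any other. The kernel name and row-major 4x4 matrix layout are assumptions, not a real graphics API.

#include <cuda_runtime.h>

// One thread per vertex: every vertex is transformed by a 4x4 matrix
// independently, so no inter-thread communication or ordering is needed.
__global__ void transformVertices(const float4 *in, float4 *out,
                                  const float *m, int numVertices)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= numVertices) return;

    float4 v = in[i];
    out[i] = make_float4(m[0]*v.x  + m[1]*v.y  + m[2]*v.z  + m[3]*v.w,
                         m[4]*v.x  + m[5]*v.y  + m[6]*v.z  + m[7]*v.w,
                         m[8]*v.x  + m[9]*v.y  + m[10]*v.z + m[11]*v.w,
                         m[12]*v.x + m[13]*v.y + m[14]*v.z + m[15]*v.w);
}

// Possible launch: transformVertices<<<(n + 255) / 256, 256>>>(dIn, dOut, dMatrix, n);

Because any vertex can be processed at any time, the hardware can keep thousands of such threads in flight to cover memory latency, exactly as the slide describes.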

4 GRAPHICS PROCESSING UNITS
LOW LATENCY VS. HIGH THROUGHPUT

CPUs are optimized to provide low-latency access to cached data sets: they contain control logic for out-of-order and speculative execution, and they may use tens of threads.

GPUs are optimized to provide data-parallel, high-throughput computation: they are tolerant of memory latency, dedicate many more transistors to computation, and may use thousands of threads.
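The contrast shows up even in a simple data-parallel operation. The following sketch (illustrative, not from the chapter; the function names are assumptions) computes y = a*x + y first as a single CPU loop that leans on the cache, then as a CUDA kernel that spreads the work over many threads whose sheer number hides memory latency.

#include <cuda_runtime.h>

// CPU version: one thread walks the array, relying on caches for speed.
void saxpyCpu(int n, float a, const float *x, float *y)
{
    for (int i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}

// GPU version: one lightweight thread per element; with thousands of threads
// in flight, the hardware tolerates memory latency instead of avoiding it.
__global__ void saxpyGpu(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

// Possible launch: saxpyGpu<<<(n + 255) / 256, 256>>>(n, 2.0f, dX, dY);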

5 GRAPHICS PROCESSING UNITS
THREAD AND MEMORY HIERARCHIES

A thread executes a kernel, using very fast memory registers.

Threads are grouped into blocks, which can synchronize execution; threads within a block collaborate by means of fast shared memory.

Thread blocks are grouped into grids; the blocks are independent (i.e., they can be executed in any order). I/O for grids is handled in slower (but much larger) global memory.
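A short CUDA sketch of this hierarchy (illustrative, not from the chapter; the kernel name and the fixed block size of 256 are assumptions): each thread keeps its own values in registers, each block cooperates through shared memory and a barrier, and the independent blocks of the grid write their results to global memory.

#include <cuda_runtime.h>

// Assumes it is launched with exactly 256 threads per block.
__global__ void blockSum(const float *in, float *blockResults, int n)
{
    __shared__ float partial[256];          // fast per-block shared memory

    int tid = threadIdx.x;                  // per-thread values live in registers
    int i   = blockIdx.x * blockDim.x + tid;

    partial[tid] = (i < n) ? in[i] : 0.0f;
    __syncthreads();                        // block-wide synchronization

    // Tree reduction within the block, synchronizing between steps.
    for (int stride = blockDim.x / 2; stride > 0; stride /= 2) {
        if (tid < stride)
            partial[tid] += partial[tid + stride];
        __syncthreads();
    }

    if (tid == 0)                           // one partial sum per independent block,
        blockResults[blockIdx.x] = partial[0];   // written to slower global memory
}

// Possible launch: blockSum<<<numBlocks, 256>>>(dIn, dBlockResults, n);
// The grid's blocks may run in any order, which is exactly the independence
// the slide describes.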

6 PARALLELISM PARALLEL RENDERING
When rendering was handled by serial processing managed by the CPU, graphical primitives were periodically fed to a GPU; modern programmers, however, must address the parallelism of multiple CPUs and GPUs.

In general, a multiprocessor-based parallel pipeline distributes geometry among several processors, whose results must ultimately be gathered together into the frame buffer.
[Pipeline diagram: Database → distribution → Geometry Processors → distribution → Raster Processors → composition → Frame Buffer]

Objects may be submitted to the frame buffer in any order, but they must be sorted as a last step before rasterization for two reasons:
1. Transparent objects need to be drawn back-to-front so that anything behind a transparent object shows through.
2. GPU state changes (uploading textures, activating lighting, etc.) can be expensive, so sorting all similar states together (everything with the same texture, everything that is lit, etc.) minimizes state changes, and the GPU takes less time to render the scene.
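A minimal host-side sketch of the second point, in CUDA-compatible C++ (the DrawItem fields and function name are hypothetical): opaque items are grouped by an expensive piece of state (here, a texture id), while transparent items go to the end and are ordered back-to-front by depth.

#include <algorithm>
#include <vector>

struct DrawItem {
    int   textureId;     // expensive-to-change state
    bool  transparent;
    float depth;         // distance from the camera
};

void sortForSubmission(std::vector<DrawItem> &items)
{
    std::sort(items.begin(), items.end(),
              [](const DrawItem &a, const DrawItem &b) {
                  if (a.transparent != b.transparent)
                      return !a.transparent;            // opaque first
                  if (!a.transparent)
                      return a.textureId < b.textureId; // group similar state
                  return a.depth > b.depth;             // back-to-front
              });
}

Grouping by state minimizes expensive switches, while the depth ordering of the transparent tail ensures that anything behind a transparent object shows through.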

7 PARALLELISM SORT-FIRST MULTIPROCESSOR-BASED ARCHITECTURE
Subdivide the frame buffer into tiles that are mapped to the available processors, distributing primitives to processors based upon their tiles. Each geometry processor is coupled with a rasterizer to form a complete rendering unit.
[Diagram: Database → sort before submission → Geometry Processors → Raster Processors → Frame Buffer, with the screen subdivided into regions P1–P4]

The sort takes place before submission to the geometry processors:
Subdivide the screen into regions, one per processor.
"Pre-transform" the primitives into screen coordinates via bounding boxes.
Distribute each primitive to the processor(s) owning the region(s) it covers.
Each processor then renders its own primitives; no communication is needed afterwards.

Advantage: This architecture can exploit frame-to-frame coherence, redistributing primitives to processors only when they move between screen regions.
Disadvantage: It is susceptible to load imbalances, since some portions of the screen may have many more things to render than other portions.
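A sketch of the pre-transform step (illustrative; the Vec2 struct, function name, and tiling scheme are assumptions): a primitive's screen-space bounding box determines which tiles, and therefore which rendering units, receive it.

#include <algorithm>

struct Vec2 { float x, y; };   // screen-space position

// Marks which tiles of a tilesX x tilesY subdivision of a screenW x screenH
// frame buffer a triangle's bounding box overlaps; each tile is owned by one
// geometry-processor/rasterizer pair. A primitive that spans several tiles
// is sent to several processors.
void tilesForTriangle(const Vec2 v[3],
                      int screenW, int screenH, int tilesX, int tilesY,
                      bool *overlaps /* tilesX * tilesY entries */)
{
    float minX = std::min({v[0].x, v[1].x, v[2].x});
    float maxX = std::max({v[0].x, v[1].x, v[2].x});
    float minY = std::min({v[0].y, v[1].y, v[2].y});
    float maxY = std::max({v[0].y, v[1].y, v[2].y});

    float tileW = float(screenW) / tilesX;
    float tileH = float(screenH) / tilesY;

    for (int ty = 0; ty < tilesY; ++ty)
        for (int tx = 0; tx < tilesX; ++tx)
            overlaps[ty * tilesX + tx] =
                maxX >= tx * tileW && minX < (tx + 1) * tileW &&
                maxY >= ty * tileH && minY < (ty + 1) * tileH;
}

Because the assignment depends only on the bounding box, it can be reused from frame to frame until an object crosses a tile boundary, which is the frame-to-frame coherence the slide mentions.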

8 PARALLELISM SORT-MIDDLE MULTIPROCESSOR-BASED ARCHITECTURE
Primitives are assigned arbitrarily to geometry processors, transformed into screen coordinates, sorted by screen region, and routed from the geometry processors to the rasterizers, each of which renders its own region. Fragments are then collected and assembled into the frame buffer.
[Diagram: Database → Geometry Processors (arbitrary assignment) → sort before submission to Raster Processors (regions P1–P4) → Frame Buffer]

Advantage: In this architecture, geometry can be distributed among processors without regard to the subdivision of the screen.
Disadvantage: Poor load distribution, since some areas of the screen may be relatively empty.
Disadvantage: Latency, since all processors must finish before the final image is composed.
Disadvantage: Order-dependent primitives (such as transparent objects) are difficult to accommodate, since fragments arrive for processing in nondeterministic order.
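The defining step is the mid-pipeline redistribution. A small illustrative sketch (the ScreenTriangle struct, function name, and the choice of vertical strip regions are assumptions): transformed primitives are binned into per-rasterizer work lists according to the screen strip their bounding box touches.

#include <algorithm>
#include <vector>

struct ScreenTriangle { float x[3], y[3]; };   // already in screen coordinates

// buckets[r] becomes the work list for raster processor r, which owns the
// r-th vertical strip of the screen. A primitive spanning several strips is
// routed to several rasterizers.
void routeToRasterizers(const std::vector<ScreenTriangle> &transformed,
                        int screenW, int numRegions,
                        std::vector<std::vector<ScreenTriangle>> &buckets)
{
    buckets.assign(numRegions, {});
    float stripW = float(screenW) / numRegions;

    for (const ScreenTriangle &t : transformed) {
        float minX = std::min({t.x[0], t.x[1], t.x[2]});
        float maxX = std::max({t.x[0], t.x[1], t.x[2]});
        int first = std::max(0, int(minX / stripW));
        int last  = std::min(numRegions - 1, int(maxX / stripW));
        for (int r = first; r <= last; ++r)
            buckets[r].push_back(t);
    }
}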

9 PARALLELISM SORT-LAST MULTIPROCESSOR-BASED ARCHITECTURE
Renderers are responsible for rendering a full-screen image using their share of the primitives: each processor is assigned a share of the primitives and renders them as a complete scene, as if it were the only renderer in the system.
[Diagram: Database → Geometry Processors (P1–P4) → Raster Processors (P1–P4) → sort during composition into the Frame Buffer via z-compositing]

The partial images are composited together, taking into account the distance of each pixel in each layer from the camera (z-compositing with the z-buffer), which guarantees that the results of the individual renderers are layered correctly.

Advantage: There is no requirement to sort or redistribute primitives; each renderer computes its image independently.
Disadvantage: It requires a high-bandwidth image compositor.
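A minimal CUDA sketch of the compositing step (illustrative; the kernel name and the separate color/depth buffer layout are assumptions): each partial image is merged into the composite pixel by pixel, keeping whichever sample is closer to the camera.

#include <cuda_runtime.h>

// Merges one renderer's partial image into the composite: for every pixel,
// the sample nearer the camera (smaller depth) wins.
__global__ void zComposite(const uchar4 *srcColor, const float *srcDepth,
                           uchar4 *dstColor, float *dstDepth, int numPixels)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= numPixels) return;

    if (srcDepth[i] < dstDepth[i]) {
        dstDepth[i] = srcDepth[i];
        dstColor[i] = srcColor[i];
    }
}

// The compositor runs this once per renderer's output, e.g.:
//   zComposite<<<(numPixels + 255) / 256, 256>>>(colorK, depthK,
//                                                finalColor, finalDepth, numPixels);

Every pixel of every partial image crosses the compositor each frame, which is why the slide notes the need for a high-bandwidth compositor.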

