Presentation on theme: "CSE 431 Computer Architecture Fall 2008 Chapter 7B: SIMDs, Vectors, and GPUs. Mary Jane Irwin (www.cse.psu.edu/~mji) [Adapted from Computer Organization and Design, Patterson & Hennessy]" — Presentation transcript:
2 Flynn's Classification Scheme
SISD – single instruction, single data stream
- aka uniprocessor – what we have been talking about all semester
SIMD – single instruction, multiple data streams
- single control unit broadcasting operations to multiple datapaths
MISD – multiple instruction, single data stream
- no such machine (although some people put vector machines in this category)
MIMD – multiple instruction, multiple data streams
- aka multiprocessors (SMPs, MPPs, clusters, NOWs)
Now obsolete terminology except for SIMD
3 SIMD Processors
Single control unit (one copy of the code)
Multiple datapaths (Processing Elements – PEs) running in parallel
- Q1 – PEs are interconnected (usually via a mesh or torus) and exchange/share data as directed by the control unit
- Q2 – Each PE performs the same operation on its own local data
4 Example SIMD Machines
Did SIMDs die out in the early 1990s??

Machine   | Maker             | Year | # PEs  | # b/PE | Max memory (MB) | PE clock (MHz) | System BW (MB/s)
Illiac IV | UIUC              | 1972 | 64     | 64     | 1               | 13             | 2,560
DAP       | ICL               | 1980 | 4,096  | 1      | 2               | 5              | 2,560
MPP       | Goodyear          | 1982 | 16,384 | 1      | 2               | 10             | 20,480
CM-2      | Thinking Machines | 1987 | 65,536 | 1      | 512             | 7              | 16,384
MP-1216   | MasPar            | 1989 | 16,384 | 4      | 1,024           | 25             | 23,000

No – the answer is that now they are EVERYWHERE
5 Multimedia SIMD Extensions
The most widely used variation of SIMD is found in almost every microprocessor today – as the basis of the MMX and SSE instructions added to improve the performance of multimedia programs
- A single, wide ALU is partitioned into many smaller ALUs that operate in parallel (e.g., one 32-bit adder can act as two 16-bit adders or four 8-bit adders)
- Loads and stores are simply as wide as the widest ALU, so the same data transfer can move one 32-bit value, two 16-bit values, or four 8-bit values
- There are now hundreds of SSE instructions in the x86 to support multimedia operations
6 Vector Processors
A vector processor (e.g., Cray) pipelines the ALUs to get good performance at lower cost. A key feature is a set of vector registers to hold the operands and results.
- Collect the data elements from memory, put them in order into a large set of registers, operate on them sequentially in registers, and then write the results back to memory
- They formed the basis of supercomputers in the 1980s and 90s
Consider extending the MIPS instruction set (VMIPS) to include vector instructions, e.g.,
- addv.d to add two double-precision vector register values
- addvs.d and mulvs.d to add (or multiply) a scalar register to (by) each element in a vector register
- lv and sv do vector load and vector store – they load or store an entire vector of double-precision data
7 MIPS vs VMIPS DAXPY Codes: Y = a × X + Y

        l.d   $f0,a($sp)      ;load scalar a
        addiu $t1,$s0,#512    ;upper bound to load to
loop:   l.d   $f2,0($s0)      ;load X(i)
        mul.d $f2,$f2,$f0     ;a × X(i)
        l.d   $f4,0($s1)      ;load Y(i)
        add.d $f4,$f4,$f2     ;a × X(i) + Y(i)
        s.d   $f4,0($s1)      ;store into Y(i)
        addiu $s0,$s0,#8      ;increment X index
        addiu $s1,$s1,#8      ;increment Y index
        subu  $t0,$t1,$s0     ;compute bound
        bne   $t0,$zero,loop  ;check if done

For class handout
8 MIPS vs VMIPS DAXPY Codes: Y = a × X + Y

MIPS:
        l.d   $f0,a($sp)      ;load scalar a
        addiu $t1,$s0,#512    ;upper bound to load to
loop:   l.d   $f2,0($s0)      ;load X(i)
        mul.d $f2,$f2,$f0     ;a × X(i)
        l.d   $f4,0($s1)      ;load Y(i)
        add.d $f4,$f4,$f2     ;a × X(i) + Y(i)
        s.d   $f4,0($s1)      ;store into Y(i)
        addiu $s0,$s0,#8      ;increment X index
        addiu $s1,$s1,#8      ;increment Y index
        subu  $t0,$t1,$s0     ;compute bound
        bne   $t0,$zero,loop  ;check if done

VMIPS:
        l.d     $f0,a($sp)    ;load scalar a
        lv      $v1,0($s0)    ;load vector X
        mulvs.d $v2,$v1,$f0   ;vector-scalar multiply
        lv      $v3,0($s1)    ;load vector Y
        addv.d  $v4,$v2,$v3   ;add Y to a × X
        sv      $v4,0($s1)    ;store vector result

For lecture
DAXPY – Double-precision a × X Plus Y – forms the inner loop of the Linpack benchmark.
Assume that the starting address of X is in $s0 and that of Y is in $s1.
9 Vector versus Scalar
Instruction fetch and decode bandwidth is dramatically reduced (also saves power)
- Only six instructions in VMIPS versus almost 600 in MIPS for a 64-element DAXPY
Hardware doesn't have to check for data hazards within a vector instruction. A vector instruction will only stall for the first element; subsequent elements flow smoothly down the pipeline. And control hazards are nonexistent.
- MIPS stall frequency is about 64 times higher than VMIPS for DAXPY
Easier to write code for data-level parallel apps
Have a known access pattern to memory, so heavily interleaved memory banks work well. The cost of latency to memory is seen only once for the entire vector.
Recent announcements from Intel suggest that vectors will play a bigger role in commodity processors. Intel's Advanced Vector Extensions (AVX), to arrive in 2010, will expand the width of the SSE registers from 128 bits to 256 bits and eventually to 1024 bits (16 double-precision floating-point numbers). And Larrabee is reputed to have vector instructions.
10 Example Vector Machines

Machine         | Maker | Year | Peak perf.       | # vector processors | PE clock (MHz)
STAR-100        | CDC   | 1970 | ??               | ??                  | ??
ASC             | TI    | ??   | 20 MFLOPS        | 1, 2, or 4          | 16
Cray 1          | Cray  | 1976 | 80 to 240 MFLOPS | 1                   | 80
Cray Y-MP       | Cray  | 1988 | 333 MFLOPS       | 2, 4, or 8          | 167
Earth Simulator | NEC   | 2002 | 35.86 TFLOPS     | 8                   | ??

Did vector machines die out in the late 1990s??
11 The PS3 "Cell" Processor Architecture
Composed of a non-SMP architecture (234M transistors, 4 GHz)
- 1 Power Processing Element (PPE) "control" processor. The PPE is similar to a Xenon core
  - Slight ISA differences, and fine-grained MT instead of real SMT
- 8 "Synergistic" (SIMD) Processing Elements (SPEs). The real compute power and differences lie in the SPEs (21M transistors each)
  - An attempt to 'fix' the memory latency problem by giving each SPE complete control over its own 256KB "scratchpad" memory – 14M transistors
  - Direct mapped for low latency
  - 4 vector units per SPE, 1 of everything else – 7M transistors
- 512KB L2$ and a massively high-bandwidth (200GB/s) processor–memory bus

Marketing-related info: the PPE is /so/ similar to the Xenon that, other than some specialized SIMD instructions, code is nearly compatible. (Instruction length also differs, but that's a 'minor' issue.) What really matters is that Microsoft has a real leg up on the 'mental pull' to developers. The reason is that code developed on the Xenon will compile and run, with very few modifications, on the PPE of the Cell. The Xenon has 3 "PPE-style" processors, allowing the primary development path to be MS-based. After all, once you get the game working with the much more comfortable Xenon architecture, you can then try to put some rough segments onto the SPEs and hope for some speedup. The trick is that this way, most of the development time will be in Xenon-native development rather than Cell-native. This gives the dev team more time to optimize the Xenon code and, more importantly, tends to increase the amount of code that will eventually run on the PPE.
A full Cell development process would start with the SPE sub-programs, but since that isn't a portable development process on either the Xbox or the Revolution, MS is hoping developers won't use it. By short-circuiting the PS3 development process with such a compatible and comfortable platform, MS is hoping to reduce utilization of the SPEs and encourage over-reliance on the PPE, reducing the Cell's functional utilization.
12 How to Make Use of the SPEs
Note that this process requires 8 SPEs, but only 7 are enabled in the PS3's Cell. As such, some routines must be run on the same SPE, resulting in lower performance.
Also note that the memory subsystem on your average desktop machine delivers around 6.5 GB/s, and the graphics memory on a high-end video card gives maybe 25 GB/s. The bus transmitting all of this data gives 200 GB/s – enough for the PPE and all 7 SPEs to run at 25 GB/s each on the EIB (Element Interconnect Bus), which is what allows all of this performance to happen.
That bus is a 3-segment, 96 B/cycle bus, and really is the backbone of the design. Without it, none of this would matter.
13 What about the Software?
Uses a special IBM "Hypervisor"
- Like an OS for OSes
- Runs both a real-time OS (for sound) and a non-real-time OS (for things like AI)
Software must be specially coded to run well
- The single PPE will be quickly bogged down
- Must make use of SPEs wherever possible
- This isn't easy, by any standard
What about Microsoft?
- Development suite identifies which 6 threads you're expected to run
- Four of them are DirectX based, and handled by the OS
- Only need to write two threads, functionally
14 Graphics Processing Units (GPUs)
GPUs are accelerators that supplement a CPU, so they do not need to be able to perform all of the tasks of a CPU. They dedicate all of their resources to graphics.
- CPU-GPU combination – heterogeneous multiprocessing
Programming interfaces are free from backward binary compatibility constraints, resulting in more rapid innovation in GPUs than in CPUs
- Application programming interfaces (APIs) such as OpenGL and DirectX, coupled with high-level graphics shading languages such as NVIDIA's Cg and CUDA and Microsoft's HLSL
GPU data types are vertices – (x, y, z, w) coordinates – and pixels – (red, green, blue, alpha) color components
GPUs execute many threads (e.g., vertex and pixel shading) in parallel – lots of data-level parallelism
15 Typical GPU Architecture Features
Rely on having enough threads to hide the latency to memory (not caches, as in CPUs)
- Each GPU is highly multithreaded
Use extensive parallelism to get high performance
- Have an extensive set of SIMD instructions; moving towards multicore
Main memory is bandwidth, not latency, driven
- GPU DRAMs are wider and have higher bandwidth, but are typically smaller, than CPU memories
Leaders in the marketplace (in 2008)
- NVIDIA GeForce 8800 GTX (16 multiprocessors, each with 8 multithreaded processing units)
- AMD's ATI Radeon and ATI FireGL
- Watch out for Intel's Larrabee
- GPGPUs as well
16 Multicore Xbox360 – "Xenon" Processor
To provide game developers with a balanced and powerful platform
Three SMT processors, 32KB L1 D$ & I$, 1MB UL2 cache
- 165M transistors total
- 3.2 GHz "near" POWER ISA
- 2-issue, 21-stage pipeline, with 128 registers
- Weak branch prediction – supported by software hinting
- In-order instruction issue
- Narrow cores – 2 INT units, 128-bit VMX units, 1 of anything else
An ATI-designed 500 MHz GPU w/ 512MB of DDR3 DRAM
- 337M transistors, 10MB framebuffer
- 48 pixel shader cores, each with 4 ALUs

Things to note: the 32-bit POWER ISA supports 32 registers natively. Moving to 128 registers requires 'cramming' 7-bit register operands in. No one knows how they do it, but it's quirky. The branch predictor is quite simple – my guess is that it's either a 1-bit predictor or a small 2-bit predictor. Microsoft has presented a number of papers on how software-hinted and compiler-supported branch prediction can help.
A "VMX" unit is the colloquial term for the on-board SIMD operations, similar to AltiVec. This one is custom modified to support Direct3D data format packing and unpacking.
Other notes: the GPU is twice as big as the CPU. The 10MB framebuffer is an off-chip high-speed memory explicitly for full-screen anti-aliasing. In FSAA, you need to do 5 reads and 1 write per pixel, which quickly floods any memory subsystem. Instead, they build it into the framebuffer itself, which is a very fast little chip that does nothing but hold the image and smooth it out.
17 Xenon Block Diagram
[Block diagram: three cores (each with L1 D$ and L1 I$) sharing a 1MB UL2 cache; an XMA decoder; a BIU/IO interface and SMC connecting the DVD, HDD port, front USBs (2), wireless, MU ports (2 USBs), rear USB (1), Ethernet, IR, audio out, flash, and systems control; the GPU (3D core, 10MB EDRAM framebuffer, memory controllers MC0/MC1) drives 512MB of DRAM and the analog video-out chip]

It is important to note the way data can be streamed from the L2 cache to the GPU. In particular, the L2 can have banks 'locked' away from normal use and allowed direct FIFO access to the GPU. This lets the processor stream data into the GPU very efficiently, without clogging up the cache, while ensuring optimal bandwidth usage. This is especially useful in "procedural synthesis", where a template object (such as a tree) is programmatically modified slightly each time it is drawn, to make it look natural. The locked cache allows FIFO streaming of such objects to the GPU without reducing available bandwidth to the processor and without trashing the cache. Also of note: if you run two of the three processors at full tilt, that's just enough to feed the GPU at full rate. The system was meant for 6 threads, four of which are graphics threads doing procedural synthesis and the like.
18 Next Lecture and Reminders
Next lecture: multiprocessor network topologies
- Reading assignment – PH, Chapter PH
Reminders
- HW6 out November 13th and due December 11th
- Check grade posting on-line (by your midterm exam number) for correctness
- Second evening midterm exam scheduled: Tuesday, November 18, 20:15 to 22:15, Location 262 Willard
- Please let me know ASAP (via email) if you have a conflict