Lecture 29, Computer Architecture. From: Supercomputing in Plain English, Part VII: Multicore Madness. Henry Neeman, Director, OU Supercomputing Center for Education & Research.


Lecture 29, Computer Architecture. From: Supercomputing in Plain English, Part VII: Multicore Madness. Henry Neeman, Director, OU Supercomputing Center for Education & Research, University of Oklahoma. Wednesday October

Outline: The March of Progress; Multicore/Many-core Basics; Software Strategies for Multicore/Many-core; A Concrete Example: Weather Forecasting.

The March of Progress

OU's TeraFLOP Cluster, 2002 (boomer.oscer.ou.edu), one of the first Pentium4 clusters: lbs per rack; 270 Pentium4 Xeon CPUs, 2.0 GHz, 512 KB L2 cache; 270 GB RAM, 400 MHz FSB; 8 TB disk; Myrinet2000 interconnect; 100 Mbps Ethernet interconnect; OS: Red Hat Linux. Peak speed: 1.08 TFLOP/s (1.08 trillion calculations per second).

TeraFLOP, Prototype 2006, Sale years from room to chip!

Moore's Law In 1965, Gordon Moore was an engineer at Fairchild Semiconductor. He noticed that the number of transistors that could be squeezed onto a chip was doubling about every 18 months. It turns out that computer speed is roughly proportional to the number of transistors per unit area. Moore wrote a paper about this concept, which became known as "Moore's Law."

Moore's Law in Practice [chart: log(Speed) vs. Year, with curves added in turn for CPU, Network Bandwidth, RAM, 1/Network Latency, and Software]

Fastest Supercomputer vs. Moore

The Tyranny of the Storage Hierarchy

The Storage Hierarchy (from fast, expensive, and few to slow, cheap, and plentiful): Registers; Cache memory; Main memory (RAM); Hard disk; Removable media (e.g., DVD); Internet. [5] [6]

RAM is Slow: The speed of data transfer between main memory and the CPU is much slower than the speed of calculating, so the CPU spends most of its time waiting for data to come in or go out. [diagram: the CPU can consume 351 GB/sec [7], but RAM delivers only about 3% of that [9]; this is the bottleneck.]

Why Have Cache? Cache is nearly the same speed as the CPU, so the CPU doesn't have to wait nearly as long for stuff that's already in cache: it can do more operations per second! [diagram: CPU 351 GB/sec [7]; cache 253 GB/sec [8] (72%); RAM about 3% of the CPU rate [9].]
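
A quick consistency check on those bandwidth figures (my arithmetic, not numbers from the slide): the cache figure is about 72% of the CPU's consumption rate, while RAM at roughly 3% of it works out to only about 10 GB/sec, so cache can feed the CPU on the order of 20 to 25 times faster than RAM can:

\[ \frac{253}{351} \approx 0.72, \qquad 0.03 \times 351 \approx 10.5\ \mathrm{GB/s} \]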

Storage Use Strategies Register reuse: Do a lot of work on the same data before working on new data. Cache reuse: The program is much more efficient if all of the data and instructions fit in cache; if not, try to use what's in cache a lot before using anything that isn't in cache. Data locality: Try to access data that are near each other in memory before data that are far. I/O efficiency: Do a bunch of I/O all at once rather than a little bit at a time; don't mix calculations and I/O.
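
To make the data-locality point concrete, here is a minimal Fortran sketch (not from the original slides; the routine names are made up). Fortran stores arrays column-major, so the first version walks memory with stride 1 and reuses each cache line fully, while the second jumps nr elements at a time and wastes most of each line it fetches.

SUBROUTINE sum_stride1 (a, nr, nc, total)
  ! Good locality: the inner loop runs down a column, which is contiguous
  ! in memory for a Fortran array, so each cache line is fully reused.
  IMPLICIT NONE
  INTEGER,INTENT(IN)               :: nr, nc
  REAL,DIMENSION(nr,nc),INTENT(IN) :: a
  REAL,INTENT(OUT)                 :: total
  INTEGER :: r, c
  total = 0.0
  DO c = 1, nc
    DO r = 1, nr
      total = total + a(r,c)
    END DO
  END DO
END SUBROUTINE sum_stride1

SUBROUTINE sum_strided (a, nr, nc, total)
  ! Poor locality: the inner loop jumps nr elements at a time, so most of
  ! each cache line that is fetched goes unused before it is evicted.
  IMPLICIT NONE
  INTEGER,INTENT(IN)               :: nr, nc
  REAL,DIMENSION(nr,nc),INTENT(IN) :: a
  REAL,INTENT(OUT)                 :: total
  INTEGER :: r, c
  total = 0.0
  DO r = 1, nr
    DO c = 1, nc
      total = total + a(r,c)
    END DO
  END DO
END SUBROUTINE sum_strided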

A Concrete Example: OSCER's big cluster, topdawg, has Irwindale CPUs: single core, 3.2 GHz, 800 MHz Front Side Bus. The theoretical peak CPU speed is 6.4 GFLOP/s (double precision) per CPU, and in practice we've gotten as high as 94% of that. So, in theory each CPU could consume 143 GB/sec. The theoretical peak RAM bandwidth is 6.4 GB/sec, but in practice we get about half that. So, any code that does fewer than 45 calculations per byte transferred between RAM and cache has its speed limited by RAM bandwidth.
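
Where the 45 comes from (my arithmetic from the slide's own numbers): the CPU could consume about 143 GB/sec, while RAM actually delivers only about half of 6.4 GB/sec, so

\[ \frac{143\ \mathrm{GB/s}}{0.5 \times 6.4\ \mathrm{GB/s}} \approx 45, \]

which is why a code needs roughly 45 or more calculations per byte moved between RAM and cache to avoid being limited by RAM bandwidth.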

Good Cache Reuse Example

A Sample Application: Matrix-Matrix Multiply. Let A, B and C be matrices of sizes nr × nc, nr × nq and nq × nc, respectively. The definition of A = B C is
A(r,c) = Σ_{q=1..nq} B(r,q) · C(q,c)   for r ∈ {1, …, nr}, c ∈ {1, …, nc}.

Matrix Multiply: Naïve Version

SUBROUTINE matrix_matrix_mult_naive (dst, src1, src2, &
 &                                   nr, nc, nq)
  IMPLICIT NONE
  INTEGER,INTENT(IN) :: nr, nc, nq
  REAL,DIMENSION(nr,nc),INTENT(OUT) :: dst
  REAL,DIMENSION(nr,nq),INTENT(IN)  :: src1
  REAL,DIMENSION(nq,nc),INTENT(IN)  :: src2
  INTEGER :: r, c, q
  DO c = 1, nc
    DO r = 1, nr
      dst(r,c) = 0.0
      DO q = 1, nq
        dst(r,c) = dst(r,c) + src1(r,q) * src2(q,c)
      END DO
    END DO
  END DO
END SUBROUTINE matrix_matrix_mult_naive
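
A minimal driver for the naïve routine (my sketch, not part of the original slides; the sizes and random fill are arbitrary). Compile it in the same source file as, or alongside, the subroutine above.

PROGRAM test_matmul_naive
  IMPLICIT NONE
  INTEGER,PARAMETER :: nr = 4, nc = 5, nq = 3
  REAL,DIMENSION(nr,nc) :: dst
  REAL,DIMENSION(nr,nq) :: src1
  REAL,DIMENSION(nq,nc) :: src2
  CALL RANDOM_NUMBER(src1)          ! fill the inputs with values in [0,1)
  CALL RANDOM_NUMBER(src2)
  CALL matrix_matrix_mult_naive(dst, src1, src2, nr, nc, nq)
  PRINT *, 'dst(1,1) =', dst(1,1), '  dst(nr,nc) =', dst(nr,nc)   ! spot-check
END PROGRAM test_matmul_naive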

Performance of Matrix Multiply [performance chart; the "Better" arrow marks the favorable direction]

Tiling

Tiling Tile: A small rectangular subdomain of a problem domain. Sometimes called a block or a chunk. Tiling: Breaking the domain into tiles. Tiling strategy: Operate on each tile to completion, then move to the next tile. Tile size can be set at runtime, according to what's best for the machine that you're running on.

Tiling Code

SUBROUTINE matrix_matrix_mult_by_tiling (dst, src1, src2, nr, nc, nq, &
 &                                       rtilesize, ctilesize, qtilesize)
  IMPLICIT NONE
  INTEGER,INTENT(IN) :: nr, nc, nq
  REAL,DIMENSION(nr,nc),INTENT(OUT) :: dst
  REAL,DIMENSION(nr,nq),INTENT(IN)  :: src1
  REAL,DIMENSION(nq,nc),INTENT(IN)  :: src2
  INTEGER,INTENT(IN) :: rtilesize, ctilesize, qtilesize
  INTEGER :: rstart, rend, cstart, cend, qstart, qend
  DO cstart = 1, nc, ctilesize
    cend = cstart + ctilesize - 1
    IF (cend > nc) cend = nc
    DO rstart = 1, nr, rtilesize
      rend = rstart + rtilesize - 1
      IF (rend > nr) rend = nr
      DO qstart = 1, nq, qtilesize
        qend = qstart + qtilesize - 1
        IF (qend > nq) qend = nq
        CALL matrix_matrix_mult_tile(dst, src1, src2, nr, nc, nq, &
 &                                   rstart, rend, cstart, cend, qstart, qend)
      END DO
    END DO
  END DO
END SUBROUTINE matrix_matrix_mult_by_tiling

Multiplying Within a Tile

SUBROUTINE matrix_matrix_mult_tile (dst, src1, src2, nr, nc, nq, &
 &                                  rstart, rend, cstart, cend, qstart, qend)
  IMPLICIT NONE
  INTEGER,INTENT(IN) :: nr, nc, nq
  REAL,DIMENSION(nr,nc),INTENT(INOUT) :: dst   ! accumulated across q-tiles
  REAL,DIMENSION(nr,nq),INTENT(IN)    :: src1
  REAL,DIMENSION(nq,nc),INTENT(IN)    :: src2
  INTEGER,INTENT(IN) :: rstart, rend, cstart, cend, qstart, qend
  INTEGER :: r, c, q
  DO c = cstart, cend
    DO r = rstart, rend
      IF (qstart == 1) dst(r,c) = 0.0          ! initialize only on the first q-tile
      DO q = qstart, qend
        dst(r,c) = dst(r,c) + src1(r,q) * src2(q,c)
      END DO
    END DO
  END DO
END SUBROUTINE matrix_matrix_mult_tile

Reminder: Naïve Version, Again. (This slide repeats the matrix_matrix_mult_naive routine shown earlier, for comparison with the tiled version.)

Performance with Tiling [performance chart; the "Better" arrow marks the favorable direction]

The Advantages of Tiling It allows your code to exploit data locality better, to get much more cache reuse: your code runs faster! It's a relatively modest amount of extra coding (typically a few wrapper functions and some changes to loop bounds). If you don't need tiling – because of the hardware, the compiler or the problem size – then you can turn it off by simply setting the tile size equal to the problem size.
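
For example, a call with modest tile sizes, and the same call with tiling effectively turned off (the tile sizes here are illustrative, not tuned for any particular machine):

! Tiled multiply with 64 x 64 x 64 tiles (illustrative sizes):
CALL matrix_matrix_mult_by_tiling(dst, src1, src2, nr, nc, nq, 64, 64, 64)
! Tiling "off": one tile as big as the whole problem, equivalent to the naive version:
CALL matrix_matrix_mult_by_tiling(dst, src1, src2, nr, nc, nq, nr, nc, nq)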

Why Does Tiling Work Here? Cache optimization works best when the number of calculations per byte is large. For example, with matrix-matrix multiply on an n × n matrix, there are O(n³) calculations (on the order of n³), but only O(n²) bytes of data. So, for large n, there are a huge number of calculations per byte transferred between RAM and cache.
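
Roughly quantifying that (my arithmetic, assuming the 4-byte REALs used in the code above): an n × n multiply performs about 2n³ calculations (one multiply and one add per inner-loop iteration) on about 3n² matrix entries, i.e. 12n² bytes, so

\[ \frac{2n^{3}}{12n^{2}} = \frac{n}{6} \ \text{calculations per byte}, \]

which for n = 1000 is already about 167, far above the roughly 45 needed in the earlier topdawg example.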

Multicore/Many-core Basics

What is Multicore? In the olden days (i.e., the first half of 2005), each CPU chip had one "brain" in it. More recently, each CPU chip has 2 cores (brains), and, starting in late 2006, 4 cores. Jargon: each CPU chip plugs into a socket, so these days, to avoid confusion, people refer to sockets and cores rather than CPUs or processors. Each core is just like a full-blown CPU, except that it shares its socket with one or more other cores, and therefore shares its bandwidth to RAM.

Dual Core / Quad Core / Oct Core [diagrams of CPU chips containing 2, 4, and 8 cores sharing one socket]

The Challenge of Multicore: RAM. Each socket has access to a certain amount of RAM, at a fixed RAM bandwidth per SOCKET. As the number of cores per socket increases, the contention for RAM bandwidth increases too. At 2 cores in a socket, this problem isn't too bad. But at 16 or 32 or 80 cores, it's a huge problem. So, applications that are cache optimized will get big speedups. But applications whose performance is limited by RAM bandwidth will speed up only as fast as RAM bandwidth speeds up, and RAM bandwidth improves much more slowly than CPU speed does.
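
A back-of-the-envelope illustration (the 6.4 GB/sec figure is the topdawg number from earlier; the core counts are hypothetical): if a socket's RAM bandwidth stays fixed at about 6.4 GB/sec, then

\[ \frac{6.4\ \mathrm{GB/s}}{2\ \text{cores}} = 3.2\ \mathrm{GB/s\ per\ core}, \qquad \frac{6.4\ \mathrm{GB/s}}{16\ \text{cores}} = 0.4\ \mathrm{GB/s\ per\ core}. \]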

The Challenge of Multicore: Network. Each node has access to a certain number of network ports, at a fixed number of network ports per NODE. As the number of cores per node increases, the contention for network ports increases too. At 2 cores in a socket, this problem isn't too bad. But at 16 or 32 or 80 cores, it's a huge problem. So, applications that do minimal communication will get big speedups. But applications whose performance is limited by the number of MPI messages will speed up very, very little, and may even crash the node.

Multicore/Many-core Problem. Most multicore chip families have relatively small cache per core (e.g., 2 MB), and this problem seems likely to remain. Small TLBs make the problem worse: in effect only 512 KB per core is covered, rather than the full 2 MB. So, to get good cache reuse, you need to partition your algorithm so that each subproblem needs no more than 512 KB.
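
Tying this back to the tiling code (my arithmetic, not from the slides): with 4-byte REALs, keeping one tile each of dst, src1 and src2 within a 512 KB footprint requires a tile size t with

\[ 3 \times 4\ \text{bytes} \times t^{2} \le 512\ \text{KB} \;\Rightarrow\; t \lesssim \sqrt{524288 / 12} \approx 209, \]

so tiles of roughly 200 × 200 or smaller would be a reasonable starting point on such a chip.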

The T.L.B. on a Current Chip: On Intel Core Duo ("Yonah"): cache size is 2 MB per core; page size is 4 KB; a core's data TLB size is 128 page table entries. Therefore, the D-TLB only covers 512 KB of cache. The cost of a TLB miss is 49 cycles, equivalent to as many as 196 calculations! (4 FLOPs per cycle)
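
Both numbers follow directly from the figures on the slide:

\[ 128\ \text{entries} \times 4\ \text{KB/page} = 512\ \text{KB}, \qquad 49\ \text{cycles} \times 4\ \text{FLOPs/cycle} = 196\ \text{FLOPs}. \]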

What Do We Need? We need much bigger caches! TLB must be big enough to cover the entire cache. It'd be nice to have RAM speed increase as fast as core counts increase, but let's not kid ourselves.

To Learn More Supercomputing

Computer Architecture Fall 2007, Lecture 29: CMPs & SMTs. Adapted from Mary Jane Irwin [adapted from Computer Organization and Design, Patterson & Hennessy, © 2005].

Multithreading on a Chip
- Find a way to "hide" true data dependency stalls, cache miss stalls, and branch stalls by finding instructions (from other process threads) that are independent of those stalling instructions
- Multithreading: increase the utilization of resources on a chip by allowing multiple processes (threads) to share the functional units of a single processor
  - The processor must duplicate the state hardware for each thread: a separate register file, PC, instruction buffer, and store buffer per thread
  - The caches, TLBs, BHT, and BTB can be shared (although the miss rates may increase if they are not sized accordingly)
  - Memory can be shared through virtual memory mechanisms
  - Hardware must support efficient thread context switching

Types of Multithreading on a Chip
- Fine-grain: switch threads on every instruction issue
  - Round-robin thread interleaving (skipping stalled threads)
  - The processor must be able to switch threads on every clock cycle
  - Advantage: can hide throughput losses that come from both short and long stalls
  - Disadvantage: slows down the execution of an individual thread, since a thread that is ready to execute without stalls is delayed by instructions from other threads
- Coarse-grain: switch threads only on costly stalls (e.g., L2 cache misses)
  - Advantages: thread switching doesn't have to be essentially free, and it is much less likely to slow down the execution of an individual thread
  - Disadvantage: limited, due to pipeline start-up costs, in its ability to overcome throughput loss, since the pipeline must be flushed and refilled on thread switches
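
A toy sketch of the two switching policies (hypothetical routines, written in the same Fortran used earlier in these notes, purely to illustrate the policies rather than any real hardware):

SUBROUTINE pick_next_thread_fine_grain (nthreads, ready, current)
  ! Fine-grain: rotate to the next ready thread every cycle (round-robin,
  ! skipping stalled threads). If nothing is ready, stay put and stall.
  IMPLICIT NONE
  INTEGER,INTENT(IN)                     :: nthreads
  LOGICAL,DIMENSION(nthreads),INTENT(IN) :: ready
  INTEGER,INTENT(INOUT)                  :: current
  INTEGER :: tries, candidate
  DO tries = 1, nthreads
    candidate = MOD(current + tries - 1, nthreads) + 1
    IF (ready(candidate)) THEN
      current = candidate
      RETURN
    END IF
  END DO
END SUBROUTINE pick_next_thread_fine_grain

SUBROUTINE pick_next_thread_coarse_grain (nthreads, ready, long_stall, current)
  ! Coarse-grain: keep issuing from the current thread unless it hits a
  ! costly stall (e.g., an L2 cache miss); only then rotate to another thread.
  IMPLICIT NONE
  INTEGER,INTENT(IN)                     :: nthreads
  LOGICAL,DIMENSION(nthreads),INTENT(IN) :: ready
  LOGICAL,INTENT(IN)                     :: long_stall
  INTEGER,INTENT(INOUT)                  :: current
  IF (.NOT. long_stall) RETURN
  CALL pick_next_thread_fine_grain(nthreads, ready, current)
END SUBROUTINE pick_next_thread_coarse_grain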

Multithreaded Example: Sun's Niagara (UltraSPARC T1)
- Eight fine-grain multithreaded, single-issue, in-order cores (no speculation, no dynamic branch prediction)
- [block diagram: 4-way MT SPARC pipes, crossbar, 4-way banked L2$, memory controllers, shared I/O and functional units]

                    Ultra III                 Niagara
  Data width        64-b                      64-b
  Clock rate        1.2 GHz                   1.0 GHz
  Cache (I/D/L2)    32K / 64K / (8M external) 16K / 8K / 3M
  Issue rate        4 issue                   1 issue
  Pipe stages       14 stages                 6 stages
  BHT entries       16K x 2-b                 None
  TLB entries       128I / 512D               64I / 64D
  Memory BW         2.4 GB/s                  ~20 GB/s
  Transistors       29 million                200 million
  Power (max)       53 W                      <60 W

Niagara Integer Pipeline
- Cores are simple (single-issue, 6-stage, no branch prediction), small, and power-efficient
- [pipeline diagram: Fetch, Thread Select, Decode, Execute, Memory, WB stages; I$ and ITLB; instruction buffers (x4), PC logic (x4), register file (x4), store buffers (x4); thread-select logic and mux driven by instruction type, cache misses, traps & interrupts, and resource conflicts; ALU, multiply, shift, divide units; D$ and DTLB; crossbar interface. From MPR, Vol. 18, #9, Sept. 2004]

Simultaneous Multithreading (SMT)
- A variation on multithreading that uses the resources of a multiple-issue, dynamically scheduled processor (superscalar) to exploit both program ILP and thread-level parallelism (TLP)
  - Most superscalar processors have more machine-level parallelism than most programs can effectively use (i.e., than have ILP)
  - With register renaming and dynamic scheduling, multiple instructions from independent threads can be issued without regard to dependencies among them
    - Need separate rename tables (ROBs) for each thread
    - Need the capability to commit from multiple threads (i.e., from multiple ROBs) in one cycle
- Intel's Pentium 4 SMT is called hyperthreading
  - Supports just two threads (doubles the architectural state)

Threading on a 4-way SS Processor Example [diagram: issue slots over time for Threads A, B, C, and D under coarse-grain MT, fine-grain MT, and SMT]

Multicore: Xbox 360 "Xenon" Processor
- Goal: to provide game developers with a balanced and powerful platform
  - Three SMT processors, 32KB L1 D$ & I$, 1MB UL2 cache
  - 165M transistors total
  - 3.2 GHz, near-POWER ISA
  - 2-issue, 21-stage pipeline, with bit registers
  - Weak branch prediction, supported by software hinting
  - In-order instructions
  - Narrow cores: 2 INT units, bit VMX units, 1 of anything else
- An ATI-designed 500 MHz GPU with 512MB of DDR3 DRAM
  - 337M transistors, 10MB framebuffer
  - 48 pixel shader cores, each with 4 ALUs

Xenon Diagram
- [block diagram: Core 0, Core 1, Core 2 (each with L1D and L1I); 1MB UL2; BIU/IO interface; 512MB DRAM via MC0 and MC1; GPU with 3D core and 10MB EDRAM; video out; analog chip; XMA decoder; SMC; DVD; HDD port; front USBs (2); wireless MU ports (2 USBs); rear USB (1); Ethernet; IR; audio out; flash; systems control]

The PS3 "Cell" Processor Architecture
- Composed of a non-SMP architecture
  - 234M transistors @ 4 GHz
  - 1 Power Processing Element (PPE), 8 "Synergistic" (SIMD) PEs
  - 512KB L2 cache; a massively high-bandwidth (200GB/s) bus connects it to everything else
  - The PPE is strangely similar to one of the Xenon cores
    - Almost identical, really; slight ISA differences, and fine-grained MT instead of real SMT
  - The real differences lie in the SPEs (21M transistors each)
    - An attempt to "fix" the memory latency problem by giving each processor complete control over its own 256KB "scratchpad" (14M transistors), direct mapped for low latency
    - 4 vector units per SPE, 1 of everything else (7M transistors)

The PS3 "Cell" Processor Architecture [diagram]

How to make use of the SPEs

What about the Software?
- Makes use of a special IBM "Hypervisor"
  - Like an OS for OSs
  - Runs both a real-time OS (for sound) and a non-real-time OS (for things like AI)
- Software must be specially coded to run well
  - The single PPE will quickly be bogged down
  - Must make use of SPEs wherever possible
  - This isn't easy, by any standard
- What about Microsoft?
  - The development suite identifies which 6 threads you're expected to run
  - Four of them are DirectX-based and handled by the OS
  - Only need to write two threads, functionally

Next Lecture and Reminders
- Reminders: the final is Thursday, December 13, from 10-11:50 AM in ITT 322