1 ECE 4100/6100 (1) Multicore Computing - Evolution

2 ECE 4100/6100 (2) Performance Scaling [Figure: performance scaling across Intel processor generations, from the 8086, 286, 386, and 486 through the Pentium®, Pentium® Pro, and Pentium® 4 architectures.] Source: Shekhar Borkar, Intel Corp.

3 ECE 4100/6100 (3) Intel: homogeneous cores, bus-based on-chip interconnect, shared memory, traditional I/O. Classic OOO cores (reservation stations, issue ports, schedulers, etc.) with a large, shared, set-associative cache with prefetching. Source: Intel Corp.

4 ECE 4100/6100 (4) IBM Cell Processor: a heterogeneous multicore with co-processor accelerators, a classic (stripped-down) core, multiple high-bandwidth buses, and high-speed I/O. Source: IBM

5 ECE 4100/6100 (5) AMD Au1200 System on Chip: an embedded processor with custom cores, on-chip I/O, and on-chip buses. Source: AMD

6 ECE 4100/6100 (6) PlayStation 2 Die Photo (SoC) [Die photo highlighting the floating-point MAC units.] Source: IEEE Micro, March/April 2000

7 ECE 4100/6100 (7) Multi-* is Happening Source: Intel Corp.

8 ECE 4100/6100 (8) Intel's Roadmap for Multicore (Source: adapted from Tom's Hardware). Across 2006-2008, desktop and mobile parts move from single-core (SC, 512KB-2MB cache) and dual-core (DC, 2-4MB) parts to dual-core parts with 2/4MB shared cache and, at 45nm, dual-core parts with 3MB/6MB shared cache; enterprise parts move from dual-core (2-16MB) to quad-core (QC) parts with 4MB and 8/16MB shared cache and, at 45nm, eight-core (8C) parts with 12MB shared cache. Drivers are: market segments, more cache, more cores.

9 ECE 4100/6100 (9) Distillation Into Trends Technology Trends –What can we expect/project? Architecture Trends –What are the feasible outcomes? Application Trends –What are the driving deployment scenarios? –Where are the volumes?

10 ECE 4100/6100 (10) Technology Scaling: a 30% scaling down in dimensions roughly doubles transistor density. Power per transistor: V_dd scaling → lower power. Transistor delay = C_gate·V_dd/I_SAT: C_gate and V_dd scaling → lower delay. [Figure: planar MOSFET cross-sections labeling gate, source, drain, body, gate-oxide thickness t_ox, and channel length L.]
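
The arithmetic behind these claims is worth making explicit. A minimal worked version, assuming ideal constant-field (Dennard) scaling with linear dimensions scaled by k ≈ 0.7; the steps are standard but not spelled out on the slide:

```latex
% Ideal constant-field scaling: L, W, t_ox, V_dd all scale by k ~ 0.7,
% so C_gate, V_dd and I_SAT each scale by roughly k.
\begin{align*}
\text{Density} &\propto \frac{1}{\text{area}} \;\Rightarrow\; \frac{1}{k^{2}} = \frac{1}{0.7^{2}} \approx 2\times \\
\text{Delay} &= \frac{C_{gate} V_{dd}}{I_{SAT}} \;\Rightarrow\; \frac{(kC)(kV)}{kI} = k\,\frac{CV}{I} \approx 0.7\times \\
\text{Power/transistor} &\propto C V_{dd}^{2} f \;\Rightarrow\; (kC)(kV)^{2}\!\left(\frac{f}{k}\right) = k^{2}\,CV^{2}f \approx 0.5\times
\end{align*}
```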

11 ECE 4100/6100 (11) Fundamental Trends (Source: Shekhar Borkar, Intel Corp.)
High-volume manufacturing year: 2004, 2006, 2008, 2010, 2012, 2014, 2016, 2018
Technology node (nm): 90, 65, 45, 32, 22, 16, 11, 8
Integration capacity (BT): 2, 4, 8, 16, 32, 64, 128, 256
Delay = CV/I scaling: 0.7, then ~0.7, then >0.7 (delay scaling will slow down)
Energy/logic-op scaling: >0.35, then >0.5 (energy scaling will slow down)
Bulk planar CMOS: high probability, trending to low probability
Alternate devices (3G etc.): low probability, trending to high probability
Variability: medium, then high, then very high
ILD (K): ~3, then <3, reducing slowly towards 2-2.5
RC delay: 1 across all generations
Metal layers: 6-7, then 7-8, then 8-9 (0.5 to 1 layer per generation)

12 ECE 4100/6100 (12) Moore’s Law How do we use the increasing number of transistors? What are the challenges that must be addressed? Source: Intel Corp.

13 ECE 4100/6100 (13) Impact of Moore's Law To Date: push the memory wall → larger caches; increase frequency → deeper pipelines; increase ILP → concurrent threads, branch prediction, and SMT; manage power → clock gating, activity minimization. [Annotated die photo: IBM Power5.] Source: IBM

14 ECE 4100/6100 (14) Shaping Future Multicore Architectures The ILP Wall –Limited ILP in applications The Frequency Wall –Not much headroom The Power Wall –Dynamic and static power dissipation The Memory Wall –Gap between compute bandwidth and memory bandwidth Manufacturing –Non-recurring engineering (NRE) costs –Time to market

15 ECE 4100/6100 (15) The Frequency Wall: not much headroom is left in the stage-to-stage times (currently 8-12 FO4 delays), and increasing frequency leads to the power wall. Vikas Agarwal, M. S. Hrishikesh, Stephen W. Keckler, and Doug Burger, "Clock rate versus IPC: the end of the road for conventional microarchitectures," ISCA 2000.

16 ECE 4100/6100 (16) Options: increase performance via parallelism. On chip this has been largely at the instruction/data level; the 1990s through 2005 were the era of instruction-level parallelism: single-instruction multiple-data/vector parallelism (MMX, SSE, vector co-processors), out-of-order (OOO) execution cores, and Explicitly Parallel Instruction Computing (EPIC). Have we exhausted the options within a single thread?

17 ECE 4100/6100 (17) The ILP Wall - Past the Knee of the Curve? [Figure: performance vs. "effort", from scalar in-order, through moderate-pipe superscalar/OOO, to very-deep-pipe aggressive superscalar/OOO. It made sense to go superscalar/OOO (good ROI), but past the knee there is very little gain for substantial effort.] Source: G. Loh

18 ECE 4100/6100 (18) The ILP Wall. Limiting phenomena for ILP extraction: –Clock rate: at the wall, each increase in clock rate has a corresponding CPI increase (branches, other hazards) –Instruction fetch and decode: at the wall, more instructions cannot be fetched and decoded per clock cycle –Cache hit rate: poor locality can limit ILP and adversely affects memory bandwidth –ILP in applications: the serial fraction of applications. Reality: –Limit studies cap IPC at 100-400 (using an ideal processor) –Current processors achieve an IPC of only 1-2
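
A compact way to see why the clock-rate and CPI effects offset each other is the standard execution-time ("iron law") equation; it is not on the slide, but it frames the whole list:

```latex
T_{exec} = IC \times CPI \times t_{cycle} = \frac{IC \times CPI}{f}
% At the ILP wall, raising f (shrinking t_cycle) also raises CPI, because deeper
% pipelines turn branch and other hazard penalties into more lost cycles,
% so T_exec improves far less than the frequency gain alone would suggest.
```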

19 ECE 4100/6100 (19) The ILP Wall: Options. Increase the granularity of parallelism: –Simultaneous multithreading to exploit TLP (the TLP has to exist, otherwise poor utilization results) –Coarse-grain multithreading –Throughput computing. New languages/applications: –Data-intensive computing in the enterprise –Media-rich applications

20 ECE 4100/6100 (20) The Memory Wall [Figure: processor vs. DRAM performance over time (log scale, 1 to 1000). CPU performance ("Moore's Law") improves at ~60%/yr while DRAM improves at ~7%/yr, so the processor-memory performance gap grows ~50%/year.]
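
The 50%/year figure follows directly from the two growth rates quoted in the figure; a quick check (the compounding example is illustrative):

```latex
\frac{1.60}{1.07} \approx 1.50 \quad\Rightarrow\quad
\text{gap after } n \text{ years} \approx 1.5^{\,n},
\qquad \text{e.g. } 1.5^{10} \approx 57\times
```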

21 ECE 4100/6100 (21) The Memory Wall: increasing the number of cores increases the demanded memory bandwidth. What architectural techniques can meet this demand? [Sketch: average access time vs. year.]

22 ECE 4100/6100 (22) The Memory Wall: on-die caches are both area-intensive and power-intensive; the StrongARM dissipates more than 43% of its power in the caches, and caches incur huge area costs. Larger caches never deliver the near-universal performance boost offered by frequency ramping (Source: Intel). [Die photos: AMD dual-core Athlon FX (CPU 0, CPU 1) and IBM Power5.]

23 ECE 4100/6100 (23) The Power Wall: power per transistor scales with frequency but also scales with V_dd. –Lower V_dd can be compensated for with increased pipelining to keep throughput constant –Power per transistor is not the same as power per area → power density is the problem! –Multiple units can be run at lower frequencies to keep throughput constant, while saving power
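
The slide's argument can be made concrete with the standard dynamic-power expression; the specific voltage and frequency numbers below are illustrative, not taken from the slide:

```latex
P_{dyn} = \alpha \, C \, V_{dd}^{2} \, f
% Replace one unit running at (V, f) with two units at (0.8V, 0.5f):
% total throughput is unchanged (2 x 0.5f = f), while
% P_2 / P_1 = 2 * (0.8)^2 * 0.5 = 0.64, i.e. ~36% less dynamic power.
```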

24 ECE 4100/6100 (24) Leakage Power Basics. Sub-threshold leakage: increases with lower V_th and with higher T and W. Gate-oxide leakage: increases with lower T_ox and higher W; high-K dielectrics offer a potential solution. Reverse-biased pn-junction leakage: very sensitive to T and V (in addition to diffusion area).
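
For reference, a standard first-order model of the sub-threshold component (not from the slide) that makes these sensitivities visible:

```latex
I_{sub} \;\approx\; I_{0}\,\frac{W}{L}\; e^{\,(V_{gs}-V_{th})/(n V_{T})} \left(1 - e^{-V_{ds}/V_{T}}\right),
\qquad V_{T} = \frac{kT}{q}
% Leakage grows exponentially as V_th is lowered, and rises with temperature
% both through V_T = kT/q and because V_th itself falls as T increases;
% it also scales with device width W.
```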

25 ECE 4100/6100 (25) The Current Power Trend [Figure: power density (W/cm²) vs. year, 1970-2010, for the 4004, 8008, 8080, 8085, 8086, 286, 386, 486, Pentium®, and P6; the extrapolation passes the power density of a hot plate and heads toward that of a nuclear reactor, a rocket nozzle, and the sun's surface.] Source: Intel Corp.

26 ECE 4100/6100 (26) Improving Power/Performance: consider a constant die size and decreasing core area each generation = more cores/chip. –Effect of lowering voltage and frequency → power reduction –Increasing cores/chip → performance increase. Better power/performance!
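
One way to put numbers on "better power/performance", reusing the dynamic-power model above (the 0.85 scaling factors are illustrative, not from the slide):

```latex
% One large core at (V, f): throughput = 1, dynamic power = P.
% Next generation: two smaller cores, each run at (0.85V, 0.85f).
\frac{P_{2\,\text{cores}}}{P_{1\,\text{core}}} = 2\,(0.85)^{2}(0.85) \approx 1.23,
\qquad \text{throughput} \approx 2 \times 0.85 = 1.7\times
% ~1.7x the performance for ~1.23x the power: roughly 1.4x better perf/W.
```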

27 ECE 4100/6100 (27) Accelerators [Die photo: 2.23 mm × 3.54 mm, 260K transistors.] Opportunities: network processing engines; MPEG encode/decode engines; speech engines; TCP/IP offload engines. Source: Shekhar Borkar, Intel Corp.

28 ECE 4100/6100 (28) Low-Power Design Techniques. Circuit- and gate-level methods: –Voltage scaling –Transistor sizing –Glitch suppression –Pass-transistor logic –Pseudo-nMOS logic –Multi-threshold gates. Functional and architectural methods: –Clock gating –Clock frequency reduction –Supply voltage reduction –Power down/off –Algorithmic and software techniques. Two decades' worth of research and development!

29 ECE 4100/6100 (29) The Economics of Manufacturing. Where are the costs of developing the next-generation processors? –Design costs –Manufacturing costs. What type of chip-level solutions do the economics imply? Assessing the implications of Moore's Law is an exercise in mass production.

30 ECE 4100/6100 (30) The Cost of An ASIC. Example: a design with 80M transistors in 100nm technology, estimated cost $85M-$90M. [Timeline: design, implementation, verification, prototype, and production over 12-18 months.] Cost and risk are rising to unacceptable levels. Top cost drivers: –Verification (40%) –Architecture design (23%) –Embedded software design (1400 man-months SW vs. 1150 man-months HW) –HW/SW integration. *Handel H. Jones, "How to Slow the Design Cost Spiral," Electronics Design Chain, September 2002, www.designchain.com

31 ECE 4100/6100 (31) The Spectrum of Architectures [Figure: a spectrum ranging from customization fully in hardware (custom ASIC, structured ASIC) through FPGA, fixed + variable ISA, polymorphic computing architectures, and tiled architectures, to customization fully in software (microprocessor). One end relies on synthesis and hardware development, the other on compilation and software development; design NRE, effort, and time to market increase toward the hardware end while customization decreases toward the software end. Example vendors/projects: LSI Logic, Leopard Logic, Xilinx, Altera, Tensilica, Stretch Inc., PACT, PICOChip, MONARCH, SM, RAW, TRIPS.]

32 ECE 4100/6100 (32) Interlocking Trade-offs [Diagram: Power, Memory, Frequency, and ILP linked by interlocking trade-offs, with edges labeled speculation, bandwidth, dynamic power, dynamic penalties, miss penalty, and leakage power.]

33 ECE 4100/6100 (33) Multi-core Architecture Drivers. Addressing ILP limits: –Multiple threads –Coarse-grain parallelism → raise the level of abstraction. Addressing frequency and power limits: –Multiple slower cores across technology generations –Scaling via increasing the number of cores rather than frequency –Heterogeneous cores for improved power/performance. Addressing memory system limits: –Deep, distributed cache hierarchies –OS replication → shared memory remains dominant. Addressing manufacturing issues: –Design and verification costs → replication → the network becomes more important!

34 ECE 4100/6100 (34) Parallelism

35 ECE 4100/6100 (35) Beyond ILP: performance is limited by the serial fraction. [Figure: a program's parallelizable fraction split across 1, 2, 3, and 4 CPUs while the serial fraction stays fixed.] Coarse-grain parallelism in the post-ILP era: thread, process, and data parallelism. Learn from the lessons of the parallel processing community: revisit the classifications and architectural techniques.
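
The "limited by the serial fraction" statement is Amdahl's law; the formula is standard and is implied by the figure rather than written on the slide. With serial fraction s and N CPUs:

```latex
\text{Speedup}(N) \;=\; \frac{1}{\,s + \dfrac{1-s}{N}\,}
\;\xrightarrow{\;N \to \infty\;}\; \frac{1}{s}
% Example: s = 0.10 (10% serial) gives Speedup(4) = 1/(0.1 + 0.9/4) ~ 3.1x,
% and no number of CPUs can ever exceed 10x.
```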

36 ECE 4100/6100 (36) Flynn’s Model Flynn’s Classification –Single instruction stream, single data stream (SISD) The conventional, word-sequential architecture including pipelined computers –Single instruction stream, multiple data stream (SIMD) The multiple ALU-type architectures (e.g., array processor) –Multiple instruction stream, single data stream (MISD) Not very common –Multiple instruction stream, multiple data stream (MIMD) The traditional multiprocessor system M.J. Flynn, “Very high speed computing systems,” Proc. IEEE, vol. 54(12), pp. 1901–1909, 1966.

37 ECE 4100/6100 (37) SIMD/Vector Computation: SIMD and vector models are spatial and temporal analogs of each other, with a rich architectural history dating back to 1953! [Figures: IBM Cell SPE organization and SPE pipeline diagram. Sources: Cray, IBM.]
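
A minimal C sketch of the spatial/temporal distinction on a vector add. The x86 SSE intrinsics are used purely as a convenient SIMD illustration (the slide itself shows the Cell SPE), and the function names are hypothetical:

```c
#include <xmmintrin.h>  /* x86 SSE: one instruction operates on four float lanes */

/* Scalar baseline: one element per instruction, iterated in time. */
void vadd_scalar(float *c, const float *a, const float *b, int n) {
    for (int i = 0; i < n; i++)
        c[i] = a[i] + b[i];
}

/* SIMD version: the same add applied to four lanes at once (spatial parallelism).
   A vector machine expresses the same loop temporally, streaming elements
   through one deeply pipelined functional unit. */
void vadd_simd(float *c, const float *a, const float *b, int n) {
    int i = 0;
    for (; i + 4 <= n; i += 4) {
        __m128 va = _mm_loadu_ps(a + i);
        __m128 vb = _mm_loadu_ps(b + i);
        _mm_storeu_ps(c + i, _mm_add_ps(va, vb));
    }
    for (; i < n; i++)          /* handle any remainder elements */
        c[i] = a[i] + b[i];
}
```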

38 ECE 4100/6100 (38) SIMD/Vector Architectures. VIRAM (Vector IRAM): logic is slow in a DRAM process, so put a vector unit in the DRAM and provide a port between a traditional processor and the vector IRAM, instead of putting a whole processor in DRAM. Source: Berkeley Vector IRAM

39 ECE 4100/6100 (39) MIMD Machines: parallel processing has catalyzed the development of several generations of parallel processing machines. Unique features include the interconnection network, support for system-wide synchronization, and programming languages/compilers. [Diagram: four nodes, each with a processor + cache, directory, and memory, connected by an interconnection network.]

40 ECE 4100/6100 (40) Basic Models for Parallel Programs Shared Memory –Coherency/consistency are driving concerns –Programming model is simplified at the expense of system complexity Message Passing –Typically implemented on distributed memory machines –System complexity is simplified at the expense of increased effort by the programmer

41 ECE 4100/6100 (41) Shared Memory Model: that's basically it; you need to fork/join threads and synchronize (typically with locks). [Diagram: CPU 0 writes X and CPU 1 reads X through a shared main memory.]
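
A minimal sketch of the shared-memory model in C with POSIX threads; the shared counter and the even work split are illustrative, not from the slides:

```c
#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4
#define WORK_PER_THREAD 1000000

static long counter = 0;                               /* shared: one address space */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    (void)arg;
    long local = 0;
    for (long i = 0; i < WORK_PER_THREAD; i++)
        local++;                                       /* private work, no sharing */
    pthread_mutex_lock(&lock);                         /* synchronize the shared update */
    counter += local;
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void) {
    pthread_t tid[NTHREADS];
    for (int i = 0; i < NTHREADS; i++)                 /* fork */
        pthread_create(&tid[i], NULL, worker, NULL);
    for (int i = 0; i < NTHREADS; i++)                 /* join */
        pthread_join(tid[i], NULL);
    printf("counter = %ld\n", counter);                /* every thread saw the same memory */
    return 0;
}
```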

42 ECE 4100/6100 (42) Message Passing Protocols: explicitly send data from one thread to another; need to track the IDs of other CPUs; a broadcast may need multiple sends; each CPU has its own memory space. Hardware: send/recv queues between CPUs. [Diagram: CPU 0 sends, CPU 1 receives.]
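
A minimal sketch of explicit message passing in C using MPI, which is one common realization of this model (the payload and ranks are illustrative):

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, value;
    MPI_Init(&argc, &argv);                  /* each rank runs in its own address space */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* track our own ID; peers are addressed by rank */

    if (rank == 0) {
        value = 42;
        /* explicit send: data moves only because the programmer says so */
        MPI_Send(&value, 1, MPI_INT, 1 /* dest */, 0 /* tag */, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0 /* src */, 0 /* tag */, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank 1 received %d from rank 0\n", value);
    }

    MPI_Finalize();                          /* a broadcast would need multiple sends,
                                                or the collective MPI_Bcast */
    return 0;
}
```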

43 ECE 4100/6100 (43) Shared Memory vs. Message Passing. Shared memory doesn't scale as well to a larger number of nodes: communications are broadcast-based and the bus becomes a severe bottleneck. Message passing doesn't need a centralized bus: the multiprocessor can be arranged like a graph (nodes = CPUs, edges = independent links/routes), and multiple communications/messages can be in transit at the same time.

44 ECE 4100/6100 (44) Two Emerging Challenges: Programming Models and Compilers? Interconnection Networks. Sources: IBM; Intel Corp.

