Computer Structure The Uncore

2nd Generation Intel® Core™
- Integrates CPU, Graphics, Memory Controller, and PCI Express* on a single chip
- Next Generation Intel® Turbo Boost Technology – substantial performance improvement
- High-bandwidth / low-latency core/graphics interconnect
- High Bandwidth Last Level Cache
- Intel® Advanced Vector Extensions (Intel® AVX)
- Next Generation Graphics and Media
- Integrated Memory Controller – 2ch DDR3
- Embedded DisplayPort
- Intel® Hyper-Threading Technology: 4 Cores / 8 Threads or 2 Cores / 4 Threads
- Discrete Graphics Support: 1×16 or 2×8 PCIe
[Die diagram: Cores, Graphics, LLC, System Agent (IMC – 2ch DDR3, Display, DMI, ×16 PCI Express*), PCH]
Foil taken from IDF 2011

3rd Generation Intel® Core™
- 22nm process
- Quad-core die with Intel HD Graphics 4000
- 1.4 billion transistors
- Die size: 160 mm²

The Uncore Subsystem
- The SoC design provides a high-bandwidth bi-directional ring bus
  - Connects the IA cores to the various uncore sub-systems
- The uncore subsystem includes
  - The system agent
  - The graphics unit (GT)
  - The last level cache (LLC)
- In the Intel Xeon Processor E5 Family
  - There is no graphics unit (GT); instead it contains many more components:
  - An LLC with larger capacity and snooping capabilities to support multiple processors
  - Intel® QuickPath Interconnect interfaces that can support multi-socket platforms
  - Power management control hardware
  - A system agent capable of supporting high-bandwidth traffic from memory and I/O devices
[Die diagram: Cores, Graphics, LLC, System Agent (Display, DMI, PCI Express*, IMC)]
From the Optimization Manual

Scalable Ring On-die Interconnect
- Ring-based interconnect between Cores, Graphics, Last Level Cache (LLC) and System Agent domain
- Composed of 4 rings
  - 32-byte Data ring, Request ring, Acknowledge ring and Snoop ring
  - Fully pipelined at core frequency/voltage: bandwidth, latency and power scale with the number of cores
- Massive ring wire routing runs over the LLC with no area impact
- Access on the ring always picks the shortest path to minimize latency
- Distributed arbitration; the ring protocol handles coherency, ordering, and the core interface
- Scalable to servers with a large number of processors
High Bandwidth, Low Latency, Modular
Foil taken from IDF 2011

Last Level Cache – LLC
- The LLC consists of multiple cache slices
  - The number of slices is equal to the number of IA cores
  - Each slice contains a full cache port that can supply 32 bytes/cycle
- Each slice has a logic portion and a data-array portion
  - The logic portion handles data coherency, memory ordering, access to the data-array portion, LLC misses and write-back to memory
  - The data-array portion stores the cache lines; it may have 4/8/12/16 ways, corresponding to 0.5 MB / 1 MB / 1.5 MB / 2 MB slice sizes (see the check below)
- The GT sits on the same ring interconnect
  - It uses the LLC for its data operations as well, and may in some cases compete with the cores for LLC capacity
From the Optimization Manual
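
As a rough sanity check of the way counts above: with 64-byte lines and an assumed 2048 sets per slice, each way contributes 128 KB, so 4/8/12/16 ways map directly to the 0.5–2 MB slice sizes listed. A minimal sketch of that arithmetic (the 2048-set figure is an assumption used only for illustration):

```c
#include <stdio.h>

/* Hypothetical slice geometry: 64-byte lines and 2048 sets are assumptions
 * used only to show how the way count maps to slice capacity. */
#define LINE_BYTES     64
#define SETS_PER_SLICE 2048

int main(void) {
    int ways[] = {4, 8, 12, 16};
    for (int i = 0; i < 4; i++) {
        unsigned long bytes = (unsigned long)ways[i] * SETS_PER_SLICE * LINE_BYTES;
        printf("%2d ways -> %4lu KB per slice\n", ways[i], bytes / 1024);
    }
    return 0;   /* prints 512, 1024, 1536, 2048 KB */
}
```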

Cache Box
- Interface block
  - Between Core/Graphics/Media and the Ring
  - Between the Cache controller and the Ring
  - Implements the ring logic, arbitration, and the cache controller
  - Communicates with the System Agent for LLC misses, external snoops, and non-cacheable accesses
- Full cache pipeline in each cache box
  - Physical addresses are hashed at the source to prevent hot spots and increase bandwidth
  - Each cache box maintains coherency and ordering for the addresses that are mapped to it
  - The LLC is fully inclusive with "Core Valid Bits" – eliminates unnecessary snoops to cores
  - Per-core CVB indicates whether a core needs to be snooped for a given cache line
- Runs at core voltage/frequency, scales with the number of cores
Distributed coherency & ordering; Scalable Bandwidth, Latency & Power
Foil taken from IDF 2011

Ring Interconnect and LLC
- The physical addresses of data kept in the LLC are distributed among the cache slices by a hash function (a sketch follows)
  - Addresses are uniformly distributed
- From the cores' and the GT's point of view, the LLC acts as one shared cache
  - With multiple ports and bandwidth that scales with the number of cores
  - The number of cache slices increases with the number of cores, so the ring and LLC are not likely to be a bandwidth limiter to core operation
  - From the SW point of view, this does not appear as a normal N-way cache
- The LLC hit latency, ranging between 26 and 31 cycles, depends on the core's location relative to the LLC slice (how far the request needs to travel on the ring)
- All the traffic that cannot be satisfied by the LLC still travels through the cache-slice logic portion and the ring to the system agent
  - E.g., LLC misses, dirty-line write-backs, non-cacheable operations, and MMIO/IO operations
From the Optimization Manual
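
The actual slice hash is not published; the sketch below only illustrates the idea of spreading physical addresses uniformly across slices by folding the address bits above the 64-byte line offset. The folding scheme, function name, and slice count are assumptions for illustration, not Intel's hash:

```c
#include <stdint.h>

/* Illustrative only: fold the physical address bits above the 64-byte line
 * offset and reduce modulo the number of LLC slices. The real uncore hash is
 * undocumented; this just demonstrates the "uniform distribution" property
 * the slide describes. */
static unsigned slice_for_address(uint64_t phys_addr, unsigned num_slices) {
    uint64_t line = phys_addr >> 6;     /* drop the 64-byte line offset */
    uint64_t h = line;
    h ^= h >> 12;                       /* XOR-fold higher address bits */
    h ^= h >> 24;
    return (unsigned)(h % num_slices);  /* one slice per core           */
}
```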

LLC Sharing
- The LLC is shared among all Cores, Graphics and Media
  - The graphics driver controls which streams are cached/coherent
  - Any agent can access all data in the LLC, independent of who allocated the line, after memory range checks
  - A controlled LLC way-allocation mechanism prevents thrashing between Core and GFX
- Multiple coherency domains
  - IA domain (fully coherent via cross-snoops)
  - Graphics domain (graphics virtual caches, flushed to the IA domain by the graphics engine)
  - Non-coherent domain (display data, flushed to memory by the graphics engine)
- Much higher graphics performance, DRAM power savings, and more DRAM bandwidth available for the Cores
Foil taken from IDF 2011

Cache Hierarchy

  Level          | Capacity | Ways   | Line Size (bytes) | Write Update Policy | Inclusive | Latency (cycles) | Bandwidth (bytes/cycle)
  L1 Data        | 32 KB    | 8      | 64                | Write-back          | –         | 4                | 2 × 16
  L1 Instruction | 32 KB    | 8      | 64                | N/A                 | –         | N/A              | N/A
  L2 (Unified)   | 256 KB   | 8      | 64                | Write-back          | No        | 12               | 1 × 32
  LLC            | Varies   | Varies | 64                | Write-back          | Yes       | 26-31            | 1 × 32

- The LLC is inclusive of all cache levels above it
  - Data contained in the core caches must also reside in the LLC
  - Each LLC cache line holds an indication of the cores that may have this line in their L2 and L1 caches
- Fetching data from the LLC when another core has the data
  - Clean hit – the data is not modified in the other core – 43 cycles
  - Dirty hit – the data is modified in the other core – 60 cycles
From the Optimization Manual

Data Prefetch to the L2$ and LLC
- Two HW prefetchers fetch data from memory to the L2$ and LLC
  - The streamer and the spatial prefetcher prefetch the data to the LLC
  - Typically the data is brought to the L2 as well, unless the L2 cache is heavily loaded with missing demand requests
- Spatial Prefetcher
  - Strives to complete every cache line fetched to the L2 cache with the pair line that completes it to a 128-byte aligned chunk (see the sketch below)
- Streamer Prefetcher
  - Monitors read requests from the L1 caches for ascending and descending sequences of addresses
    - L1 D$ requests: loads, stores, and L1 D$ HW prefetches
    - L1 I$ code fetch requests
  - When a forward or backward stream of requests is detected, the anticipated cache lines are prefetched
  - Prefetched cache lines must be in the same 4K page
From the Optimization Manual
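
For the spatial prefetcher, the "pair line" that completes a 128-byte aligned chunk is simply the other 64-byte line in that chunk, which amounts to flipping address bit 6. A small sketch of that address arithmetic (the helper name is hypothetical):

```c
#include <stdint.h>

/* The other 64-byte line in the same 128-byte aligned chunk is obtained by
 * flipping bit 6 of a line-aligned address (illustrative helper only). */
static uint64_t spatial_pair_line(uint64_t line_addr) {
    return line_addr ^ 0x40;   /* 0x40 = 64 bytes = one cache line */
}
```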

Data Prefetch to the L2$ and LLC – Streamer Prefetcher Enhancements
- The streamer may issue two prefetch requests on every L2 lookup
  - Runs up to 20 lines ahead of the load request
- Adjusts dynamically to the number of outstanding requests per core
  - Not many outstanding requests → prefetch further ahead
  - Many outstanding requests → prefetch to the LLC only, and less far ahead
- When cache lines are far ahead
  - Prefetch to the LLC only and not to the L2$
  - Avoids replacement of useful cache lines in the L2$
- Detects and maintains up to 32 streams of data accesses
  - For each 4 KB page, it can maintain one forward and one backward stream (a toy model follows)
From the Optimization Manual
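
A toy model of the per-page stream detection described above: track the last line offset seen within a 4 KB page and count consecutive same-direction accesses before deciding to prefetch ahead. The structure, threshold, and names are assumptions for illustration, not the real hardware:

```c
#include <stdint.h>
#include <stdbool.h>

/* Toy per-4KB-page stream detector: not the real hardware, just the idea of
 * tracking one stream direction per page and acting once a trend is seen. */
struct stream {
    uint64_t page;       /* 4 KB page base                      */
    int      last_line;  /* last 64-byte line index seen (0-63) */
    int      run;        /* consecutive same-direction accesses */
    int      dir;        /* +1 ascending, -1 descending, 0 none */
};

/* Returns true when the detector would start prefetching ahead. */
static bool stream_update(struct stream *s, uint64_t addr) {
    uint64_t page = addr & ~0xFFFULL;
    int line = (int)((addr >> 6) & 0x3F);

    if (s->page != page) {              /* new page: reset the tracker */
        s->page = page; s->last_line = line; s->run = 0; s->dir = 0;
        return false;
    }
    int step = line - s->last_line;
    s->last_line = line;
    if (step == 0) return s->run >= 2;
    int dir = step > 0 ? +1 : -1;
    s->run = (dir == s->dir) ? s->run + 1 : 1;
    s->dir = dir;
    return s->run >= 2;                 /* assumed threshold: 2 in a row */
}
```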

Lean and Mean System Agent
- Contains PCI Express*, DMI, Memory Controller, Display Engine…
- Contains the Power Control Unit
  - A programmable uController that handles all power management and reset functions in the chip
- Smart integration with the ring
  - Provides the cores/Graphics/Media with high-bandwidth, low-latency access to DRAM/IO for best performance
  - Handles IO-to-cache coherency
- Separate voltage and frequency from the ring/cores; display integration for better battery life
- Extensive power and thermal management for PCI Express* and DDR
Smart I/O Integration
Foil taken from IDF 2011

The System Agent
The system agent contains the following components:
- An arbiter that handles all accesses from the ring domain and from I/O (PCIe* and DMI) and routes the accesses to the right place
- PCIe controllers that connect to external PCIe devices
  - Support different configurations: x16+x4, x8+x8+x4, x8+x4+x4+x4
- A DMI controller that connects to the PCH chipset
- An integrated display engine, Flexible Display Interconnect, and DisplayPort, for the internal graphics operations
- The memory controller
  - All main memory traffic is routed from the arbiter to the memory controller
  - The memory controller supports two channels of DDR3, with data rates of 1066 MHz, 1333 MHz and 1600 MHz, and 8 bytes per cycle
  - Addresses are distributed between the memory channels based on a local hash function that attempts to balance the load between the channels in order to achieve maximum bandwidth and minimum hotspot collisions (see the sketch below)
From the Optimization Manual
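
A minimal sketch of channel interleaving under load balancing, assuming a simple XOR of a few address bits above the 64-byte line offset selects one of the two DDR channels. The bit choice and function name are assumptions; the real hash is undocumented:

```c
#include <stdint.h>

/* Illustrative channel-select hash: XOR a few address bits above the 64-byte
 * line offset so that consecutive lines and common strides spread across the
 * two DDR channels. The bits chosen here are an assumption, not Intel's hash. */
static unsigned ddr_channel(uint64_t phys_addr) {
    unsigned b6  = (phys_addr >> 6)  & 1;
    unsigned b12 = (phys_addr >> 12) & 1;
    unsigned b18 = (phys_addr >> 18) & 1;
    return b6 ^ b12 ^ b18;   /* 0 = channel A, 1 = channel B */
}
```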

The Memory Controller
- For best performance
  - Populate both channels with equal amounts of memory, preferably the exact same types of DIMMs
  - Using more ranks for the same amount of memory results in somewhat better memory bandwidth, since more DRAM pages can be open simultaneously
  - Use the highest supported DRAM speed, with the best DRAM timings
- The two memory channels have separate resources
  - They handle memory requests independently
  - Each memory channel contains a 32 cache-line write-data-buffer
- The memory controller contains a high-performance out-of-order scheduler
  - It attempts to maximize memory bandwidth while minimizing latency
  - Writes to the memory controller are considered completed when they are written to the write-data-buffer
  - The write-data-buffer is flushed out to main memory at a later time, not impacting write latency
From the Optimization Manual

The Memory Controller
- Partial writes are not handled efficiently by the memory controller
  - They may result in read-modify-write operations on the DDR channel if the partial writes do not complete a full cache line in time
  - Software should avoid creating partial write transactions whenever possible and consider alternatives such as buffering the partial writes into full cache-line writes (a sketch follows)
- The memory controller also supports high-priority isochronous requests
  - E.g., USB isochronous and display isochronous requests
  - The high bandwidth of memory requests from the integrated display engine takes up some of the memory bandwidth and impacts core access latency to some degree
From the Optimization Manual
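
A hedged sketch of the software-side advice: gather small updates into a full 64-byte line image and write it out as one full-line store instead of many partial writes. The buffer layout and helper names are illustrative only:

```c
#include <stdint.h>
#include <string.h>

/* Illustrative write-combining helper: accumulate small updates into a full
 * 64-byte line image and flush it with a single full-line copy, instead of
 * issuing many partial writes that the memory controller would have to turn
 * into read-modify-write operations on the DDR channel. */
#define LINE 64

struct wc_buffer {
    uint8_t  data[LINE];
    uint8_t  valid;      /* nonzero once the buffer holds pending data */
    uint8_t *dest;       /* 64-byte-aligned destination line           */
};

static void wc_flush(struct wc_buffer *b) {
    if (b->valid) {
        memcpy(b->dest, b->data, LINE);   /* one full-line write */
        b->valid = 0;
    }
}

static void wc_write(struct wc_buffer *b, uint8_t *addr, const void *src, size_t n) {
    uint8_t *line = (uint8_t *)((uintptr_t)addr & ~(uintptr_t)(LINE - 1));
    if (b->valid && b->dest != line)
        wc_flush(b);                      /* switching lines: flush the old one */
    if (!b->valid) {
        memcpy(b->data, line, LINE);      /* seed with the current contents */
        b->dest  = line;
        b->valid = 1;
    }
    memcpy(b->data + (addr - line), src, n);
}
```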

Integration: Optimization Opportunities
- Dynamically redistribute power between Cores & Graphics
  - Tight power-management control of all components, providing better granularity and deeper idle/sleep states
  - Three separate power/frequency domains: System Agent (fixed), Cores + Ring, Graphics (variable)
- High-bandwidth Last Level Cache, shared among Cores and Graphics
  - Significant performance boost; saves memory bandwidth and power
- Integrated Memory Controller and PCI Express ports
  - Tightly integrated with the Core/Graphics/LLC domain
  - Provides low latency and low power – removes intermediate busses
  - Bandwidth is balanced across the whole machine, from Core/Graphics all the way to the Memory Controller
- Modular uArch for optimal cost/power/performance
  - Derivative products done with minimal effort/time
Foil taken from IDF 2011

DRAM

Basic DRAM Chip
[Block diagram: the address bus feeds a Row Address Latch (strobed by RAS#) and a Column Address Latch (strobed by CAS#); the row decoder and column mux select a location in the memory array, which drives the data pins]
DRAM access sequence:
- Put the Row address on the address bus and assert RAS# (Row Address Strobe) to latch the row
- Put the Column address on the address bus and assert CAS# (Column Address Strobe) to latch the column
- Get the data on the data bus

DRAM Operation
- A DRAM cell consists of a transistor + a capacitor
  - The capacitor keeps the state; the transistor guards access to the state
- Reading the cell state: raise the access line AL and sense the data line DL
  - If the capacitor is charged, a current flows on the data line DL
- Writing the cell state: set DL and raise AL to charge/drain the capacitor
  - Charging and draining a capacitor is not instantaneous
- Leakage current drains the capacitor even when the transistor is closed
  - Therefore each DRAM cell is periodically refreshed, every 64 ms (see the arithmetic below)
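
To put the 64 ms figure in perspective: if a device with, say, 8192 rows must have every row refreshed within 64 ms, the controller ends up issuing a refresh roughly every 7.8 µs. The row count is an assumed example; only the 64 ms retention period comes from the slide:

```c
#include <stdio.h>

int main(void) {
    double retention_ms = 64.0;  /* full-array refresh period from the slide */
    int    rows = 8192;          /* assumed row count for illustration       */
    printf("refresh command roughly every %.2f us\n",
           retention_ms * 1000.0 / rows);   /* ~7.81 us */
    return 0;
}
```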

DRAM Access Sequence Timing
[Timing diagram: RAS#, CAS#, address and data signals for an access to Row i / Col n followed by Row j]
- Put the row address on the address bus and assert RAS#
- Wait the RAS#-to-CAS# delay (tRCD) between asserting RAS# and CAS#
- Put the column address on the address bus and assert CAS#
- Wait the CAS latency (CL) between the time CAS# is asserted and the data being ready
- tRP (Row Precharge): the time needed to close the current row and open a new row
(a small latency calculation follows)
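
A back-of-the-envelope helper for the sequence above: a random access that must open a new row costs roughly tRP + tRCD + CL, while a hit in an already open row costs only CL. The clock and timing values below are example assumptions, not figures from the slide:

```c
#include <stdio.h>

/* Example numbers only: a 200 MHz DRAM clock (5 ns cycle) with CL=3,
 * tRCD=3 and tRP=3 cycles. An open-row hit pays only CL; a row miss pays
 * precharge + row activate + CAS latency. */
int main(void) {
    double cycle_ns = 5.0;
    int cl = 3, trcd = 3, trp = 3;

    printf("open-row hit : %.0f ns\n", cl * cycle_ns);                /* 15 ns */
    printf("row miss     : %.0f ns\n", (trp + trcd + cl) * cycle_ns); /* 45 ns */
    return 0;
}
```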

DRAM Controller
- The DRAM controller gets an address and a command
  - It splits the address into Row and Column parts (an address-split sketch follows)
  - It generates the DRAM control signals at the proper timing
- DRAM data must be periodically refreshed
  - The DRAM controller performs the DRAM refresh, using a refresh counter
[Block diagram: the memory address bus A[0:9]/A[10:19]/A[20:23] is split by the controller into column, row, and chip-select; a time-delay generator and mux drive RAS#, CAS# and R/W#; data is transferred on D[0:7]]
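
A minimal sketch of the address split, assuming a hypothetical device where the low bits form the column, the middle bits the row, and the top bits the chip select, matching the A[0:9]/A[10:19]/A[20:23] grouping shown in the diagram:

```c
#include <stdint.h>

/* Hypothetical split matching the slide's A[0:9]/A[10:19]/A[20:23] grouping:
 * low bits -> column, middle bits -> row, top bits -> chip select. */
struct dram_addr {
    unsigned column;   /* driven on the address pins with CAS# */
    unsigned row;      /* driven on the address pins with RAS# */
    unsigned chip;     /* decoded into a chip-select signal    */
};

static struct dram_addr split_address(uint32_t addr) {
    struct dram_addr d;
    d.column = addr         & 0x3FF;   /* A[0:9]   */
    d.row    = (addr >> 10) & 0x3FF;   /* A[10:19] */
    d.chip   = (addr >> 20) & 0xF;     /* A[20:23] */
    return d;
}
```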

Improved DRAM Schemes
- Paged Mode DRAM
  - Multiple accesses to different columns from the same row
  - Saves the RAS and RAS-to-CAS delay
- Extended Data Output RAM (EDO RAM)
  - A data output latch enables overlapping the next column address with the current column data
[Timing diagrams: page-mode and EDO accesses to Col n, Col n+1, Col n+2 within the same open row]

Improved DRAM Schemes (cont.)
- Burst DRAM
  - Generates the consecutive column addresses by itself
[Timing diagram: a single Col n address produces Data n, Data n+1, Data n+2]

Synchronous DRAM – SDRAM
- All signals are referenced to an external clock (100MHz-200MHz)
  - Makes timing more precise with other system devices
- 4 banks – multiple pages open simultaneously (one per bank)
- Command-driven functionality instead of signal-driven
  - ACTIVE: selects both the bank and the row to be activated
    - An ACTIVE to a new bank can be issued while accessing the current bank
  - READ/WRITE: selects the column
- Burst-oriented read and write accesses
  - Successive column locations are accessed in the given row
  - The burst length is programmable: 1, 2, 4, 8, or full page
    - A full-page burst may be ended by BURST TERMINATE to get an arbitrary burst length
- A user-programmable Mode Register
  - CAS latency, burst length, burst type
- Auto precharge: may close the row at the last read/write in a burst
- Auto refresh: internal counters generate the refresh address

SDRAM Timing
[Timing diagram: ACT to Bank 0 Row i, RD Col j, RD+PC Col k, ACT to Bank 1 Row m, with interleaved data, CL=2 and BL=1]
- tRCD: the ACTIVE-to-READ/WRITE gap = tRCD(MIN) / clock period (e.g., tRCD > 20ns)
- tRC: the minimum time between successive ACTIVE commands to different rows in the same bank (tRC > 70ns)
- tRRD: the minimum time between successive ACTIVE commands to different banks

DDR SDRAM
- 2n-prefetch architecture
  - The DRAM cells are clocked at the same speed as SDR SDRAM cells
  - The internal data bus is twice the width of the external data bus
  - Data capture occurs twice per clock cycle
    - The lower half of the bus is sampled at the clock rise
    - The upper half of the bus is sampled at the clock fall
- Uses 2.5V (vs. 3.3V in SDRAM)
  - Reduced power consumption
[Diagram: a 2n-wide SDRAM array feeding an n-wide external bus, delivering 400M transfers/sec from a 200MHz clock]

DDR SDRAM Timing
[Timing diagram: 133MHz clock; ACT to Bank 0 Row i, RD Col j with tRCD > 20ns, ACT to Bank 1 Row m with tRRD > 20ns and tRC > 70ns, CL=2; data j, j+1, j+2, j+3, n transferred on both clock edges]

DIMMs
- DIMM: Dual In-line Memory Module
  - A small circuit board that holds memory chips
- 64-bit wide data path (72 bits with parity)
  - Single sided: 9 chips, each with an 8-bit data bus
  - Dual sided: 18 chips, each with a 4-bit data bus
  - Data BW: 64 bits on each rising and falling edge of the clock
- Other pins
  - Address – 14, RAS, CAS, chip select – 4, VDC – 17, Gnd – 18, clock – 4, serial address – 3, …

DDR Standards
- DRAM timing, measured in I/O bus cycles, specifies 3 numbers
  - CAS Latency – RAS-to-CAS Delay – RAS Precharge Time
- CAS latency (the latency to get data in an open page) in nsec
  - CAS Latency × I/O bus cycle time
- Total BW for DDR400
  - 3200 MB/sec = 64 bits × 2 × 200 MHz / 8 (bits/byte)
  - 6400 MB/sec for dual-channel DDR SDRAM
  (these numbers are checked in the sketch below)

  Standard name | Mem clock (MHz) | I/O bus clock (MHz) | Cycle time (ns) | Data rate (MT/s) | VDDQ (V) | Module name | Peak transfer rate (MB/s) | Timings (CL-tRCD-tRP) | CAS Latency (ns)
  DDR-200 | 100  | 100  | 10  | 200  | 2.5 | PC-1600 | 1600   |                       |
  DDR-266 | 133⅓ | 133⅓ | 7.5 | 266⅔ | 2.5 | PC-2100 | 2133⅓  |                       |
  DDR-333 | 166⅔ | 166⅔ | 6   | 333⅓ | 2.5 | PC-2700 | 2666⅔  |                       |
  DDR-400 | 200  | 200  | 5   | 400  | 2.6 | PC-3200 | 3200   | 2.5-3-3, 3-3-3, 3-4-4 | 12.5, 15
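
The two formulas on this slide can be checked directly: peak bandwidth = bus width × 2 transfers per clock × I/O clock, and CAS latency in ns = CL × I/O bus cycle time. A short sketch using the DDR-400 row (the CL=3 grade is taken as the example):

```c
#include <stdio.h>

int main(void) {
    double io_clock_mhz = 200.0;   /* DDR-400: 200 MHz I/O clock  */
    double bus_bits     = 64.0;    /* one DIMM channel            */
    double cl_cycles    = 3.0;     /* e.g. the 3-3-3 timing grade */

    double mb_per_s = bus_bits * 2.0 * io_clock_mhz / 8.0;  /* 3200 MB/s */
    double cl_ns    = cl_cycles * (1000.0 / io_clock_mhz);  /* 15 ns     */

    printf("peak transfer rate: %.0f MB/s (x2 for dual channel)\n", mb_per_s);
    printf("CAS latency       : %.1f ns\n", cl_ns);
    return 0;
}
```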

DDR2
- DDR2 doubles the bandwidth
  - 4n prefetch: internally reads/writes 4× the amount of data of the external bus
  - A DDR2-533 cell works at the same frequency as a DDR266 cell or a PC133 cell
  - Prefetching increases latency
- Smaller page size: 1KB vs. 2KB
  - Reduces activation power – the ACTIVATE command reads all bits in the page
- 8 banks in 1Gb densities and above
  - Increases random accesses
- 1.8V (vs. 2.5V) operation voltage
  - Significantly lower power
[Diagram: memory cell array, I/O buffers and data bus for SDR, DDR and DDR2, showing the growing prefetch width]

DDR2 Standards

  Standard name | Mem clock (MHz) | Cycle time (ns) | I/O bus clock (MHz) | Data rate (MT/s) | Module name | Peak transfer rate (MB/s) | Timings (CL-tRCD-tRP) | CAS Latency (ns)
  DDR2-400  | 100 | 10   | 200 | 400  | PC2-3200 | 3200 | 3-3-3, 4-4-4        | 15, 20
  DDR2-533  | 133 | 7.5  | 266 | 533  | PC2-4200 | 4266 | 3-3-3, 4-4-4        | 11.25, 15
  DDR2-667  | 166 | 6    | 333 | 667  | PC2-5300 | 5333 | 4-4-4, 5-5-5        | 12, 15
  DDR2-800  | 200 | 5    | 400 | 800  | PC2-6400 | 6400 | 4-4-4, 5-5-5, 6-6-6 | 10, 12.5, 15
  DDR2-1066 | 266 | 3.75 | 533 | 1066 | PC2-8500 | 8533 | 6-6-6, 7-7-7        | 11.25, 13.125

DDR3
- 30% power consumption reduction compared to DDR2
  - 1.5V supply voltage, compared to DDR2's 1.8V
  - 90 nanometer fabrication technology
- Higher bandwidth
  - 8-bit deep prefetch buffer (vs. 4-bit in DDR2 and 2-bit in DDR)
- Transfer data rate
  - Effective clock rate of 800–1600 MHz using both rising and falling edges of a 400–800 MHz I/O clock
  - DDR2: 400–800 MHz using a 200–400 MHz I/O clock
  - DDR: 200–400 MHz based on a 100–200 MHz I/O clock
- DDR3 DIMMs
  - 240 pins, the same number as DDR2, and the same size
  - Electrically incompatible, with a different key notch location

DDR3 Standards

  Standard name | Mem clock (MHz) | I/O bus clock (MHz) | I/O bus cycle time (ns) | Data rate (MT/s) | Module name | Peak transfer rate (MB/s) | Timings (CL-tRCD-tRP) | CAS Latency (ns)
  DDR3-800  | 100  | 400   | 2.5    | 800    | PC3-6400  | 6400   | 5-5-5, 6-6-6              | 12½, 15
  DDR3-1066 | 133⅓ | 533⅓  | 1.875  | 1066⅔  | PC3-8500  | 8533⅓  | 6-6-6, 7-7-7, 8-8-8       | 11¼, 13⅛, 15
  DDR3-1333 | 166⅔ | 666⅔  | 1.5    | 1333⅓  | PC3-10600 | 10666⅔ | 8-8-8, 9-9-9              | 12, 13½
  DDR3-1600 | 200  | 800   | 1.25   | 1600   | PC3-12800 | 12800  | 9-9-9, 10-10-10, 11-11-11 | 11¼, 12½, 13¾
  DDR3-1866 | 233⅓ | 933⅓  | 1.07   | 1866⅔  | PC3-14900 | 14933⅓ | 11-11-11, 12-12-12        | 11 11⁄14, 12 6⁄7
  DDR3-2133 | 266⅔ | 1066⅔ | 0.9375 | 2133⅓  | PC3-17000 | 17066⅔ | 12-12-12, 13-13-13        | 11¼, 12 3⁄16

DDR2 vs. DDR3 Performance
- The higher latency of DDR3 SDRAM has a negative effect on streaming operations
Source: xbitlabs

How to Get the Most of Memory?
- Dual-channel DDR (vs. single-channel DDR)
  - Each DIMM pair must be the same
- Balance FSB and memory bandwidth (the arithmetic is checked in the sketch below)
  - An 800MHz FSB provides 800MHz × 64 bit / 8 = 6.4 GB/sec
  - Dual-channel DDR400 SDRAM also provides 6.4 GB/sec
[Diagrams: single-channel vs. dual-channel systems – CPU with L2 cache on the FSB (Front Side Bus), the DRAM controller, and DDR DIMMs on channels A and B]
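
The balance argument on this slide is just two bandwidth products compared; a small sketch of the arithmetic, using the 800 MHz effective FSB and dual-channel DDR400 figures given above:

```c
#include <stdio.h>

int main(void) {
    /* 800 MHz effective FSB, 64 bits wide */
    double fsb_gb_s = 800e6 * 64.0 / 8.0 / 1e9;         /* 6.4 GB/s */

    /* dual-channel DDR400: 2 channels x 64 bits x 400 MT/s */
    double mem_gb_s = 2.0 * 64.0 * 400e6 / 8.0 / 1e9;   /* 6.4 GB/s */

    printf("FSB   : %.1f GB/s\n", fsb_gb_s);
    printf("memory: %.1f GB/s  -> balanced\n", mem_gb_s);
    return 0;
}
```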

How to Get the Most of Memory?
- Each DIMM supports 4 open pages simultaneously
  - The more open pages, the better random accesses are handled
  - It is therefore better to have more DIMMs: n DIMMs give 4n open pages
- DIMMs can be single sided or dual sided
  - Dual-sided DIMMs may have a separate CS for each side
    - The number of open pages is doubled (goes up to 8 per DIMM)
  - This is not a must – dual-sided DIMMs may also have a common CS for both sides, in which case there are only 4 open pages, as with single-sided DIMMs

SRAM – Static RAM
- True random access
- High speed, low density, high power
- No refresh
- Address not multiplexed
- DDR SRAM
  - 2 READs or 2 WRITEs per clock
  - Common or separate I/O
  - DDRII: 200MHz to 333MHz operation; density: 18/36/72Mb+
- QDR SRAM
  - Two separate DDR ports: one read and one write
  - One DDR address bus: alternating between the read address and the write address
  - QDRII: 250MHz to 333MHz operation; density: 18/36/72Mb+

SRAM vs. DRAM
- Both are random access: the access time is the same for all locations

                | DRAM – Dynamic RAM          | SRAM – Static RAM
  Refresh       | Refresh needed              | No refresh needed
  Address       | Address muxed: row + column | Address not multiplexed
  Access        | Not true "random access"    | True "random access"
  Density       | High (1 transistor/bit)     | Low (6 transistors/bit)
  Power         | Low                         | High
  Speed         | Slow                        | Fast
  Price/bit     | Low                         | High
  Typical usage | Main memory                 | Cache

Read Only Memory (ROM)
- Random access
- Non-volatile
- ROM types
  - PROM – Programmable ROM
    - Burnt once using special equipment
  - EPROM – Erasable PROM
    - Can be erased by exposure to UV light, and then reprogrammed
  - E²PROM – Electrically Erasable PROM
    - Can be erased and reprogrammed on board
    - Write (programming) time is much longer than for RAM
    - Limited number of writes (thousands)