
1 The Goal: illusion of large, fast, cheap memory
Fact: large memories are slow; fast memories are small.
How do we create a memory that is large, cheap, and fast (most of the time)?
–Hierarchy
–Parallelism

2 An Expanded View of the Memory System
[Figure: processor (Control, Datapath) backed by successive memory levels; moving away from the processor, speed runs from fastest to slowest, size from smallest to biggest, and cost per byte from highest to lowest.]

3 Memory Hierarchy: How Does it Work?
Temporal Locality (Locality in Time):
=> Keep most recently accessed data items closer to the processor
Spatial Locality (Locality in Space):
=> Move blocks consisting of contiguous words to the upper levels
[Figure: upper-level memory holds blocks X and Y and exchanges blocks with lower-level memory; the processor reads from and writes to the upper level.]
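
To make the two properties concrete, here is a minimal C sketch (not from the slides; the array size is chosen arbitrarily) of how they show up in ordinary code:

    #include <stdio.h>

    int main(void) {
        int a[1024];
        int sum = 0;

        /* Spatial locality: consecutive elements of a[] sit in the same
         * cache block, so after a[i] misses, the next few accesses
         * to a[i+1], a[i+2], ... likely hit. */
        for (int i = 0; i < 1024; i++)
            a[i] = i;

        /* Temporal locality: sum and i are reused on every iteration,
         * so they stay in registers or the nearest cache level. */
        for (int i = 0; i < 1024; i++)
            sum += a[i];

        printf("%d\n", sum);
        return 0;
    }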

4 Memory Hierarchy of a Modern Computer System
By taking advantage of the principle of locality:
–Present the user with as much memory as is available in the cheapest technology.
–Provide access at the speed offered by the fastest technology.
[Figure: Processor (Control, Datapath, Registers, On-Chip Cache) → Second Level Cache (SRAM) → Main Memory (DRAM) → Secondary Storage (Disk) → Tertiary Storage (Disk).
Speed (ns): 1s → 10s → 100s → 10,000,000s (10s ms) → 10,000,000,000s (10s sec).
Size (bytes): 100s → Ks → Ms → Gs → Ts.]

5 Exploiting Memory Hierarchy
Users want large and fast memories!
SRAM access times are 2–25 ns at a cost of $100 to $250 per MByte.
DRAM access times are 60–120 ns at a cost of $5 to $10 per MByte.
Disk access times are 10 to 20 million ns at a cost of $0.10 to $0.20 per MByte.
(1997 figures.)
Try to give it to them anyway:
–build a memory hierarchy

6 How is the hierarchy managed?
Registers ↔ memory
–by the compiler (programmer?)
Cache ↔ memory
–by the hardware
Memory ↔ disks
–by the hardware and operating system (virtual memory)
–by the programmer (files)

7 Hits vs. Misses
Read hits:
–this is what we want!
Read misses:
–stall the CPU, fetch the block from memory, deliver it to the cache, restart
Write hits:
–can replace data in cache and memory (write-through)
–write the data only into the cache, and write it back to memory later (write-back)
Write misses:
–read the entire block into the cache, then write the word
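
A hedged C sketch contrasting the two write-hit policies on a single cache line; the struct, function names, and sizes are illustrative, not from the slides:

    #include <stdbool.h>
    #include <stdint.h>

    #define MEM_WORDS 1024

    static uint32_t memory[MEM_WORDS];

    /* One cache line, just enough state to contrast the policies. */
    struct line {
        bool     valid;
        bool     dirty;   /* used only by write-back */
        uint32_t addr;
        uint32_t data;
    };

    /* Write-through: update the cache AND memory on every write hit. */
    void write_through(struct line *l, uint32_t addr, uint32_t value) {
        l->valid = true;
        l->addr  = addr;
        l->data  = value;
        memory[addr] = value;          /* memory is always up to date */
    }

    /* Write-back: update only the cache; memory is fixed up lazily
     * when a dirty line is evicted. */
    void write_back(struct line *l, uint32_t addr, uint32_t value) {
        if (l->valid && l->dirty && l->addr != addr)
            memory[l->addr] = l->data; /* evict: flush the stale word */
        l->valid = true;
        l->dirty = true;
        l->addr  = addr;
        l->data  = value;
    }

    int main(void) {
        struct line l = {0};
        write_through(&l, 3, 42);
        write_back(&l, 5, 7);
        write_back(&l, 9, 8);   /* evicts addr 5, flushing 7 to memory */
        return 0;
    }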

8 Cache
Two issues:
–How do we know if a data item is in the cache?
–If it is, how do we find it?
Our first example:
–block size is one word of data
–"direct mapped": for each item of data at the lower level, there is exactly one location in the cache where it might be (i.e., lots of items at the lower level share locations in the upper level)

9 Direct Mapped Cache
Mapping: address is modulo the number of blocks in the cache
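
The mapping is one line of arithmetic. A sketch assuming 4-byte (one-word) blocks and a hypothetical 8-block cache:

    #include <stdint.h>
    #include <stdio.h>

    #define NUM_BLOCKS 8   /* illustrative cache size, not from the slides */

    int main(void) {
        uint32_t addr = 0x0000004C;               /* example byte address */
        uint32_t block_addr = addr / 4;           /* word (block) address */
        uint32_t index = block_addr % NUM_BLOCKS; /* which cache slot */
        uint32_t tag   = block_addr / NUM_BLOCKS; /* distinguishes the many
                                                     addresses sharing a slot */

        printf("index=%u tag=%u\n", index, tag);  /* prints: index=3 tag=2 */
        return 0;
    }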

10 Direct Mapped Cache
For MIPS: [figure of a direct-mapped cache with one-word blocks]
What kind of locality are we taking advantage of?

11 Direct Mapped Cache
Taking advantage of spatial locality: [figure of a direct-mapped cache with multi-word blocks]

12 Choosing a block size
Large block sizes help with spatial locality, but...
–It takes time to read the memory in: larger block sizes increase the time for misses
–It reduces the number of blocks in the cache: number of blocks = cache size / block size (worked example below)
Need to find a middle ground
–16–64 bytes works nicely
Use split caches because there is more spatial locality in code
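
A quick worked example of the block-count formula, with illustrative sizes: a 16 KB cache with 32-byte blocks holds 16,384 / 32 = 512 blocks; doubling the block size to 64 bytes halves that to 256 blocks, so more addresses compete for fewer slots.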

13 Performance
Simplified model:
execution time = (execution cycles + stall cycles) × cycle time
stall cycles = # of instructions × miss ratio × miss penalty
Two ways of improving performance:
–decreasing the miss ratio
–decreasing the miss penalty
What happens if we increase block size?
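
Plugging made-up numbers into the simplified model: with 10^6 instructions at 1 cycle each, a 5% miss ratio, and a 40-cycle miss penalty, stall cycles = 10^6 × 0.05 × 40 = 2×10^6; with a 2 ns cycle time, execution time = (10^6 + 2×10^6) × 2 ns = 6 ms, so stalls account for two-thirds of the run time.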

14 Decreasing miss ratio with associativity

15 Fully Associative vs. Direct Mapped
Fully associative caches provide much greater flexibility
–Nothing gets "thrown out" of the cache until it is completely full
Direct-mapped caches are more rigid
–Any cached data goes directly where the index says to, even if the rest of the cache is empty
A problem, though...
–Fully associative caches require a complete search through all the tags to see if there's a hit
–Direct-mapped caches only need to look in one place

16 A Compromise: Set Associativity (7.3)
2-way set associative: Address = Tag | Index | Block offset
–Each address has two possible locations with the same index
–One fewer index bit: 1/2 the indexes
4-way set associative: Address = Tag | Index | Block offset
–Each address has four possible locations with the same index
–Two fewer index bits: 1/4 the indexes
[Figure: a 2-way cache with sets 0–7 and a 4-way cache with sets 0–3, each entry holding a valid bit (V), Tag, and Data.]
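
A hedged C sketch of the 2-way lookup; the sizes, names, and struct layout are illustrative, not from the slides. The point is that only the tags within one set are searched, not the whole cache:

    #include <stdbool.h>
    #include <stdint.h>

    #define NUM_SETS 4   /* illustrative: half the slots of an 8-block
                            direct-mapped cache, grouped 2 per set */
    #define WAYS     2

    struct way { bool valid; uint32_t tag; uint32_t data; };
    static struct way cache[NUM_SETS][WAYS];

    /* Check BOTH ways of one set for a matching tag. */
    bool lookup(uint32_t block_addr, uint32_t *out) {
        uint32_t index = block_addr % NUM_SETS;
        uint32_t tag   = block_addr / NUM_SETS;
        for (int w = 0; w < WAYS; w++) {
            if (cache[index][w].valid && cache[index][w].tag == tag) {
                *out = cache[index][w].data;
                return true;   /* hit in one of the set's two locations */
            }
        }
        return false;          /* miss: caller fetches and picks a victim */
    }

    int main(void) {
        /* Preload one way of set 3 with tag 2 (block address 11). */
        cache[3][1] = (struct way){ .valid = true, .tag = 2, .data = 1234 };
        uint32_t v;
        return lookup(11, &v) && v == 1234 ? 0 : 1;   /* exits 0 on a hit */
    }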

