Presentation on theme: "© 1998 Morgan Kaufmann Publishers — Recap: Memory Hierarchy of a Modern Computer System" — Presentation transcript:

1. Recap: Memory Hierarchy of a Modern Computer System (© 1998 Morgan Kaufmann Publishers)
By taking advantage of the principle of locality:
– Present the user with as much memory as is available in the cheapest technology.
– Provide access at the speed offered by the fastest technology.
[Figure: the hierarchy runs from the processor (control, datapath, registers) through an on-chip cache, a second-level cache (SRAM), main memory (DRAM), secondary storage (disk), and tertiary storage (disk/tape). Typical speeds grow from 1s of ns for the on-chip cache through 10s and 100s of ns, to 10,000,000s of ns (10s of ms) for disk and 10,000,000,000s of ns (10s of sec) for tertiary storage; typical sizes grow from Ks of bytes through Ms and Gs to Ts.]

2. Summary of caches
The Principle of Locality:
– A program is likely to access only a relatively small portion of the address space at any instant of time.
– Temporal locality: locality in time. Spatial locality: locality in space.
Three major categories of cache misses:
– Compulsory misses: sad facts of life. Example: cold-start misses.
– Conflict misses: reduce them by increasing cache size and/or associativity. Nightmare scenario: the ping-pong effect!
– Capacity misses: reduce them by increasing cache size.
Cache design space:
– total size, block size, associativity
– replacement policy
– write-hit policy (write-through, write-back)
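The ping-pong conflict effect above can be made concrete with a minimal sketch: a hypothetical direct-mapped cache with one-word blocks and four lines (both sizes assumed for illustration). Two addresses that map to the same line evict each other on every access, even though the cache is mostly empty.

```python
NUM_LINES = 4

def simulate(addresses):
    """Return the miss count for a tiny direct-mapped cache (1-word blocks)."""
    lines = [None] * NUM_LINES          # each line holds one tag, or None
    misses = 0
    for addr in addresses:
        index = addr % NUM_LINES        # the single line this block can use
        tag = addr // NUM_LINES
        if lines[index] != tag:         # miss (compulsory, conflict, or capacity)
            misses += 1
            lines[index] = tag          # install the new block
    return misses

# Addresses 0 and 4 both map to line 0, so they ping-pong: every access misses.
assert simulate([0, 4, 0, 4, 0, 4]) == 6
# Addresses 0 and 1 map to different lines: only the two cold misses remain.
assert simulate([0, 1, 0, 1, 0, 1]) == 2
```

Increasing associativity (letting a block live in either of two lines) would eliminate the ping-pong misses in the first pattern.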

3. Four Questions for the Memory Hierarchy
Q1: Where can a block be placed in the cache? (Block placement)
Q2: How is a block found if it is in the cache? (Block identification)
Q3: Which block should be replaced on a miss? (Block replacement)
Q4: What happens on a write? (Write strategy)

4. Q1: Where can a block be placed in the cache?
– Direct mapped: every block has exactly one possible location, shared with other blocks.
– Set associative: every block has several possible locations, shared with the other blocks in its set.
– Fully associative: a block may be located anywhere.
– Why not always use full associativity?

5. Q2: How is a block found if it is in the cache?
The index is used to find the possible location(s).
The tag on each block in those locations must be compared against the tag in the address we're looking for.
Increasing associativity shrinks the index and expands the tag, so it becomes more expensive to determine whether a block is in the cache.
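The tag/index/offset split can be sketched as bit-field extraction. This is a minimal illustration with assumed parameters: 32-byte blocks (5 offset bits) and 128 sets (7 index bits); the tag is whatever remains of the address.

```python
BLOCK_BITS = 5      # 32-byte blocks  -> 5 block-offset bits (assumed)
INDEX_BITS = 7      # 128 sets        -> 7 index bits (assumed)

def split_address(addr):
    """Split a byte address into (tag, index, offset) fields."""
    offset = addr & ((1 << BLOCK_BITS) - 1)                 # low 5 bits
    index = (addr >> BLOCK_BITS) & ((1 << INDEX_BITS) - 1)  # next 7 bits
    tag = addr >> (BLOCK_BITS + INDEX_BITS)                 # everything above
    return tag, index, offset

# 0x12345678 -> tag 0x12345, set 0x33, byte 0x18 within the block
assert split_address(0x12345678) == (0x12345, 0x33, 0x18)
```

Doubling associativity at the same total size halves the number of sets, so INDEX_BITS drops by one and the tag grows by one bit, which is the index-shrinks/tag-expands trade-off described above.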

6. Q3: Which block should be replaced on a miss?
Easy for direct mapped (why?)
Set associative or fully associative:
– Random
– LRU (Least Recently Used)
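LRU replacement within one set can be sketched with an ordered map as the recency list. This is a hypothetical illustration, not a hardware design; a 2-way set (WAYS) is assumed.

```python
from collections import OrderedDict

WAYS = 2  # assumed associativity of the set

def access(cache, tag):
    """Return True on a hit; on a miss, evict the least recently used tag."""
    if tag in cache:
        cache.move_to_end(tag)          # mark as most recently used
        return True
    if len(cache) == WAYS:
        cache.popitem(last=False)       # evict the LRU entry (front of the dict)
    cache[tag] = True
    return False

cache = OrderedDict()
hits = [access(cache, t) for t in [1, 2, 1, 3, 2]]
# 1 miss, 2 miss, 1 hit, 3 miss (evicts 2), 2 miss (evicts 1)
assert hits == [False, False, True, False, False]
```

Direct mapped is "easy" because each block has exactly one candidate line, so there is no choice to make; random replacement would simply pick any of the WAYS entries instead of the front one.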

7. Q4: What happens on a write?
Write through: the information is written to both the block in the cache and the block in the lower-level memory.
Write back: the information is written only to the block in the cache; the modified cache block is written to main memory only when it is replaced.
– Have to check whether the block is clean or dirty.
Pros and cons of each?
– WT: read misses cannot result in writes.
– WB: repeated writes to the same block require no extra writes to memory.
WT is always combined with write buffers so the processor does not wait for the lower-level memory.
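The WB advantage above (repeated writes coalesce) can be shown with a deliberately minimal sketch for a single cached block; all of it is illustrative, not an implementation.

```python
def write_through(num_writes):
    """Write through: every write to the block also goes to memory."""
    return num_writes                   # memory writes == processor writes

def write_back(num_writes):
    """Write back: the block is just marked dirty; memory is written once,
    when the block is eventually evicted."""
    dirty = num_writes > 0
    return 1 if dirty else 0            # at most one write-back at eviction

assert write_through(100) == 100
assert write_back(100) == 1             # 100 writes coalesce into one
assert write_back(0) == 0               # a clean block is never written back
```

The dirty check in the slide is exactly the `dirty` flag here: a clean victim can be discarded, a dirty one must be written to the lower level first.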

8. Performance
Simplified model:
execution time = (execution cycles + stall cycles) × cycle time
stall cycles = memory accesses × miss ratio × miss penalty
Two ways of improving performance:
– decreasing the miss ratio (one way: increase associativity)
– decreasing the miss penalty (one way: use multi-level caches)
What happens if we increase block size?
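The simplified model above can be worked through numerically. All the numbers below are assumed for illustration only.

```python
def execution_time(exec_cycles, accesses, miss_ratio, miss_penalty, cycle_time):
    """Simplified model: (execution cycles + stall cycles) * cycle time."""
    stall_cycles = accesses * miss_ratio * miss_penalty
    return (exec_cycles + stall_cycles) * cycle_time

# Assumed workload: 10^6 execution cycles, 1.3 * 10^6 memory accesses,
# 2% miss ratio, 50-cycle miss penalty, 2 ns clock cycle.
# stall cycles = 1.3e6 * 0.02 * 50 = 1.3e6, total = 2.3e6 cycles = 4.6 ms
t = execution_time(1_000_000, 1_300_000, 0.02, 50, 2e-9)
assert abs(t - 4.6e-3) < 1e-12
```

Note that stalls dominate here (more stall cycles than execution cycles), which is why halving either the miss ratio or the miss penalty pays off so directly in this model.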

9. Virtual Memory
Main memory can act as a cache for the secondary storage (disk).
Advantages:
– illusion of having more physical memory
– program relocation
– protection

10. Pages: virtual memory blocks
Page faults: the data is not in memory; retrieve it from disk.
– Huge miss penalty, thus pages should be fairly large (e.g., 4 KB).
– Reducing page faults is important (LRU is worth the price).
– The faults can be handled in software (the OS) instead of hardware.
– Using write-through is too expensive, so virtual memory uses write-back.
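With the 4 KB pages mentioned above, splitting a virtual address into a virtual page number and a page offset is a simple division; here is a minimal sketch (the 4 KB size matches the slide, the addresses are assumed).

```python
PAGE_SIZE = 4096                 # 4 KB pages -> 12 offset bits

def split_virtual(addr):
    """Split a virtual byte address into (virtual page number, page offset)."""
    return addr // PAGE_SIZE, addr % PAGE_SIZE

# 0x3204 = page 3, byte 0x204 within that page
assert split_virtual(0x3204) == (3, 0x204)
assert split_virtual(4096) == (1, 0)     # first byte of page 1
```

Because PAGE_SIZE is a power of two, hardware does this with wiring alone: the low 12 bits are the offset, the rest are the page number.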

11. Page Tables
[Figure: translating a virtual page number to a physical page via the page table.]

12. Page Tables
Each memory reference requires 2 memory accesses (why?)

13. Making Address Translation Fast
A cache for address translations: the translation-lookaside buffer (TLB).
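A minimal sketch of why the TLB helps, under assumed names and a toy page table: a translation that hits in the TLB avoids the extra memory access to the page table (the second access from slide 12).

```python
PAGE_SIZE = 4096

def translate(vaddr, tlb, page_table, stats):
    """Translate a virtual address; count TLB hits vs. page-table walks."""
    vpn, offset = divmod(vaddr, PAGE_SIZE)
    if vpn in tlb:
        stats["tlb_hits"] += 1
    else:
        tlb[vpn] = page_table[vpn]      # the extra memory access for the PTE
        stats["page_table_walks"] += 1
    return tlb[vpn] * PAGE_SIZE + offset  # physical frame * size + offset

page_table = {0: 7, 1: 3}               # toy mapping: VPN -> physical frame
tlb, stats = {}, {"tlb_hits": 0, "page_table_walks": 0}
for a in [0x0010, 0x0020, 0x1008]:      # two refs to page 0, one to page 1
    translate(a, tlb, page_table, stats)
assert stats == {"tlb_hits": 1, "page_table_walks": 2}
```

This toy TLB never evicts; a real one is a small, highly associative cache and also holds protection bits, which is how it serves the protection goal from slide 9.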

14. Modern Systems
Very complicated memory systems:
[Figure/table of example systems not preserved in the transcript.]

15. Summary: Levels of the Memory Hierarchy

Level        | Capacity   | Access time  | Cost               | Staging/transfer unit        | Managed by
-------------|------------|--------------|--------------------|------------------------------|------------------
CPU registers| 100s bytes | <10s ns      |                    | 1-8 bytes (instr. operands)  | program/compiler
Cache        | K bytes    | 10-100 ns    | $.01-.001/bit      | 8-128 bytes (blocks)         | cache controller
Main memory  | M bytes    | 100 ns-1 us  | $.01-.001          | 512-4K bytes (pages)         | OS
Disk         | G bytes    | ms           | 10^-3 - 10^-4 cents| M bytes (files)              | user/operator
Tape         | infinite   | sec-min      | 10^-6              |                              |

Upper levels are faster; lower levels are larger.

