Lecture: Large Caches, Virtual Memory

Lecture: Large Caches, Virtual Memory
Topics: cache innovations, virtual memory (Sections 2.4, B.4, B.5)

Techniques to Reduce Cache Misses
- Victim caches
- Better replacement policies: pseudo-LRU, NRU
- Prefetching
- Cache compression

Victim Caches
- A direct-mapped cache suffers from conflict misses because multiple pieces of data map to the same location
- The processor often tries to access data that it recently discarded, so all discarded blocks are placed in a small victim cache (4 or 8 entries); the victim cache is checked before going to L2
- Can be viewed as additional associativity for the few sets that tend to have the most conflicts (see the sketch below)
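The interaction between the two structures fits in a few lines. A minimal C sketch, assuming 64 B lines, a 512-set direct-mapped L1, an 8-entry victim cache, and FIFO victim replacement (all names and parameters are illustrative, not from the slides):

```c
#include <stdbool.h>
#include <stdint.h>

#define NUM_SETS    512
#define VICTIM_SIZE 8

typedef struct { bool valid; uint64_t tag; } Line;

static Line l1[NUM_SETS];          /* tag = block_addr / NUM_SETS       */
static Line victim[VICTIM_SIZE];   /* tag = full block address (FA)     */
static int  victim_next;           /* simple FIFO victim replacement    */

/* Returns true on a hit in either L1 or the victim cache. */
bool access_block(uint64_t block_addr)
{
    uint64_t set = block_addr % NUM_SETS;
    uint64_t tag = block_addr / NUM_SETS;

    if (l1[set].valid && l1[set].tag == tag)
        return true;                          /* L1 hit */

    /* The victim cache is checked before going to L2. */
    for (int i = 0; i < VICTIM_SIZE; i++) {
        if (victim[i].valid && victim[i].tag == block_addr) {
            /* Swap: promote the hit block into L1, demote the
               conflicting L1 line into the freed victim slot. */
            bool     demoted_valid = l1[set].valid;
            uint64_t demoted_addr  = l1[set].tag * NUM_SETS + set;
            l1[set]   = (Line){ true, tag };
            victim[i] = (Line){ demoted_valid, demoted_addr };
            return true;
        }
    }

    /* Miss everywhere: fetch from L2 (not modeled), install in L1,
       and place the discarded L1 line in the victim cache. */
    if (l1[set].valid) {
        victim[victim_next] = (Line){ true, l1[set].tag * NUM_SETS + set };
        victim_next = (victim_next + 1) % VICTIM_SIZE;
    }
    l1[set] = (Line){ true, tag };
    return false;
}
```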

Replacement Policies
- Pseudo-LRU: maintain a binary tree per set and keep track of which side of the tree was touched more recently; requires only simple bit operations
- NRU: every block in a set has a bit; the bit is made zero when the block is touched; if all bits are zero, make them all one; a block with its bit set to 1 is evicted (see the sketch below)
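A minimal NRU sketch for a single set, following the bit protocol above; the way count and the "pick the first candidate" victim order are assumptions for illustration:

```c
#include <stdbool.h>
#include <stdint.h>

#define WAYS 8

static uint8_t nru_bit[WAYS];   /* 1 = eviction candidate */

/* Called on a hit or fill of `way`: mark it recently used. */
void nru_touch(int way)
{
    nru_bit[way] = 0;

    /* If every bit is now zero, reset all bits to one. */
    bool all_zero = true;
    for (int i = 0; i < WAYS; i++)
        if (nru_bit[i]) { all_zero = false; break; }
    if (all_zero)
        for (int i = 0; i < WAYS; i++)
            nru_bit[i] = 1;
}

/* Pick a victim: any block whose bit is 1 (here, the first found). */
int nru_victim(void)
{
    for (int i = 0; i < WAYS; i++)
        if (nru_bit[i])
            return i;
    return 0;   /* fallback for the all-zero initial state */
}
```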

Prefetching
- Hardware prefetching can be employed at any of the cache levels
- It can introduce cache pollution; prefetched data is often placed in a separate prefetch buffer to avoid pollution, and this buffer must be looked up in parallel with the cache access
- Aggressive prefetching increases "coverage" but leads to a reduction in "accuracy" (one common way to define both is sketched below), wasting memory bandwidth
- Prefetches must be timely: they must be issued sufficiently in advance to hide the latency, but not too early (to avoid pollution and eviction before use)
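Exact definitions of coverage and accuracy vary across papers, but one common formulation computes both from simple simulation counters; the struct and field names below are illustrative:

```c
/* Hypothetical counters gathered during a simulation run. */
typedef struct {
    long useful_prefetches;    /* prefetched lines later used on demand */
    long total_prefetches;     /* all prefetches issued                 */
    long baseline_misses;      /* demand misses without any prefetching */
} PrefetchStats;

/* Coverage: fraction of baseline misses eliminated by prefetching. */
double coverage(PrefetchStats s)
{
    return (double)s.useful_prefetches / s.baseline_misses;
}

/* Accuracy: fraction of issued prefetches that turn out useful. */
double accuracy(PrefetchStats s)
{
    return (double)s.useful_prefetches / s.total_prefetches;
}
```

An aggressive prefetcher issues more prefetches, so useful_prefetches (and hence coverage) tends to rise while the useful fraction (accuracy) falls; the useless prefetches are the wasted memory bandwidth.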

Stream Buffers
- Simplest form of prefetch: on every miss, bring multiple sequential cache lines into a FIFO stream buffer
- When you read the top of the queue, the line moves into L1 and the buffer brings in the next sequential line
[Figure: an L1 cache backed by a stream buffer holding sequential lines]
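A minimal stream-buffer sketch; the depth and the circular-queue implementation are assumptions, and the L2/memory fetches are left out:

```c
#include <stdbool.h>
#include <stdint.h>

#define DEPTH 4

static uint64_t stream[DEPTH];   /* sequential line addresses (FIFO) */
static int  head;                /* index of the oldest (top) entry  */
static bool valid;

/* On an L1 miss, (re)allocate the buffer with the next DEPTH lines
   (each would be fetched from L2/memory in a real design). */
void stream_alloc(uint64_t miss_line)
{
    for (int i = 0; i < DEPTH; i++)
        stream[i] = miss_line + 1 + i;
    head  = 0;
    valid = true;
}

/* Check the top of the queue.  On a hit, the line moves into L1 and
   the freed slot is refilled with the next sequential line. */
bool stream_lookup(uint64_t line)
{
    if (!valid || stream[head] != line)
        return false;
    uint64_t next = stream[(head + DEPTH - 1) % DEPTH] + 1;
    stream[head] = next;              /* reuse the slot as the new tail */
    head = (head + 1) % DEPTH;
    return true;
}
```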

Stride-Based Prefetching
- For each load, keep track of the last address accessed by the load and a possibly consistent stride; each prediction-table entry holds a PC tag, prev_addr, stride, and state
- An FSM detects a consistent stride and issues prefetches; it moves through the states init, trans (transient), steady, and no-pred
- Correct predictions move an entry toward steady (init to steady, trans to steady, no-pred to trans); incorrect predictions update the stride and move the entry toward no-pred (init to trans, trans to no-pred, no-pred to no-pred); an incorrect prediction in steady returns the entry to init
[Figure: the four-state FSM with its correct / incorrect (update stride) transitions, and the prediction table indexed by PC]
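A sketch of such a reference prediction table in C. The table size, indexing, and exact transition details are assumptions in the spirit of the FSM above, not a definitive implementation:

```c
#include <stdint.h>

typedef enum { INIT, TRANSIENT, STEADY, NO_PRED } State;

typedef struct {
    uint64_t pc_tag;
    uint64_t prev_addr;
    int64_t  stride;
    State    state;
} RPTEntry;

#define RPT_SIZE 64
static RPTEntry rpt[RPT_SIZE];

/* Called for every executed load.  Returns a prefetch address,
   or 0 when no prefetch should be issued. */
uint64_t rpt_access(uint64_t pc, uint64_t addr)
{
    RPTEntry *e = &rpt[(pc >> 2) % RPT_SIZE];

    if (e->pc_tag != pc) {                 /* new load: (re)initialize */
        *e = (RPTEntry){ pc, addr, 0, INIT };
        return 0;
    }

    int64_t stride  = (int64_t)(addr - e->prev_addr);
    int     correct = (stride == e->stride);
    e->prev_addr    = addr;

    State prev_state = e->state;
    switch (prev_state) {
    case INIT:      e->state = correct ? STEADY    : TRANSIENT; break;
    case TRANSIENT: e->state = correct ? STEADY    : NO_PRED;   break;
    case STEADY:    e->state = correct ? STEADY    : INIT;      break;
    case NO_PRED:   e->state = correct ? TRANSIENT : NO_PRED;   break;
    }
    if (!correct && prev_state != STEADY)
        e->stride = stride;        /* the "incorrect (update stride)" edges */

    /* Prefetch only once the stride looks stable. */
    return (e->state == STEADY) ? addr + e->stride : 0;
}
```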

Intel Montecito Cache
Two cores, each with a private 12 MB L3 cache and a 1 MB L2 cache (Naffziger et al., IEEE Journal of Solid-State Circuits, 2006)

Intel 80-Core Prototype: Polaris
A prototype chip with an entire die of SRAM cache stacked upon the cores

Shared vs. Private Caches in Multi-Core
What are the pros/cons of a shared L2 cache?
[Figure: four cores (P1-P4) with private L1 caches, shown once backed by a single shared L2 and once backed by four private L2s]

Shared vs. Private Caches in Multi-Core
Advantages of a shared cache:
- Space is dynamically allocated among cores
- No waste of space because of replication
- Potentially faster cache coherence (and easier to locate data on a miss)
Advantages of a private cache:
- A smaller L2 means a faster access time
- A private bus to L2 means less contention

UCA and NUCA
- The small caches discussed so far have all been uniform cache access (UCA): the latency for any access is a constant, no matter where the data is found
- For a large multi-megabyte cache, it is expensive to limit access time by the worst-case delay; hence, non-uniform cache architecture (NUCA)

Large NUCA
Issues to be addressed for non-uniform cache access:
- Mapping
- Migration
- Search
- Replication
[Figure: a CPU adjacent to an array of cache banks]

Shared NUCA Cache
- A single tile is composed of a core, its L1 instruction and data caches, and a bank (slice) of the shared L2 cache
- The cache controller forwards address requests to the appropriate L2 bank and handles coherence operations
[Figure: eight tiles (Core 0 through Core 7), each with L1 D$/I$ and an L2 bank, plus a memory controller for off-chip access]
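In such a design, the home L2 bank for a line is often chosen by simple address interleaving. A minimal sketch, assuming 64 B lines and 8 banks selected by the low-order line-address bits (the field widths are assumptions):

```c
#include <stdint.h>

#define LINE_BITS 6      /* log2(64 B line) */
#define BANK_BITS 3      /* log2(8 banks)   */

/* Static mapping of a physical address to its home L2 bank:
   drop the line offset, then take the low bank-index bits. */
static inline unsigned home_bank(uint64_t paddr)
{
    return (paddr >> LINE_BITS) & ((1u << BANK_BITS) - 1);
}
```

With this mapping, consecutive cache lines land in consecutive banks, spreading requests across tiles; the cache controller uses the result to forward each request to the right bank.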

Virtual Memory
- Processes deal with virtual memory: they have the illusion that a very large address space is available to them
- There is only a limited amount of physical memory, which is shared by all processes; a process places part of its virtual memory in this physical memory and the rest is stored on disk
- Thanks to locality, disk access is likely to be uncommon
- The hardware ensures that one process cannot access the memory of a different process

Address Translation
- Virtual and physical memory are broken up into pages; with an 8 KB page size, the page offset is the low 13 bits of the address
- The virtual page number is translated to a physical page number; the page offset is left unchanged
[Figure: a virtual address split into a virtual page number and a 13-bit page offset; the virtual page number is translated to a physical page number that, combined with the same offset, indexes physical memory]
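The bit manipulation is straightforward. A toy C sketch for 8 KB pages, using a flat page-table array and a 32-bit virtual address purely for illustration (real hardware walks a multi-level page table and caches translations in a TLB):

```c
#include <stdint.h>

#define PAGE_OFFSET_BITS 13
#define PAGE_SIZE        (1u << PAGE_OFFSET_BITS)   /* 8 KB            */
#define NUM_VPAGES       (1u << 19)                 /* 32-bit VA space */

static uint64_t page_table[NUM_VPAGES];   /* VPN -> physical page number */

uint64_t translate(uint32_t vaddr)
{
    uint32_t vpn    = vaddr >> PAGE_OFFSET_BITS;   /* virtual page number  */
    uint32_t offset = vaddr & (PAGE_SIZE - 1);     /* unchanged by         */
                                                   /* translation          */
    uint64_t ppn    = page_table[vpn];             /* the translation step */
    return (ppn << PAGE_OFFSET_BITS) | offset;
}
```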
