Page Cache and Page Writeback

Presentation transcript:

Page Cache and Page Writeback
Chris Gill, Brian Kocoloski
CSE 422S – Operating Systems Organization
Washington University in St. Louis, St. Louis, MO 63130

System Architecture
[Slide diagram: CPU and DRAM (managed as struct page * pages) connected by the system bus, with a bridge to I/O buses and peripheral devices (hard drives, keyboards, adapters, etc.).]

- Main memory (RAM): fast, limited capacity
- Secondary memory (e.g., hard drive): slow, large capacity, persistent
[Slide diagram: CPU, DRAM, system bus, bridge, I/O buses, and peripheral devices (hard drives, keyboards, adapters, etc.), as on the previous slide.]

Hardware Performance Differences
Performance comparison of most block devices with other hardware:
- Reading/writing a CPU register: O(nanoseconds)
- Accessing memory in the CPU cache: O(nanoseconds)
- DRAM memory access: O(microseconds)
- Access to a spinning hard drive: O(milliseconds)
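One way to make these gaps concrete is to time the same file read twice: the second pass is usually served from the page cache rather than the device. The sketch below is an illustrative addition, not course code; the default file path and buffer size are arbitrary placeholders.

#define _POSIX_C_SOURCE 200809L
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

/* Read the whole file and return the elapsed time in milliseconds. */
static double read_file_ms(const char *path)
{
    char buf[1 << 16];
    struct timespec t0, t1;
    int fd = open(path, O_RDONLY);
    if (fd < 0) { perror("open"); exit(1); }

    clock_gettime(CLOCK_MONOTONIC, &t0);
    while (read(fd, buf, sizeof(buf)) > 0)
        ;                                    /* drain the whole file */
    clock_gettime(CLOCK_MONOTONIC, &t1);
    close(fd);

    return (t1.tv_sec - t0.tv_sec) * 1e3 +
           (t1.tv_nsec - t0.tv_nsec) / 1e6;
}

int main(int argc, char **argv)
{
    /* Placeholder default path; pass any large-ish existing file instead. */
    const char *path = (argc > 1) ? argv[1] : "/var/log/syslog";
    printf("first read:  %.3f ms\n", read_file_ms(path));  /* may go to storage */
    printf("second read: %.3f ms\n", read_file_ms(path));  /* typically a page cache hit */
    return 0;
}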

I/O Without Caching
- Read/write a block to/from the I/O device
- Without any caching, all reads/writes go to a (slow) storage device along a (relatively slow) I/O bus
- Disk reads are on the order of milliseconds (several orders of magnitude slower than main memory)
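For comparison, Linux lets an application request this uncached behavior explicitly: opening a file with O_DIRECT asks the kernel to bypass the page cache so reads and writes go to the storage device. A minimal sketch, assuming a filesystem that supports O_DIRECT and a 4 KiB logical block size (both assumptions, as is the file name):

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    const size_t blk = 4096;                 /* assumed logical block size */
    void *buf;

    /* O_DIRECT requires the buffer (and usually offset/length) to be block-aligned. */
    if (posix_memalign(&buf, blk, blk) != 0) {
        perror("posix_memalign");
        return 1;
    }

    int fd = open("./testfile", O_RDONLY | O_DIRECT);  /* placeholder path */
    if (fd < 0) {
        perror("open(O_DIRECT)");
        return 1;
    }

    ssize_t n = read(fd, buf, blk);          /* served by the device, not the page cache */
    printf("read %zd bytes directly from storage\n", n);

    close(fd);
    free(buf);
    return 0;
}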

I/O With Caching
Read/write a block to/from the I/O device:
- Read the block from storage
- Allocate a page of memory
- Copy the block contents into the page
- Store the page in the page cache
[Slide diagram: a disk read fills the page; a memory write places it in the page cache.]

I/O With Caching
On a subsequent read/write of the same block:
- Is the block located somewhere in the page cache?
- If yes, read/write directly from memory
- Else, read/write from storage
[Slide diagram: a cache hit is a memory access; a miss falls back to a disk read.]
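The hit/miss logic above can be summarized in a small self-contained toy model. This is not kernel code: the direct-mapped table and every name in it are invented for illustration, standing in for the kernel's real per-file page-cache lookup structures, and the "device read" is faked with a memset.

#include <stdio.h>
#include <string.h>

#define PAGE_SIZE   4096
#define CACHE_SLOTS 16

struct toy_page {
    int           in_use;
    unsigned long index;              /* which block of the file this page holds */
    char          data[PAGE_SIZE];
};

static struct toy_page cache[CACHE_SLOTS];   /* tiny direct-mapped stand-in */

/* Pretend to issue a slow device read for block `index`. */
static void fake_device_read(unsigned long index, struct toy_page *pg)
{
    printf("  (slow path) device read for block %lu\n", index);
    memset(pg->data, (int)('A' + index % 26), PAGE_SIZE);
}

/* The logic from the slide: check the cache first, fall back to the device. */
static struct toy_page *cached_read(unsigned long index)
{
    struct toy_page *slot = &cache[index % CACHE_SLOTS];

    if (slot->in_use && slot->index == index) {
        printf("  (fast path) page cache hit for block %lu\n", index);
        return slot;                      /* served from memory */
    }

    fake_device_read(index, slot);        /* miss: fill a page from "storage" */
    slot->in_use = 1;
    slot->index  = index;                 /* remember it for next time */
    return slot;
}

int main(void)
{
    cached_read(3);    /* first access: miss, goes to the "device" */
    cached_read(3);    /* second access: hit, served from memory */
    return 0;
}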

Page Cache
Simple high-level idea: save file contents in memory for faster future access.
- Q: Does the overhead of checking the page cache matter? Do we care about the overhead when the cache misses? What about when the cache hits?
- Q: What other questions does this create?

Page Cache
Simple high-level idea: save file contents in memory for faster future access.
- Q: Does the overhead of checking the page cache matter?
- Q: What other questions does this create?
  - How large should the cache be?
  - How do we maintain consistency between memory and disk?
  - How frequently should cache/disk contents be synchronized?
  - What should happen on writes?

Cache Size and Consistency
- The size of the page cache grows/shrinks dynamically based on system workload
- Only data that have been accessed are mapped
- The page cache may hold different portions of different files
Write caching:
- Writes may bypass the cache entirely (the so-called "no-write" approach)
- Writes may update both the page cache and disk (the "write-through" approach)
- Writes may update the page cache and mark pages as "dirty", to be written back to disk later (the "write-back" approach)
  - This allows requests to be merged and sorted, per the previous lecture
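On a real Linux system, the "how frequently" and "how much dirty data" questions are exposed as vm.* sysctls. The read-only sketch below just prints a few of them from /proc; the paths are the standard procfs locations and no root access is needed.

#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *tunables[] = {
        "/proc/sys/vm/dirty_background_ratio",    /* % of memory dirty before background writeback starts */
        "/proc/sys/vm/dirty_ratio",               /* % of memory dirty before writers are throttled */
        "/proc/sys/vm/dirty_expire_centisecs",    /* how old dirty data may get before it must be written */
        "/proc/sys/vm/dirty_writeback_centisecs"  /* how often the flusher threads wake up */
    };

    for (size_t i = 0; i < sizeof(tunables) / sizeof(tunables[0]); i++) {
        FILE *f = fopen(tunables[i], "r");
        char val[64] = "?";
        if (f) {
            if (fgets(val, sizeof(val), f))
                val[strcspn(val, "\n")] = '\0';   /* strip trailing newline */
            fclose(f);
        }
        printf("%-45s %s\n", tunables[i], val);
    }
    return 0;
}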

Write Caching Approaches
No-write:
- Only makes sense if you know the data will not be read/written again in the future
Write-through:
- Writes to the page cache are immediately written through to storage
- Benefit: keeps the cache coherent with physical storage
- Downside: every write to storage is issued independently
Write-back:
- Writes to the page cache are NOT written to the backing store immediately
- Downside: inconsistency between memory and storage contents (what happens if the system crashes?)
- Benefit: multiple writes to the backing store can be coalesced and performed in bulk, which the block layer can handle more efficiently (via I/O scheduling approaches)
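From user space, these policies map onto familiar Linux file APIs: a plain write() behaves like write-back (it dirties page-cache pages and returns), O_SYNC approximates write-through (each write() waits for the device), and fsync() lets a write-back user force dirty pages out on demand. A minimal sketch; the file names are placeholders.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    const char *msg = "hello, page cache\n";

    /* Write-back (the default): the write dirties page-cache pages and returns;
     * the device write happens later, when the kernel flushes. */
    int fd = open("/tmp/wb-demo.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return 1; }
    if (write(fd, msg, strlen(msg)) < 0)
        perror("write");
    fsync(fd);        /* explicit flush: force the dirty pages to storage now */
    close(fd);

    /* Write-through-like: with O_SYNC, each write() returns only after the
     * data has reached the device. */
    int fd_sync = open("/tmp/wt-demo.txt",
                       O_WRONLY | O_CREAT | O_TRUNC | O_SYNC, 0644);
    if (fd_sync < 0) { perror("open(O_SYNC)"); return 1; }
    if (write(fd_sync, msg, strlen(msg)) < 0)
        perror("write(O_SYNC)");
    close(fd_sync);

    return 0;
}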

Cache Eviction
Clean vs. dirty:
- If not enough memory is available, Linux replaces clean page(s) first
- If no clean pages are available, Linux forces a writeback of a dirty page, and then replaces that page
Eviction:
- Clean page: no writeback needed
- Dirty page: the "dirty" contents must be written back to persistent storage before the page can be released
Eviction policy:
- Assuming all pages are dirty, what should be evicted?
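A self-contained toy model of the clean-first rule above (invented for illustration, not kernel code): eviction scans for a clean frame first and only writes a dirty frame back when no clean one exists.

#include <stdio.h>

#define NFRAMES 4

struct frame { int in_use; int dirty; };
static struct frame frames[NFRAMES];

/* Pretend to flush a dirty frame's contents to storage. */
static void fake_writeback(int i)
{
    printf("writing back dirty frame %d to storage\n", i);
    frames[i].dirty = 0;
}

/* Return the index of a reclaimed frame, preferring clean ones. */
static int evict_one(void)
{
    for (int i = 0; i < NFRAMES; i++)        /* first pass: any clean frame? */
        if (frames[i].in_use && !frames[i].dirty) {
            frames[i].in_use = 0;
            return i;                        /* no writeback needed */
        }

    for (int i = 0; i < NFRAMES; i++)        /* second pass: fall back to a dirty frame */
        if (frames[i].in_use) {
            fake_writeback(i);               /* must flush before reuse */
            frames[i].in_use = 0;
            return i;
        }

    return -1;                               /* cache was empty */
}

int main(void)
{
    frames[0] = (struct frame){ .in_use = 1, .dirty = 1 };
    frames[1] = (struct frame){ .in_use = 1, .dirty = 0 };
    printf("evicted frame %d\n", evict_one());   /* picks the clean frame 1 */
    printf("evicted frame %d\n", evict_one());   /* forced to write back frame 0 */
    return 0;
}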

Linux's Cache Eviction
- LRU strategy: if we need to evict, find the least recently used clean page and remove it from the cache; no writeback needed
[Slide diagram: clean pages and dirty pages are kept in separate lists, each sorted by LRU.]

Linux's Cache Eviction
- If no clean pages are available, Linux forces a writeback of a dirty page, and then frees the page or marks it clean
[Slide diagram: clean and dirty page lists; a dirty page is written back.]

Cache Eviction
Least recently used (LRU) strategy:
- Evicts the page with the oldest access timestamp
- Based on the idea that "least recently used" is a good approximation of "least likely to be used in the near future"
- An effective heuristic (Linux isn't really clairvoyant)
Linux's current strategy: the two-list strategy (LRU/2):
- Addresses the issue where file contents are only ever read once
- Linux modifies LRU slightly by keeping two separate lists
- Pages move from the inactive list to the active list when they are accessed again
- Within each list, pages are ordered by access timestamp
- If the active list grows larger than the inactive list, older pages move back to the inactive list
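The two-list behavior can be modeled in a few dozen lines of user-space C. This is an invented illustration rather than the kernel's implementation: pages land on the inactive list on first access, are promoted to the active list when touched again, the oldest inactive page is evicted, and the active list is demoted when it outgrows the inactive one.

#include <stdio.h>
#include <string.h>

#define MAX_PAGES 64

static int inactive[MAX_PAGES], n_inactive;   /* index 0 = least recently used */
static int active[MAX_PAGES],   n_active;

/* Remove `page` from a list if present; return 1 if it was found. */
static int list_remove(int *list, int *n, int page)
{
    for (int i = 0; i < *n; i++) {
        if (list[i] == page) {
            memmove(&list[i], &list[i + 1], (size_t)(*n - i - 1) * sizeof(int));
            (*n)--;
            return 1;
        }
    }
    return 0;
}

/* Keep the active list no larger than the inactive list. */
static void rebalance(void)
{
    while (n_active > n_inactive) {
        int demoted = active[0];                  /* oldest active page */
        n_active--;
        memmove(&active[0], &active[1], (size_t)n_active * sizeof(int));
        inactive[n_inactive++] = demoted;         /* back onto the inactive list */
    }
}

static void access_page(int page)
{
    if (list_remove(active, &n_active, page))
        active[n_active++] = page;                /* already hot: refresh its position */
    else if (list_remove(inactive, &n_inactive, page))
        active[n_active++] = page;                /* second touch: promote to active */
    else
        inactive[n_inactive++] = page;            /* first touch: lands on inactive */
    rebalance();
}

static int evict_one(void)
{
    if (n_inactive == 0)
        return -1;                                /* nothing evictable */
    int victim = inactive[0];                     /* coldest inactive page */
    n_inactive--;
    memmove(&inactive[0], &inactive[1], (size_t)n_inactive * sizeof(int));
    return victim;
}

int main(void)
{
    access_page(1); access_page(2); access_page(3);
    access_page(2);                               /* page 2 is now "hot" */
    printf("evicted page %d\n", evict_one());     /* evicts page 1, the coldest inactive page */
    return 0;
}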

Linux's Cache Eviction: Two-List LRU
- Active list: pages considered "hot" (clean, but cannot be evicted)
- Inactive list: pages considered "cold" (clean, and can be evicted)

Linux's Cache Eviction
[Slide diagram: clean pages on the inactive list (clean, can be evicted) are accessed; the active list (clean, cannot be evicted) is shown alongside.]

Linux's Cache Eviction
[Slide diagram: the accessed pages move from the inactive list to the active list.]

Reading file contents into memory
Q: What is the Linux system call we've used that performs very similar operations?
[Slide diagram: a disk read fills a page, which a memory write places in the page cache.]
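One likely candidate is mmap(2) (an assumption here, since the earlier exercises are not part of this transcript): it maps a file's pages into the process's address space through the page cache, so the first touch of a page triggers a disk read into memory and later touches are plain memory accesses. A minimal sketch; the file path is a placeholder.

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/etc/hostname", O_RDONLY);       /* placeholder, non-empty file */
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    /* Map the file; reads through `data` are served from page-cache pages. */
    char *data = mmap(NULL, (size_t)st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (data == MAP_FAILED) { perror("mmap"); return 1; }

    fwrite(data, 1, (size_t)st.st_size, stdout);    /* touch the mapped pages */

    munmap(data, (size_t)st.st_size);
    close(fd);
    return 0;
}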