1
Cache Manager
Orlando Robertson
Brown Bag Seminar
Microsoft Windows Internals 4th Edition
2
Cache Manager
What is the cache manager?
A set of kernel-mode functions and system threads that cooperate with the memory manager to provide data caching for all Windows file system drivers (both local and network).
This talk covers:
- how the cache manager works, including its key internal data structures and functions
- how it's sized at system initialization time
- how it interacts with other elements of the operating system
- how you can observe its activity through performance counters
- the five flags on the Windows CreateFile function that affect file caching
3
Key Features of the Cache Manager
- Supports all file system types (local and network)
- Uses the memory manager to control which parts of which files are in physical memory
- Uses virtual block data caching
  - allows intelligent read-ahead and high-speed access to the cache without involving file system drivers
- Supports "hints" passed by applications at file open time
  - e.g. random vs. sequential access, temporary file creation
- Supports recoverable file systems, so data can be recovered after a system failure
  - e.g. file systems that use transaction logging
4
Single Centralized Cache
- Caches all externally stored data: local hard disks, floppy disks, network file servers, or CD-ROMs
- Any data can be cached: user data streams or file system metadata
- The Windows caching method depends on the type of data being cached
5
Memory Manager
- The cache manager never knows how much cached data is actually in physical memory
- It accesses data by mapping views of files into system virtual address space, using standard section objects
  - section objects are the basic primitive of the memory manager
- The memory manager pages in blocks that aren't in physical memory as addresses in the mapped views are accessed
6
Memory Manager
- The cache manager copies data to or from virtual addresses and relies on the memory manager to fault the data into (or out of) memory as needed
  - avoids generating read or write I/O request packets (IRPs) to access data for files it's caching
- The memory manager can make global trade-offs on how much memory to give to the system cache vs. user processes
- This design lets processes that open cached files see the same data as processes that map the same files into their user address spaces
7
Cache Coherency
- The cache manager ensures that any process accessing cached data gets the most recent version
- The cache manager and user applications that map files into their address spaces use the same memory management file mapping services
- The memory manager guarantees that it has only one representation of each unique mapped file
  - all views of a file (even overlapping ones) map to a single set of pages in physical memory
8
Virtual Block Caching
- Logical block caching: the cache manager tracks which blocks of a disk partition are in the cache
  - used by Novell NetWare, OpenVMS, and older UNIX systems
- Virtual block caching: the cache manager tracks which parts of which files are in the cache
  - used by the Windows cache manager
- The cache manager maps 256-KB portions of files into system virtual address space, using special system cache routines located in the memory manager
- Makes intelligent read-ahead possible: the cache manager can predict where the caller might read next
- Makes fast I/O possible: the I/O system can satisfy an I/O request from cached data, bypassing the file system
9
Stream-Based Caching
A stream is a sequence of bytes within a file.
- NTFS allows a file to contain more than one stream
- The cache manager is designed to cache each stream independently
  - e.g. NTFS organizes its master file table into streams and caches those streams
- Cached streams are identified by both a file name and a stream name (if more than one stream exists)
10
Recoverable File System Support
- Half-completed I/O operations can corrupt a disk volume and render the entire volume inaccessible
- Recoverable file systems (such as NTFS) are designed to reconstruct the disk volume structure after a system failure
- NTFS maintains a log file that records every intended update to the file system structure (the file system's metadata) before writing the changes to the volume
11
Recoverable File System Support
The cache manager and file system must work together to ensure that the following actions occur in sequence:
1. The file system writes a log file record documenting the volume update it intends to make.
2. The file system calls the cache manager to flush the log file record to disk.
3. The file system writes the volume update to the cache, i.e. it modifies its cached metadata.
4. The cache manager flushes the altered metadata to disk, updating the volume structure.
12
Recoverable File System Support
- A logical sequence number (LSN) identifies the log file record that corresponds to a cache update
- The file system supplies the LSN when it writes data to the cache
- The cache manager tracks the LSN associated with each page in the cache
- Data streams marked "no write" by NTFS are protected: their pages are not written to disk before the corresponding log records are written
13
Recoverable File System Support
When a file system wants to flush a group of dirty pages to disk:
- the cache manager determines the highest LSN associated with the pages to be flushed and reports that number to the file system
- the file system calls the cache manager back to flush log file data up to the point represented by the reported LSN
- the cache manager then flushes the corresponding volume structure updates to disk
The file system and the cache manager thus record what they're going to do before actually doing it, providing recoverability of the disk volume after a system failure.
14
Cache Virtual Memory Management
- The cache manager is given a region of system virtual address space to manage (instead of a region of physical memory)
- It divides each address space into 256-KB slots called views
- It maps views of files into slots in the cache's address space on a round-robin basis
15
Cache Virtual Memory Management
16
Cache Virtual Memory Management
- The cache manager guarantees that active views stay mapped
  - a view is marked active only during a read or write operation to or from the file
- The cache manager unmaps inactive views of a file as it maps new views
  - except for files opened with the FILE_FLAG_RANDOM_ACCESS flag in the CreateFile call
- Pages for unmapped views are sent to the standby or modified lists
  - the FILE_FLAG_SEQUENTIAL_SCAN flag moves pages to the front of the lists
17
Cache Virtual Memory Management
When the cache manager needs to map a view of a file and there are no more free slots in the cache:
- it unmaps the least recently mapped inactive view and uses that slot
- if no views are available, it returns an I/O error
18
Cache Size
Windows computes the size of the system cache.
The virtual and physical sizes of the system cache depend on a number of factors:
- memory size
- version of Windows
19
LargeSystemCache
A registry value:
HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management\LargeSystemCache
- Affects both the virtual and physical sizes of the cache
- Default value is 0 on Windows 2000 Professional and Windows XP
- Default value is 1 on Windows Server systems
20
LargeSystemCache
You can modify LargeSystemCache through:
System -> Advanced -> Performance -> Settings -> Advanced -> Memory Usage
21
Cache Virtual Size
- The system cache virtual size depends on the amount of physical memory
- The default virtual size is 64 MB
- On larger systems, the virtual size of the system cache is computed as:
  128 MB + ((physical memory - 16 MB) / 4 MB) * 64 MB
22
Cache Virtual Size
- On x86 systems, the virtual size is limited to 512 MB
  - unless the registry value LargeSystemCache = 1, in which case cache virtual memory is limited to 960 MB
- More cache virtual memory results in fewer view unmap and remap operations
23
Cache Virtual Size
24
Cache Working Set Size
- The physical size of the system cache is managed as part of the system working set
  - the system dynamically balances demands for physical memory between processes and the operating system
- The system cache shares a single system working set with cache data, paged pool, pageable Ntoskrnl code, and pageable driver code
- When the registry value LargeSystemCache = 1, the memory manager favors the system working set over processes running on the system
- You can examine the physical size of the system cache within the system working set through performance counters or system variables
25
Cache Working Set Size
26
Cache Physical Size
- The system working set does not necessarily reflect the total amount of file data cached in physical memory
- File data might also be on the memory manager's standby or modified page lists
  - the standby list can consume nearly all the physical memory outside the system working set when there is no other demand for physical memory
- The total amount of cached file data is therefore controlled by the memory manager
  - the memory manager is, in some sense, the real cache manager
27
Cache Physical Size
- The cache manager subsystem acts as a facade for accessing file data through the memory manager
  - important for read-ahead and write-behind policies
- Task Manager shows the total amount of file data cached on a system as the value named System Cache
28
Cache Data Structures
- a VACB for each 256-KB slot in the system cache
- a private cache map for each opened cached file
  - holds information for read-ahead
- a shared cache map structure for each cached file
  - points to mapped views of the file
29
Systemwide Data Structures
- Virtual address control blocks (VACBs) track the state of the views in the system cache
- The cache manager allocates all the VACBs required to describe the system cache
- Each VACB represents one 256-KB view in the system cache
30
Systemwide Data Structures
- The first field in a VACB is the virtual address of the data in the system cache
- The second field is a pointer to the shared cache map structure
- The third field identifies the offset within the file at which the view begins
- The fourth field identifies how many active reads or writes are accessing the view
31
Per-File Cache Data Structures
- The shared cache map structure describes the state of the cached file:
  - size and valid data length (for security reasons)
  - list of private cache maps
  - VACBs for currently mapped views
  - pointer to the section object
- The private cache map structure contains the location of the last two reads, used for intelligent read-ahead
- The section object describes the file's mapping into virtual memory
32
Per-File Cache Data Structures
- The VACB index array is an array of pointers to VACBs that the cache manager maintains to track which views of a file are mapped into the system cache
- The cache manager uses a file's VACB index array to check whether requested data is mapped into the cache
  - a nonzero array entry means the referenced data is in the cache
  - a zero array entry means it is not
33
Per-File Cache Data Structures
34
File System Interfaces
- A file system driver determines whether some part of a file is mapped in the system cache
  - if not, it calls the CcInitializeCacheMap function
- The file system driver then calls one of several functions to access the data in a file:
  - the copy method copies user data between cache buffers in system space and a process buffer in user space
  - the mapping and pinning method uses virtual addresses to read and write data directly to cache buffers
  - the physical memory access method uses physical addresses to read and write data directly to cache buffers
- File system drivers must provide two versions of the file read operation to handle page faults: cached and noncached
35
File System Interfaces
Typical interactions among the cache manager, memory manager, and file system drivers in response to user read/write file I/O:
- the cache manager is invoked by a file system through the copy interfaces
- the cache manager creates a view and reads the file data into the user buffer
- the copy operation generates page faults as it accesses each previously invalid page
- the memory manager then initiates noncached I/O into the file system driver to retrieve the data
36
Copying to & from the Cache
- The system cache is in system space, not accessible from user mode (avoiding a potential security hole)
- User application reads and writes to cached files must therefore be serviced by kernel-mode routines
- Data is copied between the cache's buffers in system space and the application's buffers in its process address space
37
Copying to & from the Cache
38
Caching w/ Mapping & Pinning Interfaces
- File system drivers need to read and write the data that describes the files themselves (metadata, or volume structure data)
- The cache manager provides functions that let file system drivers modify data directly in the system cache
- The mapping and pinning interfaces resolve a file system's buffer management problem
  - a file system cannot predict the maximum number of buffers it will need
  - these interfaces eliminate the need for buffers by letting the file system update the volume structure in virtual memory
39
Caching w/ Mapping & Pinning Interfaces
40
Caching w/ Mapping & Pinning Interfaces
41
Caching w/ DMA Interfaces
- Direct memory access (DMA) functions are used to read from or write to cache pages without intervening buffers
  - e.g. a network file system doing a transfer over the network
- The DMA interface returns to the file system the physical addresses of cached user data
  - the data can then be transferred directly from physical memory to a network device
  - can result in significant performance improvements for large transfers
- A memory descriptor list (MDL) is used to describe the physical memory references
42
Caching w/ DMA Interfaces
43
Caching w/ DMA Interfaces
44
Fast I/O
Fast I/O is a means of reading or writing a cached file without going through the work of generating an IRP.
Fast I/O doesn't always occur:
- the first read or write to a file requires setting up the file for caching
- during an asynchronous read or write where the caller would be stalled during paging I/O operations
- when the file in question has a locked range of bytes
45
Fast I/O
When a thread performs a read or write operation:
- the request passes to the fast I/O entry point of the file system driver, which calls the cache manager read or write routine to access the file data directly in the cache
- the cache manager translates the supplied file offset into a virtual address in the cache
- the cache manager copies the data from the cache into the buffer (reads) or from the buffer into the cache (writes)
- the read-ahead information in the caller's private cache map is updated (reads)
- the dirty bit of any modified page in the cache is set so that the lazy writer will know to flush it to disk (writes)
- the modifications are flushed to disk immediately (write-through files)
46
Fast I/O
47
Read Ahead & Write Behind
- The cache manager implements reading and writing file data on behalf of the file system drivers
- Applies only when a file is opened without the FILE_FLAG_NO_BUFFERING flag
- Applies to file I/O using the Windows I/O functions, excluding mapped files
48
Intelligent Read Ahead
- The cache manager uses the principle of spatial locality to perform intelligent read-ahead, predicting what data the calling process is likely to read next
- File read-ahead for logical block caching is more complex
- Asynchronous read-ahead with history extends read-ahead benefits to cases of strided data access
  - uses the last two read requests recorded in the private cache map for the file handle being accessed
- The FILE_FLAG_SEQUENTIAL_SCAN flag to the CreateFile function makes read-ahead even more efficient
  - the cache manager doesn't keep a read history and reads ahead twice as much data
49
Intelligent Read Ahead
- The cache manager's read-ahead is asynchronous
  - performed in a thread separate from the caller's thread, proceeding concurrently with the caller's execution
- The cache manager first accesses the requested virtual page, then queues an additional I/O request to a system worker thread to retrieve additional data
- The worker thread prereads the additional data
  - preread pages are faulted into memory while the program continues executing, so the data is already in memory when the caller requests it
- Use the FILE_FLAG_RANDOM_ACCESS flag when there is no predictable read pattern
50
Write Back Caching & Lazy Writing
- Lazy write means that data written to files is first stored in cache pages and then written to disk later
- Write operations are flushed to disk in batches, reducing the overall number of disk I/O operations
- The memory manager flushes cache pages to disk when:
  - the cache manager requests it
  - demand for physical memory is high
- The decision of when to flush the cache is important:
  - flushing too frequently slows system performance with unnecessary I/O
  - flushing too rarely risks losing modified file data during a system failure
51
Write Back Caching & Lazy Writing
- Once per second, the cache manager's lazy writer function executes and queues one-eighth of the dirty pages in the system cache to be written to disk
- The lazy writer writes an additional number of dirty pages if needed
52
Write Back Caching & Lazy Writing
Disabling lazy writing for a file:
- specify the FILE_ATTRIBUTE_TEMPORARY flag in the Windows CreateFile function call
  - e.g. applications usually delete temporary files soon after closing them
Forcing the cache to write through to disk:
- specify the FILE_FLAG_WRITE_THROUGH flag in the CreateFile function call, or
- have a thread call the Windows FlushFileBuffers function
  - e.g. for applications that can't tolerate even momentary delays between writing a file and seeing the updates on disk
53
Write Back Caching & Lazy Writing
Flushing mapped files:
- The memory manager informs the cache manager when a user maps a file
- When the cache manager writes the dirty pages in the cache, it checks whether the file is also mapped by another process
- If so, the cache manager flushes the entire view of the section
- When a view of a file open in the cache is unmapped, the modified pages are marked as dirty
54
Write Back Caching & Lazy Writing
Lazy writing occurs, in order, when:
- a user unmaps the view
- a process flushes file buffers
55
Write Throttling
- Write throttling prevents system performance from degrading because of a lack of memory when a file system or network server issues a large write operation
- The file system asks the cache manager whether a given number of bytes can be written without hurting performance (CcCanIWrite)
- The file system can set up a callback with the cache manager to be invoked automatically when writes become permitted (CcDeferWrite)
56
Write Throttling
- The dirty page threshold is the number of pages the system cache allows to be dirty before throttling cached writers
- It is computed at system initialization and depends on:
  - physical memory size
  - the LargeSystemCache registry value
- When the system maximum working set size exceeds 4 MB, the dirty page threshold is set to the maximum working set size minus 2 MB
57
Write Throttling
- Also useful for network redirectors transmitting data over slow communication lines
- The CcSetDirtyPageThreshold function sets a limit on the number of dirty cache pages
- Limiting the number of dirty pages ensures that a cache flush operation won't cause a network timeout
58
Write Throttling
59
System Threads
- The cache manager performs lazy write and read-ahead I/O operations by submitting requests to the common critical system worker thread pool
- The cache manager limits its use of these threads:
  - one less than the total number of critical system worker threads on small and medium memory systems
  - two less than the total on large memory systems
- The cache manager organizes work requests into two lists:
  - the express queue, used for read-ahead operations
  - the regular queue, used for lazy writes, write-behinds, and lazy closes
- A per-processor look-aside list tracks the work requests the worker threads need to perform for the cache manager
60
System Threads
- The per-processor look-aside list that tracks cache manager work requests is a fixed-length list:
  - 32 worker queue items on small-memory systems
  - 64 worker queue items on medium-memory systems
  - 128 worker queue items on large-memory Windows Professional systems
  - 256 worker queue items on large-memory Windows Server systems
61
Conclusion
The cache manager:
- provides a high-speed, intelligent mechanism for reducing disk I/O
  - performs intelligent read-ahead
  - provides a fast I/O mechanism
- increases overall system throughput
- delegates physical memory management to the single Windows global memory manager, reducing code duplication and increasing efficiency
62
References
Microsoft Windows Internals, 4th Edition, by David Solomon and Mark Russinovich
The programming interfaces to the cache manager are documented in the Windows Installable File System (IFS) Kit.