Segmentation and Paging Considerations


Segmentation and Paging Considerations
Operating Systems
A. Frank - P. Weisberg

Virtual Memory Management
Background
Demand Paging
Demand Segmentation
Paging Considerations
Page Replacement Algorithms
Virtual Memory Policies

Dynamics of Segmentation
Typically, each process has its own segment table. As with paging, each segment-table entry contains a present (valid/invalid) bit and a modified bit. If the segment is in main memory, the entry contains the starting address and the length of that segment. Other control bits may be present if protection and sharing are managed at the segment level. Logical-to-physical address translation is similar to paging, except that the offset is added to the starting address (instead of appended).
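The translation just described can be sketched in code. The entry layout and field names below are illustrative assumptions, not any specific architecture:

```java
// Sketch of logical-to-physical translation in a pure segmentation
// system. The SegmentEntry layout is a simplified assumption.
final class SegmentEntry {
    final boolean present;   // present (valid) bit
    final long base;         // starting physical address of the segment
    final long length;       // segment length in bytes
    SegmentEntry(boolean present, long base, long length) {
        this.present = present; this.base = base; this.length = length;
    }
}

final class SegTranslator {
    // Returns the physical address, or throws on a segment/protection fault.
    static long translate(SegmentEntry[] table, int segNum, long offset) {
        SegmentEntry e = table[segNum];
        if (!e.present)
            throw new IllegalStateException("segment fault: not in main memory");
        if (offset >= e.length)
            throw new IllegalStateException("protection fault: offset beyond segment length");
        return e.base + offset;   // offset is ADDED to the base, not appended
    }
}
```

For example, a segment with base 0x4000 and length 0x1000 maps offset 0x10 to physical address 0x4010, while any offset of 0x1000 or more faults.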

Address Translation in a Segmentation System

Comments on Segmentation
In each segment-table entry, we have both the starting address and the length of the segment; the segment can thus dynamically grow or shrink as needed. But variable-length segments introduce external fragmentation and are more difficult to swap in and out. It is natural to provide protection and sharing at the segment level, since segments are visible to the programmer (pages are not). Useful protection bits in a segment-table entry:
read-only/read-write bit
kernel/user bit

Comparison of Paging and Segmentation

Combined Segmentation and Paging
To combine their advantages, some OSs page the segments. Several combinations exist; assume each process has:
one segment table.
several page tables: one page table per segment.
The virtual address consists of:
a segment number: used to index the segment table, whose entry gives the starting address of the page table for that segment.
a page number: used to index that page table to obtain the corresponding frame number.
an offset: used to locate the word within the frame.
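The three-field virtual address above can be sketched as bit-field extraction. The 8/12/12 bit split (4 KB pages) is an illustrative assumption; real hardware varies:

```java
// Sketch of virtual-address decomposition for combined segmentation
// and paging. The 8-bit segment / 12-bit page / 12-bit offset split
// is an assumed example layout.
final class CombinedAddress {
    static final int OFFSET_BITS = 12;   // 4 KB pages (assumed)
    static final int PAGE_BITS   = 12;

    static int segment(int va) { return va >>> (PAGE_BITS + OFFSET_BITS); }
    static int page(int va)    { return (va >>> OFFSET_BITS) & ((1 << PAGE_BITS) - 1); }
    static int offset(int va)  { return va & ((1 << OFFSET_BITS) - 1); }

    // pageTables[seg] stands in for the page table of that segment:
    // it maps a page number to a frame number.
    static int toPhysical(int va, int[][] pageTables) {
        int frame = pageTables[segment(va)][page(va)];
        return (frame << OFFSET_BITS) | offset(va);  // frame number appended to offset
    }
}
```

Note the contrast with pure segmentation: here the frame number is concatenated with the offset, whereas a segment base is added to it.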

Simple Combined Segmentation and Paging
The segment base is the physical address of the page table of that segment. Present/modified bits are present only in the page-table entry. Protection and sharing info most naturally resides in the segment-table entry, e.g., a read-only/read-write bit, a kernel/user bit, etc.

Address Translation in Combined Segmentation/Paging

Paging Considerations
Locality, VM and Thrashing
Prepaging (Anticipatory Paging)
Page size issue
TLB reach
Program structure
I/O interlock
Copy-on-Write
Memory-Mapped Files

Degree of multiprogramming to be reached

Locality in a Memory-Reference Pattern

Locality and Virtual Memory
Principle of locality of reference: memory references within a process tend to cluster. Hence, only a few pieces of a process will be needed over a short period of time, and it is possible to make intelligent guesses about which pieces will be needed in the future. This suggests that virtual memory can work efficiently (i.e., thrashing should not occur too often).

Possibility of Thrashing (1)
If a process does not have "enough" pages, the page-fault rate is very high:
page fault to get the page;
replace some existing frame;
but quickly need the replaced frame back.
This leads to:
low CPU utilization;
the operating system thinking that it needs to increase the degree of multiprogramming;
another process being added to the system.
Thrashing: a process is busy swapping pages in and out.

Possibility of Thrashing (2)
To accommodate as many processes as possible, only a few pieces of each process are maintained in main memory. But main memory may be full: when the OS brings one piece in, it must swap another piece out. The OS must not swap out a piece of a process just before that piece is needed. If it does this too often, thrashing results: the processor spends most of its time swapping pieces rather than executing user instructions.

Locality and Thrashing
Why does demand paging work? Locality model:
a process migrates from one locality to another;
localities may overlap.
Why does thrashing occur?
Σ (size of localities) > total memory size

Prepaging
Can help reduce the large number of page faults that occur at process startup or resumption. Prepage all or some of the pages a process will need, before they are referenced. But if prepaged pages go unused, I/O and memory were wasted. Assume s pages are prepaged and a fraction α of them are used: is the cost of the s × α saved page faults greater or less than the cost of prepaging the s × (1 − α) unnecessary pages? If α is near zero, prepaging loses.
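The trade-off above can be checked with back-of-the-envelope arithmetic. The per-page costs here are assumed to be in the same (arbitrary) units:

```java
// Back-of-the-envelope check of the prepaging trade-off: compare the
// cost of the s*alpha faults avoided against the cost of the
// s*(1-alpha) useless prepages. Costs are assumed per-page values.
final class Prepaging {
    static boolean prepagingWins(int s, double alpha,
                                 double costFault, double costPrepage) {
        double saved  = s * alpha * costFault;          // faults avoided
        double wasted = s * (1 - alpha) * costPrepage;  // useless prepages
        return saved > wasted;
    }
}
```

With equal per-page costs the break-even point is α = 0.5: above it prepaging wins, at or below it, it does not.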

Page Size Issues
Sometimes OS designers have a choice, especially if running on a custom-built CPU. Page size selection must take into consideration:
fragmentation
page table size
I/O overhead
number of page faults
locality
TLB size and effectiveness
Always a power of 2, usually in the range 2^12 (4,096 bytes) to 2^22 (4,194,304 bytes). On average, growing over time.

The Page Size Issue (1)
Page size is defined by the hardware; the exact size to use is a difficult question:
A large page size is good, since with a small page size more pages are required per process; more pages per process means larger page tables, and hence a larger portion of the page tables in virtual memory.
A large page size is good, since disks are designed to transfer large blocks of data efficiently.
A larger page size means fewer pages in main memory; this increases the TLB hit ratio.
A small page size is good to minimize internal fragmentation.
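The table-size-versus-fragmentation tension can be quantified with a well-known back-of-the-envelope model (it comes from the classic page-size design-issues literature, not from these slides): for average process size s and page-table entry size e, overhead(p) = s·e/p + p/2, minimized at p = sqrt(2·s·e). The sizes in the example are assumptions:

```java
// Classic overhead model for choosing a page size p:
//   page-table cost  = s*e/p   (one entry per page)
//   fragmentation    = p/2     (half the last page wasted, on average)
// Minimizing s*e/p + p/2 gives p = sqrt(2*s*e).
final class PageSizeTradeoff {
    static double overhead(double procBytes, double entryBytes, double pageBytes) {
        return procBytes * entryBytes / pageBytes + pageBytes / 2.0;
    }
    static double optimalPageSize(double procBytes, double entryBytes) {
        return Math.sqrt(2.0 * procBytes * entryBytes);
    }
}
```

For an assumed 1 MB process and 8-byte entries, the optimum lands at 4 KB, which matches the commonly used page size.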

The Page Size Issue (2)
With a very small page size, each page matches only the code that is actually used: faults are low. As page size increases, each page contains more code that is not used, and page faults rise. Page faults decrease again as we approach point P, where the size of a page equals the size of the entire process.

The Page Size Issue (3)
The page-fault rate is also determined by the number of frames allocated per process. Page faults drop to a reasonable value when W frames are allocated, and drop to 0 when the number N of frames is such that the process is entirely in memory.

The Page Size Issue (4)
Page sizes from 1KB to 4KB are most commonly used. The increase in page sizes is related to the trend of increasing block sizes. But the issue is nontrivial, so some processors support multiple page sizes, for example:
Pentium supports 2 sizes: 4KB or 4MB
R4000 supports 7 sizes: 4KB to 16MB

Example Page Sizes

TLB Reach
The amount of memory accessible from the TLB. Ideally, the working set of each process is stored in the TLB; otherwise there is a high degree of page faults.
TLB Reach = (TLB Size) × (Page Size)
Ways to increase TLB reach:
Increase the size of the TLB: might be expensive.
Increase the page size: may increase internal fragmentation, as not all applications require a large page size.
Provide multiple page sizes: allows applications that require larger page sizes to use them without an increase in fragmentation.
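The reach formula is simple enough to sketch directly; the 64-entry, 4 KB-page figures in the example are illustrative assumptions:

```java
// Sketch of the TLB-reach formula: reach = entries × page size.
final class TlbReach {
    static long reach(long entries, long pageSizeBytes) {
        return entries * pageSizeBytes;
    }
    // A working set that fits within the reach can, ideally,
    // be mapped entirely by the TLB.
    static boolean covered(long workingSetBytes, long entries, long pageSizeBytes) {
        return workingSetBytes <= reach(entries, pageSizeBytes);
    }
}
```

For instance, a 64-entry TLB with 4 KB pages has a reach of 256 KB: a 200 KB working set is covered, a 300 KB one is not.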

Program Structure
int[][] A = new int[1024][1024];
Each row is stored in one page.
Program 1:
for (j = 0; j < A.length; j++)
    for (i = 0; i < A.length; i++)
        A[i][j] = 0;
→ 1024 × 1024 page faults (assuming the process has fewer than 1,024 frames).
Program 2:
for (i = 0; i < A.length; i++)
    for (j = 0; j < A.length; j++)
        A[i][j] = 0;
→ 1024 page faults.
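The two fault counts can be reproduced with a small simulation of the slide's assumptions: each row lives on its own page, and (as an assumed worst case) the process holds only one frame, so a page is evicted before the column-major loop returns to it:

```java
// Simulation of the fault counts above. One resident frame (assumed
// worst case): touching a row other than the resident one faults.
final class FaultSim {
    static long faults(boolean rowMajor, int n) {
        long faults = 0;
        int resident = -1;                  // row (page) currently in the frame
        for (int a = 0; a < n; a++)
            for (int b = 0; b < n; b++) {
                int row = rowMajor ? a : b; // the page touched by A[row][col]
                if (row != resident) { faults++; resident = row; }
            }
        return faults;
    }
}
```

Row-major order faults once per row (1,024 faults); column-major order faults on every single access (1,024 × 1,024 faults).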

I/O Interlock
Pages must sometimes be locked into memory. Consider I/O: pages being used to copy a file from a device must be locked against selection for eviction by the page-replacement algorithm.

Copy-on-Write
Copy-on-Write (COW) allows both parent and child processes to initially share the same pages in memory; only if either process modifies a shared page is the page copied. COW allows more efficient process creation, since only modified pages are copied. In general, free pages are allocated from a pool of zero-fill-on-demand pages.
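The mechanism can be modeled in miniature. The reference-counted sharing scheme below is an illustrative toy, not how a real kernel implements COW:

```java
import java.util.Arrays;

// Toy model of copy-on-write between a parent and a child address
// space. Pages are byte arrays; refCount tracks how many spaces
// share a page. This sharing scheme is an assumed simplification.
final class CowPage {
    byte[] data;
    int refCount;
    CowPage(byte[] data) { this.data = data; this.refCount = 1; }
}

final class CowSpace {
    final CowPage[] pages;
    CowSpace(CowPage[] pages) { this.pages = pages; }

    // fork(): the child initially shares every page with the parent.
    CowSpace fork() {
        for (CowPage p : pages) p.refCount++;
        return new CowSpace(pages.clone());
    }

    // A write copies the page first, but only if it is still shared.
    void write(int pageNum, int offset, byte value) {
        CowPage p = pages[pageNum];
        if (p.refCount > 1) {               // shared: copy before writing
            p.refCount--;
            p = new CowPage(Arrays.copyOf(p.data, p.data.length));
            pages[pageNum] = p;
        }
        p.data[offset] = value;
    }
}
```

After fork() both spaces point at the same page objects; the first write to a shared page replaces it with a private copy in the writer only, leaving the other process's view unchanged.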

Before Process 1 Modifies Page C

After Process 1 Modifies Page C

What Happens if There Is No Free Frame?
Frames are used up by process pages, and are also in demand from the kernel, I/O buffers, etc. How much should be allocated to each? Page replacement: find some page in memory that is not really in use and page it out.
Algorithm: terminate a process? swap one out? replace a page?
Performance: we want an algorithm that results in the minimum number of page faults.
The same page may be brought into memory several times.