Chapter 4 Memory Management
The following sections will be skipped and will not be covered by the assignment/final exam: 4.7.3, 4.7.6.
Why Memory Management?
Parkinson's law: programs expand to fill the memory available to hold them.
Programmers' ideal: an infinitely large, infinitely fast, nonvolatile memory.
Reality: a memory hierarchy (registers, cache, main memory, magnetic disk, magnetic tape).
If main memory were large enough to hold everything, the arguments in this chapter would become obsolete.
What Is Memory Management?
Memory manager: the part of the OS that manages the memory hierarchy.
  Keeps track of which parts of memory are in use and which are free.
  Allocates and deallocates memory to processes.
  Manages swapping between main memory and disk.
Basic memory management: each program is loaded into main memory in its entirety and runs there.
Swapping & paging: move processes back and forth between main memory and disk.
Outline
Basic memory management
Swapping
Virtual memory
Page replacement algorithms
Modeling page replacement algorithms
Design issues for paging systems
Implementation issues
Segmentation
Monoprogramming
One program at a time, sharing memory with the OS.
Three variations; the choice depends on system design considerations:
  OS at the bottom of memory, in RAM. Formerly used on mainframes and minicomputers; rarely used any more.
  OS in ROM at the top of memory. Used on some palmtop computers and embedded systems.
  Device drivers at the top of memory in ROM and the rest of the system in RAM below. Used by early personal computers (MS-DOS); the portion of the system in ROM is called the BIOS.
Multiprogramming With Fixed Partitions
Scenario: multiple programs at a time. Problem: how to allocate memory?
Divide memory up into n partitions, equal or unequal; this can be done manually when the system is started.
When a job arrives, put it into the input queue for the smallest partition large enough to hold it.
Any space in a partition not used by its job is lost.
Here we assume that a program always fits in some memory partition.
Example: Multiprogramming With Fixed Partitions
Figure: memory divided into four partitions, each with its own input queue. The OS occupies 0-100K; partition 1 is 100K-200K, partition 2 is 200K-400K, partition 3 is 400K-700K, and partition 4 is 700K-800K.
Single Input Queue
Disadvantage of multiple input queues: small jobs may wait in a long queue while the queue for a larger partition stands empty.
Solution: a single input queue feeding all partitions (same four partitions as above).
How to Pick Jobs?
Pick the first job in the queue that fits an empty partition: fast, but may waste a large partition on a small job.
Pick the largest job that fits an empty partition: memory efficient, but slow, and the smallest jobs are often interactive ones that deserve the best service.
Policies for efficiency and fairness:
  Keep at least one small partition around.
  A job may not be skipped more than k times.
This is a trade-off: space efficiency and time efficiency can be traded against each other, and in many cases we must make a compromise.
A Naïve Model for Multiprogramming
Multiprogramming improves CPU utilization.
If, on average, a process computes only 20% of the time it sits in memory, 5 processes can keep the CPU busy all the time.
This assumes the processes never wait for I/O at the same time, which is unrealistic.
A Probabilistic Model
A process spends p% of its time waiting for I/O to complete; n processes are in memory at once.
Probability that all n processes are waiting for I/O simultaneously: (p/100)^n.
CPU utilization = 1 - (p/100)^n.
CPU Utilization
CPU utilization = 1 - (p/100)^n.
With 80% I/O wait, reaching CPU utilization >= 80% takes 8 processes: 1 - 0.8^7 is only about 79%, while 1 - 0.8^8 is about 83%.
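To make the arithmetic concrete, here is a minimal sketch (the function name and parameters are ours, not the slides') that computes the smallest n reaching a target utilization:

```python
import math

def min_processes(io_wait: float, target: float) -> int:
    """Smallest n such that CPU utilization 1 - io_wait**n >= target."""
    return math.ceil(math.log(1 - target) / math.log(io_wait))

# With 80% I/O wait and an 80% utilization goal:
print(min_processes(0.8, 0.80))   # -> 8, since 1 - 0.8**7 is only ~0.79
```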
Memory Management for Multiprogramming
Two problems must be solved:
Relocation: a program cannot know in advance at which address it will be loaded in memory.
Protection: a program's memory accesses must be confined to its own area.
Relocation Problem
Programs are written and linked using absolute addresses; at load time the program lives at some real (physical) address.
Example: a procedure at absolute address 100. When the module is loaded into partition 1 (starting at physical address 100K), the procedure is really at 100K + 100.
Relocation problem: translating between absolute addresses and real addresses.
One solution: actually modify the instructions as the program is loaded. The program must then include a list of the program words that hold addresses to be relocated.
Protection
A malicious program can generate a new instruction on the fly and jump into space belonging to other users.
PSW (Program Status Word) solution: the PSW carries a protection code, and the hardware checks every memory reference against it, so the program's space is confined to its own address range.
Relocation/Protection Using Registers
Base register: holds the start address of the partition. Its contents are added to every memory address the program generates.
  Example: with base register = 100K, CALL 100 becomes CALL 100K + 100.
Limit register: holds the length of the partition. Every address is checked against the limit register.
Disadvantage: an addition and a comparison on every memory reference.
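As an illustration of the mechanism (the function and example values are hypothetical, not from the text), every reference is limit-checked and then offset by the base:

```python
def translate(virtual_addr: int, base: int, limit: int) -> int:
    """Relocate a program-generated address; trap if it leaves the partition."""
    if not (0 <= virtual_addr < limit):   # protection: compare against limit
        raise MemoryError("address outside partition")  # hardware would trap
    return base + virtual_addr            # relocation: add the base register

# CALL 100 in a partition starting at 100K becomes a reference to 100K + 100:
print(translate(100, base=100 * 1024, limit=100 * 1024))
```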
Outline
Basic memory management
Swapping
Virtual memory
Page replacement algorithms
Modeling page replacement algorithms
Design issues for paging systems
Implementation issues
Segmentation
In Time-Sharing/Interactive Systems…
Main memory is not large enough to hold all currently active processes.
Intuition: excess processes must be kept on disk and brought in to run dynamically.
Swapping: bring each process in in its entirety. Assumption: each process fits in main memory, but cannot run to completion in one stretch.
Virtual memory: allow programs to run even when they are only partially in main memory. No assumption about program size.
Swapping
Figure: memory allocation changes over time as processes A, B, C, and D are swapped in and out, leaving holes of varying sizes.
Swapping vs. Fixed Partitions
With swapping, the number, location, and size of partitions vary dynamically.
  Flexibility and improved memory utilization.
  But allocating, deallocating, and keeping track of memory become more complicated.
Memory compaction: combine the holes in memory into one big hole.
  Makes allocation more efficient, but requires a lot of CPU time, so it is rarely done in real systems.
How to Enlarge Memory for a Process?
Fixed-size processes: easy.
Growing processes:
  Expand into the adjacent hole, if there is one.
  Otherwise swap the process out to create a large enough hole; if the swap area on disk is full, the process must wait or be killed.
  Better: allocate extra room for growth whenever a process is swapped in or moved.
More than one growing segment (e.g., data and stack)? Let them share one region of extra space and grow toward each other.
Handling Growing Processes
Figure: (a) room for growth allocated above processes A and B; (b) within one allocation, the stack grows downward and the data segment grows upward toward each other, with the program text fixed below.
Memory Management With Bitmaps
Two ways to keep track of memory usage: bitmaps and free lists.
Bitmaps: memory is divided into allocation units, with one bit per unit (0 = free, 1 = occupied).
The size of an allocation unit ranges from a few words to several kilobytes.
Size of Allocation Units
With 4 bytes per unit, 1 bit in the map covers 32 bits of memory, so the bitmap takes 1/33 of memory.
Trade-off between allocation unit size and memory utilization:
  Smaller allocation units mean a larger bitmap.
  Larger allocation units mean a smaller bitmap, but on average half of a process's last unit is wasted.
To find a hole of k units, the allocator must search the bitmap for a run of k consecutive 0 bits, which may mean scanning the entire map: costly!
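A sketch of why this search is costly (a plain Python list stands in for the bitmap; names are illustrative). In the worst case the whole map is scanned:

```python
def find_hole(bitmap: list[int], k: int) -> int:
    """Index of the first run of k free (0) units, or -1 if none exists."""
    run_start = run_len = 0
    for i, bit in enumerate(bitmap):
        if bit == 0:
            if run_len == 0:
                run_start = i
            run_len += 1
            if run_len == k:
                return run_start
        else:
            run_len = 0
    return -1

bitmap = [1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1]
start = find_hole(bitmap, 4)            # -> 6
for i in range(start, start + 4):       # mark the hole as allocated
    bitmap[i] = 1
```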
Memory Management With Linked Lists
Each entry in the list describes either a hole (H) or a process (P): its type, its start address, and its length.
Example (for processes A-E): P 0 5, H 5 3, P 8 6, P 14 4, H 18 2, P 20 6, P 26 3, H 29 3. The entry "P 20 6" is a process segment that starts at 20 and has length 6.
Updating Linked Lists
When process X terminates, combine holes where possible. Four cases for X's neighbors:
  Both neighbors are processes: X's entry simply becomes a hole.
  The left neighbor is a hole: merge it with X's new hole.
  The right neighbor is a hole: merge X's new hole with it.
  Both neighbors are holes: all three merge into a single hole.
Allocate Memory for New Processes
First fit: take the first hole that fits the request, breaking it into a process piece plus a smaller hole (a sketch follows below).
Next fit: like first fit, but keep track of where the last suitable hole was found and resume searching from there instead of from the beginning; slightly worse performance than first fit in practice.
Best fit: take the smallest hole that is adequate. Slower, and it generates tiny useless holes.
Worst fit: always take the largest hole. Not a very good idea either.
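A sketch of first fit over such a list, with (type, start, length) tuples standing in for the record structure (our representation, not the book's):

```python
def first_fit(segments: list[tuple[str, int, int]], need: int):
    """Allocate `need` units from the first hole big enough; split it in two."""
    for i, (kind, start, length) in enumerate(segments):
        if kind == "H" and length >= need:
            segments[i] = ("P", start, need)        # the process piece
            if length > need:                       # the leftover, smaller hole
                segments.insert(i + 1, ("H", start + need, length - need))
            return start
    return None   # no hole fits

segs = [("P", 0, 5), ("H", 5, 3), ("P", 8, 6), ("H", 14, 4)]
print(first_fit(segs, 2))   # -> 5; the hole splits into ("P", 5, 2), ("H", 7, 1)
```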
Using Distinct Lists
Keep distinct lists for processes and holes.
The hole list can be sorted on size, so best fit becomes fast.
Problem: when a process terminates, finding its neighbors to merge holes is very costly.
Quick fit: group holes into lists by size; allocation is fast, but merging holes is still costly.
Note that the information about holes can be stored in the holes themselves.
Outline
Basic memory management
Swapping
Virtual memory
Page replacement algorithms
Modeling page replacement algorithms
Design issues for paging systems
Implementation issues
Segmentation
Why Virtual Memory?
If the program is too big to fit in memory…
Old solution: split the program into pieces (overlays) and swap the overlays in and out. Problem: splitting programs into small pieces is tedious work for the programmer.
Virtual memory: the OS takes care of large programs automatically, keeping the parts currently in use in memory and the other parts on disk.
Virtual and Physical Addresses
Virtual addresses are used and generated by programs; physical addresses are what the memory hardware sees during execution.
With only one program, VA = PA would suffice.
MMU (inside the CPU package): maps virtual addresses to physical addresses. The CPU sends virtual addresses to the MMU; the MMU puts physical addresses on the memory bus.
Paging
The virtual address space is divided into pages; physical memory is divided into page frames of the same size.
Transfers between RAM and disk are always in units of whole pages.
Figure: a 64K virtual address space (sixteen 4K pages) mapped onto 32K of physical memory (eight 4K page frames); pages marked X are not currently in memory.
Paging is the technique used in most virtual memory systems. In hardware, a present/absent bit keeps track of which pages are physically present in memory.
Page fault: an unmapped (absent) page is referenced.
The Magic in the MMU
In this example, an address is a 4-bit page number plus a 12-bit offset:
  2^4 = 16 pages; 2^12 = 4096 bytes per page.
The page table yields the number of the page frame corresponding to each virtual page; the MMU replaces the virtual page number with the physical page frame number and passes the offset through unchanged.
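A sketch of that replacement step for the 16-bit example (the page table contents are invented for illustration):

```python
PAGE_SIZE = 4096                          # 2**12 bytes per page

page_table = {0: 2, 1: 1, 2: 6, 3: 0}     # virtual page -> page frame (assumed)

def mmu(virtual_addr: int) -> int:
    page = virtual_addr >> 12             # top 4 bits: virtual page number
    offset = virtual_addr & 0xFFF         # low 12 bits pass through unchanged
    if page not in page_table:
        raise KeyError(f"page fault on page {page}")   # trap to the OS
    return (page_table[page] << 12) | offset

print(hex(mmu(0x2004)))   # virtual page 2 maps to frame 6, so -> 0x6004
```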
Challenges for Page Tables
The page table can be extremely large: 32-bit virtual addresses with 4 KB pages give about 1M pages, and each process needs its own page table.
The virtual-to-physical mapping must be fast: every instruction makes one or more page table references.
Hardware help is required.
Two Simple Designs for the Page Table
Use fast hardware registers for the page table:
  Load the registers at every process switch; no memory references during mapping.
  Expensive if the page table is large.
Put the whole table in main memory:
  Only one register points to the start of the table, so process switching is fast.
  But every instruction now costs one or more extra memory references.
A pure memory solution is slow and a pure register solution is expensive, so…
Multilevel Page Tables
Avoid keeping all the page tables in memory all the time.
A 32-bit address is split into a 10-bit PT1 field, a 10-bit PT2 field, and a 12-bit offset.
  The top-level table has 1024 entries, indexed by PT1.
  Each second-level table has 1024 entries, indexed by PT2.
If some region of the address space is not currently in use, its second-level table need not be in memory at all.
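A sketch of the two-level walk (dicts stand in for the in-memory tables; every entry is invented). The point is that an absent PT1 entry spares an entire second-level table:

```python
def split(vaddr: int) -> tuple[int, int, int]:
    pt1 = (vaddr >> 22) & 0x3FF           # 10-bit top-level index
    pt2 = (vaddr >> 12) & 0x3FF           # 10-bit second-level index
    return pt1, pt2, vaddr & 0xFFF        # 12-bit offset

top_level = {1: {3: 8}}   # PT1=1 -> a second-level table; PT2=3 -> frame 8

def walk(vaddr: int) -> int:
    pt1, pt2, offset = split(vaddr)
    second = top_level.get(pt1)           # absent: that 4 MB region is unmapped
    if second is None or pt2 not in second:
        raise KeyError("page fault")
    return (second[pt2] << 12) | offset

print(hex(walk((1 << 22) | (3 << 12) | 0x123)))   # -> 0x8123
```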
Typical Page Table Entry
Page frame number: the goal of the mapping.
Present/absent bit: is the page in memory?
Protection: what kinds of access are permitted.
Modified: has the page been written? (If so, it must be written back to disk before eviction.)
Referenced: has the page been used recently?
Caching disabled: bypass the cache for this page (important for pages that map device registers).
Translation Lookaside Buffers
Most programs make a large number of references to a small number of pages.
So cache the heavily used entries in a small associative memory: the TLB (translation lookaside buffer).
On a reference, the virtual address is first looked up in the TLB: if found, the physical address comes straight out; if not, the page table is consulted and the TLB updated.
Some mapping and TLB maintenance can also be done in software; see the textbook for details.
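A sketch of the fast path (a dict plays the associative memory; the eviction policy here, dropping an arbitrary entry, is a deliberate simplification):

```python
PAGE_SIZE = 4096
TLB_SIZE = 4
tlb: dict[int, int] = {}                  # virtual page -> page frame

def lookup(vaddr: int, page_table: dict[int, int]) -> int:
    page, offset = divmod(vaddr, PAGE_SIZE)
    if page not in tlb:                   # TLB miss: consult the full table
        if len(tlb) >= TLB_SIZE:
            tlb.pop(next(iter(tlb)))      # real TLBs use smarter replacement
        tlb[page] = page_table[page]      # a KeyError here is a page fault
    return tlb[page] * PAGE_SIZE + offset

page_table = {0: 7, 1: 3, 2: 11}
print(lookup(0x1234, page_table))         # page 1 -> frame 3: 3*4096 + 0x234
```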
Inverted Page Table
On 64-bit computers with 4 KB pages, a conventional page table would need about 30 million gigabytes!
Instead, keep one entry per page frame: with 256 MB of memory, that is 65,536 entries. Saves space!
But the virtual-to-physical mapping becomes hard. Solutions: use the TLB for the common case, and hash the virtual address to find the entry on a TLB miss.
Warm-up Summary: Memory Management Techniques
Goal: provide a consistent virtual memory abstraction to users, with memory allocation and deallocation.
Techniques so far:
  Only one small process at a time: easy.
  Multiple small processes: fixed partitions and swapping.
  Multiple large processes: virtual memory, built on page tables.
Outline
Basic memory management
Swapping
Virtual memory
Page replacement algorithms
Modeling page replacement algorithms
Design issues for paging systems
Implementation issues
Segmentation
Page Replacement
On a page fault, choose one page to kick out; if it was modified, its disk copy must be updated first.
  Better to choose an unmodified page.
  Better to choose a seldom-used page.
Many similar problems arise elsewhere in computer systems: cache block replacement, replacement of cached web pages in a web server.
Optimal Algorithm
Label each page with the number of instructions that will be executed before that page is next referenced (e.g., page 108 next used after 600 instructions, page 109 after 8026).
Remove the page with the highest label.
Unrealizable: the OS cannot know the future! Still useful as a benchmark.
Not Recently Used (NRU)
Each page has a reference bit R and a modification bit M; R is cleared periodically, e.g., on every clock interrupt.
On a page fault, pages fall into four classes:
  Class 0 (R=0, M=0): not recently referenced, not modified.
  Class 1 (R=0, M=1): not recently referenced, modified.
  Class 2 (R=1, M=0): recently referenced, not modified.
  Class 3 (R=1, M=1): recently referenced, modified.
NRU removes a page at random from the lowest-numbered nonempty class.
Easy to understand, moderately efficient to implement; not optimal, but often adequate.
First-In, First-Out (FIFO)
Remove the oldest page.
But the oldest pages may be the most heavily used ones, so pure FIFO is rarely used.
Second Chance Algorithm
Pages are kept in a list ordered by load time, oldest at the front.
Look for an old page that has not been recently referenced: inspect the R bit of the oldest page before removing it.
  R = 0: throw the page out.
  R = 1: clear R and move the page to the tail of the list, as if it had just been loaded.
Clock Page Algorithm
Second chance constantly moves pages around on its list, which is inefficient.
Instead, keep all page frames on a circular list with a hand pointing to the oldest one: a variation of second chance (a sketch follows below).
When a page fault occurs, inspect the page the hand points to:
  If its R bit is 0, evict the page, insert the new page in its place, and advance the hand one position.
  If R is 1, clear R and advance the hand to the next page.
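A sketch of that loop (the Frame class is our stand-in for a page frame entry):

```python
class Frame:
    def __init__(self, page):
        self.page, self.r = page, 1       # R is set whenever the page is used

def clock_evict(frames: list[Frame], hand: int, new_page) -> int:
    """Replace one page; return the new hand position."""
    while True:
        f = frames[hand]
        if f.r == 0:                      # old and not recently used: evict
            f.page, f.r = new_page, 1
            return (hand + 1) % len(frames)
        f.r = 0                           # second chance: clear R and move on
        hand = (hand + 1) % len(frames)
```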
Least Recently Used (LRU)
Pages heavily used in the last few instructions will probably be heavily used again in the next few.
So remove the page that has gone unused for the longest time.
Maintaining a list of pages (most recently used at the front, least recently used at the rear) would require updating the list on every memory reference: very costly!
Efficient Implementations of LRU
Hardware counter: a global counter is automatically incremented after each instruction, and each page table entry has a 64-bit counter field.
On every reference, the entry's counter is loaded from the global counter.
The page with the lowest counter value is the least recently used one.
LRU Using a Matrix
With n page frames, keep an n x n bit matrix, initially all zeros.
When frame k is referenced: set all bits of row k to 1, then set all bits of column k to 0.
The row whose binary value is lowest belongs to the least recently used frame (a sketch follows below).
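A sketch of the matrix trick, keeping each row as an integer so row values compare directly (the indexing convention is ours):

```python
N = 4
rows = [0] * N                            # rows[k] is row k of the bit matrix

def reference(k: int) -> None:
    rows[k] = (1 << N) - 1                # set all bits of row k to 1
    mask = ~(1 << (N - 1 - k))            # then clear column k in every row
    for i in range(N):
        rows[i] &= mask

for page in (0, 1, 2, 3, 2):
    reference(page)
print(min(range(N), key=lambda k: rows[k]))   # -> 0, the LRU frame
```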
Software LRU: NFU and Aging
For machines with no LRU hardware, implementing LRU in software becomes a must.
NFU (Not Frequently Used): each page has a counter, initially zero; on every clock interrupt, the R bit is added to the counter. The page with the smallest counter value is removed.
Problem: NFU never forgets anything, which hurts recently used pages when the intensive region of the program moves on.
Improvement (aging): before the R bit is added in, the counters are shifted right one bit, and the R bit is added at the leftmost position (a sketch follows below).
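A sketch of one aging tick with 8-bit counters (the list layout and names are ours):

```python
COUNTER_BITS = 8

def tick(counters: list[int], r_bits: list[int]) -> None:
    """Shift each counter right and add that page's R bit at the left."""
    for i in range(len(counters)):
        counters[i] = (counters[i] >> 1) | (r_bits[i] << (COUNTER_BITS - 1))
        r_bits[i] = 0                     # R bits are cleared after each tick

counters, r = [0, 0, 0], [1, 0, 1]
tick(counters, r)                         # -> [128, 0, 128]
r = [0, 1, 1]
tick(counters, r)                         # -> [64, 128, 192]
# Page 0 now has the lowest counter, so it would be the eviction victim.
```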
Example
The page with the lowest counter value is removed; the choice is based on limited history.
Figure: six pages' R bits and 8-bit counters, before and after a clock tick.
With 8-bit counters we only keep history about the last 8 clock ticks.
If multiple pages share the same counter value, one is kicked out at random; there is no guarantee it is really the least recently used one.
Why Page Faults?
When a process starts, none of its pages are in memory:
  The first instruction causes a page fault; touching global variables causes more; so does touching the stack.
After a while, most of the pages the process needs are in memory, and it runs with relatively few page faults.
This is demand paging, and it works because of locality of reference: during any phase of execution, the process references only a relatively small fraction of its pages.
Working Set & Thrashing
Working set: the set of pages a process is currently using.
  If the entire working set is in main memory: few page faults.
  If only part of it is in main memory: many page faults, and the process runs slowly.
Thrashing: a program causing page faults every few instructions.
Working Set Model: Avoiding Thrashing
In multiprogramming systems, processes are moved back and forth between disk and memory.
Many page faults occur each time a process is brought back in; in the worst case, the process is switched out again before its working set has even been loaded, and the faults waste much CPU time.
Working set model: keep track of each process's working set and prepage it, i.e., load the working set before letting the process run.
The Function of the Working Set
Working set w(k, t): at time t, the set of pages used by the k most recent memory references (alternatively, the k most recently referenced pages).
Most programs access a small number of pages at a time, so the working set changes slowly.
As a function of k, w(k, t) rises rapidly at first and then very slowly for large k, so its content is insensitive to the exact value of k.
Idea: evict pages that are not in the working set. Challenge: how to keep track of the working set precisely?
Maintaining the Working Set
Exact bookkeeping would require a shift register of the last k page references, updated on every memory reference with duplicates removed; on a page fault, evict a page not in this set.
Prohibitively expensive!
Approximating the Working Set
Use execution time instead of the last k references: the working set becomes the pages referenced during a recent window of virtual time.
Current virtual time: the amount of CPU time a process has actually used since it started.
Working Set Page Replacement
On a page fault, scan the page table. For each entry: if R = 1, record the current virtual time as the page's time of last use; if R = 0 and the page's age (current virtual time minus time of last use) exceeds the threshold, the page is outside the working set and may be evicted.
WSClock
Working set page replacement scans the entire page table on every page fault: costly!
Improvement: combine the clock algorithm with working set information. Widely used in practice.
To reduce disk traffic, at most n write-backs are allowed per page fault.
Example
Figure: WSClock in operation. Each entry on the circular list holds a time of last use (e.g., 2014, 1213, 1980) and an R bit; a page with R = 0 whose age exceeds the threshold is replaced by the new page, and the hand advances.
Algorithm
Find the first page that can be evicted, i.e., a page not in the working set:
  Clean (not modified): use it.
  Dirty: schedule a write-back to disk, keep searching, and use it only if there is no other choice.
If the hand goes all the way around:
  At least one write-back was scheduled: keep going until a clean page turns up (it is necessarily outside the working set).
  No write-back scheduled: just grab any clean page, or the current page.
A simplified sketch follows below.
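A deliberately simplified sketch (synchronous, with TAU and the write-back model assumed; a real implementation overlaps disk I/O with the scan):

```python
TAU = 50                                  # working-set window, in virtual time

class Entry:
    def __init__(self, last_use, r, m):
        self.last_use, self.r, self.m = last_use, r, m

def wsclock(entries: list[Entry], hand: int, now: int) -> int:
    """Return the index of the frame to reuse."""
    for _ in range(2 * len(entries)):     # at most two sweeps of the clock
        e = entries[hand]
        if e.r:                           # in the working set: spare it
            e.r, e.last_use = 0, now
        elif now - e.last_use > TAU:      # outside the working set
            if not e.m:
                return hand               # clean: claim it immediately
            e.m = False                   # dirty: pretend a write-back completes
        hand = (hand + 1) % len(entries)
    return hand                           # went all the way round: grab current
```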
Summary
Optimal: not implementable, but good as a benchmark.
NRU: very crude.
FIFO: might throw out important pages.
Second chance: big improvement over FIFO.
Clock: realistic.
LRU: excellent, but difficult to implement exactly.
NFU: fairly crude approximation to LRU.
Aging: efficient algorithm that approximates LRU well.
Working set: somewhat expensive to implement.
WSClock: good, efficient algorithm.
Outline
Basic memory management
Swapping
Virtual memory
Page replacement algorithms
Modeling page replacement algorithms
Design issues for paging systems
Implementation issues
Segmentation
How good are those page replacement algorithms? Can we evaluate them theoretically?
More Frames, Fewer Page Faults?
Intuitively correct, but not always: Belady's anomaly.
With FIFO on the classic reference string 0 1 2 3 0 1 4 0 1 2 3 4, a memory of 3 page frames takes 9 page faults, but a memory of 4 page frames takes 10 (a simulation follows below).
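The anomaly is easy to reproduce with a short FIFO simulation (our code, standard library only):

```python
from collections import deque

def fifo_faults(refs: list[int], frames: int) -> int:
    memory, faults = deque(), 0
    for page in refs:
        if page not in memory:
            faults += 1
            if len(memory) == frames:
                memory.popleft()          # evict the oldest page
            memory.append(page)
    return faults

refs = [0, 1, 2, 3, 0, 1, 4, 0, 1, 2, 3, 4]
print(fifo_faults(refs, 3), fifo_faults(refs, 4))   # -> 9 10
```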
Model of Page Replacement
Treat the pages in memory as a stack. When a page is referenced, it is always moved to the top; all pages that were above it move down one position, and all pages below it stay where they are.
Running a reference string through this model yields, at each step, the stack contents and whether the reference caused a page fault.
An Observation: LRU vs. FIFO
LRU has the inclusion property M(m, r) ⊆ M(m+1, r): after r references, the pages held in a memory of m frames are a subset of those held in a memory of m+1 frames.
FIFO does not have this property: in the Belady example above, some pages appear in M(3, r) but not in M(4, r), which is exactly how 4 frames can fault more often than 3 (10 faults versus 9).
Stack Algorithms
A replacement algorithm is a stack algorithm if, when memory is increased by one page frame and the process re-executed, every page present in memory at any point of the first run is also present at that point in the second run: M(m, r) ⊆ M(m+1, r).
Stack algorithms do not suffer from Belady's anomaly.
Distance String
Represent each page reference by its distance from the top of the stack just before the reference; a page that has never been referenced gets distance ∞.
The distance string is what the analysis below is computed from.
Properties of the Distance String
The distance string depends on both the reference string and the paging algorithm.
Its statistical properties determine behavior:
  If most distances are small, say at most k, then k page frames already achieve a low fault rate.
  If distances are spread nearly uniformly from 1 to n, page faults stay frequent no matter how much memory is added.
Figure: two probability densities P(d), one concentrated below k, one almost uniform between 1 and n.
Predicting Page Fault Rates
Scan the distance string only once. Let C_k be the number of times distance k occurs, and C_∞ the number of ∞'s (first references). Then the number of page faults with m frames is
  F_m = C_{m+1} + C_{m+2} + … + C_n + C_∞
For example, the number of page faults with 5 frames is C_6 + C_7 + C_∞. (The slide's example tabulates C_1 … C_7 and C_∞ and the resulting F_1 … F_7.)
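A sketch of the whole pipeline, from reference string to predicted fault counts, under the LRU stack model (our code; inf marks first references):

```python
from collections import Counter
from math import inf

def distances(refs: list[int]) -> list[float]:
    stack, out = [], []
    for page in refs:
        if page in stack:
            d = stack.index(page) + 1     # 1-based distance from the top
            stack.remove(page)
        else:
            d = inf                       # never referenced before
        stack.insert(0, page)             # referenced page moves to the top
        out.append(d)
    return out

refs = [0, 1, 2, 3, 0, 1, 4, 0, 1, 2, 3, 4]
c = Counter(distances(refs))              # c[k] is C_k; c[inf] is C_inf
n = len(set(refs))
faults = {m: sum(v for k, v in c.items() if k > m) for m in range(1, n + 1)}
print(faults)                             # F_m for every memory size m, one scan
```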
Outline
Basic memory management
Swapping
Virtual memory
Page replacement algorithms
Modeling page replacement algorithms
Design issues for paging systems
Implementation issues
Segmentation
Allocating Memory Among Processes
With multiple processes in main memory, a page fault can be handled by local or global page replacement.
Figure: processes A, B, and C, each page labeled with its age. On a fault by A, local replacement evicts A's own lowest-age page (A5, age 3); global replacement evicts the lowest-age page in the whole memory (B3, age 2).
Local/Global Page Replacement
Local page replacement: every process gets a fixed fraction of memory.
  A process can thrash even when plenty of free page frames exist elsewhere, and memory is wasted if its working set shrinks.
Global page replacement: page frames are allocated among processes dynamically. Generally works better.
Assigning Page Frames to Processes
How do we determine the number of page frames each process gets?
Monitor working set sizes using the aging bits; but working set sizes can change in microseconds, while aging bits are a crude measure spread over multiple clock ticks.
Allocation algorithms:
  Periodically give each process an equal share, with variations that adjust for program size subject to some minimum.
  Allocate based on page fault frequency (PFF): give more frames to processes that fault too often.
Load Control
The system can still thrash: the combined working sets exceed memory size, some processes need more memory, and no process needs less.
Remedy: swap some processes to disk until the page-fault rate becomes acceptable; periodically bring some back in and swap others out.
This gives two-level scheduling: a CPU scheduler plus a memory scheduler, which can be combined, taking into account whether processes are CPU-bound or I/O-bound.
Page Size
Hardware page size vs. OS page size: the OS can use a multiple of the hardware size, e.g., combining 512-byte hardware pages into 1 KB page frames.
Arguments for small pages:
  Internal fragmentation: on average, half of each segment's final page is empty; with n segments in memory and page size p, np/2 bytes are wasted.
  Large pages also keep more unused parts of the program in memory.
Arguments for large pages:
  Small pages mean a large page table, with overhead to search it, transfer it to and from disk, and reload it at each switch.
With s = average process size, p = page size, and e = bytes per page table entry:
  overhead = s·e/p + p/2, minimized at p = sqrt(2·s·e).
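A worked instance of the formula (s = 1 MB and e = 8 bytes are assumed, illustrative values):

```python
from math import sqrt

s, e = 1 * 1024 * 1024, 8                 # process size, bytes per table entry

def overhead(p: float) -> float:
    return s * e / p + p / 2              # page table cost + fragmentation

p_opt = sqrt(2 * s * e)
print(p_opt, overhead(p_opt))             # -> 4096.0 bytes: a 4 KB page
```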
Separate Instruction and Data Spaces
Instead of a single address space from 0 to 2^32 - 1 holding both program and data, use two: an instruction space and a data space.
Both spaces are paged independently, each with its own page table.
Shared Pages
Processes running the same program should share its memory pages.
Page tables and process tables are separate, and two processes may have different working sets.
Searching all page tables whenever a shared page is evicted would be costly, so special data structures keep track of shared pages.
Data can be shared too. Copy on write: share clean data pages; give a process its own private copy only when a page is written.
Cleaning Policy
Writing back dirty pages only at the moment free frames are needed is costly: processes block and wait.
Better: keep enough clean pages on hand for paging, using a background paging daemon. A cleaned page's contents are kept, so the page can still be reused.
Two-handed clock: the front hand is driven by the paging daemon (cleaning); the back hand does page replacement. Thanks to the daemon's work, the back hand hits a clean page with higher probability.
Virtual Memory Interface
Letting processes share parts of their memory enables:
  High-bandwidth sharing between processes.
  High-performance message-passing systems.
  Distributed shared memory: sharing memory over a network.
Outline
Basic memory management
Swapping
Virtual memory
Page replacement algorithms
Modeling page replacement algorithms
Design issues for paging systems
Implementation issues
Segmentation
Implementations of virtual memory must choose among the major theoretical algorithms; several practical implementation issues also deserve attention.
Swap Area
A chunk of disk space reserved for a process, the same size as its in-memory image; each process's swap area is recorded in the process table.
The system manages the swap space as a list of free chunks.
Initializing the swap area before a process runs: either copy the entire process image there, or load the process in memory and let it be paged out as needed.
Swap Area for Growing Processes
The data area and stack may grow, so:
  Either reserve separate swap areas for text (program), data, and stack, each possibly consisting of more than one chunk;
  Or reserve nothing in advance: allocate disk space when a page is swapped out and deallocate it when the page is swapped back in.
The latter must keep track of every page's location on disk, which costs a table per process.
Comparison of the Two Methods
Figure: (a) paging to a static swap area; (b) backing up pages dynamically.
When Is the OS Involved in Paging?
Process creation, process execution, page faults, and process termination.
Paging at Process Creation
Determine the (initial) size of the program and data.
Create the page table; while the process is running, its page table must be in memory.
Create the swap area.
Record information about the page table and swap area in the process table.
Paging During Process Execution
Reset the MMU and flush the TLB (translation lookaside buffer).
Make the MMU point to, or copy, the new process's page table.
Optionally bring some or all of the new process's pages into memory.
Paging at Page Faults
Read the hardware registers to identify the virtual address causing the fault.
Compute the page's address on disk.
Find an available page frame, evicting a page if necessary.
Read in the page, then execute the faulting instruction again.
Paging at Process Exit
Release the page table, the pages, and the disk space.
What Happens on a Page Fault
1. Hardware traps to the kernel, saving the PC on the stack.
2. An assembly-language routine saves the registers and calls the OS.
3. The OS identifies which virtual page is needed.
4. It validates the virtual page and locates a free frame, running the replacement algorithm if none is free.
5. If the chosen frame is dirty, its contents are written back first; the frame is marked busy, and the CPU may switch to another process meanwhile.
6. A disk read of the needed page is scheduled.
7. When it completes, the page tables are updated and the page is marked normal.
8. The faulting instruction is backed up to its starting state.
9. The OS returns to the assembly-language routine.
10. It reloads the registers and other state and returns to user space.
Backing up an instruction is a quite complicated operation; interested students can read the relevant textbook section. We skip that section, and it will not be covered in the assignment/final exam.
Outline
Basic memory management
Swapping
Virtual memory
Page replacement algorithms
Modeling page replacement algorithms
Design issues for paging systems
Implementation issues
Segmentation
Implementations of virtual memory must choose among the major theoretical algorithms; several practical implementation issues also deserve attention.
A Motivating Example
A compiler maintains many large tables:
  Symbol table: names and attributes of variables.
  Constant table: integer and floating-point constants.
  Parse tree: the result of syntactic analysis.
  Stack: for procedure calls within the compiler.
The tables grow and shrink as compilation proceeds. How should the compiler's space be managed?
Space Management for Tables
In a single one-dimensional virtual address space, growing tables bump into one another, and moving space from one table to another is tedious work.
Segments: many completely independent address spaces, one per table (source text, symbol table, constant table, parse tree, call stack).
This frees programmers from managing expanding and contracting tables and eliminates organizing the program into overlays.
Segments
Segmentation gives a two-dimensional memory: an address is a segment number plus an address within the segment.
Each segment is a linear sequence of addresses, from 0 up to some maximum.
Different segments may have different lengths, segment lengths may change during execution, and different segments can grow or shrink independently.
Multiple Segments in a Process
Segments are logical entities. They simplify linking separately compiled components and facilitate sharing among multiple processes (e.g., sharing one code segment).
If all components were packed into one one-dimensional address space, a change in one component would force rearranging the addresses of all the others; with segments, the other components' addresses are unaffected.
Comparing Paging & Segmentation
Consideration | Paging | Segmentation
Need the programmer be aware of it? | No | Yes
How many linear address spaces? | 1 | Many
Can the total address space exceed physical memory? | Yes | Yes
Are procedures and data distinguished and separately protected? | No | Yes
Are fluctuating tables accommodated easily? | No | Yes
Is sharing of procedures between users facilitated? | No | Yes
Why was the technique invented? | To get a linear address space larger than physical memory | To break programs and data into logically independent address spaces and to aid sharing and protection
Implementing Pure Segmentation
Segments have different sizes, so memory develops holes over time: checkerboarding, or external fragmentation.
Example: physical memory initially holds five segments; evicting segment 1 to bring in segment 7 leaves a hole, evicting segment 4 for segment 5 leaves another, and evicting segment 3 for segment 6 leaves still more.
Memory ends up divided into chunks, some holding segments and some holding holes, and the holes waste memory. Solution: compact memory.
Segmentation With Paging
For large segments, only the working set needs to be kept in memory: page the segments.
Each program has a segment table; since the segment table can be huge, it is itself treated as a segment. If (part of) a segment is in memory, its page table must be in memory.
An address is segment # + virtual page # + offset:
  the segment # selects the segment's page table;
  the page table entry for the virtual page # gives the page address;
  the page address + offset gives the word address.
A sketch follows below.
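A sketch of that three-step translation (the tables are invented; real hardware adds limit, protection, and fault checks at each step):

```python
PAGE_SIZE = 4096

segment_table = {                         # segment # -> that segment's page table
    0: {0: 5, 1: 9},                      # virtual page -> page frame (assumed)
    1: {0: 2},
}

def translate(seg: int, vaddr: int) -> int:
    page, offset = divmod(vaddr, PAGE_SIZE)
    page_table = segment_table[seg]       # missing segment: segmentation fault
    frame = page_table[page]              # missing page: page fault
    return frame * PAGE_SIZE + offset

print(translate(0, PAGE_SIZE + 7))        # segment 0, page 1 -> frame 9: 36871
```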
The Intel Pentium Case
Supports very large segments: 16K segments of up to 1 billion 32-bit words each.
LDT (Local Descriptor Table): one per program, describing the segments local to that program.
GDT (Global Descriptor Table): one for the whole system, describing system segments, including those of the OS.
Converting to an Address
A (selector, offset) pair is translated by looking the selector up in a descriptor table to obtain the segment's base address, limit, and other fields; after a limit check, the base address is added to the offset to form a 32-bit linear address.
For details, see Section 4.8.3.
Summary
Simplest case: no swapping or paging.
Swapping.
Virtual memory and page replacement, notably aging and WSClock.
Modeling paging systems.
Implementation issues.
Segmentation.