
Day 22 Virtual Memory.


1 Day 22 Virtual Memory

2

3 Two-level scheme to support large tables
Consider a process as described here: the logical address space is 4 GiB (2^32 bytes) and the page size is 4 KiB (2^12 bytes), so there are 2^20 pages in the process (2^32 / 2^12). This implies we need 2^20 page table entries. If each page table entry occupies 4 bytes, we need a 2^22-byte (4 MiB) page table, which itself occupies 2^22 / 2^12 = 2^10 pages. The root table therefore consists of 2^10 entries, one for each page that holds part of the page table. The root table occupies 2^12 bytes (4 KiB) and is kept in main memory permanently. A single reference could require two disk accesses (one to bring in the page-table page, one for the data page).
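The arithmetic above can be checked with a short sketch, using the slide's assumed parameters (32-bit addresses, 4 KiB pages, 4-byte page table entries):

```python
# Sketch of the page-table size arithmetic from the slide.
# Assumed parameters: 32-bit logical addresses, 4 KiB pages, 4-byte PTEs.

ADDRESS_BITS = 32
PAGE_SIZE = 2 ** 12          # 4 KiB per page
PTE_SIZE = 4                 # bytes per page table entry

num_pages = 2 ** ADDRESS_BITS // PAGE_SIZE        # 2^20 pages in the process
page_table_bytes = num_pages * PTE_SIZE           # 2^22 bytes = 4 MiB
page_table_pages = page_table_bytes // PAGE_SIZE  # 2^10 pages hold the table
root_table_bytes = page_table_pages * PTE_SIZE    # 2^12 bytes = 4 KiB

print(num_pages, page_table_bytes, page_table_pages, root_table_bytes)
```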

4 (figure: two-level page table. The root page table is always in main memory; the user page tables are brought into main memory when needed.)

5 Inverted page table
A conventional page table can get very large. An inverted page table instead has one entry for every frame in main memory and hence is of a fixed size. A hash function is used to map the page number to an index into the table (the frame number). An entry contains the page number, process ID, valid bit, modify bit, chain pointer, and so on.

6

7 Rehashing techniques for the inverted page table (Fig. 8.27)
(a) Hash function: X mod 8. (b) Chained rehashing.
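A minimal sketch of the inverted page table with chained rehashing, assuming 8 frames and the figure's hash function X mod 8 (the field and helper names here are illustrative, not from any real OS):

```python
# Inverted page table with chained rehashing: one entry per physical frame,
# collisions resolved by following each entry's chain pointer.

NUM_FRAMES = 8

class Entry:
    def __init__(self, page, pid):
        self.page = page
        self.pid = pid
        self.chain = None        # index of the next frame with the same hash

table = [None] * NUM_FRAMES      # one entry per physical frame
free_frames = list(range(NUM_FRAMES))
anchor = {}                      # hash value -> first frame in its chain

def insert(page, pid):
    """Place (page, pid) in a free frame; chain on a hash collision."""
    frame = free_frames.pop(0)
    table[frame] = Entry(page, pid)
    h = page % NUM_FRAMES
    if h in anchor:              # collision: walk to the end of the chain
        f = anchor[h]
        while table[f].chain is not None:
            f = table[f].chain
        table[f].chain = frame
    else:
        anchor[h] = frame
    return frame

def lookup(page, pid):
    """Return the frame holding (page, pid), or None if not resident."""
    f = anchor.get(page % NUM_FRAMES)
    while f is not None:
        if table[f].page == page and table[f].pid == pid:
            return f
        f = table[f].chain
    return None
```

For example, pages 3 and 11 both hash to 3 mod 8 = 3, so the second insertion is chained behind the first; a lookup of page 11 walks the chain from the anchor entry.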

8 Translation Lookaside Buffer (TLB)
Used in conjunction with a page table. The aim is to reduce references to the page table and hence the number of memory accesses (otherwise each fetch needs extra memory accesses just to read the page table). The TLB is a cache that holds a small portion of the page table in a smaller, faster memory, reducing the overall page access time. A TLB entry contains the page number and its page table entry.

9 During address translation:
Check the TLB and simultaneously access the page table. On a TLB hit, use the frame number with the offset to generate the physical address and stop. Otherwise, look at the page table entry: if the page is present, use its frame number with the offset to generate the address and update the TLB. On a page fault, block the process and issue a request to bring the page into main memory; when the page is ready, update the page table.
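The steps above can be sketched in a few lines, modeling both the TLB and the page table as dictionaries mapping page numbers to frame numbers (an illustrative software model, not real MMU hardware):

```python
# Address translation with a TLB in front of a page table.

PAGE_SIZE = 4096
tlb = {}          # small cache of page-table entries
page_table = {}   # full table; a missing key models a page fault

def translate(vaddr):
    """Return (physical address, event) for a virtual address."""
    page, offset = divmod(vaddr, PAGE_SIZE)
    if page in tlb:                       # TLB hit: generate address, stop
        return tlb[page] * PAGE_SIZE + offset, "tlb hit"
    if page in page_table:                # TLB miss, page table hit
        tlb[page] = page_table[page]      # update the TLB
        return page_table[page] * PAGE_SIZE + offset, "tlb miss"
    return None, "page fault"             # block process, fetch the page
```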

10 TLB
If we keep the right entries of the page table in the TLB, we can reduce page table accesses and hence memory accesses. The TLB holds only some of the page table entries, so it uses associative mapping to find an entry; the search time is O(1).

11 Memory access time
Assume a TLB access takes 10 ns, a main-memory access takes 100 ns, and a two-level page table is used.
TLB hit: 10 ns (TLB) + 100 ns (data) = 110 ns.
TLB miss: 10 ns (TLB) + 100 ns (root page table) + 100 ns (page table) + 100 ns (data) = 310 ns.
If 99% of the time you have TLB hits, the average access time = 110 ns × 0.99 + 310 ns × (1 − 0.99) = 112 ns.
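The slide's calculation as a short sketch, with its assumed latencies (10 ns TLB, 100 ns per memory access, two-level page table):

```python
# Average memory access time with a TLB in front of a two-level page table.

TLB_NS = 10
MEM_NS = 100

hit_time = TLB_NS + MEM_NS        # 110 ns: TLB, then data
miss_time = TLB_NS + 3 * MEM_NS   # 310 ns: TLB, root PT, PT, then data

def average_access(hit_ratio):
    return hit_ratio * hit_time + (1 - hit_ratio) * miss_time

print(round(average_access(0.99), 1))   # -> 112.0
```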

12

13 (figure: direct mapping vs. associative mapping of page table entries)

14

15 Page size: a hardware/software decision
Small page size: less internal fragmentation; more pages in main memory; large page tables; few page faults.
Large page size: more internal fragmentation; fewer pages per process; smaller page tables; fewer page faults; fewer processes in main memory.

16

17 Page faults and page size
E.g., small pages:
while(x < 30){          - Page 1
    printTheValues();   - Page 5
    readNewValues();    - Page 6
    filterNewValues();  - Page 11
    writeNewValues();   - Page 12
    printTheValues();   - Page 5
    x++;                - Page 1
}
Since the pages are small, pages 1, 5, 6, 11, and 12 can all reside in main memory at once. Hence, fewer page faults.

18 E.g., medium-sized pages:
while(x < 30){          - Page 1
    printTheValues();   - Page 5
    readNewValues();    - Page 3
    filterNewValues();  - Page 4
    writeNewValues();   - Page 5
    printTheValues();   - Page 5
    x++;                - Page 1
}
Only pages 1, 3, and 4 fit in main memory. So page 5 must be brought in, replacing 1, 3, or 4. Lots of page faults.

19 E.g., large pages:
while(x < 30){          - Page 1
    printTheValues();   - Page 1
    readNewValues();    - Page 1
    filterNewValues();  - Page 2
    writeNewValues();   - Page 2
    x++;                - Page 1
}
Both pages 1 and 2 fit in main memory. Fewer page faults.
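The three examples above can be compared by replaying each loop's page reference string against a frame budget. This sketch assumes FIFO replacement and frame counts chosen so that each scenario has roughly the same total memory (more frames when pages are small); both assumptions are illustrative:

```python
# Replay a page reference string against a fixed number of frames,
# counting page faults under FIFO replacement.
from collections import deque

def count_faults(refs, num_frames):
    frames = deque()
    faults = 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:
                frames.popleft()        # FIFO: evict the oldest page
            frames.append(page)
    return faults

small = [1, 5, 6, 11, 12, 5, 1] * 30   # slide 17: five small pages, 6 frames
medium = [1, 5, 3, 4, 5, 5, 1] * 30    # slide 18: pages 1,3,4,5 contend, 3 frames
large = [1, 1, 1, 2, 2, 1] * 30        # slide 19: two large pages, 2 frames

print(count_faults(small, 6), count_faults(medium, 3), count_faults(large, 2))
```

With the small and large page sizes the working set fits in the frames, so only the initial faults occur; in the medium case page 5 keeps evicting a page that is referenced again, so faults repeat every iteration.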

20 Page faults and number of frames per process

21 Variable page sizes are supported by many architectures.
Operating systems typically support only one page size: it makes the replacement policy simpler and makes resident-set management easier (e.g., deciding how many pages each process gets).

22 VM with segmentation: advantages
Growing data structures: the OS can shrink or enlarge a segment as required. Parts of the process can be recompiled independently, without recompiling the entire process. Sharing is easier. Protection is easier.

23 Segment table entry
present bit, starting address, length of segment, modify bit, protection bit.

24

25

26 Combined paging and segmentation
Sharing and protection are done at the segment level; replacement is done at the page level. The present bit and modified bit live in the page-table entry. Linux uses three-level paging for user space and a buddy allocator for kernel space; UNIX uses paging for user space and dynamic allocation for kernel space.
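Under combined segmentation and paging, a virtual address splits into a segment number, a page number within the segment, and an offset. A minimal sketch, assuming an illustrative 32-bit split (8-bit segment number, 12-bit page number, 12-bit offset; not any particular architecture):

```python
# Two-stage translation: segment table -> per-segment page table -> frame.

PAGE_BITS = 12
PAGE_NUM_BITS = 12
PAGE_SIZE = 1 << PAGE_BITS

# segment table maps segment # -> page table; page table maps page # -> frame #
segment_table = {0: {0: 5, 1: 9}}

def translate(vaddr):
    seg = vaddr >> (PAGE_NUM_BITS + PAGE_BITS)
    page = (vaddr >> PAGE_BITS) & ((1 << PAGE_NUM_BITS) - 1)
    offset = vaddr & (PAGE_SIZE - 1)
    # A real MMU would also check the present and protection bits here.
    frame = segment_table[seg][page]
    return frame * PAGE_SIZE + offset
```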

