1
Cosc 2150: Computer Organization Chapter 6, Part 2 Virtual Memory
2
Cache memory enhances performance by providing faster memory access speed. Virtual memory enhances performance by providing greater memory capacity, without the expense of adding main memory. Instead, a portion of a disk drive serves as an extension of main memory. Most modern OSs use Virtual Memory.
3
Schedulers (very briefly) Virtual memory is controlled by the medium-term scheduler of an OS. An OS normally has 3 schedulers —Long-term scheduler –Controls the degree of multiprogramming, i.e., how many processes line up to use the processor. —Medium-term scheduler –Controls the use of memory (and virtual memory, if it exists) —Short-term scheduler –Decides which process actually gets to use the processor. We’ll come back to schedulers in the OS lecture, but right now we look at the medium-term scheduler and memory management.
4
Memory Management Uni-program —Memory split into two —One for Operating System (monitor) —One for currently executing program Multi-program —“User” part is sub-divided and shared among active processes
5
Swapping Problem: I/O is so slow compared with CPU that even in multi-programming system, CPU can be idle most of the time Solutions: —Increase main memory –Expensive –Leads to larger programs —Swapping
6
What is Swapping? A queue of processes stored on disk Processes “swapped” in as space becomes available As a process completes it is moved out of main memory If none of the processes in memory are ready to use the CPU —Swap out a blocked process to intermediate queue —Swap in a ready process or a new process —But swapping is an I/O process...
7
Partitioning Splitting memory into sections to allocate to processes (including Operating System) Fixed-sized partitions —May not be equal size —Process is fitted into smallest hole that will take it (best fit) —Some wasted memory —Leads to variable sized partitions
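The best-fit rule above can be sketched in a few lines. This is a minimal illustration, not any real allocator: the `(capacity, free)` tuple representation and the function name `best_fit` are assumptions for the example.

```python
def best_fit(partitions, size):
    """Return the index of the smallest free partition that can hold
    a process of the given size, or None if nothing fits.

    `partitions` is a hypothetical list of (capacity, free) tuples."""
    best = None
    for i, (capacity, free) in enumerate(partitions):
        if free and capacity >= size:
            if best is None or capacity < partitions[best][0]:
                best = i
    return best

parts = [(8, True), (32, True), (16, True), (64, True)]
print(best_fit(parts, 12))   # picks the 16 KB partition (index 2)
print(best_fit(parts, 100))  # no partition is large enough -> None
```

Note the wasted memory the slide mentions: the 12 KB process occupies a 16 KB partition, leaving 4 KB of internal fragmentation.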
8
Use of Swapping
9
Fixed Partitioning
10
Variable Sized Partitions (1) Allocate exactly the required memory to a process This leads to a hole at the end of memory, too small to use —Only one small hole - less waste When all processes are blocked, swap out a process and bring in another New process may be smaller than swapped out process Another hole
11
Variable Sized Partitions (2) Eventually have lots of holes (fragmentation) Solutions: —Coalesce - Join adjacent holes into one large hole —Compaction - From time to time go through memory and move all holes into one free block (c.f. disk de-fragmentation)
12
Effect of Dynamic Partitioning
13
Relocation No guarantee that process will load into the same place in memory Instructions contain addresses —Locations of data —Addresses for instructions (branching) Logical address - relative to beginning of program Physical address - actual location in memory (this time) Automatic conversion using base address
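The base-address conversion can be shown with a small sketch. The base and limit values here are made up for illustration; a real machine would hold them in dedicated registers and trap in hardware.

```python
BASE = 0x4000   # hypothetical base register, set when the process is loaded
LIMIT = 0x1000  # hypothetical size of the process's address space

def translate(logical):
    """Convert a logical (program-relative) address to a physical one,
    raising an error if the address falls outside the process's region."""
    if not 0 <= logical < LIMIT:
        raise MemoryError("address outside process bounds")
    return BASE + logical

print(hex(translate(0x0042)))  # -> 0x4042
```

If the process is later swapped back in at a different location, only `BASE` changes; the program's logical addresses stay the same.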
14
Paging Split memory into equal sized, small chunks - page frames Split programs (processes) into equal sized small chunks - pages Allocate the required number of page frames to a process Operating System maintains a list of free frames A process does not require contiguous page frames Use a page table to keep track
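The page-table mechanism can be sketched as bit manipulation on an address. The 4 KB page size and the page-table contents below are assumptions chosen for the example.

```python
PAGE_SIZE = 4096          # assumed 4 KB pages
OFFSET_BITS = 12          # log2(PAGE_SIZE)

# Hypothetical page table: page number -> frame number
page_table = {0: 5, 1: 2, 2: 7}

def logical_to_physical(addr):
    """Split a logical address into (page, offset), look the page up
    in the page table, and rebuild the physical address."""
    page = addr >> OFFSET_BITS          # high bits select the page
    offset = addr & (PAGE_SIZE - 1)     # low bits locate the byte in it
    frame = page_table[page]            # a missing entry would be a fault
    return (frame << OFFSET_BITS) | offset

print(hex(logical_to_physical(0x1ABC)))  # page 1 -> frame 2, so 0x2ABC
```

Because only the high bits are rewritten, the frames a process uses need not be contiguous, exactly as the slide says.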
15
Allocation of Free Frames
16
Logical and Physical Addresses - Paging
17
Virtual Memory Demand paging —Do not require all pages of a process in memory —Bring in pages as required Page fault —Required page is not in memory —Operating System must swap in required page —May need to swap out a page to make space –Page replacement algorithms, see cache replacement algorithms.
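One of the simplest page replacement algorithms is FIFO, sketched below; real systems usually prefer LRU approximations, and the reference string here is just a classic textbook example.

```python
from collections import deque

def count_faults(refs, n_frames):
    """Simulate demand paging with FIFO replacement and count page
    faults. FIFO is only one possible policy."""
    frames = deque()          # pages currently resident, oldest first
    faults = 0
    for page in refs:
        if page not in frames:
            faults += 1                   # page fault: bring the page in
            if len(frames) == n_frames:
                frames.popleft()          # evict the oldest page
            frames.append(page)
    return faults

print(count_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))  # 9 faults
```

Curiously, running the same reference string with 4 frames yields 10 faults under FIFO, i.e., more memory can cause more faults (Belady's anomaly).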
18
Thrashing Too many processes in too little memory Operating System spends all its time swapping Little or no real work is done Disk light is on all the time Solutions —Good page replacement algorithms —Reduce number of processes running —Add more memory
19
Bonus We do not need all of a process in memory for it to run We can swap in pages as required So - we can now run processes that are bigger than total memory available! Main memory is called real memory User/programmer sees much bigger memory - virtual memory
20
Page Table Structure
21
Translation Lookaside Buffer Every virtual memory reference causes two physical memory accesses —Fetch the page table entry —Fetch the data Use a special cache for the page table —The TLB
22
TLB lookup process 1. Extract the page number from the virtual address. 2. Extract the offset from the virtual address. 3. Search for the virtual page number in the TLB. 4. If the (virtual page #, page frame #) pair is found in the TLB, add the offset to the physical frame number and access the memory location. 5. If there is a TLB miss, go to the page table to get the necessary frame number. If the page is in memory, use the corresponding frame number and add the offset to yield the physical address. 6. If the page is not in main memory, generate a page fault and restart the access when the page fault is complete.
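The six steps above can be sketched as follows. The TLB and page-table contents are invented for illustration, and raising an exception only models the page fault of step 6.

```python
PAGE_SIZE = 4096
OFFSET_BITS = 12

tlb = {3: 9}                     # small cache of recent translations
page_table = {1: 4, 2: 6, 3: 9}  # pages currently resident in memory

def access(vaddr):
    """Try the TLB first, fall back to the page table on a miss,
    and raise to model a page fault if the page is not resident."""
    page = vaddr >> OFFSET_BITS          # step 1: page number
    offset = vaddr & (PAGE_SIZE - 1)     # step 2: offset
    if page in tlb:                      # steps 3-4: TLB hit
        frame = tlb[page]
    elif page in page_table:             # step 5: TLB miss, walk the table
        frame = page_table[page]
        tlb[page] = frame                # cache the translation for next time
    else:                                # step 6: page fault
        raise KeyError("page fault: page %d not resident" % page)
    return (frame << OFFSET_BITS) | offset

print(hex(access(0x3010)))  # TLB hit on page 3 -> 0x9010
print(hex(access(0x1FFF)))  # TLB miss, page table gives frame 4 -> 0x4FFF
```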
23
Virtual Memory Putting it all together: The TLB, Page Table, and Main Memory
24
Segmentation Paging is not (usually) visible to the programmer Segmentation is visible to the programmer Usually different segments are allocated to program and data There may be a number of program and data segments A program may be run twice at the same time; the OS uses the same program segment with different data segments —Saving memory!
25
Advantages of Segmentation Simplifies handling of growing data structures Allows programs to be altered and recompiled independently, without re-linking and re-loading Lends itself to sharing among processes Lends itself to protection Some systems combine segmentation with paging
26
Paging and Segmentation Combines the best of both worlds Paging is used for memory allocation Segmentation is used on top of paging —A segment may take X number of pages. —Now data segments can continue to grow as needed, and “new” pages are allocated when a growing segment causes a page fault.
27
Paging and Segmentation (2) Large page tables are cumbersome and slow, but with its uniform memory mapping, page operations are fast. Segmentation allows fast access to the segment table, but segment loading is labor-intensive. Paging and segmentation can be combined to take advantage of the best features of both by assigning fixed-size pages within variable-sized segments. Each segment has a page table. This means that a memory address will have three fields: one for the segment, another for the page, and a third for the offset.
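The three-field address can be sketched with assumed field widths; the 4-bit segment, 8-bit page, and 12-bit offset below are invented for the example, as is the per-segment page table.

```python
# Assumed layout: |segment (4)|page (8)|offset (12)| in a 24-bit address.
OFFSET_BITS, PAGE_BITS = 12, 8

# One page table per segment: segment number -> {page: frame}
segments = {1: {0: 3, 1: 7}}

def split(vaddr):
    """Decompose a virtual address into (segment, page, offset)."""
    offset = vaddr & ((1 << OFFSET_BITS) - 1)
    page = (vaddr >> OFFSET_BITS) & ((1 << PAGE_BITS) - 1)
    seg = vaddr >> (OFFSET_BITS + PAGE_BITS)
    return seg, page, offset

def translate(vaddr):
    seg, page, offset = split(vaddr)
    frame = segments[seg][page]     # two-level lookup: segment, then page
    return (frame << OFFSET_BITS) | offset

print(hex(translate(0x101ABC)))  # segment 1, page 1 -> frame 7, so 0x7ABC
```

A growing segment just gains new entries in its own page table, which is why segments can expand without being contiguous in physical memory.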
28
A Real-World Example The Pentium architecture supports both paging and segmentation, and they can be used in any of the four combinations: neither, paging only, segmentation only, or both. The processor supports two levels of cache (L1 and L2), both having a block size of 32 bytes. The L1 cache is next to the processor, and the L2 cache sits between the processor and memory. The L1 cache is in two parts: an instruction cache (I-cache) and a data cache (D-cache). The next slide shows this organization schematically.
29
A Real-World Example
30
Q & A