1 Chapter 8 – Main Memory (Pgs 315 - 350)

2 Overview
 Everything to do with memory is complicated by the fact that more than one program can be in memory at a time:
 A. The program may not be loaded at address 00000000h (linking/loading issues)
 B. Virtual memory may be used (mapping/paging issues)
 C. Components may be duplicated (caching issues)

3 Review - Hardware
 Each process has a base address (start) and a limit (range) that together specify its accessible addresses
 The base and limit are held in registers
 Trying to access memory outside of the allowable address space generates a "segmentation fault"
 This check applies only in user mode; kernel mode can access any address
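A minimal sketch of this base/limit check (the register values and trap handling below are illustrative, not a real hardware API):

    #include <stdio.h>
    #include <stdlib.h>

    /* Illustrative stand-ins for the hardware base and limit registers. */
    static unsigned base_register  = 0x00300000;
    static unsigned limit_register = 0x00120000;  /* size of the process's space */

    /* Performed in hardware for every user-mode memory reference. */
    unsigned check_access(unsigned address)
    {
        if (address < base_register ||
            address >= base_register + limit_register) {
            fprintf(stderr, "trap: segmentation fault at %#x\n", address);
            exit(EXIT_FAILURE);        /* the OS would terminate the process */
        }
        return address;                /* access is allowed to proceed */
    }

    int main(void)
    {
        printf("%#x is legal\n", check_access(0x00300004));
        check_access(0x00420000);      /* first address past the limit: traps */
        return 0;
    }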

4 Binding
 Linking a symbolic address in a program to an actual memory location in main memory
 Absolute Code: The compiler puts physical addresses directly into the program
 Relocatable Code: The compiler uses offsets from some base that is determined at load time, and the actual addresses are inserted by the loader
 Both of these are limited; modern OSs use virtual addressing instead

5 Linking & Loading (Figure 8.3)

6 Address Translation
 Logical (Virtual) Address: What actually appears in the compiled program
 Physical Address: An actual main memory location
 Goal: Map (at run time) the logical/virtual address space to the physical address space
 Done by the MMU (Memory Management Unit), hardware that the OS configures

7 Dynamic Relocation
 Virtual addresses are offsets
 Relocation register provides the base address and is set when the program is loaded
 MMU adds offset to base to get physical address
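A minimal sketch of what the MMU does under dynamic relocation (the register value and function name are illustrative):

    #include <stdio.h>

    /* The relocation (base) register is added to every logical address. */
    static unsigned relocation_register = 14000;

    unsigned mmu_translate(unsigned logical)
    {
        return relocation_register + logical;   /* physical address */
    }

    int main(void)
    {
        /* With a base of 14000, logical address 346 maps to physical 14346. */
        printf("logical 346 -> physical %u\n", mmu_translate(346));
        return 0;
    }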

8 Dynamic Loading
 Lets us load program parts and pieces as they are needed
 Speeds up initial loading
 Saves memory by not loading unneeded parts
 A program can be bigger than main memory if only the parts currently in use are loaded
 Can be augmented with dynamic linking
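One way to picture dynamic loading is run-time loading of a shared library; a minimal Windows sketch is below (the library name plugin.dll and its exported function do_work() are hypothetical examples):

    #include <windows.h>
    #include <stdio.h>

    typedef int (*work_fn)(void);

    int main(void)
    {
        /* The DLL is mapped into memory only when this call executes,
           not when the program itself is loaded. */
        HMODULE lib = LoadLibraryA("plugin.dll");
        if (lib == NULL) {
            printf("could not load plugin.dll (error %lu)\n", GetLastError());
            return 1;
        }

        work_fn do_work = (work_fn)GetProcAddress(lib, "do_work");
        if (do_work != NULL)
            printf("do_work() returned %d\n", do_work());

        FreeLibrary(lib);    /* release the library when finished */
        return 0;
    }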

9 Swapping
 When we run out of main memory, we can temporarily store some program parts in a backing store (on disk) and thus free some memory space
 When we need them again, they are copied back into main memory
 This movement between main memory and disk is called swapping
 A part may need to be swapped back to the same location if dynamic relocation is not used
 Variants of swapping are widely used in OSs

10 Contiguous Memory Allocation
 Obsolete
 Simple, but limited
 Good for RTOS and Embedded OS if the running applications change rarely and have known sizes
 Treats entire program as a single entity
 Can waste a lot of memory

11 Partitions
 A section of memory that holds one process
 Fixed Size: The process must fit into the partition and any extra space is unused
 Variable Size: The partition is sized to the process, but partitions must be fit together into memory: "dynamic storage allocation"

12 Allocation Strategies
 1. First Fit
 2. Best Fit
 3. Worst Fit
 First fit is generally faster than best fit and produces similar results
 Worst fit is generally a bad idea
 First fit tends to lose about half a block to fragmentation for every block allocated (the "50-percent rule"), leaving roughly one-third of memory unusable
 Compaction (i.e., defragmentation of main memory) is slow but sometimes necessary
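A minimal sketch of first fit over a simple free list (the hole structure and sizes are illustrative; best fit would instead scan the whole list for the smallest adequate hole):

    #include <stdio.h>
    #include <stddef.h>

    /* One hole (free partition) in main memory. */
    struct hole {
        size_t start;
        size_t size;
        struct hole *next;
    };

    /* First fit: take the first hole large enough for the request. */
    struct hole *first_fit(struct hole *free_list, size_t request)
    {
        for (struct hole *h = free_list; h != NULL; h = h->next)
            if (h->size >= request)
                return h;       /* caller carves the request out of this hole */
        return NULL;            /* no hole big enough: allocation fails */
    }

    int main(void)
    {
        struct hole c = { 900, 200, NULL };
        struct hole b = { 500, 100, &c };
        struct hole a = { 100, 300, &b };

        struct hole *h = first_fit(&a, 150);
        if (h != NULL)
            printf("placed 150-unit request in hole at %zu (size %zu)\n",
                   h->start, h->size);
        return 0;
    }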

13 Paging
 What OSs really do
 Avoids external fragmentation, improves swapping
 Recent advances involve greater coupling / cooperation between hardware and OS
 Physical memory is broken into fixed-size blocks called frames
 Logical memory is broken into blocks of the same size called pages
 Each page is put into a frame
 Only part of the last page is wasted
 Fixed sizes make allocation trivial

14 Figure 8.7

15 Basic Paging
 Virtual Address = Page Number + Offset
 The Page Table holds the mapping of pages to frames
 Look up the page to find the frame's base address
 Apply the offset
 Page size is usually configurable to suit OS usage
 The number of pages is determined by the size of the logical address space
 Number of frames = memory size / page size
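A minimal sketch of the page-number/offset split and page-table lookup, assuming 4 KB pages and a tiny illustrative page table:

    #include <stdio.h>
    #include <stdint.h>

    #define PAGE_SIZE    4096u        /* 4 KB pages and frames */
    #define OFFSET_BITS  12           /* log2(PAGE_SIZE) */

    /* Illustrative page table: page_table[page] = frame number. */
    static uint32_t page_table[] = { 5, 2, 7, 0 };

    uint32_t translate(uint32_t virtual_addr)
    {
        uint32_t page   = virtual_addr >> OFFSET_BITS;     /* high bits */
        uint32_t offset = virtual_addr & (PAGE_SIZE - 1);  /* low 12 bits */
        uint32_t frame  = page_table[page];                /* table lookup */
        return (frame << OFFSET_BITS) | offset;            /* frame base + offset */
    }

    int main(void)
    {
        uint32_t va = 1 * PAGE_SIZE + 123;                 /* page 1, offset 123 */
        printf("virtual %#x -> physical %#x\n",
               (unsigned)va, (unsigned)translate(va));
        return 0;
    }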

16 Examining Page Size
 A Windows example that reports the system page size (uses GetSystemInfo from <windows.h>):

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        SYSTEM_INFO si;
        GetSystemInfo(&si);
        printf("The page size for this system is %u bytes.\n",
               (unsigned)si.dwPageSize);
        return 0;
    }

17 Fragmentation
 Maximum wasted space for a process is the page size minus one byte
 Average wasted space for a process is half a page
 Some systems use more than one page size
 Most systems support (but rarely use) Huge Pages
 Page sizes are growing as memory and process sizes also increase
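A worked example of internal fragmentation: with 2,048-byte pages, a 72,766-byte process fills 35 pages (71,680 bytes) plus 1,086 bytes of a 36th page, so 2,048 - 1,086 = 962 bytes of its last frame are wasted.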

18 Tables
 Page Table: Maps a program page to a frame for address translation; usually there is a page table for each process, managed and held by the OS
 Frame Table: Keeps track of how main memory is used (which frames are free/allocated); usually one frame table for the whole system, managed by the OS

19 Hardware Support
 Efficiency of address translation is critical for good system performance
 PTBR: Page Table Base Register – holds the main-memory address of the current process's page table
 TLB: Translation Lookaside Buffer – a cache of recently performed page-to-frame translations; small but fast!
 ASID: Address Space IDentifier – lets the TLB hold entries for multiple processes; otherwise the TLB must be flushed on every context switch
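A minimal sketch of the TLB check performed before a full page-table walk (the entry layout and sequential search below are illustrative; a real TLB compares all entries in parallel in hardware):

    #include <stdio.h>
    #include <stdint.h>
    #include <stdbool.h>

    #define TLB_ENTRIES 64

    /* One cached translation: (asid, page) -> frame. */
    struct tlb_entry {
        bool     valid;
        uint16_t asid;     /* which process this translation belongs to */
        uint32_t page;
        uint32_t frame;
    };

    static struct tlb_entry tlb[TLB_ENTRIES];

    /* Returns true on a TLB hit; on a miss the MMU must walk the page table
       (starting at the address in the PTBR) and then refill the TLB. */
    bool tlb_lookup(uint16_t asid, uint32_t page, uint32_t *frame)
    {
        for (int i = 0; i < TLB_ENTRIES; i++) {
            if (tlb[i].valid && tlb[i].asid == asid && tlb[i].page == page) {
                *frame = tlb[i].frame;
                return true;
            }
        }
        return false;
    }

    int main(void)
    {
        uint32_t frame;
        tlb[0] = (struct tlb_entry){ true, 1, 42, 7 };   /* illustrative entry */
        if (tlb_lookup(1, 42, &frame))
            printf("hit: page 42 -> frame %u\n", frame);
        return 0;
    }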

20 Page Table Entries
 Often store more than just the frame number
 Valid Bit: Is this page part of the program? (permits fixed-size page tables)
 Use of a valid bit can be avoided by using a Page Table Length Register
 Dirty Bit: Has this page been modified? (tells whether the page must be written back when swapped to disk)
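One common way to picture a page-table entry is a frame number plus status bits; the C bit-field layout below is illustrative only (real formats vary by architecture):

    #include <stdio.h>

    /* An illustrative 32-bit page-table entry. */
    struct pte {
        unsigned frame : 20;   /* frame number the page maps to         */
        unsigned valid : 1;    /* 1 = page is part of the address space */
        unsigned dirty : 1;    /* 1 = page modified since it was loaded */
        unsigned ref   : 1;    /* 1 = page referenced recently          */
        unsigned prot  : 3;    /* read/write/execute permissions        */
        unsigned unused: 6;
    };

    int main(void)
    {
        printf("entry size: %zu bytes\n", sizeof(struct pte));  /* typically 4 */
        return 0;
    }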

21 Shared Pages
 Re-entrant code is code that is never modified and can be used by several processes
 Only need to load into memory once
 Most OSs do this to save space/frames

22 Hierarchical Paging
 A 64-bit address space broken into 4 KB pages = HUGE page tables
 12 bits are needed for the offset within a 4 KB page/frame (leaving 52 bits for the page number!)
 Solution: break the page table down into a tree of page tables
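A minimal sketch of how a two-level scheme splits a 32-bit address (the classic 10/10/12 split with 4 KB pages); 64-bit systems extend the same idea to more levels:

    #include <stdio.h>
    #include <stdint.h>

    /* Two-level split of a 32-bit virtual address with 4 KB pages:
       10-bit outer index | 10-bit inner index | 12-bit offset. */
    int main(void)
    {
        uint32_t va = 0x00ABCDEF;

        uint32_t outer  = (va >> 22) & 0x3FF;   /* index into outer page table */
        uint32_t inner  = (va >> 12) & 0x3FF;   /* index into second-level table */
        uint32_t offset =  va        & 0xFFF;   /* offset within the page */

        printf("outer %u, inner %u, offset %u\n", outer, inner, offset);
        return 0;
    }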

23 Other Page Table Formats
 Hash Tables: Use the page number as input to a hash function to find the "bucket" of frames for that value (see Fig. 8.16)
 Inverted Page Tables (see Fig. 8.17):
 One entry per frame
 One table for the entire system
 Page + PID = search key
 Slow lookup; shared memory is difficult
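A minimal sketch of the lookup an inverted page table implies (the structure and values are illustrative; the frame number is simply the index of the matching entry):

    #include <stdio.h>
    #include <stdint.h>

    #define NUM_FRAMES 8

    /* One entry per physical frame: which (pid, page) currently occupies it. */
    struct ipt_entry {
        int      used;
        uint32_t pid;
        uint32_t page;
    };

    static struct ipt_entry ipt[NUM_FRAMES];

    /* A miss (-1) means the page is not in memory.  This linear search is
       why lookups are slow without an extra hash structure. */
    int ipt_lookup(uint32_t pid, uint32_t page)
    {
        for (int frame = 0; frame < NUM_FRAMES; frame++)
            if (ipt[frame].used && ipt[frame].pid == pid && ipt[frame].page == page)
                return frame;
        return -1;
    }

    int main(void)
    {
        ipt[3] = (struct ipt_entry){ 1, 7, 42 };   /* pid 7, page 42 in frame 3 */
        printf("pid 7, page 42 -> frame %d\n", ipt_lookup(7, 42));
        return 0;
    }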

24 Segmentation
 Not all pages are really the same
 Code, shared code, data, shared data, heap data, stack data, ...
 Divide the program into segments that match these differences
 Could then use part of the address to identify the segment and the rest for the offset within the segment
 Map to physical memory with a Segment Table
 Use hardware support: segment base and segment limit registers
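A minimal sketch of segment translation with a per-segment base and limit (the table contents are illustrative):

    #include <stdio.h>
    #include <stdint.h>
    #include <stdlib.h>

    struct segment { uint32_t base; uint32_t limit; };

    /* Illustrative segment table: e.g. code, data, stack. */
    static struct segment seg_table[] = {
        { 0x1000, 0x0400 },   /* segment 0 */
        { 0x6000, 0x0800 },   /* segment 1 */
        { 0x9000, 0x0200 },   /* segment 2 */
    };

    /* Logical address = (segment number, offset within segment). */
    uint32_t translate(uint32_t seg, uint32_t offset)
    {
        if (offset >= seg_table[seg].limit) {
            fprintf(stderr, "trap: offset beyond segment limit\n");
            exit(EXIT_FAILURE);
        }
        return seg_table[seg].base + offset;
    }

    int main(void)
    {
        printf("(1, 0x10) -> physical %#x\n", (unsigned)translate(1, 0x10));
        return 0;
    }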

25 Intel Chips (32 Bit)
 Have both paging and segmentation support
 Segmentation is mostly hardware based
 Paging is mostly OS (software) based
 Pages are either 4 KB or 4 MB

26 To Do:
 Begin Assignment 2 in Lab
 Read Chapter 8 (pgs 315-350; this lecture)
 Start reading Chapter 9 (next lecture)

