Memory Management By: Omar A. Cruz Salgado 802-00-1712 ICOM 5007 Sec. 121.


Memory Management Introduction
–Ideally, what every programmer would like is an infinitely large, infinitely fast memory that is also nonvolatile. Nonvolatile = does not lose its contents when the electric power fails.
–Unfortunately, technology does not provide such memory.
–Memory hierarchy:
1. Fast, expensive, volatile cache memory
2. Medium-priced, tens of megabytes of volatile main memory (RAM)
3. Tens or hundreds of gigabytes of slow, cheap, nonvolatile disk storage

Memory Management Introduction
–In order to manage this hierarchy, the operating system has a Memory Manager. What is its job?
–To keep track of which parts of memory are in use and which are not, to allocate memory to processes when they need it and deallocate it when they are done, and to manage swapping between main memory and disk when main memory is too small to hold all the processes.

Basic Memory Management
–Memory management systems can be divided into two groups:
 Those that move processes back and forth between main memory and disk during execution (swapping and paging).
 Those that don't.
–We must keep in mind that swapping and paging are largely artifacts caused by the lack of sufficient main memory to hold all the programs at once.

Monoprogramming without Swapping or Paging
–The simplest memory management scheme is to run one program at a time, sharing the memory between that program and the operating system.
–Three variations (memory from address 0 up to 0xFFF…):
 OS in RAM at the bottom of memory, user program above it. First used in mainframes and minicomputers.
 OS in ROM at the top of memory, user program below it. Used in palmtops and embedded systems.
 Device drivers in ROM at the top of memory, user program and OS in RAM below. Used in early personal computers.

Monoprogramming without Swapping or Paging
–Only one process at a time can be running. As soon as the user types a command, the operating system copies the requested program from disk to memory and executes it. When the process finishes, the operating system displays a prompt character and waits for a new command. When it receives the command, it loads a new program into memory, overwriting the first one.

Multiprogramming With Fixed Partitions
–Most modern systems allow multiple processes to run at the same time. This means that when a process is blocked waiting for I/O to finish, another one can use the CPU.
–To achieve multiprogramming, memory must be divided up into n (possibly unequal) partitions. This can be done manually when the system is started up.
–When a job arrives, it can be put in an input queue.

Multiprogramming With Fixed Partitions
–A first scheme is to put an incoming job in the queue for the smallest partition large enough to hold it, with one input queue per partition. Any space in a partition not used by its job is lost.
–The disadvantage of sorting jobs into separate queues becomes apparent when the queue for a large partition is empty but the queue for a small partition is full.
(Figure: multiple input queues feeding partitions 1–4, with boundaries at 100 K, 200 K, 400 K, 700 K, and 800 K, and the OS below 100 K.)

Multiprogramming With Fixed Partitions
–An alternative organization is to maintain a single queue. Whenever a partition becomes free, the job closest to the front of the queue that fits in it is loaded into the empty partition and run.
(Figure: a single input queue feeding partitions 1–4, with the OS at the bottom of memory.)

Modeling Multiprogramming
–A way to model multiprogramming is to look at CPU usage from a probabilistic viewpoint. Suppose that a process spends a fraction p of its time waiting for I/O to complete. With n processes in memory at once, the probability that all n processes are waiting for I/O is p^n.
–CPU utilization = 1 - p^n. The number n is called the degree of multiprogramming.
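The utilization formula above can be sketched directly. A minimal illustration (the numbers below are example values, not from the slides):

```python
def cpu_utilization(p: float, n: int) -> float:
    """CPU utilization under the probabilistic model: the CPU is idle
    only when all n processes are waiting for I/O at once, which
    happens with probability p**n."""
    return 1 - p ** n

# With 80% I/O wait, one process keeps the CPU only 20% busy,
# but four processes together keep it roughly 59% busy.
print(round(cpu_utilization(0.8, 1), 4))  # 0.2
print(round(cpu_utilization(0.8, 4), 4))  # 0.5904
```

This makes the payoff of multiprogramming concrete: each added process shrinks the all-waiting probability geometrically.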

Relocation and Protection
–Multiprogramming introduces two essential problems that must be solved: relocation and protection.
–When a program is linked, the linker must know at what address the program will begin in memory. Example: the first instruction is a call to a procedure at address 100. If the program is loaded at the start of partition 1 (100 K), that instruction will jump to absolute address 100, which is inside the OS memory space. What is needed is a call to 100 K + 100. This problem is called relocation.
(Figure: partitions 1–4 with boundaries at 100 K, 200 K, 400 K, 700 K, and 800 K, OS below 100 K.)

Relocation and Protection
–Solution: modify the instructions as the program is loaded into memory. Programs loaded into partition 1 have 100 K added to each address. This solves relocation but not protection, because a program can always construct a new instruction with an arbitrary address and jump to it.
–IBM's solution for protection was to divide memory into 2-KB blocks and assign a 4-bit protection code to each block. The PSW (Program Status Word) contained a 4-bit key.
–An alternative solution is base and limit hardware: when a process is scheduled, the base register is loaded with the address of the start of the partition, and the limit register is loaded with the length of the partition. The disadvantage of this scheme is that it must perform an addition and a comparison on every memory reference.
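The base-and-limit check can be sketched in a few lines. This is a software model of what the hardware does on every reference, with illustrative register values (not from the slides):

```python
def translate(base: int, limit: int, vaddr: int) -> int:
    """Base-and-limit translation: every program-relative address is
    compared against the limit register, then added to the base
    register. Both operations happen on every memory reference."""
    if vaddr >= limit:
        # The hardware would trap to the OS here.
        raise MemoryError("protection fault: address outside partition")
    return base + vaddr

# A process loaded at 100 K with a 100 K partition: relative address
# 100 becomes absolute address 100 K + 100.
print(translate(100 * 1024, 100 * 1024, 100))  # 102500
```

Relocation comes for free (the program never sees absolute addresses), and protection follows because no constructed address can escape the limit check.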

SWAPPING
–Swapping consists of bringing in each process in its entirety, running it for a while, then putting it back on the disk.
–Since a swapped-in process may end up at a different location, the addresses contained in it must be relocated, either by software when it is swapped in or by hardware during program execution.
(Figure: memory allocation changing as processes A, B, C, and D are swapped in and out, leaving holes.)

SWAPPING
–The main difference between fixed partitions and variable partitions is that the number, location, and size of the partitions vary dynamically in the latter as processes come and go.
–This method gives the flexibility of partitions that are neither too small nor too big for a process, which improves memory utilization.
–But it also complicates allocating and deallocating memory, as well as keeping track of it.

SWAPPING
–Memory compaction: when swapping creates multiple holes in memory, it is possible to combine them all into one big block by moving all the processes downward as far as possible. This is usually not done because it takes a lot of CPU time.
–A point worth making concerns how much memory should be allocated for a process when it is created or swapped in. If processes are created with a fixed size that never changes, then allocation is simple: the OS allocates exactly what is needed, no more and no less.

SWAPPING
–Processes' data segments can grow, for example by dynamically allocating memory from the heap.
–If a hole is adjacent to the process's memory region, the process can be allowed to grow into it.
–If the process is adjacent to another process, the growing process has to be moved to a hole large enough for it, or one or more processes will have to be swapped out to create a large enough hole.
–If a process cannot grow in memory and the swap area on the disk is full, the process must wait or be killed.


Memory Management with Bitmaps
–With bitmaps, memory is divided up into allocation units. Corresponding to each allocation unit is a bit in the bitmap, which is 0 if the unit is free and 1 if it is occupied (or vice versa).
–The size of the allocation unit is an important design issue. The smaller the allocation unit, the larger the bitmap; but appreciable memory is wasted if the units are too large. We also have to remember that the bitmap itself occupies memory, which limits the space left for data.
–The main problem with bitmaps is that when a k-unit process is brought into memory, the memory manager must search the map for a run of k consecutive 0 bits, which is a slow operation.
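The slow search described above can be sketched as a linear scan. A minimal illustration, with the bitmap represented as a list of bits (example contents, not from the slides):

```python
def find_free_run(bitmap, k):
    """Scan the bitmap for k consecutive free (0) units and return the
    index of the first unit in the run, or -1 if no such run exists.
    This linear scan is what makes bitmap allocation slow."""
    run_start, run_len = 0, 0
    for i, bit in enumerate(bitmap):
        if bit == 0:
            if run_len == 0:
                run_start = i       # a new run of free units begins here
            run_len += 1
            if run_len == k:
                return run_start
        else:
            run_len = 0             # an occupied unit breaks the run
    return -1

bitmap = [1, 1, 0, 0, 0, 1, 0, 0]
print(find_free_run(bitmap, 3))  # 2  (units 2, 3, 4 are free)
print(find_free_run(bitmap, 4))  # -1 (no run of 4 free units)
```

Allocation would then set those k bits to 1; deallocation clears them again.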

Memory Management with Linked Lists
–Another way of keeping track of memory is to maintain a linked list of allocated and free memory segments, where a segment is either a process (P) or a hole (H) between two processes. Each list entry records the segment type, the address at which it starts, and its length; for example, (P, 0, 5), (H, 5, 3), (P, 8, 6), …, (H, 18, 2), (P, 20, 6), (P, 26, 3), where (H, 18, 2) is a hole that starts at 18 and has length 2.
–This approach has the advantage that when a process terminates or is swapped out, updating the list is straightforward.

Memory Management with Linked Lists
–Several algorithms can be used to allocate memory for a newly created process using the linked list approach:
 First fit: the memory manager scans along the list of segments until it finds a hole that is big enough. The hole is then broken into two pieces, one for the process and one for the unused space.
 Next fit: works the same way as first fit, but keeps track of where it last found a suitable hole. The next time it looks for a hole it starts from where it left off instead of from the beginning.
 Best fit: searches the entire list and takes the smallest hole that is adequate.
 Worst fit: takes the largest available hole, so that the hole broken off will be big enough to be useful.
 Quick fit: maintains separate lists for some of the more common sizes requested.
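First fit, the simplest of these, can be sketched over a segment list. This is a minimal model, assuming each entry is a `(kind, start, length)` tuple with kind `'P'` (process) or `'H'` (hole), as in the example list earlier:

```python
def first_fit(segments, size):
    """Allocate `size` units in the first hole big enough. The hole is
    broken in two: an allocated piece and (if anything is left over) a
    smaller hole that stays on the list. Returns the start address,
    or None if no hole is large enough."""
    for i, (kind, start, length) in enumerate(segments):
        if kind == 'H' and length >= size:
            segments[i] = ('P', start, size)
            if length > size:
                # The unused remainder becomes a new, smaller hole.
                segments.insert(i + 1, ('H', start + size, length - size))
            return start
    return None

segs = [('P', 0, 5), ('H', 5, 3), ('P', 8, 6), ('H', 14, 4)]
print(first_fit(segs, 2))  # 5: the hole at 5 (length 3) is split
print(segs)
```

Next fit differs only in remembering the index `i` between calls; best fit would scan the whole list before choosing.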

Virtual Memory
–The basic idea behind virtual memory is that the combined size of the program, data, and stack may exceed the amount of physical memory available for it. The OS keeps those parts of the program currently in use in main memory, and the rest on disk.

Paging
–Virtual addresses are the addresses the program generates; together they form the virtual address space. When virtual memory is used, a virtual address does not go directly to the memory bus; instead it goes to the Memory Management Unit (MMU), which maps the virtual address onto a physical one.

Paging
–In this example, we have a computer that can generate 16-bit addresses, from 0 to 64 K; these are virtual addresses. This computer, however, has only 32 KB of physical memory, so a complete copy of the program's core image must be kept on disk.
–The virtual address space is divided into units called pages.
–The corresponding units in the physical memory are called page frames.
–Pages and page frames are always the same size.

Paging
–When the program tries to access address 0, virtual address 0 is sent to the MMU. The MMU sees that this virtual address falls in page 0, which according to its mapping is in page frame 2.
–It will change: MOV REG,0
–To: MOV REG,8192
–Every page that is not mapped is marked with an X in the figure. In actual hardware, a Present/absent bit keeps track of which pages are physically present in memory.

Paging
–If the program tries to access an address that is in a virtual page with no mapping, the CPU traps to the OS. This is called a page fault.
–The OS picks a little-used page frame and writes its contents back to disk.
–It then fetches the page just referenced into that page frame, changes the mapping, and restarts the trapped instruction.
–When an address is delivered to the MMU, it comes as a 16-bit virtual address that is split into a 4-bit page number and a 12-bit offset.
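The 4-bit/12-bit split and the lookup can be sketched in software. The mapping of page 0 to frame 2 is from the example above; the other page-table entries here are illustrative filler:

```python
PAGE_SIZE = 4096  # a 12-bit offset means 4-KB pages

# Virtual page -> page frame; missing pages are absent (would fault).
# Only 0 -> 2 comes from the slides' example; the rest are made up.
page_table = {0: 2, 1: 1, 2: 6}

def mmu_translate(vaddr):
    """Split a 16-bit virtual address into a 4-bit page number and a
    12-bit offset, then map the page onto its frame."""
    page = vaddr >> 12        # top 4 bits
    offset = vaddr & 0xFFF    # low 12 bits
    frame = page_table.get(page)
    if frame is None:
        # The hardware would trap to the OS here: a page fault.
        raise LookupError(f"page fault on virtual page {page}")
    return frame * PAGE_SIZE + offset

print(mmu_translate(0))  # page 0 -> frame 2 -> 8192, as in MOV REG,8192
```

The real MMU does this split with wires rather than arithmetic, which is why it can happen on every reference.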

Page Tables
–The virtual page number is used as an index into the page table to find the entry for that virtual page.
–The purpose of the page table is to map virtual pages onto page frames.
–Two major issues must be faced:
 The page table can be extremely large. Modern computers use virtual addresses of at least 32 bits; a 32-bit address space with 4-KB pages has 1 million pages, and each process has its own page table.
 The mapping must be fast. The virtual-to-physical mapping must be done on every memory reference.

Multilevel Page Tables
–The secret of this method is to avoid keeping all the page tables in memory all the time. Those that are not needed should not be kept around.
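A two-level lookup can be sketched as an address split. This assumes the common 10/10/12 division of a 32-bit address (an illustrative choice, not stated on the slide): a 10-bit index into the top-level table, a 10-bit index into a second-level table, and a 12-bit offset:

```python
def split_address(vaddr):
    """Split a 32-bit virtual address for a two-level page table:
    top-level index, second-level index, and page offset."""
    pt1 = (vaddr >> 22) & 0x3FF   # top 10 bits
    pt2 = (vaddr >> 12) & 0x3FF   # middle 10 bits
    offset = vaddr & 0xFFF        # low 12 bits
    return pt1, pt2, offset

print(split_address(0x00403004))  # (1, 3, 4)
```

Only the top-level table (1024 entries) must always be resident; each of the up-to-1024 second-level tables is allocated only if some address in its 4-MB region is actually used, which is what keeps the whole structure small.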

Multilevel Page Tables
–Structure of a page table entry: the exact layout of an entry is highly machine dependent, but the information present is roughly the same from machine to machine.
 Page frame number: the value to be looked up.
 Present/absent bit: if this bit is 1, the entry is valid and can be used; if 0, the virtual page to which the entry belongs is not in memory, and accessing it causes a page fault.
 Protection bits: tell what kinds of access are permitted.
 Modified and Referenced bits: keep track of page usage. When a page is written to, the hardware automatically sets the Modified bit.
 Caching disabled bit: important for pages that map onto device registers rather than memory.
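Those fields can be modeled as a bitfield. The layout below is purely illustrative (the slide itself notes that real layouts are machine dependent):

```python
# Flag bits of a hypothetical page table entry (positions are made up).
PRESENT    = 1 << 0   # 1 = entry valid, page is in memory
WRITABLE   = 1 << 1   # simplified one-bit protection field
MODIFIED   = 1 << 2   # set by hardware on a write to the page
REFERENCED = 1 << 3   # set by hardware on any access
CACHE_DIS  = 1 << 4   # disable caching (device-register pages)
FRAME_SHIFT = 5       # remaining high bits hold the page frame number

def make_entry(frame, flags):
    return (frame << FRAME_SHIFT) | flags

def frame_of(entry):
    if not entry & PRESENT:
        raise LookupError("page fault: entry not present")
    return entry >> FRAME_SHIFT

e = make_entry(2, PRESENT | WRITABLE)
print(frame_of(e))  # 2
```

Packing everything into one word is what lets the MMU read a complete entry in a single memory access during the table walk.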

TLBs – Translation Lookaside Buffers
–Computers come equipped with a small hardware device for mapping virtual addresses to physical addresses without going through the page table.
–This device is called a TLB (Translation Lookaside Buffer) or sometimes an associative memory.
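The TLB's role can be sketched as a small cache sitting in front of the page table. This is a software model with invented sizes and mappings, and a deliberately crude eviction rule (real TLBs are fully associative hardware, often with LRU-like replacement):

```python
TLB_SIZE = 4
tlb = {}                               # virtual page -> page frame
page_table = {0: 2, 1: 1, 2: 6, 3: 0}  # illustrative full mapping

def lookup(page):
    """Return the page frame for a virtual page: try the TLB first,
    and fall back to the page table only on a miss."""
    if page in tlb:
        return tlb[page]               # TLB hit: no table walk needed
    frame = page_table[page]           # TLB miss: walk the page table
    if len(tlb) >= TLB_SIZE:
        tlb.pop(next(iter(tlb)))       # evict an arbitrary entry
    tlb[page] = frame                  # cache the translation
    return frame

print(lookup(0))  # miss: walks the table, caches 0 -> 2, prints 2
print(lookup(0))  # hit: answered from the TLB, prints 2
```

Because most programs reference a small set of pages repeatedly, even a tiny TLB satisfies the large majority of translations without touching the page table.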