
1 Operating Systems Concepts
Lectures 28 to 33 – Memory Management

2 Introduction The organization and management of the main memory or primary memory or real memory of a computer system has been one of the most important factors influencing operating systems design. The terms memory and storage have been used interchangeably in the literature. Programs and data must be in main storage in order to be run or referenced directly. Secondary storage, most commonly disk, provides massive, inexpensive capacity for the programs and data that must be kept readily available for processing.

3 Storage Organization
The main storage has long been viewed as an expensive resource. As such, it has demanded the attention of systems designers: it has been necessary to squeeze the maximum use out of this costly resource. Operating systems are concerned with organizing and managing main storage for optimal use. By storage organization we mean the manner in which the main storage is viewed:
Do we place only a single user in it, or several users at the same time?
If several user programs are in main storage at the same time, do we give each of them the same amount of space, or do we divide main storage into portions, called partitions, of different sizes?
Do we partition the main storage in a rigid manner, with partitions defined for extended periods of time, or do we provide a more dynamic partitioning, allowing the computer system to adapt quickly to changes in the needs of user jobs?
Do we require that user jobs be designed to run in a specific partition, or do we allow jobs to run anywhere they will fit?
Do we require that each job be placed in one contiguous block of storage locations, or do we allow jobs to be parceled up into separate blocks and placed in any available slots in main storage?
Systems have been built implementing each of these schemes.

4 Storage Management
Regardless of what storage organization scheme we adopt for a particular system, we must decide what strategies to use to obtain optimal performance. Storage management strategies determine how a particular storage organization performs under various policies:
When do we get a new program to place in memory? Do we get it when the system specifically asks for it, or do we attempt to anticipate the system's requests?
Where in main storage do we place the next program to be run? Do we place programs as tightly as possible into available memory slots to minimize wasted space, or do we place programs as quickly as possible?
If a new program needs to be placed in main storage and main storage is currently full, which of the other programs do we displace? Should we replace the oldest programs, those that are least frequently used, or those that are least recently used?
Systems have been implemented using each of these storage management strategies.

5 Storage Hierarchy
Main memory: should store only currently needed program instructions and data
Secondary storage: stores data and programs that are not actively needed
Cache memory: extremely high speed, usually located on the processor itself; the most commonly used data is copied to the cache for faster access, and even a small cache is effective for boosting performance, due to temporal locality

6 Storage Hierarchy
Fig: Hierarchical Storage Organization

7 Storage Hierarchy
Programs and data need to be in main storage in order to be executed or referenced. Programs or data not needed immediately may be kept on secondary storage. Main storage is generally accessed much faster than secondary storage. In systems with several levels of storage, a great deal of shuttling goes on in which programs and data are moved back and forth between the various levels. This shuttling consumes system resources, such as CPU time. In the 1960s it became clear that the storage hierarchy could be extended by one more level, with dramatic improvements in performance and utilization. This additional level, the cache, is a high-speed storage that is much faster than main storage. Cache storage is extremely expensive compared with main storage, and therefore only relatively small caches are used. The figure above shows the relationship between cache, primary storage, and secondary storage. Cache storage imposes one more level of shuttling on the system: programs in main storage are shuttled to the very high-speed cache before being executed, where they may run much faster than in main storage. The hope of designers using the cache concept is that the overhead involved in shuttling programs back and forth will be much smaller than the performance increase obtained by the faster execution possible in the cache.

8 STORAGE MANAGEMENT STRATEGIES
Storage management strategies are geared to obtaining the best possible use of the main storage resource. They are divided into the following categories:
Fetch strategies
Placement strategies
Replacement strategies

9 Fetch Strategies
Fetch strategies are concerned with when to obtain the next piece of program or data for insertion into main storage.
Demand fetch: the next piece of program or data is brought into main storage when it is referenced by a running program.
+ Easy to implement
- Process is unrunnable until the block is brought into memory
Anticipatory fetch: the OS predicts which blocks a process is likely to reference and brings them into memory before they are referenced.
+ Process doesn't always need to be suspended; if prediction were 100% correct, a process would never be suspended and the OS would give the impression of infinite memory
- Process behavior is unpredictable, so the OS could pre-load redundant blocks

10 Placement and Replacement Strategies
Placement strategies are concerned with determining where in main storage to place an incoming program:
First-Fit
Best-Fit
Worst-Fit
Replacement strategies are concerned with determining which piece of program or data to displace to make room for incoming programs (a minimal sketch of one such policy, LRU, follows this list):
Optimal
FIFO
LRU
Clock
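Since the replacement policies are only named here, a minimal sketch of one of them, LRU, may help. This is an illustrative C fragment, not part of the original slides: the frame_t structure, the logical-clock timestamps, and the function name lru_victim are all assumptions. FIFO would differ only in recording load time instead of last-use time.

```c
#include <limits.h>

/* Illustrative frame-table entry: which page occupies the frame and
 * when it was last referenced (logical clock, not wall time). */
typedef struct {
    int      page;       /* page currently held, -1 if free */
    unsigned last_used;  /* clock value of the most recent access */
} frame_t;

/* Return the index of the frame to displace under LRU: the frame
 * whose last access is oldest. */
int lru_victim(const frame_t *frames, int nframes) {
    int victim = 0;
    unsigned oldest = UINT_MAX;
    for (int i = 0; i < nframes; i++) {
        if (frames[i].last_used < oldest) {
            oldest = frames[i].last_used;
            victim = i;
        }
    }
    return victim;
}
```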

11 Contiguous vs Non-Contiguous Allocation
Contiguous means together in sequence. There are two ways of organizing programs in memory:
Contiguous allocation: the program must exist as a single block of contiguous addresses. Sometimes it is impossible to find a large enough block, but the overhead is low.
Noncontiguous allocation: the program is divided into chunks called segments or pages, and each segment or page can be placed in a different part of memory. It is easier to find "holes" in which a segment or page will fit. The overhead is higher, but the increased number of processes that can exist simultaneously in memory compensates for it.

12 Single User Contiguous Storage Allocation
The earliest computer systems allowed only a single person at a time to use the machine. All of the machine’s resources were at the user’s disposal. The following figure illustrates the storage organization for a typical single user contiguous allocation system.

13 Protection in Single User Contiguous Allocation
The operating system must not be damaged by user programs; the system cannot function if the operating system is overwritten. Protection is provided by a boundary register:
Contains the address where the program's memory space begins
Any memory access outside the boundary is denied
Can be set only by privileged instructions
Applications can still execute OS procedures by issuing system calls, which place the system in executive mode. A minimal sketch of the boundary check follows.
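A minimal sketch, in C, of the check a boundary register implies. The register value, the function name, and returning a boolean instead of raising a hardware trap are illustrative assumptions.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical boundary register: user space begins here; everything
 * below this address belongs to the operating system. */
static uint32_t boundary_register = 0x00040000;

/* The check performed on every user-mode access: any address below
 * the boundary is denied (real hardware would raise a trap). */
bool user_access_allowed(uint32_t addr) {
    return addr >= boundary_register;
}
```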

14 FIXED PARTITION MULTIPROGRAMMING
To take maximum advantage of multiprogramming, it is necessary for several jobs to reside in the computer's main storage at once. Thus, when one job requests I/O, the CPU may be switched immediately to another job and may do calculations without delay. When this new job yields the CPU, another may be ready to use it. Multiprogramming often requires more storage than a single user system.

15 FIXED PARTITION MULTIPROGRAMMING: ABSOLUTE TRANSLATION AND LOADING
In the earliest multiprogramming systems, main storage was divided into a number of fixed-size partitions. Each partition could hold a single job. The CPU was switched rapidly between users to create the illusion of simultaneity. Jobs were translated with absolute assemblers and compilers to run only in a specific partition. If a job was ready to run and its partition was occupied, that job had to wait, even if other partitions were available. This wasted the storage resource, but the operating system was relatively straightforward to implement. Any program, no matter how small, occupies an entire partition; the space wasted inside the partition is called internal fragmentation.

16 Contd.

17 FIXED PARTITION MULTIPROGRAMMING: RELOCATABLE TRANSLATION AND LOADING
Relocating compilers, assemblers, and loaders are used to produce relocatable programs that can run in any available partition that is large enough to hold them. This scheme eliminates some of the storage waste inherent in multiprogramming with absolute translation and loading.

18 Protection in Fixed Partitioning
Can be implemented by boundary registers, called base and limit (also called low and high)

19 Drawbacks in Fixed Partitions
Internal fragmentation: a process does not take up its entire partition, wasting memory. Storage fragmentation occurs in every computer system regardless of its storage organization. In fixed partition multiprogramming systems, fragmentation occurs either because user jobs don't completely fill their designated partitions or because a partition remains unused when it is too small to hold a waiting job. There is also the potential for processes to be too big to fit in any partition.

20 Contd. Fig. Internal fragmentation in a fixed-partition multiprogramming system.

21 Variable Partition Multiprogramming
System designers found fixed partitions too restrictive:
Internal fragmentation
Potential for processes to be too big to fit anywhere
Variable partitions were designed as a replacement. Operating systems designers, observing the problems with fixed partition multiprogramming, decided that an obvious improvement would be to allow jobs to occupy as much space (short of the full real storage) as they needed. No fixed boundaries would be observed.

22 Variable-Partition Multiprogramming
Figure. Initial partition assignments in variable-partition multiprogramming.

23 Variable-Partition Characteristics
Jobs are placed where they fit, so no space is wasted initially.
Internal fragmentation is impossible: partitions are exactly the size they need to be.
External fragmentation can occur when processes are removed, leaving holes too small for new processes; eventually no hole is large enough for a new process.

24 Variable-Partition Characteristics
Figure. Memory “holes” in variable-partition multiprogramming.

25 Solutions for External Fragmentation
Coalescing holes: combine adjacent free blocks into one large block. This is often not enough to reclaim a significant amount of memory. A minimal sketch of the merge step follows the figure.
Figure. Coalescing memory “holes” in variable-partition multiprogramming.
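A minimal sketch of coalescing in C, assuming holes are kept in a singly linked free list sorted by start address; the hole_t type and field names are hypothetical.

```c
#include <stddef.h>

/* Illustrative free-list node: holes kept sorted by start address. */
typedef struct hole {
    size_t       start;   /* first address of the hole */
    size_t       size;    /* length in bytes */
    struct hole *next;
} hole_t;

/* Walk the sorted list and merge any hole that ends exactly where
 * the next one begins into a single larger hole. */
void coalesce(hole_t *list) {
    for (hole_t *h = list; h && h->next; ) {
        if (h->start + h->size == h->next->start) {
            hole_t *gone = h->next;   /* absorb the neighbour */
            h->size += gone->size;
            h->next  = gone->next;    /* caller reclaims 'gone' */
        } else {
            h = h->next;              /* not adjacent: move on */
        }
    }
}
```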

26 Solutions for External Fragmentation
Compaction, sometimes called garbage collection (not to be confused with GC in object-oriented languages), rearranges memory into a single contiguous block of free space and a single contiguous block of occupied space. This makes all free space available, but incurs significant overhead. A sketch of the idea follows the figure.
Figure. Memory compaction in variable-partition multiprogramming.
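A minimal sketch of compaction in C, assuming memory is modeled as a flat byte array and jobs are described by (start, size) pairs sorted by address; job_t and compact are illustrative names, not part of the original slides.

```c
#include <stddef.h>
#include <string.h>

/* Illustrative job descriptor within a variable-partition image. */
typedef struct {
    size_t start;   /* current offset of the job in memory */
    size_t size;    /* job length in bytes */
} job_t;

/* Slide every job (sorted by address) down so that occupied storage
 * becomes one contiguous block and all free space becomes one hole.
 * Note the relocation update for each moved job: maintaining this
 * information is part of compaction's overhead. */
void compact(unsigned char *memory, job_t *jobs, int njobs) {
    size_t next_free = 0;
    for (int i = 0; i < njobs; i++) {
        if (jobs[i].start != next_free) {
            memmove(memory + next_free,
                    memory + jobs[i].start, jobs[i].size);
            jobs[i].start = next_free;   /* update relocation info */
        }
        next_free += jobs[i].size;
    }
}
```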

27 Solutions for External Fragmentation
Compaction drawbacks:
It consumes system resources that could otherwise be used productively.
The system must stop everything while it performs the compaction. This can result in erratic response times for interactive users and could be devastating in real-time systems.
Compaction involves relocating the jobs that are in storage, so relocation information must now be maintained in readily accessible form.
With a normal, rapidly changing job mix, it is necessary to compact frequently; the consumed system resources might not justify the benefits of compacting.

28 Memory Placement Strategies
Where to put incoming processes:
First-fit strategy: the process is placed in the first hole of sufficient size found. Simple, with low execution-time overhead.
Best-fit strategy: the process is placed in the hole that leaves the least unused space around it. More execution-time overhead.
Worst-fit strategy: the process is placed in the hole that leaves the most unused space around it. This leaves another large hole, making it more likely that another process can fit in it.
Minimal sketches of all three appear below.
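Minimal sketches of the three placement strategies in C, assuming (for illustration) that holes are represented simply as an array of sizes; a real allocator would track addresses as well, but the selection logic is the same. Each function returns the index of the chosen hole, or -1 if nothing fits.

```c
#include <stddef.h>

/* First-fit: the first hole large enough for 'need' bytes. */
int first_fit(const size_t *holes, int n, size_t need) {
    for (int i = 0; i < n; i++)
        if (holes[i] >= need) return i;
    return -1;
}

/* Best-fit: the smallest hole that is still large enough,
 * leaving the least unused space. */
int best_fit(const size_t *holes, int n, size_t need) {
    int best = -1;
    for (int i = 0; i < n; i++)
        if (holes[i] >= need && (best < 0 || holes[i] < holes[best]))
            best = i;
    return best;
}

/* Worst-fit: the largest adequate hole, leaving the biggest
 * remaining hole for future requests. */
int worst_fit(const size_t *holes, int n, size_t need) {
    int worst = -1;
    for (int i = 0; i < n; i++)
        if (holes[i] >= need && (worst < 0 || holes[i] > holes[worst]))
            worst = i;
    return worst;
}
```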

29 Memory Placement Strategies
Figure. First-fit memory placement strategy

30 Memory Placement Strategies
Figure. Best-fit memory placement strategy

31 Memory Placement Strategies
Figure. Worst-fit memory placement strategy

32 Relocation
When a program is loaded into memory, the actual (absolute) memory locations are determined. A process may occupy different partitions, and hence different absolute memory locations, during execution as a result of swapping or compaction. Hence, there must be an abstraction of addresses that is independent of main memory (i.e., logical addresses).

33 Addresses
Logical: a reference to a memory location independent of the current assignment of data to memory.
Relative: an address expressed as a location relative to some known point.
Physical or Absolute: the absolute address, i.e. the actual location in main memory.
A translation must be made from both logical and relative addresses to arrive at the absolute address. The MMU (Memory Management Unit) is the hardware component responsible for translating logical/relative addresses to physical addresses.

34 Address Translation in Contiguous Allocation Schemes
Base register: the starting address of the process.
Bounds/limit register: the ending location of the process.
These values are set when the process is loaded or when the process is swapped in. The value of the base register is added to a relative address to produce an absolute address, and the resulting address is compared with the value in the bounds register. If the address is not within bounds, an interrupt (trap) is generated to the operating system. A minimal sketch follows.
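A minimal sketch of this base/bounds translation in C; the region_t type and returning -1 in place of the hardware trap are illustrative assumptions.

```c
#include <stdint.h>

/* Illustrative per-process registers, loaded at dispatch/swap-in. */
typedef struct {
    uint32_t base;    /* starting physical address of the process */
    uint32_t limit;   /* ending physical address (bounds register) */
} region_t;

/* Add the base to a relative address, then check the result against
 * the bounds register; out-of-range accesses trap to the OS. */
int translate(const region_t *r, uint32_t rel, uint32_t *phys) {
    uint32_t addr = r->base + rel;
    if (addr > r->limit)
        return -1;        /* stands in for the protection trap */
    *phys = addr;
    return 0;
}
```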

35 Simple Paging
Partition memory into small, equal, fixed-size chunks and divide each process into chunks of the same size. The chunks of a process are called pages; the chunks of memory are called frames. Paging can suffer from internal fragmentation, but much less than a fixed partition scheme does, because the page and frame size is relatively small and only the last page of a process may be partly empty.

36 Simple Paging
The operating system maintains a page table for each process. It contains the frame location for each page in the process. A logical memory address consists of a page number and an offset within the page; a physical memory address consists of a frame number and an offset within the frame.

37 Processes and Frames
Figure. Main memory frames as the pages of processes A, B, C, and D are loaded and B is swapped out.
A system begins with a number of free frames. Process A, stored on disk, consists of four pages; when it comes time to load this process, the operating system finds four free frames and loads the four pages of process A into them. Process B, consisting of three pages, and process C, consisting of four pages, are subsequently loaded. Then process B is suspended and swapped out of main memory. Later, all of the processes in main memory are blocked, and the operating system needs to bring in a new process, process D, which consists of five pages. The operating system loads D's pages into the available frames, including those freed by B, and updates the page table.

38 Page Table

39 Paging – Address Translation
The page number is mapped to a frame number by consulting the page table. The offset remains the same, because frames and pages are the same size. A minimal sketch follows.
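A minimal sketch of paging address translation in C, assuming (for illustration) 1 KB pages, so the low 10 bits of an address are the offset, and a page table represented as a plain array of frame numbers. A real MMU would also check a valid bit before using the entry.

```c
#include <stdint.h>

#define OFFSET_BITS 10                       /* assumed 1 KB pages */
#define OFFSET_MASK ((1u << OFFSET_BITS) - 1)

/* Translate a logical address using a per-process page table that
 * maps page numbers to frame numbers. The offset passes through
 * unchanged because pages and frames are the same size. */
uint32_t paging_translate(uint32_t logical, const uint32_t *page_table) {
    uint32_t page   = logical >> OFFSET_BITS;   /* upper bits */
    uint32_t offset = logical &  OFFSET_MASK;   /* lower bits */
    uint32_t frame  = page_table[page];
    return (frame << OFFSET_BITS) | offset;
}
```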

40 Simple Segmentation
A program can be subdivided into segments. Segments may vary in length, up to a maximum segment length. Addressing consists of two parts: a segment number and an offset.
Segmentation is similar to dynamic partitioning; the difference is that with segmentation a program may occupy more than one partition, and these partitions need not be contiguous. Segmentation eliminates internal fragmentation but suffers from external fragmentation (as does dynamic partitioning); however, because a process is broken up into a number of smaller pieces, the external fragmentation should be less.
A consequence of unequal-size segments is that there is no simple relationship between logical addresses and physical addresses. Analogous to paging, a simple segmentation scheme makes use of a segment table for each process and a list of free blocks of main memory. Each segment table entry gives the starting address in main memory of the corresponding segment and the length of the segment, to ensure that invalid addresses are not used. When a process enters the Running state, the address of its segment table is loaded into a special register used by the memory management hardware.

41 Simple Segmentation
The operating system maintains a segment table for each process and a list of free blocks of main memory. Each segment table entry gives the starting address in main memory of the corresponding segment and the length of the segment, to ensure that invalid addresses are not used.

42 Segmentation – Address Translation
s – segment number; d – offset. Logical address = <s,d>.
The offset d is compared with the limit (size) in the segment table. If it is less than the limit, the base corresponding to the segment number and the offset d are added to calculate the physical address; otherwise an interrupt (trap) is generated, because the address being accessed lies outside the segment.
Physical Address = Base + d
A minimal sketch follows.
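A minimal sketch of this segment-table lookup in C; the seg_entry_t type and the -1 return standing in for the addressing trap are illustrative assumptions.

```c
#include <stdint.h>

/* Illustrative segment-table entry: base address and segment length. */
typedef struct {
    uint32_t base;    /* starting physical address of the segment */
    uint32_t limit;   /* segment length */
} seg_entry_t;

/* Translate <s, d>: the offset must be less than the segment's
 * limit, otherwise the hardware traps; valid addresses are base + d. */
int seg_translate(const seg_entry_t *table, uint32_t s, uint32_t d,
                  uint32_t *phys) {
    if (d >= table[s].limit)
        return -1;        /* stands in for the addressing trap */
    *phys = table[s].base + d;
    return 0;
}
```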

43 Protection and sharing
Segmentation lends itself to the implementation of protection and sharing policies. Because each segment table entry includes a length as well as a base address, a program cannot inadvertently access a main memory location beyond the limits of a segment. To achieve sharing, a segment can be referenced in the segment tables of more than one process.

44 Paging - Logical to Physical Address Translation
For example, suppose a logical address specifies page number 1, offset 478 (assuming 1 KB pages, this is the 16-bit address 0000010111011110). If this page resides in main memory frame 6 (binary 000110), then the physical address is frame number 6, offset 478: 000110 concatenated with 0111011110 gives 0001100111011110, i.e. 6 × 1024 + 478 = 6622.

45 Segmentation – Logical to Physical Address Translation
For example, suppose a logical address specifies segment number 1, offset 752, and that this segment resides in main memory starting at some base address, say 8224. Then the physical address is 8224 + 752 = 8976.

46 Combined Paging and Segmentation
In a combined paging/segmentation system, a user's address space is broken up into a number of segments, at the discretion of the programmer. Each segment is, in turn, broken up into a number of fixed-size pages, which are equal in length to a main memory frame. If a segment is shorter than a page, it occupies just one page. Paging is transparent to the programmer, while segmentation is visible to the programmer. A minimal sketch of the two-level lookup follows.
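A minimal sketch, in C, of the two-level lookup in a combined scheme, under the same illustrative 1 KB page assumption as before: the segment number selects a per-segment page table, then translation proceeds exactly as in pure paging. Validity and limit checks are omitted for brevity.

```c
#include <stdint.h>

#define OFFSET_BITS 10                       /* assumed 1 KB pages */
#define OFFSET_MASK ((1u << OFFSET_BITS) - 1)

/* Illustrative segment-table entry: each segment points at its own
 * page table rather than at one contiguous block of memory. */
typedef struct {
    const uint32_t *page_table;  /* frame number per page */
} seg_t;

/* Two-level translation: segment number selects a page table, the
 * page number selects a frame, and the offset passes through. */
uint32_t combined_translate(const seg_t *segs, uint32_t s,
                            uint32_t page_and_offset) {
    uint32_t page   = page_and_offset >> OFFSET_BITS;
    uint32_t offset = page_and_offset &  OFFSET_MASK;
    uint32_t frame  = segs[s].page_table[page];
    return (frame << OFFSET_BITS) | offset;
}
```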

47 Thank You!

