CS414 Review Session

Address Translation

Example
- Logical address: 32 bits
- Number of segments per process: 8
- Page size: 2 KB
- Page table entry size: 2 B
- Physical memory: 32 MB
- Scheme: paged segmentation with 2-level paging

Logical Address Space
- Total number of bits: 32
- Page offset: 11 bits (2 KB = 2^11 B)
- Segment number: 3 bits (8 = 2^3)
- Number of pages per segment: 2^18 (32 - 3 - 11 = 18)
- Number of page table entries in one page of the page table: 1K (2 KB / 2 B)
- Page number in inner page table: 10 bits (1K = 2^10)
- Page number in outer page table: 8 bits (18 - 10)
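The field split above can be checked with plain bit arithmetic. A minimal sketch, with function and constant names invented for illustration:

```python
# Field widths from the example: 3-bit segment#, 8-bit outer page#,
# 10-bit inner page#, 11-bit offset (3 + 8 + 10 + 11 = 32).
SEG_BITS, OUTER_BITS, INNER_BITS, OFFSET_BITS = 3, 8, 10, 11

def split_address(addr):
    """Split a 32-bit logical address into (segment, outer, inner, offset)."""
    offset = addr & ((1 << OFFSET_BITS) - 1)
    inner = (addr >> OFFSET_BITS) & ((1 << INNER_BITS) - 1)
    outer = (addr >> (OFFSET_BITS + INNER_BITS)) & ((1 << OUTER_BITS) - 1)
    seg = addr >> (OFFSET_BITS + INNER_BITS + OUTER_BITS)
    return seg, outer, inner, offset
```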

Segment Table
- Number of entries: 8
- Width of each entry (sum of):
  - Base address of outer page table: 14 bits (number of page frames = 32 MB / 2 KB = 16K = 2^14)
  - Length of segment: 29 bits (32 - 3)
  - Miscellaneous items

Page Table
- Outer page table:
  - Number of entries: 2^8
  - Width of each entry (sum of):
    - Page frame number of inner page table: 14 bits
    - Miscellaneous bits (2 B total specified)
- Inner page table:
  - Number of entries: 2^10
  - Width: same as the outer page table

Translation Look-aside Buffer
- Just an associative cache
- Number of entries: fixed in advance
- Width of each entry (sum of):
  - Key: segment# + page# = 3 + 18 = 21 bits (some TLBs also include a process ID)
  - Value: page frame# = 14 bits
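A toy model of the associative lookup, keyed by (segment#, page#) as above. The class shape and the LRU eviction policy are assumptions for illustration, not how any particular hardware TLB works:

```python
from collections import OrderedDict

class TLB:
    """Toy TLB: fixed-size associative cache from (seg#, page#) to frame#."""
    def __init__(self, size=16):
        self.size = size
        self.entries = OrderedDict()

    def lookup(self, seg, page):
        key = (seg, page)
        if key in self.entries:
            self.entries.move_to_end(key)   # refresh LRU position
            return self.entries[key]        # hit: return frame#
        return None                         # miss: would walk the page tables

    def insert(self, seg, page, frame):
        self.entries[(seg, page)] = frame
        if len(self.entries) > self.size:
            self.entries.popitem(last=False)  # evict least-recently-used entry
```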

The Page Size Issue
- With a very small page size, each page holds only code that is actually used → page faults are low
- Larger pages also contain code that is not used → fewer pages fit in memory → page faults rise (thrashing)
- But small pages → large page tables → costly translation
- Typical compromise: 2 KB to 8 KB
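The tradeoff can be made concrete with a little arithmetic; this sketch assumes, purely for illustration, a single flat page table with 4-byte entries over a 32-bit address space:

```python
def page_tradeoff(page_size, pte_size=4, addr_bits=32):
    """Return (page table size in bytes, average internal fragmentation)."""
    entries = 2 ** addr_bits // page_size   # one PTE per page
    table_bytes = entries * pte_size        # flat page table size
    avg_waste = page_size // 2              # average internal fragmentation
    return table_bytes, avg_waste
```

A 2 KB page gives an 8 MB flat table but only ~1 KB of waste per process; an 8 KB page shrinks the table to 2 MB at the cost of ~4 KB of waste.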

Load Control
- Determines the number of processes resident in main memory (i.e. the multiprogramming level)
- Too few processes: often all processes are blocked and the processor is idle
- Too many processes: the resident set of each process is too small, and flurries of page faults result (thrashing)

Handling Interrupts and Traps
1. Terminate the current instruction(s): pipeline flush
2. Save state: registers, PC (some instructions may need to be re-executed)
3. Invoke the interrupt-handling routine via the interrupt vector table: user-space to kernel-space context switch
4. Execute the interrupt-handling routine
5. Invoke the scheduler to schedule a ready process: kernel-space to user-space context switch
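The dispatch step can be sketched as a table indexed by interrupt number. The handler names and numbers here are hypothetical; real kernels do this in assembly with hardware support:

```python
# Hypothetical handlers, standing in for real interrupt service routines.
def handle_timer():
    return "timer serviced"

def handle_disk():
    return "disk I/O completed"

# The interrupt vector table maps interrupt numbers to handler routines.
interrupt_vector = {0: handle_timer, 14: handle_disk}

def dispatch(irq, saved_state):
    """Look up and run the handler; saved_state models the saved registers/PC."""
    handler = interrupt_vector[irq]   # indexed by interrupt number
    result = handler()                # runs in kernel mode
    # ...then the scheduler would pick a ready process and restore its state...
    return result
```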

Disk Optimizations
- Seek time (biggest overhead)
  - Disk scheduling algorithms: SSTF, SCAN, C-SCAN, LOOK, C-LOOK
  - Contiguous file allocation: place contiguous blocks on the same cylinder; same track if possible, otherwise the same-numbered track on another disk
  - Organ-pipe distribution: place the most heavily used blocks (i-nodes, directory structure) near the middle of the disk, and park the head at the middle of the disk
  - Use multiple heads
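Of the scheduling algorithms listed, SSTF is the easiest to sketch: repeatedly service the pending request closest to the current head position. A minimal illustration (cylinder numbers are made up):

```python
def sstf(head, requests):
    """Shortest-seek-time-first: return the service order of the requests."""
    pending, order = list(requests), []
    while pending:
        # Pick the request with the smallest seek distance from the head.
        nxt = min(pending, key=lambda cyl: abs(cyl - head))
        pending.remove(nxt)
        order.append(nxt)
        head = nxt
    return order
```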

Disk Optimizations
- Rotational latency (next biggest overhead)
  - Interleaving: logically adjacent sectors are not physically adjacent on the track
  - Disk cache: cache all sectors on the track (takes 2 rotations)
(diagram: interleaved sector numbering around a track)
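A sketch of how an interleave factor spaces logical sectors around the track; the layout rule below is a common textbook model, not necessarily the exact figure from the slide:

```python
def interleave_layout(n, factor):
    """Place n logical sectors on a track, stepping `factor` slots each time.

    Returns the track as a list: track[physical_slot] = logical_sector.
    """
    track = [None] * n
    pos = 0
    for logical in range(n):
        while track[pos] is not None:   # slot taken: slide to the next free one
            pos = (pos + 1) % n
        track[pos] = logical
        pos = (pos + factor) % n        # skip ahead by the interleave factor
    return track
```

With factor 2 on a 7-sector track, consecutive logical sectors land two slots apart, giving the head time to process one sector before the next arrives.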

Redundant Array of Inexpensive Disks
- Mirroring (shadowing): expensive, small gain in read time, reliable
- Striping: inexpensive, faster access time, not reliable
- Striping + parity: inexpensive, small performance gain, reliable
- Interleaving + parity + striping: inexpensive, faster access time, reliable
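The striping + parity idea rests on XOR: the parity block is the XOR of the data stripes, so any single lost stripe can be rebuilt from the survivors. A byte-level sketch (the stripe contents are made up):

```python
from functools import reduce

def parity(stripes):
    """XOR equal-length byte strings column-wise to get the parity block."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*stripes))

data = [b"\x01\x02", b"\x04\x08", b"\x10\x20"]
p = parity(data)
# Pretend stripe 1 was lost: XOR the surviving stripes with the parity block.
rebuilt = parity([data[0], data[2], p])
```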

Storage Hierarchy
- Registers: bytes, nsec
- Level 1 cache: KB+, nsec
- Level 2 cache: 500 KB+, ~100 nsec
- Main memory: 100 MB+, usec
- Hard disk / network: GB+, msec (10-1000 usec over a fast network)
- Tertiary storage: TB, sec

Paging vs Segmentation
- Paging:
  - Fixed-size partitions
  - Internal fragmentation (average = page size / 2); no external fragmentation
  - Small chunk of memory (~4 KB)
  - Linear address space, invisible to the programmer
- Segmentation:
  - Variable-size partitions
  - No internal fragmentation; external fragmentation (handled by compaction or paged segments)
  - Large chunk of memory (~1 MB)
  - Logical address space, visible to the programmer

Demand-Paging vs Pre-Paging
- Demand paging:
  - Pages swapped in on demand
  - More page faults (especially initially)
  - No wasted page frames
  - No pre-paging overhead
- Pre-paging:
  - Pages swapped in before use, in anticipation
  - Reduces future page faults
  - Pages may never be used (wasted memory space)
  - Needs good pre-paging strategies (working set, contiguous pages, etc.)

Local vs Global Page Replacement
- Local:
  - Only swap out the current process's pages
  - Page-frame allocation strategies required (e.g. page-fault frequency)
  - Thrashing affects only the current process; admission control required
  - Can use a different page replacement algorithm for each process
- Global:
  - Can swap out any page in memory
  - No explicit allocation of page frames
  - Can affect the performance of other processes; admission control required
  - Single page replacement algorithm for all processes

Interrupt-Driven I/O vs Polling
- Interrupt-driven:
  - Each interrupt has a fixed processing-time overhead (context switches)
  - Other processes can execute while waiting for the response
  - Good for long or indefinite response times, e.g. a printer
- Polling:
  - Response time is variable (device- and request-specific)
  - No other process can execute while waiting for the response
  - Good for short, predictable response times (less than the fixed interrupt overhead), e.g. fast networks

Contiguous vs Indexed Allocation
- Contiguous:
  - All blocks of the file occupy contiguous disk locations
  - No index overhead: disk addresses can be computed directly
  - Disk fragmentation is a major problem (compaction overhead)
  - Smart allocation strategies required
  - Low average latency for sequential access (only one long seek; smart block layouts)
- Indexed:
  - Blocks of the file are scattered across the disk
  - Each access involves a lookup in the index (may require fetching additional blocks from disk)
  - No disk fragmentation; no allocation strategies required
  - Higher average latency (mitigated by disk scheduling algorithms)
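The "disk addresses can be computed" point for contiguous allocation amounts to one line of arithmetic; the block size and names here are assumed for illustration:

```python
def block_for_offset(start_block, offset, block_size=512):
    """Contiguous allocation: the block holding a byte offset is pure arithmetic.

    start_block would come from the file's directory entry; no index is read.
    """
    return start_block + offset // block_size
```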

Contiguous vs Linked Allocation
- Contiguous:
  - All blocks at contiguous disk addresses
  - Disk addresses can be computed for each access
  - Suffers from disk fragmentation
  - Bad sectors break the contiguity of blocks
- Linked:
  - Blocks arranged as a linked list
  - Random access to block k means traversing the list from the head
  - No disk fragmentation
  - All bad blocks can be hidden away as a file
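The traversal cost of linked allocation can be sketched with a toy chain; the block numbers below are hypothetical, and the dict stands in for next-block pointers stored on disk:

```python
# Hypothetical on-disk chain: block 17 -> 42 -> 8 -> end of file.
next_block = {17: 42, 42: 8, 8: None}

def nth_block(head, k):
    """Reach the k-th block of the file by following k links from the head."""
    block = head
    for _ in range(k):
        block = next_block[block]   # each hop is another disk read
    return block
```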

Hard Disks vs Tapes
- Hard disks:
  - Small capacity (a few GB)
  - Subject to various failures (disk crashes, bad sectors, etc.)
  - Very small random-access latency (msec)
- Tapes:
  - Huge capacity per unit volume (TB)
  - Good for permanent storage (no corruption for a long time)
  - Very high random-access latency (sec): the tape must be read from the beginning

Unix FS vs Log-Structured FS
- Unix FS:
  - Index (i-nodes) used to map files to physical blocks
  - Same read latency as indexed allocation
  - Writes go back to the block where the data was read from, so write latency is dominated by seek time
  - No garbage collection required
  - Crash recovery is extremely difficult
- Log-structured FS:
  - Index (i-nodes) used to map files to physical blocks; same read latency as the Unix FS
  - Writes are batched and written to sequential blocks at the tail of the log
  - Write latency is small because the seek time is amortized
  - Garbage collection required to free old blocks
  - Checkpoints enable efficient crash recovery
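The write-path contrast can be sketched with a toy append-only log; this is an illustrative model with made-up names, not the real LFS on-disk format:

```python
log, index = [], {}   # the log's blocks, and a map from block id -> log position

def lfs_write(block_id, data):
    """Append the new version at the log tail (sequential write, no seek)."""
    log.append(data)
    index[block_id] = len(log) - 1   # index now points at the latest version

def lfs_read(block_id):
    """Reads still go through the index, as in an ordinary indexed FS."""
    return log[index[block_id]]

lfs_write("inode7", b"v1")
lfs_write("inode7", b"v2")   # the old copy at log[0] is now garbage
```

The stale copy left at the head of the log is exactly what the garbage collector must eventually reclaim.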

Routing Strategies
- Fixed: a permanent path between A and B; path choice independent of congestion; no set-up cost; in-order delivery
- Virtual circuit: a per-session path between A and B; some attempt to even out congestion; per-session set-up cost; in-order delivery
- Dynamic: a different path per message between A and B; congestion spread uniformly across paths; per-message set-up cost; out-of-order delivery possible

Connection Strategies
- Circuit switching: a permanent (hardware) link between A and B; link choice independent of congestion; no set-up cost; in-order delivery
- Message switching: a per-message link between A and B; some attempt to even out congestion; initial set-up cost; in-order delivery
- Packet switching: a different link per packet between A and B; congestion spread across links (best link chosen per packet); no set-up cost; out-of-order delivery possible