Avishai Wool – Introduction to Systems Programming, Lecture 8: Paging Design and Input-Output


Steps in Handling a Page Fault (figure)

Virtual → Physical Mapping
The CPU accesses a virtual address.
The MMU looks in the page table to find the physical address.
–But the page table is in memory too, so every reference would need an extra memory access.
Unreasonable overhead!
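
To make the cost concrete, here is a minimal C sketch of what a single-level translation amounts to; this is an illustration only (real MMUs do this in hardware, often with multi-level tables), and the 4 KB page size and field names are assumptions, not from the slides:

    #include <stdint.h>

    #define PAGE_SHIFT 12                         /* assume 4 KB pages */
    #define PAGE_MASK  ((1u << PAGE_SHIFT) - 1)

    /* One page-table entry: frame number plus a present bit (illustrative layout). */
    typedef struct { uint32_t frame; int present; } pte_t;

    /* Translate a virtual address with a single-level page table that lives in memory. */
    uint32_t translate(const pte_t *page_table, uint32_t vaddr)
    {
        uint32_t vpn    = vaddr >> PAGE_SHIFT;    /* virtual page number    */
        uint32_t offset = vaddr & PAGE_MASK;      /* offset within the page */

        if (!page_table[vpn].present)
            return 0;                             /* page fault: OS brings the page in, then retries */

        return (page_table[vpn].frame << PAGE_SHIFT) | offset;
    }

Every call reads page_table[vpn] from memory before the real access can happen, which is exactly the overhead the TLB below is meant to remove.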

TLB: Translation Lookaside Buffer
Idea: keep the most frequently used parts of the page table in a cache inside the MMU chip.
The TLB holds a small number of page-table entries: usually 8 – 64.
The TLB hit rate is very high because, e.g., instructions are fetched sequentially.

A TLB to Speed Up Paging (figure)
Example:
–code loops through pages 19, 20, 21
–uses a data array in pages 129, 130, 140
–stack variables in pages 860, 861

Valid TLB Entries
TLB miss:
–do a regular page-table lookup
–evict a TLB entry and store the new entry
–a miniature paging system, done in hardware
When the OS context-switches to a new process, all TLB entries become invalid:
–early instructions of the new process will cause TLB misses.

TLB Placement/Eviction
Done by hardware.
Placement rule:
–TLBIndex = VirtualAddr modulo TLBSize
–TLBSize is always 2^k → TLBIndex = the k least-significant bits
–keep a "tag" (the rest of the bits) to fully identify the virtual address
A virtual address can be in only one TLB index.
No explicit "eviction": simply overwrite whatever is in TLB[TLBIndex].
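
The placement rule maps directly onto a small direct-mapped table. The following C sketch is illustrative only: TLB_BITS, the field names, and taking the index from the low bits of the virtual page number are assumptions, not details given on the slides.

    #include <stdint.h>
    #include <stdbool.h>

    #define PAGE_SHIFT 12
    #define TLB_BITS   4                   /* TLBSize = 2^k, here k = 4 -> 16 entries (assumption) */
    #define TLB_SIZE   (1u << TLB_BITS)

    typedef struct {
        uint32_t tag;      /* remaining bits of the virtual page number */
        uint32_t frame;    /* physical frame number                     */
        bool     valid;    /* invalidated on context switch             */
    } tlb_entry_t;

    static tlb_entry_t tlb[TLB_SIZE];

    /* Direct-mapped lookup: a virtual page can live in exactly one slot. */
    bool tlb_lookup(uint32_t vaddr, uint32_t *frame_out)
    {
        uint32_t vpn   = vaddr >> PAGE_SHIFT;
        uint32_t index = vpn & (TLB_SIZE - 1);   /* the k least-significant bits  */
        uint32_t tag   = vpn >> TLB_BITS;        /* the rest identifies the page  */

        if (tlb[index].valid && tlb[index].tag == tag) {
            *frame_out = tlb[index].frame;       /* TLB hit */
            return true;
        }
        return false;                            /* TLB miss: walk the page table */
    }

    /* On a miss there is no eviction policy: just overwrite slot TLB[index]. */
    void tlb_insert(uint32_t vaddr, uint32_t frame)
    {
        uint32_t vpn   = vaddr >> PAGE_SHIFT;
        uint32_t index = vpn & (TLB_SIZE - 1);
        tlb[index] = (tlb_entry_t){ .tag = vpn >> TLB_BITS, .frame = frame, .valid = true };
    }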

TLB + Page Table Lookup (flowchart)
Virtual address: in the TLB? Yes → physical address.
No: in the page table? Yes → update the TLB, get the physical address.
No: page fault → copy the page from disk to memory, then retry.

TLB – cont.
If an address is in the TLB → its page is in physical memory:
–the OS invalidates the TLB entry when evicting a page
–so a page fault is not possible on a TLB hit
The "page fault rate" is computed only on TLB misses.

Example: Average Memory Access Time
TLB lookup: 4 ns; physical memory access: 10 ns; disk access: 10 ms.
TLB miss rate: 1%; page fault rate: 0.1% (of TLB misses). Assume the page table is in memory.
TLB hit: p = 0.99, time = 4 ns + 10 ns
TLB miss, page hit: p = 0.01 * 0.999, time = 4 ns + 10 ns + 10 ns
TLB miss, page fault: p = 0.01 * 0.001, time = 4 ns + 10 ns + 10 ms + 10 ns
Average memory access: 114.1 ns (1.141 × 10^-7 s)
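
Spelling out the expectation with these numbers:

    T_avg = 0.99*(4 + 10) + 0.01*0.999*(4 + 10 + 10) + 0.01*0.001*(4 + 10 + 10,000,000 + 10) ns
          ≈ 13.86 + 0.24 + 100.00 ns
          ≈ 114.1 ns

Note that the rare page faults contribute most of the average.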

Design Issues in Paging

Local versus Global Allocation Policies (figure)
(a) Original configuration – process 'A' causes a page fault
(b) Local page replacement
(c) Global page replacement

Local or Global?
Local → the number of frames per process is fixed:
–if the working set grows → thrashing
–if the working set shrinks → waste
Global is usually better.
Some algorithms can only be local (working set, WSClock).

How Many Frames to Give a Process?
A fixed number.
Proportional to its size (before load).
Zero: let it issue page faults for all its pages.
–This is called pure demand paging.
Monitor the page-fault frequency (PFF); give the process more frames if its PFF is high.

Page fault rate as a function of the number of page frames assigned (figure)

Load Control
Despite good designs, the system may still thrash.
When the PFF algorithm indicates that
–some processes need more memory,
–but no processes need less.
Solution: reduce the number of processes competing for memory
–swap one or more processes to disk, and divide up the frames they held
–reconsider the degree of multiprogramming

Cleaning Policy
Need for a background process, the paging daemon:
–periodically inspects the state of memory
When too few frames are free:
–selects pages to evict using a replacement algorithm
It can use the same circular list (clock)
–as the regular page replacement algorithm, but with a different pointer
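
A hedged sketch of what such a paging-daemon loop could look like; free_frame_count, clock_select_victim, evict_page, sleep_ms, and the threshold are hypothetical helpers, not a real kernel API:

    #define FREE_TARGET 64                   /* hypothetical: keep at least this many frames free */

    extern int  free_frame_count(void);      /* hypothetical helpers                              */
    extern int  clock_select_victim(void);   /* clock algorithm, using its own pointer            */
    extern void evict_page(int frame);       /* write back if dirty, then free the frame          */
    extern void sleep_ms(int ms);

    void paging_daemon(void)
    {
        for (;;) {
            sleep_ms(250);                             /* periodically inspect memory state        */
            while (free_frame_count() < FREE_TARGET) { /* too few free frames: start cleaning      */
                int victim = clock_select_victim();
                evict_page(victim);
            }
        }
    }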

Windows XP Page Replacement
Processes are assigned a working-set minimum and a working-set maximum.
The working-set minimum is the minimum number of page frames the process is guaranteed to have in memory.
A process may be assigned page frames up to its working-set maximum.
When the amount of free memory in the system falls below a threshold, automatic working-set trimming is performed to restore the amount of free memory.
Working-set trimming removes frames from processes that have more than their working-set minimum.

Devices, Controllers, and I/O Architectures

I/O Device Types
Block devices:
–have a fixed block size (measured in bytes)
–each block can be read/written individually
–typical: disks / floppy / CD
Character devices:
–deliver / accept a sequential stream of characters
–non-addressable
–typical: keyboard, mouse, printer, network
Other: monitor, clock

Typical Data Rates (table)

Device Controllers
I/O devices have components:
–a mechanical component
–an electronic component
The electronic component is the device controller:
–it may be able to handle multiple devices
Controller's tasks:
–convert a serial bit stream to a block of bytes
–perform error correction as necessary
–make the data available to main memory

Communicating with Controllers
Controllers have registers to deliver data, accept data, etc.
Option 1: special I/O commands and I/O ports, e.g.
–in r0, 4
–here "4" is not memory address 4, it is I/O port 4
Option 2: I/O registers mapped to memory addresses
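
As a hedged C illustration of the two options: Option 1 uses the Linux/x86-specific inb() wrapper from <sys/io.h> (which requires a prior ioperm() call and privileges), Option 2 is an ordinary volatile pointer. The port number and the mapped address are made up.

    #include <stdint.h>
    #if defined(__linux__) && (defined(__i386__) || defined(__x86_64__))
    #include <sys/io.h>                 /* inb()/outb(); call ioperm(DEV_PORT, 1, 1) first */
    #endif

    #define DEV_PORT      0x04          /* hypothetical I/O port number       */
    #define DEV_REG_ADDR  0x000B0000u   /* hypothetical memory-mapped address */

    /* Option 1: a special I/O instruction addressing an I/O port. */
    uint8_t read_status_port(void)
    {
    #if defined(__linux__) && (defined(__i386__) || defined(__x86_64__))
        return inb(DEV_PORT);           /* "4" here is a port number, not memory address 4 */
    #else
        return 0;                       /* port I/O is architecture-specific */
    #endif
    }

    /* Option 2: the controller register appears at an ordinary address;
     * 'volatile' keeps the compiler from caching or reordering the access. */
    uint8_t read_status_mmio(void)
    {
        volatile uint8_t *reg = (volatile uint8_t *)DEV_REG_ADDR;
        return *reg;
    }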

Memory-Mapped Registers
The controller is connected to the bus.
It has a physical "memory address" assigned to it.
When this address appears on the bus, the controller responds (a read/write of its I/O register).
RAM is configured to ignore the controller's address.

Possible I/O Register Mappings
Separate I/O and memory space (IBM 360)
Memory-mapped I/O (PDP-11)
Hybrid (Pentium: 640K – 1M are for I/O)

Advantages of Memory-Mapped I/O
No special instructions: the code can be written in C.
Protection: simply do not put the I/O memory in the user's virtual address space.
All machine instructions can access I/O:
    LOOP:  test *b      // check if port_4 is 0
           beq READY
           branch LOOP
    READY: ...
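
The same busy-wait, written in C against a memory-mapped status register; the address is hypothetical and, as in the loop above, 0 is taken to mean "ready":

    #include <stdint.h>

    #define DEV_STATUS_ADDR 0x000B0000u    /* hypothetical memory-mapped status register */

    void wait_until_ready(void)
    {
        volatile uint8_t *status = (volatile uint8_t *)DEV_STATUS_ADDR;
        while (*status != 0)
            ;                              /* busy-wait: 0 means the device is ready */
    }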

Disadvantages of Memory-Mapped I/O
Memory and the I/O controllers have to be on the same bus:
–modern architectures have a separate memory bus!
–the Pentium has 3 buses: memory, PCI, ISA

Bus Architectures (figure)
(a) A single-bus architecture
(b) A dual-bus memory architecture

Memory-Mapped I/O with a Separate Bus
I/O controllers do not see the memory bus.
Option 1: send all addresses to the memory bus; no response → I/O bus.
Option 2: a snooping device between the buses
–the speed difference is a problem
Option 3 (Pentium): filter addresses in the PCI bridge.

Structure of a large Pentium system (figure)

Principles of I/O Software

Goals of I/O Software
Device independence:
–programs can access any I/O device
–without specifying the device in advance (floppy, hard drive, or CD-ROM)
Uniform naming:
–the name of a file or device is a string or an integer
–not depending on which machine
Error handling:
–handle errors as close to the hardware as possible

Goals of I/O Software (2)
Synchronous vs. asynchronous transfers:
–blocking transfers vs. interrupt-driven I/O
Buffering:
–data coming off a device often cannot be stored directly in its final destination
Sharable vs. dedicated devices:
–disks are sharable
–tape drives would not be

How is I/O Programmed?
Programmed I/O
Interrupt-driven I/O
DMA (Direct Memory Access)

Programmed I/O: Steps in Printing a String (figure)

Polling
Busy-waiting until the device can accept another character.
The example assumes memory-mapped registers.
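
A hedged C sketch in the spirit of that example (not the slide's exact code; the printer register names and addresses are made up):

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical memory-mapped printer registers. */
    #define PRN_STATUS ((volatile uint8_t *)0x000B0000u)   /* 0 => ready for a character */
    #define PRN_DATA   ((volatile uint8_t *)0x000B0001u)

    /* Programmed I/O: the CPU does all the work, polling between characters. */
    void print_string(const char *buf, size_t count)
    {
        for (size_t i = 0; i < count; i++) {
            while (*PRN_STATUS != 0)
                ;                              /* busy-wait until the printer can take a char */
            *PRN_DATA = (uint8_t)buf[i];       /* hand the next character to the controller   */
        }
    }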

Properties of Programmed I/O
Simple to program.
Ties up the CPU, especially if the device is slow.

Interrupts Revisited (figure)

Interrupt-Driven I/O (figure)
(a) Code executed when the print system call is made
(b) Interrupt service procedure
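
A hedged sketch of the two pieces, in the spirit of the figure but not its exact code; the kernel helpers (block_current_process, unblock_waiting_process, acknowledge_interrupt) and the register addresses are hypothetical:

    #include <stddef.h>
    #include <stdint.h>

    #define PRN_STATUS ((volatile uint8_t *)0x000B0000u)   /* hypothetical, 0 => ready */
    #define PRN_DATA   ((volatile uint8_t *)0x000B0001u)

    static const char *p;          /* next character to print     */
    static size_t      remaining;  /* characters still to be sent */

    extern void block_current_process(void);   /* hypothetical kernel helpers */
    extern void unblock_waiting_process(void);
    extern void acknowledge_interrupt(void);

    /* Part executed by the print system call: start the I/O, then block (assumes count > 0). */
    void print_string_start(const char *buf, size_t count)
    {
        p = buf + 1;
        remaining = count - 1;
        while (*PRN_STATUS != 0)
            ;                           /* poll only for the first character */
        *PRN_DATA = (uint8_t)buf[0];
        block_current_process();        /* sleep until the whole string is out */
    }

    /* Interrupt service procedure: one interrupt per character. */
    void printer_isr(void)
    {
        if (remaining == 0) {
            unblock_waiting_process();  /* string finished: wake the caller */
        } else {
            *PRN_DATA = (uint8_t)*p++;  /* hand over the next character     */
            remaining--;
        }
        acknowledge_interrupt();
    }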

Properties of Interrupt-Driven I/O
An interrupt for every character or word.
Interrupt handling takes time.
Makes sense for slow devices (keyboard, mouse).
For fast devices: use a dedicated DMA controller
–usually for disk and network.

Direct Memory Access (DMA)
The DMA controller has access to the bus.
Registers:
–memory address to write to / read from
–byte count
–I/O port or mapped-memory address to use
–direction (read from / write to the device)
–transfer unit (byte or word)
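
One way to picture these registers is as a small volatile struct that a driver fills in before starting a transfer; the layout, field names, and base address below are hypothetical, not a real controller:

    #include <stdint.h>

    typedef volatile struct {
        uint32_t mem_addr;     /* physical memory address to read/write          */
        uint32_t byte_count;   /* how many bytes to transfer                     */
        uint32_t dev_addr;     /* I/O port or mapped-memory address of the device*/
        uint32_t direction;    /* 0 = device -> memory, 1 = memory -> device     */
        uint32_t unit;         /* 0 = byte, 1 = word                             */
        uint32_t start;        /* writing 1 kicks off the transfer               */
    } dma_regs_t;

    #define DMA ((dma_regs_t *)0x000C0000u)   /* made-up base address */

    /* Ask the DMA controller to read 'count' bytes from the device into 'phys'. */
    void dma_start_read(uint32_t phys, uint32_t count, uint32_t dev_addr)
    {
        DMA->mem_addr   = phys;       /* note: a physical address, not a virtual one */
        DMA->byte_count = count;
        DMA->dev_addr   = dev_addr;
        DMA->direction  = 0;          /* device -> memory */
        DMA->unit       = 1;          /* transfer words   */
        DMA->start      = 1;          /* go; completion is signalled by an interrupt */
    }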

Operation of a DMA transfer (figure)

I/O Using DMA (figure)
(a) Code executed when the print system call is made
(b) Interrupt service procedure

DMA with Virtual Memory
Most DMA controllers use physical addresses.
What if the buffer's memory is paged out during the DMA transfer?
Force the page not to be paged out ("pinning").
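
At user level, POSIX mlock() gives a rough analogue of pinning; kernels pin DMA buffers with their own internal mechanisms, so this is only an illustration of the idea:

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/mman.h>   /* mlock / munlock (POSIX) */

    int main(void)
    {
        size_t len = 1 << 20;                 /* 1 MB buffer */
        char *buf = malloc(len);
        if (!buf) return 1;

        if (mlock(buf, len) != 0) {           /* pin: pages stay in physical memory */
            perror("mlock");
            free(buf);
            return 1;
        }

        /* ... the buffer could now safely be the target of a DMA-style transfer ... */

        munlock(buf, len);                    /* unpin when done */
        free(buf);
        return 0;
    }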

Burst or Cycle-Stealing
Cycle-stealing: the DMA controller grabs the bus for one word at a time, competing with the CPU for bus access.
Burst mode: the DMA controller acquires the bus exclusively, issues several transfers, and then releases it.
–More efficient
–May block the CPU and other devices

Concepts for Review
TLB
Local/global page replacement
Demand paging
Page-fault-frequency monitor
I/O device controller
in/out commands
Memory-mapped registers
PCI bridge
Programmed I/O (polling)
Interrupt-driven I/O
I/O using DMA
Page pinning
DMA cycle-stealing
DMA burst mode