
Enforcing Modularity Junehwa Song CS KAIST

Network Computing Lab. How to run multiple modules? Emacs X server Mail Reader File Server

Designing a Virtual Memory System

Network Computing Lab. Can multiple Modules share MM?
Partition the MM space?
Intentional/unintentional intrusion via LOAD and STORE instructions.
(Figure: Emacs, X server, Mail, ... sharing one main memory.)

Network Computing Lab. Then, What Can We Do???
(Figure: on the left, the PROCESSOR sends an address and read/write requests directly to Main Memory and exchanges data with it; on the right, the PROCESSOR issues virtual addresses to a Virtual Memory Manager, which translates them into physical addresses for Main Memory.)

Network Computing Lab. Address Translation
Page Map (Page Table)
- Virtual Address Space
- Segmentation
int PageTable[size]
int translate(int virtual)  /* HW */

Address Translation
(Figure: a virtual address consists of a page # and a byte offset; the Page Map (Page Table) maps the page # to a block #; the physical address is that block # followed by the same byte offset.)
BTW, where do you want to put the Page Map?
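As a concrete sketch of this translation step, assuming a flat page map indexed by page number (the names PAGE_SIZE, NUM_PAGES, and page_table are illustrative):

#include <stdint.h>

#define PAGE_SIZE 4096            /* assumed page/block size                 */
#define NUM_PAGES 1024            /* assumed size of the virtual address space */

/* Page map: entry i holds the physical block number backing virtual page i. */
static uint32_t page_table[NUM_PAGES];

/* Translate a virtual address to a physical address (what the HW does). */
uint32_t translate(uint32_t vaddr) {
    uint32_t page   = vaddr / PAGE_SIZE;   /* page number              */
    uint32_t offset = vaddr % PAGE_SIZE;   /* byte offset              */
    uint32_t block  = page_table[page];    /* look up the block number */
    return block * PAGE_SIZE + offset;     /* physical address         */
}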

Network Computing Lab. Can multiple Modules share MM?
(Figure: the PROCESSOR issues a virtual address (page # + offset) and read/write requests; the Virtual Memory Manager uses the Page Map Address Register (e.g., 300) to find the current page map and translates the page # into a block #, producing the physical address (block # + offset). Main memory holds the Page Map of VM1, the Page Map of VM2, Page 10 (VM1), Page 12 (VM2), Page 11 (VM1), and unused blocks.)
Thus, a Page Map defines an address space.

Network Computing Lab. What do we further need to do?
Things to do:
- Create/Delete an address space
- Grow an address space
- Map a device to an address space
- Switch from one address space to another

Create-AS() {
  1. identify an unused memory block;
  2. generate a new address space id, ID;
  3. make a page map, PAGEMAP(ID), and initialize it;
  4. return (ID);
}
Delete-AS(ID) {
  for each block of each entry in PAGEMAP(ID)
    free(block);
  free(PAGEMAP(ID));
}
Add-page(ID, page#) {
  1. search for an unused block. Let the block be NEWBLOCK;
  2. make an entry (page#, NEWBLOCK) in PAGEMAP(ID);
}
Delete-page(ID, page#)
MAP(ID, page#, block)
Switch-AS()

Network Computing Lab. Things to do
Create-AS() { }  Delete-AS(ID) { }  Add-page(ID, page#) { }

Delete-page(ID, page#) {
  1. search for the entry (page#, block) in PAGEMAP(ID);
  2. free(block);
  3. remove the entry from PAGEMAP(ID);
}
MAP(ID, page#, block) {
  insert a new entry (page#, block) into PAGEMAP(ID);
}
Switch-AS(ID) {
  1. change the address space to ID;
  2. load the page map address register with the address of PAGEMAP(ID);
     /* we will come back to this later again */
}
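A minimal C sketch of these operations, assuming a fixed-size array page map per address space and a simple block allocation bitmap (all sizes and names here are illustrative):

#include <stdbool.h>

#define NUM_PAGES  1024        /* pages per address space (illustrative)        */
#define NUM_BLOCKS 4096        /* physical blocks in main memory (illustrative) */
#define MAX_AS     16          /* maximum number of address spaces              */
#define UNMAPPED   (-1)

static int  page_map[MAX_AS][NUM_PAGES];   /* PAGEMAP(id)[page#] = block#       */
static bool block_used[NUM_BLOCKS];        /* which physical blocks are taken   */
static bool as_used[MAX_AS];

static int alloc_block(void) {             /* find and claim an unused block    */
    for (int b = 0; b < NUM_BLOCKS; b++)
        if (!block_used[b]) { block_used[b] = true; return b; }
    return -1;
}

int create_as(void) {                      /* Create-AS: new, empty page map    */
    for (int id = 0; id < MAX_AS; id++) {
        if (!as_used[id]) {
            as_used[id] = true;
            for (int p = 0; p < NUM_PAGES; p++)
                page_map[id][p] = UNMAPPED;
            return id;
        }
    }
    return -1;                             /* no free address space id          */
}

int add_page(int id, int page) {           /* Add-page: back the page with a new block */
    int block = alloc_block();
    if (block < 0) return -1;
    page_map[id][page] = block;            /* the entry (page#, NEWBLOCK)       */
    return block;
}

void delete_page(int id, int page) {       /* Delete-page: free block, drop entry */
    int block = page_map[id][page];
    if (block != UNMAPPED) {
        block_used[block] = false;
        page_map[id][page] = UNMAPPED;
    }
}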

Network Computing Lab. Virtual Memory System
Now, multiple programs can share the main memory.
- Each module has its own virtual memory space.
- Memory operations, e.g., LOAD and STORE, and control sequence instructions such as JMP, are localized to its own virtual address space.

Network Computing Lab. Do we need a protected area in memory?
What if a module accidentally changes a page map or the page map address register?
- A separate, special address space, called the KERNEL address space
- Put all page maps as well as the virtual memory manager programs in the KERNEL
- Define a flag bit (in a flag register) to designate whether we are currently in KERNEL MODE or in user mode
  - In user mode, nobody can change the value of the page map address register
(Figure: the Kernel, exporting Create-AS, Delete-AS, Add-page, Delete-page, Switch-AS, and Map, alongside user modules such as the Text Editor and Mail Reader.)

Network Computing Lab. Interfaces to Kernel
Different ways to get a service from the Kernel:
- A device signals an Interrupt
- A user module executes an illegal instruction (exception), e.g., divide by zero
- A user module executes a special instruction (system-call)
  - Create-AS, Delete-AS, Add-page, Delete-page, Switch-AS, Map, and so on must be provided to user modules as system calls.
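A sketch of how the kernel side might dispatch such system calls from a call number once the SYSCALL instruction has entered the kernel; the call numbers, table, and stub routines are hypothetical:

/* Hypothetical system-call numbers for the kernel interface above. */
enum { SYS_CREATE_AS, SYS_DELETE_AS, SYS_ADD_PAGE, SYS_DELETE_PAGE, SYS_SWITCH_AS, NUM_SYSCALLS };

typedef int (*syscall_fn)(int a0, int a1);

/* Stubs standing in for the real kernel routines. */
static int sys_create_as(int a0, int a1)   { (void)a0; (void)a1; return 0; }
static int sys_delete_as(int a0, int a1)   { (void)a0; (void)a1; return 0; }
static int sys_add_page(int a0, int a1)    { (void)a0; (void)a1; return 0; }
static int sys_delete_page(int a0, int a1) { (void)a0; (void)a1; return 0; }
static int sys_switch_as(int a0, int a1)   { (void)a0; (void)a1; return 0; }

static const syscall_fn syscall_table[NUM_SYSCALLS] = {
    sys_create_as, sys_delete_as, sys_add_page, sys_delete_page, sys_switch_as,
};

/* Called from the SYSCALL handler after change-mode-enter-kernel. */
int dispatch_syscall(int number, int a0, int a1) {
    if (number < 0 || number >= NUM_SYSCALLS)
        return -1;                            /* reject bad call numbers */
    return syscall_table[number](a0, a1);
}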

Network Computing Lab. Exceptions and Interrupts
Interrupt
- Maskable interrupt
- Non-maskable interrupt
Exceptions
- Processor-detected exceptions: they differ in which address is saved on the STACK
  - Fault: the address of the instruction which generated the fault is saved
    - e.g., the page fault exception handler
  - Trap: the address of the next instruction is saved
    - generated when there is no need to re-execute the instruction
    - mainly used for debugging, i.e., to notify the debugger that a specific instruction has been executed
  - Abort: cannot save anything; the process will be terminated
- Programmed exceptions (software interrupts)
  - e.g., in Linux, generated by INT, INT3, and conditionally by INT0 and BOUND
  - handled as a trap by the CPU
  - used for system calls and for notification to the debugger of a specific event

Network Computing Lab. Wait a moment! Our Interpreter Model
The "4-register with Interrupt" Interpreter Model
4 registers:
- IC (Instruction Counter)
- SP (Stack Pointer)
  - this means that each process (or thread) is given a stack
- Flag (Flag Register)
  - includes the interrupt bit and the kernel mode bit
- PMAR (Page Map Address Register)
Interrupts are provided.
- Software interrupts are also provided.

Network Computing Lab. Things to do to enter/leave the Kernel
The following can be common to the three different kernel access methods. Let's assume the 4-register with Interrupt model.
Change-mode-enter-Kernel (SYSCALL instruction):
- Change mode from user to kernel: set the kernel mode flag on
- Save the page map address register (stack) and load that of the kernel
- Save the flags register (stack)
- Save the instruction counter (stack) and reload that of the kernel
  /* handling SP will be considered later, e.g., in a ThreadTable entry */
Change-mode-leave-Kernel (RTE instruction):
- Change mode from kernel to user: set the kernel mode flag off
- Reload the page map address register (stack)
- Pop the flags register (stack)
- Reload the instruction counter (stack)
These should be atomic actions.
- If not?
Processors implement the sequences differently:
- can be completely or partly done in HW
- e.g., ???
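A conceptual C sketch of what the two sequences save and restore, modeled on the 4-register interpreter; the register struct, KERNEL_MODE bit, and kernel_stack array are illustrative, and a real processor does this atomically in hardware or microcode:

/* The four registers of the "4-register with Interrupt" interpreter. */
struct regs {
    int ic;       /* instruction counter                */
    int sp;       /* stack pointer                      */
    int flag;     /* includes interrupt and kernel bits */
    int pmar;     /* page map address register          */
};

#define KERNEL_MODE 0x1

static int kernel_stack[256];   /* illustrative save area inside the kernel */
static int ks_top;

/* What SYSCALL does, conceptually: save user state, switch into the kernel. */
void change_mode_enter_kernel(struct regs *r, int kernel_pmar, int kernel_entry) {
    kernel_stack[ks_top++] = r->pmar;    /* save the user page map address    */
    kernel_stack[ks_top++] = r->flag;    /* save the user flags               */
    kernel_stack[ks_top++] = r->ic;      /* save the user instruction counter */
    r->flag |= KERNEL_MODE;              /* set the kernel mode flag on       */
    r->pmar  = kernel_pmar;              /* load the kernel's page map        */
    r->ic    = kernel_entry;             /* jump to the kernel handler        */
}

/* What RTE does, conceptually: restore user state, drop kernel mode. */
void change_mode_leave_kernel(struct regs *r) {
    r->ic   = kernel_stack[--ks_top];                 /* reload the instruction counter */
    r->flag = kernel_stack[--ks_top] & ~KERNEL_MODE;  /* kernel mode flag off           */
    r->pmar = kernel_stack[--ks_top];                 /* reload the page map address    */
}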

Network Computing Lab. Mode-Change Operations
A very expensive operation:
- number of instructions
- things to invalidate or clean up (e.g., pipeline, caches, etc.)

Network Computing Lab. Switching address spaces
Switch-AS(ID) {
  change the address space to ID;
  load the page map address register with the address of PAGEMAP(ID);
}
Only the kernel can change the page map address register, thus:
Switch-AS(ID) {
  change to KERNEL;                 /* change-mode-enter-kernel */
  load the page map address register with PAGEMAP(ID);
  change to the address space ID;   /* change-mode-leave-kernel */
}

Network Computing Lab. Q/A
Isn't address translation too slow?
- MMU
- TLB
Do we want to see the details of a PageMap?
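A sketch of a tiny direct-mapped TLB placed in front of the page map, reusing PAGE_SIZE and page_table from the translation sketch above; the TLB size is illustrative, and note that the TLB must be flushed (or tagged) whenever the page map address register changes:

#include <stdint.h>

#define TLB_ENTRIES 16                         /* illustrative TLB size */

struct tlb_entry { int valid; uint32_t page; uint32_t block; };
static struct tlb_entry tlb[TLB_ENTRIES];

/* Translation with a small direct-mapped TLB in front of the page map. */
uint32_t translate_with_tlb(uint32_t vaddr) {
    uint32_t page   = vaddr / PAGE_SIZE;
    uint32_t offset = vaddr % PAGE_SIZE;
    struct tlb_entry *e = &tlb[page % TLB_ENTRIES];
    if (!e->valid || e->page != page) {        /* TLB miss: walk the page map */
        e->valid = 1;
        e->page  = page;
        e->block = page_table[page];
    }
    return e->block * PAGE_SIZE + offset;      /* hit: no page map access     */
}

/* Must be called whenever the page map address register changes (Switch-AS). */
void tlb_flush(void) {
    for (int i = 0; i < TLB_ENTRIES; i++) tlb[i].valid = 0;
}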

Designing a Virtual Processor System

Network Computing Lab. Abstraction of a Module in Execution
Running multiple modules in a processor?
(Figure: Module A starts execution and temporarily stops; Module B starts execution and temporarily stops; Module A resumes execution.)

Network Computing Lab. Abstraction of a Module in Execution
An abstraction of a running program and its state
- so that we can stop and resume the execution of a program at any time.
Then, we can simulate a virtual processor for each module. What do we need?

Network Computing Lab. Thread
An abstraction of a running program and its state.
We should be able to save and load a thread's state, including:
- the next step of the thread
- the environment of the thread
- registers: general purpose registers, stack pointer, flag register, etc.
- a pointer to the address space: the page map address register
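A minimal sketch of the per-thread state that must be saved and restored (the field names and register count are illustrative):

/* Everything needed to stop a thread and resume it later. */
struct thread_state {
    int gpr[8];    /* general purpose registers (count is illustrative)          */
    int sp;        /* stack pointer: top of this thread's own stack               */
    int ic;        /* instruction counter: the next step of the thread            */
    int flag;      /* flag register (interrupt bit, kernel mode bit)              */
    int pmar;      /* page map address register: which address space it runs in   */
};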

Network Computing Lab. First Trial !!!
Let's make it simple:
- a very simple Virtual Thread Manager
Let's assume that:
- the state of each thread is stored in its own stack
A function called yield(), and a ThreadTable[]:
yield() {
  save the current value of SP (the current stack pointer) to ThreadTable[];
  select the next thread to run;   // this part is a "scheduling" task
  load the stack pointer of the next thread into SP;
}

Network Computing Lab. First Trial !!! (example)
(Figure: the Thread Table records the next thread and each thread's saved stack pointer; separate stacks for Thread 0 and Thread 6. Thread 6 in execution → yield → Thread 0 resumes.)

Network Computing Lab. First Trial!!! (we still need a bit more) Create-Thread() Exit-Thread() Destroy-Thread()

Network Computing Lab. First Trial!!!
Create-Thread(module_address)
- Allocate space for a new stack
- Initialize the stack
  - push the address of exit-thread()
  - push the address of the (start of the) module
- Initialize the entry in the ThreadTable[]

Network Computing Lab.
(Figure: the new thread's stack is initialized with the address of exit-thread() at the bottom and the module's start address above it; the Thread Table, holding the next thread and each thread's saved stack pointer, now includes an entry for the new stack alongside the stacks of the existing threads.)
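A sketch of the stack setup performed by Create-Thread, assuming stacks grow downward and stack words are uintptr_t; loading SP and actually starting the thread would need a few lines of assembly in a real implementation, and exit_thread is only a stub here:

#include <stdint.h>
#include <stdlib.h>

#define STACK_WORDS 1024        /* illustrative stack size */
#define MAX_THREADS 7

struct thread { uintptr_t *sp; };            /* saved stack pointer per thread */
static struct thread thread_table[MAX_THREADS];

/* Stub: a real Exit-Thread would free this thread's stack and table entry. */
void exit_thread(void) { }

/* Create-Thread: build a stack so that the first "return" jumps to the module,
 * and returning from the module falls into exit_thread(). */
int create_thread(int id, void (*module)(void)) {
    uintptr_t *stack = malloc(STACK_WORDS * sizeof(uintptr_t));
    if (stack == NULL) return -1;
    uintptr_t *sp = stack + STACK_WORDS;     /* stacks grow downward           */
    *--sp = (uintptr_t)exit_thread;          /* runs when the module returns   */
    *--sp = (uintptr_t)module;               /* start address of the module    */
    thread_table[id].sp = sp;                /* initialize the table entry     */
    return 0;
}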

Network Computing Lab. First Trial!!!
Exit-Thread()
- De-allocate the space for the stack
- De-allocate the entry in the ThreadTable[]

Network Computing Lab. Design with Sequence Coordination
Sequence coordination with the simple version:
  ...
  while (input_byte_count <= processed_byte_count) { yield(); }
  ...

  ...
  while a char is ready { input_byte_count++; do something; }
  ...
Problems:
- the shared variable input_byte_count should be used carefully
- the editor module should repeatedly call yield()

Network Computing Lab. Design with Sequence Coordination
Can we do any better?
- What about making something similar to an "interrupt"?
Wait() and notify():
- notify(eventcount)
- wait(eventcount, value)

  eventcount input_byte_count;
  ...
  while (input_byte_count <= processed_byte_count) {
    wait(input_byte_count, processed_byte_count);
  }
  ...

  ...
  input_byte_count++;
  notify(input_byte_count);
  ...

Network Computing Lab. Design with Sequence Coordination
What do we need to do to implement a "virtual processor" with wait() and notify()?
We are giving a thread a waiting state and a ready (to run) state. Thus, a thread can be in one of {running, ready, waiting}.
(State diagram: create-thread() puts a thread in ready; schedule() moves ready → running; yield() moves running → ready; wait() moves running → waiting; notify() moves waiting → ready; exit-thread() leaves running.)

Network Computing Lab. Virtual Thread Manager with Wait and Notify
struct thread {
  int SP;      // value of the stack pointer
  int state;   // waiting, ready, or running
  int *event;  // if waiting, the eventcount we are waiting for
} ThreadTable[];

Yield() {
  ThreadTable[me].SP = SP;   // save my stack pointer
  scheduler();
}
Scheduler() {
  do {   // select a runnable thread
    next_thread = (next_thread + 1) % 7;   // select a thread by a certain policy, here round robin
  } while (ThreadTable[next_thread].state == waiting);
  SP = ThreadTable[next_thread].SP;   // load the SP register
  return;   // pop the return address from the (new) stack
}
Wait() { }
Notify() { }

Network Computing Lab. Virtual Thread Manager with Wait and Notify
struct thread {
  int SP;      // value of the stack pointer
  int state;   // waiting, ready, or running
  int *event;  // if waiting, the eventcount we are waiting for
} ThreadTable[];

Yield() { }
Scheduler() { }
Wait(eventcount, value) {
  ThreadTable[me].event = &eventcount;
  ThreadTable[me].state = waiting;
  if (*eventcount > value)
    ThreadTable[me].state = ready;
  scheduler();
}
Notify(eventcount) {
  for (i = 0; i < size; i++) {
    if ((ThreadTable[i].state == waiting) && (ThreadTable[i].event == &eventcount))
      ThreadTable[i].state = ready;
  }
}

Network Computing Lab. Virtual Thread Manager with Wait and Notify
Again, we should be cautious in using a shared variable, especially when it is updated. In this case, the updated shared variable is ThreadTable[me].state. What if:
Wait(eventcount, value) {
  ThreadTable[me].event = &eventcount;
  ThreadTable[me].state = waiting;
  if (*eventcount > value) {
    ThreadTable[me].state = ready;
    scheduler();
  } else
    ThreadTable[me].state = waiting;
}

Network Computing Lab. Virtual Thread Manager with Wait and Notify
Be cautious in using Wait() and Notify():
- What if everybody waits and nobody notifies?  Deadlock.
- What can we do for that case?

Network Computing Lab. Interface of our Simple Kernel
(Figure: user modules such as the Text Editor and Mail Reader enter the kernel through System Call, Interrupt, and Exception; the kernel provides Create-AS, Delete-AS, Add-page, Delete-page, Switch-AS, Map, Create-thread, Exit-thread, Destroy-thread, yield, Register-gate, and Transfer-to-gate.)
We can say that the above is a simple Kernel model based on the 4-register with interrupt interpreter model.

Network Computing Lab. Design of a Simple OS
Initialization sequence:
- Reset
- Boot
- Kernel initialization
- Initialization of initial applications
Reset
- Turn on the computer
- Load the boot program from ROM
- Virtual and physical address 0
Boot
- Read the kernel program from disk and initialize it
  - the kernel is located at a pre-agreed address on disk (or floppy disk)
  - the kernel is loaded into a pre-agreed address in physical memory
- The CPU starts at the first instruction of the kernel

Network Computing Lab. Design of a simple OS
Kernel initialization
- Set the supervisor bit (kernel mode bit) on
- Set the interrupt bit off
  - assume that interrupts do not occur in kernel mode
  - a simplification to make the kernel design simple
- Allocate the kernel stack
  - preparation for procedure calls in the kernel
  - use add-page()
- Make its own page table
  - allocate several blocks
  - start from a specific address, say KERNELPAGEMAP
  - make the map
    - for simplification, use a simple mapping: 0 → 0, 1 → 1, etc.
- Fill the PageMapAddress register
  - with KERNELPAGEMAP
  - then, address translation starts

Network Computing Lab. Design of a simple OS
Initialization of initial applications
- Assume that the initial applications are located at pre-agreed places on DISK
- Use Create-AS() for an application
- Allocate several blocks
  - use Add-Page()
  - start from a pre-agreed address, say FIRSTUSER
- Read the program into the blocks
- Make a Page Map
  - allocate blocks for the Page Map
  - make the map; assign FIRSTUSER virtual address 0
- Make a stack
  - from the other end of the address space
  - push 0 onto the stack (to account for the RTE instruction)
- Switch to the application
  - Switch-AS()
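A straight-line sketch of loading one initial application, reusing the hypothetical helpers from the earlier sketches; FIRSTUSER, APP_PAGES, switch_as, and disk_read are assumptions standing in for the pre-agreed disk layout, Switch-AS, and a disk primitive:

/* Assumed helpers: create_as/add_page as sketched earlier; switch_as and
 * disk_read are hypothetical primitives. */
extern int  create_as(void);
extern int  add_page(int id, int page);       /* returns the allocated block */
extern void switch_as(int id);
extern void disk_read(int disk_block, int mem_block);

#define FIRSTUSER 64       /* assumed disk address of the first application */
#define APP_PAGES 3        /* assumed program size in pages                 */

/* Load one initial application into a fresh address space and switch to it. */
void start_first_application(void) {
    int id = create_as();                       /* Create-AS()                 */
    for (int p = 0; p < APP_PAGES; p++) {
        int block = add_page(id, p);            /* Add-Page(): page p -> block */
        disk_read(FIRSTUSER + p, block);        /* read the program from disk  */
    }
    /* a user stack would be set up at the other end of the address space,
     * with a 0 pushed for the RTE instruction, before switching */
    switch_as(id);                              /* Switch-AS(): run the app    */
}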

Network Computing Lab. Kernel Organization: Monolithic Kernel
(Figure: user modules such as the Text Editor and Mail Reader sit above a single kernel that contains the VM, File Manager, Window Manager, Network Manager, and Thread Manager.)

Network Computing Lab. Kernel Organization: Microkernel
(Figure: the Text Editor, Mail Reader, VM, File Manager, Window Manager, Network Manager, and Thread Manager run as separate modules on top of a small kernel containing the Communication Manager.)

Network Computing Lab. Kernel Organization
Which is better?
- Monolithic vs. Micro-kernel
Any other ways?
- e.g., exokernel ????

Network Computing Lab. Design Project
So far, we designed a simple OS and a simple Kernel based on the 4-register with interrupt interpreter model. Now we want to design a different one: give the specification of a simple artificial machine and have students design a simple OS for it. We may extend or restrict the "4-register with interrupt" interpreter model.
E.g., a machine where:
- a program can be loaded into at most 3 blocks
- 5 programs can run at the same time (5 threads)
- DISK access operations are given
- a simple vocabulary is given
- no interrupts occur in kernel mode
- the size of the physical memory is block size x number of blocks
- the initial applications and the kernel program are located at specific addresses on the disk