COT 5611 Operating Systems Design Principles Spring 2014


COT 5611 Operating Systems Design Principles Spring 2014
Dan C. Marinescu
Office: HEC 304
Office hours: M-Wd 3:30 – 5:30 PM

Lecture 19
Reading assignment: Chapter 9 from the on-line text

Today
- Locks
- Thread management
- Address spaces and multi-level memories
- Kernel structures for the management of multiple cores/processors and threads/processes
- YIELD system call
- Switching the processor from one thread to another
- Cache coherence

Locks; Before-or-After actions
- A lock is a shared variable that acts as a flag to coordinate access to shared data. It is manipulated with two primitives:
  - ACQUIRE
  - RELEASE
- Locks support the implementation of before-or-after actions; only one thread can acquire the lock, the others have to wait.
- All threads must obey the convention regarding the locks.
- The two operations ACQUIRE and RELEASE must be atomic.
- Hardware support for the implementation of locks:
  - RSM – Read and Set Memory
  - CMP – Compare and Swap
- RSM(mem):
  - If mem = LOCKED, then RSM returns r = LOCKED and leaves mem = LOCKED.
  - If mem = UNLOCKED, then RSM returns r = UNLOCKED and sets mem = LOCKED.
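A minimal sketch of how ACQUIRE and RELEASE can be built on top of a read-and-set (test-and-set) primitive; C11's atomic_flag stands in for RSM, and the names lock_t, acquire, and release are illustrative rather than the textbook's exact interface.

```c
#include <stdatomic.h>

/* A lock is a shared flag; clear = UNLOCKED, set = LOCKED. */
typedef struct { atomic_flag locked; } lock_t;

#define LOCK_INITIALIZER { ATOMIC_FLAG_INIT }

/* ACQUIRE: atomically read the old value and set it to LOCKED.
   If the old value was already LOCKED, another thread holds the lock,
   so keep trying. */
static void acquire(lock_t *l) {
    while (atomic_flag_test_and_set_explicit(&l->locked, memory_order_acquire)) {
        /* busy wait; a kernel could call YIELD here instead (see later slides) */
    }
}

/* RELEASE: reset the flag to UNLOCKED. */
static void release(lock_t *l) {
    atomic_flag_clear_explicit(&l->locked, memory_order_release);
}
```

With this convention, a before-or-after action is bracketed by acquire(&l) … release(&l); only threads that obey the convention are serialized.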


Important facts to remember
- Each thread has a unique ThreadId.
- Threads save their state on the stack. The stack pointer of a thread is stored in the thread table.
- To activate a thread, the registers of the processor are loaded with information from the thread state.
- What if no thread is able to run?  Create a dummy thread for each processor, called a processor_thread, which is scheduled to run when no other thread is available:
  - the processor_thread runs in the thread layer
  - the SCHEDULER runs in the processor layer
- We have a processor thread for each processor/core.

Threads and the TM (Thread Manager)
- Thread  a virtual processor that multiplexes a physical processor; a module in execution. A module may have several threads.
- Sequence of operations:
  - Load the module's text.
  - Create a thread and launch the execution of the module in that thread.
- Scheduler  the system component which chooses the thread to run next.
- Thread Manager  implements the thread abstraction.
- Interrupts  processed by the interrupt handler, which interacts with the thread manager.
- Exceptions  interrupts caused by the running thread and processed by exception handlers.
- Interrupt handlers run in the context of the OS, while exception handlers run in the context of the interrupted thread.

The state of a thread; kernel versus application threads
Thread state:
- Thread Id  unique identifier of a thread
- Program Counter (PC)  the reference to the next computational step
- Stack Pointer (SP)
- PMAR – Page Table Memory Address Register
- Other registers
Threads:
- User-level threads / application threads – threads running on behalf of users
- Kernel-level threads – threads running on behalf of the kernel
- Scheduler thread – the thread running the scheduler
- Processor-level thread – the thread running when there is no thread ready to run
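A minimal sketch of what one thread-table entry might hold, based on the state listed above; the field names and the fixed size of 7 entries (used later in these slides) are illustrative assumptions, not the textbook's exact layout.

```c
#include <stdint.h>

enum thread_state { RUNNING, RUNNABLE };  /* the two states used in these slides */

/* Per the slides, most of the thread state (PC, PMAR, other registers) is
   saved on the thread's own stack; the table mainly remembers where that
   stack is and what state the thread is in. */
struct thread {
    int               thread_id;  /* unique ThreadId          */
    uintptr_t         topstack;   /* saved stack pointer (SP) */
    enum thread_state state;      /* RUNNING or RUNNABLE      */
};

#define NTHREADS 7                /* the slides assume a fixed-size table of 7 threads */
struct thread thread_table[NTHREADS];
```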


Virtual versus real; the state of a processor
- Virtual objects need physical support. A system may have multiple processors, and each processor may have multiple cores.
- The state of a processor or a core:
  - Processor Id / Core Id  unique identifier of a processor/core
  - Program Counter (PC)  the reference to the next computational step
  - Stack Pointer (SP)
  - PMAR – Page Table Memory Address Register
  - Other registers
- The OS maintains several kernel structures:
  - Thread table  used by the thread manager to maintain the state of all threads.
  - Processor table  used by the scheduler to maintain the state of each processor.


Processor and thread tables – kernel control structures. They must be locked for serialization.
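Continuing the earlier sketches, one possible shape for the processor table and for the locks that serialize access to both kernel tables; the field names and the number of processors are assumptions for illustration.

```c
/* One entry per processor/core; uses uintptr_t from <stdint.h> and the
   lock_t/LOCK_INITIALIZER from the spinlock sketch above. */
struct processor {
    int       processor_id;     /* unique processor/core identifier               */
    int       running_thread;   /* index into thread_table, or -1                 */
    uintptr_t topstack;         /* stack pointer of this core's processor_thread  */
};

#define NPROCESSORS 4            /* assumed number of cores */
struct processor processor_table[NPROCESSORS];

/* Both tables are shared kernel control structures, so every access is
   bracketed by acquire()/release() on the corresponding lock. */
lock_t thread_table_lock    = LOCK_INITIALIZER;
lock_t processor_table_lock = LOCK_INITIALIZER;
```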

Primitives for processor virtualization
Memory:
- CREATE/DELETE_ADDRESS_SPACE
- ALLOCATE/FREE_BLOCK
- MAP/UNMAP
Interpreter:
- ALLOCATE_THREAD
- DESTROY_THREAD
- EXIT_THREAD
- YIELD
- AWAIT
- ADVANCE
- TICKET
- ACQUIRE
- RELEASE
Communication channel:
- ALLOCATE/DEALLOCATE_BOUNDED_BUFFER
- SEND/RECEIVE

Thread states and state transitions (figure)

The state of a thread and its associated virtual address space (figure)

Switching the processor from one thread to another
- Thread creation:
  thread_id = ALLOCATE_THREAD(starting_address_of_procedure, address_space_id);
- YIELD  function implemented by the kernel to allow a thread to wait for an event. It allows several threads running on the same processor to wait for a lock; it replaces the busy wait. Actions:
  - Save the state of the current thread.
  - Schedule another thread.
  - Start running the new thread – dispatch the processor to the new thread.
- More about YIELD:
  - It cannot be implemented in a high-level language; it must be implemented in machine language.
  - It can be called from the environment of the thread, e.g., C, C++, Java.
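A hedged sketch of the point that YIELD replaces the busy wait: instead of spinning on a held lock, the thread gives up the processor. It reuses the lock_t and atomics from the spinlock sketch above; acquire_or_yield and the extern YIELD() are illustrative stand-ins for the kernel interface.

```c
#include <stdatomic.h>

extern void YIELD(void);   /* assumed kernel entry point described in these slides */

/* Same test-and-set loop as before, but a failed attempt yields the
   processor so another RUNNABLE thread can make progress instead of
   this thread busy-waiting. */
static void acquire_or_yield(lock_t *l) {
    while (atomic_flag_test_and_set_explicit(&l->locked, memory_order_acquire)) {
        YIELD();
    }
}
```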

YIELD
- System call  executed by the kernel at the request of an application; it allows an active thread A to voluntarily release control of the processor.
- YIELD invokes the ENTER_PROCESSOR_LAYER procedure, which:
  - locks the thread table and unlocks it when it finishes its work
  - changes the state of thread A from RUNNING to RUNNABLE
  - invokes the SCHEDULER
  - the SCHEDULER searches the thread table to find another thread B in the RUNNABLE state
  - the state of thread B is changed from RUNNABLE to RUNNING
  - the registers of the processor are loaded with the ones saved on the stack for thread B
  - thread B becomes active
- Why is it necessary to lock the thread table?
  - We may have multiple cores/processors, so another thread may be active.
  - An interrupt may occur.
- The pseudocode assumes that we have a fixed number of threads, 7.
- The flow of control: YIELD  ENTER_PROCESSOR_LAYER  SCHEDULER  EXIT_PROCESSOR_LAYER  YIELD
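The kernel version of this flow ends in machine-language stack switching and cannot be written in portable C, but the control flow can be imitated in user space. Below is a hedged, runnable analogy using POSIX ucontext: the processor_thread context plays the role of the processor layer, swapcontext stands in for the SP save/reload, and the table holds 3 threads instead of 7 to keep the output short. All names are illustrative, not the textbook's code.

```c
#include <stdio.h>
#include <ucontext.h>

#define NTHREADS 3
enum state { FREE, RUNNABLE, RUNNING };

static struct {
    ucontext_t ctx;               /* saved register state of the thread            */
    enum state st;
    char       stack[64 * 1024];  /* each thread saves its state on its own stack  */
} thread_table[NTHREADS];

static ucontext_t processor_thread;  /* the "processor layer" context */
static int current = -1;             /* index of the RUNNING thread   */

/* YIELD: mark ourselves RUNNABLE and hand the processor back to the
   scheduler running in the processor_thread context. */
static void YIELD(void) {
    int me = current;
    thread_table[me].st = RUNNABLE;                       /* RUNNING -> RUNNABLE */
    swapcontext(&thread_table[me].ctx, &processor_thread);
    /* execution resumes here once the scheduler dispatches us again */
}

static void worker(void) {
    for (int i = 0; i < 3; i++) {
        printf("thread %d, step %d\n", current, i);
        YIELD();                                          /* voluntarily release the processor */
    }
    thread_table[current].st = FREE;                      /* crude EXIT_THREAD */
}   /* returning resumes processor_thread via uc_link */

int main(void) {
    /* ALLOCATE_THREAD analogue: set up a fixed table of threads */
    for (int i = 0; i < NTHREADS; i++) {
        getcontext(&thread_table[i].ctx);
        thread_table[i].ctx.uc_stack.ss_sp   = thread_table[i].stack;
        thread_table[i].ctx.uc_stack.ss_size = sizeof thread_table[i].stack;
        thread_table[i].ctx.uc_link          = &processor_thread;
        makecontext(&thread_table[i].ctx, worker, 0);
        thread_table[i].st = RUNNABLE;
    }

    /* SCHEDULER: circle through the thread table until nothing is RUNNABLE */
    for (;;) {
        int dispatched = 0;
        for (int i = 0; i < NTHREADS; i++) {
            if (thread_table[i].st == RUNNABLE) {
                dispatched = 1;
                thread_table[i].st = RUNNING;             /* RUNNABLE -> RUNNING */
                current = i;
                /* EXIT_PROCESSOR_LAYER analogue: save the processor thread's
                   state and load thread i's state */
                swapcontext(&processor_thread, &thread_table[i].ctx);
            }
        }
        if (!dispatched) break;                           /* no RUNNABLE thread left */
    }
    return 0;
}
```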


Dynamic thread creation and termination
- Until now we assumed a fixed number of threads, 7; the thread table was of fixed size.
- We have to support two other system calls:
  - EXIT_THREAD  allows a thread to destroy itself and clean up.
  - DESTROY_THREAD  allows a thread to terminate another thread of the same application.

Important facts to remember
- Each thread has a unique ThreadId.
- Threads save their state on the stack. The stack pointer of a thread is stored in the thread table.
- To activate a thread, the registers of the processor are loaded with information from the thread state.
- What if no thread is able to run?  Create a dummy thread for each processor, called a processor_thread, which is scheduled to run when no other thread is available:
  - the processor_thread runs in the thread layer
  - the SCHEDULER runs in the processor layer
- We have a processor thread for each processor/core.

System start-up procedure
procedure RUN_PROCESSORS()
    for each processor do
        allocate stack and set up processor_thread   /* allocation of the stack is done at the processor layer */
        shutdown  FALSE
        SCHEDULER()
        deallocate processor_thread stack            /* deallocation of the stack is done at the processor layer */
        halt processor

Switching threads with dynamic thread creation
- Switching from one user thread to another requires two steps:
  - Switch from the thread releasing the processor to the processor thread.
  - Switch from the processor thread to the new thread which is going to have control of the processor.
- The last step requires the SCHEDULER to circle through the thread_table until a thread ready to run is found.
- The boundary between the user-layer threads and the processor-layer thread is crossed twice.
- Example: switch from thread 0 to thread 6 using YIELD, ENTER_PROCESSOR_LAYER, and EXIT_PROCESSOR_LAYER.


The control flow when switching from one thread to another
- The control flow is not obvious because some of the procedures reload the stack pointer (SP). When a procedure reloads the stack pointer, the place to which it transfers control when it executes a return is the procedure whose SP was saved on the stack and reloaded before the execution of the return.
- ENTER_PROCESSOR_LAYER
  - Changes the state of the thread calling YIELD from RUNNING to RUNNABLE.
  - Saves the state of the procedure calling it, YIELD, on the stack.
  - Loads the processor's registers with the state of the processor thread, thus starting the SCHEDULER.
- EXIT_PROCESSOR_LAYER
  - Saves the state of the processor thread into the corresponding PROCESSOR_TABLE entry and loads the state of the thread selected by the SCHEDULER to run (in our example, thread 6) into the processor's registers.
  - Loads the SP with the values saved by ENTER_PROCESSOR_LAYER.


In ENTER_PROCESSOR_LAYER, instead of SCHEDULER() it should be SP  processor_table[processor].topstack.

Cache coherence
- The cache can be thought of as a replication system whose goal is to improve performance.
- An invariant  the replica of any item in the cache (primary memory) should also exist in the main memory (secondary memory).
- Consistency between primary and secondary memory is non-trivial due to the discrepancy in access times.
- Two types of consistency rules: strict and eventual.
  - A write-through cache provides strict consistency: a write is not complete until both the cache and the main memory are updated.
  - A non-write-through cache: a write is acknowledged as done when the cache is updated; the thread can continue its work while the cache manager eventually updates the main memory.
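A toy sketch of the two rules above, assuming a direct-mapped cache in which memory[] plays the role of main (secondary) memory and cache[] the role of the primary-memory cache; lookup, eviction, and reads are omitted, and all names and sizes are illustrative rather than a real cache controller.

```c
#include <stdbool.h>

#define NLINES 64
#define LINESZ 64
#define MEMSZ  (NLINES * LINESZ)

static unsigned char memory[MEMSZ];   /* main (secondary) memory */

static struct {
    bool          dirty;              /* cache holds a newer value than memory */
    unsigned char data[LINESZ];
} cache[NLINES];

static bool write_through = true;     /* true: strict consistency; false: eventual */

/* addr is assumed to be < MEMSZ and already resident in the cache. */
void cache_write(unsigned addr, unsigned char value) {
    unsigned idx = (addr / LINESZ) % NLINES;   /* direct-mapped placement */

    cache[idx].data[addr % LINESZ] = value;    /* update the cache copy */

    if (write_through) {
        /* strict consistency: the write is not complete until main
           memory is updated as well */
        memory[addr] = value;
    } else {
        /* eventual consistency: acknowledge the write now; the cache
           manager writes the dirty line back to memory later */
        cache[idx].dirty = true;
    }
}

/* The cache manager eventually calls something like this for dirty lines. */
void write_back(unsigned idx, unsigned base_addr) {
    if (cache[idx].dirty) {
        for (unsigned i = 0; i < LINESZ; i++)
            memory[base_addr + i] = cache[idx].data[i];
        cache[idx].dirty = false;
    }
}
```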