COT 5611 Operating Systems Design Principles Spring 2014


1 COT 5611 Operating Systems Design Principles Spring 2014
Dan C. Marinescu Office: HEC 304 Office hours: M-Wd 3:30 – 5:30 PM

2 Lecture 19 Reading assignment: Chapter 9 from the on-line text

3 Today
Locks
Thread management
Address spaces and multi-level memories
Kernel structures for the management of multiple cores/processors and threads/processes
YIELD system call
Switching the processor from one thread to another
Cache coherence

4 Locks; Before-or-After actions
Locks shared variables which acts as a flag to coordinate access to a shared data. Manipulated with two primitives ACQUIRE RELEASE Support implementation of before-or-after actions; only one thread can acquire the lock, the others have to wait. All threads must obey the convention regarding the locks. The two operations ACQUIRE and RELEASE must be atomic. Hardware support for implementation of locks RSM – Read and Set Memory CMP –Compare and Swap RSM (mem) If mem=LOCKED then RSM returns r=LOCKED and sets mem=LOCKED If mem=UNLOCKED the RSM returns r=LOCKED and sets mem=LOCKED 11/27/2018 Lecture 19

5 [figure]

6 [figure]

7 Important facts to remember
Each thread has a unique ThreadId.
Threads save their state on the stack. The stack pointer of a thread is stored in the thread table.
To activate a thread, the registers of the processor are loaded with information from the thread state.
What if no thread is able to run? Create a dummy thread for each processor, called a processor_thread, which is scheduled to run when no other thread is available:
the processor_thread runs in the thread layer
the SCHEDULER runs in the processor layer
We have a processor thread for each processor/core.

8 Threads and the TM (Thread Manager)
Thread → virtual processor; multiplexes a physical processor.
A module in execution; a module may have several threads. The sequence of operations:
Load the module's text.
Create a thread and launch the execution of the module in that thread.
Scheduler → system component which chooses the thread to run next.
Thread Manager → implements the thread abstraction.
Interrupts → processed by the interrupt handler, which interacts with the thread manager.
Exceptions → interrupts caused by the running thread, processed by exception handlers.
Interrupt handlers run in the context of the OS, while exception handlers run in the context of the interrupted thread.

9 The state of a thread; kernel versus application threads
Thread state:
Thread Id → unique identifier of a thread
Program Counter (PC) → the reference to the next computational step
Stack Pointer (SP)
PMAR – Page Table Memory Address Register
Other registers
Threads:
User-level threads/application threads – threads running on behalf of users
Kernel-level threads – threads running on behalf of the kernel
Scheduler thread – the thread running the scheduler
Processor-level thread – the thread running when there is no thread ready to run
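A hypothetical C rendering of a thread-table entry, consistent with the earlier note that a thread's registers are saved on its own stack and only the stack pointer lives in the table (the field names and the FREE value for unused entries are assumptions for illustration):

typedef enum { RUNNING, RUNNABLE, FREE } thread_state_t;

/* One entry per thread in the kernel's thread table (illustrative).
   The PC, PMAR, and other registers are not stored here: they are saved
   on the thread's stack, and sp points at that saved state. */
typedef struct {
    int            thread_id;   /* unique ThreadId                        */
    void          *sp;          /* stack pointer of the suspended thread  */
    thread_state_t state;       /* RUNNING, RUNNABLE, or FREE (unused)    */
} thread_table_entry;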

10 [figure]

11 Virtual versus real; the state of a processor
Virtual objects need physical support. A system may have multiple processors and each processor may have multiple cores.
The state of a processor or a core:
Processor Id/Core Id → unique identifier of a processor/core
Program Counter (PC) → the reference to the next computational step
Stack Pointer (SP)
PMAR – Page Table Memory Address Register
Other registers
The OS maintains several kernel structures:
Thread table → used by the thread manager to maintain the state of all threads.
Processor table → used by the scheduler to maintain the state of each processor.
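For symmetry, a hypothetical processor-table entry might be sketched as follows (the field names are assumptions: topstack is the name used on a later slide for the processor thread's saved stack pointer, and running_thread is an invented helper field used by the sketches further below):

/* One entry per processor/core in the kernel's processor table (illustrative). */
typedef struct {
    int   processor_id;    /* unique Processor Id / Core Id                  */
    void *topstack;        /* saved stack pointer of this processor's
                              processor_thread                               */
    int   running_thread;  /* thread-table index of the thread currently
                              dispatched on this processor                   */
} processor_table_entry;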

12 [figure]

13 Processor and thread tables – kernel control structures
These tables must be locked for serialization.

14 Primitives for processor virtualization
Memory:
CREATE/DELETE_ADDRESS_SPACE
ALLOCATE/FREE_BLOCK
MAP/UNMAP
Interpreter:
ALLOCATE_THREAD
DESTROY_THREAD
EXIT_THREAD
YIELD
AWAIT
ADVANCE
TICKET
ACQUIRE
RELEASE
Communication channel:
ALLOCATE/DEALLOCATE_BOUNDED_BUFFER
SEND/RECEIVE
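Viewed as a C-style kernel interface, a subset of these primitives might be declared roughly as follows (the signatures are assumptions for illustration; the on-line text defines the primitives abstractly):

typedef int address_space_id_t;
typedef int thread_id_t;

/* Memory */
address_space_id_t CREATE_ADDRESS_SPACE(void);
void               DELETE_ADDRESS_SPACE(address_space_id_t as);

/* Interpreter */
thread_id_t ALLOCATE_THREAD(void (*starting_procedure)(void),
                            address_space_id_t address_space);
void        EXIT_THREAD(void);              /* self-destroy and clean up  */
void        DESTROY_THREAD(thread_id_t t);  /* terminate another thread   */
void        YIELD(void);                    /* release the processor      */

/* Communication channel */
int  ALLOCATE_BOUNDED_BUFFER(int size);
void SEND(int buffer, const void *message, int bytes);
void RECEIVE(int buffer, void *message, int bytes);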

15 Thread states and state transitions

16 The state of a thread and its associated virtual address space

17 Switching the processor from one thread to another
Thread creation:
thread_id = ALLOCATE_THREAD(starting_address_of_procedure, address_space_id);
YIELD → function implemented by the kernel to allow a thread to wait for an event.
It allows several threads running on the same processor to wait for a lock; it replaces the busy wait.
Actions:
Save the state of the current thread.
Schedule another thread.
Start running the new thread – dispatch the processor to the new thread.
More about YIELD:
it cannot be implemented in a high-level language; it must be implemented in machine language.
it can be called from the environment of the thread, e.g., C, C++, Java.
A sketch of a lock acquisition that yields instead of busy-waiting is given below.
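As an illustration of replacing the busy wait, a lock acquire can yield the processor while the lock is held (a minimal sketch assuming the atomic lock type from the earlier spin-lock sketch and the kernel's YIELD call; not the course text's code):

#include <stdatomic.h>
#include <stdbool.h>

#define LOCKED true

extern void YIELD(void);   /* kernel primitive described above (assumed) */

/* Acquire a lock without busy-waiting: while the lock is held, give the
   processor to another RUNNABLE thread instead of spinning. */
void acquire_with_yield(atomic_bool *lock) {
    while (atomic_exchange(lock, LOCKED) == LOCKED)
        YIELD();
}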

18 YIELD
System call → executed by the kernel at the request of an application; it allows an active thread A to voluntarily release control of the processor.
YIELD:
invokes the ENTER_PROCESSOR_LAYER procedure
locks the thread table and unlocks it when it finishes its work
changes the state of thread A from RUNNING to RUNNABLE
invokes the SCHEDULER
the SCHEDULER searches the thread table to find another thread B in the RUNNABLE state
the state of thread B is changed from RUNNABLE to RUNNING
the registers of the processor are loaded with the ones saved on the stack for thread B
thread B becomes active
Why is it necessary to lock the thread table?
We may have multiple cores/processors, so another thread may be active.
An interrupt may occur.
The pseudo code assumes that we have a fixed number of threads, 7.
The flow of control: YIELD → ENTER_PROCESSOR_LAYER → SCHEDULER → EXIT_PROCESSOR_LAYER → YIELD
A C-like sketch of this flow is given below.
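A C-like sketch of the flow above, reusing the table layouts and the ACQUIRE/RELEASE sketch from earlier slides; lock handling and error checking are simplified, and this is not the on-line text's pseudo code:

#include <stdatomic.h>

extern thread_table_entry    thread_table[7];   /* fixed table of 7 threads  */
extern processor_table_entry processor_table[];
extern atomic_bool           thread_table_lock;
extern void ACQUIRE(atomic_bool *lock);
extern void RELEASE(atomic_bool *lock);
extern int  CPUID(void);                 /* id of the executing processor (assumed) */
extern void ENTER_PROCESSOR_LAYER(int processor);
extern void EXIT_PROCESSOR_LAYER(int processor, int next);

void YIELD(void) {
    ACQUIRE(&thread_table_lock);      /* serialize access to the thread table    */
    ENTER_PROCESSOR_LAYER(CPUID());   /* A: RUNNING -> RUNNABLE; switch to the
                                         processor thread, which runs SCHEDULER  */
    RELEASE(&thread_table_lock);      /* reached only once A is dispatched again */
}

void SCHEDULER(int processor) {
    for (;;) {                                    /* processor thread's loop     */
        for (int t = 0; t < 7; t++) {
            if (thread_table[t].state == RUNNABLE) {
                thread_table[t].state = RUNNING;  /* B: RUNNABLE -> RUNNING      */
                processor_table[processor].running_thread = t;
                EXIT_PROCESSOR_LAYER(processor, t); /* load B's saved registers  */
            }
        }
    }
}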

19 [figure]

20 Dynamic thread creation and termination
Until now we assumed a fixed number of threads, 7; the thread table was of fixed size. We have to support two other system calls:
EXIT_THREAD → allows a thread to self-destroy and clean up.
DESTROY_THREAD → allows a thread to terminate another thread of the same application.
A minimal sketch of the two calls is given below.
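A minimal sketch of how the two calls might update the thread table, assuming the FREE state for unused entries and the helpers sketched earlier (illustrative only; locking, clean-up, and error checking omitted):

/* Let the calling thread self-destroy: free its table entry and hand the
   processor to another thread; this call never returns. */
void EXIT_THREAD(void) {
    int me = processor_table[CPUID()].running_thread;
    thread_table[me].state = FREE;    /* entry becomes reusable by ALLOCATE_THREAD */
    ENTER_PROCESSOR_LAYER(CPUID());   /* pick another thread to run                */
}

/* Terminate another thread of the same application. */
void DESTROY_THREAD(int victim_id) {
    thread_table[victim_id].state = FREE;
}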

21 Important facts to remember
Each thread has a unique ThreadId.
Threads save their state on the stack. The stack pointer of a thread is stored in the thread table.
To activate a thread, the registers of the processor are loaded with information from the thread state.
What if no thread is able to run? Create a dummy thread for each processor, called a processor_thread, which is scheduled to run when no other thread is available:
the processor_thread runs in the thread layer
the SCHEDULER runs in the processor layer
We have a processor thread for each processor/core.

22 System start-up procedure
procedure RUN_PROCESSORS()
    for each processor do
        allocate stack and set up processor_thread   /* allocation of the stack is done at the processor layer */
        shutdown ← FALSE
        SCHEDULER()
        deallocate processor_thread stack            /* deallocation of the stack is done at the processor layer */
        halt processor

23 Switching threads with dynamic thread creation
Switching from one user thread to another requires two steps:
Switch from the thread releasing the processor to the processor thread.
Switch from the processor thread to the new thread which is going to have control of the processor.
The last step requires the SCHEDULER to cycle through the thread_table until a thread ready to run is found.
The boundary between the user-layer threads and the processor-layer thread is crossed twice.
Example: switch from thread 0 to thread 6 using
YIELD
ENTER_PROCESSOR_LAYER
EXIT_PROCESSOR_LAYER

24 [figure]

25 The control flow when switching from one thread to another
The control flow is not obvious because some of the procedures reload the stack pointer (SP).
When a procedure reloads the stack pointer, the place where it transfers control when it executes a return is the procedure whose SP was saved on the stack and reloaded before the execution of the return.
ENTER_PROCESSOR_LAYER:
changes the state of the thread calling YIELD from RUNNING to RUNNABLE
saves the state of the procedure calling it, YIELD, on the stack
loads the processor's registers with the state of the processor thread, thus starting the SCHEDULER
EXIT_PROCESSOR_LAYER:
saves the state of the processor thread into the corresponding entry of the PROCESSOR_TABLE and loads the state of the thread selected by the SCHEDULER to run (in our example, thread 6) into the processor's registers
loads the SP with the value saved by ENTER_PROCESSOR_LAYER
A sketch of the two procedures, focused on the stack-pointer switch, is given below.
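A sketch of the two boundary-crossing procedures, focused on where the stack pointer is saved and reloaded; save_state() and load_state() are illustrative stand-ins for the machine-language register save/restore, and the field names come from the table sketches above:

extern void *save_state(void);      /* save registers on the current stack,
                                       return the resulting stack pointer   */
extern void  load_state(void *sp);  /* reload SP and registers; execution
                                       continues in the code whose state
                                       was saved at sp                      */

void ENTER_PROCESSOR_LAYER(int processor) {
    int me = processor_table[processor].running_thread;
    thread_table[me].state = RUNNABLE;       /* RUNNING -> RUNNABLE          */
    thread_table[me].sp = save_state();      /* YIELD's state goes on the
                                                calling thread's stack       */
    load_state(processor_table[processor].topstack); /* SP <- topstack (cf.
                                                the correction on slide 28);
                                                the processor thread, i.e.
                                                the SCHEDULER, now runs      */
}

void EXIT_PROCESSOR_LAYER(int processor, int next) {
    processor_table[processor].topstack = save_state(); /* save the processor
                                                thread's state               */
    load_state(thread_table[next].sp);       /* reload the SP saved by
                                                ENTER_PROCESSOR_LAYER; the
                                                next return resumes thread
                                                'next' inside its own YIELD  */
}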

26 [figure]

27 [figure]

28 In ENTER_PROCESSOR_LAYER, instead of SCHEDULER() the statement should be SP ← processor_table[processor].topstack.

29 Cache coherence
The cache can be thought of as a replication system whose goal is to improve performance.
An invariant → the replica of any item in the cache (primary memory) should also exist in the main memory (secondary memory).
Consistency between primary and secondary memory is non-trivial due to the discrepancy in access times.
Two types of consistency rules: strict and eventual.
A write-through cache provides strict consistency: a write is not complete until both the cache and the main memory are updated.
A non-write-through cache: a write is acknowledged as done when the cache is updated, and the thread can continue its work while the cache manager eventually updates the main memory.
A toy sketch contrasting the two policies is given below.
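A toy sketch contrasting the two write policies with a single cache line (everything here, including the names, is invented for illustration):

/* Toy single-line cache used only to contrast write-through and
   non-write-through behavior. */
typedef struct {
    int addr;
    int value;
    int dirty;     /* meaningful only for the non-write-through policy */
} cache_line;

static cache_line line;
static int main_memory[1024];

/* Write-through (strict consistency): the write completes only after both
   the cache and main memory are updated. */
void write_through(int addr, int value) {
    line.addr  = addr;
    line.value = value;
    main_memory[addr] = value;       /* update main memory before returning */
}

/* Non-write-through (eventual consistency): acknowledge the write as soon
   as the cache is updated; main memory is brought up to date later. */
void write_back(int addr, int value) {
    line.addr  = addr;
    line.value = value;
    line.dirty = 1;                  /* the cache manager will flush this later */
}

/* Invoked later by the cache manager to restore the invariant. */
void cache_manager_flush(void) {
    if (line.dirty) {
        main_memory[line.addr] = line.value;
        line.dirty = 0;
    }
}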

