COP 5611 Operating Systems Spring 2010

COP 5611 Operating Systems Spring 2010 Dan C. Marinescu Office: HEC 439 B Office hours: M-Wd 2:00-3:00 PM

Lecture 5
Last time: hard/soft modularity; client/server organization.
Today:
  Virtualization
  Threads
  Virtual memory
  Bounded buffers
  Race conditions, locks, semaphores
  Thread coordination with a bounded buffer
Next time: processor sharing

Virtualization of the three abstractions
Why virtualization:
  Enforce modularity
  Provide uniform access to resources
Virtualization → relating physical objects to virtual objects. Done by the operating system for the three abstractions:
  Interpreters → threads
  Storage → virtual memory
  Communication links → bounded buffers

Virtualization methods
Virtualization → simulating the interface to a physical object by:
  Multiplexing → creating multiple virtual objects from one instance of a physical object.
  Aggregation → creating one virtual object from multiple physical objects.
  Emulation → constructing a virtual object from a different type of physical object. Emulation in software is slow.

  Method                     Physical resource            Virtual resource
  Multiplexing               processor                    thread
                             real memory                  virtual memory
                             communication channel        virtual circuit
                             server                       (e.g., Web server)
  Aggregation                disk                         RAID
                             core                         multi-core processor
  Emulation                  RAM                          disk (RAM disk)
                             system (e.g., Macintosh)     virtual machine (e.g., Virtual PC)
  Multiplexing + emulation   real memory + disk           virtual memory with paging
                             communication channel        TCP protocol

Threads
Thread → a virtual processor; a module in execution. A thread multiplexes a physical processor.
The state of a thread: (1) the reference to the next computational step (the PC register) + (2) the environment (registers, stack, heap, current objects).
Sequence of operations:
  Load the module's text.
  Create a thread and launch the execution of the module in that thread.
A module may have several threads.
The thread manager implements the thread abstraction.
  Interrupts → processed by the interrupt handler, which interacts with the thread manager.
  Exceptions → interrupts caused by the running thread, processed by exception handlers.
  Interrupt handlers run in the context of the OS, while exception handlers run in the context of the interrupted thread.
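
As a concrete illustration of "create a thread and launch the execution of the module in that thread", a minimal sketch using POSIX threads (the deck's thread manager is abstract; pthreads merely plays its role here):

  #include <pthread.h>
  #include <stdio.h>

  /* the module's entry point: the procedure the new thread executes */
  void *module_main(void *arg) {
      printf("module running in its own thread\n");
      return NULL;
  }

  int main(void) {
      pthread_t t;
      pthread_create(&t, NULL, module_main, NULL);  /* create + launch */
      pthread_join(t, NULL);                        /* wait for the module */
      return 0;
  }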

Virtual memory
Virtual memory → a scheme that allows each thread to access only its own virtual address space (collection of virtual addresses).
Why it is needed:
  To implement a memory enforcement mechanism: prevent a thread running the code of one module from overwriting the data of another module.
  The physical memory may be too small to fit an application; without virtual memory, each application would need to manage its own memory.
The virtual memory manager maps the virtual address space of a thread to physical memory.
Threads + virtual memory allow us to create a virtual computer for each module. Each module runs in its own address space; if a module runs multiple threads, they all share one address space.

Bounded buffers
Bounded buffers → implement the communication channel abstraction.
Bounded → the buffer has a finite size; a bounded buffer accommodates at most N messages.
We assume that all messages are of the same size and that each fits into one buffer cell.
Threads use the SEND and RECEIVE primitives.

Principle of least astonishment
Study and understand simple phenomena or facts before moving to complex ones. For example: concurrency → an application requires multiple threads that run at the same time. Tricky; understand sequential processing first.
Examine a simple operating system interface to the three abstractions:
  Memory: CREATE/DELETE_ADDRESS_SPACE, ALLOCATE/FREE_BLOCK, MAP/UNMAP
  Interpreter: ALLOCATE_THREAD, DESTROY_THREAD, EXIT_THREAD, YIELD, AWAIT, ADVANCE, TICKET, ACQUIRE, RELEASE
  Communication channel: ALLOCATE/DEALLOCATE_BOUNDED_BUFFER, SEND/RECEIVE

Processor sharing
Possible because threads spend a significant percentage of their lifetime waiting for external events. Also called: time-sharing, processor multiplexing, multiprogramming, multitasking.
The kernel must support a number of functions:
  Creation and destruction of threads
  Allocation of the processor to a ready-to-run thread
  Handling of interrupts
  Scheduling → deciding which of the ready-to-run threads should be allocated the processor
The OS must maintain a thread table with information about each thread:
  threadId
  stackPointer
  PC – program counter
  PMAP – pointer to the page table of the thread (see the virtual memory discussion)
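
A hypothetical C rendering of one thread-table entry with exactly the fields named above (the field types are assumptions, not from any particular kernel):

  typedef struct thread_table_entry {
      int   threadId;        /* unique thread identifier                */
      void *stackPointer;    /* saved stack pointer while not running   */
      void *PC;              /* saved program counter: next instruction */
      void *PMAP;            /* pointer to the thread's page table      */
  } thread_table_entry;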

Thread states and state transitions

The state of a thread and its associated virtual address space

Problems related to resource sharing
A physical resource, in this case the processor, is multiplexed: shared among a number of virtual processors (threads). This poses a number of problems:
  Uncertainty of state → race conditions.
  Possibility of deadlocks → deadlock: the inability of individual threads to complete their tasks because some resources they need are held by other threads.
  Thread coordination → effective and safe communication among threads.
  A scheduling discipline → deciding when each thread is allowed to use the resource.
Similar problems are posed by multiplexing other resources.

Race conditions
Race condition → an error that occurs when a device or system attempts to perform two or more operations at the same time but, because of the nature of the device or system, the operations must be done in the proper sequence to be done correctly.
Race conditions depend on the exact timing of events and thus are not reproducible: a slight variation of the timing could either remove a race condition or create one. Such errors are very hard to debug.
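
The canonical small example, as a sketch in C with POSIX threads: two threads increment a shared counter without coordination. counter++ compiles to a load, an add, and a store, so increments are lost depending on the interleaving, and the printed total varies from run to run:

  #include <pthread.h>
  #include <stdio.h>

  long counter = 0;                    /* shared, unprotected */

  void *work(void *arg) {
      for (int i = 0; i < 1000000; i++)
          counter++;                   /* read-modify-write: not atomic */
      return NULL;
  }

  int main(void) {
      pthread_t t1, t2;
      pthread_create(&t1, NULL, work, NULL);
      pthread_create(&t2, NULL, work, NULL);
      pthread_join(t1, NULL);
      pthread_join(t2, NULL);
      printf("counter = %ld (expected 2000000)\n", counter);
      return 0;
  }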

Lock
A lock is a mechanism to protect a resource; in our case, to guarantee that a program works correctly when multiple threads execute concurrently:
  A multi-step operation protected by a lock behaves like a single operation.
  Locks can be used to implement before-or-after atomicity.
  A lock is a shared variable acting as a flag (a traffic light) to coordinate access to a shared variable.
  It works only if all threads follow the rule → check the lock before accessing a shared variable.
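
Protecting the same multi-step operation with a lock (a POSIX mutex standing in for the abstract flag; a drop-in replacement for work() in the previous sketch) makes the increment behave like a single operation, provided every thread follows the rule:

  #include <pthread.h>

  extern long counter;
  pthread_mutex_t counter_lock = PTHREAD_MUTEX_INITIALIZER;

  void *work(void *arg) {
      for (int i = 0; i < 1000000; i++) {
          pthread_mutex_lock(&counter_lock);    /* check/acquire the flag  */
          counter++;                            /* protected multi-step op */
          pthread_mutex_unlock(&counter_lock);  /* release                 */
      }
      return NULL;
  }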

Deadlocks
Deadlocks happen quite often in real life, and the proposed solutions are not always logical: "When two trains approach each other at a crossing, both shall come to a full stop and neither shall start up again until the other has gone." – a pearl from Kansas legislation.
Other examples: a deadlocked jury; a deadlocked legislative body.

Deadlocks
Deadlocks → prevent sets of concurrent threads/processes from completing their tasks.
How does a deadlock occur → a set of blocked processes, each holding a resource and waiting to acquire a resource held by another process in the set.
Example: semaphores A and B, both initialized to 1:

  P0          P1
  wait(A);    wait(B);
  wait(B);    wait(A);

Aim → prevent or avoid deadlocks.
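
The P0/P1 example above, sketched with POSIX semaphores. If P0 acquires A and P1 acquires B before either reaches its second wait, each blocks forever waiting for the other. Making both threads acquire A before B would break the circular wait:

  #include <semaphore.h>
  #include <pthread.h>

  sem_t A, B;

  void *p0(void *arg) {
      sem_wait(&A);            /* holds A ...                        */
      sem_wait(&B);            /* ... and waits for B                */
      /* ... use both resources ... */
      sem_post(&B);
      sem_post(&A);
      return NULL;
  }

  void *p1(void *arg) {
      sem_wait(&B);            /* holds B ...                        */
      sem_wait(&A);            /* ... and waits for A: deadlock risk */
      /* ... use both resources ... */
      sem_post(&A);
      sem_post(&B);
      return NULL;
  }

  int main(void) {
      pthread_t t0, t1;
      sem_init(&A, 0, 1);      /* both initialized to 1 */
      sem_init(&B, 0, 1);
      pthread_create(&t0, NULL, p0, NULL);
      pthread_create(&t1, NULL, p1, NULL);
      pthread_join(t0, NULL);  /* may never return */
      pthread_join(t1, NULL);
      return 0;
  }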

Example of a deadlock
Traffic flows in only one direction. Solution → one car backs up (preempt resources and roll back). Several cars may have to back up. Starvation is possible.

System model
Resource types R1, R2, …, Rm (CPU cycles, memory space, I/O devices). Each resource type Ri has Wi instances.
Resource access model: request, use, release.

Simultaneous conditions for deadlock
Mutual exclusion: only one thread at a time can use a resource.
Hold and wait: a thread holding at least one resource is waiting to acquire additional resources held by other threads.
No preemption: a resource can be released only voluntarily by the thread holding it (presumably after that thread has finished).
Circular wait: there exists a set {P0, P1, …, Pn} of waiting threads such that P0 is waiting for a resource held by P1, P1 is waiting for a resource held by P2, …, Pn–1 is waiting for a resource held by Pn, and Pn is waiting for a resource held by P0.

Semaphores
Introduced by Dijkstra in 1965.
Semaphore S – an integer variable. Two standard operations modify S: wait() and signal(), originally called P() and V().
Less complicated than the hardware-based solutions, and a blocking implementation (shown later) does not require busy waiting.
S can only be accessed via two indivisible (atomic) operations:

  wait(S) {
      while (S <= 0)
          ;   // no-op
      S--;
  }

  signal(S) {
      S++;
  }
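
For comparison, the POSIX semaphore API, where sem_wait plays the role of P()/wait() and sem_post the role of V()/signal() (a usage sketch, not the definition above):

  #include <semaphore.h>

  sem_t S;

  void example(void) {
      sem_init(&S, 0, 1);   /* S = 1 */
      sem_wait(&S);         /* wait(S): blocks until S > 0, then decrements    */
      /* ... protected work ... */
      sem_post(&S);         /* signal(S): increments, possibly waking a waiter */
      sem_destroy(&S);
  }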

Semaphore as a general synchronization tool
Counting semaphore – the integer value can range over an unrestricted domain.
Binary semaphore (mutex lock) – the integer value can range only between 0 and 1; simpler to implement.
A counting semaphore S can be implemented with binary semaphores (see the sketch below).
A semaphore provides mutual exclusion:

  Semaphore S;   // initialized to 1
  wait(S);
  // critical section
  signal(S);
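
One known construction of a counting semaphore from binary semaphores (Barz's algorithm), as a sketch; bsem_t, bsem_wait(), and bsem_signal() are assumed binary-semaphore primitives, not POSIX calls. The gate is open (1) exactly when count > 0, and mutex protects count:

  typedef struct {
      int    count;   /* available units; initially K       */
      bsem_t mutex;   /* binary semaphore, initialized to 1 */
      bsem_t gate;    /* binary semaphore, 1 iff count > 0  */
  } counting_sem;

  void counting_wait(counting_sem *s) {
      bsem_wait(&s->gate);         /* pass the gate: some unit is free       */
      bsem_wait(&s->mutex);
      s->count--;
      if (s->count > 0)
          bsem_signal(&s->gate);   /* keep the gate open for the next thread */
      bsem_signal(&s->mutex);
  }

  void counting_signal(counting_sem *s) {
      bsem_wait(&s->mutex);
      s->count++;
      if (s->count == 1)
          bsem_signal(&s->gate);   /* count went 0 -> 1: reopen the gate */
      bsem_signal(&s->mutex);
  }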

Semaphore implementation
Must guarantee that no two threads can execute wait() and signal() on the same semaphore at the same time.
The implementation thus becomes a critical-section problem in which the wait and signal code are placed in the critical section. This could reintroduce busy waiting in the critical-section implementation, but the implementation code is short and there is little busy waiting if the critical section is rarely occupied.
Applications, however, may spend a lot of time in critical sections, so this is not a good solution for them.

Semaphore implementation with no busy waiting
With each semaphore there is an associated waiting queue. Each entry in a waiting queue has two data items:
  value (of type integer)
  a pointer to the next record in the list
The two operations on semaphore S, wait(S) and signal(S), are implemented using:
  block → place the thread invoking the operation on the appropriate waiting queue.
  wakeup → remove one of the threads in the waiting queue and place it in the ready queue.

Semaphore implementation with no busy waiting
Implementation of wait:

  wait(S) {
      value--;
      if (value < 0) {
          // add this thread to the waiting queue
          block();
      }
  }

Implementation of signal:

  signal(S) {
      value++;
      if (value <= 0) {
          // remove a thread P from the waiting queue
          wakeup(P);
      }
  }
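
A runnable rendering of the same idea in POSIX C, with a condition variable playing the role of the waiting queue and pthread_cond_wait/pthread_cond_signal standing in for block() and wakeup(). Unlike the slide's version, value never goes negative here; the blocked threads are simply the ones sleeping on the condition variable:

  #include <pthread.h>

  typedef struct {
      int             value;
      pthread_mutex_t m;    /* initialize with PTHREAD_MUTEX_INITIALIZER */
      pthread_cond_t  cv;   /* initialize with PTHREAD_COND_INITIALIZER  */
  } blocking_sem;

  void blocking_wait(blocking_sem *s) {
      pthread_mutex_lock(&s->m);
      while (s->value <= 0)                   /* no busy waiting: the thread */
          pthread_cond_wait(&s->cv, &s->m);   /* sleeps until signaled       */
      s->value--;
      pthread_mutex_unlock(&s->m);
  }

  void blocking_signal(blocking_sem *s) {
      pthread_mutex_lock(&s->m);
      s->value++;
      pthread_cond_signal(&s->cv);            /* wakeup: one waiter becomes ready */
      pthread_mutex_unlock(&s->m);
  }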

Thread coordination with a bounded buffer
Producer-consumer problem → coordinate the sending and receiving threads.
Basic assumptions:
  We have only two threads.
  The threads proceed concurrently, at independent speeds/rates.
  Bounded buffer – only N buffer cells.
  Messages are of fixed size and occupy only one buffer cell.
Spin lock → a thread keeps checking a control variable/semaphore "until the light turns green".
The "naive" solution (sketched below).
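
The figure with the naive solution is not in the transcript; this is a sketch of the usual single-sender/single-receiver version (after Saltzer and Kaashoek), relying on the assumptions listed on the next slide. in counts messages ever sent, out counts messages ever received, and each side spins:

  #define N 16                       /* buffer capacity                  */
  typedef long message;              /* fixed size, one cell per message */

  struct bounded_buffer {
      message buf[N];
      volatile long in;              /* written only by the sender   */
      volatile long out;             /* written only by the receiver */
  };

  void SEND(struct bounded_buffer *bb, message m) {
      while (bb->in - bb->out == N)  /* spin while the buffer is full */
          ;
      bb->buf[bb->in % N] = m;
      bb->in++;
  }

  message RECEIVE(struct bounded_buffer *bb) {
      while (bb->in == bb->out)      /* spin while the buffer is empty */
          ;
      message m = bb->buf[bb->out % N];
      bb->out++;
      return m;
  }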

Implicit assumptions for the correctness of the naive solution
  One sending and one receiving thread.
  Only one thread updates each shared variable.
  The sender and receiver threads run on different processors, to allow spin locks.
  in and out are implemented as integers large enough that they do not overflow (e.g., 64-bit integers).
  The shared memory used for the buffer provides read/write coherence.
  The memory provides before-or-after atomicity for the shared variables in and out.
  The result of executing a statement becomes visible to all threads in program order.
  No compiler optimizations that reorder or eliminate memory operations.

A slightly less naive approach
What about multiple threads updating a shared variable? Could a lock help?
Use a buffer lock, with two primitives, ACQUIRE and RELEASE, to acquire and release it (see the sketch below).
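
A sketch of SEND with a buffer lock, using a POSIX mutex to stand in for the abstract ACQUIRE/RELEASE primitives. Note that the thread must not spin while holding the lock, or a receiver could never empty the buffer:

  #include <pthread.h>

  #define N 16
  typedef long message;

  struct bounded_buffer {
      message buf[N];
      long in, out;
      pthread_mutex_t buffer_lock;             /* guards in, out, and buf */
  };

  void SEND(struct bounded_buffer *bb, message m) {
      pthread_mutex_lock(&bb->buffer_lock);    /* ACQUIRE                 */
      while (bb->in - bb->out == N) {          /* full: release and retry */
          pthread_mutex_unlock(&bb->buffer_lock);
          pthread_mutex_lock(&bb->buffer_lock);
      }
      bb->buf[bb->in % N] = m;
      bb->in++;
      pthread_mutex_unlock(&bb->buffer_lock);  /* RELEASE                 */
  }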

Lecture 6

What about the busy waiting?
This solution does not allow the two threads to share the same processor, because it involves busy waiting. A thread should be able to YIELD control of the processor (see the sketch below).
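
A minimal sketch of the idea, with sched_yield() as the POSIX analogue of the kernel YIELD described on the next slide: instead of burning the processor, the waiting thread gives it up each time its condition is not yet true:

  #include <sched.h>

  volatile int ready = 0;       /* set to 1 by another thread */

  void wait_until_ready(void) {
      while (!ready)
          sched_yield();        /* YIELD: let the scheduler run another thread */
  }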

Switching the processor from one thread to another
Thread creation:
  thread_id ALLOCATE_THREAD(starting_address_of_procedure, address_space_id);
YIELD → a function implemented by the kernel to allow a thread to wait for an event. It must:
  Save the state of the current thread.
  Schedule another thread.
  Start running the new thread – dispatch the processor to the new thread.
YIELD cannot be implemented in a high-level language and must be implemented in machine language, but it:
  can be called from the environment of the thread, e.g., from C, C++, or Java;
  allows several threads running on the same processor to wait for a lock. It replaces the busy wait we have used before.

Implementation of YIELD

One more pitfall of the previous implementation of the bounded buffer
If in and out are long integers (64 or 128 bits), then a load requires two registers, e.g., R1 and R2:

  int "00000000FFFFFFFF"
  L R1, int     /* R1 ← 00000000 */
  L R2, int+1   /* R2 ← FFFFFFFF */

Race conditions could affect a load or a store of the long integer.
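
A C sketch of the torn read: a 64-bit variable is loaded as two 32-bit halves, so a concurrent store between the two loads yields a value that is neither the old nor the new one (which half is low or high depends on endianness; the names here are illustrative):

  #include <stdint.h>

  volatile uint64_t in = 0x00000000FFFFFFFFULL;   /* shared counter */

  uint64_t torn_load(void) {
      const volatile uint32_t *half = (const volatile uint32_t *)&in;
      uint32_t first  = half[0];   /* L R1, in   -- first half  */
      /* another thread may store a new value into `in` right here */
      uint32_t second = half[1];   /* L R2, in+1 -- second half */
      return ((uint64_t)second << 32) | first;    /* possibly torn */
  }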