COP 5611 Operating Systems Spring 2010

1 COP 5611 Operating Systems Spring 2010
Dan C. Marinescu Office: HEC 439 B Office hours: M-Wd 2:00-3:00 PM

2 Lecture 5
Last time: Hard/soft modularity. Client/server organization.
Today:
- Virtualization
- Threads
- Virtual memory
- Bounded buffers
- Race conditions, locks, semaphores
- Thread coordination with a bounded buffer
Next time: Processor sharing

3 Virtualization of the three abstractions
Why virtualization:
- Enforce modularity
- Provide uniform access to resources
Virtualization  relating physical with virtual objects. Done by the operating system for the three abstractions:
- Interpreters  threads
- Storage  virtual memory
- Communication links  bounded buffers

4 Virtualization methods
Virtualization  simulating the interface to a physical object by:
- Multiplexing  create multiple virtual objects from one instance of a physical object.
- Aggregation  create one virtual object from multiple physical objects.
- Emulation  construct a virtual object from a different type of physical object. Emulation in software is slow.

Method                    Physical resource             Virtual resource
Multiplexing              processor                     thread
                          real memory                   virtual memory
                          communication channel         virtual circuit
                          server                        (e.g., Web server)
Aggregation               disk                          RAID
                          core                          multi-core processor
Emulation                 disk                          RAM disk
                          system (e.g., Macintosh)      virtual machine (e.g., Virtual PC)
Multiplexing + emulation  real memory + disk            virtual memory with paging
                          communication channel         TCP protocol

5 Threads
Thread  a virtual processor; a module in execution. Multiplexes a physical processor.
The state of a thread: (1) the reference to the next computational step (the PC register) + (2) the environment (registers, stack, heap, current objects).
Sequence of operations:
- Load the module's text.
- Create a thread and launch the execution of the module in that thread.
A module may have several threads.
The thread manager implements the thread abstraction.
- Interrupts  processed by the interrupt handler, which interacts with the thread manager.
- Exceptions  interrupts caused by the running thread, processed by exception handlers.
- Interrupt handlers run in the context of the OS, while exception handlers run in the context of the interrupted thread.

6 Virtual memory
Virtual memory  a scheme to allow each thread to access only its own virtual address space (collection of virtual addresses).
Why needed:
- To implement a memory enforcement mechanism: prevent a thread running the code of one module from overwriting the data of another module.
- The physical memory may be too small to fit an application; without virtual memory, each application would need to manage its own memory.
The virtual memory manager maps the virtual address space of a thread to physical memory.
Threads + virtual memory allow us to create a virtual computer for each module. Each module runs in its own address space; if one module runs multiple threads, all of them share one address space.

7 Bounded buffers
Bounded buffers  implement the communication channel abstraction.
Bounded  the buffer has a finite size; it will only accommodate N messages.
We assume that all messages are of the same size and each fits into one buffer cell.
Threads use the SEND and RECEIVE primitives.
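As a minimal sketch of the abstraction described above, the C fragment below keeps N fixed-size cells plus the counters in (messages sent) and out (messages received); the names bounded_buffer, bb_send, and bb_receive are illustrative, not from the slides, and no thread coordination is attempted yet (that is the topic of the later slides):

```c
#include <string.h>

#define N 4          /* the buffer accommodates at most N messages */
#define MSG_SIZE 16  /* all messages are assumed to have the same size */

typedef struct {
    char cell[N][MSG_SIZE]; /* one cell per message */
    long in;                /* number of messages sent so far     */
    long out;               /* number of messages received so far */
} bounded_buffer;

/* SEND: returns 0 on success, -1 if the buffer is full. */
int bb_send(bounded_buffer *b, const char *msg) {
    if (b->in - b->out == N)
        return -1;                              /* full */
    strncpy(b->cell[b->in % N], msg, MSG_SIZE); /* deposit into next free cell */
    b->in++;
    return 0;
}

/* RECEIVE: returns 0 on success, -1 if the buffer is empty. */
int bb_receive(bounded_buffer *b, char *msg) {
    if (b->in == b->out)
        return -1;                              /* empty */
    memcpy(msg, b->cell[b->out % N], MSG_SIZE); /* remove oldest message */
    b->out++;
    return 0;
}
```

Note that in and out only ever grow; `in - out` is the number of messages currently in the buffer, and `in % N` / `out % N` locate the cells, which is the same bookkeeping the later "naïve solution" slides rely on.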

8 Principle of least astonishment
Study and understand simple phenomena or facts before moving to complex ones. For example: concurrency  an application requires multiple threads that run at the same time. Tricky; understand sequential processing first.
Examine a simple operating system interface to the three abstractions:
- Memory: CREATE/DELETE_ADDRESS_SPACE, ALLOCATE/FREE_BLOCK, MAP/UNMAP
- Interpreter: ALLOCATE_THREAD, DESTROY_THREAD, EXIT_THREAD, YIELD, AWAIT, ADVANCE, TICKET, ACQUIRE, RELEASE
- Communication channel: ALLOCATE/DEALLOCATE_BOUNDED_BUFFER, SEND/RECEIVE

9

10 Processor sharing
Possible because threads spend a significant percentage of their lifetime waiting for external events. Also called:
- Time-sharing
- Processor multiplexing
- Multiprogramming
- Multitasking
The kernel must support a number of functions:
- Creation and destruction of threads
- Allocation of the processor to a ready-to-run thread
- Handling of interrupts
- Scheduling  deciding which of the ready-to-run threads should be allocated the processor
The OS must maintain a thread table with information about each thread:
- threadId
- stackPointer
- PC  program counter
- PMAP  pointer to the page table of the thread (see the virtual memory discussion)
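The thread-table fields listed above can be sketched as a C declaration. The field and type names below (thread_table_entry, thread_state, MAX_THREADS) are illustrative assumptions; only threadId, stackPointer, PC, and PMAP come from the slide, and the state field anticipates the thread states of slide 13:

```c
#include <stdint.h>

typedef enum { RUNNING, RUNNABLE, WAITING } thread_state; /* assumed states */

typedef struct {
    int          thread_id;     /* threadId */
    uintptr_t    stack_pointer; /* saved SP while the thread is not running */
    uintptr_t    pc;            /* saved program counter */
    void        *pmap;          /* PMAP: pointer to the thread's page table */
    thread_state state;
} thread_table_entry;

#define MAX_THREADS 64                       /* illustrative table size */
thread_table_entry thread_table[MAX_THREADS]; /* one entry per thread   */
```

When the kernel switches the processor away from a thread, it saves the SP and PC into that thread's entry; dispatching a thread restores them, which is exactly what YIELD (slide 35) will do.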

11

12

13 Thread states and state transitions

14 The state of a thread and its associated virtual address space

15 Problems related to resource sharing
A physical resource, in this case the processor, is multiplexed  shared among a number of virtual processors (threads). This poses a number of problems:
- Uncertainty of state  race conditions.
- Possibility of deadlocks  deadlock: the inability of individual threads to complete their tasks because some resources they need are held by other threads.
- Thread coordination  effective and safe communication among threads.
- A scheduling discipline  decisions about when each thread is allowed to use the resource.
Similar problems are posed by multiplexing other resources.

16 Race conditions
Race condition  an error that occurs when a device or system attempts to perform two or more operations at the same time but, because of the nature of the device or system, the operations must be done in the proper sequence to be done correctly.
Race conditions depend on the exact timing of events and thus are not reproducible: a slight variation of the timing could either remove a race condition or create one. Such errors are very hard to debug.
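A classic illustration of such a timing-dependent error is two threads incrementing a shared counter: counter++ is a read-modify-write, so increments can be lost when the two threads interleave. The sketch below (names worker, run, NITER are illustrative) runs the same workload with and without a lock; only the locked run is deterministic, which is exactly why race bugs are hard to reproduce:

```c
#include <pthread.h>

#define NITER 1000000
static long counter = 0;                              /* shared variable */
static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static int use_lock = 0;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < NITER; i++) {
        if (use_lock) pthread_mutex_lock(&m);
        counter++;            /* read-modify-write: NOT atomic without the lock */
        if (use_lock) pthread_mutex_unlock(&m);
    }
    return NULL;
}

/* Run two threads; returns the final count. */
long run(int locked) {
    pthread_t t1, t2;
    counter = 0;
    use_lock = locked;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return counter;
}
```

With the lock the result is always 2 * NITER; without it the result varies from run to run and is typically smaller, because interleaved increments overwrite each other.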

17 Lock
A lock is a mechanism to protect a resource; in our case, to guarantee that a program works correctly when multiple threads execute concurrently.
- A multi-step operation protected by a lock behaves like a single operation.
- Can be used to implement before-or-after atomicity.
- A shared variable acting as a flag (traffic light) to coordinate access to a shared variable.
- Works only if all threads follow the rule  check the lock before accessing the shared variable.
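A lock of this kind can be sketched in C with an atomic test-and-set flag. This is one common way to build a spin lock, not the only one; the names acquire, release, and run_two_threads are illustrative:

```c
#include <stdatomic.h>
#include <pthread.h>

static atomic_flag lock_flag = ATOMIC_FLAG_INIT; /* the shared "traffic light" */

/* ACQUIRE: atomically set the flag; spin while it was already set. */
void acquire(void) {
    while (atomic_flag_test_and_set_explicit(&lock_flag, memory_order_acquire))
        ;  /* busy wait until the holder releases */
}

/* RELEASE: clear the flag so another thread may enter. */
void release(void) {
    atomic_flag_clear_explicit(&lock_flag, memory_order_release);
}

static long shared = 0;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        acquire();
        shared++;      /* the multi-step operation now behaves like one step */
        release();
    }
    return NULL;
}

long run_two_threads(void) {
    pthread_t a, b;
    shared = 0;
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return shared;
}
```

The "all threads follow the rule" caveat is visible here: any thread that touches shared without calling acquire() first reintroduces the race.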

18 Deadlocks
Deadlocks happen quite often in real life, and the proposed solutions are not always logical: "When two trains approach each other at a crossing, both shall come to a full stop and neither shall start up again until the other has gone"  a pearl from Kansas legislation.
Other examples: a deadlocked jury; a deadlocked legislative body.

19

20 Deadlocks
Deadlocks  prevent sets of concurrent threads/processes from completing their tasks.
How does a deadlock occur?  A set of blocked processes, each holding a resource and waiting to acquire a resource held by another process in the set.
Example: semaphores A and B, initialized to 1:
P0: wait(A); wait(B);
P1: wait(B); wait(A);
Aim  prevent or avoid deadlocks.

21 Example of a deadlock
Traffic only in one direction.
Solution  one car backs up (preempt resources and roll back). Several cars may have to be backed up. Starvation is possible.

22 System model
Resource types R1, R2, . . ., Rm (CPU cycles, memory space, I/O devices). Each resource type Ri has Wi instances.
Resource access model: request  use  release.

23 Simultaneous conditions for deadlock
- Mutual exclusion: only one thread at a time can use a resource.
- Hold and wait: a thread holding at least one resource is waiting to acquire additional resources held by other threads.
- No preemption: a resource can be released only voluntarily by the thread holding it (presumably after that thread has finished).
- Circular wait: there exists a set {P0, P1, …, Pn} of waiting threads such that P0 is waiting for a resource held by P1, P1 is waiting for a resource held by P2, …, Pn–1 is waiting for a resource held by Pn, and Pn is waiting for a resource held by P0.
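Breaking any one of the four conditions prevents deadlock. One standard way to break circular wait is a global lock-acquisition order: unlike the P0/P1 example of slide 20, where one thread takes A then B and the other B then A, below both threads acquire A before B, so a wait cycle can never form (the names worker and run are illustrative):

```c
#include <pthread.h>

static pthread_mutex_t A = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t B = PTHREAD_MUTEX_INITIALIZER;
static int done = 0;

/* Both threads obey the same global order: A before B.
   A cycle "holds A, waits for B" / "holds B, waits for A" cannot arise. */
static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 10000; i++) {
        pthread_mutex_lock(&A);
        pthread_mutex_lock(&B);
        done++;                     /* both resources held here */
        pthread_mutex_unlock(&B);
        pthread_mutex_unlock(&A);
    }
    return NULL;
}

int run(void) {
    pthread_t t1, t2;
    done = 0;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);  /* with the A-before-B rule, both threads finish */
    pthread_join(t2, NULL);
    return done;
}
```

If one worker were changed to lock B before A, the program could hang forever: each thread would hold one mutex while waiting for the other, satisfying all four conditions at once.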

24 Semaphores
Introduced by Dijkstra in 1965.
- Semaphore S  an integer variable.
- Two standard operations modify S: wait() and signal(), originally called P() and V().
- Less complicated than other coordination mechanisms.
- S can only be accessed via these two indivisible (atomic) operations.
- A blocking implementation (slide 27) does not require busy waiting; the definition below busy-waits:

wait(S) {
    while (S <= 0)
        ;   // no-op: busy wait
    S--;
}

signal(S) {
    S++;
}
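The wait()/signal() pair above corresponds directly to the POSIX semaphore API, which can be sketched as follows (assuming a POSIX system where sem_init is available; the names worker and run are illustrative). sem_wait is P() and sem_post is V():

```c
#include <semaphore.h>
#include <pthread.h>

static sem_t s;        /* counting semaphore */
static long count = 0;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        sem_wait(&s);  /* P(): decrement, blocking while the value is 0 */
        count++;
        sem_post(&s);  /* V(): increment, waking a blocked thread if any */
    }
    return NULL;
}

long run(void) {
    pthread_t t1, t2;
    count = 0;
    sem_init(&s, 0, 1);  /* initial value 1: the semaphore acts as a mutex */
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    sem_destroy(&s);
    return count;
}
```

Initializing the semaphore to 1 gives the binary-semaphore (mutex) behavior of the next slide; initializing it to N would allow up to N threads past sem_wait at once.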

25 Semaphore as a general synchronization tool
- Counting semaphore  the integer value can range over an unrestricted domain.
- Binary semaphore (mutex lock)  the integer value can range only between 0 and 1; simpler to implement.
- A counting semaphore S can be implemented using binary semaphores.
- Provides mutual exclusion:

Semaphore S;   // initialized to 1
wait(S);
    // critical section
signal(S);

26 Semaphore implementation
- Must guarantee that no two threads can execute wait() and signal() on the same semaphore at the same time.
- The implementation thus becomes a critical-section problem, with the wait and signal code placed in the critical section.
- This could introduce busy waiting in the critical-section implementation; but the implementation code is short, so there is little busy waiting if the critical section is rarely occupied.
- Applications, however, may spend a lot of time in critical sections, so this is not a good general solution.

27 Semaphore implementation with no busy waiting
With each semaphore there is an associated waiting queue. A semaphore has two data items:
- value (of type integer)
- pointer to a list of waiting threads
The two operations on semaphore S, wait(S) and signal(S), are implemented using:
- block  place the thread invoking the operation on the appropriate waiting queue.
- wakeup  remove one of the threads in the waiting queue and place it in the ready queue.

28 Semaphore implementation with no busy waiting
Implementation of wait:

wait(S) {
    S.value--;
    if (S.value < 0) {
        // add this thread to S's waiting queue
        block();
    }
}

Implementation of signal:

signal(S) {
    S.value++;
    if (S.value <= 0) {
        // remove a thread P from S's waiting queue
        wakeup(P);
    }
}
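The block()/wakeup() pseudocode above can be approximated in portable C with a mutex and a condition variable standing in for the explicit waiting queue. This is a sketch, not the kernel mechanism the slide describes: here value never goes negative (the condition variable tracks the waiters instead), and the names csem, csem_init, csem_wait, csem_signal are assumptions:

```c
#include <pthread.h>

/* A counting semaphore whose wait() sleeps instead of spinning. */
typedef struct {
    int value;
    pthread_mutex_t m;
    pthread_cond_t  cv;   /* stands in for the explicit waiting queue */
} csem;

void csem_init(csem *s, int v) {
    s->value = v;
    pthread_mutex_init(&s->m, NULL);
    pthread_cond_init(&s->cv, NULL);
}

void csem_wait(csem *s) {
    pthread_mutex_lock(&s->m);
    while (s->value <= 0)
        pthread_cond_wait(&s->cv, &s->m);  /* block(): sleep, no busy wait */
    s->value--;
    pthread_mutex_unlock(&s->m);
}

void csem_signal(csem *s) {
    pthread_mutex_lock(&s->m);
    s->value++;
    pthread_cond_signal(&s->cv);           /* wakeup(): move one waiter to ready */
    pthread_mutex_unlock(&s->m);
}
```

The internal mutex is exactly the short critical section of slide 26: it protects value, and each wait/signal holds it only briefly.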

29 Thread coordination with a bounded buffer
Producer-consumer problem  coordinate the sending and receiving threads.
Basic assumptions:
- We have only two threads.
- Threads proceed concurrently at independent speeds/rates.
- Bounded buffer  only N buffer cells.
- Messages are of fixed size and occupy only one buffer cell.
Spin lock  a thread keeps checking a control variable/semaphore "until the light turns green".
The "naïve solution".

30

31 Implicit assumptions for the correctness of the naïve solution
- One sending and one receiving thread.
- Only one thread updates each shared variable.
- Sender and receiver threads run on different processors, to allow spin locks.
- in and out are implemented as integers large enough that they do not overflow (e.g., 64-bit integers).
- The shared memory used for the buffer provides read/write coherence.
- The memory provides before-or-after atomicity for the shared variables in and out.
- The result of executing a statement becomes visible to all threads in program order.
- No compiler optimizations that would reorder statements.
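Under those assumptions, the naïve SEND/RECEIVE can be sketched in C. Since ordinary C gives no program-order visibility guarantee across threads, this sketch uses C11 atomics on in and out to approximate the slide's memory assumptions; the names naive_send, naive_receive, producer, and run are illustrative, and only the sender writes in while only the receiver writes out:

```c
#include <pthread.h>
#include <stdatomic.h>

#define N 8        /* buffer cells */
#define M 100000   /* messages to transfer */

static atomic_long in = 0, out = 0; /* assumed large enough not to overflow */
static int buffer[N];

/* Naive SEND: spin while the buffer is full, then deposit the message. */
static void naive_send(int msg) {
    while (in - out == N)
        ;                       /* spin lock: buffer full  */
    buffer[in % N] = msg;
    in++;                       /* only the sender updates in   */
}

/* Naive RECEIVE: spin while the buffer is empty, then remove a message. */
static int naive_receive(void) {
    while (in == out)
        ;                       /* spin lock: buffer empty */
    int msg = buffer[out % N];
    out++;                      /* only the receiver updates out */
    return msg;
}

static void *producer(void *arg) {
    (void)arg;
    for (int i = 1; i <= M; i++)
        naive_send(i);
    return NULL;
}

/* The main thread acts as the receiver; returns the sum of all messages. */
long run(void) {
    pthread_t p;
    long sum = 0;
    in = 0; out = 0;
    pthread_create(&p, NULL, producer, NULL);
    for (int i = 0; i < M; i++)
        sum += naive_receive();
    pthread_join(p, NULL);
    return sum;
}
```

This works precisely because each shared variable has a single writer; with two senders, two threads would race on in, which is the problem the next slide raises.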

32 A slightly less naïve approach
What about multiple threads updating a shared variable? Could a lock help?
Use a buffer lock with two primitives, ACQUIRE and RELEASE.

33

34 What about the busy waiting?
This solution does not allow the two threads to share the same processor, because it involves busy waiting. A thread should be able to YIELD control of the processor.

35 Switching the processor from one thread to another
Thread creation:
thread_id ALLOCATE_THREAD(starting_address_of_procedure, address_space_id);
YIELD  function implemented by the kernel to allow a thread to wait for an event:
- Save the state of the current thread.
- Schedule another thread.
- Start running the new thread  dispatch the processor to the new thread.
YIELD cannot be implemented in a high-level language; it must be implemented in machine language, but:
- it can be called from the environment of the thread, e.g., from C, C++, or Java;
- it allows several threads running on the same processor to wait for a lock, replacing the busy wait we have used before.
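The save-state/dispatch mechanics of YIELD can be demonstrated in user space with the POSIX ucontext functions, which expose machine-dependent context saving behind a C API (assuming a system that still provides ucontext.h, e.g. Linux/glibc; the names yield, thread_body, and run are illustrative):

```c
#include <ucontext.h>

static ucontext_t scheduler_ctx, thread_ctx;
static int steps = 0;
static char stack[64 * 1024];  /* the toy thread's private stack */

/* A toy YIELD: save this thread's state, resume the "scheduler". */
static void yield(void) {
    swapcontext(&thread_ctx, &scheduler_ctx);
}

static void thread_body(void) {
    steps = 1;
    yield();       /* give up the processor */
    steps = 2;     /* ...resumed later exactly here */
}

int run(void) {
    getcontext(&thread_ctx);
    thread_ctx.uc_stack.ss_sp = stack;
    thread_ctx.uc_stack.ss_size = sizeof stack;
    thread_ctx.uc_link = &scheduler_ctx;  /* where to go when the body returns */
    makecontext(&thread_ctx, thread_body, 0);

    swapcontext(&scheduler_ctx, &thread_ctx);  /* dispatch the thread        */
    int after_yield = steps;                   /* here: the thread yielded   */
    swapcontext(&scheduler_ctx, &thread_ctx);  /* dispatch it again          */
    return after_yield * 10 + steps;           /* 1 then 2 -> 12             */
}
```

swapcontext does in user space what the kernel's YIELD does with privileged state: it saves the current registers and stack pointer into one context record and restores another, so the "thread" later resumes at the instruction after its yield() call.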

36 Implementation of YIELD

37

38

39 One more pitfall of the previous implementation of the bounded buffer
If in and out are long integers (64 or 128 bits), then on a machine with narrower registers a load requires two instructions and two registers, e.g., R1 and R2:

L R1, int     /* R1 <- high-order bits of int
L R2, int+4   /* R2 <- low-order bits of int

Race conditions could affect a load or a store of the long integer: another thread may update the variable between the two instructions, leaving a mix of old and new halves in the registers.

