COP 4600 Operating Systems Fall 2010

1 COP 4600 Operating Systems Fall 2010
Dan C. Marinescu
Office: HEC 439 B
Office hours: Tu-Th 3:30-4:30 PM

2 Lecture 16 – Thursday October 14, 2010
Last time:
  Presentation of the paper "DNS Complexity" by Vixie.
  Threads, virtual memory, bounded buffers, virtual links.
  Primitives for processor virtualization: threads.
  The state of a thread. Thread manager.
  Interrupts. Interrupt handler.
  Race conditions and locks.
Today:
  Threads.
  Thread and processor layers.
  Processor sharing; scheduling.
  Switching the processor from one thread to another – YIELD.
  Implementation of YIELD and SCHEDULER.
Next time:
  More on threads and processor sharing.
Lecture 16

3 Lecture 16

4 Lecture 16

5 Lecture 16

6 Switching the processor from one thread to another
Thread creation:
  thread_id ← ALLOCATE_THREAD(starting_address_of_procedure, address_space_id);
YIELD → function implemented by the kernel to allow a thread to wait for an event. It must:
  Save the state of the current thread.
  Schedule another thread.
  Start running the new thread – dispatch the processor to the new thread.
YIELD:
  cannot be implemented in a high-level language; it must be implemented in machine language.
  can be called from the environment of the thread, e.g., from C, C++, or Java.
  allows several threads running on the same processor to wait for a lock; it replaces the busy wait we have used before.
(A rough C sketch of YIELD follows below.)
Lecture 16
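To make the three steps concrete, here is a minimal C sketch of a yield routine over a hypothetical thread table. The names thread_table, NTHREADS, and current are illustrative, not the course's kernel code, and the register save/restore that cannot be written in a high-level language is only marked by comments.

    /* Hypothetical sketch of YIELD with a round-robin scheduler.
       The register/stack save and restore marked below cannot be written in
       portable C; it would be a few lines of machine language. */
    #define NTHREADS 7

    enum state { RUNNING, RUNNABLE, WAITING };

    struct thread {
        enum state state;
        void *saved_sp;            /* stack pointer saved when the thread yields */
    };

    struct thread thread_table[NTHREADS];
    int current;                   /* index of the thread now using the processor */

    void yield(void) {
        /* 1. save the state of the current thread */
        thread_table[current].state = RUNNABLE;
        /* ... save registers and stack pointer into thread_table[current] ...
           (machine-language step) */

        /* 2. schedule another thread: round-robin search for a RUNNABLE one */
        int next = current;
        do {
            next = (next + 1) % NTHREADS;
        } while (thread_table[next].state != RUNNABLE);

        /* 3. dispatch the processor to the chosen thread */
        thread_table[next].state = RUNNING;
        current = next;
        /* ... restore registers and stack pointer from thread_table[next] ...
           (machine-language step) */
    }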

7 Thread states and state transitions
Lecture 16

8 The processor and the thread table
Lecture 16
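The figure for this slide is not in the transcript. As an assumed sketch of what such a figure usually shows, the thread table records each thread's state and saved stack pointer, and each processor entry points at the thread it is currently running; all names here are illustrative.

    /* Assumed sketch of the data structures: a thread table holding each
       thread's saved state, and a processor table whose entries record which
       thread each processor is running. */
    enum thread_state { RUNNING, RUNNABLE, WAITING };

    struct thread_entry {
        enum thread_state state;   /* RUNNING, RUNNABLE, or WAITING */
        void *saved_sp;            /* stack pointer saved on a thread switch */
        int  address_space_id;     /* which address space the thread runs in */
    };

    struct processor_entry {
        int thread_id;             /* index into the thread table */
        void *kernel_stack;        /* stack this processor uses in kernel mode */
    };

    struct thread_entry    thread_table[7];
    struct processor_entry processor_table[2];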

9 Lecture 16

10 Lecture 16

11 Lecture 16

12 Lecture 16

13 Deadlocks
Deadlocks happen quite often in real life, and the proposed solutions are not always logical: "When two trains approach each other at a crossing, both shall come to a full stop and neither shall start up again until the other has gone." – a pearl from Kansas legislation.
Other examples: a deadlocked jury; a deadlocked legislative body.
Lecture 16

14 Lecture 16

15 Deadlocks
Deadlocks → prevent sets of concurrent threads/processes from completing their tasks.
How does a deadlock occur → a set of blocked processes, each holding a resource and waiting to acquire a resource held by another process in the set.
Example: semaphores A and B, both initialized to 1 (see the C sketch below):
  P0: wait(A); wait(B);
  P1: wait(B); wait(A);
Aim → prevent or avoid deadlocks.
Lecture 16
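As a hedged illustration (not part of the slides), here is the same two-semaphore example written with POSIX semaphores; if P0 acquires A and P1 acquires B before either executes its second wait, both block forever.

    /* Illustrative only: the slide's two-process deadlock with POSIX semaphores.
       If p0 passes wait(A) and p1 passes wait(B) before either reaches its
       second wait, each blocks on the semaphore the other holds. */
    #include <pthread.h>
    #include <semaphore.h>

    sem_t A, B;                       /* both initialized to 1 in main */

    void *p0(void *arg) {
        sem_wait(&A);                 /* holds A ... */
        sem_wait(&B);                 /* ... and waits for B */
        sem_post(&B);
        sem_post(&A);
        return NULL;
    }

    void *p1(void *arg) {
        sem_wait(&B);                 /* holds B ... */
        sem_wait(&A);                 /* ... and waits for A */
        sem_post(&A);
        sem_post(&B);
        return NULL;
    }

    int main(void) {
        pthread_t t0, t1;
        sem_init(&A, 0, 1);
        sem_init(&B, 0, 1);
        pthread_create(&t0, NULL, p0, NULL);
        pthread_create(&t1, NULL, p1, NULL);
        pthread_join(t0, NULL);       /* may never return if the deadlock occurs */
        pthread_join(t1, NULL);
        return 0;
    }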

16 Example of a deadlock
Traffic only in one direction.
Solution → one car backs up (preempt resources and roll back).
Several cars may have to be backed up.
Starvation is possible.
Lecture 16

17 System model
Resource types R1, R2, …, Rm (CPU cycles, memory space, I/O devices).
Each resource type Ri has Wi instances.
Resource access model (a sketch with a counting semaphore follows below):
  request
  use
  release
Lecture 16
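A small assumed sketch (not from the slide): a resource type Ri with Wi instances can be modeled by a counting semaphore initialized to Wi, so that request maps to sem_wait and release to sem_post.

    /* Sketch: resource type Ri with Wi instances modeled as a counting
       semaphore. request = sem_wait, release = sem_post. Names illustrative. */
    #include <semaphore.h>

    #define WI 3                      /* number of instances of this resource type */

    sem_t Ri;

    void use_resource(void) {
        sem_wait(&Ri);                /* request: blocks if all Wi instances are in use */
        /* use: work with the granted instance */
        sem_post(&Ri);                /* release */
    }

    int main(void) {
        sem_init(&Ri, 0, WI);         /* Wi instances available initially */
        use_resource();
        return 0;
    }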

18 Simultaneous conditions for deadlock
Mutual exclusion: only one process at a time can use a resource.
Hold and wait: a process holding at least one resource is waiting to acquire additional resources held by other processes.
No preemption: a resource can be released only voluntarily by the process holding it (presumably after that process has finished).
Circular wait: there exists a set {P0, P1, …, Pn} of waiting processes such that P0 is waiting for a resource held by P1, P1 is waiting for a resource held by P2, …, Pn–1 is waiting for a resource held by Pn, and Pn is waiting for a resource held by P0. (A lock-ordering sketch that breaks this condition follows below.)
Lecture 16
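As one standard way to break the circular-wait condition (an illustration, not something the slide prescribes): if every process acquires the semaphores of the slide-15 example in the same global order, A before B, no cycle of waiting processes can form.

    /* Illustrative fix for the slide-15 example: impose a global order on the
       semaphores (always A before B). With a fixed acquisition order, a
       circular wait cannot arise, so this deadlock condition never holds. */
    #include <semaphore.h>

    sem_t A, B;                       /* initialize both to 1 with sem_init before use */

    void p0_ordered(void) {
        sem_wait(&A);                 /* every process takes A first ... */
        sem_wait(&B);                 /* ... then B */
        /* critical section */
        sem_post(&B);
        sem_post(&A);
    }

    void p1_ordered(void) {           /* P1 now also acquires A before B */
        sem_wait(&A);
        sem_wait(&B);
        /* critical section */
        sem_post(&B);
        sem_post(&A);
    }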

19 Lecture 16

20 Implicit assumptions for the correctness of the implementation
One sending and one receiving thread.
Only one thread updates each shared variable.
Sender and receiver threads run on different processors, to allow spin locks.
in and out are implemented as integers large enough that they do not overflow (e.g., 64-bit integers).
The shared memory used for the buffer provides read/write coherence.
The memory provides before-or-after atomicity for the shared variables in and out.
The result of executing a statement becomes visible to all threads in program order.
No compiler optimizations are applied.
(A rough C rendering of the implementation these assumptions refer to follows below.)
Lecture 16
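These assumptions refer to the single-producer/single-consumer bounded buffer from the previous lectures. The following is a rough C rendering of that style of implementation (variable and function names are assumed); it shows why each assumption matters: only the sender writes in, only the receiver writes out, and both spin rather than block.

    /* Rough C rendering of the lock-free bounded buffer these assumptions
       refer to (illustrative names). Correctness depends on the assumptions
       above: one sender, one receiver, in/out never overflow, reads/writes
       are coherent and atomic, and no reordering occurs. */
    #define N 8

    typedef struct {
        long message[N];
        volatile long long in;    /* total messages sent; written only by the sender */
        volatile long long out;   /* total messages received; written only by the receiver */
    } buffer;

    void send(buffer *p, long msg) {
        while (p->in - p->out == N)
            ;                     /* spin: buffer full */
        p->message[p->in % N] = msg;
        p->in = p->in + 1;        /* becomes visible to the receiver */
    }

    long receive(buffer *p) {
        while (p->in == p->out)
            ;                     /* spin: buffer empty */
        long msg = p->message[p->out % N];
        p->out = p->out + 1;      /* becomes visible to the sender */
        return msg;
    }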

21 Lecture 16

22 Lecture 16

23 One more pitfall of the previous implementation of the bounded buffer
If in and out are long integers (64 or 128 bits), then a load requires two registers, e.g., R1 and R2:
  int ← "… FFFFFFFF"
  L R1, int    /* R1 ← …
  L R2, int    /* R2 ← FFFFFFFF
Race conditions could affect a load or a store of the long integer (see the illustration below).
Lecture 16
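A hedged C illustration of the pitfall (an assumed example, not the slide's code): on a 32-bit processor a 64-bit counter moves as two 32-bit halves, so a reader running between the writer's two stores can combine halves from different values.

    /* Illustrative torn-read scenario: on a 32-bit processor a 64-bit counter
       is loaded and stored as two 32-bit halves, so a reader that runs between
       the writer's two stores sees an inconsistent value, e.g. the old
       high-order word paired with the new low-order word. */
    #include <stdint.h>

    typedef union {
        uint64_t whole;               /* the 64-bit value, e.g. the 'in' counter */
        uint32_t half[2];             /* how a 32-bit machine actually moves it;
                                         half[0] is the low word on a little-endian
                                         machine (assumed here) */
    } counter;

    volatile counter in;

    /* Writer: on a 32-bit target this increment compiles to two separate
       stores (one per half). */
    void writer(void) {
        in.whole = in.whole + 1;
    }

    /* Reader: two separate loads; if the writer runs in between them, the
       combined result never existed as a value of the counter. */
    uint64_t reader(void) {
        uint32_t lo = in.half[0];
        uint32_t hi = in.half[1];
        return ((uint64_t)hi << 32) | lo;   /* may be a torn value */
    }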

24 Lecture 16

25 Lecture 16

