1 CGS 3763 Operating Systems Concepts, Spring 2013
Dan C. Marinescu
Office: HEC 304
Office hours: M, Wd 11:30 - 12:30 AM

2 Lecture 27 – Friday, March 22, 2013
Last time:
- CPU scheduling
- Process synchronization
Today:
- Answers to student questions
- Process synchronization: semaphores, monitors, thread coordination with a bounded buffer
Next time:
- Process synchronization
Reading assignment: Chapter 6 of the textbook

3 March 11th, Monday:
- How does the CPU decide which type of scheduling to use?
- In what applications would the different CPU scheduling techniques be applicable?
- Can a system utilize any of the algorithms, or is it built with a specific one?
- If Round Robin is the fairest scheduler, why are other types used?
- How do you determine the length of the next CPU burst for one thread?
- Why is the weighting factor used to determine the length of the next CPU burst typically 0.5?
- What is the difference between SRTF and SJF?
- What is the importance of exponential averaging?

4 March 13th, Wednesday:
- Priority inversion: how does a thread acquire a lock? How do locks work?
- How are priorities set or determined by the scheduler?
- How does the computer know whether a process has a higher priority than another process?
- Is an error reported by the computer when starvation happens?
- Can a process age to have absolutely zero priority? If it does, is it ignored, or is it sent back into the waiting state?
- In priority scheduling, would SJF have precedence over RR?

5 March 15th, Friday:
- What methods does the system use to determine which core runs a specific process? Since each core may finish different processes at different times, does it pre-allocate where a process will go?
- What is the average maximum temperature at which a processor can function?
- Does a higher clock rate always indicate that a computer is inefficient? What happens if one boosts a clock rate too much?
- For NUMA, what happens if two processes or threads try to access the same data at the same time?
- What is a system contention scope?
- What are the drawbacks of fair-share scheduling?
- Discuss the purpose of the lightweight process (LWP).
- What is a homogeneous processor?
- What are soft and hard processor affinity?

6 Contention scope
Contention scope - determines which threads compete with one another for the CPU.
User-level threads (many-to-one and many-to-many models):
- Scheduled by the thread library
- Process contention scope (PCS) - the competition is among the threads belonging to the same process
Kernel-level threads:
- Scheduled by the system scheduler
- System contention scope (SCS) - the competition is among all threads in the system
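As a concrete illustration (not part of the slides), POSIX threads let a program request a contention scope when a thread is created: PTHREAD_SCOPE_PROCESS asks for PCS and PTHREAD_SCOPE_SYSTEM for SCS. A minimal sketch, assuming a Linux-like system where only system scope may be supported:

```c
#include <pthread.h>
#include <stdio.h>

static void *worker(void *arg) {
    (void)arg;
    puts("worker running");
    return NULL;
}

int main(void) {
    pthread_attr_t attr;
    pthread_t tid;
    int rc;

    pthread_attr_init(&attr);

    /* Request system contention scope: the thread competes with all
     * threads in the system for the CPU. */
    rc = pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM);
    if (rc != 0)
        fprintf(stderr, "pthread_attr_setscope failed: %d\n", rc);

    pthread_create(&tid, &attr, worker, NULL);
    pthread_join(tid, NULL);
    pthread_attr_destroy(&attr);
    return 0;
}
```

On Linux only PTHREAD_SCOPE_SYSTEM is accepted, so requesting process scope typically fails with ENOTSUP; many-to-many thread libraries (e.g., older Solaris) supported both scopes.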

8 System model
Resource types R1, R2, ..., Rm (CPU cycles, memory space, I/O devices).
Each resource type Ri has Wi instances.
Resource access model: request -> use -> release.
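The request -> use -> release discipline can be sketched with a pthread mutex guarding a single resource instance; the "printer" resource below is a hypothetical example, not from the slides:

```c
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t printer = PTHREAD_MUTEX_INITIALIZER;  /* one instance */
static int pages_printed = 0;

static void *print_job(void *arg) {
    (void)arg;
    pthread_mutex_lock(&printer);      /* request: block until the instance is granted */
    pages_printed++;                   /* use: the critical section                    */
    pthread_mutex_unlock(&printer);    /* release: return the instance                 */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, print_job, NULL);
    pthread_create(&t2, NULL, print_job, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("pages printed: %d\n", pages_printed);
    return 0;
}
```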

9 Wait-for graph - a directed graph (an edge connecting one vertex to another has a direction associated with it). The vertices are the locks and the threads.
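A minimal sketch (an assumption, not from the slides) of how such a graph can be represented and checked for deadlock: an adjacency matrix whose vertices stand for threads and locks, plus a depth-first search that looks for a cycle:

```c
#include <stdbool.h>
#include <stdio.h>

#define N 4                      /* number of vertices in this example */

static bool edge[N][N];          /* edge[u][v] == true: u waits for v  */

static bool dfs(int u, bool visited[], bool on_stack[]) {
    visited[u] = on_stack[u] = true;
    for (int v = 0; v < N; v++) {
        if (!edge[u][v]) continue;
        if (on_stack[v]) return true;                   /* back edge: cycle found */
        if (!visited[v] && dfs(v, visited, on_stack)) return true;
    }
    on_stack[u] = false;
    return false;
}

static bool has_cycle(void) {
    bool visited[N] = { false }, on_stack[N] = { false };
    for (int u = 0; u < N; u++)
        if (!visited[u] && dfs(u, visited, on_stack)) return true;
    return false;
}

int main(void) {
    /* vertices 0..3 stand for T0 -> L0 -> T1 -> L1 -> T0: a circular wait */
    edge[0][1] = edge[1][2] = edge[2][3] = edge[3][0] = true;
    printf("deadlock detected: %s\n", has_cycle() ? "yes" : "no");
    return 0;
}
```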

10 Simultaneous conditions for deadlock
1. Mutual exclusion: only one process at a time can use a resource.
2. Hold and wait: a process holding at least one resource is waiting to acquire additional resources held by other processes.
3. No preemption: a resource can be released only voluntarily by the process holding it (presumably after that process has finished).
4. Circular wait: there exists a set {P0, P1, ..., Pn} of waiting processes such that P0 is waiting for a resource held by P1, P1 is waiting for a resource held by P2, ..., Pn-1 is waiting for a resource held by Pn, and Pn is waiting for a resource held by P0.
The circular wait is reflected by a cycle in the wait-for graph.
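A minimal pthread sketch (not from the slides) in which all four conditions hold at once: each thread holds one mutex and waits for the other, producing a circular wait. Acquiring the two locks in the same global order in both threads would break condition 4:

```c
#include <pthread.h>
#include <unistd.h>

static pthread_mutex_t lock1 = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t lock2 = PTHREAD_MUTEX_INITIALIZER;

static void *thread_a(void *arg) {
    (void)arg;
    pthread_mutex_lock(&lock1);   /* hold lock1 ...                        */
    sleep(1);                     /* ... long enough for B to grab lock2   */
    pthread_mutex_lock(&lock2);   /* ... and wait for lock2                */
    pthread_mutex_unlock(&lock2);
    pthread_mutex_unlock(&lock1);
    return NULL;
}

static void *thread_b(void *arg) {
    (void)arg;
    pthread_mutex_lock(&lock2);   /* hold lock2 ...                        */
    sleep(1);
    pthread_mutex_lock(&lock1);   /* ... and wait for lock1: circular wait */
    pthread_mutex_unlock(&lock1);
    pthread_mutex_unlock(&lock2);
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, thread_a, NULL);
    pthread_create(&b, NULL, thread_b, NULL);
    pthread_join(a, NULL);        /* will almost certainly hang: deadlock  */
    pthread_join(b, NULL);
    return 0;
}
```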

12 Semaphores
Abstract data structure introduced by Dijkstra to reduce the complexity of thread coordination; a semaphore s has two components:
- C - a count giving the status of the contention for the resource guarded by s
- L - a list of threads waiting for the semaphore s
Counting semaphore - used for a resource with multiple copies. It supports two operations:
- V (signal()) - increments the semaphore count C
- P (wait()) - decrements the semaphore count C
Binary semaphore: C is either 0 or 1.

13 P and V counting semaphore operations
The value of the semaphore S is the number of units of the resource that are currently available.
The P operation forces a thread to sleep until a resource protected by the semaphore becomes available, at which time the resource is immediately claimed.
- wait(): decrements the value of the semaphore variable by 1. If the value becomes negative, the process executing wait() is blocked, i.e., added to the semaphore's queue.
The V operation is the inverse: it makes a resource available again after the thread has finished using it.
- signal(): increments the value of the semaphore variable by 1. After the increment, if the pre-increment value was negative (meaning there are threads waiting for the resource), it transfers a blocked thread from the semaphore's waiting queue to the ready queue.
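These operations map directly onto the POSIX semaphore API, where sem_wait() plays the role of P and sem_post() the role of V. A minimal sketch (not from the slides) guarding a hypothetical pool of three identical resource instances:

```c
#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

static sem_t pool;                 /* counts free instances in the pool */

static void *worker(void *arg) {
    long id = (long)arg;
    sem_wait(&pool);               /* P: claim one instance (may block) */
    printf("thread %ld uses an instance\n", id);
    sem_post(&pool);               /* V: return the instance            */
    return NULL;
}

int main(void) {
    pthread_t t[5];
    sem_init(&pool, 0, 3);         /* 3 instances initially available   */
    for (long i = 0; i < 5; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (int i = 0; i < 5; i++)
        pthread_join(t[i], NULL);
    sem_destroy(&pool);
    return 0;
}
```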

14 The wait (P) and signal (V) operations

P(s) (wait):
    if s.C > 0 then s.C--;
    else join s.L;

V(s) (signal):
    if s.L is empty then s.C++;
    else release a process from s.L;
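A minimal C sketch (an assumption, not part of the slides) of this pair of operations, built from a pthread mutex and a condition variable whose wait queue plays the role of s.L. It differs slightly from the pseudocode above: V always increments C and wakes one waiter, which then re-checks the count - a common variation with the same net effect:

```c
/* A sketch only: the structure must be initialized (C = initial count,
 * PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER) before use. */
#include <pthread.h>

typedef struct {
    int             C;   /* count of available resource units        */
    pthread_mutex_t m;   /* protects C                               */
    pthread_cond_t  L;   /* threads blocked in P wait on this queue  */
} semaphore_t;

void sem_P(semaphore_t *s) {            /* wait */
    pthread_mutex_lock(&s->m);
    while (s->C == 0)                   /* nothing available: join s.L */
        pthread_cond_wait(&s->L, &s->m);
    s->C--;                             /* claim one unit              */
    pthread_mutex_unlock(&s->m);
}

void sem_V(semaphore_t *s) {            /* signal */
    pthread_mutex_lock(&s->m);
    s->C++;                             /* return one unit             */
    pthread_cond_signal(&s->L);         /* release one waiting thread  */
    pthread_mutex_unlock(&s->m);
}
```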

