
1 June 6, 2002, Serguei A. Mokhov, mokhov@cs.concordia.ca
Salsa Theory Debrief 2
COMP346 - Operating Systems, Tutorial 6
Edition 1.1, June 19, 2002

2 Topics
– Device Drivers
– Mutual Exclusion
– Some Deadlock
– Processes: fork() and exec() system calls
– Processes vs. Threads, Take II
– Message Passing vs. System Calls

3 Device Drivers
1. Provide a common interface (a layer of abstraction)
2. Execute privileged instructions
3. Are part of the OS (why? :-))
4. Have two halves:
– Upper half: exposes 1. to the rest of the OS
– Lower half: provides the device-specific implementation of 1. and interrupt handling

4 Device Drivers (2)
Mapping device-independent I/O functions to device-dependent ones:
– open(), close(), read(), write(), … map to interrupt handling, dev_open(), dev_close(), …
– Buffers sync data between the halves:
– Upper half: works at the CPU's speed
– Lower half: operates at the device's speed

5 Mutual Exclusion
Possible implementations:
– General (counting) semaphores on top of binary ones
– Using interrupts
Drawbacks:
– Mutual-exclusion violation
– Starvation

6 Drivers
Example:
– LP (line printer) driver

7 Mutual Exclusion
Binary semaphores were implemented as separate entities for efficiency reasons [1]. General semaphores were built using binary ones.

8 Mutual Exclusion
What are the problems with the implementation below?
– Consider 2 V() and 2 P() operations executing concurrently (e.g. in the order P, V, P, V):

    P() {
        wait(mutex);
        count = count - 1;
        if (count < 0) {
            signal(mutex);
            wait(delay);
        } else {
            signal(mutex);
        }
    }

    V() {
        wait(mutex);
        count = count + 1;
        if (count <= 0) {
            signal(delay);
        }
        signal(mutex);
    }

9 Mutual Exclusion
One P() ran and got interrupted (after its signal(mutex), before its wait(delay)). Then one V() ran to completion. Then the second P() did the same as the first. Then the second V() ran to completion. Then… ? Imagine 100 P()'s and V()'s…
(Same P() and V() code as on the previous slide.)

10 Mutual Exclusion
To avoid the possible problem of the previous solution, we introduce interrupt operations (cli disables interrupts, sti re-enables them). Looks OK, everybody is happy. BUT… what about multiprocessor systems?

    P() {
        cli;
        count = count - 1;
        if (count < 0) {
            wait(delay);
        }
        sti;
    }

    V() {
        cli;
        count = count + 1;
        if (count <= 0) {
            signal(delay);
        }
        sti;
    }

11 Some Scheduling
The scheduler places processes blocked on a semaphore in a queue. What if semaphore queues are handled using:
– FIFO
– a stack (LIFO)?
How do the fairness and starvation properties of each method compare?

12 Memory Stuff
Tutorials 7 and 8 from Paul and Tony.
Locality of Reference Model:
– Keep the pages for a given scope of code (e.g. a function), its locality, with all the instructions and the variables (local or global) used by those instructions, in memory to avoid thrashing.
– There can be several localities.
– Localities change over time and can overlap.

13 Memory Stuff
Locality and self-referencing code (loops):
– Prof. Aiman Hanna: "Locality in this context just means that the process is referencing the same piece of code (that is why the page that is used will be used many times). The most obvious way to have code locality is loops, since the executed lines may run again and again. Since these lines belong to the same page(s), these pages will be kept (not considered for replacement). This is why LFU will behave best at those locality points."

14 Deadlock
A system:
– Three concurrent processes
– Five resources of the same type
Each process needs to acquire three resources to terminate. Upon termination a process releases all its resources back to the system. Is there a possibility of a deadlock?

15–19 Deadlock
[Figures: slides 15 through 19 stepped through a resource-allocation diagram for p1, p2, and p3; the final slide ends on two question marks, asking whether the system is deadlocked.]

20 Processes Again
fork() and exec(): review the COMP229 slides from the past term.

21 Threads vs. Processes, Take II
Do threads really "speed up" a process' execution?
– Define "speed up" and the kind of threads (user, kernel, LWP) you're talking about.
– Consider the task of the running program: does it exploit parallelism?

22 Threads vs. Processes, Take II
In which circumstances is one better off with processes rather than threads? Answer this:
– What is your task, and how parallel is it?
– Robustness? (e.g. the PostgreSQL RDBMS) What if a thread "crashes"?
– Complexity of implementation?
– Synchronization?
– Is a "speed up" really achieved using threads?

23 Message Passing vs. System Calls
Where does one outperform the other? A good question, because they do different things. Message passing is an IPC facility; system calls have a wider spectrum of uses (a subset may be used for IPC, obviously). Consider this (assuming IPC):
– Message-passing services run in user space, outside of the kernel.
– System calls execute in the kernel on behalf of the process.
– Messages have limitations on size.
– System calls imply a mode switch and more overhead.
– Messages are not necessarily received immediately.
Now, looking at the above, tell: which one is more secure? Which one is more flexible? What are the space/time requirements of each?

