
1 Cooperating Processes The concurrent processes executing in the operating system may be either independent processes or cooperating processes. A process is independent if it cannot affect or be affected by the other processes executing in the system; any process that does not share data (temporary or persistent) with any other process is independent. A process is cooperating if it can affect or be affected by the other processes executing in the system; any process that shares data with other processes is a cooperating process. Cooperating processes can communicate in a shared-memory environment. This scheme requires that the processes share a common buffer pool and that the code implementing the buffer be written explicitly by the application programmer. Cooperating processes are needed for the following reasons: – Information sharing, computational speedup – Modularity, convenience Concurrent execution of cooperating processes requires mechanisms that allow processes to communicate with one another and to synchronize their actions. The alternative to shared memory is interprocess communication.

2 Example

3 Interprocess Communication IPC provides a mechanism that allows cooperating processes to communicate and synchronize their actions without sharing the same address space. IPC is particularly useful in a distributed environment, where the communicating processes may reside on different computers connected by a network; an example is a chat program used on the World Wide Web. IPC is best provided by a message-passing system.

4 Message-Passing System The function of a message system is to allow processes to communicate with one another without resorting to shared data. Communication among user processes is accomplished through the passing of messages. An IPC facility provides at least two operations: send(message) and receive(message), which with naming become send(destination_name, message) and receive(source_name, message). Messages sent by a process can be of either fixed or variable size. There are several methods for logically implementing a link and the send/receive operations: – Direct or indirect communication – Symmetric or asymmetric communication – Automatic or explicit buffering – Send by copy or send by reference – Fixed-sized or variable-sized messages

5 Direct and Indirect Communication Direct communication: send(P, message) sends a message to process P; receive(Q, message) receives a message from process Q. Indirect communication: send(A, message) sends a message to mailbox A; receive(A, message) receives a message from mailbox A. A mailbox is a shared data structure consisting of queues that can temporarily hold messages.
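A rough sketch of indirect communication, using a Python queue as a stand-in for a mailbox: sender and receiver name the shared mailbox, not each other (the names mailbox_A, producer, and consumer are illustrative):

```python
import queue
import threading

# The mailbox: a shared queue that temporarily holds messages.
mailbox_A = queue.Queue()

def producer():
    mailbox_A.put("report")         # send(A, message)

def consumer(out):
    out.append(mailbox_A.get())     # receive(A, message): blocks until a message exists

results = []
t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer, args=(results,))
t2.start()
t1.start()
t1.join()
t2.join()
print(results)    # ['report']
```

Because communication goes through the mailbox, many senders and receivers can share one link without knowing each other's identities.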

6 Synchronous and Asynchronous The communication of a message between two processes implies some level of synchronization between them. Message passing may be either blocking or nonblocking, also known as synchronous and asynchronous. Blocking send: the sender is blocked until the message is delivered. Nonblocking send: the sending process sends the message and resumes operation. Blocking receive: the receiver blocks until a message arrives. Nonblocking receive: the receiver retrieves either a valid message or a null.

7 Three combinations of blocking and nonblocking are common: – Blocking send, blocking receive (tight synchronization between processes) – Nonblocking send, blocking receive (typical of a server process that exists to provide a service or resource to other processes) – Nonblocking send, nonblocking receive Nonblocking send consumes more system resources, such as buffer space and processor time. Blocking receive is suitable for concurrent tasks.
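The difference between a blocking and a nonblocking receive can be sketched with Python's queue module, where get() blocks and get_nowait() returns immediately (raising queue.Empty as the "null" result):

```python
import queue

q = queue.Queue()

# Nonblocking receive: returns at once with a "null" (here, None)
# when no message is available.
try:
    msg = q.get_nowait()
except queue.Empty:
    msg = None
assert msg is None        # no message had been sent yet

# Nonblocking send: an unbounded queue never makes the sender wait.
q.put("ping")

# Blocking receive: q.get() would wait for a message; here it
# returns immediately because one is already queued.
msg = q.get()
print(msg)                # ping
```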

8 Buffering Messages exchanged by communicating processes reside in a temporary queue, which can be implemented in three ways: Zero capacity (queue of length 0: the sender must wait until the receiver is ready), Bounded capacity (finite-length queue), Unbounded capacity (infinite-length queue).
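Bounded and unbounded buffering can be sketched with queue.Queue's maxsize parameter (zero capacity, i.e. a rendezvous with no buffering at all, has no direct queue.Queue equivalent, so it is only noted in a comment):

```python
import queue

# Bounded capacity: the queue holds at most 2 messages; a third
# nonblocking send fails (a blocking send would wait instead).
bounded = queue.Queue(maxsize=2)
bounded.put_nowait("m1")
bounded.put_nowait("m2")
try:
    bounded.put_nowait("m3")
    overflowed = False
except queue.Full:
    overflowed = True
print(overflowed)          # True

# Unbounded capacity: maxsize=0 means the sender never blocks.
# (Note: this is NOT zero capacity; a zero-capacity link would
# force sender and receiver to rendezvous on every message.)
unbounded = queue.Queue(maxsize=0)
for i in range(1000):
    unbounded.put_nowait(i)
print(unbounded.qsize())   # 1000
```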

9 Threads A thread, sometimes called a lightweight process, is the basic unit of CPU utilization. It comprises a thread ID, a program counter, a register set, and a stack. It shares its code section, data section, open files, and other OS resources (the address space) with the other threads belonging to the same process. A thread is a flow of control or execution through the process's code, with its own thread ID, program counter, system registers, and stack; each address space can thus contain multiple concurrent flows of control. Threads reduce the overhead of process switching, but may need synchronization to control access to shared variables. A traditional (or heavyweight) process has a single thread of control; if a process has multiple threads of control, it can do more than one task at a time.
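The two points above, threads share the process's data section but need synchronization for shared variables, can be sketched in a few lines of Python (counter and work are illustrative names):

```python
import threading

counter = {"value": 0}      # data section: shared by every thread in the process
lock = threading.Lock()     # synchronization for the shared variable

def work():
    # Each thread has its own stack and program counter, yet all of
    # them see and update the same shared counter.
    for _ in range(10_000):
        with lock:
            counter["value"] += 1

threads = [threading.Thread(target=work) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter["value"])     # 40000
```

Without the lock, the increments could interleave and the final count could come out lower, which is exactly the synchronization problem the slide mentions.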

10 Thread control block (diagram): a single-threaded process has one address space containing one program counter, register set, and stack (Thread 1); a multithreaded process shares one address space among its threads, each with its own program counter, registers, and stack (Thread 1, Thread 2).

11 Single and Multithreaded Processes (diagram)

12 Examples: – A web browser uses one thread to display images or text while other threads retrieve data from the network. – A word processor uses separate threads for displaying graphics, reading keystrokes, and performing spelling and grammar checking in the background. – A web server accepting client requests serves several concurrent clients with multiple threads.
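The web-server example, one thread per client request, can be sketched as follows (handle_request and the simulated client IDs are illustrative; a real server would read requests from sockets):

```python
import threading

def handle_request(client_id, results):
    # Each client request is served by its own thread, so slow
    # clients do not delay the others.
    results[client_id] = f"response for client {client_id}"

results = {}
threads = [threading.Thread(target=handle_request, args=(i, results))
           for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(results))    # [0, 1, 2]
```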

13 Each process has an address space containing its code section, data section (global and local variables), open files, and so on. Multithreading is the ability of an OS to support multiple concurrent paths of execution within a single process. It lets a program or OS process serve more than one user at a time, and even handle multiple requests from the same user, without running multiple copies of the program. Java supports multithreaded programming, which allows you to write programs that do many things simultaneously. Benefits: – Responsiveness – Resource sharing – Economy – Utilization of multiprocessor architectures

14 Benefits of Threads It takes less time to create a new thread than a process, and less time to terminate a thread than a process. Switching between two threads takes less time than switching between processes. Threads also make communication between cooperating tasks more efficient.

15 Types of Threads User-level threads: – All thread management is done by the application; threads are implemented by a thread library at the user level. – The most obvious advantage of user-level threads is that they can be implemented on an operating system that does not support threads. – The thread library contains code for creating and destroying threads, passing messages and data between threads, scheduling thread execution, and saving and restoring thread contexts, all without support from the kernel. – Because the kernel is unaware of user-level threads, all thread creation and scheduling is done in user space without kernel intervention; user-level threads are therefore generally fast to create and manage. – User-level threads do not require any special hardware support.
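As a rough illustration of the idea, not a real thread library, a toy round-robin scheduler can be built with Python generators standing in for user-level threads: every "context switch" happens in user space by saving and resuming generator state, with no kernel involvement (thread_fn and run are hypothetical names):

```python
from collections import deque

def thread_fn(name, steps, log):
    # A "user-level thread": its saved context is the generator state.
    for i in range(steps):
        log.append(f"{name}:{i}")
        yield                       # voluntarily yield control to the scheduler

def run(threads):
    # Round-robin scheduling done entirely by the library, in user space.
    ready = deque(threads)
    while ready:
        t = ready.popleft()
        try:
            next(t)                 # restore context and run one step
            ready.append(t)         # still runnable: back of the ready queue
        except StopIteration:
            pass                    # thread finished

log = []
run([thread_fn("A", 2, log), thread_fn("B", 2, log)])
print(log)    # ['A:0', 'B:0', 'A:1', 'B:1']
```

The interleaved log shows the library alternating between the two "threads" without the kernel ever seeing more than one flow of control.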

16 Drawbacks: If the kernel is single-threaded, then any user-level thread performing a blocking system call will cause the entire process to block, even if other threads are available to run within the application. User-thread libraries include POSIX Pthreads, Mach C-threads, and Solaris 2 UI-threads.

17 Kernel-Level Threads With kernel-level threads, the kernel performs thread creation, scheduling, and management in kernel space; there is no thread-management code in the application area. Kernel threads are supported directly by the operating system. Because thread management is done by the operating system, kernel threads are generally slower to create and manage than user threads (context-switch time is longer). However, if one thread in a process blocks, the kernel can schedule another thread of the same process. Most operating systems, including Windows NT, Windows 2000, Solaris 2, BeOS, and Tru64 UNIX (formerly Digital UNIX), support kernel threads.

18 There are two distinct models of thread control: user-level threads and kernel-level threads. The thread function library implementing user-level threads usually runs on top of the system in user mode, so these threads are invisible to the operating system. User-level threads have extremely low overhead and can achieve high computational performance. However, with blocking system calls such as read(), the entire process blocks. The scheduling done by the thread runtime system may also let some threads gain exclusive access to the CPU and prevent other threads from obtaining it. Finally, access to multiple processors is not guaranteed, since the operating system is unaware that these threads exist. Kernel-level threads, on the other hand, guarantee access to multiple processors, but their performance is lower than that of user-level threads because of the load they place on the system. Synchronization and resource sharing among threads are still less expensive than in the multiple-process model, but more expensive than with user-level threads. Thread libraries today are often implemented as a hybrid model that takes advantages from both; the design goal of modern thread packages is to minimize system overhead while providing access to multiple processors.

19 Multithreading Models Some operating systems provide a combined user-level and kernel-level thread facility. In a combined system, multiple threads within the same application can run in parallel on multiple processors, and a blocking system call need not block the entire process. Multithreading models: – Many-to-many – Many-to-one – One-to-one

20 Many-to-One Model – In the many-to-one model, many user-level threads are all mapped onto a single kernel thread. – Thread management is done in user space, so it is efficient, but if any user thread invokes a blocking system call, the single kernel thread, and with it the entire process, blocks. – Because only one thread can access the kernel at a time, multiple threads are unable to run in parallel on multiprocessors.

21 One-to-One Model – The one-to-one model creates a separate kernel thread to handle each user thread. – Linux and Windows versions from 95 through XP implement the one-to-one model. – It provides more concurrency than the many-to-one model by allowing another thread to run when one thread makes a blocking system call, and it allows multiple threads to run in parallel on multiprocessors. – The only drawback is that creating a user thread requires creating a corresponding kernel thread; because this overhead can burden an application's performance, most implementations of this model restrict the number of threads supported by the system.

22 Many-to-Many Model – The many-to-many model multiplexes many user-level threads onto a smaller or equal number of kernel threads. – The number of kernel threads may be specific to either a particular application or a particular machine. – In this model, developers can create as many user threads as necessary, and the corresponding kernel threads can run in parallel on a multiprocessor. – If one kernel thread blocks because the user thread running on it issued a blocking call, the remaining kernel threads continue to run, and the runtime can schedule other user threads onto them.

23

24 Advantages of Threads – Threads provide concurrency within a process. – Efficient communication. – Economy: it is cheaper to create and context-switch threads than processes. – Utilization of multiprocessor architectures to a greater scale and efficiency. – Responsiveness. – Resource sharing, allowing better utilization of resources. – Scalability: one thread runs on one CPU, so in multithreaded processes, threads can be distributed over several processors. – Smooth context switching: context switching is the procedure the CPU follows to change from one task to another, and threads minimize context-switch time.

25

26 Multithreading Issues – The fork() and exec() system calls – Signal handling – Thread cancellation – Thread-local storage – Scheduler activations Privileged instructions are executed with the help of the kernel (OS), e.g., accessing the system clock; nonprivileged instructions execute without involving the kernel.
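Of the issues listed above, thread-local storage is easy to sketch: each thread gets its own copy of a variable even though all threads share one address space. Python's threading.local provides this (work, label, and seen are illustrative names):

```python
import threading

local = threading.local()   # each thread sees its own private copy of local.name
seen = {}

def work(label):
    local.name = label      # stored per thread, not shared
    # Even if other threads assign local.name concurrently, this
    # thread still reads back the value it wrote itself.
    seen[label] = (local.name == label)

threads = [threading.Thread(target=work, args=(f"t{i}",)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(all(seen.values()))   # True
```

This is useful when per-thread state (e.g. a transaction ID) must not leak between the threads of one process.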

