
Slide 1: Chapter 4 - Threads
- Threads are a subdivision of processes.
- Since less information is associated with a thread than with a process, thread switching is easier than process switching.

Slide 2: Process Characteristics
- Unit of resource ownership: the process is allocated
  - a virtual address space to hold the process image
  - control of some resources (files, I/O devices...)
- Unit of execution: the process is an execution path through one or more programs
  - execution may be interleaved with other processes
  - the process has an execution state and a priority

Slide 3: Process Characteristics (cont.)
- These two characteristics are treated independently by some recent operating systems.
- The unit of execution is usually referred to as a thread or a lightweight process (the book speaks of a "unit of dispatching", a similar concept).
- The unit of resource ownership is usually referred to as a process or task.

Slide 4: Multithreading vs. Single Threading
- Multithreading: the OS supports multiple threads of execution within a single process.
- Single threading: the OS does not recognize the concept of a thread.
- MS-DOS supports a single user process and a single thread.
- Traditional UNIX supports multiple user processes but only one thread per process.
- Solaris, Linux, Windows NT/2000, and OS/2 support multiple threads.

Slide 5: Threads and Processes (figure)

Slide 6: Processes
- Have a virtual address space which holds the process image.
- Have protected access to processors, other processes, files, and I/O resources.

Slide 7: Threads
- Have an execution state (Running, Ready, etc.).
- Save the thread context when not running.
- Have private storage for local variables and an execution stack.
- Have shared access to the address space and resources (files...) of their process:
  - when one thread alters (non-private) data, all other threads of the process can see this
  - threads communicate via shared variables
  - a file opened by one thread is available to the others
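A minimal sketch of the point above, using Python's threading module: each thread keeps its own local variables on its own stack, while a variable in the process's address space is visible to (and writable by) every thread of the process. The variable names here are invented for the illustration.

```python
import threading

shared = {}                 # lives in the process's address space: visible to all threads

def worker(tid):
    local = tid * tid       # private: lives on this thread's own stack
    shared[tid] = local     # once written, all other threads of the process can see it

threads = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(shared)               # all three updates are visible: communication via shared variables
```

No message passing through the kernel was needed: the threads communicated simply by writing to a shared variable.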

Slide 8: Single-Threaded and Multithreaded Process Models
- The Thread Control Block contains a register image, the thread priority, and thread state information (compare with Fig. 3.14).

Slide 9: Benefits of Threads vs. Processes
- It takes less time to create a new thread than a new process.
- It takes less time to terminate a thread than a process.
- It takes less time to switch between two threads within the same process than to switch between processes.
- This is mainly because a thread is associated with less information.

Slide 10: Benefits of Threads
- Example: a file server on a LAN.
- It needs to handle several file requests over a short period.
- It is therefore more efficient to create (and destroy) a thread for each request.
- Such threads share files and variables.
- In symmetric multiprocessing, different threads can execute simultaneously on different processors.
- Example 2: one thread displays a menu and reads user input while another thread executes the user's commands.
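A hypothetical sketch of the file-server pattern described on the slide: one short-lived thread per request, all threads sharing the server's (here simulated) file table, so no per-request process setup is needed. The file contents and names are made up for the example.

```python
import threading

file_table = {"readme.txt": "hello"}   # shared by every request thread
results = {}
results_lock = threading.Lock()

def handle_request(req_id, name):
    data = file_table.get(name, "")    # shared resource of the process
    with results_lock:                 # shared variable, so guard the update
        results[req_id] = data

# spawn one thread per incoming request
threads = [threading.Thread(target=handle_request, args=(i, "readme.txt"))
           for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()                           # each thread terminates after its request
print(results)
```

Creating and destroying one of these threads per request is far cheaper than forking a process per request, which is the slide's point.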

Slide 11: Application Benefits of Threads
- Consider an application that consists of several independent parts that do not need to run in sequence.
- Each part can be implemented as a thread.
- Whenever one thread is blocked waiting for I/O, execution can switch to another thread of the same application (instead of switching to another process).

Slide 12: Benefits of Threads (cont.)
- Since threads within the same process share memory and files, they can communicate with each other without invoking the kernel.
- It is therefore necessary to synchronize the activities of the various threads so that they do not obtain inconsistent views of the data (Chapter 5).
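A minimal sketch of the synchronization that Chapter 5 covers: a shared counter updated by several threads, with a lock serializing each read-modify-write so no thread sees (or produces) an inconsistent intermediate value. The bank-balance framing is invented for the example.

```python
import threading

balance = 0
balance_lock = threading.Lock()

def deposit(amount, times):
    global balance
    for _ in range(times):
        with balance_lock:      # serialize the read-modify-write on the shared data
            balance += amount

threads = [threading.Thread(target=deposit, args=(1, 10_000)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(balance)                  # with the lock, always 4 * 10000 = 40000
```

Without the lock, two threads could both read the same old balance and one update would be lost: exactly the "inconsistent view" the slide warns about.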

Slide 13: Termination of Processes and Threads
- Suspending a process involves suspending all threads of the process.
- Terminating a process terminates all threads within the process.
- This is because all such threads share the same address space within the process.

Slide 14: Thread States
- As for processes, three key states: Running, Ready, Blocked.
- Threads do not have a Suspend state, because all threads within the same process share the same address space (same memory); suspension applies to the whole process.
- A thread is created by another thread using an operation often called Spawn.
- Finish is the operation that terminates a thread that is completing.
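A small sketch of a thread's lifetime using Python's threading module, where the states above can be observed indirectly: a Thread object exists before it is spawned (started), is alive while Running or Blocked (here blocked on an event), and is no longer alive after it finishes.

```python
import threading

def body(evt):
    evt.wait()                  # thread is Blocked until the event occurs

evt = threading.Event()
t = threading.Thread(target=body, args=(evt,))   # created ("spawned") but not yet started

states = []
states.append(t.is_alive())    # False: created, not yet started
t.start()
states.append(t.is_alive())    # True: Running, or Blocked on the event
evt.set()                      # the awaited event occurs: thread unblocks and finishes
t.join()
states.append(t.is_alive())    # False: the thread has finished
print(states)
```
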

Slide 15: Remote Procedure Call Using a Single Thread (figure)

Slide 16: Remote Procedure Call Using Multiple Threads: less waiting time (figure)
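A sketch of the two-RPC scenario from the figures, with each remote call simulated by a 0.2-second wait (the server names are invented). Issued from two threads, the waits overlap, so the total elapsed time is roughly one call's wait rather than the sum of both, which is the multithreaded RPC advantage.

```python
import threading
import time

def rpc(server, out):
    time.sleep(0.2)                      # caller blocked waiting for the remote reply
    out[server] = "reply from " + server

replies = {}
start = time.monotonic()
threads = [threading.Thread(target=rpc, args=(s, replies))
           for s in ("server-A", "server-B")]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.monotonic() - start
print(sorted(replies), round(elapsed, 1))   # both replies, after only about one wait
```

A single-threaded caller would perform the two calls in sequence and wait roughly 0.4 s instead.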

Slide 17: Thread Management
- Thread management can be done in one of three fundamental ways:
  - user-level threads: threads are entirely managed by the application
  - kernel-level threads: threads are entirely managed by the OS kernel
  - combined approaches

Slide 18: User-Level Threads (ULT) (e.g., standard UNIX)
- The kernel is not aware of the existence of threads.
- All thread management is done by the application using a thread library.
- Thread switching does not require kernel-mode privileges (no mode switch).
- Scheduling is application specific.

Slide 19: Threads Library
- Contains code for:
  - creating and destroying threads
  - passing messages and data between threads
  - scheduling thread execution
  - saving and restoring thread contexts
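A toy sketch of such a library, with "threads" modeled as Python generators and a round-robin ready queue: creation, scheduling, and context switching (at each yield) all happen in user space, and the kernel sees only one thread of execution. The class and function names are invented for this illustration.

```python
from collections import deque

class ToyThreadLibrary:
    """Cooperative user-level 'threads': the kernel never sees them."""
    def __init__(self):
        self.ready = deque()            # ready queue managed entirely by the library

    def spawn(self, gen):               # create a thread (cf. the Spawn operation)
        self.ready.append(gen)

    def run(self):                      # the library's scheduler: no kernel involvement
        while self.ready:
            thread = self.ready.popleft()
            try:
                next(thread)            # run until the thread yields (a "context switch")
                self.ready.append(thread)   # back to the end of the ready queue
            except StopIteration:
                pass                    # thread finished

trace = []

def task(name, steps):
    for i in range(steps):
        trace.append((name, i))
        yield                           # voluntarily switch back to the scheduler

lib = ToyThreadLibrary()
lib.spawn(task("A", 2))
lib.spawn(task("B", 2))
lib.run()
print(trace)    # the two tasks interleave, scheduled purely in user space
```

Note the ULT property: if any task performed a blocking system call instead of yielding, the whole scheduler, and with it every other task, would stall.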

Slide 20: Kernel Activity for ULTs
- The kernel is not aware of thread activity, but it still manages process activity.
- When a thread makes a blocking system call, the whole process is blocked.
- For the thread library, however, that thread is still in the Running state.
- So thread states are independent of process states.

Slide 21: Advantages and Disadvantages of ULTs
- Advantages:
  - thread switching does not involve the kernel: no mode switch
  - scheduling can be application specific: choose the best algorithm
  - can run on any OS: only a thread library is needed
- Disadvantages:
  - most system calls are blocking for the process, so all threads within the process are blocked
  - the kernel can only assign processes to processors, so two threads within the same process cannot run simultaneously on two processors

Slide 22: Kernel-Level Threads (KLT) (e.g., Windows NT/2000, Linux, OS/2)
- All thread management is done by the kernel.
- There is no thread library, but an API to the kernel's thread facility.
- The kernel maintains context information for the process and its threads.
- Switching between threads requires the kernel.
- Scheduling is done on a per-thread basis.
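Python's threading module is itself a KLT-style facility on major platforms: each Thread maps to a kernel thread, which the kernel knows about and gives its own native thread id. The sketch below checks that two concurrently running threads receive two distinct kernel-assigned ids (the barrier keeps both alive at the same time so an id cannot be reused).

```python
import threading

barrier = threading.Barrier(2)   # hold both threads alive simultaneously
ids = []
ids_lock = threading.Lock()

def record():
    with ids_lock:
        ids.append(threading.get_native_id())   # thread id assigned by the kernel
    barrier.wait()

threads = [threading.Thread(target=record) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(set(ids)))   # two distinct kernel threads
```

This is exactly what a pure ULT library cannot offer: there the kernel would see only one schedulable entity.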

Slide 23: Advantages and Disadvantages of KLTs
- Advantages:
  - the kernel can simultaneously schedule many threads of the same process on many processors
  - blocking is done at the thread level
  - kernel routines can themselves be multithreaded
- Disadvantages:
  - thread switching within the same process involves the kernel: two mode switches per thread switch
  - this may result in a significant slowdown
  - however, the kernel may be able to switch threads faster than a user-level library

Slide 24: Combined ULT/KLT Approaches (Solaris)
- Thread creation is done in user space.
- The bulk of thread scheduling and synchronization is done in user space.
- The programmer may adjust the number of KLTs.
- Combines the best of both approaches.

Slide 25: Solaris
- A process includes the user's address space, stack, and process control block.
- User-level threads (threads library):
  - invisible to the OS
  - the interface for application parallelism
- Kernel threads:
  - the unit that can be dispatched on a processor
- Lightweight processes (LWPs):
  - each LWP supports one or more ULTs and maps to exactly one KLT
  - LWPs are visible to applications
  - therefore LWPs are the way the user sees the KLTs
  - they constitute a sort of virtual CPU

Slide 26: (figure)
- Process 2 is equivalent to a pure ULT approach (= UNIX).
- Process 4 is equivalent to a pure KLT approach (= Windows NT, OS/2).
- We can specify a different degree of parallelism (processes 3 and 5).

Slide 27: Solaris Versatility
- ULTs can be used when logical parallelism does not need to be supported by hardware parallelism (we save mode switches).
  - Example: multiple windows, but only one active at any one time.
- If the ULTs can block, two or more LWPs can be added to avoid blocking the whole application.
- Note the versatility of Solaris, which can operate like Windows NT or like conventional UNIX.

Slide 28: Solaris User-Level Thread Execution (threads library)
- Transitions among states are under the control of the application: they are caused by calls to the thread library.
- It is only when a ULT is in the Active state that it is attached to an LWP (so that it runs when the kernel-level thread runs).
- A thread may move to the Sleeping state by invoking a synchronization primitive (Chapter 5) and later move to the Runnable state when the awaited event occurs.
- A thread may force another thread into the Stopped state.

Slide 29: Solaris User-Level Thread States (when attached to an LWP) (figure)

Slide 30: Decomposition of the User-Level Active State
- When a ULT is Active, it is associated with an LWP and thus with a KLT.
- Transitions among the LWP states are under the exclusive control of the kernel.
- An LWP can be in the following states:
  - Running: assigned to a CPU and executing
  - Blocked: the KLT issued a blocking system call (but the ULT remains bound to that LWP and remains Active)
  - Runnable: waiting to be dispatched to a CPU

Slide 31: (figure)

Slide 32: Solaris Lightweight Process States
- LWP states are independent of ULT states (except for bound ULTs).

Slide 33: Multiprocessing: Categories of Systems
- Single Instruction, Single Data (SISD):
  - a single processor executes a single instruction stream to operate on data stored in a single memory
- Single Instruction, Multiple Data (SIMD):
  - each instruction is executed on a different set of data by different processors
  - typical application: matrix calculations
- Multiple Instruction, Single Data (MISD):
  - several CPUs execute on a single set of data; never implemented, probably not practical
- Multiple Instruction, Multiple Data (MIMD):
  - a set of processors simultaneously executes different instruction sequences on different data sets

Slide 34: (figure)

Slide 35: Symmetric Multiprocessing
- The kernel can execute on any processor.
- Typically, each processor does self-scheduling from the pool of available processes or threads.

Slide 36: (figure)

Slide 37: Design Considerations
- Managing kernel routines: they can be simultaneously in execution on several CPUs.
- Scheduling: can be performed by any CPU.
- Interprocess synchronization: becomes even more critical when several CPUs may attempt to access the same data at the same time.
- Memory management: becomes more difficult when several CPUs may be sharing the same memory.
- Reliability / fault tolerance: if one CPU fails, the others should be able to keep the system functioning.

Slide 38: Microkernels: A Trend
- A small operating-system core.
- Contains only essential operating-system functions.
- Many services traditionally included in the OS kernel are now external subsystems:
  - device drivers
  - file systems
  - virtual memory manager
  - windowing system
  - security services

Slide 39: Layered Kernel vs. Microkernel (figure)

Slide 40: Client-Server Architecture in Microkernels
- The user process is the client.
- The various subsystems are the servers.

Slide 41: Benefits of a Microkernel Organization
- Uniform interface for requests made by a process:
  - all services are provided by means of message passing
- Extensibility and flexibility:
  - facilitates the addition and removal of services and functionality
- Portability:
  - changes needed to port the system to a new processor may be limited to the microkernel
- Reliability:
  - modular design; a small microkernel can be rigorously tested

Slide 42: More Benefits of a Microkernel Organization
- Distributed system support:
  - messages are sent without knowing what the target machine is
- Object-oriented operating system:
  - components are objects with clearly defined interfaces
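A toy sketch of the microkernel client/server idea from the slides above: the "kernel" provides only a message-passing primitive (modeled here by a Queue), and a user-space "file server" thread answers client requests through that uniform interface. The server behavior and file name are invented for the illustration.

```python
import queue
import threading

requests = queue.Queue()      # stands in for the microkernel's message-passing primitive

def file_server():
    """User-space server: receives request messages, sends reply messages."""
    while True:
        reply_box, name = requests.get()          # receive a message
        if name is None:                          # conventional shutdown message
            break
        reply_box.put("contents of " + name)      # send the reply message

server = threading.Thread(target=file_server)
server.start()

# The client never calls the server directly: every service request is a message.
reply_box = queue.Queue()
requests.put((reply_box, "notes.txt"))            # client sends a request
answer = reply_box.get()                          # and waits for the reply
requests.put((None, None))                        # ask the server to stop
server.join()
print(answer)
```

Because the client only ever sees the message interface, the server could just as well live on another machine, which is the distributed-system benefit the slide mentions.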

Slide 43: Microkernel Elements
- Low-level memory management:
  - elementary mechanisms for memory allocation
  - supports more complex strategies such as virtual memory
- Inter-process communication
- I/O and interrupt management

Slide 44: Layered Kernel vs. Microkernel (figure)

Slide 45: Microkernel Performance
- A microkernel architecture may perform worse than a traditional layered architecture:
  - communication between the various subsystems causes overhead
  - message-passing mechanisms are slower than a simple system call

Slide 46: Important Concepts of Chapter 4
- Threads and their differences with respect to processes.
- User-level threads, kernel-level threads, and combinations.
- Thread states and process states.
- Different types of multiprocessing.
- Microkernel architecture.
