Slide 1: Lecture 4: Threads
Advanced Operating System, Fall 2010

Slide 2: Contents
- Overview: Processes & Threads
- Benefits of Threads
- Thread States and Operations
- User Threads and Kernel Threads
- Multithreading Models
- Threading Issues

Slide 3: Process
A process has two characteristics:
- Resource ownership: the process is allocated a virtual address space to hold the process image, and may be allocated control or ownership of resources such as I/O devices and files. The OS provides a protection function for these resources.
- Scheduling/execution: the execution of a process follows an execution path (trace) through one or more programs, may be interleaved with the execution of other processes, and has an execution state and a dispatching priority.
These two characteristics are treated independently by the operating system.

4 4 Processes & Threads Resource ownership – Process or Task Scheduling/execution – Thread or lightweight process One process, one thread (MS-DOS) One process, multiple threads (Java Runtime) Multiple processes, multiple threads (W2K, Solaris, Linux) Multiple processes, one thread per process (Unix)

5 5 Processes & Threads (cont.) In a multithreaded environment, the followings are associated with a process: Address space to hold the process image Protected access to processors, other processes (IPC), files, and I/O resources (devices & channels) Within a process, there may be one or more threads, each with the following: A thread execution state (Running, Ready, etc) A saved context when not running – a separate program counter An execution stack Some static storage for local variables for this thread Access to memory and resources of its process, shared with all other threads in that process (global variables)

Slide 6: Single-Threaded and Multithreaded Process Models (figure)

Slide 7: Process Address Space Revisited
(Figure: (a) a single-threaded address space contains OS areas, code, globals, heap, and one stack; (b) a multi-threaded address space contains the same regions plus one stack per thread.)

Slide 8: Multi-Threading (cont.)
Implementation: each thread is described by a thread-control block (TCB). A TCB typically contains:
- Thread ID
- Space for saving registers
- A pointer to thread-specific data not kept on the stack
Observation: although the model is that each thread has a private stack, threads actually share the process address space, so there is no memory protection between them; threads could potentially write into each other's stacks.
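
To make the idea of a TCB concrete, here is a minimal C sketch of what a thread-control block for a simple user-level threads package might contain; the field names and layout are illustrative assumptions, not any particular OS's actual structure.

```c
/* Illustrative thread-control block (TCB) for a simple user-level threads
 * package; field names and layout are assumptions, not a real OS's format. */
#include <stddef.h>

typedef enum { THREAD_READY, THREAD_RUNNING, THREAD_BLOCKED, THREAD_FINISHED } thread_state_t;

typedef struct tcb {
    int             tid;         /* thread ID */
    thread_state_t  state;       /* Running, Ready, Blocked, ... */
    void           *saved_sp;    /* saved stack pointer; registers are saved here on a switch */
    void           *stack_base;  /* base of this thread's private stack */
    size_t          stack_size;  /* size of that stack */
    void           *tsd;         /* pointer to thread-specific data not kept on the stack */
    struct tcb     *next;        /* link for the ready queue */
} tcb_t;
```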

Slide 9: Benefits of Threads
- Responsiveness: multithreading an interactive application may allow a program to continue running even if part of it is blocked or performing a lengthy operation, thereby increasing responsiveness to the user.
- Resource sharing: since threads within the same process share memory and files, they can communicate with each other without invoking the kernel.

Slide 10: Benefits of Threads (cont.)
- Economy: it takes less time to create a new thread than a process, less time to terminate a thread than a process, and less time to switch between two threads within the same process.
- Utilization of multiprocessor architectures: threads within the same process may run in parallel on different processors.

Slide 11: Uses of Threads in a Single-User Multiprocessing System
- Foreground and background work: for example, in a spreadsheet program, one thread could display menus and read user input while another thread executes user commands and updates the spreadsheet.
- Asynchronous processing: asynchronous elements of the program can be implemented as threads. For example, as protection against power failure, a word processor may write its buffer to disk once every minute; a thread can be created whose sole job is periodic backup and that schedules itself directly with the OS.
- Speed of execution: on a multiprocessor system, multiple threads from the same process may be able to execute simultaneously.
- Modular program structure: programs that involve a variety of activities, or a variety of sources and destinations of input and output, may be easier to design and implement using threads.

Slide 12: Thread States
- Running
- Ready
- Blocked
Note: Suspend is at the process level.

Slide 13: Diagram of Thread States (figure)

Slide 14: Thread Operations
- Spawn: create a new thread.
- Block: when a thread needs to wait for an event, it blocks.
- Unblock: when the event for which a thread is blocked occurs, the thread is moved to the ready queue.
- Finish: when a thread completes, its register context and stack are deallocated.
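
These four operations map naturally onto POSIX Pthreads, one of the libraries named later in the lecture: spawn = pthread_create, block = pthread_cond_wait, unblock = pthread_cond_signal, finish = returning from the thread function and being reaped by pthread_join. A minimal sketch; the worker function and the ready flag are illustrative.

```c
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
static int ready = 0;

static void *worker(void *arg) {
    (void)arg;
    pthread_mutex_lock(&lock);
    while (!ready)
        pthread_cond_wait(&cond, &lock);   /* Block: wait for an event */
    pthread_mutex_unlock(&lock);
    puts("worker unblocked, finishing");
    return NULL;                           /* Finish: context and stack reclaimed after join */
}

int main(void) {
    pthread_t tid;
    pthread_create(&tid, NULL, worker, NULL);   /* Spawn: create a new thread */
    pthread_mutex_lock(&lock);
    ready = 1;
    pthread_cond_signal(&cond);                 /* Unblock: worker moves to the ready queue */
    pthread_mutex_unlock(&lock);
    pthread_join(tid, NULL);                    /* reap the finished thread */
    return 0;
}
```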

Slide 15: Context Switching
Suppose a process has multiple threads... uh oh... a uniprocessor machine has only one CPU. What to do? In fact, even if we had only one thread per process, we would still have to do something about running multiple processes.
- We multiplex the multiple threads on the single CPU.
- At any instant in time, at most one thread is running.
- At some point, the OS may decide to stop the currently running thread and allow another thread to run.
- This switching from one running thread to another is called context switching.

Slide 16: Context Switching (cont.)
How to do a context switch?
- Save the state of the currently executing thread: copy all "live" registers to its thread control block. On register-only machines, this requires at least one scratch register that points to the area of memory in the thread control block where the registers should be saved.
- Restore the state of the thread to run next: copy the values of its live registers from its thread control block back into the registers.
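
One portable way to watch this save/restore from user space is the (obsolescent but still widely available) <ucontext.h> API; the sketch below swaps between two contexts standing in for two TCBs. The names main_ctx, thread_ctx, and thread_stack are illustrative.

```c
#include <ucontext.h>
#include <stdio.h>

static ucontext_t main_ctx, thread_ctx;
static char thread_stack[64 * 1024];         /* the "thread's" private stack */

static void thread_body(void) {
    puts("running in the thread's context");
    /* returning resumes main_ctx because of uc_link below */
}

int main(void) {
    getcontext(&thread_ctx);                 /* initialize the context structure */
    thread_ctx.uc_stack.ss_sp   = thread_stack;
    thread_ctx.uc_stack.ss_size = sizeof thread_stack;
    thread_ctx.uc_link          = &main_ctx; /* where to continue when thread_body returns */
    makecontext(&thread_ctx, thread_body, 0);

    puts("switching to thread");
    swapcontext(&main_ctx, &thread_ctx);     /* save main's live registers, load the thread's */
    puts("back in main");
    return 0;
}
```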

Slide 17: Context Switching (cont.)
When does context switching occur?
- When the OS decides that a thread has run long enough and that another thread should be given the CPU. (Remember how the OS gets control of the CPU back while user code is executing?)
- When a thread performs an I/O operation and needs to block to wait for its completion.
- To wait for some other thread (thread synchronization; we'll talk about this a lot in a couple of lectures).

18 18 Threads & Signals What happens if kernel wants to signal a process when all of its threads are blocked? When there are multiple threads, which thread should the kernel deliver the signal to? OS writes into process control block that a signal should be delivered Next time any thread from this process is allowed to run, the signal is delivered to that thread as part of the context switch What happens if kernel needs to deliver multiple signals?

Slide 19: Threads
- Suspending a process involves suspending all threads of the process, since all threads share the same address space.
- Termination of a process terminates all threads within the process.

Slide 20: Question
If one thread in a process is blocked, does this prevent other threads in the process from running, even if those threads are in the ready state?

Slide 21: Answer
It depends on whether the OS is involved when the thread blocks. If the OS is involved, the answer is "yes": with pure user-level threads, a blocking system call by one thread blocks the whole process (see slide 25).

Slide 22: Thread Synchronization
- All of the threads of a process share the same address space and other resources, such as open files.
- Any alteration of a resource by one thread affects the environment of the other threads in the same process.
- It is therefore necessary to synchronize the activities of the various threads. This will be covered later.

Slide 23: User Threads and Kernel Threads

Slide 24: User Threads
- All of the work of thread management is done by the application; the kernel is not aware of the existence of threads.
- An application can be programmed to be multithreaded by using a threads library, a package of routines for user thread management. The thread library contains code for creating and destroying threads, for passing messages and data between threads, for scheduling thread execution, and for saving and restoring thread contexts.
- Three primary thread libraries: POSIX Pthreads, Win32 threads, Java threads.
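
On most UNIX systems the role of such a threads library is played by POSIX Pthreads; a small sketch of creating a thread and passing data to it and back through shared memory (the struct and the values are illustrative).

```c
#include <pthread.h>
#include <stdio.h>

struct work { int input; int result; };

static void *square(void *arg) {
    struct work *w = arg;                    /* data passed in from the creating thread */
    w->result = w->input * w->input;
    return NULL;
}

int main(void) {
    struct work w = { .input = 7, .result = 0 };
    pthread_t tid;
    pthread_create(&tid, NULL, square, &w);  /* library routine: create the thread */
    pthread_join(tid, NULL);                 /* library routine: wait for it to finish */
    printf("7 squared = %d\n", w.result);    /* result passed back through shared memory */
    return 0;
}
```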

Slide 25: Pure User Threads
Advantages:
- Thread switching does not require a user/kernel mode switch.
- Thread scheduling can be application specific.
- User-level threads can run on any OS through a thread library.
Disadvantages:
- When a user-level thread (ULT) executes a blocking system call, not only is that thread blocked, but all of the threads within the process are blocked.
- A multithreaded application cannot take advantage of multiprocessing, since the kernel assigns a process to only one processor at a time.

Slide 26: Kernel Threads
- Kernel threads are supported and managed directly by the OS; W2K, Linux, and OS/2 are examples of this approach.
- In a pure kernel-thread facility, all of the work of thread management is done by the kernel. There is no thread-management code in the application area, only an application programming interface to the kernel thread facility.

Slide 27: Pure Kernel Threads
Advantages:
- The kernel can simultaneously schedule multiple threads from the same process on multiple processors.
- If one thread in a process is blocked, the kernel can schedule another thread of the same process.
Disadvantage:
- More overhead: thread operations require a mode switch to the kernel.

Slide 28: Multithreading Models
- Many-to-One
- One-to-One
- Many-to-Many

Slide 29: Many-to-One
Many user-level threads are mapped to a single kernel thread.
Examples:
- Solaris Green Threads
- GNU Portable Threads

Slide 30: One-to-One
Each user-level thread maps to a kernel thread.
Examples:
- Windows NT/XP/2000
- Linux
- Solaris 9 and later

Slide 31: Many-to-Many Model
- Allows many user-level threads to be mapped to many kernel threads.
- Allows the operating system to create a sufficient number of kernel threads.
Examples:
- Solaris prior to version 9
- Windows NT/2000 with the ThreadFiber package

Slide 32: Two-Level Model
Similar to many-to-many, except that it also allows a user thread to be bound to a kernel thread.
Examples:
- IRIX
- HP-UX
- Tru64 UNIX
- Solaris 8 and earlier

Slide 33: Threading Issues
- Semantics of fork() and exec() system calls
- Thread cancellation
- Signal handling
- Thread pools
- Thread-specific data
- Scheduler activations

Slide 34: Semantics of fork() and exec()
Does fork() duplicate only the calling thread or all threads?
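
On POSIX systems, fork() in a multithreaded process creates a child containing only a replica of the calling thread, while exec() replaces the entire process image regardless of how many threads exist. A small sketch; the spinning worker and the echo command are illustrative.

```c
#include <pthread.h>
#include <unistd.h>
#include <stdio.h>
#include <sys/wait.h>

static void *spin(void *arg) { (void)arg; for (;;) pause(); return NULL; }

int main(void) {
    pthread_t tid;
    pthread_create(&tid, NULL, spin, NULL);  /* the parent now has two threads */

    pid_t pid = fork();                      /* the child starts with ONE thread: a copy of this one */
    if (pid == 0) {
        execl("/bin/echo", "echo", "child after exec", (char *)NULL);  /* replaces the whole image */
        _exit(127);                          /* reached only if exec fails */
    }
    waitpid(pid, NULL, 0);
    return 0;                                /* exiting the parent ends its spinning thread too */
}
```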

Slide 35: Thread Cancellation
Terminating a thread before it has finished. Two general approaches:
- Asynchronous cancellation terminates the target thread immediately.
- Deferred cancellation allows the target thread to periodically check whether it should be cancelled.
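
Pthreads supports both approaches; deferred cancellation is the default, with the target checking for a pending cancellation at cancellation points such as pthread_testcancel(). A minimal sketch; the busy loop stands in for real work.

```c
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static void *target(void *arg) {
    (void)arg;
    for (;;) {
        /* ...do one unit of work... */
        pthread_testcancel();      /* deferred: check whether this thread should be cancelled */
    }
    return NULL;
}

int main(void) {
    pthread_t tid;
    pthread_create(&tid, NULL, target, NULL);
    sleep(1);                      /* let the target run briefly */
    pthread_cancel(tid);           /* request cancellation */
    pthread_join(tid, NULL);       /* a cancelled thread must still be joined */
    puts("target cancelled");
    return 0;
}
```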

Slide 36: Signal Handling
Signals are used in UNIX systems to notify a process that a particular event has occurred. A signal handler is used to process signals:
1. A signal is generated by a particular event.
2. The signal is delivered to a process.
3. The signal is handled.
Options for delivery in a multithreaded process:
- Deliver the signal to the thread to which the signal applies.
- Deliver the signal to every thread in the process.
- Deliver the signal to certain threads in the process.
- Assign a specific thread to receive all signals for the process.
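
The last option, assigning one thread to receive all signals for the process, is commonly implemented in Pthreads by blocking the signal in every thread and having a designated thread call sigwait(); a minimal sketch for SIGINT.

```c
#include <pthread.h>
#include <signal.h>
#include <stdio.h>

static void *signal_thread(void *arg) {
    sigset_t *set = arg;
    int sig;
    sigwait(set, &sig);                         /* blocks until SIGINT arrives for the process */
    printf("designated thread got signal %d\n", sig);
    return NULL;
}

int main(void) {
    sigset_t set;
    sigemptyset(&set);
    sigaddset(&set, SIGINT);
    pthread_sigmask(SIG_BLOCK, &set, NULL);     /* block SIGINT; new threads inherit this mask */

    pthread_t tid;
    pthread_create(&tid, NULL, signal_thread, &set);
    pthread_join(tid, NULL);                    /* press Ctrl-C to deliver SIGINT */
    return 0;
}
```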

Slide 37: Thread Pools
Create a number of threads in a pool, where they await work.
Advantages:
- It is usually slightly faster to service a request with an existing thread than to create a new thread.
- Allows the number of threads in the application(s) to be bounded by the size of the pool.
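
A deliberately minimal fixed-size pool can be built with Pthreads: a few worker threads wait on a shared queue of function/argument pairs. The sketch below omits shutdown, error handling, and a bounded queue; all names are illustrative.

```c
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define POOL_SIZE 4

struct task { void (*fn)(void *); void *arg; struct task *next; };

static struct task *queue_head = NULL;
static pthread_mutex_t qlock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  qcond = PTHREAD_COND_INITIALIZER;

static void *pool_worker(void *unused) {
    (void)unused;
    for (;;) {
        pthread_mutex_lock(&qlock);
        while (queue_head == NULL)
            pthread_cond_wait(&qcond, &qlock);  /* idle workers wait for work */
        struct task *t = queue_head;
        queue_head = t->next;
        pthread_mutex_unlock(&qlock);
        t->fn(t->arg);                          /* service the request on an existing thread */
        free(t);
    }
    return NULL;
}

static void pool_submit(void (*fn)(void *), void *arg) {
    struct task *t = malloc(sizeof *t);
    t->fn = fn; t->arg = arg;
    pthread_mutex_lock(&qlock);
    t->next = queue_head;                       /* LIFO queue, for brevity */
    queue_head = t;
    pthread_cond_signal(&qcond);
    pthread_mutex_unlock(&qlock);
}

static void hello(void *arg) { printf("task %ld\n", (long)arg); }

int main(void) {
    pthread_t workers[POOL_SIZE];
    for (int i = 0; i < POOL_SIZE; i++)
        pthread_create(&workers[i], NULL, pool_worker, NULL);
    for (long i = 0; i < 8; i++)
        pool_submit(hello, (void *)i);
    sleep(1);                                   /* crude: let the tasks drain before exiting */
    return 0;
}
```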

Slide 38: Thread-Specific Data
- Allows each thread to have its own copy of data.
- Useful when you do not have control over the thread-creation process (e.g., when using a thread pool).
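
In Pthreads, thread-specific data is provided through keys: every thread stores and retrieves its own value under the same pthread_key_t. A minimal sketch; the key name and values are illustrative.

```c
#include <pthread.h>
#include <stdio.h>

static pthread_key_t my_id_key;

static void *worker(void *arg) {
    pthread_setspecific(my_id_key, arg);              /* this thread's private copy */
    long id = (long)pthread_getspecific(my_id_key);
    printf("thread sees its own value: %ld\n", id);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_key_create(&my_id_key, NULL);             /* one key, a separate value per thread */
    pthread_create(&t1, NULL, worker, (void *)1L);
    pthread_create(&t2, NULL, worker, (void *)2L);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}
```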

Slide 39: Scheduler Activations
- Both the many-to-many and two-level models require communication to maintain the appropriate number of kernel threads allocated to the application.
- Scheduler activations provide upcalls, a communication mechanism from the kernel to the thread library.
- This communication allows an application to maintain the correct number of kernel threads.

Slide 40: End of Lecture 4
Thank you!

