Chapter 4 Threads
Outline
Definition of a thread
Benefits of multithreading
Thread implementation: user-level vs. kernel-level threads
Process
A process embodies two characteristics:
Resource ownership
Scheduling/execution: the process follows an execution path that may be interleaved with other processes
These two characteristics are treated independently by the operating system
Resource Ownership
A virtual address space that holds the process image
Protected access to processors, other processes (for interprocess communication), files, and I/O resources
One or More Threads in a Process
Each thread has:
An execution state (Running, Ready, etc.)
A saved thread context when not running
An execution stack
Access to the memory and resources of its process (shared by all threads of the process)
Multithreading
The ability to support multiple concurrent paths of execution within a single process
Windows, Solaris, Linux, and Mac OS X all support multiple processes and multiple threads per process
Single-Threaded and Multithreaded Process Models
Advantages of Multithreading
Similar to the advantages gained with cooperating processes:
Information sharing
Modular program structure
Speed of execution
Convenience
Benefits of Threads
It takes far less time to create a new thread than a process (experiments show more than 10 times faster)
Less time to terminate a thread than a process
Less time to switch between two threads within the same process
Benefits of Threads (cont.)
Since threads within the same process share memory and files, they can communicate with each other without the kernel's involvement
Threads
Suspending a process suspends all threads of the process, since all threads share the same address space
Termination of a process terminates all threads within the process
Thread Implementation – Packages
Threads are provided as a package, including operations to create, destroy, and synchronize them
A package can be implemented as:
User-level threads
Kernel-level threads
User-Level Threads
All thread management is done by the application
The kernel is not aware of the existence of threads
User-Level Threads
User-Level Threads
The thread library executes entirely in user mode; the kernel is not involved
Cheap to manage threads (e.g., create: set up a stack; destroy: free the memory)
Cheap to context switch (just save the CPU registers; switching is decided by program logic)
However, a blocking system call blocks all peer threads
Kernel-Level Threads
The kernel is aware of threads and schedules them
A blocking system call will not block all peer threads
The kernel maintains context information for the process and its threads
Scheduling is done on a per-thread basis
Kernel-Level Threads
Kernel-Level Threads
The kernel is aware of threads and schedules them
A blocking system call will not block all peer threads
More expensive to manage threads
More expensive to context switch: kernel intervention and mode switches are required
Thread/Process Operation Latencies

Operation     User-level threads   Kernel-level threads   Processes
fork          34 usec              948 usec               11,300 usec
signal-wait   37 usec              441 usec               1,840 usec

Experiments on a uniprocessor computer.
User vs. Kernel-Level Threads
User-level threads: cheap to manage and to context switch, but a blocking system call blocks all peer threads
Kernel-level threads: a blocking system call will not block all peer threads, but they are expensive to manage and to context switch
Light-Weight Processes (LWPs)
Support for hybrid (user-level and kernel-level) threads; Solaris is an example
A process contains several LWPs
In addition, the system provides user-level threads
The developer creates multithreaded applications; the system maps threads to LWPs for execution
Thread Implementation – LWP
Combining kernel-level lightweight processes and user-level threads
Thread Implementation – LWP
Each LWP offers a virtual CPU
LWPs are created by system calls
Each LWP runs the scheduler to select a thread to execute
The thread table is kept in user space and is shared by all LWPs
LWPs switch context between threads
Thread Implementation – LWP
When a thread blocks waiting for a signal from another thread, the LWP schedules another ready thread; this thread context switch is done entirely in user mode
When a thread blocks on I/O, the current LWP can no longer execute, and context is switched to another LWP
LWP Features (I)
Cheap thread management
A blocking system call does not necessarily suspend the whole process
LWP Features (II)
LWPs are transparent to the application
LWPs can easily be mapped to different CPUs
Managing LWPs is expensive (like kernel-level threads)
Thread Libraries
A thread library provides the programmer with an API for creating and managing threads
Two primary ways of implementing:
A library entirely in user space
A kernel-level library supported by the OS
Pthreads
A POSIX standard (IEEE 1003.1c) API for thread creation and synchronization
A specification, not an implementation: the API specifies the behavior of the thread library; implementation is up to the developers of the library
May be provided as either user-level or kernel-level
Common in UNIX operating systems (Solaris, Linux, Mac OS X)
The modern Linux implementation of Pthreads (NPTL) uses a 1:1 mapping between Pthreads threads and kernel threads, so pthread_create() gives you a kernel-level thread
Pthreads Example
Pthreads Example (Cont.)
Appendix
(b) I/O issued by thread 2; process B is blocked as a result. Thread 2 is not actually running in the sense of being executed on a processor, but it is perceived as being in the Running state by the threads library. (d) Thread 2 blocked waiting for an action by thread 1.