1
Process Management Processes & Threads
03 Process Management Processes & Threads In previous lectures, we frequently mentioned the term process. This week, we will explore how operating systems manage processes. As most operating systems enable a process to have multiple threads of control as it runs, we will also discuss how operating systems handle such multithreaded programming. Kai Bu
2
Process? Processes & Threads
So first, what is a process?
3
Process a program in execution
By definition, a process is a program in execution.
4
Process a program in execution using certain resources such as
CPU time, memory, files, I/O devices It will need certain resources, such as CPU time, memory, files, and I/O devices. These resources are allocated to the process either when it is created or while it is executing.
5
Process a program in execution more than program code / text section
Obviously, a process is more than the program code, which is sometimes known as the text section.
6
Process in memory a program in execution
program counter + register contents; stack of temporary data, e.g., function parameters, return addresses, local variables; data section of global variables; heap, dynamically allocatable at run time. A process also includes the current activity, as represented by the value of the program counter and the contents of the processor’s registers. A process generally also includes the process stack, which contains temporary data such as function parameters, return addresses, and local variables, and a data section, which contains global variables. A process may also include a heap, which is memory that is dynamically allocated during process run time.
7
Program ≠ Process We emphasize again that a program by itself is not a process
8
Program ≠ Process a passive entity an active entity
e.g., executable file containing a list of instructions stored on disk Program ≠ Process an active entity with a program counter specifying the next instruction to execute and a set of associated resources A program is a passive entity, such as an executable file containing a list of instructions stored on disk. In contrast, a process is an active entity: it uses a program counter to specify the next instruction to execute and is associated with a set of resources.
9
Program ↓ Process a passive entity an active entity
e.g., executable file containing a list of instructions stored on disk Program ↓ loaded into memory Process an active entity with a program counter specifying the next instruction to execute and a set of associated resources A program becomes a process only when the executable file is loaded into memory.
10
Program ↓ Process a passive entity an active entity
e.g., executable file containing a list of instructions stored on disk Program ↓ loaded into memory Process an active entity with a program counter specifying the next instruction to execute and a set of associated resources There are two common techniques for loading executable files into memory. We can either double-click an icon representing the executable file, or enter the name of the executable file on the command line. enter exe file name double-click icon on cmd line
11
Process State As a process executes, its state may vary with its current activity.
12
Process State New: Running: Waiting: Ready: Terminated:
proc is being created instr are being executed proc is waiting for some event to occur (e.g., I/O completion or signal reception) proc is waiting to be assigned to processor proc has finished execution Here are some common process states. We call it a new state if the process is being created; a running state if instructions are being executed; A waiting state if the process is waiting for some event to occur, such as I/O completion and reception of a signal; A ready state if the process is waiting to be assigned to a processor; And a terminated state if the process has finished execution.
13
State Transition Diagram
How one state transitions to another; for example…
14
State Transition Diagram
It is important to realize that only one process can be running on any processor at any instant. *only one process can be running on any processor at any instant
15
how OS tracks processes?
Given the running process and the processes in other states, such as ready and waiting, how does the operating system track the information of all these processes?
16
PCB: Process Control Block
Process state; Program counter: addr of next instr to execute; CPU registers; CPU-scheduling info: proc priority, pointers to scheduling queues, other parameters (cont.) The OS uses a process control block, which contains process-related information. *aka task control block
17
PCB: Process Control Block
Memory-management info: base and limit registers, page tables or segment tables; Accounting info: CPU and real time usage, time limits, account numbers, job or process numbers; I/O status info: list of allocated I/O devs and open files
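To make the PCB more concrete, here is a minimal C sketch of what such a record might contain; the field names and sizes are illustrative assumptions, not the layout of any real kernel (Linux's task_struct, for example, holds far more fields).

/* Hypothetical, simplified process control block. */
#include <stdint.h>

enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

struct pcb {
    int             pid;              /* process identifier                */
    enum proc_state state;            /* current process state             */
    uint64_t        program_counter;  /* addr of next instr to execute     */
    uint64_t        registers[16];    /* saved CPU register contents       */
    int             priority;         /* CPU-scheduling info               */
    uint64_t        base, limit;      /* memory-management registers       */
    uint64_t        cpu_time_used;    /* accounting info                   */
    int             open_files[16];   /* I/O status: open file descriptors */
    struct pcb     *next;             /* link into a scheduling queue      */
};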
18
CPU Switch between Proc
This example shows how the PCB is used when the CPU switches from one process to another.
19
which process to execute?
Given all these processes under tracking, when the OS wants to choose one to execute next, which one should it choose?
20
Scheduling Queues device queue Use linked list to … queue processes
Ready queue: the list of processes that are residing in main memory and are ready and waiting to execute; Device queue: the list of processes that are waiting for a particular I/O device; Each device has its own device queue; Scheduling Queues
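As a rough sketch of the linked-list idea (assuming a PCB with a next pointer, as in the earlier sketch; this is not any real kernel's code), the ready queue could look like this:

/* Hypothetical ready queue kept as a singly linked list of PCBs. */
#include <stddef.h>

struct pcb { int pid; struct pcb *next; /* ...other PCB fields... */ };

static struct pcb *ready_head = NULL, *ready_tail = NULL;

/* A process that becomes ready is appended at the tail. */
void ready_enqueue(struct pcb *p) {
    p->next = NULL;
    if (ready_tail) ready_tail->next = p;
    else            ready_head = p;
    ready_tail = p;
}

/* The scheduler removes the process at the head for execution. */
struct pcb *ready_dequeue(void) {
    struct pcb *p = ready_head;
    if (p) {
        ready_head = p->next;
        if (!ready_head) ready_tail = NULL;
    }
    return p;
}

Each I/O device queue would be another list of the same kind, holding the processes waiting on that device.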
21
Queueing Diagram dispatched: selected for execution
Initially, a process is in the ready queue; it waits there until it is selected for execution, or dispatched. Once the process is allocated the CPU and is executing, one of several events could occur: the process could issue an I/O request and then be placed in an I/O queue; the process could create a new child process and wait for the child’s termination; the process could be removed forcibly from the CPU, as a result of an interrupt, and be put back in the ready queue.
22
Queueing Diagram A process continues this cycle until it terminates.
dispatched: selected for execution A process continues this cycle until it terminates. That’s when it is removed from all queues; its PCB and resources are deallocated.
23
which process to execute?
We still have not answered this question: given many processes in the queue, which process should run?
24
which process to execute?
scheduler selects which process to execute? It is the scheduler that selects the process, using a certain strategy…
25
which process to execute?
more processes are submitted than can be executed immediately; spooled to a mass-storage dev (typically a disk) for later execution Often, more processes are submitted than can be executed immediately. These processes are first spooled to a disk, where they are kept for later execution.
26
Scheduler Long-term scheduler / Job scheduler
selects processes from the pool; loads them into memory for execution Short-term scheduler / CPU scheduler selects from among ready processes; allocates the CPU to one of them Then we have two types of schedulers. The one that selects processes from the pool and loads them into memory is called the long-term scheduler, or job scheduler. Given a number of loaded processes, the other scheduler, called the short-term scheduler or CPU scheduler, selects one ready process and allocates the CPU to it. The selected process is then executed.
27
Scheduling Frequency? Long-term scheduler / Job scheduler
selects processes from the pool; loads them into memory for execution Short-term scheduler / CPU scheduler selects from among ready processes; allocate the CPU to one of them How frequently should these schedulers select a new process?
28
Scheduling Frequency? Long-term scheduler / Job scheduler
selects processes from the pool; loads them into memory for execution Short-term scheduler / CPU scheduler switch CPU among proc so frequently that users can interact with each program while it is running; execute at least once every 100 ms The short-term scheduler typically executes at least once every 100 milliseconds, because it needs to switch the CPU among processes frequently enough that users can interact with each program while it is running.
29
Scheduling Frequency? Long-term scheduler / Job scheduler
execute much less frequently; e.g., every few minutes Short-term scheduler / CPU scheduler switch CPU among proc so frequently that users can interact with each program while it is running; execute at least once every 100 ms In contrast, the long-term scheduler executes much less frequently, say, with intervals of minutes.
30
Process Types CPU-bound processes I/O-bound processes
According to what resources a process mainly uses during execution, there are two types of processes: an I/O-bound process spends more of its time doing I/O than it spends doing computations. In contrast, a CPU-bound process generates I/O requests infrequently and uses more of its time doing computations. Process Types
31
Process Selection CPU-bound processes I/O-bound processes
long-term scheduler should select a good process mix of both types to avoid frequent idle of CPU and devices I/O-bound processes It is important that the long-term scheduler selects a good process mix of both types. Why? If all processes are I/O bound, the ready queue will almost always be empty, and the short-term scheduler as well as the CPU will have little to do. If all processes are CPU bound, the I/O waiting queue will almost always be empty, devices will go unused, and again the system will be unbalanced. The system with the best performance will thus have a combination of CPU-bound and I/O-bound processes. Process Selection
32
Process Selection Switch? CPU-bound processes I/O-bound processes
During process execution, interrupts cause the operating system to change a CPU from its current task and to run a kernel routine. What do we need to do when switching the CPU to another process? Process Selection Switch?
33
Context Switch state save of the current process
state restore of a different process typical speed: a few milliseconds We need to perform a state save of the current process and a state restore of a different process. This task is known as a context switch. It should complete in a very short time, such as a few milliseconds.
34
how to get your own process?
35
Process Creation A tree of processes on Linux parent process create
child process parent process create During the course of execution, a process may create several new processes. The creating process is called a parent process, and the newly created processes are called the children of that process. Each of these child processes may in turn create other processes, forming a tree of processes. child process
36
pid: Process Identifier
unique across processes The OS uses a unique pid to identify each process.
37
Process Creation Upon system boot, init runs and creates various user processes Once the system has booted, the init process can also create various user processes. For example,
38
Process Creation Example: a logged user uses bash shell
and creates ps & emacs Login process is responsible for managing clients that directly log onto the system; kthreadd process is responsible for creating additional processes that perform tasks on behalf of the kernel. sshd process is responsible for managing clients that connect to the system by using ssh (secure shell)
39
Resource Allocation A child process directly asks from OS
Or be constrained to a subset of the resources of the parent process After creation, a child process can directly ask for resources from the operating system, or it may be constrained to a subset of the resources of the parent process. Which one should we follow?
40
√ Resource Allocation A child process directly asks from OS
Or be constrained to a subset of the resources of the parent process restricting a child process to a subset of the parent’s resources prevents any process from overloading the system by creating too many child processes √ The second one, right? Because otherwise, when too many child processes are created and each of them asks for resources from the operating system, the system will be overloaded.
41
Execution Possibility
The parent process continues to execute concurrently with its child processes The parent waits until some or all of its child processes have terminated When the newly created child processes are running, the parent process itself may continue to execute concurrently. Or, it can wait until some or all of its child processes have terminated.
42
Address-Space Possibility
The child process is a duplicate of the parent , with the same program and data as the parent The child process has a new program loaded into it
43
Example: UNIX process creation using fork() system call
The C program for UNIX fork() system call to create a new process
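The program itself is not reproduced in this transcript; here is a minimal sketch of the classic fork()/exec()/wait() pattern such an example typically shows (the ls command and the printed message are illustrative choices).

#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();                 /* create a child process          */

    if (pid < 0) {                      /* error occurred                  */
        fprintf(stderr, "fork failed\n");
        return 1;
    } else if (pid == 0) {              /* child process                   */
        execlp("/bin/ls", "ls", NULL);  /* load a new program in the child */
    } else {                            /* parent process                  */
        wait(NULL);                     /* wait for the child to terminate */
        printf("child complete\n");
    }
    return 0;
}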
44
Example: UNIX process creation using fork() system call
Diagram corresponding to the code
45
Process Termination after a process finishes executing its final statement use exit() sys call to ask OS to delete it and deallocate its resources
46
Process Termination A parent process can terminate its child processes via an appropriate sys call e.g., TerminateProcess() in Windows for the following reasons: The child proc uses more resources than allocated The task assigned to the child proc is no longer required The parent proc is exiting and the operating system does not allow a child to continue if its parent terminates;
47
Process Termination A parent process can terminate its child processes via an appropriate sys call e.g., TerminateProcess() in Windows for the following reasons: The child proc uses more resources than allocated The task assigned to the child proc is no longer required The parent proc is exiting and the operating system does not allow a child to continue if its parent terminates; In the last case, we also refer to it as cascading termination
48
Independent Process does not share data with any other processes;
cannot affect or be affected by concurrently executing processes in the system; During the course of process execution, If the process cannot affect or be affected by the other processes executing in the system, That is, it does not share data with any other processes, We call it an independent process.
49
Independent Process Cooperating
does not share data with any other processes; cannot affect or be affected by concurrently executing processes in the system; Clearly, there should be times when we want processes to share data with each other; We call such processes cooperating processes
50
Cooperating Process Apps
Information sharing Computation speedup Modularity Convenience … For example, we may use it for Information sharing: since several users may be interested in the same piece of information, we must provide an environment to allow concurrent access to such information. Computation speedup: if we want a particular task to run faster, we must break it into subtasks, each of which will be executing in parallel with the others. Notice that such a speedup can be achieved only if the computer has multiple processing cores. Modularity: we may want to construct the system in a modular fashion, dividing the system functions into separate processes or threads. Convenience: even an individual user may work on many tasks at the same time. For instance, a user may be editing, listening to music, and compiling in parallel.
51
how cooperating procs comm?
Since the purpose of cooperating processes is to share data among different processes, how do they communicate with each other?
52
InterProcess Communication
(a) message passing communication takes place by means of messages exchanged between cooperating processes Usually, cooperating processes exchange data and information using an interprocess communication (IPC) mechanism. We have two fundamental IPC models. One is message passing, in which communication takes place by means of messages exchanged between the cooperating processes.
53
InterProcess Communication
(b) shared memory communication takes place by reading and writing data to the shared memory region The other is shared memory, by which a region of memory that is shared by cooperating processes is established. Processes can exchange information by reading and writing data to the shared region
54
Shared-Memory Systems
Shared-memory region resides in the address space of the process creating it Other processes attach it to their address space to communicate Example: producer-consumer problem A common paradigm for cooperating processes is the producer-consumer problem
55
Producer-Consumer Problem
Producer process produces information Consumer process consumes it a buffer to be filled by the producer and emptied by the consumer synchronization needed, so that consumer does not try to consume an item that has not yet been produced Produced information is stored in a buffer
56
Producer-Consumer Problem
Unbounded buffer no practical limit on buffer size; producer can always produce new items Bounded buffer fixed buffer size; consumer waits upon empty buffer; producer waits upon full buffer; The operating system may place no practical limit on the size of the buffer, If so, we call it an unbounded buffer, producer can always produce and add new items to it. If fixed buffer size is enforced, we call it a bounded buffer. In this case, the consumer must wait if the buffer is empty, and the producer must wait if the buffer is full.
57
Example: Bounded Buffer
a circular array with two logical pointers: in and out. Here’s an example of a bounded buffer:
#define BUFFER_SIZE 10
typedef struct {
    . . .
} item;

item buffer[BUFFER_SIZE];
int in = 0;
int out = 0;
58
Example: Bounded Buffer
Here’s an example of a bounded buffer.
producer process:
item next_produced;
while (true) {
    /* produce an item in next_produced */
    while (((in + 1) % BUFFER_SIZE) == out)
        ; /* do nothing */
    buffer[in] = next_produced;
    in = (in + 1) % BUFFER_SIZE;
}

consumer process:
item next_consumed;
while (true) {
    while (in == out)
        ; /* do nothing */
    next_consumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    /* consume the item in next_consumed */
}
59
Example: Bounded Buffer
Same producer and consumer code as above, annotated: the producer produces while the buffer is not full (it spins when (in + 1) % BUFFER_SIZE == out), and the consumer consumes while the buffer is not empty (it spins while in == out).
60
Message-Passing Systems
Cooperating processes communicate via a message-passing facility, which provides at least two operations: send(message) receive(message)
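send() and receive() here are abstract primitives of the message-passing facility. As one hedged, concrete illustration (not the facility the slides define), POSIX message queues offer a similar pair of operations; the queue name /demo_mq and the sizes below are assumptions.

#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <sys/types.h>

int main(void) {
    struct mq_attr attr = { .mq_flags = 0, .mq_maxmsg = 8,
                            .mq_msgsize = 64, .mq_curmsgs = 0 };

    /* Normally the sender and receiver are different processes that open
     * the same named queue; both sides are shown here for brevity. */
    mqd_t mq = mq_open("/demo_mq", O_CREAT | O_RDWR, 0644, &attr);
    if (mq == (mqd_t)-1) { perror("mq_open"); return 1; }

    mq_send(mq, "ping", 5, 0);                           /* send(message)    */

    char buf[64];
    ssize_t n = mq_receive(mq, buf, sizeof(buf), NULL);  /* receive(message) */
    if (n > 0)
        printf("received: %s\n", buf);

    mq_close(mq);
    mq_unlink("/demo_mq");
    return 0;
}

On Linux this typically links with -lrt; other systems may differ.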
61
Direct Communication Symmetric addressing
send(P, message): send msg to proc P receive(Q, message): receive msg from proc Q Asymmetric addressing receive(id, message): receive msg from any process; For two processes to communicate, a straightforward way called direct communication requires that process names be explicitly specified. Two types: For symmetric addressing: both the names of sender and receiver should be specified; While for asymmetric addressing: only the sender names the recipient, the recipient is not required to name the sender, it can receive a message from any process. (The variable id is set to the name of the process with which communication has taken place)
62
Direct Communication Flexibility?
Based on the working principle of direct communication, how flexible do you think it is?
63
Direct Communication Flexibility: limited upon changing process id,
all references to old id must be found, and be modified to new id Limited flexibility, because Upon changing the identifier of a process, All references to the old identifier must be found, so that they can be modified to the new one.
64
Indirect Communication
Cooperating processes communicate via a shared mailbox or port send(A, msg): send msg to mailbox A receive(A, msg): recv msg from mb A For better flexibility, we may turn to indirect communication, in which cooperating processes communicate via a shared mailbox or port. Two key operations: send and receive.
65
Synchronization blocking/synchronous - nonblocking/async
Blocking send: the sending process is blocked until the msg is received by the proc/mb Nonblocking send: the sending process sends the msg and resumes operation Blocking receive: the receiver is blocked until a message is available Nonblocking receive: the receiver retrieves either a valid msg or a null The send() and receive() primitives can be implemented as either blocking or nonblocking, which are also called synchronous or asynchronous, respectively.
66
Buffering Temporary queue for exchanged msg,
can be implemented with different lengths Zero capacity: no msg waits in the queue; sender must block till the receiver receives the message. Bounded capacity: sender blocks when the queue is full Unbounded capacity: sender never blocks Whether communication is direct or indirect, messages exchanged by communicating processes reside in a temporary queue. Basically, such queues can be implemented in three ways. With zero capacity, the queue has a maximum length of zero; thus, the link cannot have any message waiting in it. In this case, the sender must block until the recipient receives the message. With bounded capacity, the queue has a finite length; it can hold at most a fixed number of messages. If the queue is not full when a new message is sent, the message is placed in the queue, and the sender can continue execution without waiting. If the queue is full, the sender must block until space is available in the queue. With unbounded capacity, the queue’s length is potentially infinite; thus, any number of messages can wait in it. The sender never blocks.
67
shared mem & msg passing
So far, we have investigated shared memory and message passing, and how they support the communication between two processes
68
shared mem & msg passing
work for client-server systems They are also applicable to client-server systems;
69
shared mem & msg passing
work for client-server systems together with three more: Next, let’s explore three other similar strategies for communication in client-server systems, which are Sockets, remote procedure calls (RPCs), and pipes sockets remote procedure calls (RPCs) pipes
70
Sockets IP address port number: >1024 for client
<1024 for server A socket is identified by an IP address concatenated with a port number.
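As a hedged illustration of that idea, here is a minimal C client sketch using the standard BSD socket calls; the address 127.0.0.1 and port 8080 are arbitrary assumptions, and error handling is kept minimal.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);   /* TCP socket               */
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in server;
    memset(&server, 0, sizeof(server));
    server.sin_family = AF_INET;
    server.sin_port   = htons(8080);            /* server's well-known port */
    inet_pton(AF_INET, "127.0.0.1", &server.sin_addr);

    /* The OS assigns this client an ephemeral port (> 1024) automatically. */
    if (connect(fd, (struct sockaddr *)&server, sizeof(server)) < 0) {
        perror("connect");
        return 1;
    }
    write(fd, "hello\n", 6);
    close(fd);
    return 0;
}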
71
Remote Procedure Calls
request Execution of a remote procedure call (RPC)
72
Pipes Four implementation issues: Bidirectional or unidirectional?
If bidirectional, half duplex or full duplex? Relationship between comm processes? Over a network? Or on same machine? *Half duplex: data can travel only one way at a time Full duplex: data can travel in both directions at the same time
73
Ordinary Pipes unidirectional, half duplex, parent-child relationship, on the same machine: the producer writes to the write end of the pipe; the consumer reads from the read end *Example: pipe(int fd[]) to construct a pipe; file descriptor fd[0] is the read end of the pipe; fd[1] is the write end
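Here is a minimal sketch of that pattern (the message text is illustrative): the parent writes to fd[1] and the child reads from fd[0].

#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fd[2];                             /* fd[0]: read end, fd[1]: write end */
    char buf[32];

    if (pipe(fd) == -1) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid == 0) {                        /* child: the consumer               */
        close(fd[1]);                      /* close the unused write end        */
        ssize_t n = read(fd[0], buf, sizeof(buf) - 1);
        if (n > 0) {
            buf[n] = '\0';
            printf("child read: %s\n", buf);
        }
        close(fd[0]);
    } else {                               /* parent: the producer              */
        close(fd[0]);                      /* close the unused read end         */
        write(fd[1], "greetings", 9);
        close(fd[1]);
        wait(NULL);
    }
    return 0;
}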
74
Named Pipes bidirectional, no parent-child relationship required
can be used by several procs, several writers supported, continue to exist after the communicating procs have finished; available on both UNIX and Windows
75
Named Pipes: UNIX referred to as FIFOs
appear as typical files in the file system create with mkfifo() manipulate with open(), read(), write(), close() bidirectional, half duplex communicating processes must reside on the same machine Named pipes are referred to as FIFOs in UNIX systems. Once created, they appear as typical files in the file system. A FIFO is created with the mkfifo() system call And manipulated with the ordinary open(), read(), write(), and close() system calls. Although FIFOs allow bidirectional communication, only half-duplex transmission is permitted. Additionally, the communicating processes must reside on the same machine.
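A minimal writer-side sketch of those calls, assuming a FIFO at the illustrative path /tmp/myfifo (a separate reader process would open() the same path with O_RDONLY and read() from it):

#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void) {
    const char *path = "/tmp/myfifo";

    if (mkfifo(path, 0666) == -1)   /* create the FIFO; may fail if it already exists */
        perror("mkfifo");

    int fd = open(path, O_WRONLY);  /* blocks until a reader opens the FIFO           */
    if (fd < 0) { perror("open"); return 1; }

    write(fd, "hello via FIFO\n", 15);
    close(fd);
    return 0;
}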
76
Named Pipes: Windows bidirectional, full duplex
comm procs on same or diff machines create with CreateNamedPipe() connect with ConnectNamedPipe() manipulate with ReadFile() & WriteFile() On Windows systems, named pipes provide a richer communication mechanism. Full-duplex communication is allowed, and the communicating processes may reside on either the same machine or different ones. Windows creates named pipes with the CreateNamedPipe() function, a client connects to one using ConnectNamedPipe(), and communication over the named pipe can be accomplished using the ReadFile() and WriteFile() functions.
77
processes processed so far
So far, we have seen how processes are processed by operating systems.
78
processes processed so far with a single thread of control.
But, in all these discussions, We assume that a process has only a single thread of control;
79
processes processed so far with a single thread of control.
multi-threads for >1 task? If a process has multiple threads of control, it can perform more than one task at a time. Then how does that work?
80
Here’s how a single-threaded process differs from a multithreaded process.
81
thread ID program counter register set stack
As a basic unit of CPU utilization, a thread comprises a thread ID, a program counter, a register set, and a stack.
82
shared thread ID program counter register set stack
But it needs to share with other threads belonging to the same process some other information like code section, data section, and other OS resources, such as open files and signals.
83
Thread: Examples One App One proc Several Threads Web browser
a thread displays images or text; another retrieves data from the network Word processor a thread displays graphics or text; a thread responds to keystrokes; another does spelling/grammar checking On a multithreaded system, one application is assigned one process, which in turn comprises several threads. For example, when you are using a web browser, one thread may be displaying images or text while another retrieves data from the network. Similarly, when you are editing a word-processing file, one thread displays the graphics or text, another responds to keystrokes, and yet another may be checking spelling and grammar.
84
Multithreaded Server Arch
Here’s the architecture of how a multithreaded server handles threads. If the server is multithreaded, it creates a separate thread that listens for client requests. When (1) a request is made, (2) the server creates a new thread to service the request, and (3) resumes listening for additional requests.
85
benefits? Yes, I’m gonna ask you the same question again.
What benefits can we get from multithreaded systems? Can you think of any?
86
Responsiveness For an interactive application
Keeps running even if part of it is blocked or is performing a lengthy op App remains responsive to the user One major benefit of multithreading / multithreaded programming is responsiveness. For example, consider what happens when a user clicks a button that results in the performance of a time-consuming operation. A single-threaded application would be unresponsive to the user until the operation had completed. In contrast, if the time-consuming operation is performed in a separate thread, the application remains responsive to the user.
87
Resource Sharing Threads share the memory and the resources of their process by default Allows an application to have several threads within the same address space Multiple processes share resources via shared memory and message passing The second benefit would be ease of resource sharing. Threads share the memory and the resources of the process to which they belong by default. The benefit of sharing code and data is that it allows an application to have several different threads of activity within the same address space. Why is this beneficial? Hopefully you still remember that, before multithreading, multiple processes could share resources only through special techniques such as shared memory and message passing.
88
Economy Economize resource usage as threads share resources of the same process Significantly more time consuming to create and manage processes than threads e.g., Solaris process creation: 30x slower context switch: 5x slower The third benefit is that by using multithreading, you can economize computer resources. Because threads simply share the resources allocated to the same process, the system does not need to allocate individual, additional resources for them. In general, it is significantly more time consuming to create and manage processes than threads. For example, in Solaris, creating a process is about thirty times slower than creating a thread, and context switching is about five times slower.
89
Scalability In a multiprocessor architecture
Threads run in parallel on different cores A single-threaded process can run on only one processor, albeit given many Also, on a multiprocessor architecture, threads can run in parallel on different processing cores. Apparently, this speeds up the execution of the threads and therefore of the process they belong to. However, a single-threaded process, no matter how many processors it is given, can run on only one processor.
90
Concurrent execution on a single-core system
How a multicore system helps improve execution speed, by having more than one thread executed at the same time; Parallel execution on a multicore system
91
Concurrent execution on a single-core system
what’s the difference between concurrency and parallelism? Parallel execution on a multicore system
92
Concurrent execution on a single-core system
Supports more than one task by allowing all tasks to make progress; Rapidly switch between processes; A system is parallel if it can perform more than one task simultaneously. In contrast, a concurrent system supports more than one task by allowing all the tasks to make progress. On a single-core system, the CPU needs to rapidly switch between processes to provide the illusion that more than one process is executing at the same time. Parallel execution on a multicore system Perform more than one task simultaneously
93
Parallelism Types Data parallelism
distribute subsets of the same data across multiple computing cores, and perform the same operation on each core Task parallelism distribute not data but tasks (threads) across multiple computing cores; the data could be the same or different; In general, there are two types of parallelism: one is called data parallelism; the other is called task parallelism.
94
benefits? to what extent? Now you may wonder,
I have a program to run, and have a multicore system to use, Then to what extent can we improve the performance?
95
Amdahl’s Law N: the number of processing cores
S: the portion of the application that must be performed serially Performance Gain: speedup ≤ 1 / (S + (1 − S)/N) This can be quantified by Amdahl’s law, which goes like this: when running a program on a system with N processing cores, since usually we cannot parallelize the entire program, we use S to denote the portion of the application that must be performed serially. The parallel portion is then 1 − S. According to Amdahl’s law, the speedup you can gain from the multicore system is at most 1 / (S + (1 − S)/N). Some portion has to be serial
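As a quick worked example with hypothetical numbers: suppose 75% of an application can be parallelized (S = 0.25) and it runs on N = 4 cores.

\[
\text{speedup} \le \frac{1}{S + \frac{1 - S}{N}}
              = \frac{1}{0.25 + \frac{0.75}{4}}
              = \frac{1}{0.4375} \approx 2.29
\]

Even with an unlimited number of cores, the speedup is bounded by 1/S = 4, which is why the serial portion dominates as N grows.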
96
benefits? at what cost? Designers of operating systems must write scheduling algorithms that use multiple processing cores to allow the parallel execution; Also, for application programmers, the challenge is to modify existing programs as well as design new programs that are multithreaded. In general, five areas present challenges in programming for multicore systems.
97
Programming Challenges
Identify separate, concurrent tasks Balance tasks with equal work and value Split data across separate cores Synchronize tasks if data dependency Test and debug with many exec paths We need to divide an application into separate, concurrent tasks; ideally they are independent of one another and can thus run in parallel on individual cores. Also, the workload of each task should be balanced. Say your computer has two cores and you divide your application into two tasks: it is not very smart if one of the tasks requires one hour to finish while the other takes just seconds. Besides dividing the application, we also need to split the data that will be processed by the tasks. It becomes complex if there are data dependencies among tasks. That is, if one task depends on data from another, programmers must ensure that the execution of the tasks is well scheduled and synchronized. Also, given a number of tasks on different cores, a different scheduling decision may make the program execution follow different paths. Then, when an error occurs, it is harder to test and debug than when all tasks are scheduled on a single core.
98
Many-to-One Model efficient thread management in user space; only one thread can access kernel at a time; unable to run in parallel on multicore sys Three multithreading models Many-to-one model maps many user-level threads to one kernel thread. Thread management is done by the thread library in user space, so it is efficient. However, the entire process will block if a thread makes a blocking system call. Also because only one thread can access the kernel at a time, multiple threads are unable to run in parallel on multicore systems.
99
One-to-One Model provides more concurrency than the many-to-one model; multiple threads run in parallel on multicore systems; creating many kernel threads burdens application performance; The one-to-one model maps each user thread to a kernel thread. It provides more concurrency than the many-to-one model.
100
Many-to-Many Model multiplexes many user-level threads to a smaller or equal number of kernel threads; create as many user threads as necessary; corresponding kernel threads run in parallel on a multiprocessor Developers can create as many user threads as necessary, and the corresponding kernel threads can run in parallel on a multiprocessor.
101
Many-to-Many Model variant: two-level
multiplexes many user-level threads to a smaller or equal number of kernel threads; also allows a user-level thread to be bound to a kernel thread A variation on the many-to-many model, called the two-level model, still multiplexes many user-level threads to a smaller or equal number of kernel threads but also allows a user-level thread to be bound to a kernel thread.
102
Asynchronous vs Synchronous
When creating a thread, we can follow either asynchronous threading or synchronous threading.
103
Async Threading Once the parent creates a child thread,
the parent resumes its execution The parent and child execute concurrently and independently The parent thread need not know when its child terminates
104
Sync Threading The parent thread creates one or more children,
created threads work concurrently, but the parent must wait for all children to terminate before it resumes Also called fork-join strategy
105
how to create threads?
106
API provided by thread library
how to create threads: API provided by thread library Create threads with APIs provided by thread libraries
107
API provided by thread library API provided by Pthreads
how to create threads: API provided by thread library API provided by Pthreads API provided by Windows Three popular thread libraries: Pthreads (for Linux and Unix), Windows, and Java API provided by Java
108
API provided by thread library API provided by Pthreads
how to create threads: API provided by thread library Example: API provided by Pthreads, summation of 0~n Now let’s use Pthreads as an example to see how a multithreaded program calculates the summation of zero through a certain integer.
109
where the first thread begins
When the program begins, a single thread of control begins in the main() function
110
where a new thread is created
It calls pthread_create() to create a new thread, which begins control in the runner() function and performs the summation operation.
111
where the new thread terminates
The new thread terminates after it completes the summation.
112
where the parent thread monitors the termination of child threads
The parent thread monitors the child’s termination by calling the pthread_join() function and waiting for the new thread to terminate.
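The full listing is not reproduced in this transcript; below is a sketch of the standard Pthreads summation pattern being described, with main() creating one thread that runs runner() and then joining it.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

int sum;                        /* shared by the threads              */
void *runner(void *param);      /* the new thread starts here         */

int main(int argc, char *argv[]) {
    pthread_t tid;              /* thread identifier                  */
    pthread_attr_t attr;        /* thread attributes                  */

    if (argc != 2) {
        fprintf(stderr, "usage: sum <integer>\n");
        return 1;
    }

    pthread_attr_init(&attr);                     /* default attributes          */
    pthread_create(&tid, &attr, runner, argv[1]); /* where the thread is created */
    pthread_join(tid, NULL);                      /* parent waits for the child  */
    printf("sum = %d\n", sum);
    return 0;
}

void *runner(void *param) {     /* performs the summation of 1..upper */
    int upper = atoi(param);
    sum = 0;
    for (int i = 1; i <= upper; i++)
        sum += i;
    pthread_exit(0);            /* where the new thread terminates    */
}

Compile with something like gcc sum.c -lpthread; the exact flags depend on the platform.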
113
where you create more threads on a multiprocessor
And yes, when you have sufficient resources, such as multiple cores, you can always create more threads to speed up the summation operation.
114
where you create more threads on a multiprocessor
unlimited number of threads could exhaust system resources But the question is: if we do not place a bound on the number of threads concurrently active in the system, unlimited threads could exhaust system resources such as CPU time or memory.
115
where you create more threads on a multiprocessor
unlimited number of threads could exhaust system resources PUT A LIMIT! Apparently, we need to put a limit on the number of threads to be created
116
Thread Pools At process startup, create threads in a pool, where they wait for work Awaken a thread upon a service request The awakened thread serves the request and returns to the pool after completion Faster, as there is no need to wait for thread creation Free of resource exhaustion One solution to this problem is thread pools. The idea is to create a number of threads at process startup and place them into a pool, where they wait for work. When a server receives a request, it awakens a thread from this pool to service the request.
117
Benefits of thread pools: servicing a request with an existing thread is faster, since there is no need to wait for thread creation, and limiting the number of threads in the pool keeps the system free of resource exhaustion.
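A minimal, hedged sketch of the thread-pool idea in C: the workers are created once at startup and then repeatedly claim requests from a shared counter. A production pool would instead block on a condition variable while waiting for new work; the names and sizes here are illustrative.

#include <pthread.h>
#include <stdio.h>

#define POOL_SIZE 4           /* threads created once, at process startup */
#define NUM_REQUESTS 10       /* pretend workload                         */

static int next_request = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void service(int id) {             /* placeholder request handler */
    printf("servicing request %d\n", id);
}

static void *worker(void *arg) {
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&lock);
        if (next_request >= NUM_REQUESTS) {   /* no more work to claim     */
            pthread_mutex_unlock(&lock);
            break;
        }
        int id = next_request++;              /* claim the next request    */
        pthread_mutex_unlock(&lock);
        service(id);                          /* serve, then loop for more */
    }
    return NULL;
}

int main(void) {
    pthread_t pool[POOL_SIZE];
    for (int i = 0; i < POOL_SIZE; i++)       /* build the pool up front   */
        pthread_create(&pool[i], NULL, worker, NULL);
    for (int i = 0; i < POOL_SIZE; i++)
        pthread_join(pool[i], NULL);
    return 0;
}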
118
Review What is a process/thread? How to manage processes/threads?
How do processes communicate?
119
Chapter 3-4 The contents to be discussed can be found in chapters 3 and 4.
120
?
121
Thank You
122
#What’s More You and Your Research by Richard Hamming
How to Write a Great Research Paper by Simon Peyton Jones How to Give a Great Research Talk A Radical New Way to Control the English Language by George Gopen Again, I highly encourage you to engage in research practice. Here is some useful advice for research.
123
#What’s More Teach to Learn: A Privilege of Junior Faculty by Kai Bu Leaf, like you will sparkle. Also, a post I wrote about teaching. For you to know more about me, my teaching style, and how I get along with students.