Outline
- Introduction
- The operating system layer
- Protection
- Processes and threads
- Communication and invocation
- Operating system architecture
- Summary
Instructor’s Guide for Coulouris, Dollimore and Kindberg Distributed Systems: Concepts and Design Edn © Addison-Wesley Publishers 2000
6.1 Introduction
In this chapter we continue to focus on remote invocations without real-time guarantees. An important theme of the chapter is the role of the system kernel. The chapter aims to give the reader an understanding of the advantages and disadvantages of splitting functionality between protection domains (kernel and user-level code). We shall examine the relationship between the operating system layer and the middleware layer, and in particular how well the requirements of middleware can be met by the operating system:
- efficient and robust access to physical resources
- the flexibility to implement a variety of resource-management policies
Introduction (2)
The task of any operating system is to provide problem-oriented abstractions of the underlying physical resources (for example, sockets rather than raw network access):
- the processors
- memory
- communications
- storage media
Through the system call interface, the operating system takes over the physical resources on a single node and manages them to present these resource abstractions.
Introduction (3): Network operating systems
Network operating systems have a network capability built into them and so can be used to access remote resources. Access is network-transparent for some (but not all) types of resource.
- Multiple system images: the nodes running a network operating system retain autonomy in managing their own processing resources.
- Single system image: one could envisage an operating system in which users are never concerned with where their programs run, or with the location of any resources; the operating system has control over all the nodes in the system. An operating system that produces a single system image like this for all the resources in a distributed system is called a distributed operating system.
Introduction (4): Middleware and network operating systems
In fact, there are no distributed operating systems in general use, only network operating systems. The first reason is that users have much invested in their application software, which often meets their current problem-solving needs. The second reason against the adoption of distributed operating systems is that users tend to prefer to have a degree of autonomy for their machines, even in a closely knit organization. The combination of middleware and network operating systems provides an acceptable balance between the requirement for autonomy and network-transparent access to resources.
Figure 6.1 System layers
Figure 6.1 shows how the operating system layer at each of two nodes supports a common middleware layer in providing a distribution infrastructure for applications and services.
The operating system layer
Our goal in this chapter is to examine the impact of particular OS mechanisms on middleware's ability to deliver distributed resource sharing to users. Kernels and server processes are the components that manage resources and present clients with an interface to the resources:
- Encapsulation: provide a useful service interface to their resources
- Protection
- Concurrent processing
- Communication
- Scheduling
Figure 6.2 Core OS functionality
Figure 6.2 shows the core OS functionality that we shall be concerned with: process and thread management, memory management, and communication between processes on the same computer:
- handling the creation of, and operations upon, processes
- communication between threads attached to different processes on the same computer
- thread creation, synchronization and scheduling
- management of physical and virtual memory
- dispatching of interrupts, system call traps and other exceptions
6.3 Protection
We said above that resources require protection from illegitimate accesses. Note that the threat to a system's integrity does not come only from maliciously contrived code: benign code that contains a bug or that has unanticipated behaviour may cause part of the rest of the system to behave incorrectly. Protecting a file consists of two sub-problems:
- The first is to ensure that each of the file's two operations (read and write) can be performed only by clients with the right to perform it.
- The other type of illegitimate access, which we shall address here, is where a misbehaving client sidesteps the operations that the resource exports.
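The first sub-problem can be pictured as a rights check performed by the server before each operation. A minimal sketch in Python; the class and method names are hypothetical, not from the chapter:

```python
# Illustrative rights check for a protected file: each client holds a
# set of permitted operations, checked before the operation runs.

class ProtectedFile:
    def __init__(self, contents=b""):
        self._contents = contents
        self._rights = {}  # client id -> set of permitted operations

    def grant(self, client, operation):
        self._rights.setdefault(client, set()).add(operation)

    def _check(self, client, operation):
        if operation not in self._rights.get(client, set()):
            raise PermissionError(f"{client} may not {operation}")

    def read(self, client):
        self._check(client, "read")
        return self._contents

    def write(self, client, data):
        self._check(client, "write")
        self._contents = data

f = ProtectedFile(b"secret")
f.grant("alice", "read")
assert f.read("alice") == b"secret"
denied = False
try:
    f.write("alice", b"x")       # alice holds no write right
except PermissionError:
    denied = True
assert denied
```

The second sub-problem (a client sidestepping the exported operations altogether) cannot be solved at this level; it needs the kernel and address-space protection discussed next.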
Kernel and protection
The kernel is a program that is distinguished by the facts that it always runs and that its code is executed with complete access privileges for the physical resources on its host computer. A kernel process executes with the processor in supervisor (privileged) mode; the kernel arranges that other processes execute in user (unprivileged) mode. The kernel also sets up address spaces to protect itself and other processes from the accesses of an aberrant process, and to provide processes with their required virtual memory layout. A process can safely transfer from a user-level address space to the kernel's address space via an exception such as an interrupt or a system call trap.
6.4 Processes and threads
A thread is the operating system abstraction of an activity (the term derives from the phrase "thread of execution"). An execution environment is the unit of resource management: a collection of local kernel-managed resources to which its threads have access. An execution environment primarily consists of:
- an address space
- thread synchronization and communication resources such as semaphores and communication interfaces
- higher-level resources such as open files and windows
6.4.1 Address spaces
An address space is made up of regions, separated by inaccessible areas of virtual memory. Regions do not overlap. Each region is specified by the following properties:
- its extent (lowest virtual address and size)
- read/write/execute permissions for the process's threads
- whether it can be grown upwards or downwards
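The region properties listed above can be modelled directly, including the no-overlap invariant. A small sketch; the field names and addresses are illustrative assumptions:

```python
# Toy model of address-space regions: extent, permissions, growth
# direction, and an overlap check matching the no-overlap rule.
from dataclasses import dataclass

@dataclass
class Region:
    base: int        # lowest virtual address
    size: int        # extent
    perms: str       # subset of "rwx"
    growable: str    # "up", "down", or "no"

    @property
    def top(self):
        return self.base + self.size

    def overlaps(self, other):
        return self.base < other.top and other.base < self.top

text  = Region(base=0x1000,  size=0x4000, perms="rx", growable="no")
heap  = Region(base=0x8000,  size=0x2000, perms="rw", growable="up")
stack = Region(base=0xF0000, size=0x1000, perms="rw", growable="down")

assert not text.overlaps(heap)       # distinct regions must not overlap
assert heap.overlaps(Region(0x9000, 0x100, "rw", "no"))
```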
Figure 6.3 Address space
A region is an area of contiguous virtual memory that is accessible by the threads of the owning process.
6.4.1 Address spaces (2)
A mapped file is one that is accessed as an array of bytes in memory; the virtual memory system ensures that accesses made in memory are reflected in the underlying file storage. A shared memory region is one that is backed by the same physical memory as one or more regions belonging to other address spaces. The uses of shared regions include the following:
- libraries
- kernel
- data sharing and communication
6.4.2 Creation of a new process
The creation of a new process has traditionally been an indivisible operation provided by the operating system, for example the UNIX fork system call. For a distributed system, the design of the process-creation mechanism has to take account of the utilization of multiple computers. The creation of a new process can be separated into two independent aspects:
- the choice of a target host
- the creation of an execution environment
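The separation between a parent's and a child's execution environments can be demonstrated with a small sketch. Python's multiprocessing stands in for fork here for portability; the point shown is that the child mutates its own copy of the data, invisibly to the parent:

```python
# Sketch: a child process gets its own execution environment (address
# space), so its mutations do not affect the parent's data.
import multiprocessing as mp

def child(counter, q):
    counter.append(1)            # mutates the child's copy only
    q.put(len(counter))

def demo():
    counter = []
    q = mp.Queue()
    p = mp.Process(target=child, args=(counter, q))
    p.start()
    seen_by_child = q.get()      # length the child observed
    p.join()
    return seen_by_child, counter

if __name__ == "__main__":
    seen, parent_copy = demo()
    assert seen == 1             # the child saw its own copy grow
    assert parent_copy == []     # the parent's list is untouched
```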
Choice of process host
The choice of node at which the new process will reside (the process allocation decision) is a matter of policy.
- Transfer policy: determines whether to situate a new process locally or remotely, for example depending on whether the local node is lightly or heavily loaded.
- Location policy: determines which node should host a new process selected for transfer. This decision may depend on the relative loads of nodes, on their machine architectures and on any specialized resources they may possess.
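The two policies can be sketched as two small functions; the threshold value and the load representation are assumptions for illustration, not from the chapter:

```python
# Illustrative transfer and location policies for process allocation.

def transfer_policy(local_load, threshold=0.8):
    """Decide whether to create the process locally or transfer it."""
    return "local" if local_load < threshold else "remote"

def location_policy(loads):
    """Pick the least-loaded candidate host for a transferred process."""
    return min(loads, key=loads.get)

assert transfer_policy(0.3) == "local"
assert transfer_policy(0.95) == "remote"
assert location_policy({"hostA": 0.9, "hostB": 0.2}) == "hostB"
```

A real location policy might also weigh machine architecture and specialized resources, as the slide notes; this sketch uses load alone.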
Choice of process host (3)
- In sender-initiated load-sharing algorithms, the node that requires a new process to be created is responsible for initiating the transfer decision.
- In receiver-initiated algorithms, a node whose load is below a given threshold advertises its existence to other nodes, so that relatively overloaded nodes will transfer work to it.
- Migratory load-sharing systems can shift load at any time, not just when a new process is created; they use a mechanism called process migration.
Eager et al. studied three approaches to load sharing and concluded that simplicity is an important property of any load-sharing scheme.
Creation of a new execution environment
There are two approaches to defining and initializing the address space of a newly created process:
- The address space is of statically defined format. For example, it could contain just a program text region, heap region and stack region. Address space regions are initialized from an executable file or filled with zeroes as appropriate.
- The address space is defined with respect to an existing execution environment. For example, the newly created child process physically shares the parent's text region, and has heap and stack regions that are copies of the parent's in extent (as well as in initial contents). When parent and child share a region, the page frames belonging to the parent's region are mapped simultaneously into the corresponding child region.
Figure 6.4 Copy-on-write: (a) before write, (b) after write. Process A's region RA is shared copy-on-write with process B's region RB; the shared frames are mapped by both A's and B's page tables. The pages are initially write-protected at the hardware level, so a write causes a page fault; the page fault handler then allocates a new frame for process B and copies the original frame's data into it byte by byte.
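The copy-on-write sequence in Figure 6.4 can be simulated in a few lines. This is a toy model, not real memory management: frames hold page data, both page tables initially map the same write-protected frames, and a write to a protected frame triggers a simulated page fault that copies the frame before remapping the faulting process:

```python
# Toy copy-on-write simulation: shared, write-protected frames are
# copied lazily, on the first write, for the writing process only.
frames = {0: bytearray(b"page0"), 1: bytearray(b"page1")}
page_table_A = {0: 0, 1: 1}          # page number -> frame number
page_table_B = dict(page_table_A)    # B initially shares A's frames
write_protected = {0, 1}             # frames marked read-only

def write(page_table, page, data):
    frame = page_table[page]
    if frame in write_protected:     # simulated page fault
        new_frame = max(frames) + 1
        frames[new_frame] = bytearray(frames[frame])  # copy byte by byte
        page_table[page] = new_frame # remap the faulting process only
        frame = new_frame
    frames[frame][:len(data)] = data

write(page_table_B, 0, b"B!")
assert bytes(frames[page_table_B[0]]).startswith(b"B!")
assert bytes(frames[page_table_A[0]]) == b"page0"  # A's copy unchanged
```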
Processes and threads
A process consists of an execution environment together with one or more threads. A thread is the operating system abstraction of an activity. An execution environment is the unit of resource management: a collection of local kernel-managed resources to which its threads have access.
Coulouris, Dollimore and Kindberg, Distributed Systems: Concepts & Design Edn. 4, Pearson Education 2005
An execution environment consists of:
- an address space
- thread synchronization and communication resources (e.g., semaphores, sockets)
- higher-level resources (e.g., file systems, windows)
Threads
Threads are schedulable activities attached to processes. The aims of having multiple threads of execution are:
- to maximize the degree of concurrent execution between operations
- to enable the overlap of computation with input and output
- to enable concurrent processing on multiprocessors
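The second aim, overlapping computation with I/O, can be sketched as follows: while one thread sleeps (standing in for a disk or network wait), the main thread keeps computing, so the total elapsed time is close to the I/O time alone. The sleep duration and bound are illustrative:

```python
# Overlapping computation with I/O using two threads.
import threading, time

results = {}

def slow_io():
    time.sleep(0.2)                  # simulated blocking I/O
    results["io"] = "data"

t = threading.Thread(target=slow_io)
start = time.monotonic()
t.start()
results["compute"] = sum(range(100_000))  # runs during the I/O wait
t.join()
elapsed = time.monotonic() - start

assert results["io"] == "data"
assert elapsed < 1.0                 # roughly the 0.2 s I/O time, not the sum
```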
Threads can be helpful within servers: concurrent processing of clients' requests can reduce the tendency for servers to become a bottleneck. For example, one thread can process a client's request while a second thread serving another request waits for a disk access to complete.
Processes vs. threads
Threads are "lightweight" processes: processes are expensive to create, but threads are easier to create and destroy.
Figure 6.5 Client and server with threads
Figure 6.5 shows the worker-pool architecture: an input-output thread in the server receives clients' requests and queues them, and a pool of N worker threads removes requests from the shared queue, processes them and generates the results. A disadvantage of this architecture is its inflexibility. Another disadvantage is the high level of switching between the I/O and worker threads as they manipulate the shared queue.
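The worker-pool architecture above can be sketched with a shared queue and a fixed pool of worker threads; the sentinel-based shutdown is an implementation detail assumed for the sketch:

```python
# Minimal worker pool: requests go onto one shared queue; N worker
# threads dequeue and serve them. A None sentinel stops each worker.
import queue, threading

requests = queue.Queue()
replies = []
replies_lock = threading.Lock()

def worker():
    while True:
        req = requests.get()
        if req is None:              # shutdown sentinel
            break
        with replies_lock:
            replies.append(f"done:{req}")

N = 4
pool = [threading.Thread(target=worker) for _ in range(N)]
for t in pool:
    t.start()
for i in range(10):                  # the "receipt & queuing" step
    requests.put(i)
for _ in pool:
    requests.put(None)
for t in pool:
    t.join()

assert sorted(replies) == sorted(f"done:{i}" for i in range(10))
```

Note how the single shared queue is exactly the contention point the slide criticizes: every worker and the I/O path synchronize on it.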
Figure 6.6 Alternative server threading architectures (see also Figure 6.5)
- Thread-per-request: the server creates a new worker thread for each request. Advantage: the threads do not contend for a shared queue, and throughput is potentially maximized. Disadvantage: the overheads of the thread creation and destruction operations.
- Thread-per-connection: associates a thread with each connection; the server creates a new worker thread when a client makes a connection.
- Thread-per-object: associates a thread with each object; an I/O thread receives requests and queues them for the workers, but this time there is a per-object queue.
In each of these last two architectures the server benefits from lowered thread-management overheads compared with the thread-per-request architecture. Their disadvantage is that clients may be delayed while one worker thread has several outstanding requests but another thread has no work to perform.
A comparison of processes and threads
- Creating a new thread within an existing process is cheaper than creating a process.
- More importantly, switching to a different thread within the same process is cheaper than switching between threads belonging to different processes.
- Threads within a process may share data and other resources conveniently and efficiently compared with separate processes.
- But, by the same token, threads within a process are not protected from one another.
A comparison of processes and threads (2)
The overheads associated with creating a process are in general considerably greater than those of creating a new thread: a new execution environment must first be created, including address space tables. The second performance advantage of threads concerns switching between threads, that is, running one thread instead of another within a given process.
Thread scheduling
In preemptive scheduling, a thread may be suspended at any point to make way for another thread. In non-preemptive scheduling, a thread runs until it makes a call to the threading system (for example, a system call). The advantage of non-preemptive scheduling is that any section of code that does not contain a call to the threading system is automatically a critical section; race conditions are thus conveniently avoided. However, non-preemptively scheduled threads cannot take advantage of a multiprocessor, since they run exclusively.
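Under preemptive scheduling the automatic critical sections disappear: a thread may be suspended in the middle of a read-modify-write, so shared state needs explicit protection. A small sketch with an explicit lock:

```python
# With preemptive scheduling, a shared counter must be updated inside
# an explicit critical section to avoid lost updates.
import threading

counter = 0
counter_lock = threading.Lock()

def bump(n):
    global counter
    for _ in range(n):
        with counter_lock:       # explicit critical section
            counter += 1

threads = [threading.Thread(target=bump, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert counter == 40_000         # no increments were lost
```

Under non-preemptive scheduling the lock would be unnecessary here, since `counter += 1` contains no call to the threading system.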
Thread implementation
When no kernel support for multi-threaded processes is provided, a user-level threads implementation suffers from the following problems:
- The threads within a process cannot take advantage of a multiprocessor.
- A thread that takes a page fault blocks the entire process and all threads within it.
- Threads within different processes cannot be scheduled according to a single scheme of relative prioritization.
6.5.1 Invocation performance
Invocation performance is a critical factor in distributed system design. Network technologies continue to improve, but invocation times have not decreased in proportion with increases in network bandwidth. This section explains how software overheads often predominate over network overheads in invocation times.
Figure 6.11 Invocations between address spaces
Figure 6.12 RPC delay against parameter size
Client delay against requested data size: the delay is roughly proportional to the size until the size reaches a threshold at about network packet size.
The following are the main components accounting for remote invocation delay, besides network transmission times:
- Marshalling: marshalling and unmarshalling, which involve copying and converting data, become a significant overhead as the amount of data grows.
- Data copying: potentially, even after marshalling, message data is copied several times in the course of an RPC: across the user-kernel boundary, between the client or server address space and kernel buffers; across each protocol layer (for example, RPC/UDP/IP/Ethernet); and between the network interface and kernel buffers.
- Packet initialization: this involves initializing protocol headers and trailers, including checksums. The cost is therefore proportional, in part, to the amount of data sent.
- Thread scheduling and context switching: several system calls (that is, context switches) are made during an RPC, as stubs invoke the kernel's communication operations; one or more server threads is scheduled; and if the operating system employs a separate network manager process, then each Send involves a context switch to one of its threads.
- Waiting for acknowledgements: the choice of RPC protocol may influence delay, particularly when large amounts of data are sent.
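The marshalling and packet-initialization steps can be made concrete with a small sketch. The field layout (a 2-byte operation id, a 4-byte payload length, and a CRC-32 trailer) is an assumption for illustration, not any standard RPC wire format:

```python
# Illustrative marshalling/packet initialization: header + payload +
# checksum trailer, and the matching unmarshal step.
import struct, zlib

def marshal(op_id: int, payload: bytes) -> bytes:
    header = struct.pack("!HI", op_id, len(payload))   # 6-byte header
    trailer = struct.pack("!I", zlib.crc32(header + payload))
    return header + payload + trailer

def unmarshal(packet: bytes):
    op_id, length = struct.unpack("!HI", packet[:6])
    payload = packet[6:6 + length]
    (crc,) = struct.unpack("!I", packet[6 + length:])
    assert crc == zlib.crc32(packet[:6 + length]), "corrupt packet"
    return op_id, payload

pkt = marshal(7, b"hello")
assert unmarshal(pkt) == (7, b"hello")
```

Even this toy version shows why the costs above scale with data size: both the copy into the packet buffer and the checksum touch every payload byte.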
A lightweight remote procedure call
The LRPC design is based on optimizations concerning data copying and thread scheduling. Client and server are able to pass arguments and return values directly via an A stack; the same stack is used by the client and server stubs. In LRPC, arguments are copied once, when they are marshalled onto the A stack; in an equivalent RPC, they are copied four times.
Figure 6.13 A lightweight remote procedure call
6.5.2 Asynchronous operation
A common technique to defeat high latencies is asynchronous operation, which arises in two programming models:
- concurrent invocations
- asynchronous invocations
An asynchronous invocation is one that is performed asynchronously with respect to the caller. That is, it is made with a non-blocking call, which returns as soon as the invocation request message has been created and is ready for dispatch.
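The asynchronous-invocation model can be sketched with futures: the call returns immediately with a handle, the caller keeps working, and it later rendezvouses with the result. The `remote_call` function here is a stand-in for a real remote invocation:

```python
# Sketch of an asynchronous invocation using a future: submit() is
# non-blocking, and result() is the later rendezvous with the reply.
from concurrent.futures import ThreadPoolExecutor
import time

def remote_call(x):
    time.sleep(0.1)              # stands in for network + server time
    return x * 2

with ThreadPoolExecutor() as pool:
    future = pool.submit(remote_call, 21)   # returns immediately
    local_work = "done meanwhile"           # caller continues
    reply = future.result()                 # blocks only here
assert reply == 42
assert local_work == "done meanwhile"
```

Concurrent invocations follow the same pattern with several futures outstanding at once, which is what produces the pipelining effect shown in Figure 6.14.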
Figure 6.14 Times for serialized and concurrent invocations
The figure contrasts serialized invocations with concurrent invocations, whose speedup comes from pipelining requests and replies.
Operating System Architecture
The kernel would provide only the most basic mechanisms upon which the general resource-management tasks at a node are carried out. Server modules would be dynamically loaded as required, to implement the required resource-management policies for the currently running applications.
Monolithic kernels
A monolithic kernel can contain some server processes that execute within its address space, including file servers and some networking. The code that these processes execute is part of the standard kernel configuration. (Figure 5)
Figure 5. Monolithic kernel and microkernel
Microkernel
The microkernel appears as a layer between the hardware layer and a layer consisting of major system components. If performance is the goal, rather than portability, then middleware may use the facilities of the microkernel directly. (Figure 6)
Figure 6. The role of the microkernel
Monolithic and microkernel comparison
The advantages of a microkernel:
- its extensibility
- its ability to enforce modularity behind memory-protection boundaries
- its relatively small kernel has less complexity
The advantage of a monolithic kernel:
- the relative efficiency with which operations can be invoked, because invocation of a service in a separate user-level address space, even on the same node, is more costly than an in-kernel invocation.
Figure 6.15 Monolithic kernel and microkernel
The microkernel provides only the most basic abstractions, principally address spaces, threads and local interprocess communication. Where these designs differ primarily is in the decision as to what functionality belongs in the kernel and what is to be left to server processes that can be dynamically loaded to run on top of it.
Figure 6.16 The role of the microkernel
Comparison
The chief advantages of a microkernel-based operating system are its extensibility and the fact that a relatively small kernel is more likely to be free of bugs than one that is larger and more complex. The advantage of a monolithic design is the relative efficiency with which operations can be invoked.