Slides for Chapter 6: Operating System support From Coulouris, Dollimore and Kindberg Distributed Systems: Concepts and Design Edition 3, © Addison-Wesley.

1 Slides for Chapter 6: Operating System support From Coulouris, Dollimore and Kindberg Distributed Systems: Concepts and Design Edition 3, © Addison-Wesley 2001

2 Instructor’s Guide for Coulouris, Dollimore and Kindberg, Distributed Systems: Concepts and Design Edn. 3, © Addison-Wesley Publishers 2000

Outline
- Introduction
- The operating system layer
- Protection
- Processes and threads
- Communication and invocation
- Operating system architecture
- Summary

3 Introduction
- In this chapter we shall continue to focus on remote invocation, without real-time guarantees
- An important theme of the chapter is the role of the system kernel
- The chapter aims to give the reader an understanding of the advantages and disadvantages of splitting functionality between protection domains (kernel and user-level code)
- We shall examine the relation between the operating system layer and the middleware layer, and in particular how well the requirements of middleware can be met by the operating system:
  - Efficient and robust access to physical resources
  - The flexibility to implement a variety of resource-management policies

4 Introduction (2)
- The task of any operating system is to provide problem-oriented abstractions of the underlying physical resources (for example, sockets rather than raw network access):
  - processors
  - memory
  - communications
  - storage media
- The system call interface takes over the physical resources on a single node and manages them to present these resource abstractions

5 Introduction (3)
- Network operating systems
  - They have a networking capability built into them and so can be used to access remote resources. Access is network-transparent for some – but not all – types of resource.
  - Multiple system images: the nodes running a network operating system retain autonomy in managing their own processing resources
- Single system image
  - One could envisage an operating system in which users are never concerned with where their programs run, or the location of any resources; the operating system would have control over all the nodes in the system. An operating system that produces a single system image like this for all the resources in a distributed system is called a distributed operating system.

6 Introduction (4) – Middleware and network operating systems
- In fact, there are no distributed operating systems in general use, only network operating systems:
  - First, users have much invested in their application software, which often meets their current problem-solving needs
  - Second, users tend to prefer to have a degree of autonomy over their machines, even in a closely knit organization
- The combination of middleware and network operating systems provides an acceptable balance between the requirement for autonomy and network-transparent resource access

7 Figure 6.1 System layers

8 The operating system layer
- Our goal in this chapter is to examine the impact of particular OS mechanisms on middleware's ability to deliver distributed resource sharing to users
- Kernels and server processes are the components that manage resources and present clients with an interface to the resources. They must supply at least:
  - Encapsulation: provide a useful service interface to their resources
  - Protection
  - Concurrent processing
  - Communication
  - Scheduling

9 Figure 6.2 Core OS functionality
- Process manager: handles the creation of and operations upon processes
- Thread manager: thread creation, synchronization and scheduling
- Communication manager: communication between threads attached to different processes on the same computer
- Memory manager: management of physical and virtual memory
- Supervisor: dispatching of interrupts, system call traps and other exceptions

10 Protection
- We said above that resources require protection from illegitimate accesses. Note that the threat to a system's integrity does not come only from maliciously contrived code: benign code that contains a bug, or that has unanticipated behaviour, may cause part of the rest of the system to behave incorrectly.
- Protecting a file consists of two sub-problems:
  - The first is to ensure that each of the file's two operations (read and write) can be performed only by clients with the right to perform it
  - The other type of illegitimate access, which we shall address here, is where a misbehaving client sidesteps the operations that the resource exports. For example, setFilePointerRandomly is a meaningless operation that would upset normal use of the file and that files would never be designed to export; we can protect resources from such illegitimate invocations by using a kernel, or by using a type-safe programming language (Java or Modula-3).

11 Kernel and protection
- The kernel is a program that is distinguished by the facts that it always remains loaded and that its code is executed with complete access privileges for the physical resources on its host computer
- The kernel executes with the processor in supervisor (privileged) mode; the kernel arranges that other processes execute in user (unprivileged) mode
- A kernel also sets up address spaces to protect itself and other processes from the accesses of an aberrant process, and to provide processes with their required virtual memory layout
- A process can safely transfer from a user-level address space to the kernel's address space via an exception such as an interrupt or a system call trap

12 Processes and threads
- A thread is the operating system abstraction of an activity (the term derives from the phrase "thread of execution")
- An execution environment is the unit of resource management: a collection of local kernel-managed resources to which its threads have access
- An execution environment primarily consists of:
  - An address space
  - Thread synchronization and communication resources such as semaphores and communication interfaces
  - Higher-level resources such as open files and windows

13 Address spaces
- Regions are separated by inaccessible areas of virtual memory
- Regions do not overlap
- Each region is specified by the following properties:
  - Its extent (lowest virtual address and size)
  - Read/write/execute permissions for the process's threads
  - Whether it can be grown upwards or downwards

14 Figure 6.3 Address space

15 Address spaces (2)
- A mapped file is one that is accessed as an array of bytes in memory. The virtual memory system ensures that accesses made in memory are reflected in the underlying file storage
- A shared memory region is one that is backed by the same physical memory as one or more regions belonging to other address spaces
- The uses of shared regions include the following:
  - Libraries
  - Kernel
  - Data sharing and communication

16 Creation of a new process
- Traditionally, the creation of a new process has been an indivisible operation provided by the operating system – for example, the UNIX fork system call
- For a distributed system, the design of the process-creation mechanism has to take account of the utilization of multiple computers
- The creation of a new process can be separated into two independent aspects:
  - The choice of a target host
  - The creation of an execution environment

17 Choice of process host
- The choice of the node at which the new process will reside – the process allocation decision – is a matter of policy
- Transfer policy: determines whether to situate a new process locally or remotely, depending, for example, on whether the local node is lightly or heavily loaded
- Location policy: determines which node should host a new process selected for transfer. This decision may depend on the relative loads of nodes, on their machine architectures and on any specialized resources they may possess

18 Choice of process host (2)
- Process location policies may be static or adaptive
- Load-sharing systems, in which load managers collect information about the nodes and use it to allocate new processes to nodes, may be:
  - Centralized: one load manager component
  - Hierarchical: several load managers organized in a tree structure
  - Decentralized: nodes exchange information with one another directly to make allocation decisions

19 Choice of process host (3)
- In sender-initiated load-sharing algorithms, the node that requires a new process to be created is responsible for initiating the transfer decision
- In receiver-initiated algorithms, a node whose load is below a given threshold advertises its existence to other nodes so that relatively loaded nodes will transfer work to it
- Migratory load-sharing systems can shift load at any time, not just when a new process is created; they use a mechanism called process migration
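The sender- and receiver-initiated decisions above reduce to simple threshold tests. The sketch below is an illustrative toy, not an algorithm from these slides: the class name `TransferPolicy`, the integer load metric and the thresholds are all assumptions made for the example.

```java
// A minimal sketch of sender- and receiver-initiated transfer policies
// plus a least-loaded location policy. Names and thresholds are
// illustrative assumptions, not from the slides.
public class TransferPolicy {

    // Sender-initiated: a heavily loaded node decides to ship a new
    // process elsewhere when its own load exceeds a threshold.
    public static boolean senderShouldTransfer(int localLoad, int threshold) {
        return localLoad > threshold;
    }

    // Receiver-initiated: a lightly loaded node advertises itself
    // when its load falls below a threshold.
    public static boolean receiverShouldAdvertise(int localLoad, int threshold) {
        return localLoad < threshold;
    }

    // Location policy: pick the least-loaded of the known remote nodes.
    public static int chooseHost(int[] remoteLoads) {
        int best = 0;
        for (int i = 1; i < remoteLoads.length; i++) {
            if (remoteLoads[i] < remoteLoads[best]) best = i;
        }
        return best;
    }

    public static void main(String[] args) {
        int[] loads = {7, 2, 5};
        if (senderShouldTransfer(7, 4)) {
            System.out.println("transfer to node " + chooseHost(loads));
        }
    }
}
```

In a real system the load metric is usually a smoothed run-queue length, and the thresholds themselves may be adapted to overall system load.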

20 Creation of a new execution environment
- There are two approaches to defining and initializing the address space of a newly created process:
  - The address space is of statically defined format:
    - For example, it could contain just a program text region, heap region and stack region
    - Address space regions are initialized from an executable file or filled with zeroes as appropriate
  - The address space is defined with respect to an existing execution environment:
    - For example, the newly created child process physically shares the parent's text region, and has heap and stack regions that are copies of the parent's in extent (as well as in initial contents)
    - When parent and child share a region, the page frames belonging to the parent's region are mapped simultaneously into the corresponding child region

21 Figure 6.4 Copy-on-write: a) before write, b) after write
- Region RB in process B's address space is copied from region RA in process A's; initially, A's and B's page tables point to the same shared frames
- The pages are initially write-protected at the hardware level, so the first write to a shared page causes a page fault
- The page fault handler then allocates a new frame for process B and copies the original frame's data into it byte by byte
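The copy-on-write behaviour of Figure 6.4 can be imitated in miniature. The sketch below simulates page-table entries and frames in plain Java (all class names are invented for the example): both entries start by sharing one write-protected frame, and the first write plays the role of the page fault that copies it.

```java
import java.util.Arrays;

// A toy simulation of copy-on-write region sharing: both "page table
// entries" initially reference the same frame; the first write
// allocates a fresh frame and copies the data, as the fault handler
// in Figure 6.4 does. This is illustration, not an OS implementation.
public class CopyOnWrite {

    public static class Frame {
        byte[] data;
        public Frame(byte[] data) { this.data = data; }
    }

    public static class PageTableEntry {
        private Frame frame;
        private boolean writeProtected;  // models the hardware protection bit

        public PageTableEntry(Frame frame, boolean writeProtected) {
            this.frame = frame;
            this.writeProtected = writeProtected;
        }

        public byte read(int offset) { return frame.data[offset]; }

        // Writing a protected page "faults": copy the frame, then write.
        public void write(int offset, byte value) {
            if (writeProtected) {
                frame = new Frame(Arrays.copyOf(frame.data, frame.data.length));
                writeProtected = false;  // the private copy is now writable
            }
            frame.data[offset] = value;
        }
    }

    public static void main(String[] args) {
        Frame shared = new Frame(new byte[]{1, 2, 3});
        PageTableEntry a = new PageTableEntry(shared, true);  // parent's RA
        PageTableEntry b = new PageTableEntry(shared, true);  // child's RB

        b.write(0, (byte) 9);  // triggers the simulated page fault and copy
        System.out.println(a.read(0) + " " + b.read(0));  // 1 9: parent unchanged
    }
}
```

The point of the real mechanism is that the copy on line `Arrays.copyOf(...)` happens only for pages that are actually written; read-only pages stay shared forever.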

22 Threads
- A thread is a simplified form of a process: it contains the information necessary to use the CPU – a program counter, a register set and stack space. Threads of the same program (task) share the code section, the data section and operating-system resources. An operating system that can execute multiple threads at the same time is said to support multithreading.
- The next key aspect of a process to consider in more detail is the possibility for a client or server process to possess more than one thread.

23 Figure 6.5 Client and server with threads
- Client: thread 1 generates results and thread 2 makes requests to the server (input-output)
- Server: an input-output thread handles receipt and queuing of requests, which a worker pool of N threads then serves
- A disadvantage of this architecture is its inflexibility
- Another disadvantage is the high level of switching between the I/O and worker threads as they manipulate the shared queue

24 Figure 6.6 Alternative server threading architectures (see also Figure 6.5)
- Thread-per-request:
  - Advantage: the threads do not contend for a shared queue, and throughput is potentially maximized
  - Disadvantage: the overheads of the thread creation and destruction operations
- Thread-per-connection: associates a thread with each connection
- Thread-per-object: associates a thread with each object
- In each of these last two architectures the server benefits from lowered thread-management overheads compared with the thread-per-request architecture. Their disadvantage is that clients may be delayed while a worker thread has several outstanding requests but another thread has no work to perform.
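The worker-pool style of Figure 6.5 can be sketched with `java.util.concurrent`, which the slides do not themselves use; the fixed thread pool stands in for the worker pool, and `handle` is a placeholder for real request processing (all names here are invented for the example).

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Worker-pool sketch: an I/O thread would enqueue requests; here main
// plays that role, and a fixed pool of N worker threads drains the
// pool's internal shared queue.
public class WorkerPoolServer {

    // Stand-in for real request processing.
    public static int handle(int request) {
        return request * 2;
    }

    // Accept nRequests "requests" and serve them with nWorkers threads;
    // returns the sum of the replies so the effect is observable.
    public static int serve(int nWorkers, int nRequests) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(nWorkers);
        AtomicInteger sum = new AtomicInteger();
        for (int i = 1; i <= nRequests; i++) {
            final int req = i;
            pool.execute(() -> sum.addAndGet(handle(req)));  // queue a request
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        return sum.get();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(serve(4, 10));  // 2+4+...+20 = 110
    }
}
```

A thread-per-request server would call `new Thread(...)` for each request instead, paying the creation/destruction overheads the slide mentions; the pool trades that cost for a bounded degree of concurrency.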

25 Figure 6.7 State associated with execution environments and threads

Execution environment:
- Address space tables
- Communication interfaces, open files
- Semaphores, other synchronization objects
- List of thread identifiers
- Pages of address space resident in memory; hardware cache entries

Thread:
- Saved processor registers
- Priority and execution state (such as BLOCKED)
- Software interrupt handling information
- Execution environment identifier

26 A comparison of processes and threads
- Creating a new thread within an existing process is cheaper than creating a process
- More importantly, switching to a different thread within the same process is cheaper than switching between threads belonging to different processes
- Threads within a process may share data and other resources conveniently and efficiently compared with separate processes
- But, by the same token, threads within a process are not protected from one another

27 Figure 6.8 Java thread constructor and management methods
- Thread(ThreadGroup group, Runnable target, String name): creates a new thread in the SUSPENDED state, which will belong to group and be identified as name; the thread will execute the run() method of target.
- setPriority(int newPriority), getPriority(): set and return the thread's priority.
- run(): a thread executes the run() method of its target object, if it has one, and otherwise its own run() method (Thread implements Runnable).
- start(): changes the state of the thread from SUSPENDED to RUNNABLE.
- sleep(int millisecs): causes the thread to enter the SUSPENDED state for the specified time.
- yield(): enters the READY state and invokes the scheduler.
- destroy(): destroys the thread.
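A minimal runnable use of the Figure 6.8 constructor, assuming nothing beyond the standard Thread API. Note that the figure uses the book's state names (SUSPENDED, RUNNABLE); the actual Java API calls the pre-start state NEW, and destroy() was never usefully implemented and has since been removed from the platform.

```java
// Creating and starting a thread with the three-argument constructor
// from Figure 6.8. The Runnable and the one-element result array are
// example scaffolding, not part of the figure.
public class ThreadDemo {

    public static int runInThread() throws InterruptedException {
        final int[] result = new int[1];
        Runnable target = () -> result[0] = 6 * 7;  // work done by the new thread
        Thread worker = new Thread(Thread.currentThread().getThreadGroup(),
                                   target, "worker");
        worker.start();   // NEW -> RUNNABLE (the figure's SUSPENDED -> RUNNABLE)
        worker.join();    // block until the worker thread terminates
        return result[0];
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runInThread());  // 42
    }
}
```

The join() call is what makes reading result[0] safe: it establishes that the worker has finished (and its writes are visible) before the caller continues.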

28 A comparison of processes and threads (2)
- The overheads associated with creating a process are in general considerably greater than those of creating a new thread:
  - A new execution environment must first be created, including address space tables
- The second performance advantage of threads concerns switching between threads – that is, running one thread instead of another within a given process

29 Context switching
- A context switch is the transition between contexts that takes place when switching between threads, or when a single thread makes a system call or takes another type of exception
- It involves the following:
  - The saving of the processor's original register state, and the loading of the new state
  - In some cases, a transfer to a new protection domain – this is known as a domain transition

30 Thread scheduling
- In preemptive scheduling, a thread may be suspended at any point to make way for another thread
- In non-preemptive scheduling, a thread runs until it makes a call to the threading system (for example, a system call)
- The advantage of non-preemptive scheduling is that any section of code that does not contain a call to the threading system is automatically a critical section
  - Race conditions are thus conveniently avoided
- Non-preemptively scheduled threads cannot take advantage of a multiprocessor, since they run exclusively

31 Thread implementation
- When no kernel support for multi-threaded processes is provided, a user-level threads implementation suffers from the following problems:
  - The threads within a process cannot take advantage of a multiprocessor
  - A thread that takes a page fault blocks the entire process and all threads within it
  - Threads within different processes cannot be scheduled according to a single scheme of relative prioritization

32 Thread implementation (2)
- User-level threads implementations have significant advantages over kernel-level implementations:
  - Certain thread operations are significantly less costly. For example, switching between threads belonging to the same process does not necessarily involve a system call – that is, a relatively expensive trap to the kernel
  - Given that the thread-scheduling module is implemented outside the kernel, it can be customized or changed to suit particular application requirements. Variations in scheduling requirements occur largely because of application-specific considerations such as the real-time nature of multimedia processing
  - Many more user-level threads can be supported than could reasonably be provided by default by a kernel

33 Figure 6.9 Java thread synchronization calls
- thread.join(int millisecs): blocks the calling thread for up to the specified time until thread has terminated.
- thread.interrupt(): interrupts thread: causes it to return from a blocking method call such as sleep().
- object.wait(long millisecs, int nanosecs): blocks the calling thread until a call made to notify() or notifyAll() on object wakes the thread, or the thread is interrupted, or the specified time has elapsed.
- object.notify(), object.notifyAll(): wakes, respectively, one or all of any threads that have called wait() on object.
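The wait()/notifyAll() calls of Figure 6.9 are typically used as guarded blocks: wait in a loop on a condition, and notify when the condition may have changed. The one-slot mailbox below is an invented example class showing that standard pattern.

```java
// A one-slot mailbox built from the wait()/notifyAll() calls of
// Figure 6.9. The Mailbox class is invented for illustration.
public class Mailbox {
    private Integer slot = null;  // null means the slot is empty

    public synchronized void put(int value) throws InterruptedException {
        while (slot != null) wait();   // block until the slot is empty
        slot = value;
        notifyAll();                   // wake any consumer waiting in take()
    }

    public synchronized int take() throws InterruptedException {
        while (slot == null) wait();   // block until a value arrives
        int value = slot;
        slot = null;
        notifyAll();                   // wake any producer waiting in put()
        return value;
    }

    public static void main(String[] args) throws InterruptedException {
        Mailbox box = new Mailbox();
        Thread producer = new Thread(() -> {
            try { for (int i = 1; i <= 3; i++) box.put(i); }
            catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        producer.start();
        int sum = 0;
        for (int i = 0; i < 3; i++) sum += box.take();
        producer.join();
        System.out.println(sum);  // 6
    }
}
```

The while-loop (rather than an if) around wait() matters: notifyAll() wakes every waiter, and a woken thread must re-check the condition before proceeding.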

34 The four types of event that the kernel notifies to the user-level scheduler
- Virtual processor allocated: the kernel has assigned a new virtual processor to the process, and this is the first timeslice upon it; the scheduler can load the SA with the context of a READY thread, which can thus recommence execution
- SA blocked: an SA has blocked in the kernel, and the kernel is using a fresh SA to notify the scheduler; the scheduler sets the state of the corresponding thread to BLOCKED and can allocate a READY thread to the notifying SA
- SA unblocked: an SA that was blocked in the kernel has become unblocked and is ready to execute at user level again; the scheduler can now return the corresponding thread to the READY list. In order to create the notifying SA, the kernel either allocates a new virtual processor to the process or preempts another SA in the same process. In the latter case, it also communicates the preemption event to the scheduler, which can re-evaluate its allocation of threads to SAs.
- SA preempted: the kernel has taken away the specified SA from the process (although it may do this to allocate a processor to a fresh SA in the same process); the scheduler places the preempted thread in the READY list and re-evaluates the thread allocation

35 Figure 6.10 Scheduler activations
A scheduler activation (SA) is a call from the kernel to a process

36 Communication and invocation
- We shall cover operating system design issues and concepts by asking the following questions about the OS:
  - What communication primitives does it supply?
  - Which protocols does it support and how open is the communication implementation?
  - What steps are taken to make communication as efficient as possible?
  - What support is provided for high-latency and disconnected operation?

37 Communication primitives
- In practice it is middleware, and not the kernel, that provides most of the high-level communication facilities found in systems today, including RPC/RMI, event notification and group communication
- Developers typically implement middleware over sockets giving access to Internet standard protocols – often connected sockets using TCP, but sometimes unconnected UDP sockets
- The principal reasons for using sockets are portability and interoperability
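A connected (TCP) socket round trip over the loopback interface shows the primitive that middleware is typically built on. The sketch uses only standard java.net and java.io classes; the `echoOnce` helper and the one-line request/reply "protocol" are invented for the example.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

// A minimal connected-socket (TCP) round trip over loopback: a server
// thread echoes one line, and the client sends a message and reads the
// reply. Port 0 asks the OS for any free ephemeral port.
public class TcpEcho {

    public static String echoOnce(String message) throws IOException {
        try (ServerSocket server = new ServerSocket(0)) {
            Thread serverThread = new Thread(() -> {
                try (Socket s = server.accept();
                     BufferedReader in = new BufferedReader(
                             new InputStreamReader(s.getInputStream()));
                     PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
                    out.println(in.readLine());          // echo the request back
                } catch (IOException e) { /* demo only */ }
            });
            serverThread.start();

            try (Socket client = new Socket("127.0.0.1", server.getLocalPort());
                 PrintWriter out = new PrintWriter(client.getOutputStream(), true);
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(client.getInputStream()))) {
                out.println(message);                    // client request
                return in.readLine();                    // server reply
            }
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(echoOnce("hello"));  // hello
    }
}
```

An RPC layer adds marshalling, request identifiers and retransmission on top of exactly this kind of byte-stream (or, for UDP, datagram) exchange.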

38 Invocation performance
- Invocation performance is a critical factor in distributed system design
- Network technologies continue to improve, but invocation times have not decreased in proportion with increases in network bandwidth
- This section will explain how software overheads often predominate over network overheads in invocation times

39 Figure 6.11 Invocations between address spaces

40 Figure 6.12 RPC delay against parameter size
Client delay against requested data size: the delay is roughly proportional to the size until the size reaches a threshold at about network packet size

41 The following are the main components accounting for remote invocation delay, besides network transmission times:
- Marshalling: marshalling and unmarshalling, which involve copying and converting data, become a significant overhead as the amount of data grows
- Data copying: potentially, even after marshalling, message data is copied several times in the course of an RPC:
  1. Across the user-kernel boundary, between the client or server address space and kernel buffers
  2. Across each protocol layer (for example, RPC/UDP/IP/Ethernet)
  3. Between the network interface and kernel buffers
- Packet initialization: this involves initializing protocol headers and trailers, including checksums; the cost is therefore proportional, in part, to the amount of data sent
- Thread scheduling and context switching:
  1. Several system calls (that is, context switches) are made during an RPC, as stubs invoke the kernel's communication operations
  2. One or more server threads is scheduled
  3. If the operating system employs a separate network manager process, then each Send involves a context switch to one of its threads
- Waiting for acknowledgements: the choice of RPC protocol may influence delay, particularly when large amounts of data are sent
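Marshalling, the first overhead listed above, is just the conversion and copying of structured data into a transmissible byte stream. The sketch below marshals an int array into big-endian bytes with a length prefix; the format is invented for illustration, not any particular RPC system's external data representation.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Marshalling an int array into network-order bytes and back: the
// data-conversion-and-copying step counted as overhead above.
// DataOutputStream writes big-endian, a conventional wire order.
public class Marshal {

    public static byte[] marshal(int[] values) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buf);
        out.writeInt(values.length);            // length prefix
        for (int v : values) out.writeInt(v);   // 4 bytes each, big-endian
        return buf.toByteArray();               // one of the copies an RPC pays for
    }

    public static int[] unmarshal(byte[] bytes) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(bytes));
        int[] values = new int[in.readInt()];
        for (int i = 0; i < values.length; i++) values[i] = in.readInt();
        return values;
    }

    public static void main(String[] args) throws IOException {
        byte[] wire = marshal(new int[]{10, 20, 30});
        System.out.println(wire.length);        // 16: 4-byte length + 3*4 bytes
        System.out.println(unmarshal(wire)[2]); // 30
    }
}
```

Both cost observations from the slide are visible here: the work and the output size grow linearly with the amount of data, and the byte array is yet another copy of data that already existed in the caller's address space.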

42 A lightweight remote procedure call
- The LRPC design is based on optimizations concerning data copying and thread scheduling
- Client and server are able to pass arguments and values directly via an A stack; the same stack is used by the client and server stubs
- In LRPC, arguments are copied once: when they are marshalled onto the A stack. In an equivalent RPC, they are copied four times

43 Figure 6.13 A lightweight remote procedure call

44 Asynchronous operation
- A common technique to defeat high latencies is asynchronous operation, which arises in two programming models:
  - Concurrent invocations
  - Asynchronous invocations
- An asynchronous invocation is one that is performed asynchronously with respect to the caller; that is, it is made with a non-blocking call, which returns as soon as the invocation request message has been created and is ready for dispatch
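An asynchronous invocation can be sketched with a Future: submit() returns immediately, and the caller overlaps local work with the in-flight call, collecting the result later. The `slowService` method and its timing are stand-ins for a real remote invocation, invented for the example.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Asynchronous invocation sketch: the caller issues a non-blocking
// call, continues with local work, and rendezvous with the result
// only when it is actually needed.
public class AsyncInvoke {

    // Stand-in for a slow remote invocation.
    static int slowService(int x) {
        try { Thread.sleep(50); } catch (InterruptedException e) { }
        return x + 1;
    }

    public static int invoke() throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Future<Integer> reply = pool.submit(() -> slowService(41)); // returns at once
        int localWork = 2 * 3;       // caller continues while the call is in flight
        int result = reply.get();    // block here only when the reply is needed
        pool.shutdown();
        return result + localWork;   // 42 + 6
    }

    public static void main(String[] args) throws Exception {
        System.out.println(invoke());  // 48
    }
}
```

With several independent invocations, submitting them all before calling get() on any of them gives the concurrent-invocation model of Figure 6.14: the delays overlap instead of adding up.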

45 Figure 6.14 Times for serialized and concurrent invocations (illustrating pipelining)

46 Operating system architecture
A kernel suitable for an open distributed system should make it possible to:
- Run only that system software at each computer that is necessary for it to carry out its particular role in the system architecture
- Allow the software implementing any particular service to be changed independently of other facilities
- Allow for alternatives of the same service to be provided, when this is required to suit different users or applications
- Introduce new services without harming the integrity of existing ones

47 Figure 6.15 Monolithic kernel and microkernel
- Where these designs differ primarily is in the decision as to what functionality belongs in the kernel and what is to be left to server processes that can be dynamically loaded to run on top of it
- The microkernel provides only the most basic abstractions: principally address spaces, threads and local interprocess communication

48 Figure 6.16 The role of the microkernel

49 Comparison
- The chief advantage of a microkernel-based operating system is its extensibility
- A relatively small kernel is more likely to be free of bugs than one that is larger and more complex
- The advantage of a monolithic design is the relative efficiency with which operations can be invoked

