COT 4600 Operating Systems Spring 2011 Dan C. Marinescu Office: HEC 304 Office hours: Tu-Th 5:00 – 6:00 PM.


1 COT 4600 Operating Systems Spring 2011 Dan C. Marinescu Office: HEC 304 Office hours: Tu-Th 5:00 – 6:00 PM

2 Last time:
 - Peer-to-peer systems
 - Remote Procedure Call
 - Strategies for name resolution
 - Case study: DNS – the Domain Name System
Today:
 - Case study: NFS – the Network File System
 - Virtualization
 - Locks
Next time:
 - Review for the midterm
Lecture 13 – Thursday, February 24, 2011

3 The Network File System
Developed at Sun Microsystems in the early 1980s; an application of the client-server paradigm.
Objectives:
 - Design a shared file system to support collaborative work
 - Simplify the management of a set of workstations: facilitate backups, uniform administrative policies
Main design goals:
 1. Compatibility with existing applications → NFS should provide the same semantics as a local UNIX file system
 2. Ease of deployment → the NFS implementation should be easily ported to existing systems
 3. Broad scope → NFS clients should be able to run under a variety of operating systems
 4. Efficiency → users should not notice a substantial performance degradation when accessing a remote file system relative to accessing a local file system

4 NFS clients and servers
The NFS client should provide transparent access to remote file systems. It mounts a remote file system in the local name space, performing a function analogous to the UNIX MOUNT call. The remote file system is specified as Host/Path:
 - Host → the name of the host where the remote file system is located
 - Path → the local path name on that remote host
The NFS client sends the NFS server an RPC with the Path information and gets back from the server a file handle, a 32-byte name that uniquely identifies the remote object. The server encodes in the file handle:
 - a file system identifier
 - an inode number
 - a generation number
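As a rough illustration of the three fields above, a server could pack them into a fixed-size, opaque handle. This is a sketch only; the field names and widths are assumptions, not the real NFS wire format.

```python
import struct

# Hypothetical sketch: pack the three fields the slide lists into one
# opaque, fixed-size handle. The 8-byte-per-field layout is an assumption.
def make_file_handle(fs_id, inode, generation):
    # '>QQQ' = three unsigned 64-bit big-endian integers (24 bytes total)
    return struct.pack(">QQQ", fs_id, inode, generation)

def parse_file_handle(handle):
    # The server can recover its three fields from the opaque handle.
    return struct.unpack(">QQQ", handle)

h = make_file_handle(fs_id=2, inode=49152, generation=7)
assert parse_file_handle(h) == (2, 49152, 7)
```

The point of the opaque encoding is that clients never interpret the handle; only the server that issued it can decode the fields.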


6 Implementation
Vnode – a structure in volatile memory which abstracts whether a file or directory is local or remote. A file system call (OPEN, READ, WRITE, CLOSE, etc.) goes through the vnode layer. Example: to open a file, a client calls PATHNAME_TO_VNODE. The file name is parsed and a LOOKUP is generated:
 - If the directory is local and the file is found, the local file system creates a vnode for the file.
 - Else the LOOKUP procedure implemented by the NFS client is invoked, with the file handle of the directory and the path name as arguments:
   1. The NFS client invokes the LOOKUP remote procedure on the server via an RPC.
   2. The NFS server extracts the file system id and the inode number, then calls LOOKUP in the vnode layer.
   3. The vnode layer on the server side does a LOOKUP on the local file system, passing the path name as an argument.
   4. If the local file system on the server locates the file, it creates a vnode for it and returns the vnode to the NFS server.
   5. The NFS server sends a reply to the RPC containing the file handle of the vnode and some metadata.
   6. The NFS client creates a vnode containing the file handle.
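The branch between the local and remote cases can be sketched as follows. All names here (Vnode, lookup, FakeNFSClient) are hypothetical stand-ins for illustration, not the real vnode-layer interfaces.

```python
# Illustrative sketch: the vnode layer hides whether a directory is
# local or remote, so a single LOOKUP path serves both cases.
class Vnode:
    def __init__(self, backend, handle):
        self.backend = backend      # "local" or "nfs"
        self.handle = handle        # inode number, or remote file handle

class FakeNFSClient:
    # Stand-in for the client side of the LOOKUP RPC.
    def __init__(self, remote_dirs):
        self.remote_dirs = remote_dirs
    def lookup_rpc(self, dir_handle, name):
        return self.remote_dirs[(dir_handle, name)]

def lookup(dir_vnode, name, local_fs, nfs_client):
    if dir_vnode.backend == "local":
        # Local case: the local file system resolves the name to an inode.
        return Vnode("local", local_fs[(dir_vnode.handle, name)])
    # Remote case: send (directory file handle, name) to the NFS server.
    return Vnode("nfs", nfs_client.lookup_rpc(dir_vnode.handle, name))

local_fs = {(2, "notes.txt"): 17}
nfs = FakeNFSClient({("fh-root", "notes.txt"): "fh-17"})
assert lookup(Vnode("local", 2), "notes.txt", local_fs, nfs).handle == 17
assert lookup(Vnode("nfs", "fh-root"), "notes.txt", local_fs, nfs).handle == "fh-17"
```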


8 Why file handles and not path names
Example 1:
  Program 1 on client 1          Program 2 on client 2
  CHDIR("dir1")
  fd ← OPEN("f", READONLY)
                                 RENAME("dir1", "dir2")
                                 RENAME("dir3", "dir1")
  READ(fd, buf, n)
To follow the UNIX specification, if both clients were on the same system, client 1 would read from dir2/f. Encoding the inode number in the file handle allows client 1 to follow the same semantics, rather than read from the new dir1/f.
Example 2:
  Program 1 on client 1          Program 2 on client 2
                                 fd ← OPEN("f", READONLY)
  UNLINK("f")
  fd ← OPEN("f", CREATE)
                                 READ(fd, buf, n)
If the NFS server reuses the inode of the old file, then the RPC from client 2 will read from the new file created by client 1. The generation number allows the NFS server to distinguish between the old file opened by client 2 and the new one created by client 1.
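Example 2 can be made concrete with a toy model of generation numbers. This is not the real NFS wire protocol; the class and error string below are illustrative assumptions.

```python
# Toy model: each inode carries a generation number that is bumped when
# the inode is reused for a new file, so stale handles can be rejected.
class Server:
    def __init__(self):
        self.gen = {}        # inode number -> current generation
        self.data = {}       # inode number -> file contents

    def create(self, inode, contents):
        self.gen[inode] = self.gen.get(inode, 0) + 1   # reuse bumps generation
        self.data[inode] = contents
        return (inode, self.gen[inode])                # the file handle

    def read(self, handle):
        inode, gen = handle
        if self.gen[inode] != gen:
            return "ESTALE"          # handle refers to a deleted file
        return self.data[inode]

s = Server()
old = s.create(7, "old contents")    # client 2 opened the old file
new = s.create(7, "new contents")    # inode 7 reused for client 1's new file
assert s.read(old) == "ESTALE"       # client 2's stale handle is rejected
assert s.read(new) == "new contents"
```

Without the generation number, the two handles would be identical and client 2 would silently read the wrong file.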

9 Read/Write coherence
Enforcing Read/Write coherence is non-trivial even for local operations. For performance reasons a device driver may delay a WRITE operation issued by client 1; caching could cause problems when client 2 tries to READ the file.
Possible solutions:
 - Close-to-Open consistency:
   - If client 1 executes the sequence OPEN → WRITE → CLOSE before client 2 executes the sequence OPEN → READ → CLOSE, Read/Write coherence is provided.
   - If client 1 executes the sequence OPEN → WRITE before client 2 executes the sequence OPEN → READ, Read/Write coherence may or may not be provided.
 - Consistency for every operation: no caching.


11 NFS Close-to-Open semantics
A client stores, with each block in its cache, the timestamp of the block's vnode at the time the client got the block from the NFS server. When a user program opens a file, the client sends a GETATTR request to get the timestamp of the latest modification of the file. The client fetches a new copy only if the file has been modified since the client last accessed it; otherwise it uses the local copy. To implement a WRITE, a client updates only its cached copy, without an RPC. At CLOSE time the client sends the cached copy to the server.
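The revalidate-at-OPEN, flush-at-CLOSE protocol described above can be sketched like this. The class and method names are hypothetical; real NFS clients cache per block, while this sketch caches whole files for brevity.

```python
# Hedged sketch of close-to-open caching: the client revalidates its
# cached copy at OPEN with a GETATTR, and pushes writes back at CLOSE.
class NFSServer:
    def __init__(self, contents):
        self.contents, self.mtime = contents, 0
    def getattr(self):
        return self.mtime                       # modification timestamp
    def read(self):
        return self.contents
    def write(self, contents):
        self.contents, self.mtime = contents, self.mtime + 1

class NFSClient:
    def __init__(self, server):
        self.server, self.cache, self.cached_mtime = server, None, -1
    def open(self):
        if self.server.getattr() != self.cached_mtime:  # GETATTR check
            self.cache = self.server.read()             # fetch new copy
            self.cached_mtime = self.server.getattr()
    def write(self, contents):
        self.cache = contents        # update only the cached copy, no RPC
    def close(self):
        self.server.write(self.cache)  # flush the cached copy at CLOSE

srv = NFSServer("v1")
c1, c2 = NFSClient(srv), NFSClient(srv)
c1.open(); c1.write("v2"); c1.close()   # OPEN -> WRITE -> CLOSE on client 1
c2.open()                               # an OPEN after the CLOSE ...
assert c2.cache == "v2"                 # ... sees the new contents
```

Note the guarantee is exactly the one on the previous slide: a reader that opens after the writer closes sees the update; a reader that opened earlier may not.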

12 Virtualization – relating physical with virtual objects
Virtualization → simulating the interface to a physical object by:
 1. Multiplexing → create multiple virtual objects from one instance of a physical object.
 2. Aggregation → create one virtual object from multiple physical objects.
 3. Emulation → construct a virtual object from a different type of physical object. Emulation in software is slow.

  Method                    Physical Resource                  Virtual Resource
  Multiplexing              processor                          thread
                            real memory                        virtual memory
                            communication channel              virtual circuit
                            processor                          server (e.g., Web server)
  Aggregation               disk                               RAID
                            core                               multi-core processor
  Emulation                 disk                               RAM disk
                            system (e.g., Macintosh)           virtual machine (e.g., Virtual PC)
  Multiplexing + Emulation  real memory + disk                 virtual memory with paging
                            communication channel + processor  TCP protocol

13 Virtualization of the three abstractions. (1) Threads
Implemented by the operating system for the three abstractions:
1. Threads → a thread is a virtual processor; a module in execution.
   - A thread multiplexes a physical processor.
   - The state of a thread: (1) the reference to the next computational step (the PC register) + (2) the environment (registers, stack, heap, current objects).
   - Sequence of operations: (a) load the module's text; (b) create a thread and launch the execution of the module in that thread.
   - A module may have several threads.
   - The thread manager implements the thread abstraction:
     - Interrupts → processed by the interrupt handler, which interacts with the thread manager.
     - Exceptions → interrupts caused by the running thread, processed by exception handlers.
     - Interrupt handlers run in the context of the OS, while exception handlers run in the context of the interrupted thread.

14 Virtualization of the three abstractions. (2) Virtual memory
2. Virtual memory → a scheme to allow each thread to access only its own virtual address space (collection of virtual addresses).
   - Why needed:
     1. To implement a memory enforcement mechanism: prevent a thread running the code of one module from overwriting the data of another module.
     2. The physical memory may be too small to fit an application; without virtual memory, each application would need to manage its own memory.
   - Virtual memory manager → maps the virtual address space of a thread to physical memory.
Threads + virtual memory allow us to create a virtual computer for each module. Each module runs in its own address space; if one module runs multiple threads, all of them share one address space.

15 Virtualization of the three abstractions: (3) Bounded buffers
3. Bounded buffers → implement the communication channel abstraction.
   - Bounded → the buffer has a finite size. We assume that all messages are of the same size and each fits into a buffer cell; a bounded buffer accommodates at most N messages.
   - Threads use the SEND and RECEIVE primitives.

16 Principle of least astonishment
Study and understand simple phenomena or facts before moving to complex ones. For example:
 - Concurrency → an application requires multiple threads that run at the same time. Tricky; understand sequential processing first.
 - Examine a simple operating system interface to the three abstractions:
   Memory: CREATE/DELETE_ADDRESS_SPACE, ALLOCATE/FREE_BLOCK, MAP/UNMAP
   Interpreter: ALLOCATE_THREAD, DESTROY_THREAD, EXIT_THREAD, YIELD, AWAIT, ADVANCE, TICKET, ACQUIRE, RELEASE
   Communication channel: ALLOCATE/DEALLOCATE_BOUNDED_BUFFER, SEND/RECEIVE

17 Thread coordination with a bounded buffer
Producer-consumer problem → coordinate the sending and receiving threads.
Basic assumptions:
 - We have only two threads.
 - The threads proceed concurrently at independent speeds/rates.
 - Bounded buffer → only N buffer cells.
 - Messages are of fixed size and occupy only one buffer cell.
Spin lock → a thread keeps checking a control variable/semaphore "until the light turns green".
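A sketch of the single-producer, single-consumer bounded buffer under the assumptions above: `in` counts messages ever sent, `out` messages ever received, and each variable is written by exactly one thread. (`in` is a Python keyword, so the field is spelled `inp` here.)

```python
# Bounded buffer with spin waiting: the sender spins while the buffer is
# full, the receiver spins while it is empty. Only the sender writes
# 'inp' and only the receiver writes 'out'.
N = 4

class BoundedBuffer:
    def __init__(self):
        self.buf = [None] * N
        self.inp, self.out = 0, 0

def send(bb, message):
    while bb.inp - bb.out == N:       # spin: buffer full
        pass
    bb.buf[bb.inp % N] = message
    bb.inp += 1                       # publish the message

def receive(bb):
    while bb.inp == bb.out:           # spin: buffer empty
        pass
    message = bb.buf[bb.out % N]
    bb.out += 1                       # free the cell
    return message

bb = BoundedBuffer()
for i in range(N):                    # fill the buffer without blocking
    send(bb, i)
assert [receive(bb) for _ in range(N)] == [0, 1, 2, 3]
```

The demonstration above runs the two roles sequentially so it never actually spins; with real threads the correctness depends on the implicit assumptions listed on slide 19.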


19 Implicit assumptions for the correctness of the implementation
1. One sending and one receiving thread; only one thread updates each shared variable.
2. The sender and receiver threads run on different processors, so that spin locks are acceptable.
3. in and out are implemented as integers large enough that they do not overflow (e.g., 64-bit integers).
4. The shared memory used for the buffer provides read/write coherence.
5. The memory provides before-or-after atomicity for the shared variables in and out.
6. The result of executing a statement becomes visible to all threads in program order; no reordering by compiler optimizations is assumed.

20 Race conditions
Race condition → an error that occurs when a device or system attempts to perform two or more operations at the same time, but, because of the nature of the device or system, the operations must be done in the proper sequence to be done correctly.
Race conditions depend on the exact timing of events and thus are not reproducible: a slight variation of the timing could either remove a race condition or create one. Such errors are very hard to debug.
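A race can be shown deterministically by splitting a shared-counter increment into its load, add, and store steps and interleaving the steps by hand (real races arise from scheduling; generators here just stand in for the scheduler):

```python
# Deterministic illustration of a lost update: two "threads" each
# increment a shared counter, but the increment is three separate steps.
shared = {"count": 0}

def increment_steps(shared):
    value = shared["count"]     # load
    yield
    value = value + 1           # add
    yield
    shared["count"] = value     # store

t1, t2 = increment_steps(shared), increment_steps(shared)
# Unlucky interleaving: both threads load before either one stores.
for step in (t1, t2, t1, t2, t1, t2):
    next(step, None)
assert shared["count"] == 1     # one increment was lost; 2 was expected
```

Running the two generators to completion one after the other instead would give the correct count of 2, which is exactly why such bugs disappear when the timing shifts.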



23 One more pitfall of the previous implementation of the bounded buffer
If in and out are long integers (64 or 128 bit), then a load requires two registers, e.g., R1 and R2:
  int: 00000000FFFFFFFF
  L R1,int    /* R1 ← 00000000 */
  L R2,int+1  /* R2 ← FFFFFFFF */
A race condition could affect such a load or store of the long integer, because another thread can update the value between the two instructions.
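The resulting "torn read" can be modeled by storing the 64-bit value as two 32-bit halves and letting a store slip in between the two loads:

```python
# Toy model of a torn read: a 64-bit value kept as two 32-bit halves,
# loaded with two separate instructions.
word = [0x00000000, 0xFFFFFFFF]   # high, low halves of 0x00000000FFFFFFFF

high = word[0]                    # L R1,int    (first load)
# ... another thread stores 0x0000000100000000 between the two loads ...
word[0], word[1] = 0x00000001, 0x00000000
low = word[1]                     # L R2,int+1  (second load)

torn = (high << 32) | low
assert torn == 0x0000000000000000                # a value that never existed:
assert torn != 0x00000000FFFFFFFF                # not the old value,
assert torn != 0x0000000100000000                # not the new value either
```

This is why assumption 5 on slide 19 (before-or-after atomicity for in and out) matters: without it the receiver can observe a counter value that was never written.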

24 Lock
Lock → a mechanism to guarantee that a program works correctly when multiple threads execute concurrently:
 - A multi-step operation protected by a lock behaves like a single operation.
 - Locks can be used to implement before-or-after atomicity.
 - A lock is a shared variable acting as a flag (traffic light) to coordinate access to a shared variable.
 - Locks work only if all threads follow the rule: check the lock before accessing a shared variable.

25 Deadlocks
Deadlocks happen quite often in real life, and the proposed solutions are not always logical: "When two trains approach each other at a crossing, both shall come to a full stop and neither shall start up again until the other has gone" – a pearl from Kansas legislation. A deadlocked jury. A deadlocked legislative body.


27 Deadlocks
Deadlocks → prevent sets of concurrent threads/processes from completing their tasks.
How does a deadlock occur? A set of blocked processes, each holding a resource and waiting to acquire a resource held by another process in the set.
Example → semaphores A and B, initialized to 1:
  P0: wait(A); wait(B)
  P1: wait(B); wait(A)
Aim → prevent or avoid deadlocks.
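If P0 acquires A and P1 acquires B before either takes its second wait, neither can proceed. One standard remedy (not spelled out on the slide, shown here as an assumption) is a global acquisition order: every thread takes A before B, so no circular wait can form.

```python
import threading

# Both threads acquire the two locks in the same global order (A, then B),
# which removes the circular-wait condition from the P0/P1 example.
A, B = threading.Lock(), threading.Lock()

def worker(results, tag):
    with A:            # wait(A)
        with B:        # wait(B) -- same order in every thread
            results.append(tag)

results = []
threads = [threading.Thread(target=worker, args=(results, t))
           for t in ("P0", "P1")]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert sorted(results) == ["P0", "P1"]   # both threads completed
```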

28 Example of a deadlock
Traffic can flow only in one direction. Solution → one car backs up (preempt resources and rollback). Several cars may have to be backed up. Starvation is possible.

29 System model
Resource types R1, R2, ..., Rm (CPU cycles, memory space, I/O devices). Each resource type Ri has Wi instances.
Resource access model: request → use → release.

30 Simultaneous conditions for deadlock
 - Mutual exclusion: only one process at a time can use a resource.
 - Hold and wait: a process holding at least one resource is waiting to acquire additional resources held by other processes.
 - No preemption: a resource can be released only voluntarily by the process holding it (presumably after that process has finished).
 - Circular wait: there exists a set {P0, P1, ..., Pn} of waiting processes such that P0 is waiting for a resource held by P1, P1 is waiting for a resource held by P2, ..., Pn–1 is waiting for a resource held by Pn, and Pn is waiting for a resource held by P0.
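The circular-wait condition can be checked mechanically: build a wait-for graph (an edge P → Q means P waits for a resource Q holds) and look for a cycle. A minimal sketch:

```python
# Cycle detection in a wait-for graph, as a check for circular wait.
def has_cycle(waits_for):
    def visit(node, path):
        if node in path:
            return True                 # we came back to a node on this path
        return any(visit(n, path | {node})
                   for n in waits_for.get(node, ()))
    return any(visit(p, set()) for p in waits_for)

# The P0/P1 semaphore example from slide 27 forms a two-node cycle:
assert has_cycle({"P0": ["P1"], "P1": ["P0"]}) is True
# A simple chain with no cycle is deadlock-free:
assert has_cycle({"P0": ["P1"], "P1": []}) is False
```

Note that with a single instance per resource type a cycle implies deadlock, while with multiple instances a cycle is necessary but not sufficient.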


