
1 Distributed Systems CS 15-440 Distributed File Systems - Part II Lecture 20, Nov 16, 2011 Majd F. Sakr, Mohammad Hammoud and Vinay Kolar

2 Today…  Last session: Distributed File Systems – Part I  Today’s session: Distributed File Systems – Part II  Announcements:  Problem solving assignment 4 (the last PS) has been posted and is due by Dec 7  Project 4 (the last project) will be posted before the end of this week

3 Discussion on Distributed File Systems [Diagram: Distributed File Systems (DFSs), divided into Basics and DFS Aspects; this lecture continues with DFS Aspects]

4 DFS Aspects
Aspect: Description
Architecture: How are DFSs generally organized?
Processes: Who are the cooperating processes? Are processes stateful or stateless?
Communication: What is the typical communication paradigm followed by DFSs? How do processes in DFSs communicate?
Naming: How is naming often handled in DFSs?
Synchronization: What are the file sharing semantics adopted by DFSs?
Consistency and Replication: What are the various features of client-side caching as well as server-side replication?
Fault Tolerance: How is fault tolerance handled in DFSs?

5 DFS Aspects (the same roadmap repeated; the next aspect covered is Naming)

6 Naming In DFSs NFS is considered representative of how naming is handled in DFSs

7 Naming In NFS  The fundamental idea underlying the NFS naming model is to provide clients with complete transparency  Transparency in NFS is achieved by allowing a client to mount a remote file system into its own local file system  However, instead of mounting an entire file system, NFS allows clients to mount only part of a file system  A server is said to export a directory when it makes that directory, and its entries, available to clients, which can then mount it into their own name spaces
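To make the mounting model concrete, the following minimal sketch (Python; the mount-table layout and all names are invented for illustration, not real NFS data structures) shows how a client-side mount table could graft an exported remote directory into the local name space:

```python
# Minimal sketch of client-side path resolution through a mount table.
# All names here are illustrative, not actual NFS structures.

MOUNT_TABLE = {
    # local mount point -> (server, exported remote directory)
    "/remote/vu": ("server", "/users/steen"),
}

def resolve(local_path):
    """Translate a local path into (server, remote path) if it falls
    under a mount point; otherwise it is a purely local path."""
    for mount_point, (server, remote_dir) in MOUNT_TABLE.items():
        if local_path == mount_point or local_path.startswith(mount_point + "/"):
            suffix = local_path[len(mount_point):]
            return server, remote_dir + suffix
    return None, local_path  # not mounted: handled by the local file system

print(resolve("/remote/vu/mbox"))  # ('server', '/users/steen/mbox')
print(resolve("/usr/local/file"))  # (None, '/usr/local/file')
```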

8 Mounting in NFS [Figure: The server exports its /users/steen subdirectory. Client A mounts it at /remote/vu, where the shared file appears as /remote/vu/mbox; Client B mounts it at /work/me, where the same file appears as /work/me/mbox. Since the file has a different name on each client, sharing files becomes harder.]

9 Sharing Files In NFS  A common solution for sharing files in NFS is to provide each client with a name space that is partly standardized  For example, each client can use the local directory /usr/bin as a standard mount point  A remote file system can then be mounted in the same manner for each user

10 Example [Figure: Both clients mount the server’s exported /users/steen directory at the standardized local path /usr/bin, so the shared file appears as /usr/bin/mbox at Client A and at Client B alike; the file-sharing problem is resolved.]

11 Mounting Nested Directories In NFSv3  An NFS server, S, can itself mount directories, Ds, that are exported by other servers  However, in NFSv3, S is not allowed to export Ds to its own clients  Instead, a client of S has to explicitly mount Ds  If S were allowed to export Ds, it would have to return to its clients file handles that include identifiers for the exporting servers  NFSv4 solves this problem

12 Mounting Nested Directories in NFS [Figure: The client imports (mounts) a directory from Server A, and Server A in turn imports a directory from Server B. The client does not see Server B’s subdirectory through Server A; it needs to explicitly import that subdirectory from Server B.]

13 NFS: Mounting Upon Logging In (1)  Another problem with the NFS naming model has to do with deciding when a remote file system should be mounted  Example: Assume a large system with thousands of users, where each user has a local directory /home that is used to mount the home directories of other users  The home directory of a user, Alice, is made locally available to her as /home/alice  This directory can be automatically mounted when Alice logs into her workstation  In addition, Alice may access the public files of another user, Bob, through Bob’s directory /home/bob

14 NFS: Mounting Upon Logging In (2)  Example (Cont’d):  The question, however, is whether Bob’s home directory should also be mounted automatically when Alice logs in  If automatic mounting is followed for each user:  Logging in could incur a lot of communication and administrative overhead  All users should be known in advance  A better approach is to transparently mount another user’s home directory on-demand

15 On-Demand Mounting In NFS  On-demand mounting of a remote file system is handled in NFS by an automounter, which runs as a separate process on the client’s machine [Figure: 1. The NFS client looks up “/home/alice” through the local file system interface; 2. the automounter creates the subdirectory “alice” under /home; 3. it issues a mount request to the server machine; 4. the server’s directory “alice” (under /users) is mounted as that subdirectory.]
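A rough sketch of the automounter idea (Python; the nfs_mount helper, server name, and paths are hypothetical stand-ins, not a real NFS client API): the first lookup under /home triggers the mount, and later lookups pass straight through.

```python
# Sketch of on-demand ("auto") mounting: the first lookup under /home
# triggers a mount of that user's remote home directory.

mounted = set()

def nfs_mount(server, remote_dir, local_dir):
    # Placeholder for the actual mount request sent to the server.
    print(f"mounting {server}:{remote_dir} at {local_dir}")

def lookup(path):
    parts = path.strip("/").split("/")
    if parts[0] == "home" and len(parts) >= 2:
        user = parts[1]
        mount_point = f"/home/{user}"
        if mount_point not in mounted:
            # steps 2-4 above: create the subdirectory, issue the mount
            nfs_mount("server", f"/users/{user}", mount_point)
            mounted.add(mount_point)
    return path  # resolution continues in the (now mounted) subtree

lookup("/home/alice/mbox")   # first access: triggers the mount
lookup("/home/alice/notes")  # already mounted: no extra traffic
```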

16 DFS Aspects (the same roadmap repeated; the next aspect covered is Synchronization)

17 Synchronization In DFSs  File Sharing Semantics  Lock Management

18 Synchronization In DFSs  File Sharing Semantics  Lock Management

19 Unix Semantics In Single Processor Systems  Synchronization for file systems would not be an issue if files were not shared  When two or more users share the same file at the same time, it is necessary to define the semantics of reading and writing  In single processor systems, a read operation after a write returns the value just written  Such a model is referred to as Unix semantics [Figure: On a single machine, the original file contains “ab”; Process A writes “c”, after which Process B’s read gets “abc”.]

20 Unix Semantics In DFSs  In a DFS, Unix semantics can be achieved easily if there is only one file server and clients do not cache files  Hence, all reads and writes go directly to the file server, which processes them strictly sequentially  This approach provides Unix semantics; however, performance may degrade since all file requests must go to a single server

21 Caching and Unix Semantics  The performance of a DFS with a single file server and Unix semantics can be improved by caching  However, if a client locally modifies a cached file and, shortly after, another client reads the file from the server, the second client will get an obsolete file [Figure: Both client machines read “ab” from the file server and cache it (1); Process A on Client Machine #1 appends “c” to its cached copy (2); Process B on Client Machine #2 then reads the file from the server and still gets the obsolete “ab” (3).]

22 Session Semantics (1)  One way to avoid obsolete files is to propagate all changes to cached files back to the server immediately  Implementing such an approach is very difficult  An alternative solution is to relax the semantics of file sharing Session Semantics: Changes to an open file are initially visible only to the process that modified the file. Only when the file is closed are the changes made visible to other processes.

23 Session Semantics (2)  Using session semantics raises the question of what happens if two or more clients are simultaneously caching and modifying the same file  One solution is to say that as each file is closed in turn, its value is sent back to the server  The final result then depends on whose close request the server processes last  A less pleasant solution, but easier to implement, is to say that the final result is one of the candidates and to leave the choice of the candidate unspecified
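A minimal sketch of session semantics (Python; the classes are invented for illustration): each open() works on a private copy, close() publishes it, and the last close processed determines the final contents, exactly the behavior discussed above.

```python
# Sketch of session semantics: each open() hands the process a private
# copy; other processes see the changes only after close().

class SessionFS:
    def __init__(self):
        self.files = {}                      # server-side contents

    def open(self, name):
        return Session(self, name, self.files.get(name, ""))

class Session:
    def __init__(self, fs, name, contents):
        self.fs, self.name, self.buf = fs, name, contents

    def write(self, data):
        self.buf += data                     # visible only inside this session

    def close(self):
        self.fs.files[self.name] = self.buf  # now visible to everyone

fs = SessionFS()
a = fs.open("f")
b = fs.open("f")                  # B's private copy is still empty
a.write("hello")
print(fs.files.get("f", ""))      # "" -- A's change not yet visible
a.close()
print(fs.files["f"])              # "hello" -- visible after close
b.close()                         # B's (empty) copy overwrites A's
print(fs.files["f"])              # "" -- final result depends on close order
```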

24 Immutable Semantics (1)  A different approach to the semantics of file sharing in DFSs is to make all files immutable  With immutable semantics there is no way to open a file for writing  What is possible is to create an entirely new file  Hence, the problem of how to deal with two processes, one writing and the other reading, just disappears

25 Immutable Semantics (2)  However, what happens if two processes try to replace the same file?  Allow one of the new files to replace the old one (either the last one or non-deterministically)  What to do if a file is replaced while another process is busy reading it?  Solution 1: Arrange for the reader to continue using the old file  Solution 2: Detect that the file has changed and make subsequent attempts to read from it fail
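A small sketch of immutable semantics (Python, illustrative names): a file is never opened for writing; “replacing” it creates a new version, and an in-progress reader can keep using the version it opened (Solution 1 above).

```python
# Sketch of immutable semantics: files are never modified in place; an
# "update" creates a brand-new version, so writer/reader conflicts vanish.

class ImmutableStore:
    def __init__(self):
        self.versions = {}                     # name -> list of versions

    def create(self, name, contents):
        self.versions.setdefault(name, []).append(contents)
        return len(self.versions[name]) - 1    # version number

    def read(self, name, version):
        # A reader keeps using the version it opened, even if a newer
        # version has since replaced it in the directory.
        return self.versions[name][version]

store = ImmutableStore()
v0 = store.create("f", "old contents")
v1 = store.create("f", "new contents")  # "replacing" f = new version
print(store.read("f", v0))              # the old reader is unaffected
print(store.read("f", v1))
```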

26 Atomic Transactions
 A different approach to the semantics of file sharing in DFSs is to use atomic transactions, where all changes occur atomically
 A key property is that all calls contained in a transaction are carried out in order:
1. A process first executes a BEGIN_TRANSACTION primitive to signal that what follows must be executed indivisibly
2. Then come system calls to read and write one or more files
3. When done, an END_TRANSACTION primitive is executed
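A minimal sketch of this pattern (Python; the primitives and classes are invented for illustration): writes between BEGIN_TRANSACTION and END_TRANSACTION are buffered and become visible to other processes atomically at commit.

```python
# Sketch of transactional file sharing: writes are buffered between
# BEGIN_TRANSACTION and END_TRANSACTION and applied atomically at commit.

class TxFS:
    def __init__(self):
        self.files = {}

    def begin_transaction(self):
        return Transaction(self)

class Transaction:
    def __init__(self, fs):
        self.fs, self.writes = fs, {}

    def read(self, name):
        # Reads see this transaction's own writes first.
        return self.writes.get(name, self.fs.files.get(name, ""))

    def write(self, name, data):
        self.writes[name] = data            # buffered, not yet visible

    def end_transaction(self):
        self.fs.files.update(self.writes)   # all changes land at once

fs = TxFS()
tx = fs.begin_transaction()
tx.write("f", "abc")
print(fs.files.get("f"))   # None -- nothing visible mid-transaction
tx.end_transaction()
print(fs.files["f"])       # "abc" -- the whole transaction took effect
```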

27 Semantics of File Sharing: Summary
 There are four ways of dealing with shared files in a DFS:
Method: Comment
UNIX Semantics: Every operation on a file is instantly visible to all processes
Session Semantics: No changes are visible to other processes until the file is closed
Immutable Files: No updates are possible; simplifies sharing and replication
Transactions: All changes occur atomically

28 Synchronization In DFSs  File Sharing Semantics  Lock Management

29 Central Lock Manager
 In client-server architectures (especially with stateless servers), additional facilities for synchronizing accesses to shared files are required
 A central lock manager can be deployed, where accesses to a shared resource are synchronized by granting and denying access permissions
[Figure: Three panels. (1) P0 sends a lock request to the central lock manager and the lock is granted; a lease on the lock is obtained. (2) P2 requests the lock while it is held; the request is denied and P2 is placed in a queue. (3) Upon release (or when the lease expires), the lock is granted to the queued process P2.]
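A bare-bones central lock manager sketch (Python, invented names; real managers typically grant leases that expire after some time, which is omitted here): grant if free, queue otherwise, and hand the lock to the next waiter on release.

```python
# Sketch of a central lock manager: one process holds the lock, later
# requests are queued, and a release hands the lock to the next waiter.

from collections import deque

class LockManager:
    def __init__(self):
        self.holder = None
        self.queue = deque()

    def request(self, process):
        if self.holder is None:
            self.holder = process
            return "granted"
        self.queue.append(process)
        return "denied (queued)"

    def release(self, process):
        assert self.holder == process
        # Pass the lock to the next queued process, if any.
        self.holder = self.queue.popleft() if self.queue else None
        return self.holder

lm = LockManager()
print(lm.request("P0"))   # granted
print(lm.request("P2"))   # denied (queued)
print(lm.release("P0"))   # P2 -- lock passes to the queued process
```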

30 File Locking In NFSv4
 NFSv4 distinguishes read locks from write locks
 Multiple clients can simultaneously access the same part of a file provided they only read data
 A write lock is needed to obtain exclusive access when modifying part of a file
 The NFSv4 operations related to file locking are:
Operation: Description
Lock: Create a lock for a range of bytes
Lockt: Test whether a conflicting lock has been granted
Locku: Remove a lock from a range of bytes
Renew: Renew the lease on a specified lock
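On a POSIX client, NFSv4’s byte-range locks are typically reached through the standard fcntl interface rather than invoked directly. A hedged example (assumes a Unix-like system and an already-existing file on an NFS mount; the path is illustrative):

```python
# Byte-range locking via POSIX fcntl, the usual client-side route to
# NFSv4's Lock/Locku operations. Assumes a Unix-like OS and that the
# (illustrative) file below already exists on an NFS mount.

import fcntl

with open("/mnt/nfs/shared.dat", "r+b") as f:
    # "Lock": take an exclusive (write) lock on bytes 0..99.
    fcntl.lockf(f, fcntl.LOCK_EX, 100, 0)
    try:
        f.write(b"x" * 100)   # safe: no conflicting writer in this range
    finally:
        # "Locku": remove the lock from the same byte range.
        fcntl.lockf(f, fcntl.LOCK_UN, 100, 0)
```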

31 Sharing Files in Coda  When a client successfully opens a file f, an entire copy of f is transferred to the client’s machine  The server records that the client has a copy of f  If client A has opened f for writing and another client B wants to open f (for reading or writing) as well, B’s attempt will fail  If client A has opened f for reading, an attempt by B to open for reading succeeds  An attempt by B to open for writing would succeed as well, since B gets its own copy of f

32 DFS Aspects (the same roadmap repeated; the next aspect covered is Consistency and Replication)

33 Consistency and Replication In DFSs  Client-Side Caching  Server-Side Replication

34 Consistency and Replication In DFSs  Client-Side Caching  Server-Side Replication

35 Client-Side Caching In Coda  Caching and replication play an important role in DFSs, most notably when they are designed to operate over WANs  To see how client-side caching is deployed in practice, we discuss client-side caching in Coda  Clients in Coda always cache entire files, regardless of whether the file is opened for reading or writing  Cache coherence in Coda is maintained by means of callbacks

36 Callback Promise and Break  For each file, the server from which a client cached the file keeps track of which clients have a copy of that file; the server is said to record a callback promise  When a client updates its local copy of a file for the first time, it notifies the server  Subsequently, the server sends an invalidation message to the other clients  Such an invalidation message is called a callback break
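A sketch of the callback bookkeeping (Python; the class and method names are invented for illustration): the server records a callback promise per cached copy and sends a callback break to the other clients when one of them updates the file.

```python
# Sketch of Coda-style callbacks: the server records a callback promise
# per cached copy and breaks the other clients' promises on an update.

class CallbackServer:
    def __init__(self):
        self.promises = {}                  # file -> set of clients

    def fetch(self, client, file):
        # The client caches the file; the server records a promise.
        self.promises.setdefault(file, set()).add(client)

    def update(self, updating_client, file):
        # First update: invalidate every other client's cached copy.
        for client in self.promises.get(file, set()) - {updating_client}:
            print(f"callback break -> {client}: cached copy of {file} is stale")
        self.promises[file] = {updating_client}

server = CallbackServer()
server.fetch("A", "f")
server.fetch("B", "f")
server.update("B", "f")    # callback break sent to A
```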

37 Using Cached Copies in Coda
 The interesting aspect of client-side caching in Coda is that, as long as a client knows it has an outstanding callback promise at the server, it can safely access the file locally
[Figure: Client A opens f for reading; the file is transferred and session S_A starts. Client B opens f for writing (session S_B) and updates it, so the server sends an invalidation (callback break) to A. After closing, B reopens f for writing: OK, with no file transfer, since B’s copy is still valid (session S’_B). When A later reopens f for reading, its cached copy is no longer OK, so f is transferred again (session S’_A).]

38 Consistency and Replication In DFSs  Client-Side Caching  Server-Side Replication

39 Server-Side Replication  Server-side replication in DFSs is applied (as usual) for fault tolerance and performance  However, a problem with server-side replication is that a combination of a high degree of replication and a low read/write ratio may degrade performance  For an N-fold replicated file, a single update request leads to an N-fold increase of update operations  Moreover, concurrent updates need to be synchronized

40 Storage Groups in Coda  The unit of replication in Coda is a collection of files called a volume  The collection of Coda servers that have a copy of a volume is known as that volume’s Volume Storage Group (VSG)  In the presence of failures, a client may not have access to all servers in a volume’s VSG  A client’s Accessible Volume Storage Group (AVSG) for a volume consists of those servers in that volume’s VSG that the client can currently access  If the AVSG is empty, the client is said to be disconnected

41 Maintaining Consistency in Coda  Coda uses a variant of Read-One, Write-All (ROWA) to maintain consistency of a replicated volume  When a client needs to read a file, it contacts one of the members in its AVSG of the volume to which that file belongs  When closing a session on an updated file, the client transfers it in parallel to each member in the AVSG  The scheme works fine as long as there are no failures (i.e., each client’s AVSG for a volume equals the volume’s VSG)
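A sketch of this ROWA variant (Python; all names invented for illustration): reads contact a single AVSG member, a close after an update pushes the file to every reachable member, and a server outside the AVSG silently misses the update — which sets up the consistency example that follows.

```python
# Sketch of Coda's Read-One, Write-All variant: a read contacts one
# member of the AVSG, a close-after-update pushes the file to all of them.

import random

class Replica:
    def __init__(self, name):
        self.name, self.files = name, {}

def read(avsg, file):
    server = random.choice(avsg)       # read from any one AVSG member
    return server.files.get(file)

def close_updated(avsg, file, contents):
    for server in avsg:                # write to every reachable member
        server.files[file] = contents

s1, s2, s3 = Replica("S1"), Replica("S2"), Replica("S3")
avsg_of_A = [s1, s2]                   # S3 unreachable from client A
close_updated(avsg_of_A, "f", "v2")
print(read(avsg_of_A, "f"))            # 'v2'
print(s3.files.get("f"))               # None -- S3 missed the update
```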

42 A Consistency Example (1)  Consider a volume that is replicated across three servers S_1, S_2, and S_3  For client A, assume its AVSG covers servers S_1 and S_2, whereas client B has access only to server S_3 [Figure: A broken network partitions the system: client A can reach S_1 and S_2, while client B can reach only S_3.]

43 A Consistency Example (2)  Coda allows both clients, A and B:  To open the replicated file f for writing  To update their respective copies  To transfer their copies back to the members in their AVSGs  Obviously, there will then be different versions of f stored in the VSG  The question is how this inconsistency can be detected and resolved  The solution adopted by Coda is to deploy a versioning scheme

44 Versioning in Coda (1)
 The versioning scheme in Coda entails that each server S_i in a VSG maintains a Coda Version Vector CVV_i(f) for every file f contained in the VSG
 If CVV_i(f)[j] = k, then server S_i knows that server S_j has seen at least version k of file f
 CVV_i(f)[i] is the number of the current version of f stored at server S_i
 An update of f at server S_i leads to an increment of CVV_i(f)[i]

45 Versioning in Coda (2)
 Initially, CVV_1(f) = CVV_2(f) = CVV_3(f) = [1, 1, 1]
 When client A reads f from one of the servers in its AVSG, say S_1, it also receives CVV_1(f)
 After updating f, client A multicasts f to each server in its AVSG (i.e., S_1 and S_2)

46 Versioning in Coda (3)
 S_1 and S_2 then record that their respective copies have been updated, but not that of S_3 (i.e., CVV_1(f) = CVV_2(f) = [2, 2, 1])
 Meanwhile, client B is allowed to open a session in which it receives a copy of f from server S_3
 If so, client B may subsequently update f

47 Versioning in Coda (4)
 Say client B then closes its session and transfers the update to S_3, which updates its version vector to CVV_3(f) = [1, 1, 2]
 When the partition is healed, the three servers notice that CVV_1(f) = [2, 2, 1] and CVV_3(f) = [1, 1, 2] are incomparable, i.e., a conflict has occurred and needs to be repaired (the inconsistency is detected)
 How the inconsistency is resolved is discussed by Kumar and Satyanarayanan (1995)
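A sketch replaying the S_1/S_2/S_3 scenario (Python; the conflict test is the standard vector-dominance check): the update through client A’s AVSG bumps the S_1 and S_2 entries, B’s update bumps the S_3 entry, and the healed partition reveals two incomparable vectors.

```python
# Replaying the versioning example: an update through an AVSG increments
# the vector entries of the servers in that AVSG; two vectors conflict
# when neither dominates the other entry-wise.

def record_update(cvv, avsg_indices):
    for i in avsg_indices:
        cvv[i] += 1

def conflict(v1, v2):
    # Conflict iff neither vector is >= the other in every entry.
    return (any(a < b for a, b in zip(v1, v2)) and
            any(a > b for a, b in zip(v1, v2)))

cvv_s1, cvv_s2, cvv_s3 = [1, 1, 1], [1, 1, 1], [1, 1, 1]  # initial state

record_update(cvv_s1, [0, 1])    # client A's update reaches S_1 ...
record_update(cvv_s2, [0, 1])    # ... and S_2 (A's AVSG), but not S_3
record_update(cvv_s3, [2])       # client B's update reaches only S_3

print(cvv_s1, cvv_s3)            # [2, 2, 1] [1, 1, 2]
print(conflict(cvv_s1, cvv_s3))  # True: detected once the partition heals
```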

48 DFS Aspects (the same roadmap repeated; the next aspect covered is Fault Tolerance)

49 Fault Tolerance In DFSs Fault tolerance in DFSs is typically handled according to the principles we discussed in the fault tolerance lectures. Hence, we will concentrate mainly on some special issues in fault tolerance for DFSs.

50 Handling Byzantine Failures  One of the problems that is often ignored when dealing with fault tolerance in DFSs is that servers may exhibit Byzantine failures  To achieve protection against Byzantine failures, the server group must consist of at least 3k+1 processes (assuming that at most k processes fail at once)  In practical settings, such protection can only be achieved if the non-faulty processes are ensured to execute all operations in the same order (Castro and Liskov, 2002)  To this end, each request can be assigned a sequence number, and a single coordinator can be used to serialize all operations

51 Quorum Mechanism (1)  The coordinator can fail  Furthermore, processes can go through a series of views, where:  In each view, the members of a group agree on the non-faulty processes  A member initiates a view change when the current coordinator appears to be failing  To this end, a quorum mechanism is used

52 Quorum Mechanism (2)  The quorum mechanism goes as follows:  Whenever a process P receives a request to execute an operation o with a sequence number n in a view v, it sends this request to all other processes  P waits until it has received a confirmation from at least 2k other processes that have seen the same request  Such a set of confirmations is called a quorum certificate  A quorum of size 2k+1 is then said to be obtained
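A toy illustration of the quorum-certificate count (Python; the message format is invented): a process holds a quorum certificate once at least 2k other processes report the same (view, sequence number, operation) request, giving a quorum of 2k+1 including itself.

```python
# Sketch of quorum-certificate collection: with at most k faulty
# processes, 2k matching confirmations plus our own copy of the request
# form a quorum of size 2k+1.

def has_quorum_certificate(confirmations, request, k):
    matching = sum(1 for c in confirmations if c == request)
    return matching >= 2 * k          # plus our own copy: 2k+1 in total

request = ("view-1", 42, "write f")   # (view, sequence number, operation)
confirmations = [("view-1", 42, "write f"),
                 ("view-1", 42, "write f"),
                 ("view-1", 41, "write g")]   # one non-matching message
print(has_quorum_certificate(confirmations, request, k=1))  # True
```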

53 Quorum Mechanism (3)
 The whole quorum mechanism consists of 5 phases:
1. Request: the client sends a request
2. Pre-Prepare: the master multicasts a sequence number
3. Prepare: each replica multicasts its acceptance to the others
4. Commit: agreement is reached, and all processes inform each other and execute the operation
5. Reply: the client is given the result

54 Next Class Virtualization

