
1 Chapter 19 Distributed Databases

2 Distributed Database System
- A distributed DBS consists of loosely coupled sites that share no physical component
- The DBSs that run on each site are independent of each other
- Transactions may access data at one or more sites

3 Homogeneous/Heterogeneous DDB
- In a homogeneous distributed database:
  - All sites have identical software
  - Sites are aware of each other and agree to cooperate in processing user requests
  - Each site surrenders part of its autonomy in terms of the right to change schemas or software
  - The system appears to the user as a single system
- In a heterogeneous distributed database:
  - Different sites may use different schemas and software
  - A difference in schema is a problem for query processing
  - A difference in software is a problem for transaction processing

4 Distributed Data Storage
- Assume the relational data model
- Replication
  - The system maintains multiple copies of data, stored in different sites, for faster retrieval and fault tolerance
- Fragmentation
  - A relation is partitioned into several fragments stored in distinct sites
- Replication and fragmentation can be combined
  - A relation is partitioned into several fragments; the system maintains several identical replicas of each such fragment

5 Data Replication
- A relation or a fragment of a relation is replicated if it is stored redundantly in two or more sites
- Full replication of a relation is the case where the relation is stored at all sites
- Fully redundant databases are those in which every site contains a copy of the entire database

6 Data Replication
- Advantages of replication
  - Availability: failure of a site containing relation r does not result in unavailability of r if replicas exist
  - Parallelism: queries on r may be processed by several nodes in parallel
  - Reduced data transfer: relation r is available locally at each site containing a replica of r
- Disadvantages of replication
  - Increased cost of updates: each replica must be updated
  - Increased complexity of concurrency control: concurrent updates to distinct replicas may lead to inconsistent data unless special concurrency control mechanisms are implemented

7 Data Fragmentation
- Division of relation r into fragments r1, r2, ..., rn which contain sufficient information to reconstruct relation r
- Horizontal fragmentation: each tuple of r is assigned to one or more fragments
- Vertical fragmentation: the schema for relation r is split into several smaller schemas
  - All schemas must contain a common candidate key (or superkey) to ensure the lossless-join property
  - A special attribute, the tuple-id attribute, may be added to each schema to serve as a candidate key
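
A minimal Python sketch of the two fragmentation styles, using an in-memory list of dicts to stand in for a relation; the relation contents, helper names, and the dict representation are illustrative assumptions, not part of the chapter.

```python
# Horizontal vs. vertical fragmentation of an in-memory "relation"
# (a list of dicts). Names and data are illustrative only.

account = [
    {"branch_name": "Hillside",   "account_number": "A-305", "balance": 500},
    {"branch_name": "Valleyview", "account_number": "A-177", "balance": 205},
]

# Horizontal fragmentation: each tuple goes to the fragment selected by a predicate.
def horizontal_fragment(relation, predicate):
    return [t for t in relation if predicate(t)]

account1 = horizontal_fragment(account, lambda t: t["branch_name"] == "Hillside")
account2 = horizontal_fragment(account, lambda t: t["branch_name"] == "Valleyview")
assert account1 + account2 == account        # reconstruction by union

# Vertical fragmentation: project onto attribute subsets, adding a tuple-id
# so the fragments can later be joined back together losslessly.
def vertical_fragment(relation, attrs):
    return [{"tuple_id": i, **{a: t[a] for a in attrs}}
            for i, t in enumerate(relation, start=1)]

deposit1 = vertical_fragment(account, ["branch_name"])
deposit2 = vertical_fragment(account, ["account_number", "balance"])
```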

8 Horizontal Fragmentation of account

account1 = σ branch_name="Hillside" (account)

  branch_name   account_number   balance
  Hillside      A-305            500
  Hillside      A-226            336
  Hillside      A-155            62

account2 = σ branch_name="Valleyview" (account)

  branch_name   account_number   balance
  Valleyview    A-177            205
  Valleyview    A-402            10000
  Valleyview    A-408            1123
  Valleyview    A-639            750

9 Vertical Fragmentation of Employee

deposit1 = Π branch_name, customer_name, tuple_id (employee)

  branch_name   customer_name   tuple_id
  Hillside      Lowman          1
  Hillside      Camp            2
  Valleyview    Camp            3
  Valleyview    Kahn            4
  Hillside      Kahn            5
  Valleyview    Kahn            6
  Valleyview    Green           7

deposit2 = Π account_number, balance, tuple_id (employee)

  account_number   balance   tuple_id
  A-305            500       1
  A-226            336       2
  A-177            205       3
  A-402            10000     4
  A-155            62        5
  A-408            1123      6
  A-639            750       7
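
Continuing the earlier sketch (an assumption for illustration, not from the slides): the vertical fragments can be recombined by joining on the tuple_id attribute, which is what makes the decomposition lossless.

```python
# Reconstruct the original tuples by joining vertical fragments on tuple_id.
def join_on_tuple_id(frag_a, frag_b):
    by_id = {t["tuple_id"]: t for t in frag_b}
    joined = []
    for t in frag_a:
        merged = {**t, **by_id[t["tuple_id"]]}
        merged.pop("tuple_id")          # tuple_id is only a reconstruction aid
        joined.append(merged)
    return joined

deposit1 = [{"tuple_id": 1, "branch_name": "Hillside", "customer_name": "Lowman"}]
deposit2 = [{"tuple_id": 1, "account_number": "A-305", "balance": 500}]
print(join_on_tuple_id(deposit1, deposit2))
# [{'branch_name': 'Hillside', 'customer_name': 'Lowman',
#   'account_number': 'A-305', 'balance': 500}]
```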

10 Advantages of Fragmentation
- Horizontal:
  - allows parallel processing on fragments of a relation
  - allows a relation to be split so that tuples are located where they are most frequently accessed
- Vertical:
  - allows tuples to be split so that each part of the tuple is stored where it is most frequently accessed
  - the tuple-id attribute allows efficient joining of vertical fragments
  - allows parallel processing on a relation
- Vertical and horizontal fragmentation can be mixed
  - Fragments may be successively fragmented to an arbitrary depth

11 Distributed Transactions
- A transaction may access data at several sites
- Each site has a local transaction manager responsible for:
  - Maintaining a log for recovery purposes
  - Participating in coordinating the concurrent execution of the transactions executing at that site
- Each site has a transaction coordinator responsible for:
  - Starting the execution of transactions that originate at the site
  - Distributing subtransactions to appropriate sites for execution
  - Coordinating the termination of each transaction that originates at the site, which may result in the transaction being committed at all sites or aborted at all sites

12 Transaction System Architecture

13 Commit Protocols
- Commit protocols are used to ensure atomicity across sites
  - A transaction which executes at multiple sites must either be committed at all the sites or aborted at all the sites
  - It is not acceptable to have a transaction committed at one site and aborted at another
- The two-phase commit (2PC) protocol is widely used
- The three-phase commit (3PC) protocol is more complicated and more expensive, but avoids some drawbacks of the two-phase commit protocol; it is not used in practice

14 Two-Phase Commit Protocol (2PC)
- Assumption: fail-stop model, i.e. failed sites simply stop working and do not cause any other harm, such as sending incorrect messages to other sites
- Execution of the protocol is initiated by the coordinator after the last step of the transaction has been reached
- The protocol involves all the local sites at which the transaction executed
- Let T be a transaction initiated at site S_i, and let the transaction coordinator at S_i be C_i

15 Phase 1: Obtaining a Decision
- The coordinator asks all participants to prepare to commit transaction T
  - C_i adds the record <prepare T> to the log and forces the log to stable storage
  - C_i then sends prepare T messages to all sites at which T executed
- Upon receiving the message, the transaction manager at a site determines if it can commit the transaction
  - If not, it adds a <no T> record to the log and sends an abort T message to C_i
  - If the transaction can be committed, it:
    - adds the record <ready T> to the log
    - forces all records for T to stable storage
    - sends a ready T message to C_i

16 Phase 2: Recording the Decision
- T can be committed if C_i received a ready T message from all the participating sites; otherwise, T must be aborted
- The coordinator adds a decision record, <commit T> or <abort T>, to the log and forces the record onto stable storage. Once the record reaches stable storage, the decision is irrevocable (even if failures occur)
- The coordinator sends a message to each participant informing it of the decision (commit or abort)
- Participants take appropriate action locally
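
A hedged sketch of both phases from the coordinator's point of view. The message vocabulary (prepare T, ready T, abort T, commit T) follows the slides; the force_log and send helpers, and the use of callables as stand-in participants, are assumptions made only so the sketch runs.

```python
# Two-phase commit, coordinator side (sketch; transport and logging are stubbed).

def force_log(record):
    # In a real system this appends to the log and forces it to stable storage.
    print("LOG:", record)

def send(site, message):
    # Stand-in for a network send; each "site" is a callable that returns its
    # vote ("ready T" or "abort T") when asked to prepare.
    return site(message)

def two_phase_commit(T, participants):
    # Phase 1: obtain a decision.
    force_log(f"<prepare {T}>")
    votes = [send(site, f"prepare {T}") for site in participants]

    # Phase 2: record and propagate the decision.
    if all(v == f"ready {T}" for v in votes):
        force_log(f"<commit {T}>")          # decision is now irrevocable
        decision = f"commit {T}"
    else:
        force_log(f"<abort {T}>")
        decision = f"abort {T}"
    for site in participants:
        send(site, decision)
    return decision

# Example: one participant votes ready, the other votes abort -> global abort.
ready_site = lambda msg: "ready T" if msg.startswith("prepare") else "ack"
abort_site = lambda msg: "abort T" if msg.startswith("prepare") else "ack"
print(two_phase_commit("T", [ready_site, abort_site]))   # abort T
```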

17 Handling of Failures - Site Failure
When a participating site S_k recovers, it examines its log to determine the fate of transactions that were active at the time of the failure:
- The log contains a <commit T> or <abort T> record: the transaction had completed, and nothing needs to be done
- The log contains a <ready T> record: the site must consult C_i to determine the fate of T
  - If T committed, redo(T) and write a <commit T> record
  - If T aborted, undo(T)
- The log contains no control records concerning T: this implies that S_k failed before responding to the prepare T message from C_i
  - Since the failure of S_k precludes the sending of such a response, coordinator C_i must abort T
  - S_k must execute undo(T)
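
A small sketch of the recovery decision table a participating site might apply when it scans its log after a crash. Representing the log as a list of record strings and the coordinator as a callable are assumptions for illustration only.

```python
# Site-failure recovery at a participant: decide the fate of T from the local log.
def recover_participant(log, T, ask_coordinator):
    if f"<commit {T}>" in log or f"<abort {T}>" in log:
        return "nothing to do"                 # T already completed locally
    if f"<ready {T}>" in log:
        decision = ask_coordinator(T)          # consult C_i (or the other sites)
        return "redo" if decision == "commit" else "undo"
    # No control records for T: the site failed before replying to <prepare T>,
    # so the coordinator must have aborted T; undo it locally.
    return "undo"

log = ["<ready T1>", "<commit T1>", "<ready T2>"]
print(recover_participant(log, "T1", lambda t: "commit"))   # nothing to do
print(recover_participant(log, "T2", lambda t: "commit"))   # redo
print(recover_participant(log, "T3", lambda t: "commit"))   # undo
```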

18 Handling of Failures - Coordinator Failure
- If the coordinator fails while the commit protocol for T is executing, then the participating sites must decide on T's fate:
  1. If an active site contains a <commit T> record in its log, then T must be committed
  2. If an active site contains an <abort T> record in its log, then T must be aborted
  3. If some active participating site does not contain a <ready T> record in its log, then the failed coordinator C_i cannot have decided to commit T
     - The active sites can abort T; however, such a site must reject any subsequent message from C_i
  4. If none of the above cases holds, then all active sites must have a <ready T> record in their logs, but no additional control records (such as <abort T> or <commit T>)
     - In this case the active sites must wait for C_i to recover to find the decision
- Blocking: active sites may have to wait for the failed coordinator to recover

19 Handling of Failures - Network Partition
- If the coordinator and all its participants remain in one partition, the failure has no effect on the commit protocol
- If the coordinator and its participants belong to several partitions:
  - Sites that are not in the partition containing the coordinator think the coordinator has failed, and execute the protocol for dealing with coordinator failure
    - No harm results, but sites may still have to wait for the decision from the coordinator
  - The coordinator and the sites that are in the same partition as the coordinator think that the sites in the other partitions have failed, and follow the usual commit protocol
    - Again, no harm results

20 Recovery and Concurrency Control
- In-doubt transactions have a <ready T> log record, but neither a <commit T> nor an <abort T> log record
- The recovering site must determine the commit-abort status of such transactions by contacting other sites; this can be slow and can potentially block recovery
- Recovery algorithms can instead note lock information in the log
  - Instead of <ready T>, write out <ready T, L>, where L is the list of locks held by T when the log record is written (read locks can be omitted)
  - For every in-doubt transaction T, all the locks noted in the <ready T, L> log record are reacquired
- After lock reacquisition, transaction processing can resume; the commit/rollback of in-doubt transactions is performed concurrently with the execution of new transactions
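
A sketch of the <ready T, L> idea from this slide: the participant records the locks T holds inside the ready record, so recovery can reacquire exactly those locks and let new transactions proceed while the in-doubt ones wait. The tuple-based log records and the dict-based lock table are assumptions for illustration.

```python
# Recording locks in the ready record and reacquiring them during recovery.
def write_ready_record(log, T, held_locks):
    # Instead of a bare <ready T>, persist the locks T holds (read locks omitted).
    log.append(("ready", T, sorted(held_locks)))

def reacquire_in_doubt_locks(log, lock_table):
    completed = {T for kind, T, *_ in log if kind in ("commit", "abort")}
    for kind, T, *rest in log:
        if kind == "ready" and T not in completed:   # T is in doubt
            for item in rest[0]:
                lock_table[item] = T                 # reacquire its locks
    return lock_table

log = []
write_ready_record(log, "T1", {"X", "Y"})
log.append(("commit", "T1"))
write_ready_record(log, "T2", {"Z"})
print(reacquire_in_doubt_locks(log, {}))   # {'Z': 'T2'}
```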

21 Concurrency Control
- Concurrency control schemes must be modified for use in a distributed environment
- We assume that each site participates in the execution of a commit protocol to ensure global transaction atomicity
- We assume all replicas of any item are updated

22 Single-Lock-Manager Approach
- The system maintains a single lock manager that resides in a single chosen site, say S_i
- When a transaction needs to lock a data item, it sends a lock request to S_i, and the lock manager determines whether the lock can be granted immediately
  - If yes, the lock manager sends a message to the site which initiated the request
  - If no, the request is delayed until it can be granted, at which time a message is sent to the initiating site

23 Single-Lock-Manager Approach
- The transaction can read the data item from any one of the sites at which a replica of the data item resides
- Writes must be performed on all replicas of a data item
- Advantages of the scheme:
  - Simple implementation
  - Simple deadlock handling
- Disadvantages of the scheme:
  - Bottleneck: the lock manager site becomes a bottleneck
  - Vulnerability: the system is vulnerable to lock manager site failure
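
A minimal sketch of the single-lock-manager idea, assuming exclusive locks only and a simple FIFO wait list; lock modes, messages, and failure handling are deliberately left out, so this is not a full implementation of the scheme.

```python
from collections import deque

# Single lock manager at one chosen site: grant immediately or queue the request.
class SingleLockManager:
    def __init__(self):
        self.holder = {}      # data item -> transaction currently holding it
        self.waiting = {}     # data item -> FIFO queue of waiting transactions

    def lock(self, txn, item):
        if item not in self.holder:
            self.holder[item] = txn
            return "granted"  # reply sent back to the initiating site
        self.waiting.setdefault(item, deque()).append(txn)
        return "delayed"

    def unlock(self, txn, item):
        if self.holder.get(item) == txn:
            waiters = self.waiting.get(item)
            if waiters:
                self.holder[item] = waiters.popleft()   # grant to next waiter
            else:
                del self.holder[item]

mgr = SingleLockManager()
print(mgr.lock("T1", "X"))   # granted
print(mgr.lock("T2", "X"))   # delayed
mgr.unlock("T1", "X")
print(mgr.holder["X"])       # T2
```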

24 Deadlock Handling
Consider the following two transactions and history, with item X and transaction T1 at site 1, and item Y and transaction T2 at site 2:

  T1: write(X)              T2: write(Y)
      write(Y)                  write(X)

  T1: X-lock on X           T2: X-lock on Y
      write(X)                  write(Y)
      wait for X-lock on Y      wait for X-lock on X

Result: a deadlock which cannot be detected locally at either site

25 Centralized Approach
- A global wait-for graph is constructed and maintained in a single site: the deadlock-detection coordinator
  - Real graph: the real, but unknown, state of the system
  - Constructed graph: the approximation generated by the coordinator during the execution of its algorithm
- The global wait-for graph can be reconstructed when:
  - a new edge is inserted in or removed from one of the local wait-for graphs
  - a number of changes have occurred in a local wait-for graph
  - the coordinator needs to invoke cycle detection
- If the coordinator finds a cycle, it selects a victim and notifies all sites. The sites roll back the victim transaction
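
A sketch of how the deadlock-detection coordinator might union the local wait-for graphs and look for a cycle. The adjacency-dict representation, the DFS, and the absence of a victim-selection policy are assumptions; real detectors are more careful about when the graph is rebuilt.

```python
# Centralized deadlock detection: union the local wait-for graphs, find a cycle.
def build_global_graph(local_graphs):
    global_graph = {}
    for g in local_graphs:
        for waiter, waited_on in g.items():
            global_graph.setdefault(waiter, set()).update(waited_on)
    return global_graph

def find_cycle(graph):
    def dfs(node, path, visited):
        visited.add(node)
        path.append(node)
        for nxt in graph.get(node, ()):
            if nxt in path:
                return path[path.index(nxt):]          # cycle found
            if nxt not in visited:
                cycle = dfs(nxt, path, visited)
                if cycle:
                    return cycle
        path.pop()
        return None

    visited = set()
    for node in graph:
        if node not in visited:
            cycle = dfs(node, [], visited)
            if cycle:
                return cycle
    return None

site1 = {"T1": {"T2"}}          # at S1: T1 waits for T2
site2 = {"T2": {"T1"}}          # at S2: T2 waits for T1
print(find_cycle(build_global_graph([site1, site2])))   # ['T1', 'T2'] -> deadlock
```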

26 Local and Global Wait-For Graphs
(Figure: local wait-for graphs at individual sites and the corresponding global wait-for graph)

27 Wait-For Graph for False Cycles
(Figure: initial state of the local wait-for graphs)

28 False Cycles (Cont.)
- Suppose that, starting from the state shown in the figure:
  1. T2 releases resources at S1, resulting in a remove T1 → T2 message from the transaction manager at site S1 to the coordinator
  2. T2 then requests a resource held by T3 at site S2, resulting in an insert T2 → T3 message from S2 to the coordinator
- Suppose further that the insert message reaches the coordinator before the remove message
  - this can happen due to network delays
- The coordinator would then find a false cycle T1 → T2 → T3 → T1
- The false cycle above never existed in reality
- False cycles cannot occur if two-phase locking is used

29 Distributed Query Processing
- For centralized systems, the primary criterion for measuring the cost of a particular strategy is the number of disk accesses
- In a distributed system, other issues must be taken into account:
  - The cost of data transmission over the network
  - The potential gain in performance from having several sites process parts of the query in parallel

30 Query Transformation
- Translating algebraic queries on fragments:
  - It must be possible to construct relation r from its fragments
  - Replace relation r by the expression that constructs relation r from its fragments
- Consider the horizontal fragmentation of the account relation into
    account1 = σ branch_name="Hillside" (account)
    account2 = σ branch_name="Valleyview" (account)
- The query σ branch_name="Hillside" (account) becomes
    σ branch_name="Hillside" (account1 ∪ account2)
  which is optimized into
    σ branch_name="Hillside" (account1) ∪ σ branch_name="Hillside" (account2)
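
A sketch evaluating the rewritten query over the fragments: the selection is pushed into the union, each fragment is filtered at its own site, and only matching tuples are combined. The in-memory fragments and the select helper are assumptions for illustration.

```python
# Pushing a selection through the union of horizontal fragments.
account1 = [{"branch_name": "Hillside",   "account_number": "A-305", "balance": 500}]
account2 = [{"branch_name": "Valleyview", "account_number": "A-177", "balance": 205}]

def select(fragment, predicate):
    # Evaluated locally at the site holding the fragment.
    return [t for t in fragment if predicate(t)]

hillside = lambda t: t["branch_name"] == "Hillside"

# sigma_{branch_name="Hillside"}(account1 U account2)
#   == sigma(account1) U sigma(account2); the second term is empty by construction.
result = select(account1, hillside) + select(account2, hillside)
print(result)   # only the Hillside tuples, contributed entirely by account1
```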

31 Example Query (Cont.)
- Since account1 has only tuples pertaining to the Hillside branch, we can eliminate the selection operation on it
- Apply the definition of account2 to obtain
    σ branch_name="Hillside" (σ branch_name="Valleyview" (account))
- This expression is the empty set regardless of the contents of the account relation
- The final strategy is for the Hillside site to return account1 as the result of the query

32 Simple Join Processing
- Consider the following relational-algebra expression, in which the three relations are neither replicated nor fragmented:
    account ⋈ depositor ⋈ branch
- account is stored at site S1
- depositor is stored at site S2
- branch is stored at site S3
- For a query issued at site S_I, the system needs to produce the result at site S_I

33 Possible Query Processing Strategies
- Ship copies of all three relations to site S_I and choose a strategy for processing the entire query locally at site S_I
- Ship a copy of the account relation to site S2 and compute temp1 = account ⋈ depositor at S2. Ship temp1 from S2 to S3, and compute temp2 = temp1 ⋈ branch at S3. Ship the result temp2 to S_I
- Devise similar strategies, exchanging the roles of S1, S2, S3
- The following factors must be considered:
  - amount of data being shipped
  - cost of transmitting a data block between sites
  - relative processing speed at each site
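
A toy cost comparison for two of the strategies above, considering data transfer only. Every number (relation sizes, intermediate-result sizes, per-block cost) is an invented assumption, and local processing speed is ignored, so this only illustrates the kind of arithmetic an optimizer would do.

```python
# Toy comparison of data-shipping strategies (transfer cost only; numbers invented).
blocks = {"account": 1000, "depositor": 400, "branch": 50}
cost_per_block = 1.0            # assumed cost of shipping one block between sites

# Strategy A: ship all three relations to the querying site S_I.
ship_all = sum(blocks.values()) * cost_per_block

# Strategy B: ship account to S2, join there, ship the intermediate result to S3,
# join again, then ship the final result to S_I (intermediate sizes are assumed).
temp1_blocks = 300              # assumed size of account joined with depositor
temp2_blocks = 200              # assumed size of temp1 joined with branch
ship_pipelined = (blocks["account"] + temp1_blocks + temp2_blocks) * cost_per_block

print(ship_all, ship_pipelined)   # 1450.0 1500.0 -- relative sizes decide the winner
```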

34 Semijoin Strategy
- Let r1 be a relation with schema R1 stored at site S1, and let r2 be a relation with schema R2 stored at site S2
- To evaluate the expression r1 ⋈ r2 and obtain the result at S1:
  1. Compute temp1 = Π R1 ∩ R2 (r1) at S1
  2. Ship temp1 from S1 to S2
  3. Compute temp2 = r2 ⋈ temp1 at S2
  4. Ship temp2 from S2 to S1
  5. Compute r1 ⋈ temp2 at S1. This is the same as r1 ⋈ r2
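
A sketch of the five steps above with tiny in-memory relations: only the join-attribute values of r1 are shipped to S2, and only the r2 tuples that can actually join come back. The relation contents, attribute names, and the dict-based index are illustrative assumptions.

```python
# Semijoin strategy: ship only the join-attribute values of r1, get back only
# the r2 tuples that can actually join.
r1 = [{"customer_name": "Lowman", "branch_name": "Hillside"},     # at S1
      {"customer_name": "Green",  "branch_name": "Valleyview"}]
r2 = [{"branch_name": "Hillside", "assets": 900},                 # at S2
      {"branch_name": "Brighton", "assets": 700}]

join_attrs = ["branch_name"]                      # R1 intersect R2

# Step 1 (at S1): temp1 = projection of r1 onto the common attributes.
temp1 = {tuple(t[a] for a in join_attrs) for t in r1}
# Step 2: ship temp1 to S2.  Step 3 (at S2): temp2 = r2 semijoin temp1.
temp2 = [t for t in r2 if tuple(t[a] for a in join_attrs) in temp1]
# Step 4: ship temp2 back to S1.  Step 5 (at S1): join r1 with temp2.
index = {tuple(t[a] for a in join_attrs): t for t in temp2}
result = []
for t in r1:
    key = tuple(t[a] for a in join_attrs)
    if key in index:
        result.append({**t, **index[key]})
print(result)   # same tuples as the full join, with less data shipped
```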

35 Formal Definition
- The semijoin of r1 with r2 is denoted by r1 ⋉ r2
- It is defined by Π R1 (r1 ⋈ r2)
- Thus, r1 ⋉ r2 selects those tuples of r1 that contributed to r1 ⋈ r2
- In step 3 above, temp2 = r2 ⋉ r1
- For joins of several relations, the above strategy can be extended to a series of semijoin steps

36 Join Strategies that Exploit Parallelism
- Consider r1 ⋈ r2 ⋈ r3 ⋈ r4, where relation ri is stored at site Si. The result must be presented at site S1
- r1 is shipped to S2 and r1 ⋈ r2 is computed at S2; simultaneously r3 is shipped to S4 and r3 ⋈ r4 is computed at S4
- S2 ships tuples of (r1 ⋈ r2) to S1 as they are produced; S4 ships tuples of (r3 ⋈ r4) to S1
- Once tuples of (r1 ⋈ r2) and (r3 ⋈ r4) arrive at S1, (r1 ⋈ r2) ⋈ (r3 ⋈ r4) is computed in parallel with the computation of (r1 ⋈ r2) at S2 and the computation of (r3 ⋈ r4) at S4
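
A sketch of the parallel plan using threads to stand in for sites S2 and S4; the hash-join helper, the tiny relations, and the use of futures are assumptions. In a real system the two subjoins run at different sites and stream tuples to S1, which this only loosely imitates.

```python
from concurrent.futures import ThreadPoolExecutor

# (r1 join r2) and (r3 join r4) computed "in parallel", then joined at S1.
def hash_join(left, right, attr):
    index = {}
    for t in right:
        index.setdefault(t[attr], []).append(t)
    return [{**l, **r} for l in left for r in index.get(l[attr], [])]

r1 = [{"a": 1, "b": "x"}]
r2 = [{"a": 1, "c": "y"}]
r3 = [{"d": 7, "c": "y"}]
r4 = [{"d": 7, "e": "z"}]

with ThreadPoolExecutor(max_workers=2) as pool:
    f12 = pool.submit(hash_join, r1, r2, "a")     # "at S2"
    f34 = pool.submit(hash_join, r3, r4, "d")     # "at S4"
    temp12, temp34 = f12.result(), f34.result()

print(hash_join(temp12, temp34, "c"))             # final join "at S1"
```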

