
1 Two Techniques For Improving Distributed Database Performance
ICS 214B Presentation
Ambarish Dey, Vasanth Venkatachalam
March 18, 2004

2 Issues In Distributed Databases
fast communication among clients
– data requested by a client can be located and transferred quickly
good utilization of client CPU and memory resources
removing I/O bottlenecks
– reducing disk accesses
– reducing communication with servers
increased scalability

3 Focus Of This Talk
two approaches for improving performance of distributed systems
client server caching (Franklin and Carey)
fast page transfer schemes (Mohan and Narang)
– shared disk architecture
similarities

4 Client Server Caching
caching of data and locks at multiple clients
– minimizes communication overhead between clients and servers
– reduces contention for server resources
– reduces contention for data
– increases autonomy of clients

5 Existing Techniques
existing techniques for distributed data management fall into three categories
– techniques that avoid caching
– techniques that cache data but not locks
– optimistic two-phase locking (O2PL)
  O2PL-Invalidate (O2PL-I)
  O2PL-Propagate (O2PL-P)
  O2PL-Dynamic (O2PL-D)

6 Novel Techniques
callback locking
– an alternate method of maintaining cache consistency
adaptive locking
– a protocol that improves upon O2PL-D

7 Callback Locking
supports caching of data pages and non-optimistic caching of locks
locks obtained prior to data access
server issues a ‘callback’ for conflicting locks
no consistency maintenance operations in the commit phase
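
The callback path can be pictured with a minimal sketch. The class and method names (LockServer, request_write_lock, drop_copy) are illustrative, not the paper's implementation, and real systems would block the requester until every callback is acknowledged.

```python
# Minimal sketch of callback locking at the server (illustrative names).
# Locks are cached at clients; on a conflicting request the server
# "calls back" the current holders before granting the new lock.

class LockServer:
    def __init__(self):
        self.read_holders = {}          # page -> clients caching a read lock

    def request_read_lock(self, page, requester):
        self.read_holders.setdefault(page, set()).add(requester)
        return True

    def request_write_lock(self, page, requester, clients):
        holders = self.read_holders.get(page, set()) - {requester}
        for c in holders:               # callback every conflicting holder
            clients[c].drop_copy(page)  # client purges its cached copy/lock
        self.read_holders[page] = set()
        return True                     # granted only after callbacks complete


class Client:
    def __init__(self):
        self.cache = set()

    def drop_copy(self, page):
        self.cache.discard(page)        # callback handler at the client


clients = {"A": Client(), "B": Client()}
server = LockServer()
server.request_read_lock("p1", "A")
clients["A"].cache.add("p1")
server.request_write_lock("p1", "B", clients)   # triggers a callback to A
print(clients["A"].cache)                        # set(): A's copy was purged
```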

8 Techniques For Callback Locking
callback read (CB-Read)
– caches only read locks
– a lock is issued only after all of the callbacks have completed
– on commit, pages are sent back to the server, but copies, and hence read locks, are retained at the client
callback all (CB-All)
– write locks, in addition to read locks, can be cached at clients
– information about exclusive copies is stored at the client
– the server issues downgrade requests when it receives read lock requests for such a page
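
A rough sketch of the two behaviours described above, under the stated assumptions; the Client and Server classes and method names are hypothetical stand-ins with no real networking.

```python
# Sketch of CB-Read commit handling and CB-All downgrades (illustrative).

class Server:
    def __init__(self):
        self.pages = {}

    def receive_page(self, page, data):
        self.pages[page] = data


class Client:
    def __init__(self):
        self.cache, self.locks = {}, {}

    # CB-Read: dirty pages go back to the server at commit, but the copy
    # and a read lock stay cached at the client.
    def cb_read_commit(self, server, dirty):
        for page, data in dirty.items():
            server.receive_page(page, data)
            self.cache[page], self.locks[page] = data, "read"

    # CB-All: a cached write lock is downgraded when the server relays a
    # read lock request from some other client.
    def downgrade(self, page):
        if self.locks.get(page) == "write":
            self.locks[page] = "read"


c, s = Client(), Server()
c.cb_read_commit(s, {"p1": b"new bytes"})
print(c.locks["p1"])    # 'read' -- copy retained after commit
c.locks["p2"] = "write"
c.downgrade("p2")
print(c.locks["p2"])    # 'read' -- write lock downgraded on a remote read
```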

9 Novel Techniques
callback locking
adaptive locking

10 The New Adaptive Heuristic
the O2PL variants try to optimize the actions they perform on the remote sites once a lock has been obtained
propagate a page to a remote site only when
– the page is resident at the site when the consistency operation is attempted, and either
– the page was previously propagated to this site and has been re-accessed since then, or
– the page was previously invalidated at the site and that invalidation was a mistake
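
One way to read these conditions is as a residency check plus evidence that a propagation would actually be used; the sketch below encodes that reading. The dataclass fields are assumed bookkeeping for illustration, not the paper's data structures.

```python
# Sketch of the adaptive propagate-vs-invalidate decision for one remote site.
from dataclasses import dataclass

@dataclass
class SiteCopyState:
    resident: bool                  # page copy currently cached at the site
    propagated_before: bool         # a propagation was sent earlier...
    reaccessed_since: bool          # ...and the site used it afterwards
    invalidated_before: bool        # an earlier invalidation happened...
    invalidation_was_mistake: bool  # ...and the site soon re-fetched the page

def should_propagate(s: SiteCopyState) -> bool:
    if not s.resident:
        return False                # never propagate to a site without the page
    if s.propagated_before and s.reaccessed_since:
        return True                 # the previous propagation paid off
    if s.invalidated_before and s.invalidation_was_mistake:
        return True                 # the previous invalidation was the wrong call
    return False                    # otherwise invalidate the remote copy

print(should_propagate(SiteCopyState(True, True, True, False, False)))   # True
print(should_propagate(SiteCopyState(True, False, False, True, False)))  # False -> invalidate
```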

11 Where We Are
client server caching
fast page transfer schemes
– shared disk architecture
comparisons

12 Motivation
disk-based data sharing involves a lot of overhead
system A wants to access a page owned by system B
– the global lock manager (GLM) sends B a lock conflict message
– B writes the page to disk after forcing its log (WAL)
– B sends the GLM a message to downgrade its lock, allowing A to read the page
– A reads the page from disk
cost is 2 I/Os, 2 messages, and a log force
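
The cost tally can be made concrete with a small walk-through; the steps mirror the slide, and the function name is purely illustrative.

```python
# Walk-through of the disk-based page transfer described above, tallying
# the cost it incurs (illustrative only).

def disk_based_transfer():
    ios, messages, log_forces = 0, 0, 0

    # 1. The GLM tells B that its lock on the page conflicts with A's request.
    messages += 1
    # 2. B forces its log (WAL) and writes the dirty page to disk.
    log_forces += 1
    ios += 1
    # 3. B tells the GLM it has downgraded its lock.
    messages += 1
    # 4. A reads the page from disk.
    ios += 1

    return ios, messages, log_forces

print(disk_based_transfer())   # (2, 2, 1): 2 I/Os, 2 messages, 1 log force
```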

13 Alternative: Fast Page Transfer
systems transfer pages through message passing rather than disk I/Os
improves performance
requires buffer coherency protocols
requires special recovery protocols
– what if a message is lost?
– what if one or more systems fail?
four schemes for fast page transfer, including the medium, fast, and superfast schemes

14 SuperFast Page Transfer
pages are transferred from one system to another without writing them or their log records to disk
the final owner is responsible for writing the page to disk and for ensuring that the logs of all updates by all systems are written to disk
cost is 0 I/Os and 3 messages
how to deal with system failures?
how to preserve write-ahead logging?
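
A sketch of the message flow, under the assumptions that ownership travels with the page and that the GLM is told when the transfer completes; the System and GLM stubs and method names are hypothetical.

```python
# Sketch of a super-fast page transfer: the page travels in messages,
# nothing is written to disk, and ownership moves to the receiver
# (illustrative structure, not the paper's protocol code).

class System:
    def __init__(self):
        self.buffer, self.is_owner = {}, {}

class GLM:
    def grant(self, page, system): pass     # stub: record the new lock holder
    def ack_transfer(self, page): pass      # stub: note the transfer finished

def superfast_transfer(page, sender, receiver, glm):
    messages = 0
    glm.grant(page, receiver); messages += 1        # lock granted to receiver
    data = sender.buffer.pop(page); messages += 1   # page shipped in a message
    receiver.buffer[page] = data
    receiver.is_owner[page] = True                  # receiver becomes the owner
    glm.ack_transfer(page); messages += 1           # GLM told it completed
    return messages                                 # 3 messages, 0 I/Os

a, b, glm = System(), System(), GLM()
b.buffer["p1"] = b"dirty page image"
print(superfast_transfer("p1", b, a, glm))          # 3
```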

15 Recovery
uses a merged log of all systems that have updated the page
the recovery LSN (RLSN) is the earliest point in the merged log from which redo processing for a page has to start
– initialized to HIGH (no recovery needed)
– changed to the next LSN value when a page is locked in update mode
– reset to HIGH after the updated page is written to disk
the global lock manager adjusts the RLSN value as it receives information from the systems
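
The RLSN lifecycle on this slide can be written out directly; the table class and HIGH sentinel below are illustrative bookkeeping, not the paper's code.

```python
# Sketch of RLSN bookkeeping at the global lock manager.
HIGH = float("inf")     # stands in for the paper's HIGH value: no redo needed

class RLSNTable:
    def __init__(self):
        self.rlsn = {}                      # page -> recovery LSN

    def on_update_lock(self, page, next_lsn):
        # First update-mode lock since the last disk write pins the earliest
        # log record that redo for this page could need.
        if self.rlsn.get(page, HIGH) == HIGH:
            self.rlsn[page] = next_lsn

    def on_page_written(self, page):
        self.rlsn[page] = HIGH              # on-disk copy is current again

    def redo_start(self):
        live = [v for v in self.rlsn.values() if v != HIGH]
        return min(live) if live else HIGH  # where a redo pass would begin

t = RLSNTable()
t.on_update_lock("p1", 100)
t.on_update_lock("p1", 120)   # already tracked; RLSN stays 100
t.on_update_lock("p2", 130)
t.on_page_written("p1")
print(t.redo_start())         # 130
```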

16 Single System Failure
locking information is preserved at the GLM
a single system is responsible for merging the logs and doing REDO processing for all pages on behalf of all failed systems
pages requiring REDO are those locked in U mode whose RLSN < HIGH
the minimum of these RLSN values is the starting point in the merged log for the REDO pass
ARIES-style REDO, followed by UNDO
– if the LSN of a log record > the LSN of the page, reapply the log record
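
A sketch of that REDO pass, assuming a merged log of (lsn, page, apply) records and pages that carry a page LSN as in ARIES; the record layout is an assumption for illustration.

```python
# Sketch of the REDO pass over a merged log.
HIGH = float("inf")

def redo_pass(merged_log, pages, rlsn, u_locked):
    # Pages needing redo: locked in update mode with RLSN < HIGH.
    candidates = {p for p in u_locked if rlsn.get(p, HIGH) < HIGH}
    if not candidates:
        return
    start = min(rlsn[p] for p in candidates)      # earliest relevant record
    for lsn, page, apply_fn in merged_log:
        if lsn < start or page not in candidates:
            continue
        if lsn > pages[page]["page_lsn"]:         # update not yet on the page
            apply_fn(pages[page])                 # reapply the logged change
            pages[page]["page_lsn"] = lsn

pages = {"p1": {"page_lsn": 90, "val": 0}}
log = [(100, "p1", lambda pg: pg.update(val=1)),
       (110, "p1", lambda pg: pg.update(val=2))]
redo_pass(log, pages, rlsn={"p1": 100}, u_locked={"p1"})
print(pages["p1"])    # {'page_lsn': 110, 'val': 2}
```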

17 Complex System Failure
the GLM crashes and at least one LLM (local lock manager) crashes, so locking information is lost
each system periodically checkpoints the global lock manager's state
– write a Begin_GLM_Checkpoint log record
– request from the GLM all pages whose RLSN is not equal to HIGH
– write these into an End_GLM_Checkpoint log record
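
A sketch of that checkpointing step; the record names follow the slide, while the log and RLSN snapshot representation are assumptions made for illustration.

```python
# Sketch of checkpointing the GLM's RLSN state into a system's log.
HIGH = float("inf")

def checkpoint_glm_state(log, glm_rlsn):
    log.append(("Begin_GLM_Checkpoint",))
    # Ask the GLM for every page whose RLSN is not HIGH, i.e. every page
    # that might still need redo, and snapshot those RLSNs in the log.
    dirty = {page: lsn for page, lsn in glm_rlsn.items() if lsn != HIGH}
    log.append(("End_GLM_Checkpoint", dirty))

log = []
checkpoint_glm_state(log, {"p1": 100, "p2": HIGH, "p3": 140})
print(log[-1])   # ('End_GLM_Checkpoint', {'p1': 100, 'p3': 140})
```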

18 Complex System Failure
find the minimum RLSN contained in the End_GLM_Checkpoint log record
start REDO processing at this RLSN, or at the LSN of the Begin_GLM_Checkpoint log record if all pages have an RLSN of HIGH
continue until the end of the log is reached
undo processing is done by the individual systems
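
Choosing the starting point is then a one-line decision, sketched below; it assumes the End_GLM_Checkpoint record carries only the pages whose RLSN was not HIGH, as in the previous sketch.

```python
# Sketch of picking the REDO starting point after a complex failure.

def redo_start_lsn(begin_ckpt_lsn, end_ckpt_rlsns):
    live = list(end_ckpt_rlsns.values())
    if live:
        return min(live)         # earliest RLSN recorded at the checkpoint
    return begin_ckpt_lsn        # all pages were clean: start at the Begin record

print(redo_start_lsn(500, {"p1": 320, "p3": 410}))   # 320
print(redo_start_lsn(500, {}))                       # 500
```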

19 Preserving WAL
pages contain slots for attaching log information
– when transferring a page, a system piggybacks the LSN of the latest log record it hasn’t written to disk
the final owner reads the slots and enforces WAL
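
A sketch of that piggybacking, under the assumption that each sender stamps one slot and the final owner forces the corresponding logs before the page is written; the Page layout and helper callbacks are illustrative.

```python
# Sketch of preserving WAL across fast page transfers.

class Page:
    def __init__(self):
        self.slots = []                     # piggybacked (system, unflushed LSN)
        self.data = b""

def send_page(page, sender_id, unflushed_lsn):
    page.slots.append((sender_id, unflushed_lsn))
    return page                             # shipped in a message, not via disk

def final_owner_write(page, force_log_to, write_page_to_disk):
    # WAL: every log record the page depends on must reach disk first.
    for system, lsn in page.slots:
        force_log_to(system, lsn)
    write_page_to_disk(page)
    page.slots.clear()

p = send_page(Page(), sender_id="B", unflushed_lsn=240)
final_owner_write(p,
                  force_log_to=lambda s, lsn: print(f"force {s} log to {lsn}"),
                  write_page_to_disk=lambda pg: print("page written"))
```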

20 Conclusion
the page transfer schemes incorporate ideas from client server caching for buffer coherency
– a central server maintains LSN information, and transactions update this information when they commit
– lock degradation
caching and fast page transfer can coexist, but both share tradeoffs
– overhead of maintaining cache/buffer coherency
– overhead of recovery protocols

