
Client-Server Caching
James Wann
April 4, 2000

Client-Server Architecture
- A client requests data or locks from a particular server
- The server in turn responds with the requested items
- This design is otherwise known as a data-shipping architecture
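The data-shipping interaction above can be sketched as follows. This is a minimal illustration, not the paper's implementation; all class and method names (`Server`, `Client`, `request`, `read`) are hypothetical.

```python
# Minimal sketch of a data-shipping architecture: the client asks the
# server for a page, the server ships it back, and the client caches it.

class Server:
    def __init__(self, pages):
        self.pages = pages          # page_id -> page data

    def request(self, page_id):
        # The server responds with the requested item.
        return self.pages[page_id]

class Client:
    def __init__(self, server):
        self.server = server
        self.cache = {}             # local copies of shipped pages

    def read(self, page_id):
        # Only contact the server on a cache miss.
        if page_id not in self.cache:
            self.cache[page_id] = self.server.request(page_id)
        return self.cache[page_id]

server = Server({1: "row-data"})
client = Client(server)
print(client.read(1))   # first read is shipped from the server, then cached
```

Later slides build on exactly this point: once pages live in client caches, some protocol must keep the copies consistent.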

Why Caching?
- Better utilizes the CPU and memory resources of clients
- Reduces reliance on the server
- Increases the scalability of the system

Disadvantages of Caching
- Increased network utilization
- Extra load on the system
- Increased transaction abort rates, depending on the algorithm

Test Workloads for Caching Algorithms
- HOTCOLD – There is a high probability that pages in the “hot set” will be read. However, pages in the “hot set” and “cold set” are equally likely to be written
- FEED – One client writes to pages in the “hot set”; the other clients have a high probability of reading from the “hot set”

Test Workloads for Caching Algorithms (cont’d)
- UNIFORM – All pages have equal probability of being either read or written
- HICON – There is a high probability of read/write conflicts
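A HOTCOLD-style access stream, as described above, could be generated like this. The probability values are illustrative assumptions, not the parameter settings used in the paper.

```python
import random

# Illustrative HOTCOLD access generator: reads strongly favor the hot set,
# while writes hit hot and cold pages with equal probability.
# p_hot_read and p_write are assumed values, not the paper's settings.
def hotcold_access(hot_pages, cold_pages, p_hot_read=0.8, p_write=0.2,
                   rng=random):
    is_write = rng.random() < p_write
    if is_write:
        # Writes: hot set and cold set are equally likely.
        pool = hot_pages if rng.random() < 0.5 else cold_pages
    else:
        # Reads: high probability of touching the hot set.
        pool = hot_pages if rng.random() < p_hot_read else cold_pages
    return ("write" if is_write else "read", rng.choice(pool))
```

Swapping the branch logic (uniform choice for both operations, or a single designated writer) would yield the UNIFORM and FEED workloads in the same style.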

Server-Based Two-Phase Locking
- Client transactions must obtain locks from the server before accessing a data item
- Easiest algorithm to implement
- Heavy messaging overhead
- Best for workloads with high data contention
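The server-side lock table at the heart of server-based 2PL can be sketched as below. This is a simplified model (the API names are hypothetical): every lock request is a message to the server, and a refusal stands in for the requester blocking.

```python
# Sketch of a server-side lock table for server-based 2PL.
# No locks are cached at clients; every request goes to the server.

class ServerLockManager:
    def __init__(self):
        self.locks = {}   # item -> (mode, set of holding clients)

    def acquire(self, client, item, mode):
        """Grant the lock if compatible; return False on conflict
        (a real server would block the request until it can be granted)."""
        held = self.locks.get(item)
        if held is None:
            self.locks[item] = (mode, {client})
            return True
        held_mode, holders = held
        if mode == "read" and held_mode == "read":
            holders.add(client)        # shared read locks are compatible
            return True
        if holders == {client}:        # re-entry or read->write upgrade
            self.locks[item] = (mode, holders)
            return True
        return False                   # conflict with another client

    def release(self, client, item):
        mode, holders = self.locks[item]
        holders.discard(client)
        if not holders:
            del self.locks[item]
```

Note that every `acquire`/`release` here corresponds to a network message, which is exactly the messaging overhead the slide points out.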

Optimistic Two-Phase Locking
- Each client has its own lock manager
- Upon commit, the client sends a message to the server stating which pages were updated
- The server sends an update message to all clients holding copies of those pages
- The next action depends on the algorithm variant
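The commit-time flow above can be sketched as follows. All names here are hypothetical, and the variant-specific step (invalidate vs. propagate) is deliberately left open, matching the slide.

```python
# Sketch of the O2PL commit path: locks are taken locally at each client,
# and other copies are only notified at commit time via the server.

class O2PLServer:
    def __init__(self):
        self.copies = {}   # page -> set of clients caching a copy

    def register_copy(self, client, page):
        self.copies.setdefault(page, set()).add(client)

    def commit(self, writer, updated_pages):
        # Send an update message to every other client holding a copy.
        for page in updated_pages:
            for client in self.copies.get(page, set()):
                if client is not writer:
                    client.on_update(page)

class O2PLClient:
    def __init__(self, name):
        self.name = name
        self.received = []   # update messages seen so far

    def on_update(self, page):
        # What happens next depends on the variant (invalidate vs.
        # propagate); here we only record that the message arrived.
        self.received.append(page)
```

The O2PL-I/P/D/ND algorithms on the following slides differ only in what `on_update` does with the notification.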

Commit Phase [diagram: clients exchanging commit messages with the server]

Update Message Phase [diagram: the server sending update messages to the clients]

Non-Dynamic O2PL Algorithms
- O2PL-Invalidate (O2PL-I) – invalidates the updated pages at the clients receiving the message
- O2PL-Propagate (O2PL-P) – propagates the changed pages to the clients that already have copies

Dynamic O2PL Algorithms
- O2PL-Dynamic (O2PL-D) – chooses between propagation and invalidation based on certain criteria
- Criterion 1 – The page resides at the client to which the message is being sent
- Criterion 2 – The page was previously propagated to the client and has since been re-accessed

Dynamic O2PL Algorithms (cont’d)
- O2PL-New Dynamic (O2PL-ND) – uses the same criteria as O2PL-D with one addition: a structure called the invalidate window is used to hold the last n invalidated pages
- The most recently invalidated pages are placed at the front of the window

Dynamic O2PL Algorithms (cont’d)
- If a page is accessed and its number is found in the invalidate window, the entry is marked as a mistaken invalidation
- Criterion 3 – The page was found to have been previously invalidated by mistake
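The invalidate-window mechanism described across the last two slides could look like the sketch below. The window size and the API names are assumptions for illustration.

```python
from collections import deque

# Sketch of the O2PL-ND invalidate window: a bounded list of the last n
# invalidated pages, most recent at the front. Re-accessing a page that is
# still in the window marks its invalidation as a mistake (Criterion 3),
# hinting that propagation would have been the better choice.
class InvalidateWindow:
    def __init__(self, n=8):                 # n=8 is an assumed size
        self.window = deque(maxlen=n)        # newest entry at the left
        self.mistakes = set()

    def record_invalidation(self, page):
        self.window.appendleft(page)

    def on_access(self, page):
        # The page came back while still in the window: the earlier
        # invalidation is flagged as a mistake.
        if page in self.window:
            self.mistakes.add(page)

    def was_mistaken(self, page):
        # Criterion 3 check used when deciding propagate vs. invalidate.
        return page in self.mistakes
```

Because `deque(maxlen=n)` silently drops the oldest entry, pages invalidated long ago can no longer trigger the mistake flag, which is the point of bounding the window.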

Evaluation of O2PL Algorithms (HOTCOLD)
- O2PL-I and O2PL-ND have higher throughput than O2PL-P and O2PL-D
- This is because propagated updates may never be accessed again (wasted propagations)
- O2PL-I, O2PL-D, and O2PL-ND perform similarly on a faster network

Evaluation of O2PL Algorithms (FEED)
- O2PL-P, O2PL-D, and O2PL-ND have better throughput than O2PL-I
- This scenario favors propagation (it keeps “hot” pages in the buffer)
- However, performance is comparable with small buffers

Evaluation of O2PL Algorithms (UNIFORM)
- O2PL-P and O2PL-D have far lower throughput than the other algorithms
- Higher probability of wasted propagations

Figures 1 through 6 in paper

Callback Locking
- Allows caching of data pages and locks
- Clients obtain locks by making a request to the server
- If there is a lock conflict, the clients holding the locks are asked to release them
- The lock request is granted only when all conflicting locks are released

CB-Read
- Only read locks are cached
- When a write lock request is made, the server asks all clients holding the specified page to release it
- If all clients comply, the write lock is granted
- All subsequent lock requests are blocked until the write lock is released
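The CB-Read callback sequence above can be sketched as follows. The class and method names are hypothetical, and a refusal stands in for the request blocking at the server.

```python
# Sketch of CB-Read callbacks: before granting a write lock, the server
# calls back every client caching the page and asks it to release its copy.

class CallbackServer:
    def __init__(self):
        self.cached_by = {}   # page -> set of clients with a cached copy

    def read(self, client, page):
        # Reads ship the page and register the cached copy at the server.
        self.cached_by.setdefault(page, set()).add(client)
        client.cache.add(page)

    def write_lock(self, writer, page):
        # Callback phase: ask every other caching client to release.
        for client in list(self.cached_by.get(page, set())):
            if client is not writer:
                if not client.release(page):
                    return False        # a client refused: request blocks
                self.cached_by[page].discard(client)
        return True                      # all callbacks done: lock granted

class CachingClient:
    def __init__(self):
        self.cache = set()

    def release(self, page):
        # A real client would refuse while a conflicting local transaction
        # holds the page; that case is not modeled here.
        self.cache.discard(page)
        return True
```

The callback messages in `write_lock` are the extra per-transaction messages that the later evaluation slides charge against the Callback algorithms.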

CB-All
- Both read and write locks are cached, and write locks are not released at the end of a transaction
- The page copy at one client is designated as the exclusive copy
- Upon a read request from another client, the exclusive copy is retrieved and the original client no longer holds the exclusive copy
- This is called a downgrade request
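The downgrade step above can be illustrated with a small sketch (names are assumptions, not the paper's):

```python
# Sketch of the CB-All downgrade: one client holds the exclusive copy of a
# page; a read request from another client downgrades it to a shared copy.

class ExclusivePage:
    def __init__(self, owner):
        self.exclusive_owner = owner
        self.sharers = {owner}          # clients holding a copy

    def read_request(self, client):
        # Downgrade: the former owner keeps its copy but loses exclusivity,
        # and the reader obtains a shared copy.
        self.exclusive_owner = None
        self.sharers.add(client)

    def is_exclusive_at(self, client):
        return self.exclusive_owner is client
```

Keeping the write lock cached (rather than releasing it at commit) is what makes the explicit downgrade message necessary in CB-All.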

Evaluation of Callback Algorithms (HOTCOLD)
- O2PL-ND has better throughput than the Callback algorithms
- The Callback algorithms require more messages per transaction
- However, the throughput difference is not significant

Evaluation of Callback Algorithms (FEED)
- O2PL-ND has better throughput than either Callback algorithm
- This is because in O2PL-ND the pages are usually already cached at the clients, so no extra messages are needed
- Again, the performance difference is not significant

Evaluation of Callback Algorithms (UNIFORM)
- All three algorithms have similar throughput

Evaluation of Callback Algorithms (HICON)
- O2PL-ND performance suffers because of frequent aborts due to late deadlock detection
- CB-Read has higher throughput than CB-All because of its smaller messaging requirements

Figures 8 through 13 in paper

Figures 14 through 15 in paper

Conclusion
- O2PL-ND proves to be a more flexible algorithm than O2PL-D
- Invalidation is the default, rather than propagation
- Ideal for a small number of clients

Conclusion (cont’d)
- CB-Read is a more adaptable algorithm than O2PL-ND and CB-All
- It detects deadlock earlier than O2PL-ND and avoids aborts for long transactions
- It has lower messaging overhead than CB-All
- Server-based 2PL works best with a large number of clients in a high-contention situation
- Perhaps further research should be done in light of faster LANs (e.g. Fast Ethernet)