1
Dynamic parallel access to replicated content in the Internet Pablo Rodriguez and Ernst W. Biersack IEEE/ACM Transactions on Networking, August 2002
2
Introduction Popular content is frequently replicated on multiple servers or caches. Even if the fastest server is selected, its performance can fluctuate during a download session. Users can obtain better and more uniform performance by connecting to several servers: the user downloads different parts of the same document from each of the servers in parallel.
3
Parallel-access schemes History-based TCP parallel access: Clients specify a priori which part of a document each mirror server must deliver. The portion of the document delivered by a server should be proportional to its service rate. Dynamic TCP parallel access: A client partitions a document into many small blocks and requests a different block from each server.
4
History-based parallel access The client divides a document of size S into M disjoint blocks, one for each of the M servers. μ_i is the transmission rate of server i. α_i·S is the size of the block delivered by server i. Download time: T_i = α_i·S / μ_i. To achieve the maximum speedup, T_i = T_j for 1 ≤ i, j ≤ M.
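A minimal sketch (not from the slides) of this allocation in Python: given previously measured service rates μ_i, the fractions α_i = μ_i / Σ_j μ_j equalize the expected download times. The rates in the example are assumed values.

```python
def history_based_split(doc_size, rates):
    """Split a document of doc_size bytes into one block per server,
    sized proportionally to each server's historical rate mu_i, so that
    the expected download times alpha_i*S/mu_i are equal for all servers."""
    total_rate = sum(rates)
    fractions = [r / total_rate for r in rates]        # alpha_i = mu_i / sum_j mu_j
    blocks = [round(f * doc_size) for f in fractions]  # block size alpha_i * S
    # Give any rounding remainder to the fastest server.
    blocks[rates.index(max(rates))] += doc_size - sum(blocks)
    return blocks

# Example (assumed rates): 763 KB document, two mirrors at 100 KB/s and 50 KB/s.
print(history_based_split(763 * 1024, [100.0, 50.0]))
```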
5
Experimental setup
6
Performance (1/2) S=763KB, M=2
7
Performance (2/2) S=763KB, M=4
8
Dynamic parallel access The document is divided into B blocks of equal size. The client uses the HTTP/1.1 byte-range (Range) header to request a block. Dynamic parallel-access scheme: A client first requests one block from every server. Every time the client has completely received a block from a server, it requests another block from that server. When the client has received all blocks, it reassembles them to reconstruct the whole document.
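A minimal Python sketch of this dynamic scheme, assuming the mirrors support HTTP/1.1 Range requests and that the document size is already known; the mirror URLs, block size, and lack of error handling are illustrative simplifications, not part of the original design.

```python
import queue
import threading
import urllib.request

def parallel_download(mirrors, size, block_size=16 * 1024):
    """Dynamic parallel access: each server keeps fetching the next
    unassigned block until none remain, so faster servers end up
    delivering more blocks."""
    n_blocks = -(-size // block_size)            # ceil(size / block_size)
    pending = queue.Queue()
    for b in range(n_blocks):
        pending.put(b)
    parts = [None] * n_blocks

    def worker(url):
        while True:
            try:
                b = pending.get_nowait()         # next block still to be fetched
            except queue.Empty:
                return
            start = b * block_size
            end = min(start + block_size, size) - 1
            req = urllib.request.Request(
                url, headers={"Range": f"bytes={start}-{end}"})
            with urllib.request.urlopen(req) as resp:  # expects 206 Partial Content
                parts[b] = resp.read()

    threads = [threading.Thread(target=worker, args=(u,)) for u in mirrors]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return b"".join(parts)                       # reassemble the whole document
```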
9
Block request
10
Idle time Interblock idle time: Each server is idle between two consecutive block transfers; this idle time corresponds to one RTT. It can be avoided by pipelining requests. Termination idle time: Not all servers terminate at the same time. It can be reduced by requesting the still-missing blocks from servers that have become idle.
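A rough illustration of the interblock idle overhead; the RTT and block counts below are assumed values, not measurements from the paper.

```python
# Illustrative only: RTT, B, and M are assumptions, not figures from the slides.
B, M, rtt = 30, 2, 0.1                 # blocks, servers, round-trip time in seconds
blocks_per_server = B / M              # each mirror serves ~15 blocks on average
idle_without_pipelining = blocks_per_server * rtt   # one RTT lost per block
print(f"per-server interblock idle time ~= {idle_without_pipelining:.1f} s")
# With pipelining, the next request is sent before the current block finishes,
# so this idle time is overlapped with data reception.
```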
11
Performance (1/2)
12
Performance (2/2)
13
Performance of parallel access with large documents
14
Performance of parallel access with small documents
15
Shared bottleneck A single server may already consume all the bandwidth of the bottleneck link. When a client uses additional servers in parallel, there is no residual network bandwidth: packets from the different servers interfere and compete for bandwidth. The download time obtained with dynamic parallel access is then always slightly higher than the download time obtained from the fastest server alone, because of the interblock idle time.
16
Performance (1/2) S = 256KB, B = 20, M = 3
17
Performance (2/2) S = 763KB, B = 30, M = 2
18
With pipelining (1/2) S = 256KB, B = 20, M = 3 S / B ≥ RTT
19
With pipelining (2/2) S = 763KB, B = 30, M = 2
20
Discussion Overhead: An extra server access to find out the document size S; block request messages. Limitations: Must be applied to large files. Benefits: Avoids the risk of selecting a slow or unstable server; suitable for peer-to-peer applications or content-distribution networks.