
Measuring the Capacity of a Web Server USENIX Symposium on Internet Technologies and Systems '97 2000.12.14 Koo-Min Ahn


1 Measuring the Capacity of a Web Server USENIX Symposium on Internet Technologies and Systems '97 2000.12.14 Koo-Min Ahn

2 Contents Introduction Problems in generating synthetic HTTP requests A scalable method for generating HTTP requests (main idea) Quantitative evaluation Conclusion

3 Introduction Improving web performance – Web caching, HTTP protocol enhancements, better HTTP servers and proxies, server OS and software Measuring web software performance – Characterizing web server workloads: file types, transfer sizes, other related statistics Recently, web server evaluation – Generation of synthetic HTTP client traffic – Problem: this leads to deviation of benchmarking conditions from reality and fails to predict the performance of a given web server

4 Problems in generating synthetic HTTP requests (1) Inability to generate excess load – A huge number of clients is needed – Think time distributions have large mean and variance – Think times are not independent – Web content causes high correlation of requests Peak request rates can exceed the capacity of the server HTTP requests arriving at a server are bursty

5 Problems in generating synthetic HTTP requests (2) Additional problem: WAN-based Web – The simple method does not model high and variable WAN delays – Packet losses due to congestion are absent in the LAN-based method – With an increasing number of clients per client machine, client CPU and memory contention are likely to arise → the bottleneck in a web transaction is no longer the server but the client

6 A scalable method for generating HTTP requests (continued) Basic architecture – If WAN effects are to be evaluated, the client machines should be connected to the server through a router

7 (figure slide: basic architecture)

8 S-Client An S-Client consists of a pair of processes connected by a Unix domain socketpair The connection establishment process is responsible for generating HTTP requests at a certain rate and with a certain request distribution After a connection is established, the connection establishment process sends an HTTP request to the server, then passes the connection on to the connection handling process, which handles the HTTP response

9 Connection establishment process of S-Client (1) The process opens D connections to the server using D sockets in non-blocking mode D connection requests are spaced out over T ms After the process executes a non-blocking connect() to initiate a connection, it checks whether, for any of its D active sockets, the connection is complete or T ms have elapsed since connect() was performed on that socket

10 Connection establishment process of S-Client (2) If, for any of its D active sockets, the connection is complete: the process sends an HTTP request on the newly established connection, hands the connection off to the other process of the S-Client through the Unix domain socketpair, closes the socket, and then initiates another connection to the server If T ms have elapsed since connect() was performed on a socket: the process simply closes the socket and initiates another connection to the server

11 Connection handling process of S-Client The process waits for (1) data to arrive on any of the active connections or (2) a new connection to arrive on the Unix domain socket connecting it to the other process In the case of new data on an active socket, it reads this data If this completes the server's response, it closes the socket A new connection arriving on the Unix domain socket is simply added to the set of active connections

12 Two key ideas of S-Client 1) Shorten TCP's connection establishment timeout – Use a non-blocking connect() and close the socket if no connection is established after T seconds – The purpose is to allow the generation of request rates beyond the capacity of the server with a reasonable number of client sockets 2) Maintain a constant number of unconnected sockets that are trying to establish new connections – This ensures that the generated request rate is independent of the rate at which the server handles requests – Once the request rate matches the capacity of the server, the additional queuing delay in the server's accept queue no longer reduces the request rate of the simulated clients

13 Think time distribution This scheme uses a constant think time chosen to achieve a certain constant request rate It is possible to generate more complex request processes by adding appropriate think periods between the point where an S-Client detects that a connection was established and when it next attempts to initiate another connection

14 Experiment setup Client machines – 4 Sun SPARC 20 model 61 workstations (60 MHz SuperSPARC+, 36 KB L1, 1 MB L2, SPECint92 98.2) – 32 MB memory and SunOS 4.1.3_U1 Server machine – Dual-processor SPARC 20 model 61 machine – 64 MB memory and Solaris 2.5.1 155 Mbps ATM local area network NCSA httpd server software, revision 1.5.1 WAN delays not considered

15 Simulation result: request generation rate

16 Simulation result: overload behavior of a web server

17 Simulation result: throughput under bursty conditions

18 Simulation result (1) Request generation rate – File size is 1294 bytes; the connection establishment timeout period T is 500 ms – As more clients are added, the queue length at the accept queue of the server's listen socket increases and the request rate remains nearly constant at the capacity of the server – S-Clients enable the generation of request loads that greatly exceed the capacity of the server

19 Simulation result (2) Overload behavior of a web server – As before, the server saturates at about 130 transactions per second – The fall in throughput with increasing request rate is due to the CPU resources spent on protocol processing for incoming requests – This large drop in throughput of an overloaded server highlights the importance of evaluating the overload behavior of a web server

20 Simulation result (3) Throughput under bursty conditions – The bursty traffic parameters are (1) the ratio between the maximum request rate and the average request rate and (2) the fraction of time for which the request rate exceeds the average rate – (6, 5) refers to the case where, for 5% of the time, the request rate is 6 times the average request rate – High burstiness in both parameters (1) and (2) degrades throughput substantially

21 Conclusion No benchmark we know of attempts to accurately model the effects of request overload on server performance (WebStone, SPECweb96) Using this method to evaluate a typical web server indicates that measuring web server performance under overload and bursty traffic conditions gives new and important insights into web server performance

