
1 Performance Analysis

2 What is it? & Why do I care?
Performance analysis is the careful measurement of the capabilities and capacity of a system.
– Identify limits
– Find bugs
– Know before you go
Many bugs and problems only show up under extreme situations. “I can sleep when the wind blows.”

3 Why do we care?
– “Twenty-eight percent of shoppers who have suffered failed performance attempts said they stopped shopping at the web site where they had problems, and six percent said they stopped buying at that particular company’s off-line store.” (Boston Consulting Group, quoted in Infoworld / Computerworld 3/00)
– “It takes only 8½ seconds for half of the subjects to [give up].” (Peter Bickford, “Worth the Wait?”, Netscape/View Source Magazine 10/97)
– “Perhaps as much as $4.35 billion in e-commerce sales in the U.S. may be lost each year due to unacceptable download speeds and resulting user bailout behaviors.” (Zona Research 4/99)
– “Fifty-eight percent of online customers surveyed indicated quick download time as a key factor in determining whether they would return to a web site.” (Forrester Research 1/99)
– “One of the top three reasons cited by online shoppers for dissatisfaction with a web site is slow site performance.” (Jupiter Communications / NFO Worldwide 1/99)
– “At one site, the abandonment rate fell from 30% to 6-8% because of a one second improvement in load time.” (Zona Research 4/99)

4 What is performance?
– User experience: How fast does the page load? How available is the site?
– Web server: How many requests per second can be served (throughput)? What is the effect of web proxies?
– Network: What is the network performance? Latency, bandwidth

5 Network Performance
At the network level, performance can be measured in terms of:
– Latency: how long it takes a message to travel from one end of the network to the other
– Bandwidth: the number of bits that can be transmitted over the network in a certain period of time

6 Network Performance Measures
– Overhead: the latency added at the network interface (sender- and receiver-side processing)
– Latency: the delay of the network itself

7 Universal Performance Metrics
A message travels from sender to receiver through several stages: sender overhead (processor busy), time of flight, transmission time (size ÷ bandwidth), and receiver overhead (processor busy). Time of flight plus transmission time is the transport latency.
Total Latency = Sender Overhead + Time of Flight + Message Size ÷ Bandwidth + Receiver Overhead
Question: should the header/trailer be included in the bandwidth calculation?

8 Total Latency Example
Network bandwidth: 1000 Mbit/s; sending overhead: 80 µs; receiving overhead: 100 µs.
A 10,000-byte message (including the header); the network allows 10,000 bytes in a single message.
Three situations: distance of 1000 km vs. 0.5 km vs. 0.01 km.
Speed of light ≈ 300,000 km/s (about half that in the medium).
Latency(0.01 km) = ?
Latency(0.5 km) = ?
Latency(1000 km) = ?

9 Total Latency Example
Network bandwidth: 1000 Mbit/s (= 1000 bits/µs); sending overhead: 80 µs; receiving overhead: 100 µs.
A 10,000-byte message (including the header); the network allows 10,000 bytes in a single message.
Signal propagation speed ≈ 50% × 0.3 km/µs = 0.15 km/µs.
Latency(0.01 km) = 80 µs + 0.01 km / 0.15 km/µs + (10,000 bytes × 8 bits/byte) / 1000 bits/µs + 100 µs ≈ 260 µs
Latency(0.5 km) = 80 µs + 0.5 km / 0.15 km/µs + 80,000 bits / 1000 bits/µs + 100 µs ≈ 263 µs
Latency(1000 km) = 80 µs + 1000 km / 0.15 km/µs + 80,000 bits / 1000 bits/µs + 100 µs ≈ 6927 µs
A long time of flight implies a complex WAN protocol.
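
The same arithmetic can be scripted. Below is a minimal Python sketch of the slide-7 formula using the numbers above (the constant and function names are ours, not part of the slides):

    # Total latency = sender overhead + time of flight + size/bandwidth + receiver overhead.
    # All times are in microseconds.
    SEND_OVERHEAD_US = 80.0                # sender overhead
    RECV_OVERHEAD_US = 100.0               # receiver overhead
    BANDWIDTH_BITS_PER_US = 1000.0         # 1000 Mbit/s = 1000 bits/us
    PROPAGATION_KM_PER_US = 0.5 * 0.3      # half the speed of light in the medium

    def total_latency_us(distance_km, message_bytes):
        time_of_flight = distance_km / PROPAGATION_KM_PER_US
        transmission = (message_bytes * 8) / BANDWIDTH_BITS_PER_US
        return SEND_OVERHEAD_US + time_of_flight + transmission + RECV_OVERHEAD_US

    for d in (0.01, 0.5, 1000):
        print(f"{d} km: {total_latency_us(d, 10000):.0f} us")
    # prints roughly 260, 263, and 6927 microseconds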

10 So What?
– Long distance = long time of flight: servers should be as close as possible to clients.
– Low bandwidth = long message transmission time: servers should have high-bandwidth links.
– High overhead = long total latency: reduce the communication overhead as much as possible (a fast TCP implementation, more memory).

11 Web Server Performance
Throughput: requests per second. How do you measure it?
– Live: may be too late…
– Offline: replay logs (does the past characterize the future?) or use a synthetic workload (does it characterize reality?)
“...factoring out I/O, the primary determinant to server performance is the concurrency strategy.” (JAWS: Understanding High Performance Web Systems)
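
As a rough illustration of the offline "replay logs" idea, here is a minimal Python sketch (ours, and deliberately simplified) that replays a list of URLs sequentially and reports the achieved requests per second; the URL used is hypothetical, and a real benchmark would use a tool such as httperf to drive many concurrent connections:

    import time
    import urllib.request

    def replay(urls):
        """Sequentially replay URLs and report achieved requests/second."""
        start = time.time()
        ok = 0
        for url in urls:
            try:
                with urllib.request.urlopen(url, timeout=5) as resp:
                    resp.read()
                    ok += 1
            except OSError:
                pass  # failed or timed-out requests are simply not counted
        elapsed = time.time() - start
        print(f"{ok} requests in {elapsed:.1f} s -> {ok / elapsed:.1f} req/s")

    # Hypothetical paths taken from an access log:
    replay(["http://localhost:8080/test.html"] * 100)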

12 Applications of Workload Models
– Identify performance problems: problems may only occur under high load
– Benchmark web components: deployment decisions, evaluating new features
– Capacity planning: determine network, memory, disk, and clustering needs

13 Web Workload Characterization
Based on the results of numerous studies. Key properties:
– HTTP message characteristics: several request methods and response codes
– Resource characteristics: diverse content type, size, popularity, and modification frequency
– User behavior: browsing habits significantly affect the workload
Categories and parameters:
– Protocol: request method, response code
– Resource: content type, resource size, response size, popularity, modification frequency, temporal locality, # embedded resources
– Users: session interarrival times, # clicks per session, request interarrival times

14 Parameter Characterization
Associate each parameter with quantitative values.
– Statistics (mean, median, mode): OK for parameters that don’t vary much
– Probability distributions: capture how a parameter varies over a wide range of values
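
To see why a single statistic can mislead for parameters that vary over a wide range, consider this small Python sketch with made-up response sizes, where one large download pulls the mean far away from the median:

    from statistics import mean, median

    # Hypothetical response sizes in bytes: several small pages plus one large download.
    sizes = [4_000, 6_000, 8_000, 10_000, 12_000, 5_000_000]

    print(mean(sizes))    # 840000 -- dominated by the outlier
    print(median(sizes))  # 9000   -- closer to a "typical" response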

15 Probability Distribution
Every random variable gives rise to a probability distribution.
– Probability density function: assigns a probability to every interval of the real numbers
– Cumulative distribution function (CDF): describes the probability distribution of a real-valued random variable X; F(x) = P(X <= x), the probability that the random variable will be less than or equal to x
The following slides describe the commonly used distributions.
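
As a concrete reading of F(x) = P(X <= x), here is a tiny Python sketch that estimates an empirical CDF from a sample (the sample values are made up):

    def empirical_cdf(sample, x):
        """Fraction of observations <= x: an estimate of F(x) = P(X <= x)."""
        return sum(1 for v in sample if v <= x) / len(sample)

    response_times_ms = [12, 35, 35, 40, 90, 120, 300, 900]  # hypothetical
    print(empirical_cdf(response_times_ms, 100))  # 0.625: 62.5% of responses took <= 100 ms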

16 Poisson Distribution
P(X = k) = λ^k e^(−λ) / k!
Models the number of independent events that occur in a fixed interval when the events happen at a constant average rate λ.
The number of times a web server is accessed per minute follows a Poisson distribution.
– For instance, the number of edits per hour recorded on Wikipedia's Recent Changes page follows an approximately Poisson distribution.
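
A quick sketch of the formula, assuming a hypothetical server that averages 120 requests per minute:

    from math import exp, factorial

    def poisson_pmf(k, lam):
        """P(X = k) for a Poisson distribution with mean lam."""
        return (lam ** k) * exp(-lam) / factorial(k)

    lam = 120  # hypothetical average requests per minute
    print(poisson_pmf(120, lam))                            # chance of exactly 120 requests in a minute (~0.036)
    print(sum(poisson_pmf(k, lam) for k in range(0, 101)))  # P(X <= 100): a noticeably quiet minute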

17 Exponential Distribution
F(x) = 1 − e^(−λx)
Models the time until the next occurrence of an event in a Poisson process.
Session interarrival times are exponential.
– The time between the start of one user session and the start of the next user session

18 Pareto Distribution
F(x) = 1 − (a/x)^k for x ≥ a, where k is the shape parameter and a is the minimum value of x.
A power-law (heavy-tailed) distribution; the 80-20 rule: 20% of the sample is responsible for 80% of the results.
Fits response sizes, resource sizes, the number of embedded images, and request interarrival times.
Often used to model self-similar patterns.

19 Probability Distributions in Web Workload Models
– Exponential: session interarrival times
– Pareto: response sizes, resource sizes, number of embedded images, request interarrival times
– Lognormal: response sizes, resource sizes, temporal locality
– Zipf-like: resource popularity

20 Probability Distribution Conversion
Most languages have random number library functions that return a uniform distribution. You must convert from the uniform distribution to the chosen distribution by inverting its CDF.
Given the cumulative distribution function (CDF) of the chosen distribution:
– 1. Generate a uniform random number; call this number p.
– 2. Compute x such that CDF(x) = p, i.e. apply the inverse of the CDF.
– 3. x is the random number you use.
For the exponential distribution, inverting F(x) = 1 − e^(−λx) gives x = −ln(1 − p) / λ.
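
A minimal Python sketch of this procedure for the exponential and Pareto CDFs shown on the previous slides (the parameter values are only illustrative):

    import math
    import random

    def exponential_sample(lam):
        """Inverse-CDF sampling: solve 1 - exp(-lam * x) = p for x."""
        p = random.random()                 # uniform in [0, 1)
        return -math.log(1.0 - p) / lam

    def pareto_sample(a, k):
        """Inverse-CDF sampling: solve 1 - (a / x)**k = p for x (x >= a)."""
        p = random.random()
        return a / (1.0 - p) ** (1.0 / k)

    # Illustrative parameters: mean session interarrival of 10 s, resource sizes >= 1 KB.
    interarrival = exponential_sample(lam=1 / 10)
    resource_size = pareto_sample(a=1024, k=1.2)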

21 Website Analysis
Websites quickly become large and difficult to test and optimize, so use tools:
– Workload generators: Webstone, httperf & autobench, JMeter
– Site analysis (log files): Webalizer

22 Performance Tips
– Check for web standards compliance
– Turn off reverse DNS lookups on the server
– Get more memory
– Index your database tables
– Make fewer database queries
– Decrease the number of page components
– Decrease the size of each component
– Minimize perceived delay: give the viewer something to look at while the page is loading

23 Performance Analysis Architectures
You will generally need several load-generating machines to effectively bog down a web server (diagram: load generators connected through a switch to the web server).

24 Httperf basics
The three distinguishing characteristics of httperf are:
– its robustness, which includes the ability to generate and sustain server overload,
– support for the HTTP/1.1 protocol, and
– its extensibility to new workload generators and performance measurements.

25 A Simple Example
httperf --server hostname --port 80 --uri /test.html --rate 150 --num-conn 27000 --num-call 1 --timeout 5
– --server hostname: specify the server
– --port 80: specify the port
– --uri /test.html: the file you want to download
– --rate 150: the rate in requests/second
– --num-conn 27000: the total number of TCP connections
– --num-call 1: the number of requests per connection
– --timeout 5: the request fails if it takes longer than this many seconds

26 The output
httperf --server apu --port 5556 --uri /test.html --rate 400 --num-conn 8000 --timeout 20

Maximum connect burst length: 1
Total: connections 8000 requests 8000 replies 8000 test-duration 20.914 s
Connection rate: 382.5 conn/s (2.6 ms/conn, <=263 concurrent connections)
Connection time [ms]: min 0.8 avg 125.9 max 3000.5 median 0.5 stddev 594.7
Connection time [ms]: connect 122.8
Connection length [replies/conn]: 1.000
Request rate: 382.5 req/s (2.6 ms/req)
Request size [B]: 65.0
Reply rate [replies/s]: min 351.2 avg 395.9 max 432.2 stddev 33.4 (4 samples)
Reply time [ms]: response 3.0 transfer 0.0
Reply size [B]: header 64.0 content 49.0 footer 0.0 (total 113.0)
Reply status: 1xx=0 2xx=8000 3xx=0 4xx=0 5xx=0
CPU time [s]: user 4.18 system 16.73 (user 20.0% system 80.0% total 100.0%)
Net I/O: 66.5 KB/s (0.5*10^6 bps)
Errors: total 0 client-timo 0 socket-timo 0 connrefused 0 connreset 0
Errors: fd-unavail 0 addrunavail 0 ftab-full 0 other 0

27 Sample Graph (x-axis: load, in requests/sec)

28 Errors
Errors occur when the client connection experiences a timeout. You can reduce the timeout value and increase the file size and rate to see the results:

httperf --server apu --port 5556 --verbose --uri /testmid.html --rate 800 --num-conn 8000 --timeout 2

Connection rate: 799.9 conn/s (1.3 ms/conn, <=660 concurrent connections)
Reply rate [replies/s]: min 671.8 avg 735.8 max 799.8 stddev 90.4 (2 samples)
Errors: total 641 client-timo 641 socket-timo 0 connrefused 0 connreset 0

29 Another Example
httperf --hog --server apu --port 5556
This command causes httperf to:
– create a connection to host apu.cs.byu.edu,
– send a request for the root document (http://apu:5556/),
– receive the reply,
– close the connection,
– and then print some performance statistics.
The --hog parameter lets httperf use ports outside the normal limits (>5000).

30 Another Example
httperf --hog --server apu --port 5556 --num-conn 100 --rate 10 --timeout 5
– A total of 100 connections are created
– Connections are created at a fixed rate of 10 per second
– Connections time out after 5 seconds

31 Another Example
httperf --hog --server www --wsess=10,5,2 --rate 1 --timeout 5
– Causes httperf to generate a total of 10 sessions at a rate of 1 session per second.
– Each session consists of 5 calls that are spaced out by 2 seconds.

32 Changing the inter-arrival rate
httperf --server apu --port 5556 --uri /test.mid --hog --num-conn 100000 --rate 1000 --timeout 2 --verbose --period=e2
– Use exponentially distributed interarrival times with a mean interarrival time of 2 seconds

33 Using files
httperf --server apu --port 5556 --uri /Pareto --hog --num-conn 100000 --rate 1000 --timeout 2 --verbose --wset 999,1 --period=e2
– The --wset directive indicates that you will access files in the /Pareto directory in a round-robin fashion.
– The URIs generated have the form prefix/path.html, where prefix is the URI prefix specified by the --uri option and path is generated as follows: for the i-th file in the working set, write i in decimal, prefixing the number with as many zeroes as necessary to get a string with as many digits as the number N−1, then insert a slash character between each digit. For example, the 103rd file in a working set of 1024 files yields the path "0/1/0/3"; with the URI prefix /wset1024, the URI accessed is /wset1024/0/1/0/3.html. In other words, the files on the server need to be organized as a 10-ary tree.
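
A small Python sketch of that path-generation rule (the function name is ours; it reproduces the slide's example of the 103rd file in a 1024-file working set):

    def wset_uri(prefix, i, n):
        """Build the URI httperf requests for the i-th file of an n-file working set."""
        digits = str(i).zfill(len(str(n - 1)))   # zero-pad to as many digits as n-1 has
        return f"{prefix}/{'/'.join(digits)}.html"

    print(wset_uri("/wset1024", 103, 1024))  # /wset1024/0/1/0/3.html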

34 Autobench
A Perl script that wraps httperf to make things easier and extracts the data from httperf's output.
Simple mode benchmarks a single server:

autobench --single_host --host1 www.test.com --uri1 /10K --quiet \
  --low_rate 20 --high_rate 200 --rate_step 20 --num_call 10 \
  --num_conn 5000 --timeout 5 --file results.tsv
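
For a sense of what such a wrapper automates, here is a minimal Python sketch (ours, not autobench's actual code) that pulls a few headline numbers out of httperf's text output, such as the run shown on slide 26:

    import re

    def parse_httperf(output):
        """Extract request rate, average reply rate, and error count from httperf output."""
        stats = {}
        m = re.search(r"Request rate: ([\d.]+) req/s", output)
        if m:
            stats["request_rate"] = float(m.group(1))
        m = re.search(r"Reply rate \[replies/s\]: min [\d.]+ avg ([\d.]+)", output)
        if m:
            stats["avg_reply_rate"] = float(m.group(1))
        m = re.search(r"Errors: total (\d+)", output)
        if m:
            stats["errors"] = int(m.group(1))
        return stats

    # Feed it the captured stdout of an httperf run,
    # e.g. subprocess.run([...], capture_output=True, text=True).stdout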

35 Autobench results graph (x-axis: requests/sec)

36 Summary
– Performance analysis is important for many reasons.
– Experimental work can help you understand the limits of the web server.
– The httperf application also lets you benchmark cookies, SSL connection times, and many other important aspects of web server behavior.
– Use autobench to make things easier.

