Performance Monitoring
Dr. Faisal Alzyoud

1 Performance Monitoring
In circuit switching, a dedicated circuit is established between the source and destination hosts, through which packets simply pass unhindered and in order. In packet switching, the packets comprising a message are injected into the network and meander through it by different routes before finally arriving at their destination, where they are re-assembled into the original message. The idea of packet switching was developed by Paul Baran of the RAND Corporation in the early 1960s. Packet switching is characterized by robustness, flexibility and the lack of any requirement for centralized control. These features made it suitable for rugged military communications systems, and the same features have led to its subsequent civilian success.

2 WANs are often perceived to be slower than LANs
WANs are often perceived to be slower than LANs, since a WAN is shared among many more users than a LAN. However, this explanation is not the whole story: often the traffic on the WAN will be light and the bottleneck will occur somewhere between the router terminating the trunk line and the user's end host. A firewall between two end hosts performing a transfer can also affect the transfer rate, since the firewall has to examine every packet against a set of complex criteria. The IETF IP Performance Metrics (IPPM) working group has developed a range of metrics for monitoring networks, and the Global Grid Forum (GGF) Network Measurements Working Group (NMWG) has produced a Hierarchy of Network Performance Characteristics for Grid Applications and Services.

3 Types of Measurement
Several common metrics have been developed for measuring network performance. Some of the different ways in which network performance might be measured are as follows.
Passive or active monitoring: In passive monitoring the traffic at some point in the network is simply observed; the pertinent feature of passive monitoring is that it does not alter the operation or state of the network in any way. In active monitoring, by comparison, test packets are injected into the network in order to measure some aspect of its performance. For example, a test packet might be dispatched to a remote host in order to measure the time taken until a return packet is received acknowledging that it has arrived. This may alter performance (and degrade it for other users); a minimal sketch of such a probe follows at the end of this slide.
Measure at one point or many: In order to get an overall view of the flow of data through a network it is tempting to measure it at a set of representative points. However, this approach is often a mistake. It is better to measure the traffic between two hosts by measuring the flow at its points of ingress to, and egress from, the network.
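
As an illustration of active monitoring, the Python sketch below estimates round-trip latency by timing a TCP connection handshake to a remote host. It is only a rough stand-in for what ping does (ping uses ICMP rather than TCP), and the host name and port here are placeholders rather than anything taken from these slides.

# Minimal active-monitoring sketch: estimate round-trip latency by timing
# a TCP connection handshake. The host and port below are placeholders.
import socket
import time

def probe_rtt(host: str, port: int = 80, timeout: float = 2.0) -> float:
    """Return an approximate round-trip time in milliseconds."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # handshake completed: roughly one round trip has elapsed
    return (time.perf_counter() - start) * 1000.0

if __name__ == "__main__":
    print(f"approx RTT: {probe_rtt('example.org'):.1f} ms")

Note that even this tiny probe injects traffic into the network, which is exactly the sense in which active monitoring can perturb what it measures.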

4 Common Metrics
There are (at least) two approaches to measuring the performance of a network. One measures the performance of the network directly, either passively or by injecting test packets. The other involves running a test application program to perform some specified task (the obvious example is to transfer a file) and timing how long it takes to complete; a minimal sketch of this approach follows the metric list below.
Common Metrics
A number of metrics are used to measure network performance:
latency
jitter
packet loss
throughput
link utilization
availability and reliability
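
As mentioned above, the application-level approach can be as simple as timing a complete transfer. The Python sketch below downloads a file and reports the elapsed time and achieved throughput; the URL is a placeholder. Because it measures the whole stack (application, TCP and the network together), the figure it produces is an application-level result rather than a direct network measurement.

# Application-level measurement sketch: time a whole transfer and report
# the achieved throughput. The URL is a placeholder.
import time
import urllib.request

def timed_download(url: str) -> None:
    start = time.perf_counter()
    with urllib.request.urlopen(url) as resp:
        data = resp.read()
    elapsed = time.perf_counter() - start
    mbits = len(data) * 8 / 1e6
    print(f"transferred {len(data)} bytes in {elapsed:.2f} s "
          f"({mbits / elapsed:.2f} Mbit/s)")

if __name__ == "__main__":
    timed_download("https://example.org/")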

5 Latency, Jitter and Packet Loss
Latency (round-trip): the time between the dispatch of a packet and receipt of an acknowledgement that it has reached its destination. It comprises the time that the original packet and its acknowledgement spend travelling down transmission lines and waiting in router queues, plus the time that the remote host takes to process the incoming packet and generate the acknowledgement. Ping is often used for measuring latency. Latency can vary for a number of reasons, including:
• A lightly loaded destination host will respond more quickly than a heavily loaded one.
• If the network is lightly loaded then the packets will spend less time in router queues.
• The route by which the packets travel between the source and destination hosts might change if the network configuration changes.
Jitter (delay variation): the variation in the rate at which packets travel across a network; variation in the time it takes packets to reach their destination is known as delay jitter.
Packet loss ratio: the fraction (usually expressed as a percentage) of the packets dispatched to a given destination host during some time interval that are not actually received.
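
These definitions translate directly into arithmetic on probe samples. The short Python sketch below, using invented round-trip-time samples, computes the minimum/average/maximum latency, a simple jitter figure (here the standard deviation of the round-trip times, one of several ways jitter can be quantified) and the packet loss percentage.

# Sketch: derive latency, jitter and packet-loss figures from a list of
# round-trip-time samples; None marks a probe that received no reply.
# The sample values are invented for illustration.
from statistics import mean, stdev

samples = [1.5, 1.3, None, 1.4, 2.2, 1.3, None, 2.1]  # milliseconds

replies = [s for s in samples if s is not None]
loss_pct = 100.0 * (len(samples) - len(replies)) / len(samples)
print(f"rtt min/avg/max = {min(replies):.1f}/{mean(replies):.1f}/{max(replies):.1f} ms")
print(f"jitter (std dev) = {stdev(replies):.2f} ms")
print(f"packet loss = {loss_pct:.0f}%")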

6 There are a number of causes of lost packets
There are a number of causes of lost packets. There may be hardware faults en route, or the packets might become corrupted by accumulated noise. However, such occurrences are relatively rare. A more common cause is packets being deliberately dropped from the queues of busy routers. Router queues are necessarily of finite size, and if such a queue is full while packets are still arriving then some packets must be dropped. There are two alternatives: packets can be dropped from the end of the queue, which favors applications that already have packets queued, or packets can be dropped from random locations in the queue, which distributes the losses more equitably amongst the various applications with queued packets, as in Random Early Detection (RED). TCP uses the fraction of lost packets to gauge its transmission rate: if the fraction becomes large then the transmitting host will reduce the rate at which it dispatches packets, on the assumption that they are being lost because the queue of some router in the path is filling up.
Throughput: the rate at which data flow past some measurement point in the network, measured in bits/sec, bytes/sec or packets/sec. It usually refers to all the traffic flowing past the measurement point, though it is possible to measure just a subset, perhaps just the Web traffic or the packets bound for a given destination.
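
Throughput is simply volume over time at the measurement point. A minimal Python sketch, using invented packet observations:

# Sketch: compute throughput past a measurement point from a list of
# (timestamp in seconds, packet size in bytes) observations (invented).
packets = [(0.00, 1500), (0.01, 1500), (0.02, 576), (0.95, 1500)]

interval = packets[-1][0] - packets[0][0]
total_bytes = sum(size for _, size in packets)
print(f"{total_bytes * 8 / interval / 1e6:.2f} Mbit/s, "
      f"{total_bytes / interval:.0f} bytes/s, "
      f"{len(packets) / interval:.1f} packets/s")

The same arithmetic applies whether the observations cover all traffic past the point or only a filtered subset such as Web traffic.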

7 available capacity = capacity − utilized capacity
It is useful to distinguish between:
Capacity: the throughput of which the link is capable.
Utilized capacity: the current traffic load on the link, excluding the traffic you intend to send.
Available capacity: the remaining capacity, which is available to you:
available capacity = capacity − utilized capacity
Tight link: the hop with the smallest available capacity.
Achievable capacity: the fraction of the available capacity which you can actually utilize.
Link utilization: the throughput divided by the access rate, expressed as a percentage. For some types of link the service provider might quote a committed information rate (CIR) rather than an access rate; in that case the link utilization should be computed from the CIR rather than the access rate.
Availability and reliability: these are not strictly IPPM metrics, though they are closely related. Availability is the fraction of time during a given period for which the network is available.
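
A small worked example of these definitions, with invented figures for a 100 Mbit/s access link:

# Worked example of the capacity definitions above (figures invented).
capacity_mbps = 100.0    # access rate of the link
utilized_mbps = 35.0     # current traffic load (measured throughput)

available_mbps = capacity_mbps - utilized_mbps
utilization_pct = 100.0 * utilized_mbps / capacity_mbps

print(f"available capacity = {available_mbps:.0f} Mbit/s")
print(f"link utilization   = {utilization_pct:.0f}%")

If the provider quotes a CIR instead of an access rate, the CIR would replace capacity_mbps in the utilization calculation.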

8 Simple Tools for Measuring Network Performance
Reliability is related to both availability and packet loss. It is the frequency with which packets get corrupted, as distinct from being lost (the former happens because of either a malfunction in the network or transmission noise, the latter typically because a router queue fills up).
Simple Tools for Measuring Network Performance
Numerous tools are available for measuring network performance, and they vary greatly in approach, scope and applicability. A few simple ones are described here: ping, traceroute, pathchar, pchar and iperf. These tools establish whether the destination host is reachable and willing to acknowledge ping packets; they do not establish that the host offers other services, such as file transfer. It is tempting to select a router as the destination, because routers should always acknowledge ping packets, but routers often process ping packets at low priority, leading to high round-trip times. A final note of caution is that the tools should be used sparingly and with discretion: if used unwisely they can flood the network with packets, to the detriment of other users.

9 Ping
Ping: the simplest of the tools for measuring network performance; it allows us to determine whether a destination host is reachable. Ping is available for a variety of platforms; on Unix it is usually supplied as part of the operating system. The Unix version reports the following information:
the name of the destination host (clapton.nesc.ed.ac.uk in the example)
a count of the packets transmitted
a count of the packets received back
the packet loss, expressed as a percentage
the minimum, average and maximum round-trip time.
Example:
PING clapton.nesc.ed.ac.uk ( ): 56 data bytes
64 bytes from : icmp_seq=0 ttl=249 time=1.5 ms
64 bytes from : icmp_seq=1 ttl=249 time=1.3 ms
64 bytes from : icmp_seq=2 ttl=249 time=1.4 ms
64 bytes from : icmp_seq=3 ttl=249 time=2.2 ms
64 bytes from : icmp_seq=4 ttl=249 time=1.3 ms
64 bytes from : icmp_seq=5 ttl=249 time=1.2 ms
64 bytes from : icmp_seq=6 ttl=249 time=2.1 ms
64 bytes from : icmp_seq=7 ttl=249 time=1.9 ms
64 bytes from : icmp_seq=8 ttl=249 time=2.1 ms
--- clapton.nesc.ed.ac.uk ping statistics ---
9 packets transmitted, 9 packets received, 0% packet loss
round-trip min/avg/max = 1.2/1.6/2.2 ms
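
When ping is driven from a script, the summary lines can be parsed directly. The Python sketch below assumes the summary format shown in the example above; other ping implementations format the summary slightly differently.

# Sketch: extract the headline figures from a Unix ping summary.
import re

summary = """9 packets transmitted, 9 packets received, 0% packet loss
round-trip min/avg/max = 1.2/1.6/2.2 ms"""

loss = re.search(r"([\d.]+)% packet loss", summary).group(1)
rtt = re.search(r"min/avg/max = ([\d.]+)/([\d.]+)/([\d.]+)", summary).groups()
print(f"loss={loss}%  rtt min={rtt[0]} avg={rtt[1]} max={rtt[2]} ms")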

10 Ping was written by Michael Muuss in 1983
Ping was written by Michael Muuss in 1983. The name comes from the sound of a returned sonar pulse. Ping might fail to obtain a response even when all is well, because ping traffic is being blocked by a firewall en route.
Traceroute: a common diagnostic tool which is particularly useful for determining why a remote host is not responding to ping. It was written by Van Jacobson around 1987. Traceroute is similar to ping in that the user specifies a destination host and it sends packets to that host, receives acknowledgements back and displays the results. Also like ping, it is usually supplied as part of the operating system. Traceroute produces a hop-by-hop listing of each router that the packets pass through en route to their destination and displays the latency for each hop; it only shows the hops on the outward leg of the journey, not the return hops. A small scripting sketch based on this listing follows at the end of this slide.
Pathchar, pchar and iperf: these tools are similar to traceroute, but they are more concerned with measuring throughput than with establishing a route. Pathchar was developed to assist in diagnosing network congestion problems. It measures the data transfer rate, delay, average queue and loss rate for every hop from the user's machine to the chosen destination host. It was written by Van Jacobson, the author of traceroute.
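
The hop-by-hop listing lends itself to simple scripting. The rough Python sketch below runs the system traceroute and prints the first reported latency for each hop; it assumes a common Unix output format and a placeholder destination host, so the parsing may need adjusting for other implementations.

# Sketch: run the system traceroute and report one latency per hop.
# Assumes a Unix-style traceroute is installed; output formats vary.
import re
import subprocess

def hop_latencies(dest: str) -> None:
    out = subprocess.run(["traceroute", dest], capture_output=True, text=True)
    for line in out.stdout.splitlines():
        m = re.match(r"\s*(\d+)\s+(\S+).*?([\d.]+) ms", line)
        if m:
            hop, host, ms = m.groups()
            print(f"hop {hop:>2}: {host:30s} {ms} ms")

if __name__ == "__main__":
    hop_latencies("example.org")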

11 pchar to grus.jb.man.ac.uk (130.88.24.231) using UDP/IPv4
Pchar: written by Bruce Mah, pchar provides similar functionality to pathchar and is based on the same algorithms.
Iperf: a tool for measuring the available TCP bandwidth to a destination host. It allows various TCP parameters and UDP datagram settings to be tuned, and it reports the available bandwidth, delay jitter and datagram loss. Unlike the other tools, iperf requires software to be installed on the destination as well as the source host. It was developed by Ajay Tirumala and colleagues at the National Laboratory for Applied Network Research, San Diego Supercomputer Center. A sketch of a basic iperf invocation follows the pchar example below.
Example: output showing a pchar probe from the Royal Observatory Edinburgh to the Jodrell Bank Observatory in Cheshire. Statistics are shown for each hop along the route, and final statistics for the entire route are given. Some intermediate hops have been removed for clarity.
pchar to grus.jb.man.ac.uk (130.88.24.231) using UDP/IPv4
Using raw socket input
Packet size increments from 32 to 1500 by 32
46 test(s) per repetition
32 repetition(s) per hop
0: (grendel12.roe.ac.uk)
Partial loss: 292 / 1472 (19%)
Partial char: rtt = ms, (b = ms/B), r2 =
stddev rtt = , stddev b =
Partial queueing: avg = ms (232 bytes)

12 Hop char: rtt = 0.079436 ms, bw = 48517.401423 Kbps
Example (continued):
Hop char: rtt = 0.079436 ms, bw = 48517.401423 Kbps
Hop queueing: avg = ms (232 bytes)
1: (teine.roe.ac.uk)
Partial loss: 0 / 1472 (0%)
Partial char: rtt = ms, (b = ms/B), r2 =
stddev rtt = , stddev b =
Partial queueing: avg = ms (954 bytes)
Hop char: rtt = ms, bw = Kbps
Hop queueing: avg = ms (722 bytes)
2: (eastman.roe.ac.uk)
Partial char: rtt = ms, (b = ms/B), r2 =
stddev rtt = , stddev b =
Partial queueing: avg = ms (8078 bytes)
Hop char: rtt = ms, bw = Kbps
Hop queueing: avg = ms (7124 bytes)
...
11: (gw-uom.mcc.ac.uk)
Partial char: rtt = ms, (b = ms/B), r2 =
stddev rtt = , stddev b =
Partial queueing: avg = ms (50091 bytes)

13 Hop char: rtt = 0.743205 ms, bw = 97985.081673 Kbps
Example (continued):
Hop char: rtt = 0.743205 ms, bw = 97985.081673 Kbps
Hop queueing: avg = ms (0 bytes)
12: (gw-jodrell.netnw.net.uk)
Partial loss: 0 / 1472 (0%)
Partial char: rtt = ms, (b = ms/B), r2 =
stddev rtt = , stddev b =
Partial queueing: avg = ms (50091 bytes)
Hop char: rtt = ms, bw = Kbps
Hop queueing: avg = ms (0 bytes)
13: (grus.jb.man.ac.uk)
Path length: 13 hops
Path char: rtt = ms, r2 =
Path bottleneck: Kbps
Path pipe: bytes
Path queueing: average = ms (50091 bytes)
Start time: Fri Nov 12 13:00:
End time: Fri Nov 12 14:33:
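
Because iperf needs a cooperating process at both ends, a basic test consists of starting a server on the destination host (iperf -s) and pointing a client at it. The Python sketch below simply drives the classic iperf command-line client and returns its report unparsed; the server name is a placeholder, and the flags used (-c server, -t duration, -u for UDP) are those of the traditional iperf client.

# Sketch: drive an iperf throughput test. Assumes the iperf command-line
# tool is installed here and that the remote host is running "iperf -s".
import subprocess

def run_iperf(server: str, seconds: int = 10, udp: bool = False) -> str:
    cmd = ["iperf", "-c", server, "-t", str(seconds)]
    if udp:
        cmd.append("-u")  # UDP test: iperf then reports jitter and loss too
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return result.stdout  # raw report; formats differ between iperf versions

if __name__ == "__main__":
    print(run_iperf("iperf-server.example.org"))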

