Network Performance
Performance (1)
What would be the characteristics of the ideal network?
–It would be completely transparent in every conceivable way!
–An infinite amount of data could pass error free with zero delay and zero cost between arbitrary combinations of computers at any time of the day, without any concern for security, political ramifications, physical location, or how many others were using the network
–Good luck with that....
Performance (2)
There are several important measures of network performance
–Throughput or capacity: data transferred per unit time
–Delay or latency: time required to complete some step of network activity; there are several different components of delay that are of interest
–Jitter or variability: the statistical description of how the delay changes
Data rate (sometimes called bandwidth)
–Data transmitted per unit time (bits/second)
–End-to-end data rate on a multi-link network path is the minimum of the individual link data rates (see the sketch below)
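As a quick illustration of the bottleneck rule in the last bullet, here is a minimal Python sketch; the three-hop path and its link rates are made-up numbers, not values from the slides.

```python
def end_to_end_rate(link_rates_bps):
    """Return the bottleneck (minimum) data rate along a path, in bits/second."""
    return min(link_rates_bps)

# Hypothetical 3-hop path: 1 Gbit/s, 100 Mbit/s, 1 Gbit/s
path = [1_000_000_000, 100_000_000, 1_000_000_000]
print(end_to_end_rate(path))  # 100000000 -- the 100 Mbit/s link limits the whole path
```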
Performance (3)
A user computes throughput as follows
–Throughput = amount of data transferred / time it took
We compare our measured throughput to the nominal data rate of the network
–Assume we have a 1 Gbit/sec network
–Transfer 1 Gbyte in 10 seconds, for example: that is 8 Gbits / 10 s = 0.8 Gbit/sec, or 80% of the nominal rate (see the sketch below)
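This small Python sketch just redoes the slide's arithmetic (decimal units are used for simplicity); the 1 Gbyte transfer and 10 second timing are the example values above.

```python
# Throughput = amount of data transferred / time it took
bytes_transferred = 1_000_000_000      # 1 Gbyte (decimal, for simplicity)
elapsed_seconds = 10
nominal_rate_bps = 1_000_000_000       # 1 Gbit/s network

throughput_bps = bytes_transferred * 8 / elapsed_seconds
print(f"Throughput:  {throughput_bps / 1e9:.1f} Gbit/s")          # 0.8 Gbit/s
print(f"Utilization: {throughput_bps / nominal_rate_bps:.0%}")    # 80% of nominal
```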
Performance (4)
We are surprised when the throughput doesn't match the data rate of the network, but the data rate is an ideal number
–In general, the other desirable characteristics of a data communication come at the expense of throughput
Congestion avoidance, reliability, etc.
Connection setup in a connection-oriented protocol
Protocol overhead for multiplexing keys, addresses, etc. (a rough overhead estimate is sketched below)
–Helper protocols such as DNS consume time
–Finally, the nominal data rate assumes the network is full of data all of the time
This may not be true due to protocol limitations or configuration
It is a rate we achieve only when we have access
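To make the header-overhead point concrete, here is a rough sketch of best-case payload efficiency on an Ethernet link. The header sizes (Ethernet framing, IPv4 and TCP without options) are typical values assumed for illustration, not figures from the slides.

```python
# How per-packet protocol overhead eats into throughput (best case, full-size packets).
MTU = 1500                        # IP packet size on a standard Ethernet link
IP_HEADER = 20                    # IPv4 header, no options
TCP_HEADER = 20                   # TCP header, no options
ETH_OVERHEAD = 14 + 4 + 8 + 12    # Ethernet header, CRC, preamble, inter-frame gap

payload_bytes = MTU - IP_HEADER - TCP_HEADER   # 1460 bytes of user data per packet
wire_bytes = MTU + ETH_OVERHEAD                # 1538 bytes actually occupying the wire

efficiency = payload_bytes / wire_bytes
print(f"Best-case payload efficiency: {efficiency:.1%}")   # roughly 95% of the nominal rate
```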
Performance (5)
Latency (delay)
–Time to send a message from point A to point B
Specifically, the time between when the first bit is placed on the wire and when the last bit leaves the wire at the other end
We discuss delay in separate lessons
–Propagation delay
–Transmit time or delay
–Queuing delay (we include the textbook's switching delay and access delay in this lesson)
–We ignore the text's server delay as a non-network delay
We also consider the delay-bandwidth product and the notion of "keeping the pipe full" (see the sketch below)
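As a companion to the delay-bandwidth product mentioned above, a minimal sketch with hypothetical numbers (a 1 Gbit/s link and a 50 ms round-trip time) showing how much data must be in flight to keep the pipe full.

```python
# Delay-bandwidth product: the amount of data that must be "in flight"
# (sent but not yet acknowledged) to keep the pipe full.
bandwidth_bps = 1_000_000_000   # 1 Gbit/s link (hypothetical)
rtt_seconds = 0.050             # 50 ms round-trip time (hypothetical)

bits_in_flight = bandwidth_bps * rtt_seconds
print(f"Delay-bandwidth product: {bits_in_flight / 8 / 1e6:.2f} Mbytes")
# ~6.25 Mbytes must be outstanding at all times to fully utilize this path.
```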
Performance (6)
Jitter is significant for applications such as voice
–One approach is to use an isochronous network that guarantees no jitter
–Or use a protocol that compensates for jitter on a typical asynchronous network
In many cases where we cannot tolerate much jitter, we can accept some amount of average delay
–This is why streaming applications have a delay when they first start: they buffer more data than the anticipated jitter
–The buffer effectively smooths out the jitter from the application's perspective (see the sketch below)
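A minimal sketch of the buffering idea: playout is scheduled a fixed offset after the first arrival, so variable arrival times still yield a steady stream. The packet timings and the 200 ms playout delay are hypothetical values chosen to exceed the jitter in the example.

```python
# Jitter buffering: trade a fixed startup delay for smooth, evenly spaced playout.
PLAYOUT_DELAY = 0.200    # 200 ms of buffering, chosen to exceed the expected jitter
PACKET_INTERVAL = 0.020  # each packet carries 20 ms of audio

# (sequence number, arrival time in seconds) -- arrivals show noticeable jitter
arrivals = [(0, 0.000), (1, 0.035), (2, 0.041), (3, 0.090), (4, 0.084)]

base = arrivals[0][1] + PLAYOUT_DELAY
for seq, arrived in sorted(arrivals):
    playout = base + seq * PACKET_INTERVAL        # evenly spaced playout schedule
    status = "ok" if arrived <= playout else "late (dropped)"
    print(f"packet {seq}: arrived {arrived:.3f}s, played {playout:.3f}s -> {status}")
# With a 200 ms buffer every packet arrives before its playout time,
# so the application sees no jitter at all.
```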