Scavenger performance CERN External Network Division - Caltech Datagrid WP7 - 24 January, 2002.


Scavenger performance CERN External Network Division - Caltech Datagrid WP7 - 24 January, 2002

Introduction

The QBone Scavenger Service (QBSS) is an additional, less-than-best-effort class of service. A small amount of network capacity is allocated to this class; when the default best-effort class is underutilized, QBSS traffic can expand to consume the unused capacity.

Goals of our tests: Does Scavenger traffic affect the performance of normal best-effort traffic? Does the Scavenger Service use the whole available bandwidth?

Test configuration

The path is Pcgiga-gbe.cern.ch (Geneva), connected via GbE to Cernh9, across the US link (POS 155 Mbps) to Ar1-chicago, and via GbE to Lxusa-ge.cern.ch (Chicago). RTT: 116 ms. Bandwidth-delay product: 1.9 MBytes.

Cernh9 configuration:

  class-map match-all qbss
    match ip dscp 8
  policy-map qos-policy
    class qbss
      bandwidth percent 1
      queue-limit 64
      random-detect
    class class-default
      random-detect
  interface ...
    service-policy output qos-policy

QBSS traffic is marked with DSCP 8 (ToS field 0x20). TCP and UDP flows were generated with Iperf; QBSS traffic is marked using Iperf's ToS option:

  iperf -c lxusa-ge -w 4M --tos 0x20
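A sanity check on the quoted bandwidth-delay product (the derivation below is our assumption; the slide does not show it). The usable IP payload of a 155 Mbps POS/STM-1 circuit is roughly 135 Mbps once SONET and PPP overheads are removed, so

  BDP ≈ 135 Mbit/s × 0.116 s ÷ 8 bit/Byte ≈ 1.96 MBytes

which matches the quoted 1.9 MBytes and explains the choice of a 4 MByte TCP window (-w 4M), comfortably above the BDP.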

Scavenger and TCP traffic (1)

We ran two connections at the same time. Packets of connection #2 were marked as Scavenger traffic; packets of connection #1 were not marked. We measured how the two connections shared the bandwidth.

TCP Scavenger traffic does not affect normal TCP traffic: packets of connection #2 are dropped first by the Scavenger service, so connection #2 reduces its rate before it affects connection #1. The throughput of connection #2 remains low because the loss rate of the Scavenger traffic is high.
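A minimal sketch of how such a two-flow test can be run with Iperf (hostnames are from the test setup; the duration, and launching both flows from the same shell, are illustrative assumptions):

  # connection #1: normal best-effort traffic
  iperf -c lxusa-ge -w 4M -t 60 &
  # connection #2: Scavenger traffic, marked DSCP 8 (ToS 0x20)
  iperf -c lxusa-ge -w 4M -t 60 --tos 0x20 &

Comparing the two bandwidth reports shows how the marked flow backs off while the unmarked flow keeps the link.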

How does TCP Scavenger traffic use the available bandwidth?

Available bandwidth: we first ran the same tests without marking the packets and obtained a throughput larger than 120 Mbps. We then performed TCP Scavenger transfers while the available bandwidth was larger than 120 Mbps and measured the performance of the Scavenger traffic.

TCP Scavenger traffic does not use the whole available bandwidth. Even when there is no congestion on the link, some packets are dropped by the router, probably because of the small size of the queue reserved for Scavenger traffic (queue-limit 64).
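A rough calculation supports the queue-size explanation (the 1500-byte packet size is our assumption): a 64-packet queue holds at most 64 × 1500 B ≈ 96 KBytes, about 5% of the 1.9 MByte bandwidth-delay product, so a window-sized burst from a single TCP flow can overflow the Scavenger queue even with no competing traffic.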

Conclusion

TCP Scavenger traffic does not affect normal traffic. TCP connections are very sensitive to loss: when congestion occurs, Scavenger packets are dropped first and the TCP Scavenger source immediately reduces its rate, so normal traffic is not affected. Scavenger traffic expands to consume unused bandwidth, but it does not use the whole available bandwidth.

Scavenger is therefore a good solution for transferring data without affecting normal (best-effort) traffic, keeping in mind that it does not take advantage of all the unused bandwidth.

Future work

Our idea is to implement a monitoring tool based on Scavenger traffic: we could generate UDP Scavenger traffic, without affecting normal traffic, in order to measure the available bandwidth. Open questions: Can we use the Scavenger Service to perform tests without affecting production traffic? Does Scavenger traffic behave like normal traffic when no congestion occurs?
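A minimal sketch of the proposed probe (the offered rate and duration are illustrative assumptions): send a marked UDP stream at a rate above the expected available bandwidth and read the achieved rate and loss from the server report.

  # UDP Scavenger probe; offered load deliberately above the available bandwidth
  iperf -c lxusa-ge -u -b 100M -t 10 --tos 0x20

The bandwidth achieved by the marked stream then approximates the capacity left unused by best-effort traffic.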

Load balancing performance CERN External Network Division - Caltech Datagrid WP7 - 24 January, 2002

Introduction

Load balancing optimizes the use of resources by distributing traffic over multiple paths to a destination. It can be configured on a per-destination or a per-packet basis. On Cisco routers there are two types of load balancing for CEF (Cisco Express Forwarding): per-destination and per-packet.

Per-destination load balancing lets the router use multiple paths to achieve load sharing; packets for a given source-destination pair are guaranteed to take the same path, even if multiple paths are available. Per-packet load balancing lets the router send successive packets over different paths without regard to individual hosts; it uses a round-robin method to determine which path each packet takes to the destination.

We tested the two types of load balancing between Chicago and CERN using our two STM-1 circuits.
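A minimal IOS sketch of how the two CEF modes are selected (the interface names are illustrative, not taken from our routers):

  ip cef
  interface POS0/0
   ip load-sharing per-packet
  interface POS1/0
   ip load-sharing per-packet

Per-destination is the CEF default (ip load-sharing per-destination), so only per-packet normally needs explicit configuration, and it must be set on each outgoing interface over which traffic is to be split.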

Configuration

The path is Pcgiga-gbe.cern.ch (Geneva), connected via GbE to Cernh9 (Cisco 7507), across two parallel POS 155 Mbps circuits (#1 and #2) to Ar1-chicago (Cisco 7507), and via GbE to Lxusa-ge.cern.ch (Chicago). RTT: 116 ms. Bandwidth-delay product: 2 × 1.9 MBytes.

Load balancing: per-destination vs per-packet

[MRTG reports: traffic from Chicago to CERN on link #1 and on link #2, under per-destination and under per-packet load balancing.]

When the bulk of the data passing through parallel links belongs to a single source-destination pair, per-destination load balancing overloads one link while the other carries very little traffic. Per-packet load balancing, by contrast, makes use of the alternate paths to the same destination.

Per-packet load balancing and TCP performance

UDP flow (CERN -> Chicago):

CERN:
  sravot]$ iperf -c lxusa-ge -w 4M -b 20M -t 20
  [ ID] Interval       Transfer     Bandwidth
  [  3] 0.0-20.0 sec   50.1 MBytes  21.0 Mbits/sec
  [  3] Sent 35716 datagrams

Chicago:
  sravot]$ iperf -s -w 4M -u
  [ ID] Interval       Transfer     Bandwidth       Jitter   Lost/Total Datagrams
  [  3] 0.0-20.0 sec   50.1 MBytes  21.0 Mbits/sec  ... ms   0/35716 (0%)
  [  3] 0.0-20.0 sec   ... datagrams received out-of-order

50% of the packets are received out of order.

TCP flow (CERN -> Chicago):

  sravot]# iperf -c lxusa-ge -w 5700k -t 30
  [ ID] Interval       Transfer     Bandwidth
  [  3] 0.0-30.0 sec   690 MBytes   191 Mbits/sec

Using tcptrace to plot and summarize the TCP flows captured with tcpdump, we measured that 99.8% of the acknowledgements were selective (SACK). Performance is quite good even though packets arrive out of order: the SACK option is effective. However, we were not able to get a throughput higher than 190 Mbit/s; it seems that receiving too many out-of-order packets limits TCP performance.
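A sketch of the capture and analysis steps (the port, snap length and file name are illustrative; the slide names only the tools):

  # capture TCP headers of the Iperf flow on the sender
  tcpdump -s 100 -w lb_test.dmp host lxusa-ge and port 5001
  # summarize the connection; tcptrace's long output includes SACK counts
  tcptrace -l lb_test.dmp

A figure like the 99.8% of selective acknowledgements quoted above can be read from the SACK counters in the tcptrace summary.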

Conclusion and future work

We decided to remove the per-packet load balancing option because it was impacting operational traffic: every packet flow going through the US link was reordered. Per-packet load balancing is inappropriate for traffic that depends on packets arriving at the destination in sequence.