The TeraPaths Testbed: Exploring End-to-End Network QoS
Dimitrios Katramatos, Dantong Yu, Bruce Gibbard, Shawn McKee
TridentCom 2007
Presented by D. Katramatos, BNL

Slide 2: Outline
• Introduction
• The TeraPaths project
• The TeraPaths system architecture
• The TeraPaths testbed
• In progress/future work

Slide 3: Introduction
• Project background: the modern nuclear and high-energy physics community (e.g., the LHC experiments) relies extensively on the grid computing model; US, European, and international networks are being upgraded to multiple 10 Gbps connections to cope with data movements of gigantic proportions
• The problem: support efficient, reliable, and predictable petascale data movement in modern grid environments utilizing high-speed networks
  - Multiple data flows with varying priorities
  - Default "best effort" network behavior can cause performance and service disruption problems
• Solution: enhance network functionality with QoS features to allow prioritization and protection of data flows
  - Schedule network usage as a critical grid resource

Slide 4: e.g., ATLAS Data Distribution
[Diagram: the ATLAS data distribution hierarchy, from the ATLAS experiment/online system at CERN (Tier 0+1) to Tier 1 sites such as BNL, on to Tier 2 and Tier 3 sites (e.g., UMich muon calibration at a Tier 3 site), and finally to Tier 4 workstations; annotated link rates range from Mbps at the edges up to ~10-40 Gbps on the backbone links, with ~GBps-~PBps aggregates at the experiment]

Slide 5: Prioritized vs. Best Effort Traffic

Slide 6: Partition Available Bandwidth
• Minimum Best Effort traffic
• Shared dynamic class(es): dynamic bandwidth allocation, dynamic aggregate and microflow policing
• Dedicated static classes: aggregate flow policing
• Shared static classes: aggregate and microflow policing
• Mark packets within a class using DSCP bits, police at ingress, trust DSCP bits downstream (see the policer sketch below)
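The policing referred to above is, conceptually, token-bucket rate limiting applied per class (aggregate policing) or per individual flow (microflow policing). The Java sketch below only illustrates that mechanism; in TeraPaths the policers live in the Cisco switch/router configuration rather than in software, and the rate/burst parameters shown here are arbitrary assumptions.

```java
/**
 * Minimal single-rate token-bucket policer, as an ingress switch conceptually
 * applies per class (aggregate) or per flow (microflow). Illustration only.
 */
public class TokenBucketPolicer {
    private final double rateBytesPerSec;   // committed rate for the class/flow
    private final double burstBytes;        // allowed burst size
    private double tokens;                  // currently available tokens (bytes)
    private long lastRefillNanos;

    public TokenBucketPolicer(double rateBytesPerSec, double burstBytes) {
        this.rateBytesPerSec = rateBytesPerSec;
        this.burstBytes = burstBytes;
        this.tokens = burstBytes;
        this.lastRefillNanos = System.nanoTime();
    }

    /**
     * Returns true if a packet of the given size conforms to the allocation
     * (keep its DSCP marking), false if it is out of profile (drop or re-mark
     * to best effort).
     */
    public synchronized boolean admit(int packetBytes) {
        long now = System.nanoTime();
        double elapsedSec = (now - lastRefillNanos) / 1e9;
        lastRefillNanos = now;
        // Refill tokens at the committed rate, capped at the burst size.
        tokens = Math.min(burstBytes, tokens + elapsedSec * rateBytesPerSec);
        if (tokens >= packetBytes) {
            tokens -= packetBytes;
            return true;
        }
        return false;
    }
}
```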

Slide 7: The TeraPaths Project
• BNL's TeraPaths project:
  - Under the U.S. ATLAS umbrella, funded by DOE
  - Research the use of DiffServ and MPLS/GMPLS in data-intensive distributed computing environments
  - Develop theoretical models for LAN/WAN coordination
  - Develop the necessary software for integrating end-site services and WAN services to provide end-to-end (host-to-host) guaranteed-bandwidth network paths to users
• Create, maintain, and expand a multi-site testbed for QoS network research
• Collaboration includes BNL, University of Michigan, ESnet, Internet2, SLAC; Tier 2 centers being added; Tier 3s to follow

Slide 8: End-to-End QoS... How?
• Within a site's LAN (administrative domain), DiffServ works and scales fine
  - Assign data flows to service classes
  - Pass-through "difficult" segments
• But once packets leave the site... DSCP markings get reset. Unless...
• Ongoing effort by new high-speed network providers to offer reservation-controlled dedicated paths with predetermined bandwidth
  - ESnet's OSCARS
  - Internet2's BRUW, DRAGON
  - Reserved paths configured to respect DSCP markings
• Address scalability by grouping data flows with the same destination and forwarding them to common tunnels/circuits

Slide 9: Extend LAN QoS through WAN(s)
• End sites use the DiffServ architecture to prioritize data flows at the packet level:
  - Per-packet QoS marking
  - DSCP bits (64 classes of service); see the marking sketch after this list
  - Pass-through: make high-risk and 3rd-party segments QoS-friendly
• WAN(s) connecting the end sites forward prioritized traffic:
  - MPLS tunnels of requested bandwidth (L3 service)
  - Circuits of requested bandwidth (L2 service)
  - No changes to DSCP bits
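For reference, the DSCP value occupies the upper six bits of the IP ToS/traffic-class octet, so marking can also be requested directly from an application socket. The minimal Java sketch below assumes the Expedited Forwarding code point (DSCP 46) and a hypothetical remote host; in TeraPaths the marking and re-marking is normally performed by the LAN switches, so this is illustration only.

```java
import java.net.InetSocketAddress;
import java.net.Socket;

public class DscpMarkingExample {
    // Expedited Forwarding code point; DSCP sits in the upper 6 bits of the
    // ToS byte, hence the shift by 2 (46 << 2 == 0xB8).
    private static final int DSCP_EF = 46;

    public static void main(String[] args) throws Exception {
        try (Socket s = new Socket()) {
            // Ask the OS to send packets from this socket with DSCP 46.
            // The operating system and the network must honor and preserve
            // the marking for it to have any effect end to end.
            s.setTrafficClass(DSCP_EF << 2);
            s.connect(new InetSocketAddress("remote.host.example", 5001)); // hypothetical host/port
            s.getOutputStream().write("hello".getBytes());
        }
    }
}
```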

Slide 10: Automate LAN/WAN Setup
• QoS reservation and network configuration grid middleware
  - Bandwidth partitioning
  - Access to QoS reservations: interactive web interface, API, and integration with popular grid data transfer tools (plug-ins)
  - Compatible with a variety of networking hardware
  - Coordination with WAN providers and remote LAN sites
  - User access control and accounting
• Enable the reservation of end-to-end network resources to assure specific levels of bandwidth (see the sketch after this list)
  - User requests bandwidth, start time, and duration
  - System either grants the request or makes a "counter offer"
  - Network is set up end-to-end with a single user request
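A hypothetical client-side view of such a reservation request is sketched below. All type and method names (Request, Reply, ReservationClient, requestReservation) are assumptions made for illustration; they are not the actual TeraPaths API.

```java
import java.time.Duration;
import java.time.Instant;

public class ReservationRequestSketch {

    /** What the user supplies: endpoints, bandwidth, start time, and duration. */
    record Request(String srcHost, String dstHost, long bandwidthBps,
                   Instant start, Duration duration) {}

    /** What the system returns: a grant, or a counteroffer the user may accept. */
    record Reply(boolean granted, String reservationId, Request counterOffer) {}

    /** Placeholder for a client stub that would call the site's web services. */
    interface ReservationClient {
        Reply requestReservation(Request request);
    }

    static void example(ReservationClient client) {
        Request req = new Request("host1.site-a.example", "host2.site-b.example",
                1_000_000_000L,                        // 1 Gb/s
                Instant.parse("2007-05-21T18:00:00Z"), // requested start time
                Duration.ofHours(2));                  // requested duration

        Reply reply = client.requestReservation(req);
        if (reply.granted()) {
            System.out.println("Reservation " + reply.reservationId() + " granted");
        } else {
            // The system could not fit the request and proposes an alternative
            // bandwidth and/or time window; the user decides whether to accept.
            System.out.println("Counteroffer: " + reply.counterOffer());
        }
    }
}
```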

Slide 11: End-to-End Configuration Models
[Diagram: three ways for end site A to coordinate with end site B across a chain of WANs (WAN 1, WAN 2, ..., WAN n): A. "star" model, B. "daisy chain" model, C. star/daisy chain hybrid model]

Slide 12: End-to-End Configuration Models
• Model C, the hybrid star/daisy chain, was selected as the most feasible:
  - End sites and WAN providers do not really want, or need, to understand each other's internal operations
  - End sites have no direct control over WAN providers, and vice versa
  - End sites typically deal only with the first provider of a WAN chain and have no direct business with downstream providers
• Star model A requires extensive topology information at the end sites and authorization at all involved WAN segments
• Daisy chain model B requires wide adoption of new, sophisticated communication protocols
  - If the end sites cannot agree, the daisy chain wastes time and cycles

Slide 13: Envisioned Overall Architecture
[Diagram: TeraPaths instances at Sites A-D invoke each other's services and those of the WAN chain (WAN 1 - WAN 2 - WAN 3); arrows distinguish service invocation, data flow, and peering]

Slide 14: TeraPaths System Architecture
[Diagram: QoS requests enter Site A (the initiator) through the web interface or API; each site runs web services comprising user manager, scheduler, and router manager modules on top of hardware drivers; Site A invokes the web services of Site B (the remote site) and the WAN web services of the WAN chain]

Slide 15: TeraPaths Web Services
• TeraPaths modules implemented as "web services" (see the interface sketch after this list)
  - Each network device (router/switch) is accessible/programmable from at least one management node
  - The site management node maintains the databases (reservations, etc.) and distributes network programming by invoking web services on subordinate management nodes
  - Remote requests to/from other sites invoke the corresponding site's TeraPaths public web services layer
  - WAN services are invoked through clients appearing as proxy servers (standardization of the interface, dynamic pluggability, fault tolerance)
• Web services benefits:
  - Standardized, reliable, and robust environment
  - Implemented in Java for portability
  - Accessible via web interface and/or API
  - Integration with grid services
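As a sketch of what a site's public web-services contract could look like with standard JAX-WS annotations (a plausible choice for a Java implementation, though the slide does not specify the toolkit), the interface below uses illustrative operation names that are assumptions, not the actual TeraPaths service definition.

```java
import javax.jws.WebMethod;
import javax.jws.WebService;

/** Hypothetical public web-service interface exposed by a TeraPaths site. */
@WebService(name = "TeraPathsPublicService")
public interface TeraPathsPublicService {

    /** Invoked by a remote site to request the matching reservation on this site. */
    @WebMethod
    String requestRemoteReservation(String srcHost, String dstHost,
                                    long bandwidthBps, long startEpochSec,
                                    long durationSec);

    /** Invoked to commit a previously granted temporary reservation. */
    @WebMethod
    boolean commitReservation(String reservationId);

    /** Invoked to cancel a reservation and free the associated resources. */
    @WebMethod
    boolean cancelReservation(String reservationId);
}
```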

Slide 16: TeraPaths Web Services Architecture
[Diagram: the public services layer (web interface, API, remote-site requests) sits in front of the internal services, admin module, and NDC database on a protected network; WAN services are reached through a WAN services proxy; local and remote invocation paths are distinguished]

Slide 17: Reservation Negotiation
• Capabilities of site reservation systems:
  - Yes/no vs. counteroffer(s)
  - Direct commit vs. temporary/commit/start
• Algorithms:
  - Serial vs. parallel
  - Counteroffer processing vs. multiple trials
• TeraPaths (current implementation; see the sketch after this list):
  - Counteroffers and temporary/commit/start
  - Serial procedure (local site, then remote site, then WAN), limited iterations
  - User approval requested for counteroffers
  - WAN is yes/no and direct commit
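The Java sketch below illustrates the serial procedure described on this slide: the local site is tried first, then the remote site, then the WAN, for a limited number of iterations, with any counteroffer fed back as the next candidate request. All types and method names here are assumptions for the sketch, not TeraPaths code, and the user-approval step for counteroffers is reduced to a comment.

```java
public class SerialNegotiationSketch {

    /** Candidate reservation: bandwidth in bit/s, start time and duration in seconds. */
    record Candidate(long bandwidthBps, long startEpochSec, long durationSec) {}

    /** Reply from a site: accepted (temporary hold placed) or a counteroffer. */
    record Answer(boolean accepted, Candidate counterOffer) {}

    /** A site reservation system with temporary/commit/start semantics. */
    interface Site {
        Answer tentativeReserve(Candidate c);  // place a temporary hold or counteroffer
        void commit(Candidate c);              // turn the temporary hold into a reservation
        void release(Candidate c);             // drop the temporary hold
    }

    /** The WAN (e.g., OSCARS) is yes/no with direct commit: no counteroffers. */
    interface Wan {
        boolean reserve(Candidate c);
    }

    static boolean negotiate(Site local, Site remote, Wan wan,
                             Candidate request, int maxIterations) {
        Candidate candidate = request;
        for (int i = 0; i < maxIterations; i++) {
            Answer a = local.tentativeReserve(candidate);
            if (!a.accepted()) {
                // In TeraPaths the user is asked to approve a counteroffer
                // before it is retried; that step is omitted here.
                candidate = a.counterOffer();
                continue;
            }
            Answer b = remote.tentativeReserve(candidate);
            if (!b.accepted()) {
                local.release(candidate);
                candidate = b.counterOffer();
                continue;
            }
            if (wan.reserve(candidate)) {      // direct commit in the WAN
                local.commit(candidate);
                remote.commit(candidate);
                return true;                   // end-to-end path reserved
            }
            local.release(candidate);
            remote.release(candidate);
            return false;                      // WAN refused; no counteroffer possible
        }
        return false;                          // iteration limit reached
    }
}
```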

Slide 18: Initial Experimental Testbed
• Full-featured LAN QoS simulation testbed using a private network environment:
  - Two Cisco switches (same models as the production hardware) interconnected with a 1 Gb/s link
  - Two managing nodes, one per switch
  - Four host nodes, two per switch
  - All nodes have dual 1 Gb/s Ethernet ports and are also connected to the BNL campus network
  - Managing nodes run the web services and database servers and have exclusive access to the switches
• Demo of prototype TeraPaths functionality given at SC'05

Slide 19: LAN Setup
[Diagram of the LAN QoS setup: traffic from hosts is admitted, policed, and marked by ACLs and policers; DSCP is trusted inside the LAN; at the border router, traffic to/from the WAN is admitted, re-policed, and re-marked; non-participating subnets are handled by separate ACLs and policers]

Slide 20: New TeraPaths Testbed (end-to-end)
[Diagram: test hosts and the NDC sit behind the BNL testbed edge router and a (virtual) testbed border router, connecting through the BNL border router to ESnet (OSCARS) and UltraLight, with peering at Chicago, and on to the UltraLight router at UMich]
• First end-to-end fully automated route setup BNL-ESnet-UMich at 1:41 pm EST

Slide 21: BNL-side Testbed

Slide 22: BNL-UMich route peering

Slide 23: Current BNL/UMich Testbed Details

Slide 24: Testbed Expansion in 2007
• Sites for the 2007 expansion:
  - University of Michigan / Michigan State University
  - University of Chicago / Indiana University
  - University of Oklahoma / University of Texas at Arlington
  - Boston University / Harvard University
  - SLAC
  - More?

Slide 25: L2 Issues
• The special path appears as a single-hop connection between end sites (dynamic VLAN setup)
• Forwarding of priority flows onto the special path now has to take place at the source end site instead of at the WAN ingress point
• Non-priority flows must not have access to the special path
• Pass-through issues
• Stricter coordination is necessary between end sites and WAN segments (VLAN tags, routing tables)
• No mix-and-match between L2 and L3, but coexistence is required
• Scalability issues

Slide 26: Utilizing Dynamic Circuits
[Diagram: hosts 1..n on a DSCP-friendly LAN send traffic to a TeraPaths-controlled "virtual border" router that directs flows with policy-based routing (PBR), e.g., flow 1 to circuit X and flow 2 to circuit Y; trunked VLANs pass through the site's border router and the local provider's router to the WAN switch, with a TeraPaths-controlled host router terminating circuits #X and #Y]

Slide 27: In Progress/Future Work
• End-site support for dynamic circuits
  - Interoperation with Internet2's Dynamic Circuit Services (DCS, through the DRAGON software)
• Expansion to Tier 2 sites and beyond (including BNL's production network)
• Experimentation with grid data transfer tools (dCache, GridFTP, etc.)
• Continual improvement of the TeraPaths software and feature additions:
  - Reservation negotiation algorithms
  - Grid software plug-ins
  - Bandwidth partitioning schemes
  - Grid AAA

Slide 28: Thank you! Questions?

Slide 29: Simulated (Testbed) and Actual Traffic
[Charts: (1) BNL to UMich, 2 bbcp dtd transfers with iperf; (2) testbed demo, competing iperf streams; background traffic through an ESnet MPLS tunnel]

Slide 30: BNL Site Infrastructure
[Diagram: the TeraPaths Domain Controller receives bandwidth requests and releases (subject to grid AAA and the network usage policy), identifies traffic by addresses, port numbers, and DSCP bits, configures LAN QoS and MPLS/L2 on the ingress/egress (M10) routers, issues requests to ESnet's OSCARS and to remote TeraPaths instances for remote LAN QoS, and interacts with data transfer management and monitoring (GridFtp and dCache/SRM SE)]