
1 TeraPaths: A QoS Enabled Collaborative Data Sharing Infrastructure for Petascale Computing Research
The TeraPaths Project Team
USATLAS Tier 2 Workshop

2 Outline
- Introduction
- The TeraPaths project
- The TeraPaths system architecture
- Interoperability with WAN network services
- Experimental deployment and testing
- Future work

3 Introduction
- The problem: support efficient, reliable, and predictable petascale data movement over modern high-speed networks
  - Multiple data flows with varying priority
  - Default "best effort" network behavior can cause performance and service disruption problems
- The solution: enhance network functionality with QoS features that allow prioritization and protection of data flows

4 The QoS Arsenal
- IntServ
  - RSVP: end-to-end, individual flow-based QoS
- DiffServ
  - Per-packet QoS marking
  - IP precedence (6+2 classes of service)
  - DSCP (64 classes of service)
- MPLS/GMPLS
  - Uses RSVP-TE
  - QoS compatible
  - Virtual tunnels, constraint-based routing, policy-based routing
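The per-packet DSCP marking above can be sketched in a few lines of Python. This is an illustrative example, not code from the project: the 6-bit DSCP occupies the upper bits of the 8-bit IP TOS byte (which is where the 64 classes of service come from); the choice of UDP and of the EF code point is an assumption for demonstration.

```python
import socket

# A minimal sketch of per-packet DSCP marking. The 6-bit DSCP value is
# shifted into the upper bits of the 8-bit IP TOS field; downstream
# routers classify packets into service classes based on these bits.
EF = 46  # Expedited Forwarding: a standard low-latency DSCP (illustrative)

def mark_dscp(sock, dscp):
    """Mark all packets sent on `sock` with the given DSCP value."""
    tos = dscp << 2  # DSCP occupies bits 7..2 of the TOS byte
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)
    return tos

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
print(mark_dscp(sock, EF))  # TOS byte 184 for DSCP 46
sock.close()
```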

5 The TeraPaths Project
- The TeraPaths project investigates the integration and use of LAN QoS and MPLS/GMPLS-based differentiated network services in the ATLAS data-intensive distributed computing environment, in order to manage the network as a critical resource
- DOE: the collaboration includes BNL and the University of Michigan, as well as OSCARS (ESnet) and DWMI (SLAC)
- NSF: BNL participates in UltraLight to provide the network advances required to enable petabyte-scale analysis of globally distributed data
- NSF: BNL participates in a new network initiative, PLaNetS (Physics Lambda Network System), led by Caltech

6 New Developments
- TeraPaths is rapidly evolving from a last-mile LAN QoS provider into a distributed negotiator of end-to-end network path QoS across multiple administrative domains
- Developed as a web service-based software system
- TeraPaths automates the establishment of network paths with QoS guarantees between end sites by configuring their corresponding LANs and requesting MPLS paths through WANs on behalf of end users
- The primary mechanism for creating such paths is the negotiation and placement of advance reservations across all involved domains

7 BNL Site Infrastructure (LAN/MPLS)
[diagram: the TeraPaths resource manager accepts bandwidth requests and releases, performs grid AAA and traffic identification (addresses, port #, DSCP bits), applies the site's network usage policy, issues MPLS requests to ESnet OSCARS and LAN QoS requests to remote TeraPaths instances, and coordinates with data transfer management and monitoring for GridFtp and dCache/SRM storage elements]

8 Envisioned Overall Architecture
[diagram: TeraPaths instances at Sites A-D interconnected across WANs 1-3, showing service invocation, data flow, and peering relationships]

9 Automated MPLS/LAN QoS Setup
- QoS reservation and network configuration system for data flows
- Access to QoS reservations:
  - Manually, through an interactive web interface
  - Programmatically, through APIs
- Compatible with a variety of networking components
- Cooperation with WAN providers and remote LAN sites
- Access control and accounting
- System monitoring
- Design goal: enable the reservation of end-to-end network resources to assure a specified "Quality of Service"
  - User requests minimum bandwidth, start time, and duration
  - System either grants the request or makes a "counter offer"
  - Network is set up end-to-end with a single user request
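The grant-or-counter-offer behavior described above can be sketched as a small advance-reservation scheduler. This is an illustrative sketch, not the actual TeraPaths implementation: the single-link capacity model and the policy of offering the time when the last conflicting reservation ends are assumptions.

```python
# A minimal sketch of grant-or-counter-offer bandwidth scheduling.
# A request is granted if it fits alongside all time-overlapping
# reservations; otherwise a later start time is offered instead.
class Scheduler:
    def __init__(self, capacity_mbps):
        self.capacity = capacity_mbps
        self.reservations = []  # list of (start, end, bandwidth) tuples

    def request(self, start, duration, bandwidth):
        """Grant the reservation if it fits, else counter-offer a start time."""
        end = start + duration
        overlapping = [(s, e, b) for (s, e, b) in self.reservations
                       if s < end and start < e]
        if sum(b for (_, _, b) in overlapping) + bandwidth <= self.capacity:
            self.reservations.append((start, end, bandwidth))
            return ("granted", start)
        # Counter-offer: start when the last conflicting reservation ends
        return ("counter-offer", max(e for (_, e, _) in overlapping))

sched = Scheduler(1000)          # assumed 1 Gb/s managed path
print(sched.request(0, 10, 600))  # ("granted", 0)
print(sched.request(5, 10, 600))  # ("counter-offer", 10)
```

A real scheduler would also track per-class capacities and persist reservations, but the negotiation shape (grant or counter-offer) is the same.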

10 TeraPaths System Architecture
[diagram: Site A (initiator) and Site B (remote), each running a user manager, scheduler, site monitor, router manager, and hardware drivers; QoS requests arrive via web page, APIs, or command line and are coordinated with WAN web services and WAN monitoring]

11 TeraPaths Web Services
- TeraPaths modules implemented as "web services"
  - Each network device (router/switch) is accessible/programmable from at least one management node
  - The site management node maintains the reservation and related databases and distributes network programming by invoking web services on subordinate management nodes
  - Remote requests to/from other sites invoke the corresponding web services (the destination site's TeraPaths or the WAN provider's)
- Web services benefits:
  - Standardized, reliable, and robust environment
  - Implemented in Java and completely portable
  - Accessible via web clients and/or APIs
  - Compatible with, and easily portable into, Grid services and the Web Services Resource Framework (WSRF in GT4)

12 TeraPaths Web Services Structure
[diagram: the TeraPaths service comprises an AAA Module (AAA), Remote Negotiation Module (RNM), Network Programming Module (NPM), Advance Reservation Module (ARM), Hardware Programming Modules (HPM), Remote Request Module (RRM), Network Configuration Module (NCM), DiffServ Module (DSM), MPLS Module (MSM), and, as a future capability, a Route Planning Module (RPM); users reach the service through a web interface or APIs, and remote invocations connect peer TeraPaths instances]

13 Site Bandwidth Partitioning Scheme
- Minimum best effort traffic
- Dynamic bandwidth allocation: shared dynamic class(es) with dynamic microflow policing
  - Mark packets within a class using DSCP bits, police at ingress, trust DSCP bits downstream
- Dedicated static classes: aggregate flow policing
- Shared static classes: aggregate and microflow policing
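The partitioning scheme above can be sketched as a simple allocation check. The class names and fractional shares below are hypothetical; only the structure (a guaranteed best-effort minimum plus static and dynamic QoS classes sharing one link) follows the slide.

```python
# A minimal sketch of site bandwidth partitioning: split link capacity
# between a guaranteed best-effort minimum, static classes, and dynamic
# classes, rejecting configurations that oversubscribe the link.
def partition(capacity_mbps, best_effort_min, static_classes, dynamic_classes):
    """Return per-class bandwidth in Mb/s; shares are fractions of capacity."""
    shares = {"best-effort": best_effort_min, **static_classes, **dynamic_classes}
    if sum(shares.values()) > 1.0:
        raise ValueError("class shares exceed link capacity")
    return {name: capacity_mbps * frac for name, frac in shares.items()}

# Hypothetical 1 Gb/s link: 20% best-effort floor, 30% dedicated static
# class, 40% shared dynamic class, 10% headroom.
alloc = partition(1000,
                  best_effort_min=0.2,
                  static_classes={"dedicated-EF": 0.3},
                  dynamic_classes={"shared-AF": 0.4})
print(alloc)
```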

14 Route Planning with MPLS
[diagram: TeraPaths combines site monitoring with WAN monitoring and WAN web services to plan MPLS routes across the WAN]

15 Experimental Setup
- Full-featured LAN QoS simulation testbed using a private network environment and a WAN network testbed

16 Acquired Experience
- Enabled, tested, and verified LAN QoS inside the BNL campus network
- Tested and verified MPLS paths between BNL and the University of Michigan
- Integrated LAN QoS with MPLS paths reserved through OSCARS
  - Full interoperability between the web services of the two projects
- Installed DWMI network monitoring tools
- Examined the impact of prioritized traffic on overall network performance, and the effectiveness and efficiency of MPLS/LAN QoS
- Weekly meetings between ESnet, SLAC, the University of Michigan, and BNL
- Developed and deployed remote negotiation/response and related services to fully automate end-to-end QoS establishment across multiple network domains
- Dynamically configure and partition QoS-enabled paths to meet time-constrained network requirements
- Integrated with software from other network projects: OSCARS, Lambda Station, and DWMI
- A peer-reviewed paper will be published in GridNet 2006

17 In Progress / Future Work
- Add GUMS-based Grid AAA to TeraPaths
- Develop a site-level network resource manager for multiple VOs vying for limited WAN resources
- Support dynamic bandwidth/routing adjustments based on resource usage policies and network monitoring data (provided by DWMI)
- Widen deployment of QoS capabilities to USATLAS Tier 2 sites
  - The TeraPaths addendum was approved; additional funding will be provided for Tier 2 deployment
  - Each interested Tier 2 cluster will receive moderate funding for this activity
  - An end-to-end network meeting will be held at BNL around September/October

