1 Terapaths: MPLS-Based Data Sharing Infrastructure for Peta-Scale LHC Computing
Bruce Gibbard and Dantong Yu, USATLAS Computing Facility
DOE Network Research PI Meeting, September 15-17, 2004

2 Terapaths Topics
- Introduction: what is the Terapaths project?
- BNL network configuration and upgrade plan
- WAN usage at BNL
- Related projects at BNL: Grid AA and SRM/dCache
- Program goals and details
- Work plans and schedule
- Summary and current status

3 What is Terapaths?
- This project will investigate the integration and use of MPLS-based differentiated network services in the ATLAS data-intensive distributed computing environment as a way to manage the network as a critical resource.
- The program intends to explore network configurations ranging from common shared infrastructure (current IP networks) to dedicated point-to-point optical paths, using MPLS/QoS to span the intervening possibilities.
- The collaboration includes:
  - Brookhaven National Laboratory (US ATLAS Tier 1, ESnet)
  - University of Michigan (US ATLAS candidate Tier 2 center, Internet2, UltraLight)

4 BNL Campus Network and Immediate Upgrade Plan
- The USATLAS/RHIC Computing Facility is attached to access-layer routers.
- The campus network is built on Cisco 6509 series switches with MPLS support.
- A Cisco PIX 535 firewall (1 Gbps) will be replaced by a firewall service blade for the Cisco 6500 series (5 Gbps) in Sep/Oct 2004.
- The core router is connected to an ESnet Juniper M10 router.
- The WAN connection will be upgraded to OC-48 (2.5 Gbps) in the Oct/Nov 2004 time frame.

5 Network Traffic Monitoring at GridFtp Servers and Routers

6 Network Utilization
- USATLAS is running Grid-enabled DC2 production and RHIC is sending physics data to remote collaborators.
- Sustained USATLAS data transfer from BNL to DC2 sites at ~25 MB/sec last month.
- Goal: sustain data transfer from BNL to CERN at ~45 MB/sec in Oct 2004.
- Network monitoring and performance testing (iperf, GridFtp); see the illustrative throughput check below.
- BNL currently offers only best-effort network service.
- We periodically fill the OC-12 connection: intense contention for limited network bandwidth leaves users unhappy.
- Network resources have to be systematically allocated and managed to deliver the most efficient and effective overall system.
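
As an illustration of the throughput checks mentioned above (iperf, GridFtp), the Python sketch below runs a single classic iperf (v2) client test against a placeholder host and compares the result with the ~45 MB/sec BNL-to-CERN goal; the host name and test length are examples, not project settings.

```python
# Illustrative sketch only: runs a classic iperf (v2) client test and checks the
# measured throughput against the ~45 MByte/sec BNL->CERN goal quoted on this slide.
# "remote.example.org" is a placeholder peer, not a real Terapaths endpoint.
import re
import subprocess

TARGET_MBYTES_PER_SEC = 45.0

def measure_throughput(host, seconds=30):
    """Run 'iperf -c <host>' and return the reported bandwidth in MBytes/sec."""
    out = subprocess.run(
        ["iperf", "-c", host, "-t", str(seconds), "-f", "M"],  # -f M => MBytes/sec
        capture_output=True, text=True, check=True,
    ).stdout
    # Classic iperf summary lines end with e.g. "  42.7 MBytes/sec"
    match = re.search(r"([\d.]+)\s+MBytes/sec", out)
    if not match:
        raise RuntimeError("could not parse iperf output:\n" + out)
    return float(match.group(1))

if __name__ == "__main__":
    rate = measure_throughput("remote.example.org")
    status = "meets" if rate >= TARGET_MBYTES_PER_SEC else "falls short of"
    print(f"measured {rate:.1f} MBytes/sec, which {status} the {TARGET_MBYTES_PER_SEC} MB/s goal")
```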

7 Related Projects: BNL Grid AA and SRM/dCache
- GUMS (Grid User Management System) is a Grid identity mapping service: it maps Grid credentials to local credentials (see the sketch below).
  - Part of the Privilege project, a joint project between USATLAS and USCMS.
  - GUMS has been in production for RHIC at BNL since May 2004.
  - We are transforming GUMS into a service that the gatekeeper can contact directly; a preliminary implementation has been completed.
  - It needs an extension to authorize network resources.
- Storage Resource Manager (SRM) provides dynamic space allocation and file management on Grid storage elements; it uses GridFtp to move files.
  - BNL USATLAS deployed two flavors of SRM: Berkeley SRM and dCache.
  - They are capable of inter-operating with each other via a web services interface.
  - Both provide access to USATLAS users.
  - Experience with them and their characteristics are documented.
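
To make the GUMS idea concrete, the minimal Python sketch below maps a certificate subject DN to a local account and adds a toy network-authorization check of the kind the "extension to authorize network resources" would need; the table entries, function names, and policy are invented for illustration and are not the actual GUMS interface.

```python
# Minimal sketch of grid-identity mapping in the spirit of GUMS: a gatekeeper
# presents a certificate DN and receives a local account. The mapping table,
# function names, and the network-authorization stub are hypothetical.
GRID_MAP = {
    "/DC=org/DC=doegrids/OU=People/CN=Alice Example": "usatlas1",
    "/DC=org/DC=doegrids/OU=People/CN=Bob Example":   "rhicstar",
}

def map_credential(subject_dn):
    """Return the local account for a grid certificate subject, or None."""
    return GRID_MAP.get(subject_dn)

def authorize_network_request(subject_dn, bandwidth_mbps):
    """Hypothetical extension: decide whether this identity may reserve bandwidth."""
    account = map_credential(subject_dn)
    if account is None:
        return False
    # Placeholder policy: USATLAS accounts may request up to 500 Mbps.
    return account.startswith("usatlas") and bandwidth_mbps <= 500

if __name__ == "__main__":
    dn = "/DC=org/DC=doegrids/OU=People/CN=Alice Example"
    print(map_credential(dn))                  # -> usatlas1
    print(authorize_network_request(dn, 200))  # -> True
```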

8 Project Goal and Objectives
- The primary goal of this project is to investigate the use of this technology in the ATLAS data-intensive distributed computing environment. In addition we intend to:
  - Develop expertise in MPLS-based QoS technology, which will be important to ATLAS and the LHC community more generally.
  - Dedicate fractions of the available WAN bandwidth via MPLS to ATLAS Tier 1 data movement and RHIC data replication, to assure adequate throughput and limit their disruptive impact on each other.
  - Enhance technical contact between the ATLAS Tier 1 at BNL and its network partners, including the Tier 0 center at CERN, potential ATLAS Tier 2s, and other members of the Grid3+ (OSG-0) community of which it is a part.

9 Proposed Prototype/Primitive Infrastructure
[Architecture diagram. Components: GridFtp & SRM storage element (SE); MPLS path across ESnet; network resource manager; traffic identification (TCP SYN/FIN packets, addresses, port numbers); Grid AA network usage policy translator; MPLS bandwidth requests and releases; OSCARS; ingress monitoring; direct MPLS/bandwidth requests (second/third year). A sketch of the general request flow follows.]
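
To make the request flow in the diagram more concrete, here is a minimal Python sketch of a bandwidth-tracking resource manager; the class names, host names, and admission policy are hypothetical and stand in for the real router and OSCARS interactions.

```python
# Hypothetical sketch of the request flow in the prototype diagram: a GridFtp/SRM
# transfer asks a site network resource manager for an MPLS path. Names, hosts,
# and the admission policy are invented for illustration only.
from dataclasses import dataclass

@dataclass
class MPLSRequest:
    vo: str            # virtual organization, e.g. "usatlas"
    src_host: str
    dst_host: str
    src_ports: range   # ports used to identify the flow at the ingress router
    bandwidth_mbps: int

class NetworkResourceManager:
    def __init__(self, total_mbps):
        self.total_mbps = total_mbps
        self.allocated_mbps = 0

    def request_path(self, req: MPLSRequest) -> bool:
        """Admit the request if capacity remains; a real system would trigger
        router configuration (e.g. via OSCARS) rather than simple bookkeeping."""
        if self.allocated_mbps + req.bandwidth_mbps > self.total_mbps:
            return False
        self.allocated_mbps += req.bandwidth_mbps
        return True

    def release_path(self, req: MPLSRequest):
        self.allocated_mbps = max(0, self.allocated_mbps - req.bandwidth_mbps)

if __name__ == "__main__":
    manager = NetworkResourceManager(total_mbps=1000)
    req = MPLSRequest("usatlas", "gridftp.example.org", "storage.example.org",
                      range(20000, 20100), bandwidth_mbps=400)
    print(manager.request_path(req))  # True: 400 of 1000 Mbps allocated
```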

10 Work Plans
- Terapaths envisions a multi-year program to deliver a high-performance, QoS-enabled network infrastructure for ATLAS/LHC computing. Each year's results will determine the direction of the following years.
- Phase I: Establish initial functionality (08/04 ~ 07/05).
  - Helps steer the direction of the following two phases.
- Phase II: Establish prototype production service (08/05 ~ 07/06).
  - Depends on the success of Phase I.
- Phase III: Establish full production service, extend scope, and increase functionality (08/06 ~ 07/07).
  - The level of service and its scope will depend on the available project funding and additional resources.
  - Broaden deployment and capability to Tier 2s and partners.

11 Establish MPLS Paths with Initial Partners
[Network map of proposed MPLS paths over ESnet and Abilene connecting BNL with SLAC, LBNL, ANL, FNAL, the University of Michigan, and CERN, via hubs including Sunnyvale, Albuquerque, El Paso, Atlanta, Chicago (StarLight), New York, Washington DC, and PAIX.]

12 Establish Initial Functionality
Milestones:
- Use a network mock-up on Linux "routers" to test configuration and capability.
- Set up an MPLS path inside the BNL campus network; connect GridFtp servers and SRM to the MPLS-based network.
- Study the impact of MPLS on the data transfer service and gain experience using MPLS.
- Study the behavior of an MPLS path through the firewall.
- Set up MPLS paths on routers from multiple vendors (Cisco and Juniper).
- Test and verify MPLS paths between BNL and LBL, SLAC (network monitoring project), FNAL, and CERN.
- Test and verify an inter-domain MPLS path between BNL and the University of Michigan.

13 Contributions at Year 1
- Build MPLS expertise at BNL: MPLS setup, configuration, maintenance, and removal.
- Learn the effectiveness and efficiency of MPLS and its impact on overall network performance: MPLS vs. non-MPLS.
- Decide whether MPLS is useful to LHC physics.
- Document the lessons and experience gained from this project.
- Raise awareness that network resources can and should be managed.

14 Establish Prototype Production Service
- Integrate Grid data transfer (GridFtp) into the MPLS-enabled network service.
- Effectively couple these network/data transfer services with storage resources managed by SRM; have GridFtp/SRM functionality in "beta".
- Incorporate the resulting system into the ATLAS grid middleware.
- Build tools to provide basic authentication, authorization, and access control (depends on funding; relies on leveraging existing work).
- Supply client interfaces that make this service available in a manner transparent to any details of the underlying QoS/MPLS traffic engineering (see the wrapper sketch below).
- Leverage MPLS path/VO-level network monitoring services with the DWMI project to be developed at SLAC.
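
One way to read the "transparent client interface" bullet is a thin wrapper that tries to reserve a path before calling the ordinary GridFtp client and silently falls back to best effort; in this sketch reserve_mpls_path() and release_mpls_path() are invented placeholders, and globus-url-copy is simply the standard GridFtp command-line tool.

```python
# Hypothetical "transparent" client wrapper: callers just ask for a transfer and
# never see the QoS/MPLS machinery. The reservation calls are placeholders for a
# site resource manager interface; the fallback is ordinary best-effort GridFtp.
import subprocess

def reserve_mpls_path(src_url, dst_url, bandwidth_mbps):
    """Placeholder: ask the (hypothetical) site resource manager for a path.
    Returns True if a dedicated path was set up, False to fall back."""
    return False  # best effort in this sketch

def release_mpls_path(src_url, dst_url):
    """Placeholder: tell the resource manager the path is no longer needed."""

def transfer(src_url, dst_url, bandwidth_mbps=200):
    reserved = reserve_mpls_path(src_url, dst_url, bandwidth_mbps)
    try:
        # globus-url-copy is the standard GridFtp command-line client.
        subprocess.run(["globus-url-copy", src_url, dst_url], check=True)
    finally:
        if reserved:
            release_mpls_path(src_url, dst_url)

if __name__ == "__main__":
    transfer("gsiftp://source.example.org/data/file1",
             "gsiftp://dest.example.org/data/file1")
```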

15 Extend Scope and Increase Functionality of Prototype Service
- Inter-domain MPLS establishment: dynamically create, adjust, and sub-partition MPLS paths to meet time-constrained network requirements.
  - Create a site-level network resource manager for multiple VOs vying for limited WAN resources.
  - Provide dynamic bandwidth re-adjustment based on resource usage policy and path utilization status collected from network monitoring (DWMI).
  - Leverage the dynamic MPLS establishment services provided by OSCARS, the ESnet On-Demand Secure Circuits and Advance Reservation System.
  - Create user interfaces/web services for LHC data transfer applications to request network resources in advance (a rough reservation sketch follows this slide).
- Goal: broaden deployment and capability to Tier 1 and Tier 2 sites, and create services that will be honored/adopted by the CERN ATLAS/LHC Tier 0.
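
The "request network resources in advance" item can be pictured with the toy reservation scheduler below; the Reservation fields, the capacity figure, and the admission rule are assumptions for illustration, not the OSCARS or Terapaths interface.

```python
# Hypothetical advance-reservation sketch for time-constrained bandwidth requests.
# Field names and the admission rule are invented; this is not the OSCARS API.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Reservation:
    vo: str
    bandwidth_mbps: int
    start: datetime
    end: datetime

class AdvanceScheduler:
    def __init__(self, link_capacity_mbps):
        self.capacity = link_capacity_mbps
        self.accepted = []

    def _load_at(self, t):
        return sum(r.bandwidth_mbps for r in self.accepted if r.start <= t < r.end)

    def request(self, res: Reservation) -> bool:
        """Accept only if the link is never oversubscribed during the window
        (load changes only where a reservation starts, so check those points)."""
        points = [res.start] + [r.start for r in self.accepted
                                if res.start < r.start < res.end]
        if any(self._load_at(t) + res.bandwidth_mbps > self.capacity for t in points):
            return False
        self.accepted.append(res)
        return True

if __name__ == "__main__":
    sched = AdvanceScheduler(link_capacity_mbps=2500)   # OC-48 class link
    r = Reservation("usatlas", 1000,
                    datetime(2004, 11, 1, 0, 0), datetime(2004, 11, 1, 6, 0))
    print(sched.request(r))  # True
```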

16 Summary and Status
- Summary: this project will prototype and deploy a QoS-capable wide-area networking service based on MPLS to support ATLAS/LHC data transfer.
- Current status:
  - An MPLS simulation testbed is being created in a private network environment.
  - We are evaluating mechanisms to assign different labels to GridFtp data transfers initiated by different VOs (see the sketch below):
    - A different GLOBUS_TCP_PORT_RANGE for each VO, so that source IP addresses and port numbers determine labels.
    - Use of IP CoS bits to assign labels at the border router.
  - Open Science Grid authentication/authorization systems are being developed to provide access control to various resources (GUMS software development).
  - Berkeley SRM and dCache/SRM were evaluated and deployed to interface BNL storage resources to the Grid.
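
The two labeling mechanisms under current status can be sketched as follows; the per-VO port ranges and the ToS/DSCP value are example numbers, and a production setup would mark traffic at the host network stack or border router rather than in application code.

```python
# Sketch of the two labeling ideas on this slide. Port ranges and the ToS/DSCP
# value are illustrative examples, not the project's actual assignments.
import os
import socket

# (1) Give each VO its own GLOBUS_TCP_PORT_RANGE so the source ports of its
#     GridFtp data connections identify the flow at the ingress router.
VO_PORT_RANGES = {"usatlas": (20000, 20999), "rhic": (21000, 21999)}  # examples

def set_port_range_for(vo):
    low, high = VO_PORT_RANGES[vo]
    os.environ["GLOBUS_TCP_PORT_RANGE"] = f"{low},{high}"

# (2) Alternatively, mark packets directly: set the IP ToS byte (CoS/DSCP bits)
#     on a socket so the border router can map it to an MPLS label. Linux only.
def marked_socket(tos_byte=0xB8):  # 0xB8 = DSCP "EF" shifted left 2; example value
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos_byte)
    return s

if __name__ == "__main__":
    set_port_range_for("usatlas")
    print(os.environ["GLOBUS_TCP_PORT_RANGE"])                  # -> 20000,20999
    s = marked_socket()
    print(hex(s.getsockopt(socket.IPPROTO_IP, socket.IP_TOS)))  # -> 0xb8
    s.close()
```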

