Terapaths: MPLS-Based Data Sharing Infrastructure for Petascale LHC Computing
Bruce Gibbard and Dantong Yu, USATLAS Computing Facility
DOE Network Research PI Meeting, September 15-17, 2004

Terapaths Topics
- Introduction: what is the Terapaths project
- BNL network configuration and upgrade plan
- WAN usage at BNL
- Related projects at BNL: Grid AA and SRM/dCache
- Program goals and details
- Work plans and schedule
- Summary and current status

What is Terapaths?
- This project will investigate the integration and use of MPLS-based differentiated network services in the ATLAS data-intensive distributed computing environment as a way to manage the network as a critical resource.
- The program intends to explore network configurations ranging from common shared infrastructure (current IP networks) to dedicated point-to-point optical paths, using MPLS/QoS to span the intervening possibilities.
- The collaboration includes:
  - Brookhaven National Laboratory (US ATLAS Tier 1, ESnet)
  - University of Michigan (US ATLAS candidate Tier 2 center, Internet2, UltraLight)

BNL Campus Network and Immediate Upgrade Plan
- The USATLAS/RHIC Computing Facility is attached to access-layer routers.
- The campus network is built on Cisco 6509 series switches with MPLS support.
- The Cisco PIX 535 firewall (1 Gbps) will be replaced by a firewall services blade for the Cisco 6500 series (5 Gbps) in Sep/Oct 2004.
- The core router is connected to an ESnet Juniper M10 router.
- The WAN connection will be upgraded to OC-48 (2.5 Gbps) in the Oct/Nov 2004 time frame.

Network Traffic Monitoring at GridFtp Servers and Routers

Network Utilization
- USATLAS is running Grid-enabled DC2 production, and RHIC is sending physics data to remote collaborators.
- Sustained USATLAS data transfer from BNL to DC2 sites at ~25 MB/s last month.
- Goal: sustain BNL-to-CERN data transfer at ~45 MB/s in October 2004.
- Network monitoring and performance testing (iperf, GridFtp).
- BNL currently offers only best-effort network service.
- We periodically fill the OC-12 connection; intense contention for the limited bandwidth leaves users unhappy.
- Network resources must be systematically allocated and managed to deliver the most efficient and effective overall system. (The sketch below relates the quoted rates to the OC-12 and OC-48 capacities.)
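To put the quoted rates in context, here is a minimal back-of-the-envelope sketch. The OC-12 and OC-48 capacities are standard SONET figures; the transfer rates are the ones stated above; everything else is illustrative arithmetic, not project tooling.

```python
# Rough utilization estimate for the transfer rates quoted on this slide.
# Link capacities are standard SONET figures.

OC12_BPS = 622e6   # OC-12, ~622 Mbit/s (current WAN link)
OC48_BPS = 2.5e9   # OC-48, ~2.5 Gbit/s (planned upgrade)

def utilization(rate_mbytes_per_s: float, link_bps: float) -> float:
    """Fraction of the link consumed by a sustained transfer rate given in MB/s."""
    return (rate_mbytes_per_s * 8e6) / link_bps

for label, rate in [("DC2 production (current)", 25), ("BNL -> CERN goal", 45)]:
    print(f"{label}: {utilization(rate, OC12_BPS):.0%} of OC-12, "
          f"{utilization(rate, OC48_BPS):.0%} of OC-48")
```

The 45 MB/s goal alone is well over half of the OC-12, which is why contention with other best-effort traffic becomes a problem before the OC-48 upgrade.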

Related Projects: BNL Grid AA and SRM/dCache
- GUMS (Grid User Management System) is a Grid identity mapping service: it maps Grid credentials to local credentials.
  - Part of the Privilege project, a joint effort between USATLAS and USCMS.
  - GUMS has been in production for RHIC at BNL since May 2004.
  - We are transforming GUMS into a service that the gatekeeper can contact directly; a preliminary implementation is complete.
  - It needs an extension to authorize network resources. (A sketch of the mapping idea follows below.)
- Storage Resource Managers (SRMs) provide dynamic space allocation and file management on Grid storage elements, using GridFtp to move files.
  - BNL USATLAS deployed two flavors of SRM: Berkeley SRM and dCache.
  - The two can interoperate with each other via a web services interface.
  - They provide access to USATLAS users; experience and their characteristics are documented.
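To illustrate the identity-mapping idea in the abstract, here is a minimal sketch of mapping a Grid certificate DN and VO to a local account. The mapping tables, DNs, and function names are hypothetical and do not reflect GUMS's actual interface or policy format.

```python
# Hypothetical sketch of Grid-credential-to-local-account mapping, in the
# spirit of GUMS; the policy tables and API are invented for illustration.

# Map a VO to a shared local group account.
VO_ACCOUNT_MAP = {
    "usatlas": "usatlas1",
    "rhic": "rhicgrid",
}

# Site-specific overrides for individual DNs.
EXPLICIT_DN_MAP = {
    "/DC=org/DC=doegrids/OU=People/CN=Example Admin": "gridadmin",
}

def map_user(dn: str, vo: str) -> str:
    """Return the local account for a Grid identity, or raise if unauthorized."""
    if dn in EXPLICIT_DN_MAP:          # explicit per-DN mappings win
        return EXPLICIT_DN_MAP[dn]
    if vo in VO_ACCOUNT_MAP:           # otherwise map by VO membership
        return VO_ACCOUNT_MAP[vo]
    raise PermissionError(f"No mapping for {dn} (VO={vo})")

print(map_user("/DC=org/DC=doegrids/OU=People/CN=Jane Physicist", "usatlas"))
```

Extending this to network resources would mean the same lookup also returning which bandwidth classes or MPLS paths the mapped identity may request.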

Project Goal and Objectives
- The primary goal of this project is to investigate the use of this technology in the ATLAS data-intensive distributed computing environment. In addition we intend to:
  - Develop expertise in MPLS-based QoS technology, which will be important to ATLAS and the LHC community more generally.
  - Dedicate fractions of the available WAN bandwidth via MPLS to ATLAS Tier 1 data movement and RHIC data replication, to assure adequate throughput and limit their disruptive impact on each other.
  - Enhance technical contact between the ATLAS Tier 1 at BNL and its network partners, including the Tier 0 center at CERN, potential ATLAS Tier 2s, and other members of the Grid3+ (OSG-0) community of which it is a part.

Proposed Prototype/Primitive Infrastructure
(Architecture diagram.) Components shown: GridFtp and SRM storage elements on an MPLS path; a network resource manager receiving MPLS requests; traffic identification based on TCP SYN/FIN packets, addresses, and port numbers; Grid AA; a network usage policy translator; MPLS bandwidth requests and releases toward ESnet/OSCARS at the ingress; monitoring; and direct MPLS/bandwidth requests from the SE, planned for the second/third year. (A traffic-identification sketch follows below.)
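As a rough illustration of the traffic-identification step in the diagram, the sketch below classifies a flow to a VO (and hence to an MPLS path) from its source address and port. The subnets, port ranges, and function names are hypothetical, not the project's actual rules.

```python
# Hypothetical flow classifier: decide which VO (and therefore which MPLS path)
# a transfer belongs to, using its source address and port range.
# Address blocks and port ranges are invented for illustration.

from ipaddress import ip_address, ip_network

VO_RULES = [
    # (VO name, source subnet, (min_port, max_port))
    ("usatlas", ip_network("192.168.10.0/24"), (20000, 25000)),
    ("rhic",    ip_network("192.168.20.0/24"), (25001, 30000)),
]

def classify_flow(src_ip: str, src_port: int) -> str:
    """Return the VO label for a flow, or 'best-effort' if no rule matches."""
    addr = ip_address(src_ip)
    for vo, subnet, (lo, hi) in VO_RULES:
        if addr in subnet and lo <= src_port <= hi:
            return vo
    return "best-effort"

# Example: a GridFtp data channel opened from a USATLAS server
print(classify_flow("192.168.10.42", 21017))   # -> "usatlas"
print(classify_flow("10.0.0.5", 40000))        # -> "best-effort"
```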

Work Plans
- Terapaths envisions a multi-year program to deliver a high-performance, QoS-enabled network infrastructure for ATLAS/LHC computing. Each year will determine the direction of the following year(s).
- Phase I: Establish initial functionality (08/04 to 07/05).
  - Helps steer the direction of the following two phases.
- Phase II: Establish prototype production service (08/05 to 07/06).
  - Depends on the success of Phase I.
- Phase III: Establish full production service, extend scope, and increase functionality (08/06 to 07/07).
  - The level of service and its scope will depend on the available project funding and additional resources.
  - Broaden deployment and capability to Tier 2s and partners.

Establish MPLS Paths with Initial Partners
(Network map.) Labels on the map: SLAC, LBL/LBNL, FNAL, ANL, BNL, ESnet, Abilene, CERN, and the University of Michigan (MICH), with ESnet hub sites at Sunnyvale, Albuquerque, El Paso, Atlanta, New York, Chicago (StarLight), DC, and PAIX, and the planned MPLS path indicated.

Establish Initial Functionality
Milestones:
- Use a network mock-up on Linux "routers" to test configuration and capability.
- Set up an MPLS path inside the BNL campus network; connect GridFtp servers and SRM to the MPLS-based network.
- Study the impact of MPLS on the data transfer service and gain experience using MPLS (see the throughput-comparison sketch below).
- Study the behavior of an MPLS path through the firewall.
- Set up MPLS paths on routers from multiple vendors (Cisco and Juniper).
- Test and verify MPLS paths between BNL and LBL, SLAC (network monitoring project), FNAL, and CERN.
- Test and verify an inter-domain MPLS path between BNL and the University of Michigan.
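A minimal sketch of how the MPLS-impact study could be scripted: run an iperf client against a test host reached over the MPLS path and another reached over the default best-effort route, and compare the achieved throughput. The host names are placeholders, and the sketch assumes iperf3 (with JSON output) is installed on both ends; it is not the project's actual test harness.

```python
# Compare achievable throughput over the MPLS path vs. the best-effort path
# by running iperf3 against a server reachable over each route.
# Host names are placeholders; assumes iperf3 on both client and servers.

import json
import subprocess

def measure_mbps(server: str, seconds: int = 30) -> float:
    """Run an iperf3 client test and return received throughput in Mbit/s."""
    out = subprocess.run(
        ["iperf3", "-c", server, "-t", str(seconds), "-J"],
        check=True, capture_output=True, text=True,
    ).stdout
    result = json.loads(out)
    return result["end"]["sum_received"]["bits_per_second"] / 1e6

if __name__ == "__main__":
    mpls = measure_mbps("test-host.mpls.example.org")     # reached via MPLS path
    best_effort = measure_mbps("test-host.example.org")   # reached via default route
    print(f"MPLS path:        {mpls:8.1f} Mbit/s")
    print(f"Best-effort path: {best_effort:8.1f} Mbit/s")
```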

Contributions at Year 1
- Build MPLS expertise at BNL: MPLS setup, configuration, maintenance, and removal.
- Learn the effectiveness and efficiency of MPLS and its impact on overall network performance (MPLS vs. non-MPLS traffic).
- Decide whether MPLS is useful to LHC physics.
- Document lessons and experience learned from this project.
- Raise awareness that network resources can and should be managed.

Establish Prototype Production Service
- Integrate Grid data transfer (GridFtp) into the MPLS-enabled network service.
- Effectively couple these network/data transfer services with storage resources managed by SRM; have GridFtp/SRM functionality in "beta".
- Incorporate the resulting system into the ATLAS Grid middleware.
- Build tools to provide basic authentication, authorization, and access control (depends on funding; relies on leveraging existing work).
- Supply client interfaces that make this service available in a manner transparent to the details of the underlying QoS/MPLS traffic engineering (sketched below).
- Leverage MPLS path and VO-level network monitoring services with the DWMI project being developed at SLAC.
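A minimal sketch of what such a transparent client interface might look like: reserve a path, run the transfer, release the reservation. The reservation functions, token, sites, and URLs are hypothetical placeholders for whatever interface the network resource manager ends up exposing; globus-url-copy is the standard GridFtp command-line client.

```python
# Hypothetical client wrapper: reserve bandwidth, run the GridFtp transfer with
# globus-url-copy, then release the reservation. The reserve/release calls stand
# in for the eventual network resource manager interface.

import subprocess

def reserve_path(src_site: str, dst_site: str, mbps: int) -> str:
    """Placeholder: ask the (hypothetical) network resource manager for bandwidth."""
    print(f"[terapaths] requesting {mbps} Mbit/s {src_site} -> {dst_site}")
    return "reservation-0001"   # token identifying the reservation

def release_path(token: str) -> None:
    """Placeholder: release the reservation when the transfer is done."""
    print(f"[terapaths] releasing {token}")

def qos_copy(src_url: str, dst_url: str, mbps: int = 200) -> None:
    """Transfer a file over a reserved path; QoS details stay hidden from the caller."""
    token = reserve_path("BNL", "UMICH", mbps)
    try:
        # -p 4: four parallel GridFtp streams
        subprocess.run(["globus-url-copy", "-p", "4", src_url, dst_url], check=True)
    finally:
        release_path(token)

qos_copy("gsiftp://dcache.example.org/data/dc2/file001",
         "gsiftp://tier2.example.edu/atlas/file001")
```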

Extend Scope and Increase Functionality of the Prototype Service
- Inter-domain MPLS establishment: dynamically create, adjust, and sub-partition MPLS paths to meet time-constrained network requirements.
  - Create a site-level network resource manager for multiple VOs vying for the limited WAN resource (a bandwidth-partitioning sketch follows below).
  - Provide dynamic bandwidth re-adjustment based on resource usage policy and path utilization status collected from network monitoring (DWMI).
  - Leverage the dynamic MPLS establishment services provided by OSCARS, the ESnet On-Demand Secure Circuits and Advance Reservation System.
  - Create user interfaces/web services for LHC data transfer applications to request network resources in advance.
- Goal: broaden deployment and capability to Tier 1 and Tier 2 sites, and create services that will be honored/adopted by the CERN ATLAS/LHC Tier 0.
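To make the site-level resource manager idea concrete, here is a minimal sketch of partitioning a fixed MPLS allocation among VOs according to policy shares and measured demand. The 1000 Mbit/s total, the policy shares, and the redistribution rule are invented for illustration.

```python
# Hypothetical site-level partitioning of a fixed WAN allocation among VOs.
# Policy shares and the 1000 Mbit/s total are invented; spare capacity is
# redistributed to VOs whose demand exceeds their policy share.

def partition_bandwidth(total_mbps: float, policy: dict, demand: dict) -> dict:
    """Split total_mbps by policy shares, capped by each VO's measured demand."""
    alloc = {vo: min(demand.get(vo, 0.0), total_mbps * share)
             for vo, share in policy.items()}
    spare = total_mbps - sum(alloc.values())
    hungry = [vo for vo in policy if demand.get(vo, 0.0) > alloc[vo]]
    for vo in hungry:                      # hand spare capacity to unmet demand
        extra = min(spare / len(hungry), demand[vo] - alloc[vo])
        alloc[vo] += extra
    return alloc

policy = {"usatlas": 0.6, "rhic": 0.3, "other": 0.1}       # policy shares
demand = {"usatlas": 800.0, "rhic": 150.0, "other": 20.0}  # measured demand (Mbit/s)
print(partition_bandwidth(1000.0, policy, demand))
```

In the full system the demand figures would come from path-utilization monitoring (DWMI), and the resulting allocations would be pushed out as MPLS path adjustments via OSCARS.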

Summary and Status
- Summary: this project will prototype and deploy a QoS-capable wide area networking service based on MPLS to support ATLAS/LHC data transfer.
- Current status:
  - An MPLS simulation testbed is being created in a private network environment.
  - Evaluating mechanisms to assign different labels to GridFtp data transfers initiated by different VOs (see the sketch below):
    - A different GLOBUS_TCP_PORT_RANGE for each VO, with source IP addresses and port numbers determining labels.
    - Use of IP CoS bits to assign labels at the border router.
  - Open Science Grid authentication/authorization systems are being developed to provide access control to various resources (GUMS software development).
  - Berkeley SRM and dCache/SRM were evaluated and deployed to interface BNL storage resources into the Grid.
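A minimal sketch of the two labeling mechanisms listed above. GLOBUS_TCP_PORT_RANGE is the real Globus environment variable that pins GridFtp data channels to a port range, and IP_TOS is the standard socket option for setting CoS/ToS bits; the per-VO port ranges and the specific ToS value are illustrative assumptions, not the project's adopted configuration.

```python
# Sketch of the two flow-marking mechanisms under evaluation.

import os
import socket
import subprocess

# 1) Pin a VO's GridFtp data channels to a dedicated port range so routers can
#    match on source ports when assigning MPLS labels. Ranges are illustrative.
VO_PORT_RANGES = {"usatlas": "20000,25000", "rhic": "25001,30000"}

def run_transfer_for_vo(vo: str, src_url: str, dst_url: str) -> None:
    env = dict(os.environ, GLOBUS_TCP_PORT_RANGE=VO_PORT_RANGES[vo])
    subprocess.run(["globus-url-copy", src_url, dst_url], env=env, check=True)

# 2) Alternatively, mark packets with IP CoS/ToS bits so the border router can
#    classify them; 0xB8 corresponds to DSCP EF (expedited forwarding).
def tos_marked_socket(tos: int = 0xB8) -> socket.socket:
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)
    return s
```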