BNL Network Status and dCache/Network Integration Dantong Yu USATLAS Computing Facility Brookhaven National Lab.

2 USATLAS Tier 1 Network Outline
- Tier 1 networks
- dCache and network integration
- Tier 0 data export performance
- Tier 2 site networks
- Network monitoring and 24x7 operations
- Network research
- Network future plan: direct Tier 1 to Tier 1 and Tier 1 to USATLAS Tier 2 connectivity

BNL Tier 1 Networks: A Zoomed-Out View

4 BNL 20 Gig-E Architecture Based on Cisco 65xx
- 20 Gbps LAN for the LHCOPN
- 20 Gbps for production IP
- Fully redundant; survives the failure of any network switch
- No firewall for the LHCOPN (the green lines in the diagram)
- Two firewalls for all other IP networks
- The Cisco Firewall Services Module (FWSM), a line card plugged into the Cisco chassis with 5x1 Gbps capacity, allows outgoing connections (except on the http and https ports)
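To make the firewalled/unfirewalled split concrete, here is a minimal Python sketch of the path choice described above. It is an illustration only: the class names, the single boolean classifier, and the use of the quoted capacities as hard ceilings are assumptions, not BNL configuration.

```python
# Toy model of the two outbound paths described on this slide.
from dataclasses import dataclass

@dataclass
class Path:
    name: str
    capacity_gbps: float
    firewalled: bool

# Two logical paths out of the Tier 1 LAN, per the slide.
LHCOPN = Path("LHCOPN (no firewall)", capacity_gbps=20.0, firewalled=False)
PRODUCTION_IP = Path("Production IP via FWSM", capacity_gbps=5.0, firewalled=True)  # 5 x 1 Gbps line card

def select_path(destination_is_lhcopn: bool) -> Path:
    """LHCOPN traffic bypasses the firewall; all other IP traffic
    traverses the Firewall Services Module."""
    return LHCOPN if destination_is_lhcopn else PRODUCTION_IP

if __name__ == "__main__":
    for dest, is_opn in [("CERN T0 export", True), ("generic WAN transfer", False)]:
        p = select_path(is_opn)
        print(f"{dest}: {p.name}, ceiling ~{p.capacity_gbps} Gb/s")
```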

5 BNL and Long Island MAN Ring
[Diagram: the ESnet Long Island MAN, a diverse dual-core DWDM ring (KeySpan Communications) connecting Brookhaven National Lab (Upton, NY) to 32 AoA, NYC over 10 GE / 10 Gb/s circuits, with the ESnet demarc at a Cisco 6509 and the BNL IP/LHC gateway. It carries the production IP core, SDN/provisioned virtual circuits, and proposed international USLHCnet circuits toward Chicago, CERN, and GEANT, and peers with Abilene, NYSERNet, SINet (Japan), CANARIE (Canada), HEANet (Ireland), and Qatar; a second MAN switch is planned for 2007 or 2008.]

6 BNL Redundant Diverse Network Connection
[Diagram: BNL's internal network and its diverse external connections — MAN LAN, CERN (?), NLR, ESnet, GEANT, etc.]

ESnet3 Today Provides Global High-Speed Internet Connectivity for DOE Facilities and Collaborators
[Map: the ESnet3 IP core (packet over SONET optical ring and hubs) and Science Data Network (SDN) core serving 42 end-user sites — Office of Science sponsored (22), NNSA sponsored (12), joint sponsored (3), laboratory sponsored (6), and other sponsored (NSF LIGO, NOAA). Link classes range from 10 Gb/s SDN and IP core circuits and MAN rings (≥10 Gb/s) down to OC12, OC3, and 45 Mb/s tails; the map also shows commercial peering points (MAE-E, PAIX-PA, Equinix), high-speed Abilene/Internet2 peerings, the DOE+CERN funded USLHCnet link to CERN, and international R&E peers including GEANT, SINet (Japan), CA*net4 (Canada), GLORIAD, Kreonet2 (Korea), AARNet (Australia), TANet2 (Taiwan), and others.]

dCache WAN interface architecture and integration

9 dCache and Network Integration
[Diagram: dCache SRM and core servers plus GridFtp doors (7 nodes) sit between a 20 Gb/s connection to the HPSS mass storage system and the WAN (a 2x10 Gb/s LHCOPN VLAN plus 7x1 Gb/s Tier 1 VLANs). Pools include the write pool (13 nodes), the T0 export pool (>=30 nodes), the farm pool (434 nodes / 360 TB), a new farm pool (80 nodes, 360 TB raw), and Thumpers (30 nodes, 720 TB raw). Logical connections show the FTS-controlled path through the GridFtp doors and the direct srmcp path. 5.4 TB of storage on the write pool is off-line.]

10 BNL dCache and Network Integration
- Data import and export:
  - Preferred and fully supported: FTS / glite-url-copy; data transfers go through the GridFtp server nodes.
  - Less desired and partially supported: direct end-to-end srmcp transfers, which go through the Cisco firewall (bottleneck, well below 5x1 Gbps).
- Advantages:
  - Less exposure to the WAN; only a limited set of nodes has firewall conduits.
  - The bulk of data transfers is managed by FTS and bypasses the firewall.
  - The firewall can handle the negligible load generated by users invoking srmcp directly.
  - Performance can be scaled up by adding extra Grid server nodes (see the sketch below).
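The back-of-the-envelope sketch below contrasts the ceilings of the two paths just listed. The 7 GridFtp doors, 1 Gbps NICs, and 5x1 Gbps FWSM figures come from the slides; treating them as simple additive ceilings, and the 12-node scaling example, are assumptions for illustration.

```python
# Illustrative throughput ceilings for the two dCache WAN transfer paths.

def gridftp_ceiling_gbps(door_nodes: int, nic_gbps: float = 1.0) -> float:
    """FTS/GridFTP path: scales roughly with the number of door nodes."""
    return door_nodes * nic_gbps

def srmcp_ceiling_gbps(fwsm_links: int = 5, link_gbps: float = 1.0) -> float:
    """Direct srmcp path: capped by the shared FWSM EtherChannel."""
    return fwsm_links * link_gbps

if __name__ == "__main__":
    print("GridFTP doors (7 nodes):   ", gridftp_ceiling_gbps(7), "Gb/s")
    print("GridFTP doors (12 nodes):  ", gridftp_ceiling_gbps(12), "Gb/s  # scaling by adding doors")
    print("srmcp via firewall, shared: <", srmcp_ceiling_gbps(), "Gb/s")
```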

Tier 0 Data Export Performance

12 Megatable Extract
[Table: target data rates per experiment (ALICE, ATLAS, CMS, LHCb) plus an overall target for each Tier 1 centre — IN2P3 Lyon, GridKA Germany, CNAF Italy, FNAL USA, BNL USA, RAL UK, NIKHEF NL, ASGC Taipei, PIC Spain, Nordic Data Grid Facility, TRIUMF Canada, US ALICE — and totals; the numeric values are not included in the transcript.]

13 ATLAS Tier 0 Data Export Dashboard
[Dashboard screenshots covering the last hour, the last four hours, and the last day.]

14 Ganglia Plots for the Aggregated Data Into dCache

Tier 2 Network Connectivity

16 ATLAS Great Lakes Tier 2

17 Midwest Tier 2

18 Northeast Tier 2

19 Southwest Tier 2

20 Western Tier 2: SLAC

21 Network Operations and Monitoring
- Cacti
  - Replacement for MRTG
  - SNMP-based monitoring tool
  - Tracks most BNL core network interfaces
  - Also tracks the Firewall Services Module EtherChannel interfaces
  - Publicly available at
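For context, the core calculation an SNMP poller such as Cacti performs is turning two samples of an interface octet counter (e.g. ifHCInOctets) into an average bit rate. The sketch below is a minimal, self-contained illustration of that arithmetic; the counter values and 300 s polling interval are made-up examples, not BNL data.

```python
# Illustrative sketch: average interface rate from two SNMP octet-counter samples.

def rate_gbps(octets_t0: int, octets_t1: int, interval_s: float,
              counter_bits: int = 64) -> float:
    """Average input rate between two counter samples, handling wrap-around."""
    delta = octets_t1 - octets_t0
    if delta < 0:                     # counter wrapped between polls
        delta += 2 ** counter_bits
    return delta * 8 / interval_s / 1e9

if __name__ == "__main__":
    # Example: 2.25e11 octets transferred in a 300 s polling interval -> ~6 Gb/s
    print(f"{rate_gbps(1_000_000_000_000, 1_225_000_000_000, 300):.2f} Gb/s")
```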

22 BNL Off-Hour Coverage for Network Operations
- Off-hour phone calls are handled by a trained helpdesk analyst 24 hours a day, 7 days a week.
- The help desk does initial triage and forwards the call to the network on-call person.
- The on-call person has contact information for the ESnet NOC, the USLHCnet NOC, and the CERN NOC.

23 TeraPaths
- The problem: support efficient, reliable, predictable petascale data movement over modern high-speed networks
  - Multiple data flows with varying priority
  - Default "best effort" network behavior can cause performance and service disruption problems
- Solution: enhance network functionality with QoS features to allow prioritization and protection of data flows
  - Treat the network as a valuable resource
  - Schedule network usage (how much bandwidth, and when)
  - Techniques: DSCP, MPLS, and VLANs (see the DSCP sketch below)
- Collaborate with ESnet (OSCARS) and Internet2 (DRAGON) to dynamically create end-to-end paths and dynamically forward traffic onto them; being deployed to USATLAS Tier 2 sites
  - Option 1: Layer 3, MPLS (supported)
  - Option 2: Layer 2, VLAN (under development)
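As a minimal illustration of the DSCP technique listed above, the sketch below marks a socket's outgoing packets with a DSCP code point, which is what allows routers on a QoS-enabled path to prioritize the flow. The choice of EF (46) is a common example, not TeraPaths policy, and the snippet assumes a platform where Python exposes IP_TOS.

```python
# Illustrative sketch: DSCP marking of a single TCP flow via the IP TOS byte.
import socket

EF = 46  # Expedited Forwarding code point, a common choice for priority traffic

def mark_dscp(sock: socket.socket, dscp: int = EF) -> None:
    """Set the DSCP code point on a socket's outgoing packets.
    DSCP occupies the upper 6 bits of the IP TOS byte, hence the shift."""
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, dscp << 2)

if __name__ == "__main__":
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    mark_dscp(s)
    print("TOS byte:", s.getsockopt(socket.IPPROTO_IP, socket.IP_TOS))  # expect 184
    s.close()
```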

24 TeraPaths System Architecture
[Diagram: Site A (initiator) and Site B (remote) each run a user manager, scheduler, site monitor, and router manager on top of hardware drivers, exposed through web services with web page, API, and command-line front ends for QoS requests; the two sites are linked through a WAN chain of WAN web services and WAN monitoring.]
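The mock-up below sketches the request flow the diagram suggests: a QoS request enters the initiator site, is scheduled and applied locally, relayed along the WAN chain, and mirrored at the remote site. It is purely hypothetical; none of the class or method names come from the TeraPaths code base.

```python
# Hypothetical model of a TeraPaths-style end-to-end QoS reservation flow.
from dataclasses import dataclass

@dataclass
class QoSRequest:
    src: str
    dst: str
    bandwidth_mbps: int
    start: str          # e.g. "2007-07-16T09:00"
    duration_min: int

class Site:
    def __init__(self, name: str):
        self.name = name

    def reserve(self, req: QoSRequest) -> bool:
        # user manager -> scheduler -> router manager, collapsed into one step
        print(f"[{self.name}] local LAN path configured for {req.bandwidth_mbps} Mb/s")
        return True

def end_to_end_reserve(initiator: Site, remote: Site, wan_chain: list, req: QoSRequest) -> bool:
    """Initiator configures its LAN, each WAN domain is asked in turn, then the remote site."""
    ok = initiator.reserve(req)
    for domain in wan_chain:
        print(f"[WAN:{domain}] circuit/MPLS reservation requested")
    return ok and remote.reserve(req)

if __name__ == "__main__":
    req = QoSRequest("BNL", "UMich", bandwidth_mbps=1000,
                     start="2007-07-16T09:00", duration_min=120)
    end_to_end_reserve(Site("BNL"), Site("UMich"), ["ESnet", "Internet2"], req)
```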

Conclusions and Network Discussion Points

26 Conclusions
- The BNL network has been stable and significantly improved since the 20 Gbps upgrade.
- Tier 0 to Tier 1 data transfers ride on the LHCOPN, while Tier 1 to BNL transfers still use the production IP network.
- BNL network utilization is less than 30% of the 20 Gbps capacity; we have not yet been able to push data transfers close to the network bandwidth limit.
- Full redundancy has been built into the LAN.
- WAN (USLHCnet) redundancy is being investigated.
- dCache and the BNL LAN are fully integrated, an optimized trade-off between network security and performance.

27 Discussion Points
- T1 to T1 transit via the T0: data transfers between T1 centers transiting through the T0 are technically feasible, but this is not implemented.
- Direct Tier 1 to Tier 1 connections:
  - A layer 2 connection between FNAL and IN2P3 via ESnet and GEANT has been set up.
  - BNL/TRIUMF (ready) and BNL/Prague (in planning).
  - What about BNL/IN2P3 and BNL/FZK?
  - BNL needs to work with ESnet (USLHCNET), and IN2P3 and FZK with GEANT; we need to work on both ends simultaneously.
- Tier 1 to Tier 2 connectivity.