U.S. ATLAS Tier 1 Networking
Bruce G. Gibbard
LCG T0/T1 Network Meeting, CERN, 19 July 2005

Slide 2: Disclaimer
- Presentation by a non-network expert
- Information primarily from:
  - ESnet – William Johnston
  - BNL Networking – W. Scott Bradley

[Slides 3-4: ESnet figures from William Johnston et al.; image content not captured in the transcript]

Slide 5: Foreseen T1 Connection to Lambda (from W. Scott Bradley)
- ESnet will provide a dedicated 10 Gb/s wavelength from Brookhaven to the MANLAN PoP in New York City for T0-T1 connectivity, and a second 10 Gb/s wavelength for T1-T1 and T1-T2/3 traffic.
- Which networking equipment will be used?
  - Adva multiplexing platforms providing 10 GigE feeds to BNL's perimeter router.
- How is the local network layout organized?
  - A 10 Gb/s feed to the BNL ATLAS Computing Facility is in production today; it will be expanded to dual 10 Gb/s feeds and beyond as bandwidth requirements are articulated.
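The bandwidth figures above invite a quick sanity check. Below is a rough, illustrative Python sketch, not taken from the slides, of how much data a single or dual 10 Gb/s wavelength could move per day; the utilization and protocol-overhead values are assumptions.

```python
# Back-of-envelope capacity of a dedicated lambda: how much data a
# 10 Gb/s wavelength can move per day at a given sustained utilization.
# The utilization and overhead figures are illustrative assumptions,
# not values from the slides.

def daily_transfer_tb(link_gbps: float, utilization: float = 0.7,
                      protocol_overhead: float = 0.05) -> float:
    """Terabytes movable per day on a link of `link_gbps` Gb/s."""
    effective_gbps = link_gbps * utilization * (1.0 - protocol_overhead)
    seconds_per_day = 86_400
    bytes_per_day = effective_gbps / 8 * 1e9 * seconds_per_day
    return bytes_per_day / 1e12  # terabytes

if __name__ == "__main__":
    for lambdas in (1, 2):  # single vs. dual 10 Gb/s feeds
        tb = daily_transfer_tb(10 * lambdas)
        print(f"{lambdas} x 10 Gb/s at 70% utilization: ~{tb:.0f} TB/day")
```

Under these assumptions a single lambda sustains roughly 70 TB/day, which suggests why dual feeds are anticipated as requirements grow.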

Slide 6: Foreseen T1 Connection to Lambda (2) (from W. Scott Bradley)
- What AS number and IP prefixes will be used, and are the IP prefixes dedicated to the connectivity to the T0?
  - BNL has registered AS 43 as its BGP autonomous system number. The IP prefixes for the connection to the T0 facility have not yet been determined.
- What backup connectivity is foreseen?
  - The two wavelengths terminating BNL at the MANLAN PoP in NYC will traverse geographically disparate paths, with planned failover capability.
- What monitoring technology is used locally?
  - Spectrum and CiscoWorks 2000.
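Monitoring platforms such as Spectrum and CiscoWorks ultimately rest on SNMP polls of the managed devices. As a minimal sketch of that kind of poll, the following uses the third-party pysnmp library (v4.x synchronous API); the hostname, community string, and interface index are placeholders, not BNL values.

```python
# Minimal sketch of the kind of SNMP poll that monitoring platforms
# such as Spectrum or CiscoWorks perform against a router interface.
# Requires the third-party pysnmp package (v4.x synchronous hlapi);
# the hostname, community string, and interface index are placeholders.
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

def if_oper_status(host: str, community: str, if_index: int) -> str:
    """Return the IF-MIB ifOperStatus of one interface (e.g. 'up')."""
    error_indication, error_status, _, var_binds = next(getCmd(
        SnmpEngine(),
        CommunityData(community),
        UdpTransportTarget((host, 161)),
        ContextData(),
        ObjectType(ObjectIdentity('IF-MIB', 'ifOperStatus', if_index)),
    ))
    if error_indication or error_status:
        raise RuntimeError(str(error_indication or error_status))
    return var_binds[0][1].prettyPrint()

if __name__ == "__main__":
    # Placeholder target: a perimeter router's first interface.
    print(if_oper_status("router.example.org", "public", 1))
```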

Slide 7: Foreseen T1 Connection to Lambda (3) (from W. Scott Bradley)
- How is the operational support organized?
  - The traditional operational relationship between ESnet and BNL will be maintained, with the BNL perimeter router serving as the line of demarcation for operational responsibility. Network support of the internal ATLAS Computing Facility will continue to be managed as it is today.
- What security model will be used with the OPN?
  - The security model for the OPN at Brookhaven is expected to remain as currently in operation: the RHIC/ATLAS Computing Facility resides in its own firewalled enclave, with access rules and permissions appropriate to an international collaboration. (It is a fallacy that firewall technology cannot support 10 Gb/s line speeds; this point has been discussed separately in comments on the LHC High-Level Architecture document v1.9.)
- What is the policy for external monitoring of local network devices, e.g. the border router for the OPN?
  - The traditional monitoring policy between ESnet and BNL will be maintained: external entities may monitor up to the external interfaces of BNL's perimeter router. Problems perceived within the BNL enterprise are reported to the BNL Network Operations staff for troubleshooting and resolution.
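The enclave model described above comes down to per-collaboration allow lists enforced at the firewall. Below is a minimal sketch of that logic using only the Python standard library; every CIDR block shown is a hypothetical documentation prefix, not an actual BNL or CERN allocation.

```python
# Minimal sketch of the allow-list logic behind a firewalled enclave:
# admit traffic only from registered collaborator networks. All CIDR
# blocks here are hypothetical examples (RFC 5737 documentation space),
# not actual BNL or CERN prefixes.
import ipaddress

# Hypothetical per-collaboration allow list (site name -> prefixes).
ALLOWED_PREFIXES = {
    "T0 (example)": ["192.0.2.0/24"],
    "T2 peers (example)": ["198.51.100.0/24", "203.0.113.0/24"],
}

def is_permitted(src_ip: str) -> bool:
    """True if src_ip falls inside any registered collaborator prefix."""
    addr = ipaddress.ip_address(src_ip)
    return any(addr in ipaddress.ip_network(prefix)
               for prefixes in ALLOWED_PREFIXES.values()
               for prefix in prefixes)

if __name__ == "__main__":
    print(is_permitted("192.0.2.17"))   # True: inside the example T0 block
    print(is_permitted("192.0.3.17"))   # False: not on the allow list
```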