Run-II Networks. Run-II Computing Review, 9/13/04. Phil DeMar, Networks Section Head.



LAN Overview

Centralized management of the facility network:
- Includes design, installation, & operation of Run-II networks

LAN architecture based on work groups:
- Each Run-II work group's computing resources reside within its own LAN

General philosophies on network design:
- "Over-provisioning" capacity reduces problems
- Homogeneity & simplicity help management & operations

Technical directions:
- Capacious switch fabric w/ high port density
- 10 GE backbone links; 1000B-T for host systems

FNAL Networks: A Work Group Model

[Network diagram: work groups (CDF, D0, CMS, MINOS, MiniBooNE, SDSS, Minos-Soudan) attached to a core network, with on-site and off-site paths]

- Core network: fast routing & switching of workgroup LANs, each with a 1000 Mb/s link into the core network
- Workgroup internal LANs: switched 10/100 Mb/s desktop links & 100/1000 Mb/s server links
- Run-II work groups: 10GE & (n x 1GE) backbone links; 1000 Mb/s server & farms support; switched 100 Mb/s desktop support
- Production WANs: OC12 link (622 Mb/s) off site
- R&D WANs: StarLight dark fiber with 10GE & (2 x 1GE) channels; 10 GE links to CDF, CMS, and (soon) D0; intended use: R&D

CDF Run-II Network

D0 Run-II Network

FY05 Planned LAN Upgrades

Core network upgrades:
- 10 Gb/s backbone links, including border router
- Wireless LAN upgrade to b (54 Mb/s)

CDF Run-II network:
- CAF expansion: 160 workers; 20 servers; 10GE uplink
- Off-line LAN upgrade: 10 GE uplink; GE for desktops

D0 Run-II network:
- CAS switch fabric upgrade, incl. additional 1000B-T ports
- 10 GE backbone, incl. HDCF & StarLight
- Farms expansions: 280 CAB & 80 Recon nodes
- On-line switch fabric upgrade & 1000B-T ports

Current WAN Status

ESnet-funded off-site link carries production traffic:
- Currently OC12 (622 Mb/s)
- CMS robust data challenge: 10,000 Mb/s

FNAL-funded StarLight dark fiber:
- Initial configuration: one 10 GE & two 1 GE channels
- Theoretical capacity: more than thirty-three 10GE channels
- Intended uses:
  - WAN network R&D projects
  - Overflow link for production WAN traffic
  - Redundancy for the ESnet link
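The "more than thirty-three 10GE channels" figure is consistent with standard ITU-grid DWDM over a single fiber pair; a quick sanity check, using standard C-band edges and 100 GHz channel spacing as assumed values (not the parameters of the project's actual DWDM gear):

```python
# Rough DWDM channel count: how many wavelengths fit in the C-band.
# Band edges and spacing below are standard ITU-grid values, used only
# to sanity-check the ">33 channels" theoretical-capacity figure.

C_BAND_GHZ = (191_560, 195_940)   # approximate ITU C-band edges, in GHz
SPACING_GHZ = 100                 # common DWDM channel spacing

channels = (C_BAND_GHZ[1] - C_BAND_GHZ[0]) // SPACING_GHZ
print(channels)  # 43 -- comfortably more than 33, at 10 Gb/s per channel
```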

ESnet Off-Site Link Utilization

Current link is "stressed":
- Carries all production traffic
- Only an OC12 (622 Mb/s)
- Traffic doubles every year

[Utilization graphs: outbound (blue = average, magenta = peak) and inbound (green = average, dark green = peak)]

Traffic is mostly Run-II data:
- CMS traffic beginning to appear

ESnet upgrade of the existing link is problematic.
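Annual doubling saturates a fixed-capacity link quickly; a back-of-envelope projection makes the point (the starting average rate below is an illustrative assumption, not a measured FNAL figure):

```python
# Back-of-envelope projection of when a link saturates, assuming
# traffic doubles every year (i.e., exponential growth).
import math

def years_until_saturation(current_mbps: float, capacity_mbps: float,
                           doubling_time_years: float = 1.0) -> float:
    """Years until average traffic grows to reach link capacity."""
    return doubling_time_years * math.log2(capacity_mbps / current_mbps)

# Hypothetical figures: ~150 Mb/s average today on a 622 Mb/s OC12 link
print(round(years_until_saturation(150, 622), 1))  # -> 2.1 (years)
```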

ESnet Networking Directions

Three-tiered network architecture:
- Production (what we have today)
- High-impact (very large-scale data movement)
- Research (next-generation networking & apps)

Cost-sharing model for site connectivity:
- Labs expected to contribute to site link upgrades

Metropolitan Area Network (MAN) initiative:
- Metro fiber rings supporting multiple Labs
- ESnet proposing MANs for the Bay Area & Chicago
- Bay Area MAN partially funded in FY04
- Chicago MAN not funded in FY04

ESnet Chicago MAN proposal

FNAL StarLight Dark Fiber Project

Dark fiber to StarLight is in place:
- StarLight = international optical network exchange point
- Link optics w/ DWDM gear coming soon
  - Capable of multiple λs; each λ can carry 2-10 Gb/s
  - But expensive: O($100k) per 10GE λ
- Initially, one 10GE & one 2 GE λ
- Currently 1 Gb/s until DWDM turn-on
- Separate network infrastructure from the production LAN path

Off-Site Link Upgrade Planning

Primary emphasis is to get the ESnet MAN deployed:
- Would provide both production & high-impact paths
- Pursuing fiber availability options for the missing segment
- In the end, the MAN needs DOE approval & funding

Alternatives involve using the StarLight link for production traffic:
- Carrying overflow traffic would be one possible mode
- Keeping ESnet as our service provider remains the long-term goal
- Want to avoid being in the production WAN business

WAN Network R&D Projects

Primary use of the StarLight link is intended to be R&D.

Practical benefits:
- Environment for collaborators to develop & refine their applications for high-rate data movement
- Non-production (< five 9s) facility for testing & introducing advanced network technologies
- Can be utilized to support production network facilities (redundancy; overflow traffic)

Research benefits:
- Develop services & tools for the HEP research environment
- LambdaStation: a 2-3 year SciDAC project to facilitate selective path forwarding across advanced WAN research networks
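At its core, selective path forwarding means classifying flows and steering designated high-rate ones onto the alternate path while everything else stays on the production link. A minimal sketch of that decision logic (the rule set, field names, and addresses here are hypothetical illustrations, not LambdaStation's actual API):

```python
# Toy flow classifier: steer designated high-rate flows onto an
# alternate (StarLight-style) R&D path, all other traffic onto the
# default production path. Rules and networks are illustrative only.
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    src_net: str   # e.g. a site prefix -- illustrative value
    dst_net: str   # destination prefix
    dst_port: int  # destination port

# Hypothetical policy: (destination network, port) pairs approved
# for the alternate path by prior arrangement with the remote site.
ALTERNATE_PATH_RULES = {
    ("192.0.2.0/24", 1094),   # example collaborator site and data port
}

def select_path(flow: Flow) -> str:
    """Return which WAN path a given flow should take."""
    if (flow.dst_net, flow.dst_port) in ALTERNATE_PATH_RULES:
        return "starlight"    # alternate, high-impact path
    return "production"       # default path via ESnet

print(select_path(Flow("198.51.100.0/24", "192.0.2.0/24", 1094)))  # starlight
print(select_path(Flow("198.51.100.0/24", "203.0.113.0/24", 80)))  # production
```

In practice the steering step would program routers or hosts (e.g. policy routing) rather than return a string, but the classify-then-forward structure is the same.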

StarLight Link R&D Projects

Build it and they will come...

Project             | Local Affiliation | Collaborating Site(s) | Status        | BW Needs      | Usage Pattern
--------------------|-------------------|-----------------------|---------------|---------------|--------------
UKlight             | CDF               | UCL                   | Active        | 4 Gb/s        | periodically
CMS/CERN            | CMS (OSG?)        | CERN                  | Active        | Up to 10 Gb/s | periodically
CMS/LambdaStation   | CMS               | Cal Tech/UCSD         | Start 10/2004 | Up to 10 Gb/s | sustained
Toronto HEP         | CDF               | U. Toronto            | Inactive      | {undefined}   |
SC2004 BW Challenge | CD                | SC2004                | 11/06/2004    | Up to 10 Gb/s | One-time
Vanderbilt          | OSG(?)            | Vanderbilt            | Pending       | {undefined}   |
WestGrid (Ca)       | D0                | Simon Fraser U        | Pending       | 4 Gb/s        | sustained
LHC Tier 1          | CMS               | BNL                   | Pending       | < 622 Mb/s    | {undefined}
UltraLight          | CMS               | Cal Tech, UF, others  | Pending       | Up to 10 Gb/s | {undefined}

Network Support at FNAL: Effort

Network FTE level has been static for the past decade:
- Greater complexity & added services = increased demands on personnel resources
- Continually working toward more efficient, automated procedures

~1.5 FTE to upgrade & service each Run-II network per year:
- Also a relatively static number for the past decade
- Core network, services, WAN, security, etc. efforts not included
- Each experiment has a liaison to coordinate network support

Emerging or significantly growing areas of effort:
- Advanced WAN (StarLight) infrastructure & operations support
- Network R&D (LambdaStation)
- Computer security demands on network support