From the Transatlantic Networking Workshop to the DAM Jamboree to the LHCOPN Meeting (Geneva-Amsterdam-Barcelona) David Foster CERN-IT.


LHC Network Activities

LHCOPN is designed to ensure T0-T1 data transport and to provide capacity for (most) T1-T1 needs.
– Closed group of parties: CERN + the T1s
– A bounded problem, with good estimates of needs

T2 connectivity uses general-purpose IP connectivity.
– T2-T1 and T2-T2 flows
– An open problem, with poor estimates of needs

We have entered a new era

A workshop on transatlantic connectivity was held in June 2010 at CERN.
– ~50 participants representing the major stakeholders in R&E networking: ESnet, Internet2, DANTE, NRENs, NSF, DOE, industry, major labs, etc.

The workshop revealed the following:
– Flows are already larger than foreseen at this point in the LHC program, even with lower luminosity.
– Some T2s are very large (not new). All US ATLAS and US CMS T2s have 10G capability.
– Some T1-T2 flows are quite large (several Gbps up to 10 Gbps).
– T2-T2 data flows are also starting to become significant.
– The vision is progressively moving away from purely hierarchical models towards peer-to-peer:
  – True for both CMS and ATLAS
  – For reasons of reduced latency and increased working efficiency
– Expectations of network capability are reaching unrealistic proportions without forward planning.

Problems and Opportunities

Networking technology and bandwidth are not the problem in themselves, but the flows are scaling up to occupy much of the existing infrastructure. This will not be a problem if:
1. You can clearly define what you need, now and over the next few years.
2. "You" (or your agency) can pay for it. If not, we had better make the best use of what we have.
3. You can integrate it into a system, where the end-site facilities and networks operate together in a consistent fashion.

1. Define what you need

The networking community needs a much better definition of real requirements.
– Excessive use of general-purpose networks will cause operators to take defensive action.
– International network architectures with sufficient reliability and capacity to cope with the expected traffic growth and flow patterns need to be designed and implemented. This is not at all trivial; it needs planning and time.
– The experiments must work with the network community to create a definition that will enable implementation of infrastructures supporting the T1-T2-T3 flow matrix.
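As an illustration of what a "definition of real requirements" can translate into (a sketch not from the talk; the dataset size, window, and efficiency figures are hypothetical), a statement like "replicate this dataset between two sites within a given window" converts directly into a sustained link-rate figure that network architects can plan against:

```python
def required_gbps(dataset_tb: float, window_hours: float, efficiency: float = 0.7) -> float:
    """Sustained rate (Gbit/s) needed to move dataset_tb terabytes within
    window_hours, assuming the transfer achieves `efficiency` of the nominal
    link capacity (protocol overhead, competing traffic, retries)."""
    bits = dataset_tb * 8e12          # TB -> bits (1 TB = 10^12 bytes)
    seconds = window_hours * 3600.0
    return bits / seconds / 1e9 / efficiency

# Hypothetical example: replicate a 100 TB dataset T1 -> T2 within 24 hours.
rate = required_gbps(100, 24)   # ~13 Gbit/s sustained, i.e. more than one 10G link
```

Aggregating such figures over the full T1-T2-T3 flow matrix is what turns experiment requirements into a capacity plan.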

2. Pay for it

Given an agreed architectural plan with capacity and other objectives (e.g. resilience and adaptability to shifting flows):
– Optimal solutions in terms of cost can be found, and the limits of what could be afforded can be understood.
– The funding bodies can plan for the resulting infrastructure, within feasible cost bounds.
– The sites can budget to connect.

This requires conviction and excellent justification of the costs from the experiments,
– in line with the justifications given for other parts of the LHC program,
and ultimately commitment from the funding bodies.

3. Integrate it into a system

Provisioning capacity and providing it to end sites is manageable:
– if you can define what you need,
– if you can show how your needs follow from a well-defined, comprehensive operational plan, with well-justified costs,
– and if you can pay for it.

Using it effectively is more of a problem:
– Site/campus/regional network issues
– End-system hardware issues
– End-system platform and interface issues (OS, drivers, NIC, etc.)
– End-system application issues
– The experiments' data movement and job placement software

Real end-to-end awareness is needed.

We Need 3 Core Interrelated Activities

Data Architecture Vision
– Maybe dynamic "pull" in the brave new world, giving increasing location independence; maybe new tradeoffs between cache/disk storage and networking.
– Encompassing all substantial data movement operations, with clear throughput or time-to-complete goals.
– Leading to clear (and effective) capacity planning.
– A system-wide view enabling key efficiencies, e.g. strategic data placement with some form of Content Distribution Network (not a new concept).

International Network Architecture
– Existing general-purpose IP networks will only take us so far.
– New infrastructures take time (years) to put in place.

End-to-end everything
– More intelligent frameworks need information on infrastructure components in order to manage them.
– Monitoring
– Automated operations
– Consistent site and network resource use and/or performance optimization
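The storage-versus-networking tradeoff mentioned under the Data Architecture Vision can be sketched as a simple break-even comparison (an illustration not from the talk; all cost figures are hypothetical): holding a local replica trades disk cost against repeated WAN transfers.

```python
def cheaper_to_cache(dataset_tb: float, accesses_per_month: float,
                     disk_cost_per_tb_month: float = 10.0,  # hypothetical cost of local disk
                     wan_cost_per_tb: float = 2.0) -> bool:  # hypothetical cost per TB moved
    """True if holding a local replica beats re-fetching over the WAN each access."""
    cache_cost = dataset_tb * disk_cost_per_tb_month
    network_cost = dataset_tb * wan_cost_per_tb * accesses_per_month
    return network_cost > cache_cost

# A 50 TB dataset read 10 times a month: repeated WAN pulls cost more than
# a month of local disk, so strategic placement (a CDN-style cache) wins.
caching_wins = cheaper_to_cache(50, 10)
```

The same comparison, evaluated per dataset and per site, is the kind of system-wide view that enables strategic data placement.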

LHC Networking Activities: First Step

A small planning group, engaging a small number of people, to come up with an agreed plan for the experiments' network bandwidth requirements:
– MUST be led by the experiments (Kors Bos will organise this),
– so that network architects can translate it into core infrastructure designs,
– so that funding bodies can understand and plan for it, based on a convincing, comprehensive operational model,
– and so that sites can understand how to connect to these infrastructures, and any policy implications.

A design group that will understand the architectural and organisational issues around an additional international core infrastructure:
– MUST be led by the network experts.
– The LHCOPN working group is a good start on key stakeholders.
– Needs to involve and inform T2/T3 stakeholders in some managed way.
– A "strawman" has been proposed by Edoardo, based on multiple exchange points (L2 or L3?).

This should be done now to ensure that the infrastructures will be in place for 2013 data taking,
– in time for a "STEP12" exercise in 2012 to ensure readiness.

Must work closely with the other core activities.