Scientific Networking: The Cause of and Solution to All Problems
April 14th 2011 - Workshop on High Performance Applications of Cloud and Grid Tools
Jason Zurawski, Internet2 Research Liaison

Presentation transcript:

Scientific Networking: The Cause of and Solution to All Problems
April 14th 2011 - Workshop on High Performance Applications of Cloud and Grid Tools
Jason Zurawski, Research Liaison

And Now for Something Completely Different

Topics so far have covered the core design and operation of Grid/Cloud infrastructures:
– A fertile area for work
– Lots of advancement, driven by scientific needs (e.g. Physics, Biology, Climate)

The Achilles heel of Grid/Cloud computing is the infrastructure that links the components:
– Distributed CPU, disk, and users
– Earlier efforts aimed to improve overall performance (e.g. Logistical Networking)

The role of networking is "under the hood": it should enable science, but stay out of the way. There has been lots of advancement here as well; two efforts are highlighted today:
– DYNES – dynamic networking to end sites
– LHCONE – dedicated resources for data movement

DYNES

Data movement to support science is:
– Increasing in size (100s of TBs in the LHC world)
– Becoming more frequent (multiple times per day)
– Reaching more consumers (VO size stands to increase)
– Time sensitive (data may grow "stale" if not processed immediately)

Traditional networking:
– R&E or commodity "IP" connectivity is shared with other users
– Supporting large, sporadic flows is challenging for the engineers and frustrating for the scientists

DYNES

Solution:
– Dedicated bandwidth (over the entire end-to-end path) to move scientific data
– Invoke it "on demand" instead of relying on permanent capacity (cost, complexity)
– Exists in harmony with traditional IP networking
– Connects to the facilities that scientists need to access
– Integrates with data movement applications: invoke the connectivity when they need it, based on network conditions

Prior work – "Dynamic Circuit" networking:
– Creation of Layer 2 point-to-point VLANs
– Transit across the campus, regional, and backbone R&E networks
– Software to manage the scheduling and negotiation of resources (a conceptual sketch of such a request follows below)
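To make the scheduling idea concrete, here is a minimal sketch of what a dynamic circuit reservation could look like from a client's point of view. This is not the actual OSCARS/IDC interface; the class and field names are hypothetical and are chosen only to illustrate the parameters such a system has to negotiate: endpoints, a VLAN tag, guaranteed bandwidth, and a time window after which the resources are released.

# Hypothetical sketch of a Layer 2 circuit reservation request.
# The real OSCARS/IDC interface differs; all names here are illustrative.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class CircuitRequest:
    src_endpoint: str     # e.g. a campus switch port (a URN in a real IDC)
    dst_endpoint: str     # e.g. a remote Tier2 border port
    vlan_id: int          # Layer 2 VLAN tag to stitch end to end
    bandwidth_mbps: int   # capacity guaranteed for the transfer
    start: datetime       # when the circuit should come up
    end: datetime         # when the resources are released

    def overlaps(self, other: "CircuitRequest") -> bool:
        """True if two requests compete for the same time window (simplified)."""
        return self.start < other.end and other.start < self.end

# A transfer that needs 5 Gbps for two hours, starting in ten minutes.
request = CircuitRequest(
    src_endpoint="campus-A:eth1/1",
    dst_endpoint="tier2-B:xe-0/0/0",
    vlan_id=3100,
    bandwidth_mbps=5000,
    start=datetime.utcnow() + timedelta(minutes=10),
    end=datetime.utcnow() + timedelta(hours=2, minutes=10),
)
print(f"Requesting VLAN {request.vlan_id} at {request.bandwidth_mbps} Mb/s")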

DYNES

NSF-funded "cyber-instrument":
– Internet2 / Caltech / University of Michigan / Vanderbilt University

Provides equipment and software to extend the Internet2 ION service into campus and regional networks:
– Built using the OSCARS IDC software (based on work in the OGF NSI Working Group)
– perfSONAR monitoring (based on work in the OGF NM, NMC, and NML Working Groups)
– FDT (Fast Data Transfer) for data movement (see the sketch below)
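As an illustration of the data movement piece, FDT is typically driven from the command line as a Java application; the sketch below simply wraps that invocation in Python. The flags shown (-c for the destination host, -d for the destination directory, -P for parallel TCP streams) reflect common FDT usage but should be checked against the FDT release actually shipped with DYNES; host names and paths are placeholders.

# Minimal sketch: launching an FDT client transfer from Python.
# Flag names reflect common FDT usage and should be verified locally.
import subprocess

def fdt_transfer(dest_host: str, dest_dir: str, files: list[str],
                 streams: int = 4) -> int:
    """Run an FDT client transfer and return its exit code."""
    cmd = [
        "java", "-jar", "fdt.jar",
        "-c", dest_host,      # connect to an FDT server on the far end
        "-d", dest_dir,       # destination directory on that server
        "-P", str(streams),   # number of parallel TCP streams
        *files,               # local files to ship
    ]
    return subprocess.run(cmd).returncode

# Example (hypothetical hosts and paths): push two files to a remote Tier2.
# fdt_transfer("fdt.tier2.example.edu", "/data/incoming",
#              ["/data/run123/a.root", "/data/run123/b.root"])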

DYNES

Deployment targets:
– 25 end sites
– 8 regional networks
– Collaboration with like-minded efforts (DoE ESCPS)
– Plans to consider provisional applications (send a note to the project mailing list if you are interested)

Supporting all science – early focus on Physics (LHC) sites

DYNES Infrastructure Overview (diagram)

DYNES Standard Equipment

Inter-domain Controller (IDC) server and software:
– The IDC creates virtual LANs (VLANs) dynamically between the FDT server, the local campus, and the wide area network (see the sketch below)
– Dell R410 (1U) server

Fast Data Transfer (FDT) server:
– Connects to the disk array via the SAS controller and runs the FDT software
– Dell R510 (2U) server

DYNES Ethernet switch options (emerging):
– Dell PC6248 (48 1GE ports, 4 10GE-capable ports: SFP+, CX4, or optical)
– Dell PC8024F (24 10GE SFP+ ports, 4 "combo" ports supporting CX4 or optical)
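For orientation only: once the IDC has provisioned a circuit, the FDT host ultimately sees it as a tagged VLAN on its campus-facing interface. The sketch below shows one hypothetical way to bring such a sub-interface up by hand with standard iproute2 commands; in DYNES this step is driven by the IDC software, and the interface name, VLAN ID, and address are placeholders.

# Hypothetical sketch: bringing up a tagged VLAN sub-interface on the FDT host
# with standard iproute2 commands (requires root). In DYNES the IDC drives
# this provisioning; names and addresses here are placeholders.
import subprocess

def add_vlan_interface(parent: str, vlan_id: int, cidr: str) -> None:
    """Create <parent>.<vlan_id>, assign an address, and bring it up."""
    sub_if = f"{parent}.{vlan_id}"
    for cmd in (
        ["ip", "link", "add", "link", parent, "name", sub_if,
         "type", "vlan", "id", str(vlan_id)],
        ["ip", "addr", "add", cidr, "dev", sub_if],
        ["ip", "link", "set", "dev", sub_if, "up"],
    ):
        subprocess.run(cmd, check=True)

# Example: tag VLAN 3100 on eth0 for a provisioned circuit.
# add_vlan_interface("eth0", 3100, "10.10.0.2/24")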

DYNES Data Flow Overview (diagram)

DYNES Current Status

Four project phases:
– Phase 1: Planning (completed in Feb 2011)
– Phase 2: Initial Deployment (Feb 2011 through July 2011)
– Phase 3: Full Deployment (July 2011 through Sept 2011)
– Phase 4: Testing and Evaluation (Oct 2011 through August 2012)

A draft DYNES Program Plan document is available with additional details on the project plan and schedule. Questions can be sent to the project mailing list.

Inductive Step

Campus connectivity is just one part of a solution:
– The campus has been the traditional bottleneck
– A traffic engineering solution like DYNES will connect sites on a national level in a point-to-point fashion
– What about transit to non-DYNES sites? What about other countries?

Resources on a national and international level:
– Investment in networking is still strong
– Backbone capacity upgrades coupled with the availability of new sites (U.S. UCAN)

Inductive Step

Scientific networking needs to be pervasive:
– Available where the science is, i.e. "everywhere"
– Linking the resources that require this capability: clusters and supercomputers, data stores, and scientific instruments (telescopes, colliders)

LHC community:
– Pro-active in terms of network preparedness
– Designing next-generation connectivity options to meet the needs of the VO as a whole
– Sensitive to funding, but always wanting the best for the community to support scientific activity for the next 10+ years

LHC Open Network Environment (LHCONE)

The goal of LHCONE is to provide a collection of access locations that are effectively entry points into a network that is private to the LHC. Access locations are expected to be provided in enough countries and regions, and at suitable locations, to make access easy:
– In the US, LHCONE access locations might be co-located with existing R&E exchange points and/or national backbone nodes
– A similar situation exists in Europe and Southeast Asia

LHCONE – North America

Proposed installation of two nodes to provide immediate service:
– Chicago
– New York

Interconnected via the Internet2 IP network:
– Generally has 9 Gbps of capacity available for initial best-effort traffic
– Potential for a dedicated backbone circuit providing 10G of capacity just for LHCONE (or shared with other scientific VOs)
– This bandwidth will certainly grow as the Internet2 network upgrades its backbone links to 100 Gbps

Sample Architecture and Connectivity (diagram)

LHCONE Access Methodology

Designed to be "come as you are":
– Network connectivity is expensive and budgets are tight
– Funding opportunities can accommodate increased connectivity in the future
– The short-term goal is to offer several methods

There will be three primary methods of connection to the LHCONE-NA architecture:
– Direct connection to LHCONE-NA nodes
– Layer 2 connectivity via the Internet2 network (e.g. ION)
– Layer 3 connectivity via the Internet2 network

Direct Connection to LHCONE-NA Nodes

Normally an expensive option, but one that provides the greatest access.

Physical connection from the end site to a connection point:
– Initially Chicago and New York, others over time
– 10GE anticipated

Mimics the current Tier1-to-Tier2 connectivity via static circuits.

Layer 2 Connectivity via Internet2 Network

Two basic approaches discussed:
– Static connectivity into Internet2 at some other location (i.e. not in Chicago or New York)
  Facilitates end sites with this network option already in place
– Dynamic connectivity via the ION service
  An inexpensive way to manage traffic through existing network connections
  Takes advantage of the newly deployed DYNES infrastructure

Layer 3 Connectivity via Internet2 Network

An option that will appeal to many Tier3 facilities without dedicated connections for science traffic.

Cost effective:
– Additional hardware is not needed
– In most cases, R&E IP access is sufficient (e.g. 10G or less)

Uses the R&E connectivity of the institution:
– Best effort in terms of bandwidth
– Harder to manage traffic flows

Conclusions/Next Steps

DYNES is in deployment; demonstrations at major conferences are expected (SC11).

LHCONE demonstration in Summer 2011:
– LHCONE NA meeting scheduled for May 2011 in Washington DC (participation welcome)

Future work:
– LHCONE is just the beginning
– Opportunity to provide a nationwide "science focused" infrastructure for all VOs:
  Dedicated bandwidth
  Cutting-edge technology (OpenFlow, etc.)
  Integration with international efforts
  Open access and open standards

Scientific Networking: The Cause of and Solution to All Problems
April 18th 2011, Workshop on High Performance Applications of Cloud and Grid Tools
Jason Zurawski, Research Liaison
For more information, visit: