Network Working Group Report
Shawn P. McKee, University of Michigan
UltraLight Meeting, NSF, January 26, 2005

UltraLight Network Report
 Overview of UltraLight Network
   - Backbone
   - UltraLight Sites
   - UltraLight Partners
 Network Engineering
   - Effective Protocols for “Best Effort”
   - MPLS/QoS for “Sizeable Pipes”
   - Optical Path Management Plans for “Optical Circuits”
 Monitoring
 Disk-to-Disk Transfers
 Milestones
 Status and Summary
(Technical report: MS Word; for PDF use 2547 rather than 2546)

The UltraLight Network Engineering Team
 S. McKee (UM, Team Leader)
 S. Ravot (LHCNet)
 R. Summerhill (Abilene/HOPI)
 D. Pokorney (FLR)
 S. Gerstenberger (MiLR)
 C. Griffin (UF)
 J. Ibarra (WHREN, AW)
 C. Guok (ESnet)
 L. Cottrell (SLAC)
 D. Petravick (FNAL)
 R. Hockett (UM)
 E. Rubi (FIU)

UltraLight Backbone
UltraLight has a non-standard core network, with dynamic links and varying bandwidth interconnecting our nodes: an optical, hybrid global network. The core of UltraLight will dynamically evolve as a function of available resources on other backbones such as NLR, HOPI, Abilene or ESnet.
The main resources for UltraLight:
 LHCnet (IP, L2VPN, CCC)
 Abilene (IP, L2VPN)
 ESnet (IP, L2VPN)
 Cisco NLR wave (Ethernet)
 HOPI NLR waves (Ethernet; provisioned on demand)
UltraLight nodes: Caltech, SLAC, FNAL, UF, UM, StarLight, CENIC PoP at LA, CERN

UltraLight Network Infrastructure Elements
 Trans-US 10G λs riding on NLR, plus CENIC, FLR, MiLR
 LA – CHI (2 waves): HOPI and Cisco research waves
 CHI – JAX (Florida Lambda Rail)
 Dark fiber Caltech – L.A.: 2 x 10G waves (one to WAN In Lab); 10G wave L.A. to Sunnyvale for the UltraScience Net connection
 Dark fiber with 10G wave: StarLight – Fermilab
 Dedicated wave: StarLight – Michigan Light Rail
 SLAC: ESnet MAN to provide 2 x 10G links (from July): one for production, one for research
 Partner with advanced research & production networks: LHCNet (StarLight – CERN), Abilene/HOPI, ESnet, NetherLight, GLIF, UKLight, CA*net4
 Intercontinental extensions: Brazil (CHEPREO/WHREN), GLORIAD, Tokyo, AARNet, Taiwan, China

UltraLight Sites
UltraLight currently has 10 participating core sites (shown alphabetically). The table provides a quick summary of the near-term connectivity plans. Details and diagrams for each site and its regional networks are shown in the technical report (see URL on the second slide).

Site     Date       Type        Storage    Out of Band
BNL      March      OC48        TBD        TBD
Caltech  January    10 GE       1 TB May   Y
CERN     January    OC192       TBD        Y
FIU      January    OC12        TBD        Y
FNAL     March      10 GE       TBD        TBD
I2       March      MPLS L2VPN  TBD        TBD
MIT      May        OC48        TBD        TBD
SLAC     September  10 GE       TBD        TBD
UF       February   10 GE       1 TB May   Y
UM       March      10 GE       1 TB May   Y

International Partners
One of the UltraLight program’s strengths is the large number of important international partners we have:
 AMPATH
 AARNet
 Brazil/UERJ
 CA*net4
 GLORIAD
 IEEAF
 Korea/KOREN
 NetherLight
 UKLight
as well as collaborators from China, Japan and Taiwan. UltraLight is well positioned to develop and coordinate global advances to networks for LHC physics.

Workplan / Phased Deployment
UltraLight envisions a 4-year program to deliver a new, high-performance, network-integrated infrastructure:
 Phase I will last 12 months and focus on deploying the initial network infrastructure and bringing up first services. (Note: we are well on our way; the network is almost up and the first services are being deployed.)
 Phase II will last 18 months and concentrate on implementing all the needed services and extending the infrastructure to additional sites. (We will be entering this phase starting approximately this summer.)
 Phase III will complete UltraLight and last 18 months. The focus will be on a transition to production in support of LHC physics and eVLBI astronomy.

UltraLight Network Engineering
 GOAL: Determine an effective mix of bandwidth-management techniques for this application space, particularly:
   - Best-effort and “scavenger” using “effective” protocols
   - MPLS with QoS-enabled packet switching
   - Dedicated paths arranged with TL1 commands, GMPLS
 PLAN: Develop and test the most cost-effective integrated combination of network technologies on our unique testbed:
   1. Exercise UltraLight applications on NLR, Abilene and campus networks, as well as LHCNet and our international partners
      - Progressively enhance Abilene with QoS support to protect production traffic
      - Incorporate emerging NLR and RON-based lightpath and lambda facilities
   2. Deploy and systematically study ultrascale protocol stacks (such as FAST), addressing issues of performance and fairness
   3. Use MPLS/QoS and other forms of bandwidth management, and adjustments of optical paths, to optimize end-to-end performance among a set of virtualized disk servers

UltraLight: Effective Protocols
The protocols used to reliably move data are a critical component of physics “end-to-end” use of the network. TCP is the most widely used protocol for reliable data transport, but it becomes increasingly ineffective as the bandwidth-delay product of a network grows. UltraLight will explore extensions to TCP (HSTCP, Westwood+, HTCP, FAST) designed to maintain fair sharing of networks while allowing efficient, effective use of them. UltraLight plans to identify the most effective fair protocol and implement it in support of our “Best Effort” network components.
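As a concrete illustration (added here, not part of the original slides): on a reasonably recent Linux host the congestion-control algorithm can be selected per socket, which is one way such protocol comparisons are exercised. A minimal sketch, assuming a Linux kernel with the named modules available (the endpoint is hypothetical):

```python
import socket

# TCP_CONGESTION is available on Linux (Python 3.6+); the algorithm named here
# must appear in /proc/sys/net/ipv4/tcp_available_congestion_control.
ALGO = b"htcp"  # illustrative choice; could be b"bic", b"westwood", etc.

def connect_with_cc(host: str, port: int, algo: bytes = ALGO) -> socket.socket:
    """Open a TCP connection that uses the requested congestion-control algorithm."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, algo)
    s.connect((host, port))
    return s

if __name__ == "__main__":
    s = connect_with_cc("example.org", 80)   # hypothetical test endpoint
    # Read back the algorithm actually in use for this socket.
    print(s.getsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, 16))
    s.close()
```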

Selecting the Data Transport Protocol
Tests between CERN and Caltech:
 Capacity: OC-192; 264 ms round-trip latency; 1 flow
 Sending station: Tyan S2882 motherboard, 2 x Opteron 2.4 GHz, 2 GB DDR
 Receiving station (CERN OpenLab): HP rx4640, 4 x 1.5 GHz Itanium-2, zx1 chipset, 8 GB memory
 Network adapter: S2io 10 GE
[Chart: single-flow throughput achieved with Linux TCP, Linux Westwood+, Linux BIC TCP and FAST; measured rates of 3.0, 4.1, 5.0 and 7.3 Gbps]
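To make the challenge concrete (an added worked example, not from the slides): the bandwidth-delay product of such a path sets the TCP window needed to keep the pipe full, and the slow loss recovery of standard Reno-style TCP explains why single streams stall well below link capacity. A rough calculation, assuming a 10 Gbps-class path and 1500-byte packets:

```python
# Rough bandwidth-delay arithmetic for a ~10 Gbps, 264 ms RTT path (CERN-Caltech).
link_gbps = 10.0        # nominal OC-192-class capacity, assumed for illustration
rtt_s = 0.264

bdp_bytes = (link_gbps * 1e9 / 8) * rtt_s
print(f"Window needed to fill the pipe: {bdp_bytes / 1e6:.0f} MB")   # ~330 MB

# Standard TCP halves its window on a single loss and then grows it back by
# roughly one segment per RTT, so recovery takes on the order of:
mss = 1500
packets_in_window = bdp_bytes / mss
recovery_s = (packets_in_window / 2) * rtt_s
print(f"Reno-style recovery after one loss: ~{recovery_s / 3600:.1f} hours")
```

The hours-long recovery time is why protocol stacks such as FAST, BIC or HTCP are needed on paths like this one.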

MPLS/QoS for UltraLight
UltraLight plans to explore the full range of end-to-end connections across the network, from best-effort, packet-switched paths through dedicated end-to-end light-paths. MPLS paths with QoS attributes fill a middle ground in this network space and allow fine-grained allocation of virtual pipes, sized to the needs of the application or user. UltraLight, in conjunction with the DoE/MICS-funded TeraPaths effort, is working toward extensible solutions for implementing such capabilities in next-generation networks.
[Figure: TeraPaths initial QoS test at BNL]
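As an added illustration (not prescribed by the slides) of how application traffic could be steered into such QoS classes: in a DiffServ-style deployment, end hosts or site edge devices mark packets with a DSCP value that the network then maps to an LSP or queue. A minimal per-socket marking sketch, assuming the site network honors the chosen code point (EF is used purely as an example):

```python
import socket

# DSCP "Expedited Forwarding" (46) occupies the upper six bits of the IP TOS byte.
DSCP_EF = 46
TOS_EF = DSCP_EF << 2   # shift into the TOS field; the low two bits are ECN

def marked_socket(tos: int = TOS_EF) -> socket.socket:
    """Create a TCP socket whose outgoing packets carry the given TOS/DSCP marking."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)
    return s

if __name__ == "__main__":
    s = marked_socket()
    s.connect(("transfer.example.org", 5000))   # hypothetical data-mover endpoint
    s.sendall(b"payload the network should treat as premium traffic")
    s.close()
```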

Optical Path Plans
Emerging “light path” technologies are becoming popular in the Grid community:
 They can extend and augment existing grid computing infrastructures, currently focused on CPU/storage, to include the network as an integral Grid component.
 They appear to be the most effective way to offer network resource provisioning on demand between end-systems.
A major capability we wish to develop in UltraLight network nodes is the ability to dynamically switch optical paths across the node, bypassing electronic equipment via a fiber cross-connect. The ability to switch dynamically provides additional functionality and also models the more abstract case where switching is done between colors (ITU-grid lambdas).
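For flavor only (this is not the UltraLight provisioning code, and the exact command set is vendor-specific): a TL1-managed optical cross-connect is typically driven by short ASCII commands over a TCP management session. A hypothetical sketch, with the switch address, credentials, verbs and port AIDs invented for illustration; real syntax must follow the switch documentation:

```python
import socket

SWITCH = ("oxc.example.net", 3083)   # hypothetical TL1 management address/port

def tl1(sock: socket.socket, command: str) -> str:
    """Send one TL1 command and return the (unparsed) response block."""
    sock.sendall(command.encode("ascii"))
    return sock.recv(4096).decode("ascii", errors="replace")

def set_cross_connect(src_port: str, dst_port: str) -> None:
    """Log in, create a fiber cross-connect between two ports, then log out."""
    with socket.create_connection(SWITCH, timeout=10) as s:
        # Generic TL1 format: VERB-MODIFIER:[TID]:[AID]:[CTAG]::[params];
        print(tl1(s, "ACT-USER::admin:100::secret;"))            # illustrative login
        print(tl1(s, f"ENT-CRS-FIBER::{src_port},{dst_port}:101;"))  # illustrative verb
        print(tl1(s, "CANC-USER::admin:102;"))

if __name__ == "__main__":
    set_cross_connect("1-1-1", "1-2-3")   # example port AIDs
```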

Monitoring for UltraLight
Network monitoring is essential for UltraLight. We need to understand our network infrastructure and track its performance, both historically and in real time, to enable the network as a managed, robust component of our overall infrastructure. There are two ongoing efforts we intend to leverage to help provide the monitoring capability required:
 IEPM
 MonALISA
Both efforts have already made significant progress within UltraLight.

Shown here is one example of the progress that has been made in monitoring for UltraLight: MonALISA has been augmented with a real-time module that provides a complete picture of the connectivity graphs and the delay on each segment for routers, networks and autonomous systems (AS).
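As a trivial stand-in for this kind of measurement (an illustrative addition, not MonALISA itself): the essence of such a module is to repeatedly probe a set of peers and feed latency/reachability results into a topology view. A minimal latency sampler, with the peer list invented for illustration:

```python
import socket
import time
from typing import Optional

# Hypothetical monitoring targets; the real system probes routers and peer sites.
PEERS = [("gateway.example.edu", 80), ("peer.example.org", 80)]

def tcp_rtt(host: str, port: int, timeout: float = 3.0) -> Optional[float]:
    """Approximate RTT as the time to complete a TCP handshake (None on failure)."""
    start = time.perf_counter()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return (time.perf_counter() - start) * 1000.0   # milliseconds
    except OSError:
        return None

if __name__ == "__main__":
    while True:
        for host, port in PEERS:
            rtt = tcp_rtt(host, port)
            status = f"{rtt:.1f} ms" if rtt is not None else "unreachable"
            print(f"{time.strftime('%H:%M:%S')}  {host:<24} {status}")
        time.sleep(60)   # one sample per minute per peer
```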

End-Systems Performance
Latest disk-to-disk over a 10 Gbps WAN: 4.3 Gbits/sec (536 MB/sec), 8 TCP streams from CERN to Caltech; windows, 1 TB file.
Server platform: quad AMD Opteron processors with 3 AMD-8131 chipsets providing 64-bit/133 MHz PCI-X slots; 3 Supermicro Marvell SATA disk controllers + 24 SATA 7200 rpm disks.
 Local disk I/O: 9.6 Gbits/sec (1.2 GBytes/sec read/write, with <20% CPU utilization)
 10 GE NIC: 7.5 Gbits/sec (memory-to-memory, with 52% CPU utilization)
 2 x 10 GE NICs (802.3ad link aggregation): 11.1 Gbits/sec (memory-to-memory)
 PCI-Express, TCP offload engines
A 5U server with 24 disks (9 TB) and a 10 GbE NIC, capable of 700 MBytes/sec in the LAN and 330 MBytes/sec in the WAN, is about $25k today. A small server with a few disks (1.2 TB), capable of 120 MBytes/sec (matching a GbE port), is about $4k.
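A quick sanity check of these unit conversions (added here, not in the original slide) shows the quoted Gbits/sec and MBytes/sec figures are consistent, and how far the 8-stream WAN result still is from the local disk I/O ceiling:

```python
def gbps_to_mbytes(gbps: float) -> float:
    """Convert Gbits/sec to MBytes/sec (decimal units, as used on the slide)."""
    return gbps * 1e9 / 8 / 1e6

for label, gbps in [("WAN disk-to-disk (8 streams)", 4.3),
                    ("Local disk I/O", 9.6),
                    ("Single 10 GE NIC, memory-to-memory", 7.5),
                    ("2 x 10 GE, 802.3ad aggregation", 11.1)]:
    print(f"{label:<38} {gbps:>5.1f} Gb/s = {gbps_to_mbytes(gbps):6.0f} MB/s")

# 4.3 Gb/s is ~538 MB/s, matching the quoted 536 MB/s; the 1 GByte/s goal
# therefore needs roughly 8 Gb/s sustained end-to-end, close to the
# memory-to-memory limit of a single 10 GE NIC.
```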

Network Connect Milestones
 January 2005: NLR Cisco wave connecting LA to CHI
   - BGP peering with Abilene & ESnet at CHI
   - MPLS upgrade at CHI
   - Two virtual fibers connecting CERN to Caltech
   - End-systems at Chicago in UltraLight domain
   - Layer 2 connection to Abilene at Los Angeles
   - Extension of UltraLight network to CERN
 February 2005: Connection to HOPI at 10 GE; connection to FLR at 10 GE
 March 2005: 10 GE link to UM via MiLR; connection to BNL at OC48
 April 2005: “Manual” provisioning across UltraLight
 May 2005: Connection to MIT at OC48
 August 2005: Semi-autonomous provisioning process
 September 2005: Connection to SLAC at 10 GE; NLR Cisco wave from exclusive to scheduled use

UltraLight Near-term Milestones
Protocols:
 Integration of FAST TCP (v.1) (July 2005)
 New MPLS and optical path-based provisioning (Aug 2005)
 New TCP implementations testing (Aug-Sep 2005)
Optical Switching:
 Commission optical switches at the LA CENIC/NLR PoP and at CERN (May 2005)
 Develop dynamic server connections on a path for TB transactions (Sep 2005)
Storage and Application Services:
 Evaluate/optimize drivers/parameters for I/O filesystems (April 2005)
 Evaluate/optimize drivers/parameters for 10 GbE server NICs (June 2005)
 Select hardware for 10 GE NICs, buses and RAID controllers (June-Sep 2005)
 Breaking the 1 GByte/s barrier (Sep 2005)
Monitoring and Simulation:
 Deployment of end-to-end monitoring framework (Aug 2005)
 Integration of tools/models to build a simulator for the network fabric (Dec 2005)
Agents:
 Start development of agents for resource scheduling (June 2005)
 Match scheduling allocations to usage policies (Sep 2005)
WanInLab:
 Connect Caltech WanInLab to testbed (June 2005)
 Procedure to move new protocol stacks into field trials (June 2005)

Summary: Network Progress
 For many years the wide-area network has been the bottleneck; this is no longer the case in many countries, making deployment of a data-intensive Grid infrastructure possible!
   - Recent Internet2 Land Speed Record (I2LSR) results show, for the first time, that the network can be truly transparent: throughputs are limited by the end-hosts.
   - The challenge has shifted from getting adequate bandwidth to deploying adequate infrastructure to make effective use of it!
 Some transport protocol issues still need to be resolved; however, there are many encouraging signs that practical solutions may now be in sight.
 1 GByte/sec disk-to-disk challenge. Today: 1 TB at 536 MB/sec from CERN to Caltech.
   - Still in early stages; expect substantial improvements.
 Next-generation network and Grid system: extend and augment existing grid computing infrastructures (currently focused on CPU/storage) to include the network as an integral component.

Conclusion: We’re Ready for the Next Step
 The network technical group has been hard at work implementing UltraLight.
 Significant progress has been made, allowing us to build upon these achievements.
 Our global collaborators are ready to work with us on achieving the UltraLight vision.
 We are ready to ramp up our efforts, and we need the additional resources of our second-year funding to keep our momentum.
We look forward to a busy, productive year working on UltraLight!