NSFCloud Chameleon: Phase 2 Networking

Presentation transcript:

NSFCloud Chameleon: Phase 2 Networking
Paul Ruth
RENCI – UNC Chapel Hill
pruth@renci.org

Background: ExoGENI
- About 20 sites
- Each is a small OpenStack cloud
- Dynamic provisioning of L2 paths between them (sometimes from a pool of existing VLANs)
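To give a feel for what "dynamic provisioning of L2 paths" looks like from the experimenter's side, here is a rough geni-lib sketch. ExoGENI slices are usually composed through ORCA tools such as Flukes, but the racks also speak the GENI AM API; the node class, names, and topology below are illustrative assumptions, not taken from the slides.

```python
# Rough sketch, not ExoGENI-specific: generic geni-lib classes and hypothetical
# names are used to illustrate requesting two VMs joined by an L2 path.
import geni.rspec.pg as pg

rspec = pg.Request()

node_a = pg.XenVM("vm-site-a")   # VM at one rack (generic VM class for illustration)
node_b = pg.XenVM("vm-site-b")   # VM at another rack
iface_a = node_a.addInterface("if0")
iface_b = node_b.addInterface("if0")

# A point-to-point L2 link; the aggregate maps it onto a dynamically
# provisioned VLAN (possibly drawn from a pre-allocated pool).
link = pg.Link("l2-path")
link.addInterface(iface_a)
link.addInterface(iface_b)

for r in (node_a, node_b, link):
    rspec.addResource(r)

rspec.writeXML("exogeni-request.xml")   # submit with a GENI tool such as omni
```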

Chameleon: Current
A flash-by slide showing that Chameleon is distributed over two sites, Chicago and Austin, connected by a 100 Gbps network (details on the next slide).
- 504 x86 compute servers, 48 distributed storage servers, 102 heterogeneous servers, 16 management and storage nodes
- Standard Cloud Units (SCUs) of 42 compute + 4 storage nodes each (x2 at one site, x10 at the other); SCUs connect to the core and are fully connected to each other
- Heterogeneous Cloud Units: ARMs, Atoms, low-power Xeons, FPGAs, GPUs, SSDs, etc.
- Core services: 3.6 PB central file systems, front-end and data-mover nodes
- Chameleon Core Network: 100 Gbps uplink to the public network at each site; connectivity to UTSA, GENI, and future partners

NEW hardware in Phase 2
- 4 new Standard Cloud Units (32-node racks in 2U chassis)
  - 3x Intel Xeon "Skylake" racks (2x @UC, 1x @TACC) in Y1
  - 1x future Intel Xeon rack (@TACC) in Y2
- Corsa DP2000 series switches in Y1
  - 2x DP2400 with 100 Gbps uplinks (@UC)
  - 1x DP2200 with 100 Gbps uplink (@TACC)
  - Each switch will have a 10 Gbps connection to the nodes in its SCU
  - Optional Ethernet connection in both racks
- More storage configurations
  - Global store @UC: 5 servers with 12x 10 TB disks each
  - Additional storage @TACC: 150 TB of NVMe drives
- Accelerators: 16 nodes with 2 Volta GPUs each (8 @UC, 8 @TACC)
- Maintenance, support, and reserve

To provide the most configurability in the testbed, the two new Skylake SCUs at Chicago will each be outfitted with a DP2400 switch; each will provide a 10 Gbps connection to all 32 nodes in its SCU. These switches have two 100 Gbps uplinks: one from each switch will connect to the UC core network, which will be upgraded to handle these connections, and the other will connect the two switches to each other, providing a fully configurable 100 Gbps east-west path between the racks. This allows testing of both within-rack and between-rack configurations. A single DP2200 switch will be added to the new SCU at TACC, also connected to the TACC core at 100 Gbps, with 10 Gbps connections to each of the 32 nodes. This will allow 100 Gbps end-to-end testing over the WAN with full SDN capability at each end.

Corsa DP2000 Series Switches
- Hardware network isolation
  - Sliceable network hardware
  - Tenant-controlled Virtual Forwarding Contexts (VFCs)
- Software Defined Networking (SDN)
  - OpenFlow v1.3
  - User-defined controllers
- Performance
  - 10 Gbps within a site
  - 100 Gbps (aggregate) between UC/TACC

Using open, programmable APIs, it is possible to create logical switches within a physical switch in order to isolate network resources for different customers, or business use cases needing different policies, all while running on the same shared infrastructure.
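Because each VFC presents a standard OpenFlow v1.3 switch to a user-defined controller, a tenant's controller can be an ordinary Ryu application. The skeleton below is a minimal sketch, not code from the Chameleon project: it installs a table-miss rule when the virtual switch connects, which is the usual starting point before adding custom forwarding logic.

```python
# Minimal Ryu OpenFlow 1.3 app sketch for a tenant-controlled VFC.
# Illustrative only; any OpenFlow 1.3 controller could play this role.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class TenantVfcApp(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def on_switch_features(self, ev):
        dp = ev.msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser

        # Table-miss entry: send unmatched packets to this controller,
        # where tenant-defined forwarding logic can handle them.
        match = parser.OFPMatch()
        actions = [parser.OFPActionOutput(ofp.OFPP_CONTROLLER,
                                          ofp.OFPCML_NO_BUFFER)]
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=0,
                                      match=match, instructions=inst))
```

It would be launched with `ryu-manager`, and the tenant's VFC would then be pointed at the controller's address and OpenFlow port (typically 6653).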

Network Hardware
Diagram: the Chameleon Core Network at each site has a 100 Gbps uplink to the public network and connects to Internet2 AL2S, GENI, and future partners. At Chicago, two Corsa DP2400 switches (stacked, logically one) serve the Standard Cloud Units at 100 Gbps (aggregate); at Austin, a Corsa DP2200 serves its Standard Cloud Unit, also at 100 Gbps (aggregate).
Design strategy for the hardware:
- Large homogeneous partition
- Support for data-intensive computing
- Introduce diversity horizontally and vertically

Isolated Virtual SDN Switch
- Provide isolated networks (~Spring 2018)
- BYOC – Bring Your Own Controller: isolated, user-controlled virtual OpenFlow switches (~Summer 2018)
Diagram: a Corsa switch in a Standard Cloud Unit is partitioned into per-tenant VFCs; each VFC connects that tenant's compute nodes to the tenant's own OpenFlow controller (e.g., a Ryu controller for Tenant A, another controller for Tenant B).
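To make the BYOC workflow concrete, the sketch below shows roughly how a per-tenant VFC might be carved out of the physical switch and pointed at that tenant's controller. The REST endpoints, field names, and base URL are hypothetical placeholders (the slides do not specify the Corsa or Chameleon provisioning API); only the overall shape, creating an isolated VFC, attaching ports, and registering an OpenFlow 1.3 controller, follows what the slide describes.

```python
# Hypothetical sketch only: endpoint paths, field names, and the base URL below
# are illustrative assumptions, NOT the documented Corsa or Chameleon API.
import requests

API = "https://sdn-switch.example.org/api"          # hypothetical management endpoint
HDRS = {"Authorization": "Bearer <tenant-token>"}   # placeholder credential

# 1. Create an isolated Virtual Forwarding Context (VFC) for Tenant A and
#    attach the switch ports facing Tenant A's compute nodes.
vfc = requests.post(f"{API}/vfcs", headers=HDRS,
                    json={"name": "tenant-a", "ports": [1, 2]}).json()

# 2. Point the VFC at Tenant A's own OpenFlow 1.3 controller (BYOC),
#    e.g. the Ryu app from the previous sketch listening on port 6653.
requests.post(f"{API}/vfcs/{vfc['id']}/controllers", headers=HDRS,
              json={"address": "10.20.0.5", "port": 6653, "protocol": "OpenFlow13"})
```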

Chameleon: SDN Experiments
- RENCI added to the team
- Hardware network isolation: Corsa DP2000 series, OpenFlow v1.3
- Sliceable network hardware: tenant-controlled Virtual Forwarding Contexts (VFCs)
- Isolated tenant networks
- BYOC – Bring Your Own Controller
- Wide-area stitching: between Chameleon sites (100 Gbps), to ExoGENI, and to campus networks (Science DMZs)
Diagram: as on the previous slide, a Corsa DP2400 in a Standard Cloud Unit at Chicago is partitioned into per-tenant VFCs, each driven by its tenant's OpenFlow controller (e.g., Ryu); the Chameleon Core Network's 100 Gbps uplink reaches Austin, Internet2 AL2S, GENI, and future partners.
Goals: a production testbed for SDN experimentation (production?); high-bandwidth wide-area SDN experiments.
Gaps: a testbed where users can experiment with wide-area SDN; isolation below the VLAN level; looking forward to creating "network appliances".
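For the "isolated tenant networks" piece, Chameleon is OpenStack-based, so a tenant-facing sketch can use the standard OpenStack SDK with Neutron provider-network attributes. This is a hedged illustration: the physical network name, VLAN ID, and whether a project may set provider attributes all depend on site configuration, and Chameleon's actual workflow for VFC-backed networks may differ.

```python
# Sketch of creating an isolated, VLAN-backed tenant network with the OpenStack SDK.
# Assumptions: a clouds.yaml entry named "chameleon", a provider physical network
# called "physnet1", and site policy that lets the project set provider attributes.
import openstack

conn = openstack.connect(cloud="chameleon")

# An isolated network mapped to its own VLAN segment on the SDN switch.
net = conn.network.create_network(
    name="tenant-a-isolated",
    provider_network_type="vlan",
    provider_physical_network="physnet1",   # assumed name of the switch-facing fabric
    provider_segmentation_id=3500,          # illustrative VLAN tag
)

# A subnet so instances attached to the network get addresses.
conn.network.create_subnet(
    name="tenant-a-subnet",
    network_id=net.id,
    ip_version=4,
    cidr="10.30.0.0/24",
)
```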

Chameleon to ExoGENI Stitching Prototype
- ExoGENI slice
- Dynamic Chameleon stitchport
- Stitched L2 path
- Dynamic VLANs
- Connectivity to the ExoGENI stitchport

Chameleon to NCBI via ExoGENI

Thank You
pruth@renci.org